Disruptive Library Technology Jester Disruptive Library Technology Jester We're Disrupted, We're Librarians, and We're Not Going to Take It Anymore DLTJ Now Uses Webmention and Bridgy to Aggregate Social Media Commentary When I converted this blog from WordPress to a static site generated with Jekyll in 2018, I lost the ability for readers to make comments. At the time, I thought that one day I would set up an installation of Discourse for comments like Boing Boing did in 2013. But I never found the time to do that. Alternatively, I could do what NPR has done— abandon comments on its site in favor of encouraging people to use Twitter and Facebook—but that means blog readers don’t see where the conversation is happening. This article talks about IndieWeb—a blog-to-blog communication method—and the pieces needed to make it work on both a static website and for social-media-to-blog commentary. The IndieWeb is a combination of HTML markup and an HTTP protocol for capturing discussions between blogs. To participate in the IndieWeb ecosystem, a blog needs to support the “ h-card” and “ h-entry” microformats. These microformats are ways to add HTML markup to a site to be read and recognized by machines. If you follow the instructions at IndieWebify.me, the “Level 2” steps will check your site’s webpages for the appropriate markup. The Jekyll theme I use here, minimal-mistakes, didn’t include the microformat markup, so I made a pull request to add it. With the markup in place, dltj.org uses the Webmention protocol to notify others when I link to their content and receive notifications from others. If you’re setting this up for yourself, hopefully someone has already gone through the effort of adding the necessary Webmention communication bits to your blog software. Since DLTJ is a static website, I’m using the Webmention.IO service to send and receive Webmention information on behalf of dltj.org and a Jekyll plugin called jekyll-webmention_io to integrate Webmention data into my blog’s content. The plugin gets that data from webmention.io, caches it locally, and builds into each article the list of webmentions and pingbacks (another kind of blog-to-blog communication protocol) received. Webmention.IO and jekyll-webmention_io will capture some commentary. To get comments from Twitter, Mastodon, Facebook, and elsewhere, I added the Bridgy service to the mix. From their About page : “Bridgy periodically checks social networks for responses to your posts and links to your web site and sends them back to your site as webmentions.” So all of that commentary gets fed back into the blog post as well. I’ve just started using this Webmention/Bridgy setup, so I may have some pieces misconfigured. I’ll be watching over the next several blog posts to make sure everything is working. If you notice something that isn’t working, please reach out to me via one of the mechanisms listed in the sidebar of this site. Digital Repository Software: How Far Have We Come? How Far Do We Have to Go? Bryan Brown’s tweet led me to Ruth Kitchin Tillman’s Repository Ouroboros post about the treadmill of software development/deployment. And wow do I have thoughts and feelings. Ouroboros: an ancient symbol depicting a serpent or dragon eating its own tail. Or—in this context—constantly chasing what you can never have. Source: Wikipedia Let’s start with feelings. I feel pain and misery in reading Ruth’s post. 
As Bryan said in a subsequent tweet, I’ve been on both sides: a system maintainer watching much-needed features put off to major software updates (or rewrites) and the person participating in decisions to put off feature development in favor of major updates and rewrites. It is a bit like a serpent chasing its tail (a reference to “Ouroboros” in Ruth’s post title)—as someone who just wants a workable, running system, it seems like a never-ending quest to get what my users need. I think it will get better. I offer as evidence the fact that almost all of us can assume network connectivity. That certainly wasn’t always the case: routers used to break, file servers would crash under stress, and network drivers would go out of date at inopportune times. Now we take network connectivity for granted—almost (almost!) as if it were a utility as common as water and electricity. We no longer have to chase our tail to assume those things. When we make those assumptions, we push that technology down the stack and layer on new things. Only after electricity is reliable can we layer on network connectivity. With reliable network connectivity, we layer on—say—digital repositories. Each layer goes through its own refinement process…getting better and better as it relies on the layers below it. Are digital repositories as reliable as printed books? No way! Without electricity and network connectivity, we can’t have digital repositories, but we can still use books. Will there come a time when digital repositories are as reliable as electricity and network connectivity? That sounds like a Star Trek world, but if history is our guide, I think the profession will get there. (I’m not necessarily saying I’ll get there with it—such reliability is probably outside my professional lifetime.) So, yeah, I feel pain and misery in Ruth’s post about the achingly out-of-reach nature of repository software that can be pushed down the stack…that can be assumed to exist with all of the capabilities that our users need. That brings me around to one of Bryan’s tweets: If the idea of a digital preservation platform is that it is purpose-built to preserve assets for a long period of time, then isn't it an obvious design flaw to build it with an EOL in mind? If the system is no longer supported, then can it really be trusted for preservation?— Bryan J. Brown (@bryjbrown) June 22, 2021 Can digital repositories really be trusted in-and-of-themselves? No. (Not yet?) That isn’t to say that steps aren’t being taken. Take, for example, HTTP and HTML. Those are getting pretty darn reliable, and assumptions can be built that rely on HTML as a markup language and HTTP as a protocol to move it around the network. I think that is a driver behind the growth of “static websites”—systems that rely on nothing more than delivering HTML and other files over HTTP. The infrastructure for doing that—servers, browsers, caching, network connectivity, etc.—is all pretty sound. HTML and HTTP have also stood the test of time—much like how we assume we will always understand how to process TIFF files for images. Now there are many ways to generate static websites. This blog uses Markdown text files and Jekyll as a pre-processor to create a stand-alone folder of HTML and supporting files. A more sophisticated method might use Drupal as a content management system that exports to a static site. Jekyll and Drupal are nowhere near as assumed-to-work as HTML and HTTP, but they work well as mechanisms for generating a static site.
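To make the Jekyll part of that concrete, here is a minimal sketch of the workflow (the commands are standard Jekyll; the file names are just placeholders):

```
# Posts are plain Markdown files in the _posts directory
ls _posts/
#   2021-06-24-digital-repository-software.md  ...

# Jekyll pre-processes them into a stand-alone folder of HTML and supporting files
bundle exec jekyll build

# Everything needed to serve the site is now in _site/ -- ordinary files that any
# web server can deliver over HTTP
ls _site/
```

No database, no application server: just files on disk and a way to hand them out over HTTP.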
Last year, colleagues from the University of Iowa published a paper about making a static site front-end to CONTENTdm in the Code4Lib Journal, which could be the basis of a digital collection website development. So if your digital repository creates HTML to be served over HTTP and—for the purposes of preservation—the metadata can be encoded in HTML structures that are readily machine-processable? Well, then you might be getting pretty close to a system you can trust. But what about the digital objects themselves. Back in 2006, I crowed about the ability of Fedora repository software to recover itself just based on the files stored to disk. (Read the article for more details…it has the title “Why Fedora? Because You Don’t Need Fedora” in case that might make it more enticing to read.) Fedora used a bespoke method of saving digital objects as a series of files on disk, and the repository software provided commands to rebuild the repository database from those files. That worked for Fedora up to version 3. For Fedora version 4, some of the key object metadata only existed in the repository database. From what I understand of version 5 and beyond, Fedora adopted the Oxford Common File Layout (OCFL), “an application-independent approach to the storage of digital information in a structured, transparent, and predictable manner.” The OCFL website goes on to say: “It is designed to promote long-term object management best practices within digital repositories.” So Fedora is back again in a state where you could rebuild the digital object repository system from a simple filesystem backup. The repository software becomes a way of optimizing access to the underlying digital objects. Will OCFL stand the test of time like HTML, HTTP, TIFF, network connectivity, and electricity? Only time will tell. So I think we are getting closer. It is possible to conceive of a system that uses simple files and directories as long-term preservation storage. Those can be backed up and duplicated using a wide variety of operating systems and tools. We also have examples of static sites of HTML delivered over HTTP that various tools can create and many, many programs can deliver and render. We’re missing some key capabilities—access control comes to mind. I, for one, am not ready to push JavaScript very far down our stack of technologies—certainly not as far as HTML—but JavaScript robustness seems to be getting better over time. Ruth: I’m sorry this isn’t easy and that software creators keep moving the goalposts. (I’ll put myself in the “software creator” category.) We could be better at setting expectations and delivering on them. (There is probably another lengthy blog post in how software development is more “art” than it is “engineering”.) Developers—the ones fortunate to have the ability and permission to think long term—are trying to make new tools/techniques good enough to push down the stack of assumed technologies. We’re clearly not there for digital repository software, but…hopefully…we are moving in the right direction. Thoughts on Growing Up It ‘tis the season for graduations, and this year my nephew is graduating from high school. My sister-in-law created a memory book—”a surprise Book of Advice as he moves to the next phase of his life.” What an interesting opportunity to reflect! This is what I came up with: Sometime between when I became an adult and now, the word “adulting” was coined. 
My generation just called it “growing up.” The local top-40 radio station uses “hashtag-adulting” to mean all of those necessary life details that now become your own responsibility. (“Hashtag” is something new, too, for what that’s worth.) Growing up is more than life necessities, though. This is an exciting phase of life that you’ve built up to—many more doors of possibilities are opening and now you get to pick which ones to go through. Pick carefully. Each door you go through starts to close off others. Pick many. Use this life stage to try many things to find what is fun and what is meaningful (and aim for both fun and meaningful). You are on a solid foundation, and I’m eager to see what you discover “adulting” means to you. More Thoughts on Pre-recording Conference Talks Over the weekend, I posted an article here about pre-recording conference talks and sent a tweet about the idea on Monday. I hoped to generate discussion about recording talks to fill in gaps—positive and negative—about the concept, and I was not disappointed. I’m particularly thankful to Lisa Janicke Hinchliffe and Andromeda Yelton along with Jason Griffey, Junior Tidal, and Edward Lim Junhao for generously sharing their thoughts. Daniel S and Kate Deibel also commented on the Code4Lib Slack team. I added to the previous article’s bullet points and am expanding on some of the issues here. I’m inviting everyone mentioned to let me know if I’m mischaracterizing their thoughts, and I will correct this post if I hear from them. (I haven’t found a good comments system to hook into this static site blog.) Pre-recorded Talks Limit Presentation Format Lisa Janicke Hinchliffe made this point early in the feedback: @DataG For me downside is it forces every session into being a lecture. For two decades CfPs have emphasized how will this season be engaging/not just a talking head? I was required to turn workshops into talks this year. Even tho tech can do more. Not at all best pedagogy for learning— Lisa Janicke Hinchliffe (@lisalibrarian) April 5, 2021 Jason described the “flipped classroom” model that he had in mind as the NISOplus2021 program was being developed. The flipped classroom model is one where students do the work of reading material and watching lectures, then come to the interactive time with the instructors ready with questions and comments about the material. Rather than the instructor lecturing during class time, the class time becomes a discussion about the material. For NISOplus, “the recording is the material the speaker and attendees are discussing” during the live Zoom meetings. In the previous post, I described how the speaker could respond in text chat while the recording replay is beneficial. Lisa went on to say: @DataG Q+A is useful but isn't an interactive session. To me, interactive = participants are co-creating the session, not watching then commenting on it.— Lisa Janicke Hinchliffe (@lisalibrarian) April 5, 2021 She described an example: the SSP preconference she ran at CHS. I’m paraphrasing her tweets in this paragraph. The preconference had a short keynote and an “Oprah-style” panel discussion (not pre-prepared talks). This was done live; nothing was recorded. After the panel, people worked in small groups using Zoom and a set of Google Slides to guide the group work. The small groups reported their discussions back to all participants. 
Andromeda points out (paraphrasing twitter-speak): “Presenters will need much more— and more specialized—skills to pull it off, and it takes a lot more work.” And Lisa adds: “Just so there is no confusion … I don’t think being online makes it harder to do interactive. It’s the pre-recording. Interactive means participants co-create the session. A pause to chat isn’t going to shape what comes next on the recording.” Increased Technical Burden on Speakers and Organizers @ThatAndromeda @DataG Totally agree on this. I had to pre-record a conference presentation recently and it was a terrible experience, logistically. I feel like it forces presenters to become video/sound editors, which is obviously another thing to worry about on top of content and accessibility.— Junior Tidal (@JuniorTidal) April 5, 2021 Andromeda also agreed with this: “I will say one of the things I appreciated about NISO is that @griffey did ALL the video editing, so I was not forced to learn how that works.” She continued, “everyone has different requirements for prerecording, and in [Code4Lib’s] case they were extensive and kept changing.” And later added: “Part of the challenge is that every conference has its own tech stack/requirements. If as a presenter I have to learn that for every conference, it’s not reducing my workload.” It is hard not to agree with this; a high-quality (stylistically and technically) recording is not easy to do with today’s tools. This is also a technical burden for meeting organizers. The presenters will put a lot of work into talks—including making sure the recordings look good; whatever playback mechanism is used has to honor the fidelity of that recording. For instance, presenters who have gone through the effort to ensure the accessibility of the presentation color scheme want the conference platform to display the talk “as I created it.” The previous post noted that recorded talks also allow for the creation of better, non-real-time transcriptions. Lisa points out that presenters will want to review that transcription for accuracy, which Jason noted adds to the length of time needed before the start of a conference to complete the preparations. Increased Logistical Burden on Presenters @ThatAndromeda @DataG @griffey Even if prep is no more than the time it would take to deliver live (which has yet to be case for me and I'm good at this stuff), it is still double the time if you are expected to also show up live to watch along with everyone else.— Lisa Janicke Hinchliffe (@lisalibrarian) April 5, 2021 This is a consideration I hadn’t thought through—that presenters have to devote more clock time to the presentation because first they have to record it and then they have to watch it. (Or, as Andromeda added, “significantly more than twice the time for some people, if they are recording a bunch in order to get it right and/or doing editing.”) No. Audience. Reaction. @DataG @griffey 3) No. Audience. Reaction. I give a joke and no one laughs. Was it funny? Was it not funny? Talks are a *performance* and a *relationship*; I'm getting energy off the audience, I'm switching stuff on the fly to meet their vibe. Prerecorded/webinar is dead. Feels like I'm bombing.— Andromeda Yelton (@ThatAndromeda) April 5, 2021 Wow, yes. I imagine it would take a bit of imagination to get in the right mindset to give a talk to a small camera instead of an audience. I wonder how stand-up comedians are dealing with this as they try to put on virtual shows. 
Andromeda summed this up: @DataG @griffey oh and I mean 5) I don't get tenure or anything for speaking at conferences and goodness knows I don't get paid. So the ENTIRE benefit to me is that I enjoy doing the talk and connect to people around it. prerecorded talk + f2f conf removes one of these; online removes both.— Andromeda Yelton (@ThatAndromeda) April 5, 2021 Also in this heading could be “No Speaker Reaction”—or the inability for subsequent speakers at a conference to build on something that someone said earlier. In the Code4Lib Slack team, Daniel S noted: “One thing comes to mind on the pre-recording [is] the issue that prerecorded talks lose the ‘conversation’ aspect where some later talks at a conference will address or comment on earlier talks.” Kate Deibel added: “Exactly. Talks don’t get to spontaneously build off of each other or from other conversations that happen at the conference.” Currency of information Lisa points out that pre-recording talks before an event means there is a delay between the recording and the playback. In the example she gave, a talk at RLUK that had been pre-recorded would have been about the University of California working on an Open Access deal with Elsevier; delivered live, it could instead be about “the deal we announced earlier this week”. Conclusions? Near the end of the discussion, Lisa added: @DataG @griffey @ThatAndromeda I also recommend going forward that the details re what is required of presenters be in the CfP. It was one thing for conferences that pivoted (huge effort!) but if you write the CfP since the pivot it should say if pre-record, platform used, etc.— Lisa Janicke Hinchliffe (@lisalibrarian) April 5, 2021 …and Andromeda added: “Strong agree here. I understand that this year everyone was making it up as they went along, but going forward it’d be great to know that in advance.” That means conferences will need to take these needs into account well before the Call for Proposals (CfP) is published. A conference that is thinking now about pre-recording their talks must work through these issues and set expectations with presenters early. As I hoped, the Twitter replies tempered my eagerness for the all-recorded style with some real-world experience. There could be possibilities here, but adapting face-to-face meetings to a world with less travel won’t be simple and will take significant thought beyond the issues of technology platforms. Edward Lim Junhao summarized this nicely: “I favor unpacking what makes up our prof conferences. I’m interested in recreating that shared experience, the networking, & the serendipity of learning sth you didn’t know. I feel in-person conferences now have to offer more in order to justify people traveling to attend them.” Related, Andromeda said: “Also, for a conf that ultimately puts its talks online, it’s critical that it have SOMEthing beyond content delivery during the actual conference to make it worth registering rather than just waiting for youtube. realtime interaction with the speaker is a pretty solid option.” If you have something to add, reach out to me on Twitter. Given enough responses, I’ll create another summary. Let’s keep talking about what that looks like and sharing discoveries with each other. The Tree of Tweets It was a great discussion, and I think I pulled in the major ideas in the summary above. With some guidance from Ed Summers, I’m going to embed the Twitter threads below using Treeverse by Paul Butler.
We might be stretching the boundaries of what is possible, so no guarantees that this will be viewable for the long term. Should All Conference Talks be Pre-recorded? The Code4Lib conference was last week. That meeting used all pre-recorded talks, and we saw the benefits of pre-recording for attendees, presenters, and conference organizers. Should all talks be pre-recorded, even when we are back face-to-face? Note! After I posted a link to this article on Twitter, there was a great response of thoughtful comments. I've included new bullet points below and summarized the responses in another blog post. As an entirely virtual conference, I think we can call Code4Lib 2021 a success. Success ≠ Perfect, of course, and last week the conference coordinating team got together on a Zoom call for a debriefing session. We had a lengthy discussion about what we learned and what we wanted to take forward to the 2022 conference, which we’re anticipating will be something with a face-to-face component. That last sentence was tough to compose: “…will be face-to-face”? “…will be both face-to-face and virtual”? (Or another fully virtual event?) Truth be told, I don’t think we know yet. I think we know with some certainty that the COVID pandemic will become much more manageable by this time next year—at least in North America and Europe. (Code4Lib draws primarily from North American library technologists, with a few guests from other parts of the world.) I’m hearing from higher education institutions, though, that travel is going to be severely curtailed…if not for health risk reasons, then because budgets have been slashed. So one has to wonder what a conference will look like next year. I’ve been to two online conferences this year: NISOplus21 and Code4Lib. Both meetings recorded talks in advance and started playback of the recordings at a fixed point in time. This was beneficial for a couple of reasons. For organizers and presenters, pre-recording allowed technical glitches to be worked through without the pressure of a live event happening. Technology is not nearly perfect enough or ubiquitously spread to count on it working in real-time. 1 NISOplus21 also used the recordings to get transcribed text for the videos. (Code4Lib used live transcriptions on the synchronous playback.) Attendees and presenters benefited from pre-recording because the presenters could be in the text chat channel to answer questions and provide insights. Having the presenter free during the playback offers new possibilities for making talks more engaging: responding in real-time to polls, getting advance knowledge of topics for subsequent real-time question/answer sessions, and so forth. The synchronous playback time meant that there was a point when (almost) everyone was together watching the same talk—just as in face-to-face sessions. During the Code4Lib conference coordinating debrief call, I asked the question: “If we saw so many benefits to pre-recording talks, do we want to pre-record them all next year?” In addition to the reasons above, pre-recorded talks benefit those who are not comfortable speaking English or are first-time presenters. (They have a chance to re-do their talk as many times as they need in a much less stressful environment.) “Live” demos are much smoother because a recording can be restarted if something goes wrong. Each year, at least one presenter needs to use their own machine (custom software, local development environment, etc.), and swapping out presenter computers in real-time is risky.
And it is undoubtedly easier to impose time requirements with recorded sessions. So why not pre-record all of the talks? I get it—it would be different to sit in a ballroom watching a recording play on big screens at the front of the room while the podium is empty. But is it so different as to dramatically change the experience of watching a speaker at a podium? In many respects, we had a dry-run of this during Code4Lib 2020. It was at the early stages of the coming lockdowns when institutions started barring employee travel, and we had to bring in many presenters remotely. I wrote a blog post describing the setup we used for remote presenters, and at the end, I said: I had a few people comment that they were taken aback when they realized that there was no one standing at the podium during the presentation. Some attendees, at least, quickly adjusted to this format. For those with the means and privilege of traveling, there can still be face-to-face discussions in the hall, over meals, and social activities. For those that can’t travel (due to risks of traveling, family/personal responsibilities, or budget cuts), the attendee experience is a little more level—everyone is watching the same playback and in the same text backchannels during the talk. I can imagine a conference tool capable of segmenting chat sessions during the talk playback to “tables” where you and close colleagues can exchange ideas and then promote the best ones to a conference-wide chat room. Something like that would be beneficial as attendance grows for events with an online component, and it would be a new form of engagement that isn’t practical now. There are undoubtedly reasons not to pre-record all session talks (beyond the feels-weird-to-stare-at-an-unoccupied-ballroom-podium reasons). During the debriefing session, one person brought up that having all pre-recorded talks erodes the justification for in-person attendance. I can see a manager saying, “All of the talks are online…just watch it from your desk. Even your own presentation is pre-recorded, so there is no need for you to fly to the meeting.” That’s legitimate. So if you like bullet points, here’s how it lays out. Pre-recording all talks is better for: Accessibility: better transcriptions for recorded audio versus real-time transcription (and probably at a lower cost, too) Engagement: the speaker can be in the text chat during playback, and there could be new options for backchannel discussions Better quality: speakers can re-record their talk as many times as needed Closer equality: in-person attendees are having much the same experience during the talk as remote attendees Downsides for pre-recording all talks: Feels weird: yeah, it would be different Erodes justification: indeed a problem, especially for those for whom giving a speech is the only path to getting the networking benefits of face-to-face interaction Limits presentation format: it forces every session into being a lecture. For two decades CfPs have emphasized how will this season be engaging/not just a talking head? 
(Lisa Janicke Hinchliffe) Increased Technical Burden on Speaker and Organizers: conference organizers asking presenters to do their own pre-recording is a barrier (Junior Tidal), and organizers have added new requirements for themselves No Audience Feedback: pre-recording forces the presenter into an unnatural state relative to the audience (Andromeda Yelton) Currency of information: pre-recording talks before en event naturally introduces a delay between the recording and the playback. (Lisa Janicke Hinchliffe) I’m curious to hear of other reasons, for and against. Reach out to me on Twitter if you have some. The COVID-19 pandemic has changed our society and will undoubtedly transform it in ways that we can’t even anticipate. Is the way that we hold professional conferences one of them? Can we just pause for a moment and consider the decades of work and layers of technology that make a modern teleconference call happen? For you younger folks, there was a time when one couldn’t assume the network to be there. As in: the operating system on your computer couldn’t be counted on to have a network stack built into it. In the earliest years of my career, we were tickled pink to have Macintoshes at the forefront of connectivity through GatorBoxes. Go read the first paragraph of that Wikipedia article on GatorBoxes…TCP/IP was tunneled through LocalTalk running over PhoneNet on unshielded twisted pairs no faster than about 200 kbit/second. (And we loved it!) Now the network is expected; needing to know about TCP/IP is pushed so far down the stack as to be forgotten…assumed. Sure, the software on top now is buggy and bloated—is my Zoom client working? has Zoom’s service gone down?—but the network…we take that for granted. ↩ User Behavior Access Controls at a Library Proxy Server are Okay Earlier this month, my Twitter timeline lit up with mentions of a half-day webinar called Cybersecurity Landscape - Protecting the Scholarly Infrastructure. What had riled up the people I follow on Twitter was the first presentation: “Security Collaboration for Library Resource Access” by Cory Roach, the chief information security officer at the University of Utah. Many of the tweets and articles linked in tweets were about a proposal for a new round of privacy-invading technology coming from content providers as a condition of libraries subscribing to publisher content. One of the voices that I trust was urging caution: I highly recommend you listen to the talk, which was given by a university CIO, and judge if this is a correct representation. FWIW, I attended the event and it is not what I took away.— Lisa Janicke Hinchliffe (@lisalibrarian) November 14, 2020 As near as I can tell, much of the debate traces back to this article: Scientific publishers propose installing spyware in university libraries to protect copyrights - Coda Story https://t.co/rtCokIukBf— Open Access Tracking Project (@oatp) November 14, 2020 The article describes Cory’s presentation this way: One speaker proposed a novel tactic publishers could take to protect their intellectual property rights against data theft: introducing spyware into the proxy servers academic libraries use to allow access to their online services, such as publishers’ databases. The “spyware” moniker is quite scary. It is what made me want to seek out the recording from the webinar and hear the context around that proposal. My understanding (after watching the presentation) is that the proposal is not nearly as concerning. 
Although there is one problematic area—the correlation of patron identity with requested URLs—overall, what is described is a sound and common practice for securing web applications. To the extent that it is necessary to determine a user’s identity before allowing access to licensed content (an unfortunate necessity because of the state of scholarly publishing), this is an acceptable proposal. (Through the university communications office, Cory published a statement about the reaction to his talk.) In case you didn’t know, a web proxy server ensures the patron is part of the community of licensed users, and the publisher trusts requests that come through the web proxy server. The point of Cory’s presentation is that the username/password checking at the web proxy server is a weak form of access control that is subject to four problems: phishing (sending email to trick a user into giving up their username/password); social engineering (non-email ways of tricking a user into giving up their username/password); credential reuse (systems that are vulnerable because the user used the same password in more than one place); and hacktivism (users who intentionally give out their username/password so others can access resources). Right after listing these four problems, Cory says: “But anyway we look at it, we can safely say that this is primarily a people problem and the technology alone is not going to solve that problem. Technology can help us take reasonable precautions… So long as the business model involves allowing access to the data that we’re providing and also trying to protect that same data, we’re unlikely to stop theft entirely.” His proposal is to place “reasonable precautions” in the web proxy server as it relates to the campus identity management system. This is a slide from his presentation: Slide from presentation by Cory Roach I find this layout (and lack of labels) somewhat confusing, so I re-imagined the diagram as this: Revised 'Modern Library Design' The core of Cory’s presentation is to add predictive analytics and per-user blocking automation to the analysis of the log files from the web proxy server and the identity management server. By doing so, the university can react more quickly to compromised usernames and passwords. In fact, it could probably do so more quickly than the publisher could with its own log analysis and reporting back to the university. Where Cory runs into trouble is this slide: Slide from presentation by Cory Roach In this part of the presentation, Cory describes the kinds of patron-identifying data that the university could-or-would collect and analyze to further the security effort. In search engine optimization, these sorts of data points are called “signals” and are used to improve the relevance of search results; perhaps there is an equivalent term in access control technology. But for now, I’ll just call them “signals”. There are some problems in gathering these signals—most notably the correlation between user identity and “URLs Requested”. In the presentation, he says: “You can also move over to behavioral stuff. So it could be, you know, why is a pharmacy major suddenly looking up a lot of material on astrophysics or why is a medical professional and a hospital suddenly interested in internal combustion.
Things that just don’t line up and we can identify fishy behavior.” It is core to the library ethos that we make our best effort to not track what a user is interested in—to not build a profile of a user’s research unless they have explicitly opted into such data collection. As librarians, we need to gracefully describe this professional ethos and work that into the design of the systems used on campus (and at the publishers). Still, there is much to be said for using some of the other signals to analyze whether a particular request is from an authorized community member. For instance, Cory says: “We commonly see this user coming in from the US and today it’s coming in from Botswana. You know, has there been enough time that they could have traveled from the US to Botswana and actually be there? Have they ever access resources from that country before is there residents on record in that country?” The best part of what Cory is proposing is that the signals’ storage and processing is at the university and not at the publisher. I’m not sure if Cory knew this, but a recent version of EZProxy added a UsageLimit directive that builds in some of these capabilities. It can set per-user limits based on the number of page requests or the amount of downloaded information over a specified interval. One wonders if somewhere in OCLC’s development queue is the ability to detect IP addresses from multiple networks (geographic detection) and browser differences across a specified interval. Still, pushing this up to the university’s identity provider allows for a campus-wide view of the signals…not just the ones coming through the library. Also, in designing the system, there needs to be clarity about how the signals are analyzed and used. I think Cory knew this as well: “we do have to be careful about not building bias into the algorithms.” Yeah, the need for this technology sucks. Although it was the tweet to the Coda Story about the presentation that blew up, the thread of the story goes through TechDirt to a tangential paragraph from Netzpolitik in an article about Germany’s licensing struggle with Elsevier. With this heritage, any review of the webinar’s ideas are automatically tainted by the disdain the library community in general has towards Elsevier. It is reality—an unfortunate reality, in my opinion—that the traditional scholarly journal model has publishers exerting strong copyright protection on research and ideas behind paywalls. (Wouldn’t it be better if we poured the anti-piracy effort into improving scholarly communication tools in an Open Access world? Yes, but that isn’t the world we live in.) Almost every library deals with this friction by employing a web proxy server as an agent between the patron and the publisher’s content. The Netzpolitik article says: …but relies on spyware in the fight against „cybercrime“ Of Course, Sci-Hub and other shadow libraries are a thorn in Elsevier’s side. Since they have existed, libraries at universities and research institutions have been much less susceptible to blackmail. Their staff can continue their research even without a contract with Elsevier. Instead of offering transparent open access contracts with fair conditions, however, Elsevier has adopted a different strategy in the fight against shadow libraries. These are to be fought as „cybercrime“, if necessary also with technological means. 
Within the framework of the „Scholarly Networks Security Initiative (SNSI)“, which was founded together with other large publishers, Elsevier is campaigning for libraries to be upgraded with security technology. In a SNSI webinar entitled „Cybersecurity Landscape – Protecting the Scholarly Infrastructure“*, hosted by two high-ranking Elsevier managers, one speaker recommended that publishers develop their own proxy or a proxy plug-in for libraries to access more (usage) data („develop or subsidize a low cost proxy or a plug-in to existing proxies“). With the help of an „analysis engine“, not only could the location of access be better narrowed down, but biometric data (e.g. typing speed) or conspicuous usage patterns (e.g. a pharmacy student suddenly interested in astrophysics) could also be recorded. Any doubts that this software could also be used—if not primarily—against shadow libraries were dispelled by the next speaker. An ex-FBI analyst and IT security consultant spoke about the security risks associated with the use of Sci-Hub. The other commentary that I saw was along similar lines: [Is the SNSI the new PRISM? bjoern.brembs.blog](http://bjoern.brembs.net/2020/10/is-the-snsi-the-new-prism/) [Academics band together with publishers because access to research is a cybercrime chorasimilarity](https://chorasimilarity.wordpress.com/2020/11/14/academics-band-together-with-publishers-because-access-to-research-is-a-cybercrime/) [WHOIS behind SNSI & GetFTR? Motley Marginalia](https://csulb.edu/~ggardner/2020/11/16/snsi-getftr/) Let’s face it: any friction beyond follow-link-to-see-PDF is more friction than a researcher deserves. I doubt we would design a scholarly communication system this way were we to start from scratch. But the system is built on centuries of evolving practice, organizations, and companies. It really would be a better world if we didn’t have to spend time and money on scholarly publisher paywalls. And I’m grateful for the Open Access efforts that are pivoting scholarly communications into an open-to-all paradigm. That doesn’t negate the need to provide better options for content that must exist behind a paywall. So what is this SNSI thing? The webinar where Cory presented was the first mention I’d seen of a new group called the Scholarly Networks Security Initiative (SNSI). SNSI is the latest in a series of publisher-driven initiatives to reduce the paywall’s friction for paying users or library patrons coming from licensing institutions. GetFTR (my thoughts) and Seamless Access (my thoughts). (Disclosure: I’m serving on two working groups for Seamless Access that are focused on making it possible for libraries to sensibly and sanely integrate the goals of Seamless Access into campus technology and licensing contracts.) Interestingly, while the Seamless Access initiative is driven by a desire to eliminate web proxy servers, this SNSI presentation upgrades a library’s web proxy server and makes it a more central tool between the patron and the content. One might argue that all access on campus should come through the proxy server to benefit from this kind of access control approach. It kinda makes one wonder about the coordination of these efforts. Still, SNSI is on my radar now, and I think it will be interesting to see what the next events and publications are from this group. As a Cog in the Election System: Reflections on My Role as a Precinct Election Official I may nod off several times in composing this post the day after election day. 
Hopefully, in reading it, you won’t. It is a story about one corner of democracy. It is a journal entry about how it felt to be a citizen doing what I could do to make other citizens’ voices be heard. It needed to be written down before the memories and emotions are erased by time and naps. Yesterday I was a precinct election officer (PEO—a poll worker) for Franklin County—home of Columbus, Ohio. It was my third election as a PEO. The first was last November, and the second was the election aborted by the onset of the coronavirus in March. (Not sure that second one counts.) It was my first as a Voting Location Manager (VLM), so I felt the stakes were high to get it right. Would there be protests at the polling location? Would I have to deal with people wearing candidate T-shirts and hats or not wearing masks? Would there be a crash of election observers, whether official (scrutinizing our every move) or unofficial (that I would have to remove)? It turns out the answer to all three questions was “no”—and it was a fantastic day of civic engagement by PEOs and voters. There were well-engineered processes and policies, happy and patient enthusiasm, and good fortune along the way. This story is going to turn out okay, but it could have been much worse. Because of the complexity of the election day voting process, last year Franklin County started allowing PEOs to do some early setup on Monday evenings. The early setup started at 6 o’clock. I was so anxious to get it right that the day before I took the printout of the polling room dimensions from my VLM packet, scanned it into OmniGraffle on my computer, and designed a to-scale diagram of what I thought the best layout would be. The real thing only vaguely looked like this, but it got us started. What I imagined our polling place would look like We could set up tables, unpack equipment, hang signs, and other tasks that don’t involve turning on machines or breaking open packets of ballots. One of the early setup tasks was updating the voters’ roster on the electronic poll pads. As happened around the country, there was a lot of early voting activity in Franklin County, so the update file must have been massive. The electronic poll pads couldn’t handle the update; they hung at step 8-of-9 for over an hour. I called the Board of Elections and got ahold of someone in the equipment warehouse. We tried some of the simple troubleshooting steps, and he gave me his cell phone number to call back if it wasn’t resolved. By 7:30, everything was done except for the poll pad updates, and the other PEOs were wandering around. I think it was 8 o’clock when I said everyone could go home while the two Voting Location Deputies and I tried to get the poll pads working. I called the equipment warehouse and we hung out on the phone for hours…retrying the updates based on the advice of the technicians called in to troubleshoot. I even “went rogue” towards the end. I searched the web for the messages on the screen to see if anyone else had seen the same problem with the poll pads. The electronic poll pad is an iPad with a single, dedicated application, so I even tried some iPad reset options to clear the device cache and perform a hard reboot. Nothing worked—still stuck at step 8-of-9. The election office people sent us home at 10 o’clock. Even on the way out the door, I tried a rogue option: I hooked a portable battery to one of the electronic polling pads to see if the update would complete overnight and be ready for us the next day. It didn’t, and it wasn’t. 
Text from Board of Elections Polling locations in Ohio open at 6:30 in the morning, and PEOs must report to their sites by 5:30. So I was up at 4:30 for a quick shower and packing up stuff for the day. Early in the setup process, the Board of Elections sent a text that the electronic poll pads were not going to be used and to break out the “BUMPer Packets” to determine a voter’s eligibility to vote. At some point, someone told me what “BUMPer” stood for. I can’t remember, but I can imagine it is Back-Up-something-something. “Never had to use that,” the trainers told me, but it is there in case something goes wrong. Well, it is the year 2020, so was something going to go wrong? Fortunately, the roster judges and one of the voting location deputies tore into the BUMPer Packet and got up to speed on how to use it. It is an old fashioned process: the voter states their name and address, the PEO compares that with the details on the paper ledger, and then asks the voter to sign beside their name. With an actual pen…old fashioned, right? The roster judges had the process down to a science. They kept the queue of verified voters full waiting to use the ballot marker machines. The roster judges were one of my highlights of the day. And boy did the voters come. By the time our polling location opened at 6:30 in the morning, they were wrapped around two sides of the building. We were moving them quickly through the process: three roster tables for checking in, eight ballot-marking machines, and one ballot counter. At our peak capacity, I think we were doing 80 to 90 voters an hour. As good as we were doing, the line never seemed to end. The Franklin County Board of Elections received a grant to cover the costs of two greeters outside that helped keep the line orderly. They did their job with a welcoming smile, as did our inside greeter that offered masks and a squirt of hand sanitizer. Still, the voters kept back-filling that line, and we didn’t see a break until 12:30. The PEOs serving as machine judges were excellent. This was the first time that many voters had seen the new ballot equipment that Franklin County put in place last year. I like this new equipment: the ballot marker prints your choices on a card that it spits out. You can see and verify your choices on the card before you slide it into a separate ballot counter. That is reassuring for me, and I think for most voters, too. But it is new, and it takes a few extra moments to explain. The machine judges got the voters comfortable with the new process. And some of the best parts of the day were when they announced to the room that a first-time voter had just put their card into the ballot counter. We would all pause and cheer. The third group of PEOs at our location were the paper table judges. They handle all of the exceptions. Someone wants to vote with a pre-printed paper ballot rather than using a machine? To the paper table! The roster shows that someone requested an absentee ballot? That voter needs to vote a “provisional” ballot that will be counted at the Board of Elections office if the absentee ballot isn’t received in the mail. The paper table judges explain that with kindness and grace. In the wrong location? The paper table judges would find the correct place. The two paper table PEOs clearly had experience helping voters with the nuances of election processes. Rounding out the team were two voting location deputies (VLD). 
By law, a polling location can’t have a VLD and a voting location manager (VLM) of the same political party. That is part of the checks and balances built into the system. One VLD had been a VLM at this location, and she had a wealth of history and wisdom about running a smooth polling location. For the other VLD, this was his first experience as a precinct election officer, and he jumped in with both feet to do the visible and not-so-visible things that made for a smooth operation. He reminded me a bit of myself a year ago. My first PEO position was as a voting location deputy last November. The pair handled a challenging curbside voter situation where it wasn’t entirely clear if one of the voters in the car was sick. I’d be so lucky to work with them again. The last two hours of the open polls yesterday were dreadfully dull. After the excitement of the morning, we may have averaged a voter every 10 minutes for those last two hours. Everyone was ready to pack it in early and go home. (Polls in Ohio close at 7:30, so counting the hour early for setup and the half an hour for tear down, this was going to be a 14 to 15 hour day.) Over the last hour, I gave the PEOs little tasks to do. At one point, I said they could collect the barcode scanners attached to the ballot markers. We weren’t using them anyway because the electronic poll pads were not functional. Then, in stages (as it became evident that there was no final rush of voters), they could pack up one or two machines and put away tables. Our second to last voter was someone in medical scrubs that just got off their shift. I scared our last voter because she walked up to the roster table at 7:29:30. Thirty seconds later, I called out that the polls are closed (as I think a VLM is required to do), and she looked at me startled. (She got to vote, of course; that’s the rule.) She was our last voter; 799 voters in our precinct that day. Then our team packed everything up as efficiently as they had worked all day. We had put away the equipment and signs, done our final counts, closed out the ballot counter, and sealed the ballot bin. At 8:00, we were done and waving goodbye to our host facility’s office manager. One of the VLD rode along with me to the board of elections to drop off the ballots, and she told me of a shortcut to get there. We were among the first reporting results for Franklin County. I was home again by a quarter of 10—exhausted but proud. I’m so happy that I had something to do yesterday. After weeks of concern and anxiety for how the election was going to turn out, it was a welcome bit of activity to ensure the election was held safely and that voters got to have their say. It was certainly more productive than continually reloading news and election results pages. The anxiety of being put in charge of a polling location was set at ease, too. I’m proud of our polling place team and that the voters in our charge seemed pleased and confident about the process. Maybe you will find inspiration here. If you voted, hopefully it felt good (whether or not the result turned out as you wanted). If you voted for the first time, congratulations and welcome to the club (be on the look-out for the next voting opportunity…likely in the spring). If being a poll worker sounded like fun, get in touch with your local board of elections (here is information about being a poll worker in Franklin County). Democracy is participatory. You’ve got to tune in and show up to make it happen. 
Certificate of Appreciation Running an All-Online Conference with Zoom [post removed] This is an article draft that was accidentally published. I hope to work on a final version soon. If you really want to see it, I saved a copy on the Internet Archive Wayback Machine. With Gratitude for the NISO Ann Marie Cunningham Service Award During the inaugural NISO Plus meeting at the end of February, I was surprised and proud to receive the Ann Marie Cunningham Service award. Todd Carpenter, NISO’s executive director, let me know by tweet as I was not able to attend the conference. Pictured in that tweet is my co-recipient, Christine Stohn, who serves NISO with me as the co-chair of the Information Delivery and Interchange Topic Committee. This got me thinking about what NISO has meant to me. As I think back on it, my activity in NISO spans at least four employers and many hours of standard working group meetings, committee meetings, presentations, and ballot reviews. NISO Ann Marie Cunningham Service Award I did not know Ms Cunningham, the award’s namesake. My first job started when she was the NFAIS executive director in the early 1990s, and I hadn’t been active in the profession yet. I read her brief biography on the NISO website: The Ann Marie Cunningham Service award was established in 1994 to honor NFAIS members who routinely went above and beyond the normal call of duty to serve the organization. It is named after Ann Marie Cunningham who, while working with abstracting and information services such as Biological Abstracts and the Institute for Scientific Information (both now part of NISO-member Clarivate Analytics), worked tirelessly as an dedicated NFAIS volunteer. She ultimately served as the NFAIS Executive Director from 1991 to 1994 when she died unexpectedly. NISO is pleased to continue to present this award to honor a NISO volunteer who has shown the same sort of commitment to serving our organization. As I searched the internet for her name, I came across the proceedings of the 1993 NFAIS meeting, in which Ms Cunningham wrote the introduction with Wendy Wicks. These first sentences from some of the paragraphs of that introduction are as true today as they were then: In an era of rapidly expanding network access, time and distance no longer separate people from information. Much has been said about the global promise of the Internet and the emerging concept of linking information highways, to some people, “free” ways. What many in the networking community, however, seem to take for granted is the availability of vital information flowing on these high-speed links. I wonder what Ms Cunningham of 1993 would think of the information landscape today? Hypertext linking has certainly taken off, if not taken over, the networked information landscape. How that interconnectedness has improved with the adaptation of print-oriented standards and the creation of new standards that match the native capabilities of the network. In just one corner of that space, we have the adoption of PDF as a faithful print replica and HTML as a common tool for displaying information. In another corner, MARC has morphed into a communication format that far exceeds its original purpose of encoding catalog cards; we have an explosion of purpose-built metadata schemas and always the challenge of finding common ground in tools like Dublin Core and Schema.org. We’ve seen several generations of tools and protocols for encoding, distributing, and combining data in new ways to reach users. 
And still we strive to make it better…to more easily deliver a paper to its reader—a dataset to its next experimenter—an idea to be built upon by the next generation. It is that communal effort to make a better common space for ideas that drives me forward. To work in a community at the intersection of libraries, publishers, and service providers is an exciting and fulfilling place to be. I’m grateful to my employers that have given me the ability to participate while bringing the benefits of that connectedness to my organizations. I was not able to be at NISO Plus to accept the award in person, but I was so happy to be handed it by Jason Griffey of NISO about a week later during the Code4lib conference in Pittsburgh. What made that even more special was to learn that Jason created it on his own 3D printer. Thank you to the new NFAIS-joined-with-NISO community for honoring me with this service award. Tethering a Ubiquiti Network to a Mobile Hotspot I saw it happen. The cable-chewing device: a contractor in the neighbor’s back yard with a Ditch Witch trencher burying a cable. I was working outside at the patio table and just about to go into a Zoom meeting. Then the internet dropped out. Suddenly, and with a wrenching feeling in my gut, I remembered where the feed line was buried between the house and the cable company’s pedestal in the right-of-way between the properties. Yup, he had just cut it. To be fair, the utility locator service did not mark my cable’s location, and he was working for a different cable provider than the one we use. (There are three providers in our neighborhood.) It did mean, though, that our broadband internet would be out until my provider could come and run another line. It took an hour of moping about the situation to figure out a solution, then another couple of hours to put it in place: an iPhone tethered to a Raspberry Pi that acted as a network bridge to my home network’s UniFi Security Gateway 3P. Network diagram with tethered iPhone A few years ago I was tired of dealing with spotty consumer internet routers and upgraded the house to UniFi gear from Ubiquiti. Rob Pickering, a college comrade, had written about his experience with the gear, and I was impressed. It wasn’t a cheap upgrade, but it was well worth it. (Especially now with four people in the household working and schooling from home during the COVID-19 outbreak.) The UniFi Security Gateway has three network ports, and I was using two: one for the uplink to my cable internet provider (WAN) and one for the local area network (LAN) in the house. The third port can be configured as another WAN uplink or as another LAN port. And you can tell the Security Gateway to use the second WAN as a failover for the first WAN (or to load-balance with the first WAN). So that is straightforward enough, but how do I get the Personal Hotspot on the iPhone to the second WAN port? That is where the Raspberry Pi comes in. The Raspberry Pi is a small computer with USB, ethernet, HDMI, and audio ports. The version I had lying around is a Raspberry Pi 2—an older model, but plenty powerful enough to be the network bridge between the iPhone and the home network. The toughest part was bootstrapping the operating system packages onto the Pi with only the iPhone Personal Hotspot as the network. That is what I’m documenting here for future reference. Bootstrapping the Raspberry Pi The Raspberry Pi runs its own operating system called Raspbian (a Debian/Linux derivative) as well as more mainstream operating systems.
I chose to use the Ubuntu Server for Raspberry Pi instead of Raspbian because I’m more familiar with Ubuntu. I tethered my MacBook Pro to the iPhone to download the Ubuntu 18.04.4 LTS image and followed the instructions for copying that disk image to the Pi’s microSD card. That allowed me to boot the Pi with Ubuntu and a basic set of operating system packages.

The Challenge: Getting the required networking packages onto the Pi

It would have been really nice to plug the iPhone into the Pi with a USB-Lightning cable and have it find the tethered network. That doesn’t work, though. Ubuntu needs at least the usbmuxd package in order to see the tethered iPhone as a network device, and that package isn’t part of the disk image download. Of course, I couldn’t plug my Pi into the home network to download it (see the first paragraph of this post). My only choice was to tether the Pi to the iPhone over WiFi with a USB network adapter, and that was a bit of Ubuntu voodoo. Fortunately, I found instructions on configuring Ubuntu to use a WPA-protected wireless network (like the one the iPhone Personal Hotspot provides). In brief:

sudo -i
cd /root
wpa_passphrase my_ssid my_ssid_passphrase > wpa.conf
screen -q
wpa_supplicant -Dwext -iwlan0 -c/root/wpa.conf
<control-a> c
dhclient -r
dhclient wlan0

Explanation of lines:
Use sudo to get a root shell.
Change directory to root’s home.
Use the wpa_passphrase command to create a wpa.conf file. Replace my_ssid with the wireless network name provided by the iPhone (your iPhone’s name) and my_ssid_passphrase with the wireless network passphrase (see the “Wi-Fi Password” field in Settings -> Personal Hotspot).
Start the screen program (quietly) so we can have multiple pseudo-terminals.
Run the wpa_supplicant command to connect to the iPhone WiFi hotspot. We run this in the foreground so we can see the status/error messages; this program must continue running to stay connected to the WiFi network.
Use the screen hotkey to create a new pseudo-terminal. This is control-a followed by the letter c.
Use dhclient to clear out any existing DHCP network parameters.
Use dhclient to get an IP address from the iPhone over the wireless network.

Now I was at the point where I could install Ubuntu packages. (I ran ping www.google.com to verify network connectivity.) To install the usbmuxd and network bridge packages (and their prerequisites):

apt-get install usbmuxd bridge-utils

If your experience is like mine, you’ll get an error back:

couldn't get lock /var/lib/dpkg/lock-frontend

The Ubuntu Pi machine is now on the network, and the automatic process that installs security updates is running. That holds the lock on the Ubuntu package database until it finishes, which took about 30 minutes for me. (I imagine this varies based on the capacity of your tethered network and the number of security updates that need to be downloaded.) I monitored the progress of the automated process with the htop command and tried the apt-get command again when it finished. If you are following along, now would be a good time to skip ahead to Configuring the UniFi Security Gateway if you haven’t already set that up.

Turning the Raspberry Pi into a Network Bridge

With all of the software packages installed, I restarted the Pi to complete the update:

shutdown -r now

While it was rebooting, I pulled the USB wireless adapter out of the Pi and plugged in the iPhone’s USB cable. The Pi now saw the iPhone as eth1, but the network did not start until I went to the iPhone and confirmed that I “Trust” the computer it was plugged into.
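As a quick sanity check before building the bridge, a couple of commands can confirm that the Pi really does see the tethered iPhone as a network interface. This is just a sketch of what I would run; it assumes the iPhone shows up as eth1 (as it did for me) and that the kernel’s ipheth driver is providing the interface:

ip -brief link show        # the tethered iPhone should appear as an extra interface (eth1 here)
dmesg | grep -i ipheth     # look for the iPhone USB tethering driver being loaded

If eth1 shows up in that list, the bridge commands in the next step have something to attach to.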
With the iPhone trusted, I ran these commands on the Ubuntu Pi:

dhclient eth1
brctl addbr iphonetether
brctl addif iphonetether eth0 eth1
brctl stp iphonetether on
ifconfig iphonetether up

Explanation of lines:
Get an IP address from the iPhone over the USB interface.
Add a network bridge (iphonetether is an arbitrary name; some instructions simply use br0 for the zeroth bridge).
Add the two Ethernet interfaces to the network bridge.
Turn on the Spanning Tree Protocol. (I don’t think this is actually necessary, but it does no harm.)
Bring up the bridge interface.

The bridge is now live! Thanks to Amitkumar Pal for the hints about using the Pi as a network bridge. More details about the bridge networking software are on the Debian Wiki.

Note! I’m using a hardwired keyboard and monitor to set up the Raspberry Pi. I’ve heard from someone who was using SSH to run these commands that the SSH connection broke off at brctl addif iphonetether eth0 eth1. That makes sense: adding eth0 to the bridge takes over the interface that the SSH session was coming in on.

Configuring the UniFi Security Gateway

I have a UniFi Cloud Key, so I could change the configuration of the UniFi network with a browser. (You’ll need to know the IP address of the Cloud Key; hopefully you have that somewhere.) I connected to my Cloud Key at https://192.168.1.58:8443/ and clicked through the self-signed certificate warning.

First, I set up a second Wide Area Network (WAN—your uplink to the internet) for the iPhone Personal Hotspot under Settings -> Internet -> WAN Networks, then selected “Create a New Network”:
Network Name: Backup WAN
IPv4 Connection Type: Use DHCP
IPv6 Connection Type: Use DHCPv6
DNS Server: 1.1.1.1 and 1.0.0.1 (Cloudflare’s DNS servers)
Load Balancing: Failover only

The last selection is key…I wanted the gateway to use this WAN interface only as a backup to the main broadband interface. If the broadband comes back up, I want to stop using the tethered iPhone!

Second, I assigned the Backup WAN to the LAN2/WAN2 port on the Security Gateway (Devices -> Gateway -> Ports -> Configure interfaces):
Port: WAN2/LAN2
Network: WAN2
Speed/Duplex: Autonegotiate

Apply the changes to provision the Security Gateway. After about 45 seconds, the Security Gateway failed over from “WAN iface eth0” (my broadband connection) to “WAN iface eth2” (my tethered iPhone through the Pi bridge). These showed up as alerts in the UniFi interface.

Performance and Results

So I’m pretty happy with this setup. The family has been running simultaneous Zoom calls and web browsing on the home network, and the performance has been mostly normal. Web pages do take a little longer to load, but whatever Zoom is using to dynamically adjust its bandwidth usage is doing quite well. This is chewing through the mobile data quota pretty fast, so it isn’t something I want to do every day. Knowing that this is possible, though, is a big relief. As a bonus, the iPhone is staying charged via the 1 amp power coming through the Pi.
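When the cable company eventually runs a new line and the broadband comes back, the gateway should fail back to the main WAN on its own (that is what the failover-only setting is for), and the temporary bridge on the Pi can be torn down. A sketch of that cleanup, assuming the same interface and bridge names used above:

ifconfig iphonetether down      # take the bridge interface down
brctl delif iphonetether eth1   # detach the tethered iPhone from the bridge
brctl delif iphonetether eth0   # detach the wired interface from the bridge
brctl delbr iphonetether        # delete the (now down) bridge itself

A reboot of the Pi afterwards is the simplest way to get everything back to a clean state.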