9to5google-com-4824 ---- Google 'One Today' shuts down this week after several years - 9to5Google

Google is killing its 'One Today' donation app w/ only one week's notice
Ben Schoon - Jan. 29th 2020 12:25 pm PT

For quite some time, Google's "One Today" app has made it really easy to support nonprofit charities across a wide variety of causes. After a few years, though, Google is set to kill off One Today, and it's giving users only a single week's notice.

Not to be confused with Google One, the storage hub for your account, One Today has been available since 2013. In an email to its users today (via Android Police), Google explains that a decline in the use of One Today over the years prompted the company to shut things down.
Beyond just allowing for easy donations to nonprofits, the One Today app also hosted projects with photos and videos showing how your donation is helping the cause. The app also provided users with a single receipt at the end of the year to be used for tax deductions.

Google One Today will be shut down next week, on February 6. By that date, 100% of funds donated prior will be sent to the relevant nonprofit organizations. After the 6th, the app will be turned off and all projects deleted.

"Hello, We have an important update to share with you. We launched Google One Today seven years ago to help people donate to causes they care about. In the last few years, we have seen donors choose other products to fundraise for their favorite nonprofits. As a result, we will shut down One Today on February 6, 2020. New nonprofits will no longer be able to sign up for One Today. The Google One Today app will be turned off, and any open projects will be deleted. We will ensure that 100% of funds donated on One Today prior to February 6 are disbursed to the relevant nonprofits. If you have any questions, please feel free to contact the One Today team. Thank you for your donations and partnership. The Google One Today team"

Google One Today wasn't exactly a well-known product, so its loss probably won't be felt by a ton of people. Still, it's a shame to see such a useful concept go to waste.

More on Google Graveyard:
Google made an actual product graveyard for Halloween w/ G+, Reader, more
Google shuts down Translator Toolkit this week after a decade
As Google+ goes dark, here's what the social network meant to the 9to5Google team

acrl-ala-org-211 ----
Fatal error: Cannot declare class WP_Block_Template, because the name is already in use in /home/customer/www/acrl.ala.org/public_html/techconnect/wp-content/plugins/gutenberg/lib/full-site-editing/class-wp-block-template.php on line 12
ag-ny-gov-6413 ---- Attorney General James Ends Virtual Currency Trading Platform Bitfinex's Illegal Activities in New York | New York State Attorney General

February 23, 2021

Attorney General James Ends Virtual Currency Trading Platform Bitfinex's Illegal Activities in New York
Bitfinex and Tether Must Submit to Mandatory Reporting on Efforts to Stop New York Trading
Bitfinex and Tether Deceived Clients and Market by Overstating Reserves, Hiding Approximately $850 Million in Losses Around the Globe

NEW YORK – New York Attorney General Letitia James today continued her efforts to protect investors from fraudulent and deceptive virtual or "crypto" currency trading platforms by requiring Bitfinex and Tether to end all trading activity with New Yorkers. Millions around the country and the world today use virtual currencies as decentralized digital currencies — unlike real, regulated government currencies, including the U.S. dollar — to buy goods and services, oftentimes anonymously, through secure online transactions. Stablecoins, specifically, are virtual currencies that are always supposed to have the same real-dollar value. In the case of Tether, the company represented that each of its stablecoins was backed one-to-one by U.S. dollars in reserve. However, an investigation by the Office of the Attorney General (OAG) found that iFinex — the operator of Bitfinex — and Tether made false statements about the backing of the "tether" stablecoin, and about the movement of hundreds of millions of dollars between the two companies to cover up the truth about massive losses by Bitfinex.

An agreement with iFinex, Tether, and their related entities will require them to cease any further trading activity with New Yorkers, as well as force the companies to pay $18.5 million in penalties, in addition to requiring a number of steps to increase transparency.

"Bitfinex and Tether recklessly and unlawfully covered-up massive financial losses to keep their scheme going and protect their bottom lines," said Attorney General James. "Tether's claims that its virtual currency was fully backed by U.S. dollars at all times was a lie.
These companies obscured the true risk investors faced and were operated by unlicensed and unregulated individuals and entities dealing in the darkest corners of the financial system. This resolution makes clear that those trading virtual currencies in New York state who think they can avoid our laws cannot and will not. Last week, we sued to shut down Coinseed for its fraudulent conduct. This week, we're taking action to end Bitfinex and Tether's illegal activities in New York. These legal actions send a clear message that we will stand up to corporate greed whether it comes out of a traditional bank, a virtual currency trading platform, or any other type of financial institution."

A Stablecoin Without Stability – Tethers Weren't Fully Backed At All Times

The OAG's investigation found that, starting no later than mid-2017, Tether had no access to banking, anywhere in the world, and so for periods of time held no reserves to back tethers in circulation at the rate of one dollar for every tether, contrary to its representations. In the face of persistent questions about whether the company actually held sufficient funds, Tether published a self-proclaimed 'verification' of its cash reserves, in 2017, that it characterized as "a good faith effort on our behalf to provide an interim analysis of our cash position." In reality, however, the cash ostensibly backing tethers had only been placed in Tether's account as of the very morning of the company's 'verification.'

On November 1, 2018, Tether publicized another self-proclaimed 'verification' of its cash reserve; this time at Deltec Bank & Trust Ltd. of the Bahamas. The announcement linked to a letter dated November 1, 2018, which stated that tethers were fully backed by cash, at one dollar for every one tether. However, the very next day, on November 2, 2018, Tether began to transfer funds out of its account, ultimately moving hundreds of millions of dollars from Tether's bank accounts to Bitfinex's accounts. And so, as of November 2, 2018 — one day after their latest 'verification' — tethers were again no longer backed one-to-one by U.S. dollars in a Tether bank account. As of today, Tether represents that over 34 billion tethers have been issued and are outstanding and traded in the market.

When No Bank Backs You, Turn to Shady Entities — Bitfinex Hid Massive Losses

In 2017 and 2018, Bitfinex began to increasingly rely on third-party "payment processors" to handle customer deposits and withdrawals from the Bitfinex trading platform. In 2018, while attempting to "move money [more] efficiently," Bitfinex suffered a massive and undisclosed loss of funds because of its relationship with a purportedly Panama-based entity known as "Crypto Capital Corp." Bitfinex responded to pervasive public reports of liquidity problems by misleading the market and its own clients. On October 7, 2018, Bitfinex claimed to "not entirely understand the arguments that purport to show us insolvent," when, for months, its executives had been pleading with Crypto Capital to return almost a billion dollars in assets.
On April 26, 2019 — after the OAG revealed in court documents that approximately $850 million had gone missing and that Bitfinex and Tether had been misleading their clients — the company issued a false statement that "we have been informed that these Crypto Capital amounts are not lost but have been, in fact, seized and safeguarded." The reality, however, was that Bitfinex did not, in fact, know the whereabouts of all of the customer funds held by Crypto Capital, and so had no such assurance to make.

The OAG Investigation Shines a Light on Unlawful Trading in New York State

From the beginning of its interaction with the OAG, iFinex and Tether falsely claimed that they did not allow trading activity by New Yorkers. The OAG investigation determined that to be untrue and that the companies have operated for years as unlicensed and unregulated entities, illegally trading virtual currencies in the state of New York. In April 2019, the OAG sought and obtained an injunction against further transfers of assets between and among Bitfinex and Tether, which are owned and controlled by the same small group of individuals. That action — under Section 354 of New York's Martin Act — ultimately led to a July 2020 decision by the New York State Appellate Division of the Supreme Court, First Department, holding that:

Bitfinex and Tether — and other virtual currency trading platforms and cryptocurrencies operating from various locations around the world — are still subject to OAG jurisdiction if doing business in New York;

The stablecoin "tether" and other virtual currencies were "commodities" under section 352 of the Martin Act, and noted that virtual currencies may also constitute securities under the act; and

The OAG had established the factual predicate necessary to uphold the injunction and require production of documents and information relevant to its investigation in advance of the filing of a formal suit.

Bitfinex and Tether Banned from Continuing Illegal Activities in New York

Today's agreement requires Bitfinex and Tether to discontinue any trading activity with New Yorkers. In addition, these companies must submit regular reports to the OAG to ensure compliance with this prohibition. Further, the companies must submit to mandatory reporting on core business functions. Specifically, both Bitfinex and Tether will need to report, on a quarterly basis, that they are properly segregating corporate and client accounts, including segregation of government-issued and virtual currency trading accounts by company executives, as well as submit to mandatory reporting regarding transfers of assets between and among Bitfinex and Tether entities. Additionally, Tether must offer public disclosures, by category, of the assets backing tethers, including disclosure of any loans or receivables to or from affiliated entities. The companies will also provide greater transparency and mandatory reporting regarding the use of non-bank "payment processors" or other entities used to transmit client funds. Finally, Bitfinex and Tether will be required to pay $18.5 million in penalties to the state of New York.

In September 2018, the OAG issued its Virtual Markets Integrity Initiative Report, which highlighted the "substantial potential for conflicts between the interests" of virtual currency trading platforms, insiders, and issuers. Bitfinex was one of the trading platforms examined in the report.

This matter was handled by Senior Enforcement Counsel John D. Castiglione and Assistant Attorneys General Brian M.
Whitehurst and Tanya Trakht of the Investor Protection Bureau; Assistant Attorneys General Ezra Sternstein and Johanna Skrzypczyk of the Bureau of Internet and Technology; and Legal Assistant Charmaine Blake — all supervised by Bureau of Internet and Technology Chief Kim Berger and Senior Enforcement Counsel for Economic Justice Kevin Wallace. The Investor Protection Bureau is led by Chief Peter Pope. Both the Bureau of Internet and Technology and the Investor Protection Bureau are part of the Division for Economic Justice, which is overseen by Chief Deputy Attorney General Chris D'Angelo and First Deputy Attorney General Jennifer Levy.

alacorenews-org-3281 ---- Core News – Find Community. Share Expertise. Enhance Library Careers.
New Core Jobs: August 11, 2021
Posted on August 11, 2021

Browse new job openings on the Core Jobs Site:
Outreach & Education Librarian, University of Maryland, Baltimore – Health Sciences & Human Services Library, Baltimore, MD
Data Management Librarian, University of Maryland, Baltimore – Health Sciences & Human Services Library, Baltimore, MD
Technical Services Coordinator (PDF), Northeastern Illinois University, Chicago, IL
Associate University Librarian for Digital Strategies, University at Buffalo, Buffalo, NY
Associate University Librarian for Research, Collections & Outreach, University at Buffalo, Buffalo, NY
Cataloger & Metadata Librarian, Charleston County Public Library, Charleston, SC
Visit the Core Jobs Site for additional job openings and information on submitting your own job posting.

Register for a Forum Preconference!
Posted on August 5, 2021

Preconference registration is open for Core Forum! Register for an all-day library buildings tour or an afternoon preconference offered on Thursday, October 7, in Baltimore, Maryland.
We Create Inclusive Experiences! Library Buildings Tour (8am–4pm)
Choose Your Own Adventure Scholarly Communication Assessment Rubrics (12–4pm), presented by Suzanna Conrad, Emily Chan, Daina Dickman and Nicole Lawson
Crisis Communication/Message Dissemination for Libraries (12–4pm), presented by Gregg Dodd, Christine Feldmann and Meghan McCorkell
Evidence Based Practice in Libraries (12–4pm), presented by Amanda Click, Claire Walker Wiley and Meggan Houlihan
New Connections in the RDA Toolkit (12–4pm), presented by Stephen Hearn, Robert Maxwell and Kathy Glennan
If you've already registered for Forum and would like to add a preconference to your existing registration, please contact us. The 2021 Forum is our inaugural conference for Core and is ALA's first return to in-person events. It will bring together decision-makers and practitioners from the ALA division that focuses on: Access & Equity, Assessment…Continue Reading

New Core Jobs: August 4, 2021
Posted on August 4, 2021

Browse new job openings on the Core Jobs Site:
Community Engagement and Economic Development Services Manager, Seattle Public Library, WA
Electronic Resources Librarian, Miami University Libraries, Oxford, OH
Systems Coordinator, PrairieCat (library consortium), Coal Valley or Bolingbrook, IL
Digital Archivist, Rice University, Fondren Library, Houston, TX
Instruction Librarian, Radford University, McConnell Library, Radford, VA
Librarian I/II – Technology Education Librarian, Virginia Beach Public Library, VA
IT Programmer Analyst, Sonoma County Library, Rohnert Park, CA
Visit the Core Jobs Site for additional job openings and information on submitting your own job posting.

Interest Group Week Recordings Available
Posted on August 3, 2021

We had another successful Interest Group Week last month with more than 6,000 registrations. A big thank you 💖 to all the chairs, moderators, and speakers, who put together 23 great presentations and discussions! We've added links to the recordings on the IG Week page, and you can find links to the slides in the interest groups that participated.
Summary, June 2021 Core e-Forum, "Does Better Training Lead to Greater Job Satisfaction?"
Posted on July 29, 2021

In the June Core e-Forum, participants were asked to discuss the relationship between job training and job satisfaction. The discussion also addressed ways to organize a successful training program and using instructional design methods to improve technical services training. The first day's discussion revolved around more general questions about the nature of on-the-job training and participants' own roles in on-the-job training. The purpose of the second day was to address more specific topics of training structure and organization, and to examine the link between successful training and job satisfaction. It was obvious from the lively discussion on Day 1 that we all are almost always engaged in on-the-job training and feel strongly about its nature and organization. Themes that consistently emerged were cross-training, documentation, continuity, and understanding the reasons for changes, but also preserving institutional memory. The following questions were asked and answered many times…Continue Reading

New Core Jobs: July 28, 2021
Posted on July 28, 2021

Browse new job openings on the Core Jobs Site:
Cataloging/Metadata Librarian, Lehigh University, Bethlehem, PA
Acquisitions Librarian, Marquette University Libraries, Milwaukee, WI
Cataloging and Metadata Librarian, Marquette University Libraries, Milwaukee, WI
University Librarian, Capilano University, North Vancouver, BC
Librarian, SLAC National Accelerator Laboratory, Menlo Park, CA
User Experience and Digital Projects Librarian, Texas A&M University Libraries, College Station, TX
Branch Manager II, Mid-Columbia Libraries, Kennewick, WA
Head of Library Systems, Virginia Tech University Libraries, Blacksburg, VA
Head of Support Desk Operations, Virginia Tech University Libraries, Blacksburg, VA
Cloud Infrastructure Engineer, Massachusetts Institute of Technology Libraries, Cambridge, MA
Senior Electronic Resources Librarian, Rice University, Fondren Library, Houston, TX
Visit the Core Jobs Site for additional job openings and information on submitting your own job posting.

New Version of Cataloging Correctly for Kids
Posted on July 27, 2021

Cataloging library materials for children in the internet age has never been as challenging or as important. RDA: Resource Description and Access is now the descriptive standard, there are new ways to find materials using classifications, and subject heading access has been greatly enhanced by the keyword capabilities of today's online catalogs. It's the perfect moment to present a completely overhauled edition of this acclaimed bestseller. This new sixth edition guides catalogers, children's librarians, and LIS students in taking an effective approach towards materials intended for children and young adults. Informed by recent studies of how children search, this handbook's top-to-bottom revisions address areas such as: how RDA applies to a variety of children's materials, with examples provided; authority control, bibliographic description, subject access, and linked data; electronic resources and other non-book materials; and cataloging for non-English-speaking and preliterate children.
With advice contributed by experienced, practicing librarians, this one-stop…Continue Reading

Summary, May 2021 Core e-Forum, "Advocacy for Implementing Faceted Vocabularies"
Posted on July 26, 2021

During the May 2021 Core e-Forum, 'We Faceted our Seatbelts, Now What? Advocacy for Implementing Faceted Vocabularies in Public Facing Interfaces,' we hoped to open a wide discussion on implementing faceted vocabularies, perceptions of how those headings should be used or displayed, and what making use of these vocabularies can accomplish for institutions and collections metadata. There were some, albeit unintentional, thematic developments over the course of the two days, and we hope this report offers some glimpses of that development. For instance, we posed the question of whether inclusion of faceted terminology was perceived as duplication of LCSH terms present in the display for a resource's metadata. Many feel there is duplication, and many are simply open to considering the topic. One interesting point is that even though the term may look the same to the user, a term coming from a faceted vocabulary is actually different. But…Continue Reading

New Core Jobs: July 21, 2021
Posted on July 21, 2021

Browse new job openings on the Core Jobs Site:
Acquisitions and Electronic Resources Librarian, Colgate University, Hamilton, NY
Director of Public Library, Town of Needham, MA
Librarian II ~ eResources & Discovery Librarian, University of Maryland, Baltimore County, Baltimore, MD
Head, Scholarly Communication & Data Services, UNLV University Libraries, University of Nevada, Las Vegas, NV
IT Library Portfolio Manager, Multnomah County, Portland, OR
ILS Manager, Charleston County Public Library, Charleston, SC
Manuscripts Archivist, Ohio University Libraries, Athens, OH
Development Officer, Charleston County Public Library, Charleston, SC
Visit the Core Jobs Site for additional job openings and information on submitting your own job posting.

2021 John Cotton Dana Awards Announced
Posted on July 9, 2021

~ Eight Libraries Awarded $10,000 Grants from H.W. Wilson Foundation ~

The 2021 John Cotton Dana (JCD) Award winners, recognized for their strategic communications efforts, have been selected. The John Cotton Dana Awards provide up to eight grants for libraries that demonstrate outstanding library public relations. The award is managed by the American Library Association's Core Division and consists of $10,000 grants from the H.W. Wilson Foundation. The grants highlight campaigns featuring a wide variety of strategies, including: civic engagement programming, a virtual story time with more than 800 million views, a virtual Open House to celebrate a three-year renovation, and an awareness media campaign to highlight pandemic services.
Other winning campaigns include the launch of a local artist music streaming site that had to retool mid-campaign as libraries closed, a Park & Connect Internet access campaign, a 2020 Census campaign that increased the county's self-response rate, and a library…Continue Reading

alair-ala-org-5195 ---- Information Literacy Competency Standards for Higher Education | ALA Institutional Repository

Information Literacy Competency Standards for Higher Education
URI: http://hdl.handle.net/11213/7668
Date: 2000-01
Abstract: The Information Literacy Competency Standards for Higher Education (originally approved in January 2000) were rescinded by the ACRL Board of Directors on June 25, 2016, at the 2016 ALA Annual Conference in Orlando, Florida, which means they are no longer in force.
Files in this item: ACRL Information ... (PDF, 793.7Kb) — Description: Information Literacy ...
This item appears in the following Collection(s): Guidelines, Standards, and Frameworks

am-jpmorgan-com-6902 ---- JPMorgan Prime Money Market Fund-Morgan | J.P. Morgan Asset Management

This website is a general communication being provided for informational purposes only. It is educational in nature and not designed to be a recommendation for any specific investment product, strategy, plan feature or other purposes. By receiving this communication you agree with the intended purpose described above. Any examples used in this material are generic, hypothetical and for illustration purposes only.
None of J.P. Morgan Asset Management, its affiliates or representatives is suggesting that the recipient or any other person take a specific course of action or any action at all. Communications such as this are not impartial and are provided in connection with the advertising and marketing of products and services. Prior to making any investment or financial decisions, an investor should seek individualized advice from personal financial, legal, tax and other professionals that take into account all of the particular facts and circumstances of an investor's own situation.

Opinions and statements of financial market trends that are based on current market conditions constitute our judgment and are subject to change without notice. We believe the information provided here is reliable but should not be assumed to be accurate or complete. The views and strategies described may not be suitable for all investors.

INFORMATION REGARDING MUTUAL FUNDS/ETF: Investors should carefully consider the investment objectives and risks as well as charges and expenses of a mutual fund or ETF before investing. The summary and full prospectuses contain this and other information about the mutual fund or ETF and should be read carefully before investing. To obtain a prospectus for Mutual Funds: Contact JPMorgan Distribution Services, Inc. at 1-800-480-4111 or download it from this site. Exchange Traded Funds: Call 1-844-4JPM-ETF or download it from this site. J.P. Morgan Funds and J.P. Morgan ETFs are distributed by JPMorgan Distribution Services, Inc., which is an affiliate of JPMorgan Chase & Co. Affiliates of JPMorgan Chase & Co. receive fees for providing various services to the funds. JPMorgan Distribution Services, Inc. is a member of FINRA.

INFORMATION REGARDING COMMINGLED FUNDS: For additional information regarding the Commingled Pension Trust Funds of JPMorgan Chase Bank, N.A., please contact your J.P. Morgan Asset Management representative. The Commingled Pension Trust Funds of JPMorgan Chase Bank N.A. are collective trust funds established and maintained by JPMorgan Chase Bank, N.A. under a declaration of trust. The funds are not required to file a prospectus or registration statement with the SEC, and accordingly, neither is available. The funds are available only to certain qualified retirement plans and governmental plans and are not offered to the general public. Units of the funds are not bank deposits and are not insured or guaranteed by any bank, government entity, the FDIC or any other type of deposit insurance. You should carefully consider the investment objectives, risk, charges, and expenses of the fund before investing.

INFORMATION FOR ALL SITE USERS: J.P. Morgan Asset Management is the brand name for the asset management business of JPMorgan Chase & Co. and its affiliates worldwide. NOT FDIC INSURED | NO BANK GUARANTEE | MAY LOSE VALUE. Telephone calls and electronic communications may be monitored and/or recorded. Personal data will be collected, stored and processed by J.P. Morgan Asset Management in accordance with our privacy policies at https://www.jpmorgan.com/privacy. If you are a person with a disability and need additional support in viewing the material, please call us at 1-800-343-1113 for assistance.

amsaw-org-849 ---- Robert Benchley It Happened in History!
Robert Benchley

One of America's greatest humorists just happened to have been an accomplished actor, drama critic, and author, as well. Born on September 15, 1889, in Worcester, Massachusetts, Robert Benchley seemed destined for success. In school, he built a reputation for his creative interpretation of essay assignments. When asked to write an essay about something practical, he penned a theme entitled "How to Embalm a Corpse." For an assignment concerning the dispute between the United States and Canada over Newfoundland fishing rights, he wrote an essay from the point of view of the fish.

Upon leaving Harvard in 1912, Benchley joined the staff of the New York Tribune. He wasn't a very good reporter, however, and his editors soon switched him to feature writing, which served his talents better. He wrote stories and humorous essays such as "Did Prehistoric Man Walk On His Head?" After serving in World War I, he returned to New York and accepted a position as managing editor at Vanity Fair magazine, where he met fellow writer and wit Dorothy Parker, who soon became his closest friend. The two developed a reputation as office pranksters. Once, after management asked its staff not to discuss their salaries, Benchley and Parker had them printed on placards that they wore around their necks.

The two literary cut-ups formed the nucleus of a group of writers, actors, and artists who met for lunch at New York's Algonquin Hotel to share sparkling conversation, juicy gossip, and scathing insults. Together with Alexander Woollcott, George S. Kaufman, Marc Connelly, Harpo Marx, and others, they became known as the Algonquin Round Table. Benchley thought so highly of Parker that he resigned from Vanity Fair when she was fired. He took a position as drama critic for Life magazine and The New Yorker. But, since he knew absolutely nothing about the theater, he quickly turned his reviews into humorous essays. He once wrote a review of the New York City Telephone Directory. He said it had no plot. He was also a notorious prevaricator. When asked to provide a brief biography of himself for an encyclopedia, he wrote that he was born on the Isle of Wight, wrote A Tale of Two Cities, married a princess in Portugal, and was buried in Westminster Abbey.

Through his work in Life magazine, as well as in books such as Pluck and Luck (1925) and Early Worm (1927), Benchley emerged as one of America's most popular and well-regarded writers. He had an uncanny knack for dissecting the comic futility of society during the Roaring Twenties. His subtle, whimsical brand of humor played well against the struggles of the common man. Often his treatises spun off on whimsical, nonsensical tangents. One of Benchley's friends, Donald Ogden Stewart (The Philadelphia Story), described his sense of humor as "crazy." Nevertheless, it found a receptive audience among his pre-Depression Era readers.

Benchley began working in movies in 1928, with a reprise of The Treasurer's Report in one of the earliest short films to feature sound. In 1932, he began writing for feature films, marking his debut with The Sport Parade, in which he also co-starred as a broadcaster. He continued to play comedic supporting roles in the years to come, typically cast as a bumbling yet lovable sophisticate, a cocktail glass or cigarette-and-holder clenched firmly in hand. In 1940, he appeared in the Alfred Hitchcock thriller, Foreign Correspondent, a film to which he also contributed dialogue.
His work was collected in many books, including From Bed to Worse (1934), Why Does Nobody Collect Me? (1935), and My Ten Years in a Quandary, and How They Grew (1936). Robert Benchley, who once said, "It took me fifteen years to discover that I had no talent for writing, but I couldn't give it up because by that time I was too famous," died on November 21, 1945, at the peak of his fame. Benchley's son, Nathaniel, was a well-regarded novelist and children's book author, while his grandson, Peter, later became famous as author of the book that inspired the film, Jaws.

amycastor-com-7767 ---- Binance: A crypto exchange running out of places to hide – Amy Castor

Binance, the world's largest dark crypto slush fund, is struggling to find corners of the world that will tolerate its lax anti-money laundering policies and flagrant disregard for securities laws.

On Thursday, the Cayman Islands Monetary Authority issued a statement that Binance, the Binance Group and Binance Holdings Limited are not registered, licensed, regulated, or otherwise authorized to operate a crypto exchange in the Cayman Islands. "Following recent press reports that have referred to Binance, the Binance Group and Binance Holdings Limited as being a crypto-currency company operating an exchange based in the Cayman Islands, the Authority reiterates that Binance, the Binance Group or Binance Holdings Limited are not subject to any regulatory oversight by the Authority," the statement said. This is clearly CIMA reacting to everyone else blaming Binance on the Caymans, where it's been incorporated since 2018.
On Friday, Thailand's Securities and Exchange Commission filed a criminal complaint against the crypto exchange for operating a digital asset business without a license within its borders.

Last week, Binance opted to close up shop in Ontario rather than meet the fate of other cryptocurrency exchanges that have had actions filed against them for allegedly failing to comply with Ontario securities laws.

Singapore's central bank, the Monetary Authority of Singapore, said Thursday that it would look into Binance Asia Services Pte., the local unit of Binance Holdings, Bloomberg reported. The Binance subsidiary applied for a license to operate in Singapore. While it awaits a review of its license application, Binance Asia Services has a grace period that allows it to continue to operate in the city-state. "We are aware of the actions taken by other regulatory authorities against Binance and will follow up as appropriate," the MAS said in a statement.

On June 26, the UK's Financial Conduct Authority issued a consumer warning that Binance's UK entity, Binance Markets Limited, was prohibited from doing business in the country. "Due to the imposition of requirements by the FCA, Binance Markets Limited is not currently permitted to undertake any regulated activities without the prior written consent of the FCA," the regulator said. It continued: "No other entity in the Binance Group holds any form of UK authorisation, registration or licence to conduct regulated activity in the UK."

Following the UK financial watchdog's crackdown, Binance customers were temporarily frozen out of Faster Payments, a major UK interbank payments platform. Withdrawals were reinstated a few days later. Only a few days before, Japan's Financial Services Agency issued a warning that Binance was operating in the country without a license. (As I explain below, this is the second time the FSA has issued such a warning.) Last summer, Malaysia's Securities Commission also added Binance to its list of unauthorised entities, indicating Binance was operating without a license in the Malaysian market.

A history of bouncing around

Binance offers a wide range of services, from crypto spot and derivatives trading to tokenized versions of corporate stocks. It also runs a major crypto exchange and has its own cryptocurrency, Binance Coin (BNB), currently the fifth largest crypto by market cap, according to Coinmarketcap. The company was founded in Hong Kong in the summer of 2017 by Changpeng Zhao, more commonly known as "CZ." China banned bitcoin exchanges a few months later, and ever since, Binance has been bouncing about in search of a more tolerant jurisdiction to host its offices and servers.

Its first stop after Hong Kong was Japan, but Japan was quick to put up the "You're not welcome here" sign. The country's Financial Services Agency sent Binance its first warning in March 2018. "The exchange has irked the FSA by failing to verify the identification of Japanese investors at the time accounts are opened. The Japanese officials suspect Binance does not have effective measures to prevent money laundering; the exchange handles a number of virtual currencies that are traded anonymously," Nikkei wrote.

Binance responded by moving its corporate registration to the Cayman Islands and opening a branch office in Malta, the FT reported in March 2018. In February 2020, however, Maltese authorities announced Binance was not licensed to do business in the island country.
"Following a report in a section of the media referring to Binance as a 'Malta-based cryptocurrency' company, the Malta Financial Services Authority (MFSA) reiterates that Binance is not authorised by the MFSA to operate in the crypto currency sphere and is therefore not subject to regulatory oversight by the MFSA."

The 'decentralized' excuse

CZ lives in Singapore but has continually refused to say where his company is headquartered, insisting over and over again that Binance is decentralized. This is absolute nonsense, of course. The company is run by real people and its software runs on real servers. The problem is, CZ, whose net worth Forbes estimated to be $2 billion in 2018, doesn't want to abide by real laws. As a result, his company faces a slew of other problems.

Binance is currently under investigation by the US Department of Justice and the Internal Revenue Service, Bloomberg reported in May. It's also being probed by the Commodity Futures Trading Commission over whether it allowed US residents to place wagers on the exchange, according to another Bloomberg report. Also in May, Germany's financial regulator BaFin warned that Binance risked being fined for offering its securities-tracking tokens without publishing an investor prospectus. Binance offers "stock tokens" representing MicroStrategy, Microsoft, Apple, Tesla, and Coinbase Global.

Binance has for five years done whatever it pleases, all the while using the excuse of "decentralization" to ignore laws and regulations. Regulators are finally putting their collective foot down. Enough is enough.

Image: Changpeng Zhao, YouTube

If you like my work, please subscribe to my Patreon account for as little as $5 a month.

Posted on July 2, 2021 by Amy Castor

allenai-org-8002 ---- CORD-19: COVID-19 Open Research Dataset — Allen Institute for AI

CORD-19: COVID-19 Open Research Dataset
Semantic Scholar • 2020
CORD-19 is a free resource of tens of thousands of scholarly articles about COVID-19, SARS-CoV-2, and related coronaviruses for use by the global research community.
License: CORD-19 Dataset License
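As an illustration of how a researcher might start working with the dataset, here is a minimal Python sketch that reads the metadata.csv file that CORD-19 releases typically ship with and prints a few records. The file path and the exact column names (cord_uid, publish_time, title) are assumptions based on how the dataset is commonly distributed, not details stated on this page.

import csv

# Read the CORD-19 metadata file (path is hypothetical; adjust to your download location).
with open("cord19/metadata.csv", newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f)
    for i, row in enumerate(reader):
        # Each row describes one paper; print its identifier, publication date, and title.
        print(row.get("cord_uid"), row.get("publish_time"), row.get("title"))
        if i >= 4:  # stop after the first five records
            break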
amycastor-com-2178 ---- Binance: Fiat off-ramps keep closing, reports of frozen funds, what happened to Catherine Coley? – Amy Castor

Last thing I remember, I was running for the door.
I had to find the passage back
To the place I was before.
"Relax," said the night man. "We are programmed to receive.
You can check out any time you like,
But you can never leave."
~ Eagles

Binance customers are becoming trapped inside of Binance — or at least their funds are — as the fiat exits to the world's largest crypto exchange close around them. You can almost hear the echoes of doors slamming, one by one, down a long empty corridor leading to nowhere.

In the latest bit of unfolding drama, Binance told its customers today that it had disabled withdrawals in British Pounds after its key payment partner, Clear Junction, ended its business relationship with the exchange. Clear Junction provides access to Faster Payments through a UK lender called Clear Bank. Faster Payments is a major UK payments network that offers near real-time transfers between the country's banks — the thing the US Federal Reserve hopes to get with FedNow.

In a statement on its website on Monday, Clear Junction said: "Clear Junction can confirm that it will no longer be facilitating payments related to Binance.
The decision has been made following the Financial Conduct Authority's recent announcement that Binance is not permitted to undertake any regulated activity in the UK. We have decided to suspend both GBP and EUR payments and will no longer be facilitating deposits or withdrawals in favor of or on behalf of the crypto trading platform. Clear Junction acts in full compliance with FCA regulations and guidance in regards to handling payments of Binance."

The Financial Conduct Authority, or FCA, ruled on June 26 that Binance cannot conduct any "regulated activity" in the UK. Binance downplayed the ruling at the time, telling everyone the FCA notice related to Binance Markets Ltd and had "no direct impact on the services provided on Binance.com." Binance waited a day after learning it was cut off by Clear Junction before emailing its customers and telling them that the suspension of payments was temporary.

"This means that there is now no way for UK customers to withdraw GBP from @Binance. This came with ZERO advance warning." — Crypto Cnut (@CryptoCnut), July 13, 2021 (tweet)

"We are working to resume this service as soon as we can," Binance said. It reassured customers they can still buy crypto with British Pounds via credit and debit cards on the platform.

This is the second time in recent weeks that Binance customers have been frozen out of Faster Payments. They were also frozen out at the end of June. A few days later, the service was restored — presumably when Binance started putting payments through Clear Junction. I am guessing that Clear Bank's banking partners warned them that Binance was too risky and that if they wanted to maintain their banking relationships, they'd better drop them as a customer asap, so they did.

Binance talks like all of these issues are temporary snafus that it's going to fix in due time. In fact, the exchange's struggle to secure banking in many parts of the world is likely to intensify. Despite numerous claims in the past about taking its legal obligations seriously, Binance has been loosey-goosey with its anti-money laundering and know-your-customer rules, opening up loopholes for dirty money to flow through the exchange. Now that the word is out, no bank is going to want to touch them.

Other developments

I wrote about Binance's global pariah status earlier this month. Since I published that story, UK high-street banks have moved to ban Binance, all following the FCA ban. On July 5, Barclays said it is blocking its customers from using their debit and credit cards to make payments to Binance "to help keep your money safe." Barclays customers can still withdraw funds from the exchange, however. (Since Clear Junction cut Binance off, credit cards remain the only means for UK customers to get fiat off the exchange at this point.)

"Banks crack down on Binance even further. Just had this text from Barclays about banks blocking credit/debit card payments to Binance to 'keep money safe'. Turbulent times ahead. #binance #btc #bitcoin" — Thomas Davies (@ThomasDavies33), July 5, 2021 (tweet)

Two days later, Binance told its users that it will temporarily disable deposits via Single Euro Payments Area (SEPA) bank transfers — the most used wire method in the EU. Binance blamed the move on "events beyond our control" and indicated users could still make withdrawals via SEPA. On July 8, Santander, another high-street bank, told its customers it was also stopping payments to Binance.
“In recent months we have seen a large increase in UK customers becoming the victims of cryptocurrency fraud. Keeping our customers safe is a top priority, so we have decided to prevent payments to Binance following the FCA’s warning to consumers,” Santander UK’s support page tweeted on July 8.

As I detailed in my earlier story, regulators around the world have been putting out warnings about Binance. Poland doesn’t regulate crypto markets, but the Polish Financial Supervisory Authority also issued a caution about the exchange. Its notice included links to all the other regulatory responses. Amidst the firestorm, Binance has been whistling Dixie. On July 6, the exchange sent a letter to its customers, saying “compliance is a journey” and drawing odd parallels between developments in crypto and the introduction of the automobile. “When the car was first invented, there weren’t any traffic laws, traffic lights or even safety belts,” said Binance. “Laws and guidelines were developed along the way as the cars were running on the road.”

Do I look more regulated already? 😂 pic.twitter.com/AGTl4erl7H — CZ 🔶 Binance (@cz_binance) July 12, 2021

Frozen funds, lawsuits, and other red flags

There are a lot of unhappy people on r/BinanceUS right now complaining their withdrawals are frozen or suspended — and they can’t seem to get a response from customer support either. Binance.US is a subsidiary of Binance Holdings Ltd. Unlike its parent company, Binance.US does not allow highly leveraged crypto-derivatives trading, which is regulated in the US. A quick look at the subreddit’s weekly support thread reveals even more troubling posts about lost access to funds.

This mirrors Gizmodo’s recent findings. The media outlet submitted a Freedom of Information Act request with the Federal Trade Commission asking for any customer issues filed with the FTC about Binance. The agency located 760 complaints filed since June of 2020 — presumably mainly from Binance.US customers. In an article titled “32 Angry Complaints to the FTC About Binance,” Gizmodo uncovered some startling patterns. “The first, and arguably most alarming pattern, appears to be people who put large amounts of money into Binance but say they can’t get their money out.” Also, Binance is known for having “maintenance issues” during periods of heavy market volatility. As a result, margin traders, unable to exit their positions, are left to watch in horror while the exchange seizes their margin collateral and liquidates their holdings.

Hundreds of traders around the world are now working with a lawyer in France to recoup their losses. In a recent front-page piece, the Wall Street Journal said it suspected that the collective complaints may be the reason why Binance has received continuous warnings from many countries. If you still have funds on Binance, I would urge you to get them off the exchange now — while you still can.
When hordes of people start complaining about lost and frozen funds, it’s usually a sign of liquidity problems. We saw a similar pattern leading up to February 2014 when Tokyo Bitcoin exchange Mt Gox bit the dust. And also just before Canadian crypto exchange QuadrigaCX went belly up in early 2019. In both instances, users of those defunct exchanges are still waiting to recoup a portion of their lost funds. Bankruptcy cases take a long, long time, and you are lucky to get back pennies on the dollar.

Whatever is going on with Binance withdrawals, it does appear to be getting larger. The reason this is of interest (despite it being largely anecdotal) is that a very similar pattern preceded the Mt. Gox episode. Could be this is different, but… sure does feel like an echo… — Travis Kimmel (@coloradotravis) July 11, 2021

Finally, where is Catherine Coley?

In another bizarre development, folks on Twitter are wondering what happened to Catherine Coley, the previous CEO of Binance.US. She stepped down in May when Brian Brooks, the former Acting Comptroller of the Currency, took over. Nobody has heard from her since. Where did she disappear off to? Coley’s last tweet was on April 19. And both her LinkedIn profile and Twitter account indicate she is still the CEO of Binance.US.

Catherine Coley, heralded as “the lone female chief of a major crypto exchange in an industry dominated by men” by @forbes completely vanishes off the radar & *nobody* in crypto media wants to talk about it? Really? Okay. How about @TheStalwart, @FutureTenseNow or @andrewrsorkin? https://t.co/WCBQ193aZX — Grant Gulovsen (@gulovsen) July 9, 2021

She hasn’t been in any interviews or podcasts. She doesn’t respond to DMs, and there are no reports of anyone being able to contact her. A Forbes article from last year says that Binance.US may have been set up as a smokescreen — the “Tai Chi entity” — to divert US regulators from looking too closely at Binance, the parent company. Binance.US maintains that it is a separate entity. However, Forbes 40 under 40 reported that Coley was “chosen” by CZ, the CEO of Binance, which suggests that Binance is more involved with Binance.US than it claims. Has CZ told her to stop talking? What does she know? Catherine, if you are reading this, send us a message!

(Updated July 13 to clarify that Barclays still allows customers to withdraw funds via credit card and to note that Binance.US is the Tai Chi entity.)

If you like my work, please subscribe to my Patreon for as little as $5 a month. Your support keeps me going. Posted on July 13, 2021 by Amy Castor.
andromedayelton-com-9630 ---- andromeda yelton

I haven’t failed, I’ve tried an ML approach that *might* work!

When last we met I was turning a perfectly innocent neural net into a terribly ineffective one, in an attempt to get it to be better at face recognition in archival photos. I was also (what cultural heritage technology experience would be complete without this?) being foiled by metadata. So, uh, I stopped using metadata. 🤦‍♀️ With twinges of guilt. And full knowledge that I was tossing out a practically difficult but conceptually straightforward supervised learning problem for…what? Well. I realized that the work that initially inspired me to try my hand at face recognition in archival photos was not, in fact, a recognition problem but a similarity problem: could the Charles Teenie Harris collection find multiple instances of the same person? This doesn’t require me to identify people, per se; it just requires me to know if they are the same or different. And you know what? I can do a pretty good job of getting different people by randomly selecting two photos from my data set — they’re not guaranteed to be different, but I’ll settle for pretty good. And I can do an actually awesome job of guaranteeing that I have two photos of the same person with the ✨magic✨ of data augmentation.
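Roughly, that augmentation amounts to taking one face crop and perturbing it. Here is a minimal sketch using PIL, purely as an illustration: the function name and the transform ranges are my own, and the post's actual tooling (Keras's ImageDataGenerator) is described next.

import random
from PIL import Image, ImageEnhance, ImageOps

def augmented_pair(path):
    # Build a guaranteed "same person" pair: the original face crop plus a lightly perturbed copy.
    original = Image.open(path).convert('RGB')
    altered = original
    if random.random() < 0.5:
        altered = ImageOps.mirror(altered)  # horizontal flip
    altered = altered.rotate(random.uniform(-15, 15))  # small rotation, as a skewed scan might be
    altered = ImageEnhance.Brightness(altered).enhance(random.uniform(0.7, 1.3))  # exposure drift
    return original, altered

Pairs made by randomly selecting two different photos then serve as the "probably different people" examples.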
Keras (which, by the way, is about a trillionty times better than hand-coding stuff in Octave, for all I appreciate that Coursera made me understand the fundamentals by doing that) — Keras has an ImageDataGenerator class which makes it straightforward to alter images in a variety of ways, like horizontal flips, rotations, or brightness changes — all of which are completely plausible ways that archival photos of the same person might differ inter alia! So I can get two photos of the same person by taking one photo, and messing with it. And at this point I have a Siamese network with triplet loss, another concept that Coursera set me up with (via the deeplearning.ai sequence). And now we are getting somewhere! Well. We’re getting somewhere once you realize that, when you make a Siamese network architecture, you no longer have layers with the names of your base network; you have one GIANT layer which is just named VGGFace or whatever, instead of having all of its constituent layers, and so when you try to set layer.trainable = True whenever the layer name is in a list of names of VGGFace layers…uh…well…it turns out you just never encounter any layers by that name and therefore don’t set layers to be trainable and it turns out if you train a neural net which doesn’t have any trainable parameters it doesn’t learn much, who knew. But. Anyway. Once you, after embarrassingly long, get past that, and set layers in the base network to be trainable before you build the Siamese network from it… This turns out to work much better! I now have a network which does, in fact, have decreased loss and increased accuracy as it trains. I’m in a space where I can actually play with hyperparameters to figure out how to do this best. Yay! …ok, so, does it get me anywhere in practice? Well, to test that I think I’m actually going to need a corpus of labeled photos so that I can tell if given, say, one of WEB Du Bois, it thinks the most similar photos in the collection are also those of WEB Du Bois, which is to say… Alas, metadata. Andromeda Uncategorized Leave a comment May 23, 2021 I haven’t failed, I’ve just tried a lot of ML approaches that don’t work “Let’s blog every Friday,” I thought. “It’ll be great. People can see what I’m doing with ML, and it will be a useful practice for me!” And then I went through weeks on end of feeling like I had nothing to report because I was trying approach after approach to this one problem that simply didn’t work, hence not blogging. And finally realized: oh, the process is the thing to talk about… Hi. I’m Andromeda! I am trying to make a neural net better at recognizing people in archival photos. After running a series of experiments — enough for me to have written 3,804 words of notes — I now have a neural net that is ten times worse at its task. 🎉 And now I have 3,804 words of notes to turn into a blog post (a situation which gets harder every week). So let me catch you up on the outline of the problem: Download a whole bunch of archival photos and their metadata (thanks, DPLA!) Use a face detection ML library to locate faces, crop them out, and save them in a standardized way Benchmark an off-the-shelf face recognition system to see how good it is at identifying these faces Retrain it Benchmark my new system Step 3: profit, right? Well. Let me also catch you up on some problems along the way: Alas, metadata Archival photos are great because they have metadata, and metadata is like labels, and labels mean you can do supervised learning, right? Well…. Is he “Du Bois, W. E. 
B. (William Edward Burghardt), 1868-1963” or “Du Bois, W. E. B. (William Edward Burghardt) 1868-1963” or “Du Bois, W. E. B. (William Edward Burghardt)” or “W.E.B. Du Bois”? I mean, these are all options. People have used a lot of different metadata practices at different institutions and in different times. But I’m going to confuse the poor computer if I imply to it that all these photos of the same person are photos of different people. (I have gone through several attempts to resolve this computationally without needing to do everything by hand, with only modest success.) What about “Photographs”? That appears in the list of subject labels for lots of things in my data set. “Photographs” is a person, right? I ended up pulling in an entire other ML component here — spaCy, to do some natural language processing to at least guess which lines are probably names, so I can clear the rest of them out of my way. But spaCy only has ~90% accuracy on personal names anyway and, guess what, because everything is terrible, in predictable ways, it has no idea “Kweisi Mfume” is a person. Is a person who appears in the photo guaranteed to be a person who appears in the photo? Nope. Is a person who appears in the metadata guaranteed to be a person who appears in the photo? Also nope! Often they’re a photographer or other creator. Sometimes they are the subject of the depicted event, but not themselves in the photo. (spaCy will happily tell you that there’s personal name content in something like “Martin Luther King Day”, but MLK is unlikely to appear in a photo of an MLK day event.) Oh dear, linear algebra OK but let’s imagine for the sake of argument that we live in a perfect world where the metadata is exactly what we need — no more, no less — and its formatting is perfectly consistent. 🦄 Here you are, in this perfect world, confronted with a photo that contains two people and has two names. How do you like them apples? I spent more time than I care to admit trying to figure this out. Can I bootstrap from photos that have one person and one name — identify those, subtract them out of photos of two people, go from there? (Not reliably — there’s a lot of data I never reach that way — and it’s horribly inefficient.) Can I do something extremely clever with matrix multiplication? Like…once I generate vector space embeddings of all the photos, can I do some sort of like dot-product thing across all of my photos, or big batches of them, and correlate the closest-match photos with overlaps in metadata? Not only is this a process which begs the question — I’d have to do that with the ML system I have not yet optimized for archival photo recognition, thus possibly just baking bad data in — but have I mentioned I have taken exactly one linear algebra class, which I didn’t really grasp, in 1995? What if I train yet another ML system to do some kind of k-means clustering on the embeddings? This is both a promising approach and some really first-rate yak-shaving, combining all the question-begging concerns of the previous paragraph with all the crystalline clarity of black box ML. Possibly at this point it would have been faster to tag them all by hand, but that would be admitting defeat. Also I don’t have a research assistant, which, let’s be honest, is the person who would usually be doing this actual work. 
I do have a 14-year-old and I am strongly considering paying her to do it for me, but to facilitate that I’d have to actually build a web interface and probably learn more about AWS, and the prospect of reading AWS documentation has a bracing way of reminding me of all of the more delightful and engaging elements of my todo list, like calling some people on the actual telephone to sort out however they’ve screwed up some health insurance billing. Nowhere to go but up Despite all of that, I did actually get all the way through the 5 steps above. I have a truly, spectacularly terrible neural net. Go me! But at a thousand-plus words, perhaps I should leave that story for next week…. Andromeda Uncategorized 1 Comment April 16, 2021 this time: speaking about machine learning No tech blogging this week because most of my time was taken up with telling people about ML instead! One talk for an internal Harvard audience, “Alice in Dataland”, where I explained some of the basics of neural nets and walked people through the stories I found through visualizing HAMLET data. One talk for the NISO plus conference, “Discoverability in an AI World”, about ways libraries and other cultural heritage institutions are using AI both to enhance traditional discovery interfaces and provide new ones. This was recorded today but will be played at the conference on the 23rd, so there’s still time to register if you want to see it! NISO Plus will also include a session on AI, metadata, and bias featuring Dominique Luster, who gave one of my favorite code4lib talks, and one on AI and copyright featuring one of my go-to JD/MLSes, Nancy Sims. And I’m prepping for an upcoming talk that has not yet been formally announced. Which is to say, I guess, I have a lot of talks about AI and cultural heritage in my back pocket, if you were looking for someone to speak about that 😉 Andromeda Uncategorized Leave a comment February 12, 2021 archival face recognition for fun and nonprofit In 2019, Dominique Luster gave a super good Code4Lib talk about applying AI to metadata for the Charles “Teenie” Harris collection at the Carnegie Museum of Art — more than 70,000 photographs of Black life in Pittsburgh. They experimented with solutions to various metadata problems, but the one that’s stuck in my head since 2019 is the face recognition one. It sure would be cool if you could throw AI at your digitized archival photos to find all the instances of the same person, right? Or automatically label them, given that any of them are labeled correctly? Sadly, because we cannot have nice things, the data sets used for pretrained face recognition embeddings are things like lots of modern photos of celebrities, a corpus which wildly underrepresents 1) archival photos and 2) Black people. So the results of the face recognition process are not all that great. I have some extremely technical ideas for how to improve this — ideas which, weirdly, some computer science PhDs I’ve spoken with haven’t seen in the field. So I would like to experiment with them. But I must first invent the universe set up a data processing pipeline. Three steps here: Fetch archival photographs; Do face detection (draw bounding boxes around faces and crop them out for use in the next step); Do face recognition. For step 1, I’m using DPLA, which has a super straightforward and well-documented API and an easy-to-use Python wrapper (which, despite not having been updated in a while, works just fine with Python 3.6, the latest version compatible with some of my dependencies). 
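Purely to make step 1 concrete, here is a bare-bones sketch of pulling records straight from DPLA's public v2 items endpoint with requests rather than the wrapper. This is my own illustration, not code from the post: the function name is made up, and the parameter and field names (q, page_size, api_key, docs, object) are assumptions based on DPLA's API documentation.

import requests

DPLA_API_KEY = "your-api-key-here"  # assumed: you have requested a DPLA API key

def fetch_photo_records(query, rows=50):
    # Step 1: pull candidate photograph records from DPLA's items endpoint.
    response = requests.get(
        "https://api.dp.la/v2/items",
        params={"q": query, "page_size": rows, "api_key": DPLA_API_KEY},
    )
    response.raise_for_status()
    docs = response.json().get("docs", [])
    # Keep only records that expose a preview image URL (assumed to live in the "object" field).
    return [(doc.get("id"), doc.get("object")) for doc in docs if doc.get("object")]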
For step 2, I’m using mtcnn, because I’ve been following this tutorial. For step 3, face recognition, I’m using the steps in the same tutorial, but purely for proof-of-concept — the results are garbage because archival photos from mid-century don’t actually look anything like modern-day celebrities. (Neural net: “I have 6% confidence this is Stevie Wonder!” How nice for you.) Clearly I’m going to need to build my own corpus of people, which I have a plan for (i.e. I spent some quality time thinking about numpy) but haven’t yet implemented. So far the gotchas have been:

Gotcha 1: If you fetch a page from the API and assume you can treat its contents as an image, you will be sad. You have to treat them as a raw data stream and interpret that as an image, thusly:

import io
import requests
from PIL import Image

response = requests.get(url, stream=True)
response.raw.decode_content = True  # decode any gzip/deflate content encoding on the raw stream
data = response.raw.read()
image = Image.open(io.BytesIO(data))

This code is, of course, hilariously lacking in error handling, despite fetching content from a cesspool of untrustworthiness, aka the internet. It’s a first draft.

Gotcha 2: You see code snippets to convert images to pixel arrays (suitable for AI ingestion) that look kinda like this: np.array(image).astype('uint8'). Except they say astype('float32') instead of astype('uint8'). I got a creepy photonegative effect when I used floats.

Gotcha 3: Although PIL was happy to manipulate the .pngs fetched from the API, it was not happy to write them to disk; I needed to convert formats first (image.convert('RGB')).

Gotcha 4: The suggested keras_vggface library doesn’t have a Pipfile or requirements.txt, so I had to manually install keras and tensorflow. Luckily the setup.py documented the correct versions. Sadly the tensorflow version is only compatible with python up to 3.6 (hence the comment about DPyLA compatibility above). I don’t love this, but it got me up and running, and it seems like an easy enough part of the pipeline to rip out and replace if it’s bugging me too much.

The plan from here, not entirely in order, subject to change as I don’t entirely know what I’m doing until after I’ve done it:
- Build my own corpus of identified people. This means the numpy thoughts, above. It also means spending more quality time with the API to see if I can automatically apply names from photo metadata rather than having to spend too much of my own time manually labeling the corpus.
- Decide how much metadata I need to pull down in my data pipeline and how to store it.
- Figure out some kind of benchmark and measure it.
- Try out my idea for improving recognition accuracy.
- Benchmark again.
- Hopefully celebrate awesomeness.

Andromeda Uncategorized Leave a comment February 5, 2021

sequence models of language: slightly irksome

Not much AI blogging this week because I have been buried in adulting all week, which hasn’t left much time for machine learning. Sadface. However, I’m in the last week of the last deeplearning.ai course! (Well. Of the deeplearning.ai sequence that existed when I started, anyway. They’ve since added an NLP course and a GANs course, so I’ll have to think about whether I want to take those too, but at the moment I’m leaning toward a break from the formal structure in order to give myself more time for project-based learning.) This one is on sequence models (i.e.
“the data comes in as a stream, like music or language”) and machine translation (“what if we also want our output to be a stream, because we are going from a sentence to a sentence, and not from a sentence to a single output as in, say, sentiment analysis”). And I have to say, as a former language teacher, I’m slightly irked. Because the way the models work is — OK, consume your input sentence one token at a time, with some sort of memory that allows you to keep track of prior tokens in processing current ones (so far, so okay). And then for your output — spit out a few most-likely candidate tokens for the first output term, and then consider your options for the second term and pick your most-likely two-token pairs, and then consider all the ways your third term could combine with those pairs and pick your most likely three-token sequences, et cetera, continue until done. And that is…not how language works? Look at Cicero, presuming upon your patience as he cascades through clause after clause which hang together in parallel but are not resolved until finally, at the end, a verb. The sentence’s full range of meanings doesn’t collapse until that verb at the end, which means you cannot be certain if you move one token at a time; you need to reconsider the end in light of the beginning. But, at the same time, that ending token is not equally presaged by all former tokens. It is a verb, it has a subject, and when we reached that subject, likely near the beginning of the sentence, helpfully (in Latin) identified by the nominative case, we already knew something about the verb — a fact we retained all the way until the end. And on our way there, perhaps we tied off clause after clause, chunking them into neat little packages, but none of them nearly so relevant to the verb — perhaps in fact none of them really tied to the verb at all, because they’re illuminating some noun we met along the way. Pronouns, pointing at nouns. Adjectives, pointing at nouns. Nouns, suspended with verbs like a mobile, hanging above and below, subject and object. Adverbs, keeping company only with verbs and each other. There’s so much data in the sentence about which word informs which that the beam model casually discards. Wasteful. And forcing the model to reinvent all these things we already knew — to allocate some of its neural space to re-engineering things we could have told it from the beginning. Clearly I need to get my hands on more modern language models (a bizarre sentence since this class is all of 3 years old, but the field moves that fast). Andromeda Uncategorized 1 Comment January 15, 2021 Adapting Coursera’s neural style transfer code to localhost Last time, when making cats from the void, I promised that I’d discuss how I adapted the neural style transfer code from Coursera’s Convolutional Neural Networks course to run on localhost. Here you go! Step 1: First, of course, download (as python) the script. You’ll also need the nst_utils.py file, which you can access via File > Open. Step 2: While the Coursera file is in .py format, it’s iPython in its heart of hearts. So I opened a new file and started copying over the bits I actually needed, reading them as I went to be sure I understood how they all fit together. Along the way I also organized them into functions, to clarify where each responsibility happened and give it a name. The goal here was ultimately to get something I could run at the command line via python dpla_cats.py, so that I could find out where it blew up in step 3. 
Step 3: Time to install dependencies. I promptly made a pipenv and, in running the code and finding what ImportErrors showed up, discovered what I needed to have installed: scipy, pillow, imageio, tensorflow. Whatever available versions of the former three worked, but for tensorflow I pinned to the version used in Coursera — 1.2.1 — because there are major breaking API changes with the current (2.x) versions. This turned out to be a bummer, because tensorflow promptly threw warnings that it could be much faster on my system if I compiled it with various flags my computer supports. OK, so I looked up the docs for doing that, which said I needed bazel/bazelisk — but of course I needed a paleolithic version of that for tensorflow 1.2.1 compat, so it was irritating to install — and then running that failed because it needed a version of Java old enough that I didn’t have it, and at that point I gave up because I have better things to do than installing quasi-EOLed Java versions. Updating the code to be compatible with the latest tensorflow version and compiling an optimized version of that would clearly be the right answer, but also it would have been work and I wanted messed-up cat pictures now. (As for the rest of my dependencies, I ended up with scipy==1.5.4, pillow==8.0.1, and imageio==2.9.0, and then whatever sub-dependencies pipenv installed. Just in case the latest versions don’t work by the time you read this. 🙂 At this point I had achieved goal 1, aka “getting anything to run at all”. Step 4: I realized that, honestly, almost everything in nst_utils wanted to be an ImageUtility, which was initialized with metadata about the content and style files (height, width, channels, paths), and carried the globals (shudder) originally in nst_utils as class data. This meant that my new dpla_cats script only had to import ImageUtility rather than * (from X import * is, of course, deeply unnerving), and that utility could pingpong around knowing how to do the things it knew how to do, whenever I needed to interact with image-y functions (like creating a generated image or saving outputs) rather than neural-net-ish stuff. Everything in nst_utils that properly belonged in an ImageUtility got moved, step by step, into that class; I think one or two functions remained, and they got moved into the main script. Step 5: Ughhh, scope. The notebook plays fast and loose with scope; the raw python script is, rightly, not so forgiving. But that meant I had to think about what got defined at what level, what got passed around in an argument, what order things happened in, et cetera. I’m not happy with the result — there’s a lot of stuff that will fail with minor edits — but it works. Scope errors will announce themselves pretty loudly with exceptions; it’s just nice to know you’re going to run into them. Step 5a: You have to initialize the Adam optimizer before you run sess.run(tf.global_variables_initializer()). (Thanks, StackOverflow!) The error message if you don’t is maddeningly unhelpful. (FailedPreconditionError, I mean, what.) Step 6: argparse! I spent some quality time reading this neural style implementation early on and thought, gosh, that’s argparse-heavy. Then I found myself wanting to kick off a whole bunch of different script runs to do their thing overnight investigating multiple hypotheses and discovered how very much I wanted there to be command-line arguments, so I could configure all the different things I wanted to try right there and leave it alone. Aw yeah. 
I’ve ended up with the following:

parser.add_argument('--content', required=True)
parser.add_argument('--style', required=True)
parser.add_argument('--iterations', default=400)  # was 200
parser.add_argument('--learning_rate', default=3.0)  # was 2.0
parser.add_argument('--layer_weights', nargs=5, default=[0.2, 0.2, 0.2, 0.2, 0.2])
parser.add_argument('--run_until_steady', default=False)
parser.add_argument('--noisy_start', default=True)

content is the path to the content image; style is the path to the style image; iterations and learning_rate are the usual; layer_weights is the value of STYLE_LAYERS in the original code, i.e. how much to weight each layer; run_until_steady is a bad API because it means to ignore the value of the iterations parameter and instead run until there is no longer significant change in cost; and noisy_start is whether to use the content image plus static as the first input or just the plain content image. I can definitely see adding more command line flags if I were going to be spending a lot of time with this code. (For instance, a layer_names parameter that adjusted what STYLE_LAYERS considered could be fun! Or making “significant change in cost” be a user-supplied rather than hardcoded parameter!)

Step 6a: Correspondingly, I configured the output filenames to record some of the metadata used to create the image (content, style, layer_weights), to make it easier to keep track of which images came from which script runs.

Stuff I haven’t done but it might be great:
- Updating tensorflow, per above, and recompiling it. The slowness is acceptable — I can run quite a few trials on my 2015 MacBook overnight — but it would get frustrating if I were doing a lot of this.
- Supporting both num_iterations and run_until_steady means my iterator inside the model_nn function is kind of a mess right now. I think they’re itching to be two very thin subclasses of a superclass that knows all the things about neural net training, with the subclass just handling the iterator, but I didn’t spend a lot of time thinking about this.
- Reshaping input files. Right now it needs both input files to be the same dimensions. Maybe it would be cool if it didn’t need that.
- Trying different pretrained models! It would be easy to pass a different arg to load_vgg_model. It would subsequently be annoying to make sure that STYLE_LAYERS worked — the available layer names would be different, and load_vgg_model makes a lot of assumptions about how that model is shaped.

As your reward for reading this post, you get another cat image! A friend commented that a thing he dislikes about neural style transfer is that it’s allergic to whitespace; it wants to paint everything with a texture. This makes sense — it sees subtle variations within that whitespace and it tries to make them conform to patterns of variation it knows. This is why I ended up with the noisy_start flag; I wondered what would happen if I didn’t add the static to the initial image, so that the original negative space stayed more negative-spacey. This, as you can probably tell, uses the Harlem renaissance style image. It’s still allergic to negative space — even without the generated static there are variations in pixel color in the original — but they are much subtler, so instead of saying “maybe what I see is coiled hair?” it says “big open blue patches; we like those”.
But the semantics of the original image are more in place — the kittens more kitteny, the card more readable — even though the whole image has been pushed more to colorblocks and bold lines. I find I like the results better without the static — even though the cost function is larger, and thus in a sense the algorithm is less successful. Look, one more. Superhero! Andromeda Uncategorized Leave a comment January 3, 2021 Dear Internet, merry Christmas; my robot made you cats from the void Recently I learned how neural style transfer works. I wanted to be able to play with it more and gain some insights, so I adapted the Coursera notebook code to something that works on localhost (more on that in a later post), found myself a nice historical cat image via DPLA, and started mashing it up with all manner of images of varying styles culled from DPLA’s list of primary source sets. (It really helped me that these display images were already curated for looking cool, and cropped to uniform size!) These sweet babies do not know what is about to happen to them. Let’s get started, shall we? Style image from the Fake News in the 1890s: Yellow Journalism primary source set. I really love how this one turned out. It’s pulled the blue and yellow colors, and the concerned face of the lower kitten was a perfect match for the expression on the right-hand muckraker. The lines of the card have taken on the precise quality of those in the cartoon — strong outlines and textured interiors. “Merry Christmas” the bird waves, like an eager newsboy. Style image from the Food and Social Justice exhibit. This is one of the first ones I made, and I was delighted by how it learned the square-iness of its style image. Everything is more snapped to a grid. The colors are bolder, too, cueing off of that dominant yellow. The Christmas banner remains almost readable and somehow heraldic. Style image from the Truth, Justice, and the American Way primary source set. How about Christmas of Steel? These kittens have broadly retained their shape (perhaps as the figures in the comic book foreground have organic detail?), but the background holly is more polygon-esque. The colors have been nudged toward primary, and the static of the background has taken on a swirl of dynamic motion lines. Style image from the Visual Art During the Harlem Renaissance primary source set. How about starting with something boldly colored and almost abstract? Why look: the kittens have learned a world of black and white and blue, with the background transformed into that stippled texture it picked up from the hair. The holly has gone more colorblocky and the lines bolder. Style image from the Treaty of Versailles and the End of World War I primary source set. This one learned its style so aptly that I couldn’t actually tell where the boundary between the second and third images was when I was placing that equals sign. The soft pencil lines, the vertical textures of shadows and jail bars, the fact that all the colors in the world are black and white and orange (the latter mostly in the middle) — these kittens are positively melting before the force of Wilsonian propaganda. Imagine them in the Hall of Mirrors, drowning in gold and reflecting back at you dozens of times, for full nightmare effect. Style image from the Victorian Era primary source set. Shall we step back a few decades to something slightly more calming? These kittens have learned to take on soft lines and swathes of pale pink. 
The holly is perfectly happy to conform itself to the texture of these New England trees. The dark space behind the kittens wonders if, perhaps, it is meant to be lapels. I totally can’t remember how I found this cropped version of US food propaganda. And now for kittens from the void. Brown, it has learned. The world is brown. The space behind the kittens is brown. Those dark stripes were helpfully already brown. The eyes were brown. Perhaps they can be the same brown, a hole dropped through kitten-space. I thought this was honestly pretty creepy, and I wondered if rerunning the process with different layer weights might help. Each layer of the neural net notices different sorts of things about its image; it starts with simpler things (colors, straight lines), moves through compositions of those (textures, basic shapes), and builds its way up to entire features (faces). The style transfer algorithm looks at each of those layers and applies some of its knowledge to the generated image. So I thought, what if I change the weights? The initial algorithm weights each of five layers equally; I reran it weighted toward the middle layers and entirely ignoring the first layer, in hopes that it would learn a little less about gaping voids of brown. Same thing, less void. This worked! There’s still a lot of brown, but the kitten’s eye is at least separate from its facial markings. My daughter was also delighted by how both of these images want to be letters; there are lots of letter-ish shapes strewn throughout, particularly on the horizontal line that used to be the edge of a planter, between the lower cat and the demon holly. So there you go, internet; some Christmas cards from the nightmare realm. May 2021 bring fewer nightmares to us all. Andromeda Uncategorized 1 Comment December 24, 2020December 24, 2020 this week in my AI After visualizing a whole bunch of theses and learning about neural style transfer and flinging myself at t-SNE I feel like I should have something meaty this week but they can’t all be those weeks, I guess. Still, I’m trying to hold myself to Friday AI blogging, so here are some work notes: Finished course 4 of the deeplearning.ai sequence. Yay! The facial recognition assignment is kind of buggy and poorly documented and I felt creepy for learning it in the first place, but I’m glad to have finished. Only one more course to go! It’s a 3-week course, so if I’m particularly aggressive I might be able to get it all done by year’s end. Tried making a 3d version of last week’s visualization — several people had asked — but it turned out to not really add anything. Oh well. Been thinking about Charlie Harper’s talk at SWiB this year, Generating metadata subject labels with Doc2Vec and DBPedia. This talk really grabbed me because he started with the exact same questions and challenges as HAMLET — seriously, the first seven and a half minutes of this talk could be the first seven and a half minutes of a talk on HAMLET, essentially verbatim — but took it off in a totally different direction (assigning subject labels). I have lots of ideas about where one might go with this but right now they are all sparkling Voronoi diagrams in my head and that’s not a language I can readily communicate. All done with the second iteration of my AI for librarians course. There were some really good final projects this term. Yay, students! Andromeda Uncategorized 1 Comment December 18, 2020December 19, 2020 Though these be matrices, yet there is method in them. 
When I first trained a neural net on 43,331 theses to make HAMLET, one of the things I most wanted to do is be able to visualize them. If word2vec places documents ‘near’ each other in some kind of inferred conceptual space, we should be able to see some kind of map of them, yes? Even if I don’t actually know what I’m doing? Turns out: yes. And it’s even better than I’d imagined. 43,331 graduate theses, arranged by their conceptual similarity. Let me take you on a tour! Region 1 is biochemistry. The red dots are biology; the orange ones, chemistry. Theses here include Positional cloning and characterization of the mouse pudgy locus and Biosynthetic engineering for the assembly of better drugs. If you look closely, you will see a handful of dots in different colors, like a buttery yellow. This color is electrical engineering & computer science, and its dots in this region include Computational regulatory genomics : motifs, networks, and dynamics — that is to say, a computational biology thesis that happens to have been housed in computation rather than biology. The green south of Region 2 is physics. But you will note a bit of orange here. Yes, that’s chemistry again; for example, Dynamic nuclear polarization of amorphous and crystalline small molecules. If (like me), you almost majored in chemistry and realized only your senior year that the only chemistry classes that interested you were the ones that were secretly physics…this is your happy place. In fact, most of the theses here concern nuclear magnetic resonance applications. Region 3 has a striking vertical green stripe which turns out to be the nuclear engineering department. But you’ll see some orange streaks curling around it like fingers, almost suggesting three-dimensional depth. I point this out as a reminder that the original neural net embeds these 43,331 documents in a 52-dimensional space; I have projected that down to 2 dimensions because I don’t know about you but I find 52 dimensions somewhat challenging to visualize. However — just as objects may overlap in a 2-dimensional photo even when they are quite distant in 3-dimensional space — dots that are close together in this projection may be quite far apart in reality. Trust the overall structure more than each individual element. The map is not the territory. That little yellow thumb by Region 4 is mathematics, now a tiny appendage off of the giant discipline it spawned — our old friend buttery yellow, aka electrical engineering & computer science. If you zoom in enough you find EECS absolutely everywhere, applied to all manner of disciplines (as above with biology), but the bulk of it — including the quintessential parts, like compilers — is right here. Dramatically red Region 5, clustered together tightly and at the far end, is architecture. This is a renowned department (it graduated I.M. Pei!), but definitely a different sort of creature than most of MIT, so it makes sense that it’s at one extreme of the map. That said, the other two programs in its school — Urban Studies & Planning and Media Arts & Sciences — are just to its north. Region 6 — tiny, yellow, and pale; you may have missed it at first glance — is linguistics island, housing theses such as Topics in the stress and syntax of words. You see how there are also a handful of red dots on this island? They are Brain & Cognitive Science theses — and in particular, ones that are secretly linguistics, like Intonational phrasing in language production and comprehension. 
Similarly — although at MIT it is not the department of linguistics, but the department of linguistics & philosophy — the philosophy papers are elsewhere. (A few of the very most abstract ones are hanging out near math.) And what about Region 7, the stingray swimming vigorously away from everything else? I spent a long time looking at this and not seeing a pattern. You can tell there’s a lot of colors (departments) there, randomly assorted; even looking at individual titles I couldn’t see anything. Only when I looked at the original documents did I realize that this is the island of terrible OCR. Almost everything here is an older thesis, with low-quality printing or even typewriting, often in a regrettable font, maybe with the reverse side of the page showing through. (A randomly chosen example; pdf download.) A good reminder of the importance of high-quality digitization labor. A heartbreaking example of the things we throw away when we make paper the archival format for born-digital items. And also a technical inspiration — look how much vector space we’ve had to carve out to make room for these! the poor neural net, trying desperately to find signal in the noise, needing all this space to do it. I’m tempted to throw out the entire leftmost quarter of this graph, rerun the 2d projection, and see what I get — would we be better able to see the structures in the high-quality data if they had room to breathe? And were I to rerun the entire neural net training process again, I’d want to include some sort of threshhold score for OCR quality. It would be a shame to throw things away — especially since they will be a nonrandom sample, mostly older theses — but I have already had to throw away things I could not OCR at all in an earlier pass, and, again, I suspect the neural net would do a better job organizing the high-quality documents if it could use the whole vector space to spread them out, rather than needing some of it to encode the information “this is terrible OCR and must be kept away from its fellows”. Clearly I need to share the technical details of how I did this, but this post is already too long, so maybe next week. tl;dr I reached out to Matt Miller after reading his cool post on vectorizing the DPLA and he tipped me off to UMAP and here we are — thanks, Matt! And just as clearly you want to play with this too, right? Well, it’s super not ready to be integrated into HAMLET due to any number of usability issues but if you promise to forgive me those — have fun. You see how when you hover over a dot you get a label with the format 1721.1-X.txt? It corresponds to a URL of the format https://hamlet.andromedayelton.com/similar_to/X. Go play :). Andromeda Uncategorized 2 Comments December 11, 2020December 11, 2020 Of such stuff are (deep)dreams made: convolutional networks and neural style transfer Skipped FridAI blogging last week because of Thanksgiving, but let’s get back on it! Top-of-mind today are the firing of AI queen Timnit Gebru (letter of support here) and a couple of grant applications that I’m actually eligible for (this is rare for me! I typically need things for which I can apply in my individual capacity, so it’s always heartening when they exist — wish me luck). But for blogging today, I’m gonna talk about neural style transfer, because it’s cool as hell. I started my ML-learning journey on Coursera’s intro ML class and have been continuing with their deeplearning.ai sequence; I’m on course 4 of 5 there, so I’ve just gotten to neural style transfer. 
This is the thing where a neural net outputs the content of one picture in the style of another: Via https://medium.com/@build_it_for_fun/neural-style-transfer-with-swift-for-tensorflow-b8544105b854. OK, so! Let me explain while it’s still fresh. If you have a neural net trained on images, it turns out that each layer is responsible for recognizing different, and progressively more complicated, things. The specifics vary by neural net and data set, but you might find that the first layer gets excited about straight lines and colors; the second about curves and simple textures (like stripes) that can be readily composed from straight lines; the third about complex textures and simple objects (e.g. wheels, which are honestly just fancy circles); and so on, until the final layers recognize complex whole objects. You can interrogate this by feeding different images into the neural net and seeing which ones trigger the highest activation in different neurons. Below, each 3×3 grid represents the most exciting images for a particular neuron. You can see that in this network, there are Layer 1 neurons excited about colors (green, orange), and about lines of particular angles that form boundaries between dark and colored space. In Layer 2, these get built together like tiny image legos; now we have neurons excited about simple textures such as vertical stripes, concentric circles, and right angles. Via https://adeshpande3.github.io/The-9-Deep-Learning-Papers-You-Need-To-Know-About.html, originally from Zeller & Fergus, Visualizing and Understanding Convolutional Networks So how do we get from here to neural style transfer? We need to extract information about the content of one image, and the style of another, in order to make a third image that approximates both of them. As you already expect if you have done a little machine learning, that means that we need to write cost functions that mean “how close is this image to the desired content?” and “how close is this image to the desired style?” And then there’s a wrinkle that I haven’t fully understood, which is that we don’t actually evaluate these cost functions (necessarily) against the outputs of the neural net; we actually compare the activations of the neurons, as they react to different images — and not necessarily from the final layer! In fact, choice of layer is a hyperparameter we can vary (I super look forward to playing with this on the Coursera assignment and thereby getting some intuition). So how do we write those cost functions? The content one is straightforward: if two images have the same content, they should yield the same activations. The greater the differences, the greater the cost (specifically via a squared error function that, again, you may have guessed if you’ve done some machine learning). The style one is beautifully sneaky; it’s a measure of the difference in correlation between activations across channels. What does that mean in English? Well, let’s look at the van Gogh painting, above. If an edge detector is firing (a boundary between colors), then a swirliness detector is probably also firing, because all the lines are curves — that’s characteristic of van Gogh’s style in this painting. On the other hand, if a yellowness detector is firing, a blueness detector may or may not be (sometimes we have tight parallel yellow and blue lines, but sometimes yellow is in the middle of a large yellow region). Style transfer posits that artistic style lies in the correlations between different features. See? Sneaky. And elegant. 
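A common way to make "correlations between activations across channels" concrete is a Gram matrix of a layer's activations. Here is a minimal numpy sketch of that style cost, my own illustration rather than code from the course or the post, assuming the activations arrive as height x width x channels arrays:

import numpy as np

def gram_matrix(activations):
    # activations: (height, width, channels) output of one layer for one image
    h, w, c = activations.shape
    flat = activations.reshape(h * w, c)  # one row per spatial position
    return flat.T @ flat  # (channels, channels): how strongly each pair of channels co-fires

def style_cost(style_activations, generated_activations):
    # Squared difference between the two correlation structures, with the usual normalization.
    h, w, c = style_activations.shape
    gs = gram_matrix(style_activations)
    gg = gram_matrix(generated_activations)
    return np.sum((gs - gg) ** 2) / (4.0 * (h * w) ** 2 * c ** 2)

If the edge detector and the swirliness detector tend to fire together in the van Gogh, the corresponding entry of its Gram matrix is large, and the generated image gets nudged toward reproducing that co-firing.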
Finally, for the style-transferred output, you need to generate an image that does as well as possible on both cost functions simultaneously — getting as close to the content as it can without unduly sacrificing the style, and vice versa. As a side note, I think I now understand why DeepDream is fixated on a really rather alarming number of eyes. Since the layer choice is a hyperparameter, I hypothesize that choosing too deep a layer — one that’s started to find complex features rather than mere textures and shapes — will communicate to the system, yes, what I truly want is for you to paint this image as if those complex features are matters of genuine stylistic significance. And, of course, eyes are simple enough shapes to be recognized relatively early (not very different from concentric circles), yet ubiquitous in image data sets. So…this is what you wanted, right? the eager robot helpfully offers. https://www.ucreative.com/inspiration/google-deep-dream-is-the-trippiest-thing-in-the-internet/ I’m going to have fun figuring out what the right layer hyperparameter is for the Coursera assignment, but I’m going to have so much more fun figuring out the wrong ones. Andromeda Uncategorized 2 Comments December 4, 2020

api-flickr-com-8486 ---- Recent Uploads tagged code4lib IMG_9817 IMG_9861 IMG_9945 IMG_9946 IMG_9922 IMG_9924 IMG_9932 IMG_9941 IMG_9881 IMG_9866 IMG_9952 IMG_9877 IMG_9959 IMG_9882 IMG_9905 IMG_9823 IMG_9843 IMG_9895 IMG_9855 IMG_9845

apps-lib-umich-edu-4531 ---- Library Tech Talk Blog | U-M Library. Technology Innovations and Project Updates from the U-M Library I.T.
Digital Collections Completed July 2020 - June 2021
Digital Content & Collections (DCC) relies on content and subject experts to bring us new digital collections. From July 2018 to June 2019, our digital collections received 67.9 million views. During the pandemic, when there was an increased need for digital resources, usage of the digital collections jumped to 86.5 million views (July 2019 - June 2020) and 89 million views (July 2020 - June 2021). Thank you to the many people, too numerous to reasonably list here, who are involved not just in the... July 8, 2021. See all posts by Lauren Havens

Library IT Services Portfolio
Academic library service portfolios are mostly a mix of big to small strategic initiatives and tactical projects. Systems developed in the past can become a durable bedrock of workflows and services around the library, remaining relevant and needed for five, ten, and sometimes as long as twenty years. There is, of course, never enough time and resources to do everything. The challenge faced by Library IT divisions is to balance the tension of sustaining these legacy systems while continuing to... October 7, 2020. See all posts by Nabeela Jaffer

4 keys to a dazzling library website redesign
The U-M Library launched a completely new primary website in July after 2 years of work.
The redesign project team focused on building a strong team, internal communication, content strategy, and practicing needs-informed design and development to make the project a success. September 8, 2020. See all posts by Heidi Steiner Burkhardt

Sweet Sixteen: Digital Collections Completed July 2019 - June 2020
Digital Content & Collections (DCC) relies on content and subject experts to bring us new digital collections. This year, 16 digital collections were created or significantly enhanced. Here you will find links to videos and articles by the subject experts speaking in their own words about the digital collections they were involved in and why they found it so important to engage in this work with us. Thank you to all of the people involved in each of these digital collections! August 6, 2020. See all posts by Lauren Havens

Adding Ordered Metadata Fields to Samvera Hyrax
How to add ordered metadata fields in Samvera Hyrax. Includes example code and links to actual code. July 20, 2020. See all posts by Fritz Freiheit

Sinking our Teeth into Metadata Improvement
Like many attempts at revisiting older materials, working with a couple dozen volumes of dental pamphlets started very simply but ended up being an interesting opportunity to explore the challenges of making the diverse range of materials held in libraries accessible to patrons in a digital environment. And while improving metadata may not sound glamorous, having sufficient metadata for users to be able to find what they are looking for is essential for the utility of digital libraries. June 30, 2020. See all posts by Jackson Huang

Collaboration and Generosity Provide the Missing Issue of The American Jewess
What started with a bit of wondering and conversation within our unit of the Library led to my reaching out to Princeton University with a request but no expectations of having that request fulfilled. Individuals at Princeton, however, considered the request and agreed to provide us with the single issue of The American Jewess that we needed to complete the full run of the periodical within our digital collection. Especially in these stressful times, we are delighted to bring you a positive... June 15, 2020. See all posts by Lauren Havens

amycastor-com-7660 ---- The DOJ's criminal probe into Tether — What we know – Amy Castor
Early this morning, Bloomberg reported that Tether executives are under a criminal investigation by the US Department of Justice.

The DOJ doesn't normally discuss ongoing investigations with the media. However, three unnamed sources leaked the info to Bloomberg. The investigation is focused on Tether misleading banks about the true nature of its business, the sources said. The DOJ has been circling Tether and Bitfinex for years now. In November 2018, "three sources" — maybe even the same three sources — told Bloomberg the DOJ was looking into the companies for bitcoin price manipulation.

Tether responded to the latest bit of news in typical fashion — with a blog post accusing Bloomberg of spreading FUD and trying to "generate clicks." "This article follows a pattern of repackaging stale claims as 'news,'" Tether said. "The continued efforts to discredit Tether will not change our determination to remain leaders in the community." But nowhere in its post did Tether deny the claims.

I've read this several times and can't seem to find the, um, denial? pic.twitter.com/bkSadeaYL6 — Doomberg (@DoombergT) July 26, 2021

Last night, before the news broke, bitcoin was pumping like crazy. The price climbed nearly 17%, topping $40,000. On Coinbase, the price of BTC/USD went up $4,000 in three minutes, a bit after 01:00 UTC. After a user placed a large number of buy orders for bitcoin perpetual futures denominated in tethers (USDT) on Binance — an unregulated exchange struggling with its own banking issues — the BTC/USDT perpetual contract hit a high of $48,168 at around 01:00 UTC on the exchange.

98,000 Bitcoins traded in 15 minutes on Tether fraud exchange Binance, driving the price to 48,168 Tethers per Bitcoin. Something's not right in Tether Land. pic.twitter.com/6jm2ebrksN — Bitfinex'ed 🔥 (@Bitfinexed) July 26, 2021

Bitcoin pumps are a good way to get everyone to ignore the impact of bad news and focus on number go up. "Hey, this isn't so bad. Bitcoin is going up in price. I'm rich!"

So what is this DOJ investigation about? It is likely a follow-up to the New York attorney general's probe into Tether — and its sister company, crypto exchange Bitfinex — which started in 2018. Tether and Bitfinex, which operate under the same parent company iFinex, settled fraud charges with the NY AG for $18.5 million in February. They were also banned from doing any further business in New York. "Bitfinex and Tether recklessly and unlawfully covered-up massive financial losses to keep their scheme going and protect their bottom lines," the NY AG said.

The companies' woes started with a loss of banking more than a year before the NY AG initiated its probe.

Banking history

Tether and Bitfinex, both registered in the British Virgin Islands, were banking with four Taiwanese banks in 2017. Those banks used Wells Fargo as a correspondent bank to process US dollar wire transfers.
In other words, the companies would deposit money in their Taiwanese banks, and those banks would send money through Wells Fargo out to the rest of the world. However, in March 2017, Wells Fargo abruptly cut off the Taiwanese banks, refusing to process any more transfers from Tether and Bitfinex. About a month later — I would guess, after Wells Fargo told them they were on thin ice — the Taiwanese banks gave Tether and Bitfinex the boot.

Since then, Tether and Bitfinex have had to rely increasingly on shadow banks — such as Crypto Capital, a payment processor in Panama — to shuffle funds around the globe for them. They also started furiously printing tethers. In early 2017, there were only 10 million tethers in circulation. Today, there are 62 billion tethers in circulation, with a big question as to how much actual cash is behind those tethers.

Crypto Capital

Partnering with Crypto Capital turned out to be an epic fail for Bitfinex and Tether. The payment processor was operated by principals Ivan Manuel Molina Lee and Oz Yosef with the help of Arizona businessman Reggie Fowler and Israeli woman Ravid Yosef — Oz's sister, who was living in Los Angeles at the time. In April 2019, Fowler and Ravid Yosef were indicted in the US for allegedly lying to banks to set up accounts on behalf of Crypto Capital. Fowler is currently awaiting trial, and Ravid Yosef is still at large.

Starting in early 2018, the pair set up dozens of bank accounts as part of a shadow banking network for Crypto Capital. Some of those banks — Bank of America, Wells Fargo, HSBC, and JP Morgan Chase — were either based in the US or, in the case of HSBC, had branches in the US, and therefore fell under the DOJ's jurisdiction. In total, Fowler's bank accounts held some $371 million and were at the center of his failed plea negotiation in January 2020. Those accounts, along with more frozen Crypto Capital accounts in Poland, meant that Tether and Bitfinex had lost access to some $850 million in funds in 2018.

Things spiraled downhill from there. Molina Lee was arrested by Polish authorities in October 2019. He was accused of being part of an international drug cartel and laundering funds through Bitfinex. And Oz Yosef was indicted by US authorities around the same time on bank fraud charges.

Tether stops printing

At the beginning of 2020, there were only 4.5 billion tethers in circulation. All through the year and into the next, Tether kept issuing tethers at greater and greater rates. Then, at the end of May 2021, it stopped — and nobody is quite sure why. Pressure from authorities? A cease and desist order? Usually, cease and desist orders are made public. And it is hard to imagine that there would be an order that has been kept non-public since May. One could argue, you don't want to keep printing dubiously backed stablecoins when you're under a criminal investigation by the DOJ. But as I've explained in prior posts, other factors could also be at play.

For instance, since Binance, one of Tether's biggest customers, is having its own banking problems, it may be difficult for Binance users to wire funds to the exchange. And since Binance uses USDT in place of dollars, there's no need for it to acquire an additional stash of tethers at this time. Also, other stablecoins, like USDC and BUSD, have been stepping in to fill in the gap.
The DOJ and Tether

You can be sure that any info pulled up by the NY AG in its investigation of Tether and Bitfinex has been passed along to the DOJ and the Commodity Futures Trading Commission — which, by the way, subpoenaed Tether in late 2017. Coincidentally — or not — bitcoin saw a price pump at that time, too. It went from around $14,000 on Dec. 5, 2017, the day before the subpoena was issued, to nearly $18,000 on Dec. 6, 2017 — another attempt to show that the bad news barely had any impact on the bitcoin price.

Tether relies on confidence in the markets. As long as people believe that Tether is fully backed, or that Tether and Bitfinex probes won't impact the price of bitcoin, the game can continue. But if too many people start dumping bitcoin in a panic and rushing toward the fiat exits, the truth — that there isn't enough cash left in the system to support a tsunami of withdrawals — will be revealed, and that would be especially bad news for Tether execs.

Will Tether's operators be charged with criminal actions any time soon? And which execs is the DOJ even investigating? The original operators of Bitfinex and Tether — aka "the triad" — are Chief Strategy Officer Phil Potter, CEO Jan Ludovicus van der Velde, and CFO Giancarlo Devasini. Phil Potter supposedly pulled away from the operation in mid-2018. And nobody has heard from van der Velde or Devasini in a long, long time. Now, the two main spokespersons for the companies are General Counsel Stuart Hoegner and CTO Paolo Ardoino, who give lots of interviews defending Tether and accusing salty nocoiners like me of FUD.

Tracking down bad actors takes a lot of coordination. Recall that the DOJ had to work with authorities in 17 different countries to finally arrest the operators of Liberty Reserve, a Costa Rica-based centralized digital currency service that was used for money laundering. Similar to Liberty Reserve, Tether is a global operation, and all of the front persons associated with Tether — except for Potter, who lives in New York — currently reside outside of the US. It may still take a long while to completely shut down Tether and give it the Liberty Reserve treatment. But if the DOJ files criminal charges against Tether execs, that is at least a step in the right direction.

Read more: The curious case of Tether — a complete timeline. Nocoiner predictions: 2021 will be a year of comedy gold.

If you like my work, please subscribe to my Patreon for as little as $5 a month. Your support keeps me going.

Tags: Bloomberg, Department of Justice, DOJ, Liberty Reserve, Paolo Ardoino, Stuart Hoegner, tether, Wells Fargo. Posted on July 26, 2021 by Amy Castor in Blogging.
apps-lib-umich-edu-8729 ---- Library Tech Talk - U-M Library: Technology Innovations and Project Updates from the U-M Library I.T. Division

Digital Collections Completed July 2020 - June 2021
Digital Content & Collections (DCC) relies on content and subject experts to bring us new digital collections. From July 2018 to June 2019, our digital collections received 67.9 million views. During the pandemic, when there was an increased need for digital resources, usage of the digital collections jumped to 86.5 million views (July 2019 - June 2020) and 89 million views (July 2020 - June 2021). Thank you to the many people, too numerous to reasonably list here, who are involved not just in the creation of these digital collections but in the continued maintenance of these and hundreds of other digital collections that reach users around the world to advance research and provide access to materials.

Library IT Services Portfolio
Academic library service portfolios are mostly a mix of big to small strategic initiatives and tactical projects. Systems developed in the past can become a durable bedrock of workflows and services around the library, remaining relevant and needed for five, ten, and sometimes as long as twenty years. There is, of course, never enough time and resources to do everything. The challenge faced by Library IT divisions is to balance the tension of sustaining these legacy systems while continuing to innovate and develop new services. The University of Michigan's Library IT portfolio has legacy systems in need of ongoing maintenance and support, in addition to new projects and services that add to and expand the portfolio. We, at Michigan, worked on a process to balance the portfolio of services and projects for our Library IT division. We started working on the idea of developing a custom tool for our needs since all the other available tools are oriented towards corporate organizations and we needed a light-weight tool to support our process. We went through a complete planning process first on whiteboards and paper, then developed an open source tool called TRACC for helping us with portfolio management.

4 keys to a dazzling library website redesign
The U-M Library launched a completely new primary website in July after 2 years of work. The redesign project team focused on building a strong team, internal communication, content strategy, and practicing needs-informed design and development to make the project a success.

Sweet Sixteen: Digital Collections Completed July 2019 - June 2020
Digital Content & Collections (DCC) relies on content and subject experts to bring us new digital collections. This year, 16 digital collections were created or significantly enhanced. Here you will find links to videos and articles by the subject experts speaking in their own words about the digital collections they were involved in and why they found it so important to engage in this work with us. Thank you to all of the people involved in each of these digital collections!

Adding Ordered Metadata Fields to Samvera Hyrax
How to add ordered metadata fields in Samvera Hyrax. Includes example code and links to actual code.
Sinking our Teeth into Metadata Improvement
Like many attempts at revisiting older materials, working with a couple dozen volumes of dental pamphlets started very simply but ended up being an interesting opportunity to explore the challenges of making the diverse range of materials held in libraries accessible to patrons in a digital environment. And while improving metadata may not sound glamorous, having sufficient metadata for users to be able to find what they are looking for is essential for the utility of digital libraries.

Collaboration and Generosity Provide the Missing Issue of The American Jewess
What started with a bit of wondering and conversation within our unit of the Library led to my reaching out to Princeton University with a request but no expectations of having that request fulfilled. Individuals at Princeton, however, considered the request and agreed to provide us with the single issue of The American Jewess that we needed to complete the full run of the periodical within our digital collection. Especially in these stressful times, we are delighted to bring you a positive story, one of collaboration and generosity across institutions, while also sharing the now-complete digital collection itself.

How to stop being negative, or digitizing the Harry A. Franck film collection
This article reviews how 9,000+ frames of photographic negatives from the Harry A. Franck collection are being digitally preserved.

Combine Metadata Harvester: Aggregate ALL the data!
The Digital Public Library of America (DPLA) has collected and made searchable a vast quantity of metadata from digital collections all across the country. The Michigan Service Hub works with cultural heritage institutions throughout the state to collect their metadata, transform those metadata to be compatible with the DPLA's online library, and send the transformed metadata to the DPLA, using the Combine aggregator software, which is being developed here at the U of M Library.

Hacks with Friends 2020 Retrospective: A pitch to hitch in 2021
When the students go on winter break, I go to Hacks with Friends (HWF), and I highly recommend and encourage everyone who can to participate in HWF 2021. Not only is it two days of free breakfast, lunch, and snacks at the Ross School of Business, but it's a chance to work with a diverse cross section of faculty, staff, and students on innovative solutions to complex problems.

arstechnica-com-1589 ---- iOS zero-day let SolarWinds hackers compromise fully updated iPhones | Ars Technica

ZERO-DAY EXPLOSION — iOS zero-day let SolarWinds hackers compromise fully updated iPhones. Flaw was exploited when government officials clicked on links in LinkedIn messages.
Dan Goodin - Jul 14, 2021 8:04 pm UTC

The Russian state hackers who orchestrated the SolarWinds supply chain attack last year exploited an iOS zero-day as part of a separate malicious email campaign aimed at stealing Web authentication credentials from Western European governments, according to Google and Microsoft. In a post Google published on Wednesday, researchers Maddie Stone and Clement Lecigne said a "likely Russian government-backed actor" exploited the then-unknown vulnerability by sending messages to government officials over LinkedIn.

Moscow, Western Europe, and USAID

Attacks targeting CVE-2021-1879, as the zero-day is tracked, redirected users to domains that installed malicious payloads on fully updated iPhones. The attacks coincided with a campaign by the same hackers who delivered malware to Windows users, the researchers said. The campaign closely tracks to one Microsoft disclosed in May. In that instance, Microsoft said that Nobelium—the name the company uses to identify the hackers behind the SolarWinds supply chain attack—first managed to compromise an account belonging to USAID, a US government agency that administers civilian foreign aid and development assistance. With control of the agency's account for online marketing company Constant Contact, the hackers could send emails that appeared to use addresses known to belong to the US agency.

The federal government has attributed last year's supply chain attack to hackers working for Russia's Foreign Intelligence Service (abbreviated as SVR). For more than a decade, the SVR has conducted malware campaigns targeting governments, political think tanks, and other organizations in countries like Germany, Uzbekistan, South Korea, and the US. Targets have included the US State Department and the White House in 2014. Other names used to identify the group include APT29, the Dukes, and Cozy Bear.

In an email, Shane Huntley, the head of Google's Threat Analysis Group, confirmed the connection between the attacks involving USAID and the iOS zero-day, which resided in the WebKit browser engine. "These are two different campaigns, but based on our visibility, we consider the actors behind the WebKit 0-day and the USAID campaign to be the same group of actors," Huntley wrote. "It is important to note that everyone draws actor boundaries differently. In this particular case, we are aligned with the US and UK governments' assessment of APT 29."

Forget the sandbox

Throughout the campaign, Microsoft said, Nobelium experimented with multiple attack variations. In one wave, a Nobelium-controlled web server profiled devices that visited it to determine what OS and hardware the devices ran on. If the targeted device was an iPhone or iPad, a server used an exploit for CVE-2021-1879, which allowed hackers to deliver a universal cross-site scripting attack. Apple patched the zero-day in late March. In Wednesday's post, Stone and Lecigne wrote:

After several validation checks to ensure the device being exploited was a real device, the final payload would be served to exploit CVE-2021-1879.
This exploit would turn off Same-Origin-Policy protections in order to collect authentication cookies from several popular websites, including Google, Microsoft, LinkedIn, Facebook, and Yahoo and send them via WebSocket to an attacker-controlled IP. The victim would need to have a session open on these websites from Safari for cookies to be successfully exfiltrated. There was no sandbox escape or implant delivered via this exploit. The exploit targeted iOS versions 12.4 through 13.7. This type of attack, described by Amy Burnett in Forget the Sandbox Escape: Abusing Browsers from Code Execution, is mitigated in browsers with Site Isolation enabled, such as Chrome or Firefox.

It's raining zero-days

The iOS attacks are part of a recent explosion in the use of zero-days. In the first half of this year, Google's Project Zero vulnerability research group has recorded 33 zero-day exploits used in attacks—11 more than the total number from 2020. The growth has several causes, including better detection by defenders and better software defenses that require multiple exploits to break through. The other big driver is the increased supply of zero-days from private companies selling exploits.

"0-day capabilities used to be only the tools of select nation-states who had the technical expertise to find 0-day vulnerabilities, develop them into exploits, and then strategically operationalize their use," the Google researchers wrote. "In the mid-to-late 2010s, more private companies have joined the marketplace selling these 0-day capabilities. No longer do groups need to have the technical expertise; now they just need resources."

The iOS vulnerability was one of four in-the-wild zero-days Google detailed on Wednesday. The other three were CVE-2021-21166 and CVE-2021-30551 in Chrome, and CVE-2021-33742 in Internet Explorer. The four exploits were used in three different campaigns. Based on their analysis, the researchers assess that three of the exploits were developed by the same commercial surveillance company, which sold them to two different government-backed actors. The researchers didn't identify the surveillance company, the governments, or the specific three zero-days they were referring to.

Representatives from Apple didn't immediately respond to a request for comment.
arstechnica-com-3122 ---- Google Cloud offers a model for fixing Google's product-killing reputation | Ars Technica

Google's learning! — Google Cloud offers a model for fixing Google's product-killing reputation. GCP offers a stability promise that the rest of the company could learn from. Ron Amadeo - Jul 27, 2021 7:01 pm UTC

Image: Google Cloud Platform, no longer perpetually under construction?

Google's reputation for aggressively killing products and services is hurting the company's brand. Any new product launch from Google is no longer a reason for optimism; instead, the company is met with questions about when the product will be shut down. It's a problem entirely of Google's own making, and it's yet another barrier that discourages customers from investing (either time, money, or data) in the latest Google thing. The wide public skepticism of Google Stadia is a great example of the problem.

A Google division with similar issues is Google Cloud Platform, which asks companies and developers to build a product or service powered by Google's cloud infrastructure. Like the rest of Google, Cloud Platform has a reputation for instability, thanks to quickly deprecating APIs, which require any project hosted on Google's platform to be continuously updated to keep up with the latest changes. Google Cloud wants to address this issue, though, with a new "Enterprise API" designation.

Enterprise APIs basically get a roadmap that promises stability for certain APIs. Google says, "The burden is on us: Our working principle is that no feature may be removed (or changed in a way that is not backwards compatible) for as long as customers are actively using it. If a deprecation or breaking change is inevitable, then the burden is on us to make the migration as effortless as possible." If Google needs to change an API, customers will now get a minimum of one year's notice, along with tools, documentation, and other materials. Google goes on to say, "To make sure we follow these tenets, any change we introduce to an API is reviewed by a centralized board of product and engineering leads and follows a rigorous product lifecycle evaluation."

Despite being one of the world's largest Internet companies and basically defining what modern cloud infrastructure looks like, Google isn't doing very well in the cloud infrastructure market. Analyst firm Canalys puts Google in a distant third, with 7 percent market share, behind Microsoft Azure (19 percent) and market leader Amazon Web Services (32 percent).
Rumor has it (according to a report from The Information) that Google Cloud Platform is facing a 2023 deadline to beat AWS and Microsoft, or it will risk losing funding.

Ex-Googler Steve Yegge laid out the problems with Google Cloud Platform last year in a post titled "Dear Google Cloud: Your Deprecation Policy is Killing You." Google's announcement seems to hit most of what that post highlights, like a lack of documentation and support, an endless treadmill of API upgrades, and Google Cloud's general disregard for backward compatibility. Yegge argues that successful platforms like Windows, Java, and Android (a group Yegge says is isolated from the larger Google culture) owe much of their success to their commitment to platform stability. AWS is the market leader partly because it's considered a lot more stable than Google Cloud Platform.

Google Cloud gets it

Protocol reports that Google VP Kripa Krishnan was asked during the announcement if she is familiar with the "Killed By Google" website and Twitter account, both run by Cody Ogden. The report says Krishnan "couldn't help but laugh," and she said, "It was pretty apparent to us from many sources on the Internet that we were not doing well."

Google Cloud Platform's awareness of Google's reputation, its steps to limit disruption to customers, and its communication of which offerings are more stable than others have created a model for the rest of the company. Many Google products suffer from the specter of unceremonious shutdowns, and that's enough to force customers to seek alternatives. The primary fix to the problem is simply mitigation—i.e., stop shutting so many things down all the time. But a close second would be communication—just tell customers your plans for future support.

Google seems to have no problem offering a public roadmap for the software it ships on hardware devices. Pixel phones and Chromebooks both have public support statements for their software, showing a minimum date for which the devices can count on support. For instance, we know a Pixel 5 will continue to receive updates until at least October 2023. Google can't do anything to immediately solve its reputation for killing products and services, but communication can help relieve some of the hesitation users and companies increasingly feel when investing in a Google product. If the company doesn't plan on killing a product for a long time, it should say so! Google should tell users and company partners which products are stable and which ones are fly-by-night experiments.

Of course, for this idea to work, Google has to actually stick to any public commitments it makes so people can trust it will follow through. Recently, the company has not done this. It promised three years of support for Android Things, the Internet-of-things version of Android. Instead, Google ended OS updates after only one year. If the company really wants to fix its reputation for instability, it will need to prove itself to customers over time.
arstechnica-com-5034 ---- Google's constant product shutdowns are damaging its brand | Ars Technica

Please just stop closing things — Google's constant product shutdowns are damaging its brand. Google's product support has become a joke, and the company should be very concerned. Ron Amadeo - Apr 2, 2019 11:45 am UTC

Image: An artist's rendering of Google's current reputation. Aurich Lawson

It's only April, and 2019 has already been an absolutely brutal year for Google's product portfolio. The Chromecast Audio was discontinued January 11. YouTube annotations were removed and deleted January 15. Google Fiber packed up and left a Fiber city on February 8. Android Things dropped IoT support on February 13. Google's laptop and tablet division was reportedly slashed on March 12. Google Allo shut down on March 13. The "Spotlight Stories" VR studio closed its doors on March 14. The goo.gl URL shortener was cut off from new users on March 30. Gmail's IFTTT support stopped working March 31.

And today, April 2, we're having a Google Funeral double-header: both Google+ (for consumers) and Google Inbox are being laid to rest. Later this year, Google Hangouts "Classic" will start to wind down, and somehow also scheduled for 2019 is Google Music's "migration" to YouTube Music, with the Google service being put on death row sometime afterward.

We are 91 days into the year, and so far, Google is racking up an unprecedented body count. If we just take the official shutdown dates that have already occurred in 2019, a Google-branded product, feature, or service has died, on average, about every nine days. Some of these product shutdowns have transition plans, and some of them (like Google+) represent Google completely abandoning a user base. The specifics aren't crucial, though.
What matters is that every single one of these actions has a negative consequence for Google's brand, and the near-constant stream of shutdown announcements makes Google seem more unstable and untrustworthy than it has ever been. Yes, there was the one time Google killed Google Wave nine years ago or when it took Google Reader away six years ago, but things were never this bad. For a while there has been a subset of people concerned about Google's privacy and antitrust issues, but now Google is eroding trust that its existing customers have in the company. That's a huge problem. Google has significantly harmed its brand over the last few months, and I'm not even sure the company realizes it.

Google products require trust and investment

Image: The latest batch of dead and dying Google apps.

Google is a platform company. Be it cloud compute, app and extension ecosystems, developer APIs, advertising solutions, operating-system pre-installs, or the storage of user data, Google constantly asks for investment from consumers, developers, and partner companies in the things it builds. Any successful platform will pretty much require trust and buy-in from these groups. These groups need to feel the platform they invest in today will be there tomorrow, or they'll move on to something else. If any of these groups loses faith in Google, it could have disastrous effects for the company.

Consumers want to know the photos, videos, and emails they upload to Google will stick around. If you buy a Chromecast or Google Home, you need to know the servers and ecosystems they depend on will continue to work, so they don't turn into fancy paperweights tomorrow. If you take the time to move yourself, your friends, and your family to a new messaging service, you need to know it won't be shut down two years later. If you begrudgingly join a new social network that was forced down your throat, you need to know it won't leak your data everywhere, shut down, and delete all your posts a few years later.

There are also enterprise customers, who, above all, like safe bets with established companies. The old adage of "Nobody ever got fired for buying IBM" is partly a reference to the enterprise's desire for a stable, steady, reliable tech partner. Google is trying to tackle this same market with its paid G Suite program, but the most it can do in terms of stability is post a calendar detailing the rollercoaster of consumer-oriented changes coming down the pipeline. There's a slower "Scheduled release track" that delays the rollout of some features, but things like a complete revamp of Gmail eventually all still arrive. G Suite has a "Core Services" list meant to show confidence in certain products sticking around, but some of the entries there, like Hangouts and Google Talk, still get shut down.

Developers gamble on a platform's stability even more than consumers do. Consumers might trust a service with their data or spend money on hardware, but developers can spend months building an app for a platform. They need to read documentation, set up SDKs, figure out how APIs work, possibly pay developer startup fees, and maybe even learn a new language.
They won't do any of this if they don't have faith in the long-term stability of the platform. Developers can literally build their products around paid-access Google APIs like the Google Maps API, and when Google does things like raise the price of the Maps API by 14x for some use cases, it is incredibly disruptive for those businesses and harmful to Google's brand. When apps like Reddit clients are flagged by Google Play "every other month" for the crime of displaying user-generated content, and when it's impossible to talk to a human at Google about anything, developers are less likely to invest in your schizophrenic ecosystem.

Hardware manufacturers and other company partners need to be able to trust a company, too. Google constantly asks hardware developers to build devices dependent on its services. These are things like Google Assistant-compatible speakers and smart displays, devices with Chromecast built in, and Android and Chrome OS devices. Manufacturers need to know a certain product or feature they are planning to integrate will be around for years, since they need to both commit to a potentially multi-year planning and development cycle, and then it needs to survive long enough for customers to be supported for a few years. Watching Android Things chop off a major segment of its market nine months after launch would certainly make me nervous to develop anything based on Android Things. Imagine the risk Volvo is taking by integrating the new Android Auto OS into its upcoming Polestar 2: vehicles need around five years of development time and still need to be supported for several years after launch.

Google's shutdowns cast a shadow over the entire company

With so many shutdowns, tracking Google's body count has become a competitive industry on the Internet. Over on Wikipedia, the list of discontinued Google products and services is starting to approach the size of the active products and services listed. There are entire sites dedicated to discontinued Google products, like killedbygoogle.com, The Google Cemetery, and didgoogleshutdown.com.

I think we're seeing a lot of the consequences of Google's damaged brand in the recent Google Stadia launch. A game streaming platform from one of the world's largest Internet companies should be grounds for excitement, but instead, the baggage of the Google brand has people asking if they can trust the service to stay running. In addition to the endless memes and jokes you'll see in every related comments section, you're starting to see Google skepticism in mainstream reporting, too. Over at The Guardian, this line makes the pullquote: "A potentially sticky fact about Google is that the company does have a habit of losing interest in its less successful projects." IGN has a whole section of a report questioning "Google's Commitment." From a Digital Foundry video: "Google has this reputation for discontinuing services that are often good, out of nowhere." One of SlashGear's "Stadia questions that need answers" is "Can I trust you, Google?"

Image: Google's Phil Harrison talks about the new Google Stadia controller. Google

One of my favorite examples came from a Kotaku interview with Phil Harrison, the leader of Google Stadia. In an audio interview, the site lays this whopper of a question on him: "One of the sentiments we saw in our comments section a lot is that Google has a long history of starting projects and then abandoning them.
There's a worry, I think, from users who might think that Google Stadia is a cool platform, but if I'm connecting to this and spending money on this platform, how do I know for sure that Google is still sticking with it for two, three, five years? How can you guys make a commitment that Google will be sticking with this in a way that they haven't stuck with Google+, or Google Hangouts, or Google Fiber, Reader, or all the other things Google has abandoned over the years?"

Yikes. Kotaku is totally justified to ask a question like this, but to have one of your new executives face questions of "When will your new product shut down?" must be embarrassing for Google. Harrison's response to this question started with a surprisingly honest acknowledgement: "I understand the concern." Harrison, seemingly, gets it. He seemingly understands that it's hard to trust Google after so many product shutdowns, and he knows the Stadia team now faces an uphill battle. For the record, Harrison went on to cite Google's sizable investment in the project, saying Stadia was "Not a trivial product" and was a "significant cross-company effort." (Also for the record: you could say all the same things about Google+ a few years ago, when literally every Google employee was paid to work on it. Now it is dead.)

Harrison and the rest of the Stadia team had nothing to do with the closing of Google Inbox, or the shutdown of Hangouts, or the removal of any other popular Google product. They are still forced to deal with the consequences of being associated with "Google the Product Killer," though. If Stadia was an Amazon product, I don't think we would see these questions of when it would shut down. Microsoft's game streaming service, Project xCloud, only faces questions about feasibility and appeal, not if Microsoft will get bored in two years and dump the project.

arstechnica-com-954 ---- Wary of Bitcoin?
A guide to some other cryptocurrencies | Ars Technica

Biz & IT — Wary of Bitcoin? A guide to some other cryptocurrencies. You can fill your virtual pockets with Litecoin, PPCoin, or Freicoin. Ian Steadman, wired.co.uk - May 11, 2013 1:51 pm UTC

PPCoin (PPC)
Site: http://www.ppcoin.org/
Launch date: August 12, 2012
Number of PPCoins in circulation: Unknown
Eventual PPCoin total: No cap
Market cap: Unknown

Image: PPCoin logo concepts. Mjbmonetarymetals, Bitcoin Forum

Peer-to-Peer Coin, or PPCoin for short, presents itself as an improvement upon Bitcoin by changing one of the latter's fundamental ideas: proof-of-work.

In Bitcoin, as with all these coins, the supply of coins is stable and predetermined, and the rate at which they are generated decreases exponentially. The cost of mining has now risen such that people can't really use their home tablets, laptops, or desktops. Instead, they have to rely on application-specific integrated circuit (ASIC) mining—expensive, dedicated rigs that often cost thousands of dollars, running 24/7, just generating enough bitcoins to make the whole thing cost-effective.

Some worry that this could lead to a security issue in the future. Harder mining means fewer people bother to dedicate the effort and time, and fewer miners means that the overall network of nodes decreases. It's possible that the number could decline to such an extent that Bitcoin, as massive as it may become, could be open to a 51 percent attack on the blockchain. The determining factor in which blockchain becomes the "real" one and which is discarded comes down to a simple rule—whichever blockchain is accepted by the largest number of mining nodes becomes the canonical one. In a 51 percent attack, someone takes over enough nodes to effectively dictate that their own version of the blockchain is accepted over the legitimate one. If that happens, it becomes possible to counterfeit bitcoins or (even worse) to spend them more than once. It's a serious threat—lots of currencies have been taken down before they've even had a chance to stand on their own feet in this way.

PPCoin's solution to this is to slightly alter what the blockchain records. In Bitcoin, a "proof-of-work" is attached to each block as it's generated. It verifies the ownership of the block by the person who mined it, and future transactions use it as an identifying marker. In PPCoin, a further piece of information is included—"proof-of-stake." Think of it this way. If you've had a single coin in your wallet for one day, you could say that you have one coin-day in your wallet. One coin in your wallet for a week gives you seven coin-days; three coins in your wallet for 18 days gives 54 coin-days.
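To make the coin-day arithmetic concrete, here is a tiny illustrative sketch in Python. This is a toy example of mine, not PPCoin's actual code; the function name and inputs are hypothetical.

```python
def coin_days(coins: float, days_held: float) -> float:
    """Coin-days are just the number of coins multiplied by the days they sit unspent."""
    return coins * days_held

# The examples from the text:
print(coin_days(1, 7))    # one coin held for a week   -> 7.0 coin-days
print(coin_days(3, 18))   # three coins held 18 days   -> 54.0 coin-days
# Spending the coins resets their timestamp, consuming the accumulated
# coin-days as the "stake" in proof-of-stake.
```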
It's simply the number of coins multiplied by the time held, which is determined by the addition of a time stamp to the coin's information. It gives someone not just a proof-of-work, but a proof-of-stake. Beyond improving security—it's a lot harder to steal PPCoin than Bitcoin this way—it reduces the chance of a 51 percent attack by making the counterfeiting of coins extremely difficult. You have to gain 51 percent of all proofs-of-stake instead of mining power.

Another radical difference is that, unlike with Bitcoin, there is no final cap set on the number of PPCoins that will be generated. Instead, the combination of proof-of-work mining (as with Bitcoin) and proof-of-stake mining (which comes from using coins for transactions) gives the currency a steady growth in size that, its developers claim, equals roughly one percent per year. As proof-of-work mining becomes more difficult and the number of miners drops off, it's expected that proof-of-stake mining will become the dominant form of mining in PPCoin, increasing the supply of coin-days rather than coins that can be spent.

Currently, PPCoin has a centralized checking system in place to verify transactions, so it doesn't qualify as decentralized in the same way that Bitcoin does. This is, the PPCoin developers have said, only a temporary measure required until "the network matures." BTC-e and Cryptonit are two of the main exchanges that accept PPCoin. Its current value is around 0.002 BTC ($0.15) per PPCoin.

Freicoin (FRC)
Site: http://freico.in/
Launch date: December 17, 2012
Number of freicoins in circulation: Unknown
Eventual freicoin total: 100 million
Market cap: Unknown

Image: Freicoin is inspired by the work of economist Silvio Gesell. Public domain

Freicoin is an interesting alternative—with a distinctive philosophical framework—to other cryptocurrencies. It has a demurrage fee built into the system. "Demurrage" isn't something we usually associate with money. It usually means the cost of holding something for a long time, like the price for the storage of gold. In this context, though, it's a deliberate tax on savings. Think of it as inflation controlled through taxes on a stable money supply rather than an untaxed money supply that expands slowly but steadily, as we're perhaps used to with normal currencies.

Freicoin developer Mark Friedenbach told Wired.co.uk through e-mail what this means: "[Demurrage] can be thought of as causing freicoins to rot, reducing them in value by ~4.9 percent per year. Now to answer the question as to why anybody would want that, you have to look at the economy as a whole. Demurrage causes consumers and merchants both to spend or invest coins they don't immediately need, as quickly as possible, driving up GDP. Further, this effect is continuous with little seasonal adjustments, so one can expect business cycles to be smaller in magnitude and duration. With demurrage, one saves money by making safe investments rather than letting money sit under the mattress."

If you look at the problem with Bitcoin's bubble, it's easy to see why this kind of thing would be attractive for someone wanting a currency with a stable, predictable value. Many pundits have argued that Bitcoin will always have a deeply unstable price as the money supply is limited and grows slower every minute—if you're holding bitcoins, and you know that they'll be worth more in a week than right now, your incentive is to hold on to your money instead of spending it. Nobody buys anything, and the Bitcoin economy slows to a halt.
Demurrage compensates for this deflation—you would be a fool to store large sums of money in Freicoin, according to Friedenbach. "Demurrage eliminates what is called 'time-value preference'—the unsustainable nature of our culture to want things now rather than in the future, or at least spread out over time, such as the clear cutting of forests versus sustainable harvesting. Demurrage acts to lessen the desires of the present in order to meet the needs of the future as money is 'worth more' the longer you delay in receiving it. This leads to sustainable economic choices." He cites real-world examples of demurrage fees, such as the "Miracle of Wörgl." The idea of using demurrage deliberately, as a way to force the circulation of money and stimulate the economy, was first proposed by economist and anarchist Silvio Gesell. The mayor of the Austrian town of Wörgl issued demurrage-bearing scraps of paper known as "Freigeld" in 1932, during the Great Depression. The experiment led to a rise in employment and the local GDP until it was stopped by the Austrian central bank in 1933. Beyond demurrage, Freicoin works pretty much the same as the basic Bitcoin framework—new blocks roughly every 10 minutes, with the same difficulty and hashing algorithm. The final total of coins will be greater, however, at 100 million. Freicoin's developers are also pushing for new features for their cryptocurrency to mark it out as different from the others, Friedenbach said. "We have created the Freicoin Foundation which is a registered non-profit responsible for distributing 80 percent of the initial coins to charitable or mutually beneficial projects. We are making a variety of technical improvements to the Bitcoin protocol, which may eventually find their way upstream." "We are also working on new features that are probably too controversial to be worked into Bitcoin presently, such as the addition of 'Freicoin Assets,' a mechanism for issuing your own tokens for whatever purpose (stocks, bonds, lines of credit, etc.) and trading these tokens on a peer-to-peer exchange. We are also planning to extend Freicoin to include a variety of voting mechanisms in a proposal for distributed governance we are calling 'Republicoin.'" Freicoin's radical demurrage concept has marked it out for a lot of criticism, unsurprisingly. You can see on the developers' forum the number of discussions taking place about it (and at least one group has tried to take over Freicoin with their own new fork, removing the 80 percent Freicoin Foundation subsidy). After all, while Bitcoin might be unstable because of price speculation and deflation, that same increase in value is what drew attention, and therefore users, to Bitcoin first. The demurrage fee, taken from every transaction, is redistributed mainly to miners of new blocks. The Freicoin Foundation that Friedenbach mentioned is controversial because, for the first three years, 80 percent of the demurrage fees will be siphoned off to this central fund to be sent forward to other people or organizations in a bid to give the currency traction outside its small community. However, this central fund goes against what many regard as the key point of cryptocurrencies—nobody is in control and they are completely decentralized. Assurances that the Foundation will be "democratic" and open for any Freicoin user to join and vote on the use of funds may not reassure some people. Freicoin is currently traded on Vircurex and Bter. The price per freicoin is roughly 0.0006 BTC ($0.06).
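To put a number on the demurrage rate quoted above, here is a minimal sketch of how a balance erodes at roughly 4.9 percent per year (an illustration only; Freicoin itself applies the reduction continuously at the protocol level rather than in the yearly steps approximated here, and the names below are hypothetical):

```python
# Toy illustration of demurrage: a balance losing ~4.9 percent of its value per year.
ANNUAL_DEMURRAGE = 0.049  # the approximate rate cited by Friedenbach

def remaining_balance(initial: float, years: int) -> float:
    """Freicoins left after `years` of annual demurrage on an untouched balance."""
    return initial * (1.0 - ANNUAL_DEMURRAGE) ** years

for years in (1, 5, 10):
    print(f"100 FRC after {years:>2} year(s): {remaining_balance(100.0, years):.2f} FRC")
```

Left idle for a decade, 100 FRC would shrink to roughly 60 FRC, which is precisely the pressure toward spending or investing that Friedenbach describes.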
Others There have already been several cryptocurrencies that were born, lived stuttering lives, and died because they offered nothing substantial beyond Bitcoin. Their names—SolidCoin, BBQCoin, Fairbrix, and GeistGeld, to name some—are now footnotes on the Bitcoin wiki. Their networks are ghosts, with nodes that flicker on and off only intermittently. Several of them were taken down by 51 percent attacks, while others simply never enjoyed support from a large enough community. The balance between commodity speculation to drive price and merchants to drive transactions is a hard one to strike. Legitimate currencies that are still working, and which have fans and active support communities, include Namecoin, Terracoin, and Feathercoin. However, cryptocurrencies are fast-moving and unpredictable in the long term—it's hard to state with confidence that any one currency, including Bitcoin itself, will be here next year or even next week. Their instability and their unpredictability are part of the design. By all means, if you have the money, invest in something intangible like an algorithm in the hope that it will become a viable payment method. Just be aware that it's a risky way to try to make money—you might be better off sticking to the stock market. This story originally appeared on Wired UK. arxiv-org-120 ---- None arxiv-org-8640 ---- None archivesblogs-com-7914 ---- ArchivesBlogs | a syndicated collection of blogs by and for archivists Meet Ike Posted on September 18, 2020 from AOTUS "I come from the very heart of America." – Dwight Eisenhower, June 12, 1945 At a time when the world fought to overcome tyranny, he helped lead the course to victory as the Supreme Allied Commander in Europe. When our nation needed a leader, he upheld the torch of liberty as our 34th president. As a new memorial is unveiled, now is the time for us to meet Dwight David Eisenhower. Eisenhower Memorial statue and sculptures, photo by the Dwight D. Eisenhower Memorial Commission An opportunity to get to know this man can be found at the newly unveiled Eisenhower Memorial in Washington, DC, and in the all-new exhibits at the Eisenhower Presidential Library and Museum in Abilene, Kansas.
Each site in its own way tells the story of a humble man who grew up in small-town America and became the leader of the free world. The Eisenhower Presidential Library and Museum is a 22-acre campus which includes several buildings where visitors can interact with the life of this president. Starting with the Boyhood Home, guests discover the early years of Eisenhower as he avidly read history books, played sports, and learned lessons of faith and leadership. The library building houses the documents of his administration. With more than 26 million pages and 350,000 images, researchers can explore the career of a 40+-year public servant. The 25,000 square feet of all-new exhibits located in the museum building is where visitors get to meet Ike and Mamie again…for the first time. Using NARA's holdings, guests gain insight into the life and times of President Eisenhower. Finally, visitors can be reflective in the Place of Meditation where Eisenhower rests beside his first-born son, Doud, and his beloved wife Mamie. A true encapsulation of his life. Eisenhower Presidential Library and Museum, Abilene, Kansas The updated gallery spaces were opened in 2019. The exhibition includes many historic objects from our holdings which highlight Eisenhower's career through the military years and into the White House. Showcased items include Ike's West Point letterman's sweater, the D-Day Planning Table, Soviet lunasphere, and letters related to the Crisis at Little Rock. Several new films and interactives have been added throughout the exhibit, including a D-Day film using newly digitized footage from the archives. Eisenhower Presidential Library and Museum, Abilene, Kansas In addition to facts and quotes, visitors will leave with an understanding of how his experiences made Ike the perfect candidate for Supreme Allied Commander of the Allied Expeditionary Force in Europe and the 34th President of the United States. The Eisenhower Memorial, which opened to the public on September 18, is located at an important historical corridor in Washington, DC. The 4-acre urban memorial park is surrounded by four buildings housing institutions that were formed during the Eisenhower Administration and was designed by award-winning architect Frank Gehry. In 2011, the National Archives hosted Frank Gehry and his collaborator, theater artist Robert Wilson, in a discussion about the creation of the Eisenhower National Memorial. As part of the creative process, Gehry's team visited the Eisenhower Presidential Library and drew inspiration from the campus. They also used the holdings of the Eisenhower Presidential Library to form the plans for the memorial itself. This also led to the development of online educational programs which will have a continued life through the Eisenhower Foundation. Visitors to both sites will learn lasting lessons from President Eisenhower's life of public service. Eisenhower Memorial, photo by the Dwight D. Eisenhower Memorial Commission Link to Post | Language: English The First Post 9/11 Phone-In: Richard Hake Sitting-in For Brian Lehrer Posted on September 16, 2020 from NYPR Archives & Preservation On September 18, 2001, the late Richard Hake sat in for Brian Lehrer at Columbia University's new studios at WKCR. Just one week after the attack on the World Trade Center, WNYC was broadcasting on FM at reduced power from the Empire State Building and over WNYE (91.5 FM).
Richard spoke with New York Times columnist Paul Krugman on airport security, author James Fallows on the airline industry, Robert Roach Jr. of the International Association of Machinists, and security expert and former New York City Police Commissioner William Bratton as well as WNYC listeners. Link to Post | Language: English Capturing Virtual FSU Posted on September 16, 2020 from Illuminations When the world of FSU changed in March 2020, the website for FSU was used as one of the primary communication tools to let students, faculty, and staff know what was going on. New webpages created specifically to share information and news popped up all over fsu.edu and we had no idea how long those pages would exist (ah, the hopeful days of March) so Heritage & University Archives wanted to be sure to capture those pages quickly and often as they changed and morphed into new online resources for the FSU community. Screenshot of a capture of the main FSU News feed regarding coronavirus. Captured March 13, 2020. While FSU has had an Archive-It account for a while, we hadn’t fully implemented its use yet. Archive-It is a web archiving service that captures and preserves content on websites as well as allowing us to provide metadata and a public interface to viewing the collected webpages. COVID-19 fast-tracked me on figuring out Archive-It and how we could best use it to capture these unique webpages documenting FSU’s response to the pandemic. I worked to configure crawls of websites to capture the data we needed, set up a schedule that would be sufficient to capture changes but also not overwhelm our data allowance, and describe the sites being captured. It took me a few tries but we’ve successfully been capturing a set of COVID related FSU URLs since March. One of the challenges of this work was some of the webpages had functionality that the web crawling just wouldn’t capture. This was due to some interactive widgets on pages or potentially some CSS choices the crawler didn’t like. I decided the content was the most important thing to capture in this case, more so than making sure the webpage looked exactly like the original. A good example of this is the International Programs Alerts page. We’re capturing this to track information about our study abroad programs but what Archive-It displays is quite different from the current site in terms of design. The content is all there though. On the left is how Archive-It displays a capture of the International Programs Alerts page. On the right is how the site actually looks. While the content is the same, the formatting and design is not As the pandemic dragged on and it became clear that Fall 2020 would be a unique semester, I added the online orientation site and the Fall 2020 site to my collection line-up. The Fall 2020 page, once used to track the re-opening plan recently morphed into the Stay Healthy FSU site where the community can look for current information and resources but also see the original re-opening document. We’ll continue crawling and archiving these pages in our FSU Coronavirus Archive for future researchers until they are retired and the university community returns to “normal” operations – whatever that might look like when we get there! Link to Post | Language: English Welcome to the New ClintonLibrary.Gov! Posted on September 14, 2020 from AOTUS The National Archives’ Presidential Libraries and Museums preserve and provide access to the records of 14 presidential administrations. 
In support of this mission, we developed an ongoing program to modernize the technologies and designs that support the user experience of our Presidential Library websites. Through this program, we have updated the websites of the Hoover, Truman, Eisenhower and Nixon Presidential Libraries.  Recently we launched an updated website for the William J. Clinton Presidential Library & Museum. The website, which received more than 227,000 visitors over the past year, now improves access to the Clinton Presidential Library holdings by providing better performance, improving accessibility, and delivering a mobile-friendly experience. The updated website’s platform and design, based in the Drupal web content management framework, enables the Clinton Presidential Library staff to make increasing amounts of resources available online—especially while working remotely during the COVID-19 crisis. To achieve this website redesign, staff from the National Archives’ Office of Innovation, with both web development and user experience expertise, collaborated with staff from the Clinton Presidential Library to define goals for the new website. Our user experience team first launched the project by interviewing staff of the Clinton Presidential Library to determine the necessary improvements for the updated website to facilitate their work. Next, the user experience team researched the Library’s customers—researchers, students, educators, and the general public—by analyzing user analytics, heatmaps, recordings of real users navigating the site, and top search referrals. Based on the data collected, the user experience team produced wireframes and moodboards that informed the final site design. The team also refined the website’s information architecture to improve the user experience and meet the Clinton Library staff’s needs.  Throughout the project, the team used Agile project management development processes to deliver iterative changes focused on constant improvement. To be Agile, specific goals were outlined, defined, and distributed among team members for mutual agreement. Work on website designs and features was broken into development “sprints”—two-week periods to complete defined amounts of work. At the end of each development sprint, the resulting designs and features were demonstrated to the Clinton Presidential Library staff stakeholders for feedback which helped further refine the website. The project to update the Clinton Presidential Library and Museum website was guided by the National Archives’ strategic goals—to Make Access Happen, Connect with Customers, Maximize NARA’s Value to the Nation, and Build our Future Through our People. By understanding the needs of the Clinton Library’s online users and staff, and leveraging the in-house expertise of our web development and user experience staff, the National Archives is providing an improved website experience for all visitors. Please visit the site, and let us know what you think! Link to Post | Language: English The Road to Edinburgh (Part 2) Posted on September 11, 2020 from Culture on Campus “Inevitably, official thoughts early turned to the time when Scotland would be granted the honour of acting as hosts. 
Thought was soon turned into action and resulted in Scotland pursuing the opportunity to be host to the Games more relentlessly than any other country has." From foreword to The Official History of the IXth Commonwealth Games (1970) In our last blog post we left the campaigners working to bring the Commonwealth Games to Edinburgh reflecting on the loss of the 1966 Games to Kingston, Jamaica. The original plan of action sketched out by Willie Carmichael in 1957 had factored in a renewed campaign for 1970 if the initial approach to host the 1966 Games proved unsuccessful. The choice of host city for the Games was made at the biennial General Assemblies of the Commonwealth Games Federation. The campaign to choose the host for 1970 began at a meeting held in Tokyo in 1964 (to coincide with the Olympics), with the final vote taking place at the 1966 Kingston Games. In 1964 the Edinburgh campaign presented a document to the Federation restating its desire to be host city for the Games in 1970. Entitled 'Scotland Invites', it laid out Scotland's case: "We are founder members of the Federation; we have taken part in each Games since the inception in 1930; and we are the only one of six countries who have taken part in every Games, who have not yet had the honour of celebrating the Games." From Scotland Invites, British Empire and Commonwealth Games Council for Scotland (1964) Documents supporting Edinburgh's bid to host the 1970 Commonwealth Games presented to meetings of the General Assembly of the Commonwealth Games Federation at Tokyo in 1964 and Kingston in 1966 (ref. WC/2/9/2) Edinburgh faced a rival bid from Christchurch, New Zealand, with the competition between the two cities recorded in a series of press cutting files collected by Willie Carmichael. Reports in the Scottish press presented Edinburgh as the favourites for 1970, with Christchurch using their bid as a rehearsal for a more serious campaign to host the 1974 competition. However, the New Zealanders rejected this assessment, arguing that it was the turn of a country in the Southern Hemisphere to host the Games. The 1966 Games brought the final frantic round of lobbying and promotion for the rival bids as members of the Commonwealth Games Federation gathered in Kingston. The British Empire and Commonwealth Games Council for Scotland presented a bid document entitled 'Scotland 1970' which included detailed information on the venues and facilities to be provided for the competition along with a broader description of the city of Edinburgh. Artist's impression of the new Meadowbank athletics stadium, Edinburgh (ref. WC/2/9/2/12) At the General Assembly of the Commonwealth Games Federation held in Kingston, Jamaica, on 7 August 1966, the vote took place to decide the host of the 1970 Games. Edinburgh was chosen as host city by 18 votes to 11. The Edinburgh campaign team kept a souvenir of this important event. At the end of the meeting they collected together the evidence of their success and put it in an envelope marked 'Ballot Cards – which recorded votes for Scotland at Kingston 1966.' The voting cards and envelope now sit in an administrative file which forms part of the Commonwealth Games Scotland Archive. Voting card recording vote for Scotland to host the 1970 Commonwealth Games (ref.
CG/2/9/1/2/7) Link to Post | Language: English New Ancient Texts Research Guide Posted on September 10, 2020 from Illuminations “What are the oldest books you have?” is a common question posed to Special Collections & Archives staff at Strozier Library. In fact, the oldest materials in the collection are not books at all but cuneiform tablets ranging in date from 2350 to 1788 BCE (4370-3808 years old). These cuneiform tablets, along with papyrus fragments and ostraka comprise the ancient texts collection in Special Collections & Archives. In an effort to enhance remote research opportunities for students to engage with the oldest materials housed in Strozier Library, a research guide to Ancient Texts at FSU Libraries has been created by Special Collections & Archives staff. Ancient Texts Research Guide The Ancient Texts at FSU Libraries research guide provides links to finding aids with collections information, high-resolution photos of the objects in the digital library, and links to articles or books about the collections. Research guides can be accessed through the tile, “Research Guides,” on the library’s main page. Special Collections & Archives currently has 11 research guides published that share information and resources on specific collections or subjects that can be accessed remotely. While direct access to physical collections is unavailable at this time due to Covid-19, we hope to resume in-person research when it is safe to do so, and Special Collections & Archives is still available to assist you remotely with research and instruction. Please get in touch with us via email at: lib-specialcollections@fsu.edu. For a full list of our remote services, please visit our services page. Link to Post | Language: English SSCI Members Embrace Need for Declassification Reform, Discuss PIDB Recommendations at Senate Hearing Posted on September 10, 2020 from Transforming Classification The Board would like to thank Acting Chairman Marco Rubio (R-FL), Vice Chairman Mark Warner (D-VA), and members of the Senate Select Committee on Intelligence (SSCI) for their invitation to testify yesterday (September 9, 2020) at the open hearing on “Declassification Policy and Prospects for Reform.”    At the hearing, PIDB Member John Tierney responded to questions from committee members about recommendations in the PIDB’s May 2020 Report to the President. He stressed the need for modernizing information security systems and the critical importance of sustained leadership through a senior-level Executive Agent (EA) to oversee and implement meaningful reform. In addition to Congressman Tierney, Greg Koch, the Acting Director of Information Management in the Office of the Director of National Intelligence (ODNI), testified in response to the SSCI’s concerns about the urgent need to improve how the Executive Branch classifies and declassifies national security information. Much of the discussion focused on the PIDB recommendation that the President designate the ODNI as the EA to coordinate the application of information technology, including artificial intelligence and machine learning, to modernize classification and declassification across the Executive Branch. Senator Jerry Moran (R-KS), and Senator Ron Wyden (D-OR), who is a member of the SSCI, joined the hearing to discuss the bill they are cosponsoring to modernize declassification. 
Their proposed “Declassification Reform Act of 2020” aligns with the PIDB Report recommendations, including the recommendation to designate the ODNI as the EA for coordinating the required reforms. The Board would like to thank Senators Moran and Wyden for their continued support and attention to this crucial issue. Modernizing the classification and declassification system is important for our 21st century national security and it is important for transparency and our democracy. Video of the entire hearing is available to view at the SSCI’s website, and from C-SPAN.  The transcript of prepared testimony submitted to the SSCI by Mr. Tierney is posted on the PIDB website. Link to Post | Language: English Be Connected, Keep A Stir Diary Posted on September 9, 2020 from Culture on Campus The new semester approaches and it’s going to be a bit different from what we’re used to here at the University of Stirling. To help you with your mental health and wellbeing this semester, we’ve teamed up with the Chaplaincy to provide students new and returning with a diary where you can keep your thoughts and feelings, process your new environment, record your joys and capture what the University was like for you in this unprecedented time. Diaries will be stationed at the Welcome Lounges from 12th September and we encourage students to take one for their personal use. Please be considerate of others and only take one diary each. Inside each diary is a QR code which will take you to our project page where you can learn more about the project and where we will be creating an online resource for you to explore the amazing diaries that we keep in Archives and Special Collections. We will be updating this page throughout semester with information from the Archives and events for you to join. Keep an eye out for #StirDiary on social media for all the updates! At the end of semester, you are able to donate your diary to the Archive where it will sit with the University’s institutional records and form a truthful and creative account of what student life was like in 2020. You absolutely don’t have to donate your diary if you don’t want to, the diary belongs to you and you can keep it, throw it away, donate it or anything else (wreck it?) as you like. If you would like to take part in the project but you have missed the Welcome Lounges, don’t worry! Contact Rosie on archives@stir.ac.uk or Janet on janet.foggie1@stir.ac.uk Welcome to the University of Stirling – pick a colour! Link to Post | Language: English PIDB Member John Tierney to Support Modernizing Classification and Declassification before the Senate Select Committee on Intelligence, Tomorrow at 3:00 p.m., Live on C-SPAN Posted on September 8, 2020 from Transforming Classification PIDB member John Tierney will testify at an open hearing on declassification policy and the prospects for reform, to be held by the Senate Select Committee on Intelligence (SSCI) tomorrow, Wednesday, September 9, 2020, from 3:00-4:30 p.m. EST. The hearing will be shown on the SSCI’s website, and televised live on C-SPAN.  SSCI members Senators Ron Wyden (D-OR) and Jerry Moran (R-KS) have cosponsored the proposed “Declassification Reform Act of 2020,” which aligns with recommendations of the PIDB’s latest report to the President, A Vision for the Digital Age: Modernization of the U.S, National Security Classification and Declassification System (May 2020). 
In an Opinion-Editorial appearing today on the website Just Security, Senators Wyden and Moran present their case for legislative reform to address the challenges of outmoded systems for classification and declassification. At the hearing tomorrow, Mr. Tierney will discuss how the PIDB recommendations present a vision for a uniform, integrated, and modernized security classification system that appropriately defends national security interests, instills confidence in the American people, and maintains sustainability in the digital environment. Mr. Greg Koch, Acting Director of the Information Management Office for the Office of the Director of National Intelligence, will also testify at the hearing. The PIDB welcomes the opportunity to speak before the SSCI and looks forward to discussing the need for reform with the Senators. After the hearing, the PIDB will post a copy of Mr. Tierney’s prepared testimony on its website and on this blog. Link to Post | Language: English Wiki loves monuments – digital skills and exploring stirling Posted on September 8, 2020 from Culture on Campus Every year the Wikimedia Foundation runs Wiki Loves Monuments – the world’s largest photo competition. Throughout September there is a push to take good quality images of listed buildings and monuments and add them to Wiki Commons where they will be openly licensed and available for use across the world – they may end up featuring on Wikipedia pages, on Google, in research and presentations worldwide and will be entered into the UK competition where there are prizes to be had! Below you’ll see a map covered in red and blue pins. These represent all of the listed buildings and monuments that are covered by the Wiki Loves Monuments competition, blue pins are places that already have a photograph and red pins have no photograph at all. The aim of the campaign is to turn as many red pins blue as possible, greatly enhancing the amazing bank of open knowledge across the Wikimedia platforms. The University of Stirling sits within the black circle. The two big clusters of red pins on the map are Stirling and Bridge of Allan – right on your doorstep! We encourage you to explore your local area. Knowing your surroundings, finding hidden gems and learning about the history of the area will all help Stirling feel like home to you, whether you’re a first year or returning student. Look at all those red dots! Of course, this year we must be cautious and safe while taking part in this campaign and you should follow social distancing rules and all government coronavirus guidelines, such as wearing facemasks where appropriate, while you are out taking photographs. We encourage you to walk to locations you wish to photograph, or use the NextBikes which are situated on campus and in Stirling rather than take excessive public transport purely for the purposes of this project. Walking and cycling will help you to get a better sense of where everything is in relation to where you live and keeping active is beneficial to your mental health and wellbeing. Here are your NextBike points on campus where you can pick up a bike to use We hope you’ll join us for this campaign – we have a session planned for 4-5pm on Thursday 17th September on Teams where we’ll tell you more about Wiki Loves Monuments and show you how to upload your images. Sign up to the session on Eventbrite. If you cannot make our own University of Stirling session then Wikimedia UK have their own training session on the 21st September which you can join. 
Please note that if you want your photographs to be considered for the competition prizes then they must be submitted before midnight on the 30th September. Photographs in general can be added at any time so you can carry on exploring for as long as you like! Finally, just to add a little incentive, this year we’re having a friendly competition between the University of Stirling and the University of St Andrews students to see who can make the most edits so come along to a training session, pick up some brilliant digital skills and let’s paint the town green! Link to Post | Language: English What’s the Tea? Posted on September 4, 2020 from Illuminations Katie McCormick, Associate Dean (she/her/hers) For this post, I interviewed Kate McCormick in order to get a better understanding of the dynamics of Special Collections & Archives. Katie is one of the Associate Deans and has been with SCA for about nine years now (here’s a video of Katie discussing some of our collections on C-SPAN in 2014!). As a vital part of the library, and our leader in Special Collections & Archives, I wanted to get her opinion on how the division has progressed thus far and how they plan to continue to do so in regards to diversity and inclusion.  How would you describe FSU SCA when you first started? “…People didn’t feel comfortable communicating [with each other]… There was one person who really wrote for the blog, and maybe it would happen once every couple of months. When I came on board, my general sense was that we were a department and a group of people with a lot of really great ideas and some fantastic materials, who had come a long way from where things has been, but who hadn’t gotten to a place to be able to organize to change more or to really work more as a team… We were definitely valued as (mostly) the fancy crown jewel group. Really all that mattered was the stuff… it didn’t matter what we were doing with it.” How do you feel the lapse in communication affected diversity and inclusion? “While I don’t have any direct evidence that it excluded people or helped create an environment that was exclusive, I do know that even with our staff at the time, there were times where it contributed to hostilities, frustrations, an  environment where people didn’t feel able to speak or be comfortable in…Everybody just wanted to be comfortable with the people who were just like them that it definitely created some potentially hostile environments. Looking back, I recognize what a poor job we did, as a workplace and a community truly being inclusive, and not just in ways that are immediately visible.” How diverse was SCA when you started?  “In Special Collections there was minimal diversity, certainly less than we have now… [For the libraries as a whole] as you go up in classification and pay, the diversity decreases. That was certainly true when I got here and that remains true.” How would you rank SCA’s diversity and inclusion when you first started? “…Squarely a 5, possibly in some arenas a 4. Not nothing, but I feel like no one was really thinking of it.” And how would you describe it now? “Maybe we’re approaching a 7, I feel like there’s been progress, but there’s still a long way to go in my opinion.” What are some ways we can start addressing these issues? What are some tangible ways you are planning to enact? 
“For me, some of the first places [is] forming the inclusive research services task force in Special Collections, pulling together a group to look at descriptive practices and applications, and what we're doing with creating coordinated processing workflows. Putting these issues on the table from the beginning is really important… Right now because we're primarily in an online environment, I think we have some time to negotiate and change our practices so when we re-open to the public and people are physically coming in to the spaces, we have new forms, new trainings, people have gone through training that gives them a better sense of identity, communication, diversity." After my conversation with Katie, I feel optimistic about the direction we are heading in. Knowing how open Special Collections & Archives is about taking critique and trying to put it into action brought me comfort. I'm excited to see how these concerns are addressed and how the department will be putting Dynamic Inclusivity, one of Florida State University's core values, at the forefront of their practice. I would like to give a big thank you to Katie McCormick for taking the time to do this post with me and for having these conversations! Link to Post | Language: English friday art blog: Terry Frost Posted on September 3, 2020 from Culture on Campus Black and Red on Blue (Screenprint, A/P, 1968) Born in Leamington Spa, Warwickshire, in 1915, Terry Frost KBE RA did not become an artist until he was in his 30s. During World War II, he served in France, the Middle East and Greece, before joining the commandos. While in Crete in June 1941 he was captured and sent to various prisoner of war camps. As a prisoner at Stalag 383 in Bavaria, he met Adrian Heath, who encouraged him to paint. After the war he attended Camberwell School of Art and the St. Ives School of Art and painted his first abstract work in 1949. In 1951 he moved to Newlyn and worked as an assistant to the sculptor Barbara Hepworth. He was joined there by Roger Hilton, and they began a collaboration in collage and construction techniques. In 1960 he put on his first exhibition in the USA, in New York, and there he met many of the American abstract expressionists, including Mark Rothko, who became a great friend. Terry Frost's career included teaching at the Bath Academy of Art, serving as Gregory Fellow at the University of Leeds, and also teaching at the Cyprus College of Art. He later became the artist in residence and Professor of Painting at the Department of Fine Art of the University of Reading. Orange Dusk (Lithograph, 2/75, 1970) Frost was renowned for his use of the Cornish light, colour and shape. He became a leading exponent of abstract art and a recognised figure of the British art establishment. These two prints were purchased in the early days of the Art Collection at the beginning of the 1970s. Terry Frost married Kathleen Clarke in 1945 and they had six children, two of whom became artists (and another, Stephen Frost, a comedian). His grandson Luke Frost, also an artist, is shown here, speaking about his grandfather. Link to Post | Language: English PIDB Sets Next Virtual Public Meeting for October 7, 2020 Posted on September 3, 2020 from Transforming Classification The Public Interest Declassification Board (PIDB) has scheduled its next virtual public meeting for Wednesday, October 7, 2020, from 1:00 to 2:30 p.m.
At the meeting, PIDB members will discuss their priorities for improving classification and declassification in the next 18 months. They will also introduce former Congressman Trey Gowdy, who was appointed on August 24, 2020, to a three-year term on the PIDB. A full agenda, as well as information on how to pre-register, and how to submit questions and comments to the PIDB prior to the virtual meeting, will be posted soon to Transforming Classification. The PIDB looks forward to your participation in continuing our public discussion of priorities for modernizing the classification system going forward. Link to Post | Language: English Digital Collections Updates Posted on September 3, 2020 from UNC Greensboro Digital Collections So as we start a new academic year, we thought this would be a good time for an update on what we’ve been working on recently. Digital collections migration: After more than a year’s delay, the migration of our collections into a new and more user-friendly (and mobile-friendly) platform driven by the Islandora open-source content management system is in the home stretch. This has been a major undertaking and has given us the opportunity to reassess how our collections work. We hope to be live with the new platform in November. 30,000 items (over 380,000 digital images) have already been migrated. 2019-2020 Projects: We’ve made significant progress on most of this year’s projects (see link for project descriptions), though many of these are currently not yet online pending our migration to the Islandora platform: Grant-funded projects: Temple Emanuel Project: We are working with the Public History department and a graduate student in that program. Several hundred items have already been digitized and more work is being done. We are also exploring grant options with the temple to digitize more material. People Not Property: NC Slave Deeds Project: We are in the final year of this project funded by the National Archives and hope to have it online as part of the Digital Library on American Slavery late next year. We are also exploring additional funding options to continue this work. Women Who Answered the Call: This project was funded by a CLIR Recordings at Risk grant. The fragile cassettes have been digitized and we are midway through the process of getting them online in the new platform. Library-funded projects: Poetas sin Fronteras: Poets Without Borders, the Scrapbooks of Dr. Ramiro Lagos: These items have been digitized and will go online when the new platform launches. North Carolina Runaway Slaves Ads Project, Phase 2: Work continues on this ongoing project and over 5700 ads are now online. This second phase has involved both locating and digitizing/transcribing the ads, and we will soon triple the number of ads done in Phase One. We are also working on tighter integration of this project into the Digital Library on American Slavery. PRIDE! of the Community: This ongoing project stemmed from an NEH grant two years ago and is growing to include numerous new oral history interviews and (just added) a project to digitize and display ads from LGBTQ+ bars and other businesses in the Triad during the 1980s and 1990s. We are also working with two Public History students on contextual and interpretive projects based on the digital collection. Faculty-involved projects: Black Lives Matter Collections: This is a community-based initiative to document the Black Lives Matter movement and recent demonstrations and artwork in the area. Faculty: Dr. 
Tara Green (African American and Diaspora Studies); Stacey Krim, Erin Lawrimore, Dr. Rhonda Jones, David Gwynn (University Libraries). Civil Rights Oral Histories: This has become multiple projects. We are working with several faculty members in the Media Studies department to make these transcribed interviews available online. November is the target. Faculty: Matt Barr, Jenida Chase, Hassan Pitts, and Michael Frierson (Media Studies); Richard Cox, Erin Lawrimore, David Gwynn (University Libraries). Oral Contraceptive Ads: Working with a faculty member and a student on this project, which may be online by the end of the year. Faculty: Dr. Heather Adams (English); David Gwynn and Richard Cox (University Libraries). Well-Crafted NC: Work is ongoing and we are in the second year of a UNCG P2 grant, working with a faculty member in the Bryan School and a brewer based in Asheboro. Faculty: Erin Lawrimore, Richard Cox, David Gwynn (University Libraries), Dr. Erick Byrd (Marketing, Entrepreneurship, Hospitality, and Tourism) New projects taken on during the pandemic: City of Greensboro Scrapbooks: Huge collection of scrapbooks from the Greensboro Urban Development Department dating back to the 1940s. These items have been digitized and will go online when the new platform launches. Negro Health Week Pamphlets: 1930s-1950s pamphlets published by the State of North Carolina. These items are currently being digitized and will go online when the new platform launches. Clara Booth Byrd Collection: Manuscript collection. These items are currently being digitized and will go online when the new platform launches. North Carolina Speaker Ban Collection: Manuscript collection. These items are currently being digitized and will go online when the new platform launches. Mary Dail Dixon Papers: Manuscript collection. These items are currently being digitized and will go online when the new platform launches. Ruth Wade Hunter Collection: Manuscript collection. These items are currently being digitized and will go online when the new platform launches. Projects on hold pending the pandemic: Junior League of Greensboro: Much of this has already been digitized and will go online when the new platform launches. UNCG Graduate School Bulletins: Much of this has already been digitized and will go online when the new platform launches. David Gwynn (Digitization Coordinator, me) offers kudos to Erica Rau and Kathy Howard (Digitization and Metadata Technicians); Callie Coward (Special Collections Cataloging & Digital Projects Library Technician); Charley Birkner (Technology Support Technician); and Dr. Brian Robinson (Fellow for Digital Curation and Scholarship) for their great work in very surreal circumstances over the past six months. Link to Post | Language: English CORRECTION: Creative Fellowship Call for Proposals Posted on September 3, 2020 from Notes For Bibliophiles We have an update to our last post! We're still accepting proposals for our 2021 Creative Fellowship… But we've decided to postpone both the Fellowship and our annual Exhibition & Program Series by six months due to the coronavirus. The annual exhibition will now open on October 1, 2021 (which is 13 months away, but we're still hard at work planning!). The new due date for Fellowship proposals is April 1, 2021. We've adjusted the timeline and due dates in the call for proposals accordingly.
Link to Post | Language: English On This Day in the Florida Flambeau, Friday, September 2, 1983 Posted on September 2, 2020 from Illuminations Today in 1983, a disgruntled reader sent in this letter to the editor of the Flambeau. In it, the reader describes the outcome of a trial and the potential effects that outcome will have on the City of Tallahassee. Florida Flambeau, September 2, 1983 It is such a beautifully written letter that I still can’t tell whether or not it’s satire. Do you think the author is being serious or sarcastic? Leave a comment below telling us what you think! Link to Post | Language: English Hartgrove, Meriwether, and Mattingly Posted on September 2, 2020 from The Consecrated Eminence The past few months have been a challenging time for archivists everywhere as we adjust to doing our work remotely. Fortunately, the materials available in Amherst College Digital Collections enable us to continue doing much of our work. Back in February, I posted about five Black students from the 1870s and 1880s — Black Men of Amherst, 1877-1883 — and now we’re moving into the early 20th century. A small clue in The Olio has revealed another Black student that was not included in Harold Wade’s Black Men of Amherst. Robert Sinclair Hartgrove (AC 1905) was known to Wade, as was Robert Mattingly (AC 1906), but we did not know about Robert Henry Meriwether. These three appear to be the first Black students to attend Amherst in the twentieth century. Robert Sinclair Hartgrove, Class of 1905 The text next to Hartgrove’s picture in the 1905 yearbook gives us a tiny glimpse into his time at Amherst. The same yearbook shows Hartgrove not just jollying the players, but playing second base for the Freshman baseball team during the 1902 season. Freshman Baseball Team, 1902 The reference to Meriwether sent me to the Amherst College Biographical Record, where I found Robert Henry Meriwether listed as a member of the Class of 1904. A little digging into the College Catalogs revealed that he belongs with the Class of 1905. College Catalog, 1901-02 Hartgrove and Meriwether are both listed as members of the Freshman class in the 1901-02 catalog. The catalog also notes that they were both from Washington, DC and the Biographical Record indicates that they both prepped at Howard University before coming to Amherst. We find Meriwether’s name in the catalog for 1902-03, but he did not “pull through” as The Olio hopes Hartgrove will; Meriwether returned to Howard University where he earned his LLB in 1907. Hartgrove also became a lawyer, earning his JB from Boston University in 1908 and spending most of his career in Jersey City, NJ. Robert Nicholas Mattingly, Class of 1906 Mattingly was born in Louisville, KY in 1884 and prepped for Amherst at The M Street School in Washington, DC, which changed its name in 1916 to The Dunbar School. Matt Randolph (AC 2016) wrote “Remembering Dunbar: Amherst College and African-American Education in Washington, DC” for the book Amherst in the World, which includes more details of Mattingly’s life. The Amherst College Archives and Special Collections reading room is closed to on-site researchers. However, many of our regular services are available remotely, with some modifications. Please read our Services during COVID-19 page for more information. Contact us at archives@amherst.edu. 
Link to Post | Language: English Democratizing Access to our Records Posted on September 1, 2020 from AOTUS The National Archives has a big, hairy audacious strategic goal to provide public access to 500 million digital copies of our records through our online Catalog by FY24. When we first announced this goal in 2010, we had less than a million digital copies in the Catalog and getting to 500 million sounded to some like a fairy tale. The goal received a variety of reactions from people across the archival profession, our colleagues and our staff. Some were excited to work on the effort and wanted particular sets of records to be first in line to scan. Some laughed out loud at the sheer impossibility of it. Some were angry and said it was a waste of time and money. Others were fearful that digitizing the records could take their jobs away. We moved ahead. Staff researched emerging technologies and tested them through pilots in order to increase our efficiency. We set up a room at our facilities in College Park to transfer our digital copies from individual hard drives to new technology from Amazon, known as snowballs. We worked on developing new partnership projects in order to get more records digitized. We streamlined the work in our internal digitization labs and we piloted digitization projects with staff in order to find new ways to get digital copies into the Catalog. By 2015, we had 10 million in the Catalog. We persisted. In 2017, we added more digital objects, with their metadata, to the Catalog in a single year than we had for the preceding decade of the project. Late in 2019, we surpassed a major milestone by having more than 100 million digital copies of our records in the Catalog. And yes, it has strained our technology. The Catalog has developed growing pains, which we continue to monitor and mitigate. We also created new finding aids that focus on digital copies of our records that are now available online: see our Record Group Explorer and our Presidential Library Explorer. So now, anyone with a smart phone or access to a computer with wifi, can view at least some of the permanent records of the U.S. Federal government without having to book a trip to Washington, D.C. or one of our other facilities around the country. The descriptions of over 95% of our records are also available through the Catalog, so even if you can’t see it immediately, you can know what records exist. And that is convenient for the millions of visitors we get each year to our website, even more so during the pandemic. National Archives Identifier 20802392 We are well on our way to 500 million digital copies in the Catalog by FY24. And yet, with over 13 billion pages of records in our holdings, we know, we have only just begun. Link to Post | Language: English Lola Hayes and “Tone Pictures of the Negro in Music” Posted on August 31, 2020 from NYPR Archives & Preservation Lola Wilson Hayes (1906-2001) was a highly-regarded African-American mezzo-soprano, WNYC producer, and later, much sought after vocal teacher and coach. A Boston native, Hayes was a music graduate of Radcliffe College and studied voice with Frank Bibb at Baltimore’s Peabody Conservatory. She taught briefly at a black vocational boarding school in New Jersey known as the ‘Tuskeegee of the north'[1] before embarking on a recital and show career which took her to Europe and around the United States. 
During World War II, she also made frequent appearances at the American Theatre Wing of the Stage Door Canteen of New York and entertained troops at USO clubs and hospitals. Headline from The New York Age, August 12, 1944, pg. 10. (WNYC Archive Collections) Hayes also made time to produce a short but notable run of WNYC programs, which she hosted and performed on the home front. Her November and December 1943 broadcasts were part of a rotating half-hour time slot designated for known recitalists. She shared the late weekday afternoon slot with sopranos Marjorie Hamill, Pina La Corte, Jean Carlton, Elaine Malbin, and the Hungarian pianist Arpád Sándor. Hayes' series, Tone Pictures of the Negro in Music, sought to highlight African-American composers and was frequently referred to as The Negro in Music. The following outline of 1943 and 1944 broadcasts was pieced together from the WNYC Masterwork Bulletin program guide and period newspaper radio listings. Details on the 1943 programs are sparse. We know that Hayes' last broadcast in 1943 featured the pianist William Duncan Allen (1906-1999) performing They Led My Lord Away by Roland Hayes and Good Lord Done Been Here by Hall Johnson, and a Porgy and Bess medley by George Gershwin. Excerpt from "Behind the Mike," November/December 1944, WNYC Masterwork Bulletin. (WNYC Archive Collections) The show was scheduled again in August 1944 as a 15-minute late Tuesday afternoon program and in November that year as a half-hour Wednesday evening broadcast. The August programs began with an interview of soprano Abbie Mitchell (1884-1960), the widow of composer and choral director Will Marion Cook (1869-1944). The composer and arranger Hall Johnson (1888-1970) was her studio guest the following week. The third Tuesday of the month featured pianist Jonathan Brice performing "songs of young contemporary Negro composers," and the August shows concluded with selections from Porgy and Bess and Cameron Jones. The November broadcasts focused on the work of William Grant Still, "the art songs, spirituals and street cries" of William Lawrence, as well as the songs and spirituals of William Rhodes, lyric soprano Lillian Evanti, and baritone Harry T. Burleigh. Hayes also spent airtime on the work of neo-romantic composer and violinist Clarence Cameron White. The November 29th program considered "the musical setting of poems by Langston Hughes" and reportedly included the bard himself. "Langston Hughes was guest of honor and punctuated his interview with a reading from his opera Troubled Island."[2] This was not the first time the poet's work was the subject of Hayes' broadcast. Below is a rare copy of her script from a program airing eight months earlier when she sat in for the regularly scheduled host, soprano Marjorie Hamill. The script for Tone Pictures of the Negro in Music hosted by Lola Hayes on March 24, 1944. (Image used with permission of Van Vechten Trust and courtesy of the Carl Van Vechten Papers Relating to African American Arts and Letters. James Weldon Johnson Collection in the Yale Collection of American Literature, Beinecke Rare Book and Manuscript Library)[3] It is unfortunate, but it appears there are no recordings of Lola Hayes' WNYC program. We can't say if that's because they weren't recorded or, if they were, the lacquer discs have not survived.
We do know that World War II-era transcription discs, in general, are less likely to have survived since most of them were cut on coated glass, rather than aluminum, to save vital metals for the war effort. After the war, Hayes focused on voice teaching and coaching. Her students included well-known performers like Dorothy Rudd Moore, Hilda Harris, Raoul Abdul-Rahim, Carol Brice, Nadine Brewer, Elinor Harper, Lucia Hawkins, and Margaret Tynes. She was the first African-American president of the New York Singing Teachers Association (NYSTA), serving in that post from 1970-1972. In her later years, she devoted much of her time to the Lola Wilson Hayes Vocal Artists Award, which gave substantial financial aid to young professional singers worldwide.[4]  ___________________________________________________________ [1] The Manual Training and Industrial School for Colored Youth in Bordentown, New Jersey [2] “The Listening Room,” The People’s Voice, December 2, 1944, pg. 29. The newspaper noted that the broadcast included Hall Johnson’s Mother to Son, Cecil Cohen’s Death of an Old Seaman and Florence Price’s Song to a Dark Virgin, all presumably sung by host, Lola Hayes.  Troubled Island is an opera set in Haiti in 1791. It was composed by William Grant Still with a libretto by Langston Hughes and Verna Arvey. [3] Page two of the script notes Langston Hughes’ grandmother was married to a veteran of the 1859 Harper’s Ferry raid led by abolitionist John Brown. Indeed, Hughes’ grandmother’s first husband was Lewis Sheridan Leary, who was one of Brown’s raiders at Harper’s Ferry. For more on the story please see: A Shawl From Harper’s Ferry. [4] Abdul, Raoul, “Winners of the Lola Hayes Vocal Scholarship and Awards,” The New York Amsterdam News, February 8, 1992, pg. 25. Special thanks to Valeria Martinez for research assistance.   Link to Post | Language: English the road to edinburgh Posted on August 28, 2020 from Culture on Campus On the 50th anniversary of the 1970 Edinburgh Commonwealth Games newly catalogued collections trace the long road to the first Games held in Scotland. A handwritten note dated 10th April 1957 sits on the top of a file marked ‘Scotland for 1970 Host’. The document forms part of a series of files recording the planning, organisation and operation of the 1970 Edinburgh Commonwealth Games, the first to be held in Scotland. Written by Willie Carmichael, a key figure in Scotland’s Games history, the note sets out his plans to secure the Commonwealth Games for Scotland. He begins by noting that Scotland’s intention to host the Games was made at a meeting of Commonwealth Games Federations at the 1956 Melbourne Olympic Games. Carmichael then proceeds to lay out the steps required to make Scotland’s case to be the host of the Games in 1966 or 1970. Willie Carmichael The steps which Carmichael traced out in his note can be followed through the official records and personal papers relating to the Games held in the University Archives. The recently catalogued administrative papers of Commonwealth Games Scotland for the period provide a detailed account of the long process of planning for this major event, recording in particular the close collaboration with Edinburgh Corporation which was an essential element in securing the Games for Scotland (with major new venues being required for the city to host the event). 
Further details and perspectives on the road to the 1970 Games can be found in the personal papers of figures associated with Commonwealth Games Scotland also held in the University Archives including Sir Peter Heatly and Willie Carmichael himself. The choice of host city for the 1966 Games was to be made at a meeting held at the 1962 Games in Perth, Australia. The first target on Carmichael’s plan, the Edinburgh campaign put forward its application as host city at a Federation meeting held in Rome in 1960. A series of press cutting files collected by Carmichael trace the campaigns progress from this initial declaration of intent through to the final decision made in Perth. Documents supporting Edinburgh’s bid to host the 1966 Commonwealth Games presented to meetings of the Commonwealth Games Federation in Rome (1960) and Perth (1962), part of the Willie Carmichael Archive. Edinburgh faced competition both within Scotland, with the press reporting a rival bid from Glasgow, and across the Commonwealth, with other nations including Jamaica, India and Southern Rhodesia expressing an interest in hosting the 1966 competition. When it came to the final decision in 1962 three cities remained in contention: Edinburgh, Kingston in Jamaica, and Salisbury in Southern Rhodesia. The first round of voting saw Salisbury eliminated. In the subsequent head-to-head vote Kingston was selected as host city for the 1966 Games by the narrowest of margins (17 votes to 16). As Carmichael had sketched out in his 1957 plan if Edinburgh failed in its attempt to host the 1966 Games it would have another opportunity to make its case to hold the 1970 event. Carmichael and his colleagues travelled to Kingston in 1966 confident of securing the support required to bring the Games to Scotland in 1970. In our next blog we’ll look at how they succeeded in making the case for Edinburgh. ‘Scotland Invites’, title page to document supporting Edinburgh’s bid to host the 1966 Commonwealth Games (Willie Carmichael Archive). Link to Post | Language: English friday art blog: kate downie Posted on August 27, 2020 from Culture on Campus Nanbei by Kate Downie (Oil on canvas, 2013) During a series of visits to China a few years ago, Kate Downie was brought into contact with traditional ink painting techniques, and also with the China of today. There she encountered the contrasts and meeting points between the epic industrial and epic romantic landscapes: the motorways, rivers, cityscapes and geology – all of which she absorbed and reflected on in a series of oil and ink paintings. As Kate creates studies for her paintings in situ, she is very much immersed in the landscapes that she is responding to and reflecting on. The artwork shown above, ‘Nanbei’, which was purchased by the Art Collection in 2013, tackles similar themes to Downie’s Scottish based work, reflecting both her interest in the urban landscape and also the edges where land meets water. Here we encounter both aspects within a new setting – an industrial Chinese landscape set by the edge of a vast river. Downie is also obsessed with bridges. As well as the bridge that appears in this image, seemingly supported by trees that follow its line, the space depicted forms an unseen bridge between two worlds and two extremes, between epic natural and epic industrial forms. In this imagined landscape, north meets south (Nanbei literally means North South) and mountains meet skyscrapers; here both natural and industrial structures dominate the landscape. 
This juxtaposition is one of the aspects of China that impressed the artist and inspired the resulting work. After purchasing this work by Kate Downie, the Art Collection invited her to be one of three exhibiting artists in its exhibition ‘Reflections of the East’ in 2015 (the other two artists were Fanny Lam Christie and Emma Scott Smith). All artists had links to China, and ‘Nanbei’ was central to the display of works in the Crush Hall that Kate had entitled ‘Shared Vision’. Temple Bridge (Monoprint, 2015) Kate Downie studied Fine Art at Gray’s School of Art, Aberdeen and has held artists’ residencies in the USA and Europe. She has exhibited widely and has also taught and directed major art projects. In 2010 Kate Downie travelled to Beijing and Shanghai to work with ink painting masters and she has since returned there several times, slowly building a lasting relationship with Chinese culture. On a recent visit she learned how to carve seals from soapstone, and these red stamps can now be seen on all of her work, including on her print ‘Temple Bridge’ above, which was purchased by the Collection at the end of the exhibition. Kate Downie recently gave an interesting online talk about her work and life in lockdown. It was organised by The Scottish Gallery in Edinburgh which is currently holding an exhibition entitled ‘Modern Masters Women‘ featuring many women artists. Watch Kate Downie’s talk below: Link to Post | Language: English Telling Untold Stories Through the Emmett Till Archives Posted on August 27, 2020 from Illuminations Detail of a newspaper clipping from the Joseph Tobias Papers, MSS 2017-002 Friday August 28th marks the 65th anniversary of the abduction and murder of Emmett Till. Till’s murder is regarded as a significant catalyst for the mid-century African-American Civil Rights Movement. Calls for justice for Till still drive national conversations about racism and oppression in the United States. In 2015, Florida State University (FSU) Libraries Special Collections & Archives established the Emmett Till Archives in collaboration with Emmett Till scholar Davis Houck, filmmaker Keith Beauchamp, and author Devery Anderson. Since then, we have continued to build robust research collections of primary and secondary sources related to the life, murder, and commemoration of Emmett Till. We invite researchers from around the world, from any age group, to explore these collections and ask questions. It is through research and exploration of original, primary resources that Till’s story can be best understood and that truth can be shared. “Mamie had a little boy…”, from the Wright Family Interview, Keith Beauchamp Audiovisual Recordings, MSS 2015-016 FSU Special Collections & Archives. As noted in our Emmett Till birthday post this year, an interview with Emmett Till’s family, conducted by civil rights filmmaker Keith Beauchamp in 2018, is now available through the FSU Digital Library in two parts. Willie Wright, Thelma Wright Edwards, and Wilma Wright Edwards were kind enough to share their perspectives with Beauchamp and in a panel presentation at the FSU Libraries Heritage Museum that Spring. Soon after this writing, original audio and video files from the interview will be also be available to any visitor, researcher, or aspiring documentary filmmaker through the FSU Digital Library. Emmett Till, December 1954. 
Image from the Davis Houck Papers A presentation by a Till scholar in 2019 led to renewed contact with and a valuable donation from FSU alum Steve Whitaker, who in a way was the earliest contributor to Emmett Till research at FSU. His seminal 1963 master’s thesis, completed right here at Florida State University, is still the earliest known scholarly work on the kidnapping and murder of Till, and was influential on many subsequent retellings of the story. The Till Archives recently received a few personal items from Whitaker documenting life in mid-century Mississippi, as well as a small library of books on Till, Mississippi law, and other topics that can give researchers valuable context for his thesis and the larger Till story. In the future, the newly-founded Emmett Till Lecture and Archives Fund will ensure further opportunities to commemorate Till through events and collection development. FSU Libraries will continue to partner with Till’s family, the Emmett Till Memory Project, Emmett Till Interpretive Center, the Emmett Till Project, the FSU Civil Rights Institute, and other institutions and private donors to collect, preserve and provide access to the ongoing story of Emmett Till. Sources and Further Reading FSU Libraries. Emmett Till Archives Research Guide. https://guides.lib.fsu.edu/till Wright Family Interview, Keith Beauchamp Audiovisual Recordings, MSS 2015-016, Special Collections & Archives, Florida State University, Tallahassee, Florida. Interview Part I: http://purl.flvc.org/fsu/fd/FSU_MSS2015-016_BD_001 Interview Part II: http://purl.flvc.org/fsu/fd/FSU_MSS2015-016_BD_002 Link to Post | Language: English Former Congressman Trey Gowdy Appointed to the PIDB Posted on August 26, 2020 from Transforming Classification On August 24, 2020, House Minority Leader Kevin McCarthy (R-CA) appointed former Congressman Harold W. “Trey” Gowdy, III as a member of the Public Interest Declassification Board. Mr. Gowdy served four terms in Congress, representing his hometown of Spartansburg in South Carolina’s 4th congressional district. The Board members and staff welcome Mr. Gowdy and look forward to working with him in continuing efforts to modernize and improve how the Federal Government classifies and declassifies sensitive information. Mr. Gowdy was appointed by the Minority Leader McCarthy on August 24, 2020. He is serving his first three-year term on the Board. His appointment was announced on August 25, 2020 in the Congressional Record https://www.congress.gov/116/crec/2020/08/25/CREC-2020-08-25-house.pdf Link to Post | Language: English Tracey Sterne Posted on August 25, 2020 from NYPR Archives & Preservation In November of 1981, an item appeared in The New York Times -and it seemed all of us in New York (and elsewhere) who were interested in music, radio, and culture in general, saw it:  “Teresa Sterne,” it read, “who in 14 years helped build the Nonesuch Record label into one of the most distinguished and innovative in the recording industry, will be named Director of Music Programming at WNYC radio next month.” The piece went on to promise that Ms. Sterne, under WNYC’s management, would be creating “new kinds of programming -including some innovative approaches to new music and a series of live music programs.”  This was incredible news. Sterne, by this time, was a true cultural legend. 
She was known not only for those 14 years she’d spent building Nonesuch, a remarkably smart, serious, and daring record label —but also for how it had all ended, with her sudden dismissal from that label by Elektra, its parent company (whose own parent company was Warner Communications), two years earlier. The widely publicized outrage over her termination from Nonesuch included passionate letters of protest from the likes of Leonard Bernstein, Elliott Carter, Aaron Copland —only the alphabetical beginning of a long list of notable musicians, critics and journalists who saw her firing as a sharp blow to excellence and diversity in music. But the dismissal stood.  By coincidence, only three weeks before the news of her hiring broke, I had applied for a job as a part-time music-host at WNYC. Steve Post, a colleague whom I’d met while doing some producing and on-air work at New York’s decidedly non-profit Pacifica station, WBAI, had come over from there to WNYC, a year before, to do the weekday morning music and news program. “Fishko,” he said to me, “they need someone on the weekends -and I think they want a woman.” My day job of longstanding was as a freelance film editor, but I wanted to keep my hand in the radio world. Weekends would be perfect. In two interviews with executives at WNYC, I had failed to impress. But now I could feel hopeful about making a connection to Ms. Sterne, who was a music person, as was I.  Soon after her tenure began, I threw together a sample tape and got it to her through a contact on the inside. And she said, simply: Yeah, let’s give her a chance. And so it began.  Tracey—the name she was called by all friends and colleagues — seemed, immediately, to be a fascinating, controversial character: she was uniquely qualified to do the work at hand, but at the same time she was a fish out of water. She was un-corporate, not inclined to be polite to the young executives upstairs, and not at all enamored of current trends or audience research. For this we dearly loved her, those of us on the air. She cared how the station sounded, how the music connected, how the information about the music surrounded it. Her preoccupations seemed, even then, to be of the Old School. But she was also fiercely modern in her attitude toward the music, unafraid to mix styles and periods, admiring of new music, up on every instrumentalist and conductor and composer, young, old, avant-garde, traditional. And she had her own emphatic and impeccable taste. Always the best, that was her motto —whatever it is, if it’s great, or even just extremely good, it will distinguish itself and find its audience, she felt.  Tracey Sterne, age 13, rehearsing for a Tchaikovsky concerto performance at WNYC in March 1940. (Finkelstein/WNYC Archive Collections) She had developed her ear and her convictions, as it turned out, as a musician, having been a piano prodigy who performed at Madison Square Garden at age 12. She went on to a debut with the New York Philharmonic, gave concerts at Lewisohn Stadium and the Brooklyn Museum, and so on. I could relate. Though my gifts were not nearly at her level, I, too, had been a dedicated, early pianist and I, too, had looked later for other ways to use what I’d learned at the piano keyboard. And our birthdays were on the same date in March. So, despite being at least a couple of decades apart in age, we bonded.  Tracey’s tenure at WNYC was fruitful, though not long. As she had at Nonesuch, she embraced ambitious and adventurous music programming. 
She encouraged some of the on-air personalities to express themselves about the music, to “personalize” the air, to some degree. That was also happening in special programs launched shortly before she arrived as part of a New Music initiative, with John Schaefer and Tim Page presenting a range of music way beyond the standard classical fare. And because of Tracey’s deep history and contacts in the New York music business, she forged partnerships with music institutions and found ways to work live performances by individual musicians and chamber groups into the programming. She helped me carve out a segment on air for something we called Great Collaborations, a simple and very flexible idea of hers that spread out to every area of music and made a nice framework for some observations about musical style and history. She loved to talk (sometimes to a fault) and brainstorm about ways to enliven the idea of classical music on the radio, not something all that many people were thinking about, then.  But management found her difficult, slow and entirely too perfectionistic. She found management difficult, slow and entirely too superficial. And after a short time, maybe a year, she packed up her sneakers —essential for navigating the unforgiving marble floors in that old place— and left the long, dusty hallways of the Municipal Building.  After that, I occasionally visited Tracey’s house in Brooklyn for events which I can only refer to as “musicales.” Her residence was on the Upper West Side, but this family house was treated as a country place, she’d go on the weekends. She’d have people over, they’d play piano, and sing, and it might be William Bolcom and Joan Morris, or some other notables, spending a musical and social afternoon. Later, she and I produced a big, New York concert together for the 300th birthday of Domenico Scarlatti –which exact date fell on a Saturday in 1985. “Scarlatti Saturday,” we called it, with endless phone-calling, musician-wrangling and fundraising needed for months to get it off the ground.  The concert itself, much of which was also broadcast on WNYC, went on for many hours, with appearances by some of the finest pianists and harpsichordists in town and out, lines all up and down Broadway to get into Symphony Space.  Throughout, Tracey was her incorruptible self — and a brilliant organizer, writer, thinker, planner, and impossibly driven producing-partner.  I should make clear, however, that for all her knowledge and perfectionistic, obsessive behavior, she was never the cliche of the driven, lonely careerist -or whatever other cliche you might want to choose. She was a warm, haimish person with friends all over the world, friends made mostly through music. A case in point: the “Scarlatti Saturday” event was produced by the two of us on a shoestring. And Tracey, being Tracey, she insisted that we provide full musical and performance information in printed programs, offered free to all audience members, and of course accurate to the last comma. How to assure this? She quite naturally charmed and befriended the printer — who wound up practically donating the costly programs to the event. By the time we were finished she was making him batches of her famous rum balls and he was giving us additional, corrected pages —at no extra charge. It was not a calculated maneuver -it was just how she did things.  You just had to love and respect her for the life force, the intelligence, the excellence and even the temperament she displayed at every turn. 
Sometimes even now, after her death many years ago at 73 from ALS, I still feel Tracey Sterne’s high standards hanging over me —in the friendliest possible way. ___________________________________________ Sara Fishko hosts WNYC’s culture series, Fishko Files. Link to Post | Language: English Heroes Work Here Posted on August 24, 2020 from AOTUS The National Archives is home to an abundance of remarkable records that chronicle and celebrate the rich history of our nation. It is a privilege to be Archivist of the United States—to be the custodian of our most treasured documents and the head of an agency with such a unique and rewarding mission. But it is my greatest privilege to work with such an accomplished and dedicated staff—the real treasures of the National Archives go home at night. Today I want to recognize and thank the mission-essential staff of NARA’s National Personnel Records Center (NPRC). Like all NARA offices, the NPRC closed in late March to protect its workforce and patrons from the spread of the pandemic and comply with local government movement orders. While modern military records are available electronically and can be referenced remotely, the majority of NPRC’s holdings and reference activity involve paper records that can be accessed only by on-site staff. Furthermore, these records are often needed to support veterans and their families with urgent matters such as medical emergencies, homeless veterans seeking shelter, and funeral services for deceased veterans. Concerned about the impact a disruption in service would have on veterans and their families, over 150 staff voluntarily set aside concerns for their personal welfare and regularly reported to the office throughout the period of closure to respond to these types of urgent requests. These exceptional staff were pioneers in the development of alternative work processes to incorporate social distancing and other protective measures to ensure a safe work environment while providing this critical service. National Personnel Records Center (NPRC) building in St. Louis The Center is now in Phase One of a gradual re-opening, allowing for additional on-site staff.  The same group that stepped up during the period of closure continues to report to the office and are now joined by additional staff volunteers, enabling them to also respond to requests supporting employment opportunities and home loan guaranty benefits. There are now over 200 staff supporting on-site reference services on a rotational basis. Together they have responded to over 32,000 requests since the facility closed in late March. More than half of these requests supported funeral honors for deceased veterans. With each passing day we are a day closer to the pandemic being behind us. Though it may seem far off, there will come a time when Covid-19 is no longer the threat that it is today, and the Pandemic of 2020 will be discussed in the context of history. When that time comes, the mission essential staff of NPRC will be able to look back with pride and know that during this unprecedented crisis, when their country most needed them, they looked beyond their personal well-being to serve others in the best way they were able. As Archivist of the United States, I applaud you for your commitment to the important work of the National Archives, and as a Navy veteran whose service records are held at NPRC, I thank you for your unwavering support to America’s veterans. 
Link to Post | Language: English Contribute to the FSU Community COVID 19 Project Posted on August 21, 2020 from Illuminations Masks Sign, contributed by Lorraine Mon, view this item in the digital library here Students, faculty, and alumni! Heritage & University Archives is collecting stories and experiences from the FSU community during COVID-19. University life during a pandemic will be studied by future scholars. During this pandemic, we have received requests surrounding the 1918 Flu Pandemic. Unfortunately, not many documents describing these experiences survive in the archive. To create a rich record of life in these unique times we are asking the FSU Community to contribute their thoughts, experiences, plans, and photographs to the archive. Working from Home, contributed by Shaundra Lee, view this item in the digital library here How did COVID-19 affect your summer? Tell us about your plans for fall. How did COVID-19 change your plans for classes? Upload photographs of your dorm rooms or your work-from-home setups. If you’d like to see examples of what people have already contributed, please see the collection on Diginole. You can add your story to the project here. Link to Post | Language: English 2021 Creative Fellowship – Call for Proposals Posted on August 21, 2020 from Notes For Bibliophiles PPL is now accepting proposals for our 2021 Creative Fellowship! We’re looking for an artist working in illustration or two-dimensional artwork to create new work related to the theme of our 2021 exhibition, Tomboys. View the full call for proposals, including application instructions, here. The application deadline is now April 1, 2021 (originally October 1, 2020)*. *This deadline has shifted since we originally posted this call for proposals! The 2021 Fellowship, and the Exhibition & Program Series, have both been shifted forward by six months due to the coronavirus. Updated deadlines and timeline in the call for proposals! Link to Post | Language: English Friday art blog: still life in the collection Posted on August 20, 2020 from Culture on Campus Welcome to our new regular blog slot, the ‘Friday Art Blog’. We look forward to your continued company over the next weeks and months. You can return to the Art Collection website here, and search our entire permanent collection here. Pears by Jack Knox (Oil on board, 1973) This week we are taking a look at some of the still life works of art in the permanent collection. ‘Still life’ (or ‘nature morte’ as it is also widely known) refers to the depiction of mostly inanimate subject matter. It has been a part of art from the very earliest days, from thousands of years ago in Ancient Egypt, found also on the walls in 1st century Pompeii, and featured in illuminated medieval manuscripts. During the Renaissance, when it began to gain recognition as a genre in its own right, it was adapted for religious purposes. Dutch golden age artists in particular, in the early 17th century, depicted objects which had a symbolic significance. The still life became a moralising meditation on the brevity of life and the vanity of the acquisition of possessions. But, with urbanization and the rise of a middle class with money to spend, it also became fashionable simply as a celebration of those possessions – in paintings of rare flowers or sumptuous food-laden table tops with expensive silverware and the best china. The still life has remained a popular feature through many modern art movements. 
Artists might use it as an exercise in technique (much cheaper than a live model), as a study in colour, form, or light and shade, or as a meditation in order to express a deeper mood. Or indeed all of these. The works collected by the University of Stirling Art Collection over the past fifty years reflect its continuing popularity amongst artists and art connoisseurs alike. Bouteille et Fruits by Henri Hayden (Lithograph, 75/75, 1968) In the modern era the still life featured in the post-impressionist art of Van Gogh, Cezanne and Picasso. Henri Hayden trained in Warsaw, but moved to Paris in 1907 where Cezanne and Cubism were influences. From 1922 he rejected this aesthetic and developed a more figurative manner, but later in life there were signs of a return to a sub-cubist mannerism in his work, and as a result the landscapes and still lifes of his last 20 years became both more simplified and more definitely composed than the previous period, with an elegant calligraphy. They combine a new richness of colour with lyrical melancholy. Meditation and purity of vision mark the painter’s last years. Black Lace by Anne Redpath (Gouache, 1951) Anne Redpath is best known for her still lifes and interiors, often with added textural interest, and also with the slightly forward-tilted table top, of which this painting is a good example. Although this work is largely monochrome it retains the fascination the artist had with fabric and textiles – the depiction of the lace is enhanced by the restrained palette. Untitled still life by Euan Heng (Linocut, 1/5, 1974) While Euan Heng’s work is contemporary in practice his imagery is not always contemporary in origin. He has long been influenced by Italian iconography, medieval paintings and frescoes. Origin of a rose by Ceri Richards (Lithograph, 30/70, 1967) In Ceri Richards’ work there is a constant recurrence of visual symbols and motifs always associated with the mythic cycles of nature and life. These symbols include rock formations, plant forms, sun, moon and seed-pods, leaf and flower. These themes refer to the cycle of human life and its transience within the landscape of earth. Still Life, Summer by Elizabeth Blackadder (Oil on canvas, 1963) This is a typical example of one of Elizabeth Blackadder’s ‘flattened’ still life paintings, with no perspective. Works such as this retain the form of the table, with the top raised to give the fullest view. Broken Cast by David Donaldson (Oil on canvas, 1975) David Donaldson was well known for his still lifes and landscape paintings as well as literary, biblical and allegorical subjects. Flowers for Fanny by William MacTaggart (Oil on board, 1954) William MacTaggart typically painted landscapes, seascapes and still lifes featuring vases of flowers. These flowers, for his wife, Fanny Aavatsmark, are unusual for not being poppies, his most commonly painted flower. Cake by Fiona Watson (Digital print, 18/25, 2009) We end this blog post with one of the most popular still lifes in the collection. This depiction of a Scottish classic, the Tunnock’s teacake, is a modern take on the still life. It is a firm favourite whenever it is on display. Image by Julie Howden Link to Post | Language: English Solar Energy: A Brief Look Back Posted on August 20, 2020 from Illuminations In the early 1970s the United States was in the midst of an energy crisis. Massive oil shortages and high prices made it clear that alternative ideas for energy production were needed and solar power was a clear front runner. 
The origins of the solar cell in the United States date back to inventor Charles Fritts in the 1880s, and the first attempts at harvesting solar energy for homes, to the late 1930s. In 1974, the State of Florida put its name in the ring to become the host of the National Solar Energy Research Institute. Site proposal for the National Solar Energy Research Institute. Claude Pepper Papers S. 301 B. 502 F. 4 With potential build sites in Miami and Cape Canaveral, the latter possessing the added benefit of proximity to NASA, the Florida Solar Energy Task Force, led by Robert Nabors and endorsed by Representative Pepper, felt confident. The state made it to the final rounds of the search before Golden, Colorado was settled upon as the final location; the institute opened there in 1977. Around this same time, however (1975), the Florida Solar Energy Center was established at the University of Central Florida. The Claude Pepper Papers contain a wealth of information on Florida’s efforts in the solar energy arena from the onset of the energy crisis to the late 1980s. Carbon copy of correspondence between Claude Pepper and Robert L. Nabors regarding the Cape Canaveral proposed site for the National Solar Research Institute. Claude Pepper Papers S. 301 B. 502 F. 4 Earlier this year, “Tallahassee Solar II”, a new solar energy farm, began operating in Florida’s capital city. Located near the Tallahassee International Airport, it provides electricity for more than 9,500 homes in the Leon County area. With the steady gains that the State of Florida continues to make in the area of solar energy expansion, it gets closer to fully realizing its nickname, “the Sunshine State.” Link to Post | Language: English (C)istory Lesson Posted on August 18, 2020 from Illuminations Our next submission is from Rachel Duke, our Rare Books Librarian, who has been with Special Collections for two years. This project was primarily geared towards full-time faculty and staff, so I chose to highlight her contribution to see what a full-time faculty member’s experience would be like looking through the catalog. Frontispiece and Title Page, Salome, 1894. Image from https://collection.cooperhewitt.org/objects/68775953/ The item she chose was Salome, originally written in French by Oscar Wilde and then translated into English. While this book does not explicitly identify as a “Queer Text,” Wilde has become canonized in queer historical literature. In the first edition of the book, there is even a dedication to his lover, Lord Alfred Bruce Douglas, who helped with the translation. While there are documented historical examples of what we would refer to today as “queerness” (queer meaning non-straight), there is still no demarcation of his queerness anywhere in the catalog record. Although the author is not necessarily unpacking his own queer experiences in the text, “both [Salome’s] author and its legacy participate strongly in queer history” as Duke states in her submission. Oscar Wilde and Lord Alfred Bruce Douglas Even though Wilde was in a queer relationship with Lord Alfred Bruce Douglas, and has been accepted into the Queer canon, why doesn’t his catalog record reflect that history? Well, a few factors come into play. One of the main ones is an aversion to retroactively labeling historical figures. Since we cannot confirm which modern label would fit Wilde, we can’t necessarily outright label him as gay. 
How would a queer researcher like me go about finding authors and artists from the past who are connected with queer history? It is important to acknowledge LGBTQ+ erasure when discussing this topic. Since the LGBTQ+ community has historically been marginalized, documentation of queerness is hard to come by because: People did not collect, and even actively erased, Queer and Trans Histories. LGBTQ+ history has been passed down primarily as an oral tradition. Historically, we cannot confirm which labels people would have identified with. Language and social conventions change over time. So while we view and know someone to be queer, since it is not in official documentation we have no “proof.” On the other hand, in some cultures, gay relations were socially acceptable. For example, in the Middle Ages, there was a legislatively approved form of same-sex marriage, known as affrèrement. This example is clearly labeled as *gay* in related library-based description because it was codified that way in the historical record. By contrast, Shakespeare’s sonnets, which (arguably) use queer motifs and themes, are not labeled as “queer” or “gay.” Does queer content mean we retroactively label the AUTHOR queer? Does the implication of queerness mean we should make the text discoverable under queer search terms? Cartoon depicting Oscar Wilde’s visit to San Francisco. By George Frederick Keller – The Wasp, March 31, 1882. Personally, I see both sides. As someone who is queer, I would not want a random person trying to retroactively label me as something I don’t identify with. On the other hand, as a queer researcher, I find it vital to have access to that information. Although they might not have been seen as queer in their time period, their experiences speak to queer history. Identities and people will change, which is completely normal, but as a group that has experienced erasure of their history, it is important to acknowledge all examples of historical queerness as a proof that LGBTQ+ individuals have existed throughout time. How do we responsibly and ethically go about making historical queerness discoverable in our finding aids and catalogs? Click Here to see some more historical figures you might not have known were LGBTQ+. Link to Post | Language: English About ArchivesBlogs ArchivesBlogs syndicates content from weblogs about archives and archival issues and then makes the content available in a central location in a variety of formats. barnesfoundation-org-4540 ---- Visit the Collection | Barnes Foundation Permanent Collection The Barnes Collection Ongoing A story behind every object. #SeeingtheBarnes William James Glackens. 
The Raft (detail), 1915. BF701. Public Domain. Adults $25; students $5; members free. Become a Member Buy Tickets About the Collection The Barnes is home to one of the world’s greatest collections of impressionist, post-impressionist, and early modern paintings, with especially deep holdings in Renoir, Cézanne, Matisse, and Picasso. Assembled by Dr. Albert C. Barnes between 1912 and 1951, the collection also includes important examples of African art, Native American pottery and jewelry, Pennsylvania German furniture, American avant-garde painting, and wrought-iron metalwork. The minute you step into the galleries of the Barnes collection, you know you’re in for an experience like no other. Masterpieces by Vincent van Gogh, Henri Matisse, and Pablo Picasso hang next to ordinary household objects—a door hinge, a spatula, a yarn spinner. On another wall, you might see a French medieval sculpture displayed with a Navajo textile. These dense groupings, in which objects from different cultures, time periods, and media are all mixed together, are what Dr. Barnes called his “ensembles.” The ensembles, each one meticulously crafted by Dr. Barnes himself, are meant to draw out visual similarities between objects we don’t normally think of together. Created as teaching tools, they were essential to the educational program Dr. Barnes developed in the 1920s. The main gallery upon entering the Barnes Foundation collection. © Michael Moran/OTTO One of the first paintings purchased by Dr. Barnes. Vincent van Gogh. The Postman (Joseph-Étienne Roulin), 1889. BF37. Public Domain. Dr. Barnes began collecting in 1912. After making a fortune in the pharmaceutical business, he turned his attention to building “the greatest modern art collection” of his time. In February of that year, he sent his friend, the artist William Glackens, to Paris with instructions to bring back paintings by the French avant-garde. Glackens returned with 33 works, including Van Gogh’s The Postman (1889) and Picasso’s Young Woman Holding a Cigarette (1901). Dr. Barnes quickly established himself as a bold and ambitious collector, traveling frequently to New York and Paris, buying from dealers and sometimes directly from artists. Over the course of four decades, he assembled what is now considered one of the world's greatest collections of impressionist, post-impressionist, and early modern European paintings, with works by Paul Cézanne, Henri Matisse, Pablo Picasso, Amedeo Modigliani, Pierre-Auguste Renoir, and Chaïm Soutine. Great Paintings at the Barnes The collection has the world's largest holdings of paintings by Renoir (181) and Cézanne (69), as well as significant works by Matisse, Picasso, Modigli­ani, Van Gogh, and other renowned artists. Henri Matisse. Le Bonheur de vivre, 1905–1906. BF719. © 2020 Succession H. Matisse / Artists Rights Society (ARS), New York Paul Cézanne. The Card Players (Les Joueurs de cartes), 1890–1892. BF564. Public Domain. Georges Seurat. Models (Poseuses), 1886–1888. BF811. Public Domain. Amedeo Modigliani. Jeanne Hébuterne, 1919. BF285. Public Domain. Pablo Picasso. Young Woman Holding a Cigarette (Jeune femme tenant une cigarette), 1901. BF318. © 2020 Estate of Pablo Picasso / Artists Rights Society (ARS), New York Pierre-Auguste Renoir. Mussel-Fishers at Berneval (Pêcheuses de moules à Berneval, côte normand). 1879. BF989. Public Domain. Claude Monet. The Studio Boat (Le Bateau-atelier), 1876. BF730. © 2020 Estate of Claude Monet In 1922, Dr. 
Barnes chartered the Foundation as an educational institution for teaching people how to look at art. He was inspired by the writings of philosopher John Dewey, who emphasized the importance of education in a truly democratic society, and he decided to devote his whole collection to the project. He commissioned architect Paul Cret to design a gallery (the original building is still in Merion); he hired a teaching staff, including the legendary Violette de Mazia; and the Barnes Foundation opened for classes in 1925. Meanwhile, Dr. Barnes continued to build his collection. Though still focused on European modernism, his interests extended into other areas as well. In the early 1920s, he added a stunning group of African masks and sculptures to his collection—there are over 100—which he acquired from the French art dealer Paul Guillaume. He also began to purchase Native American pottery, jewelry, and textiles; old master paintings; ancient Egyptian, Greek, and Roman art; and American and European decorative and industrial arts, including 887 wrought-iron objects. Dr. Barnes continued collecting until his death in 1951. The wall ensembles are still arranged exactly as he left them. Dr. Barnes, in his Merion gallery. The ensembles created by Dr. Barnes combine art and craft, cosmopolitan and provincial styles, and objects from across periods and cultures. © Michael Moran/OTTO Learn More about the Collection Barnes Focus, Our Mobile Gallery Guide Enhance your experience on-site with our smartphone guide, which offers information and stories about the art and objects in the collection. The Collection Online The design of our collection online was inspired by Dr. Barnes and his approach to looking at art. You can browse 3,000-plus objects by color, light, line, and space, making unexpected and exciting connections between pieces from different eras, places, and cultures. Research Notes The Barnes has a team of curators, scholars, conservators, and archivists actively engaged in research about the works in our galleries. Read about some of our most recent discoveries and theories. Library, Archives, and Special Collections Want to learn more about Dr. Barnes, his collection, and the Barnes Foundation? Our archives, art library, and manuscript and rare book collections are rich research resources. Conservation Our Conservation team has the difficult but rewarding task of caring for the art collection. If you have questions about the Barnes collection, please email us. Your support helps research and conservation at the Barnes, so we can present exhibitions and events. Donate Become a Member Location 2025 Benjamin Franklin Parkway Philadelphia, PA 19130 215.278.7000 Get directions Hours Thu–Mon 11am – 5pm Members: 10am – 5pm Newsletter Please correct your errors Enter your e-mail address Subscribe Please enter a valid email address Processing your request… Thanks for subscribing to our newsletter Useful links Accessibility Terms & Conditions Privacy Policy Non-discrimination Copyright & Image Licensing Find us on social media Site by AREA 17 Our COVID-19 guidelines have changed to reflect current guidance from the City of Philadelphia. Masks are required for all. basecamp-com-3459 ---- Getting Real: The smarter, faster, easier way to build a successful web application | Basecamp Skip to content How it works Before & after Got clients? Pricing Support Sign in Try it FREE How it works Before & after Got clients? 
Pricing Support Sign in Try Basecamp Free Getting Real A must read for anyone building a web app. Getting Real is packed with keep-it-simple insights, contrarian points of view, and unconventional approaches to software design. This isn't a technical book or a design tutorial, it's a book of ideas. Anyone working on a web app - including entrepreneurs, designers, programmers, executives, or marketers - will find value and inspiration in this book. Read it online Download a PDF “I got more out of reading this little e-book than just about any other computer-related book I’ve ever read on any topic that I can possibly think of. Whoa.” -Jared White “Getting Real is now officially our ‘bible.’” -Bill Emmack “I can honestly say that this is the first book I’ve read about software development that has been able to reignite my passion for the process. It is an incredible and very relevant book. Thank you guys for publishing it.” -Anthony Papillion Full list of essays included in the book Introduction What is Getting Real? About Basecamp Caveats, disclaimers, and other preemptive strikes The Starting Line Build Less What’s Your Problem? Fund Yourself Fix Time and Budget, Flex Scope Have an Enemy It Shouldn’t be a Chore Stay Lean Less Mass Lower Your Cost of Change The Three Musketeers Embrace Constraints Be Yourself Priorities What’s the big idea? Ignore Details Early On It’s a Problem When It’s a Problem Hire the Right Customers Scale Later Make Opinionated Software Feature Selection Half, Not Half-Assed It Just Doesn’t Matter Start With No Hidden Costs Can You Handle It? Human Solutions Forget Feature Requests Hold the Mayo Process Race to Running Software Rinse and Repeat From Idea to Implementation Avoid Preferences “Done!” Test in the Wild Shrink Your Time The Organization Unity Alone Time Meetings Are Toxic Seek and Celebrate Small Victories Staffing Hire Less and Hire Later Kick the Tires Actions, Not Words Get Well Rounded Individuals You Can’t Fake Enthusiasm Wordsmiths Interface Design Interface First Epicenter Design Three State Solution The Blank Slate Get Defensive Context Over Consistency Copywriting is Interface Design One Interface Code Less Software Optimize for Happiness Code Speaks Manage Debt Open Doors Words There’s Nothing Functional about a Functional Spec Don’t Do Dead Documents Tell Me a Quick Story Use Real Words Personify Your Product Pricing and Signup chapter 12 Free Samples Easy On, Easy Off Silly Rabbit, Tricks are for Kids A Softer Bullet Promotion Hollywood Launch A Powerful Promo Site Ride the Blog Wave Solicit Early Promote Through Education Feature Food Track Your Logs Inline Upsell Name Hook Support Feel The Pain Zero Training Answer Quick Tough Love In Fine Forum Publicize Your Screwups Post-Launch One Month Tuneup Keep the Posts Coming Better, Not Beta All Bugs Are Not Created Equal Ride Out the Storm Keep Up With the Joneses Beware the Bloat Monster Go With the Flow Other books by Basecamp Shape Up It Doesn't Have to Be Crazy at Work REWORK REMOTE: Office Not Required Basecamp apps: iOS, Android, Mac, and PC, integrations. Company: about us, podcast, blog, books, handbook, newsletter. Guides: going remote, team communication, group chat: group stress. Our new app: HEY - email at its best. Fine print: customer rights, privacy & terms, uptime, system status. Copyright ©1999-2021 Basecamp. All rights reserved. Enjoy the rest of your day! acrl-ala-org-8080 ----
basecamp-com-723 ---- Make Opinionated Software | Getting Real Getting Real Chapter 20: Make Opinionated Software Your app should take sides Some people argue software should be agnostic. They say it’s arrogant for developers to limit features or ignore feature requests. They say software should always be as flexible as possible. We think that’s bullshit. The best software has a vision. The best software takes sides. When someone uses software, they’re not just looking for features, they’re looking for an approach. They’re looking for a vision. Decide what your vision is and run with it. And remember, if they don’t like your vision there are plenty of other visions out there for people. Don’t go chasing people you’ll never make happy. A great example is the original wiki design. Ward Cunningham and friends deliberately stripped the wiki of many features that were considered integral to document collaboration in the past. Instead of attributing each change of the document to a certain person, they removed much of the visual representation of ownership. They made the content ego-less and time-less. They decided it wasn’t important who wrote the content or when it was written. And that has made all the difference. This decision fostered a shared sense of community and was a key ingredient in the success of Wikipedia. Our apps have followed a similar path. They don’t try to be all things to all people. They have an attitude. They seek out customers who are actually partners. They speak to people who share our vision. You’re either on the bus or off the bus. We made Basecamp using the principles in this book. It combines all the tools teams need to get work done in a single, streamlined package. With Basecamp, everyone knows what to do, where things stand, and where to find things they need. arxiv-org-3427 ---- None baserow-io-574 ---- Baserow: Open source no-code database and Airtable alternative new 1.5 July release Open source no-code database and Airtable alternative Create your own online database without technical experience. Our user friendly no-code tool gives you the powers of a developer without leaving your browser. Prefer to self host? Deploy Baserow to Heroku, Cloudron or Ubuntu No-code platform that grows Are your projects, ideas or business processes unorganized or unclear? Do you have many tools for one job? With Baserow you decide how you want to structure everything. Whether you’re managing customers, products, airplanes or all of them. If you know how a spreadsheet works, you know how Baserow works. Flexible software Software tailored to your needs instead of the other way around. Clear and accessible data by all your team members. Never unorganized projects, ideas and notes anymore. One interface for everything. Easily integrate with other software. 
Collaborate in realtime. Unlimited rows. Fast! Developer friendly Easily create custom plugins with our boilerplate or use third party ones. Because Baserow is built with modern and proven frameworks it feels like a breeze for developers. Built with Django and Nuxt. Open source. Self hosted. Headless and API first. Works with PostgreSQL. Supports custom and third party plugins. Ready to bring structure to your organisation? We're in such an early phase that we can’t yet offer you everything we want. Because we appreciate everyone who tries out Baserow you may use the SaaS version for free! For now at least. Hosted SaaS version Free for now For people and companies that want to try out an early version. Early access to the latest features. Unlimited databases and tables. Support via the live contact form and email. Organize your workflow and projects. Receive the latest updates. No costs at this point. Create account Self hosted open source Always free For everyone that wants to self host or develop custom plugins. Can be self hosted. Unlimited users, rows and databases. Easy install with our step by step guide. Will always be free. MIT license. Custom plugins. Baserow repository Early premium € 4 per user / month For companies with advanced needs. Can be self hosted. Unlimited users, rows and databases. Admin dashboard. Role based permissions. Kanban and Calendar views. Lots of other features. More information More detailed pricing Make your business future proof Spend time on running and innovating your business without worrying about software or lost data. Because of our open source nature and dedicated development team you decide it all. Short development cycles Frequent releases and fast bug fixes make sure you never fall behind. No vendor lock-in Our open source core means that you can run Baserow independently on your own server. Blazingly fast Continuously tested with 100.000+ rows per table while in development. Connect with software Baserow is API first which means it is made to connect with other software. Project Tracker Templates Find inspiration or a starting point for your database in one of our templates: Explore all templates Applicant Tracker Personal Task Manager Feature roadmap 2021 V1 March Templates, search and performance The ability to install templates, re-order field columns, additional date filters, searching in grid view, phone field and a huge interface performance improvement. April Order rows and user admin Order rows by drag and drop, manage users as admin premium, run Baserow on your own device locally. May Exporting, Importing and more admin Exporting to CSV, JSON and XML, an admin dashboard and group management premium and importing additional formats like Excel and JSON. June Trash and form view Restore deleted items with trash functionality, form view and re-ordering of applications, tables, views and select options. July Form view, date fields and row comments Improved form view, created on field, last modified field, link to table filters and row comments premium. August Advanced fields and Zapier Formula field, multiple select field, lookup field and an integration with Zapier. September Kanban view and web hooks Kanban view premium to track progress, web hooks and configurable row height October Undo redo and gallery view Advanced undo redo functionality and a gallery to list your data in a more user friendly and manageable way. 
November Public view sharing and multiple copy paste Public grid view sharing, additional link row filters, n8n node and copy pasting multiple values. December Footer calculations and coloring of rows Different type of footer calculations, coloring of rows premium and link to table field improvements. V2 git clone https://gitlab.com/bramw/baserow Cloning into "baserow"... cd baserow docker-compose up Starting db           ... done Starting backend      ... done Starting celery       ... done Starting web-frontend ... done ... echo "Visit http://localhost:3000" Open source Easily create plugins or contribute You don’t have to spend time on reinventing the wheel. We use modern tools and frameworks like Docker, Django, Vue.js and Nuxt.js so you can easily write plugins or contribute. Use our boilerplate and documentation to jumpstart your plugin. Baserow repo Read the docs Plugin boilerplate Early premium version Does it sound good if you could export your data directly to Excel, XML or JSON, have role based permissions and the ability to place comments on row level? Would you like to the visualize your data using a Kanban or Calendar view? Then the premium version might be something for you. It also includes an admin panel, signup rules, SSO login and more. More information Our blog View all blog posts release August 11, 2021 by Bram Wiepjes July 2021 release of Baserow The July release of Baserow contains new created on / last updated field types, a one-click Heroku install, row comments (premium), new templates and much more! info March 2, 2021 by Bram Wiepjes Best Excel alternatives info May 22, 2020 by Bram Wiepjes Best Airtable alternatives Log in Register Contact GitLab repository Sponsor Twitter Product Premium Pricing Developer documentation OpenAPI specification API Blog July 2021 release of Baserow Best Airtable alternatives Best Excel alternatives Under the hood of Baserow Building a database Legal Privacy policy Terms & conditions Newsletter Stay up to date with the lates developments and releases by signing up for our newsletter. Sign up © Copyright 2021 Baserow All rights reserved. amycastor-com-5463 ---- Binance: Italy, Lithuania, Hong Kong, all issue warnings; Brazil director quits – Amy Castor Primary Menu Amy Castor Independent Journalist About Me Selected Clips Contact Me Blog Subscribe to Blog via Email Enter your email address to subscribe to this blog and receive notifications of new posts by email. Join 14,571 other followers Email Address: Subscribe Twitter Updates FYI - I'm taking an actual vacation for the next week, so I'll be quiet on Twitter and not following the news so mu… twitter.com/i/web/status/1… 1 day ago RT @davidgerard: News: the Senate hates Bitcoin, Tether and USDC attestations, DeFi Money Market and Poloniex settle with SEC, Poly Network… 1 day ago RT @franciskim_co: @WuBlockchain Translation https://t.co/hPQeFLjHpU 1 day ago RT @WuBlockchain: The Chinese government is cracking down on fraud. 
They posted a fraud case involving USDT on the wall to remind the publi… 1 day ago RT @patio11: The core use case for stablecoins is non-correlated collateral for making margin-intensive trades, particularly via levered us… 2 days ago Recent Comments cryptobuy on Binance: Fiat off-ramps keep c… Steve on Binance: A crypto exchange run… Amy Castor on El Salvador’s bitcoin plan: ta… Amy Castor on El Salvador’s bitcoin plan: ta… Clearwell Trader on El Salvador’s bitcoin plan: ta… Skip to content Amy Castor Binance: Italy, Lithuania, Hong Kong, all issue warnings; Brazil director quits Ever since Germany’s BaFin and the UK’s FCA issued warnings against Binance, the dominoes have continued to topple. Global regulators are fed up with the world’s biggest crypto exchange. This last week, three more jurisdictions issued warnings about Binance’s tokenized stocks, joining several others in voicing their concerns about the exchange. In a press release on Thursday, Italy’s market watchdog Consob warned investors that Binance and its subsidiaries “are not authorized to provide investment services and activities in Italy.” The notice specifically points to Binance’s “stock token.”  Lithuania’s central bank issued a warning on Friday about Binance UAB, a Binance affiliate, providing “unlicensed investment services.” “Companies that are registered in Lithuania as virtual currency exchange operators are not supervised as financial service providers. They also have no right to provide any financial services, including investment services,” the Bank of Lithuania said. Also on Friday, Hong Kong’s Securities and Futures Commission announced that Binance is not licensed to trade stock tokens in the territory.  In a statement, Thomas Atkinson, the SFC’s executive director of enforcement, had stern words for the exchange: “The SFC does not tolerate any violations of the securities laws and will not hesitate to take enforcement action against unlicensed platform operators where appropriate.” Binance responded to the mounting pressure by announcing on its website that it would cease offering stock tokens. Effective immediately, you can no longer buy stock tokens on Binance, and the exchange will stop supporting them on October 14. As for the unlucky ones who are still holding Binance stock tokens, you apparently have 90 days to try and offload them onto someone else. The exchange also deleted mentions of stock tokens on its website. If you click on a link to “Introduction to Stock Tokens” on the site, you get a “404 error.” You can still visit the page here, however. A short-lived bad idea Binance introduced its tokenized stocks idea on April 12, starting with Tesla, followed by Coinbase, and later MicroStrategy, Microsoft and Apple. (Links are to archives on Wayback machine.) “Unlike traditional stocks, users can purchase fractional shares of the listed companies with stock tokens. For instance, for a Tesla share that trades at over $700 per share, stock tokens enable investors to buy a piece of the underlying share (e.g., 0.01) instead of the entire unit,” Binance explained on its website. Prices were settled in BUSD — a stablecoin Binance created in partnership with Paxos, a NY-based company. Binance claims its stock tokens are fully backed by shares held by CM-Equity AG, a regulated asset management firm in Germany. 
The exchange also said Friday that users in the EEA and Switzerland will be able to transition their stock token balances to CM-Equity AG once the brokerage creates a special portal for that purpose, sometime in September or early October. However, the transition will require additional KYC. Binance, whose modus operandi has always been to ignore the laws and do whatever, launched its stock token service two days before US crypto exchange Coinbase went public on the Nasdaq and bitcoin reached an all-time high of nearly $65,000. The price of bitcoin is now less than half of that. In April, Germany’s financial regulator BaFin warned that Binance risked being fined for offering its securities-tracking tokens without publishing an investor prospectus. Binance went back and forth with BaFin on the issue, trying to persuade them to take the notice down, according to the FT, but to no avail. The warning stayed up. In June, the UK followed with its own consumer warning, and then one by one, a host of other global regulators issued their own cautions about Binance, and banks began cutting off services to the exchange — essentially a form of slow strangulation.   Binance clearly wasn’t thinking when it introduced those stock tokens. The move appears to have been driven by the hubris of its CEO CZ, who is now realizing that actions have repercussions. Or maybe not, since his recent tweets and a blog post celebrating Binance’s fourth birthday seem to reflect an ongoing detachment from reality. “Together, we can increase the freedom of money for people around the world, in safe and compliant ways,” he wrote. By freedom, I assume he means, freedom to operate outside the law, or freedom to freeze withdrawals on his exchanges — a frequent user complaint, according to Gizmodo. FTX and Bittrex Binance isn’t the only crypto exchange to offer stock tokens. Sam Bankman-Fried’s FTX exchange also offers tokenized stocks (archive) — a service that it added in June. I suspect that a lot of Binance’s business will flow over to FTX, and we’ll soon see similar regulatory crackdowns on FTX.  Like Binance, FTX has a US version of its exchange and a main site. FTX is registered in Antigua and Barbuda with headquarters in Hong Kong. It offers stock tokens for Tesla, GameStock, Beyond Meat, PayPal, Twitter, Google, Amazon, and a host of others.  Bittrex Global — another exchange that has a regulated US-based arm — also offers an impressive array of stock tokens. The Liechtenstein-based firm added the service in December 2020, according to a press release at the time, noting that “these tokenized stocks are available even in countries where accessing US stocks through traditional financial instruments is not possible.”  FTX and Bittrex also claim their stock tokens are backed by actual stocks held by CM-Equity AG. Binance Brazil director resigns Banks are not the only ones distancing themselves from Binance these days. Amidst the recent drama, Ricardo Da Ros, Binance’s director of Brazil announced his departure on LinkedIn. He had only been with the company for six months.   “There was a misalignment of expectations about my role and I made the decision according to my personal values,” he said. Other employees have also exited stage left in recent months. Wei Zhou, the chief finance officer at Binance, quit abruptly in June, and Catherine Coley, the CEO of Binance.US stepped down in May — though nobody has heard from her since. If you like my work, please support my writing. 
Subscribe to my Patreon account for as little as $5 a month.  Share this: Twitter Facebook LinkedIn Like this: Like Loading... BaFinBinanceFCA Posted on July 17, 2021July 17, 2021 by Amy Castor in Blogging 0 Post navigation Previous postBinance: Fiat off-ramps keep closing, reports of frozen funds, what happened to Catherine Coley? Next postNews: Regulators zero in on stablecoins, El Salvador’s colón-dollar, Tether printer remains paused Leave a Reply Cancel reply Enter your comment here... Fill in your details below or click an icon to log in: Email (Address never made public) Name Website You are commenting using your WordPress.com account. ( Log Out /  Change ) You are commenting using your Google account. ( Log Out /  Change ) You are commenting using your Twitter account. ( Log Out /  Change ) You are commenting using your Facebook account. ( Log Out /  Change ) Cancel Connecting to %s Notify me of new comments via email. Notify me of new posts via email. Create a website or blog at WordPress.com %d bloggers like this: baserow-io-8140 ---- Pricing // Baserow Product Premium Pricing Templates Developers Documentation OpenAPI specification API Blog Jobs Contact Repository GitHub Sponsor Login Register Product Premium Pricing Templates Developers Documentation Getting started Start baserow locally Creating a plugin OpenAPI specification API GitLab repository Want to contribute? Blog Jobs 1 Contact Become a sponsor GitLab repository Login Register Pricing Hosted version Free for now Create an account Self hosted Always free Installation instructions Early premium € 4 per user / month More information Not yet available. Can be used in combination with the hosted and self hosted version in the future. Price might change. Usage Groups Unlimited Unlimited Unlimited Databases Unlimited Unlimited Unlimited Tables Unlimited Unlimited Unlimited Rows Unlimited Unlimited Unlimited Features Web app Dashboard Filtering Sorting Public REST API API token permissions Search Templates CSV, XML and JSON import CSV export Trash Web hooks Footer calculations Public view sharing XML and JSON export Role based permissions Row comments Row coloring Views Grid Form Gallery Kanban Calendar Survey Fields Single line text Long text Link to table Number Boolean Date URL Email Phone File Multi select Formula Lookup Created Last modified Admin Admin panel SSO Signup rules Audit logs Payment by invoice Support User support Within 5 business days Optionally Optionally Technical support Optionally Optionally Create account Instructions More information Hosted version Free for now Create an account Unlimited groups Unlimited databases Unlimited tables Unlimited rows Features Web app Dashboard Filtering Sorting Public REST API API token permissions Search Templates CSV, XML and JSON import CSV Export Trash Web hooks Footer calculations Public view sharing Views Grid Form Gallery Fields Single line text Long text Link to table Number Boolean Date URL Email Phone File Multi select Formula Lookup Created Last modified Support User support within 5 business days Create account Self hosted Always free Installation instructions Unlimited groups Unlimited databases Unlimited tables Unlimited rows Features Web app Dashboard Filtering Sorting Public REST API API token permissions Search Templates CSV, XML and JSON import CSV Export Trash Web hooks Footer calculations Public view sharing Views Grid Form Gallery Fields Single line text Long text Link to table Number Boolean Date URL Email Phone File Multi select Formula Lookup 
Created Last modified Support Optionally user support Optionally technical support Instructions Early premium € 4 per user / month Not yet available. Can be used in combination with the hosted and self hosted version in the future. Price might change. More information Unlimited groups Unlimited databases Unlimited tables Unlimited rows Features Web app Dashboard Filtering Sorting Public REST API API token permissions Search Templates CSV, XML and JSON import CSV, XML and JSON export Trash Web hooks Footer calculations Public view sharing Role based permission Row comments Row coloring Views Grid Form Gallery Kanban Calendar Survey Fields Single line text Long text Link to table Number Boolean Date URL Email Phone File Multi select Formula Lookup Created Last modified Admin Admin panel SSO Signup rules Audit logs Payment by invoice Support Optionally user support Optionally technical support More information Is already implemented and can be used Will soon be implemented and cannot be used Additional services Next to our self hosted and premium version we also offer a two additional services that will make your life even easier. Support € 4 per user / month For companies who need a little bit of extra help Priority support. User questions. Technical questions. Contact us Technical consulting € 150 per hour For companies with advanced needs. On premise installation help. On premise updating help. Advanced technical help. Contact us Log in Register Contact GitLab repository Sponsor Twitter Product Premium Pricing Developer documentation OpenAPI specification API Blog July 2021 release of Baserow Best Airtable alternatives Best Excel alternatives Under the hood of Baserow Building a database Legal Privacy policy Terms & conditions Newsletter Stay up to date with the lates developments and releases by signing up for our newsletter. Sign up © Copyright 2021 Baserow All rights reserved. biblehub-com-5521 ---- Mark 4 KJV Bible > KJV > Mark 4 ◄ Mark 4 ► King James Bible  Par ▾  The Parable of the Sower (Matthew 13:1-9; Luke 8:4-15) 1And he began again to teach by the sea side: and there was gathered unto him a great multitude, so that he entered into a ship, and sat in the sea; and the whole multitude was by the sea on the land. 2And he taught them many things by parables, and said unto them in his doctrine, 3Hearken; Behold, there went out a sower to sow: 4And it came to pass, as he sowed, some fell by the way side, and the fowls of the air came and devoured it up. 5And some fell on stony ground, where it had not much earth; and immediately it sprang up, because it had no depth of earth: 6But when the sun was up, it was scorched; and because it had no root, it withered away. 7And some fell among thorns, and the thorns grew up, and choked it, and it yielded no fruit. 8And other fell on good ground, and did yield fruit that sprang up and increased; and brought forth, some thirty, and some sixty, and some an hundred. 9And he said unto them, He that hath ears to hear, let him hear. The Purpose of Jesus' Parables (Matthew 13:10-17) 10And when he was alone, they that were about him with the twelve asked of him the parable. 11And he said unto them, Unto you it is given to know the mystery of the kingdom of God: but unto them that are without, all these things are done in parables: 12That seeing they may see, and not perceive; and hearing they may hear, and not understand; lest at any time they should be converted, and their sins should be forgiven them. 
The Parable of the Sower Explained (Matthew 13:18-23) 13And he said unto them, Know ye not this parable? and how then will ye know all parables? 14The sower soweth the word. 15And these are they by the way side, where the word is sown; but when they have heard, Satan cometh immediately, and taketh away the word that was sown in their hearts. 16And these are they likewise which are sown on stony ground; who, when they have heard the word, immediately receive it with gladness; 17And have no root in themselves, and so endure but for a time: afterward, when affliction or persecution ariseth for the word's sake, immediately they are offended. 18And these are they which are sown among thorns; such as hear the word, 19And the cares of this world, and the deceitfulness of riches, and the lusts of other things entering in, choke the word, and it becometh unfruitful. 20And these are they which are sown on good ground; such as hear the word, and receive it, and bring forth fruit, some thirtyfold, some sixty, and some an hundred. The Lesson of the Lamp (Luke 8:16-18) 21And he said unto them, Is a candle brought to be put under a bushel, or under a bed? and not to be set on a candlestick? 22For there is nothing hid, which shall not be manifested; neither was any thing kept secret, but that it should come abroad. 23If any man have ears to hear, let him hear. 24And he said unto them, Take heed what ye hear: with what measure ye mete, it shall be measured to you: and unto you that hear shall more be given. 25For he that hath, to him shall be given: and he that hath not, from him shall be taken even that which he hath. The Seed Growing Secretly 26And he said, So is the kingdom of God, as if a man should cast seed into the ground; 27And should sleep, and rise night and day, and the seed should spring and grow up, he knoweth not how. 28For the earth bringeth forth fruit of herself; first the blade, then the ear, after that the full corn in the ear. 29But when the fruit is brought forth, immediately he putteth in the sickle, because the harvest is come. The Parable of the Mustard Seed (Matthew 13:31-32; Luke 13:18-19) 30And he said, Whereunto shall we liken the kingdom of God? or with what comparison shall we compare it? 31It is like a grain of mustard seed, which, when it is sown in the earth, is less than all the seeds that be in the earth: 32But when it is sown, it groweth up, and becometh greater than all herbs, and shooteth out great branches; so that the fowls of the air may lodge under the shadow of it. 33And with many such parables spake he the word unto them, as they were able to hear it. 34But without a parable spake he not unto them: and when they were alone, he expounded all things to his disciples. Jesus Stills the Storm (Matthew 8:23-27; Luke 8:22-25) 35And the same day, when the even was come, he saith unto them, Let us pass over unto the other side. 36And when they had sent away the multitude, they took him even as he was in the ship. And there were also with him other little ships. 37And there arose a great storm of wind, and the waves beat into the ship, so that it was now full. 38And he was in the hinder part of the ship, asleep on a pillow: and they awake him, and say unto him, Master, carest thou not that we perish? 39And he arose, and rebuked the wind, and said unto the sea, Peace, be still. And the wind ceased, and there was a great calm. 40And he said unto them, Why are ye so fearful? how is it that ye have no faith? 
41And they feared exceedingly, and said one to another, What manner of man is this, that even the wind and the sea obey him? King James Bible Text courtesy of BibleProtector.com Section Headings Courtesy INT Bible © 2012, Used by Permission Bible Hub bibwild-wordpress-com-7199 ---- Bibliographic Wilderness Skip to content Bibliographic Wilderness Menu About Contact logging URI query params with lograge The lograge gem for taming Rails logs by default will lot the path component of the URI, but leave out the query string/query params. For instance, perhaps you have a URL to your app /search?q=libraries. lograge will log something like: method=GET path=/search format=html… The q=libraries part is completely left out of the log. I kinda want that part, it’s important. The lograge README provides instructions for “logging request parameters”, by way of the params hash. I’m going to modify them a bit slightly to: use the more recent custom_payload config instead of custom_options. (I’m not certain why there are both, but I think mostly for legacy reasons and newer custom_payload? is what you should read for?) If we just put params in there, then a bunch of ugly "foo"} OK. The params hash isn’t exactly the same as the query string, it can include things not in the URL query string (like controller and action, that we have to strip above, among others), and it can in some cases omit things that are in the query string. It just depends on your routing and other configuration and logic. The params hash itself is what default rails logs… but what if we just log the actual URL query string instead? Benefits: it’s easier to search the logs for actually an exact specific known URL (which can get more complicated like /search?q=foo&range%5Byear_facet_isim%5D%5Bbegin%5D=4&source=foo or something). Which is something I sometimes want to do, say I got a URL reported from an error tracking service and now I want to find that exact line in the log. I actually like having the exact actual URL (well, starting from path) in the logs. It’s a lot simpler, we don’t need to filter out controller/action/format/id etc. It’s actually a bit more concise? And part of what I’m dealing with in general using lograge is trying to reduce my bytes of logfile for papertrail! Drawbacks? if you had some kind of structured log search (I don’t at present, but I guess could with papertrail features by switching to json format?), it might be easier to do something like “find a /search with q=foo and source=ef without worrying about other params) To the extent that params hash can include things not in the actual url, is that important to log like that? ….? Curious what other people think… am I crazy for wanting the actual URL in there, not the params hash? At any rate, it’s pretty easy to do. Note we use filtered_path rather than fullpath to again take account of Rails 6 parameter filtering, and thanks again /u/ezekg: config.lograge.custom_payload do |controller| { path: controller.request.filtered_path } end This is actually overwriting the default path to be one that has the query string too: method=GET path=/search?q=libraries format=html ... You could of course add a different key fullpath instead, if you wanted to keep path as it is, perhaps for easier collation in some kind of log analyzing system that wants to group things by same path invariant of query string. I’m gonna try this out! Meanwhile, on lograge… As long as we’re talking about lograge…. 
based on commit history, history of Issues and Pull Requests… the fact that CI isn’t currently running (travis.org grr) and doesn’t even try to test on Rails 6.0+ (although lograge seems to work fine)… one might worry that lograge is currently un/under-maintained…. No comment on a GH issue filed in May asking about project status. It still seems to be one of the more popular solutions to trying to tame Rails kind of out of control logs. It’s mentioned for instance in docs from papertrail and honeybadger, and many many other blog posts. What will it’s future be? Looking around for other possibilties, I found semantic_logger (rails_semantic_logger). It’s got similar features. It seems to be much more maintained. It’s got a respectable number of github stars, although not nearly as many as lograge, and it’s not featured in blogs and third-party platform docs nearly as much. It’s also a bit more sophisticated and featureful. For better or worse. For instance mainly I’m thinking of how it tries to improve app performance by moving logging to a background thread. This is neat… and also can lead to a whole new class of bug, mysterious warning, or configuration burden. For now I’m sticking to the more popular lograge, but I wish it had CI up that was testing with Rails 6.1, at least! Incidentally, trying to get Rails to log more compactly like both lograge and rails_semantic_logger do… is somewhat more complicated than you might expect, as demonstrated by the code in both projects that does it! Especially semantic_logger is hundreds of lines of somewhat baroque code split accross several files. A refactor of logging around Rails 5 (I think?) to use ActiveSupport::LogSubscriber made it possible to customize Rails logging like this (although I think both lograge and rails_semantic_logger still do some monkey-patching too!), but in the end didn’t make it all that easy or obvious or future-proof. This may discourage too many other alternatives for the initial primary use case of both lograge and rails_semantic_logger — turn a rails action into one log line, with a structured format. jrochkind General Leave a comment August 4, 2021August 5, 2021 Notes on Cloudfront in front of Rails Assets on Heroku, with CORS Heroku really recommends using a CDN in front of your Rails app static assets — which, unlike in non-heroku circumstances where a web server like nginx might be taking care of it, otherwise on heroku static assets will be served directly by your Rails app, consuming limited/expensive dyno resources. After evaluating a variety of options (including some heroku add-ons), I decided AWS Cloudfront made the most sense for us — simple enough, cheap, and we are already using other direct AWS services (including S3 and SES). While heroku has an article on using Cloudfront, which even covers Rails specifically, and even CORS issues specifically, I found it a bit too vague to get me all the way there. And while there are lots of blog posts you can find on this topic, I found many of them outdated (Rails has introduced new API; Cloudfront has also changed it’s configuration options!), or otherwise spotty/thin. So while I’m not an expert on this stuff, i’m going to tell you what I was able to discover, and what I did to set up Cloudfront as a CDN in front of Rails static assets running on heroku — although there’s really nothing at all specific to heroku here, if you have any other context where Rails is directly serving assets in production. 
First how I set up Rails, then Cloudfront, then some notes and concerns. Btw, you might not need to care about CORS here, but one reason you might is if you are serving any fonts (including font-awesome or other icon fonts!) from Rails static assets.

Rails setup

In config/environments/production.rb:

  # set heroku config var RAILS_ASSET_HOST to your cloudfront
  # hostname, will look like `xxxxxxxx.cloudfront.net`
  config.asset_host = ENV['RAILS_ASSET_HOST']

  config.public_file_server.headers = {
    # CORS:
    'Access-Control-Allow-Origin' => "*",
    # tell Cloudfront to cache a long time:
    'Cache-Control' => 'public, max-age=31536000'
  }

Cloudfront Setup

I changed some things from default. The only one that was absolutely necessary — if you want CORS to work — seemed to be changing Allowed HTTP Methods to include OPTIONS. Click on "Create Distribution". All defaults except:

Origin Domain Name: your heroku app host, like app-name.herokuapp.com

Origin protocol policy: Switch to "HTTPS Only". Seems like a good idea to ensure secure traffic between cloudfront and origin, no?

Allowed HTTP Methods: Switch to GET, HEAD, OPTIONS. In my experimentation, necessary for CORS from a browser to work — which AWS docs also suggest.

Cached HTTP Methods: Click "OPTIONS" too now that we're allowing it, I don't see any reason not to?

Compress objects automatically: yes. Sprockets is creating .gz versions of all your assets, but they're going to be completely ignored in a Cloudfront setup either way. ☹️ (Is there a way to tell Sprockets to stop doing it? WHO KNOWS not me, it's so hard to figure out how to reliably talk to Sprockets). But we can get what it was trying to do by having Cloudfront compress stuff for us, which seems like a good idea, Google PageSpeed will like it, etc. I noticed by experimentation that Cloudfront will compress CSS and JS (sometimes with brotli, sometimes gz, even with the same browser, don't know how it decides, don't care), but is smart enough not to bother trying to compress a .jpg or .png (which already has internal compression).

Comment field: If there's a way to edit it after you create the distribution, I haven't found it, so pick a good one!

Notes on CORS

AWS docs here and here suggest that for CORS support you also need to configure the Cloudfront distribution to forward additional headers — Origin, and possibly Access-Control-Request-Headers and Access-Control-Request-Method. Which you can do by setting up a custom "cache policy". Or maybe instead by setting the "Origin Request Policy". Or maybe instead by setting custom cache header settings differently using the "Use legacy cache settings" option. It got confusing — and none of these settings seemed to be necessary to me for CORS to be working fine, nor could I see any of these settings making any difference in CloudFront behavior or what headers were included in responses. Maybe they would matter more if I were trying to use a more specific Access-Control-Allow-Origin than just setting it to *? But about that….

If you set Access-Control-Allow-Origin to a single host, MDN docs say you have to also return a Vary: Origin header. Easy enough to add that to your Rails config.public_file_server.headers. But I couldn't get Cloudfront to forward/return this Vary header with its responses. Trying all manner of cache policy settings, referring to AWS's quite confusing documentation on the Vary header in Cloudfront and trying to do what it said — couldn't get it to happen. And what if you actually need more than one allowed origin?
Per spec Access-Control-Allow-Origin as again explained by MDN, you can’t just include more than one in the header, the header is only allowed one: ” If the server supports clients from multiple origins, it must return the origin for the specific client making the request.” And you can’t do that with Rails static/global config.public_file_server.headers, we’d need to use and setup rack-cors instead, or something else. So I just said, eh, * is probably just fine. I don’t think it actually involves any security issues for rails static assets to do this? I think it’s probably what everyone else is doing? The only setup I needed for this to work was setting Cloudfront to allow OPTIONS HTTP method, and setting Rails config.public_file_server.headers to include 'Cache-Control' => 'public, max-age=31536000'. Notes on Cache-Control max-age A lot of the existing guides don’t have you setting config.public_file_server.headers to include 'Cache-Control' => 'public, max-age=31536000'. But without this, will Cloudfront actually be caching at all? If with every single request to cloudfront, cloudfront makes a request to the Rails app for the asset and just proxies it — we’re not really getting much of the point of using Cloudfront in the first place, to avoid the traffic to our app! Well, it turns out yes, Cloudfront will cache anyway. Maybe because of the Cloudfront Default TTL setting? My Default TTL was left at the Cloudfront default, 86400 seconds (one day). So I’d think that maybe Cloudfront would be caching resources for a day when I’m not supplying any Cache-Control or Expires headers? In my observation, it was actually caching for less than this though. Maybe an hour? (Want to know if it’s caching or not? Look at headers returned by Cloudfront. One easy way to do this? curl -IXGET https://whatever.cloudfront.net/my/asset.jpg, you’ll see a header either x-cache: Miss from cloudfront or x-cache: Hit from cloudfront). Of course, Cloudfront doesn’t promise to cache for as long as it’s allowed to, it can evict things for it’s own reasons/policies before then, so maybe that’s all that’s going on. Still, Rails assets are fingerprinted, so they are cacheable forever, so why not tell Cloudfront that? Maybe more importantly, if Rails isn’t returning a Cache-Cobntrol header, then Cloudfront isn’t either to actual user-agents, which means they won’t know they can cache the response in their own caches, and they’ll keep requesting/checking it on every reload too, which is not great for your far too large CSS and JS application files! So, I think it’s probably a great idea to set the far-future Cache-Control header with config.public_file_server.headers as I’ve done above. We tell Cloudfront it can cache for the max-allowed-by-spec one year, and this also (I checked) gets Cloudfront to forward the header on to user-agents who will also know they can cache. Note on limiting Cloudfront Distribution to just static assets? The CloudFront distribution created above will actually proxy/cache our entire Rails app, you could access dynamic actions through it too. That’s not what we intend it for, our app won’t generate any URLs to it that way, but someone could. Is that a problem? I don’t know? 
Some blog posts try to suggest limiting it only being willing to proxy/cache static assets instead, but this is actually a pain to do for a couple reasons: Cloudfront has changed their configuration for “path patterns” since many blog posts were written (unless you are using “legacy cache settings” options), such that I’m confused about how to do it at all, if there’s a way to get a distribution to stop caching/proxying/serving anything but a given path pattern anymore? Modern Rails with webpacker has static assets at both /assets and /packs, so you’d need two path patterns, making it even more confusing. (Why Rails why? Why aren’t packs just at public/assets/packs so all static assets are still under /assets?) I just gave up on figuring this out and figured it isn’t really a problem that Cloudfront is willing to proxy/cache/serve things I am not intending for it? Is it? I hope? Note on Rails asset_path helper and asset_host You may have realized that Rails has both asset_path and asset_url helpers for linking to an asset. (And similar helpers with dashes instead of underscores in sass, and probably different implementations, via sass-rails) Normally asset_path returns a relative URL without a host, and asset_url returns a URL with a hostname in it. Since using an external asset_host requires we include the host with all URLs for assets to properly target CDN… you might think you have to stop using asset_path anywhere and just use asset_url… You would be wrong. It turns out if config.asset_host is set, asset_path starts including the host too. So everything is fine using asset_path. Not sure if at that point it’s a synonym for asset_url? I think not entirely, because I think in fact once I set config.asset_host, some of my uses of asset_url actually started erroring and failing tests? And I had to actually only use asset_path? In ways I don’t really understand what’s going on and can’t explain it? Ah, Rails. jrochkind General Leave a comment June 23, 2021June 23, 2021 ActiveSupport::Cache via ActiveRecord (note to self) There are a variety of things written to use flexible back-end key/value datastores via the ActiveSupport::Cache API. For instance, say, activejob-status. I have sometimes in the past wanted to be able to use such things storing the data in an rdbms, say vai ActiveRecord. Make a table for it. Sure, this won’t be nearly as fast or “scalable” as, say, redis, but for so many applications it’s just fine. And I often avoid using a feature at all if it is going to require to me to add another service (like another redis instance). So I’ve considered writing an ActiveSupport::Cache adapter for ActiveRecord, but never really gotten around to it, so I keep avoiding using things I’d be trying out if I had it…. Well, today I discovered the ruby gem that’s a key/value store swiss army knife, moneta. Look, it has an ActiveSupport::Cache adapter so you can use any moneta-supported store as an ActiveSupport::Cache API. AND then if you want to use an rdbms as your moneta-supported store, you can do it through ActiveRecord or Sequel. Great, I don’t have to write the adapter after all, it’s already been done! Assuming it works out okay, which I haven’t actually checked in practice yet. Writing this in part as a note-to-self so next time I have an itch that can be scratched this way, I remember moneta is there — to at least explore further. Not sure where to find the docs, but here’s the source for ActiveRecord moneta adapter. 
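If it does work out, I'd expect the wiring to look roughly like the following. This is only a sketch pieced together from my reading of the moneta README: the require path, the :table option name, the store: keyword to MonetaStore, and the "activejob_status_store" table name are my guesses and example names rather than verified usage.

  require "moneta"
  require "active_support"
  require "active_support/cache/moneta_store" # adapter class that ships with the moneta gem, I believe

  # A Moneta key/value store persisted in an ActiveRecord table
  # (assumes ActiveRecord is loaded and connected, e.g. inside a Rails app).
  backend = Moneta.new(:ActiveRecord, table: "activejob_status_store")

  # Wrap it in moneta's ActiveSupport::Cache adapter, so anything that expects an
  # ActiveSupport::Cache::Store (like activejob-status) can be pointed at it.
  cache = ActiveSupport::Cache::MonetaStore.new(store: backend)

  cache.write("some_key", "some value")
  cache.read("some_key") # => "some value"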
It looks like I can create different caches that use different tables, which is the first thing I thought to ensure. The second thing I thought to look for — can it handle expiration, and purging expired keys? Unclear, I can’t find it. Maybe I could PR it if needed. And hey, if for some reason you want an ActiveSupport::Cache backed by PStore or BerkelyDB (don’t do it!), or Cassandara (you got me, no idea?), moneta has you too. jrochkind General Leave a comment June 21, 2021June 21, 2021 Heroku release phase, rails db:migrate, and command failure If you use capistrano to deploy a Rails app, it will typically run a rails db:migrate with every deploy, to apply any database schema changes. If you are deploying to heroku you might want to do the same thing. The heroku “release phase” feature makes this possible. (Introduced in 2017, the release phase feature is one of heroku’s more recent major features, as heroku dev has seemed to really stabilize and/or stagnate). The release phase docs mention “running database schema migrations” as a use case, and there are a few ((1), (2), (3)) blog posts on the web suggesting doing exactly that with Rails. Basically as simple as adding release: bundle exec rake db:migrate to your Procfile. While some of the blog posts do remind you that “If the Release Phase fails the app will not be deployed”, I have found the implications of this to be more confusing in practice than one would originally assume. Particularly because on heroku changing a config var triggers a release; and it can be confusing to notice when such a release has failed. It pays to consider the details a bit so you understand what’s going on, and possibly consider somewhat more complicated release logic than simply calling out to rake db:migrate. 1) What if a config var change makes your Rails app unable to boot? I don’t know how unusual this is, but I actually had a real-world bug like this when in the process of setting up our heroku app. Without confusing things with the details, we can simulate such a bug simply by putting this in, say, config/application.rb: if ENV['FAIL_TO_BOOT'] raise "I am refusing to boot" end Obviously my real bug was weirder, but the result was the same — with some settings of one or more heroku configuration variables, the app would raise an exception during boot. And we hadn’t noticed this in testing, before deploying to heroku. Now, on heroku, using CLI or web dashboard, set the config var FAIL_TO_BOOT to “true”. Without a release phase, what happens? The release is successful! If you look at the release in the dashboard (“Activity” tab) or heroku releases, it shows up as successful. Which means heroku brings up new dynos and shuts down the previous ones, that’s what a release is. The app crashes when heroku tries to start it in the new dynos. The dynos will be in “crashed” state when looked at in heroku ps or dashboard. If a user tries to access the web app, they will get the generic heroku-level “could not start app” error screen (unless you’ve customized your heroku error screens, as usual). You can look in your heroku logs to see the error and stack trace that prevented app boot. Downside: your app is down. Upside: It is pretty obvious that your app is down, and (relatively) why. With a db:migrate release phase, what happens? The Rails db:migrate rake task has a dependency on the rails :environment task, meaning it boots the Rails app before executing. You just changed your config variable FAIL_TO_BOOT: true such that the Rails app can’t boot. 
Changing the config variable triggered a release. As part of the release, the db:migrate release phase is run… which fails. The release is not succesful, it failed. You don’t get any immediate feedback to that effect in response to your heroku config:add command or on the dashboard GUI in the “settings” tab. You may go about your business assuming it succeeded. If you look at the release in heroku releases or dashboard “activity” tab you will see it failed. You do get an email that it failed. Maybe you notice it right away, or maybe you notice it later, and have to figure out “wait, which release failed? And what were the effects of that? Should I be worried?” The effects are: The config variable appears changed in heroku’s dashboard or in response to heroku config:get etc. The old dynos without the config variable change are still running. They don’t have the change. If you open a one-off dyno, it will be using the old release, and have the old (eg) ENV[‘FAIL_TO_BOOT’] value. ANY subsequent attempts at a releases will keep fail, so long as the app is in a state (based on teh current config variables) that it can’t boot. Again, this really happened to me! It is a fairly confusing situation. Upside: Your app is actually still up, even though you broke it, the old release that is running is still running, that’s good? Downside: It’s really confusing what happened. You might not notice at first. Things remain in a messed up inconsistent and confusing state until you notice, figure out what’s going on, what release caused it, and how to fix it. It’s a bit terrifying that any config variable change could do this. But I guess most people don’t run into it like I did, since I haven’t seen it mentioned? 2) A heroku pg:promote is a config variable change, that will create a release in which db:migrate release phase fails. heroku pg:promote is a command that will change which of multiple attached heroku postgreses are attached as the “primary” database, pointed to by the DATABASE_URL config variable. For a typical app with only one database, you still might use pg:promote for a database upgrade process; for setting up or changing a postgres high-availability leader/follower; or, for what I was experimenting with it for, using heroku’s postgres-log-based rollback feature. I had assumed that pg:promote was a zero-downtime operation. But, in debugging it’s interaction with my release phase, I noticed that pg:promote actually creates TWO heroku releases. First it creates a release labelled Detach DATABASE , in which there is no DATABASE_URL configuration variable at all. Then it creates another release labelled Attach DATABASE in which the DATABASE_URL configuration variable is defined to it’s new value. Why does it do this instead of one release that just changes the DATABASE_URL? I don’t know. My app (like most Rails and probably other apps) can’t actually function without DATABASE_URL set, so if that first release ever actually runs, it will just error out. Does this mean there’s an instant with a “bad” release deployed, that pg:promote isn’t actually zero-downtime? I am not sure, it doens’t seem right (I did file a heroku support ticket asking….). But under normal circumstances, either it’s not a problem, or most people(?) don’t notice. But what if you have a db:migrate release phase? When it tries to do release (1) above, that release will fail. 
Because it tries to run db:migrate, and it can't do that without a DATABASE_URL set, so it raises, the release phase exits in an error condition, and the release fails. Actually what happens is that without DATABASE_URL set, the Rails app will assume a postgres URL in a "default" location, try to connect to it, and fail, with an error message (hello googlers?) like:

  ActiveRecord::ConnectionNotEstablished: could not connect to server: No such file or directory
  Is the server running locally and accepting
  connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

Now, release (2) is coming down the pike seconds later, so this is actually fine, and will be zero outage. We had a release that failed (so never was deployed), and seconds later the next correct release succeeds. Great! The only problem is that we got an email notifying us that release 1 failed, and it's also visible as failing in the heroku release list, etc.

A "background" release (not in response to a git push or other code push to heroku) failing is already a confusing situation — and "false positives" that actually mean "nothing unexpected or problematic happened, just ignore this and carry on" are really not something I want. (I call this the "error notification crying wolf". I try to make sure my error notifications never do it, because it takes your time away from flow unnecessarily, and/or makes it much harder to stay vigilant to real errors).

Now, there is a fairly simple solution to this particular problem. Here's what I did. I changed my heroku release phase from rake db:migrate to a custom rake task, say release: bundle exec rake my_custom_heroku_release_phase, defined like so:

  task :my_custom_heroku_release_phase do
    if ENV['DATABASE_URL']
      Rake::Task["db:migrate"].invoke
    else
      $stderr.puts "\n!!! WARNING, no ENV['DATABASE_URL'], not running rake db:migrate as part of heroku release !!!\n\n"
    end
  end

Now that release (1) above at least won't fail, it has the same behavior as a "traditional" heroku app without a release phase.

Swallow-and-report all errors?

When a release fails because a release phase has failed as a result of a git push to heroku, that's quite clear and fine! But the confusion of the "background" release failure, triggered by a config var change, is high enough that part of me wants to just rescue StandardError in there, and prevent a failed release phase from ever exiting with a failure code, so heroku will never use a db:migrate release phase to abort a release. Just return the behavior to the pre-release-phase heroku behavior — you can put your app in a situation where it will be crashed and not work, but maybe that's better than a mysterious, inconsistent heroku app state that happens in the background and that you only find out about through asynchronous email notifications from heroku that are difficult to understand/diagnose. It's all much more obvious.

On the other hand, if a db:migrate has failed not because of some unrelated boot process problem that is going to keep the app from launching anyway even if it were released, but simply because the db:migrate itself actually failed… you kind of want the release to fail? That's good? Keep the old release running, not a new release with code that expects a db migration that didn't happen? So I'm not really sure. If you did want to rescue-swallow-and-notify, the custom rake task for your heroku release logic — instead of just telling heroku to run a standard thing like db:migrate on release — is certainly convenient.
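For what it's worth, if you did want that rescue-swallow-and-notify behavior, a minimal sketch of the variant might look like the following; the ErrorReporter call is a hypothetical stand-in for whatever error tracking service your app actually uses, not a real constant.

  task :my_custom_heroku_release_phase do
    if ENV['DATABASE_URL']
      begin
        Rake::Task["db:migrate"].invoke
      rescue StandardError => e
        # Report the failure, but exit 0 so heroku does not abort the release.
        $stderr.puts "db:migrate failed during release phase: #{e.class}: #{e.message}"
        # ErrorReporter.notify(e) # hypothetical: substitute your error tracking service
      end
    else
      $stderr.puts "\n!!! WARNING, no ENV['DATABASE_URL'], not running rake db:migrate as part of heroku release !!!\n\n"
    end
  end

Whether you actually want a failed migration to let the release go out anyway is exactly the open question above, so I'm not recommending it, just noting that it's easy to do once the release phase is your own rake task.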
Also, do you really always want to db:migrate anyway? What about db:schema:load? Another alternative… if you are deploying an app with an empty database, standard Rails convention is to run rails db:schema:load instead of db:migrate. The db:migrate will probably work anyway, but will be slower, and somewhat more error-prone. I guess this could come up on heroku with an initial deploy or (for some reason) a database that’s been nuked and restarted, or perhaps a Heroku “Review app”? (I don’t use those yet) stevenharman has a solution that actually checks the database, and runs the appropriate rails task depending on state, here in this gist. I’d probably do it as a rake task instead of a bash file if I were going to do that. I’m not doing it at all yet. Note that stevenharman’s solution will actually catch a non-existing or non-connectable database and not try to run migrations… but it will print an error message and exit 1 in that case, failing the release — meaning that you will get a failed release in the pg:promote case mentioned above! jrochkind General Leave a comment June 16, 2021June 17, 2021 Code that Lasts: Sustainable And Usable Open Source Code A presentation I gave at online conference Code4Lib 2021, on Monday March 21. I have realized that the open source projects I am most proud of are a few that have existed for years now, increasing in popularity, with very little maintenance required. Including traject and bento_search. While community aspects matter for open source sustainability, the task gets so much easier when the code requires less effort to keep alive, for maintainers and utilizers. Using these projects as examples, can we as developers identify what makes code “inexpensive” to use and maintain over the long haul with little “churn”, and how to do that? Slides on Google Docs Rough transcript (really the script I wrote for myself) Hi, I’m Jonathan Rochkind, and this is “Code that Lasts: Sustainable and Usable Open Source Code” So, who am I? I have been developing open source library software since 2006, mainly in ruby and Rails.  Over that time, I have participated in a variety open source projects meant to be used by multiple institutions, and I’ve often seen us having challenges with long-term maintenance sustainability and usability of our software. This includes in projects I have been instrumental in creating myself, we’ve all been there!  We’re used to thinking of this problem in terms of needing more maintainers. But let’s first think more about what the situation looks like, before we assume what causes it. In addition to features  or changes people want not getting done, it also can look like, for instance: Being stuck using out-of-date dependencies like old, even end-of-lifed, versions of Rails or ruby. A reduction in software “polish” over time.  What do I mean by “polish”? Engineer Richard Schneeman writes: [quote] “When we say something is “polished” it means that it is free from sharp edges, even the small ones. I view polished software to be ones that are mostly free from frustration. They do what you expect them to and are consistent.”  I have noticed that software can start out very well polished, but over time lose that polish.  This usually goes along with decreasing “cohesion” in software over time, a feeling like that different parts of the software start to no longer tell the developer a consistent story together.  
While there can be an element of truth in needing more maintainers in some cases – zero maintainers is obviously too few — there are also ways that increasing the number of committers or maintainers can result in diminishing returns and additional challenges. One of the theses of Fred Brooks famous 1975 book “The Mythical Man-Month” is sometimes called ”Brooks Law”:  “under certain conditions, an incremental person when added to a project makes the project take more, not less time.” Why? One of the main reasons Brooks discusses is the the additional time taken for communication and coordination between more people – with every person you add, the number of connections between people goes up combinatorily.  That may explain the phenomenon we sometimes see with so-called “Design  by committee” where “too many cooks in the kitchen” can produce inconsistency or excessive complexity. Cohesion and polish require a unified design vision— that’s  not incompatible with increasing numbers of maintainers, but it does make it more challenging because it takes more time to get everyone on the same page, and iterate while maintaining a unifying vision.  (There’s also more to be said here about the difference between just a bunch of committers committing PR’s, and the maintainers role of maintaining historical context and design vision for how all the parts fit together.) Instead of assuming adding more committers or maintainers is the solution, can there instead be ways to reduce the amount of maintenance required? I started thinking about this when I noticed a couple projects of mine which had become more widely successful than I had any right  to expect, considering how little maintainance was being put into them.  Bento_search is a toolkit for searching different external search engines in a consistent way. It’s especially but not exclusively for displaying multiple search results in “bento box” style, which is what Tito Sierra from NCSU first called these little side by side search results.  I wrote bento_search  for use at a former job in 2012.  55% of all commits to the project were made in 2012.  95% of all commits in 2016 or earlier. (I gave it a bit of attention for a contracting project in 2016). But bento_search has never gotten a lot of maintenance, I don’t use it anymore myself. It’s not in wide use, but I found  it kind of amazing, when I saw people giving me credit in conference presentations for the gem (thanks!), when I didn’t even know they were using it and I hadn’t been paying it any attention at all! It’s still used by a handful of institutions for whom it just works with little attention from maintainers. (The screenshot from Cornell University Libraries) Traject is a Marc-to-Solr indexing tool written in ruby  (or, more generally, can be a general purpose extract-transform-load tool), that I wrote with Bill Dueber from the University of Michigan in 2013.  We hoped it would catch on in the Blacklight community, but for the first couple years, it’s uptake was slow.  However, since then, it has come to be pretty popular in Blacklight and Samvera communities, and a few other library technologist uses.  You can see the spikes of commit activity in the graph for a 2.0 release in 2015 and a 3.0 release in 2018 – but for the most part at other times, nobody has really been spending much time on maintaining traject.   Every once in a while a community member submits a minor Pull Request, and it’s usually me who reviews it. Me and Bill remain the only maintainers.  
And yet traject just keeps plugging along, picking up adoption and working well for adopters.   So, this made me start thinking, based on what I’ve seen in my career, what are some of the things that might make open source projects both low-maintenance and successful in their adoption and ease-of-use for developers? One thing both of these projects did was take backwards compatibility very seriously.  The first step of step there is following “semantic versioning” a set of rules whose main point is that releases can’t include backwards incompatible changes unless they are a new major version, like going from 1.x to 2.0.  This is important, but it’s not alone enough to minimize backwards incompatible changes that add maintenance burden to the ecosystem. If the real goal is preventing the pain of backwards incompatibility, we also need to limit the number of major version releases, and limit the number and scope of backwards breaking changes in each major release! The Bento_search gem has only had one major release, it’s never had a 2.0 release, and it’s still backwards compatible to it’s initial release.  Traject is on a 3.X release after 8 years, but the major releases of traject have had extremely few backwards breaking changes, most people could upgrade through major versions changing very little or most often nothing in their projects.  So OK, sure, everyone wants to minimize backwards incompatibility, but that’s easy to say, how do you DO it? Well, it helps to have less code overall, that changes less often overall all  – ok, again, great, but how do you do THAT?  Parsimony is a word in general English that means “The quality of economy or frugality in the use of resources.” In terms of software architecture, it means having as few as possible moving parts inside your code: fewer classes, types, components, entities, whatever: Or most fundamentally, I like to think of it in terms of minimizing the concepts in the mental model a programmer needs to grasp how the code works and what parts do what. The goal of architecture design is, what is the smallest possible architecture we can create to make [quote] “simple things simple and complex things possible”, as computer scientist Alan Kay described the goal of software design.  We can see this in bento_search has very few internal architectural concepts.  The main thing bento_search does is provide a standard API for querying a search engine and representing results of a search. These are consistent across different searche engines,, with common metadata vocabulary for what results look like. This makes search engines  interchangeable to calling code.  And then it includes half a dozen or so search engine implementations for services I needed or wanted to evaluate when I wrote it.   This search engine API at the ruby level can be used all by itself even without the next part, the actual “bento style” which is a built-in support for displaying search engine results in a boxes on a page of your choice in a Rails app, way to,  writing very little boilerplate code.   Traject has an architecture which basically has just three parts at the top. There is a reader which sends objects into the pipeline.  There are some indexing rules which are transformation steps from source object to build an output Hash object.  And then a writer which which translates the Hash object to write to some store, such as Solr. The reader, transformation steps, and writer are all independent and uncaring about each other, and can be mixed and matched.   
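To give a flavor of what that looks like in use, here is a tiny traject configuration sketch written from memory; the built-in reader/writer class names and the extract_marc macro are how I remember them, so double-check against traject's docs before copying.

  # minimal traject config file, run as: traject -c this_config.rb some_records.marc
  settings do
    provide "reader_class_name", "Traject::MarcReader"  # reader: sends source records into the pipeline
    provide "writer_class_name", "Traject::JsonWriter"  # writer: writes each output Hash somewhere
  end

  # indexing rules: transformation steps from source record to output Hash
  to_field "id",    extract_marc("001", first: true)
  to_field "title", extract_marc("245ab")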
That’s MOST of traject right there. It seems simple and obvious once you have it, but it can take a lot of work to end up with what’s simple and obvious in retrospect!  When designing code I’m often reminded of the apocryphal quote: “I would have written a shorter letter, but I did not have the time” And, to be fair, there’s a lot of complexity within that “indexing rules” step in traject, but it’s design was approached the same way. We have use cases about supporting configuration settings in a  file or on command line; or about allowing re-usable custom transformation logic – what’s the simplest possible architecture we can come up with to support those cases. OK, again, that sounds nice, but how do you do it? I don’t have a paint by numbers, but I can say that for both these projects I took some time – a few weeks even – at the beginning to work out these architectures, lots of diagraming, some prototyping I was prepared to throw out,  and in some cases “Documentation-driven design” where I wrote some docs for code I hadn’t written yet. For traject it was invaluable to have Bill Dueber at University of Michigan also interested in spending some design time up front, bouncing ideas back and forth with – to actually intentionally go through an architectural design phase before the implementation.  Figuring out a good parsimonious architecture takes domain knowledge: What things your “industry” – other potential institutions — are going to want to do in this area, and specifically what developers are going to want to do with your tool.  We’re maybe used to thinking of “use cases” in terms of end-users, but it can be useful at the architectural design stage, to formalize this in terms of developer use cases. What is a developer going to want to do, how can I come up with a small number of software pieces she can use to assemble together to do those things. When we said “make simple things simple and complex things possible”, we can say domain analysis and use cases is identifying what things we’re going to put in either or neither of those categories.  The “simple thing” for bento_search , for instance is just “do a simple keyword search in a search engine, and display results, without having the calling code need to know anything about the specifics of that search engine.” Another way to get a head-start on solid domain knowledge is to start with another tool you have experience with, that you want to create a replacement for. Before Traject, I and other users used a tool written in Java called SolrMarc —  I knew how we had used it, and where we had had roadblocks or things that we found harder or more complicated than we’d like, so I knew my goals were to make those things simpler. We’re used to hearing arguments about avoiding rewrites, but like most things in software engineering, there can be pitfalls on either either extreme. I was amused to notice, Fred Brooks in the previously mentioned Mythical Man Month makes some arguments in both directions.  Brooks famously warns about a “second-system effect”, the [quote] “tendency of small, elegant, and successful systems to be succeeded by over-engineered, bloated systems, due to inflated expectations and overconfidence” – one reason to be cautious of a rewrite.  But Brooks in the very same book ALSO writes [quote] “In most projects, the first system built is barely usable….Hence plan to throw one away; you will, anyhow.” It’s up to us figure out when we’re in which case. 
I personally think an application is more likely to be bitten by the "second-system effect" danger of a rewrite, while a shared re-usable library is more likely to benefit from a rewrite (in part because a reusable library is harder to change in place without disruption!). We could sum up a lot of different principles as variations of "keep it small." Both traject and bento_search are tools that developers can use to build something. Bento_search just puts search results in a box on a page; the developer is responsible for the page and the overall app. Yes, this means that you have to be a ruby developer to use it. Does this limit its audience? While we might aspire to make tools that even not-really-developers can just use out of the box, my experience has been that our open source attempts at shrinkwrapped "solutions" often end up still needing development expertise to successfully deploy. Keeping our tools simple and small, and not trying to supply a complete app, can actually leave more time for these developers to focus on meeting local needs, instead of fighting with a complicated framework that doesn't do quite what they need. It also means we can limit interactions with any external dependencies. Traject was developed for use with a Blacklight project, but traject code does not refer to Blacklight or even Rails at all, which means new releases of Blacklight or Rails can't possibly break traject. Bento_search, by doing one thing and not caring about the details of its host application, has kept working from Rails 3.2 all the way up to current Rails 6.1, with pretty much no changes needed except to the test suite setup. Sometimes when people try to have lots of small tools working together, it can turn into a nightmare where you get a pile of cascading software breakages every time one piece changes. Keeping assumptions and couplings down is what lets us avoid this maintenance nightmare. And another way of keeping it small is: don't be afraid to say "no" to features when you can't figure out how to fit them in without serious harm to the parsimony of your architecture. Your domain knowledge is what lets you take an educated guess as to which features are core to your audience and need to be accommodated, and which are edge cases and can be fulfilled by extension points, or sometimes not at all. By extension points we mean we prefer opportunities for developer-users to write their own code which works with your tools, rather than trying to build less commonly needed features in as configurable features. As an example, traject does include some built-in logic, but one of its extension-point use cases is making sure it's simple to add whatever transformation logic a developer-user wants, and have it look just as "built-in" as what came with traject. And since traject makes it easy to write your own reader or writer, its built-in readers and writers don't need to include every possible feature – we plan for developers writing their own if they need something else. Looking at bento_search, it makes it easy to write your own search engine adapter – which will be usable interchangeably with the built-in ones. Also, bento_search provides a standard way to add custom search arguments specific to a particular adapter – these won't be directly interchangeable with other adapters, but they are provided for in the architecture, and won't break in future bento_search releases – it's another form of extension point.
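To make the reader/writer extension point concrete, here is a sketch of what a custom traject writer can look like. The interface shape (construct with the settings hash, receive each transformed record via put(context), optionally close) is from my memory of traject's documentation, and the "ndjson_writer.path" setting name is something invented for this example – verify against the current traject docs before relying on it.

```ruby
# A hypothetical newline-delimited-JSON writer for traject. The interface shape
# is an assumption from memory of traject's docs; the setting name is invented.
require "json"

class NdjsonWriter
  def initialize(settings)
    @file = File.open(settings["ndjson_writer.path"] || "out.ndjson", "w")
  end

  # Called once per record; context.output_hash is the hash built by the to_field rules.
  def put(context)
    @file.puts(JSON.generate(context.output_hash))
  end

  def close
    @file.close
  end
end

# In a traject config file you would then select it with something like:
#   provide "writer_class_name", "NdjsonWriter"
```

The point is that a developer-user's writer plugs into exactly the same join point the built-in SolrJsonWriter uses, with no extra layer of abstraction.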
These extension points are the second half of “simple things simple, complex things possible.” – the complex things possible. Planning for them is part of understanding your developer use-cases, and designing an architecture that can easily handle them. Ideally, it takes no extra layers of abstraction to handle them, you are using the exact  architectural join points the out-of-the-box code is using, just supplying custom components.  So here’s an example of how these things worked out in practice with traject, pretty well I think. Stanford ended up writing a package of extensions to traject called TrajectPlus, to take care of some features they needed that traject didn’t provide. Commit history suggests it was written in 2017, which was Traject 2.0 days.   I can’t recall, but I’d guess they approached me with change requests to traject at that time and I put them off because I couldn’t figure out how to fit them in parsimoniously, or didn’t have time to figure it out.  But the fact that they were *able* to extend traject in this way I consider a validation of traject’s architecture, that they could make it do what they needed, without much coordination with me, and use it in many projects (I think beyond just Stanford).  Much of the 3.0 release of traject was “back-port”ing some features that TrajectPlus had implemented, including out-of-the-box support for XML sources. But I didn’t always do them with the same implementation or API as TrajectPlus – this is another example of being able to use a second go at it to figure out how to do something even more parsimoniously, sometimes figuring out small changes to traject’s architecture to support flexibility in the right dimensions.  When Traject 3.0 came out – the TrajectPlus users didn’t necessarily want to retrofit all their code to the new traject way of doing it. But TrajectPlus could still be used with traject 3.0 with few or possibly no changes, doing things the old way, they weren’t forced to upgrade to the new way. This is a huge win for traject’s backwards compat – everyone was able to do what they needed to do, even taking separate paths, with relatively minimized maintenance work.  As I think about these things philosophically, one of my takeaways is that software engineering is still a craft – and software design is serious thing to be studied and engaged in. Especially for shared libraries rather than local apps, it’s not always to be dismissed as so-called “bike-shedding”.  It’s worth it to take time to think about design, self-reflectively and with your peers, instead of just rushing to put our fires or deliver features, it will reduce maintenance costs and increase values over the long-term.  And I want to just briefly plug “kithe”, a project of mine which tries to be guided by these design goals to create a small focused toolkit for building Digital Collections applications in Rails.  I could easily talk about all of this this another twenty minutes, but that’s our time! I’m always happy to talk more, find me on slack or IRC or email.  This last slide has some sources mentioned in the talk. Thanks for your time!  jrochkind General Leave a comment March 23, 2021March 23, 2021 Product management In my career working in the academic sector, I have realized that one thing that is often missing from in-house software development is “product management.” But what does that mean exactly? You don’t know it’s missing if you don’t even realize it’s a thing and people can use different terms to mean different roles/responsibilities. 
Basically: deciding what the software should do. This is not about colors on screen or margins (which our stakeholders often enjoy micro-managing) – I'd consider those still the how of doing it, rather than the what to do. The what is often at a much higher level, about what features or components to develop at all. When done right, it is going to be based on knowledge of the end-user's needs and preferences (user research), but also knowledge of internal stakeholders' desires and preferences (overall organizational strategy, but also just practically what is going to make the right people happy to keep us resourced), as well as knowledge of local capacity: what pieces do we need to put in place to get these things developed. When done seriously, it will necessarily involve prioritization – there are many things we could possibly do, some subset of which we very well may do eventually, but which ones should we do now? My experience tells me it is a very big mistake to try to have a developer doing this kind of product management. Not because a developer can't have the right skillset, but because having the same person leading development and product management is a mistake. The developer is too close to the development lens, and there's a clarity that comes from keeping these roles separate. My experience also tells me that it's a mistake to have a committee doing these things, much as that is popular in the academic sector. Because, well, just of course it is. But okay, this is all still pretty abstract. Things might become more clear if we get more specific about the actual tasks and work of this kind of product management role. I found Damilola Ajiboye's blog post on "Product Manager vs Product Marketing Manager vs Product Owner" very clear and helpful here. It is written to distinguish between three different product-management-related roles, but Ajiboye also acknowledges that in a smaller organization "a product manager is often tasked with the duty of these 3 roles." Regardless of whether the responsibilities are done by one, two, or three people, Ajiboye's post serves as a concise listing of the work to be done in managing a product – deciding the what of the product, in an ongoing iterative and collaborative manner, so that developers and designers can get to the how and to implementation. I recommend reading the whole article, and I'll excerpt much of it here, slightly rearranged. The Product Manager These individuals are often referred to as mini CEOs of a product. They conduct customer surveys to figure out the customer's pain and build solutions to address it. The PM also prioritizes what features are to be built next and prepares and manages a cohesive and digital product roadmap and strategy. The Product Manager will interface with the users through user interviews/feedback surveys or other means to hear directly from the users. They will come up with hypotheses alongside the team and validate them through prototyping and user testing. They will then create a strategy on the feature and align the team and stakeholders around it. The PM who is also the chief custodian of the entire product roadmap will, therefore, be tasked with the duty of prioritization. Before going ahead to carry out research and strategy, they will have to convince the stakeholders if it is a good choice to build the feature in context at that particular time or wait a bit longer based on the content of the roadmap.
The Product Marketing Manager The PMM communicates vital product value — the “why”, “what” and “when” of a product to intending buyers. He manages the go-to-market strategy/roadmap and also oversees the pricing model of the product. The primary goal of a PMM is to create demand for the products through effective messaging and marketing programs so that the product has a shorter sales cycle and higher revenue. The product marketing manager is tasked with market feasibility and discovering if the features being built align with the company’s sales and revenue plan for the period. They also make research on how sought-after the feature is being anticipated and how it will impact the budget. They communicate the values of the feature; the why, what, and when to potential buyers — In this case users in countries with poor internet connection. [While expressed in terms of a for-profit enterprise selling something, I think it’s not hard to translate this to a non-profit or academic environment. You still have an audience whose uptake you need to be succesful, whether internal or external. — jrochkind ] The Product Owner A product owner (PO) maximizes the value of a product through the creation and management of the product backlog, creation of user stories for the development team. The product owner is the customer’s representative to the development team. He addresses customer’s pain points by managing and prioritizing a visible product backlog. The PO is the first point of call when the development team needs clarity about interpreting a product feature to be implemented. The product owner will first have to prioritize the backlog to see if there are no important tasks to be executed and if this new feature is worth leaving whatever is being built currently. They will also consider the development effort required to build the feature i.e the time, tools, and skill set that will be required. They will be the one to tell if the expertise of the current developers is enough or if more engineers or designers are needed to be able to deliver at the scheduled time. The product owner is also armed with the task of interpreting the product/feature requirements for the development team. They serve as the interface between the stakeholders and the development team. When you have someone(s) doing these roles well, it ensures that the development team is actually spending time on things that meet user and business needs. I have found that it makes things so much less stressful and more rewarding for everyone involved. When you have nobody doing these roles, or someone doing it in a cursory or un-intentional way not recognized as part of their core job responsibilities, or have a lead developer trying to do it on top of develvopment, I find it leads to feelings of: spinning wheels, everything-is-an-emergency, lack of appreciation, miscommunication and lack of shared understanding between stakeholders and developers, general burnout and dissatisfaction — and at the root, a product that is not meeting user or business needs well, leading to these inter-personal and personal problems. jrochkind General Leave a comment February 3, 2021 Rails auto-scaling on Heroku We are investigating moving our medium-small-ish Rails app to heroku. We looked at both the Rails Autoscale add-on available on heroku marketplace, and the hirefire.io service which is not listed on heroku marketplace and I almost didn’t realize it existed. 
I guess hirefire.io doesn't have any kind of a partnership with heroku, but it still uses the heroku API to provide an autoscale service. hirefire.io ended up looking more fully-featured and lower priced than Rails Autoscale, so the main purpose of this post is just to increase visibility of hirefire.io, and therefore competition in the field, which benefits us consumers. Background: Interest in auto-scaling Rails background jobs At first I didn't realize there was such a thing as "auto-scaling" on heroku, but once I did, I realized it could indeed save us lots of money. I am more interested in scaling Rails background workers than I am web workers, though – our background workers are busiest when we are doing "ingests" into our digital collections/digital asset management system, so the work is highly variable. Auto-scaling up when there is ingest work piling up can give us really nice ingest throughput while keeping costs low. On the other hand, our web traffic is fairly low and probably isn't going to go up by an order of magnitude (non-profit cultural institution here). And after discovering that a "standard" dyno is just too slow, we will likely be running a performance-m or performance-l anyway – which can likely handle all anticipated traffic on its own. If we have an auto-scaling solution, we might configure it for web dynos, but we are especially interested in good features for background scaling. There is a heroku built-in autoscale feature, but it only works for performance dynos, and won't do anything for Rails background job dynos, so that was right out. That left the Rails Autoscale add-on on the heroku marketplace, which could work for Rails bg jobs; and then we found hirefire.io. Pricing: Pretty different hirefire As of now (January 2021), hirefire.io has pretty simple and affordable pricing: $15/month/heroku application, auto-scaling as many dynos and process types as you like. hirefire.io by default can only check your app's metrics once per minute to decide if a scaling event should occur. If you want more frequent checks than that (up to once every 15 seconds), you have to pay an additional $10/month, for $25/month/heroku application. Even though it is not a heroku add-on, hirefire does advertise that they bill pro-rated to the second, just like heroku and heroku add-ons. Rails autoscale Rails autoscale has a more tiered approach to pricing, based on the number and type of dynos you are scaling. Starting at $9/month for 1-3 standard dynos, the next tier up is $39 for up to 9 standard dynos, all the way up to $279 (!) for 1 to 99 dynos. If you have performance dynos involved, it goes from $39/month for 1-3 performance dynos up to $599/month for up to 99 performance dynos. For our anticipated uses… if we only scale bg dynos, I might want to scale from (low) 1 or 2 to (high) 5 or 6 standard dynos, so we'd be at $39/month. Our web dynos are likely to be performance and I wouldn't want/need to scale more than probably 2, but that puts us into the performance dyno tier, so we're looking at $99/month. This is of course significantly more expensive than hirefire.io's flat rate. Metric Resolution Since Hirefire has an additional charge for finer than 1-minute resolution on checks for autoscaling, we'll discuss resolution here in this section too. Rails Autoscale has the same resolution for all tiers, and I think it's generally 10 seconds – so approximately the same as hirefire if you pay the extra $10 for increased resolution.
Configuration Let's look at configuration screens to get a sense of feature-sets. Rails Autoscale web dynos To configure web dynos, here's what you get, with default values: The metric Rails Autoscale uses for scaling web dynos is time in the heroku routing queue, which seems right to me – when things are spending longer in the heroku routing queue before getting to a dyno, it means scale up. worker dynos For scaling worker dynos, Rails Autoscale can scale the dyno type named "worker" – it understands the ruby queuing libraries Sidekiq, Resque, Delayed Job, and Que. I'm not certain if there are options for writing custom adapter code for other backends. Here's what the configuration options are – sorry, these aren't the defaults, I've already customized them and lost track of what the defaults are. You can see that worker dynos are scaled based on the metric "number of jobs queued", and you can tell it to only pay attention to certain queues if you want. Hirefire Hirefire has far more options for customization than Rails Autoscale, which can make it a bit overwhelming, but also potentially more powerful. web dynos You can actually configure as many Heroku process types as you have for autoscale, not just ones named "web" and "worker". And for each, you have your choice of several metrics to be used as scaling triggers. For web, I think Queue Time (percentile, average) matches what Rails Autoscale does, configured to percentile, 95, and is probably the best to use unless you have a reason to use another. ("Rails Autoscale tracks the 95th percentile queue time, which for most applications will hover well below the default threshold of 100ms.") Here's what configuration Hirefire makes available if you are scaling on "queue time" like Rails Autoscale; configuration may vary for other metrics. I think if you fill in the right numbers, you can configure it to work equivalently to Rails Autoscale. worker dynos If you have more than one heroku process type for workers – say, working on different queues – Hirefire can scale them independently, with entirely separate configuration. This is pretty handy, and I don't think Rails Autoscale offers this. (Update: I may be wrong, Rails Autoscale says they do support this, so check on it yourself if it matters to you.) For worker dynos, you could choose to scale based on actual "dyno load", but I think this is probably mostly for types of processes where there isn't the ability to look at "number of jobs". A "number of jobs in queue" metric like Rails Autoscale uses makes a lot more sense to me as an effective metric for scaling queue-based bg workers. Hirefire's metric is slightly different than Rails Autoscale's "jobs in queue". For recognized ruby queue systems (a larger list than Rails Autoscale's; and you can write your own custom adapter for whatever you like), it actually measures jobs in queue plus workers currently busy. So queued plus in-progress, rather than Rails Autoscale's just queued. I actually have a bit of trouble wrapping my head around the implications of this, but basically, it means that Hirefire's "jobs in queue" metric strategy is intended to try to scale all the way to emptying your queue, or reaching your max scale limit, whichever comes first. I think this may make sense, and work out at least as well or perhaps better than Rails Autoscale's approach? Here's what configuration Hirefire makes available for worker dynos scaling on the "job queue" metric. Since the metric isn't the same as Rails Autoscale's, we can't configure this to work identically.
But there are a whole bunch of configuration options, some similar to Rails Autoscale's. The most important thing here is that "Ratio" configuration. It may not be obvious, but with the way the hirefire metric works, you are basically meant to configure this to equal the number of workers/threads you have on each dyno. I have it configured to 3 because my heroku worker processes use resque, with resque_pool, configured to run 3 resque workers on each dyno. If you use sidekiq, set ratio to your configured concurrency – or if you are running more than one sidekiq process, processes times concurrency (for example, 2 processes with a concurrency of 10 would mean a ratio of 20). Basically, how many jobs your dyno can be concurrently working is what you should normally set for 'ratio'. Hirefire not a heroku plugin Hirefire isn't actually a heroku plugin. In addition to that meaning separate invoicing, there can be some other inconveniences. Since hirefire can only interact with the heroku API, for some metrics (including the "queue time" metric that is probably optimal for web dyno scaling) you have to configure your app to log regular statistics to heroku's "Logplex" system. This can add a lot of noise to your log, and for heroku logging add-ons that are tiered based on number of log lines or bytes, it can push you up to higher pricing tiers. If you use Papertrail, I think you should be able to use its log filtering feature to solve this – keep that noise out of your logs and avoid impacting log transfer limits. However, if you ever have cause to look at heroku's raw logs, that noise will still be there. Support and Docs I asked a couple questions of both Hirefire and Rails Autoscale as part of my evaluation, and got back well-informed and easy-to-understand answers quickly from both. Support for both seems to be great. I would say the documentation is decent-but-not-exhaustive for both products. Hirefire may have slightly more complete documentation. Other Features? There are other things you might want to compare, various kinds of observability (bar chart or graph of dynos or observed metrics) and notification. I don't have time to get into the details (and didn't actually spend much time exploring them to evaluate), but they seem to offer roughly similar features. Conclusion Rails Autoscale is quite a bit more expensive than hirefire.io's flat rate, once you get past Rails Autoscale's most basic tier (scaling no more than 3 standard dynos). It's true that autoscaling saves you money over not autoscaling, so even an expensive price could be considered a 'cut' of that, and possibly for many ecommerce sites even $99 a month might be a drop in the bucket (!)… but this price difference is so significant with hirefire (which has a flat rate regardless of dynos) that it seems to me it would take a lot of additional features/value to justify. And it's not clear that Rails Autoscale has any feature advantage. In general, hirefire.io seems to have more features and flexibility. Until 2021, hirefire.io could only analyze metrics with 1-minute resolution, so perhaps finer resolution was Rails Autoscale's "killer feature" before that? Honestly, I wonder if this price difference is sustained by Rails Autoscale only because most customers aren't aware of hirefire.io, it not being listed on the heroku marketplace? Single-invoice billing is handy, but probably not worth $80+ a month. I guess hirefire's logplex noise is a bit inconvenient? Or is there something else I'm missing? Pricing competition is good for the consumer. And are there any other heroku autoscale solutions, that can handle Rails bg job dynos, that I still don't know about?
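As a footnote to the "Ratio" setting discussed above, here is a tiny illustrative sketch of the arithmetic. The numbers mirror the examples in the text; this is not hirefire's API, just the calculation you would do before typing the value into their form.

```ruby
# How many jobs can one worker dyno run concurrently? That's the hirefire "Ratio".

# resque with resque_pool, e.g. "*": 3 in config/resque-pool.yml (assumed format):
resque_workers_per_dyno = 3
ratio_for_resque = resque_workers_per_dyno                    # => 3

# sidekiq: processes on the dyno times configured concurrency:
sidekiq_processes   = 2
sidekiq_concurrency = 10                                      # :concurrency in sidekiq.yml
ratio_for_sidekiq = sidekiq_processes * sidekiq_concurrency   # => 20
```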
update a day after writing djcp on a reddit thread writes: I used to be a principal engineer for the heroku add-ons program. One issue with hirefire is they request account level oauth tokens that essentially give them ability to do anything with your apps, where Rails Autoscaling worked with us to create a partnership and integrate with our “official” add-on APIs that limits security concerns and are scoped to the application that’s being scaled. Part of the reason for hirefire working the way it does is historical, but we’ve supported the endpoints they need to scale for “official” partners for years now. A lot of heroku customers use hirefire so please don’t think I’m spreading FUD, but you should be aware you’re giving a third party very broad rights to do things to your apps. They probably won’t, of course, but what if there’s a compromise? “Official” add-on providers are given limited scoped tokens to (mostly) only the actions / endpoints they need, minimizing blast radius if they do get compromised. You can read some more discussion at that thread. jrochkind General 2 Comments January 27, 2021January 30, 2021 Managed Solr SaaS Options I was recently looking for managed Solr “software-as-a-service” (SaaS) options, and had trouble figuring out what was out there. So I figured I’d share what I learned. Even though my knowledge here is far from exhaustive, and I have only looked seriously at one of the ones I found. The only managed Solr options I found were: WebSolr; SearchStax; and OpenSolr. Of these, i think WebSolr and SearchStax are more well-known, I couldn’t find anyone with experience with OpenSolr, which perhaps is newer. Of them all, SearchStax is the only one I actually took for a test drive, so will have the most to say about. Why we were looking We run a fairly small-scale app, whose infrastructure is currently 4 self-managed AWS EC2 instances, running respectively: 1) A rails web app 2) Bg workers for the rails web app 3) Postgres, and 4) Solr. Oh yeah, there’s also a redis running one of those servers, on #3 with pg or #4 with solr, I forget. Currently we manage this all ourselves, right on the EC2. But we’re looking to move as much as we can into “managed” servers. Perhaps we’ll move to Heroku. Perhaps we’ll use hatchbox. Or if we do stay on AWS resources we manage directly, we’d look at things like using an AWS RDS Postgres instead of installing it on an EC2 ourselves, an AWS ElastiCache for Redis, maybe look into Elastic Beanstalk, etc. But no matter what we do, we need a Solr, and we’d like to get it managed. Hatchbox has no special Solr support, AWS doesn’t have a Solr service, Heroku does have a solr add-on but you can also use any Solr with it and we’ll get to that later. Our current Solr use is pretty small scale. We don’t run “SolrCloud mode“, just legacy ordinary Solr. We only have around 10,000 documents in there (tiny for Solr), our index size is only 70MB. Our traffic is pretty low — when I tried to figure out how low, it doesn’t seem we have sufficient logging turned on to answer that specifically but using proxy metrics to guess I’d say 20K-40K requests a day, query as well as add. This is a pretty small Solr installation, although it is used centrally for the primary functions of the (fairly low-traffic) app. It currently runs on an EC2 t3a.small, which is a “burstable” EC2 type with only 2G of RAM. It does have two vCPUs (that is one core with ‘hyperthreading’). The t3a.small EC2 instance only costs $14/month on-demand price! 
We know we’ll be paying more for managed Solr, but we want to do get out of the business of managing servers — we no longer really have the staff for it. WebSolr (didn’t actually try out) WebSolr is the only managed Solr currently listed as a Heroku add-on. It is also available as a managed Solr independent of heroku. The pricing in the heroku plans vs the independent plans seems about the same. As a heroku add-on there is a $20 “staging” plan that doesn’t exist in the independent plans. (Unlike some other heroku add-ons, no time-limited free plan is available for WebSolr). But once we go up from there, the plans seem to line up. Starting at: $59/month for: 1 million document limit 40K requests/day 1 index 954MB storage 5 concurrent requests limit (this limit is not mentioned on the independent pricing page?) Next level up is $189/month for: 5 million document limit 150K requests/day 4.6GB storage 10 concurrent request limit (again concurrent request limits aren’t mentioned on independent pricing page) As you can see, WebSolr has their plans metered by usage. $59/month is around the price range we were hoping for (we’ll need two, one for staging one for production). Our small solr is well under 1 million documents and ~1GB storage, and we do only use one index at present. However, the 40K requests/day limit I’m not sure about, even if we fit under it, we might be pushing up against it. And the “concurrent request” limit simply isn’t one I’m even used to thinking about. On a self-managed Solr it hasn’t really come up. What does “concurrent” mean exactly in this case, how is it measured? With 10 puma web workers and sometimes a possibly multi-threaded batch index going on, could we exceed a limit of 4? Seems plausible. What happens when they are exceeded? Your Solr request results in an HTTP 429 error! Do I need to now write the app to rescue those gracefully, or use connection pooling to try to avoid them, or something? Having to rewrite the way our app functions for a particular managed solr is the last thing we want to do. (Although it’s not entirely clear if those connection limits exist on the non-heroku-plugin plans, I suspect they do?). And in general, I’m not thrilled with the way the pricing works here, and the price points. I am positive for a lot of (eg) heroku customers an additional $189*2=$378/month is peanuts not even worth accounting for, but for us, a small non-profit whose app’s traffic does not scale with revenue, that starts to be real money. It is not clear to me if WebSolr installations (at “standard” plans) are set up in “SolrCloud mode” or not; I’m not sure what API’s exist for uploading your custom schema.xml (which we’d need to do), or if they expect you to do this only manually through a web UI (that would not be good); I’m not sure if you can upload custom solrconfig.xml settings (this may be running on a shared solr instance with standard solrconfig.xml?). Basically, all of this made WebSolr not the first one we looked at. Does it matter if we’re on heroku using a managed Solr that’s not a Heroku plugin? I don’t think so. In some cases, you can get a better price from a Heroku plug-in than you could get from that same vendor not on heroku or other competitors. But that doesn’t seem to be the case here, and other that that does it matter? Well, all heroku plug-ins are required to bill you by-the-minute, which is nice but not really crucial, other forms of billing could also be okay at the right price. 
With a heroku add-on, your billing is combined into one heroku invoice, no need to give a credit card to anyone else, and it can be tracked using heroku tools. Which is certainly convenient and a plus, but not essential if the best tool for the job is not a heroku add-on. And as a heroku add-on, WebSolr provides a WEBSOLR_URL heroku config/env variable automatically to code running on heroku. OK, that’s kind of nice, but it’s not a big deal to set a SOLR_URL heroku config manually referencing the appropriate address. I suppose as a heroku add-on, WebSolr also takes care of securing and authenticating connections between the heroku dynos and the solr, so we need to make sure we have a reasonable way to do this from any alternative. SearchStax (did take it for a spin) SearchStax’s pricing tiers are not based on metering usage. There are no limits based on requests/day or concurrent connections. SearchStax runs on dedicated-to-you individual Solr instances (I would guess running on dedicated-to-you individual (eg) EC2, but I’m not sure). Instead the pricing is based on size of host running Solr. You can choose to run on instances deployed to AWS, Google Cloud, or Azure. We’ll be sticking to AWS (the others, I think, have a slight price premium). While SearchStax gives you a pricing pages that looks like the “new-way-of-doing-things” transparent pricing, in fact there isn’t really enough info on public pages to see all the price points and understand what you’re getting, there is still a kind of “talk to a salesperson who has a price sheet” thing going on. What I think I have figured out from talking to a salesperson and support, is that the “Silver” plans (“Starting at $19 a month”, although we’ll say more about that in a bit) are basically: We give you a Solr, we don’t don’t provide any technical support for Solr. While the “Gold” plans “from $549/month” are actually about paying for Solr consultants to set up and tune your schema/index etc. That is not something we need, and $549+/month is way more than the price range we are looking for. While the SearchStax pricing/plan pages kind of imply the “Silver” plan is not suitable for production, in fact there is no real reason not to use it for production I think, and the salesperson I talked to confirmed that — just reaffirming that you were on your own managing the Solr configuration/setup. That’s fine, that’s what we want, we just don’t want to mangage the OS or set up the Solr or upgrade it etc. The Silver plans have no SLA, but as far as I can tell their uptime is just fine. The Silver plans only guarantees 72-hour support response time — but for the couple support tickets I filed asking questions while under a free 14-day trial (oh yeah that’s available), I got prompt same-day responses, and knowledgeable responses that answered my questions. So a “silver” plan is what we are interested in, but the pricing is not actually transparent. $19/month is for the smallest instance available, and IF you prepay/contract for a year. They call that small instance an NDN1 and it has 1GB of RAM and 8GB of storage. If you pay-as-you-go instead of contracting for a year, that already jumps to $40/month. (That price is available on the trial page). When you are paying-as-you-go, you are actually billed per-day, which might not be as nice as heroku’s per-minute, but it’s pretty okay, and useful if you need to bring up a temporary solr instance as part of a migration/upgrade or something like that. 
The next step up is an “NDN2” which has 2G of RAM and 16GB of storage, and has an ~$80/month pay-as-you-go — you can find that price if you sign-up for a free trial. The discount price price for an annual contract is a discount similar to the NDN1 50%, $40/month — that price I got only from a salesperson, I don’t know if it’s always stable. It only occurs to me now that they don’t tell you how many CPUs are available. I’m not sure if I can fit our Solr in the 1G NDN1, but I am sure I can fit it in the 2G NDN2 with some headroom, so I didn’t look at plans above that — but they are available, still under “silver”, with prices going up accordingly. All SearchStax solr instances run in “SolrCloud” mode — these NDN1 and NDN2 ones we’re looking at just run one node with one zookeeper, but still in cloud mode. There are also “silver” plans available with more than one node in a “high availability” configuration, but the prices start going up steeply, and we weren’t really interested in that. Because it’s SolrCloud mode though, you can use the standard Solr API for uploading your configuration. It’s just Solr! So no arbitrary usage limits, no features disabled. The SearchStax web console seems competently implemented; it let’s you create and delete individual Solr “deployments”, manage accounts to login to console (on “silver” plan you only get two, or can pay $10/month/account for more, nah), and set up auth for a solr deployment. They support IP-based authentication or HTTP Basic Auth to the Solr (no limit to how many Solr Basic Auth accounts you can create). HTTP Basic Auth is great for us, because trying to do IP-based from somewhere like heroku isn’t going to work. All Solrs are available over HTTPS/SSL — great! SearchStax also has their own proprietary HTTP API that lets you do most anything, including creating/destroying deployments, managing Solr basic auth users, basically everything. There is some API that duplicates the Solr Cloud API for adding configsets, I don’t think there’s a good reason to use it instead of standard SolrCloud API, although their docs try to point you to it. There’s even some kind of webhooks for alerts! (which I haven’t really explored). Basically, SearchStax just seems to be a sane and rational managed Solr option, it has all the features you’d expect/need/want for dealing with such. The prices seem reasonable-ish, generally more affordable than WebSolr, especially if you stay in “silver” and “one node”. At present, we plan to move forward with it. OpenSolr (didn’t look at it much) I have the least to say about this, have spent the least time with it, after spending time with SearchStax and seeing it met our needs. But I wanted to make sure to mention it, because it’s the only other managed Solr I am even aware of. Definitely curious to hear from any users. Here is the pricing page. The prices seem pretty decent, perhaps even cheaper than SearchStax, although it’s unclear to me what you get. Does “0 Solr Clusters” mean that it’s not SolrCloud mode? After seeing how useful SolrCloud APIs are for management (and having this confirmed by many of my peers in other libraries/museums/archives who choose to run SolrCloud), I wouldn’t want to do without it. So I guess that pushes us to “executive” tier? Which at $50/month (billed yearly!) is still just fine, around the same as SearchStax. But they do limit you to one solr index; I prefer SearchStax’s model of just giving you certain host resources and do what you want with it. It does say “shared infrastructure”. 
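As an aside on why those SolrCloud management APIs matter: once a provider gives you a plain SolrCloud endpoint with HTTP Basic auth (as SearchStax does), pushing your own configset up is just one authenticated POST to the standard Configsets API. Here is a minimal sketch using only the ruby standard library – the host, credentials, and file names are made-up placeholders.

```ruby
# Upload a zipped configset to SolrCloud via the v1 Configsets API
# (POST /solr/admin/configs?action=UPLOAD&name=...). Host, user, and zip name
# are placeholders, not any particular provider's real values.
require "net/http"
require "uri"

uri = URI("https://solr.example-host.com/solr/admin/configs?action=UPLOAD&name=my_config")

request = Net::HTTP::Post.new(uri)
request.basic_auth("solr_admin", "secret")            # HTTP Basic auth to the Solr
request["Content-Type"] = "application/octet-stream"
request.body = File.binread("my_config.zip")          # zip of solrconfig.xml, schema, etc.

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
puts "#{response.code}: #{response.body}"
```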
OpenSolr might be worth investigating – curious to hear more from anyone who has. Now, what about ElasticSearch? We're using Solr mostly because that's what various collaborative and open source projects in the library/museum/archive world have been doing for years, since before ElasticSearch even existed. So there are various open source libraries and toolsets available that we're using. But for whatever reason, there seem to be SO MANY MORE managed ElasticSearch SaaS options available, at possibly much cheaper price points. Is this because the ElasticSearch market is just bigger? Or is ElasticSearch easier/cheaper to run in a SaaS environment? Or what? I don't know. But there's the controversial AWS ElasticSearch Service; there's the Elastic Cloud "from the creators of ElasticSearch". On Heroku, which lists one Solr add-on, there are THREE ElasticSearch add-ons listed: ElasticCloud, Bonsai ElasticSearch, and SearchBox ElasticSearch. If you just google "managed ElasticSearch" you immediately see 3 or 4 other names. I don't know enough about ElasticSearch to evaluate them. They seem, at first glance at pricing pages, to be more affordable, but I may not know what I'm comparing, and may be looking at tiers that aren't actually usable for anything or will have hidden fees. But I know there are definitely many more managed ElasticSearch SaaS offerings than Solr ones. I think ElasticSearch probably does everything our app needs. If I were to start from scratch, I would definitely consider ElasticSearch over Solr just based on how many more SaaS options there are. While it would require some knowledge-building (I have developed a lot of knowledge of Solr and zero of ElasticSearch) and rewriting some parts of our stack, I might still consider switching to ES in the future; we don't do anything too too complicated with Solr that would be too too hard to switch to ES, probably. jrochkind General Leave a comment January 12, 2021January 27, 2021 Gem authors, check your release sizes Most gems should probably be a couple hundred KB at most. I'm talking about the package actually stored in and downloaded from rubygems by an app using the gem. After all, source code is just text, and it doesn't take up much space. OK, maybe some gems have a couple images in there. But if you are looking at your gem in rubygems and realize that it's 10MB or bigger… and that it seems to be getting bigger with every release… something is probably wrong and worth looking into. One way to look into it is to look at the actual gem package. If you use the handy bundler rake task to release your gem (and I recommend it), you have a ./pkg directory in the source you last released from. Inside it are ".gem" files for each release you've made from there, unless you've cleaned it up recently. .gem files are just .tar files, it turns out, that have more tar and gz files inside them, etc. We can go into it, extract contents, and use the handy unix utility du -sh to see what is taking up all the space. How I found the bytes
jrochkind-chf kithe (master ?) $ cd pkg
jrochkind-chf pkg (master ?) $ ls
kithe-2.0.0.beta1.gem  kithe-2.0.0.pre.rc1.gem  kithe-2.0.0.gem  kithe-2.0.1.gem  kithe-2.0.0.pre.beta1.gem  kithe-2.0.2.gem
jrochkind-chf pkg (master ?) $ mkdir exploded
jrochkind-chf pkg (master ?) $ cp kithe-2.0.0.gem exploded/kithe-2.0.0.tar
jrochkind-chf pkg (master ?) $ cd exploded
jrochkind-chf exploded (master ?) $ tar -xvf kithe-2.0.0.tar
x metadata.gz
x data.tar.gz
x checksums.yaml.gz
jrochkind-chf exploded (master ?) $ mkdir unpacked_data_tar
jrochkind-chf exploded (master ?) $ tar -xvf data.tar.gz -C unpacked_data_tar/
jrochkind-chf exploded (master ?) $ cd unpacked_data_tar/
/Users/jrochkind/code/kithe/pkg/exploded/unpacked_data_tar
jrochkind-chf unpacked_data_tar (master ?) $ du -sh *
4.0K  MIT-LICENSE
 12K  README.md
4.0K  Rakefile
160K  app
8.0K  config
 32K  db
100K  lib
300M  spec
jrochkind-chf unpacked_data_tar (master ?) $ cd spec
jrochkind-chf spec (master ?) $ du -sh *
8.0K  derivative_transformers
300M  dummy
 12K  factories
 24K  indexing
 72K  models
4.0K  rails_helper.rb
 44K  shrine
 12K  simple_form_enhancements
8.0K  spec_helper.rb
188K  test_support
4.0K  validators
jrochkind-chf spec (master ?) $ cd dummy/
jrochkind-chf dummy (master ?) $ du -sh *
4.0K  Rakefile
 56K  app
 24K  bin
124K  config
4.0K  config.ru
8.0K  db
300M  log
4.0K  package.json
 12K  public
4.0K  tmp
Doh! In this particular gem, I have a dummy rails app, and it has 300MB of logs, because I haven't bothered trimming them in a while, and they are winding up included in the gem release package distributed to rubygems and downloaded by all consumers! Even if they were small, I don't want these in the released gem package at all! That's not good! It only turns into 12MB instead of 300MB, because log files are so compressible and there is compression involved in assembling the rubygems package. But I have no idea how much space it's actually taking up on consuming applications' machines. This is very irresponsible! What controls what files are included in the gem package? Your .gemspec file, of course. The line s.files = is an array of every file to include in the gem package. Well, plus s.test_files is another array of more files that aren't supposed to be necessary to run the gem, but are there to test it. (Rubygems was set up to allow automated *testing* of gems after download, which is why test files are included in the release package. I am not sure how useful this is, or who, if anyone, does it; although I believe that some linux distro packagers try to make use of it, for better or worse.) But nobody wants to list every file in their gem individually, manually editing the array every time you add, remove, or move one. Fortunately, gemspec files are executable ruby code, so you can use ruby as a shortcut. I have seen two main ways of doing this, with different "gem skeleton generators" taking one of two approaches. Sometimes a shell-out to git is used – the idea is that everything you have checked into your git should be in the gem release package, no more and no less. For instance, one of my gems has this in it, not sure where it came from or who/what generated it:
spec.files = `git ls-files -z`.split("\x0").reject do |f|
  f.match(%r{^(test|spec|features)/})
end
In that case, it wouldn't have included anything in ./spec anyway, so this obviously isn't the gem we were looking at before. But in this case, in addition to using ruby logic to manipulate the results, nothing excluded by your .gitignore file will end up included in your gem package – great! In the kithe gem we were looking at before, those log files were in the .gitignore (they weren't in my repo!), so if I had been using that git-shellout technique, they wouldn't have been included in the gem release. But… I wasn't. Instead this gem has a gemspec that looks like:
s.test_files = Dir["spec/*/"]
Just include every single file inside ./spec in the test_files list. Oops. Then I get all those log files! One way to fix I don't really know which is to be preferred, the git-shellout approach or the dir-glob approach.
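For what it's worth, here is a sketch of what each approach can look like with an explicit exclusion added, so the dummy app's runtime artifacts can't sneak in either way. The gem name and paths are illustrative; this is not the actual kithe gemspec.

```ruby
# example.gemspec -- illustrative only
Gem::Specification.new do |spec|
  spec.name    = "example"
  spec.version = "1.0.0"
  spec.summary = "example"
  spec.authors = ["someone"]

  # (a) git-shellout style: ship whatever git tracks, minus test dirs
  spec.files = `git ls-files -z`.split("\x0").reject do |f|
    f.match(%r{^(test|spec|features)/})
  end

  # (b) dir-glob style: glob what you want, but reject runtime artifacts explicitly
  spec.test_files = Dir["spec/**/*"].reject do |f|
    f.start_with?("spec/dummy/log/", "spec/dummy/tmp/")
  end
end
```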
I suspect the git-shellout vs dir-glob question is the subject of historical religious wars in rubydom, from when there were still more people around to argue about such things. Any opinions? Or another approach? Without being in the mood to restructure this gemspec in any way, I just did the simplest thing to keep those log files out…
Dir["spec/*/"].delete_if {|a| a =~ %r{/dummy/log/}}
Build the package without releasing, with the handy bundler-supplied rake build task… and my gem release package size goes from 12MB to 64K (which actually kind of sounds like a minimum block size or something, right?). Phew! That's a big difference! Sorry for anyone using previous versions and winding up downloading all that cruft! (Actually, this particular gem is mostly a proof of concept at this point and I don't think anyone else is using it.) Check your gem sizes! I'd be willing to bet there are lots of released gems with heavily bloated release packages like this. This isn't the first one I've realized was my fault. Because who pays attention to gem sizes anyway? Apparently not many! But rubygems does list them, so it's pretty easy to see. Are your gem release packages multiple megs, when there's no good reason for them to be? Do they get bigger every release by far more than the lines of code you think were added? At some point in gem history, was there a big jump from hundreds of KB to multiple MB, when nothing in particular actually happened to the gem logic to lead to that? All hints that you might be including things you didn't mean to include, possibly things that grow each release. You don't need to have a dummy rails app in your repo to accidentally do this (I accidentally did it once with a gem that had nothing to do with rails). There could be other kinds of log files. Or test coverage or performance metric files, or any other artifacts of your build or your development, especially ones that grow over time – things that aren't actually meant to be, or needed as, part of the gem release package! It's good to sanity-check your gem release packages now and then. In most cases, your gem release package should be hundreds of KB at most, not MBs. Help keep your users' installs and builds faster and slimmer! jrochkind General Leave a comment January 11, 2021 Every time you decide to solve a problem with code… Every time you decide to solve a problem with code, you are committing part of your future capacity to maintaining and operating that code. Software is never done. Software is drowning the world by James Abley jrochkind General Leave a comment January 10, 2021
bibwild-wordpress-com-7202 ---- Bibliographic Wilderness Bibliographic Wilderness logging URI query params with lograge The lograge gem for taming Rails logs by default will lot the path component of the URI, but leave out the query string/query params. For instance, perhaps you have a URL to your app /search?q=libraries. lograge will log something like: method=GET path=/search format=html… The q=libraries part is completely left out of the log. I kinda … Continue reading logging URI query params with lograge → Notes on Cloudfront in front of Rails Assets on Heroku, with CORS Heroku really recommends using a CDN in front of your Rails app static assets — which, unlike in non-heroku circumstances where a web server like nginx might be taking care of it, otherwise on heroku static assets will be served directly by your Rails app, consuming limited/expensive dyno resources. After evaluating a variety of options … Continue reading Notes on Cloudfront in front of Rails Assets on Heroku, with CORS → ActiveSupport::Cache via ActiveRecord (note to self) There are a variety of things written to use flexible back-end key/value datastores via the ActiveSupport::Cache API. For instance, say, activejob-status. I have sometimes in the past wanted to be able to use such things storing the data in an rdbms, say vai ActiveRecord. Make a table for it. Sure, this won't be nearly as … Continue reading ActiveSupport::Cache via ActiveRecord (note to self) → Heroku release phase, rails db:migrate, and command failure If you use capistrano to deploy a Rails app, it will typically run a rails db:migrate with every deploy, to apply any database schema changes. If you are deploying to heroku you might want to do the same thing. The heroku "release phase" feature makes this possible. (Introduced in 2017, the release phase feature is … Continue reading Heroku release phase, rails db:migrate, and command failure → Code that Lasts: Sustainable And Usable Open Source Code A presentation I gave at online conference Code4Lib 2021, on Monday March 21.
I have realized that the open source projects I am most proud of are a few that have existed for years now, increasing in popularity, with very little maintenance required. Including traject and bento_search. While community aspects matter for open source sustainability, … Continue reading Code that Lasts: Sustainable And Usable Open Source Code →

Product management
In my career working in the academic sector, I have realized that one thing that is often missing from in-house software development is "product management." But what does that mean exactly? You don't know it's missing if you don't even realize it's a thing and people can use different terms to mean different roles/responsibilities. Basically, … Continue reading Product management →

Rails auto-scaling on Heroku
We are investigating moving our medium-small-ish Rails app to heroku. We looked at both the Rails Autoscale add-on available on heroku marketplace, and the hirefire.io service which is not listed on heroku marketplace and I almost didn't realize it existed. I guess hirefire.io doesn't have any kind of a partnership with heroku, but still uses … Continue reading Rails auto-scaling on Heroku →

Managed Solr SaaS Options
I was recently looking for managed Solr "software-as-a-service" (SaaS) options, and had trouble figuring out what was out there. So I figured I'd share what I learned. Even though my knowledge here is far from exhaustive, and I have only looked seriously at one of the ones I found. The only managed Solr options I … Continue reading Managed Solr SaaS Options →

Gem authors, check your release sizes
Most gems should probably be a couple hundred kb at most. I'm talking about the package actually stored in and downloaded from rubygems by an app using the gem. After all, source code is just text, and it doesn't take up much space. OK, maybe some gems have a couple images in there. But if … Continue reading Gem authors, check your release sizes →

Every time you decide to solve a problem with code…
Every time you decide to solve a problem with code, you are committing part of your future capacity to maintaining and operating that code. Software is never done. Software is drowning the world by James Abley

Updating SolrCloud configuration in ruby
We have an app that uses Solr. We currently run a Solr in legacy "not cloud" mode. Our solr configuration directory is on disk on the Solr server, and it's up to our processes to get our desired solr configuration there, and to update it when it changes. We are in the process of moving … Continue reading Updating SolrCloud configuration in ruby →

Are you talking to Heroku redis in cleartext or SSL?
In "typical" Redis installation, you might be talking to redis on localhost or on a private network, and clients typically talk to redis in cleartext. Redis doesn't even natively support communications over SSL. (Or maybe it does now with redis6?) However, the Heroku redis add-on (the one from Heroku itself) supports SSL connections via "Stunnel", … Continue reading Are you talking to Heroku redis in cleartext or SSL? →

Comparing performance of a Rails app on different Heroku formations
I develop a "digital collections" or "asset management" app, which manages and makes digitized historical objects and their descriptions available to the public, from the collections here at the Science History Institute.
The app receives a relatively low level of traffic (according to Google Analytics, around 25K pageviews a month), although we want it to be … Continue reading Comparing performance of a Rails app on different Heroku formations →

Deep Dive: Moving ruby projects from Travis to Github Actions for CI
So this is one of my super wordy posts, if that's not your thing abort now, but some people like them. We'll start with a bit of context, then get to some detailed looks at Github Actions features I used to replace my travis builds, with example config files and examination of options available. For … Continue reading Deep Dive: Moving ruby projects from Travis to Github Actions for CI →

Unexpected performance characteristics when exploring migrating a Rails app to Heroku
I work at a small non-profit research institute. I work on a Rails app that is a "digital collections" or "digital asset management" app. Basically it manages and provides access (public as well as internal) to lots of files and description about those files, mostly images. It's currently deployed on some self-managed Amazon EC2 instances … Continue reading Unexpected performance characteristics when exploring migrating a Rails app to Heroku →

faster_s3_url: Optimized S3 url generation in ruby
Subsequent to my previous investigation about S3 URL generation performance, I ended up writing a gem with optimized implementations of S3 URL generation. github: faster_s3_url It has no dependencies (not even aws-sdk). It can speed up both public and presigned URL generation by around an order of magnitude. In benchmarks on my 2015 MacBook compared … Continue reading faster_s3_url: Optimized S3 url generation in ruby →

Delete all S3 key versions with ruby AWS SDK v3
If your S3 bucket is versioned, then deleting an object from s3 will leave a previous version there, as a sort of undo history. You may have a "noncurrent expiration lifecycle policy" set which will delete the old versions after so many days, but within that window, they are there. What if you were deleting … Continue reading Delete all S3 key versions with ruby AWS SDK v3 →

Github Actions tutorial for ruby CI on Drifting Ruby
I've been using travis for free automated testing ("continuous integration", CI) on my open source projects for a long time. It works pretty well. But it's got some little annoyances here and there, including with github integration, that I don't really expect to get fixed after its acquisition by private equity. They also seem to … Continue reading Github Actions tutorial for ruby CI on Drifting Ruby →

More benchmarking optimized S3 presigned_url generation
In a recent post, I explored profiling and optimizing S3 presigned_url generation in ruby to be much faster. In that post, I got down to using a Aws::Sigv4::Signer instance from the AWS SDK, but wondered if there was a bunch more optimization to be done within that black box. Julik posted a comment on that … Continue reading More benchmarking optimized S3 presigned_url generation →

Delivery patterns for non-public resources hosted on S3
I work at the Science History Institute on our Digital Collections app (written in Rails), which is kind of a "digital asset management" app combined with a public catalog of our collection. We store many high-resolution TIFF images that can be 100MB+ each, as well as, currently, a handful of PDFs and audio files.
We … Continue reading Delivery patterns for non-public resources hosted on S3 →

bibwild-wordpress-com-7940 ---- logging URI query params with lograge – Bibliographic Wilderness

logging URI query params with lograge
jrochkind, General, August 4, 2021

The lograge gem for taming Rails logs by default will log the path component of the URI, but leave out the query string/query params. For instance, perhaps you have a URL to your app /search?q=libraries. lograge will log something like: method=GET path=/search format=html… The q=libraries part is completely left out of the log. I kinda want that part, it's important.

The lograge README provides instructions for "logging request parameters", by way of the params hash. I'm going to modify them slightly to use the more recent custom_payload config instead of custom_options. (I'm not certain why there are both, but I think mostly for legacy reasons, and the newer custom_payload is what you should reach for.) If we just put params in there, a bunch of keys we don't care about get logged along with the ones we want (like "q" => "foo"), so things like controller and action have to be stripped out. OK.

The params hash isn't exactly the same as the query string, it can include things not in the URL query string (like controller and action, that we have to strip above, among others), and it can in some cases omit things that are in the query string. It just depends on your routing and other configuration and logic. The params hash itself is what default rails logs… but what if we just log the actual URL query string instead?

Benefits: it's easier to search the logs for an exact, specific known URL (which can get more complicated like /search?q=foo&range%5Byear_facet_isim%5D%5Bbegin%5D=4&source=foo or something). Which is something I sometimes want to do, say I got a URL reported from an error tracking service and now I want to find that exact line in the log. I actually like having the exact actual URL (well, starting from path) in the logs. It's a lot simpler, we don't need to filter out controller/action/format/id etc. It's actually a bit more concise? And part of what I'm dealing with in general using lograge is trying to reduce my bytes of logfile for papertrail!

Drawbacks? If you had some kind of structured log search (I don't at present, but I guess could with papertrail features by switching to json format?), it might be easier to do something like "find a /search with q=foo and source=ef" without worrying about other params. To the extent that the params hash can include things not in the actual url, is that important to log like that? ….? Curious what other people think… am I crazy for wanting the actual URL in there, not the params hash?

At any rate, it's pretty easy to do. Note we use filtered_path rather than fullpath to again take account of Rails 6 parameter filtering, and thanks again /u/ezekg:

config.lograge.custom_payload do |controller|
  { path: controller.request.filtered_path }
end

This is actually overwriting the default path to be one that has the query string too:

method=GET path=/search?q=libraries format=html ...

You could of course add a different key fullpath instead, if you wanted to keep path as it is, perhaps for easier collation in some kind of log analyzing system that wants to group things by same path invariant of query string. I'm gonna try this out!

Meanwhile, on lograge…
As long as we're talking about lograge….
Based on commit history, the history of Issues and Pull Requests, the fact that CI isn't currently running (travis.org grr) and doesn't even try to test on Rails 6.0+ (although lograge seems to work fine)… one might worry that lograge is currently un/under-maintained. There has been no comment on a GH issue filed in May asking about project status.

It still seems to be one of the more popular solutions to trying to tame Rails' kind-of-out-of-control logs. It's mentioned for instance in docs from papertrail and honeybadger, and many many other blog posts. What will its future be?

Looking around for other possibilities, I found semantic_logger (rails_semantic_logger). It's got similar features. It seems to be much more maintained. It's got a respectable number of github stars, although not nearly as many as lograge, and it's not featured in blogs and third-party platform docs nearly as much. It's also a bit more sophisticated and featureful. For better or worse. For instance, mainly I'm thinking of how it tries to improve app performance by moving logging to a background thread. This is neat… and also can lead to a whole new class of bug, mysterious warning, or configuration burden.

For now I'm sticking to the more popular lograge, but I wish it had CI up that was testing with Rails 6.1, at least!

Incidentally, trying to get Rails to log more compactly like both lograge and rails_semantic_logger do… is somewhat more complicated than you might expect, as demonstrated by the code in both projects that does it! Especially semantic_logger, which is hundreds of lines of somewhat baroque code split across several files. A refactor of logging around Rails 5 (I think?) to use ActiveSupport::LogSubscriber made it possible to customize Rails logging like this (although I think both lograge and rails_semantic_logger still do some monkey-patching too!), but in the end didn't make it all that easy or obvious or future-proof. This may discourage other alternatives from appearing for the initial primary use case of both lograge and rails_semantic_logger — turn a rails action into one log line, with a structured format.

Tagged ruby. Published by jrochkind, August 4, 2021.
bitcoinmagazine-com-8352 ---- What Proof of Stake Is And Why It Matters - Bitcoin Magazine

What Proof of Stake Is And Why It Matters
Author: Vitalik Buterin. Publish date: Aug 26, 2013.

If you have been involved in Bitcoin for any significant length of time, you have probably at least heard of the idea of "proof of work". The basic concept behind proof of work is simple: one party (usually called the prover) presents the result of a computation which is known to be hard to compute, but easy to verify, and by verifying the solution anyone else can be sure that the prover performed a certain amount of computational work to generate the result. The first modern application, presented as "Hashcash" by Adam Back in 1996, uses a SHA256-based proof of work as an anti-spam measure – by requiring all emails to come with a strong proof-of-work attached, the system makes it uneconomical for spammers to send mass emails while still allowing individuals to send messages to each other when they need to. A similar system is used today for the same purpose in Bitmessage, and the algorithm has also been repurposed to serve as the core of Bitcoin's security in the form of "mining".

How does SHA256 Proof of Work Work?

SHA256 is what cryptographers call a "one-way function" – a function for which it is easy to calculate an output given an input, but it is impossible to do the reverse without trying every possible input until one works by random chance. The canonical representation of a SHA256 output is as a series of 64 hexadecimal digits – letters and numbers taken from the set 0123456789abcdef. For example, here are the first digits of a few hashes:

SHA256("hello") = 2cf24dba...
SHA256("Hello") = 185f8db3...
SHA256("Hello.") = 2d8bd7d9...
The output of SHA256 is designed to be highly chaotic; even the smallest change in the input completely scrambles the output, and this is part of what makes SHA256 a one-way function. Finding an input whose SHA256 starts with '0' on average takes 16 attempts, '00' takes 256 attempts, and so forth. The way Hashcash, and Bitcoin mining, work, is by requiring provers (ie. mail senders or miners) to find a "nonce" such that SHA256(message+nonce) starts with a large number of zeroes, and then send the valid nonce along with the message as the proof of work. For example, the hash of block 254291 is:

000000000000003cf55c8d254fc97d2850547e5b787a936bc729497d76443a89

On average, it would take 72057 trillion attempts to find a nonce that, when hashed together with a block, returns a value starting with this many zeroes (technically, 282394 trillion since the POW requirement is a bit more complex than "starts with this many zeroes", but the general principle is the same). The reason this artificial difficulty exists is to prevent attackers from overpowering the Bitcoin network and introducing alternative blockchains that reverse previous transactions and block new transactions; any attacker trying to flood the Bitcoin network with their own fake blocks would need to make 282394 trillion SHA256 computations to produce each one.

However, there is a problem: proof of work is highly wasteful. Six hundred trillion SHA256 computations are being performed by the Bitcoin network every second, and ultimately these computations have no practical or scientific value; their only purpose is to solve proof of work problems that are deliberately made to be hard so that malicious attackers cannot easily pretend to be millions of nodes and overpower the network. Of course, this waste is not inherently evil; given no alternatives, the wastefulness of proof of work may well be a small price to pay for the reward of a decentralized and semi-anonymous global currency network that allows anyone to instantly send money to anyone else in the world for virtually no fee. And in 2009 proof of work was indeed the only option. Four years later, however, we have developed a number of alternatives.

Sunny King's Primecoin is perhaps the most moderate, and yet at the same time potentially the most promising, solution. Rather than doing away with proof of work entirely, Primecoin seeks to make its proof of work useful. Rather than using SHA256 computations, Primecoin requires miners to look for long "Cunningham chains" of prime numbers – chains of values n-1, 2n-1, 4n-1, etc. up to some length such that all of the values in the chain are prime (for the sake of accuracy, n+1, 2n+1, 4n+1 can also be a valid Cunningham chain, and Primecoin also accepts "bi-twin chains" of the form n-1, n+1, 2n-1, 2n+1 where all terms are prime). It is not immediately obvious how these chains are useful – Primecoin advocates have pointed to a few theoretical applications, but these all require only chains of length 3 which are trivial to produce. However, the stronger argument is that in modern Bitcoin mining the majority of the production cost of mining hardware is actually researching methods of mining more efficiently (ASICs, optimized circuits, etc) and not building or running the devices themselves, and in a Primecoin world this research would go towards finding more efficient ways of doing arithmetic and number theory computation instead – things which have applications far beyond just mining cryptocurrencies.
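To make the nonce search concrete, here is a toy hashcash-style proof of work in Python. This is a sketch for illustration only, not Bitcoin's actual mining code: the message, the difficulty measured in leading hex zeroes, and the string encoding of the nonce are all arbitrary choices.

import hashlib
from itertools import count

def proof_of_work(message, difficulty=4):
    # search for a nonce such that SHA256(message + nonce) starts with
    # 'difficulty' hex zeroes; on average this takes 16**difficulty attempts
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256((message + str(nonce)).encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest

def verify(message, nonce, difficulty=4):
    # verification is a single hash, no matter how long the search took
    digest = hashlib.sha256((message + str(nonce)).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = proof_of_work("hello")
print(nonce, digest)
print(verify("hello", nonce))

Each additional hex zero multiplies the expected search cost by 16 while the verification cost stays at one hash, which is the asymmetry both Hashcash and Bitcoin rely on (Bitcoin's real target is a 256-bit threshold rather than a count of zero digits).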
The reason why Primecoin-like "useful POWs" are the most promising is that, if the computations are useful enough, the currency's "waste factor" can actually drop below zero, making the currency a public good. For example, suppose that there is a computation which, somehow, has a 1 in 10^20 chance of getting researchers significantly further along the way to curing cancer. Right now, no individual or organization has much of an incentive to attempt it: if they get lucky and succeed, they could either release the secret and earn little personal benefit beyond some short-lived media recognition or they could try to sell it to a few researchers under a non-disclosure agreement, which would rob everyone not under the non-disclosure agreement of the benefits of the discovery and likely not earn too much money in any case. If this magic computation was integrated into a currency, however, the block reward would incentivize many people to perform the computation, and the results of the computations would be visible on the blockchain for everyone to see. The societal reward would be more than worth the electricity cost.

However, so far we know of no magical cancer-curing computation; the closest is Folding@home, but it lacks mathematical verifiability – a dishonest miner can easily cheat by making fake computations that are indistinguishable from real results to any proof of work checker but have no value to society. As far as mathematically verifiable useful POWs go, Primecoin is the best we have, and whether its societal benefit fully outweighs its production and electricity cost is hard to tell; many people doubt it. But even then, what Primecoin accomplished is very praiseworthy; even partially recovering the costs of mining as a public good is better than nothing.

Proof of Stake

However, there is one SHA256 alternative that is already here, and that essentially does away with the computational waste of proof of work entirely: proof of stake. Rather than requiring the prover to perform a certain amount of computational work, a proof of stake system requires the prover to show ownership of a certain amount of money. The reason why Satoshi could not have done this himself is simple: before 2009, there was no kind of digital property which could securely interact with cryptographic protocols. Paypal and online credit card payments have been around for over ten years, but those systems are centralized, so creating a proof of stake system around them would allow Paypal and credit card providers themselves to cheat it by generating fake transactions. IP addresses and domain names are partially decentralized, but there is no way to construct a proof of ownership of either that could be verified in the future. Indeed, the first digital property that could possibly work with an online proof of stake system is Bitcoin (and cryptocurrency in general) itself.

There have been several proposals on how proof of stake can be implemented; the only one that is currently working in practice, however, is PPCoin, once again created by Sunny King. PPCoin's proof of stake algorithm works as follows. When creating a proof-of-stake block, a miner needs to construct a "coinstake" transaction, sending some money in their possession to themselves as well as a preset reward (like an interest rate, similar to Bitcoin's 25 BTC block reward). A SHA256 hash is calculated based only on the transaction input, some additional fixed data, and the current time (as an integer representing the number of seconds since Jan 1, 1970).
This hash is then checked against a proof of work requirement, much like Bitcoin, except the difficulty is inversely proportional to the "coin age" of the transaction input. Coin age is defined as the size of the transaction input, in PPCoins, multiplied by the time that the input has existed. Because the hash is based only on the time and static data, there is no way to make hashes quickly by doing more work; every second, each PPCoin transaction output has a certain chance of producing a valid proof of work proportional to its age and how many PPCoins it contains, and that is that. Essentially, every PPCoin can act as a "simulated mining rig", albeit with the interesting property that its mining power goes up linearly over time but resets to zero every time it finds a valid block.

It is not clear if using coin age as PPCoin does rather than just output size is strictly necessary; the original intent of doing so was to prevent miners from re-using their coins multiple times, but PPCoin's current design does not actually allow miners to consciously try to generate a block with a specific transaction output. Rather, the system does the equivalent of picking a PPCoin at random every second and maybe giving its owner the right to create a block. Even without including age as a weighting factor in the randomness, this is roughly equivalent to a Bitcoin mining setup but without the waste. However, there is one more sophisticated argument in coin age's favor: because your chance of success goes up the longer you fail to create a block, miners can expect to create blocks more regularly, reducing the incentive to dampen the risk by creating the equivalent of centralized mining pools.

Beyond Cryptocurrency

But what makes proof of stake truly interesting is the fact that it can be applied to much more than just currency. So far, anti-spam systems have fallen into three categories: proof of work, captchas and identity systems. Proof of work, used in systems like Hashcash and Bitmessage, we have already discussed extensively above. Captchas are used very widely on the internet; the idea is to present a problem that a human can easily solve but a computer can't, thereby distinguishing the two (CAPTCHA stands for "Completely Automated Public Turing test to tell Computers and Humans Apart"). In practice, this usually involves presenting a messy image containing letters and numbers, and requiring the solver to type in what the letters and numbers are. Recent providers have implemented a "public good" component into the system by making part of the captcha a word from a printed book, using the power of the crowd to digitize old printed literature. Unfortunately, captchas are not that effective; recent machine-learning efforts have achieved success rates of 30-96% – similar to that of humans themselves.

Identity systems come in two forms. First, there are systems that require users to register with their physical identity; this is how democracies have so far avoided being overrun by anonymous trolls. Second, there are systems that require some fee to get into, and moderators can close accounts without refund if they are found to be trying to abuse the system. These systems work, but at the cost of privacy.

Proof of stake can be used to provide a fourth category of anti-spam measure. Imagine that, instead of filling in a captcha to create a forum account, a user can consume coin age by sending a Bitcoin or PPCoin transaction to themselves instead.
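Before looking at how consuming coin age could work in practice, here is a minimal Python sketch of the coin-age lottery described above. It is not PPCoin's actual kernel code: the unspent output is just a dict, and the difficulty constant and hash inputs are simplified assumptions for illustration.

import hashlib
import time

BASE_TARGET = 2 ** 200  # assumed constant, not PPCoin's real difficulty

def coin_age(utxo, now):
    # coin age = size of the output, in coins, multiplied by how long it has existed
    return utxo["amount"] * (now - utxo["created_at"])

def stake_hash(utxo, now):
    # the hash depends only on fixed data and the current second,
    # so extra CPU work cannot buy extra attempts
    data = "%s:%d:%d" % (utxo["txid"], utxo["amount"], now)
    return int(hashlib.sha256(data.encode()).hexdigest(), 16)

def may_create_block(utxo, now):
    # the target scales with coin age: older and larger outputs win more often
    return stake_hash(utxo, now) < BASE_TARGET * max(coin_age(utxo, now), 1)

utxo = {"txid": "abc123", "amount": 500, "created_at": int(time.time()) - 90 * 86400}
print(may_create_block(utxo, int(time.time())))

Run once per second per output, this gives each coin roughly the behaviour the article describes: a simulated mining rig whose power grows with age and resets when it wins.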
To make sure each proof of stake computation is done by the user, and not simply randomly pulled from the blockchain, the system might require the user to also send a signed message with the same address, or perhaps send their money back to themselves in a specific way (eg. one of the outputs must contain exactly 0.000XXXXX BTC, with the value randomly set each time). Note that here coin age is crucial; we want users to be able to create proofs of stake on demand, so something must be consumed to prevent reuse. In a way, a form of proof of stake already exists in the form of SMS verification, requiring users to send text messages to prove ownership of a phone to create a Google account – although this is hardly pure proof of stake, as phone numbers are also heavily tied with physical identity and the process of buying a phone is itself a kind of captcha. Thus, SMS verification has some of the advantages and some of the disadvantages of all three systems.

But proof of stake's real advantage is in decentralized systems like Bitmessage. Currently, Bitmessage uses proof of work because it has no other choice; there is no "decentralized captcha" solution out there, and there has been little research into figuring out how to make one. However, proof of work is wasteful, and makes Bitmessage a somewhat cumbersome and power-consuming system to use – for emails, it's fine, but for instant messaging forget about it. But if Bitmessage could be integrated into Bitcoin (or Primecoin or PPCoin) and use it as proof of stake, much of the difficulty and waste could be alleviated.

Does proof of stake have a future?

Many signs suggest that it certainly does. PPCoin founder Sunny King argues that Bitcoin's security will become too weak over time as its block reward continues to drop; indeed, this is one of his primary motivations for creating PPCoin and Primecoin. Since then, PPCoin has come to be the fifth largest cryptocurrency on the market, and an increasing number of new cryptocurrencies are copying its proof-of-stake design. Currently, PPCoin is not fully proof-of-stake; because it is a small cryptocurrency with a highly centralized community, the risk of some kind of takeover is higher than with Bitcoin, so a centralized checkpointing system does exist, allowing developers to create "checkpoints" that are guaranteed to remain part of the transaction history forever regardless of what any attacker does. Eventually, the intent is both to move toward making the checkpointing system more decentralized and to reduce its power as PPCoins come to be owned by a larger group of people. An alternative approach might be to integrate proof of stake as a decentralized checkpointing system into Bitcoin itself; for example, one protocol might allow any coalition of people with at least 1 million BTC-years to consume their outputs to generate a checkpoint that the community would agree is a valid block, at the cost of sending their coins to themselves and consuming coin age.

In 2009, cryptocurrency emerged as the culmination of a number of unrelated cryptographic primitives: hash functions, Merkle trees, proof of work and public key cryptography all play key roles in Bitcoin's construction. Now, however, Bitcoin and cryptocurrencies are here to stay, and this presents another exciting possibility for the future of cryptography: we can now design protocols that build off of cryptocurrency itself – of which proof of stake is the perfect example.
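Returning briefly to the anti-spam idea sketched earlier (account creation by consuming coin age rather than solving a captcha), the verification side might look roughly like the following. Everything here is hypothetical: the transaction is a plain dict and no real blockchain is consulted; the point is only to show a server checking that the user sent a server-chosen amount back to their own address and burned enough coin age in doing so.

import random
import time

def issue_challenge():
    # the server picks a random amount the user's transaction must send back to itself,
    # so the proof cannot simply be copied from an old transaction on the blockchain
    return {"required_amount": round(random.uniform(0.00001, 0.00099), 8),
            "issued_at": int(time.time())}

def verify_stake_proof(tx, challenge, min_coin_days=30):
    # tx is a simplified stand-in for a real transaction:
    # {"from": address, "outputs": [(address, amount)], "input_amount": coins, "input_age_days": days}
    sends_back = any(addr == tx["from"] and amount == challenge["required_amount"]
                     for addr, amount in tx["outputs"])
    consumed_coin_days = tx["input_amount"] * tx["input_age_days"]
    return sends_back and consumed_coin_days >= min_coin_days

challenge = {"required_amount": 0.00042, "issued_at": int(time.time())}
tx = {"from": "user-address",
      "outputs": [("user-address", 0.00042)],
      "input_amount": 2.0,
      "input_age_days": 45}
print(verify_stake_proof(tx, challenge))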
Proof of stake can be used to secure a cryptocurrency, it can be used in decentralized anti-spam systems, and probably in dozens of other protocols that we haven't even thought of yet – just like no one had thought of anything like Bitcoin until Wei Dai's b-money in 1998. The possibilities are endless.

Tags: Hashcash, Proof of work, Proof of stake, Primecoin, Sunny King. By Vitalik Buterin.

blog-cbeer-info-7264 ---- blog.cbeer.info

Autoscaling AWS Elastic Beanstalk worker tier based on SQS queue length
LDPath in 3 examples
Building a Pivotal Tracker IRC bot with Sinatra and Cinch
Real-time statistics with Graphite, Statsd, and GDash
Icemelt: A stand-in for integration tests against AWS Glacier

blog-dataunbound-com-474 ---- Data Unbound: Helping organizations access and share data effectively. Special focus on web APIs for data integration.

Some of what I missed from the Cmd-D Automation Conference
The CMD-D|Masters of Automation one-day conference in early August would have been right up my alley:

It'll be a full day of exploring the current state of automation technology on both Apple platforms, sharing ideas and concepts, and showing what's possible—all with the goal of inspiring and furthering development of your own automation projects.

Fortunately, those of us who missed it can still get a meaty summary of the meeting by listening to the podcast segment Upgrade #154: Masters of Automation – Relay FM. I've been keen on automation for a long time now and was delighted to hear the panelists express their own enthusiasm for customizing their Macs, iPhones, or iPads to make repetitive tasks much easier and less time-consuming.
Noteworthy take-aways from the podcast include:

- Something that I hear and believe but have yet to experience in person: non-programmers can make use of automation through applications such as Automator — for macOS — and Workflow for iOS. Also mentioned often as tools that are accessible to non-geeks: Hazel and Alfred – Productivity App for Mac OS X.
- Automation can make the lives of computer users easier but it's not immediately obvious to many people exactly how. To make a lot of headway in automating your workflow, you need a problem that you are motivated to solve.
- Many people use AppleScript by borrowing from others, just like how many learn HTML and CSS from copying, pasting, and adapting source on the web.
- Once you get a taste for automation, you will seek out applications that are scriptable and avoid those that are not. My question is how to make it easier for developers to make their applications scriptable without incurring onerous development or maintenance costs?
- E-book production is an interesting use case for automation.
- People have built businesses around scripting Photoshop [is there really a large enough market?].
- OmniGroup's automation model is well worth studying and using.

I hope there will be a conference next year to continue fostering this community of automation enthusiasts and professionals. 2017 09 25 Raymond Yee automation macOS

Fine-tuning a Python wrapper for the hypothes.is web API and other #ianno17 followup
In anticipation of #ianno17 Hack Day, I wrote about my plans for the event, one of which was to revisit my own Python wrapper for the nascent hypothes.is web API. Instead of spending much time on my own wrapper, I spent most of the day working with Jon Udell's wrapper for the API. I've been working on my own revisions of the library but haven't yet incorporated Jon's latest changes. One nice little piece of the puzzle is that I learned how to introduce retries and exponential backoff into the library, thanks to a hint from Nick Stenning and a nice answer on Stackoverflow.

Other matters
In addition to the Python wrapper, there are other pieces of follow-up for me. I hope to write more extensively on those matters down the road but simply note those topics for the moment.

Videos from the conference
I might start by watching videos from the #ianno17 conference: I Annotate 2017 – YouTube. Because I didn't attend the conference per se, I might glean insight into two particular topics of interest to me (the role of page owner in annotations and the intermingling of annotations in ebooks).

An extension for embedding selectors in the URL
I will study and try Treora/precise-links: Browser extension to support Web Annotation Selectors in URIs. I've noticed that the same annotation is shown in two related forms:

https://hyp.is/Zj2dyi9tEeeTmxvuPjLhSw/blog.dataunbound.com/2017/05/01/revisiting-hypothes-is-at-i-annotate-2017/
https://blog.dataunbound.com/2017/05/01/revisiting-hypothes-is-at-i-annotate-2017/#annotations:Zj2dyi9tEeeTmxvuPjLhSw

Does the precise-links extension let me write the selectors into the URL? 2017 05 22 Raymond Yee annotation

Revisiting hypothes.is at I Annotate 2017
I'm looking forward to hacking on web and epub annotation at the #ianno17 Hack Day. I won't be at the I Annotate 2017 conference per se but will be curious to see what comes out of the annual conference. I continue to have high hopes for digital annotations, both on the Web and in non-web digital contexts.
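As an aside on the retries-and-exponential-backoff point from the wrapper post above, the pattern looks roughly like the following sketch. The endpoint URL, parameters, and retry settings are assumptions for illustration; this is not the actual hypothesisapi code or Jon Udell's wrapper.

import time
import requests

def get_with_backoff(url, params=None, max_retries=5, base_delay=1.0):
    # retry on rate limiting or transient server errors, doubling the wait each time
    for attempt in range(max_retries):
        response = requests.get(url, params=params)
        if response.status_code not in (429, 500, 502, 503, 504):
            return response
        time.sleep(base_delay * (2 ** attempt))
    return response

# hypothetical call against the hypothes.is search API
r = get_with_backoff("https://hypothes.is/api/search",
                     params={"uri": "https://blog.dataunbound.com/", "limit": 20})
print(r.status_code)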
I have used Hypothesis on and off since Oct 2013. My experiences so far:

- I like the ability to highlight and comment on very granular sections of articles, something the hypothes.is annotation tool makes easy to do.
- I appreciate being able to share an annotation or highlight with others (on Twitter or Facebook), though I'm pretty sure most people who bother to click on such a link might wonder "what's this?" when they get there. A small user request: hypothes.is should allow a user to better customize the Facebook preview image for the annotation.
- I've enjoyed using hypothes.is for code review on top of GitHub. (Exactly how hypothes.is complements the extensive code-commenting functionality in GitHub might be worth a future blog post.)

My Plans for Hack Day

Python wrapper for hypothes.is
This week, I plan to revisit rdhyee/hypothesisapi: A Python wrapper for the nascent hypothes.is web API, to update it or abandon it in favor of new developments. (For example, I should look at kshaffer/pypothesis: Python scripts for interacting with the hypothes.is API.)

Epubs + annotations
I want to figure out the state of the art for epubs and annotations. I'm happy to see the announcement of a partnership to bring open annotation to eBooks from March 2017. I'd definitely like to figure out how to annotate epubs (e.g., Oral Literature in Africa (at unglue.it) or Moby Dick). The best approach is probably for me to wait until summer, at which time we'll see the fruits of the partnership:

Together, our goal is to complete a working integration of Hypothesis with both EPUB frameworks by Summer 2017. NYU plans to deploy the ReadiumJS implementation in the NYU Press Enhanced Networked Monographs site as a first use case. Based on lessons learned in the NYU deployment, we expect to see wider integration of annotation capabilities in eBooks as EPUB uptake continues to grow.

In the meantime, I can catch up on the current state of futurepress/epub.js (Enhanced eBooks in the browser), grok Epub CFI Updates, and relearn how to parse epubs using Python (e.g., rdhyee/epub_avant_garde: an experiment to apply ideas from https://github.com/sandersk/ebook_avant_garde to arbitrary epubs).

Role of page owners
I plan to check in on what's going on with efforts at Hypothes.is to involve owners in page annotations:

In the past months we launched a small research initiative to gather different points of view about website publishers and authors consent to annotation. Our goal was to identify different paths forward taking into account the perspectives of publishers, engineers, developers and people working on abuse and harassment issues. We have published a first summary of our discussion on our blog post about involving page owners in annotation.

I was reminded of these efforts after reading that Audrey Watters had blocked annotation services like hypothes.is and genius from her domains: Un-Annotated Episode 52: Marginalia. In the spirit of communal conversation, I threw in my two cents: Has there been any serious exploration of easy opt-out mechanisms for domain owners? Something like robots.txt for annotation tools? 2017 05 01 Raymond Yee annotation

My thoughts about Fargo.io
using fargo.io 2013 11 03 Raymond Yee

Organizing Your Life With Python: a submission for PyCon 2015?
I have penciled into my calendar a trip to Montreal to attend PyCon 2014.
In my moments of suboptimal planning, I wrote an overly ambitious abstract for a talk or poster session I was planning to submit. As I sat down this morning to meet the deadline for submitting a proposal for a poster session (Nov 1), I once again encountered the ominous (but for me, definitive) admonition:

Avoid presenting a proposal for code that is far from completion. The program committee is very skeptical of "conference-driven development".

It's true: my efforts to organize my life with Python are in the early stages. I hope that I'll be able to write something like the following for PyCon 2015.

Organizing Your Life with Python
David Allen's Getting Things Done (GTD) system is a popular system for personal productivity. Although GTD can be implemented without any computer technology, I have pursued two different digital implementations, including my current implementation using Evernote, the popular note-taking program. This talk explores using Python in conjunction with the Evernote API to implement GTD on top of Evernote. I have found that a major practical hindrance for using GTD is that it is way too easy to commit to too many projects. I will discuss how to combine Evernote, Python, GTD with concepts from Personal Kanban to solve this problem.

Addendum: Whoops… I find it embarrassing that I already quoted my abstract in a previous blog post in September that I had forgotten about. Oh well. Where's my fully functioning organization system when I need it! Tagged PyCon, Python. 2013 10 30 Raymond Yee Evernote GTD

Current Status of Data Unbound LLC in Pennsylvania
I'm currently in the process of closing down Data Unbound LLC in Pennsylvania. I submitted the paperwork to dissolve the legal entity in April 2013 and have been amazed to learn that it may take up to a year to get the final approval done. In the meantime, as I establish a similar California legal entity, I will certainly continue to write on this blog about APIs, mashups, and open data. 2013 10 30 Raymond Yee Data Unbound LLC

Must Get Cracking on Organizing Your Life with Python
Talk and tutorial proposals for PyCon 2014 are due tomorrow (9/15). I was considering submitting a proposal until I took to heart the program committee's admonition against "conference-driven" development. I will nonetheless use the Oct 15 and Nov 1 deadlines for lightning talks and proposals respectively to judge whether to submit a refinement of the following proposal idea:

Organizing Your Life with Python
David Allen's Getting Things Done (GTD) system is a popular system for personal productivity. Although GTD can be implemented without any computer technology, I have pursued two different digital implementations, including my current implementation using Evernote, the popular note-taking program. This talk explores using Python in conjunction with the Evernote API to implement GTD on top of Evernote. I have found that a major practical hindrance for using GTD is that it is way too easy to commit to too many projects. I will discuss how to combine Evernote, Python, GTD with concepts from Personal Kanban to solve this problem.

2013 09 14 Raymond Yee Getting Things Done Python

Embedding Github gists in WordPress
As I gear up to write more about programming, I have installed the Embed GitHub Gist plugin.
So by writing [gist id=5625043] in the text of this post, I can embed https://gist.github.com/rdhyee/5625043 into the post to get:

from itertools import islice

def triangular():
    n = 1
    i = 1
    while True:
        yield n
        i += 1
        n += i

for i, n in enumerate(islice(triangular(), 10)):
    print i+1, n

Tagged gist, github. 2013 05 21 Raymond Yee Wordpress

Working with Open Data
I'm very excited to be teaching a new course Working with Open Data at the UC Berkeley School of Information in the Spring 2013 semester:

Open data — data that is free for use, reuse, and redistribution — is an intellectual treasure-trove that has given rise to many unexpected and often fruitful applications. In this course, students will 1) learn how to access, visualize, clean, interpret, and share data, especially open data, using Python, Python-based libraries, and supplementary computational frameworks and 2) understand the theoretical underpinnings of open data and their connections to implementations in the physical and life sciences, government, social sciences, and journalism.

2012 11 23 Raymond Yee

A mundane task: updating a config file to retain old settings
I want to have a hand in creating an excellent personal information manager (PIM) that can be a worthy successor to Ecco Pro. So far, running EccoExt (a clever and expansive hack of Ecco Pro) has been an eminently practical solution. You can download the most recent version of this actively developed extension from the files section of the ecco_pro Yahoo! group. I would do so regularly, but one of the painful problems with unpacking (using unrar) the new files is that there wasn't an updater that would retain the configuration options of the existing setup. So a mundane but happy-making programming task of this afternoon was to write a Python script to do exactly that, making use of the builtin ConfigParser library.
""" compare eccoext.ini files My goal is to edit the new file so that any overlapping values take on the current value """ current_file_path = "/private/tmp/14868/C/Program Files/ECCO/eccoext.ini" new_file_path = "/private/tmp/14868/C/utils/eccoext.ini" updated_file = "/private/tmp/14868/C/utils/updated_eccoext.ini" # extract the key value pairs in both files to compare the two # http://docs.python.org/library/configparser.html import ConfigParser def extract_values(fname): # generate a parsed configuration object, set of (section, options) config = ConfigParser.SafeConfigParser() options_set = set() config.read(fname) sections = config.sections() for section in sections: options = config.options(section) for option in options: #value = config.get(section,option) options_set.add((section,option)) return (config, options_set) # process current file and new file (current_config, current_options) = extract_values(current_file_path) (new_config, new_options) = extract_values(new_file_path) # what are the overlapping options overlapping_options = current_options & new_options # figure out which of the overlapping options are the values different for (section,option) in overlapping_options: current_value = current_config.get(section,option) new_value = new_config.get(section,option) if current_value != new_value: print section, option, current_value, new_value new_config.set(section,option,current_value) # write the updated config file with open(updated_file, 'wb') as configfile: new_config.write(configfile) 2011 02 12 Raymond Yee Ecco Pro Python Comments (0) Permalink « Older posts Pages About Categories Amazon annotation announcments APIs architecture art history automation bibliographics bioinformatics BPlan 2009 Chickenfoot Citizendium collaboration consulting copyright creative commons data mining Data Unbound LLC digital scholarship Ecco Pro education Evernote Firefox Flickr freebase Getting Things Done Google government GTD hardware HCI higher education humanities imaging iSchool journalism libraries macOS mashups meta MITH API workshop Mixing and Remixing information notelets OCLC open access open data OpenID personal information management personal news politics Processing programming tip prototype publishing Python recovery.gov tracking repositories REST screen scraping screencast services SOAP training tutorial UC Berkeley Uncategorized web hosting web services web20 weblogging Wikipedia Wordpress writing Zotero Tags API art history books Chickenfoot codepad coins creative commons data hosting data portability Educause EXIF Firefox Flickr freebase JCDL JCDL 2008 kses Library of Congress mashups mashup symfony Django metadata news NYTimes AmazonEC2 AmazonS3 OMB OpenID openlibrary OpenOffice.org photos politics Project Bamboo Python pywin32 recovery.gov tracking screencast stimulus sychronization video webcast Wikipedia Windows XP WMI Wordpress workshops XML in libraries Zotero Blogroll Information Services and Technology, UC Berkeley UC Berkeley RSS Feeds All posts All comments Meta Log in Blog Search © 2021 | Thanks, WordPress | Barthelme theme by Scott Allan Wallick | Standards Compliant XHTML & CSS | RSS Posts & Comments blog-dataunbound-com-6523 ---- Data Unbound Data Unbound Helping organizations access and share data effectively. Special focus on web APIs for data integration. 
blog-dshr-org-100 ---- DSHR's Blog: Hardware I/O Virtualization

DSHR's Blog. I'm David Rosenthal, and this is a place to discuss the work I'm doing in Digital Preservation.
Tuesday, December 16, 2014

Hardware I/O Virtualization

At enterprisetech.com, Timothy Prickett Morgan has an interesting post entitled A Rare Peek Into The Massive Scale Of AWS. It is based on a talk by Amazon's James Hamilton at the re:Invent conference. Morgan's post provides a hierarchical, network-centric view of the AWS infrastructure:

- Regions, 11 of them around the world, contain Availability Zones (AZ).
- The 28 AZs are arranged so that each Region contains at least 2 and up to 6 of them.
- Morgan estimates that there are close to 90 datacenters in total, each with 2000 racks, burning 25-30MW. Each rack holds 25 to 40 servers.
- AZs are no more than 2ms apart measured in network latency, allowing for synchronous replication. This means the AZs in a region are only a couple of kilometres apart, which is less geographic diversity than one might want, but a disaster still has to have a pretty big radius to take out more than one AZ.
- The datacenters in an AZ are not more than 250us apart in latency terms, close enough that a disaster might take all the datacenters in one AZ out.

Below the fold, some details and the connection between what Amazon is doing now, and what we did in the early days of NVIDIA.

Amazon uses custom-built hardware, including network hardware, and their own network software. Doing so is simpler and more efficient than generic hardware and software because they only need to support a very restricted set of configurations and services. In particular they build their own network interface cards (NICs). The reason is particularly interesting to me, as it is to solve exactly the same problem that we faced as we started NVIDIA more than two decades ago.

The state of the art of PC games, and thus PC graphics, was based on Windows, at that stage little more than a library on top of MS-DOS. The game was the only application running on the hardware. It didn't have to share the hardware with, and thus need the operating system (OS) to protect it from, any other application. Coming from the Unix world we knew how the OS shared access to physical hardware devices, such as the graphics chip, among multiple processes while protecting them (and the operating system) from each other. Processes didn't access the devices directly, they made system calls which invoked device driver code in the OS kernel that accessed the physical hardware on their behalf. We understood that Windows would have to evolve into a multi-process OS with real inter-process protection.

Our problem, like Amazon's, was two-fold: latency and the variance of latency. If the games were to provide arcade performance on mid-90s PCs, there was no way the game software could take the overhead of calling into the OS to perform graphics operations on its behalf. It had to talk directly to the graphics chip, not via a driver in the OS kernel. If there had been only a single process doing graphics, such as the X server, this would not have been a problem. Using the Memory Management Unit (MMU), the hardware provided to mediate access of multiple processes to memory, the OS could have mapped the graphics chip's IO registers into that process' address space. That process could access the graphics chip with no OS overhead. Other processes would have to use inter-process communications to request graphics operations, as X clients do.
SEGA's Virtua Fighter on NV1 Because we expected there to be many applications simultaneously doing graphics, and they all needed low, stable latency, we needed to make it possible for the OS safely to map the chip's registers into multiple processes at one time. We devoted a lot of the first NVIDIA chip to implementing what looked to the application like 128 independent sets of I/O registers. The OS could map one of the sets into a process' address space, allowing it to do graphics by writing directly to these hardware registers. The technical name for this is hardware I/O virtualization; we pioneered this technology in the PC space. It provided the very low latency that permitted arcade performance on the PC, despite other processes doing graphics at the same time. And because the competition between the multiple process' accesses to their virtual I/O resources was mediated on-chip as it mapped the accesses to the real underlying resources, it provided very stable latency without the disruptive long tail that degrades the user experience. Amazon's problem was that, like PCs running multiple graphics applications on one real graphics card, they run many virtual machines (VMs) on each real server. These VMs have to share access to the physical network interface card (NIC). Mediating this in software in the hypervisor imposes both overhead and variance. Their answer was enhanced NICs: The network interface cards support Single Root I/O Virtualization (SR-IOV), which is an extension to the PCI-Express protocol that allows the resources on a physical network device to be virtualized. SR-IOV gets around the normal software stack running in the operating system and its network drivers and the hypervisor layer that they sit on. It takes milliseconds to wade down through this software from the application to the network card. It only takes microseconds to get through the network card itself, and it takes nanoseconds to traverse the light pipes out to another network interface in another server. “This is another way of saying that the only thing that matters is the software latency at either end,” explained Hamilton. SR-IOV is much lighter weight and gives each guest partition on a virtual machine its own virtual network interface card, which rides on the physical card. This, as shown on Hamilton's graph, provides much less variance in latency: The new network, after it was virtualized and pumped up, showed about a 2X drop in latency compared to the old network at the 50th percentile for latency on data transmissions, and at the 99.9th percentile the latency dropped by about a factor of 10X. The importance of reducing the variance of latency for Web services at Amazon scale is detailed in a fascinating, must-read paper, The Tail At Scale by Dean and Barroso. Amazon had essentially the same problem we had, and came up with the same basic hardware solution - hardware I/O virtualization. Posted by David. at 8:00 AM Labels: amazon, networking 2 comments: David. said... In the first of a series Rich Miller at Data Center Frontier adds a little to the Timothy Prickett Morgan post. September 23, 2015 at 6:48 PM David. said... PETTYOFFICER117's video history of Nvidia GPUs is worth watching. October 30, 2019 at 8:50 PM Post a Comment Newer Post Older Post Home Subscribe to: Post Comments (Atom) Blog Rules Posts and comments are copyright of their respective authors who, by posting or commenting, license their work under a Creative Commons Attribution-Share Alike 3.0 United States License. 
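The Dean and Barroso point the post cites can be made concrete with a little arithmetic. This is a toy calculation, not taken from the paper: if each server exceeds some latency threshold on only 0.1% of requests, a user request that fans out to many servers in parallel is slowed whenever at least one of them is slow, which is why the 99.9th percentile matters at Amazon scale.

```python
# Toy illustration of the tail-at-scale effect: what fraction of user
# requests that fan out to N servers hit at least one slow server, if
# each server is slow on only 0.1% of requests?
def p_request_slow(p_server_slow: float, fanout: int) -> float:
    return 1.0 - (1.0 - p_server_slow) ** fanout

for fanout in (1, 10, 100, 1000):
    print(f"fanout {fanout:4d}: {p_request_slow(0.001, fanout):.1%} of requests slowed")
# fanout  100 -> ~9.5% of user requests see the tail
# fanout 1000 -> ~63%  of user requests see the tail
```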
Off-topic or unsuitable comments will be deleted.
blog-dshr-org-1474 ---- DSHR's Blog: Yet Another DNA Storage Technique DSHR's Blog I'm David Rosenthal, and this is a place to discuss the work I'm doing in Digital Preservation.
Tuesday, July 27, 2021 Yet Another DNA Storage Technique An alternative approach to nucleic acid memory by George D. Dickinson et al from Boise State University describes a fundamentally different way to store and retrieve data using DNA strands as the medium. Will Hughes et al have an accessible summary in DNA ‘Lite-Brite’ is a promising way to archive data for decades or longer: We and our colleagues have developed a way to store data using pegs and pegboards made out of DNA and retrieving the data with a microscope – a molecular version of the Lite-Brite toy.
Our prototype stores information in patterns using DNA strands spaced about 10 nanometers apart. Below the fold I look at the details of the technique they call digital Nucleic Acid Memory (dNAM). The traditional way to use DNA as a storage medium is to encode the data in the sequence of bases in a synthesized strand, then use sequencing to retrieve the data. Instead: dNAM uses advancements in super-resolution microscopy (SRM) to access digital data stored in short oligonucleotide strands that are held together for imaging using DNA origami. In dNAM, non-volatile information is digitally encoded into specific combinations of single-stranded DNA, commonly known as staple strands, that can form DNA origami nanostructures when combined with a scaffold strand. When formed into origami, the staple strands are arranged at addressable locations ... that define an indexed matrix of digital information. This site-specific localization of digital information is enabled by designing staple strands with nucleotides that extend from the origami.
Writing
In dNAM, writing their 20-character message "Data is in our DNA!\n" involved encoding it into 15 16-bit fountain code droplets then synthesizing two different types of DNA sequences:
Origami: There is one origami for each 16 bits of data to be stored. It forms a 6x8 matrix holding a 4-bit index, the 16 bits of droplet data, 20 bits of parity, 4 bits of checksum, and 4 orientation bits. Each of the 48 cells thus contains a unique, message-specific DNA sequence.
Staples: There is one staple for each of the 15x48 matrix cells, with one end of the strand matching the matrix cell's sequence, and the other indicating a 0 or a 1 by the presence or absence of a sequence that binds to the fluorescent DNA used for reading.
When combined, the staple strands bind to the appropriate cells in the origami, labelling each cell as a 0 or a 1.
Reading
The key difference between dNAM and traditional DNA storage techniques is that dNAM reads data without sequencing the DNA. Instead, it uses optical microscopy to identify each "peg" (staple strand) in each matrix cell as either a 0 or a 1: The patterns of DNA strands – the pegs – light up when fluorescently labeled DNA bind to them. Because the fluorescent strands are short, they rapidly bind and unbind. This causes them to blink, making it easier to separate one peg from another and read the stored information. The difficulty in doing so is that the pegs are on a 10 nanometer grid: Because the DNA pegs are positioned closer than half the wavelength of visible light, we used super-resolution microscopy, which circumvents the diffraction limit of light. The technique is called "DNA-Points Accumulation for Imaging in Nanoscale Topography (DNA-PAINT)". The process to recover the 20-character message was: 40,000 frames from a single field of view were recorded using DNA-PAINT (~4500 origami identified in 2982 µm²). The super-resolution images of the hybridized imager strands were then reconstructed from blinking events identified in the recording to map the positions of the data domains on each origami ... Using a custom localization processing algorithm, the signals were translated to a 6 × 8 grid and converted back to a 48-bit binary string — which was passed to the decoding algorithm for error correction, droplet recovery, and message reconstruction ... The process enabled successful recovery of the dNAM encoded message from a single super-resolution recording.
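To make the 48-bit layout concrete, here is a toy sketch of the per-origami bookkeeping. It is not the paper's actual encoder: the real design uses fountain-code droplets and a specific parity scheme, while this sketch just slices the message into plain 16-bit words and fills the parity and checksum fields with placeholders, so that the 4 + 16 + 20 + 4 + 4 = 48 bit budget of the 6x8 grid is visible.

```python
# Toy sketch of the per-origami bit budget described above: 48 bits per
# 6x8 grid = 4-bit index + 16 data bits + 20 parity bits + 4 checksum
# bits + 4 orientation bits. NOT the paper's encoder: real dNAM uses
# fountain-code droplets and a specific parity scheme; here the data
# words are plain 16-bit slices and parity/checksum are placeholders.

def bits(value: int, width: int) -> list:
    return [(value >> i) & 1 for i in range(width - 1, -1, -1)]

def encode_message(message: str) -> list:
    payload = message.encode()
    words = [int.from_bytes(payload[i:i + 2].ljust(2, b"\0"), "big")
             for i in range(0, len(payload), 2)]
    grids = []
    for index, word in enumerate(words):
        data = bits(word, 16)
        parity = [data[i] ^ data[(i + 1) % 16] for i in range(16)] + [0] * 4  # placeholder
        checksum = bits(sum(data) % 16, 4)                                    # placeholder
        orientation = [1, 0, 0, 1]            # fixed marker bits
        cells = bits(index, 4) + data + parity + checksum + orientation
        assert len(cells) == 48               # exactly fills the 6x8 matrix
        grids.append([cells[r * 8:(r + 1) * 8] for r in range(6)])
    return grids

grids = encode_message("Data is in our DNA!\n")
print(len(grids), "origami,", sum(len(row) for row in grids[0]), "cells each")
```

Running it on the 20-character message yields 10 grids rather than the paper's 15, because fountain coding adds redundant droplets that this toy omits.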
Analysis
The first thing to note is that whereas traditional DNA storage techniques are volumetric, dNAM, like hard disk or tape, is areal. It will therefore be unable to match the extraordinary data density potentially achievable using the traditional approach. dNAM claims: After accounting for the bits used by the algorithms, our prototype was able to read data at a density of 330 gigabits per square centimeter. Current hard disks have an areal density of 1.3Tbit/inch², or about 200Gbit/cm², so for a prototype this is good but not revolutionary. The areal density is set by the 10nm grid spacing, and it may not be possible to reduce that spacing much further. Hard disk vendors have demonstrated 400Gbit/cm² and have roadmaps to around 800Gbit/cm². dNAM's writing process seems more complex than the traditional approach, so it is unlikely to be faster or cheaper. The read process is likely to be both faster and cheaper, because DNA-PAINT images a large number of origami in parallel, whereas sequencing is sequential (duh!). But, as I have written, the big barrier to adoption of DNA storage is the low bandwidth and high cost of writing the data.
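A quick unit check on the comparison above (the hard-disk figure is quoted in Tbit/inch², dNAM's in Gbit/cm²):

```python
# Unit check for the comparison above: hard-disk density is quoted in
# Tbit/inch^2, dNAM's in Gbit/cm^2.
CM2_PER_INCH2 = 2.54 ** 2                      # 6.4516 cm^2 per inch^2

hdd_gbit_per_cm2 = 1.3 * 1000 / CM2_PER_INCH2  # 1.3 Tbit/inch^2
print(round(hdd_gbit_per_cm2))                 # ~202 Gbit/cm^2

print(round(330 / hdd_gbit_per_cm2, 2))        # dNAM prototype ~1.6x that
```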
blog-dshr-org-1638 ---- DSHR's Blog: Cryptocurrency's Carbon Footprint DSHR's Blog I'm David Rosenthal, and this is a place to discuss the work I'm doing in Digital Preservation.
Tuesday, April 13, 2021 Cryptocurrency's Carbon Footprint China’s bitcoin mines could derail carbon neutrality goals, study says and Bitcoin mining emissions in China will hit 130 million tonnes by 2024: the headlines say it all. Excusing this climate-destroying externality of Proof-of-Work blockchains requires a continuous flow of new misleading arguments. Below the fold I discuss one of the more recent novelties. In Bitcoin and Ethereum Carbon Footprints – Part 2, Moritz Seibert claims the reason for mining is to get the mining reward: Bitcoin transactions themselves don’t cause a lot of power usage. Getting the network to accept a transaction consumes almost no power, but having ASIC miners grind through the mathematical ether to solve valid blocks does. Miners are incentivized to do this because they are compensated for it. Presently, that compensation includes a block reward which is paid in bitcoin (6.25 BTC per block) as well as a miner fee (transaction fee). Transaction fees are denominated in fractional bitcoins and paid by the initiator of the transaction. Today, about 15% of total miners’ rewards are transactions fees, and about 85% are block rewards. So, he argues, Bitcoin's current catastrophic carbon footprint doesn't matter because, as the reward decreases, so will the carbon footprint: This also means that the power usage of the Bitcoin network won’t scale linearly with the number of transactions as the network becomes predominantly fee-based and less rewards-based (which causes a lot of power to the thrown at it in light of increasing BTC prices), and especially if those transactions take place on secondary layers. In other words, taking the ratio of “Bitcoin’s total power usage” to “Number of transactions” to calculate the “Power cost per transaction” falsely implies that all transactions hit the final settlement layer (they don’t) and disregards the fact that the final state of the Bitcoin base layer is a fee-based state which requires a very small fraction of Bitcoin’s overall power usage today (no more block rewards). Seibert has some vague idea that there are implications of this not just for the carbon footprint but also for the security of the Bitcoin blockchain: Going forward however, miners’ primary revenue source will change from block rewards to the fees paid for the processing of transactions, which don’t per se cause high carbon emissions.
Bitcoin is set to become be a purely fee-based system (which may pose a risk to the security of the system itself if the overall hash rate declines, but that’s a topic for another article because a blockchain that is fully reliant on fees requires that BTCs are transacted with rather than held in Michael Saylor-style as HODLing leads to low BTC velocity, which does not contribute to security in a setup where fees are the only rewards for miners.) Lets leave aside the stunning irresponsibility of arguing that it is acceptable to dump huge amounts of long-lasting greenhouse gas into the atmosphere now because you believe that in the future you will dump less. How realistic is the idea that decreasing the mining reward will decrease the carbon footprint? The graph shows the history of the hash rate, which is a proxy for the carbon footprint. You can see the effect of the "halvening", when on May 11th 2020 the mining reward halved. There was a temporary drop, but the hash rate resumed its inexorable rise. This experiment shows that reducing the mining reward doesn't reduce the carbon footprint. So why does Seibert think that eliminating it will reduce the carbon footprint? The answer appears to be that Seibert thinks the purpose of mining is to create new Bitcoins, that the reason for the vast expenditure of energy is to make the process of creating new coins secure, and that it has nothing to do with the security of transactions. This completely misunderstands the technology. In The Economic Limits of Bitcoin and the Blockchain, Eric Budish examines the return on investment in two kinds of attacks on a blockchain like Bitcoin's. The simpler one is a 51% attack, in which an attacker controls the majority of the mining power. Budish explains what this allows the attacker to do: An attacker could (i) spend Bitcoins, i.e., engage in a transaction in which he sends his Bitcoins to some merchant in exchange for goods or assets; then (ii) allow that transaction to be added to the public blockchain (i.e., the longest chain); and then subsequently (iii) remove that transaction from the public blockchain, by building an alternative longest chain, which he can do with certainty given his majority of computing power. The merchant, upon seeing the transaction added to the public blockchain in (ii), gives the attacker goods or assets in exchange for the Bitcoins, perhaps after an escrow period. But, when the attacker removes the transaction from the public blockchain in (iii), the merchant effectively loses his Bitcoins, allowing the attacker to “double spend” the coins elsewhere. Such attacks are endemic among the smaller alt-coins; for example there were three successful attacks on Ethereum Classic in a single month last year. Clearly, Seibert's future "transaction only" Bitcoin must defend against them. There are two ways to mount a 51% attack, from the outside or from the inside. An outside attack requires more mining power than the insiders are using, whereas an insider attack only needs a majority of the mining power to conspire. Bitcoin miners collaborate in "mining pools" to reduce volatility of their income, and for many years it would have taken only three or so pools to conspire for a successful attack. But assuming insiders are honest, outsiders must acquire more mining power than the insiders are using. Clearly, Bitcoin insiders are using so much mining power that this isn't feasible. The point of mining isn't to create new Bitcoins. 
Mining is needed to make the process of adding a block to the chain, and thus adding a set of transactions to the chain, so expensive that it isn't worth it for an attacker to subvert the process. The cost, and thus in the case of Proof of Work the carbon footprint, is the whole point. As Budish wrote: From a computer security perspective, the key thing to note ... is that the security of the blockchain is linear in the amount of expenditure on mining power, ... In contrast, in many other contexts investments in computer security yield convex returns (e.g., traditional uses of cryptography) — analogously to how a lock on a door increases the security of a house by more than the cost of the lock. Let's consider the possible futures of a fee-based Bitcoin blockchain. It turns out that currently fee revenue is a smaller proportion of total miner revenue than Seibert claims. The chart of total miner revenue shows about $60M/day, and the chart of fee revenue shows about $5M/day, so the split is about 8% fee, 92% reward. That leaves three possibilities:
If security stays the same, blocksize stays the same, fees must increase to keep the cost of a 51% attack high enough. The chart shows the average fee hovering around $20, so the average cost of a single transaction would be over $240. This might be a problem for Seibert's requirement that "BTCs are transacted with rather than held".
If blocksize stays the same, fees stay the same, security must decrease because the fees cannot cover the cost of enough hash power to deter a 51% attack. Similarly, in this case it would be 12 times cheaper to mount a 51% attack, which would greatly increase the risk of delivering anything in return for Bitcoin. It is already the case that users are advised to wait 6 blocks (about an hour) before treating a transaction as final. Waiting nearly half a day before finality would probably be a disincentive.
If fees stay the same, security stays the same, blocksize must increase to allow for enough transactions so that their fees cover the cost of enough hash power to deter a 51% attack. Since 2017 Bitcoin blocks have been effectively limited to around 2MB, and the blockchain is now over one-third of a Terabyte, growing at over 25%/yr. Increasing the size limit to say 22MB would solve the long-term problem of a fee-based system at the cost of reducing miners' income in the short term by reducing the scarcity value of a slot in a block. Doubling the effective size of the block caused a huge controversy in the Bitcoin community for precisely this short vs. long conflict, so a much larger increase would be even more controversial. Not to mention that the size of the blockchain a year from now would be 3 times bigger, imposing additional storage costs on miners. That is just the supply side. On the demand side it is an open question as to whether there would be 12 times the current demand for transactions costing $20 and taking an hour which, at least in the US, must each be reported to the tax authorities.
None of these alternatives look attractive. But there's also a second type of attack in Budish's analysis, which he calls "sabotage". He quotes Rosenfeld: In this section we will assume q < p [i.e., that the attacker does not have a majority]. Otherwise, all bets are off with the current Bitcoin protocol ... The honest miners, who no longer receive any rewards, would quit due to lack of incentive; this will make it even easier for the attacker to maintain his dominance. This will cause either the collapse of Bitcoin or a move to a modified protocol.
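The arithmetic behind those three branches is simple enough to check. Here is a small sketch using the post's round numbers (roughly $60M/day total miner revenue, $5M/day of it fees, a ~$20 average fee, ~2MB effective blocks); the exact figures move around daily and are illustrative only.

```python
# Back-of-the-envelope check of the three fee-only scenarios, using the
# post's approximate figures. All numbers are illustrative, not current.
total_revenue_per_day = 60e6   # ~$60M/day total miner revenue
fee_revenue_per_day = 5e6      # ~$5M/day of which is transaction fees
avg_fee = 20.0                 # ~$20 average fee per transaction
block_mb = 2.0                 # ~2MB effective block size

fee_share = fee_revenue_per_day / total_revenue_per_day
scale_up = total_revenue_per_day / fee_revenue_per_day   # ~12x

print(f"fee share of miner income:       {fee_share:.0%}")
print(f"fee needed, same security:       ${avg_fee * scale_up:.0f} per tx")
print(f"security if fees stay at $20:    1/{scale_up:.0f} of today's hash spend")
print(f"blocksize for same fee+security: ~{block_mb * scale_up:.0f}MB")
```

The ~24MB figure lands in the same ballpark as the post's 22MB; the point is the order of magnitude, a roughly twelve-fold change somewhere in the system.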
As such, this attack is best seen as an attempt to destroy Bitcoin, motivated not by the desire to obtain Bitcoin value, but rather wishing to maintain entrenched economical systems or obtain speculative profits from holding a short position. Short interest in Bitcoin is currently small relative to the total stock, but much larger relative to the circulating supply. Budish analyzes various sabotage attack cases, with a parameter ∆attack representing the proportion of the Bitcoin value destroyed by the attack: For example, if ∆attack = 1, i.e., if the attack causes a total collapse of the value of Bitcoin, the attacker loses exactly as much in Bitcoin value as he gains from double spending; in effect, there is no chance to “double” spend after all. ... However, ∆attack is something of a “pick your poison” parameter. If ∆attack is small, then the system is vulnerable to the double-spending attack ... and the implicit transactions tax on economic activity using the blockchain has to be high. If ∆attack is large, then a short time period of access to a large amount of computing power can sabotage the blockchain. The current cryptocurrency bubble ensures that everyone is making enough paper profits from the golden eggs to deter them from killing the goose that lays them. But it is easy to create scenarios in which a rush for the exits might make killing the goose seem like the best way out. Seibert's misunderstanding illustrates the fundamental problem with permissionless blockchains. As I wrote in A Note On Blockchains: If joining the replica set of a permissionless blockchain is free, it will be vulnerable to Sybil attacks, in which an attacker creates many apparently independent replicas which are actually under his sole control. If creating and maintaining a replica is free, anyone can authorize any change they choose simply by creating enough Sybil replicas. Defending against Sybil attacks requires that membership in a replica set be expensive. There are many attempts to provide less environmentally damaging ways to make adding a block to a blockchain expensive, but attempts to make adding a block cheaper are self-defeating because they make the blockchain less secure. There are two reasons why the primary use of a permissionless blockchain cannot be transactions as opposed to HODL-ing: The lack of synchronization between the peers means that transactions must necessarily be slow. The need to defend against Sybil attacks means either that transactions must necessarily be expensive, or that blocks must be impractically large. Posted by David. at 8:00 AM Labels: bitcoin, security 25 comments: David. said... Seibert apparently believes (a) that a fee-only Bitcoin network would be secure, used for large numbers of transactions, and have a low carbon footprint, and (b) that the network would have a low carbon footprint because most transactions would use the Lightning network. Ignoring the contradiction, anyone who believes that the Lightning network would do the bulk of the transactions needs to read the accounts of people actually trying to transact using it. David Gerard writes: "Crypto guy loses a bet, and tries to pay the bet using the Lightning Network. Hilarity ensues." Indeed, the archived Twitter thread from the loser is a laugh-a-minute read. April 20, 2021 at 7:16 PM David. said... 
Jaime Powell shreds another attempt at cryptocurrency carbon footprint gaslighting in The destructive green fantasy of the bitcoin fanatics: "It is in this context that we should consider the latest “research” from the good folks at ETF-house-come-fund manager ARK Invest and $113bn payment company Square. Titled “Bitcoin is Key to an Abundant, Clean Energy Future”, it does exactly what you’d expect it to. Which is to try justify, after the fact, bitcoin’s insane energy use. Why? Because both entities are deeply involved in this “space” and now need to a) feel better about themselves and b) guard against people going off crypto on the grounds that it is actually a Very Bad Thing. ... The white paper imagines bitcoin mining being a solution, alongside battery storage, for excess energy. It also imagines that if solar and wind prices continue to collapse, bitcoin could eventually transition to being completely renewable-powered in the future. “Imagines” is the key word here. Because in reality, bitcoin mining is quite the polluter. It’s estimated that 72 per cent of bitcoin mining is concentrated in China, where nearly two-thirds of all electricity is generated by coal power, according to a recent Bank of America report. In fact, mining uses coal power so aggressively that when one coal mine flooded and shut down in Xianjiang province over the weekend, one-third of all bitcoin’s computing power went offline." April 25, 2021 at 5:00 PM David. said... In Jack Dorsey and Elon Musk agree on bitcoin's green credentials the BBC reports on yet another of Elon Musk's irresponsible cryptocurrency tweets: "The tweet comes soon after the release of a White Paper from Mr Dorsey's digital payment services firm Square, and global asset management business ARK Invest. Entitled "Bitcoin as key to an abundant, clean energy future", the paper argues that "bitcoin miners are unique energy buyers", because they offer flexibility, pay in a cryptocurrency, and can be based anywhere with an internet connection." The BBC fails to point out that Musk and Dorsey are "talking their book"; Tesla invested $1.6B and Square $220M in Bitcoin. So they have over $1.8B reasons to worry about efforts to limit its carbon footprint. April 25, 2021 at 5:10 PM David. said... This comment has been removed by the author. April 25, 2021 at 5:45 PM David. said... Nathan J. Robinson's Why Cryptocurrency Is A Giant Fraud has an interesting footnote, discussing a "pseudoscholarly masterpiece" of Bitcoin puffery by Vijay Boyapati: "Interestingly, Boyapati cites Bitcoin’s high transaction fees as a feature rather than a bug: “A recent criticism of the Bitcoin network is that the increase in fees to transmit bitcoins makes it unsuitable as a payment system. However, the growth in fees is healthy and expected… A network with ‘low’ fees is a network with little security and prone to external censorship. Those touting the low fees of Bitcoin alternatives are unknowingly describing the weakness of these so-called ‘alt-coins.’” As you can see, this successfully makes the case that high fees are unavoidable, but it also undermines the reasons why any sane person would use this as currency rather than a speculative investment." Right! A permissionless blockchain has to be expensive to run if it is to be secure. Those costs have either to be borne, ultimately, by the blockchain's users, or dumped on the rest of us as externalities (e.g. the blockchain's carbon footprint, the shortage of GPUs, ...). April 25, 2021 at 5:55 PM David. said... 
Colin Chartier's Crypto miners are killing free CI points to yet another cryptocurrency externality: "CI providers like LayerCI, GitLab, TravisCI, and Shippable are all worsening or shutting down their free tiers due to cryptocurrency mining attacks." CI = "Continuous Integration" April 27, 2021 at 10:14 AM David. said... Drew DeVault's must-read Cryptocurrency is an abject disaster is an even more comprehensive denunciation of cryptocurrency externalities than Chartier's. Drew concludes: "When you’re the only honest person in the room, maybe you should be in a different room. It is impossible to trust you. Every comment online about cryptocurrency is tainted by the fact that the commenter has probably invested thousands of dollars into a Ponzi scheme and is depending on your agreement to make their money back. Not to mention that any attempts at reform, like proof-of-stake, are viciously blocked by those in power (i.e. those with the money) because of any risk it poses to reduce their bottom line. No, your blockchain is not different. Cryptocurrency is one of the worst inventions of the 21st century. I am ashamed to share an industry with this exploitative grift. It has failed to be a useful currency, invented a new class of internet abuse, further enriched the rich, wasted staggering amounts of electricity, hastened climate change, ruined hundreds of otherwise promising projects, provided a climate for hundreds of scams to flourish, created shortages and price hikes for consumer hardware, and injected perverse incentives into technology everywhere." Amen. April 30, 2021 at 12:18 PM David. said... Jason Herring reports on another externality of cryptocurrencies: "Around midnight on April 13, two men armed with handguns forced their way into an apartment in the 11600 block of Elbow Drive S.W., police said. The men tied up the apartment’s resident and stole computers, jewelry and bank cards from the suite. They also took cryptocurrency keys, which allow holders access to digital financial accounts. The men forced the victim to disclose his bank PINs, then put him in a storage room and fled." Hat tip to David Gerard. April 30, 2021 at 3:39 PM David. said... Bitcoin-Mining Power Plant Stirs Up Controversy by Nathaniel Mott reports: "The conflict revolves around a power plant on New York's Seneca Lake called Greenidge. The company’s website says the plant was opened in 1937, shuttered in 2009, and purchased by new owners in 2014. Those owners started mining Bitcoin in the facility in 2019. New York Focus reported that Greenidge plans “to quadruple the power used to process Bitcoin transactions by late next year” as the cryptocurrency’s value soars. Environmentalists fear those plans would lead to dangerously high CO2 emissions." May 10, 2021 at 10:05 AM David. said... After the pump, comes the dump. Reuters reports that Tesla suspends bitcoin purchases over fossil fuel concerns for mining the cryptocurrency, Elon Musk confirms. Tesla padded their quarterly results with $101M profit from the pump. May 12, 2021 at 5:38 PM David. said... In Musk: Bitcoin is bad for climate (and you can’t buy Teslas with it anymore), Tim De Chant writes: "When purchased using dollars, a new Tesla Model 3 made and operated in the US produces about 8.85 tonnes of carbon dioxide over its lifetime (assuming it's driven about 94,000 miles). 
The price of the same car on March 24, when Musk announced the payment option, would have been around one bitcoin, and at the time, one bitcoin had an estimated footprint of around 400 tonnes. Not only does one Tesla’s worth of bitcoin pollute significantly more than the car itself, including manufacturing, it also represents more than five times the carbon pollution of an average combustion-engined vehicle in the US. And that’s according to Tesla’s own estimate." May 13, 2021 at 11:42 AM David. said... CNBC reports that China bans financial, payment institutions from cryptocurrency business: "China has banned financial institutions and payment companies from providing services related to cryptocurrency transactions, and warned investors against speculative crypto trading. It was China's latest attempt to clamp down on what was a burgeoning digital trading market. Under the ban, such institutions, including banks and online payments channels, must not offer clients any service involving cryptocurrency, such as registration, trading, clearing and settlement, three industry bodies said in a joint statement on Tuesday." May 18, 2021 at 7:12 PM David. said... CNBC reports on yet another externality of cryptocurrencies in Hackers behind Colonial Pipeline attack reportedly received $90 million in bitcoin before shutting down: "Elliptic said that DarkSide's bitcoin wallet contained $5.3 million worth of the digital currency before its funds were drained last week. There was some speculation that this bitcoin had been seized by the U.S. government. Of the $90 million total haul, $15.5 million went to DarkSide's developer while $74.7 million went to its affiliates, according to Elliptic. The majority of the funds are being sent to crypto exchanges, where they can be converted into fiat money, Elliptic said." May 18, 2021 at 7:29 PM David. said... Based on their experimental Proof-of-Stake blockchain, Carl Beekhuizen claims that: "Ethereum will be completing the transition to Proof-of-Stake in the upcoming months, which brings a myriad of improvements that have been theorized for years. But now that the Beacon chain has been running for a few months, we can actually dig into the numbers. One area that we’re excited to explore involves new energy-use estimates, as we end the process of expending a country’s worth of energy on consensus. ... In total, a Proof-of-Stake Ethereum therefore consumes something on the order of 2.62 megawatt. This is not on the scale of countries, provinces, or even cities, but that of a small town (around 2100 American homes)." Of course, this assumes that there won't be a fork, with a Proof-of-Work Ethereum Traditional continuing. There is already an Ethereum Classic from a fork in 2016. May 18, 2021 at 7:38 PM David. said... Jeff Keeling reveals yet another cryptocurrency externality in Noisy ‘mining’ operation leaves Washington County community facing ‘bit’ of a conundrum: "Residents of the pastoral New Salem community say a Bitcoin mining center next to a Brightridge power substation has seriously impacted a prized element of their quality of life — peace and quiet. “When we lay down and all, the TV’s off and the kids are in bed, the noise is there,” Preston Holley, a school teacher whose home is just across Lola Humphreys Road from the site, said. “It’s as plain as day. When wake up to let the dog out it’s running full bore.” Cooling fans from the round-the-clock operation off Bailey Bridge Road are so loud they sometimes keep residents up at night. 
But the massive computing power in what Brightridge CEO Jeff Dykes said is about a $10 million operation has made Red Dog Technologies the power distributor’s biggest customer virtually overnight." May 19, 2021 at 9:37 AM David. said... Catalin Cimpanu reports that Crypto-mining gangs are running amok on free cloud computing platforms: "Gangs have been operating by registering accounts on selected platforms, signing up for a free tier, and running a cryptocurrency mining app on the provider’s free tier infrastructure. After trial periods or free credits reach their limits, the groups register a new account and start from the first step, keeping the provider’s servers at their upper usage limit and slowing down their normal operations." As David Gerard said: "Cryptocurrency decentralization is a performative waste of resources in order to avoid having to trust a government to issue currency. But since cryptocurrencies don’t actually function as currencies, it just generates new types of otherwise worthless magic beans to sell for real money. Your system will waste unlimited amounts of whatever resource you’re throwing away—and incentivize the theft of whatever resources other people can waste to turn into money." And the waste doesn't even get you decentralization. May 24, 2021 at 7:34 AM David. said... Jiang et al's Policy assessments for the carbon emission flows and sustainability of Bitcoin blockchain operation in China projects that, without policy action: "the annualized energy consumption of the Bitcoin industry in China will peak in 2024 at 296.59 Twh based on the Benchmark simulation of BBCE modeling. This exceeds the total energy consumption level of Italy and Saudi Arabia and ranks 12th among all countries in 2016. Correspondingly, the carbon emission flows of the Bitcoin operation would peak at 130.50 million metric tons per year in 2024." May 24, 2021 at 3:35 PM David. said... When writing this post I had forgotten that two years ago I wrote about the future of a fee-based Bitcoin in The Economics Of Bitcoin Transactions. Raphael Auer's Beyond the doomsday economics of “proof-of-work” in cryptocurrencies concludes: "The key takeaway of this paper concerns the interaction of these two limitations: proof-of-work can only achieve payment security if mining income is high, but the transaction market cannot generate an adequate level of income. ... the economic design of the transaction market fails to generate high enough fees. A simple model suggests that ultimately, it could take nearly a year, or 50,000 blocks, before a payment could be considered “final”." May 31, 2021 at 2:19 PM David. said... I also forgot that, 15 months ago, I wrote in Economic Limits Of Proof-of-Stake Blockchains: "Budish showed that Bitcoin was unsafe unless the value of transactions in a block was less than the sum of the mining reward and the fees for the transactions it contains. The mining reward is due to decrease to zero, at which point safety requires fees larger than the value of the transactions, not economically viable. In 2016 Arvind Narayanan's group at Princeton published a related instability in Carlsten et al's On the instability of bitcoin without the block reward. Narayanan summarized the paper in a blog post: 'Our key insight is that with only transaction fees, the variance of the miner reward is very high due to the randomness of the block arrival time, and it becomes attractive to fork a “wealthy” block to “steal” the rewards therein.' 
Note that: 'We model transaction fees as arriving at a uniform rate. The rate is non-uniform in practice, which is an additional complication.' The rate is necessarily non-uniform, because transactions are in a blind auction for inclusion in the next block, which leads to over-payment." May 31, 2021 at 4:29 PM David. said... Ethereum claims that if they succeed in transitioning to Proof-of-Stake their carbon emissions will be greatly reduced. Even if this happens, it does not mean that the total carbon emissions of the cryptocurrency world will be reduced. The mining resources used to mine ETH using Proof-of-Work will not be fed to the trash compactor, they will be re-purposed to mine other cryptocurrencies. June 7, 2021 at 6:59 PM David. said... Mike Melanson's This Week in Programming: Crypto Miners Overrun Docker Hub’s Autobuild reports on the latest free tier of Web services to be killed by the cryptocurrency mining gangs. June 12, 2021 at 2:55 PM David. said... David Gerard explains why Bitcoin's blocksize didn't increase: "In mid-2015, the Bitcoin network finally filled its tiny transaction capacity. Transactions became slow, expensive and clogged. By October 2016, Bitcoin regularly had around 40,000 unconfirmed transactions waiting, and in May 2017 it peaked at 200,000 stuck in the queue. [FT,2017, free with login] Nobody could agree how to fix this, and everyone involved despised each other. The possible solutions were: 1) Increase the block size. This would increase centralisation even further. (Though that ship really sailed in 2013.) 2) The Lightning Network: bolt on a completely different non-Bitcoin network, and do all the real transactions there. This only had the minor problem that the Lightning Network’s design couldn’t possibly fix the problem. 3) Do nothing. Leave the payment markets to use a different cryptocurrency that hasn’t clogged yet. (Payment markets, such as the darknets, ended up moving to other coins that worked better.) Bitcoin mostly chose option 3 — though 2 is talked up, just as if saying “But, the Lighting Network!” solves the transaction clog." As I write the average fee is $5.10 and there are 40,485 unconfirmed transactions in the mempool. The network is confirming 1.717 transactions/sec, so the mempool is the equivalent of 6hr 36min of transaction processing, or almost 40 blocks of backlog. June 27, 2021 at 4:45 PM David. said... Gretchen Morgenson's Some locals say a bitcoin mining operation is ruining one of the Finger Lakes. Here's how. reports on yet another externality of cryptocurrencies: "Water usage by Greenidge is another problem, residents said. The current permit allows Greenidge to take in 139 million gallons of water and discharge 135 million gallons daily, at temperatures as high as 108 degrees Fahrenheit in the summer and 86 degrees in winter, documents show. Rising water temperatures can stress fish and promote toxic algae blooms, the EPA says. A full thermal study hasn't been produced and won't be until 2023, but residents protesting the plant say the lake is warmer with Greenidge operating." July 7, 2021 at 6:36 AM David. said... The life of a HODL-er carries significant risks, as Olga Kharif reports in Ethereum Co-Founder Says Safety Concern Has Him Quitting Crypto: 'Anthony Di Iorio, a co-founder of the Ethereum network, says he’s done with the cryptocurrency world, partially because of personal safety concerns. Di Iorio, 48, has had a security team since 2017, with someone traveling with or meeting him wherever he goes. 
In coming weeks, he plans to sell Decentral Inc., and refocus on philanthropy and other ventures not related to crypto. The Canadian expects to sever ties in time with other startups he is involved with, and doesn’t plan on funding any more blockchain projects. “It’s got a risk profile that I am not too enthused about,” said Di Iorio, who declined to disclose his cryptocurrency holdings or net worth. “I don’t feel necessarily safe in this space. If I was focused on larger problems, I think I’d be safer.” ' He can go back to being inconspicuous, despite: "He made a splash in 2018 when buying the largest and one of the most expensive condos in Canada, paying for it partly with digital money. Di Iorio purchased the three-story penthouse for C$28 million ($22 million) at the St. Regis Residences Toronto, the former Trump International Hotel & Tower in the downtown business district." July 18, 2021 at 12:36 PM David. said... EditorDavid's Both Dogecoin Creators are Now Criticizing Cryptocurrencies leads to a wonderful Twitter thread by the second of them, Jackson Palmer: "After years of studying it, I believe that cryptocurrency is an inherently right-wing, hyper-capitalistic technology built primarily to amplify the wealth of its proponents through a combination of tax avoidance, diminished regulatory oversight and artificially enforced scarcity. Despite claims of “decentralization”, the cryptocurrency industry is controlled by a powerful cartel of wealthy figures who, with time, have evolved to incorporate many of the same institutions tied to the existing centralized financial system they supposedly set out to replace." July 19, 2021 at 3:49 PM Post a Comment Newer Post Older Post Home Subscribe to: Post Comments (Atom) Blog Rules Posts and comments are copyright of their respective authors who, by posting or commenting, license their work under a Creative Commons Attribution-Share Alike 3.0 United States License. Off-topic or unsuitable comments will be deleted. DSHR DSHR in ANWR Recent Comments Full comments Blog Archive ▼  2021 (39) ►  August (2) ►  July (6) ►  June (8) ►  May (4) ▼  April (6) Venture Capital Isn't Working Dogecoin Disrupts Bitcoin! What Is The Point? NFTs and Web Archiving Cryptocurrency's Carbon Footprint Elon Musk: Threat or Menace? 
►  March (3) ►  February (5) ►  January (5) ►  2020 (55) ►  December (4) ►  November (4) ►  October (3) ►  September (6) ►  August (5) ►  July (3) ►  June (6) ►  May (3) ►  April (5) ►  March (6) ►  February (5) ►  January (5) ►  2019 (66) ►  December (2) ►  November (4) ►  October (8) ►  September (5) ►  August (5) ►  July (7) ►  June (6) ►  May (7) ►  April (6) ►  March (7) ►  February (4) ►  January (5) ►  2018 (96) ►  December (7) ►  November (8) ►  October (10) ►  September (5) ►  August (8) ►  July (5) ►  June (7) ►  May (10) ►  April (8) ►  March (9) ►  February (9) ►  January (10) ►  2017 (82) ►  December (6) ►  November (6) ►  October (8) ►  September (6) ►  August (7) ►  July (5) ►  June (7) ►  May (6) ►  April (7) ►  March (11) ►  February (5) ►  January (8) ►  2016 (89) ►  December (4) ►  November (8) ►  October (10) ►  September (8) ►  August (8) ►  July (7) ►  June (8) ►  May (7) ►  April (5) ►  March (10) ►  February (7) ►  January (7) ►  2015 (75) ►  December (7) ►  November (5) ►  October (11) ►  September (5) ►  August (3) ►  July (3) ►  June (8) ►  May (10) ►  April (6) ►  March (6) ►  February (7) ►  January (4) ►  2014 (68) ►  December (7) ►  November (8) ►  October (6) ►  September (8) ►  August (7) ►  July (3) ►  June (5) ►  May (6) ►  April (5) ►  March (6) ►  February (2) ►  January (5) ►  2013 (67) ►  December (3) ►  November (6) ►  October (7) ►  September (6) ►  August (3) ►  July (5) ►  June (6) ►  May (5) ►  April (9) ►  March (5) ►  February (5) ►  January (7) ►  2012 (43) ►  December (4) ►  November (4) ►  October (6) ►  September (6) ►  August (2) ►  July (5) ►  June (2) ►  May (5) ►  March (1) ►  February (5) ►  January (3) ►  2011 (40) ►  December (2) ►  November (1) ►  October (7) ►  September (3) ►  August (5) ►  July (2) ►  June (2) ►  May (2) ►  April (4) ►  March (4) ►  February (4) ►  January (4) ►  2010 (17) ►  December (5) ►  November (3) ►  October (4) ►  September (2) ►  July (1) ►  June (1) ►  February (1) ►  2009 (8) ►  July (1) ►  June (1) ►  May (1) ►  April (1) ►  March (2) ►  January (2) ►  2008 (8) ►  December (2) ►  March (1) ►  January (5) ►  2007 (14) ►  December (1) ►  October (3) ►  September (1) ►  August (1) ►  July (2) ►  June (3) ►  May (1) ►  April (2) LOCKSS system has permission to collect, preserve, and serve this Archival Unit. Simple theme. Powered by Blogger. blog-dshr-org-2064 ---- DSHR's Blog: Why Decentralize? DSHR's Blog I'm David Rosenthal, and this is a place to discuss the work I'm doing in Digital Preservation. Thursday, December 28, 2017 Why Decentralize? In Blockchain: Hype or Hope? (paywalled until June '18) Radia Perlman asks what exactly you get in return for the decentralization provided by the enormous resource cost of blockchain technologies? Her answer is: a ledger agreed upon by consensus of thousands of anonymous entities, none of which can be held responsible or be shut down by some malevolent government ... [but] most applications would not require or even want this property. Two important essays published last February by pioneers in the field provide different answers to Perlman's question: Vitalik Buterin's answer in The Meaning of Decentralization is that what you get depends on what exactly you mean by "decentralization". Nick Szabo's answer in Money, blockchains, and social scalability is "social scalability" Below the fold I try to apply our experience with the decentralized LOCKSS technology to ask whether their arguments hold up. 
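Returning to the LOCKSS polling paragraph above: the intuition that random sampling plus a landslide test makes covert collusion detectable can be illustrated with a toy Monte Carlo. This is a sketch of the intuition only, not the real protocol (which is described in the SOSP paper); the peer count, sample size and landslide threshold below are invented for illustration.

```python
# Toy Monte Carlo of the sampled-poll idea: peers vote on a content hash,
# each poll samples a few peers at random, and any result between landslide
# agreement and landslide disagreement is treated as suspicious. All the
# parameters below are invented for illustration.
import random

N_PEERS = 1000
BAD_FRACTION = 0.3        # fraction of peers controlled by the attacker
SAMPLE = 20               # peers invited into each poll
LANDSLIDE = 0.8           # agreement fraction counted as a landslide

def poll() -> str:
    peers = [i < BAD_FRACTION * N_PEERS for i in range(N_PEERS)]  # True = corrupted
    sample = random.sample(peers, SAMPLE)
    agree = sample.count(False) / SAMPLE      # honest peers vote for the good copy
    if agree >= LANDSLIDE:
        return "landslide agreement"
    if agree <= 1 - LANDSLIDE:
        return "landslide disagreement"
    return "alarm"                            # mixed result: likely attack or damage

results = [poll() for _ in range(10_000)]
print({outcome: results.count(outcome) for outcome in set(results)})
```

With 30% of peers corrupted, roughly three-quarters of the polls fall between the two landslides, so an attack on that scale is far more likely to raise alarms than to quietly win polls, which is the property the post attributes to the surplus of replicas and random sampling.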
I'm working on a follow-up post based on Chelsea Barabas, Neha Narula and Ethan Zuckerman's Defending Internet Freedom through Decentralization from last August, which asks the question specifically about the decentralized Web and thus the idea of decentralized storage. Buterin The Meaning of Decentralization is the shorter and more accessible of the two essays. Vitalik Buterin is a co-founder of Ethereum, as one can tell from the links in his essay, which starts by discussing what decentralization means: When people talk about software decentralization, there are actually three separate axes of centralization/decentralization that they may be talking about. While in some cases it is difficult to see how you can have one without the other, in general they are quite independent of each other. The axes are as follows: Architectural (de)centralization — how many physical computers is a system made up of? How many of those computers can it tolerate breaking down at any single time? Political (de)centralization — how many individuals or organizations ultimately control the computers that the system is made up of? Logical (de)centralization— does the interface and data structures that the system presents and maintains look more like a single monolithic object, or an amorphous swarm? One simple heuristic is: if you cut the system in half, including both providers and users, will both halves continue to fully operate as independent units? He notes that: Blockchains are politically decentralized (no one controls them) and architecturally decentralized (no infrastructural central point of failure) but they are logically centralized (there is one commonly agreed state and the system behaves like a single computer) The Global LOCKSS network (GLN) is decentralized on all three axes. Individual libraries control their own network node; nodes cooperate but do not trust each other; no network operation involves more than a small proportion of the nodes. The CLOCKSS network, built from the same technology, is decentralized on the architectural and logical axes, but is centralized on the political axis since all the nodes are owned by the CLOCKSS archive. Buterin then asks: why is decentralization useful in the first place? There are generally several arguments raised: Fault tolerance— decentralized systems are less likely to fail accidentally because they rely on many separate components that are not likely. Attack resistance— decentralized systems are more expensive to attack and destroy or manipulate because they lack sensitive central points that can be attacked at much lower cost than the economic size of the surrounding system. Collusion resistance — it is much harder for participants in decentralized systems to collude to act in ways that benefit them at the expense of other participants, whereas the leaderships of corporations and governments collude in ways that benefit themselves but harm less well-coordinated citizens, customers, employees and the general public all the time. As regards fault tolerance, I think what Buterin meant by "that are not likely" is "that are not likely to suffer common-mode failures", because he goes on to ask: Do blockchains as they are today manage to protect against common mode failure? Not necessarily. Consider the following scenarios: All nodes in a blockchain run the same client software, and this client software turns out to have a bug. 
All nodes in a blockchain run the same client software, and the development team of this software turns out to be socially corrupted. The research team that is proposing protocol upgrades turns out to be socially corrupted. In a proof of work blockchain, 70% of miners are in the same country, and the government of this country decides to seize all mining farms for national security purposes. The majority of mining hardware is built by the same company, and this company gets bribed or coerced into implementing a backdoor that allows this hardware to be shut down at will. In a proof of stake blockchain, 70% of the coins at stake are held at one exchange. His recommendations for improving fault tolerance: are fairly obvious: It is crucially important to have multiple competing implementations. The knowledge of the technical considerations behind protocol upgrades must be democratized, so that more people can feel comfortable participating in research discussions and criticizing protocol changes that are clearly bad. Core developers and researchers should be employed by multiple companies or organizations (or, alternatively, many of them can be volunteers). Mining algorithms should be designed in a way that minimizes the risk of centralization Ideally we use proof of stake to move away from hardware centralization risk entirely (though we should also be cautious of new risks that pop up due to proof of stake). They may be "fairly obvious" but some of them are very hard to achieve in the real world. For example: What matters isn't that there are multiple competing implementations, but rather what fraction of the network's resources use the most common implementation. Pointing, as Buterin does, to a list of implementations of the Ethereum protocol (some of which are apparently abandoned) is interesting but if the majority of mining power runs one of them the network is vulnerable. Since one of them is likely to be more efficient than the others, a monoculture is likely to arise. Similarly, the employment or volunteer status of the "core developers and researchers" isn't very important if they are vulnerable to the kind of group-think that we see in the Bitcoin community. While it is true that the Ethereum mining algorithm is designed to enable smaller miners to function, it doesn't address the centralizing force I described in Economies of Scale in Peer-to-Peer Networks. If smaller mining nodes are more cost-effective, economies of scale will drive the network to be dominated by large collections of smaller mining nodes under unified control. If they aren't more cost-effective, the network will be dominated by collections of larger mining nodes under unified control. Either way, you get political centralization and lose attack resistance. The result is, as Buterin points out, that the vaunted attack resistance of proof-of-work blockchains like Bitcoin's is less than credible: In the case of blockchain protocols, the mathematical and economic reasoning behind the safety of the consensus often relies crucially on the uncoordinated choice model, or the assumption that the game consists of many small actors that make decisions independently. If any one actor gets more than 1/3 of the mining power in a proof of work system, they can gain outsized profits by selfish-mining. However, can we really say that the uncoordinated choice model is realistic when 90% of the Bitcoin network’s mining power is well-coordinated enough to show up together at the same conference? 
But it turns out that coordination is a double-edged sword: Many communities, including Ethereum's, are often praised for having a strong community spirit and being able to coordinate quickly on implementing, releasing and activating a hard fork to fix denial-of-service issues in the protocol within six days. But how can we foster and improve this good kind of coordination, but at the same time prevent "bad coordination" that consists of miners trying to screw everyone else over by repeatedly coordinating 51% attacks? And this makes resisting collusion hard: Collusion is difficult to define; perhaps the only truly valid way to put it is to simply say that collusion is "coordination that we don't like". There are many situations in real life where even though having perfect coordination between everyone would be ideal, one sub-group being able to coordinate while the others cannot is dangerous. As Tonto said to the Lone Ranger, "What do you mean we, white man?" Our SOSP paper showed how, given a large surplus of replicas, the LOCKSS polling protocol made it hard for even a very large collusion among the peers to modify the consensus of the non-colluding peers without detection. The large surplus of replicas allowed each peer to involve a random sample of other peers in each operation. Absent an attacker, the result of each operation would be landslide agreement or landslide disagreement. The random sample of peers made it hard for an attacker to ensure that all operations resulted in landslides (see the sketch below). Alas, this technique has proven difficult to apply in other contexts, which in any case (except for cryptocurrencies) find it difficult to provide a sufficient surplus of replicas. Szabo: Nick Szabo was a pioneer of digital currency, sometimes even suspected of being Satoshi Nakamoto. His essay Money, blockchains, and social scalability starts by agreeing with Perlman that blockchains are extremely wasteful of resources: Blockchains are all the rage. The oldest and biggest blockchain of them all is Bitcoin, ... Running non-stop for eight years, with almost no financial loss on the chain itself, it is now in important ways the most reliable and secure financial network in the world. The secret to Bitcoin's success is certainly not its computational efficiency or its scalability in the consumption of resources. ... Bitcoin's puzzle-solving hardware probably consumes in total over 500 megawatts of electricity. ... Rather than reduce its protocol messages to be as few as possible, each Bitcoin-running computer sprays the Internet with a redundantly large number of "inventory vector" packets to make very sure that all messages get accurately through to as many other Bitcoin computers as possible. As a result, the Bitcoin blockchain cannot process as many transactions per second as a traditional payment network such as PayPal or Visa. Szabo then provides a different answer than Perlman's to the question "what does Bitcoin get in return for this profligate expenditure of resources?" the secret to Bitcoin's success is that its prolific resource consumption and poor computational scalability is buying something even more valuable: social scalability. ... Social scalability is about the ways and extents to which participants can think about and respond to institutions and fellow participants as the variety and numbers of participants in those institutions or relationships grow. It's about human limitations, not about technological limitations or physical resource constraints.
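Here is the sketch promised above of the sampled-voting idea behind the LOCKSS polls, again in Python. It is only a toy illustration of the intuition, not the protocol from the SOSP paper: peers are modelled as byte-string copies of one document, a poll hashes a random sample of them, and any result that is neither landslide agreement nor landslide disagreement is treated as an alarm. The names, sample size and thresholds are all invented for the example.

```python
import hashlib
import random

def sampled_poll(peers, my_copy, sample_size=20, landslide=0.8):
    """One toy opinion poll: compare my copy's hash against a random sample of peers.

    Returns 'agree' (landslide agreement), 'repair' (landslide disagreement,
    so my copy is probably the damaged one), or 'alarm' (an inconclusive
    result, which the real protocol treats as a sign of attack)."""
    my_hash = hashlib.sha256(my_copy).hexdigest()
    sample = random.sample(peers, sample_size)
    votes = sum(1 for peer in sample if hashlib.sha256(peer).hexdigest() == my_hash)
    if votes >= landslide * sample_size:
        return "agree"
    if votes <= (1 - landslide) * sample_size:
        return "repair"
    return "alarm"

if __name__ == "__main__":
    good, bad = b"the preserved article", b"a corrupted copy"
    honest = [good] * 1000                    # a large surplus of replicas, no attacker
    attacked = [good] * 700 + [bad] * 300     # a large collusion controls 30% of the peers
    for label, peers in (("no attacker", honest), ("30% collusion", attacked)):
        results = [sampled_poll(peers, good) for _ in range(100)]
        print(label, {r: results.count(r) for r in sorted(set(results))})
```

With no attacker every poll is a landslide agreement; with a 30% collusion the attacker almost never produces a false landslide against the good copy, and most polls end as inconclusive alarms, which is the detection property the paper relied on. Returning to Szabo's essay: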
He measures social scalability thus: One way to estimate the social scalability of an institutional technology is by the number of people who can beneficially participate in the institution. ... blockchains, and in particular public blockchains that implement cryptocurrencies, increase social scalability, even at a dreadful reduction in computational efficiency and scalability. People participate in cryptocurrencies in three ways, by mining, transacting, and HODL-ing. In practice most miners simply passively contribute resources to a few large mining pools in return for a consistent cash flow. Chinese day-traders generate the vast majority of Bitcoin transactions. 40% of Bitcoin are HODL-ed by a small number of early adopters. None of these are really great social scalability. Bitcoin is a scheme to transfer money from many later to a few earlier adopters: Bitcoin was substantially mined early on - early adopters have most of the coins. The design was such that early users would get vastly better rewards than later users for the same effort. Cashing in these early coins involves pumping up the price, then selling to later adopters, particularly in the bubbles. Thus Bitcoin was not a Ponzi or pyramid scheme, but a pump-and-dump. Anyone who bought in after the earliest days is functionally the sucker in the relationship. Szabo goes on to discuss the desirability of trust minimization and the impossibility of eliminating the need for trust: Trust minimization is reducing the vulnerability of participants to each other’s and to outsiders’ and intermediaries’ potential for harmful behavior. ... In most cases an often trusted and sufficiently trustworthy institution (such as a market) depends on its participants trusting, usually implicitly, another sufficiently trustworthy institution (such as contract law). ... An innovation can only partially take away some kinds of vulnerability, i.e. reduce the need for or risk of trust in other people. There is no such thing as a fully trustless institution or technology. ... The historically recent breakthroughs of computer science can reduce vulnerabilities, often dramatically so, but they are far from eliminating all kinds of vulnerabilities to the harmful behavior of any potential attacker. Szabo plausibly argues that the difference between conventional Internet services and blockchains is that between matchmaking and trust-minimization: Matchmaking is facilitating the mutual discovery of mutually beneficial participants. Matchmaking is probably the kind of social scalability at which the Internet has most excelled. ... Whereas the main social scalability benefit of the Internet has been matchmaking, the predominant direct social scalability benefit of blockchains is trust minimization. ... Trust in the secret and arbitrarily mutable activities of a private computation can be replaced by verifiable confidence in the behavior of a generally immutable public computation. This essay will focus on such vulnerability reduction and its benefit in facilitating a standard performance beneficial to a wide variety of potential counterparties, namely trust-minimized money. Szabo then describes his vision of "trust-minimized money" and its advantages thus: A new centralized financial entity, a trusted third party without a “human blockchain” of the kind employed by traditional finance, is at high risk of becoming the next Mt. Gox; it is not going to become a trustworthy financial intermediary without that bureaucracy. Computers and networks are cheap. 
Scaling computational resources requires cheap additional resources. Scaling human traditional institutions in a reliable and secure manner requires increasing amounts accountants, lawyers, regulators, and police, along with the increase in bureaucracy, risk, and stress that such institutions entail. Lawyers are costly. Regulation is to the moon. Computer science secures money far better than accountants, police, and lawyers. Given the routine heists from exchanges it is clear that the current Bitcoin ecosystem is much less secure than traditional financial institutions. And imagine if the huge resources devoted to running the Bitcoin blockchain were instead devoted to additional security for the financial institutions! Szabo is correct that: In computer science there are fundamental security versus performance tradeoffs. Bitcoin's automated integrity comes at high costs in its performance and resource usage. Nobody has discovered any way to greatly increase the computational scalability of the Bitcoin blockchain, for example its transaction throughput, and demonstrated that this improvement does not compromise Bitcoin’s security. The LOCKSS technology's automated security also comes from using a lot of computational resources, although by doing so it avoids expensive and time-consuming copyright negotiations. But then Szabo argues that because of the resource cost and the limited transaction throughput, the best that can be delivered is a reduced level of security for most transactions: Instead, a less trust-minimized peripheral payment network (possibly Lightning ) will be needed to bear a larger number of lower-value bitcoin-denominated transactions than Bitcoin blockchain is capable of, using the Bitcoin blockchain to periodically settle with one high-value transaction batches of peripheral network transactions. Despite the need for peripheral payment networks, Szabo argues: Anybody with a decent Internet connection and a smart phone who can pay $0.20-$2 transaction fees – substantially lower than current remitance fees -- can access Bitcoin any where on the globe. Transaction Fees That was then. Current transaction fees are in the region of $50, with a median transaction size of about $4K, so the social scalability of Bitcoin transactions no longer extends to "Anybody with a decent Internet connection and a smart phone". As I wrote: To oversimplify, the argument for Bitcoin and its analogs is the argument for gold, that because the supply is limited the price will go up. The history of the block size increase shows that the supply of Bitcoin transactions is limited to something around 4 per second. So by the same argument that leads to HODL-ing, the cost of getting out when you decide you can't HODL any more will always go up. And, in particular, it will go up the most when you need it the most, when the bubble bursts. Szabo's outdated optimism continues: When it comes to small-b bitcoin, the currency, there is nothing impossible about paying retail with bitcoin the way you’d pay with a fiat currency. ... Gold can have value anywhere in the world and is immune from hyperinflation because its value doesn’t depend on a central authority. Bitcoin excels at both these factors and runs online, enabling somebody in Albania to use Bitcoin to pay somebody in Zimbabwe with minimal trust in or and no payment of quasi-monopoly profits to intermediaries, and with minimum vulnerability to third parties. 
Mining Pools 12/25/17 They'd better be paying many tens of thousands of dollars to make the transaction fees to the quasi-monopoly mining pools (top 6 pools = 79.8% of the mining power) worth the candle. Bitcoin just lost 25% of its "value" in a day, which would count as hyperinflation if it hadn't recently gained 25% in a day. In practice, they need to trust exchanges. And, as David Gerard recounts in Chapter 7, retailers who have tried accepting Bitcoin have found the volatility, the uncertainty of transactions succeeding and the delays impossible to live with. Szabo's discussion of blockchains has worn better than his discussion of cryptocurrencies. It starts with a useful definition: It is a blockchain if it has blocks and it has chains. The “chains” should be Merkle trees or other cryptographic structures with ... post-unforgeable integrity. Also the transactions and any other data whose integrity is protected by a blockchain should be replicated in a way objectively tolerant to worst-case malicious problems and actors to as high a degree as possible (typically the system can behave as previously specified up to a fraction of 1/3 to 1/2 of the servers maliciously trying to subvert it to behave differently). and defines the benefit blockchains provide thus: To say that data is post-unforgeable or immutable means that it can’t be undetectably altered after being committed to the blockchain. Contrary to some hype this doesn’t guarantee anything about a datum’s provenance, or its truth or falsity, before it was committed to the blockchain. but this doesn't eliminate the need for (trusted) governance because 51% or less attacks are possible: and because of the (hopefully very rare) need to update software in a manner that renders prior blocks invalid – an even riskier situation called a hard fork -- blockchains also need a human governance layer that is vulnerable to fork politics. The possibility of 51% attacks means that it is important to identify who is behind the powerful miners. Szabo's earlier "bit gold" was based on his "secure property titles": Also like today’s private blockchains, secure property titles assumed and required securely distinguishable and countable nodes. Given the objective 51% hashrate attack limit to some important security goals of public blockchains like Bitcoin and Ethereum, we actually do care about the distinguishable identity of the most powerful miners to answer the question “can somebody convince and coordinate the 51%? Or the 49% of the top three pools. Identification of the nodes is the basic difference between public and private blockchains: So I think some of the “private blockchains” qualify as bona fide blockchains; others should go under the broader rubric of “distributed ledger” or “shared database” or similar. They are all very different from and not nearly as socially scalable as public and permissionless blockchains like Bitcoin and Ethereum. All of the following are very similar in requiring an securely identified (distinguishable and countable) group of servers rather than the arbitrary anonymous membership of miners in public blockchains. In other words, they require some other, usually far less socially scalable, solution to the Sybil (sockpuppet) attack problem: Private blockchains The “federated” model of sidechains (Alas, nobody has figured out how to do sidechains with any lesser degree of required trust, despite previous hopes or claims). 
Sidechains can also be private chains, and it's a nice fit because their architectures and external dependencies (e.g. on a PKI) are similar. Multisig-based schemes, even when done with blockchain-based smart contracts. Threshold-based "oracle" architectures for moving off-blockchain data onto blockchains. Like blockchains, the LOCKSS technology can be used in public (as in the Global LOCKSS Network) or private (as in the CLOCKSS Archive) networks. The CLOCKSS network identifies its nodes using a Public Key Infrastructure (PKI): The dominant, but usually not very socially scalable, way to identify a group of servers is with a PKI based on trusted certificate authorities (CAs). To avoid the problem that merely trusted third parties are security holes, reliable CAs themselves must be expensive, labor-intensive bureaucracies that often do extensive background checks themselves or rely on others (e.g. Dun and Bradstreet for businesses) to do so. Public certificate authorities have proven not to be trustworthy, but private CAs are within the trust border of the sponsoring organization. Szabo is right that: We need more socially scalable ways to securely count nodes, or to put it another way to with as much robustness against corruption as possible, assess contributions to securing the integrity of a blockchain. But in practice, the ideal picture of blockchains hasn't worked out for Bitcoin: That is what proof-of-work and broadcast-replication are about: greatly sacrificing computational scalability in order to improve social scalability. That is Satoshi's brilliant tradeoff. It is brilliant because humans are far more expensive than computers and that gap widens further each year. And it is brilliant because it allows one to seamlessly and securely work across human trust boundaries (e.g. national borders), in contrast to "call-the-cop" architectures like PayPal and Visa that continually depend on expensive, error-prone, and sometimes corruptible bureaucracies to function with a reasonable amount of integrity. Total Daily Transaction Fees With the overhead cost of transactions currently running at well over $10M/day it's not clear that "humans are far more expensive than computers". With almost daily reports of thefts over $10M Bitcoin lacks "a reasonable amount of integrity" at the level most people interact with it. It is possible that other public blockchain applications might not suffer these problems. But mining blocks needs to be costly for the chain to deter Sybil attacks, and these costs need to be covered. So, as I argued in Economies of Scale in Peer-to-Peer Networks, there has to be an exchange rate between the chain's "coins" and the fiat currencies that equipment and power vendors accept. Economies of scale will apply, and drive centralization of the network. If the "coins" become, as Bitcoins did, channels for flight capital and speculation the network will also become a target for crime. Private blockchains escape these problems, but they lack social scalability and have single points of failure; their advantages over more conventional and efficient systems are unclear. Posted by David. at 9:00 AM Labels: bitcoin, distributed web, security, techno-hype 3 comments: Anonymous said... «Szabo's discussion of blockchains has worn better than his discussion of cryptocurrencies. It starts with a useful definition:» I dunno why techies pay so much attention to "blockchain" coins, the issues within etc. have been thoroughly discussed for decades.
The only big deal is that a lot of "greater fools" have rushed into pump-and-dump schemes. As to "blockchains" techies are routinely familiar with the Linux kernel 'git' crypto blockchain ledger, which was designed precisely to ensure that source code deposits and withdrawals into contributors' accounts were cryptographically secured in a peer-to-peer way to ensure malicious servers could not subvert the kernel source. January 11, 2018 at 10:11 AM David. said... One-stop counterfeit certificate shops for all your malware-signing needs by Dan Goodin is an example of why treating Certificate Authorities as "Trusted Third Parties" is problematic: "A report published by threat intelligence provider Recorded Future ... identified four such sellers of counterfeit certificates since 2011. Two of them remain in business today. The sellers offered a variety of options. In 2014, one provider calling himself C@T advertised certificates that used a Microsoft technology known as Authenticode for signing executable files and programming scripts that can install software. C@T offered code-signing certificates for macOS apps as well. His fee: upwards of $1,000 per certificate." Note that these certificates are not counterfeit, they are real certificates "registered under legitimate corporations and issued by Comodo, Thawte, and Symantec—the largest and most respected issuers". They are the result of corporate identity theft and failures of the verification processes of the issuers. February 22, 2018 at 9:47 AM David. said... "Over 23,000 users will have their SSL certificates revoked by tomorrow morning, March 1, in an incident between two companies —Trustico and DigiCert— that is likely to have a huge impact on the CA (Certificate Authority) industry as a whole in the coming months." is the start of a Catalin Cimpanu post. It is a complicated story of Certificate Authorities behaving badly (who could have imagined?). Cimpanu has a useful timeline. The gist is that Trustico used to resell certificates from DigiCert but was switching to resell certificates from Comodo. During this spat with DigiCert, it became obvious that: A) Trustico's on-line certificate generation process captured and stored the user's private keys, which is a complete no-no. Dan Goodin writes: "private keys for TLS certificates should never be archived by resellers, and, even in the rare cases where such storage is permissible, they should be tightly safeguarded. A CEO being able to attach the keys for 23,000 certificates to an email raises troubling concerns that those types of best practices weren't followed. (There's no indication the email was encrypted, either, although neither Trustico nor DigiCert provided that detail when responding to questions.)" B) Trustico's approach to website security was inadequate. They had to take their website down: "shortly after a website security expert disclosed a critical vulnerability on Twitter that appeared to make it possible for outsiders to run malicious code on Trustico servers. The vulnerability, in a trustico.com website feature that allowed customers to confirm certificates were properly installed on their sites, appeared to run as root. By inserting commands into the validation form, attackers could call code of their choice and get it to run on Trustico servers with unfettered "root" privileges, the tweet indicated." 
March 4, 2018 at 10:05 AM
blog-dshr-org-2216 ---- DSHR's Blog: The Death Of Corporate Research Labs
Tuesday, May 19, 2020 The Death Of Corporate Research Labs
In American innovation through the ages, Jamie Powell wrote: who hasn't finished a non-fiction book and thought "Gee, that could have been half the length and just as informative.
If that.” Yet every now and then you read something that provokes the exact opposite feeling. Where all you can do after reading a tweet, or an article, is type the subject into Google and hope there’s more material out there waiting to be read. So it was with Alphaville this Tuesday afternoon reading a research paper from last year entitled The changing structure of American innovation: Some cautionary remarks for economic growth by Arora, Belenzon, Patacconi and Suh (h/t to KPMG’s Ben Southwood, who highlighted it on Twitter). The exhaustive work of the Duke University and UEA academics traces the roots of American academia through the golden age of corporate-driven research, which roughly encompasses the postwar period up to Ronald Reagan’s presidency, before its steady decline up to the present day. Arora et al argue that a cause of the decline in productivity is that: The past three decades have been marked by a growing division of labor between universities focusing on research and large corporations focusing on development. Knowledge produced by universities is not often in a form that can be readily digested and turned into new goods and services. Small firms and university technology transfer offices cannot fully substitute for corporate research, which had integrated multiple disciplines at the scale required to solve significant technical problems. As someone with many friends who worked at the legendary corporate research labs of the past, including Bell Labs and Xerox PARC, and who myself worked at Sun Microsystems' research lab, this is personal. Below the fold I add my 2c-worth to Arora et al's extraordinarily interesting article. The authors provide a must-read, detailed history of the rise and fall of corporate research labs. I lived through their golden age; a year before I was born the transistor was invented at Bell Labs: The first working device to be built was a point-contact transistor invented in 1947 by American physicists John Bardeen and Walter Brattain while working under William Shockley at Bell Labs. They shared the 1956 Nobel Prize in Physics for their achievement.[2] The most widely used transistor is the MOSFET (metal–oxide–semiconductor field-effect transistor), also known as the MOS transistor, which was invented by Egyptian engineer Mohamed Atalla with Korean engineer Dawon Kahng at Bell Labs in 1959.[3][4][5] The MOSFET was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses.[6] Arora et al Fig 2. Before I was 50 Bell Labs had been euthanized as part of the general massacre of labs: Bell Labs had been separated from its parent company AT&T and placed under Lucent in 1996; Xerox PARC had also been spun off into a separate company in 2002. Others had been downsized: IBM under Louis Gerstner re-directed research toward more commercial applications in the mid-90s ... A more recent example is DuPont’s closing of its Central Research & Development Lab in 2016. Established in 1903, DuPont research rivaled that of top academic chemistry departments. In the 1960s, DuPont’s central R&D unit published more articles in the Journal of the American Chemical Society than MIT and Caltech combined. However, in the 1990s, DuPont’s attitude toward research changed and after a gradual decline in scientific publications, the company’s management closed its Central Research and Development Lab in 2016. 
Arora et al point out that the rise and fall of the labs coincided with the rise and fall of anti-trust enforcement: Historically, many large labs were set up partly because antitrust pressures constrained large firms' ability to grow through mergers and acquisitions. In the 1930s, if a leading firm wanted to grow, it needed to develop new markets. With growth through mergers and acquisitions constrained by anti-trust pressures, and with little on offer from universities and independent inventors, it often had no choice but to invest in internal R&D. The more relaxed antitrust environment in the 1980s, however, changed this status quo. Growth through acquisitions became a more viable alternative to internal research, and hence the need to invest in internal research was reduced. Lack of anti-trust enforcement, pervasive short-termism, driven by Wall Street's focus on quarterly results, and management's focus on manipulating the stock price to maximize the value of their options killed the labs: Large corporate labs, however, are unlikely to regain the importance they once enjoyed. Research in corporations is difficult to manage profitably. Research projects have long horizons and few intermediate milestones that are meaningful to non-experts. As a result, research inside companies can only survive if insulated from the short-term performance requirements of business divisions. However, insulating research from business also has perils. Managers, haunted by the spectre of Xerox PARC and DuPont's "Purity Hall", fear creating research organizations disconnected from the main business of the company. Walking this tightrope has been extremely difficult. Greater product market competition, shorter technology life cycles, and more demanding investors have added to this challenge. Companies have increasingly concluded that they can do better by sourcing knowledge from outside, rather than betting on making game-changing discoveries in-house. They describe the successor to the labs as: a new division of innovative labor, with universities focusing on research, large firms focusing on development and commercialization, and spinoffs, startups, and university technology licensing offices responsible for connecting the two. An unintended consequence of abandoning anti-trust enforcement was thus a slowing of productivity growth, because this new division of labor wasn't as effective as the labs: The translation of scientific knowledge generated in universities to productivity enhancing technical progress has proved to be more difficult to accomplish in practice than expected. Spinoffs, startups, and university licensing offices have not fully filled the gap left by the decline of the corporate lab. Corporate research has a number of characteristics that make it very valuable for science-based innovation and growth. Large corporations have access to significant resources, can more easily integrate multiple knowledge streams, and direct their research toward solving specific practical problems, which makes it more likely for them to produce commercial applications. University research has tended to be curiosity-driven rather than mission-focused. It has favored insight rather than solutions to specific problems, and partly as a consequence, university research has required additional integration and transformation to become economically useful.
In Sections 5.1.1 through 5.1.4 Arora et al discuss in detail four reasons why the corporate labs drove faster productivity growth: Corporate labs work on general purpose technologies. Because the labs were hosted by the leading companies in their market, they believed that technologies that benefited their product space would benefit them the most: Claude Shannon's work on information theory, for instance, was supported by Bell Labs because AT&T stood to benefit the most from a more efficient communication network ... IBM supported milestones in nanoscience by developing the scanning electron microscope, and furthering investigations into electron localization, non-equilibrium superconductivity, and ballistic electron motions because it saw an opportunity to pre-empt the next revolutionary chip design in its industry ... Finally, a recent surge in corporate publications in Machine Learning suggests that larger firms such as Google and Facebook that possess complementary assets (user data) for commercialization publish more of their research and software packages to the academic community, as they stand to benefit most from advances in the sector in general. My experience of Open Source supports this. Sun was the leading player in the workstation market and was happy to publish and open source infrastructure technologies such as NFS that would buttress that position. On the desktop it was not a dominant player, which (sadly) led to NeWS being closed-source. Corporate labs solve practical problems. They quote Andrew Odlyzko: "It was very important that Bell Labs had a connection to the market, and thereby to real problems. The fact that it wasn't a tight coupling is what enabled people to work on many long-term problems. But the coupling was there, and so the wild goose chases that are at the heart of really innovative research tended to be less wild, more carefully targeted and less subject to the inertia that is characteristic of university research." Again, my experience supports this contention. My work at Sun Labs was on fault-tolerance. Others worked on, for example, ultra high-bandwidth backplane bus technology, innovative cooling materials, optical interconnect, and asynchronous chip architectures, all of which are obviously "practical problems" with importance for Sun's products, but none of which could be applied to the products under development at the time. Corporate labs are multi-disciplinary and have more resources. As regards the first of these, the authors use Google as an example: Researching neural networks requires an interdisciplinary team. Domain specialists (e.g. linguists in the case of machine translation) define the problem to be solved and assess performance; statisticians design the algorithms, theorize on their error bounds and optimization routines; computer scientists search for efficiency gains in implementing the algorithms. Not surprisingly, the "Google translate" paper has 31 coauthors, many of them leading researchers in their respective fields. Again, I would agree. A breadth of disciplines was definitely a major contributor to PARC's successes. As regards extra resources, I think this is a bigger factor than Arora et al do. As I wrote in Falling Research Productivity Revisited: the problem of falling research productivity is like the "high energy physics" problem - after a while all the experiments at a given energy level have been done, and getting to the next energy level is bound to be a lot more expensive and difficult each time.
Information Technology at all levels is suffering from this problem. For example, Nvidia got to its first working silicon of a state-of-the-art GPU on $2.5M from the VCs, which today wouldn't even buy you a mask set. Even six years ago system architecture research, such as Berkeley's ASPIRE project, needed to build (or at least simulate) things like this: Firebox is a 50kW WSC building block containing a thousand compute sockets and 100 Petabytes (2^57B) of non-volatile memory connected via a low-latency, high-bandwidth optical switch. ... Each compute socket contains a System-on-a-Chip (SoC) with around 100 cores connected to high-bandwidth on-package DRAM. Clearly, AI research needs a scale of data and computation that only a very large company can afford. For example, Waymo's lead in autonomous vehicles is based to a large extent on the enormous amount of data that has taken years of a fleet of vehicles driving all day, every day to accumulate. Large corporate labs may generate significant external benefits. By "external benefits", Arora et al mean benefits to society and the broader economy, but not to the lab's host company: One well-known example is provided by Xerox PARC. Xerox PARC developed many fundamental inventions in PC hardware and software design, such as the modern personal computer with graphical user interface. However, it did not significantly benefit from these inventions, which were instead largely commercialized by other firms, most notably Apple and Microsoft. While Xerox clearly failed to internalize fully the benefits from its immensely creative lab ... it can hardly be questioned that the social benefits were large, with the combined market capitalization of Apple and Microsoft now exceeding 1.6 trillion dollars. Two kinds of company form these external benefits. PARC had both spin-offs, in which Xerox had equity, and startups that built on their ideas and hired their alumni but in which they did not. Xerox didn't do spin-offs well: As documented by Chesbrough (2002, 2003), the key problem there was not Xerox’s initial equity position in the spin-offs, but Xerox’s practices in managing the spin-offs, which discouraged experimentation by forcing Xerox researchers to look for applications close to Xerox’s existing businesses. But Cisco is among the examples of how spin-offs can be done well, acting as an internal VC to incentivize a team by giving them equity in a startup. If it was successful, Cisco would later acquire it. Sun Microsystems is an example of exceptional fertility in external startups. Nvidia was started by a group of frustrated Sun engineers. It is currently worth almost 30 times what Oracle paid to acquire Sun. It is but one entry in a long list of such startups whose aggregate value dwarfs that of Sun at its peak. As Arora et al write: A surprising implication of this analysis is that the mismanagement of leading firms and their labs can sometimes be a blessing in disguise. The comparison between Fairchild and Texas Instruments is instructive. Texas Instruments was much better managed than Fairchild but also spawned far fewer spin-offs. Silicon Valley prospered as a technology hub, while the cluster of Dallas-Fort Worth semiconductor companies near Texas Instruments, albeit important, is much less economically significant. An important additional external benefit that Arora et al ignore is the Open Source movement, which was spawned by Bell Labs and the AT&T consent decree. AT&T was forced to license the Unix source code. 
Staff at institutions, primarily Universities, which had signed the Unix license could freely share code enhancements. This sharing culture grew and led to the BSD and Gnu licenses that underlie the bulk of today's computational ecosystem. Jamie Powell was right that Arora et al have produced an extremely valuable contribution to the study of the decay of the vital link between R&D and productivity of the overall economy. Posted by David. at 8:00 AM Labels: anti-trust, big data, intellectual property, unix, venture capital 15 comments: miguel said... a Lab is a cost, so it's externalized and sold, until someone buys it, and on again waiting for the next buyer. Probably the way to account it should be different, taking into account other non-appearing benefits for the mother company (brand/know-how/reputation/etc...) rgds!M May 21, 2020 at 3:58 AM Unknown said... A thought ... the business model changing to "providing services" instead of "selling products" will, perhaps, again shift the research back to corporations. R&D in that case makes a more visible contribution to the bottom line profits within the company. I also would like to add my opinion that the academia cannot fully be of service to the large economy when their funding is so tightly controlled and directed to whatever political idea is flying around for the moment. However,,, this was a great read! May 22, 2020 at 12:36 AM AlanG01 said... I worked in corporate R&D labs for 9 years in the 1980's and early 1990's at GTE Labs and Digital Equipment Corporation. A large issue we constantly dealt with was technology transfer to other more business-oriented parts of the company. The technology we were trying to transfer was ground breaking and state-of-the-art, but was also often not at a production usage level. And the staff in the receiving organizations often did not have Masters or PhD level Computer Science training, though they were quite proficient oat MIS. As a result, they were not well equipped to receive 50K of Lisp code written within an object-oriented framework that ran on this "weird (to them) Lisp machine. So there was always this technology transfer disconnect. As researchers, we published extensively, so we contributed to the greater good of computer science advancement. And the level of publication was a large part of how we ere measured, as in academia. But it would have also been gratifying to see more use made of the cool technology we were producing. Usage was not zero percent, but I don't think it exceeded 10-15% either. May 22, 2020 at 1:29 PM David. said... Matthew Hutson reports more evidence for falling research productivity, this time in AI, in Eye-catching advances in some AI fields are not real: "Researchers are waking up to the signs of shaky progress across many subfields of AI. A 2019 meta-analysis of information retrieval algorithms used in search engines concluded the “high-water mark … was actually set in 2009.” Another study in 2019 reproduced seven neural network recommendation systems, of the kind used by media streaming services. It found that six failed to outperform much simpler, nonneural algorithms developed years before, when the earlier techniques were fine-tuned, revealing “phantom progress” in the field. In another paper posted on arXiv in March, Kevin Musgrave, a computer scientist at Cornell University, took a look at loss functions, the part of an algorithm that mathematically specifies its objective. 
Musgrave compared a dozen of them on equal footing, in a task involving image retrieval, and found that, contrary to their developers’ claims, accuracy had not improved since 2006. May 30, 2020 at 7:47 AM Cem Kaner said... I worked in Silicon Valley (software industry) for 17 years, then as a full Professor of Software Engineering for 17 years. I retired at the end of 2016. I ran a research lab at school and spent a lot of time arranging funding. My lab, and several others that I knew, were perfectly willing to create a research stream that applied / extended our work in a direction needed by a corporation. The government grants kept us at a pretty theoretical level. Mixing corporate and government work let us explore some ideas much more thoroughly. The problem with corporate funding was that almost every corporate representative who wanted to contract with us wanted to pay minimum wage to my students and nothing to me--and they wanted all rights (with the scope "all" very broadly defined). They seemed to assume that I was so desperate for funding that I would agree to anything. Negotiating with these folks was unpleasant and unproductive. Several opportunities for introducing significant improvements in the efficiency and effectiveness of software engineering efforts were squandered because the large corporations who contacted me wanted the equivalent of donations, not research contracts. I heard similar stories from faculty members at my school and other schools. It is possible for university researchers to steer some of their work into directions that are immediately commercially useful. But we can't do it with government research money because that's not what they agree to pay for. And we can't do it for free. And we can't agree to terms that completely block graduate students from continuing to work on the research that they focused their graduate research on because that would destroy their careers. Companies that won't make a serious effort to adddress those issues won't get much from university researchers. But don't pin that failure on the university. June 6, 2020 at 9:34 AM Godfree Roberts said... Meanwhile, no-one cares that China outspends us 4:1 on research. We have a real missile gap opportunity to get our R&D back on track. June 16, 2020 at 8:07 PM David. said... Why Corporate America Gave Up on R&D by Kaushik Viswanath is an interview with Ashish Arora and Sharon Belenzon, the authors of the paper behind this post. July 8, 2020 at 10:03 AM jrminter said... I spent 37 years in an Analytical Sciences Division that, depending upon the whim of manamagement, was either loosely or tightly integrated with the Research Labs. Our mission was to help researchers and technologists understand the chemistry and performance of the materials that they were integrating into products. It was facinating and rewarding work. The instruments and the skills needed to properly interpret the results made it a frequent target for budget cuts when money was tight. Our clients valued the data because it helped understand the chemistry and materials science that determined how well the final product would perform. August 18, 2020 at 1:59 PM Dwarkesh Patel said... Excellent post! I'm still not sure how antitrust plays into this. Doesn't the value of research increase to a firm if they have many acquired companies which could make use of that research? August 18, 2020 at 2:58 PM David. said... 
In the long-gone days when anti-trust was enforced, firms could not buy competitors to acquire their newly-developed products. So they had to invest in their own internal research and development, or they wouldn't have any new products. Now that there is no anti-trust enforcement, there's no point in spending dollars that could go to stock buybacks on internal research labs or developing new products. Instead, you let the VCs fund R&D, and if their bets pay off, you buy the company. See for example the pharma industry. In software it is particularly effective, because you can offer the VC-funded startup a choice between being bought at a low valuation, or facing a quick-and-dirty clone backed by a much bigger company with a much bigger user base. See for example Facebook. August 18, 2020 at 3:57 PM Unknown said... After spending 16 years in Silicon Valley R&D, I am working for Toyota on zero emission vehicle technology, and I can tell you that the Japanese have not abandoned the corporate research model. I don't know if it is tradition or necessity, or simply that it works for them, but it is refreshingly old fashioned. In my mind, economics as a measure of success is as good as any other metric, because it represents a sort of minimization of effort, energy, what have you. R&D will always be resource limited in some way, electricity, money, time, personnel; and so we have to learn to be efficient within our constraints. The yin to that yang, is that innovation can not happen outside of a creative environment. It is the responsibility, and the sole responsibility, of leadership to maintain a dynamic balance between creativity/innovation and resource constraint. August 21, 2020 at 2:43 PM David. said... Rob Beschizza's Explore an abandoned research lab points to this video, which provides a suitable coda for the post. September 5, 2020 at 9:56 AM David. said... Ex-Google boss Eric Schmidt: US 'dropped the ball' on innovation by Karishma Vaswani starts: "In the battle for tech supremacy between the US and China, America has "dropped the ball" in funding for basic research, according to former Google chief executive Eric Schmidt. ... For example, Chinese telecoms infrastructure giant Huawei spends as much as $20bn (£15.6bn) on research and development - one of the highest budgets in the world. This R&D is helping Chinese tech firms get ahead in key areas like artificial intelligence and 5G." September 13, 2020 at 7:22 AM David. said... Daron Acemoglu makes good points in Antitrust Alone Won’t Fix the Innovation Problem, including: "In terms of R&D, the McKinsey Global Institute estimates that just a few of the largest US and Chinese tech companies account for as much as two-thirds of global spending on AI development. Moreover, these companies not only share a similar vision of how data and AI should be used (namely, for labor-replacing automation and surveillance), but they also increasingly influence other institutions, such as colleges and universities catering to tens of thousands of students clamoring for jobs in Big Tech. There is now a revolving door between leading institutions of higher education and Silicon Valley, with top academics often consulting for, and sometimes leaving their positions to work for, the tech industry." October 31, 2020 at 12:04 PM Blissex2 said... As a late comment, most corporate management realized that researchers in corporate labs were too old and too expensive, with permanent positions, plus benefits, pensions, etc. 
and decided to go for much lower cost alternatives: * A lot of research labs in cheaper offshore locations, with younger researcher not demanding high wages and pensions and benefits, and much easier to fire. * A lot of research was outsourced, via research grants, to universities, casualizing research work, because universities can put together very cheaply teams of young, hungry PhDs and postdocs on low pay and temporary contracts, also thanks to an enormous increase in the number of PhD and postdoc positions, also thanks to industry outsourcing contracts. January 12, 2021 at 1:29 PM
blog-dshr-org-2370 ---- DSHR's Blog: A Modest Proposal About Ransomware
Thursday, July 15, 2021 A Modest Proposal About Ransomware
On the evening of July 2nd the REvil ransomware gang exploited a 0-day vulnerability to launch a supply chain attack on customers of Kaseya's Virtual System Administrator (VSA) product. The timing was perfect, with most system administrators off for the July 4th long weekend. By the 6th Alex Marquardt reported that Kaseya says up to 1,500 businesses compromised in massive ransomware attack. REvil, which had previously extorted $11M from meat giant JBS, announced that for the low, low price of only $70M they would provide everyone with a decryptor. The US government's pathetic response is to tell the intelligence agencies to investigate and to beg Putin to crack down on the ransomware gangs. Good luck with that! It isn't his problem, because the gangs write their software to avoid encrypting systems that have default languages from the former USSR. I've written before (here, here, here) about the importance of disrupting the cryptocurrency payment channel that enables ransomware, but it looks like the ransomware crisis has to get a great deal worse before effective action is taken. Below the fold I lay out a modest proposal that could motivate actions that would greatly reduce the risk. It turns out that the vulnerability that enabled the REvil attack didn't meet the strict definition of a 0-day. Gareth Corfield's White hats reported key Kaseya VSA flaw months ago. Ransomware outran the patch explains: Rewind to April, and the Dutch Institute for Vulnerability Disclosure (DIVD) had privately reported seven security bugs in VSA to Kaseya. Four were fixed and patches released in April and May. Three were due to be fixed in an upcoming release, version 9.5.7. Unfortunately, one of those unpatched bugs – CVE-2021-30116, a credential-leaking logic flaw discovered by DIVD's Wietse Boonstra – was exploited by the ransomware slingers before its fix could be emitted. DIVD praised Kaseya's response: Once Kaseya was aware of our reported vulnerabilities, we have been in constant contact and cooperation with them. When items in our report were unclear, they asked the right questions. Also, partial patches were shared with us to validate their effectiveness. During the entire process, Kaseya has shown that they were willing to put in the maximum effort and initiative into this case both to get this issue fixed and their customers patched. They showed a genuine commitment to do the right thing. Unfortunately, we were beaten by REvil in the final sprint, as they could exploit the vulnerabilities before customers could even patch. But if Kaseya's response to DIVD's disclosure was praiseworthy, it turns out it was the exception. In Kaseya was warned about security flaws years ahead of ransomware attack, J. Fingas reports that: The giant ransomware attack against Kaseya might have been entirely avoidable. Former staff talking to Bloomberg claim they warned executives of "critical" security flaws in Kaseya's products several times between 2017 and 2020, but that the company didn't truly address them. Multiple staff either quit or said they were fired over inaction. Employees reportedly complained that Kaseya was using old code, implemented poor encryption and even failed to routinely patch software.
The company's Virtual System Administrator (VSA), the remote maintenance tool that fell prey to ransomware, was supposedly rife with enough problems that workers wanted the software replaced. One employee claimed he was fired two weeks after sending executives a 40-page briefing on security problems. Others simply left in frustration with a seeming focus on new features and releases instead of fixing basic issues. Kaseya also laid off some employees in 2018 in favor of outsourcing work to Belarus, which some staff considered a security risk given local leaders' partnerships with the Russian government. ... The company's software was reportedly used to launch ransomware at least twice between 2018 and 2019, and it didn't significantly rethink its security strategy. To reiterate: The July 2nd attack was apparently at least the third time Kaseya had infected customers with ransomware! Kaseya outsourced development to Belarus, a country where ransomware gangs have immunity! Kaseya fired security whistleblowers! The first two incidents didn't seem to make either Kaseya or its customers re-think what they were doing. Clearly, the only reason Kaseya responded to DIVD's warning was the threat of public disclosure. Without effective action to change this attitude the ransomware crisis will definitely result in what Stephen Diehl calls The Oncoming Ransomware Storm: Imagine a hundred new Stuxnet-level exploits every day, for every piece of equipment in public works and health care. Where every day you check your phone for the level of ransomware in the wild just like you do the weather. Entire cities randomly have their metro systems, water, power grids and internet shut off and on like a sudden onset of bad cybersecurity "weather". Or a time in business in which every company simply just allocates a portion of its earnings upfront every quarter and pre-pays off large ransomware groups in advance. It's just a universal cost of doing business and one that is fully sanctioned by the government because we've all just given up trying to prevent it and it's more efficient just to pay the protection racket. To make things worse, companies can insure against the risk of ransomware, essentially paying to avoid the hassle of maintaining security. Insurance companies can't price these policies properly, because they can't do enough underwriting to know, for example, whether the customer's backups actually work and whether they are offline enough so the ransomware doesn't encrypt them too. In Cyber insurance model is broken, consider banning ransomware payments, says think tank, Gareth Corfield reports on the Royal United Services Institute's (RUSI) latest report, Cyber Insurance and the Cyber Security Challenge: Unfortunately, RUSI's researchers found that insurers tend to sell cyber policies with minimal due diligence – and when the claims start rolling in, insurance company managers start looking at ways to escape an unprofitable line of business. ... RUSI's position on buying off criminals is unequivocal, with [Jason] Nurse and co-authors Jamie MacColl and James Sullivan saying in their report that the UK's National Security Secretariat "should conduct an urgent policy review into the feasibility and suitability of banning ransom payments." The fundamental problem is that neither the software vendors nor the insurers nor their customers are taking security seriously enough because it isn't a big enough crisis yet. The solution?
Take control of the crisis and make it big enough that security gets taken seriously. The US always claims to have the best cyber-warfare capability on the planet, so presumably they could do ransomware better and faster than gangs like REvil. The US should use this capability to mount ransomware attacks against US companies as fast as they can. Victims would see, instead of a screen demanding a ransom in Monero to decrypt their data, a screen saying: US Government CyberSecurity Agency Patch the following vulnerabilities immediately! The CyberSecurity Agency (CSA) used some or all of the following vulnerabilities to compromise your systems and display this notice: CVE-2021-XXXXX CVE-2021-YYYYY CVE-2021-ZZZZZ Three days from now if these vulnerabilities are still present, the CSA will encrypt your data. You will be able to obtain free decryption assistance from the CSA once you can prove that these vulnerabilities are no longer present. If the victim ignored the notice, three days later they would see: US Government CyberSecurity Agency The CyberSecurity Agency (CSA) used some or all of the following vulnerabilities to compromise your systems and encrypt your data: CVE-2021-XXXXX CVE-2021-YYYYY CVE-2021-ZZZZZ Once you have patched these vulnerabilities, click here to decrypt your data Three days from now if these vulnerabilities are still present, the CSA will re-encrypt your data. For a fee you will be able to obtain decryption assistance from the CSA once you can prove that these vulnerabilities are no longer present. The program would start out fairly gently and ramp up, shortening the grace period to increase the impact. The program would motivate users to keep their systems up-to-date with patches for disclosed vulnerabilities, which would not merely help with ransomware, but also with botnets, data breaches and other forms of malware. It would also raise the annoyance factor customers face when their supplier fails to provide adequate security in their products. This in turn would provide reputational and sales pressure on suppliers to both secure their supply chain and, unlike Kaseya, prioritize security in their product development. Of course, the program above only handles disclosed vulnerabilities, not the 0-days REvil used. There is a flourishing trade in 0-days, of which the NSA is believed to be a major buyer. The supply in these markets is increasing, as Dan Goodin reports in iOS zero-day let SolarWinds hackers compromise fully updated iPhones: In the first half of this year, Google’s Project Zero vulnerability research group has recorded 33 zero-day exploits used in attacks—11 more than the total number from 2020. The growth has several causes, including better detection by defenders and better software defenses that require multiple exploits to break through. The other big driver is the increased supply of zero-days from private companies selling exploits. “0-day capabilities used to be only the tools of select nation-states who had the technical expertise to find 0-day vulnerabilities, develop them into exploits, and then strategically operationalize their use,” the Google researchers wrote. “In the mid-to-late 2010s, more private companies have joined the marketplace selling these 0-day capabilities. No longer do groups need to have the technical expertise; now they just need resources.” The iOS vulnerability was one of four in-the-wild zero-days Google detailed on Wednesday. ...
Based on their analysis, the researchers assess that three of the exploits were developed by the same commercial surveillance company, which sold them to two different government-backed actors. As has been true since the Cold-War era and the "Crypto Wars" of the 1980s when cryptography was considered a munition, the US has prioritized attack over defense. The NSA routinely hoards 0-days, preferring to use them to attack foreigners rather than disclose them to protect US citizens (and others). This short-sighted policy has led to several disasters, including the Juniper supply-chain compromise and NotPetya. Senators wrote to the head of the NSA, and the EFF sued the Director of National Intelligence, to obtain the NSA's policy around 0-days: Since these vulnerabilities potentially affect the security of users all over the world, the public has a strong interest in knowing how these agencies are weighing the risks and benefits of using zero days instead of disclosing them to vendors, It would be bad enough if the NSA and other nations' security services were the only buyers of 0-days. But the $11M REvil received from JBS buys a lot of them, and if each could net $70M they'd be a wonderful investment. Forcing ransomware gangs to use 0-days by getting systems up-to-date with patches is good, but the gangs will have 0-days to use. So although the program above should indirectly reduce the supply (and thus increase the price) of 0-days by motivating vendors to improve their development and supply chain practices, something needs to be done to reduce the impact of 0-days on ransomware. The Colonial Pipeline and JBS attacks, not to mention the multiple hospital chains that have been disrupted, show that it is just a matter of time before a ransomware attack has a major impact on US GDP (and incidentally on US citizens). In this light, the idea that NSA should stockpile 0-days for possible future use is counter-productive. At any time 0-days in the hoard might leak, or be independently discovered. In the past the fallout from this was limited, but no longer; they might be used for a major ransomware attack. Is the National Security Agency's mission to secure the United States, or to have fun playing Team America: World Police in cyberspace? Unless they are immediately required for a specific operation, the NSA should disclose 0-days it discovers or purchases to the software vendor, and once patched, add them to the kit it uses to run its "ransomware" program. To do less is to place the US economy at risk. PS: David Sanger reported Tuesday that Russia’s most aggressive ransomware group disappeared. It’s unclear who disabled them.: Just days after President Biden demanded that President Vladimir V. Putin of Russia shut down ransomware groups attacking American targets, the most aggressive of the groups suddenly went off-line early Tuesday. ... A third theory is that REvil decided that the heat was too intense, and took the sites down itself to avoid becoming caught in the crossfire between the American and Russian presidents. That is what another Russian-based group, DarkSide, did after the ransomware attack on Colonial Pipeline, ... But many experts think that DarkSide’s going-out-of-business move was nothing but digital theater, and that all of the group’s key ransomware talent will reassemble under a different name. This is by far the most likely explanation for REvil's disappearance, leaving victims unable to pay. 
The same day, Bogdan Botezatu and Radu Tudorica reported that Trickbot Activity Increases; new VNC Module On the Radar: The Trickbot group, which has infected millions of computers worldwide, has recently played an active role in disseminating ransomware. We have been reporting on notable developments in Trickbot’s lifecycle, with highlights including the analysis in 2020 of one of its modules used to bruteforce RDP connections and an analysis of its new C2 infrastructure in the wake of the massive crackdown in October 2020. Despite the takedown attempt, Trickbot is more active than ever. In May 2021, our systems started to pick up an updated version of the vncDll module that Trickbot uses against select high-profile targets. As regards the "massive crackdown", Ravie Lakshmanan notes: The botnet has since survived two takedown attempts by Microsoft and the U.S. Cyber Command, Update: Via Barry Ritholtz we find this evidence of Willie Sutton's law in action. When asked "Why do you rob banks?", Sutton replied "Because that's where the money is." And, thanks to Jack Cable, there's now ransomwhe.re, which tracks ransomware payments in real time. It suffers a bit from incomplete data. Because it depends upon tracking Bitcoin addresses, it will miss the increasing proportion of demands that insist on Monero. Posted by David. at 8:00 AM Labels: malware, security 16 comments: Unknown said... Companies could also stop creating monoculture networks that are easy to manage and also easy to compromise. When every device is a domain joined Windows 10 machine running some low level centralized remote management system, it's just a matter of time before you are completely owned. This is the "Encryption Backdoor" problem in Computer Science (aka "Exceptional Access Systems"). It is impossible to build an exceptional access system and then ensure it is only used by good people to do good things. July 15, 2021 at 11:33 AM David. said... Lorenzo Franceschi-Bicchierai reports on today's 0-day news in Mysterious Israeli Spyware Vendor’s Windows Zero-Days Caught in the Wild: "Citizen Lab concluded that the malware and the zero-days were developed by Candiru, a mysterious Israel-based spyware vendor that offers “high-end cyber intelligence platform dedicated to infiltrate PC computers, networks, mobile handsets," according to a document seen by Haaretz. Candiru was first outed by the Israeli newspaper in 2019, and has since gotten some attention from cybersecurity companies such as Kaspersky Lab." July 15, 2021 at 12:02 PM Alwyn Schoeman said... How could we exploit the Russian Locale exception? July 15, 2021 at 5:17 PM Unknown said... The correct response is to copy the law that the EU passed that can fine companies up to 10% of revenue for lax cyber security. Equifax, SolarWinds and Kaseya all had lax security that caused untold damage to businesses and the public. I do not support letting cyber criminals get away without punishment, but I do support holding companies liable for gross negligence in cyber security. July 15, 2021 at 9:17 PM David. said... Brian Krebs originally suggested to Try This One Weird Trick Russian Hackers Hate by installing Russian keyboard support, but the bad guys figured out quickly that what they needed to test was not the keyboard support but the default language. So unless you want your machine to talk to you in Cyrillic, forget it. Hat tip to Bruce Schneier. July 16, 2021 at 7:20 AM David. said... I can't find anything that says the EU can impose 10% of revenue.
When the UK implemented the EU regulations in 2018 (my emphasis): "some of these organisations could be liable for fines of up to £17 million - or four per cent of global turnover - if lax cyber security standards result in loss of service under the Government’s proposals to implement the EU's Network and Information Systems (NIS) directive from May 2018." £17M is chickenfeed compared to the damage, this only applies to critical infrastructure, and I don't see any evidence that the EU has levied any such fines. July 16, 2021 at 8:55 AM David. said... All I could find for the US was this from 2018: "The U.S Department of Health and Human Services has fined Fresenius Medical Care Holdings Inc., a major supplier of medical equipment, $3.5 million for five separate data breaches that occurred in 2012." A derisory fine, 6 years late, for losing control of physical devices containing health information. Not exactly impressive. July 16, 2021 at 8:58 AM HMTKSteve said... How about keeping an air gap between critical systems and the internet? When companies used dedicated data circuits this kind of thing did not happen. Too many accountants have veto powers over IT and it shows. July 16, 2021 at 7:25 PM Static said... The government has no business patronizing anyone to keep their IT secure anymore than it has business telling them to lock their front door. July 17, 2021 at 9:32 AM David. said... Static, tell that to the FBI. Alex Hern reported April 14 that FBI hacks vulnerable US computers to fix malicious malware: "The FBI has been hacking into the computers of US companies running insecure versions of Microsoft software in order to fix them, the US Department of Justice has announced. The operation, approved by a federal court, involved the FBI hacking into “hundreds” of vulnerable computers to remove malware placed there by an earlier malicious hacking campaign, which Microsoft blamed on a Chinese hacking group known as Hafnium." July 19, 2021 at 6:56 AM David. said... The Washington Post's Gerrit De Vynck, Rachel Lerman, Ellen Nakashima and Chris Alcantara have an excellent explainer from 10 days ago entitled The anatomy of a ransomware attack. July 19, 2021 at 8:03 AM David. said... And the class action lawyers get in on the ransomware act. In First came the ransomware attacks, now come the lawsuits, Gerrit De Vynck reports on Eddie Darwich, a pioneering plaintiff: "Now he’s suing Colonial Pipeline over those lost sales, accusing it of lax security. He and his lawyers are hoping to also represent the hundreds of other small gas stations that were hurt by the hack. It’s just one of several class-action lawsuits that are popping up in the wake of high-profile ransomware attacks. Another lawsuit filed against Colonial in Georgia in May seeks damages for consumers who had to pay higher gas prices. A third is in the works, with law firm Chimicles Schwartz Kriner & Donaldson-Smith LLP pursuing a similar effort. And Colonial isn’t the only company being sued. San Diego-based hospital system Scripps Health is facing class-action lawsuits stemming from a ransomware attack in April." July 25, 2021 at 1:46 PM David. said... Charlie Osborne's Updated Kaseya ransomware attack FAQ: What we know now is a useful overview. July 27, 2021 at 10:07 AM David. said... In the wake of major attacks, ransomware groups Avaddon, DarkSide and REvil went dark.
Now Dan Goodin reports that they may be re-branding themselves in Haron and BlackMatter are the latest groups to crash the ransomware party: "Both groups say they are aiming for big-game targets, meaning corporations or other large businesses with the pockets to pay ransoms in the millions of dollars. ... As S2W Lab pointed out, the layout, organization, and appearance of [Haron's] site are almost identical to those for Avaddon, the ransomware group that went dark in June after sending a master decryption key to BleepingComputer that victims could use to recover their data. ... Recorded Future, The Record, and security firm Flashpoint, which also covered the emergence of BlackMatter, have questioned if the group has connections to either DarkSide or REvil." July 28, 2021 at 8:55 AM David. said... Just a reminder that the ransomware threat has been being ignored for a long time. Nearly five years ago I wrote Asymmetric Warfare. The first comment was: "Ransomware is another example. SF Muni has been unable to collect fares for days because their systems fell victim to ransomware. The costs to mount this attack are insignificant in comparison to the costs imposed on the victim. Quinn Norton reports: 'The predictions for this year from some analysis is that we’ll hit seventy-five billion in ransomware alone by the end of the year. Some estimates say that the loss globally could be well over a trillion this year, but it’s hard to say what a real number is.'" August 6, 2021 at 1:48 PM David. said... Catalin Cimpanu reports that Accenture downplays ransomware attack as LockBit gang leaks corporate data: "Fortune 500 company Accenture has fell victim to a ransomware attack but said today the incident did not impact its operations and has already restored affected systems from backups. News of the attack became public earlier this morning when the company’s name was listed on the dark web blog of the LockBit ransomware cartel. The LockBit gang claimed it gained access to the company’s network and was preparing to leak files stolen from Accenture’s servers at 17:30:00 GMT. ... Just before this article was published, the countdown timer on the LockBit gang’s leak site also reached zero. Following this event, the LockBit gang leaked Accenture’s files, which, following a cursory review, appeared to include brochures for Accenture products, employee training courses, and various marketing materials. No sensitive information appeared to be included in the leaked files." August 12, 2021 at 4:01 PM
blog-dshr-org-1332 ---- DSHR's Blog: Proofs of Space Thursday, March 22, 2018 Proofs of Space Bram Cohen, the creator of BitTorrent, gave an EE380 talk entitled Stopping grinding attacks in proofs of space. Two aspects were really interesting: A detailed critique of both the Proof of Work system used by most cryptocurrencies and blockchains, and schemes such as Proof of Stake that have been proposed to replace it. An alternate scheme for securing blockchains based on combining Proof of Space with Verifiable Delay Functions. But there was another aspect that concerned me. Follow me below the fold for details.
I'll first outline Cohen's critiques of Proof of Work, Proof of Stake, and Proofs of Other Things, then summarize his proposed scheme combining Proof of Space with Verifiable Delay Functions (PoSp/VDF), and conclude by addressing the aspects of his talk that concerned me. I have tried to refer to quotes, concepts and slides by the time at which they appear in the video thus [MM:SS] but these times are approximate. I apologize if the quotes are mis-transcribed from the audio. Proof of Work The goal of PoSp/VDF is to reduce greatly the cost of verifying blocks in a blockchain. But "zero cost brings grinding attacks", in which an attacker tries lots of possibilities to find the best one. "Bitcoin fixes this by making grinding the expected behavior rather than an attack" using Proof of Work, in which peers try to invert a hash function. The computational cost of the vast number of hashes needed is the cause of Bitcoin's massive energy consumption. Cost of rewriting attack Cohen points out that Proof of Work is "Also susceptible to rewriting attacks". Bitcoin core describes rewriting attacks thus: powerful miners have the ability to rewrite the block chain and replace their own transactions, allowing them to take back previous payments. The cost of this attack depends on the percentage of total network hash rate the attacking miner controls. The more centralized mining becomes, the less expensive the attack for a powerful miner. They provide a real example: In September 2013, someone used centralized mining pool GHash.io to steal an estimated 1,000 bitcoins (worth $124,000 USD) from the gambling site BetCoin. The attacker would spend bitcoins to make a bet. If he won, he would confirm the transaction. If he lost, he would create a transaction returning the bitcoins to himself and confirm that, invalidating the transaction that lost the bet. By doing so, he gained bitcoins from his winning bets without losing bitcoins on his losing bets. Although this attack was performed on unconfirmed transactions, the attacker had enough hash rate (about 30%) to have profited from attacking transactions with one, two, or even more confirmations. More details of this attack are here. Proof of Stake Proof of Stake is an alternative consensus system for cryptocurrencies in which holders put some or all of their holdings "at stake", a process known as "bonding". Blocks of transactions are validated by a quorum of the currency "at stake". Misbehavior by holders is deterred by "slashing", miscreants lose their stake. It appears attractive, in that it can vastly reduce the energy demand of Proof of Work systems such as Bitcoin's. It is especially attractive to core team members and early adopters of a cryptocurrency, since they will be large holders and will thus be able to control the currency. Cohen's critique of Proof of Stake starts around [8:30] and covers six main points: Its threat model is weaker than Proof of Work. Just as Proof of Work is in practice centralized around large mining pools, Proof of Stake is centralized around large currency holdings (which were probably acquired much more cheaply than large mining installations). The choice of a quorum size is problematic. "Too small and it's attackable. Too large and nothing happens." And "Unfortunately, those values are likely to be on the wrong side of each other in practice." Incentivizing peers to put their holdings at stake creates a class of attacks in which peers "exaggerate one's own bonding and blocking it from others." 
Slashing introduces a class of attacks in which peers cause others to be fraudulently slashed. The incentives need to be strong enough to overcome the risks of slashing, and of keeping their signing keys accessible and thus at risk of compromise. "Defending against those attacks can lead to situations where the system gets wedged because a split happened and nobody wants to take one for the team" Proofs of Other Things Cohen's critique of other types of proof starts around [34:30]. The main points are: Peers need to be able to audit proofs locally, which isn't true of proofs of participation, importance or more nodes. The system needs to adjust the "difficulty" of proofs, which requires an exponential distribution of "difficulty", which these other types typically lack. In particular, at [37:50] he critiques "proof of storage of user-supplied data" as used by services such as Maidsafe. An attacker doesn't have to store the amount of data they requested: "because you can just take a key and use it to ... generate completely fake data which everyone else has to store the data to claim their rewards later and you yourself don't have to do that, you just store the seed and there's no way to detect that because they are all supposed to be encrypted files that just look like garbage anyway". Proof of Space and Verifiable Delay Functions Cohen's explanation of PoSp/VDF starts at [13:32]. His summary is: Bring in verifiable delay functions (VDFs) and alternate between proofs of space and time Split between the 'trunk' of the blockchain which challenges come from and the 'foliage' which contains transactions Attach public keys to proofs of space so there's no choice after one wins Use canonical proofs of space, verifiable delay functions, and signatures All we have to do is invent a new proof of space algorithm, verifiable delay algorithm, and method of combining them My understanding of PoSp/VDF may be deficient. I have read Beyond Hellman’s Time-Memory Trade-Offs with Applications to Proofs of Space, which explains the PoSp technique, but the mathematics exceed my capabilities. The VDF technique doesn't appear to have been published yet. As I understand it, the proof of space technique in essence works by having the prover fill storage space with an array of pseudo-random points in [0,1] via a time-consuming process. The verifier can then pose to the prover a question that can be answered either by a single storage access (fast) or by repeating the process of filling the storage (slow). By observing the time the prover takes the verifier can distinguish these two cases, and thus be assured that the prover has stored the (otherwise useless) data. As I understand it, verifiable delay functions work by forcing the prover to perform a specified number of iterations to generate a value that the verifier can quickly show is valid. Cohen describes how these two techniques work together in a blockchain at about [22:38]: To use in a blockchain, each block is a proof of space followed by a proof of time which finalizes it. To find a proof of space, take the hash of the last proof of time, put it on a point in [0,1], find the closest proof of space you can to that. To find the number of iterations of the proof of time, multiply the difference between those two positions by the current work difficulty factor and round up to the next integer.
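Here is a minimal sketch in Python of that winner-selection step as I understand it. The "plot" is just a sorted list of random floats standing in for a farmer's stored proofs of space, and the difficulty constant is arbitrary; this illustrates the selection arithmetic, not how real proofs of space are constructed or verified, and it is my sketch rather than Cohen's or Chia's code:

    import bisect
    import hashlib
    import math
    import random

    def challenge_point(last_proof_of_time):
        # Map the hash of the previous proof of time onto a point in [0,1).
        h = hashlib.sha256(last_proof_of_time).digest()
        return int.from_bytes(h, "big") / 2**256

    def closest_plot_point(plot, challenge):
        # Nearest stored point to the challenge; plot must be sorted.
        i = bisect.bisect_left(plot, challenge)
        candidates = plot[max(0, i - 1):i + 1]
        return min(candidates, key=lambda p: abs(p - challenge))

    def vdf_iterations(challenge, point, difficulty):
        # Iterations of the verifiable delay function before the block finalizes.
        return math.ceil(abs(challenge - point) * difficulty)

    # Two farmers: one devotes ten times the storage, so its plot is ten times denser.
    small_plot = sorted(random.random() for _ in range(1_000))
    large_plot = sorted(random.random() for _ in range(10_000))
    challenge = challenge_point(b"hash of the last proof of time")
    for name, plot in [("small farmer", small_plot), ("large farmer", large_plot)]:
        best = closest_plot_point(plot, challenge)
        print(name, vdf_iterations(challenge, best, difficulty=100_000_000))
    # The denser plot is closer to the challenge on average, so its delay function
    # usually finishes first; expected reward scales with storage devoted to the plot.

Running it repeatedly with fresh challenges shows the larger plot winning roughly ten times as often, which is the property the scheme needs to make storage, rather than computation, the scarce resource.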
The result of this is that the best proof of space will finish first, with the distribution of arrival times of finalizations the same as happens in a proof of work system if resources are fixed over time. The only discretion left on the part of farmers is whether to withhold their winning proofs of space. In other words, the winning verification of a block will be the one from the peer whose plot contains the closest point, because that distance controls how long it will take for the peer to return its verification via the verifiable delay function. The more storage a peer devotes to its plot, and thus the shorter the average distance between points, the more likely it is to be the winner because its proof of space will suffer the shortest delay. Just as with Bitcoin's Proof of Work, farming pools would arise in PoSp/VDF to smooth out income. The pool would divide up [0,1] among its members. Concerns One aspect of the talk that concerned me was that Cohen didn't seem well-informed about the landscape of storage. Here are a few quotes:

2016 Media Shipments
Medium     Exabytes  Revenue  $/GB
Flash      120       $38.7B   $0.320
Hard Disk  693       $26.8B   $0.039
LTO Tape   40        $0.65B   $0.016

[50:00] "people spend like $150B/year on storage media". Robert Fontana and Gary Decad of IBM report that in 2016 the total revenue of storage media vendors (excluding optical) was $66B. But Cohen is only interested in hard disk, which they report had 2016 revenue of $26.8B. Cohen's first slide claims "Storage is an over $100 billion a year industry with about 50% utilization". The vast majority of hard disks are now purchased by cloud companies. The idea that, for example, Amazon Web Services has purchased twice as much hard disk as it needs is ludicrous. [43:50] "You have this thing where mass storage medium you can set a bit and leave it there until the end of time and its not costing you any more power. DRAM is costing you power when its just sitting there doing nothing". A state-of-the-art disk drive, such as Seagate's 12TB BarraCuda Pro, consumes about 1W spun-down in standby mode, about 5W spun-up idle and about 9W doing random 4K reads. Clearly, PoSp/VDF takes energy, just a lot less energy than Proof of Work. Cohen might argue that, since PoSp/VDF expects to use the empty 50% of drives that store actual user data, the energy cost is zero. But these drives are not just part empty, they are in standby much of the time too. Using them for Proof of Space means they are active somewhat more of the time, because they have to wake up from standby at least once every block time. They are thus consuming energy that the owner has to pay for. Also, the drives are typically warranted not "until the end of time" but only for 5 years. There is another economic impact. The consumer drives I believe he is thinking about are intended for consumer workloads, and are thus designed to be idle much of the time. They are specified with a "rated workload". The 12TB BarraCuda Pro specifies: Maximum rate of 300TB/year. Workloads exceeding the annualized rate may degrade the drive MTBF and impact product reliability. The Annualized Workload Rate is in units of TB per year, or TB per 8760 power on hours. Thus the drive is specified to average no more than about 35GB/hr over its lifetime. The drive has a maximum sustained transfer rate of 0.238GB/s or 857GB/hr. So the drive is designed for a duty cycle of about 4%. The Proof of Space workload would increase the drives' duty cycle somewhat, since it requires a few disk accesses per block time.
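A few lines of Python reproduce the arithmetic behind that roughly 4% figure, using the BarraCuda Pro numbers quoted above:

    rated_tb_per_year = 300        # rated workload from the quoted spec
    hours_per_year = 8760          # "TB per 8760 power on hours"
    max_rate_gb_per_s = 0.238      # maximum sustained transfer rate

    avg_gb_per_hr = rated_tb_per_year * 1000 / hours_per_year  # ~34 GB/hr allowed
    max_gb_per_hr = max_rate_gb_per_s * 3600                   # ~857 GB/hr possible
    print(f"{avg_gb_per_hr:.0f} GB/hr of {max_gb_per_hr:.0f} GB/hr "
          f"= {avg_gb_per_hr / max_gb_per_hr:.1%} duty cycle")  # ~4.0%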
It would thus wear the drives out somewhat faster, imposing further costs on the storage owner. Write endurance vs. cell size [51:30] "my understanding is that the overwriting limits on flash has gotten better over time". As flash has added more bits per cell, the write endurance has decreased, as shown in the table. Enterprise flash obscures this unfortunate fact by over-provisioning cells, spreading the writes across more cells, but this is costly. Another aspect of the talk that concerned me was that, as I understand it, Cohen's vision is of a PoSp/VDF network comprising large numbers of desktop PCs, continuously connected and powered up, each with one, or at most a few, half-empty hard drives. The drives would have been purchased at retail a few years ago. The desktop PC is a declining market, and devices such as laptops, smartphones, tablets and small form-factor PCs that are displacing it use flash storage, are intermittently connected, and asleep whenever possible. Thus Proof of Space addresses a declining market. The edge of the Internet has changed since Cohen invented BitTorrent. These changes have affected the market for hard disk. Flash is increasingly destroying the markets for enterprise drives, and for 2.5" laptop drives. On the other hand, "The Cloud" has increased demand for "nearline" and consumer 3.5" drives. They are purchased in bulk at prices far lower than retail by customers such as Amazon, who cannot afford to have 50% of their investment in storage media sitting empty. Nevertheless, the small proportion of their total inventory that is empty on average gives them much more empty storage than any retail customer. Cohen's first slide claims "As long as rewards are below depreciation value it will be unprofitable to buy storage just for farming" Suppose a network matching Cohen's vision existed: how would economies of scale affect it? The cloud companies' empty drives are brand new, purchased in bulk direct from the manufacturer at a huge discount. That is three reasons why they would have much lower cost per byte than the drives in the network nodes. If the cloud companies chose to burn-in their new drives by using them for Proof of Space they would easily dominate the network at almost zero cost. Once again Economies of Scale in Peer-to-Peer Networks was prophetic. PS - I have not had time to analyze in detail the FileCoin proposal, which has some similarities to PoSp/VDF. It has many significant differences, in that it stores user data rather than verifies transactions, uses a combined Proof of SpaceTime rather than separate proofs, and employs bonding and slashing. It also appears to depend upon Lightning-style off-chain payment channels. Posted by David. at 8:00 AM Labels: bitcoin, storage media 7 comments: David. said... David Gerard's Ethereum Casper proof-of-stake only has to work well enough: Worse is Better in action reports: "Proof-of-stake is a bit too obviously “thems what has, gets” — so you have to convince the users to go along with it. It’s also as naturally centralising as proof-of-work, if not more so. ... Vitalik Buterin has been talking about proof-of-stake since 2013. Ethereum was intended to be proof-of-stake in the white paper, but Buterin noted in 2014 that proof-of-stake would be nontrivial — so Ethereum went for proof-of-work instead, while they worked on the problem. ... Ethereum Casper, the project to move Ethereum to proof-of-stake, started in 2014. It’s been six months away for four years now.
The Proof of Stake FAQ is a list of approaches that haven’t quite worked well enough. Casper has had numerous technical and security issues. The current version only adds a bit of proof-of-stake to the existing proof-of-work system. But Casper 0.1 has just been released — and Buterin is talking about taking it live." May 14, 2018 at 6:11 PM David. said... More from Chia Network, the company building on Bram's ideas, in The ASIC Resistance of Proof of Space. August 7, 2018 at 6:58 PM David. said... Back in 1999 Ron Rivest posed a cryptographic puzzle based on a Verifiable Delay Function (VDF). Estimates at the time were that it would take until the 2030s to solve it. But, as Katyanna Quach reports in Self-taught Belgian bloke cracks crypto conundrum that was supposed to be uncrackable until 2034: "Fabrot wrote the equation in a few lines of C code and called upon the GNU Multiple Precision Arithmetic Library, a free mathematical software library to run the computation over 79 trillion times. He used a bog standard PC with an Intel Core i7-6700 processor, and took about three and a half years to finally complete over 79 trillion calculations." April 30, 2019 at 8:07 AM David. said... Bram Cohen isn't the only skeptic about Proof of Space; Messari CEO: Ethereum 2.0 Proof-of-Stake Transition Not to Happen Until at Least 2021 by Helen Partz reports on discussions at the Ethereal Summit. May 11, 2019 at 6:55 AM David. said... As I predicted three years ago, once the Chia network got started the cloud providers' economies of scale would kick in. Jamie Crawley now reports that Amazon Offers Mining in the Cloud for New Chia Cryptocurrency. May 12, 2021 at 10:43 AM David. said... David Gerard's Chia Is a New Way to Waste Resources for Cryptocurrency is the summary of the rollout of the Chia network that I wish I would have written. Go read it. May 23, 2021 at 4:19 PM David. said... Chia Cryptocurrency, Started By BitTorrent Creator Bram Cohen, Engaging In Obnoxiously Bogus Trademark Bullying by Mike Masnick reports on Chia's efforts to make friends and influence people. June 3, 2021 at 11:12 AM
blog-dshr-org-2015 ---- DSHR's Blog Tuesday, August 10, 2021 The Economist On Cryptocurrencies The Economist edition dated August 7th has a leader (Unstablecoins) and two articles in the Finance section (The disaster scenario and Here comes the sheriff). The leader argues that: Regulators must act quickly to subject stablecoins to bank-like rules for transparency, liquidity and capital. Those failing to comply should be cut off from the financial system, to stop people drifting into an unregulated crypto-ecosystem. Policymakers are right to sound the alarm, but if stablecoins continue to grow, governments will need to move faster to contain the risks. But even The Economist gets taken in by the typical cryptocurrency hype, balancing current actual risks against future possible benefits: Yet it is possible that regulated private-sector stablecoins will eventually bring benefits, such as making cross-border payments easier, or allowing self-executing “smart contracts”. Regulators should allow experiments whose goal is not merely to evade financial rules. They don't seem to understand that, just as the whole point of Uber is to evade the rules for taxis, the whole point of cryptocurrency is to "evade financial rules". Below the fold I comment on the two articles. Read more » Posted by David.
at 8:00 AM 2 comments: Labels: bitcoin Tuesday, August 3, 2021 Stablecoins Part 2 I wrote Stablecoins about Tether and its "magic money pump" seven months ago. A lot has happened and a lot has been written about it since, and some of it explores aspects I didn't understand at the time, so below the fold at some length I try to catch up. Read more » Posted by David. at 8:00 AM 3 comments: Labels: bitcoin Thursday, July 29, 2021 Economics Of Evil Revisited Eight years ago I wrote Economics of Evil about the death of Google Reader and Google's habit of leaving its customers users in the lurch. In the comments to the post I started keeping track of accessions to le petit musée des projets Google abandonnés. So far I've recorded at least 33 dead products, an average of more than 4 a year. Two years ago Ron Amadeo wrote about the problem this causes in Google’s constant product shutdowns are damaging its brand: We are 91 days into the year, and so far, Google is racking up an unprecedented body count. If we just take the official shutdown dates that have already occurred in 2019, a Google-branded product, feature, or service has died, on average, about every nine days. Below the fold, some commentary on Amadeo's latest report from the killing fields, in which he detects a little remorse. Read more » Posted by David. at 8:00 AM 1 comment: Labels: cloud economics Tuesday, July 27, 2021 Yet Another DNA Storage Technique An alternative approach to nucleic acid memory by George D. Dickinson et al from Boise State University describes a fundamentally different way to store and retrieve data using DNA strands as the medium. Will Hughes et al have an accessible summary in DNA ‘Lite-Brite’ is a promising way to archive data for decades or longer: We and our colleagues have developed a way to store data using pegs and pegboards made out of DNA and retrieving the data with a microscope – a molecular version of the Lite-Brite toy. Our prototype stores information in patterns using DNA strands spaced about 10 nanometers apart. Below the fold I look at the details of the technique they call digital Nucleic Acid Memory (dNAM). Read more » Posted by David. at 8:00 AM No comments: Labels: storage media Tuesday, July 20, 2021 Alternatives To Proof-of-Work The designers of peer-to-peer consensus protocols such as those underlying cryptocurrencies face three distinct problems. They need to prevent: Being swamped by a multitude of Sybil peers under the control of an attacker. This requires making peer participation expensive, such as by Proof-of-Work (PoW). PoW is problematic because it has a catastrophic carbon footprint. A rational majority of peers from conspiring to obtain inappropriate benefits. This is thought to be achieved by decentralization, that is a network of so many peers acting independently that a conspiracy among a majority of them is highly improbable. Decentralization is problematic because in practice all successful cryptocurrencies are effectively centralized. A rational minority of peers from conspiring to obtain inappropriate benefits. This requirement is called incentive compatibility. This is problematic because it requires very careful design of the protocol. In the rather long post below the fold I focus on some potential alternatives to PoW, inspired by Jeremiah Wagstaff's Subspace: A Solution to the Farmer’s Dilemma, the white paper for a new blockchain technology. Read more » Posted by David.
at 8:00 AM 5 comments: Labels: bitcoin, P2P Thursday, July 15, 2021 A Modest Proposal About Ransomware Read more » Posted by David. at 8:00 AM 16 comments: Labels: malware, security Tuesday, July 13, 2021 Intel Did A Boeing Two years ago, Wolf Richter noted that Boeing's failure to invest in a successor airframe was a major cause of the 737 Max debacle: From 2013 through Q1 2019, Boeing has blown a mind-boggling $43 billion on share buybacks I added up the opportunity costs: Suppose instead of buying back stock, Boeing had invested in its future. Even assuming an entirely new replacement for the 737 series was as expensive as the 787 (the first of a new airframe technology), they could have delivered the first 737 replacement ($32B), and be almost 70% through developing another entirely new airframe ($11B/$16B). But executive bonuses and stock options mattered more than the future of the company's cash cow product. Below the fold I look at how Intel made the same mistake as Boeing, and early signs that they have figured out what went wrong. Read more » Posted by David. at 8:00 AM 3 comments: Labels: stock buybacks
blog-dshr-org-2470 ---- DSHR's Blog: Securing The Software Supply Chain Tuesday, December 18, 2018 Securing The Software Supply Chain This is the second part of a series about trust in digital content that might be called: Is this the real life? Is this just fantasy? The first part was Certificate Transparency, about how we know we are getting content from the Web site we intended to. This part is about how we know we're running the software we intended to. This question, how to defend against software supply chain attacks, has been in the news recently: A hacker or hackers sneaked a backdoor into a widely used open source code library with the aim of surreptitiously stealing funds stored in bitcoin wallets, software developers said Monday.
The malicious code was inserted in two stages into event-stream, a code library with 2 million downloads that's used by Fortune 500 companies and small startups alike. In stage one, version 3.3.6, published on September 8, included a benign module known as flatmap-stream. Stage two was implemented on October 5 when flatmap-stream was updated to include malicious code that attempted to steal bitcoin wallets and transfer their balances to a server located in Kuala Lumpur. See also here and here. The good news is that this was a highly specific attack against a particular kind of cryptocurrency wallet software; things could have been much worse. The bad news is that, however effective they may be against some supply chain attacks, none of the techniques I discuss below the fold would defend against this particular attack. In an important paper entitled Software Distribution Transparency and Auditability, Benjamin Hof and Georg Carle from TU Munich use Debian's Advanced Package Tool (APT) as an example of a state-of-the-art software supply chain, and: Describe how APT works to maintain up-to-date software on clients by distributing signed packages. Review previous efforts to improve the security of this process. Propose to enhance APT's security by layering a system similar to Certificate Transparency (CT) on top. Detail the operation of their systems' logs, auditors and monitors, which are similar to CT's in principle but different in detail. Describe and measure the performance of an implementation of their layer on top of APT using the Trillian software underlying some CT implementations. There are two important "missing pieces" in their system, and all the predecessors, which are the subjects of separate efforts: Reproducible Builds. Bootstrappable Compilers. How APT Works A system running Debian or other APT-based Linux distribution runs software it received in "packages" that contain the software files, and metadata that includes dependencies. Their hashes can be verified against those in a release file, signed by the distribution publisher. Packages come in two forms, source and compiled. The source of a package is signed by the official package maintainer and submitted to the distribution publisher. The publisher verifies the signature and builds the source to form the compiled package, whose hash is then included in the release file. The signature on the source package verifies that the package maintainer approves this combination of files for the distributor to build. The signature on the release file verifies that the distributor built the corresponding set of packages from approved sources and that the combination is approved for users to install. Previous Work It is, of course, possible for the private keys on which the maintainer's and distributor's signatures depend to be compromised: Samuel et al. consider compromise of signing keys in the design of The Update Framework (TUF), a secure application updater. To guard against key compromise, TUF introduces a number of different roles in the update release process, each of which operates cryptographic signing keys. The following three properties are protected by TUF. The content of updates is secured, meaning its integrity is preserved. Securing the availability of updates protects against freeze attacks, where an outdated version with known vulnerabilities is served in place of a security update. The goal of maintaining the correct combination of updates implies the security of meta data. 
The goal of introducing multiple roles each with its own key is to limit the damage a single compromised key can do. An orthogonal approach is to implement multiple keys for each role, with users requiring a quorum of verified signatures before accepting a package: Nikitin et al. develop CHAINIAC, a system for software update transparency. Software developers create a Merkle tree over a software package and the corresponding binaries. This tree is then signed by the developer, constituting release approval. The signed trees are submitted to co-signing witness servers. The witnesses require a threshold of valid developer signatures to accept a package for release. Additionally, the mapping between source and binary is verified by some of the witnesses. If these two checks succeed, the release is accepted and collectively signed by the witnesses. The system allows to rotate developer keys and witness keys, while the root of trust is an offline key. It also functions as a timestamping service, allowing for verification of update timeliness. CT-like Layer Hof and Carle's proposal is to use verifiable logs, similar to those in CT, to ensure that malfeasance is detectable. They write: Compromise of components and collusion of participants must not result in a violation of the following security goals remaining undetected. A goal of our system is to make it infeasible for the attacker to deliver targeted backdoors. For every binary, the system can produce the corresponding source code and the authorizing maintainer. Defined irregularities, such as a failure to correctly increment version numbers, also can be detected by the system. As I understand it, this is accurate but somewhat misleading. Their system adds a transparency layer on top of APT: The APT release file identifies, by cryptographic hash, the packages, sources, and meta data which includes dependencies. This release file, meta data, and source packages are submitted to a log server operating an appendonly Merkle tree, as shown in Figure 2. The log adds a new leaf for each file. We assume maintainers may only upload signed source packages to the archive, not binary packages. The archive submits source packages to one or more log servers. We further assume that the buildinfo files capturing the build environment are signed and are made public, e.g. by them being covered by the release file, together with other meta data. In order to make the maintainers uploading a package accountable, a source package containing all maintainer keys is created and submitted into the archive. This constitutes the declaration by the archive, that these keys were authorized to upload for this release. The key ring is required to be append-only, where keys are marked with an expiry date instead of being removed. This allows verification of source packages submitted long ago, using the keys valid at the respective point in time. Just as with CT, the log replies to each valid submission with a signed commitment, guaranteeing that it will shortly produce the signed root of a Merkle tree that includes the submission: At release time, meta data and release file are submitted into the log as well. The log server produces a commitment for each submission, which constitutes its promise to include the submitted item into a future version of the tree. The log only accepts authenticated submissions from the archive. The commitment includes a timestamp, hash of the release file, log identifier and the log's signature over these. 
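As a rough illustration of what the client does with such a commitment, here is a sketch in Python. The field names are my own, not Hof and Carle's, and an HMAC stands in for the log's signature purely so the example runs with the standard library; a real log would use an asymmetric signature scheme and a fixed wire encoding:

    import hashlib
    import hmac
    from dataclasses import dataclass

    @dataclass
    class InclusionCommitment:
        timestamp: int       # when the log promised to include the release file
        release_hash: bytes  # SHA-256 of the submitted release file
        log_id: str          # identifies the log that made the promise
        signature: bytes     # the log's signature over the three fields above

    def _body(timestamp, release_hash, log_id):
        # Canonical byte string the log signs; a real log defines a fixed encoding.
        return b"|".join([str(timestamp).encode(), release_hash, log_id.encode()])

    def issue_commitment(log_key, log_id, release_file, timestamp):
        # Log side: promise to include this release file in a future tree.
        release_hash = hashlib.sha256(release_file).digest()
        sig = hmac.new(log_key, _body(timestamp, release_hash, log_id), "sha256").digest()
        return InclusionCommitment(timestamp, release_hash, log_id, sig)

    def client_accepts(commitment, release_file, log_key):
        # Client side: the commitment must cover the release file actually
        # received, and the signature must verify; the client later demands an
        # inclusion proof against a signed tree root.
        if hashlib.sha256(release_file).digest() != commitment.release_hash:
            return False
        expected = hmac.new(log_key,
                            _body(commitment.timestamp, commitment.release_hash,
                                  commitment.log_id), "sha256").digest()
        return hmac.compare_digest(expected, commitment.signature)

    # Example: the archive submits a release file and ships the commitment to clients.
    c = issue_commitment(b"log-secret", "log-1", b"Release: bullseye ...", 1626000000)
    assert client_accepts(c, b"Release: bullseye ...", b"log-secret")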
The archive should then verify that the log has produced a signed tree root that resolves the commitment. To complete the release, the archive publishes the commitments together with the updates. Clients can then proceed with the verification of the release file. The log regularly produces signed Merkle tree roots after receiving a valid inclusion request. The signed tree root produced by the log includes the Merkle tree hash, tree size, timestamp, log identifier, and the log's signature. The client now obtains from the distribution mirror not just the release file, but also one or more inclusion commitments showing that the release file has been submitted to one or more of the logs trusted both by the distributor and the client: Given the release file and inclusion commitment, the client can verify by hashing that the commitment belongs to this release file and also verify the signature. The client can now query the log, demanding a current tree root and an inclusion proof for this release file. Per standard Merkle tree proofs, the inclusion proof consists of a list of hashes to recompute the received root hash. For the received tree root, a consistency proof is demanded to a previous known tree root. The consistency proof is again a list of hashes. For the two given tree roots, it shows that the log only added items between them. Clients store the signed tree root for the largest tree they have seen, to be used in any later consistency proofs. Setting aside split view attacks, which will be discussed later, clients verifying the log inclusion of the release file will detect targeted modifications of the release. Like CT, in addition to logs their system includes auditors, typically integrated with clients, and independent monitors regularly checking the logs for anomalies. For details, you need to read the paper, but some idea can be gained from their description of how the system detects two kinds of attack: The Hidden Version Attack The Split View Attack The Hidden Version Attack Hof and Carle describe this attack thus: The hidden version attack attempts to hide a targeted backdoor by following correct signing and log submission procedures. It may require collusion by the archive and an authorized maintainer. The attacker prepares a targeted malicious update to a package, say version v1.2.1, and a clean update v1.3.0. The archive presents the malicious package only to the victim when it wishes to update. The clean version v1.3.0 will be presented to everybody immediately afterwards. A non-targeted user is unlikely to ever observe the backdoored version, thereby drawing a minimal amount of attention to it. The attack however leaves an audit trail in the log, so the update itself can be detected by auditing. A package maintainer monitoring uploads for their packages using the log would notice an additional version being published. A malicious package maintainer would however not alert the public when this happens. This could be construed as a targeted backdoor in violation of the stated security goals. It is true that the backdoored package would be in the logs, but that in and of itself does not indicate that it is malign: To mitigate this problem a minimum time between package updates can be introduced. This can be achieved by fixing the issuance of release files and their log submission to a static frequency, or by alerting on quick subsequent updates to one package.
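The second mitigation is essentially a monitoring rule. A minimal sketch of a monitor that flags suspiciously quick successive updates to the same package follows; the threshold and the data shapes are invented for illustration, not taken from the paper:

```python
# Sketch of a monitor that scans the log's stream of (package, version,
# publication time) entries and alerts when two updates to one package land
# closer together than a policy threshold.
from datetime import timedelta

MIN_GAP = timedelta(hours=12)   # policy knob, not a value from the paper

def find_rapid_updates(log_entries):
    """log_entries: iterable of (package, version, published: datetime) tuples."""
    last_seen = {}
    alerts = []
    for package, version, published in sorted(log_entries, key=lambda e: e[2]):
        if package in last_seen:
            prev_version, prev_time = last_seen[package]
            if published - prev_time < MIN_GAP:
                alerts.append((package, prev_version, version, published - prev_time))
        last_seen[package] = (version, published)
    return alerts
```

An update pair like v1.2.1 and v1.3.0 published minutes apart would be flagged for human investigation, which is where the caveat below comes in.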
There may be good reasons for releasing a new update shortly after its predecessor; for example, a vulnerability might be discovered in the predecessor shortly after release. In the hidden version attack, the attacker increases a version number in order to get the victim to update a package. The victim will install this backdoored update. The monitor detects the hidden version attack due to the irregular release file publication. There are now two cases to be considered. The backdoor may be in the binary package, or it may be in the source package. The first case will be detected by monitors verifying the reproducible builds property. A monitor can rebuild all changed source packages on every update and check if the resulting binary matches. If not, the blame falls clearly on the archive, because the source does not correspond to the binary, which can be demonstrated by exploiting reproducible builds. The second case requires investigation of the packages modified by the update. The source code modifications can be investigated for the changed packages, because all source code is logged. The fact that source code can be analyzed, and no analysis on binaries is required, makes the investigation of the hidden version alert simpler. The blame for this case falls on the maintainer, who can be identified by their signature on the source package. If the upload was signed by a key not in the allowed set, the blame falls on the archive for failing to authorize correctly. If the package version numbers in the meta data are inconsistent, this constitutes a misbehavior by the submitting archive. It can easily be detected by a monitor. Using the release file the monitor can also easily ensure, by demanding inclusion proofs, that all required files have been logged. Note that although their system's monitors detect this attack, and can correctly attribute it, they do so asynchronously. They do not prevent the victim from installing the backdoored update. The Split View Attack The logs cannot be assumed to be above suspicion. Hof and Carle describe a log-based attack: The most significant attack by the log or with the collusion of the log is equivocation. In a split-view or equivocation attack, a malicious log presents different versions of the Merkle tree to the victim and to everybody else. Each tree version is kept consistent in itself. The tree presented to the victim will include a leaf that is malicious in some way, such as an update with a backdoor. It might also omit a leaf in order to hide an update. This is a powerful attack within the threat model that violates the security goals and must therefore be defended. A defense against this attack requires the client to learn if they are served from the same tree as the others. Their defense requires that there be multiple logs under independent administration, perhaps run by different Linux distributions. Each time a "committing" log generated a new tree root containing new package submissions, it would be required to submit a signed copy of the root to one or more "witness" logs under independent administration. The "committing" log will obtain commitments from the "witness" logs, and supply them to clients. Clients can then verify that the root they obtain from the "committing" log matches that obtained directly from the "witness" logs: When the client now verifies a log entry with the committing log, it also has to verify that a tree root covering this entry was submitted into the witnessing log.
Additionally, the client verifies the append-only property of the witnessing log. The witnessing log introduces additional monitoring requirements. Next to the usual monitoring of the append-only operation, we need to check that no equivocating tree roots are included. To this end, a monitor follows all new log entries of the witnessing log that are tree roots of the committing log. The monitor verifies that they are all valid extensions of the committing log's tree history. Reproducible Builds One weakness in Hof and Carle's actual implementation is in the connection between the signed package of source and the hashes of the result of compiling it. It is in general impossible to verify that the binaries are the result of compiling the source. In many cases, even if the source is re-compiled in the same environment the resulting binaries will not be bit-for-bit identical, and thus their hashes will differ. The differences have many causes, including timestamps, randomized file names, and so on. Of course, changes in the build environment can also introduce differences. To enable binaries to be securely connected to their source, a Reproducible Builds effort has been under way for more than 5 years. Debian Project Leader Chris Lamb's 45-minute talk Think you're not a target? A tale of 3 developers ... provides an overview of the problem and the work to solve it using three example compromises: Alice, a package developer who is blackmailed to distribute binaries that don't match the public source. Bob, a build farm sysadmin whose personal computer has been compromised, leading to a compromised build toolchain in the build farm that inserts backdoors into the binaries. Carol, a free software enthusiast who distributes binaries to friends. An evil maid attack has compromised her laptop. As Lamb describes, eliminating all sources of irreproducibility from a package is a painstaking process because there are so many possibilities. They include non-deterministic behaviors such as iterating over hashmaps, parallel builds, timestamps, build paths, file system directory name order, and so on. The work started in 2013 with 24% of Debian packages building reproducibly. Currently, over 90% of Debian packages are reproducible. That is good, but 100% coverage is really necessary to provide security. Bootstrappable Compilers One of the most famous of the ACM's annual Turing Award lectures was Ken Thompson's 1984 Reflections On Trusting Trust (also here). In 2006, Bruce Schneier summarized its message thus: Way back in 1974, Paul Karger and Roger Schell discovered a devastating attack against computer systems. Ken Thompson described it in his classic 1984 speech, "Reflections on Trusting Trust." Basically, an attacker changes a compiler binary to produce malicious versions of some programs, INCLUDING ITSELF. Once this is done, the attack perpetuates, essentially undetectably. Thompson demonstrated the attack in a devastating way: he subverted a compiler of an experimental victim, allowing Thompson to log in as root without using a password. The victim never noticed the attack, even when they disassembled the binaries -- the compiler rigged the disassembler, too. Schneier was discussing David A. Wheeler's Countering Trusting Trust through Diverse Double-Compiling. Wheeler's subsequent work led to his 2009 Ph.D. thesis. To oversimplify, his technique involves the suspect compiler compiling its source twice, and comparing the output to that from a "trusted" compiler compiling the same source twice.
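In outline, the comparison looks something like the sketch below, which assumes both compilers build reproducibly and ignores the carefully controlled build environments Wheeler's thesis requires; the paths and compiler invocation are illustrative:

```python
# Sketch of the shape of Wheeler's diverse double-compiling (DDC) check.
import hashlib
import subprocess

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def compile_with(compiler: str, source: str, output: str) -> None:
    # Illustrative gcc-style invocation; a real check controls the whole environment.
    subprocess.run([compiler, source, "-o", output], check=True)

def ddc_check(suspect_binary: str, suspect_source: str, trusted_compiler: str) -> bool:
    # Stage 1: the trusted compiler builds the suspect compiler's source.
    compile_with(trusted_compiler, suspect_source, "stage1")
    # Stage 2: the stage-1 result builds the same source again.
    compile_with("./stage1", suspect_source, "stage2")
    # An honest suspect binary should be bit-for-bit identical to stage 2.
    return sha256_of("stage2") == sha256_of(suspect_binary)
```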
He writes: DDC uses a second “trusted” compiler cT, which is trusted in the sense that we have a justified confidence that cT does not have triggers or payloads. There are two issues here. The first is an assumption that the suspect compiler's build is reproducible. The second is the issue of where the "justified confidence" comes from. This is the motivation for the Bootstrappable Builds project, whose goal is to create a process for building a complete toolchain starting from a "seed" binary that is simple enough to be certified "by inspection". One sub-project is Stage0: Stage0 starts with just a 280byte Hex monitor and builds up the infrastructure required to start some serious software development. With zero external dependencies, with the most painful work already done and real langauges such as assembly, forth and garbage collected lisp already implemented The current 0.2.0 release of Stage0: marks the first C compiler hand written in Assembly with structs, unions, inline assembly and the ability to self-host it's C version, which is also self-hosting There is clearly a long way still to go to a bootstrapped full toolchain. A More Secure Software Supply Chain A software supply chain based on APT enhanced with Hof and Carle's transparency layer, distributing packages reproducibly built with bootstrapped compilers, would be much more difficult to attack than current technology. Users of the software could have much higher confidence that the binaries they installed had been built from the corresponding source, and that no attacker had introduced functionality not evident in the source. These checks would take place during software installation or update. Users would still need to verify that the software had not been modified after installation, perhaps using a tripwire-like mechanism, but this mechanism would have a trustworthy source of the hashes it needs to do its job. Remaining Software Problems Despite all these enhancements, the event-stream attack would still have succeeded. The attackers targeted a widely-used, fairly old package that was still being maintained by the original author, a volunteer. They offered to take over what had become a burdensome task, and the offer was accepted. Now, despite the fact that the attacker was just an e-mail address, they were the official maintainer of the package and could authorize changes. Their changes, being authorized by the official package maintainer, would pass unimpeded through even the enhanced supply chain. First, it is important to observe that the goal of Hof and Carle's system is to detect targeted attacks, those delivered to a (typically small) subset of user systems. The event-stream attack was not targeted; it was delivered to all systems updating the package irrespective of whether they contained the wallet to be compromised. That their system is designed only to detect targeted attacks seems to me to be a significant weakness. It is very easy to design an attack, like the event-stream one, that is broadcast to all systems but is harmless on all but the targets. Second, Hof and Carle's system operates asynchronously, so is intended to detect rather than prevent victim compromise. Of course, once the attack was detected it could be unambiguously attributed. But: The attack would already have succeeded in purloining cryptocurrency from the target wallets.
This seems to me to be a second weakness; in many cases the malign package would only need to be resident on the victim for a short time to exfiltrate critical data, or install further malware providing persistence. Strictly speaking, the attribution would be to a private key. More realistically, it would be to a key and an e-mail address. In the case of an attack, linking these to a human malefactor would likely be difficult, leaving the perpetrators free to mount further attacks. Even if the maintainer had not, as in the event-stream attack, been replaced via social engineering, it is possible that their e-mail and private key could have been compromised. The event-stream attack can be thought of as the organization-level analog of a Sybil attack on a peer-to-peer system. Creating an e-mail identity is almost free. The defense against Sybil attacks is to make maintaining and using an identity in the system expensive. As with proof-of-work in Bitcoin, the idea is that the white hats will spend more (compute more useless hashes) than the black hats. Even this has limits. Eric Budish's analysis shows that, if the potential gain from an attack on a blockchain is to be outweighed by its cost, the value of transactions in a block must be less than the block reward. Would a similar defense against "Sybil" attacks on the software supply chain be possible? There are a number of issues: The potential gains from such attacks are large, both because they can compromise very large numbers of systems quickly (event-stream had 2M downloads), and because the banking credentials, cryptocurrency wallets, and other data these systems contain can quickly be converted into large amounts of cash. Thus the penalty for mounting an attack would have to be an even larger amount of cash. Package maintainers would need to be bonded or insured for large sums, which implies that distributions and package libraries would need organizational structures capable of enforcing these requirements. Bonding and insurance would be expensive for package maintainers, who are mostly unpaid volunteers. There would have to be a way of paying them for their efforts, at least enough to cover the costs of bonding and insurance. Thus users of the packages would need to pay for their use, which means the packages could be neither free nor open source. The FOSS (Free and Open Source Software) movement will need to find other ways to combat Sybil attacks, which will be hard if the reward for a successful attack greatly exceeds the cost of mounting it. How to adequately reward maintainers for their essential but under-appreciated efforts is a fundamental problem for FOSS. Hof and Carle's system shares one more difficulty with CT. Both systems are layered on top of an existing infrastructure, respectively APT and TLS with certificate authorities. In both cases there is a bootstrap problem, an assumption that as the system starts up there is not an attack already underway. In CT's case the communications between the CAs, Web sites, logs, auditors and monitors all use the very TLS infrastructure that is being secured (see here and here). This is also the case for Hof and Carle, plus they have to assume the lack of malware in the initial state of the packages. Hardware Supply Chain Problems All this effort to secure the software supply chain will be for naught if the hardware it runs on is compromised: Much of what we think of as "hardware" contains software to which what we think of as "software" has no access or visibility.
Examples include Intel's Management Engine, the baseband processor in mobile devices, complex I/O devices such as NICs and GPUs. Even if this "firmware" is visible to the system CPU, it is likely supplied as a "binary blob" whose source code is inaccessible. Attacks on the hardware supply chain have been in the news recently, with the firestorm of publicity sparked by Bloomberg's probably erroneous reports of a Chinese attack on SuperMicro motherboards that added "rice-grain" sized malign chips. The details will have to wait for a future post. Posted by David. at 8:00 AM Labels: security 24 comments: Bryan Newbold said... Nit: in the last bullet point, I think you mean "Bloomberg", not "Motherboard". December 18, 2018 at 1:58 PM David. said... Thanks for correcting my fused neurons, Bryan! December 18, 2018 at 4:10 PM David. said... I really should have pointed out that this whole post is about software that is installed on your device. These days, much of the software that runs on your device is not installed, it is delivered via ad networks and runs inside your browser. As blissex wrote in this comment, we are living: "in an age in which every browser gifts a free-to-use, unlimited-usage, fast VM to every visited web site, and these VMs can boot and run quite responsive 3D games or Linux distributions" Ad blockers, essential equipment in this age, merely reduce the incidence of malware delivered via ad networks. Brannon Dorsey's fascinating experiments in malvertising are described by Cory Doctorow thus: "Anyone can make an account, create an ad with god-knows-what Javascript in it, then pay to have the network serve that ad up to thousands of browsers. ... Within about three hours, his code (experimental, not malicious, apart from surreptitiously chewing up processing resources) was running on 117,852 web browsers, on 30,234 unique IP addresses. Adtech, it turns out, is a superb vector for injecting malware around the planet. Some other fun details: Dorsey found that when people loaded his ad, they left the tab open an average of 15 minutes. That gave him huge amounts of compute time -- 327 full days, in fact, for about $15 in ad purchase." December 22, 2018 at 3:47 PM David. said... I regret not citing John Leyden's Open-source software supply chain vulns have doubled in 12 months to illustrate the scope of the problem: "Miscreants have even started to inject (or mainline) vulnerabilities directly into open source projects, according to Sonatype, which cited 11 recent examples of this type of malfeasance in its study. El Reg has reported on several such incidents including a code hack on open-source utility eslint-scope back in July." and: "organisations are still downloading vulnerable versions of the Apache Struts framework at much the same rate as before the Equifax data breach, at around 80,000 downloads per month. Downloads of buggy versions of another popular web application framework called Spring were also little changed since a September 2017 vulnerability, Sonatype added. The 85,000 average in September 2017 has declined only 15 per cent to 72,000 over the last 12 months." December 24, 2018 at 8:29 AM David. said... Catalin Cimpanu's Users report losing Bitcoin in clever hack of Electrum wallets describes a software supply chain attack that started around 21st December and netted around $750K "worth" of BTC. December 27, 2018 at 11:53 AM David. said...
Popular WordPress plugin hacked by angry former employee is like the event-stream hack in that no amount of transparency would have prevented it. The disgruntled perpetrator apparently had valid credentials for the official source of the software: "The plugin in question is WPML (or WP MultiLingual), the most popular WordPress plugin for translating and serving WordPress sites in multiple languages. According to its website, WPML has over 600,000 paying customers and is one of the very few WordPress plugins that is so reputable that it doesn't need to advertise itself with a free version on the official WordPress.org plugins repository." January 21, 2019 at 6:20 AM David. said... The fourth annual report for the National Security Adviser from the Huawei Cyber Security Evaluation Centre Oversight Board in the UK is interesting. The Centre has access to the source code for Huawei products, and is working with Huawei to make the builds reproducible: "3.15 HCSEC have worked with Huawei R&D to try to correct the deficiencies in the underlying build and compilation process for these four products. This has taken significant effort from all sides and has resulted in a single product that can be built repeatedly from source to the General Availability (GA) version as distributed. This particular build has yet to be deployed by any UK operator, but we expect deployment by UK operators in the future, as part of their normal network release cycle. The remaining three products from the pilot are expected to be made commercially available in 2018H1, with each having reproducible binaries." January 31, 2019 at 8:47 AM David. said... Huawei says fixing "the deficiencies in the underlying build and compilation process" in its carrier products will take five years. February 6, 2019 at 7:17 PM David. said... In Cyber-Mercenary Groups Shouldn't be Trusted in Your Browser or Anywhere Else, the EFF's Cooper Quintin describes the latest example showing why Certificate Authorities can't be trusted: "DarkMatter, the notorious cyber-mercenary firm based in the United Arab Emirates, is seeking to become approved as a top-level certificate authority in Mozilla’s root certificate program. Giving such a trusted position to this company would be a very bad idea. DarkMatter has a business interest in subverting encryption, and would be able to potentially decrypt any HTTPS traffic they intercepted. One of the things HTTPS is good at is protecting your private communications from snooping governments—and when governments want to snoop, they regularly hire DarkMatter to do their dirty work. ... DarkMatter was already given an "intermediate" certificate by another company, called QuoVadis, now owned by DigiCert. That's bad enough, but the "intermediate" authority at least comes with ostensible oversight by DigiCert." Hat tip to Cory Doctorow. February 23, 2019 at 2:29 PM David. said... Gareth Corfield's Just Android things: 150m phones, gadgets installed 'adware-ridden' mobe simulator games reports on a very successful software supply chain attack: "Android adware found its way into as many as 150 million devices – after it was stashed inside a large number of those bizarre viral mundane job simulation games, we're told. ... 
Although researchers believed that the titles were legitimate, they said they thought the devs were “scammed” into using a “malicious SDK, unaware of its content, leading to the fact that this campaign was not targeting a specific country or developed by the same developer.” March 15, 2019 at 8:24 AM David. said... Kim Zetter's Hackers Hijacked ASUS Software Updates to Install Backdoors on Thousands of Computers is an excellent example of a software supply chain attack: "Researchers at cybersecurity firm Kaspersky Lab say that ASUS, one of the world’s largest computer makers, was used to unwittingly install a malicious backdoor on thousands of its customers’ computers last year after attackers compromised a server for the company’s live software update tool. The malicious file was signed with legitimate ASUS digital certificates to make it appear to be an authentic software update from the company, Kaspersky Lab says." March 25, 2019 at 10:21 AM David. said... Sean Gallagher's UK cyber security officials report Huawei’s security practices are a mess reports on the latest report from the HCSEC Oversight Board. They still can't do reproducible builds: "HCSEC reported that the software build process used by Huawei results in inconsistencies between software images. In other words, products ship with software with widely varying fingerprints, so it’s impossible to determine whether the code is the same based on checksums." Which isn't a surprise, Huawei already said it'd take another 5 years. But I'd be more concerned that: "One major problem cited by the report is that a large portion of Huawei’s network gear still relies on version 5.5 of Wind River’s VxWorks real-time operating system (RTOS), which has reached its “end of life” and will soon no longer be supported. Huawei has bought a premium long-term support license from VxWorks, but that support runs out in 2020." And Huawei is rolling its own RTOS based on Linux. What could possibly go wrong? March 28, 2019 at 1:29 PM David. said... The latest software supply chain attack victim is bootstrap-sass via RubyGems, with about 28M downloads. April 6, 2019 at 7:20 AM David. said... It turns out that ShadowHammer Targets Multiple Companies, ASUS Just One of Them: "ASUS was not the only company targeted by supply-chain attacks during the ShadowHammer hacking operation as discovered by Kaspersky, with at least six other organizations having been infiltrated by the attackers. As further found out by Kaspersky's security researchers, ASUS' supply chain was successfully compromised by trojanizing one of the company's notebook software updaters named ASUS Live Updater which eventually was downloaded and installed on the computers of tens of thousands of customers according to experts' estimations." April 23, 2019 at 5:45 PM David. said... Who Owns Huawei? by Christopher Balding and Donald C. Clarke concludes that: "Huawei calls itself “employee-owned,” but this claim is questionable, and the corporate structure described on its website is misleading." April 27, 2019 at 1:56 PM David. said... David A. Wheeler reports on another not-very-successful software supply chain attack: "A malicious backdoor has been found in the popular open source software library bootstrap-sass. This was done by someone who created an unauthorized updated version of the software on the RubyGems software hosting site. The good news is that it was quickly detected (within the day) and updated, and that limited the impact of this subversion. 
The backdoored version (3.2.0.3) was only downloaded 1,477 times. For comparison, as of April 2019 the previous version in that branch (3.2.0.2) was downloaded 1.2 million times, and the following version 3.2.0.4 (which duplicated 3.2.0.2) was downloaded 1,700 times (that’s more than the subverted version!). So it is likely that almost all subverted systems have already been fixed." Wheeler has three lessons from this: 1. Maintainers need 2FA. 2. Don't update your dependencies on the same day they're released. 3. Reproducible builds! May 2, 2019 at 9:43 AM David. said... Andy Greenberg's A mysterious hacker gang is on a supply-chain hacking spree ties various software supply chain attacks together and attributes them: "Over the past three years, supply-chain attacks that exploited the software distribution channels of at least six different companies have now all been tied to a single group of likely Chinese-speaking hackers. The group is known as Barium, or sometimes ShadowHammer, ShadowPad, or Wicked Panda, depending on which security firm you ask. More than perhaps any other known hacker team, Barium appears to use supply-chain attacks as its core tool. Its attacks all follow a similar pattern: seed out infections to a massive collection of victims, then sort through them to find espionage targets." May 4, 2019 at 9:55 AM David. said... Someone Is Spamming and Breaking a Core Component of PGP’s Ecosystem by Lorenzo Franceschi-Bicchierai reports on an attack on two of the core PGP developers, Robert J. Hansen and Daniel Kahn Gillmor: "Last week, contributors to the PGP protocol GnuPG noticed that someone was “poisoning” or “flooding” their certificates. In this case, poisoning refers to an attack where someone spams a certificate with a large number of signatures or certifications. This makes it impossible for the PGP software that people use to verify its authenticity, which can make the software unusable or break. In practice, according to one of the GnuPG developers targeted by this attack, the hackers could make it impossible for people using Linux to download updates, which are verified via PGP." The problem lies in the SKS keyserver: "the SKS software was written in an obscure language by a PhD student for his thesis. And because of that, according to Hansen, “there is literally no one in the keyserver community who feels qualified to do a serious overhaul on the codebase.” In other words, these attacks are here to stay." July 3, 2019 at 4:40 PM David. said... Dan Goodin's The year-long rash of supply chain attacks against open source is getting worse is a useful overview of the recent incidents pointing to the need for verifiable logs and reproducible builds. And, of course, for requiring developers to use multi-factor authentication. August 21, 2019 at 4:43 PM David. said... Catalin Cimpanu's Hacking 20 high-profile dev accounts could compromise half of the npm ecosystem is based on Small World with High Risks: A Study of Security Threats in the npm Ecosystem by Marcus Zimmerman et al: "Their goal was to get an idea of how hacking one or more npm maintainer accounts, or how vulnerabilities in one or more packages, reverberated across the npm ecosystem; along with the critical mass needed to cause security incidents inside tens of thousands of npm projects at a time. ... the normal npm JavaScript package has an abnormally large number of dependencies -- with a package loading 79 third-party packages from 39 different maintainers, on average.
This number is lower for popular packages, which only rely on code from 20 other maintainers, on average, but the research team found that some popular npm packages (600) relied on code written by more than 100 maintainers. ... "391 highly influential maintainers affect more than 10,000 packages, making them prime targets for attacks," the research team said. "If an attacker manages to compromise the account of any of the 391 most influential maintainers, the community will experience a serious security incident." Furthermore, in a worst-case scenario where multiple maintainers collude, or a hacker gains access to a large number of accounts, the Darmstadt team said that it only takes access to 20 popular npm maintainer accounts to deploy malicious code impacting more than half of the npm ecosystem." October 18, 2019 at 11:55 AM David. said... Five years after the Equation Group HDD hacks, firmware security still sucks by Catalin Cimpanu illustrates how far disk drive firmware security is ahead of the rest of the device firmware world: "In 2015, security researchers from Kaspersky discovered a novel type of malware that nobody else had seen before until then. The malware, known as NLS_933.dll, had the ability to rewrite HDD firmware for a dozen of HDD brands to plant persistent backdoors. Kaspersky said the malware was used in attacks against systems all over the world. Kaspersky researchers claimed the malware was developed by a hacker group known as the Equation Group, a codename that was later associated with the US National Security Agency (NSA). Knowing that the NSA was spying on their customers led many HDD and SSD vendors to improve the security of their firmware, Eclypsium said. However, five years since the Equation Group's HDD implants were found in the wild and introduced the hardware industry to the power of firmware hacking, Eclypsium says vendors have only partially addressed this problem. "After the disclosure of the Equation Group's drive implants, many HDD and SSD vendors made changes to ensure their components would only accept valid firmware. However, many of the other peripheral components have yet to follow suit," researchers said." February 20, 2020 at 5:23 PM David. said... Marc Ohm et al analyze supply chain attacks via open source packages in three repositories in Backstabber’s Knife Collection: A Review of Open Source Software Supply Chain Attacks: "This paper presents a dataset of 174 malicious software packages that were used in real-world attacks on open source software supply chains, and which were distributed via the popular package repositories npm, PyPI, and RubyGems. Those packages, dating from November 2015 to November 2019, were manually collected and analyzed. The paper also presents two general attack trees to provide a structured overview about techniques to inject malicious code into the dependency tree of downstream users, and to execute such code at different times and under different conditions." May 30, 2020 at 7:37 AM David. said... Bruce Schneier's Survey of Supply Chain Attacks starts: "The Atlantic Council has released a report that looks at the history of computer supply chain attacks." The Atlantic Council also has a summary of the report entitled Breaking trust: Shades of crisis across an insecure software supply chain: "Software supply chain security remains an under-appreciated domain of national security policymaking.
Working to improve the security of software supporting private sector enterprise as well as sensitive Defense and Intelligence organizations requires more coherent policy response together industry and open source communities. This report profiles 115 attacks and disclosures against the software supply chain from the past decade to highlight the need for action and presents recommendations to both raise the cost of these attacks and limit their harm." July 29, 2020 at 4:56 PM David. said... Via my friend Jim Gettys, we learn of a major milestone in the development of a truly reproducible build environment. Last June Jan Nieuwenhuizen posted Guix Further Reduces Bootstrap Seed to 25%. The TL;DR is: "GNU Mes is closely related to the Bootstrappable Builds project. Mes aims to create an entirely source-based bootstrapping path for the Guix System and other interested GNU/Linux distributions. The goal is to start from a minimal, easily inspectable binary (which should be readable as source) and bootstrap into something close to R6RS Scheme. Currently, Mes consists of a mutual self-hosting scheme interpreter and C compiler. It also implements a C library. Mes, the scheme interpreter, is written in about 5,000 lines of code of simple C. MesCC, the C compiler, is written in scheme. Together, Mes and MesCC can compile a lightly patched TinyCC that is self-hosting. Using this TinyCC and the Mes C library, it is possible to bootstrap the entire Guix System for i686-linux and x86_64-linux." The binary they plan to start from is: "Our next target will be a third reduction by ~50%; the Full-Source bootstrap will replace the MesCC-Tools and GNU Mes binaries by Stage0 and M2-Planet. The Stage0 project by Jeremiah Orians starts everything from ~512 bytes; virtually nothing. Have a look at this incredible project if you haven’t already done so." In mid November Nieuwenhuizen tweeted: "We just compiled the first working program using a Reduced Binary Seed bootstrap'ped*) TinyCC for ARM" And on December 21 he tweeted: "The Reduced Binary Seed bootstrap is coming to ARM: Tiny C builds on @GuixHPC wip-arm-bootstrap branch" Starting from a working TinyCC, you can build the current compiler chain. December 30, 2020 at 3:26 PM
blog-dshr-org-2612 ---- DSHR's Blog: Economics of Evil
Thursday, June 27, 2013 Economics of Evil Back in March Google announced that this weekend is the end of Google Reader, the service many bloggers and journalists used to use to read online content via RSS. This wasn't the first service Google killed, but because the people who used the service write for the Web, the announcement sparked a lively discussion. Because many people believe that commercial content platforms and storage services will preserve digital content for the long term, the discussion below the fold should be of interest here. Paul Krugman commented: Google’s decision to shut down Google Reader has ... provoked a lot of discussion about the future of web-based services. The most interesting discussion, I think, comes from Ryan Avent, who argues that Google has been providing crucial public infrastructure — but doesn’t seem to have an interest in maintaining that infrastructure. The Ryan Avent post at The Economist's Free Exchange blog that Krugman linked to pointed out that dependence on Google's services has a big impact on the real world: Once we all become comfortable with [Google's services] we quickly begin optimising the physical and digital resources around us. Encyclopaedias? Antiques. Book shelves and file cabinets? Who needs them? And once we all become comfortable with that, we begin rearranging our mental architecture.
We stop memorising key data points and start learning how to ask the right questions. We begin to think differently. About lots of things. We stop keeping a mental model of the physical geography of the world around us, because why bother? We can call up an incredibly detailed and accurate map of the world, complete with satellite and street-level images, whenever we want. ... The bottom line is that the more we all participate in this world, the more we come to depend on it. The more it becomes the world. Avent described Google's motivation for providing the infrastructure: Google has asked us to build our lives around it: to use its e-mail system ..., its search engines, its maps, its calendars, its cloud-based apps and storage services, its video- and photo-hosting services, ... It hasn't done this because we're its customers, it's worth remembering. We aren't; we're the products Google sells to its customers, the advertisers. Google wants us to use its services in ways that provide it with interesting and valuable information, and eyeballs. So, Google may have good reasons for killing some services: It's a big company, but even big companies have finite resources, and devoting those precious resources to something that isn't making money and isn't judged to have much in the way of development potential is not an attractive option. ... Someone else will come along to provide the service and, if they give it their full attention, to improve it. And indeed alternate services such as Feedly are taking over. The only push-back for Google is weak: But that makes it increasingly difficult for Google to have success with new services. Why commit to using and coming to rely on something new if it might be yanked away at some future date? Google only partially mitigates this through their Data Liberation Front, whose role is to ensure that users of their services can extract their data in a re-usable form. But there remains a problem for society: That's a lot of power to put in the hands of a company that now seems interested, mostly, in identifying core mass-market services it can use to maximise its return on investment. What this implies is that Google is becoming an essential utility: the history of modern urbanisation is littered with examples of privately provided goods and services that became the domain of the government once everyone realised that this new life and new us couldn't work without them. Paul Krugman described the economics underlying this: even in a plain-vanilla market, a monopolist with high fixed costs and limited ability to price-discriminate may not be able to make a profit supplying a good even when the potential consumer gains from that good exceed the costs of production. Basically, if the monopolist tries to charge a price corresponding to the value intense users place on the good, it won’t attract enough low-intensity users to cover its fixed costs; if it charges a low price to bring in the low-intensity user, it fails to capture enough of the surplus of high-intensity users, and again can’t cover its fixed costs. What Avent adds is network externalities, in which the value of the good to each individual user depends on how many others are using it. ... they mean that if the monopolist still doesn’t find it worthwhile to provide the good, the consumer losses are substantially larger than in a conventional monopoly-pricing analysis. So what’s the answer? 
As Avent says, historical examples with these characteristics — like urban transport networks — have been resolved through public provision. It seems hard at this point to envision search and related functions as public utilities, but that’s arguably where the logic will eventually lead us. It is indeed hard to envision these services as public utilities; they are world-wide rather than national or local, and the communication infrastructure on which they rely, despite being utility-like, is provided by for-profit, lightly regulated companies. What does this mean for digital preservation? "Free" services, such as Google Drive, in which the user is the product rather than the customer should never be depended upon. At any moment they may suffer the fate of Google Reader and many other Google services; because the user is not the customer there is essentially no recourse. Over time any business model for extracting value from content will become less and less effective. There are several reasons. Competitors will arise and decrease the available margins. As content accumulates, the average age of an item will increase; it is easier to extract value from younger content. Thus the service is continually in a race between this decreasing effectiveness and the cost reduction from improving technology such as Kryder's Law for storage. Once the cost reduction fails to outpace the value reduction the service is probably doomed. So the slowing pace of storage cost reduction is likely to cause more casualties, especially among those services that accumulate content. Because archived content is rarely accessed it generates little valuable information and lacks network effects, making it a poor business for the provider. This casts another shadow over the future of free or all-you-can-eat storage services. Although the user of services such as Google Cloud Storage and S3 is a paying customer, digital preservation will be a very small part of the service's customer base. Thus the incentives for the service provider to continue or terminate those aspects of the service that make it suitable for digital preservation will not be greatly different from those of a free service. Because digital preservation is a relatively small market, services tailored to its needs will lack the economies of scale of the kind of infrastructure services Avent and Krugman are discussing. Thus they will be more expensive. But the temptation will always be to implement them as a thin layer of customization over generic infrastructure services, as we see with Duracloud and Preservica, to capture as much of the economies of scale as possible. This leaves them vulnerable to the whims of the infrastructure provider. See also my comment on on-demand vs. base-load economics. Posted by David. at 8:00 AM Labels: cloud economics, digital preservation, storage costs 39 comments: David. said... Marco Arment convincingly tells the depressing story of why Google Reader had to die. It was distracting from the Google + vs. Facebook battle. July 6, 2013 at 6:40 AM David. said... Latitude is the latest casualty. again because it was not part of Google+. July 10, 2013 at 3:24 PM David. said... Scott Gilbertson at The Register writes about recovering from the death of Reader, riffing on the "If you're not paying for something, you're not the customer; you're the product being sold." quote. But he points out that "Just because you are paying companies like Google, Apple or Microsoft you might feel they are, some how, beholden to you." Dream on. 
November 4, 2013 at 7:05 AM David. said... Yet another Google service bites the dust, this time Helpout. February 13, 2015 at 5:03 PM David. said... Hugh Pickens at /. points me to Le Monde's Google Memorial, le petit musée des projets Google abandonnés. Who knew there were so many? March 6, 2015 at 7:35 PM David. said... And another one bites the dust. March 12, 2015 at 4:54 PM David. said... Google Plus isn't quite dead, but it is on life support. August 3, 2015 at 3:25 PM David. said... Google Talk is headed for the Google Memorial. March 24, 2017 at 2:44 PM David. said... "The goo.gl link is very common on the web and was first launched by Google in 2009. However, the company announced today that it’s winding down the URL Shortener beginning next month, with a complete deprecation by next year." writes Abner Li at 9to5Google. March 31, 2018 at 7:18 AM David. said... Google Hangouts is headed for the Google Memorial. November 30, 2018 at 5:44 PM David. said... The Allo messaging service is the next to enter the Google Memorial. December 5, 2018 at 7:38 PM David. said... G+ is on its way to the Google Memorial, and Lauren Weinstein is not happy with the reasons: "Google knows that as time goes on their traditional advertising revenue model will become decreasingly effective. This is obviously one reason why they’ve been pivoting toward paid service models aimed at businesses and other organizations. That doesn’t just include G Suite, but great products like their AI offerings, Google Cloud, and more. But no matter how technically advanced those products, there’s a fundamental question that any potential paying user of them must ask themselves. Can I depend on these services still being available a year from now? Or in five years? How do I know that Google won’t treat business users the same ways as they’ve treated their consumer users?" Nor is he happy with the process: "We already know about Google’s incredible user trust failure in announcing dates for this process. First it was August. Then suddenly it was April. The G+ APIs (which vast numbers of web sites — including mine — made the mistake of deeply embedding into their sites, we’re told will start “intermittently failing” (whatever that actually means) later this month. It gets much worse though. While Google has tools for users to download their own G+ postings for preservation, they have as far as I know provided nothing to help loyal G+ users maintain their social contacts — the array of other G+ followers and users with whom many of us have built up friendships on G+ over the years." January 21, 2019 at 6:34 AM David. said... Close behind G+ on the road to the Google Memorial is Hangouts and Ron Amadeo is skeptical: "Google previously announced that its most popular messaging app, Google Hangouts, would be shutting down. In a post today on the GSuite Updates blog, Google detailed what the Hangouts shutdown will look like, and the company shared some of its plan to transition Hangouts users to "Hangouts Chat," a separate enterprise Slack clone." Note that this is another instance of Google moving up a previously announced shutdown date. And, like Weinstein, Amadeo sees impacts on the Google brand: "Google's argument seems to be that the transition plan makes everything OK. But clumsy shutdowns like this are damaging to the Google brand, and they undermine confidence in all of Google's other products and services." January 23, 2019 at 9:46 AM David. said... 
Ron Amadeo takes Google to the woodshed in Google’s constant product shutdowns are damaging its brand: "It's only April, and 2019 has already been an absolutely brutal year for Google's product portfolio. The Chromecast Audio was discontinued January 11. YouTube annotations were removed and deleted January 15. Google Fiber packed up and left a Fiber city on February 8. Android Things dropped IoT support on February 13. Google's laptop and tablet division was reportedly slashed on March 12. Google Allo shut down on March 13. The "Spotlight Stories" VR studio closed its doors on March 14. The goo.gl URL shortener was cut off from new users on March 30. Gmail's IFTTT support stopped working March 31. And today, April 2, we're having a Google Funeral double-header: both Google+ (for consumers) and Google Inbox are being laid to rest. Later this year, Google Hangouts "Classic" will start to wind down, and somehow also scheduled for 2019 is Google Music's "migration" to YouTube Music, with the Google service being put on death row sometime afterward. We are 91 days into the year, and so far, Google is racking up an unprecedented body count. If we just take the official shutdown dates that have already occurred in 2019, a Google-branded product, feature, or service has died, on average, about every nine days." April 2, 2019 at 11:35 AM David. said... Of course, other companies shutter services too, just somewhat less irresponsibly. Cory Doctorow's Microsoft announces it will shut down ebook program and confiscate its customers' libraries reports: "Microsoft has a DRM-locked ebook store that isn't making enough money, so they're shutting it down and taking away every book that every one of its customers acquired effective July 1. Customers will receive refunds." April 2, 2019 at 12:40 PM David. said... And the next entrant for le petit musée des projets Google abandonnés is YouTube Gaming. Ron Amadeo reports: "YouTube Gaming is more or less shutting down this week. Google launched the standalone YouTube gaming vertical almost four years ago as a response to Amazon's purchase of Twitch, and on May 30, Google will shut down the standalone YouTube Gaming app and the standalone gaming.youtube.com website." May 29, 2019 at 10:54 AM David. said... Take a number! Ron Amadeo reports that This week’s dead Google product is Google Trips, may it rest in peace: "Google's wild ride of service shutdowns never stops. Next up on the chopping block is Google Trips, a trip organization app that is popular with frequent travelers. Recently Google started notifying users of the pending shutdown directly in the Trips app; a splash screen now pops up before the app starts, saying "We're saying goodbye to Google Trips Aug 5," along with a link to a now all-to-familiar Google shutdown support document." June 6, 2019 at 6:33 AM David. said... Another product joins the queue of Google products waiting to get into le petit musée des projets Google abandonnés. Ron Amadeo reports: "Another day, another dead or dying Google product. This time, Google has decided to shut down "Hangouts on Air," a fairly popular service for broadcasting a group video call live over the Internet. Notices saying the service is "going away later this year" have started to pop up for users when they start a Hangout on Air. Hangouts on Air, by the way, is a totally different and unrelated service from "Google Hangouts," which is also shutting down sometime in the future." June 21, 2019 at 3:22 PM David. said... Lina M. 
Khan's The Separation of Platforms and Commerce discusses (Page 1071) one of last year's acquisitions by le petit musée: "The Justice Department’s remedies in the Google–ITA merger illustrate one instance of imposing an information firewall in a digital market. ITA developed and licensed a software product known as “QPX,” a “mini-search engine” that airlines and online travel agents used to provide users with customized flight search functionality. Because the merger would put Google in the position of supplying QPX to its rival travel-search websites, the Justice Department required as a condition of the merger that Google establish internal firewalls to avoid misappropriation of rivals’ information. Although one commentator highlighted the risks and inherent difficulties associated with designing a comprehensive behavioral remedy, the court approved the order. Whether the information firewall was successful in preventing Google from accessing rivals’ business information is not publicly known. A year after the remedy expired, Google shut down its QPX API." June 22, 2019 at 3:51 PM David. said... Slashdot reader freelunch reports on le petit musée's latest acquisition. August 11, 2019 at 3:50 PM David. said... Greg Kumparak reports on one of next year's acquisitions by le petit musée in Google will shut down Google Hire in 2020. August 28, 2019 at 10:04 AM David. said... Jason Perlow's Google is a bald-faced IoT liar and its Nest pants are on fire is all about the effects of Google's sunset of the "Works With Nest" program on anyone unlucky enough to have non-Google devices in their home IoT ecosystem. September 6, 2019 at 12:17 PM David. said... Le petit musée just gained another exhibit, the corpse of Google Daydream, according to Emil Protalinski's obituary in Google discontinues Daydream VR. October 15, 2019 at 11:08 AM David. said... Le petit musée's reputation is getting noticed. In Stadia launch dev: Game makers are worried “Google is just going to cancel it”, Kyle Orland reports that: "Google has a long and well-documented history of launching new services only to shut them down a few months or years later. And with the launch of Stadia imminent, one launch game developer has acknowledged the prevalence of concerns about that history among her fellow developers while also downplaying their seriousness in light of Stadia's potential." November 13, 2019 at 1:13 PM David. said... Next year Le petit musée will accession another major exhibit. According to Stephen Hall, Google Cloud Print is dead as of December 31, 2020. November 21, 2019 at 6:15 PM David. said... It only took three days for Le petit musée's first accession of the New Year. Shelby Brown's Google News reportedly ends digital magazines, refunds active subscriptions. It was doomed because Google couldn't decide what to call it: "Google launched Play Magazines in 2012, and later renamed it Play Newsstand to focus on newspapers. The services later merged to Google News." January 3, 2020 at 3:13 PM David. said... And the next exhibit in Le petit musée is [drumroll, please] Google Chrome Apps! January 15, 2020 at 6:44 PM David. said... At this rate le petit musée's acquisition budget for the year will be exhausted long before December! Only 12 days after the last one, comes the acquisition of App Maker. January 27, 2020 at 5:27 PM David. said... They're falling like ninepins! Ben Schoon reports that Google is killing its ‘One Today’ donation app w/ only one week’s notice. January 29, 2020 at 12:40 PM David. said...
Thomas Claburn reports that Another body for the Google graveyard: Chrome Web Store payments. Bad news if you wanted to bank some income from these apps. September 23, 2020 at 1:56 PM David. said... Ron Amadeo reports on the latest acquisition by the Petit Musee in RIP Google Play Music, 2011 – 2020. October 28, 2020 at 1:49 PM David. said... And another one bites the dust. Matthew Hughes provides the details of the latest from the Petit Musee in Stony-faced Google drags Android Things behind the cowshed. Two shots ring out. December 17, 2020 at 8:04 AM David. said... The hits keep coming! Cohen Coberly's Google's 'Cloud Print' service is shutting down soon celebrates the Petit Musée's latest acquisition. 27 exhibits in 7 years. December 30, 2020 at 6:03 AM David. said... The Petit Musée is going to need more space. In Google’s dream of flying Internet balloons is dead—Loon is shutting down Ron Amadeo writes: "Google has decided that a network of flying Internet balloons is indeed not a feasible idea. Loon announced it is shutting down, citing the lack of a "long-term, sustainable business."" January 22, 2021 at 9:31 AM David. said... And another one bites the dust. Stephen Totilo reports that Google Stadia Shuts Down Internal Studios, Changing Business Focus. February 1, 2021 at 12:46 PM David. said... Another acquisition for the Petit Musée! Ron Amadeo reports that Google is killing the Google Shopping app: "The Google Shopping app launched only 19 months ago, when it took over for another Google shopping shutdown, Google Express. The Google Shopping service has been a rough proposition for users—starting in 2012, it has been nothing but an ad vector that exclusively showed "paid listings" and no organic results whatsoever. This made some sense as a service that showed advertisements in little embedded boxes in Google.com search results, but it was unclear why a user would download an app that exclusively shows ads." April 12, 2021 at 3:05 PM David. said... Ron Amadeo reports on Le Petit Musée's latest acquisition in Google kills its augmented reality “Measure” app. June 9, 2021 at 12:00 PM David. said... I missed Alastair Westgarth's Saying goodbye to Loon back in January: "While we’ve found a number of willing partners along the way, we haven’t found a way to get the costs low enough to build a long-term, sustainable business. Developing radical new technology is inherently risky, but that doesn’t make breaking this news any easier. Today, I’m sad to share that Loon will be winding down." June 9, 2021 at 12:12 PM David. said... Katie Baker's The Day the Good Internet Died makes some excellent points: "And when Google Reader disappeared in 2013, it wasn’t just a tale of dwindling user numbers or of what one engineer later described as a rotted codebase. It was a sign of the crumbling of the very foundation upon which it had been built: the era of the Good Internet. ... The offering certainly was, over the course of its 2005 to 2013 existence, an ideal showcase for some of the web’s prevailing strengths at the time—one of which was the emergence of the blogosphere, that backbone of what the Good Internet could be. ... Google had purchased Blogger back in 2003, and in an interview with Playboy in 2004, founder Sergey Brin outlined a vision for his company that felt extremely bloggy: “We want to get you out of Google and to the right place as fast as possible,” he said.
But nothing gold can stay, and as smartphones and tablets and apps and social networks began to supplement and then supplant the simple-text-on-a-desktop experience near the start of the 2010s, Google’s corporate frame of mind shifted ever-inward." July 25, 2021 at 7:57 AM blog-dshr-org-2967 ---- DSHR's Blog: Economies of Scale in Peer-to-Peer Networks
Tuesday, October 7, 2014 Economies of Scale in Peer-to-Peer Networks In a recent IEEE Spectrum article entitled Escape From the Data Center: The Promise of Peer-to-Peer Cloud Computing, Ozalp Babaoglu and Moreno Marzolla (BM) wax enthusiastic about the potential for Peer-to-Peer (P2P) technology to eliminate the need for massive data centers. Even more exuberance can be found in Natasha Lomas' Techcrunch piece The Server Needs To Die To Save The Internet (LM) about the MaidSafe P2P storage network. I've been working on P2P technology for more than 16 years, and although I believe it can be very useful in some specific cases, I'm far less enthusiastic about its potential to take over the Internet. Below the fold I look at some of the fundamental problems standing in the way of a P2P revolution, and in particular at the issue of economies of scale. After all, I've just written a post about the huge economies that Facebook's cold storage technology achieves by operating at data center scale. Economies of Scale Back in April, discussing a vulnerability of the Bitcoin network, I commented: Gradually, the economies of scale you need to make money mining Bitcoin are concentrating mining power in fewer and fewer hands. I believe this centralizing tendency is a fundamental problem for all incentive-compatible P2P networks. ... After all, the decentralized, distributed nature of Bitcoin was supposed to be its most attractive feature. In June, discussing Permacoin, I returned to the issue of economies of scale: increasing returns to scale (economies of scale) pose a fundamental problem for peer-to-peer networks that do gain significant participation. One necessary design goal for networks such as Bitcoin is that the protocol be incentive-compatible, or as Ittay Eyal and Emin Gun Sirer (ES) express it: the best strategy of a rational minority pool is to be honest, and a minority of colluding miners cannot earn disproportionate benefits by deviating from the protocol They show that the Bitcoin protocol was, and still is, not incentive-compatible. Even if the protocol were incentive-compatible, the implementation of each miner would, like almost all technologies, be subject to increasing returns to scale. Since then I've become convinced that this problem is indeed fundamental. The simplistic version of the problem is this: The income to a participant in a P2P network of this kind should be linear in their contribution of resources to the network. The costs a participant incurs by contributing resources to the network will be less than linear in their resource contribution, because of the economies of scale. Thus the proportional profit margin a participant obtains will increase with increasing resource contribution. Thus the effects described in Brian Arthur's Increasing Returns and Path Dependence in the Economy will apply, and the network will be dominated by a few, perhaps just one, large participant. The advantages of P2P networks arise from a diverse network of small, roughly equal resource contributors. Thus it seems that P2P networks which have the characteristics needed to succeed (by being widely adopted) also inevitably carry the seeds of their own failure (by becoming effectively centralized). Bitcoin is an example of this. Some questions arise: Does incentive-compatibility imply income linear in contribution? If not, are there incentive-compatible ways to deter large contributions? The simplistic version is, in effect, a static view of the network. Are there dynamic effects also in play? 
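Before taking these questions in turn, here is a minimal numerical sketch of the simplistic static argument above. It is not from the post: the reward rate, cost coefficient and scale exponent below are hypothetical placeholders, chosen only so that income is linear in contribution while cost grows sublinearly.

# A minimal sketch of the static economies-of-scale argument above.
# All numbers are hypothetical; the only assumptions that matter are that
# income is linear in contribution and cost is sublinear in contribution.

REWARD_PER_UNIT = 1.0   # income per unit of contributed resource (hypothetical)
COST_COEFF = 0.9        # cost of the first unit of resource (hypothetical)
SCALE_EXPONENT = 0.85   # exponent < 1 encodes economies of scale (hypothetical)

def profit_margin(contribution):
    income = REWARD_PER_UNIT * contribution
    cost = COST_COEFF * contribution ** SCALE_EXPONENT
    return (income - cost) / income

for c in (1, 10, 100, 1_000, 10_000):
    print(f"contribution {c:>6}: profit margin {profit_margin(c):6.1%}")

With these placeholder numbers the margin rises from 10% for the smallest participant to about 77% for one ten thousand times larger, which is the mechanism by which Brian Arthur's increasing returns lead to domination by a few large participants.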
Does incentive-compatibility imply income linear in contribution? Clearly, the reverse is true. If income is linear in, and solely dependent upon, contribution there is no way for a colluding minority of participants to gain more than their just reward. If, however: Income grows faster than linearly with contribution, a group of participants can pool their contributions, pretend to be a single participant, and gain more than their just reward. Income grows more slowly than linearly with contribution, a group of participants that colluded to appear as a single participant would gain less than their just reward. So it appears that income linear in contribution is the limiting case, anything faster is not incentive-compatible. Are there incentive-compatible ways to deter large contributions? In principle, the answer is yes. Arranging that income grows more slowly than contribution, and depends on nothing else, will do the trick. The problem lies in doing so. Source: bitcoincharts.com The actual income received by a participant is the value of the reward the network provides in return for the contribution of resources, for example the Bitcoin, less the costs incurred in contributing the resources, the capital and running costs of the mining hardware, in the Bitcoin case. As the value of Bitcoins collapsed (as I write, BTC is about $320, down from about $1200 11 months ago and half its value in August) many smaller miners discovered that mining wasn't worth the candle. The network has to arrange not just that the reward grows more slowly than the contribution, but that it grows more slowly than the cost of the contribution to any participant. If there is even one participant whose rewards outpace their costs, Brian Arthur's analysis shows they will end up dominating the network. Herein lies the rub. The network does not know what an individual participant's costs, or even the average participant's costs, are and how they grow as the participant scales up their contribution. So the network would have to err on the safe side, and make rewards grow very slowly with contribution, at least above a certain minimum size. Doing so would mean few if any participants above the minimum contribution, making growth dependent entirely on recruiting new participants. This would be hard because their gains from participation would be limited to the minimum reward. It is clear that mass participation in the Bitcoin network was fuelled by the (unsustainable) prospect of large gains for a small investment. Source: blockchain.info A network that assured incentive-compatibility in this way would not succeed, because the incentives would be so limited. A network that allowed sufficient incentives to motivate mass participation, as Bitcoin did, would share Bitcoin's vulnerability to domination by, as at present, two participants (pools, in Bitcoin's case). Are there dynamic effects also in play? As well as increasing returns to scale, technology markets exhibit decreasing returns through time. Bitcoin is an extreme example of this. Investment in Bitcoin mining hardware has a very short productive life: the overall network hash rate has been doubling every 3-4 weeks, and therefore, mining equipment has been losing half its production capability within the same time frame. After 21-28 weeks (7 halvings), mining rigs lose 99.3% of their value. This effect is so strong that it poses temptations for the hardware manufacturers that some have found impossible to resist.
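As a rough check on that depreciation claim, the arithmetic is just repeated halving; this sketch simply assumes, as the post does, that the network hash rate doubles every 3-4 weeks, so a fixed rig's share of the block rewards halves on the same schedule.

# Repeated halving of a mining rig's earning power, assuming the hash rate
# doubles every 3-4 weeks (the figure quoted above).
for halvings in range(8):
    remaining = 0.5 ** halvings
    print(f"after {halvings} halvings (~{3 * halvings}-{4 * halvings} weeks): "
          f"{remaining:.2%} of initial earning power remains")

After seven halvings less than 1% of the rig's initial earning power remains, consistent with the roughly 99% loss of value quoted above.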
The FBI recently caught Butterfly Labs using hardware that customers had bought and paid for to mine on their own behalf for a while before shipping it to the customers. They thus captured the most valuable week or so of the hardware's short useful life for themselves. Source: blockchain.info Even though, at technology improvement rates much lower than the Bitcoin network hash rate increase (such as Moore's Law or Kryder's Law), the useful life of hardware is much longer than 6 months, this effect can still be significant. When new, more efficient technology is introduced, thus reducing the cost per unit contribution to a P2P network, it does not become instantly available to all participants. As manufacturing ramps up, the limited supply preferentially goes to the manufacturers' best customers, who would be the largest contributors to the P2P network. By the time supply has increased so that smaller contributors can enjoy the lower cost per unit contribution, the most valuable part of the technology's useful life is over. Early availability of new technology acts to reduce the costs of the larger participants, amplifying their economies of scale. This effect must be very significant in Bitcoin mining, as Butterfly Labs noticed. At pre-2010 Kryder rates it would be quite noticeable since storage media service lives were less than 60 months. At the much lower Kryder rates projected by the industry, storage media lifetimes will be extended and the effect correspondingly less. Trust BM admit that there are significant unresolved trust issues in P2P technology: The people using such a cloud must trust that none of the many strangers operating it will do something malicious. And the providers of equipment must trust that the users won’t hog computer time. These are formidable problems, which so far do not have general solutions. If you just want to store data in a P2P cloud, though, things get easier: The system merely has to break up the data, encrypt it, and store it in many places. Unfortunately, even for storage this is inadequate. The system cannot trust the peers claiming to store the shards of the encrypted data but must verify that they actually are storing them. This is a resource-intensive process. Permacoin's proposal, to re-purpose resources already being expended elsewhere, is elegant but unlikely to be successful. Worse, the verification process consumes not just resources, but time. At each peer there is necessarily a window of time between successive verifications. During that time the system believes the peer has a good copy of the shard, but it might no longer have one. Edge of the internet P2P enthusiasts describe the hardware from which their network is constructed in similar terms. Here is BM: the P2P cloud is made up of a diverse collection of different people’s computers or game consoles or whatever and here is LM: Users of MaidSafe’s network contribute unused hard drive space, becoming the network’s nodes. It’s that pooling — or, hey, crowdsourcing — of many users’ spare computing resource that yields a connected storage layer that doesn’t need to centralize around dedicated datacenters. When the idea of P2P networks started in the 90s: Their model of the edge of the Internet was that there were a lot of desktop computers, continuously connected and powered-up, with low latency and no bandwidth charges, and with 3.5" hard disks that were mostly empty. Since then, the proportion of the edge with these characteristics has become vanishingly small.
The edge is now intermittently powered up and connected, with bandwidth charges, and only small amounts of local storage. Monetary rewards This means that, if the network is to gain mass participation, the majority of participants cannot contribute significant resources to it; they don't have suitable resources to contribute. They will have to contribute cash. This in turn means that there must be exchanges, converting between the rewards for contributing resources and cash, allowing the mass of resource-poor participants to buy from the few resource-rich participants. Both Permacoin and MaidSafe envisage such exchanges, but what they don't seem to envisage is the effect on customers of the kind of volatility seen in the Bitcoin graph above. Would you buy storage from a service with this price history, or from Amazon? What exactly is the value to the mass customer of paying a service such as MaidSafe, by buying SafeCoin on an exchange, instead of paying Amazon directly, that would overcome the disadvantage of the price volatility? As we see with Bitcoin, a network whose rewards can readily be converted into cash is subject to intense attack, and attracts participants ranging from sleazy to criminal. Despite its admirably elegant architecture, Bitcoin has suffered from repeated vulnerabilities. Although P2P technology has many advantages in resisting attack, especially the elimination of single points of failure and centralized command and control, it introduces a different set of attack vectors. Measuring contributions Discussion of P2P storage networks tends to assume that measuring the contribution a participant supplies in return for a reward is easy. A Gigabyte is a Gigabyte after all. But compare two Petabytes of completely reliable and continuously available storage, one connected to the outside world by a fiber connection to a router near the Internet's core, and the other connected via 3G. Clearly, the first has higher bandwidth, higher availability and lower cost per byte transferred, so its contribution to serving the network's customers is vastly higher. It needs a correspondingly greater reward. In fact, networks would need to reward many characteristics of a peer's storage contribution as well as its size: Reliability Availability Bandwidth Latency Measuring each of these parameters, and establishing "exchange rates" between them, would be complex, would lead to a very mixed marketing message, and would be the subject of disputes. For example, the availability, bandwidth and latency of a network resource depend on the location in the network from which the resource is viewed, so there would be no consensus among the peers about these parameters. Conclusion While it is clear that P2P storage networks can work, and can even be useful tools for small communities of committed users, the non-technical barriers to widespread adoption are formidable. They have been effective in preventing widespread adoption since the late 90s, and the evolution of the Internet has since raised additional barriers. Posted by David. at 8:00 AM Labels: networking, P2P, storage costs 5 comments: David. said... Steve Randy Waldman has an interesting post Econometrics, open science, and cryptocurrency arguing that the infrastructure for research should be a P2P network with a cryptocurrency-like consensus system.
I have a lot of sympathy for his vision of a shared, preserved research infrastructure: "Ultimately, we should want to generate a reusable, distributed, permanent, and ever-expanding web of science, including conjectures, verifications, modifications, and refutations, and reanalyses as new data arrives. Social science should become a reified public commons. It should be possible to build new analyses from any stage of old work, by recruiting raw data into new projects, by running alternative models on already cleaned-up or normalized data tables, by using an old model's estimates to generate inputs to simulations or new analyses." But, alas, for the reasons set out above, it will be very difficult to implement this using P2P cryptocurrency techniques. There will be significant real costs associated with running nodes in such a network. To motivate participation, there has to be some way to defray these costs, so there has to be an exchange between the cryptocurrency and currency real enough to pay electricity bills and purchase hardware. "From each according to his ability, to each according to his need" isn't going to cut it on a scale big enough to matter in research. October 19, 2014 at 12:44 PM David. said... My comment on Waldman's post is here. October 19, 2014 at 12:46 PM David. said... On the general topic of digital currencies, the FT has an excellent corrective to misplaced enthusiasm, Izabella Kaminska's From the annals of disruptive digital currencies past. November 5, 2014 at 6:22 AM David. said... One major problem with the centralization that is driven by economies of scale is that it reduces the resilience of the network to failures. A "data center" that was a major contributor to the overall Bitcoin hash rate was just destroyed by fire. November 10, 2014 at 10:09 AM David. said... I apologize for the mistake I made writing this post. I linked to, rather than captured, the pool size graph. blockchain.info has moved the graph, it is now here. As I write four pools, F2Pool, AntPool, GHash.io and BTCChinaPool control 54% of the mining power. This is better than one or two pools. But, as Ittay Eyal points out in an important post: "The dismantling of overly large pools is one of the most important and difficult tasks facing the Bitcoin community." Pools are needed to generate consistent income but: "[Miners] can get steady income from pools well below 10%, and they have only little incentive to use very large pools; it's mostly convenience and a feeling of trust in large entities." Right now, at least 46% of the mining power is in pools larger than 10%. January 9, 2015 at 7:01 AM
blog-dshr-org-38 ---- DSHR's Blog: China's Cryptocurrency Crackdown Thursday, June 24, 2021 China's Cryptocurrency Crackdown BTC "price" Cryptocurrency mining in Sichuan, especially in the rainy season, is hydro-powered, so miners thought they'd be spared the Chinese government's crackdown, for example in Qinghai, Xinjiang and Yunnan. But they were rapidly disabused of this idea, as Matt Novak reports in Bitcoin Plunges as China's Sichuan Province Pulls Plug on Crypto Mining: BTC miners' revenue Bitcoin continued its dramatic plunge to $32,281 Monday morning, down 17.65% from a week earlier as some of China’s largest bitcoin mining farms were shut down over the weekend.
The bitcoin mining facilities of Sichuan Province received an order on Friday to stop doing business by Sunday, according to Chinese state media outlet the Global Times. The Sichuan Provincial Development and Reform Commission and the Sichuan Energy Bureau issued an order to all electricity companies in the region on Friday to stop supplying electricity to any known crypto mining organizations, including 26 firms that had already been publicly identified. BTC Hash Rate As the "price" chart shows, the crackdown is having an impact. The result of this is that miners' revenue has taken a hit. The result of this is to squeeze uneconomic mining out of the mining pools, decreasing the hash rate. The result of that is that the network has to adapt, by reducing the difficulty of mining the next block in order to maintain the six blocks an hour target for the Bitcoin blockchain, averaged over time. Below the fold, more details and graphs. BTC Difficulty Clearly, the decline of miners' revenue, and thus the hash rate, and thus the difficulty of mining blocks, has a long way to go before it significantly decreases the security of the Bitcoin blockchain. But this could be the start of a self-reinforcing cycle leading in that direction, as the current uncertainty and the decline in the "price" as the dump follows the pump cause HODL-ers to HODL. Avg Transaction Fee This decrease in the demand for transactions can be seen from the collapse in average fee per transaction since the peak around $60 to under $5. This in itself contributes to the drop in miners' revenue, and thus to the drop in the hash rate, and so on. Miners desperate to recoup some of their investment in hardware are shipping it to destinations outside China: Videos on social media sites purported to show miners in Sichuan turning off their mining machines and packing up their businesses. Miners in China are now looking to sell their equipment overseas, and it appears many have already found buyers. CNBC’s Eunice Yoon tweeted early Monday that a Chinese logistics firm was shipping 6,600 lbs (3,000 kilograms) of crypto mining equipment to an unnamed buyer in Maryland for just $9.37 per kilogram. The more of the hash rate that is located in Western countries, using power that is much more expensive than under-the-counter hydro-power in China, the worse the economics of mining. The more of the hash rate that is located in countries that follow FinCEN's guidance, the more effective Nicholas Weaver's suggestion: It is time to seriously disrupt the cryptocurrency ecology. Directly attacking mining as incompatible with the Bank Secrecy Act is one potentially powerful tool. Weaver means that classifying mining as money transmission would force miners in countries that follow FinCEN to adhere to the Anti-Money Laundering/Know Your Customer (AML/KYC) rules. This would make mining in these countries effectively illegal, as being either legally risky or impossibly expensive, and thereby help to suppress ransomware. Postscript: If you think that Bitcoin makes sense as a currency used to buy and sell, you need to justify the graph below, which shows that the total cost of making a single transaction, i.e. the average value transferred to the miners for each transaction, has been over $100 for a year and peaked at $300. Source
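As a rough sanity check on that figure (not from the post), the cost per transaction is just total miner revenue divided by transaction count; the numbers below are hypothetical but representative of mid-2021.

# Back-of-the-envelope estimate of miner revenue per transaction.
# All inputs are hypothetical, ballpark mid-2021 values.
blocks_per_day = 144            # the six-blocks-an-hour target
subsidy_btc = 6.25              # block subsidy after the May 2020 halving
fees_btc_per_day = 50           # assumed aggregate daily transaction fees
btc_price_usd = 33_000          # assumed "price" per BTC
transactions_per_day = 270_000  # assumed daily on-chain transaction count

daily_revenue_usd = (blocks_per_day * subsidy_btc + fees_btc_per_day) * btc_price_usd
print(f"miner revenue per transaction: ${daily_revenue_usd / transactions_per_day:,.0f}")

With these inputs the result is a little over $100 per transaction, which is why the average value transferred to the miners per transaction stays high even when the fees themselves collapse.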
Posted by David. at 8:00 AM Labels: bitcoin blog-dshr-org-3922 ---- DSHR's Blog: Talk at PDA2012 Saturday, February 25, 2012 Talk at PDA2012 I spoke at this year's Personal Digital Archiving conference at the Internet Archive, following on from my panel appearance there a year ago. Below the fold is an edited text of the talk with links to the sources.
At last year's PDA I sparked a lively discussion with my panel appearance called Paying for Long-Term Storage. I'm hoping to leave enough time for a similar discussion this year. Last year's talk covered the three possible business models for long-term storage, and focused on endowment as being the only really viable one. Endowment involves depositing the data together with a capital sum sufficient to pay for its storage indefinitely. The reason endowment is thought to be feasible is Kryder's Law, the 30-year history of exponential increase in disk capacity at roughly constant cost. Provided that it continues for another decade or so after you deposit your data, the endowment model works. Unfortunately, exponential growth curves never continue indefinitely. At some point, they stop. This leaves us with two intertwined questions: How long can we expect Kryder's Law to continue? How much should we charge per TB? The questions are intertwined because, obviously, the sooner Kryder's Law stops the more we have to charge. I was hoping that finding out how to answer these questions would be somebody else's problem. But it turned out to be my problem after all. I've been working for the Library of Congress on using cloud storage for a LOCKSS box (PDF). It turns out that there are several meanings of "using cloud storage for a LOCKSS box", and I have some of them actually working. But as I was starting to write up this work, I realised that the question I was going to get asked was "does it make economic sense to use cloud storage for a LOCKSS box?" A real LOCKSS box has both capital and running costs, whereas a virtual LOCKSS box in the cloud has only running costs. For an apples-to-apples comparison, I need to compare cash flows through time. Economists have a standard technique for comparing costs through time, called Discounted Cash Flow (DCF). The idea is that needing to pay a dollar in a year is the same as investing less than a dollar now so that the investment plus the accrued interest in a year will be the dollar I need to pay. Simple. In all the textbooks. But when I looked into it, two problems emerged. First, it doesn't work in practice. You need to know what interest rate to use. Here is research from the Bank of England (PDF) showing that the interest rates investors use are systematically wrong, in a way that makes endowing data, or making any other long-term investment, very difficult. Second, it doesn't even work in theory. Here is research from Doyne Farmer of the Santa Fe Institute and John Geankoplos of Yale (PDF), pointing out that (assuming you could choose the correct interest rate) using a fixed real interest rate would be OK if the outcome was linearly related to the interest rate. But it isn't. Using a constant interest rate averages out periods (like the 80s) of high interest rates and periods (like now) of very low (or negative) real interest rates. In order to model long-term investments, you need to use Monte Carlo techniques with an interest rate model. Similarly, if we assume that in the future storage costs will drop at varying rates, we need to use Monte Carlo techniques with a storage cost model. Why would we believe that in the future storage costs will drop at varying rates? Five reasons come to mind: First, they just did. The floods in Thailand increased disk prices by 50-100% almost overnight. These increased prices have flattened in recent months, but are expected to remain above trend for at least a year. 
Second, you might be wanting to use the well-known and increasingly popular "affordable cloud storage". Here's a table of the price history of four major cloud storage providers showing that the best case is a 3% per year price drop. That's 3% not 30%. Third, disk manufacturers are already finding further increases in density difficult. To stay on the curve we should have had 4TB disks by the middle of last year at the latest, but all we have are 3TB drives. The transition to future disk technologies such as HAMR and BPM is being delayed, and desperate measures, called "shingled writes", are under way to build a 6th generation of the current technology, PMR. Shingled writes means, among other problems, that disks are no longer randomly writable. They become an append-only medium. Fourth, even if we assume that Kryder's Law continues, we are in for a pause in the cost drop. The market for 3.5" disks is desktop PCs, which is collapsing. The volume consumer market is now 2.5" drives, which are on the same curve, just at a higher price per byte. And the life of the 2.5" form factor is also limited. If Kryder's Law continues until 2020 we should in theory have a $40 2.5" drive holding 14TB. But no-one is going to build this drive because no-one wants 14TB on their laptop. How would you back it up? They would much rather have a 2TB 1" drive for $15 and much less power draw. Fifth, there is a hard theoretical limit to the minimal size of a magnetic domain at the temperatures in a disk drive. This means Kryder's Law for magnetic disks pretty much has to stop by 2026 at the latest, and probably much earlier. Mark Kryder and Chang Soo Kim of C-MU compared the various competing solid state technologies with the 2020 14TB 2.5" drive (PDF), and none of them looked like good candidates for continued rapid drop in storage costs beyond there. So, we need a Monte Carlo model. I started building one, and it rapidly became clear that this was a problem much bigger than I could solve on my own. So we have started up a research program at UC Santa Cruz and Stony Brook University, with help from NetApp. I'm about to show you some early results from this collaboration. I need to stress that this is very much work in progress. We are just at the stage of trying to understand what a comprehensive model would look like, by building simple models and seeing if they produce plausible results. The first model is work by Daniel Rosenthal (no relation) of UCSC. It follows a unit of storage capacity, as it might be a shelf of a filer, as the demand for storage grows, disks fail or age out and are replaced by drives storing more data, and power and other running costs are incurred. Daniel's model doesn't account for the time value of money, so it can only be used for short periods. Here is a graph reproducing the well-known fact that drives (or tapes in a robot) are replaced when the value of the data they hold is no longer enough to justify the space they take up, not when their service life expires. With Daniel's parameters, the optimum drive replacement age is under 3 years. The second model is my initial simulation. It follows a unit of data, say a TB, as it migrates between media as they are replaced, occupying less and less of them. Unlike Daniel's model, this one uses an interest rate model to properly account for the time value of money. In this case interest rates are based on the last 20 years. Here are about a thousand runs of the model.
We gradually increase the endowment and each time see what probability we have of surviving 100 years without running out of money. As you see, if storage media prices are, as we assumed, dropping 25% a year, the variation in interest rates doesn't have a big effect. Here are a few million runs of the model, varying the Kryder's Law rate and the endowment to get a 3D graph. If we take the 98% contour of this graph, we get the next graph. This shows the relationship between the endowment needed for a 98% probability of not running out of money in 100 years, and the rate of Kryder's Law decrease in cost per byte, which we assume to be constant. The plausible result is that the endowment is relatively insensitive to the Kryder's Law rate if it is large, say above 25%/yr. But if it is small, say below 15%/yr, the endowment is rather sensitive to the rate. This is one of the key insights from our work so far. Storage industry experts disagree about the details but agree that the big picture is that Kryder's Law is slowing down. Thus we're moving from the right, flat side of the graph to the left, steep side. Despite being in a region of the graph where the cost is relatively low and easy to predict, the economic sustainability of digital preservation has been a major concern. Going forward, digital preservation faces two big problems: The cost of preserving a unit of data will increase. The uncertainty about the cost of preserving a unit of data will increase. The next graph applies the model to cloud storage, assuming an initial cost of 13 cents/GB/yr and interest rates from the last 20 years. We compute the endowment needed per TB for various rates of cost decrease. For example, if costs decrease at the 3% rate of the last 6 years of S3, we need $29K/TB. This is a lot of money. It is clearly possible that prices in the first 6 years of cloud storage were an anomaly, and in the near future they will start dropping as quickly as media prices. But the media price drop is slowing, and S3 does not appear to be under a lot of pricing pressure. Unless things change, cloud storage is simply too expensive for long-term use. The last graph shows the effect on the endowment of a spike that doubles disk prices a number of years into the life of the data. The Y=0 line has no spike for comparison. As expected, the effect is big if the Kryder's Law drop is slow and the spike is soon. Note the ridge, which shows that if the spike happens at the 4-year life I assumed for the drives, you are in trouble. As I said, we are at the very early stages of this work. It has turned out to be a lot more interesting and difficult than I could have imagined when I spoke here last year. Some of the improvements we're looking at are pluggable alternate models for interest rates (the last 20 years may not be representative) and technology evolution (we want to model the introduction of new technologies with very different properties). We want to use these initial models to study questions such as: How does the increasing short-termism discovered by the Bank of England affect the endowment required? How can we choose between storage technologies with different cost structures, such as tape, disk and solid state, as their costs evolve at different rates? Can cloud storage services compete for long-term storage? By next year, we hope to have a simulation that is realistic enough for you to use for scenario planning.
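To make the structure of these models concrete, here is a minimal sketch of the kind of Monte Carlo endowment simulation described above. It is not the actual UCSC/Stony Brook code; the interest-rate process, the initial storage cost, and the run counts are hypothetical placeholders.

# A minimal sketch of a Monte Carlo endowment simulation: follow one TB of
# data for 100 years, earning interest on the remaining endowment and paying
# a storage cost that falls at an assumed Kryder rate. All numbers are
# hypothetical placeholders, not the parameters used for the graphs above.
import random

YEARS = 100
RUNS = 1000

def survives(endowment, kryder_rate, initial_cost=100.0):
    balance = endowment
    cost = initial_cost                    # $/TB/year in year zero (assumed)
    for _ in range(YEARS):
        rate = random.gauss(0.02, 0.02)    # crude stand-in for an interest-rate model
        balance = balance * (1.0 + rate) - cost
        if balance < 0:
            return False                   # the endowment ran out
        cost *= 1.0 - kryder_rate          # storage gets cheaper at the Kryder rate
    return True

def survival_probability(endowment, kryder_rate):
    return sum(survives(endowment, kryder_rate) for _ in range(RUNS)) / RUNS

for kryder in (0.05, 0.15, 0.25):
    for endowment in (1_000, 2_000, 4_000, 8_000):
        p = survival_probability(endowment, kryder)
        print(f"Kryder rate {kryder:.0%}, endowment ${endowment}: "
              f"P(survive 100 years) = {p:.2f}")

Even this toy version reproduces the qualitative shape of the graphs: at high Kryder rates the required endowment is modest and insensitive to the rate, while at low rates it grows rapidly.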
We are anxious to learn if you think a simulation of this kind would be useful, and what questions you would like to ask it. Posted by David. at 2:00 PM Labels: pda2012, storage costs 1 comment: David. said... One interesting point that came out in questions after my talk. The Internet Archive is offering an endowment storage service - the cost is 40 times the cost of the raw disk. Given the replication, ingest and other costs, this number roughly matches the output from the model with reasonable Kryder's Law assumptions. March 5, 2012 at 7:12 PM
blog-dshr-org-3990 ---- DSHR's Blog: Kai Li's FAST Keynote DSHR's Blog I'm David Rosenthal, and this is a place to discuss the work I'm doing in Digital Preservation. Thursday, February 21, 2013 Kai Li's FAST Keynote Kai Li's keynote at the FAST 2013 conference was entitled Disruptive Innovation: Data Domain Experience. Data Domain was the pioneer of deduplication for backups. I was one of the people Sutter Hill asked to look at Data Domain when they were considering a B-round investment in 2003. I was very impressed, not just with their technology, but more with the way it was packaged as an appliance so that it was very easy to sell. The elevator pitch was "It is a box. You plug it into your network. Backups work better." I loved Kai's talk. Not just because I had a small investment in the B round, so he made me money, but more because just about everything he said matched experiences I had at Sun or nVIDIA. Below the fold I discuss some of the details. Kai quoted Dr. Geoffrey Nicholson: Research is the transformation of money into knowledge. Innovation is the transformation of knowledge into money. Data Domain is a stellar example of the second. The outlines of the story are simple. They started in late 2001, raised a total of about $41M in 3 rounds, IPO-ed less than 6 years later at a $1B valuation having spent only $27M, and were acquired 2 years after that at a $2.4B valuation. Before their IPO they had more than 60% of the market and more than 70% gross margin. That is an extraordinary performance. The vision was to replace tape for backup with disk at roughly the same price but much lower space, power, and network costs, and to make restoring from a backup much faster, thereby reducing the operational impact of failures.  The only way to do this was to use deduplication to get a high enough compression factor to swamp the cost per byte difference between disk and tape. Kai illustrated their success by showing a line of 17 full racks each containing an IBM tape library that were replaced by 3 3U Data Domain systems. The key to implementing this vision was to bet on long-term technology trends. The two that Kai pointed out were that disk had already replaced tape in personal audio (Walkman to iPod) and in TV time-shifting (VHS to Tivo), and that Moore's Law had already shifted from faster CPUs to more cores. The two major challenges they faced were: They had to sell for no more than a tape system, so their gross margin was directly related to the compression ratio they could achieve. The amount of data to be backed up was doubling every 18 months, but there are only 24 hours in a day, so their throughput needed to at least double every 18 months. The three founders started the company just after 9/11, at a time when no-one was starting companies. We started nVIDIA in 1993 in one of Silicon Valley's periodic downturns; we were the only semiconductor company to get any funding the quarter of our A round. Starting a company when no-one else is - absolutely the best time to do it. Kai laid out a list of key precepts, all of which I agree with despite some caveats: Build "must have" products. Customer driven technology. For nVIDIA, this was more difficult, since we had customers (PC and board manufacturers) and end-users (game players). Work with the best VC funds.  The difference between the best and the merely good in VCs is at least as big as the difference between the best and the merely good programmers. At nVIDIA we had two of the very best, Sutter Hill and Sequoia. 
The result is that, like Kai but unlike many entrepreneurs, we think VCs are enormously helpful. Raise more than we need - give up a lot of equity. One downside of starting a company when the market has gone south is that you have to give up more equity. But you are giving it up to VCs willing to invest when no-one else is, who are the ones you want to work with. And having cash in a downturn gives you the ability to move fast. High standard for early team, even if you miss the hiring plan. After the IPO at Sun came the bozo invasion, but then the company was well-enough established to survive it. No egos - take the best ideas wherever they come from. This is often hard for the best people to handle, and is a real test of the initial management. Kai's slides plotting revenue and things like lines of code and deduplication throughput on the same graph were fascinating. They matched closely, with throughput increasing 100-fold in 6 years, and lines of code growing much faster after the IPO than before. Posted by David. at 9:00 AM Labels: deduplication, venture capital 1 comment: David. said... Usenix has posted the video and audio of Kai's talk. February 28, 2013 at 11:09 AM
blog-dshr-org-4134 ---- DSHR's Blog: Unstoppable Code? Tuesday, June 1, 2021 Unstoppable Code? This is the website of DeFi100, a "decentralized finance" system running on the Binance Smart Chain, after the promoters pulled a $32M exit scam. Their message sums up the ethos of cryptocurrencies: The business model of crypto is to provide a platform for crooks to scam muppets without running the risk of jail time. Few understand this. https://t.co/vFeyosRKPE — Trolly🧻 McTrollface 🌷🥀💩 (@Tr0llyTr0llFace) May 7, 2021 Governments around the world have started to wake up to the fact that this message isn't just for the "muppets", it is also the message of cryptocurrencies for governments and civil society. Below the fold I look into how governments might respond. The externalities of cryptocurrencies include: Massive carbon emissions. Funding "rogue states" such as North Korea and Iran. Tax evasion. Laundering the proceeds of crime, including the drug trade, theft and fraud, and armed robbery. An epidemic of ransomware. A wave of securities fraud targeting the greedy and vulnerable. Shortages of products including graphics cards, hard disks, and chips in general as limited fab capacity is diverted to mining ASICs. Abuse of free tiers of Web services. Noise pollution. It seems that recent ransomware attacks, including the May 7th one on Colonial Pipeline and the less publicized May 1st one on Scripps Health La Jolla, have reached a tipping point. Kat Jercich's Scripps Health slowly coming back online, 3 weeks after attack reports: "It’s likely that it’s taking a long time because of negotiations going on with the perpetrators, and the prevailing narrative is that they have the contents of the electronic health records system that are being used for 'double extortion,'" said Michael Hamilton, former chief information security officer for the city of Seattle and CISO of healthcare cybersecurity firm CI Security, in an email to Healthcare IT News. If that's true, Scripps certainly wouldn't be alone: The healthcare industry saw a number of high-profile ransomware incidents in the last year, including a cyberattack on Universal Health Services that led to a lengthy network shutdown and a $67 million loss. More recently, customers of the electronic health record vendor Aprima also reported weeks of security-related outages. In response governments are trying to regulate (US) or ban (China, India) cryptocurrencies. The libertarians who designed the technology believed they had made governments irrelevant. For example, the Decentralized Autonomous Organization (DAO)'s home page said: The DAO's Mission: To blaze a new path in business organization for the betterment of its members, existing simultaneously nowhere and everywhere and operating solely with the steadfast iron will of unstoppable code.
This was before a combination of vulnerabilities in the underlying code was used to steal its entire contents, about 10% of all the Ether in circulation. If cryptocurrencies are based on the "iron will of unstoppable code" how would regulation or bans work? Nicholas Weaver explains how his group stopped the plague of Viagra spam in The Ransomware Problem Is a Bitcoin Problem: Although they drop-shipped products from international locations, they still needed to process credit card payments, and at the time almost all the gangs used just three banks. This revelation, which was highlighted in a New York Times story, resulted in the closure of the gangs’ bank accounts within days of the story. This was the beginning of the end for the spam Viagra industry. ... Subsequently, any spammer who dared use the “Viagra” trademark would quickly find their ability to accept credit cards irrevocably compromised as someone would perform a test purchase to find the receiving bank and then Pfizer would send the receiving bank a nastygram. Weaver draws the analogy with cryptocurrencies and "big-game" ransomware: These operations target companies instead of individuals, in an attempt to extort millions rather than hundreds of dollars at a time. The revenues are large enough that some gangs can even specialize and develop zero-day vulnerabilities for specialized software. Even the cryptocurrency community has noted that ransomware is a Bitcoin problem. Multimillion-dollar ransoms, paid in Bitcoin, now seem to be commonplace. This strongly suggests that the best way to deal with this new era of big-game ransomware will involve not just securing computer systems (after all, you can’t patch against a zero-day vulnerability) or prosecuting (since Russia clearly doesn’t care to either extradite or prosecute these criminals). It will also require disrupting the one payment channel capable of moving millions at a time outside of money laundering laws: Bitcoin and other cryptocurrencies. ... There are only three existing mechanisms capable of transferring a $5 million ransom — a bank-to-bank transfer, cash or cryptocurrencies. No other mechanisms currently exist that can meet the requirements of transferring millions of dollars at a time. The ransomware gangs can’t use normal banking. Even the most blatantly corrupt bank would consider processing ransomware payments as an existential risk. My group and I noticed this with the Viagra spammers: The spammers’ banks had a choice to either unbank the bad guys or be cut off from the financial system. The same would apply if ransomware tried to use wire transfers. Cash is similarly a nonstarter. A $5 million ransom is 110 pounds (50 kilograms) in $100 bills, or two full-weight suitcases. Arranging such a transfer, to an extortionist operating outside the U.S., is clearly infeasible just from a physical standpoint. The ransomware purveyors need transfers that don’t require physical presence and a hundred pounds of stuff. This means that cryptocurrencies are the only tool left for ransomware purveyors. So, if governments take meaningful action against Bitcoin and other cryptocurrencies, they should be able to disrupt this new ransomware plague and then eradicate it, as was seen with the spam Viagra industry. For in the end, we don’t have a ransomware problem, we have a Bitcoin problem. I agree with Weaver that disrupting the ransomware payment channel is an essential part of a solution to the ransomware problem. 
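As an aside, Weaver's estimate of the sheer bulk of a cash ransom is easy to sanity-check. The sketch below is not from his article; the only assumption is that a US banknote weighs roughly one gram.

```python
# Sanity check of Weaver's figure that a $5 million cash ransom is ~110 lb (50 kg).
# Assumption (not from the article): a US banknote weighs about 1 gram.

RANSOM_USD = 5_000_000
NOTE_VALUE_USD = 100
NOTE_WEIGHT_G = 1.0        # approximate weight of one bill

notes = RANSOM_USD / NOTE_VALUE_USD         # 50,000 hundred-dollar bills
weight_kg = notes * NOTE_WEIGHT_G / 1000    # ~50 kg
weight_lb = weight_kg * 2.20462             # ~110 lb

print(f"{notes:,.0f} bills, ~{weight_kg:.0f} kg (~{weight_lb:.0f} lb)")
```

The arithmetic matches the two full-weight suitcases in the quote, which is the point: moving that much paper to an extortionist outside the US is not practical.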
It would require denying cryptocurrency exchanges access to the banking system, and global agreement to do this would be hard. Given the involvement of major financial institutions and politicians, it would be hard even in the US. So what else could be done?
Nearly a year ago Joe Kelly wrote a two-part post explaining how governments could take action against Bitcoin (and by extension any Proof-of-Work blockchain). In the first part, How To Kill Bitcoin (Part 1): Is Bitcoin ‘Unstoppable Code’? he summarized the crypto-bros' argument:
They say Bitcoin can’t be stopped. Just like there’s no way you can stop two people sending encrypted messages to each other, so — they say — there’s no way you can stop the Bitcoin network. There’s no CEO to put on trial, no central server to seize, and no organisation to put pressure on. The Bitcoin network is, fundamentally, just people sending messages to each other, peer to peer, and if you knock out 1 node on the network, or even 1,000 nodes, the honey badger don’t give a shit: the other 10,000+ nodes keep going like nothing happened, and more nodes can come online at any time, anywhere in the world. So there you have it: it’s thousands of people running nodes — running code — and it’s unstoppable… therefore Bitcoin is unstoppable code; Q.E.D.; case closed; no further questions Your Honour. This money is above the law, and governments cannot possibly hope to control it, right?
The problem with this, as with most of the crypto-bros' arguments, is that it applies to the Platonic ideal of the decentralized blockchain. In the real world economies of scale mean things aren't quite like the ideal, as Kelly explains:
- It’s not just a network, it’s money. The whole system is held together by a core structure of economic incentives which critically depends on Bitcoin’s value and its ability to function for people as money. You can attack this.
- It’s not just code, it’s physical. Proof-of-work mining is a real-world process and, thanks to free-market forces and economies of scale, it results in large, easy-to-find operations with significant energy footprints and no defence. You can attack these.
If you can exploit the practical reality of the system and find a way to reduce it to a state of total economic dysfunction, then it doesn’t matter how resilient the underlying peer-to-peer network is, the end result is the same — you have killed Bitcoin.
Kelly explains why the idea of regulating cryptocurrencies is doomed to failure:
The entire point of Bitcoin is to neutralise government controls on money, which includes AML and taxes. Notice that there’s no great technological difficulty in allowing for the completely unrestricted anonymous sending of a fixed-supply money — the barrier is legal and societal, because of the practical consequences of doing that. So the cooperation of the crypto world with the law is a temporary arrangement, and it’s not an honest handshake. The right hand (truthfully) expresses “We will do everything we can to comply” while the left hand is hard at work on the technology which makes that compliance impossible. Sure, Bitcoin is pretty traceable now, and sometimes it even helps with finding criminals who don’t have the technical savvy to cover their tracks, but you’ll be fighting a losing battle over time as stronger, more convenient privacy tooling gets added to the Bitcoin protocol and the wider ecosystem around it. ... So yeah: half measures like AML and censorship aren’t going to cut it.
If you want to kill Bitcoin, that means taking it out wholesale; it means forcing the system into disequilibrium and inducing economic collapse.
In How To Kill Bitcoin (Part 2): No Can Spend Kelly explains how a group of major governments could seize control of the majority of the mining power and mount a specific kind of 51% attack. The basic idea is that governments ban businesses, including exchanges, from transacting in Bitcoin, and seize 80% of the hash rate to mine empty blocks:
As it stands, after seizing 80% of the active hash rate, you can generate proof-of-work hashes at 4x the speed of the remaining miners around the world. You control ~80 exahashes/sec, they control ~20 exahashes/sec. For every valid block that rebel miners, collectively, can produce on the Bitcoin blockchain, you can produce 4 ... You use your limitless advantage to execute the following strategy:
1. Mine an empty block — i.e. a block which is perfectly valid but contains no transactions
2. Keep 5–10 unannounced blocks to yourself — i.e. mine 5–10 ‘extra’ empty blocks ahead of where the chain tip is now, but don’t actually share any of these blocks with the network
3. Whenever a rebel miner announces a valid block, orphan it (override it) by announcing a longer chain with more cumulative proof-of-work — i.e. announce 2 of your blocks
4. Repeat (go back to 2)
The result of this is that Bitcoin transactions are no longer being processed, and you’ve created a black hole of expenditure for rebel miners. Every time a rebel miner spends $ to mine a block, it’s money down the drain: they don’t earn any block rewards for it. All transactions just sit in the mempool, being (unstoppably) messaged back and forth between nodes, waiting to be included in a block, but they never make it in. In other words, no-one can spend their bitcoin, no matter who they are or where they are in the world.
Empty blocks wouldn't be hard to detect and ignore, but it would be easy for the government miners to fill their blocks with valid transactions between addresses that they control.
Things have changed since Kelly wrote, in ways that complicate his solution. When he wrote, it was estimated that 75% of the Bitcoin mining power was located in China; China could have implemented Kelly's attack unilaterally. But since then China has been gradually increasing the pressure on cryptocurrency miners. This has motivated new mining capacity to set up elsewhere. The graph shows a recent estimate, with only 65% in China. As David Gerard reports:
Bitcoin mining in China is confirmed to be shutting down — miners are trying to move containers full of mining rigs out of the country as quickly as possible. It’s still not clear where they can quickly put a medium-sized country’s worth of electricity usage. [Reuters] ... Here’s a Twitter thread about the miners getting out of China before the hammer falls. You’ll be pleased to hear that this is actually good news for Bitcoin. [Twitter]
If the recent estimate is correct, Kelly's assumption that a group of governments could seize 80% of the mining power looks implausible. The best that could be done would be an unlikely agreement between the US and China for 72%. So for now let's assume that the Chinese government is doing this alone with only 2/3 of the mining power. Because mining is a random process, they are thus only able on average to mine 2 blocks for every one from the "rebel" miners.
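The arithmetic in the next few paragraphs can be reproduced with a short back-of-envelope sketch. This is an illustration, not anything from Kelly's posts; it assumes the round numbers used here: a seized 2/3 share of the hash rate, the 6.25 BTC block subsidy, roughly 144 blocks per day, a normal throughput of about 15K transactions/hr, and a $35K "price".

```python
# Back-of-envelope sketch of the empty-block attack with 2/3 of the hash rate.
# All parameters are the round figures used in this post, not measured data.

ATTACKER_SHARE = 2 / 3       # fraction of the hash rate the government controls
BLOCK_SUBSIDY_BTC = 6.25     # current block reward
BLOCKS_PER_DAY = 144         # one block every ~10 minutes
BTC_PRICE_USD = 35_000       # assumes the "price" somehow doesn't collapse
NORMAL_TX_PER_HOUR = 15_000  # rough normal throughput of the Bitcoin network

def rebel_success_probability(confirmations: int) -> float:
    """Chance the rebel miners mine `confirmations` blocks in a row, which is
    roughly what a transaction needs to survive constant orphaning."""
    rebel_share = 1 - ATTACKER_SHARE
    return rebel_share ** confirmations

def surviving_tx_rate(confirmations: int) -> float:
    """Effective transactions/hour that reach `confirmations` despite the attack."""
    return NORMAL_TX_PER_HOUR * rebel_success_probability(confirmations)

def attack_cost_per_day_usd() -> float:
    """Rough daily cost, taken (as in the post) as the attacker's share of block rewards."""
    return ATTACKER_SHARE * BLOCK_SUBSIDY_BTC * BLOCKS_PER_DAY * BTC_PRICE_USD

if __name__ == "__main__":
    for k in (3, 6):
        print(f"{k} confirmations: rebels succeed with p = "
              f"{rebel_success_probability(k):.2%}, "
              f"~{surviving_tx_rate(k):.0f} tx/hr survive")
    cost = attack_cost_per_day_usd()
    print(f"attack cost ≈ ${cost/1e6:.0f}M/day, ${cost*365/1e9:.2f}B/yr")
```

With these inputs the sketch gives roughly 0.14% and ~20 surviving transactions/hr at 6 confirmations, ~550/hr at 3, and a cost on the order of $21M/day.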
Because Bitcoin users would know the blockchain was under attack, they would need to wait several blocks (the advice is 6) before regarding a transaction as final. The rebels would have to win six times in a row, with probability 0.14%, for a transaction to go through. The Bitcoin network can normally sustain a transaction rate of around 15K/hr. Waiting 6 block times would reduce this to about 20/hr. Even if the requirement was only to wait 3 block times, the rate would be degraded to about 550/hr, so the attack would greatly reduce the supply of transactions and greatly increase their price. The recent cryptocurrency price crash caused average transaction fees to spike to about $60. In the event of an attack like this, HODL-ers would be desperate to sell their Bitcoin, so bidding for the very limited supply of transactions would be intense. Anyone on the buy-side of these transactions would be making a huge bet that the attack would fail, so the "price" of Bitcoin in fiat currencies would collapse.
Thus the economics for the rebel miners would look bleak. Their chances of winning a block reward would be greatly reduced, and the value of any reward they did win would be greatly reduced. There would be little incentive for the rebels to continue spending power to mine doomed blocks, so the cost of the attack for the government would drop rapidly once in place. The cost of the attack is roughly 2/3 of 6.25 BTC/block times 144 blocks/day. Even making the implausible assumption that the price didn't collapse from its current $35K/BTC, the cost is $21M/day or $7.6B/yr. A drop in the bucket for a major government. Thus it appears that, until the concentration of mining power in China decreases further, the Chinese government could kill Bitcoin using Kelly's attack. For an analysis of an alternate attack on the Bitcoin blockchain see The Economic Limits of Bitcoin and the Blockchain by Eric Budish.
Kelly addresses the government attacker:
Normally what keeps the core structure of incentives in balance in the Bitcoin system, and the reason why miners famously can’t dictate changes to the protocol, or collude to double-spend their coins at will, is the fact that for-profit miners have a stake in Bitcoin’s future, so they have a very strong disincentive towards using their power to attack the network. In other words, for-profit miners are heavily invested in and very much care about the future value of bitcoin, because their revenue and the value of their mining equipment critically depends on it. If they attack the network and undermine the integrity of Bitcoin and its fundamental value proposition to end users, they’re shooting themselves in the foot. You don’t have this problem. In fact this critical variable is flipped on its head: you have a stake in the destruction of Bitcoin’s future. You are trying to get the price of BTC to $0, and the value of all future block rewards along with it. Attacking the network to undermine the integrity of Bitcoin and its value proposition to end users is precisely your goal. This fundamentally breaks the game theory and the balance of power in the system, and the result is disequilibrium.
In short, Bitcoin is based on a Mexican Standoff security model which only works as a piece of economic design if you start from the assumption that every actor is rational and has a stake in the system continuing to function. That is not a safe assumption. There are two further problems. First, Bitcoin is only one, albeit the most important, of almost 10,000 cryptocurrencies.
Second, some of these other cryptocurrencies don't use Proof-of-Work. Ethereum, the second most important cryptocurrency, after nearly seven years' work, is promising shortly to transition from Proof-of-Work to Proof-of-Stake. The resources needed to perform a 51% attack on a Proof-of-Stake blockchain are not physical, and thus are not subject to seizure in the way Kelly assumes. I have written before on possible attacks in Economic Limits Of Proof-of-Stake Blockchains, but only in the context of double-spending attacks. I plan to do a follow-up post discussing sabotage attacks on Proof-of-Stake blockchains once I've caught up with the literature.
Update: The Economist's Graphic Detail convincingly demonstrates Crypto-miners are probably to blame for the graphics-chip shortage. The subhead sums up the graph: Secondhand graphics-card prices move nearly in lockstep with those of Ethereum. The report compares the effect of ETH "price" on GPUs and CPUs:
Since 2015 asking prices for six GPUs tracked by Keepa have moved in lockstep with Ethereum’s value. In late 2017 the currency’s first big rally coincided with a surge in listed GPU prices. Once the crypto bubble burst, GPU costs fell back to earth. Another boom began last year. As Ethereum’s price rose from $107 in March 2020 to $4,400 last month, the value of mining hardware once again followed suit. In six months, the six GPUs’ listed prices climbed by 150%. Those of CPUs barely budged. The GPU shortage has hurt data scientists and computer-aided-design users as well as gamers. Some relief may be on the way. Ethereum’s price is now 40% below its record high. GPU prices have yet to fall, but if history is any guide, they probably will soon.
Posted by David. at 8:00 AM Labels: bitcoin
18 comments: David. said... Dan Goodin's Shortages loom as ransomware hamstrings the world’s biggest meat producer reveals the latest cryptocurrency externality: "A ransomware attack has struck the world’s biggest meat producer, causing it to halt some operations in the US, Canada, and Australia while threatening shortages throughout the world, including up to a fifth of the American supply. Brazil-based JBS SA said on Monday that it was the target of an organized cyberattack that had affected servers supporting North American and Australian IT operations. A White House spokeswoman later said the meat producer had been hit by a ransomware attack “from a criminal organization likely based in Russia” and that the FBI was investigating." June 1, 2021 at 3:19 PM
David. said... Today's ransomware attacks: - Live Streams Go Down Across Cox Radio and TV Stations in Apparent Ransomware Attack. - Fujifilm becomes the latest victim of a network-crippling ransomware attack. And, Christopher Bing reports U.S. to give ransomware hacks similar priority as terrorism, official says: "The U.S. Department of Justice is elevating investigations of ransomware attacks to a similar priority as terrorism in the wake of the Colonial Pipeline hack and mounting damage caused by cyber criminals, a senior department official told Reuters. Internal guidance sent on Thursday to U.S. attorney’s offices across the country said information about ransomware investigations in the field should be centrally coordinated with a recently created task force in Washington.
“It’s a specialized process to ensure we track all ransomware cases regardless of where it may be referred in this country, so you can make the connections between actors and work your way up to disrupt the whole chain,” said John Carlin, acting deputy attorney general at the Justice Department." Sure, that'll fix the problem. June 3, 2021 at 3:14 PM David. said... I missed one of yesterday's ransomware attacks. Lawrence Abrams reports that UF Health Florida hospitals back to pen and paper after cyberattack. That is yet another major hospital chain crippled. June 4, 2021 at 10:10 AM David. said... Heather Kelly manages to write an entire article entitled Ransomware attacks are closing schools, delaying chemotherapy and derailing everyday life without pointing out that ransomware is enabled by cryptocurrencies. June 5, 2021 at 11:42 AM David. said... William Turton and Kartikay Mehrotra report that Hackers Breached Colonial Pipeline Using Compromised Password: "Hackers gained entry into the networks of Colonial Pipeline Co. on April 29 through a virtual private network account, ... The account was no longer in use at the time of the attack but could still be used to access Colonial’s network, ... The account’s password has since been discovered inside a batch of leaked passwords on the dark web. That means a Colonial employee may have used the same password on another account that was previously hacked, ... The VPN account, which has since been deactivated, didn’t use multifactor authentication" Three strikes and you're out; unrevoked obsolete account, reused password, no 2FA. June 5, 2021 at 5:06 PM Fazal Majid said... I was going to suggest ransomware authors might use precious metals as an alternative, but it turns out $5M in palladium is 1760 XPD (troy oz) at 31g each, or 54kg, basically the same as the suitcase full of $100 notes. June 6, 2021 at 12:01 PM David. said... The Feds understand the importance of disrupting the ransomware payment channel. Dan Goodin reports that US seizes $2.3 million Colonial Pipeline paid to ransomware attackers: "On Monday, the US Justice Department said it had traced 63.7 of the roughly 75 bitcoins Colonial Pipeline paid to DarkSide, which the Biden administration says is likely located in Russia. ... FBI Deputy Director Paul M. Abbate said at a press conference. "For financially motivated cyber criminals, especially those presumably located overseas, cutting off access to revenue is one of the most impactful consequences we can impose." ... The law enforcement success intensifies speculation that Colonial Pipeline paid the ransom not to gain access to a decryptor it knew was buggy but rather to help the FBI track DarkSide and its mechanism for obtaining and laundering ransoms. The speculation is reinforced by the fact that Colonial Pipeline paid in bitcoin, despite that option requiring an additional 10 percent added to the ransom. Bitcoin is pseudo-anonymous, meaning that while names aren't attached to digital wallets, the wallets and the coins they store can still be tracked." Criming on an immutable public ledger has risks. This is good news for Monero! June 8, 2021 at 8:12 AM David. said... Today's ransomware news includes: - Ransomware hits Capitol Hill contractor by Catalin Cimpanu: "A company that provides a user engagement platform for US politicians has suffered a ransomware attack, leaving many lawmakers unable to email their constituents for days." 
- Ransomware Struck Another Pipeline Firm—and 70GB of Data Leaked by Andy Greenberg: "A group identifying itself as Xing Team last month posted to its dark web site a collection of files stolen from LineStar Integrity Services, a Houston-based company that sells auditing, compliance, maintenance, and technology services to pipeline customers. The data, ... includes 73,500 emails, accounting files, contracts, and other business documents, around 19 GB of software code and data, and 10 GB of human resources files that includes scans of employee driver's licenses and Social Security cards." June 8, 2021 at 3:05 PM David. said... There is a fairly reasonable discussion of this post on Hacker News. June 8, 2021 at 7:50 PM David. said... The New York Times reports that JBS the Meat processor paid $11 million in ransom to hackers. June 9, 2021 at 6:50 PM David. said... Reuters reports that More Chinese provinces issue bans on cryptomining: "Authorities in China's northwestern province of Qinghai and a district in neighbouring Xinjiang ordered cryptocurrency mining projects to close this week, as local governments put into practice Beijing's call to crack down on the industry. ... The Qinghai office of China's Ministry of Industry and Information Technology, on Wednesday ordered a ban on new cryptomining projects in the province, and told existing ones to shut down, according to a notice seen by Reuters and confirmed by local officials. Cryptominers who set up projects claiming to be running big data and super-computing centres will be punished, and companies are barred from providing sites or power supplies to mining activities. The Development & Reform Commission of Xinjiang's Changji Hui Prefecture also sent out a notice on Wednesday, seen by Reuters and confirmed with officials, ordering a cleanup of the sector." June 15, 2021 at 8:18 AM David. said... Wolfie Zhao's Here's what Yunnan is actually doing with bitcoin mining reports that Yunnan, where mining is hydro-powered, isn't banning mining explicitly, but it is requiring miners to pay the grid price for power, which could make it uneconomic: "The media report said the Yunnan Energy Bureau is requiring subordinate departments to inspect and then either shut down or rectify bitcoin mining farms that are using unauthorized hydroelectricity. This includes power stations that are directly supplying energy to bitcoin mining farms without paying a profit cut to the government." June 16, 2021 at 5:34 PM David. said... It turns out that I have time to work on a post about attacking Proof-of-Stake blockchains. Kai Morris reports that Buterin Explains Why Ethereum 2.0 Upgrade Won’t Arrive Until Late 2022: "To the disappointment of many however, the shipping of shard chains is not expected until sometime in late 2022, according to Ethereum’s latest roadmap. While a transition from PoW to PoS is expected to take place sometime in 2021/2022, the inclusion of shard chains is seen by many as the official completion of the Ethereum 2.0 upgrade. While many have believed the delay was due to the technically-burdensome transition, the actual issue is apparently something different." Buterin is blaming his co-workers for the delay in shipping the project they've worked on now for seven years. June 17, 2021 at 4:21 PM David. said... 
VentureBeat reports that Cybereason: 80% of orgs that paid the ransom were hit again: "Cybereason’s study found that the majority of organizations that chose to pay ransom demands in the past were not immune to subsequent ransomware attacks, often by the same threat actors. In fact, 80% of organizations that paid the ransom were hit by a second attack, and almost half were hit by the same threat group." June 19, 2021 at 7:28 AM
David. said... Danny Palmer asks Have we reached peak ransomware? Betteridge's Law of Headlines supplies the answer, No! June 22, 2021 at 10:07 AM
David. said... Hannah Murphy reports that Monero emerges as crypto of choice for cybercriminals: "For cybercriminals looking to launder illicit gains, bitcoin has long been the payment method of choice. But another cryptocurrency is coming to the fore, promising to help make dirty money disappear without a trace. While bitcoin leaves a visible trail of transactions on its underlying blockchain, the niche “privacy coin” monero was designed to obscure the sender and receiver, as well as the amount exchanged. As a result, it has become an increasingly sought-after tool for criminals such as ransomware gangs, posing new problems for law enforcement." June 22, 2021 at 11:40 AM
David. said... Strong evidence for the #1 business case for cryptocurrencies in Roxanne Henderson and Loni Prinsloo's South African Brothers Vanish, and So Does $3.6 Billion in Bitcoin: "The first signs of trouble came in April, as Bitcoin was rocketing to a record. Africrypt Chief Operating Officer Ameer Cajee, the elder brother, informed clients that the company was the victim of a hack. He asked them not to report the incident to lawyers and authorities, as it would slow down the recovery process of the missing funds. Some skeptical investors roped in the law firm, Hanekom Attorneys, and a separate group started liquidation proceedings against Africrypt. ... The firm’s investigation found Africrypt’s pooled funds were transferred from its South African accounts and client wallets, and the coins went through tumblers and mixers -- or to other large pools of bitcoin -- to make them essentially untraceable." Exit scams, they're what Bitcoin was made for. June 24, 2021 at 7:30 AM
David. said... The UK government's glacial approach to regulating cryptocurrency creeps forward, as Reuters reports in UK financial watchdog cracks down on cryptocurrency exchange Binance: "Britain’s financial regulator has ordered Binance, one of the world’s largest cryptocurrency exchanges, to stop all regulated activity and issued a warning to consumers about the platform, which is coming under growing scrutiny globally. ... Since January, the FCA has required all firms offering cryptocurrency-related services to register and show they comply with anti-money laundering rules. However, this month it said that just five firms had registered, and that the majority were not yet compliant." June 27, 2021 at 12:29 PM
blog-dshr-org-428 ---- DSHR's Blog: Cost-Reducing Writing DNA Data Thursday, March 21, 2019 Cost-Reducing Writing DNA Data
In DNA's Niche in the Storage Market, I addressed a hypothetical DNA storage company's engineers and posed this challenge: increase the speed of synthesis by a factor of a quarter of a trillion, while reducing the cost by a factor of fifty trillion, in less than 10 years while spending no more than $24M/yr. Now, a company called Catalog plans to demo a significant step in the right direction: The goal of the demonstration, says Park, is to store 125 gigabytes, ... in 24 hours, on less than 1 cubic centimeter of DNA. And to do it for $7,000. That would be 1E11 bits for $7E3.
At the theoretical maximum 2 bits/base, it would be $3.5E-8 per base, versus last year's estimate of 1E-4, or around 30,000 times better. If the demo succeeds, it marks a major achievement. But below the fold I continue to throw cold water on the medium-term prospects for DNA storage.
Catalog's technique is different from experiments such as Microsoft's, which synthesize DNA strands a base at a time: When Park and Roquet formed CATALOG in 2016, they shunned the idea of assembling bases one by one to represent the digital “alphabet.” ... CATALOG opted for prefab: it buys or makes fragments of DNA, “in massive quantities,” and then assembles with a custom-made liquid-handling robot. “DNA molecules are like Lego blocks,” says Park. “We can string them together in virtually infinite combinations. We take advantage of that and start with a few hundred molecules to generate in the end, trillions of different molecules.” Park likens the approach to movable type. Instead of having to write out every letter each time you want to write something, old-style typesetters cast their letters in advance, and then slotted them into position.
But, despite Catalog's technical ingenuity, they face enormous obstacles to market success:
- $56,000/TB is still an extraordinarily expensive storage medium. 10TB hard disks retail at around $300, so are nearly 2000 times cheaper.
- 125GB in 24 hours is around 1.4MB/s, compared to the 240MB/s transfer rate of a single current 10TB drive.
- Since DNA storage has both very slow write and read, and is not rewritable, it is restricted to competing in the archival storage market. It has to be much cheaper than tape and optical media, not just hard disk, before it can compete successfully.
Catalog's pitch is based on the idea that demand for data storage is insatiable: For a startup, a solution is less important than a solid problem, Park told the Weinert Center’s Distinguished Entrepreneurs Lunch on Feb. 27. And Park’s problem – the glut of information sometimes called the “datapocalypse” — is a result of a tsunami of data from pretty much every sphere of human activity. But, as I discussed in Where Did All Those Bits Go?, the actual shipment data for storage vendors shows this is a fallacy. The demand for storage media, like the demand for any good, depends upon the price. At current prices demand for bytes of hard disk is growing steadily but more slowly than the Kryder rate, so unit shipments are falling. As I pointed out a year ago in Archival Media: Not a Good Business, the total market is probably less than $1B/yr, and new archival media have to compete with legacy media, such as hard disk, whose R&D and manufacturing investments have long been amortized. Given the long latency of DNA storage, to compete with these fully depreciated and much faster media it has to be vastly cheaper. In The Future Of Storage I discussed the fundamental problems of long-lived media such as DNA, including: The research we have been doing in the economics of long-term preservation demonstrates the enormous barrier to adoption that accounting techniques pose for media that have high purchase but low running costs, such as these long-lived media.
To sum up, while Catalog may be able to demonstrate a significant advance in the technology of DNA storage, they will still be many orders of magnitude away from a competitive product in the archival storage market.
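The cost and bandwidth comparisons above are easy to reproduce. This is a minimal sketch using only the round numbers quoted in this post (the demo's $7,000 for 125GB in 24 hours, a $300 10TB hard disk, and a 240MB/s disk transfer rate):

```python
# Reproduces the rough comparisons in the post; inputs are its round numbers,
# not vendor specifications.

DEMO_COST_USD = 7_000
DEMO_BYTES = 125e9            # 125GB written...
DEMO_SECONDS = 24 * 3600      # ...in 24 hours

HDD_COST_USD = 300            # retail price of a 10TB hard disk
HDD_CAPACITY_TB = 10
HDD_TRANSFER_MB_S = 240       # transfer rate of a single such drive

dna_cost_per_tb = DEMO_COST_USD / (DEMO_BYTES / 1e12)      # $56,000/TB
hdd_cost_per_tb = HDD_COST_USD / HDD_CAPACITY_TB            # $30/TB
cost_ratio = dna_cost_per_tb / hdd_cost_per_tb              # "nearly 2000 times cheaper"

dna_write_mb_s = DEMO_BYTES / DEMO_SECONDS / 1e6             # ~1.4MB/s
speed_ratio = HDD_TRANSFER_MB_S / dna_write_mb_s

print(f"DNA demo: ${dna_cost_per_tb:,.0f}/TB vs ${hdd_cost_per_tb:.0f}/TB for disk "
      f"({cost_ratio:.0f}x)")
print(f"DNA demo write rate: {dna_write_mb_s:.1f}MB/s vs {HDD_TRANSFER_MB_S}MB/s "
      f"({speed_ratio:.0f}x slower)")
```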
Posted by David. at 8:00 AM Labels: green preservation, long-lived media, storage costs, storage media
2 comments: David. said... The team from Microsoft Research and U.W. have a paper and a video describing a fully-automated write-store-read pipeline for DNA. This is, I believe, a first automated end-to-end demonstration. From their abstract: "Our device encodes data into a DNA sequence, which is then written to a DNA oligonucleotide using a custom DNA synthesizer, pooled for liquid storage, and read using a nanopore sequencer and a novel, minimal preparation protocol. We demonstrate an automated 5-byte write, store, and read cycle with a modular design enabling expansion as new technology becomes available." Their system is base-at-a-time, so it is still slow: "Our system’s write-to-read latency is approximately 21 h. The majority of this time is taken by synthesis, viz., approximately 305 s per base, or 8.4 h to synthesize a 99-mer payload and 12 h to cleave and deprotect the oligonucleotides at room temperature. After synthesis, preparation takes an additional 30 min, and nanopore reading and online decoding take 6 min." Again, this is a significant step forward, but a practical product is a long way away. March 26, 2019 at 1:38 PM
David. said... Tom Coughlin reports on Iridia's chip-based DNA storage technology. May 29, 2019 at 8:28 PM
blog-dshr-org-456 ---- DSHR's Blog: The Bitcoin "Price" Thursday, January 14, 2021 The Bitcoin "Price"
Jemima Kelly writes No, bitcoin is not “the ninth-most-valuable asset in the world” and it's a must-read. Below the fold, some commentary.
The "price" of BTC in USD has quadrupled in the last three months, and thus its "market cap" has sparked claims that it is the 9th most valuable asset in the world. Kelly explains the math:
Just like you would calculate a company’s market capitalisation by multiplying its stock price by the number of shares outstanding, with bitcoin you just multiply its price by its total “supply” of coins (ie, the number of coins that have been mined since the first one was in January 2009). Simples! If you do that sum, you’ll see that you get to a very large number — if you take the all-time-high of $37,751 and multiply that by the bitcoin supply (roughly 18.6m) you get to just over $665bn. And, if that were accurate and representative and if you could calculate bitcoin’s value in this way, that would place it just below Tesla and Alibaba in terms of its “market value”. (On Wednesday!)
Then Kelly starts her critique, which is quite different from mine in Stablecoins:
In the context of companies, the “market cap” can be thought of as loosely representing what someone would have to pay to buy out all the shareholders in order to own the company outright (though in practice the shares have often been over- or undervalued by the market, so shareholders are often offered a premium or a discount). Companies, of course, have real-world assets with economic value. And there are ways to analyse them to work out whether they are over- or undervalued, such as price-to-earnings ratios, net profit margins, etc. With bitcoin, the whole value proposition rests on the idea of the network. If you took away the coinholders there would be literally nothing there, and so bitcoin’s value would fall to nil. Trying to value it by talking about a “market cap” therefore makes no sense at all.
Secondly, she takes aim at the circulating BTC supply:
Another problem is that although 18.6m bitcoins have indeed been mined, far fewer can actually be said to be “in circulation” in any meaningful way. For a start, it is estimated that about 20 per cent of bitcoins have been lost in various ways, never to be recovered. Then there are the so-called “whales” that hold most of the bitcoin, whose dominance of the market has risen in recent months. The top 2.8 per cent of bitcoin addresses now control 95 per cent of the supply (including many that haven’t moved any bitcoin for the past half-decade), and more than 63 per cent of the bitcoin supply hasn’t been moved for the past year, according to recent estimates.
The small circulating supply means that BTC liquidity is an illusion: the idea that you can get out of your bitcoin position at any time and the market will stay intact is frankly a nonsense. And that’s why the bitcoin religion’s “HODL” mantra is so important to be upheld, of course. Because if people start to sell, bad things might happen! And they sometimes do. The excellent crypto critic Trolly McTrollface (not his real name, if you’re curious) pointed out on Twitter that on Saturday a sale of just 150 bitcoin resulted in a 10 per cent drop in the price.
And there are a lot of "whales" HODL-ing. If one decides to cash out, everyone will get trampled in the rush for the exits:
More than 2,000 wallets contain over 1,000 bitcoin in them. What would happen to the price if just one of those tried to unload their coins on to the market at once? It wouldn’t be pretty, we would wager. What we call the “bitcoin price” is in fact only the price of the very small number of bitcoins that wash around the retail market, and doesn’t represent the price that 18.6m bitcoins would actually be worth, even if they were all actually available.
Note that Kelly's critique implicitly assumes that BTC is priced in USD, not in the mysteriously inflatable USDT. The graph shows that the vast majority of the "very small number of bitcoins that wash around the retail market" are traded for, and thus priced in USDT. So the actual number of bitcoins being traded for real money is a small fraction of a very small number. Bitfinex & Tether have agreed to comply with the New York Supreme Court and turn over their financial records to the New York Attorney General by 15th January. If they actually do, and the details of what is actually backing the current stock of nearly 24 billion USDT become known, things could get rather dynamic.
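To put rough numbers on Kelly's float argument: the sketch below is not from her article, it simply combines the approximate figures quoted above (18.6m coins mined, ~20% lost, ~63% unmoved for a year, the $37,751 all-time high, and the 150 BTC sale that moved the price about 10%).

```python
# Rough illustration of why a "market cap" built on a thin float is misleading.
# All inputs are the approximate figures quoted in the post, so the outputs are
# order-of-magnitude only.

QUOTED_MARKET_CAP_USD = 665e9  # "just over $665bn" at the all-time high
PRICE_USD = 37_751             # the all-time-high price used in the quote
MINED_BTC = 18.6e6             # total coins mined since 2009
LOST_FRACTION = 0.20           # estimated share lost forever
DORMANT_FRACTION = 0.63        # share that has not moved in the past year

SALE_BTC = 150                 # the weekend sale Trolly McTrollface flagged...
PRICE_DROP = 0.10              # ...which knocked about 10% off the price

# Lost coins have, by definition, not moved, so treat the dormant share as the
# larger of the two when estimating the coins that plausibly circulate.
circulating_btc = MINED_BTC * (1 - max(LOST_FRACTION, DORMANT_FRACTION))
sale_value = SALE_BTC * PRICE_USD
cap_wiped = QUOTED_MARKET_CAP_USD * PRICE_DROP

print(f"coins that moved in the past year: ~{circulating_btc/1e6:.1f}m of "
      f"{MINED_BTC/1e6:.1f}m mined")
print(f"selling {SALE_BTC} BTC (~${sale_value/1e6:.1f}M) wiped ~${cap_wiped/1e9:.0f}bn "
      f"off the notional market cap")
```

A market in which a few million dollars of selling can move the notional valuation by tens of billions is, as Kelly says, not one you can get out of "at any time and the market will stay intact".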
As Tim Swanson explains in Parasitic Stablecoins, the 24B USD are notionally in a bank account, and the solvency of that account is not guaranteed by any government deposit insurance. So even if there were a bank account containing 24B USD, if there is a rush for the exits the bank holding that account could well go bankrupt.
To give a sense of scale, the 150 BTC sale that crashed the "price" by 10% represents (150 / 6.25) / 6 = 4 hours of mining reward. If miners were cashing out their rewards, they would be selling 900 BTC or $36M/day. In the long term, the lack of barriers to entry means that the margins on mining are small. But in the short term, mining capacity can't respond quickly to large changes in the "price". It certainly can't increase four times in three months.
Let's assume that three months ago, when 1BTC≈10,000USDT, the BTC ecosystem was in equilibrium with the mining rewards plus fees slightly more than the cost of mining. While the BTC "price" has quadrupled, the hash rate and thus the cost of mining has oscillated between 110M and 150M TeraHash/s. It hasn't increased significantly, so miners now need to sell only about 225 BTC or $9M/day to cover their costs. With the price soaring, they have an incentive to HODL their rewards.
Posted by David. at 8:00 AM Labels: bitcoin
24 comments: David. said... Alex Pickard was an early buyer of BTC, and became a miner in 2017. But the scales have fallen from his eyes. In Bitcoin: Magic Internet Money he explains that BTC is useless for anything except speculation: "Essentially overnight it became “digital gold” with no use other than for people to buy and hodl ... and hope more people would buy and hodl, and increase the price of BTC until everyone on earth sells their fiat currency for BTC, and then…? Well, what exactly happens then, when BTC can only handle about 350,000 transactions per day and 7.8 billion people need to buy goods and services?" And he is skeptical that Tether will survive: "If Tether continues as a going concern, and if the rising price of BTC is linked to USDT issuance, then BTC will likely continue to mechanically build a castle to the sky. I have shown how BTC price increases usually follow USDT issuance. In late 2018, when roughly 1 billion USDT were redeemed, the price of BTC subsequently fell by over 50%. Now, imagine what would happen if Tether received a cease-and-desist order, and its bank accounts were seized. Today’s digital gold would definitely lose its luster." January 14, 2021 at 10:10 AM
David. said... The saga of someone trying to turn "crypto" into "fiat". January 14, 2021 at 3:39 PM
David. said... An anonymous Bitcoin HODL-er finally figured out the Tether scam and realized his winnings. His must-read account is The Bit Short: Inside Crypto’s Doomsday Machine: "The legitimate crypto exchanges, like Coinbase and Bitstamp, clearly know to stay far away from Tether: neither supports Tether on their platforms. And the feeling is mutual! Because if Tether Ltd. were ever to allow a large, liquid market between Tethers and USD to develop, the fraud would instantly become obvious to everyone as the market-clearing price of Tether crashed far below $1. Kraken is the biggest USD-banked crypto exchange on which Tether and US dollars trade freely against each other. The market in that trading pair on Kraken is fairly modest — about $16M worth of daily volume — and Tether Ltd. surely needs to keep a very close eye on its movements. In fact, whenever someone sells Tether for USD on Kraken, Tether Ltd.
has no choice but to buy it — to do otherwise would risk letting the peg slip, and unmask the whole charade. My guess is that maintaining the Tether peg on Kraken represents the single biggest ongoing capital expense of this entire fraud. If the crooks can’t scrape together enough USD to prop up the Tether peg on Kraken, then it’s game over, and the whole shambles collapses. And that makes it the fraud’s weak point." January 18, 2021 at 8:22 AM
David. said... Tether's bank is Deltec, in the Bahamas. The anonymous Bitcoin HODL-er points out that: "Bahamas discloses how much foreign currency its domestic banks hold each month." As of the end of September 2020, all Bahamian banks in total held about $5.3B USD worth of foreign currency. At that time there were about 15.5B USDT in circulation. Even if we assume that Deltec held all of it, USDT was only 34% backed by actual money. January 18, 2021 at 9:14 AM
David. said... David Gerard's Tether printer go brrrrr — cryptocurrency’s substitute dollar problem collects a lot of nuggets about Tether, but also this: "USDC loudly touts claims that it’s well-regulated, and implies that it’s audited. But USDC is not audited — accountants Grant Thornton sign a monthly attestation that Centre have told them particular things, and that the paperwork shows the right numbers. An audit would show for sure whether USDC’s reserve was real money, deposited by known actors — and not just a barrel of nails with a thin layer of gold and silver on top supplied by dubious entities. But, y’know, it’s probably fine and you shouldn’t worry." February 3, 2021 at 3:05 PM
David. said... In 270 addresses are responsible for 55% of all cryptocurrency money laundering, Catalin Cimpanu discusses a report from Chainalysis: "1,867 addresses received 75% of all criminally-linked cryptocurrency funds in 2020, a sum estimated at around $1.7 billion. ... The company believes that the cryptocurrency-related money laundering field is now in a vulnerable position where a few well-orchestrated law enforcement actions against a few cryptocurrency operators could cripple the movement of illicit funds of many criminal groups at the same time. Furthermore, additional analysis also revealed that many of the services that play a crucial role in money laundering operations are also second-tier services hosted at larger legitimate operators. In this case, a law enforcement action wouldn't even be necessary, as convincing a larger company to enforce its anti-money-laundering policies would lead to the shutdown of many of today's cryptocurrency money laundering hotspots." February 15, 2021 at 12:24 PM
David. said... In Bitcoin is now worth $50,000 — and it's ruining the planet faster than ever, Eric Holthaus points out the inevitable result of the recent spike in BTC: "The most recent data, current as of February 17 from the University of Cambridge shows that Bitcoin is drawing about 13.62 Gigawatts of electricity, an annualized consumption of 124 Terawatt-hours – about a half-percent of the entire world’s total – or about as much as the entire country of Pakistan. Since most electricity used to mine Bitcoin comes from fossil fuels, Bitcoin produces a whopping 37 million tons of carbon dioxide annually, about the same amount as Switzerland does by simply existing." February 21, 2021 at 12:19 PM
David. said... In Elon Musk wants clean power, but Tesla's dealing in environmentally dirty bitcoin, Reuters notes that: "Tesla boss Elon Musk is a poster child of low-carbon technology.
Yet the electric carmaker's backing of bitcoin this week could turbocharge global use of a currency that's estimated to cause more pollution than a small country every year. Tesla revealed on Monday it had bought $1.5 billion of bitcoin and would soon accept it as payment for cars, sending the price of the cryptocurrency through the roof. ... The digital currency is created via high-powered computers, an energy-intensive process that currently often relies on fossil fuels, particularly coal, the dirtiest of them all." But Reuters fails to ask where the $1.5B that spiked BTC's "price" came from. It wasn't Musk's money, it was the Tesla shareholders' money. And how did they get it? By selling carbon offsets. So Musk is taking subsidies intended to reduce carbon emissions and using them to generate carbon emissions. February 21, 2021 at 12:30 PM
David. said... One flaw in Eric Holthaus' Bitcoin is now worth $50,000 — and it's ruining the planet faster than ever is that while he writes: "There are decent alternatives to Bitcoin for people still convinced by the potential social benefits of cryptocurrencies. Ethereum, the world’s number two cryptocurrency, is currently in the process of converting its algorithm from one that’s fundamentally competitive (proof-of-work, like Bitcoin uses) to one that’s collaborative (proof-of-stake), a move that will conserve more than 99% of its electricity use." He fails to point out that (a) Ethereum has been trying to move to proof-of-stake for many years without success, and (b) there are a huge number of other proof-of-work cryptocurrencies that, in aggregate, also generate vast carbon emissions. February 21, 2021 at 12:57 PM
David. said... Four posts worth reading inspired by Elon Musk's pump-and-HODL of Bitcoin. First, Jamie Powell's Tesla and bitcoin: the accounting explains how $1.5B of BTC will further obscure the underlying business model of Tesla. Of course, if investors actually understood Tesla's business model they might not be willing to support a PE of, currently, 1,220.78, so the obscurity may be the reason for the HODL. Second, Izabella Kaminska's What does institutional bitcoin mean? looks at the investment strategies hedge funds like Blackrock will use as they "dabble in Bitcoin". It involves the BTC futures market being in contango and is too complex to extract but well worth reading. Third, David Gerard's Number go up with Tether — Musk and Bitcoin set the world on fire points out that Musk's $1.5B only covers 36 hours of USDT printing: "Tether has given up caring about plausible appearances, and is now printing a billion tethers at a time. As I write this, Tether states its reserve as $34,427,896,266.91 of book value. That’s $34.4 billion — every single dollar of which is backed by … pinky-swears, maybe? Tether still won’t reveal what they’re claiming to constitute backing reserves." Fourth, in Bitcoin's 'Elon Musk pump' rally to $48K was exclusively driven by whales, Joseph Young writes: "In recent months, so-called “mega whales” sold large amounts of Bitcoin between $33,000 and $40,000. Orders ranging from $1 million to $10 million rose significantly across major cryptocurrency exchanges, including Binance. But as the price of Bitcoin began to consolidate above $33,000 after the correction from $40,000, the buyer demand from whales surged once again. Analysts at “Material Scientist” said that whales have been showing unusually large volume, around $150 million in 24 hours.
This metric shows that whales are consistently accumulating Bitcoin in the aftermath of the news that Tesla bought $1.5 billion worth of BTC." February 21, 2021 at 4:49 PM
David. said... Ethereum consumes about 22.5TWh/yr - much less than Bitcoin's 124TWh/yr, but still significant. It will continue to waste power until the switch to proof-of-stake, underway for the past 7 years, finally concludes. Don't hold your breath. February 22, 2021 at 10:21 AM
David. said... The title of Jemima Kelly's Hey Citi, your bitcoin report is embarrassingly bad says all that needs to be said, but her whole post is a fun read. March 2, 2021 at 9:26 AM
David. said... Jemima Kelly takes Citi's embarrassing "bitcoin report" to the woodshed again in The many chart crimes of *that* Citi bitcoin report: "Not only was this “report” actually just a massive bitcoin-shilling exercise, it also contained some really quite embarrassing errors from what is meant to be one of the top banks in the world (and their “premier thought leadership” division at that). The error that was probably most shocking was the apparent failure of the six Citi analysts who authored the report to grasp the difference between basis points and percentage points." March 3, 2021 at 6:43 AM
David. said... Adam Tooze's Talking (and reading) about Bitcoin is an economist's view of Bitcoin: "To paraphrase Gramsci, crypto is the morbid symptom of an interregnum, an interregnum in which the gold standard is dead but a fully political money that dares to speak its name has not yet been born. Crypto is the libertarian spawn of neoliberalism’s ultimately doomed effort to depoliticize money." Tooze quotes Izabella Kaminska contrasting the backing of "fiat" by the requirement to pay tax with Bitcoin: "Private “hackers” routinely raise revenue from stealing private information and then demanding cryptocurrency in return. The process is known as a ransom attack. It might not be legal. It might even be classified as extortion or theft. But to the mindset of those who oppose “big government” or claim that “tax is theft”, it doesn’t appear all that different. A more important consideration is which of these entities — the hacker or a government — is more effective at enforcing their form of “tax collection” upon the system. The government, naturally, has force, imprisonment and the law on its side. And yet, in recent decades, that hasn’t been quite enough to guarantee effective tax collection from many types of individuals or corporations. Hackers, at a minimum, seem at least comparably effective at extracting funds from rich individuals or multinational organisations. In many cases, they also appear less willing to negotiate or to cut deals." March 5, 2021 at 8:49 AM
David. said... IBM Blockchain Is a Shell of Its Former Self After Revenue Misses, Job Cuts: Sources by Ian Allison is the semi-official death-knell for IBM's Hyperledger: "IBM has cut its blockchain team down to almost nothing, according to four people familiar with the situation. Job losses at IBM (NYSE: IBM) escalated as the company failed to meet its revenue targets for the once-fêted technology by 90% this year, according to one of the sources." David Gerard comments: "Hyperledger was a perfect IBM project — a Potemkin village open source project, where all the work was done in an IBM office somewhere." March 5, 2021 at 2:23 PM
David. said...
Ketan Joshi's Bitcoin is a mouth hungry for fossil fuels is a righteous rant about cryptocurrencies' energy usage: "I think the story of Bitcoin isn't a sideshow to climate; it's actually a very significant and central force that will play a major role in dragging down the accelerating pace of positive change. This is because it has an energy consumption problem, it has a fossil fuel industry problem, and it has a deep cultural / ideological problem. All three, in symbiotic concert, position Bitcoin to stamp out the hard-fought wins of the past two decades, in climate. Years of blood, sweat and tears – in activism, in technological development, in policy and regulation – extinguished by a bunch of bros with laser-eye profile pictures." March 16, 2021 at 10:57 AM David. said... The externalities of cryptocurrencies, and bitcoin in particular, don't just include ruining the climate, but also ruining the lives of vulnerable elderly who have nothing to do with "crypto". Mark Rober's fascinating video Glitterbomb Trap Catches Phone Scammer (who gets arrested) reveals that Indian phone scammers transfer their ill-gotten gains from stealing the life savings of elderly victims from the US to India using Bitcoin. March 19, 2021 at 6:19 PM David. said... The subhead of Noah Smith's Bitcoin Miners Are on a Path to Self-Destruction is: "Producing the cryptocurrency is a massive drain on global power and computer chip supplies. Another way is needed before countries balk." March 26, 2021 at 11:50 AM David. said... In Before Bitfinex and Tether, Bennett Tomlin pulls together the "interesting" backgrounds of the "trustworthy" people behind Bitfinex & Tether. March 29, 2021 at 4:14 PM David. said... David Gerard reports that: "Coinbase has had to pay a $6.5 million fine to the CFTC for allowing an unnamed employee to wash-trade Litecoin on the platform. On some days, the employee's wash-trading was 99% of the Litecoin/Bitcoin trading pair's volume. Coinbase also operated two trading bots, "Hedger and Replicator," which often matched each others' orders, and reported these matches to the market." As he says: "If Coinbase — one of the more regulated exchanges — did this, just think what the unregulated exchanges get up to." Especially with the "trustworthy" characters running the unregulated exchanges. March 29, 2021 at 4:19 PM David. said... Martin C. W. Walker and Winnie Mosioma's Regulated cryptocurrency exchanges: sign of a maturing market or oxymoron? examines the (mostly lack of) regulation of exchanges and concludes: "In general, cryptocurrencies lack anyone that is genuinely accountable for core processes such as transfers of ownership, trade validation and creation of cryptocurrencies. A concern that can ultimately only be dealt with by acceptance of the situation or outright bans. However, the almost complete lack of regulation of the highly centralised cryptocurrency exchanges should be an easier-to-fill gap. Regulated entities relying on prices from "exchanges" for accounting or calculation of the value of futures contracts are clearly putting themselves at significant risk." Coinbase just filed for a $65B direct listing despite just having been fined $6.5M for wash-trading Litecoin. April 14, 2021 at 12:10 PM David. said... Izabella Kaminska outlines the risks underlying Coinbase's IPO in Why Coinbase's stellar earnings are not what they seem. The sub-head is: "It's easy to be profitable if your real unique selling point is being a beneficiary of regulatory arbitrage."
And she concludes: "Coinbase may be a hugely profitable business, but it may also be a uniquely risky one relative to regulated trading venues such as the CME or ICE, neither of which are allowed to take principal positions to facilitate liquidity on their platforms. Instead, they rely on third party liquidity providers. Coinbase, however, is not only known to match client transactions on an internalised "offchain" basis (that is, not via the primary blockchain) but also to square-off residual unmatched positions via bilateral relationships in crypto over-the-counter markets, where it happens to have established itself as a prominent market maker. It's an ironic state of affairs because the netting processes that are at the heart of this system expose Coinbase to the very same risks that real-time gross settlement systems (such as bitcoin) were meant to vanquish." April 16, 2021 at 1:24 PM David. said... Nathan J. Robinson hits the nail on the head with Why Cryptocurrency Is A Giant Fraud: "You may have ignored Bitcoin because the evangelists for it are some of the most insufferable people on the planet—and you may also have kicked yourself because if you had listened to the first guy you met who told you about Bitcoin way back, you'd be a millionaire today. But now it's time to understand: is this, as its proponents say, the future of money?" and: "But as is generally the case when someone is trying to sell you something, the whole thing should seem extremely fishy. In fact, much of the cryptocurrency pitch is worse than fishy. It's downright fraudulent, promising people benefits that they will not get and trying to trick them into believing in and spreading something that will not do them any good. When you examine the actual arguments made for using cryptocurrencies as currency, rather than just being wowed by the complex underlying system and words like "autonomy," "global," and "seamless," the case for their use by most people collapses utterly. Many believe in it because they have swallowed libertarian dogmas that do not reflect how the world actually works." Robinson carefully dismantles the idea that cryptocurrencies offer "security", "privacy", "convenience", and many of the other arguments for them. The whole article is well worth reading. April 25, 2021 at 5:34 PM David. said... Rob Beschizza reports on the effects of Elon Musk's cryptocurrency dump: "After Elon Musk turned on Bitcoin, so goes the market. Bitcoin lost about 20% of its value in a few hours before recovering to rest about 12% down, reports CNN Business. Julia Horowitz writes that it's bad news for Crypto in general, with similar falls for Ethereum, Dogecoin and the rest" May 13, 2021 at 10:41 AM
blog-dshr-org-4675 ---- DSHR's Blog: Mempool Flooding DSHR's Blog I'm David Rosenthal, and this is a place to discuss the work I'm doing in Digital Preservation. Tuesday, June 15, 2021 Mempool Flooding In Unstoppable Code? I discussed Joe Kelly's suggestion for how governments might make it impossible to transact Bitcoin by mounting a 51% attack using seized mining rigs. That's not the only way to achieve the same result, so below the fold I discuss an alternative approach that could be used alone or in combination with Kelly's concept. The Lifecycle Of The Transaction The goal is to prevent transactions in a cryptocurrency based on a permissionless blockchain. We need to understand how transactions are supposed to work in order to establish their attack surface: Transactions transfer cryptocurrency between inputs and outputs identified by public keys (or typically hashes of the keys). The input creates a proposed transaction specifying the amount for each output, and a miner fee, then signs it with their private key. The proposed transaction is broadcast to the mining pools, typically by what amounts to a Gossip Protocol.
Mining pools validate the proposed transactions they receive and add them to a database of proposed transactions, typically called the "mempool". When a mining pool starts trying to mine a block, they choose some of the transactions from their mempool to include in it. Typically, they choose transactions (or sets of dependent transactions) that yield the highest fee should their block win. Once a transaction is included in a winning block, or more realistically in a sequence of winning blocks, it is final. Attack Surface My analysis of the transaction lifecycle's attack surface may well not be complete, but here goes: The security of funds before a proposed transaction depends upon the private key remaining secret. The DarkSide ransomware group lost part of their takings from the Colonial Pipeline compromise because the FBI knew the private key of one of their wallets. The gossip protocol makes proposed transactions public. A public "order book" is necessary because the whole point of a permissionless blockchain is to avoid the need for trust between participants. This leads to the endemic front-running I discussed in The Order Flow, and which Naz automated (see How to Front-run in Ethereum). The gossip protocol is identifiable traffic, which ISPs could be required to block. The limited blocksize and fixed block time limit the rate at which transactions can leave the mempool. Thus when the transaction demand exceeds this rate the mempool will grow. Mining pools have limited storage for their mempools. When the limit is reached mining pools will drop less-profitable transactions from their mempools. Like any network service backed by a limited resource, the mempool is vulnerable to a Distributed Denial of Service (DDoS) attack. Each mining pool is free to choose transactions to include in the blocks they try to mine at will. Thus a transaction need not appear in the mempool to be included in a block. For example, mining pools' own transactions or those of their friends could avoid the mempool, the equivalent of "dark pools" in equity markets. Once a transaction is included in a mined block, it is vulnerable to a 51% attack. Flooding The Mempool Lets focus on the idea of DDoS-ing the mempool. As John Lewis of the Bank of England wrote in 2018's The seven deadly paradoxes of cryptocurrency: Bitcoin has an estimated maximum of 7 transactions per second vs 24,000 for visa. More transactions competing to get processed creates logjams and delays. Transaction fees have to rise in order to eliminate the excess demand. So Bitcoin's high transaction cost problem gets worse, not better, as transaction demand expands. Worse, pending transactions are in a blind auction to be included in the next block. Because users don't know how much to bid to be included, they either overpay, or suffer a long delay or possibly fail completely. The graph shows this effect in practice. As the price of Bitcoin crashed on May 18th and HODL-ers rushed to sell, the average fee per transaction spiked to over $60. The goal of the attack is to make victims' transactions rare, slow and extremely expensive by flooding the mempool with attackers' transactions. Cryptocurrencies have no intrinsic value, their value is determined by what the greater fool will pay. If HODL-ers find it difficult and expensive to unload their HODL-ings, and traders find it difficult and expensive to trade, the "price" of the currency will decrease. This attack isn't theoretical, it has already been tried.
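The leverage the attacker exploits is just the revenue-maximizing selection policy described above: miners fill a size-limited block with the highest fee-rate transactions available. A minimal sketch in Python, with made-up block and transaction sizes and fees (none of these numbers are measured values), shows how a flood of high-fee transactions crowds out everything priced below it:

# Illustrative sketch only: greedy fee-rate selection from a mempool.
# Transactions are (txid, size_bytes, fee_satoshi) tuples; all numbers are made up.
MAX_BLOCK_BYTES = 1_000_000  # roughly Bitcoin's base block size

def select_for_block(mempool, max_bytes=MAX_BLOCK_BYTES):
    """Pick transactions in descending fee-rate order until the block is full."""
    by_fee_rate = sorted(mempool, key=lambda tx: tx[2] / tx[1], reverse=True)
    block, used = [], 0
    for txid, size, fee in by_fee_rate:
        if used + size <= max_bytes:
            block.append(txid)
            used += size
    return block

# Victims paying a "normal" fee rate (40 sat/byte)...
victims = [("victim-%d" % i, 250, 10_000) for i in range(1_000)]
# ...are crowded out by flood transactions paying 800 sat/byte (about $60 at a $30K "price").
flood = [("flood-%d" % i, 250, 200_000) for i in range(10_000)]

chosen = select_for_block(victims + flood)
print(sum(1 for txid in chosen if txid.startswith("victim")))  # prints 0

With these invented numbers every slot in the block goes to the flood, which is exactly the effect the attacker is paying for.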
For example, in June 2018 Bitcoin Exchange Guide reported: What appears to be happening is a bunch (possibly super spam) of 1 satoshi transactions (smallest unit in bitcoin) which will put a decent stress test if sustained. Some are saying near 4,500 spam transactions and counting. This is obviously not an effective attack. There is no incentive for the mining pools to prefer tiny unprofitable transactions over normal user transactions. Unless it were combined with a 51% attack, an effective flooding attack needs to incentivize mining pools who are not part of the attack to prefer the attackers' transactions to those of victims. The only way to do this is to make the attacker's transactions more profitable, which means they have to come with large fees. If a major government wanted to mount a flooding attack on, for example, Bitcoin they would need a lot of Bitcoin as ammunition. Fortunately, at least the US government has seized hundreds of millions of dollars of cryptocurrencies: Mr. Raimondi of the Justice Department said the Colonial Pipeline ransom seizure was the latest sting operation by federal prosecutors to recoup illicitly gained cryptocurrency. He said the department has made "many seizures, in the hundreds of millions of dollars, from unhosted cryptocurrency wallets" used for criminal activity. If they needed more, they could always hack one of the numerous vulnerable exchanges. With this ammunition the government could generate huge numbers of one-time addresses and huge numbers of valid transactions among them with fees large enough to give them priority. The result would be to bid up the Bitcoin fee necessary for victim transactions to get included in blocks. It would be hard for mining pools to identify the attackers' transactions, as they would be valid and between unidentifiable addresses. As the attack continued this would ensure that: The minimum size of economically feasible transactions would increase, restricting trading to larger and larger HODL-ers, or to exchanges. The visible fact that Bitcoin was under sustained, powerful attack would cause HODL-ers to sell for fiat or other cryptocurrencies. This would depress the "price" of Bitcoin, as the exchanges would understand the risk that the attack would continue and further depress the price. Mining pools, despite receiving their normal rewards plus increased fees in Bitcoin, would suffer reduction of their income in fiat terms. Further, the mining pools need transactions to convert their rewards and fees to fiat to pay for power, etc. With transactions scarce and expensive, and reduced fiat income, the hash rate would decline, making a 51% attack easier. How Feasible Are Flooding Attacks? Back on May 18th, as the Bitcoin "price" crashed to $30K, its blockchain was congested and average fees spiked to $60. Clearly, the distribution of fees would have been very skewed, with a few fees well above $60 and most well below; the median fee was around $26. Fees are measured in Satoshi, 10^-8 of a BTC, so at that time the average fee was 60/(3×10^4) BTC or 2×10^5 Satoshi. Lets assume that ensuring that no transactions with less than 2×10^5 Satoshi as a fee succeed is enough to keep the blockchain congested. Lets assume that when the Feds claim to have seized hundreds of millions of dollars of cryptocurrencies they mean $5×10^8 or 5×10^8/(3×10^4) BTC or 1.67×10^12 Satoshi. That would be enough to pay the 2×10^5 Satoshi for 6×10^7 transactions.
At 6 transactions/second that would keep the blockchain congested for nearly 116 days or nearly 4 months. In practice, the attack would last much longer, since the attackers could dynamically adjust the fees they paid to keep the blockchain congested as, inevitably, the demand for transactions from victims declined as they realised it was futile. Ensuring that almost no victim transactions succeeded for 4 months would definitely greatly reduce the BTC "price". Thus the 16,700 BTC the mining pools would have earned in fees, plus the 104,400 BTC they would have earned in block rewards during that time would be worth much less than the $3.6B they would represent at a $30K "price". Funding the mining pools is a downside of this attack, but the increment is only about 14% in BTC terms, so likely to be swamped by the decrease in fiat terms. Potential Defenses Blockchain advocates argue that one of the benefits of the decentralization they claim for the technology is "censorship resistance". This is a problem for them because defending against a mempool flooding attack requires censorship. The mining pools need to identify and censor (i.e. drop) the attackers' transactions. Fortunately for the advocates, the technology is not actually decentralized (3-4 mining pools have dominated the hash rate for the last 7 years), so does not actually provide "censorship resistance". The pools could easily conspire to implement the necessary censorship. Unfortunately for the advocates, the attackers would be flooding with valid transactions offering large fees, so the pools would find it hard to, and not be motivated to, selectively drop them. 16,700 BTC is only about half of Tesla's HODL-ings, so it would be possible for a whale, or a group of whales, to attempt to raise the cost of the attack, or equivalently reduce its duration, by mounting a simultaneous flood themselves. The attackers would respond by reducing their flood, since the whales were doing their job for them. This would be expensive for the whales and wouldn't be an effective defense. Since it is possible for mining pools to include transactions in blocks they mine, and the attack would render the mempool effectively useless, one result of the attack would be to force exchanges and whales to establish "dark pool" type direct connections to the mining pools, allowing the mining pools to ignore the mempool and process transactions only from trusted addresses. This would destroy the "decentralized" myth, completing the transition of the blockchain into a permissioned one run by the pools, and make legal attacks on the exchanges an effective weapon. Also, the mining pools would be vulnerable to government-controlled "trojan horse" exchanges, as the bad guys were to ANOM encrypted messaging. Conclusion If my analysis is correct, it would be feasible for a major government to mount a mempool flooding attack that would seriously disrupt, but not totally destroy, Bitcoin and, by extension, other cryptocurrencies. The attack would amplify the effect of using seized mining power as I discussed in Unstoppable Code?. Interestingly, the mempool flooding attack is effective irrespective of the consensus mechanism underlying the cryptocurrency. It depends only upon a public means of submitting transactions. Posted by David.
at 8:00 AM Labels: bitcoin blog-dshr-org-4901 ---- DSHR's Blog: Chia Network DSHR's Blog I'm David Rosenthal, and this is a place to discuss the work I'm doing in Digital Preservation. Tuesday, September 4, 2018 Chia Network Back in March I wrote Proofs of Space, analyzing Bram Cohen's fascinating EE380 talk. I've now learned more about Chia Network, the company that is implementing a network using his methods. Below the fold I look into their prospects.
Chia Network's blockchain First, it is important to observe that, although Proofs of Space and Time are important for distributed storage networks such as FileCoin's, this is not what Chia Network is using Cohen's Proofs of Space and Proofs of Time (Verifiable Delay Functions VDF) for. Instead, they are using them as a replacement for Proof of Work in blockchains such as Bitcoin's. Here is the brief explanation of how they would be used I wrote in Proofs of Space: As I understand it, the proof of space technique in essence works by having the prover fill storage space with an array of pseudo-random points in [0,1] (a plot) via a time-consuming process. The verifier can then pose to the prover a question that can be answered either by a single storage access (fast) or by repeating the process of filling the storage (slow). By observing the time the prover takes the verifier can distinguish these two cases, and thus be assured that the prover has stored the (otherwise useless) data. As I understand it, verifiable delay functions work by forcing the prover to perform a specified number of iterations to generate a value that the verifier can quickly show is valid. To use in a blockchain, each block is a proof of space followed by a proof of time which finalizes it. To find a proof of space, take the hash of the last proof of time, put it on a point in [0,1], find the closest proof of space you can to that. To find the number of iterations of the proof of time, multiply the difference between those two positions by the current work difficulty factor and round up to the next integer. The result of this is that the best proof of space will finish first, with the distribution of arrival times of finalizations the same as happens in a proof of work system if resources are fixed over time. The only discretion left on the part of farmers is whether to withhold their winning proofs of space. In other words, the winning verification of a block will be the one from the peer whose plot contains the closest point, because that distance controls how long it will take for the peer to return its verification via the verifiable delay function. The more storage a peer devotes to its plot, and thus the shorter the average distance between points, the more likely it is to be the winner because its proof of space will suffer the shortest delay. There are a number of additional points that need to be understood: The process of filling storage with pseudo-random points must be slow enough that a peer cannot simulate extra storage by re-filling the limited storage it has with a new array of points. The process is presumably linear in the size of the storage to be filled, in the speed of the CPU, and in the write bandwidth of the storage medium. But since it is done only once, the slowness doesn't impair the Proof of Space time. The delay imposed by the Proof of Time must be orders of magnitude longer than the read latency of the storage medium, so that slow storage is not penalized, and fast storage such as SSDs are not advantaged. The process of filling storage has to be designed so that it is impossible to perform iterative refinement, in which a small amount of permanent storage is used to find the neighborhood of the target point, then a small amount of storage filled with just that neighborhood, and the process repeated to zero in on the target, hoping that the return in terms of reduced VDF time, exceeds the cost of populating the small neighborhood. 
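To fix ideas, here is a toy sketch in Python of the selection logic quoted above: a plot is modeled as a set of pseudo-random points in [0,1], the challenge is derived from the hash of the last proof of time, and the number of VDF iterations a farmer must perform is the distance from its closest point to the challenge times a difficulty factor, so the farmer with the closest point finalizes first. Every name and constant here is an illustrative assumption, not Chia's actual construction:

# Toy model of PoSp/VDF block selection as described above; not Chia's real code.
import hashlib
import math
import random

DIFFICULTY = 10_000_000  # illustrative work-difficulty factor

def challenge_from(last_proof_of_time: bytes) -> float:
    """Map the hash of the last proof of time to a point in [0,1)."""
    digest = hashlib.sha256(last_proof_of_time).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def best_proof_of_space(plot, challenge):
    """The farmer's best proof is the plot point closest to the challenge."""
    return min(plot, key=lambda p: abs(p - challenge))

def vdf_iterations(distance):
    """Closer proofs of space need fewer sequential VDF iterations to finalize."""
    return math.ceil(distance * DIFFICULTY)

# Two farmers: one devotes 10x more storage, so its plot has 10x more points and
# its closest point is, on average, 10x closer to any given challenge.
small_plot = [random.random() for _ in range(1_000)]
big_plot = [random.random() for _ in range(10_000)]

challenge = challenge_from(b"previous proof of time")
for name, plot in [("small", small_plot), ("big", big_plot)]:
    d = abs(best_proof_of_space(plot, challenge) - challenge)
    print(name, vdf_iterations(d))  # the big plot usually needs fewer iterations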
As I understand it, Chia Network's goals are that their blockchain would be more energy efficient, more ASIC-resistant, less centralized, and more secure than Proof of Work blockchains. Given the appalling energy waste of Proof of Work blockchains, even if Chia only managed the first of these it would be a really good thing. Energy Efficiency As regards energy efficiency, I believe a peer's duty cycle looks like this: Compute 1 hash, consuming infinitesimal energy in the CPU. Access 1 value from the disk by waking it from standby, doing a seek and a read, and dropping back into standby. This uses little energy. Do a Proof of Time, which takes a long time with the CPU and RAM using energy, and the disk in standby using perhaps 1W. So the overall energy consumption of PoSp/VDF depends entirely on the energy consumption of the Proof of Time; the Proof of Space is irrelevant. Two recent papers on VDF are Dan Boneh et al's Verifiable Delay Functions and A Survey of Two Verifiable Delay Functions. The math is beyond me, but I believe their VDFs all involve iterating on a computation in which the output of each iteration is the input for the next, to prevent parallelization. If this is the case, during the Proof of Time one of the CPU's cores will be running flat-out and not using any RAM. Thus I don't see that the energy demands of these VDFs are very different from the energy demands of hashing, as in Proof of Work. So if every peer ran a VDF the Chia network would use as much energy as Bitcoin. But, as I understand it, this isn't what Chia is planning. Instead, the idea is that there would be a small number of VDF services, to which peers would submit their Proofs of Space. In this way the energy used by the network would be vastly smaller than Bitcoin's. ASIC resistance The math of VDFs is based on imposing a number of iterations, not on imposing a delay in wall clock time. There may be an advantage to executing the VDF faster than the next peer. As David Vorick argues: At the end of the day, you will always be able to create custom hardware that can outperform general purpose hardware. I can't stress enough that everyone I've talked to in favor of ASIC resistance has consistently and substantially underestimated the flexibility that hardware engineers have to design around specific problems, even under a constrained budget. For any algorithm, there will always be a path that custom hardware engineers can take to beat out general purpose hardware. It's a fundamental limitation of general purpose hardware. Thus one can expect that if the rewards are sufficient, ASICs will be built to speed up any time-consuming algorithm (I agree that Cohen's Proof of Space is unlikely to attract ASICs, since the time it takes is negligible). I don't know what happens if a VDF service A with ASICs gets a point with distance d and, thanks to the ASICs, announces it significantly before VDF service B without ASIC assistance gets a point with distance d-ε and announces it. In the Bitcoin blockchain, all possible solutions to a block are equally valid. A later, different solution to the block is just later, and thus inferior. But in the Chia blockchain a later solution to the block could be unambiguously better (i.e. provably having performed fewer iterations) if it came from a slower peer. How does the network decide how long to wait for a potentially better solution? If the network doesn't wait long enough, and the rewards are enough, it will attract ASICs.
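The sequential structure of the Proof of Time is what makes its speed a hardware question rather than a parallelism question. As a stand-in (real VDFs use repeated squaring in a group of unknown order and come with a succinct proof, which this toy omits), a chain of dependent hash steps makes the point: each step needs the previous step's output, so extra cores do not help, and only a faster sequential engine, such as an ASIC, shortens the wall-clock time:

# Stand-in for a verifiable delay function: a strictly sequential chain of hashes.
# Real VDFs use repeated squaring plus a succinct proof of correctness; this toy
# only illustrates why the delay cannot be parallelized away.
import hashlib

def sequential_delay(seed: bytes, iterations: int) -> bytes:
    state = seed
    for _ in range(iterations):
        # Each step consumes the previous step's output, so the work cannot be
        # split across cores; only faster sequential hardware shortens the delay.
        state = hashlib.sha256(state).digest()
    return state

print(sequential_delay(b"challenge", 1_000_000).hex()[:16])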
I believe that Chia hopes that the VDF services will all use ASICs, and thus be close in speed. But there won't be enough VDF servers to make a market for ASICs. Tim Swanson estimates that there are about 3.8M Antminer S9s mining on the Bitcoin blockchain. That's a market for ASICs, but a few 10s isn't. Perhaps FPGAs could be used instead. Centralization Proof of Work blockchains become centralized through the emergence of mining pools: Bitcoin gives out a block reward, 25 BTC as of today, every 10 minutes on average. This means that a miner whose power is a small fraction of the total mining power is unlikely to win in a very long time. Since this can take years, miners congregate in pools. A pool has a manager, and whenever a participant in the pool finds proof of work, the reward goes to the manager, which distributes it among the participants according to their contribution to the pool. But the emergence of pools isn't due to Proof of Work, it is a necessary feature of every successful blockchain. Consider a blockchain with a 10-minute block time and 100K equal miners. A miner will receive a reward on average once in 1.9 years. If the average Chia network miner's disk in such a network has a 5-year working life the probability that it will not receive any reward is 2.5%. This is not a viable sales pitch to miners; they join pools to smooth out their income. Thus, all other things being equal, a successful blockchain would have no more than say 60 equal pools, generating on average a reward every 10 hours. But all other things are not equal, and this is where Brian Arthur's Increasing Returns and Path Dependence in the Economy comes in. Randomly, one pool will be bigger than the others, and will generate better returns through economies of scale. The better returns will attract miners from other pools, so it will get bigger and generate even better returns, attracting more miners. This feedback loop will continue, as it did with Ghash.io. Mining Pools 9/1/18 Bitcoin's mining pools are much larger than needed to provide reasonable income smoothing, though the need to retain the appearance of decentralization means that it currently takes 4 pools (two apparently operated by Bitmain) to exceed 51% of the mining power. Presumably economies of scale generating better return on investment account for the additional size. As regards Chia Network's centralization: Just as with Bitcoin's Proof of Work, farming pools would arise in PoSp/VDF to smooth out income. What would pools mining a successful Chia Network blockchain look like? Lets assume initially that it is uneconomic to develop ASICs to speed up the VDF. There would appear to be three possible kinds of participants in a pool: Individuals using the spare space in their desktop PC's disk. The storage for the Proof of Space is effectively "free", but unless these miners joined pools, they would be unlikely to get a reward in the life of the disk. Individuals buying systems with CPU, RAM and disk solely for mining. The disruption to the user's experience is gone, but now the whole cost of mining has to be covered by the rewards. To smooth out their income, these miners would join pools. Investors in data-center scale mining pools. Economies of scale would mean that these participants would see better profits for less hassle than the individuals buying systems, so these investor pools would come to dominate the network, replicating the Bitcoin pool centralization. 
Thus if Chia's network were to become successful, mining would be dominated by a few large pools. Each pool would run a VDF server to which the pool's participants would submit their Proofs of Space, so that the pool manager could verify their contribution to the pool. The emergence of pools, and dominance of a small number of pools, has nothing to do with the particular consensus mechanism in use. Thus I am skeptical that alternatives to Proof of Work will significantly reduce centralization of mining in blockchains generally, and in Chia Network's blockchain specifically. Security Like Radia Perlman, Eric Budish in The Economic Limits Of Bitcoin And The Blockchain (commentary on it here) observes that: From a computer security perspective, the key thing to note about (2) is that the security of the blockchain is linear in the amount of expenditure on mining power, ... In contrast, in many other contexts investments in computer security yield convex returns (e.g., traditional uses of cryptography) — analogously to how a lock on a door increases the security of a house by more than the cost of the lock. Budish shows that the security of a blockchain against a 51% attack depends on the per-block mining reward being large relative to the maximum value of the transactions in a block. Reducing the expenditure on mining makes the blockchain less secure. As I wrote in Cryptocurrencies Have Limits: Budish combines Equations 1 & 2 to get Equation 3: P_block > V_attack/α This inequality expresses the honest equilibrium condition for deterring an outsider's 51% attack (to deter insiders, P_block has to be twice as big): the equilibrium per-block payment to miners for running the blockchain must be large relative to the one-off benefits of attacking it. Equation (3) places potentially serious economic constraints on the applicability of the Nakamoto (2008) blockchain innovation. By analogy, imagine if users of the Visa network had to pay fees to Visa, every ten minutes, that were large relative to the value of a successful one-off attack on the Visa network. This is why, especially with the advent of "mining as a service", 51% attacks on alt-coins have become endemic. There is nothing specific to Proof of Work in Budish's analysis; the same limit will apply to Chia Network's blockchain. But Cohen hopes to motivate the use of spare space in individuals' desktops by keeping mining rewards small. His first slide claims: As long as rewards are below depreciation value it will be unprofitable to buy storage just for farming. I believe that by depreciation value he means the depreciation of all the storage in the network. Robert Fontana and Gary Decad's 2016 numbers are vendor revenue of $0.039/GB for hard disk. Assuming 5-year straight-line depreciation (262,800 10-minute blocks), the block reward and thus the maximum value of the transactions in a block must be less than $0.15/PB. Kryder's law means this limit will decrease with time. Note also that it is possible at low cost to rent very large amounts of storage and computation for short periods of time in order to mount a 51% attack on a PoSp/VDF network in the same way that "mining as a service" enables 51% attacks on alt-coins. For example, a back-of-the-envelope computation of an hour-long Petabyte attack at Amazon would start by using about 7 days of AWS Free Tier to write the data to the sc1 version of Elastic Block Storage. The EBS would cost $35/PB/hr, so the setup would cost $2940.
Then the actual hour-long attack would cost $35, for a total of just under $3000. I'm skeptical that the low-reward approach to maintaining decentralization is viable. Burstcoin It turns out that for the past four years there has been a Proof-of-Space coin, BurstCoin. Its "market cap" spiked on launch and when Bitcoin spiked, but it is currently under 2.5K BTC with a 24-hr volume just over 12 BTC. There are currently 670 cryptocurrencies with greater 30-day volume than Burst. So it hasn't been a great success. As expected, mining is dominated by a few pools. Despite the lack of success, BurstCoin claims to have about 284PB of storage devoted to mining. Lets take that at face value for now. Using the same numbers as above, that's $11M in capital. With 5-year straight-line depreciation and a 4-minute block time (657K blocks) the depreciation is $16.74 per block. The reward is currently 809 coins/block or at current "market price" $7.17, so they are below Bram Cohen's criterion for not being worth investing just for mining. Despite that, last year people were claiming to be mining with 120-250TB, and earning ~700 coins/day. That clearly isn't happening today. There are 360 blocks/day so the daily mining reward is 291,240 coins, or $2.58K. 240TB would be 8.5E-4 of the claimed mining power, so should gain 247.6 coins or $2.18/day. 40 cheap 6TB drives at Newegg would be around $6K, so the 5-year depreciation would be $3.29/day for a loss of $1.11/day before considering power, etc. And as the rewards decreased the loss would increase. Suppose I have 2TB spare space in my desktop PC. I'd have 7E-6 of the claimed mining power so, before pool expenses, I could expect to earn about 2c/day, decreasing. Why bother? Mining BurstCoin only makes sense if you have free access to large amounts of spare disk space for substantial periods. Despite the rhetoric, it isn't viable to aggregate the unused space in people's desktops. Deployment The history of BurstCoin might be seen as a poor omen for Chia. But BitTorrent has given Cohen a stellar reputation, and Chia has funding from Andreessen Horowitz, so it is likely to fare better. Nevertheless, Chia mining isn't likely to be the province of spare space on people's desktops: There are fewer and fewer desktops, and the average size of their disks is decreasing as users prefer smaller, faster, more robust SSDs to the larger, slower, more fragile 2.5" drives that have mostly displaced the even larger 3.5" drives in desktops. The only remaining market for large, 3.5" drives is data centers, and drive manufacturers are under considerable pressure to change the design of their drives to make them more efficient in this space. This will make them unsuitable for even those desktops that want large drives. So, if the rewards are low enough to prevent investment in storage dedicated to mining, where is the vast supply of free hard disk space to come from? I can see two possible sources: Manufacturers could add to their hard drives' firmware the code to fill them with a plot during testing and then mine during burn-in before shipment. This is analogous to what Butterfly Labs did with their mining hardware, just legal. Cloud data centers need spare space to prepare for unexpected spikes in demand. Experience and "big data" techniques can reduce the amount, but enough uncertainty remains that their spare space is substantial. So it is likely that the Chia blockchain would be dominated by a small number of large companies (2 disk manufacturers, maybe 10 clouds).
Arguably, these companies would be more trustworthy and more decentralized than the current Bitcoin miners. Posted by David. at 8:00 AM Labels: bitcoin, cloud economics, kryder's law, storage costs blog-dshr-org-4909 ---- DSHR's Blog: Economics Of Evil Revisited DSHR's Blog I'm David Rosenthal, and this is a place to discuss the work I'm doing in Digital Preservation. Thursday, July 29, 2021 Economics Of Evil Revisited Eight years ago I wrote Economics of Evil about the death of Google Reader and Google's habit of leaving its users in the lurch.
In the comments to the post I started keeping track of accessions to le petit musée des projets Google abandonnés. So far I've recorded at least 33 dead products, an average of more than 4 a year. Two years ago Ron Amadeo wrote about the problem this causes in Google’s constant product shutdowns are damaging its brand: We are 91 days into the year, and so far, Google is racking up an unprecedented body count. If we just take the official shutdown dates that have already occurred in 2019, a Google-branded product, feature, or service has died, on average, about every nine days. Below the fold, some commentary on Amadeo's latest report from the killing fields, in which he detects a little remorse. Belatedly, someone at Google seems to have realized that repeatedly suckering people into using one of your products then cutting them off at the knees, in some cases with one week's notice, can reduce their willingness to use your other products. And they are trying to do something about it, as Amadeo writes in Google Cloud offers a model for fixing Google’s product-killing reputation: A Google division with similar issues is Google Cloud Platform, which asks companies and developers to build a product or service powered by Google's cloud infrastructure. Like the rest of Google, Cloud Platform has a reputation for instability, thanks to quickly deprecating APIs, which require any project hosted on Google's platform to be continuously updated to keep up with the latest changes. Google Cloud wants to address this issue, though, with a new "Enterprise API" designation. What Google means by "Enterprise API" is: Our working principle is that no feature may be removed (or changed in a way that is not backwards compatible) for as long as customers are actively using it. If a deprecation or breaking change is inevitable, then the burden is on us to make the migration as effortless as possible. They then have this caveat: The only exception to this rule is if there are critical security, legal, or intellectual property issues caused by the feature. And go on to explain what should happen: Customers will receive a minimum of one year’s notice of an impending change, during which time the feature will continue to operate without issue. Customers will have access to tools, docs, and other materials to migrate to newer versions with equivalent functionality and performance. We will also work with customers to help them reduce their usage to as close to zero as possible. This sounds good, but does anyone believe if Google encountered "critical security, legal, or intellectual property issues" that meant they needed to break customer applications they'd wait a year before fixing them? Amadeo points out that: Despite being one of the world's largest Internet companies and basically defining what modern cloud infrastructure looks like, Google isn't doing very well in the cloud infrastructure market. Analyst firm Canalys puts Google in a distant third, with 7 percent market share, behind Microsoft Azure (19 percent) and market leader Amazon Web Services (32 percent). Rumor has it (according to a report from The Information) that Google Cloud Platform is facing a 2023 deadline to beat AWS and Microsoft, or it will risk losing funding. 
The linked story from 2019 actually says: While the company has invested heavily in the business since last year, Google wants its cloud group to outrank those of one or both of its two main rivals by 2023. On Canalys numbers, the "and" target to beat (AWS plus Azure) has happy customers forming 51% of the market. So there is 42% of the market up for grabs. If Google added every single one of them to its 7% they still wouldn't beat a target of "both". Adding six times their customer base in 2 years isn't a realistic target. Even the "or" target of Azure is unrealistic. Since 2019 Google's market share has been static while Azure's has been growing slowly. Catching up in the 2 years remaining would involve adding 170% of Google's current market share. So le petit musée better be planning to enlarge its display space to make room for a really big new exhibit in 2024. Posted by David. at 8:00 AM Labels: cloud economics 1 comment: David. said... Writing about Gartner's latest cloud report, Tim Anderson recounts Google's customers' unhappiness: "expressed concern about low post-sales satisfaction when dealing with Google Cloud Platform (GCP), aggressive pricing that may not be maintained, and that GCP is the only hyperscale provider reporting a financial loss for this part of its business ($591m)." August 3, 2021 at 12:50 PM
blog-dshr-org-5245 ---- DSHR's Blog: Mining Is Money Transmission (Updated) DSHR's Blog I'm David Rosenthal, and this is a place to discuss the work I'm doing in Digital Preservation. Thursday, June 17, 2021 Mining Is Money Transmission (Updated) In How to Start Disrupting Cryptocurrencies: "Mining" Is Money Transmission, Nicholas Weaver makes an important point that seems to have been overlooked (my emphasis): The mining process starts with a pile of unconfirmed digital checks, cryptographically signed by the accounts' corresponding private keys (in public key cryptography, only the private key can generate a signature but anyone can verify the signature with the public key). Each miner takes all the checks and decides which ones they are going to consider. Miners first have to make sure that each check they consider is valid and that the sending account has sufficient funds. Miners then choose from the set of valid checks they want to include and collect them together in a "block." Below the fold, I look into the implications Weaver draws from this. The main implication is that miners are providing money transmission services under US law: The term "money transmission services" means the acceptance of currency, funds, or other value that substitutes for currency from one person and the transmission of currency, funds, or other value that substitutes for currency to another location or person by any means. Thus, in the US, they are required to follow the Anti-Money Laundering/Know Your Customer (AML/KYC) rules: Not only do the miners have to make sure checks are valid, but they also have to make numerous choices beyond this, usually focused on maximizing revenue by selecting the checks that provide the highest fee to the miner. So a miner who creates a block is explicitly making decisions about which transactions to confirm. This successful miner ... is a money transmitter. And these miners are transmitting a lot of value. Let us examine a single Bitcoin block — the newest block when I wrote this paragraph. In this block the miner, "F2Pool," confirmed 2,644 transactions representing a notional value of $1.6 billion. Of course many of these transactions are simply noise (the Bitcoin blockchain is notorious for transactions that do not represent real transactions), but even the "small" transactions represent several hundred dollars moving between pseudonymous numbered accounts. And each and every one of them was processed, validated, selected and recorded by this one mining pool.
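Very little machinery is needed for a miner to exercise that discretion in a compliance-friendly direction. Here is a hypothetical sketch in Python of what "compliant" block building could look like in outline: the usual fee-priority selection, minus any transaction that touches a flagged address. The addresses and the risk list are invented for illustration; this is not Marathon's (or any pool's) actual method, and a real implementation would rely on chain-analysis risk scores rather than a hard-coded set:

# Hypothetical sketch of "compliant" block building: fee-priority selection,
# minus any transaction touching a flagged address. Addresses and the risk list
# are made up; this is not any pool's actual method.
FLAGGED_ADDRESSES = {"1RansomWalletXYZ", "1MixerABC"}

def is_compliant(tx):
    """Reject a transaction if any input or output address is flagged."""
    return not (set(tx["inputs"]) | set(tx["outputs"])) & FLAGGED_ADDRESSES

def build_block(mempool, max_txs=2_644):
    candidates = [tx for tx in mempool if is_compliant(tx)]
    candidates.sort(key=lambda tx: tx["fee"], reverse=True)
    return candidates[:max_txs]

mempool = [
    {"txid": "a1", "inputs": ["1Alice"], "outputs": ["1Bob"], "fee": 20_000},
    {"txid": "b2", "inputs": ["1RansomWalletXYZ"], "outputs": ["1Exchange"], "fee": 90_000},
]
print([tx["txid"] for tx in build_block(mempool)])  # ['a1']: the flagged spend is excluded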
And there is an existence proof that miners can use their freedom to choose which transactions to include in the blocks they mine to exclude transactions from unknown parties: There is proof that one can attempt to produce a "sanctions-compliant" mining pool. Marathon Digital Holdings is a small mining pool (roughly 1 percent of the current mining rate). During the month of May, Marathon used a risk-scoring method to select transactions, intending to create Bitcoin blocks untainted by money laundering or other criminal activity. Yet they stopped doing this because the larger Bitcoin community objects to the idea of attempting to restrict Bitcoin to legal uses! David Gerard comments: Nicholas Weaver points out that this completely gives the game away: miners have always been able to comply with money transmission rules, they just got away with not doing it. In the US the AML/KYC rules are enforced by the Financial Crimes Enforcement Network (FinCEN). Most countries follow FinCEN's lead because the penalty for not doing so can be loss of access to the Western world's banking system: This basic observation — that cryptocurrency miners, no matter the cryptocurrency itself, are money transmitters and should be treated as such — would effectively outlaw Bitcoin, Ethereum and other cryptocurrency mining in most of the world. And some nations that generally don't follow FinCEN's model, notably Iran and China, are cracking down on Bitcoin mining because it poses both a local money-laundering threat and an obscene waste of energy. The Chinese government's moves to shut down cryptocurrency mining are already having a significant effect. David Gerard reports: HashCow will no longer sell mining rigs in China. Sichuan Duo Technology put its machines up for sale on WeChat. BTC.TOP, which does 18% of all Bitcoin mining, is suspending operations in China, and plans to mine mainly in North America. [Time] Mining rigs are for sale at 20–40% off. Chinese miners are looking to set up elsewhere. Some are looking to Kazakhstan. [Wired] Some have an eye on Texas — a state not entirely famous for its robust grid and ability to keep the lights on in bad weather. [CNBC] Weaver points out the entrepreneurial opportunity a collapse of the hash rate opens up: Additionally, Bitcoin and other proof-of-work cryptocurrencies have a security weakness: The system is secure only as long as there is a lot of continuously wasted effort. If the available mining drops precipitously, this enables attackers to rewrite history (a rewriting process that, if it only removes transactions, is arguably not a money transmitter). I'm certain ransomware victims and their insurers would pay $1 million to a service that would undo a $5 million payment. He concludes: It is time to seriously disrupt the cryptocurrency ecology. Directly attacking mining as incompatible with the Bank Secrecy Act is one potentially powerful tool. The whole post is well worth reading. Update July 4th: Three days after I posted this, Nicholas Weaver co-authored a follow-up article with Bruce Schneier entitled How to Cut Down on Ransomware Attacks Without Banning Bitcoin which is also well worth reading. They write: Ransomware isn't new; the idea dates back to 1986 with the "Brain" computer virus. Now, it's become the criminal business model of the internet for two reasons.
The first is the realization that no one values data more than its original owner, and it makes more sense to ransom it back to them — sometimes with the added extortion of threatening to make it public — than it does to sell it to anyone else. The second is a safe way of collecting ransoms: Bitcoin. Alas, this is already out-of-date. When the DarkSide gang hit Colonial Pipeline: Colonial Pipeline paid in bitcoin, despite that option requiring an additional 10 percent added to the ransom. DarkSide made a mistake in handling the roughly 75BTC and Dan Goodin reported that US seizes $2.3 million Colonial Pipeline paid to ransomware attackers:: Source "On Monday, the US Justice Department said it had traced 63.7 of the roughly 75 bitcoins Colonial Pipeline paid to DarkSide The 10% additional ransom was for payment in Bitcoin rather than the more anonymous Monero. The ransomware industry has learned from this not to allow payment in Bitcoin. Lawrence Abrams reports in REvil ransomware hits 1,000+ companies in MSP supply-chain attack: The ransomware gang is demanding a $5,000,000 ransom to receive a decryptor from one of the samples. The image of the demand shows that payment in Monero is now the only option. Nevertheless, Weaver and Schneier's argument that the ransomware industry can be disrupted by targeting exchanges is plausible: Criminals and their victims act differently. Victims are net buyers, turning millions of dollars into Bitcoin and never going the other way. Criminals are net sellers, only turning Bitcoin into currency. The only other net sellers are the cryptocurrency miners, and they are easy to identify. Any banked exchange that cares about enforcing money laundering laws must consider all significant net sellers of cryptocurrencies as potential criminals and report them to both in- country and U.S. financial authorities. Any exchange that doesn’t should have its banking forcefully cut. The U.S. Treasury can ensure these exchanges are cut out of the banking system. By designating a rogue but banked exchange, the Treasury says that it is illegal not only to do business with the exchange but for U.S. banks to do business with the exchange’s bank. As a consequence, the rogue exchange would quickly find its banking options eliminated. They also agree with my suspicion that Tether has a magic money pump when they write: While most cryptocurrencies have values that fluctuate with demand, Tether is a “stablecoin” that is supposedly backed one- to-one with dollars. Of course, it probably isn’t, as its claim to be the seventh largest holder of commercial paper (short-term loans to major businesses) is blatantly untrue. Instead, they appear part of a cycle where new Tether is issued, used to buy cryptocurrencies, and the resulting cryptocurrencies now “back” Tether and drive up the price. This behavior is clearly that of a “wildcat bank,” a 1800s fraudulent banking style that has long been illegal. Tether also bears a striking similarity to Liberty Reserve, an online currency that the Department of Justice successfully prosecuted for money laundering in 2013. Shutting down Tether would have the side effect of eliminating the value proposition for the exchanges that support chain swapping since these exchanges need a “stable” value for the speculators to trade against. I would add that, while they are correct to write: banning cryptocurrencies like Bitcoin is an obvious solution. 
But while the solution is conceptually simple, it’s also impossible because — despite its overwhelming problems — there are so many legitimate interests using cryptocurrencies, albeit largely for speculation and not for legal payments. Source Bitcoin is almost impossible to use directly to pay for legal goods and services; both volatility and irreversibility mean that it has to be converted into fiat before the transaction. Monero has these problems in spades, plus the problem that any banked exchange could not trade it. So it has to be traded into Bitcoin before being traded into fiat. Thus any exchange account buying or selling Monero (or one of the smaller anonymous cryptocurrencies such as Zcash) falls under suspicion of crimes such as ransomware or money laundering, and should be reported. Posted by David. at 8:00 AM Labels: bitcoin 5 comments: David. said... An indication that Western governments are not happy with cryptocurrencies is that neither the IMF: "Adoption of bitcoin as legal tender raises a number of macroeconomic, financial and legal issues that require very careful analysis" nor the World Bank: "While the government did approach us for assistance on bitcoin, this is not something the World Bank can support given the environmental and transparency shortcomings" approve of El Salvador's scheme to convert dollar remittances to Tethers. June 17, 2021 at 10:56 AM David. said... Around 3am this morning BTC's "price" spiked 6.6% from around $32.60K to around $34.75K. The probable cause was unconfirmed reports that renowned HODL-er Mircea Popescu had drowned off Costa Rica. The death of a HODL-er is always good news for Bitcoin as they are likely to have taken the keys to their HODL-ings, which in Popescu's case are thought to amount to around 5% of all the Bitcoin there will ever be, with them. Less supply = higher price, according to the tenets of Austrian economics. Anthony “Pomp” Pompliano, in a now-deleted tweet, celebrated thus: 'Mircea Popescu, a Bitcoin OG, has passed away. He likely owned quite a bit of bitcoin. We may never know how much or if they are lost forever, but reminds me that Satoshi said: "Lost coins only make everyone else's coins worth slightly more. Think of it as a donation to everyone."' June 28, 2021 at 8:11 AM David. said... The need for regulation fo cryptocurrencies is evident from Misrylena Egkolfopolou and Charlie Wells' Crypto Scammers Rip Off Billions as Pump-and-Dump Schemes Go Digital: "It might sound like a joke, given the crypto meltdowns of late, but serious money is at stake here. Billions — real billions — are getting pilfered annually through a variety of cryptocurrency scams. The way things are going, this will only get worse. ... Nowadays crypto hustlers and star-gazers like Titan Maxamus have established a weird symbiotic relationship. It seems to capture everything that’s gone wrong with money culture, from Reddit-fueled thrill-seeking to conspiracy theorizing to predatory wheeling-dealing. The rug pull is only one play. There’s also the gentler soft rug, the crypto version of getting ghosted on Hinge. And the honey pot, which functions like a trap. Old-fashioned Ponzi schemes, newly cryptodenominated, have swindled people out of billions too." July 8, 2021 at 9:43 AM David. said... 
And another reason for regulation in Mike Peterson's Fake Apple stocks are starting to trade on various blockchain platforms: "Synthetic versions of popular technology stocks like Apple, Tesla, and Amazon have started trading on blockchains, joining a growing pool of various crypto assets. The digital assets are engineered to reflect the prices of the stocks that they reflect, but no actual trading of real stocks is involved. Although sales volumes are still just a tiny percentage of trades on actual exchanges, crypto enthusiasts are excited about the potential. For proponents, it's a way to trade stock-like assets without any of the restrictions. ... Traders can exchange the synthetic stocks anonymously, 24 hours a day, and without restrictions like "know your client" rules or capital controls. ... Of course, unregulated finance options like the synthetic tokens could soon draw the attention of enforcement agencies like the Securities and Exchange Commission. Billionaire crypto investor Mike Novogratz, for example, recently said that decentralized finance companies should start abiding by some rules soon to avoid the ire of regulators." The whole point of permissionless blockchains is that "abiding by some rules soon" is a bug. July 8, 2021 at 10:16 AM David. said... In The Oncoming Ransomware Storm Stephen Diehl continues to point to suppressing the payment channel as the way to stop the dystopian ransomware future: "Imagine a world in which every other month you’re forced to bid for your personal data back from hackers who continuously rob you. And a world where all of this is so commonplace there are automated darknet marketplaces where others can bid on your data, and every detail of your personal life is up for sale to the highest bidder. Every private text, photo, email, and password is just a digital commodity to be traded on the market. Because that’s what the market demands and that’s what capitalism left unchecked will provide." July 9, 2021 at 7:01 AM
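Weaver and Schneier's "net seller" observation quoted above is mechanical enough to sketch. The toy screen below (account names, amounts and thresholds are all invented) sums each exchange account's fiat-to-crypto and crypto-to-fiat flows and flags the accounts that are overwhelmingly sellers, which is the population they argue a banked exchange should be reporting:

from collections import defaultdict

# (account, side, usd_amount): side is "buy" (fiat to crypto) or "sell" (crypto to fiat)
trades = [
    ("ransom_victim_llc", "buy", 5_000_000),
    ("mining_pool_x",     "sell", 2_000_000),
    ("retail_speculator", "buy", 20_000),
    ("retail_speculator", "sell", 15_000),
    ("laundering_shell",  "sell", 900_000),
]

def flag_net_sellers(trades, min_volume_usd=100_000, sell_share=0.9):
    # Sum each account's buy and sell volume, then flag large accounts whose
    # activity is almost entirely selling crypto for fiat.
    totals = defaultdict(lambda: {"buy": 0.0, "sell": 0.0})
    for account, side, usd in trades:
        totals[account][side] += usd
    flagged = []
    for account, t in totals.items():
        volume = t["buy"] + t["sell"]
        if volume >= min_volume_usd and t["sell"] / volume >= sell_share:
            flagged.append(account)   # candidate for an AML report
    return flagged

print(flag_net_sellers(trades))   # -> ['mining_pool_x', 'laundering_shell']

Miners would of course show up in such a screen too, which is consistent with the argument above that they are money transmitters.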
blog-dshr-org-5283 ---- DSHR's Blog: Economic Model of Long-Term Storage DSHR's Blog I'm David Rosenthal, and this is a place to discuss the work I'm doing in Digital Preservation. Tuesday, August 22, 2017 Economic Model of Long-Term Storage Cost vs. Kryder rate As I wrote last month in Patting Myself On The Back, I started working on economic models of long-term storage six years ago. I got a small amount of funding from the Library of Congress; when that ran out I transferred the work to students at UC Santa Cruz's Storage Systems Research Center. This work was published here in 2012 and in later papers (see here). What I wanted was a rough-and-ready Web page that would allow interested people to play "what if" games. What the students wanted was something academically respectable enough to get them credit. So the models accumulated lots of interesting details. But the details weren't actually useful. The extra realism they provided was swamped by the uncertainty from the "known unknowns" of the future Kryder and interest rates. So I never got the rough-and-ready Web page.
Below the fold, I bring the story up-to-date and point to a little Web site that may be useful. Earlier this year the Internet Archive asked me to update the numbers we had been working with all those years ago. And, being retired with time on my hands (not!), I decided instead to start again. I built an extremely simple version of my original economic model, eliminating all the details that weren't relevant to the Internet Archive and everything else that was too complex to implement at short notice, and put it behind an equally simple Web site running on a Raspberry Pi (so please don't beat up on it). What This Model Does For a single Terabyte of data, the model computes the endowment, the money which deposited with the Terabyte and invested at interest would suffice to pay for the storage of the data "for ever" (actually 100 years in this model). Assumptions These are the less than totally realistic assumptions underlying the model: Drive cost is constant, although each year the same cost buys drives with more capacity as given by the Kryder rate. The interest rate and the Kryder rate do not vary for the duration. The storage infrastructure consists of multiple racks, containing multiple slots for drives. I.e. the Terabyte occupies a very small fraction of the infrastructure. The number of drive slots per rack is constant. Ingesting the Terabyte into the infrastructure incurs no cost. The failure rate of drives is constant and known in advance, so that exactly the right number of spare drives is included in each purchase to ensure that failed drives can be replaced by an identical drive. Drives are replaced after their specified life although they are still working. Some of these assumptions may get removed in the future (see below). Parameters This model's adjustable parameters are as follows. Media Cost Factors DriveCost: the initial cost per drive, assumed constant in real dollars. DriveTeraByte: the initial number of TB of useful data per drive (i.e. excluding overhead). KryderRate: the annual percentage by which DriveTeraByte increases. DriveLife: working drives are replaced after this many years. DriveFailRate: percentage of drives that fail each year. Infrastructure Cost factors SlotCost: the initial non-media cost of a rack (servers, networking, etc) divided by the number of drive slots. SlotRate: the annual percentage by which SlotCost decreases in real terms. SlotLife: racks are replaced after this many years Running Cost Factors SlotCostPerYear: the initial running cost per year (labor, power, etc) divided by the number of drive slots. LaborPowerRate: the annual percentage by which SlotCostPerYear increases in real terms. ReplicationFactor: the number of copies. This need not be an integer, to account for erasure coding. Financial Factors DiscountRate: the annual real interest obtained by investing the remaining endowment. Defaults The defaults are my invention for a rack full of 8TB drives. They should not be construed as representing the reality of your storage infrastructure. If you want to use the output of this model, for example for budgeting purposes, you need to determine your own values for the various parameters. 
Default values

Parameter            Value    Units
DriveCost            250.00   Initial $
DriveTeraByte        7.2      Usable TB per drive
KryderRate           10       % per year
DriveLife            4        years
DriveFailRate        2        % per year
SlotCost             150.00   Initial $
SlotRate             0        % per year
SlotLife             8        years
SlotCostPerYear      100.00   Initial $ per year
LaborPowerRate       4        % per year
DiscountRate         2        % per year
ReplicationFactor    2        # of copies

Unlike the KryderRate and the SlotRate, the LaborPowerRate reflects that the real cost of staff increases over time. Of course, the capacity of the slots is typically increasing faster than the LaborPowerRate, so the per-Terabyte cost from the LaborPowerRate still decreases over time. Nevertheless, the endowment calculated is quite sensitive to the value of the LaborPowerRate.

Calculation
The model works through the 100-year duration year by year. Each year it figures out the payments needed to keep the Terabyte stored, including running costs and equipment purchases. It then uses the DiscountRate to figure out how much would have to have been invested at the start to supply that amount at that time. In other words, it computes the Net Present Value of each year's expenditure and sums them to compute the endowment needed to pay for storage over the full duration.

Usage
(Figure: sample model output) The Web site provides two ways to use the model:
Provide a set of parameters including a DiscountRate and a KryderRate, and compute the model's estimate of the endowment.
Provide a set of parameters excluding the DiscountRate and the KryderRate, and draw a graph of how the model's estimate of the endowment varies with the DiscountRate and KryderRate for reasonable ranges of these two parameters.
The sample graph shows why adding lots of detail to the model isn't really useful, because the effects of the unknowable future DiscountRate and KryderRate parameters are so large.

Code
The code is here under an Apache 2.0 license.

What This Model Doesn't (Yet) Do
If I can find the time, some of these deficiencies in the model may be removed:
Unlike earlier published research, this model ignores the cost of ingesting the data in the first place, and accessing it later. Experience suggests the following rule of thumb: ingest is half the total lifetime cost, storage is one-third the total lifetime cost, and access is one-sixth. Thus a reasonable estimate of the total preservation cost of a Terabyte is three times the result of this model.
The model assumes that the parameters are constant through time. Historically, interest rates, the Kryder rate, labor costs, etc. have varied, and thus should be modeled using Monte Carlo techniques and a probability distribution for each such parameter. It is possible for real interest rates to go negative, disk cost per Terabyte to spike upwards, as it did after the Thai floods, and so on. These low-probability events can have a large effect on the endowment needed, but are excluded from this model. Fixing this needs more CPU power than a Raspberry Pi.
There are a number of different possible policies for handling the inevitable drive failures, and different ways to model each of them. This model assumes that it is possible to predict at the time a batch of drives is purchased what proportion of them will fail, and inflates the purchase cost by that factor. This models the policy of buying extra drives so that failures can be replaced by the same drive model.
The model assumes that drives are replaced after DriveLife years even though they are working.
Continuing to use the drives beyond this can have significant effects on the endowment (see this paper). Posted by David. at 10:00 AM Labels: storage costs 4 comments: Unknown said... Nice post, and model, even if it can't be predictive. Might want to throw in inflation. Effect might be large, given the decade average in the US hasn't been lower than 2% for the last 100 years. Revised code, assumed to be buggy, here. Rick August 22, 2017 at 12:26 PM David. said... Rick, please read the post more carefully: "constant in real dollars" and: "annual real interest" The model works in real dollars, that is after adjusting for inflation. In other words, your idea of the average future rate of inflation needs to be subtracted from your idea of the KryderRate, SlotRate and LaborPowerRate in nominal dollars. Adding an inflation parameter would be double-counting. August 22, 2017 at 12:52 PM Unknown said... Oops. Apologies. I should've caught that by inference from your straw-man 2% discount rate, as well. August 22, 2017 at 1:04 PM David. said... I want to use the Pi for something else, so I have taken the model down. If you need to use the model please install it on your own hardware from github: https://github.com/dshrosenthal/EconomicModel If this isn't possible, post a comment and I'll see if I can resurrect the model. February 21, 2020 at 5:37 PM
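For readers who want to see the shape of the calculation described under "Calculation" above, here is a deliberately simplified sketch in Python. It follows the parameter names and defaults from the post, but it is not the code in the github repository linked from the comments; the treatment of spare drives, slot replacement and the Terabyte's share of the infrastructure is crudely approximated:

def endowment(DriveCost=250.0, DriveTeraByte=7.2, KryderRate=10.0, DriveLife=4,
              DriveFailRate=2.0, SlotCost=150.0, SlotRate=0.0, SlotLife=8,
              SlotCostPerYear=100.0, LaborPowerRate=4.0, DiscountRate=2.0,
              ReplicationFactor=2.0, years=100):
    """Sum the Net Present Value of each year's spending on one replicated TB."""
    total = 0.0
    for year in range(years):
        # Cost per usable TB falls as the Kryder rate raises drive capacity.
        tb_per_drive = DriveTeraByte * (1 + KryderRate / 100) ** year
        drive_cost_per_tb = DriveCost / tb_per_drive
        slot_cost_per_tb = SlotCost * (1 - SlotRate / 100) ** year / tb_per_drive
        running_per_tb = (SlotCostPerYear * (1 + LaborPowerRate / 100) ** year
                          / tb_per_drive)
        spend = running_per_tb                     # labor, power, etc.
        if year % DriveLife == 0:                  # replace drives, plus spares
            spend += drive_cost_per_tb * (1 + DriveFailRate / 100 * DriveLife)
        if year % SlotLife == 0:                   # replace rack infrastructure
            spend += slot_cost_per_tb
        spend *= ReplicationFactor                 # pay for every copy
        total += spend / (1 + DiscountRate / 100) ** year   # discount to NPV
    return total

print(f"Endowment per TB: ${endowment():,.2f}")

Sweeping KryderRate and DiscountRate over a range of values with a function like this reproduces the kind of sensitivity the post's graph is meant to show.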
blog-dshr-org-645 ---- DSHR's Blog: Graphing China's Cryptocurrency Crackdown DSHR's Blog I'm David Rosenthal, and this is a place to discuss the work I'm doing in Digital Preservation. Tuesday, July 6, 2021 Graphing China's Cryptocurrency Crackdown Below the fold an update to last Thursday's China's Cryptocurrency Crackdown with more recent graphs. McKenzie Sigalos reports that Bitcoin mining is now easier and more profitable as algorithm adjusts after China crackdown: China had long been the epicenter of bitcoin miners, with past estimates indicating that 65% to 75% of the world's bitcoin mining happened there, but a government-led crackdown has effectively banished the country's crypto miners. "For the first time in the bitcoin network's history, we have a complete shutdown of mining in a targeted geographic region that affected more than 50% of the network," said Darin Feinstein, founder of Blockcap and Core Scientific. More than 50% of the hashrate – the collective computing power of miners worldwide – has dropped off the network since its market peak in May. Here is the hashrate graph. It is currently 86.3TH/s, down from a peak of 180.7TH/s, so down 52.2% from the peak and trending strongly down. We may not have seen the end of the drop. This is good news for Bitcoin. The result is that the Bitcoin system slowed down: Typically, it takes about 10 minutes to complete a block, but Feinstein told CNBC the bitcoin network has slowed down to 14- to 19-minute block times. And thus, as shown in the difficulty graph, the Bitcoin algorithm adjusted the difficulty: This is precisely why bitcoin re-calibrates every 2016 blocks, or about every two weeks, resetting how tough it is for miners to mine.
On Saturday, the bitcoin code automatically made it about 28% less difficult to mine – a historically unprecedented drop for the network – thereby restoring block times back to the optimal 10-minute window. Source It went from a peak of 25.046t to 19.933t, a drop of 20.4%. This is good news for Bitcoin, as Sigalos writes: Fewer competitors and less difficulty means that any miner with a machine plugged in is going to see a significant increase in profitability and more predictable revenue. "All bitcoin miners share in the same economics and are mining on the same network, so miners both public and private will see the uplift in revenue," said Kevin Zhang, former Chief Mining Officer at Greenridge Generation, the first major U.S. power plant to begin mining behind-the-meter at a large scale. Assuming fixed power costs, Zhang estimates revenues of $29 per day for those using the latest-generation Bitmain miner, versus $22 per day prior to the change. Longer-term, although miner income can fluctuate with the price of the coin, Zhang also noted that mining revenues have dropped only 17% from the bitcoin price peak in April, whereas the coin's price has dropped about 50%. Source Here is the miners' revenue graph. It went from a peak of $80.172M/day on April 15th to a trough of $13.065M/day on June 26th, a drop of 83.7%. It has since bounced back a little, so this is good news for Bitcoin, if not quite as good as Zhang thinks. Obviously, the trough was before the decrease in difficulty, which subsequently resulted in 6.25BTC rewards happening more frequently than before and thus increased miners' revenue somewhat. Have you noticed how important it is to check the numbers that the HODL-ers throw around? Matt Novak reported on June 21st that: Miners in China are now looking to sell their equipment overseas, and it appears many have already found buyers. CNBC’s Eunice Yoon tweeted early Monday that a Chinese logistics firm was shipping 6,600 lbs (3,000 kilograms) of crypto mining equipment to an unnamed buyer in Maryland for just $9.37 per kilogram. And Sigalos adds details: Of all the possible destinations for this equipment, the U.S. appears to be especially well-positioned to absorb this stray hashrate. CNBC is told that major U.S. mining operators are already signing deals to patriate some of these homeless Bitmain miners. U.S. bitcoin mining is booming, and has venture capital flowing to it, so they are poised to take advantage of the miner migration, Arvanaghi told CNBC. "Many U.S. bitcoin miners that were funded when bitcoin's price started rising in November and December of 2020 means that they were already building out their power capacity when the China mining ban took hold," he said. "It's great timing." And, as always, the HODL-ers ignore economies of scale and hold out hope for the little guy: But Barbour believes that much smaller players in the residential U.S. also stand a chance at capturing these excess miners. "I think this is a signal that in the future, bitcoin mining will be more distributed by necessity," said Barbour. "Less mega-mines like the 100+ megawatt ones we see in Texas and more small mines on small commercial and eventually residential spaces. It's much harder for a politician to shut down a mine in someone's garage." It is good news for Bitcoin that more of the mining power is in the US where the US government could suppress it by, for example, declaring that Mining Is Money Transmission and thus that pools needed to adhere to the AML/KYC rules. 
Doing so would place the poor little guy in a garage in a dilemma — mine on his own and be unlikely to get a reward before their rig was obsolete, or join an illegal pool and risk their traffic being spotted. Update: The Malaysian government's crackdown is an example to the world. Andrew Hayward reports that Police Destroy 1,069 Bitcoin Miners With Big Ass Steamroller In Malaysia. Posted by David. at 8:00 AM Labels: bitcoin 6 comments: David. said... The Economist covers the crackdown in Deep in rural China, bitcoin miners are packing up: "In May, a government committee tasked with promoting financial stability vowed to put a stop to bitcoin mining. Within weeks the authorities in four main mining regions—Inner Mongolia, Sichuan, Xinjiang and Yunnan—ordered the closure of local projects. Residents of Inner Mongolia were urged to call a hotline to report anyone flouting the ban. In parts of Sichuan, miners were ordered to clear out computers and demolish buildings housing them overnight. Power suppliers pulled the plug on most of them. ... China had accounted for about 65% of bitcoins earned through mining, according to the Cambridge Bitcoin Electricity Consumption Index. But analysts think about 90% of its mining has now ceased. Chinese miners are selling their computers at half their value. ... China had accounted for about 65% of bitcoins earned through mining, according to the Cambridge Bitcoin Electricity Consumption Index. But analysts think about 90% of its mining has now ceased. Chinese miners are selling their computers at half their value." July 11, 2021 at 9:40 AM David. said... It isn't just China. Kevin Shaley's Take a look inside this underground crypto mining farm in Ukraine with its 3,800 PlayStations and 5,000 computers reports that: "A huge underground cryptocurrency mining operation has been busted by Ukraine police for allegedly stealing electricity from the grid. Police said they'd seized 5,000 computers and 3,800 games consoles that were being used in the illegal mine, the largest discovered in the country. The mine, in the city of Vinnytsia, near Kyiv, stole as much as $259,300 in electricity each month, the Security Service of Ukraine said. To conceal the theft, the operators of the mine used electricity meters that did not reflect their actual energy consumption, officials said." Check out the picture! July 12, 2021 at 12:51 PM David. said... Bitcoin miners break new ground in Texas, a state hailed as the new cryptocurrency capital by Dalvin Brown explains the attraction of Texas' electricity, despite unreliability and enormous price spikes: "In the world of crypto mining, having all your computers shut down at once, and stay down for hours, as they did in June, sounds like a disaster. Crypto miners compete with one another the world over to generate the computer code that results in the production of a single bitcoin, and the algorithm that governs bitcoin’s production allows only 6.25 bitcoin to be produced every 10 minutes, among the perhaps 70,000 crypto mines that operate around the world. If you’re not able to generate the code, but your rivals can, you are out of luck. But thanks to the way Texas power companies deal with large electricity customers like Whinstone, Harris’s bitcoin mine, one of the few owned by a publicly traded company, didn’t suffer. Instead, the state’s electricity operator, the Electric Reliability Council of Texas (ERCOT), began to pay Whinstone — for having agreed to quit buying power amid heightened demand." 
The good news is that the more mining happens in the US the easier it would be for the US to stop the unstoppable code. July 12, 2021 at 1:05 PM David. said... Failure to proceed moonwards causes loss of interest, as Tanya Macheel reports in Cryptocurrency trading volume plunges as interest wanes following bitcoin price drop: "Trading volumes at the largest exchanges, including Coinbase, Kraken, Binance and Bitstamp, fell more than 40% in June, according to data from crypto market data provider CryptoCompare, which cited lower prices and lower volatility as the reason for the drop. In June the price of bitcoin hit a monthly low of $28,908, according to the report, and ended the month down 6%. A daily volume maximum of $138.2 billion on June 22 was down 42.3% from the intra-month high in May." July 13, 2021 at 6:21 AM David. said... MacKenzie Sigalos continues reporting on the Bitcoin mining migration in How the U.S. became the world’s new bitcoin mining hub: "Well before China decided to kick out all of its bitcoin miners, they were already leaving in droves, and new data from Cambridge University shows they were likely headed to the United States. The U.S. has fast become the new darling of the bitcoin mining world. It is the second-biggest mining destination on the planet, accounting for nearly 17% of all the world’s bitcoin miners as of April 2021. That’s a 151% increase from September 2020. ... This dataset doesn’t include the mass mining exodus out of China, which led to half the world’s miners dropping offline, and experts tell CNBC that the U.S. share of the mining market is likely even bigger than the numbers indicate. According to the newly-released Cambridge data, just before the Chinese mining ban began, the country accounted for 46% of the world’s total hashrate, an industry term used to describe the collective computing power of the bitcoin network. That’s a sharp decline from 75.5% in September 2019, and the percentage is likely much lower given the exodus underway now." July 19, 2021 at 3:41 PM David. said... Bloomberg reports that China’s Central Bank Says It Will Keep Pressure on Crypto Market: "China’s central bank vowed to maintain heavy regulatory pressure on cryptocurrency trading and speculation after escalating its clampdown in the sector earlier this year. The People’s Bank of China will also supervise financial platform companies to rectify their practices according to regulations, it said in a statement on Saturday. Policy makers met on Friday to discuss work priorities for the second half of the year." August 12, 2021 at 2:46 PM Post a Comment Newer Post Older Post Home Subscribe to: Post Comments (Atom) Blog Rules Posts and comments are copyright of their respective authors who, by posting or commenting, license their work under a Creative Commons Attribution-Share Alike 3.0 United States License. Off-topic or unsuitable comments will be deleted. 
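The post's advice to check the numbers is easy to follow. The snippet below recomputes the percentage drops quoted in the post from the peak and trough figures given there, and sketches the difficulty retarget rule itself; the rule shown is the standard every-2016-blocks adjustment clamped to a factor of four, not code taken from Bitcoin Core:

def pct_drop(peak, now):
    # Percentage decline from a peak value to a later value.
    return (peak - now) / peak * 100

print(round(pct_drop(180.7, 86.3), 1))     # hashrate: ~52.2% below its peak
print(round(pct_drop(25.046, 19.933), 1))  # difficulty: ~20.4% below its peak
print(round(pct_drop(80.172, 13.065), 1))  # miners' revenue: ~83.7% below its peak

def retarget(old_difficulty, actual_minutes, target_minutes=2016 * 10):
    # Every 2016 blocks, difficulty scales by target time over actual time,
    # clamped to a factor of four in either direction.
    ratio = max(0.25, min(4.0, target_minutes / actual_minutes))
    return old_difficulty * ratio

# With roughly 14-minute blocks the rule cuts difficulty by about 28%,
# consistent with the drop described in the post.
print(round((1 - retarget(1.0, 2016 * 14)) * 100, 1))   # ~28.6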
blog-dshr-org-6794 ---- DSHR's Blog: Lack Of Anti-Trust Enforcement DSHR's Blog I'm David Rosenthal, and this is a place to discuss the work I'm doing in Digital Preservation. Thursday, August 27, 2020 Lack Of Anti-Trust Enforcement The accelerating negative effects that have accumulated since the collapse of anti-trust enforcement in the US have been a prominent theme on this blog. This search currently returns 16 posts stretching back to 2009. Recently, perhaps started by Lina M. Khan's masterful January 2017 Yale Law Journal article Amazon's Antitrust Paradox, a consensus has been gradually emerging as to these negative effects. One problem for this consensus is that "real economists" don't believe the real world, they only believe mathematical models that produce approximations to the real world.
Now, Yves Smith's Fed Economists Finger Monopoly Concentration as Underlying Driver of Neoliberal Economic Restructuring; Barry Lynn in Harpers and Fortnite Lawsuit Put Hot Light on Tech Monopoly Power covers three developments in the emerging anti-monopoly consensus: Apple and Google ganging up on Epic Games. Lina M. Khan's ex-boss Barry Lynn's The Big Tech Extortion Racket: How Google, Amazon, and Facebook control our lives. Market Power, Inequality, and Financial Instability by Fed economists Isabel Cairó and Jae Sim The first two will have to wait for future posts, but the last of these may start to convince "real economists" because, as Yves Smith writes: they developed a model to simulate the impact of companies’ rising market power, in conjunction with the assumption that the owners of capital liked to hold financial assets (here, bonds) as a sign of social status. They wanted to see it it would explain six developments over the last forty years. ... And it did! Follow me below the fold for the details. Yves Smith lists the six developments: Real wage growth stagnating and lagging productivity growth Pre-tax corporate profits rising rapidly relative to GDP Increasing income inequality Increasing wealth inequality Higher household leverage Increased financial instability The fit between Cairó & Sim's model and the real world was impressive: For a model to have pretty decent fit for so many variables is not trivial. The first are what the model coughed up, the second, in parenthesis, are the real world data: ... The authors did quite a few sensitivity test and also modeled some alternative explanations, as well as showing panels that compared model outputs over time to real economy outcomes. They also recommend wealth distribution as a way to dampen financial crises, or even just taxing dividends at a healthy level. Here is Cairó & Sim's abstract: Over the last four decades, the U.S. economy has experienced a few secular trends, each of which may be considered undesirable in some aspects: declining labor share; rising profit share; rising income and wealth inequalities; and rising household sector leverage, and associated financial instability. We develop a real business cycle model and show that the rise of market power of the firms in both product and labor markets over the last four decades can generate all of these secular trends. We derive macroprudential policy implications for financial stability. Their model contains two kinds of agents, owners of capital who lend, and workers who borrow: The first type of agents, named agents K, whose population share is calibrated at 5 percent, own monopolistically competitive firms and accumulate real (capital) and financial assets (bonds). The second type of agents, named agents W, whose population share is calibrated at 95 percent, work for labor earnings and do not participate in capital market, but issue private bonds for consumption smoothing. The two types of agents interact in two markets. In the labor market, they bargain over the wage. In the credit market, agents K play the role of creditors and agents W the role of borrowers. We assign so-called spirit-of-capitalism preferences to agent K such that they earn direct utility from holding financial wealth, which is assumed to represent the social status What is going on here is that the agents K want to lend as much money as possible, so as to receive interest. 
In order for them to do this, it is necessary for the agents W to borrow as much money as possible: We show that such preferences are key in creating a direct link between income inequality and credit accumulation, as they control the marginal propensity to save (MPS) out of permanent income shocks. ... We posit that the market power of the firms owned by agent K in both product market and labor market (in the form of bargaining power) steadily increases over time for three decades (1980– 2010) and study the transitional dynamics of the model economy. Agents K in the model want to lend money, rather than directly own productive capital resources. The authors tried changing this, so that agents K owned real capital assets, but this produced much less realistic results: Since the investor earns strictly positive marginal utility from holding capital, capital accumulation is enhanced far beyond the level in the baseline, increasing the marginal productivity of labor, raising labor demand and lowering the unemployment rate 10 percentage points in 30 years, which is clearly counterfactual. Furthermore, the investment to output ratio increases 18 percent over this period, which contrasts with the 18 percent decline both in the data and in our baseline model. Finally, the greater incentive to accumulate physical capital generates far greater income for wealthy households, creating the rise of credit-to-GDP ratio that greately overshoots the level observed in the data. They also investigated changing the motivations of the agents W: Another popular narrative behind the rise of credit accumulation is the “keeping-up-with-the-Joneses” preferences for borrowers. This narrative argues that it was the borrowers’ desire to catch up with the lifestyle of the wealthy households, even when their income stagnated, that explains the rise of the household sector leverage ratio. To test this narrative, we modify the preferences of agent W such that the reference point in their external habit is agent K’s consumption level, which is larger than agent W’s consumption level by construction, as agents W are the poorest agents in the model. We find that if keeping-up-with- the-Joneses preferences were the main driver of the credit expansion, credit-to-GDP ratio rises 50 percentage points in 30 years, a substantially higher increase than the one observed in the baseline and also larger than in the data. However, such overshooting helps match the rise in the probability of financial crises. For this reason, we cannot preclude the possibility that the demand factor known as “keeping-up-with-the-Joneses” is one of the factors behind the rises of household leverage and financial instability. If agents W are reluctant to borrow, the returns from the lending by agents K will be lower. So it is in the interests of agents K to (a) increase the prices of essential goods, and/or (b) increase the desire of agents W to purchase inessential goods. This is where advertising comes in, to enhance the “keeping-up-with-the-Joneses” effect. Fortunately, for most Americans the "Joneses" are not the top 5% of agents W who they only ever see on TV, but their slighly better-off neighbors. The altered model probably exaggerates the effect significantly. So the six bad effects are caused by the increasing market power of agents K, and thus their ability to persuade agents W to borrow from them. What to do to reduce them? 
we introduce a redistribution policy to our baseline model that consists of a dividend income tax for agent K and social security spending for agent W. This taxation is non-distortionary in our economy, as the tax rate does not interfere with production decisions. Our results show that a policy of gradually increasing the tax rate from zero to 30 percent over the last 30 years might have been effective in preventing almost 50 percent of buildup in income inequality, credit growth and the increase in the endogenous probability of financial crisis. Since the taxation leaves production efficiency intact, the secular decline in labor share is left intact while the increase in income inequality is substantially subdued. This suggests that carefully designed redistribution policies can be quite effective macroprudential policy tools and more research is warranted in this area. "Macroprudential policy" in this context has meant trying to avoid the regular financial crises that mean the taxpayer bails out the banks, and then endures years of "austerity" allegedly to repair the nation's finances so they will be ready for the next crisis. Typically this has involved showering the banks with free money in return for a promise not to indulge in such risky behavior again until enough time has passed for people to forget where their money went. This has worked less well than one might have hoped. The Federal Reserve authors' radical, socialist suggestion of imposing a tax on agents K and spending the proceeds on stuff that agents W need, like health care, low-cost housing, public transit, clean air and water, unemployment insurance, public education, less murderous police and so on is so not going to happen. But if it did, the authors suggest it just might work: the taxation does not affect the wealth of nation, it simply breaks the link between the decline of the labor income share and the increase in income inequality. It does so by redistributing income from agents K to agents W with no significant changes in product and labor market equilibrium. This experiment has important implications for macroprudential policies. Since the GFC, most of the focus of macroprudential policies has been on building the resilience of financial intermediaries by bolstering their capital positions, restricting their risk exposures, and restraining excessive interconnectedness among them. These policies are useful in maintaining financial stability. However, these policies might not address a much more fundamental issue: Why is there so much income “to be intermediated” to begin with? In our framework, the root cause of financial instability is the income inequality driven by changes in market structure and institutional changes that reward the groups at the top of the income distribution. Our experiment suggests that if an important goal for public policy is to limit the probability of a tail event, such as a financial crisis, a powerful macroprudential policy may be a redistribution policy that moderates the rise in income inequality. In other words, finanicial crises are caused by agents K having so much money sloshing around the financial system compared to the supply of productive investments (restricted by the fact the agents W can't afford to borrow to purchase the products) that the excess money has to be placed in investments so risky that regular crises are guaranteed. 
The authors' tax diverts the excess to agents W, reducing the risk level because they spend it and thus increase the supply of productive, less risky investments. Neat, huh? Notice that the authors do not propose anti-trust measures; their model continues to increase the market power of agents K. But their redistributive tax ameliorates some of the bad effects of monopolization. . Matt Stoller noticed the paper and wrote Monopolization as a Challenge for Both Parties: Basically, people who produce things for a living don’t make as much money, and people who serve as monopoly middlemen make more money, and then the monopolists lend to the producers, creating a society built on asymmetric power relationships and unstable debt. This paper joins a host of other research coming out in recent years on the perils of concentration. For instance, on the labor front, one study showed that concentration costs the average American household $5,000 a year in lost purchasing power. Another showed that since 1980, markups—how much companies charge for products beyond their production costs—have tripled from 21 percent to 61 percent due to growing consolidation. Another revealed that median annual compensation—now only $33,000—would be over $10,000 higher if employers were less concentrated. Concentration doesn’t just hit wages. Monopolization hits innovation, small business formation, and regional inequality. Hospitals in concentrated markets have higher mortality rates, and concentration of lab capacity in the hands of LabCorp and Quest Diagnostics is likely even behind the Covid testing shortage. The problems induced by monopolization are virtually endless, because fundamentally corporate monopolies are a mechanism to strip people of power and liberty, and people without power and liberty do not flourish. Stoller seems not to have noticed that Cairó & Sim's model is not about reducing monopolization, but about ameliorating its bad effects. But he is right about the awkward politics of reducing it: Monopolization doesn’t fit neatly into any partisan box, because structuring markets is not about taxing and spending; it is about what happens before the tax system starts dealing with profits and revenue. It is about avoiding the need to spend on social welfare by preventing the impoverishment in the first place. Since the 1970s, American policymakers in both parties have believed in a philosophy in which markets are natural forums where buyers and sellers congregate, ideally free of politics. They stopped paying attention to the details of markets, because doing so was irrelevant to the goal of leaving market actors as remote as possible from the meddling hand of government. Such a view is profoundly at odds with the bipartisan American anti-monopoly tradition, a tradition in which most merchants and workers from the 18th century onward understood markets and chartered corporations as creatures of public policy organized for the convenience and liberty of the many. This shift fifty years ago had profound consequences. Leaders in both parties have come to believe that larger corporations are generally a good thing, as they reflect more efficient operations instead of reflecting the rise of market power enabled by policy choices. Posted by David. at 8:00 AM Labels: anti-trust 1 comment: David. said... 
Paul Krugman explains the contrast between pessimism about the economy and soaring tech stocks in this interesting thread: "And of course that's a good description of the tech giants whose stocks have soared most. So a good guess is that at least part of what's going on is that long-term pessimism has reduced interest rates, and this has *increased* the value of stocks issued by monopolists" September 2, 2020 at 7:09 AM
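Cairó and Sim's redistribution experiment, described in the post above, is easier to see with a toy calculation. The sketch below (all numbers invented, and nothing like the paper's actual model) only shows the mechanical effect the experiment relies on: a dividend tax collected from the 5 percent of "agents K" and spent on the 95 percent of "agents W" narrows per-capita income inequality without touching what the firms produce:

def per_capita_incomes(capital_income, labor_income, dividend_tax=0.30,
                       share_k=0.05, population=1_000):
    # Split a hypothetical economy's income between capital owners (agents K)
    # and workers (agents W), with a dividend tax on K transferred to W.
    n_k = population * share_k
    n_w = population * (1 - share_k)
    transfer = capital_income * dividend_tax        # taxed from K, spent on W
    income_k = (capital_income - transfer) / n_k
    income_w = (labor_income + transfer) / n_w
    return income_k, income_w

before = per_capita_incomes(40_000_000, 60_000_000, dividend_tax=0.0)
after = per_capita_incomes(40_000_000, 60_000_000, dividend_tax=0.30)
print("K/W income ratio, no tax :", round(before[0] / before[1], 1))   # ~12.7
print("K/W income ratio, 30% tax:", round(after[0] / after[1], 1))     # ~7.4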
blog-dshr-org-6796 ---- DSHR's Blog: Falling Research Productivity DSHR's Blog I'm David Rosenthal, and this is a place to discuss the work I'm doing in Digital Preservation. Tuesday, April 3, 2018 Falling Research Productivity Are Ideas Getting Harder to Find? by Nicholas Bloom et al looks at the history of investment in R&D and its effect on the product across several industries. Their main example is Moore's Law, and they show that [page 19]: research effort has risen by a factor of 18 since 1971. This increase occurs while the growth rate of chip density is more or less stable: the constant exponential growth implied by Moore’s Law has been achieved only by a massive increase in the amount of resources devoted to pushing the frontier forward. Assuming a constant growth rate for Moore’s Law, the implication is that research productivity has fallen by this same factor of 18, an average rate of 6.8 percent per year. If the null hypothesis of constant research productivity were correct, the growth rate underlying Moore’s Law should have increased by a factor of 18 as well. Instead, it was remarkably stable. Put differently, because of declining research productivity, it is around 18 times harder today to generate the exponential growth behind Moore’s Law than it was in 1971. Below the fold, some commentary on this and other relevant research. Actually, of course, in recent years Moore's Law has slowed as the technology gets closer and closer to the physical limits. This slowing increases the rate at which research productivity falls. The implications of their finding are disturbing for the economy as a whole [Page 44]: Taking the aggregate economy number as a representative example, research productivity declines at an average rate of 5.3 percent per year, meaning that it takes around 13 years for research productivity to fall by half. Or put another way, the economy has to double its research efforts every 13 years just to maintain the same overall rate of economic growth. Source The annual rate of change in US R&D expenditure is shown in the graph above. It was below 5.3% for 8 of the 17 years before 2016. Source It is therefore likely that inadequate R&D expenditure was a significant contributor to the approximate halving of the rate of growth of US GDP over the same period. The rate at which research productivity has fallen in semiconductors is significantly higher than in other areas of the economy (6.8% vs. 5.3%) [Page 46]: Research productivity for semiconductors falls so rapidly, not because that sector has the sharpest diminishing returns — the opposite is true. It is instead because research in that sector is growing more rapidly than in any other part of the economy, pushing research productivity down. A plausible explanation for the rapid research growth in this sector is the “general purpose” nature of information technology. Demand for better computer chips is growing so fast that it is worth suffering the declines in research productivity there in order to achieve the gains associated with Moore’s Law. Or even the smaller gains associated with growth significantly slower than Moore's Law. Industry projections for the Kryder rate of both SSDs and HDDs depend heavily on rapid progress in density, i.e. on the products of R&D investment. Flash is a very competitive market, and although hard disk is down to 2.5 manufacturers, which might suggest improving margins, hard disks are under sustained margin pressure from flash. 
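Before turning to the implications for storage below, the headline rates quoted from Bloom et al are easy to sanity-check. The arithmetic below assumes a 1971-2014 span for the factor-of-18 figure and continuous compounding, which is close to, though not necessarily identical with, the paper's convention:

import math

years = 2014 - 1971                        # assumed span for the factor-of-18 decline
rate = math.log(18) / years                # continuously compounded annual decline
print(round(rate * 100, 1))                # ~6.7% per year, vs the 6.8% quoted

halving_years = math.log(2) / 0.053        # aggregate-economy decline of 5.3%/year
print(round(halving_years, 1))             # ~13.1 years, the "around 13 years" quoted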
Thus falling research productivity has a particular impact on the future of storage, because neither the SSD nor the HDD markets can sustain the large increases in R&D spending needed to increase, or even sustain, their Kryder rates. Bloom et al bolsters the case for low and falling Kryder rates.

This paragraph on [Page 48] is perhaps the most interesting of the whole paper:

The only reason models with declining research productivity can sustain exponential growth in living standards is because of the key insight from [endogenous growth theory]: ideas are nonrival. And if research productivity were constant, sustained growth would actually not require that ideas be nonrival; Akcigit, Celik and Greenwood show that fully rivalrous ideas in a model with perfect competition can generate sustained exponential growth in this case. Our paper therefore clarifies that the fundamental contribution of endogenous growth theory is not that research productivity is constant or that subsidies to research can necessarily raise growth. Rather it is that ideas are different from all other goods in that they do not get depleted when used by more and more people. Exponential growth in research leads to exponential growth in [research expenditure]. And because of nonrivalry, this leads to exponential growth in per capita income.

It is a strong argument for open source and open science.

I believe that the problem of declining research productivity is related to "cost disease", as explained by Scott Alexander in Considerations on Cost Disease, which starts:

Tyler Cowen writes about cost disease. ... Cowen seems to use it indiscriminately to refer to increasing costs in general – which I guess is fine, goodness knows we need a word for that.

Alexander shows that inflation-adjusted costs have increased rapidly with no corresponding increase in output in several US areas, including:

K through 12 education: There was some argument about the style of this graph, but as per Politifact the basic claim is true. Per student spending has increased about 2.5x in the past forty years even after adjusting for inflation. At the same time, test scores have stayed relatively stagnant. You can see the full numbers here, but in short, high school students’ reading scores went from 285 in 1971 to 287 today – a difference of 0.7% (NB: not inflation-adjusted)

University education: Inflation-adjusted cost of a university education was something like $2000/year in 1980. Now it’s closer to $20,000/year. No, it’s not because of decreased government funding, and there are similar trajectories for public and private schools. I don’t know if there’s an equivalent of “test scores” measuring how well colleges perform, so just use your best judgment. Do you think that modern colleges provide $18,000/year greater value than colleges did in your parents’ day? Would you rather graduate from a modern college, or graduate from a college more like the one your parents went to, plus get a check for $72,000?

Per-capita health expenditure: The cost of health care has about quintupled since 1970. It’s actually been rising since earlier than that, but I can’t find a good graph; it looks like it would have been about $1200 in today’s dollars in 1960, for an increase of about 800% in those fifty years. ... This study attempts to directly estimate a %GDP health spending to life expectancy conversion, and says that an increase of 1% GDP corresponds to an increase of 0.05 years life expectancy.
That would suggest a slightly different number of 0.65 years life expectancy gained by healthcare spending since 1960.

Alexander writes:

I worry that people don’t appreciate how weird this is. I didn’t appreciate it for a long time. I guess I just figured that Grandpa used to talk about how back in his day movie tickets only cost a nickel; that was just the way of the world. But all of the numbers above are inflation-adjusted. These things have dectupled in cost even after you adjust for movies costing a nickel in Grandpa’s day. They have really, genuinely dectupled in cost, no economic trickery involved. And this is especially strange because we expect that improving technology and globalization ought to cut costs.

The fields Alexander uses as examples have a lot of human input, but they should have reaped significant cost benefits from technology and globalization. As he writes about health care:

Patients can now schedule their appointments online; doctors can send prescriptions through the fax, pharmacies can keep track of medication histories on centralized computer systems that interface with the cloud, nurses get automatic reminders when they’re giving two drugs with a potential interaction, insurance companies accept payment through credit cards – and all of this costs ten times as much as it did in the days of punch cards and secretaries who did calculations by hand.

Note that R&D is also a human-intensive business that should have reaped significant cost savings from technology and globalization. But like these other fields, it has increased massively in price. In fact, 18-fold instead of 10-fold. Cowen and Alexander are on to a really significant problem for the economy as a whole.

Posted by David. at 8:00 AM Labels: intellectual property, storage costs

5 comments:

Chris Rusbridge said...

Nice post again, David. Have you come across "Do the Math"? It is (was?) a blog (https://dothemath.ucsd.edu) by a UCSD physicist (Tom Murphy), about sustainable energy, or at least how to reduce the exponential growth in energy consumption. He hasn't posted for a long time, but something in what you wrote reminded me of his starting premise, which is that in a finite world exponential growth in resource consumption is impossible. OK, it's a very different case, but there seem some parallels. The Moore's Law issue should remind us that many of these "exponentials" are really the bottom part of an S curve, limited as we bump up against natural barriers. So as you get towards the top part of the S curve it shouldn't be surprising if research productivity drops off, even 18-fold. But surely the effect will be field-specific, related to the actual hard limits, so it's not appropriate to transfer the loss of research productivity from one (measurable) field to another?

As to health care costs, it does bug me how much costs have risen. You will no doubt be aware of the many crises afflicting our health care in the UK, and it appears US health care is not immune (and an even higher proportion of GDP). Annoyingly we see a lot of articles about costs through overuse of A&E, bed blocking because of lack of social care to move older people out of hospital, and other articles hammering the government about lack of funding (to which the government irrelevantly replies that they have increased funding). But I don't see analyses of where the cost pressures are coming from.
There have been many rounds of hospital closures and mergers, and other "efficiencies" so you'd think things would get better, but they seem to get worse. I would guess there are at least 3 other factors. One is that the endless rounds of "reforms" have greatly increased the "managerial" cost of the NHS, so that a decreasing proportion of staffing effort is directly involved in healthcare. Another (I guess) is price gouging by pharmaceutical companies (perhaps I should say profit maximisation rather than price gouging!). And the third would be the increasing use of very high cost health technologies, like proton beam scanners etc. These simply weren't available in the past, but are now (presumably with concomitant improvements in health outcomes), so you'd expect above-inflation cost increases to pay for them. There are efficiencies as you describe (electronic booking etc), but the NHS is slow to turn, and much stuff still happens the other way. My daughter works in healthcare on two sites; they don't have an electronic health record system, and frequently the paper records end up on the wrong site. Sometimes she drives to the other site to get them, sometimes they are sent in a taxi. Seems daft, but if you are as risk-averse as the NHS, perhaps not so surprising! Lost the text of your piece, an annoying Blogger feature, but I think those are the points I wanted to make! April 4, 2018 at 9:43 AM David. said... John Horgan posts Is Science Hitting a Wall? based in part on Are Ideas Getting Harder to Find? but is focused on the R rather than the D: "The economists are concerned primarily with what I would call applied science, the kind that fuels economic growth and increases wealth, health and living standards. Advances in medicine, transportation, agriculture, communication, manufacturing and so on. But their findings resonate with my claim in The End of Science that “pure” science—the effort simply to understand rather than manipulate nature--is bumping into limits. And in fact I was invited to The Session because an organizer had read my gloomy tract, which was recently republished." April 8, 2018 at 8:09 AM David. said... John Horgan's Is Science Hitting a Wall?, Part 2 is as good as part 1. He writes: "My last post, “Is Science Hitting a Wall?,” provoked lots of reactions. Some readers sent me other writings about diminishing returns from research. One is “Diagnosing the decline in pharmaceutical R&D efficiency,” published in Nature Reviews Drug Discovery in 2012. The paper is so clever, loaded with ideas and relevant to science as a whole that I’m summarizing its main points here." The paper points out that: "the number of new drugs approved per billion U.S. dollars spent on R&D has halved roughly every 9 years since 1950." April 18, 2018 at 10:02 AM David. said... Kelvin Stott's 2-part series Pharma's broken business model: Part 1: An industry on the brink of terminal decline and Part 2: Scraping the barrel in drug discovery uses a simple economic model to show that the Internal Rate of Return (IRR) of Pharma companies is already less than their cost of capital, and will become negative in 2020. Stott shows that this is a consequence of the Law of Diminishing Returns; because the most promising research avenues (i.e. the ones promising the greatest return) are pursued first, the returns on a research dollar decrease with time. May 5, 2018 at 7:53 PM David. said... 
The Economist has an interesting take on cost disease in The rising cost of education and health care is less troubling than believed: "The real culprit, the authors write, is a steady increase in the cost of labour—of teachers and doctors. That in turn reflects the relentless logic of Baumol’s cost disease, named after the late William Baumol, who first described the phenomenon. Productivity grows at different rates in different sectors. It takes far fewer people to make a car than it used to—where thousands of workers once filled plants, highly paid engineers now oversee factories full of robots—but roughly the same number of teachers to instruct a schoolful of children. Economists reckon that workers’ wages should vary with their productivity. But real pay has grown in high- and low-productivity industries alike. That, Baumol pointed out, is because teachers and engineers compete in the same labour market." The article concludes: "These possibilities reveal the real threat from Baumol’s disease: not that work will flow toward less-productive industries, which is inevitable, but that gains from rising productivity are unevenly shared. When firms in highly productive industries crave highly credentialed workers, it is the pay of similar workers elsewhere in the economy—of doctors, say—that rises in response. That worsens inequality, as low-income workers must still pay higher prices for essential services like health care. Even so, the productivity growth that drives cost disease could make everyone better off. But governments often do too little to tax the winners and compensate the losers." July 1, 2019 at 12:25 PM
blog-dshr-org-6944 ---- DSHR's Blog: Blockchain briefing for DoD

Tuesday, July 30, 2019

Blockchain briefing for DoD

I was asked to deliver Blockchain: What's Not To Like? version 3.0 to a Department of Defense conference-call. I took the opportunity to update the talk, and expand it to include some of the "Additional Material" from the original, and from the podcast. Below the fold, the text of the talk with links to the sources. The yellow boxes contain material that was on the slides but was not spoken.
[Slide 1] It’s one of these things that if people say it often enough it starts to sound like something that could work, Sadhbh McCarthy I'd like to thank Jen Snow for giving me the opportunity to talk about blockchain technology and cryptocurrencies. The text of my talk with links to the sources is up on my blog, so you don't need to take notes. There's been a supernova of hype about them. Almost everything positive you have heard is paid advertising, and should be completely ignored. Why am I any more credible? First, I'm retired. No-one is paying me to speak, and I have no investments in cryptocurrencies or blockchain companies. [Slide 2] This is not to diminish Nakamoto's achievement but to point out that he stood on the shoulders of giants. Indeed, by tracing the origins of the ideas in bitcoin, we can zero in on Nakamoto's true leap of insight—the specific, complex way in which the underlying components are put together. Bitcoin's Academic Pedigree, Arvind Narayanan and Jeremy Clark Second, I've been writing skeptically about cryptocurrencies and blockchain technology for more than five years. What are my qualifications for such a long history of pontification? Nearly sixteen years ago, about five years before Satoshi Nakamoto published the Bitcoin protocol, a cryptocurrency based on a decentralized consensus mechanism using proof-of-work, my co-authors and I won a "best paper" award at the prestigious SOSP workshop for a decentralized consensus mechanism using proof-of-work. It is the protocol underlying the LOCKSS system. The originality of our work didn't lie in decentralization, distributed consensus, or proof-of-work. All of these were part of the nearly three decades of research and implementation leading up to the Bitcoin protocol, as described by Arvind Narayanan and Jeremy Clark in Bitcoin's Academic Pedigree. Our work was original only in its application of these techniques to statistical fault tolerance; Nakamoto's only in its application of them to preventing double-spending in cryptocurrencies. We're going to start by walking through the design of a system to perform some function, say monetary transactions, storing files, recording reviewers' contributions to academic communication, verifying archival content, whatever. My goal is to show you how the pieces fit together in such a way that the problems the technology encounters in practice aren't easily fixable; they are inherent in the underlying requirements. Being of a naturally suspicious turn of mind, you don't want to trust any single central entity, but instead want a decentralized system. You place your trust in the consensus of a large number of entities, which will in effect vote on the state transitions of your system (the transactions, reviews, archival content, ...). You hope the good entities will out-vote the bad entities. In the jargon, the system is trustless (a misnomer). Techniques using multiple voters to maintain the state of a system in the presence of unreliable and malign voters were first published in The Byzantine Generals Problem by Lamport et al in 1982. Alas, Byzantine Fault Tolerance (BFT) requires a central authority to authorize entities to take part. In the blockchain jargon, it is permissioned. You would rather let anyone interested take part, a permissionless system with no central control. 
[Slide 3] In the case of blockchain protocols, the mathematical and economic reasoning behind the safety of the consensus often relies crucially on the uncoordinated choice model, or the assumption that the game consists of many small actors that make decisions independently. The Meaning of Decentralization, Vitalik Buterin, co-founder of Ethereum

The security of your permissionless system depends upon the assumption of uncoordinated choice, the idea that each voter acts independently upon its own view of the system's state. If anyone can take part, your system is vulnerable to Sybil attacks, in which an attacker creates many apparently independent voters who are actually under his sole control. If creating and maintaining a voter is free, anyone can win any vote they choose simply by creating enough Sybil voters.

[Slide 4] From a computer security perspective, the key thing to note ... is that the security of the blockchain is linear in the amount of expenditure on mining power, ... In contrast, in many other contexts investments in computer security yield convex returns (e.g., traditional uses of cryptography) ... analogously to how a lock on a door increases the security of a house by more than the cost of the lock. The Economic Limits of Bitcoin and the Blockchain, Eric Budish, Booth School, University of Chicago

So creating and maintaining a voter has to be expensive. Permissionless systems can defend against Sybil attacks by requiring a vote to be accompanied by a proof of the expenditure of some resource. This is where proof-of-work comes in, a concept originated by Cynthia Dwork and Moni Naor in 1992. In a BFT system, the value of the next state of the system is that computed by the majority of the nodes. In a proof-of-work system such as Bitcoin, the value of the next state of the system is that computed by the first node to solve a puzzle. There is no guarantee that any other node computed that value; BFT is a consensus system whereas Bitcoin-type systems select a winning node. Proof-of-work is a random process, but at scale the probability of being selected is determined by how quickly you can compute hashes. The idea is that the good voters will spend more on hashing power, and thus compute more useless hashes, than the bad voters.

[Slide 5] The blockchain trilemma. much of the innovation in blockchain technology has been aimed at wresting power from centralised authorities or monopolies. Unfortunately, the blockchain community’s utopian vision of a decentralised world is not without substantial costs. In recent research, we point out a ‘blockchain trilemma’ – it is impossible for any ledger to fully satisfy the three properties shown in [the diagram] simultaneously ... In particular, decentralisation has three main costs: waste of resources, scalability problems, and network externality inefficiencies. The economics of blockchains, Markus K Brunnermeier & Joseph Abadi, Princeton

Brunnermeier and Abadi's Blockchain Trilemma shows that a blockchain can have at most two of the following three attributes:

- correctness
- decentralization
- cost-efficiency

Obviously, your system needs the first two, so the third has to go. Running a voter (mining in the jargon) in your system has to be expensive if the system is to be secure. No-one will do it unless they are rewarded. They can't be rewarded in "fiat currency", because that would need some central mechanism for paying them. So the reward has to come in the form of coins generated by the system itself, a cryptocurrency.
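To make the proof-of-work mechanism just described concrete, here is a minimal toy sketch in Python. It is not Bitcoin's actual algorithm or difficulty encoding, just an illustration of why casting a vote must be expensive: the only way to find an acceptable nonce is brute-force hashing, and each extra difficulty bit doubles the expected work.

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Toy proof-of-work (not Bitcoin's real encoding): find a nonce such that
    SHA-256(header || nonce) has at least difficulty_bits leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# A vote is only accepted with this proof attached; Sybil voters are useless
# unless each of them also pays for the hashing.
print(mine(b"example block header", 16))
```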
To scale, permissionless systems need to be based on a cryptocurrency; the system's state transitions will need to include cryptocurrency transactions in addition to records of files, reviews, archival content, whatever. Your system needs names for the parties to these transactions. There is no central authority handing out names, so the parties need to name themselves. As proposed by David Chaum in 1981 they can do so by generating a public-private key pair, and using the public key as the name for the source or sink of each transaction. [Slide 6] we created a small Bitcoin wallet, placed it on images in our honeyfarm, and set up monitoring routines to check for theft. Two months later our monitor program triggered when someone stole our coins. This was not because our Bitcoin was stolen from a honeypot, rather the graduate student who created the wallet maintained a copy and his account was compromised. If security experts can't safely keep cryptocurrencies on an Internet-connected computer, nobody can. If Bitcoin is the "Internet of money," what does it say that it cannot be safely stored on an Internet connected computer? Risks of Cryptocurrencies, Nicholas Weaver, U.C. Berkeley In practice this is implemented in wallet software, which stores one or more key pairs for use in transactions. The public half of the pair is a pseudonym. Unmasking the person behind the pseudonym turns out to be fairly easy in practice. The security of the system depends upon the user and the software keeping the private key secret. This can be difficult, as Nicholas Weaver's computer security group at Berkeley discovered when their wallet was compromised and their Bitcoins were stolen. [Slide 7] 2-yr Bitcoin "price" history The capital and operational costs of running a miner include buying hardware, power, network bandwidth, staff time, etc. Bitcoin's volatile "price", high transaction fees, low transaction throughput, and large proportion of failed transactions mean that almost no legal merchants accept payment in Bitcoin or other cryptocurrency. Thus one essential part of your system is one or more exchanges, at which the miners can sell their cryptocurrency rewards for the "fiat currency" they need to pay their bills. Who is on the other side of those trades? The answer has to be speculators, betting that the "price" of the cryptocurrency will increase. Thus a second essential part of your system is a general belief in the inevitable rise in "price" of the coins by which the miners are rewarded. If miners believe that the "price" will go down, they will sell their rewards immediately, a self-fulfilling prophesy. Over time, permissionless blockchains require an inflow of speculative funds at an average rate greater than the current rate of mining rewards if the "price" is not to collapse. To maintain Bitcoin's price at $10K would require an inflow of $750K/hour, or about $5B from now until the next reward halving around May 20th 2020.  [Slide 8] Ether miners 07/09/19 can we really say that the uncoordinated choice model is realistic when 90% of the Bitcoin network’s mining power is well-coordinated enough to show up together at the same conference? The Meaning of Decentralization, Vitalik Buterin In order to spend enough to be secure, say $750K/hour, you need a lot of miners. It turns out that a third essential part of your system is a small number of “mining pools”. A year ago Bitcoin had the equivalent of around 3M Antminer S9s, and a block time of 10 minutes. 
Each S9, costing maybe $1K, could expect a reward about once every 60 years. It would be obsolete in about a year, so only 1 in 60 would ever earn anything. To smooth out their income, miners join pools, contributing their mining power and receiving the corresponding fraction of the rewards earned by the pool. These pools have strong economies of scale, so successful cryptocurrencies end up with a majority of their mining power in 3-4 pools. Each of the big pools can expect a reward every hour or so. These blockchains aren’t decentralized, but centralized around a few large pools. At multiple times in 2014 one mining pool controlled more than 51% of the Bitcoin mining power. At almost all times since 3-4 pools have controlled the majority of the Bitcoin mining power. Currently two of them, with 35.2% of the power, are controlled by Bitmain, the dominant supplier of mining ASICs. With the advent of mining-as-a-service, 51% attacks have become endemic among the smaller alt-coins. The security of a blockchain depends upon the assumption that these few pools are not conspiring together outside the blockchain; an assumption that is impossible to verify in the real world (and by Murphy's Law is therefore false). [Slide 9] Since then there have been other catastrophic bugs in these smart contracts, the biggest one in the Parity Ethereum wallet software ... The first bug enabled the mass theft from "multisignature" wallets, which supposedly required multiple independent cryptographic signatures on transfers as a way to prevent theft. Fortunately, that bug caused limited damage because a good thief stole most of the money and then returned it to the victims. Yet, the good news was limited as a subsequent bug rendered all of the new multisignature wallets permanently inaccessible, effectively destroying some $150M in notional value. This buggy code was largely written by Gavin Wood, the creator of the Solidity programming language and one of the founders of Ethereum. Again, we have a situation where even an expert's efforts fell short. Risks of Cryptocurrencies, Nicholas Weaver, U.C. Berkeley In practice the security of a blockchain depends not merely on the security of the protocol itself, but on the security of both the core software, and the wallets and exchanges used to store and trade its cryptocurrency. This ancillary software has bugs, such as last September's major vulnerability in Bitcoin Core, the Parity Wallet fiasco, the routine heists using vulnerabilities in exchange software, and the wallet that was sending user's pass-phrases to the Google spell-checker over HTTP. Who doesn't need their pass-phrase spell-checked? Recent game-theoretic analysis suggests that there are strong economic limits to the security of cryptocurrency-based blockchains. To guarantee safety, the total value of transactions in a block needs to be less than the value of the block reward, which kind of spoils the whole idea. Your system needs an append-only data structure to which records of the transactions, files, reviews, archival content, whatever are appended. It would be bad if the miners could vote to re-write history, undoing these records. In the jargon, the system needs to be immutable (another misnomer). [Slide 10] Merkle Tree (source) The necessary data structure for this purpose was published by Stuart Haber and W. Scott Stornetta in 1991. A company using their technique has been providing a centralized service of securely time-stamping documents for nearly a quarter of a century. 
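The essential property of this structure, carried over into blockchains, is that each record commits to the hash of its predecessor, so altering any earlier record invalidates every later one. A minimal sketch of the idea (my own toy example, not Haber and Stornetta's format or any real blockchain's):

```python
import hashlib, json, time

def append(chain: list, payload: str) -> dict:
    """Toy hash chain: each entry commits to the hash of its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "timestamp": time.time(), "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body

def verify(chain: list) -> bool:
    """Recompute every hash; tampering with any earlier entry breaks the links."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append(chain, "first record")
append(chain, "second record")
print(verify(chain))            # True
chain[0]["payload"] = "forged"
print(verify(chain))            # False: mutation is detectable, not impossible
```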
It is a form of Merkle or hash tree, published by Ralph Merkle in 1980. For blockchains it is a linear chain to which fixed-size blocks are added at regular intervals. Each block contains the hash of its predecessor, hence a chain of blocks. The blockchain is mutable, it is just rather hard to mutate it without being detected, because of the Merkle tree’s hashes, and easy to recover, because there are Lots Of Copies Keeping Stuff Safe. But this is a double-edged sword. Immutability makes systems incompatible with the GDPR, and immutable systems to which anyone can post information will be suppressed by governments.

[Slide 11] BTC transaction fees. Cryptokitties’ popularity exploded in early December and had the Ethereum network gasping for air. ... Ethereum has historically made bold claims that it is able to handle unlimited decentralized applications ... The Crypto-Kittie app has shown itself to have the power to place all network processing into congestion. ... at its peak [CryptoKitties] likely only had about 14,000 daily users. Neopets, a game to which CryptoKitties is often compared, once had as many as 35 million users. How Crypto-Kitties Disrupted the Ethereum Network, Open Trading Network

A user of your system wanting to perform a transaction, store a file, record a review, whatever, needs to persuade miners to include their transaction in a block. Miners are coin-operated; you need to pay them to do so. How much do you need to pay them? That question reveals another economic problem, fixed supply and variable demand, which equals variable "price". Each block is in effect a blind auction among the pending transactions.

So let's talk about CryptoKitties, a game that brought the Ethereum blockchain to its knees despite the bold claims that it could handle unlimited decentralized applications. How many users did it take to cripple the network? It was far fewer than non-blockchain apps can handle with ease; CryptoKitties peaked at about 14K users. NeoPets, a similar centralized game, peaked at about 2,500 times as many. CryptoKitties' average "price" per transaction spiked 465% between November 28 and December 12 as the game got popular, a major reason why it stopped being popular. The same phenomenon happened during Bitcoin's price spike around the same time. Cryptocurrency transactions are affordable only if no-one wants to transact; when everyone does they immediately become unaffordable.

Nakamoto's Bitcoin blockchain was designed only to support recording transactions. It can be abused for other purposes, such as storing illegal content. But it is likely that you need additional functionality, which is where Ethereum's "smart contracts" come in. These are fully functional programs, written in a JavaScript-like language, embedded in Ethereum's blockchain. They are mainly used to implement Ponzi schemes, but they can also be used to implement Initial Coin Offerings, games such as CryptoKitties, and gambling parlors. Further, in On-Chain Vote Buying and the Rise of Dark DAOs Philip Daian and co-authors show that "smart contracts" also provide for untraceable on-chain collusion in which the parties are mutually pseudonymous.

[Slide 12] ICO Returns. The first big smart contract, the DAO or Decentralized Autonomous Organization, sought to create a democratic mutual fund where investors could invest their Ethereum and then vote on possible investments.
Approximately 10% of all Ethereum ended up in the DAO before someone discovered a reentrancy bug that enabled the attacker to effectively steal all the Ethereum. The only reason this bug and theft did not result in global losses is that Ethereum developers released a new version of the system that effectively undid the theft by altering the supposedly immutable blockchain. Risks of Cryptocurrencies, Nicholas Weaver, U.C. Berkeley

"Smart contracts" are programs, and programs have bugs. Some of the bugs are exploitable vulnerabilities. Research has shown that the rate at which vulnerabilities in programs are discovered increases with the age of the program. The problems caused by making vulnerable software immutable were revealed by the first major "smart contract". The Decentralized Autonomous Organization (The DAO) was released on 30th April 2016, but on 27th May 2016 Dino Mark, Vlad Zamfir, and Emin Gün Sirer posted A Call for a Temporary Moratorium on The DAO, pointing out some of its vulnerabilities; it was ignored. Three weeks later, when The DAO contained about 10% of all the Ether in circulation, a combination of these vulnerabilities was used to steal its contents. The loot was restored by a "hard fork", the blockchain's version of mutability. Since then it has become the norm for "smart contract" authors to make them "upgradeable", so that bugs can be fixed. "Upgradeable" is another way of saying "immutable in name only".

[Slide 13] security researchers from SRLabs revealed that a large chunk of the Ethereum client software that runs on Ethereum nodes has not yet received a patch for a critical security flaw the company discovered earlier this year. "According to our collected data, only two thirds of nodes have been patched so far," said Karsten Nohl, ... "The Parity Ethereum has an automated update process - but it suffers from high complexity and some updates are left out," Nohl said. All of these issues put all Ethereum users at risk, and not just the nodes running unpatched versions. The number of unpatched nodes may not be enough to carry out a direct 51% attack, but these vulnerable nodes can be crashed to reduce the cost of a 51% attack on Ethereum, currently estimated at around $120,000 per hour. ... "The patch gap signals a deep-rooted mistrust in central authority, including any authority that can automatically update software on your computer." A large chunk of Ethereum clients remain unpatched, Catalin Cimpanu

It isn't just the "smart contracts" that need to be upgradeable, it is the core software for the blockchain. Bugs and vulnerabilities are inevitable. If you trust a central authority to update your software automatically, or if you don't but you think others do, what is the point of a permissionless blockchain?

[Slide 14] Permissionless systems trust:
- The core developers of the blockchain software not to write bugs.
- The developers of your wallet software not to write bugs.
- The developers of the exchanges not to write bugs.
- The operators of the exchanges not to manipulate the markets or to commit fraud.
- The developers of your upgradeable "smart contracts" not to write bugs.
- The owners of the smart contracts to keep their secret key secret.
- The owners of the upgradeable smart contracts to avoid losing their secret key.
- The owners and operators of the dominant mining pools not to collude.
- The operators of miners to apply patches in a timely manner.
- The speculators to provide the funds needed to keep the “price” going up.
- Users' ability to keep their secret key secret.
- Users’ ability to avoid losing their secret key.
- Other users not to transact when you want to.

So, this is the list of people your permissionless system has to trust if it is going to work as advertised over the long term. You started out to build a trustless, decentralized system but you have ended up with:

- A trustless system that trusts a lot of people you have every reason not to trust.
- A decentralized system that is centralized around a few large mining pools that you have no way of knowing aren’t conspiring together.
- An immutable system that either has bugs you cannot fix, or is not immutable.
- A system whose security depends on it being expensive to run, and which is thus dependent upon a continuing inflow of funds from speculators.
- A system whose coins are convertible into large amounts of "fiat currency" via irreversible pseudonymous transactions, which is thus an irresistible target for crime.

If the “price” keeps going up, the temptation for your trust to be violated is considerable. If the "price" starts going down, the temptation to cheat to recover losses is even greater. Maybe it is time for a re-think.

Suppose you give up on the idea that anyone can take part and accept that you have to trust a central authority to decide who can and who can’t vote. You will have a permissioned system. The first thing that happens is that it is no longer possible to mount a Sybil attack, so there is no reason running a node need be expensive. You can use BFT to establish consensus, as IBM’s Hyperledger, the canonical permissioned blockchain system, plans to. You need many fewer nodes in the network, and running a node just got way cheaper. Overall, the aggregated cost of the system got orders of magnitude cheaper. Now that there is a central authority, it can collect “fiat currency” for network services and use it to pay the nodes. No need for cryptocurrency, exchanges, pools, speculators, or wallets, so much less temptation for bad behavior.

[Slide 15] Permissioned systems trust:
- The central authority.
- The software developers.
- The owners and operators of the nodes.
- The secrecy of a few private keys.

This is now the list of entities you trust. Trusting a central authority to determine the voter roll has eliminated the need to trust a whole lot of other entities. The permissioned system is more trustless and, since there is no need for pools, the network is more decentralized despite having fewer nodes.

[Slide 16]

Faults  Replicas
1       4
2       7
3       10
4       13
5       16
6       19

a Byzantine quorum system of size 20 could achieve better decentralization than proof-of-work mining at a much lower resource cost. Decentralization in Bitcoin and Ethereum Networks, Adem Efe Gencer, Soumya Basu, Ittay Eyal, Robbert van Renesse and Emin Gün Sirer

How many nodes does your permissioned blockchain need? The rule for BFT is that 3f + 1 nodes can survive f simultaneous failures. That's an awful lot fewer than you need for a permissionless proof-of-work blockchain. What you get from BFT is a system that, unless it encounters more than f simultaneous failures, remains available and operating normally. The problem with BFT is that if it encounters more than f simultaneous failures, the state of the system is irrecoverable. If you want a system that can be relied upon for the long term you need a way to recover from disaster. Successful permissionless blockchains have Lots Of Copies Keeping Stuff Safe, so recovering from a disaster that doesn't affect all of them is manageable.
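To spell out the arithmetic behind the [Slide 16] table and the 3f + 1 rule, here is a minimal sketch. It is a generic illustration of the classic BFT bound, not Hyperledger's actual configuration:

```python
def bft_replicas(faults: int) -> int:
    """Classic Byzantine fault tolerance: 3f + 1 replicas tolerate f
    simultaneous Byzantine failures."""
    return 3 * faults + 1

def bft_quorum(faults: int) -> int:
    """Quorums of 2f + 1 guarantee that any two quorums overlap in at least
    one honest replica, which is what makes agreement possible."""
    return 2 * faults + 1

# Reproduces the Faults/Replicas table above, plus the quorum size.
for f in range(1, 7):
    print(f, bft_replicas(f), bft_quorum(f))
```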
[Slide 17]

So in addition to implementing BFT you need to back up the state of the system each block time, ideally to write-once media so that the attacker can't change it. But if you're going to have an immutable backup of the system's state, and you don't need continuous uptime, you can rely on the backup to recover from failures. In that case you can get away with, say, 2 replicas of the blockchain in conventional databases, saving even more money.

I've shown that, whatever consensus mechanism they use, permissionless blockchains are not sustainable for very fundamental economic reasons. These include the need for speculative inflows and mining pools, security linear in cost, economies of scale, and fixed supply vs. variable demand. Proof-of-work blockchains are also environmentally unsustainable. The top 5 cryptocurrencies are estimated to use as much energy as The Netherlands. This isn't to take away from Nakamoto's ingenuity; proof-of-work is the only consensus system shown to work well for permissionless blockchains. The consensus mechanism works, but energy consumption and emergent behaviors at higher levels of the system make it unsustainable.

[Slide 18] Mentions in S&P500 quarterlies. Still new to NYC, but I met this really cool girl. Energy sector analyst or some such. Four dates in, she uncovers my love for BitCoin. Completely ghosted. Zack Voell

S&P500 companies are slowly figuring out that there is no there there in blockchains and cryptocurrencies, and they're not the only ones.

So if both permissionless and permissioned blockchains are fatally flawed, and experts in both cryptography and economics have been saying so for many years, how come they are generally perceived as huge successes? The story starts in the early 80s with David Chaum. His work on privacy was an early inspiration for the cypherpunks. Many of the cypherpunks were libertarians, so the idea of money not controlled by governments was attractive. But Chaum's pioneering DigiCash was centralized, a fatal flaw in their eyes. It would be two decades before the search for a practical decentralized cryptocurrency culminated with Nakamoto's Bitcoin.

[Slide 19] Bitcoin failed at every one of Nakamoto's aspirations here. The price is ridiculously volatile and has had multiple bubbles; the unregulated exchanges (with no central bank backing) front-run their customers, paint the tape to manipulate the price, and are hacked or just steal their user's funds; and transaction fees and the unreliability of transactions make micropayments completely unfeasible. David Gerard

A parallel but less ideological thread was the idea that the business model for the emerging Internet was micropayments. This was among the features Nakamoto touted for Bitcoin in early 2009, despite the idea having been debunked by Clay Shirky in 2000 and Andrew Odlyzko in 2003. In fact, none of Nakamoto's original goals worked out in practice. But Nakamoto was not just extremely clever in the way he assembled the various component technologies into a cryptocurrency, he also had exceptionally good timing. His paper was posted on 31st October 2008, and met three related needs:

- Just 40 days earlier, on 15th September 2008, Lehman Brothers had gone bankrupt, precipitating the Global Financial Crisis (the GFC).
- The GFC greatly increased the demand for flight capital in China.
- Mistaking pseudonymity for anonymity, vendors and customers on the dark web found Bitcoin a reassuring means of exchange.
A major reason Bitcoin was attractive to the libertarian cypherpunks was that many were devotees of the Austrian economics cult. Because there would only ever be 21 million Bitcoin, they believed that, like gold, the price would inevitably increase. Consider a currency whose price is doomed to increase. It is a mechanism for transferring wealth from later adopters, called suckers, to early adopters, called geniuses. And the cypherpunks were nothing if not early adopters of technology. Sure enough, a few of the geniuses turned into "whales", HODL-ing the vast majority of the Bitcoin. The Gini coefficient of cryptocurrencies is an interesting research question; it is huge but probably less than Nouriel Roubini's claim of 0.86. The whales needed to turn large amounts of cryptocurrency in their wallets into large numbers in a bank account denominated in "fiat currency". To do this they needed to use an exchange to sell cryptocurrency to a sucker for dollars, and then transfer the dollars from the exchange's bank account into their bank account. [Slide 20] We’ve had banking hiccups in the past, we’ve just always been able to route around it or deal with it, open up new accounts, or what have you … shift to a new corporate entity, lots of cat and mouse tricks. Phil Potter of the Bitfinex exchange. FOWLER opened numerous U.S.-based business bank accounts at several different banks, and in opening and using these accounts FOWLER and YOSEF falsely represented to those banks that the accounts would be primarily used for real estate investment transactions even though FOWLER and YOSEF knew that the accounts would be used, and were in fact used, by FOWLER, YOSEF and others to transmit funds on behalf of an unlicensed money transmitting business related to the operation of cryptocurrency exchanges. US vs. Reginald Fowler and Ravid Yosef For the exchange to have a bank account, it had to either conform to or evade the "Know Your Customer/Anti-Money Laundering" laws. The whole point of cryptocurrencies is to avoid dealing with banks and laws such as KYC/AML, so most exchanges chose to evade KYC/AML by a cat-and-mouse game of fraudulent accounts. Once the banks caught on to the cat-and-mouse game, most exchanges could not trade cryptocurrency for fiat currency. To continue, they needed a "stablecoin", a cryptocurrency fixed against the US dollar as a substitute for actual dollars. The guys behind Bitfinex, one of the sketchier exchanges, invented Tether. They claimed their USDT was backed one-for-one by USD, promising an audit would confirm this. But before an audit appeared they fired their auditors. Earlier this year, after the New York Attorney General sued them, they claimed it was 74% backed by USD (except when they accidentally create 5 billion USDT), and revealed an 850M USD hole in Bitfinex' accounts. [Slide 21] "approximately 95% of this volume is fake and/or non-economic in nature, and that the real market for bitcoin is significantly smaller, more orderly, and more regulated than commonly understood." Bitwise Asset Management's detailed comments to the SEC about BTC/USDT trading on unregulated exchanges. According to Blockchain.info, about $417m worth of bitcoin was traded on Friday on the main dollar-based exchanges. Which sounds decent until you notice that about $37bn worth of Tether was traded on Friday, according to CoinMarketCap. Jemima Kelly, FT Alphaville There were many USDT exchanges, and competition was intense. 
Customers wanted the exchange with the highest volume for their trades, so these exchanges created huge volumes of wash trades to inflate their volume. Around 95% of all cryptocurrency trades are fake. [Slide 22] "An upset Mt. Gox creditor analyses the data from the bankruptcy trustee’s sale of bitcoins. He thinks he’s demonstrated incompetent dumping by the trustee — but actually shows that a “market cap” made of 18 million BTC can be crashed by selling 60,000 BTC, over months, at market prices, which suggests there is no market." David Gerard. Because there was so little real trading between cryptocurrencies and USD, trades of the size the whales needed would crash the price. It was thus necessary to pump the price before even part of their HODLings could be dumped on the suckers. [Slide 23] P&Ds have dramatic short-term impacts on the prices and volumes of most of the pumped tokens. In the first 70 seconds after the start of a P&D, the price increases by 25% on average, trading volume increases 148 times, and the average 10-second absolute return reaches 15%. A quick reversal begins 70 seconds after the start of the P&D. ... For an average P&D, investors make one Bitcoin (about $8,000) in profit, approximately one-third of a token’s daily trading volume. The trading volume during the 10 minutes before the pump is 13% of the total volume during the 10 minutes after the pump. This implies that an average trade in the first 10 minutes after a pump has a 13% chance of trading against these insiders and on average they lose more than 2% (18%*13%). Cryptocurrency Pump-and-Dump Schemes Tao Li, Donghwa Shin and Baolian Wang Off-chain collusion among cryptocurrency traders allows for extremely profitable pump-and-dump schemes, especially given the thin trading in "alt-coins". But the major pumps, such as the one currently under way, come from the creation of huge volumes of USDT, in this case about one billion USDT per month. [Slide 24] Issuance of USDT [In April] there were about $2 billion worth of tethers on the market. Since then, Tether has gone on a frenzied issuance spree. In the month of May, the stablecoin company pumped out $1 billion worth of tethers into the market. And this month, it is on track for another $1 billion. Currently, there are roughly $3.8 billion worth of tethers sloshing around in the bitcoin markets. Whether this money is backed by real dollars is anyone's guess. Amy Castor Who would believe that pushing a billion "dollars" a month that can only be used to buy cryptocurrency into the market might cause people to buy cryptocurrency and drive the price up? If we believe Bitfinex that the 26% of USDT that isn't USD is in cryptocurrencies, that might provide a motive for a massive pump to recover, say, 850M USD in losses. I want to end by talking about a technology with important implications for software supply chain security that looks like, but isn't a blockchain. [Slide 25] A green padlock (with or without an organization name) indicates that: You are definitely connected to the website whose address is shown in the address bar; the connection has not been intercepted. The connection between Firefox and the website is encrypted to prevent eavesdropping. How do I tell if my connection to a website is secure? Mozilla How do I know that I'm talking to the right Web site? Because there's a closed padlock icon in the URL bar, right? 
The padlock icon appears when the browser has verified that the connection to the URL in the URL bar supplied a certificate for the site in question carrying a signature chain ending in one of the root certificates the browser trusts. Browsers come with a default list of root certificates from Certificate Authorities (CAs). My current Firefox browser trusts 133 root certificates from 72 unique organizations, among them foreign governments but not the US government. Some of the organizations whose root certificates my browser trusts are known to have abused this trust, allowing miscreants to impersonate sites, spy on users and sign malware so it appears to be coming from, for example, Microsoft or Apple.

[Slide 26] A crucial technical property of the HTTPS authentication model is that any CA can sign certificates for any domain name. In other words, literally anyone can request a certificate for a Google domain at any CA anywhere in the world, even when Google itself has contracted one particular CA to sign its certificate. Security Collapse in the HTTPS Market, Axel Arnbak et al

For example, Google discovered that "Symantec CAs have improperly issued more than 30,000 certificates". But browsers still trust Symantec CAs; their market share is so large the Web would collapse if they didn't. As things stand, clients have no way of knowing whether the root of trust for a certificate, say for the Library of Congress, is the one the Library intended, or a spoof from some CA in Turkey or China.

In 2012 Google started work on an approach based on Ronald Reagan's "trust but verify" paradigm, called Certificate Transparency (CT). The basic idea is to accompany the certificate with a hash of the certificate signed by a trusted third party, attesting that the certificate holder told the third party that the certificate with that hash was current. Thus in order to spoof a service, an attacker would have to both obtain a fraudulent certificate from a CA, and somehow persuade the third party to sign a statement that the service had told them the fraudulent certificate was current. Clearly this is both more secure than the current situation, which requires only compromising a CA, and more effective than client-only approaches, which can detect that a certificate has changed but not whether the change was authorized.

Clients now need two lists of trusted third parties, the CAs and the sources of CT attestations. The need for these trusted third parties is where the blockchain enthusiasts would jump in and claim (falsely) that using a blockchain would eliminate the need for trust. In the real world it isn't feasible to solve the problem of untrustworthy CAs by eliminating the need for trust. CT's approach instead is to provide a mechanism by which breaches of trust, both by the CAs and by the attestors, can be rapidly and unambiguously detected. This can be done because:

- Certificate owners obtain attestations from multiple sources, who are motivated not to conspire.
- Clients can verify these multiple attestations.
- The attestors publish Merkle trees of their attestations, which can be verified by their competitors.

[Slide 27] Each log operates independently. Each log gets its content directly from the CAs, not via replication from other logs. Each log contains a subset of the total information content of the system. There is no consensus mechanism operating between the logs, so it cannot be abused by, for example, a 51% attack.
Monitoring and auditing is asynchronous to Web content delivery, so denial of service against the monitors and auditors cannot prevent clients obtaining service. Certificate Transparency David S. H. Rosenthal How do I know I'm running the right software, and no-one has implanted a backdoor? Right now, there is no equivalent of CT for the signatures that purport to verify software downloads, and this is one reason for the rash of software supply chain attacks. The open source community has a long-standing effort to use CT-like techniques not merely to enhance the reliability of the signatures on downloads, but more importantly to verify that a binary download was compiled from the exact source code it claims to represent. The reason this project is taking a long time is that it is woefully under-funded, and it is a huge amount of work. It depends on ensuring that the build process for each package is reproducible, so that given the source code and the build specification, anyone can run the build and generate bit-for-bit identical results. To give you some idea of how hard this is, the UK government has been working with Huawei since 2015 to make their router software builds reproducible so they know the binaries running in the UK's routers match the source code Huawei disclosed. Huawei expects to finish this program in 2024. With a few million dollars in funding, in a couple of years the open source community could finish making the major Linux distributions reproducible and implement CT-like assurance that the software you were running matched the source code in the repositories, with no hidden backdoors. I would think this would be something the DoD would be interested in. Thank you for your attention, I'm ready for questions. Posted by David. at 8:30 AM Labels: bitcoin 51 comments: David. said... The topics of the questions I remember were: 1) Use of cryptocurrency for money laundering and terrorism funding. 2) Enforcement actions by governments. 3) Use of blockchain technology by major corporations. 4) PR for libertarian politics by cryptocurrency HODLers. 5) Relative security of decentralized vs. centralized blockchains. I will add shortly comments addressing them, with links to sources. July 30, 2019 at 10:48 AM David. said... 1) Use of cryptocurrency for money laundering and terrorism funding. In general it is a bad idea to commit crimes using an immutable public blockchain. Pseudonymous blockchains such as Bitcoin's require extremely careful op-sec if the pseudonym is not to be linked to Web trackers and cookies (see, for example, When the cookie meets the blockchain: Privacy risks of web payments via cryptocurrencies by Steven Goldfeder et al). There are cryptocurrencies with stronger privacy features, such as Zcash and Monero. These are more popular among malefactors than Bitcoin. But turning cryptocurrencies into fiat currency with which to buy your Lamborghini while remaining anonymous faces difficulties. Users of exchanges that observe KYC/AML, such as Coinbase, will need to explain the source of funds to the tax authorities. The IRS recently sent letters to Coinbase users reminding them of their obligation to report the gains and losses on every single transaction. North Korea is reputed to be very active in stealing cryptocurrency via exchange hacks and other techniques. 2) Enforcement actions by governments. See my post Regulating Cryptocurrencies and the comments to it. 3) Use of blockchain technology by major corporations. 
To give you some idea of how hard this is, the UK government has been working with Huawei since 2015 to make their router software builds reproducible so they know the binaries running in the UK's routers match the source code Huawei disclosed. Huawei expects to finish this program in 2024. With a few million dollars in funding, in a couple of years the open source community could finish making the major Linux distributions reproducible and implement CT-like assurance that the software you were running matched the source code in the repositories, with no hidden backdoors. I would think this would be something the DoD would be interested in.

Thank you for your attention, I'm ready for questions.

Posted by David. at 8:30 AM Labels: bitcoin

51 comments:

David. said...
The topics of the questions I remember were:
1) Use of cryptocurrency for money laundering and terrorism funding.
2) Enforcement actions by governments.
3) Use of blockchain technology by major corporations.
4) PR for libertarian politics by cryptocurrency HODLers.
5) Relative security of decentralized vs. centralized blockchains.
I will add shortly comments addressing them, with links to sources.
July 30, 2019 at 10:48 AM

David. said...
1) Use of cryptocurrency for money laundering and terrorism funding. In general it is a bad idea to commit crimes using an immutable public blockchain. Pseudonymous blockchains such as Bitcoin's require extremely careful op-sec if the pseudonym is not to be linked to Web trackers and cookies (see, for example, When the cookie meets the blockchain: Privacy risks of web payments via cryptocurrencies by Steven Goldfeder et al). There are cryptocurrencies with stronger privacy features, such as Zcash and Monero. These are more popular among malefactors than Bitcoin. But turning cryptocurrencies into fiat currency with which to buy your Lamborghini while remaining anonymous faces difficulties. Users of exchanges that observe KYC/AML, such as Coinbase, will need to explain the source of funds to the tax authorities. The IRS recently sent letters to Coinbase users reminding them of their obligation to report the gains and losses on every single transaction. North Korea is reputed to be very active in stealing cryptocurrency via exchange hacks and other techniques.

2) Enforcement actions by governments. See my post Regulating Cryptocurrencies and the comments to it.

3) Use of blockchain technology by major corporations. See, for example, Blockchain for International Development: Using a Learning Agenda to Address Knowledge Gaps by John Burg, Christine Murphy, & Jean Paul Pétraud. And this, from David Gerard: "Bundesbank and Deutsche Boerse try settlements on the blockchain. You’ll be amazed to hear that it was slower and more expensive. “Despite numerous tests of blockchain-based prototypes, a real breakthrough in application is missing so far.” But at least it “in principle fulfilled all basic regulatory features for financial transactions.”"
July 30, 2019 at 2:53 PM

David. said...
4) PR for libertarian politics by cryptocurrency HODLers. John McAfee is running for US President. See also Laurie Penny's must-read Four Days Trapped at Sea With Crypto’s Nouveau Riche.

5) Relative security of decentralized vs. centralized blockchains. As I described above, at scale anything claiming to be a "decentralized blockchain" isn't going to be decentralized. Economic forces will have centralized it around a small number of mining pools. See Decentralization in Bitcoin and Ethereum Networks by Adem Efe Gencer, Soumya Basu, Ittay Eyal, Robbert van Renesse and Emin Gün Sirer. Its security will depend upon those pools not conspiring together, among many other things (Slide 14). Centralized, permissioned blockchains have fewer vulnerabilities, but their central authority is a single point of failure. IIRC the questioner used the phrase "100% secure". No networked computer system is ever 100% secure.

6) I seem to remember also a question on pump-and-dump schemes. The current pump is via Tether. Social Capital has a series explaining Tether and the "stablecoin" scam:
* Pumps, spoofs and boiler rooms
* Tether, Part One: The Stablecoin Dream
* Tether, Part Two: PokeDEx
* Tether, Part Three: Crypto Island
July 30, 2019 at 3:52 PM

David. said...
North Korea took $2 billion in cyberattacks to fund weapons program: U.N. report by Michelle Nichols reports that: "North Korea has generated an estimated $2 billion for its weapons of mass destruction programs using “widespread and increasingly sophisticated” cyberattacks to steal from banks and cryptocurrency exchanges, according to a confidential U.N. report seen by Reuters on Monday."
August 5, 2019 at 6:27 PM

David. said...
Timothy B. Lee debunks the idea of Bitcoin for purchases in I tried to pay with bitcoin at a Mexico City bar—it didn’t go well: "So we gave up and paid with a conventional credit card. After leaving the bar, I sent off an email to the support address listed on my receipt. The next morning, I got a response: "Transactions under [1,000 pesos] are taking a day to two, in the course of today they will reach the wallet." I finally got my bitcoins around 6pm." The bar is called Bitcoin Embassy: "Does Bitcoin Embassy pay its employees in bitcoin? "I always tell them I can pay you in bitcoin if you want to, but they don't want to," Ortiz says."
August 7, 2019 at 7:21 AM

David. said...
Clare Duffy reports that The Fed is getting into the real-time payments business: "The Fed announced Monday that it will develop a real-time payment service called "FedNow" to help move money around the economy more quickly. It's the kind of government service that companies and consumers have been requesting for years — one that already exists in other countries. The service could also compete with solutions already developed in the private sector by big banks and tech companies.
The Fed itself is not setting up a consumer bank, but it has always played a behind-the-scenes role facilitating the movement of money between banks and helping to verify transactions. This new system would help cut down on the amount of time between when money is deposited into an account and when it is available for use. FedNow would operate all hours and days of the week, with an aim to launch in 2023 or 2024. " "Real-time payments" are something that enthusiasts see as a competitive edge for cryptocurrencies against fiat currencies. This is strange, for two reasons: 1) In most countries except the US, instantaneous inter-bank transfers have been routine for years. But the enthusiasts are so ignorant of the way the world outside the US works that they don't know this. Similar ignorance was evident in the way Facebook thought that Libra would "bank the unbanked" in the third world. 2) Cryptocurrency transfers are not in practice real-time. Bitcoin users are advised to wait 6 block times (one hour) before treating a transaction as confirmed. August 7, 2019 at 6:12 PM David. said... Jemima Kelly's When bitcoin bros talk cryptography provides an excellent example of the hype surrounding cryptocurrencies. Anthony Pompliano, a "crypto fund manager" who "has over half his net worth in bitcoin" was talking (his book) to CNBC and: "When one of the CNBC journalists put it to Pomp that just because bitcoin is scarce that doesn’t necessarily make it valuable, as “there are a lot of things that are scarce that nobody cares about”, Pomp said:     Of course. Look, if you don’t believe in bitcoin, you’re essentially saying you don’t believe in cryptography. Have a watch for yourself here (and count the seconds it takes for the others to recover from his comment, around the 4.33 mark):" The video is here. August 11, 2019 at 1:17 PM David. said... More on the Fed's real-time payment proposal in The Fed is going to revamp how Americans pay for things. Big banks aren’t happy from MIT Technology Review. August 12, 2019 at 7:43 AM David. said... Trail of Bits has released: "findings from the full final reports for twenty-three paid security audits of smart contract code we performed, five of which have been kept private. The public audit reports are available online, and make informative reading. We categorized all 246 smart-contract related findings from these reports" The bottom line is that smart contracts are programs, and programs have bugs. Using current automated tools can find some but not all of them. August 15, 2019 at 2:05 PM David. said... Brenna Smith's The Evolution Of Bitcoin In Terrorist Financing makes interesting and somewhat scary reading: "Terrorists’ early attempts at using cryptocurrencies were filled with false starts and mistakes. However, terrorists are nothing if not tenacious, and through these mistakes, they’ve grown to have a highly sophisticated understanding of blockchain technology. This investigation outlines the evolution of terrorists’ public bitcoin funding campaigns starting from the beginning and ending with the innovative solutions various groups have cooked up to make the technology work in their favor." August 15, 2019 at 2:10 PM David. said... Larry Cermak has a Twitter thread that starts: "It’s now obvious that ICOs were a massive bubble that's unlikely to ever see a recovery. The median ICO return in terms of USD is -87% and constantly dropping. Let's look at some data!" Hat tip to David Gerard. August 27, 2019 at 1:09 PM David. said... 
The abstract for the European Central Bank's In search for stability in crypto-assets: are stablecoins the solution? reads: "Stablecoins claim to stabilise the value of major currencies in the volatile crypto-asset market. This paper describes the often complex functioning of different types of stablecoins and proposes a taxonomy of stablecoin initiatives. To this end it relies on a novel framework for their classification, based on the key dimensions that matter for crypto-assets, namely: (i) accountability of issuer, (ii) decentralisation of responsibilities, and (iii) what underpins the value of the asset. The analysis of different types of stablecoins shows a trade-off between the novelty of the stabilisation mechanism used in an initiative (from mirroring the traditional electronic money approach to the alleged introduction of an “algorithmic central bank”) and its capacity to maintain a stable market value. While relatively less innovative stablecoins could provide a solution to users seeking a stable store of value, especially if legitimised by the adherence to standards that are typical of payment services, the jury is still out on the potential future role of more innovative stablecoins outside their core user base." August 30, 2019 at 8:26 AM David. said... David Gerard writes: "Tethers as ERC-20 tokens on the Ethereum blockchain are so popular that they’re flooding Ethereum with transactions, and clogging the blockchain — “Yesterday I had to wait 1 and half hours for a standard transfer to go through.” Ethereum is the World Computer, as long as you don’t try to use it for any sort of real application. Another 100 million tethers were also printed today." September 2, 2019 at 8:28 PM David. said... David Gerard has been researching Libra, and has two posts up on the topic. Today's is Switzerland’s guidance on stablecoins — what it means for Facebook’s Libra: "Libra will need to register as a bank and as a payment provider (a money transmitter). It probably won’t need to register as a collective investment scheme for retail investors. FINMA notes explicitly: “The highest international anti-money laundering standards would need to be ensured throughout the entire ecosystem of the project” — and that Libra in particular requires an “internationally coordinated approach.” So the effective consequence is that Libra will be a coin for well-documented end users in highly regulated rich countries, and not so available in poorer ones." Yesterday's was Your questions about Facebook Libra — as best as we can answer them as yet (my emphasis): "As I write this, calibra.com, the big splash page for Calibra, doesn’t work in Firefox — only in Chrome. This is how companies behave toward products they don’t really take seriously. Facebook also forgot to buy the obvious typo, colibra.com — which is a domain squatter holding page." Facebook is under mounting anti-trust pressure, both in the US and elsewhere, and it is starting to look like cost-of-doing-business fines are no longer the worst that can happen. My take on Libra is that Facebook is floating it as a bargaining chip - in the inevitable negotiations on enforcement measures Facebook can sacrifice Libra to protect more valuable assets. September 11, 2019 at 2:13 PM David. said... Claire Jones and Izabella Kaminska's Libra is imperialism by stealth points out that, in practice, currency-backed stablecoins like Libra and (74% of) Tether are tied to the US dollar. 
Argentina and Zimbabwe are just two examples showing how bad an idea dollarizing your economy is: "A common criticism against dollarisation (and currency blocs) is that they are a form of neocolonialism, handing global powers -- whether they are states or tech behemoths -- another means of exercising control over more vulnerable players. Stablecoins backed by dollar assets are part of the same problem, which is why we believe their adoption in places like Argentina would constitute imperialism by stealth." September 13, 2019 at 9:52 AM David. said... Dan Goodin writes about a statement from the US Treasury announcing sanctions against 3 North Korean hacking groups: "North Korean hacking operations have also targeted virtual asset providers and cryptocurrency exchanges, possibly in an attempt to obfuscate revenue streams used to support the countries weapons programs. The statement also cited industry reports saying that the three North Korean groups likely stole about $571 million in cryptocurrency from five exchanges in Asia between January 2017 and September 2018. News agencies including Reuters have cited a United Nations report from last month that estimated North Korean hacking has generated $2 billion for the country’s weapons of mass destruction programs." September 13, 2019 at 6:52 PM David. said... Tether slammed as “part-fraud, part-pump-and-dump, and part-money laundering” by Jemima Kelly suggests some forthcoming increase in transparency about Tether: "a class-action lawsuit was filed against Tether, Bitfinex (a sister crypto exchange), and a handful of others. The suit was made public on Monday, having been filed on Saturday in Court of the Southern District of New York by Vel Freedman and Kyle Roche. Notably, they are the same lawyers who recently (and successfully) sued Craig Wright on behalf of Ira Kleiman." October 8, 2019 at 1:35 AM David. said... The abstract of Cryptodamages: Monetary value estimates of the air pollution and human health impacts of cryptocurrency mining by Goodkind et al reads: "Cryptocurrency mining uses significant amounts of energy as part of the proof-of-work time-stamping scheme to add new blocks to the chain. Expanding upon previously calculated energy use patterns for mining four prominent cryptocurrencies (Bitcoin, Ethereum, Litecoin, and Monero), we estimate the per coin economic damages of air pollution emissions and associated human mortality and climate impacts of mining these cryptocurrencies in the US and China. Results indicate that in 2018, each $1 of Bitcoin value created was responsible for $0.49 in health and climate damages in the US and $0.37 in China. The similar value in China relative to the US occurs despite the extremely large disparity between the value of a statistical life estimate for the US relative to that of China. Further, with each cryptocurrency, the rising electricity requirements to produce a single coin can lead to an almost inevitable cliff of negative net social benefits, absent perpetual price increases. For example, in December 2018, our results illustrate a case (for Bitcoin) where the health and climate change “cryptodamages” roughly match each $1 of coin value created. We close with discussion of policy implications." October 8, 2019 at 9:27 AM David. said... Ian Allison's Foreign Exchange Giant CLS Admits: No, We Don’t Need a Blockchain for That starts: "Blockchain technology is nice to have, but it’s hardly a must for rewiring the global financial markets. 
So says Alan Marquard, chief strategy and development officer at CLS Group, the global utility for settling foreign exchange trades, owned by the 71 largest banks active in that market. Nearly a year ago, it went live with CLSNet, touted as “the first global FX market enterprise application running on blockchain in production,” with megabanks Goldman Sachs, Morgan Stanley, and Bank of China (Hong Kong) on board. CLSNet was built on Hyperledger Fabric, the enterprise blockchain platform developed by IBM. But a blockchain was not the obvious solution for netting down high volumes of FX trades in 120 currencies, Marquard said recently." October 15, 2019 at 12:52 PM David. said... Preston Byrne's Fear and Loathing on the Blockchain: Leibowitz et al. v. iFinex et al. is a must-read summary of the initial pleadings in the civil case just filed against Tether and Bitfinex. Byrne explains how the risks for the defendants are different from earlier legal actions: "Being a civil case, protections Bitfinex might be able to rely on in other contexts, such as the Fourth Amendment in any criminal action, arguing that the Martin Act doesn't confer jurisdiction over Bitfinex's activities, or arguing that an administrative subpoena served on it by the New York Attorney General is overbroad, won't apply here. Discovery has the potential to be broader and deeper than Bitfinex has shown, to date, that it is comfortable with. The consequence of defaulting could be financially catastrophic. The burden of proof is lower, too, than it would be with a criminal case (balance of probabilities rather than beyond a reasonable doubt)." October 16, 2019 at 4:45 PM David. said... David Gerard provides some good advice: "If you’re going to do crimes, don’t do them on a permanent immutable public ledger of all transactions — and especially, don’t do crimes reprehensible enough that everyone gets together to come after you." From the Chainalysis blog: "Today, the Department of Justice announced the shutdown of the largest ever child pornography site by amount of material stored, along with the arrest of its owner and operator. More than 337 site users across 38 countries have also been arrested so far. Most importantly, as of today, at least 23 minors were identified and rescued from their abusers as a result of this investigation. U.S. Attorney Jessie K. Liu put it best: “Children around the world are safer because of the actions taken by U.S. and foreign law enforcement to prosecute this case and recover funds for victims.” Commenting on the investigation itself, IRS-Criminal Investigations Chief Don Fort mentioned the importance of the sophisticated tracing of bitcoin transactions in order to identify the administrator of the website. We’re proud to say that Chainalysis products provided assistance in this area, helping investigators analyze the website’s cryptocurrency transactions that ultimately led to the arrests. ... When law enforcement shut down the site, they siezed over 8 terabytes of child pornography, making it one of the largest siezures of its kind. The site had 1.3 million Bitcoin addresses registered. Between 2015 and 2018, the site received nearly $353,000 worth of Bitcoin across thousands of individual transactions." October 16, 2019 at 4:58 PM David. said... Tim Swanson has updated his post from August 2018 entitled How much electricity is consumed by Bitcoin, Bitcoin Cash, Ethereum, Litecoin, and Monero? which concluded as much as the Netherlands. In Have PoW blockchains become less resource intensive? 
he concludes that: "In aggregate, based on the numbers above, these five PoW coins likely consume between 56.7 billion kWh and 81.8 billion kWh annually. That’s somewhere around Switzerland on the low end to Finland or Pakistan near the upper end. It is likely much closer to the upper bound because the calculations above all assumed little energy loss ‘at the wall’ when in fact there is often 10% or more energy loss depending on the setup. This is a little lower than last year, where we used a similar method and found that these PoW networks may consume as much resources as The Netherlands. Why the decline? All of it is due to the large decline in coin prices over the preceding time period. Again, miners will consume resources up to the value of a block reward wherein the marginal cost to mine equals the marginal value of the coin (MC=MV)." October 16, 2019 at 7:06 PM David. said... What's Blockchain Actually Good for, Anyway? For Now, Not Much by Gregory Barber has a wealth of examples of blockchain hype fizzling but, being a journalist, he can't bring himself to reach an actual conclusion: "“decentralized” projects represent a tiny portion of corporate blockchain efforts, perhaps 3 percent, says Apolline Blandin, a researcher at the Cambridge Centre for Alternative Finance. The rest take shortcuts. So-called permissioned blockchains borrow ideas and terms from Bitcoin, but cut corners in the name of speed and simplicity. They retain central entities that control the data, doing away with the central innovation of blockchains. Blandin has a name for those projects: “blockchain memes.” Hype and lavish funding fueled many such projects. But often, the same applications could be built with less-splashy technology. As the buzzwords wear off, some have begun asking, what’s the point?" November 4, 2019 at 8:03 PM David. said... Patrick McKenzie's Tether: The Story So Far is now the one-stop go-to explainer for Tether and Bitfinex: "A friend of mine, who works in finance, asked me to explain what Tether was. Short version: Tether is the internal accounting system for the largest fraud since Madoff. Read on for the long version." You need to follow his advice. November 4, 2019 at 8:19 PM David. said... A Lone Bitcoin Whale Likely Fueled 2017 Surge, Study Finds by Matthew Leising and Matt Robinson reports on an update to 2018's Is Bitcoin Really Un-Tethered?: "One entity on the cryptocurrency exchange Bitfinex appears capable of sending the price of Bitcoin higher when it falls below certain thresholds, according to University of Texas Professor John Griffin and Ohio State University’s Amin Shams. Griffin and Shams, who have updated a paper they first published in 2018, say the transactions rely on Tether, a widely used digital token that is meant to hold its value at $1." November 5, 2019 at 5:03 PM David. said... Today's news emphasizes that using "trustless" systems requires trusting a lot more than just the core software. First, Dan Goodin's Official Monero website is hacked to deliver currency-stealing malware: "The supply-chain attack came to light on Monday when a site user reported that the cryptographic hash for a command-line interface wallet downloaded from the site didn't match the hash listed on the page. Over the next several hours, users discovered that the miss-matching hash wasn't the result of an error. Instead, it was an attack designed to infect GetMonero users with malware. Site officials later confirmed that finding." 
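The check that caught this attack is one any user can script, though almost nobody does; a minimal sketch, where the URL and the expected digest are placeholders rather than the real Monero values:

    import hashlib
    import urllib.request

    # Placeholders for illustration only: paste the real URL and the hash
    # listed on the project's download page.
    DOWNLOAD_URL = "https://example.org/monero-wallet-cli.tar.bz2"
    PUBLISHED_SHA256 = "0" * 64

    def sha256_of_url(url: str) -> str:
        digest = hashlib.sha256()
        with urllib.request.urlopen(url) as resp:
            for chunk in iter(lambda: resp.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    actual = sha256_of_url(DOWNLOAD_URL)
    if actual != PUBLISHED_SHA256:
        raise SystemExit(f"hash mismatch: got {actual}, expected {PUBLISHED_SHA256}")
    print("download matches the published hash")

Of course this only helps if the published hash comes from somewhere the attacker doesn't control; a compromised site can change the hashes along with the binaries, which is why signatures checked against a key obtained out-of-band, and ultimately reproducible builds, matter.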
Second, David Gerard reports that Canada’s Einstein Exchange shuts down — and all the money’s gone: "Yet another Canadian crypto exchange goes down — this time, the Einstein exchange in Vancouver, British Columbia, Canada, which was shut down by the British Columbia Securities Commission on 1 November. Yesterday, 18 November, the news came out that there’s nothing left at all — the money and cryptos are gone." November 20, 2019 at 12:49 PM David. said... Jonathan Syu tweets: "The whole decentralization experiment is really just an attempt to figure out what's the maximum amount of control I can maintain over a system without having any legal accountability over what happens to it." November 22, 2019 at 4:33 PM David. said... Electronic voting machines mean you can't trust the result of the election, but there are even worse ways to elect politicians. One is Internet voting. But among the very worst is to use (drum-roll) blockchain technology! To understand why it is so bad, read What We Don’t Know About the Voatz “Blockchain” Internet Voting System by David Jefferson, Duncan Buell, Kevin Skogland, Joe Kiniry and Joshua Greenbaum. The list of unknowns covers ten pages, and every one should disqualify the system from use. November 22, 2019 at 6:37 PM David. said... Nathaniel Rich's Ponzi Schemes, Private Yachts, and a Missing $250 Million in Crypto: The Strange Tale of Quadriga is a great illustration of the kind of people you have to trust to use trustless cryptocurrencies. The subhead reads: "When Canadian blockchain whiz Gerald Cotten died unexpectedly last year, hundreds of millions of dollars in investor funds vanished into the crypto ether. But when the banks, the law, and the forces of Reddit tried to track down the cash, it turned out the young mogul may not have been who he purported to be." It is a fascinating story - go read it. November 27, 2019 at 9:18 AM David. said... Among the people you have to trust to use a trustless system are the core developers. People like Virgil Griffith, a core Ethereum developer. Read about him in David Gerard's Virgil Griffith arrested over North Korea visit — engineer arrogance, but on the blockchain and see whether you think he's trustworthy. December 1, 2019 at 10:10 AM David. said... I'm shocked, shocked to find illegal activities going on here!. Celia Wan reports that: "There are over $400 million worth of illicit activities conducted via XRP, and a large portion of these activities are scams and Ponzi schemes, Elliptic, a blockchain analytics startup, found in its research. The UK-based startup announced on Wednesday that it can now track the transactions of XRP, marking the 12th digital assets the firm supports. According to Elliptic co-founder Tom Robinson, the firm is the first to have transaction monitoring capacity for XRP and it has already identified several hundred XRP accounts related to illegal activities." December 1, 2019 at 5:02 PM David. said... Jemima Kelley's When is a blockchain startup not a blockchain startup? recounts yet another company discovering that permissioned blockchains are just an inefficient way to implement things you can do with a regular database: "It’s awkward when you set up a business around a technology that you reckon is going to disrupt global finance so you name your business after said technology, send your CEO on speaking tours to evangelise about said technology, but then decide that said technology isn’t going to do anything useful for you, isn’t it? 
Digital Asset was one of the pioneers of blockchain-in-finance, with the idea that it could make clearing and trade settlement sexy again (well OK maybe not but at least make it faster and more efficient). Its erstwhile CEO Blythe Masters, a glamorous former JP Executive who was credited with/blamed for pioneering credit-default swaps, trotted around the globe telling people that blockchain was “email for money”. ... What is unique about this “blockchain start-up” is that although it is designed to work with blockchain platforms, it can actually work with any old database." December 17, 2019 at 5:38 AM David. said... More dents in the credibility of the BTC "price" from Yashu Gola's How a Whale Crashed Bitcoin to Sub-$7,000 Overnight: "Bitcoin lost billions of dollars worth of valuation within a 30-minutes timeframe as a Chinese cryptocurrency scammer allegedly liquidated its steal via over-the-counter markets. The initial sell-off by PlusToken caused a domino effect, causing mass liquidations. PlusToken, a fraud scheme that duped investors of more than $2bn, dumped huge bitcoin stockpiles from its anonymous accounts, according to Chainalysis." December 18, 2019 at 8:35 AM David. said... PlusToken Scammers Didn’t Just Steal $2+ Billion Worth of Cryptocurrency. They May Also Be Driving Down the Price of Bitcoin from Chainalysis has the details of the PlusToken scammers use of the Huboi exchange to cash out the winnings from their Ponzi scheme. And there is more where that came from: "They’ve cashed out at least 10,000 of that initial 800,000 ETH, while the other 790,000 has been sitting untouched in a single Ethereum wallet for months. The flow of the 45,000 stolen Bitcoin is more complicated. So far, roughly 25,000 of it has been cashed out." December 22, 2019 at 4:10 PM David. said... In China electricity crackdown sparks concerns, Paul Muir reports on yet another way in which Bitcoin mining is centralized: "A recent crackdown in China on bitcoin miners who were using electricity illegally – about 7,000 machines were seized in Hebei and Shanxi provinces – raises concerns about the danger of so much of the leading cryptocurrency’s hash rate being concentrated in the totalitarian country, according to a crypto industry observer. ... However, he pointed out that just four regions in China account for 65% of the world’s hash rate and Siachen alone is responsible for 50%. Therefore, if China decides to shut down network access, it could be very problematic." December 28, 2019 at 9:50 AM David. said... Catalin Cimpanu's Chrome extension caught stealing crypto-wallet private keys is yet another example of the things you have to trust to work in the "trustless" world of cryptocurrencies: "A Google Chrome extension was caught injecting JavaScript code on web pages to steal passwords and private keys from cryptocurrency wallets and cryptocurrency portals. The extension is named Shitcoin Wallet (Chrome extension ID: ckkgmccefffnbbalkmbbgebbojjogffn), and was launched last month, on December 9." January 2, 2020 at 6:15 AM David. said... In Blockchain, all over your face, Jemima Kelly writes: "Enter the “Blockchain Creme” from Cosmetique Bio Naturelle Suisse (translation: Swiss Organic Natural Cosmetics). We thought it must be a joke when we first heard about it ... but yet here it is, being sold on the actual internet:" January 17, 2020 at 6:01 AM David. said... Adriana Hamacher's Is this the end of Malta's reign as Blockchain Island? 
reports on reality breaking in on Malta's blockchain hype: "Malta’s technicolor blockchain dream has turned an ominous shade of grey. Last weekend, Prime Minister Joseph Muscat—chief architect of the tiny Mediterranean island’s pioneering policies in blockchain, gaming and AI—was obliged to step down amid the crisis surrounding the murder of investigative journalist Daphne Caruana Galizia." It appears that the government's response to her criticisms was to put a large bomb in her car: "Anonymous Maltese blogger BugM, one of many determined to bring Caruana Galizia’s killers to justice, believes that an aggressively pro-blockchain policy was seized upon by the government to distract attention from the high-profile murder investigation that ensued." The whole article is worth a read. January 20, 2020 at 6:10 AM David. said... Jill Carlson's Trust No One. Not Even a Blockchain is a skeptical response to Emily Parker’s credulous The Truth Is All There Is. But in focusing on "garbage in, garbage out" Carlson is insufficiently skeptical about the immutability of blockchains. January 27, 2020 at 9:17 AM David. said... David Canellis reports that Bitcoin Gold hit by 51% attacks, $72K in cryptocurrency double-spent: "Malicious cryptocurrency miners took control of Bitcoin BTC Gold‘s blockchain recently to double-spend $72,000 worth of BTG. Bad actors assumed a majority of the network‘s processing power (hash rate) to re-organize the blockchain twice between Thursday and Friday last week: the first netted attackers 1,900 BTG ($19,000), and the second roughly 5,267 BTG ($53,000). Cryptocurrency developer James Lovejoy estimates the miners spent just $1,200 to perform each of the attacks, based on prices from hash rate marketplace NiceHash. This marks the second and third times Bitcoin Gold has suffered such incidents in two years." Mutating immutability can be profitable. Investing $1.2K to get $72K is a 5,900% return in 2 days. Find me another investment with that rate of return! January 27, 2020 at 5:36 PM David. said... John Nugée's What Libra means for money creation points out two big problems that Libra, or any private stablecoin system, would have for the economy: "the introduction of Libra threatens to split the banking sector’s currently unified balance sheet, by moving a significant proportion of customer deposits (that is, the banking system’s liabilities) to the digital currency issuers, while leaving customer loans and overdrafts (the banking system’s assets) with the banks. The inevitable result of this would be to force the banks to reduce the asset side of their balance sheet to match the reduced liability side – in other words reduce their loans. This would almost certainly lead to a major credit squeeze, which would be highly damaging to economic activity." And: "It is by no means clear that such a private sector payment system would be cheaper to operate than the existing bank-based system even on the narrow point of cost per transaction, particularly if, as seems probable, one digital currency soon becomes dominant to the exclusion of competitors. But there is the wider issue of whether society is advantaged by big tech creaming off yet more money from the economy into an unaccountable, untaxable and often overseas behemoth." February 7, 2020 at 6:34 AM David. said... 
Yet another company doing real stuff that started out enthusiastic finds out they don't need a blockchain: "We have run several POCs integrating blockchain technology but we so far decided to run our core services without blockchain technology. Meaning, the solutions that we are already providing are working fine without DLT." February 20, 2020 at 12:08 PM David. said... Drug dealer loses codes for €53.6m bitcoin accounts by Conor Lally reports that 6000BTC are frozen in wallets for which the codes have been lost. Presumably there is a fairly constant average background rate at which BTC are frozen in this way, in effect being destroyed. BTC are being created by the block reward, which is decreasing. Eventually, the rate of creation will fall below the rate of freezing and the universe of usable BTC will shrink, driving the value of the remaining BTC "to the moon" February 22, 2020 at 6:07 AM David. said... David Gerard summarizes Libra: "Libra can only work if Libra can evade regulation — and simultaneously, that no system that Libra’s competing with can evade regulation. And regulators would have to let Libra do this, for some reason. I’m not convinced." March 4, 2020 at 11:29 AM David. said... Trolly McTrollface's theory about the backing for Tether is worth considering: "We all know miners need real cash to pay their electricity bills. They could sell their rewards on exchanges - which someone (cough Tether cough) would have to buy, to prevent $BTC from crashing. But when everyone knows that everyone knows, something different happens. Tether has real money, because Bitfinex has real money from sheering muppets dumb enough to trade on its exchange. But why would they buy miners' Bitcoins, when they can LOAN them the money instead, and get a death grip on their balls? Tether is secured by these loans, not cash. In any case, Tether would be secured by loans, not cash. Nobody keeps $7B in cash in a bank account, especially not the kind of bank that would accept Tether's pedo laundromat money. Too much credit risk. ... What if, ... you called up Bitcoin miners, and offered them a lifeline, promising them to pay their electricity bills, in exchange of a small favour - a promise they won't sell their Bitcoin for a while? Let's put these Bitcoins in escrow, or, in finance words, let's offer miners a loan secured by their Bitcoins. Miners are happy because they can pay their bills without going through all the trouble of selling their rewards, while Tether is happy because, well, when someone owes you a lot of money, you have a metaphorical gun to his head." Hat tip to David Gerard, who writes: "In just six weeks, Tether’s circulation has doubled to 9 billion USDT! Gosh, those stablecoins sure are popular! ... Don’t believe those Tether conspiracy theorists who think that 4.5 billion tethers being pumped into the crypto markets since 31 March 2020 has anything to do with keeping the bitcoin price pumped" May 16, 2020 at 11:01 AM David. said... Jemima Kelly's Goldman Sachs betrays bitcoin reports that Goldman has seen the light: "We believe that a security whose appreciation is primarily dependent on whether someone else is willing to pay a higher price for it is not a suitable investment for our clients." May 29, 2020 at 10:36 AM David. said... 
Crimes on a public immutable ledger are risky, as three alleged perpetrators of the Twitter hack discovered: "Three individuals have been charged today for their alleged roles in the Twitter hack that occurred on July 15, 2020, the US Department of Justice has announced. ... The cyber crimes unit “analyzed the blockchain and de-anonymized bitcoin transactions allowing for the identification of two different hackers. This case serves as a great example of how following the money, international collaboration, and public-private partnerships can work to successfully take down a perceived anonymous criminal enterprise,” agent Jackson said." July 31, 2020 at 3:51 PM David. said... Implausibly good opsec is necessary if you're committing crimes on an immutable public ledger. Tim Cushing's FBI Used Information From An Online Forum Hacking To Track Down One Of The Hackers Behind The Massive Twitter Attack reveals how Mason John Sheppard was exposed as one of the perpetrators when his purchase of a video game username sent bitcoin to address 188ZsdVPv9Rkdiqn4V4V1w6FDQVk7pDf4. The FBI found this in a public database resulting from the compromise of an on-line forum: "available on various websites since approximately April 2020. On or about April 9, 2020, the FBI obtained a copy of this database. The FBI found that the database included all public forum postings, private messages between users, IP addresses, email addresses, and additional user information. Also included for each user was a list of the IP addresses that user used to log into the service along with a corresponding date and timestamp." August 3, 2020 at 10:28 AM David. said... Alex de Vries' Bitcoin’s energy consumption is underestimated: A market dynamics approach shows that: "most of the currently used methods to estimate Bitcoin’s energy demand are still prone to providing optimistic estimates. This happens because they apply static assumptions in defining both market circumstances (e.g. the price of available electricity) as well as the subsequent behavior of market participants. In reality, market circumstances are dynamic, and this should be expected to affect the preferences of those participating in the Bitcoin mining industry. The various choices market participants make ultimately determines the amount of resources consumed by the Bitcoin network. It will be shown that, when starting to properly consider the previous dynamics, even a conservative estimate of the Bitcoin network’s energy consumption per September 30 (2019) would be around 87.1 TWh annually (comparable to a country like Belgium)" Tip of the hat to David Gerard. August 6, 2020 at 3:52 PM David. said... In The Tether Whitepaper and You, Cas Piancey goes through the Tether "white paper" with a fine-tooth comb: "Since many Tether defenders are intent on making arguments about the actual promises Tether has committed to (ie; “Tether isn’t a cryptocurrency!” “Tether doesn’t need to be fully backed!”), it felt like the right time to run through as much of the Tether whitepaper as possible. Hopefully, through a long-form analysis of the whitepaper (which hasn’t been updated), we can come to conclusions about what promises Tether has kept, and what promises Tether has broken." One guess as to how many it has kept! Hat tip to David Gerard. October 10, 2020 at 7:53 PM David. said... 
In IBM Blockchain Is a Shell of Its Former Self After Revenue Misses, Job Cuts: Sources Ian Allison reports that reality has dawned at IBM: "IBM has cut its blockchain team down to almost nothing, according to four people familiar with the situation. Job losses at IBM escalated as the company failed to meet its revenue targets for the once-fêted technology by 90% this year, according to one of the sources. “IBM is doing a major reorganization,” said a source at a startup that has been interviewing former IBM blockchain staffers. “There is not really going to be a blockchain team any longer. Most of the blockchain people at IBM have left.” IBM’s blockchain unit missed its revenue targets by a wide margin for two years in a row, said a second source. Expectations for enterprise blockchain were too high, they said, adding that IBM “didn’t really manage to execute, despite doing a lot of announcements.” A spokesperson for IBM denied the claims."
February 1, 2021 at 12:43 PM
blog-dshr-org-7142 ---- DSHR's Blog: Stablecoins

Tuesday, December 22, 2020

Stablecoins

I have long been skeptical of Bitcoin's "price" and, despite its recent massive surge, I'm still skeptical. But it turns out I was wrong two years ago when I wrote in Blockchain: What's Not To Like?:

"Permissionless blockchains require an inflow of speculative funds at an average rate greater than the current rate of mining rewards if the "price" is not to collapse. To maintain Bitcoin's price at $4K requires an inflow of $300K/hour."

I found it hard to believe that this much actual money would flow in, but since then Bitcoin's "price" hasn't dropped below $4K, so I was wrong. Caution — I am only an amateur economist, and what follows below the fold is my attempt to make sense of what is going on.

First, why did I write that? The economic argument is that, because there is a low barrier to entry for new competitors, margins for cryptocurrency miners are low. So the bulk of their income in terms of mining rewards has to flow out of the system in "fiat" currency to pay for their expenses such as power and hardware. These cannot be paid in cryptocurrencies. At the time, the Bitcoin block reward was 12.5BTC/block, or 75BTC/hour.
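The arithmetic behind these inflow estimates is simple enough to sketch; the figures below are the round numbers quoted in this post, not live data:

    def required_inflow_usd_per_hour(reward_btc_per_block, fees_btc_per_day, price_usd):
        """Rough USD/hour that speculators must supply to cover miners' income,
        assuming ~6 blocks/hour and that essentially all miner income is sold
        for fiat to pay power and hardware bills."""
        blocks_per_hour = 6
        income_btc_per_hour = reward_btc_per_block * blocks_per_hour + fees_btc_per_day / 24
        return income_btc_per_hour * price_usd

    # Late 2018: 12.5 BTC/block, negligible fees, ~$4K/BTC
    print(required_inflow_usd_per_hour(12.5, 0, 4_000))     # ~300,000
    # Late 2020: 6.25 BTC/block, ~100 BTC/day in fees, ~$20K/BTC
    print(required_inflow_usd_per_hour(6.25, 100, 20_000))  # ~833,000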
At $4K/BTC this was $300K/hour, so on average 300K USD/hour had to flow in from speculators if the system was not to run out of USD.

What has happened since then? Miners' income comes in two parts, transaction fees (currently averaging around 100BTC/day) and mining rewards (900BTC/day) for a total around 1K BTC/day. At $20K/BTC, that is $830K/hour. The combination of halving of the block reward, increasing transaction fees, and quintupling the "price" has roughly tripled the required inflow.

Second, let's set the context for what has happened in cryptocurrencies in the last year. In the last year Bitcoin's "market cap" went from around $130B to around $340B (2.6x) and its "price" went from about $7K to about $22K. In the last year Ethereum's "market cap" went from around $16B to around $65B (4.1x) and its "price" went from around $145 to around $590.

The key observation that explains why I write "price" in quotes is shown in this graph. Very little of the trading in BTC is in terms of USD, most of it is in terms of Tether (USDT). The "price" is set by how many USDT people are prepared to pay for BTC, not by how many USD. The USD "price" follows because people believe that 1USDT ≅ 1USD.

In the past year, Tether's "market cap" has gone from about 4B USDT to about 20B USDT (5x). Tether (USDT) is a "stablecoin", intended to maintain a stable price of 1USD = 1USDT. Initially, Tether claimed that it maintained a stable "price" because every USDT was backed by an actual USD in a bank account. Does that mean that investors transferred around sixteen billion US dollars into Tether's bank account in the past year? No-one believes that. There has never been an audit of Tether to confirm what is backing USDT. Tether themselves admitted to the New York Attorney General in October 2018 that the $2.8 billion worth of tethers are only 74% backed: "Tether has cash and cash equivalents (short term securities) on hand totaling approximately $2.1 billion, representing approximately 74 percent of the current outstanding tethers."

If USDT isn't backed by USD, what is backing it, and is 1USDT really worth 1USD? Just in October, Tether minted around 6B USDT. The graph tracks the "price" of Bitcoin against the "market cap" of USDT. Does it look like they're correlated? Amy Castor thinks so.

Tether transfers newly created USDT to an exchange, where one of two things can happen to it:
- It can be used to buy USD or an equivalent "fiat" currency. But only a few exchanges allow this. For example, Coinbase, the leading regulated exchange, will not provide this "fiat off-ramp": "Please note that Coinbase does not support USDT — do not send it to your Bitcoin account on Coinbase." Because of USDT's history and reputation, exchanges that do offer a "fiat off-ramp" are taking a significant risk, so they will impose a spread; the holder will get less than $1. Why would you send $1 to Tether to get less than $1 back?
- It can be used to buy another cryptocurrency, such as Bitcoin (BTC) or Ethereum (ETH), increasing demand for that cryptocurrency and thus increasing its price.

Since newly created USDT won't be immediately sold for "fiat", they will pump the "price" of cryptocurrencies. For simplicity of explanation, let's imagine a world in which there are only USD, USDT and BTC. In this world some proportion of the backing for USDT is USD and some is BTC. Someone sends USD to Tether. Why would they do that?
They don't want USDT as a store of value, because they already have USD, which is obviously a better store of value than USDT. They want USDT in order to buy BTC. Tether adds the USD to the backing for USDT, and issues the corresponding number of USDT, which are used to buy BTC. This pushes the "price" of BTC up, which increases the "value" of the part of the backing for USDT that is BTC. So Tether issues the corresponding amount of USDT, which is used to buy BTC. This pushes the "price" of BTC up, which increases the "value" of the part of the backing for USDT that is BTC. ... Tether has a magic "money" pump, creating USDT out of thin air.

But there is a risk. Suppose for some reason the "price" of BTC goes down, which reduces the "value" of the backing for USDT. Now there are more USDT in circulation than are backed. So Tether must buy some USDT back. They don't want to spend USD for this, because they know that USD are a better store of value than USDT created out of thin air. So they need to sell BTC to get USDT. This pushes the "price" of BTC down, which reduces the "value" of the part of the backing for USDT that is BTC. So Tether needs to buy more USDT for BTC, which pushes the "price" of BTC down. ... The magic "money" pump has gone into reverse, destroying the USDT that were created out of thin air.

Tether obviously wants to prevent this happening, so in our imaginary world what we would expect to see is that whenever the "price" of BTC goes down, Tether supplies the market with USDT, which are used to buy BTC, pushing the price back up. Over time, the BTC "price" would generally go up, keeping everybody happy. But there is a second-order effect. Over time, the proportion of the backing for USDT that is BTC would go up too, because each USD that enters the backing creates R>1 USD worth of "value" of the BTC part of the backing. And, over time, this effect grows because the greater the proportion of BTC in the backing, the greater R becomes.
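The loop is easy to caricature in code. This is a toy model of the imaginary USD/USDT/BTC world above, not a claim about Tether's actual books; the market-impact parameters are invented, and the only point is that marking BTC backing to a "price" your own issuance pushes up is a positive feedback loop:

    def simulate(rounds=8, price=20_000.0, usd_backing=1_000_000.0,
                 btc_backing=6_000.0, deposit=1_000_000.0,
                 depth=50_000_000.0, impact=0.5):
        """Toy model of the imaginary USD/USDT/BTC world. Spending X USDT on
        BTC multiplies the "price" by (1 + impact * X / depth); `impact` and
        `depth` are invented market-impact parameters."""
        usdt_supply = usd_backing + btc_backing * price   # start exactly backed
        # A speculator wires in `deposit` USD; Tether issues USDT against it,
        # and the speculator spends those USDT on BTC.
        usd_backing += deposit
        usdt_supply += deposit
        to_spend = deposit
        for r in range(rounds):
            price *= 1 + impact * to_spend / depth        # buying pushes the "price" up
            backing_value = usd_backing + btc_backing * price
            to_spend = backing_value - usdt_supply        # paper gain on the BTC backing
            if to_spend <= 0:
                break                                     # the pump damps out
            usdt_supply += to_spend                       # issue USDT against the gain;
                                                          # they chase BTC next round
            print(f"round {r}: BTC ${price:,.0f}  USDT supply {usdt_supply:,.0f}")

    simulate()

With these made-up parameters each round issues more USDT than the last, the R>1 case described above; shrink the BTC share of the backing relative to the market depth and the loop damps out instead.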
In our imaginary world we would expect to see:
- The "price" of BTC correlated with the number of USDT in circulation. The graph shows this in the real world.
- Both the "price" of BTC and the number of USDT in circulation growing exponentially. The graph shows this in the real world.
- Spikes in the number of USDT in circulation following falls in the "price" of BTC. Is Bitcoin Really Untethered? by John Griffin and Amin Shams shows that: "Rather than demand from cash investors, these patterns are most consistent with the supply‐based hypothesis of unbacked digital money inflating cryptocurrency prices." Their paper was originally published in 2018 and updated in 2019 and 2020.
- Tether being extremely reluctant to be audited because that would reveal how little money and how much "money" was supporting the BTC "price".

Our imaginary world replicates key features of the real world. Of course, since Tether has never been audited, we don't know the size or composition of USDT's backing. So we don't know whether Tether has implemented a magic "money" pump. But the temptation to get rich quick by doing so clearly exists, and Tether's history isn't reassuring about their willingness to skirt the law. Because of the feedback loops I described, if they ever dipped a toe in the flow from a magic "money" pump, they would have to keep doubling down.

Apart from the work of Griffin and Shams, there is a whole literature pointing out the implausibility of Tether's story. Here are a few highlights:
- JP Koning's 18 things about Tether stablecoins
- Social Capital's series explaining Tether and the "stablecoin" scam:
  * Pumps, spoofs and boiler rooms
  * Tether, Part One: The Stablecoin Dream
  * Tether, Part Two: PokeDEx
  * Tether, Part Three: Crypto Island
- Price manipulation in the Bitcoin ecosystem by Neil Gandal et al
- Cryptocurrency Pump-and-Dump Schemes by Tao Li et al
- Patrick McKenzie's Tether: The Story So Far: "A friend of mine, who works in finance, asked me to explain what Tether was. Short version: Tether is the internal accounting system for the largest fraud since Madoff."

Bernie Madoff's $64.8B Ponzi scheme was terminated in 2008 but credible suspicions had been raised nine years earlier, not least by the indefatigable Harry Markopolos. Credible suspicions were raised against Wirecard shortly after it was incorporated in 1999, but even after the Financial Times published a richly documented series based on whistleblower accounts it took almost a year before Wirecard declared bankruptcy owing €1.9B. Massive frauds suffer from a "Wile E. Coyote" effect. Because they are "too big to fail" there is a long time between the revelation that they are frauds, and the final collapse. It is hard for people to believe that, despite numbers in the billions, there is no there there. Both investors and regulators get caught up in the excitement and become invested in keeping the bubble inflated by either attacking or ignoring negative news. For example, we saw this in the Wirecard scandal: "BaFin conducted multiple investigations against journalists and short sellers because of alleged market manipulation, in response to negative media reporting of Wirecard. ... Critics cite the German regulator, press and investor community's tendency to rally around Wirecard against what they perceive as unfair attack. ... After initially defending BaFin's actions, its president Felix Hufeld later admitted the Wirecard Scandal is a "complete disaster"."

Similarly, the cryptocurrency world has a long history of both attacking and ignoring realistic critiques. An example of ignoring is the DAO: The Decentralized Autonomous Organization (The DAO) was released on 30th April 2016, but on 27th May 2016 Dino Mark, Vlad Zamfir, and Emin Gün Sirer posted A Call for a Temporary Moratorium on The DAO, pointing out some of its vulnerabilities; it was ignored. Three weeks later, when The DAO contained about 10% of all the Ether in circulation, a combination of these vulnerabilities was used to steal its contents.

The graph shows how little of the trading in BTC is in terms of actual money, USD. On coinmarketcap.com as I write, USDT has a "market cap" of nearly $20B and the next largest "stablecoin" is USDC, at just over $3.2B. USDC is audited[1] and complies with banking regulations, which explains why it is used so much less. The supply of USDC can't expand enough to meet demand. The total "market cap" of all the cryptocurrencies the site tracks is $627B, an increase of more than 11% in the last day! So just one day is around the same as Bernie Madoff's Ponzi scheme. The top 4 cryptocurrencies (BTC, ETH, XRP, USDT) account for $521B (83%) of the total "market cap"; the others are pretty insignificant.
David Gerard points out the obvious in Tether is “too big to fail” — the entire crypto industry utterly depends on it: The purpose of the crypto industry, and all its little service sub-industries, is to generate a narrative that will maintain and enhance the flow of actual dollars from suckers, and keep the party going. Increasing quantities of tethers are required to make this happen. We just topped twenty billion alleged dollars’ worth of tethers, sixteen billion of those just since March 2020. If you think this is sustainable, you’re a fool. Gerard links to Bryce Weiner's Hopes, Expectations, Black Holes, and Revelations — or How I Learned To Stop Worrying and Love Tether which starts from the incident in April of 2018 when Bitfinex, the cryptocurrency exchange behind Tether, encountered a serious problem: the wildcat bank backing Tether was raided by Interpol for laundering of criminally obtained assets to the tune of about $850,000,000. The percentage of that sum which was actually Bitfinex is a matter of some debate but there’s no sufficient reason not to think it was all theirs. ... the nature of the problem also presented a solution: instead of backing Tether in actual dollars, stuff a bunch of cryptocurrency in a basket to the valuation of the cash that got seized and viola! A black hole is successfully filled with a black hole, creating a stable asset. At the time, USDT's "market cap" was around $2.3B, so assuming Tether was actually backed by USD at that point, it lost 37% of its backing. This was a significant problem, more than enough to motivate shenanigans. Weiner goes on to provide a detailed explanation, and argue that Tether is impossible to shut down. He may be right, but it may be possible to effectively eliminate the "fiat off-ramp", thus completely detaching USDT and USD. This would make it clear that "prices" expressed in USDT are imaginary, not the same as prices expressed in USD. Source Postscript: David Gerard recounts the pump that pushed BTC over $20K: We saw about 300 million Tethers being lined up on Binance and Huobi in the week previously. These were then deployed en masse. You can see the pump starting at 13:38 UTC on 16 December. BTC was $20,420.00 on Coinbase at 13:45 UTC. Notice the very long candles, as bots set to sell at $20,000 sell directly into the pump. See Cryptocurrency Pump-and-Dump Schemes by Tao Li, Donghwa Shin and Baolian Wang. Source Ki Joung Yu watched the pump in real time: Lots of people deposited stablecoins to exchanges 7 mins before breaking $20k. Price is all about consensus. I guess the sentiment turned around to buy $BTC at that time. ... ETH block interval is 10-20 seconds. This chart means 127 exchange users worldwide were trying to deposit #stablecoins in a single block — 10 seconds. Note that "7 mins" is about one Bitcoin block time, and by "exchange users" he means "addresses — it could have been a pre-programmed "smart contract". [1] David Gerard points out that: USDC loudly touts claims that it’s well-regulated, and implies that it’s audited. But USDC is not audited — accountants Grant Thornton sign a monthly attestation that Centre have told them particular things, and that the paperwork shows the right numbers. Posted by David. at 8:00 AM Labels: bitcoin 17 comments: David. said... XRP, the third-largest unstablecoin by "market cap", has lost almost 40% of its value over the last 7 days. 
This might have something to do with the SEC suing Ripple Labs, who control the centralized cryptocurrency, claiming that XRP is an unregistered security. The SEC's argument, bolstered with copious statements by the founders, is at heart that the founders pre-mined and kept vast amounts of XRP, which they then pump and sell: "Defendants continue to hold substantial amounts of XRP and — with no registration statement in effect — can continue to monetize their XRP while using the information asymmetry they created in the market for their own gain, creating substantial risk to investors." David Gerard points out that Ripple Labs knew they should have registered: "Ripple received legal advice in February and October 2012 that XRP could constitute an “investment contract,” thus making it a security under the Howey test — particularly if Ripple promoted XRP as an investment. The lawyers advised Ripple to contact the SEC for clarification before proceeding. Ripple went ahead anyway, without SEC advice — and raised “at least $1.38 billion” selling XRP from 2013 to the present day, promoting it as an investment all the while" Izabella Kaminska notes that other cryptocurrencies may have similar legal issues: "This may concern other cryptocurrencies such as Ethereum and Eos, which unlike Bitcoin were pre-sold to the public in a similar fashion" December 23, 2020 at 10:33 AM David. said... Izabella Kaminska has the highlights of the SEC filing against Ripple Labs. They are really damning. December 24, 2020 at 6:58 AM David. said... David Gerard points to this transaction and asks: "Don’t you hate it when you send $1.18 in BTC with a fee of $82,000? I guess they can call Bitcoin Customer Service and get it sorted out! It’s not clear if this transaction ever showed up in the mempool — or if it was the miner putting it directly into the block, and doing some Bitcoin-laundering." December 24, 2020 at 4:22 PM David. said... The magic "money" pump is working overtime to make Santa gifts for the children: "Tether has issued 700 million tethers in just the past few days, 400 million of those just today. The market pumpers seem to have been blindsided by the SEC suit against Ripple, and are trying frantically to avert a Christmas crash. I’m sure there’s a ton of Institutional Investors going all-out on Christmas Eve." December 24, 2020 at 4:30 PM David. said... Amy Castor collected predictions for 2021 from cryptocurrency skeptics in Nocoiner predictions: 2021 will be a year of comedy gold. They're worth reading. For example: "Since 2018, the New York Attorney General has been investigating Tether and its sister company, crypto exchange Bitfinex, for fraud. Over the summer, the Supreme Court ruled that the companies need to hand over their financial records to show once and for all just how much money really is underlying the tethers they keep printing. The NYAG said Bitfinex/Tether have agreed to do so by Jan. 15." David Gerard expanded on his predictions in 2021 in crypto and blockchain: your 100% reliable guide to the future, including: "We’re currently in the throes of a completely fake Bitcoin bubble. This is fueled by billions of tethers, backed by loans, or maybe bitcoins, or maybe hot air. Large holders are spending corporate money on bitcoins, fundamentally to promote the value of their own holdings. Retail hasn’t shown up — there’s a lack of actual dollars in the exchange system. One 150 BTC sale last night (2 January) dropped the price $3,000. 
If 150 BTC crashes the price, then almost nobody will be able to get out without massive losses. The dollars don’t appear to exist when tested." January 3, 2021 at 3:12 PM David. said... In Parasitic Stablecoins Tim Swanson focuses in exhaustive detail on the dependence of stablecoins on the banking system: "This post will go through some of the background for what commercial bank-backed stablecoins are, the loopholes that the issuers try to reside in, how reliant the greater cryptocurrency world is dependent on U.S. and E.U. commercial banks, and how the principles for financial market structures, otherwise known as PFMIs, are being ignored" January 6, 2021 at 2:49 PM David. said... Cas Piancey's brief history of Tether entitled A TL; DR for Tether and IMF researcher John Kiff's Kiffmeister's #Fintech Daily Digest (01/09/2021) are both worth reading for views on Tether. January 10, 2021 at 4:32 PM David. said... Further regulation of cryptocurrency on- and off-ramps is announced by FinCEN in The Financial Crimes Enforcement Network Proposes Rule Aimed at Closing Anti-Money Laundering Regulatory Gaps for Certain Convertible Virtual Currency and Digital Asset Transactions: "The proposed rule complements existing BSA requirements applicable to banks and MSBs by proposing to add reporting requirements for CVC and LTDA transactions exceeding $10,000 in value. Pursuant to the proposed rule, banks and MSBs will have 15 days from the date on which a reportable transaction occurs to file a report with FinCEN. Further, this proposed rule would require banks and MSBs to keep records of a customer’s CVC or LTDA transactions and counterparties, including verifying the identity of their customers, if a counterparty uses an unhosted or otherwise covered wallet and the transaction is greater than $3,000." January 10, 2021 at 4:37 PM David. said... Amy Castor has transcribed an interview with Paolo Ardoino and Stuart Hoegner of Tether. Hoegner is their General Counsel, and he said: "We were very clear last summer in court that part of it is in bitcoin. And if nothing else, there are transaction fees that need to be paid on the Omni Layer. So bitcoin was and is needed to pay for those transactions, so that shouldn’t come as a surprise to anyone. And we don’t presently comment on our asset makeup overall as a general manner, but we are contemplating starting a process of providing updates on that on the website in this year, in 2021." So my speculation in this post is confirmed. They do have a magic "money" machine. January 14, 2021 at 4:18 PM David. said... There's nothing new under the sun. David Gerard's Stablecoins through history — Michigan Bank Commissioners report, 1839 starts: "A “stablecoin” is a token that a company issues, claiming that the token is backed by currency or assets held in a reserve. The token is usually redeemable in theory — and sometimes in practice. Stablecoins are a venerable and well-respected part of the history of US banking! Previously, the issuers were called “wildcat banks,” and the tokens were pieces of paper. The wildcat banking era, more politely called the “free banking era,” ran from 1837 to 1863. Banks at this time were free of federal regulation — they could launch just under state regulation. Under the gold standard in operation at the time, these state banks could issue notes, backed by specie — gold or silver — held in reserve. The quality of these reserves could be a matter of some dispute. The wildcat banks didn’t work out so well.
The National Bank Act was passed in 1863, establishing the United States National Banking System and the Office of the Comptroller of the Currency — and taking away the power of state banks to issue paper notes." Go read the whole post - the parallels with cryptocurrencies are striking. January 21, 2021 at 6:08 PM David. said... In Tether publishes … two pie charts of its reserves, David Gerard analyses the uninformative "information" Tether published about its reserves: "I’m analysing Tether’s numbers on the basis that they aren’t just made up, and mean something in any conventional sense. It’s reasonable to doubt this — Tether’s been caught directly lying before — but previous Tether numbers have tended to have some sort of justification, if only a laughably flimsy one that meets no accepted accounting standards." And Amy Castor piles on in Tether’s first breakdown of reserves consists of two silly pie charts including this gem: "Specifically, this is a breakdown of the composition of Tether’s reserves on March 31, 2021, when Tether had roughly 41.7 billion tethers in circulation. (As of this writing, Tether now has nearly 58 billion tethers in circulation.)" So Tether is pumping the money supply at nearly $3B a week! May 14, 2021 at 8:38 AM David. said... Jemima Kelly is also all over Tether's "transparency" in Tether says its reserves are backed by cash to the tune of . . . 2.9%: "It’s almost like Tether thinks it is some kind of bank, isn’t it? Well, kind of. In the 2019 affidavit, Hoegner pointed out that commercial banks operate under a similar “fractional reserve” system, and that this was “hardly a novel concept”. But 2.9 per cent is really quite the fraction isn’t it? And the difference here, of course, is that commercial banks are subject to stringent regulations and thorough independent audits, neither of which apply to Tether." May 14, 2021 at 9:56 AM David. said... Frances Coppola makes an important point in Tether’s smoke and mirrors. Tether's terms of service place them under no obligation to redeem Tethers for fiat or indeed for anything at all: "Tether reserves the right to delay the redemption or withdrawal of Tether Tokens if such delay is necessitated by the illiquidity or unavailability or loss of any Reserves held by Tether to back the Tether Tokens, and Tether reserves the right to redeem Tether Tokens by in-kind redemptions of securities and other assets held in the Reserves." Coppola points out that: "if Tether is simply going to refuse redemption requests or offer people tokens it has just invented instead of fiat currency, it wouldn’t matter if the entire asset base was junk, since it will never have any significant need for cash. So whether Tether’s "reserves" are cash equivalents doesn't matter. But what does matter is capital. For banks, funds and other financial institutions, capital is the difference between assets and liabilities. It is the cushion that can absorb losses from asset price falls, whether because of fire sales to raise cash for redemption requests or simply from adverse market movements or creditor defaults. The accountant's attestations reveal that Tether has very little capital. The gap between assets and liabilities is paper-thin: on 31st March 2021 (pdf), for example, it was 0.36% of total consolidated assets, on a balance sheet of more than $40bn in size. Stablecoin holders are thus seriously exposed to the risk that asset values will fall sufficiently for the par peg to USD to break – what money market funds call “breaking the buck”."
Go read the whole post. May 18, 2021 at 12:05 PM David. said... Simon Sharwood reports on an actual use case for Tether in Hong Kong busts $150m crypto money-laundering ring: "Hong Kong’s Customs and Excise Department yesterday arrested four men over alleged money-laundering using cryptocurrency. The Department says it detected multiple transactions in a coin named “Tether”, with value bouncing between a crypto exchange, local banks, another crypto exchange, and banks in Singapore. HK$1.2bn (US$155m) is alleged to have been laundered by the four suspects, in what authorities said was the first case of crypto-laundering detected in the Special Administrative Region (SAR). The launderers were busy: multiple daily transactions of HK$20m were sometimes detected as they went about their scheme, which ran from early 2020 to May 2021." July 16, 2021 at 11:44 AM David. said... Fais Kahn has two posts, Crypto and the infinite ladder: what if Tether is fake? and Bitcoin's end: Tether, Binance and the white swans that could bring it all down about the Binance/Tether nexus that are well worth reading. He concludes: "Everything around Binance and Tether is murky, even as these two entities dominate the crypto world. Tether redemptions are accelerating, and Binance is in trouble, but why some of these things are happening is guesswork. And what happens if something happens to one of those two? We’re entering some uncharted territory. But if things get weird, don’t say no one saw it coming." July 16, 2021 at 3:14 PM David. said... Taming Wildcat Stablecoins by Gary Gorton and Jeffery Zhang analyzes stablecoins with a historical perspective, starting with the 19th century "free banking" era in the US. Zhang is a lawyer at the Fed. Izabella Kaminska summarizes their argument in Gorton turns his attention to stablecoins, and points out that: "Gary Gorton has gained a reputation for being something of an experts’ expert on financial systems. Despite being an academic, this is in large part due to what might be described as his practitioner’s take on many key issues. The Yale School of Management professor is, for example, best known for a highly respected (albeit still relatively obscure) theory about the role played in bank runs by information-sensitive assets." July 20, 2021 at 11:00 AM David. said... Tom Schoenberg, Matt Robinson, and Zeke Faux report that Tether Executives Said to Face Criminal Probe Into Bank Fraud: "A U.S. probe into Tether is homing in on whether executives behind the digital token committed bank fraud, a potential criminal case that would have broad implications for the cryptocurrency market. ... Specifically, federal prosecutors are scrutinizing whether Tether concealed from banks that transactions were linked to crypto, said three people with direct knowledge of the matter who asked not to be named because the probe is confidential." ... Federal prosecutors have been circling Tether since at least 2018. In recent months, they sent letters to individuals alerting them that they’re targets of the investigation, one of the people said." July 27, 2021 at 6:59 PM
blog-dshr-org-7539 ---- DSHR's Blog: Gini Coefficients Of Cryptocurrencies Tuesday, October 23, 2018 The Gini coefficient expresses a system's degree of inequality or, in the blockchain context, centralization. It therefore factors into arguments, like mine, that claims of blockchains' decentralization are bogus.
In his testimony to the US Senate Committee on Banking, Housing and Urban Affairs' hearing on “Exploring the Cryptocurrency and Blockchain Ecosystem”, entitled Crypto is the Mother of All Scams and (Now Busted) Bubbles While Blockchain Is The Most Over-Hyped Technology Ever, No Better than a Spreadsheet/Database, Nouriel Roubini wrote: wealth in crypto-land is more concentrated than in North Korea where the inequality Gini coefficient is 0.86 (it is 0.41 in the quite unequal US): the Gini coefficient for Bitcoin is an astonishing 0.88. The link is to Joe Weisenthal's How Bitcoin Is Like North Korea from nearly five years ago, which was based upon a Stack Exchange post, which in turn was based upon a post by the owner of the Bitcoinica exchange from 2011! Which didn't look at all holdings of Bitcoin, let alone the whole of crypto-land, but only at Bitcoinica's customers! Follow me below the fold as I search for more up-to-date and comprehensive information. I'm not even questioning how Roubini knows the Gini coefficient of North Korea to two decimal places. Most cryptocurrencies will start with a Gini coefficient of 1; Satoshi Nakamoto mined the first million Bitcoin. As adoption spreads, the Gini coefficient will decrease naturally. The question isn't whether, but how fast it will decrease. On Steem, ckfrpark is concerned that it hasn't decreased anything like quickly enough: Cryptocurrency as of September 2018, has not been narrowing the gap between the rich and poor but also is aggravating the inequality in our society. Based on the three major cryptocurrency wallets, Bitcoin, Ethereum and Ripple, top 1% shares the property value of the rest 99%, resulting in a drastic figure of Gini coefficient of 0.99. If we consider those who do not own a cryptocurrency wallet, it would result as radical figure of over 0.999999 Gini coefficient. ... The greater the value of Cryptocurrency, the greater the gap between rich and poor, which governments and people will not tolerate. It is a prophecy that existing cryptocurrencies will fail. Balaji S. Srinivasan is CTO of Coinbase, a cryptocurrency insider. In July last year, together with Leland Lee, he wrote Quantifying Decentralization, arguing for the importance of measuring decentralization: The primary advantage of Bitcoin and Ethereum over their legacy alternatives is widely understood to be decentralization. However, despite the widely acknowledged importance of this property, most discussion on the topic lacks quantification. If we could agree upon a quantitative measure, it would allow us to: Measure the extent of a given system’s decentralization; Determine how much a given system modification improves or reduces decentralization; Design optimization algorithms and architectures to maximize decentralization. Srinivasan and Lee start with an explanation of the Gini coefficient and the Lorenz curve from which it is derived. They go on to make the important point that a decentralized system is compromised if any of its decentralized subsystems is compromised, identifying six subsystems of cryptocurrencies: mining, exchanges, client, nodes, developers and ownership. Only the last has been the focus of most discussion of cryptocurrency Gini coefficients.
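To make the numbers that follow concrete, here is a minimal sketch of how a Gini coefficient is computed from a list of balances via the Lorenz curve. It is illustrative Python, not Srinivasan and Lee's code, and the balances in the example are invented; it also shows why counting billions of zero or dust balances pushes the coefficient toward 1, which is why thresholds matter below.

```python
# Minimal sketch: Gini coefficient of a wealth distribution, computed from
# the Lorenz curve. Illustrative only -- the example balances are invented.

def gini(balances):
    """Gini coefficient of a list of non-negative balances.

    Sort the balances, accumulate the Lorenz curve (cumulative share of
    wealth held by the poorest i holders), and compare its area with the
    line of perfect equality using the trapezoid rule."""
    xs = sorted(balances)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    cum = 0.0
    lorenz_sum = 0.0          # sum of Lorenz curve values L_1 .. L_n
    for x in xs:
        cum += x
        lorenz_sum += cum / total
    return 1.0 - (2.0 * lorenz_sum - 1.0) / n

if __name__ == "__main__":
    print(gini([1.0] * 1000))                 # perfect equality -> 0.0
    print(gini([0.0] * 999 + [1000.0]))       # one holder has it all -> ~0.999
    # Adding a large population of zero-balance "holders" drives any
    # distribution toward 1, hence the need for a wealth threshold.
    print(gini([5, 10, 20, 40, 80] + [0] * 10_000))
```

The same function applied only to balances above a threshold gives the "whale" figures discussed next.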
They plot Lorenz curves for each of their six subsystems for Bitcoin and Ethereum, and derive these Gini coefficients:

Subsystem   Bitcoin  Ethereum
Mining      0.40     0.82
Client      0.92     0.92
Developer   0.79     0.91
Exchange    0.83     0.85
Node        0.84     0.85
Owner       0.65     0.76

These are rather large Gini coefficients, but in the case of the only one that Roubini and others have focused on, the distribution of wealth, it vastly underestimates the problem: One important point: if we actually include all 7 billion people on the earth, most of whom have zero BTC or Ethereum, the Gini coefficient is essentially 0.99+. And if we just include all balances, we include many dust balances which would again put the Gini coefficient at 0.99+. Thus, we need some kind of threshold here. The imperfect threshold we picked was the Gini coefficient among accounts with ≥185 BTC per address, and ≥2477 ETH per address. So this is the distribution of ownership among the Bitcoin and Ethereum rich with >$500k as of July 2017. In other words, even among the "whales" the distribution of wealth is extremely unequal (though not actually as unequal as North Korea). This, incidentally, explains the enthusiasm of the whalier Ethereum whales for "proof of stake" as a consensus mechanism. They could afford to control Ethereum's blockchain by staking a small fraction of their wealth. The reason why decentralization is attractive is that, if it were actually achieved in practice, it would make compromising the system very difficult. Srinivasan and Lee go on to point out that the Gini coefficient, while indicative, isn't a good measure of the vulnerability of a decentralized system to compromise. Instead, they propose: The Nakamoto coefficient is the number of units in a subsystem you need to control 51% of that subsystem. It’s not clear that 51% is the number to worry about for each system, so you can pick a number and calculate it based on what you believe the critical threshold is. It’s also not clear which subsystems matter. Regardless, having a measure is an essential first step and here are the Nakamoto coefficients of each subsystem: They compute the Nakamoto coefficients of Bitcoin and Ethereum, as shown in this table (a short sketch of the computation follows below):

Subsystem   Bitcoin  Ethereum
Mining      5        3
Client      1        1
Developer   5        2
Exchange    5        5
Node        3        4
Owner       456      72

These are interesting numbers:
They show that Ethereum ("market cap" $21B) is significantly more vulnerable than Bitcoin ("market cap" $113B), reinforcing the observation that the Gini coefficient of the top 100 cryptocurrencies' "market cap" at 0.91 is extremely high. The smaller cryptocurrencies are very vulnerable to 51% attacks. Even Ethereum currently suffers from the "selfish mining" attack, which has been known since 2013.
They show the risk posed by software monocultures, driven by network effects and economies of scale. These risks were illustrated by the recent major bug in Bitcoin Core.
[Chart: Ether miners, 10/10/18] Even ignoring the fact that Bitmain: operates the world’s largest and second largest Bitcoin mining pools in terms of computing power, BTC.com and Antpool. so the 5 for Bitcoin mining should be 4, and that: two major mining pools, Ethpool and Ethermine, publicly reveal that they share the same admin they show that economies of scale mean mining pools are very concentrated. And that proving the 4 or 3 pools aren't colluding is effectively impossible. Only the top 456 wallets hold 51% of the Bitcoin held by the whales, and only the top 72 wallets hold 51% of the Ether held by the whales.
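Their definition of the Nakamoto coefficient translates directly into a few lines of code. The sketch below is illustrative Python under my own assumptions (the pool shares are invented, and the 51% threshold is just the default they discuss), not the authors' implementation:

```python
# Minimal sketch: Nakamoto coefficient of a subsystem -- the smallest number
# of entities whose combined share of the subsystem exceeds a threshold.
# The shares below are invented for illustration, not measured data.

def nakamoto_coefficient(shares, threshold=0.51):
    """Count entities, largest share first, until the running total of
    their shares exceeds `threshold` of the subsystem."""
    total = sum(shares)
    running = 0.0
    for count, share in enumerate(sorted(shares, reverse=True), start=1):
        running += share
        if running / total > threshold:
            return count
    return len(shares)

if __name__ == "__main__":
    # Hypothetical mining-pool shares of total hash power.
    pools = [0.27, 0.18, 0.15, 0.12, 0.11, 0.09, 0.08]
    print(nakamoto_coefficient(pools))          # pools needed to pass 51%
    print(nakamoto_coefficient(pools, 1 / 3))   # a 1/3 threshold matters for
                                                # BFT-style systems
```

Applied to the owner subsystem, the same calculation counts how many of the largest wallets are needed to pass 51% of the whales' combined holdings, which is where the 456 and 72 figures come from.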
There just aren't a lot of whales. Gini Coefficient Based Wealth Distribution in the Bitcoin Network: A Case Study by Manas Gupta and Parth Gupta was published behind Springer's obnoxious paywall last July. Alas, the data upon which it is based is from 2013, so despite its recent publication it is only slightly less out-of-date than the data that Roubini quoted. An important, and much more up-to-date study of several different measures of decentralization (but that doesn't use the Gini coefficient) is Decentralization in Bitcoin and Ethereum Networks by Adem Efe Gencer, Soumya Basu, Ittay Eyal, Robbert van Renesse and Emin Gün Sirer: in Bitcoin, the weekly mining power of a single entity has never exceeded 21% of the overall power. In contrast, the top Ethereum miner has never had less than 21% of the mining power. Moreover, the top four Bitcoin miners have more than 53% of the average mining power. On average, 61% of the weekly power was shared by only three Ethereum miners. These observations suggest a slightly more centralized mining process in Ethereum. Although miners do change ranks over the observation period, each spot is only contested by a few miners. In particular, only two Bitcoin and three Ethereum miners ever held the top rank. The same mining pool has been at the top rank for 29% of the time in Bitcoin and 14% of the time in Ethereum. Over 50% of the mining power has exclusively been shared by eight miners in Bitcoin and five miners in Ethereum throughout the observed period. Even 90% of the mining power seems to be controlled by only 16 miners in Bitcoin and only 11 miners in Ethereum. This shows how incredibly poor proof-of-work is at decentralization compared with conventional distributed database technology: These results show that a Byzantine quorum system of size 20 could achieve better decentralization than proof-of-work mining at a much lower resource cost. This shows that further research is necessary to create a permissionless consensus protocol without such a high degree of centralization. Raul at HowMuch.net published an analysis of the wealth distribution among Bitcoin wallets a year ago. It didn't compute a Gini coefficient but it did claim that only 118 wallets owned 17.49% of Bitcoin: There are a couple limitations in our data. Most importantly, each address can represent more than one individual person. An obvious example would be a bitcoin exchange or wallet, which hold the currency for a lot of different people. Another limitation has to do with anonymity. If you want to remain completely anonymous, you can use something called CoinJoin, a process that allows users to group similar transactions together. This makes it seem like two people are using the same address, when in reality they are not. BambouClub tweeted a superficially similar analysis at about the same time, again without computing a Gini coefficient, but you had to read down into the tweet chain to discover it wasn't based on analyzing wallets at all, but on assuming that the Bitcoin distribution matched the global distribution of wealth. Hannah Murphy's Bitcoin: Who really owns it, the whales or small fry? reports, based on data from Chainalysis, that in the December 2017 "pump and dump": longer-term holders sold at least $30 billion worth of bitcoin to new speculators over the December to April period, with half of this movement taking place in December alone.
“This was an exceptional transfer of wealth,” says Philip Gradwell, Chainalysis’ chief economist, who dubs the past six months as bitcoin’s “liquidity event”. Gradwell argues that this sudden injection of liquidity – the amount of bitcoin available for trading rose by close to 60 per cent over that period – has been a “fundamental driver” behind the recent price decline. At the same time, bitcoin trading volumes have now fallen in tandem with the prices, from close to $4 billion daily in December to $1 billion today. As far as I know no-one has measured by how much this transfer of wealth from later to early adopters, and Bitcoin in the reverse direction, will have decreased the Gini coefficient. Or how much the transfer of cryptocurrency from speculators to ICO promoters will have increased it. Posted by David. at 8:00 AM Labels: bitcoin, fault tolerance 1 comment: David. said... Analyzing Ethereum's Contract Topology by Lucianna Kiffer, Dave Levin and Alan Mislove (also here) reinforces the message of the Nakamoto coefficient: "Ethereum’s smart contract ecosystem has a considerable lack of diversity. Most contracts reuse code extensively, and there are few creators compared to the number of overall contracts. ... the high levels of code reuse represent a potential threat to the security and reliability. Ethereum has been subject to high-profile bugs that have led to hard forks in the blockchain (also here) or resulted in over $170 million worth of Ether being frozen; like with DNS’s use of multiple implementations, having multiple implementations of core contract functionality would introduce greater defense-in-depth to Ethereum." November 5, 2018 at 4:49 PM
blog-dshr-org-7661 ---- DSHR's Blog: The Economist On Cryptocurrencies Tuesday, August 10, 2021 The Economist edition dated August 7th has a leader (Unstablecoins) and two articles in the Finance section (The disaster scenario and Here comes the sheriff). The leader argues that: Regulators must act quickly to subject stablecoins to bank-like rules for transparency, liquidity and capital. Those failing to comply should be cut off from the financial system, to stop people drifting into an unregulated crypto-ecosystem.
Policymakers are right to sound the alarm, but if stablecoins continue to grow, governments will need to move faster to contain the risks. But even The Economist gets taken in by the typical cryptocurrency hype, balancing current actual risks against future possible benefits: Yet it is possible that regulated private-sector stablecoins will eventually bring benefits, such as making cross-border payments easier, or allowing self-executing “smart contracts”. Regulators should allow experiments whose goal is not merely to evade financial rules. They don't seem to understand that, just as the whole point of Uber is to evade the rules for taxis, the whole point of cryptocurrency is to "evade financial rules". Below the fold I comment on the two articles. Here comes the sheriff This article is fairly short and mostly describes the details of Gary Gensler's statement in three "buckets": The first is about investor protection: The regulator claims jurisdiction over the crypto assets that it defines as securities; issuers of these must provide disclosures and abide by other rules. The SEC‘s definition uses a number of criteria, including the “Howey Test”, which asks whether investors have a stake in a common enterprise and are led to expect profits from the efforts of a third party. Bitcoin and ether, the two biggest cryptocurrencies, do not meet this criterion (they are commodities, under American law). But Mr Gensler thinks that ... a fair few probably count as securities—and do not follow the rules. These, he said, may include stablecoins ... some of which may represent a stake in a crypto platform. Mr Gensler asked Congress for more staff to police them. The second is about new products: For months the SEC has sat on applications for bitcoin ETFs and related products, filed by big Wall Street names like Goldman Sachs and Fidelity. Mr Gensler hinted that, in order to be approved, these may have to comply with the stricter laws governing mutual funds. The third is a request for new legal powers needed to pave over cracks in regulation that cryptocurrencies, whose whole point is to "evade financial rules", are exploiting: Mr Gensler is chiefly concerned with platforms engaged in crypto trading or lending as well as in decentralised finance (DeFi), where smart contracts replicate financial transactions without a trusted intermediary. Some of these, he said, may host tokens that should be regulated as securities; others could be riddled with scams. The SEC is likely to encounter massive opposition to these ideas. How cryptocurrency became a powerful force in Washington by Todd C. Frankel et al reports on the flow of lobbying dollars from cryptocurrency insiders to Capitol Hill and how it is blocking progress on the current infrastructure bill: And after years of debate over how to improve America’s infrastructure, and months of sensitive negotiations between the White House and lawmakers, the $1 trillion bipartisan infrastructure proposal suddenly stalled in part because of concerns about how government would regulate an industry best known for wild financial speculation, memes — and its role in ransomware attacks. ... 
Regardless of the measure’s ultimate fate, the fact that crypto regulation has become one of the biggest stumbling blocks to passage of the bill underscored how the industry has become a political force in Washington — and previewed a series of looming battles over a financial technology attracting billions of dollars of interest from Wall Street, Silicon Valley and financial players around the world, but that few still understand. It gets worse. Kate Riga reports that In Fit Of Pique, Shelby Kills Crypto Compromise: Sen. Richard Shelby (R-AL) killed a hard-earned cryptocurrency compromise amendment to the bipartisan infrastructure bill because his own amendment, to beef up the defense budget another $50 billion, was rejected by Sen. Bernie Sanders (I-VT). Shelby had tried to tack it on to the cryptocurrency amendment. ... So that’s basically it for the crypto amendment, which took the better part of the weekend for senators and the White House and hammer into a compromise. The issue here was that the un-amended bill would require: some cryptocurrency companies that provide a service “effectuating” the transfer of digital assets to report information on their users, as some other financial firms are required to do, in an effort to enforce tax compliance. Crypto supporters said the provision’s wording would seemingly apply to companies that have no ability to collect data on users, such as cryptocurrency miners, and could push a swath of the industry overseas. So maybe by accident Mining Is Money Transmission. The disaster scenario This article is far longer and far more interesting. It takes the form of a "stress test", discussing a scenario in which Bitcoin's "price" goes to zero and asking what the consequences for the broader financial markets and investors would be. It is hard to argue with the conclusion: Still, our extreme scenario suggests that leverage, stablecoins, and sentiment are the main channels through which any crypto-downturn, big or small, will spread more widely. And crypto is only becoming more entwined with conventional finance. Goldman Sachs plans to launch a crypto exchange-traded fund; Visa now offers a debit card that pays customer rewards in bitcoin. As the crypto-sphere expands, so too will its potential to cause wider market disruption. The article identifies a number of channels by which a Bitcoin collapse could "cause market disruption": Via the direct destruction of paper wealth for HODL-ers and actual losses for more recent purchasers. Via the stock price of companies, including cryptocurrency exchanges, payments companies, and chip companies such as Nvidia. Via margin calls on leveraged investments, either direct purchases of Bitcoin or derivatives. Via redemptions of stablecoins causing reserves to be liquidated. Via investor sentiment contagion from cryptocurrencies to other high-risk assets such as meme stocks, junk bonds, and SPACs. I agree that these are all plausible channels, but I have two main issues with the article. Issue #1: Tether Source First, it fails to acknowledge that the spot market in Bitcoin is extremely thin (a sell order for 150BTC crashed the "price" by 10%), especially compared to the 10x larger market in Bitcoin derivatives, and that the "price" of Bitcoin and other cryptocurrencies is massively manipulated, probably via the "wildcat bank" of Tether. 
The article contains, but doesn't seem to connect, these facts: Fully 90% of the money invested in bitcoin is spent on derivatives like “perpetual” swaps—bets on future price fluctuations that never expire. Most of these are traded on unregulated exchanges, such as FTX and Binance, from which customers borrow to make bets even bigger. ... The extent of leverage in the system is hard to gauge; the dozen exchanges that list perpetual swaps are all unregulated. But “open interest” ... has grown from $1.6bn in March 2020 to $24bn today. This is not a perfect proxy for total leverage, as it is not clear how much collateral stands behind the various contracts. But forced liquidations of leveraged positions in past downturns give a sense of how much is at risk. On May 18th alone, as bitcoin lost nearly a third of its value, they came to $9bn. ... Because changing dollars for bitcoin is slow and costly, traders wanting to realise gains and reinvest proceeds often transact in stablecoins, which are pegged to the dollar or the euro. Such coins, the largest of which are Tether and USD coin, are now worth more than $100bn. On some crypto platforms they are the main means of exchange. Source That last paragraph is misleading. Fais Kahn writes: Binance also hosts a massive perpetual futures market, which are “cash-settled” using USDT. This allows traders to make leveraged bets of 100x margin or more...which, in laymen’s terms, is basically a speculative casino. That market alone provides around ~$27B of daily volume, where users deposit USDT to trade on margin. As a result, Binance is by far the biggest holder of USDT, with $17B sitting in its wallet. Bernhard Meuller writes: A more realistic estimate is that ~70% of the Tether supply (43.7B USDT) is located on centralized exchanges. Interestingly, only a small fraction of those USDT shows up in spot order books. One likely reason is that a large share is sitting on wallets to collateralize derivative positions, in particular perpetual futures. ... It’s important to understand that USDT perpetual futures implementations are 100% USDT-based, including collateralization, funding and settlement. So on the exchange that dominates bitcoin derivative trading, where the majority of "Fully 90% of the money invested in bitcoin" lives, USDT is the exclusive means of exchange. The entire market's connection to the underlying spot market is that: Prices are tied to crypto asset prices via clever incentives, but in reality, USDT is the only asset that ever changes hands between traders. Other than forced liquidations, the article does not analyze how the derivative market would respond to a massive drop in the Bitcoin "price", and whether Tether could continue to pump the "price". As money market funds did in the Global Financial Crisis, the article suggests that stablecoins would have problems: Issuers back their stablecoins with piles of assets, rather like money-market funds. But these are not solely, or even mainly, held in cash. Tether, for instance, says 50% of its assets were held in commercial paper, 12% in secured loans and 10% in corporate bonds, funds and precious metals at the end of March. A cryptocrash could lead to a run on stablecoins, forcing issuers to dump their assets to make redemptions. In July Fitch, a rating agency, warned that a sudden mass redemption of tethers could “affect the stability of short-term credit markets”. 
It is certainly true that the off-ramps from cryptocurrencies to fiat are constricted; that is a major reason for the existence of stablecoins. But Fais Kahn makes two points: If there were a sudden drop in the market, and investors wanted to exchange their USDT for real dollars in Tether’s reserve, that could trigger a “bank run” where the value dropped significantly below one dollar, and suddenly everyone would want their money. That could trigger a full on collapse. But when that might actually happen? When Bitcoin falls in the frequent crypto bloodbaths, users actually buy Tether - fleeing to the safety of the dollar. This actually drives Tether’s price up! And: Tether’s own Terms of Service say users may not be redeemed immediately. Forced to wait, many users would flee to Bitcoin for lack of options, driving the price up again. It isn't just Tether that doesn't allow winnings out. Carol Alexander's Binance’s Insurance Fund is a fascinating, detailed examination of Binance's extremely convenient "outage" as BTC crashed on May 19. Her subhead reads: How insufficient insurance funds might explain the outage of Binance’s futures platform on May 19 and the potentially toxic relationship between Binance and Tether. I certainly don't understand all the ramifications of the "toxic relationship between Binance and Tether", but the article's implicit assumption that they, and similar market participants, behave like properly regulated financial institutions is implausible. Alexander's take on the relationship, on the other hand, is alarmingly plausible: In May 2021 ... Tether reported that only 2.9% of all tokens are actually backed by cash reserves and about 50% is in commercial paper, a form of unsecured debt that is normally only issued by firms with high-quality debt ratings. The simultaneous growth of Binance and tether begs the question whether Binance itself is the issuer of a large fraction of tether’s $30 billion commercial paper. Binance's B2B platform is the main online broker for tether. Suppose Binance is in financial difficulties (possibly precipitated by using its own money rather than insurance funds to cover payment to counterparties of liquidated positions). Then the tether it orders and gives to customers might not be paid for with dollars, or bitcoin or any other form of cash, but rather with an IOU. That is, commercial paper on which it pays tether interest, until the term of the loan expires. No new tether has been issued since Binance's order of $3 bn [Correction 6 Aug: net $1bn transfer] was made highly visible to the public on 31 May. [Correction: 6 Aug: Another $1 bn tether was issued on 4 Aug]. Maybe this is because Tether's next audit is imminent, and the auditors may one day investigate the identity of the issuers of the 50% (or more, now) of commercial paper it has for reserves. If it were found that the main issuer was Binance (maybe followed by FTX) then the entire crypto asset market place would have been holding itself up with its own bootstraps! This would certainly explain why Matt Levine wrote: There is a fun game among financial journalists and other interested observers who try to find anyone who has actually traded commercial paper with Tether, or any of its actual holdings. The game is hard! As far as I know, no one has ever won it, or even scored a point; I have never seen anyone publicly identify a security that Tether holds or a counterparty that has traded commercial paper with it.
If Tether's reserves were 50% composed of unsecured debt from unregulated exchanges like Binance ... Issue #2: Dynamic Effects My second problem with the article is that this paragraph shows The Economist sharing two common misconceptions about blockchain technology: A crash would puncture the crypto economy. Bitcoin miners—who compete to validate transactions and are rewarded with new coins—would have less incentive to carry on, bringing the verification process, and the supply of bitcoin, to a halt. First, it is true that, were the "price" of Bitcoin zero, mining would stop. But if mining stops, it is transactions that stop. Bitcoin HODL-ings would be frozen in place, not just worth zero on paper but actually useless because nothing could be done with them. Second, the idea that the goal of mining is to create new Bitcoin is simply wrong. The goal of mining is to secure the blockchain by making Sybil attacks implausibly expensive. The creation of new Bitcoin is a side-effect, intended to motivate miners to make the blockchain secure. The fact that Nakamoto intended mining to continue after the final Bitcoin had been created clearly demonstrates this. The article is based on this scenario: in order to grasp the growing links between the crypto-sphere and mainstream markets, imagine that the price of bitcoin crashes all the way to zero. A rout could be triggered either by shocks originating within the system, say through a technical failure, or a serious hack of a big cryptocurrency exchange. Or they could come from outside: a clampdown by regulators, for instance, or an abrupt end to the “everything rally” in markets, say in response to central banks raising interest rates. But, as the article admits, a discontinuous change from $44K or so to $0 is implausible. A rapid but continuous drop over, say, a month is more plausible, and it could bring issues that the article understandably fails to address. As the "price" drops two effects take place. First, the value of the mining reward in fiat currency decreases. The least efficient and least profitable miners become uneconomic and drop out, decreasing the hash rate and thus increasing the block time and reducing the rate at which transactions can be processed: Typically, it takes about 10 minutes to complete a block, but Feinstein told CNBC the bitcoin network has slowed down to 14- to 19-minute block times. This effect occurred during the Chinese government's crackdown, as shown in the graph of hash rate. Second, every 2016 blocks (about two weeks) the algorithm adjusts, in this case decreases, the difficulty and thus the cost of mining the next 2016 blocks. The idea is to restore the block time to about 10 minutes despite the reduction in the hash rate. When the Chinese crackdown took 52.2% of Bitcoin's hash power off-line, the algorithm made the biggest reduction in difficulty in Bitcoin's history. In our scenario, Bitcoin plunges over a month. Let's assume it starts just after a difficulty adjustment. The month is divided into two parts, with the initial difficulty for the first part, and a much reduced difficulty for the second part. In the first part the rapid "price" decrease makes all but the most efficient miners uneconomic, so the hash rate decreases rapidly and block production slows rapidly. Producing the 2016-th block takes a lot more than two weeks.
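The retargeting arithmetic behind this scenario is simple to sketch. The Python below is a toy model under the scenario's assumptions (hash rate falling steadily to half its starting level over one retarget period, a 10-minute target block time); it is not Bitcoin's actual retargeting code, which works from block timestamps and difficulty targets, and it ignores details such as the adjustment cap.

```python
# Toy model of Bitcoin difficulty retargeting under a falling hash rate.
# Assumptions (not real chain data): hash rate declines linearly across the
# 2016-block period, and blocks average 10 minutes at the starting difficulty.

TARGET_MINUTES = 10.0
RETARGET_BLOCKS = 2016

def retarget_period(start_hashrate, end_hashrate, difficulty):
    """Simulate one 2016-block period and return (elapsed_minutes,
    new_difficulty). Difficulty is normalized so that difficulty == hashrate
    gives 10-minute blocks."""
    elapsed = 0.0
    for i in range(RETARGET_BLOCKS):
        # Hash rate interpolated linearly across the period.
        h = start_hashrate + (end_hashrate - start_hashrate) * i / RETARGET_BLOCKS
        elapsed += TARGET_MINUTES * difficulty / h
    # Retarget: scale difficulty by expected vs actual elapsed time.
    expected = TARGET_MINUTES * RETARGET_BLOCKS
    return elapsed, difficulty * expected / elapsed

if __name__ == "__main__":
    # First part of the scenario: hash rate falls from 100% to 50%.
    elapsed, new_diff = retarget_period(1.0, 0.5, difficulty=1.0)
    print(f"first 2016 blocks take {elapsed / (60 * 24):.1f} days")       # ~19 days
    print(f"difficulty adjusts to {new_diff:.2f} of its starting value")  # ~0.72
    # Second part: hash rate stays at 50%, but difficulty only fell to ~0.72,
    # so blocks still take well over 10 minutes.
    print(f"block time after adjustment: {TARGET_MINUTES * new_diff / 0.5:.1f} minutes")
```

The point is simply that a single retarget, computed over the whole slow period, under-corrects while the hash rate is still falling, so block times stay well above ten minutes into the second part; the rough figures in the rest of the scenario follow the same arithmetic.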
During this first part the demand for transactions will be extreme, but the supply of transactions is increasingly restricted. This, as has happened in other periods of high transaction demand, causes transaction fees to spike to extraordinary levels. In normal times fees are less than 10% of miner income, but it is plausible that they would spike an order of magnitude or more, counteracting the drop in the economics of mining. But median fees of, say, $200 would increase the sense of panic in the spot market. Let's assume that, by the 2016-th block, more than half the mining power has been rendered uneconomic, so that the block time is around 20 minutes. Thus the adjustment comes after three weeks. When it happens, the adjustment, being based on the total time taken in the first part, will be large but inadequate to correct for the reduced hash rate at the end of the first part. With our assumptions the adjustment will be for a 25% drop in hash power, but the actual drop will have been 50%. Block production will speed up, but only to about 15 minutes/block. Given the panic, fees will drop somewhat but remain high. As the adjustment approaches there are a lot of disgruntled miners, whose investment in ASIC mining rigs has been rendered uneconomic. The rigs can't be repurposed for anything but other Proof-of-Work cryptocurrencies, which have all crashed because, as the article notes: Investors would probably also dump other cryptocurrencies. Recent tantrums have shown that where bitcoin goes, other digital monies follow, says Philip Gradwell of Chainalysis, a data firm. Recall that what the mining power is doing is securing the blockchain against attack. Once it became possible to rent large amounts of mining power, 51% attacks on minor alt-coins became endemic. For example, there were three successful attacks on Ethereum Classic in a single month. Before the adjustment, some fraction of the now-uneconomic Bitcoin mining power has migrated to the rental market. Even a small fraction can overwhelm other cryptocurrencies. As I write, the Bitcoin hash rate is around 110M TH/s. Dogecoin is the next largest "market cap" coin using Bitcoin-like Proof-of-Work. Its hash rate is around 230 TH/s, or 500,000 times smaller. Thus during the first part there was a tidal wave of attacks against every other Proof-of-Work cryptocurrency. It has never been possible to rent enough mining power to attack a major cryptocurrency. But now we have more than 50% of the Bitcoin mining power sitting idle on the sidelines desperate for income. These miners have choices:
They can resume mining Bitcoin. The more efficient of them can do so and still make a profit, but if they all do most will find it uneconomic.
They can mine other Proof-of-Work cryptocurrencies. But even if only a tiny fraction of them do so, it will be uneconomic. And trust in the alt-coins has been destroyed by the wave of attacks.
They can collaborate to mount double-spend attacks against Bitcoin, since they have more than half the mining power.
They can collaborate to mount the kind of sabotage attack described by Eric Budish in The Economic Limits Of Bitcoin And The Blockchain, aiming to profit by shorting Bitcoin in the derivative market and destroying confidence in the asset's security.
The security of Proof-of-Work blockchains depends upon the unavailability of enough mining power to mount an attack.
A massive, sustained drop in the value of Bitcoin would free up enormous amounts of mining power, far more than enough to destroy any smaller cryptocurrency, and probably enough to destroy Bitcoin. Posted by David. at 8:00 AM Labels: bitcoin 2 comments: David. said... Barbie says "Smart contracts are hard". Cross-Chain DeFi Site Poly Network Hacked; Hundreds of Millions Potentially Lost by Eliza Gkritsi and Muyao Shen starts: "Cross-chain decentralized finance (DeFi) platform Poly Network was attacked on Tuesday, with the alleged hacker draining roughly $600 million in crypto. Poly Network, a protocol launched by the founder of Chinese blockchain project Neo, operates on the Binance Smart Chain, Ethereum and Polygon blockchains. Tuesday’s attack struck each chain consecutively, with the Poly team identifying three addresses where stolen assets were transferred." August 10, 2021 at 2:24 PM David. said... In Poly Network Hack Analysis – Largest Crypto Hack, Mudit Gupta provides a detailed analysis of the Poly Network heist: "Poly Network is a Blockchain interoperability project that allows people to send transactions across blockchains. One of their key use case is a cross-blockchain bridge that allows you to move assets from one blockchain to another by locking tokens on one blockchain and unlocking them on a different one. The attacker managed to unlock tokens on various blockchains without locking the corresponding amounts on other blockchains." David Gerard comments: "Poly Network asked Ethereum miners and exchanges to intercede and block the hacker’s addresses for them — sort of giving the game away that crypto miners are transaction processors." Nicholas Weaver would agree. According to Tim Copeland's At least $611 million stolen in massive cross-chain hack: "Blockchain security firm SlowMist has sent out a news alert that says they have already tracked down the attacker's ID. It claims to know their email address, IP information and device fingerprint. The firm said that the attacker's original funds were in monero (XMR), which were exchanged for BNB, ETH and MATIC and other tokens that were used to fund the attack." August 12, 2021 at 3:52 PM
blog-dshr-org-885 ---- DSHR's Blog: The Optimist's Telescope: Review Tuesday, September 10, 2019 The fundamental problem of digital preservation is that, although it is important and we know how to do it, we don't want to pay enough to have it done. It is an example of the various societal problems caused by rampant short-termism, about which I have written frequently. Bina Venkataraman has a new book on the topic entitled The Optimist's Telescope: Thinking Ahead in a Reckless Age. Robert H. Frank reviews it in the New York Times: How might we mitigate losses caused by shortsightedness? Bina Venkataraman, a former climate adviser to the Obama administration, brings a storyteller’s eye to this question in her new book, “The Optimist’s Telescope.” She is also deeply informed about the relevant science.
The telescope in her title comes from the economist A.C. Pigou’s observation in 1920 that shortsightedness is rooted in our “faulty telescopic faculty.” As Venkataraman writes, “The future is an idea we have to conjure in our minds, not something that we perceive with our senses. What we want today, by contrast, we can often feel in our guts as a craving.” She herself is the optimist in her title, confidently insisting that impatience is not an immutable human trait. Her engaging narratives illustrate how people battle and often overcome shortsightedness across a range of problems and settings. Below the fold, some thoughts upon reading the book. The plot of Isaac Asimov's Foundation Trilogy evolves as a series of "Seldon crises", in which simultaneous internal and external crises combine to force history into the path envisioned by psychohistorian Hari Seldon and the Foundations he established with the aim of reducing the duration of the dark ages after the fall of the Galactic Empire from 30,000 to 1,000 years. The world today feels as though it is undergoing a Seldon crisis, with external (climate change) and internal (inequality, the rise of quasi-fascist leaders of "democracies") crises reinforcing each other. What is lacking is a Foundation charting a long-term future that minimizes the dark ages to come after the fall of civilization. What ties the various current crises together is short-termism; all levels of society being incapable of long-term thinking, and failing to resist eating the marshmallow. In her introduction Venkataraman writes: I argue in this book that many decisions are made in the presence of information about future consequences but in the absence of good judgement. We try too hard to know the exact future and do too little to be ready for its many possibilities. The result is an epidemic of recklessness, a colossal failure to plan ahead. ... To act on behalf of our future selves can be hard enough; to act on behalf of future neighbors, communities, countries of the planet can seem impossible, even if we aspire to that ideal. By contrast, it is far easier to respond to an immediate threat. She divides her book into three parts, and in each deploys an impressive range of examples of the problems caused by lack of foresight. But it is an optimistic book, because in each part she provides techniques for applying foresight and examples of their successful application. Part 1: Individual and Family Dorian, the second-worst North Atlantic hurricane ever, was ravaging the Bahamas as I read Part 1's discussion of why, despite early and accurate warnings, people fail to evacuate or take appropriate precautions for hurricanes and other natural disasters: It is human nature to rely on mental shortcuts and gut feelings - more than gauges of the odds - to make decisions. ... These patterns of thinking, I have learned, explain why all the investment on better predictions can fall short of driving decisions about the future ... The threats that people take most seriously turn out to be those we can most vividly imagine. She illustrates why this is hard using the collapse of microfinance in Andhra Pradesh: A person who might look reckless when poor could look smart and strategic when flush. Realizing that people who are lacking resources often have a kind of tunnel vision for the present helped me understand why many women involved in India's microfinance crisis went against their own future interest, taking on too many loans and falling deep into debt.
It also explains why the poorest families have more trouble heeding hurricane predictions. The problem on the lending side of the collapse was "be careful what you measure". The microfinance companies were measuring the number of new loans, and the low default rate, not noticing that the new loans were being used to pay off old ones. The same phenomenon of scarcity causing recklessness helps explain why black kids in schools suffer more severe discipline: in exasperated moments, impulsive decisions reflecting ingrained biases become more likely. Teachers, like all of us, are exposed to portrayals in the media and popular culture of black people as criminals, and those images shape unconscious views and actions. University of Oregon professors Kent McIntosh and Erik Girvan call these moments of discipline in schools "vulnerable decision points." They track discipline incidents in schools around the country and analyze the data to show school administrators and teachers are often predictable. When teachers are fatigued at the end of a school day or week, or hungry after skipping lunch for meetings, they are more likely to make rash decisions. ... This bears out the link Eldar Shafir and Sendhil Mullainathan have shown between scarcity - in this case, of time and attention - and reckless decision making. It is similar to the pattern that hamstrings the poor from saving for their future. Among the techniques she discusses for imagining the future are virtual reality experiences, and simpler techniques such as: an annual gathering where each person writes his own obituary and reads it aloud to the group. Prototype Clock Another is Danny Hillis' 10,000 year clock: The clock idea captivated those whom Hillis told about it, including futurist and technology guru Stewart Brand and the musician Brian Eno. And me. IIRC it was at the 1996 Hacker's conference where Hillis and Brand talked about the idea of the clock. The presentation set me thinking about the long-term future of digital information, and about how systems to provide it needed to be ductile rather than, like Byzantine Fault Tolerance, brittle. The LOCKSS Program was the result a couple of years later. ERNIE 1 via Wikipedian geni Another technique she calls "glitter bombs" - you'll need to read the book to find out why. The UK's Premium Bonds and other prize-linked savings schemes are examples: The British government launched its Premium Bonds program in 1956, to encourage savings after World War II. For the past seven decades, between 22 and 40 percent of UK citizens have held the bonds at any given time. The savers accept lower guaranteed returns than comparable government bonds in exchange for the prospect of winning cash prizes during monthly drawings. Tufano's research shows that people who save under these schemes typically do so not instead of saving elsewhere but instead of gambling. As kids, my brother and I routinely received small Premium Bonds as birthday or Christmas gifts. I recall watching on TV as "ERNIE" chose winners, but I don't recall ever being one. Part 2: Businesses and Organizations Venkataraman introduces this part thus: The unwitting ways that organizations encourage reckless decisions may pose an even greater threat, however, than the cheating we find so repulsive. The work of John Graham at the National Bureau of Economic Research puts eye-popping scandals into perspective. He has shown that more money is lost for shareholders of corporations ...
by the routine, legal habit of executives making bad long-term decisions to boost near-term profits than what is siphoned off by corporate fraud. Among her examples of organizational short-termism are the Dust Bowl, gaming No Child Left Behind by "teaching to the test", over-prescribing of antibiotics, over-fishing, and the Global Financial Crisis (GFC). For each, she discusses examples of successful, albeit small-scale, mitigations: The Dust Bowl was caused by the economic incentives, still in place, for farmers to aggressively till their soil to produce more annual crops. She describes how people are developing perennial crops, needing much less tilling and irrigation: Perennial grains, unlike annuals, burrow thick roots ten to twenty feet deep into the ground. Plants with such entrenched roots don't require much irrigation and they withstand drought better. Perennial roots clench the fertile topsoil like claws and keep it from washing away. This makes it possible for a rich soil microbiome to thrive that helps crops use nutrients more efficiently. A field of perennials does not need to be plowed each year, and so more carbon remains trapped in its soil instead of escaping to the atmosphere. But: To get perennial grains into production, Jackson also had to figure out how to overcome farmers' aversion to taking risks on unknown crops, and their immediate fears of not having buyers for their product. Researchers from the Land Institute and University of Minnesota have brokered deals for twenty farmers to plant fields with a perennial grain that resembles wheat. They persuaded the farmers by securing buyers willing to pay a premium for the grain. This is an impressive demonstration of making "what lasts over time pay in the short run", but scaling up to displace annual grains in the market is an exercise left to the reader. Montessori and similar educational philosophies (e.g. Reggio Emilia early childhood education) are known to be effective alternatives to the testing-based No Child Left Behind. But they aren't as easy to measure, and thus to justify deploying widely. So this is what we get: Other reports have documented how "teaching to the test" curtails student curiosity, and how it has even driven some teachers and principals to cheat by correcting student answers. The metric might work for organizations at the bottom of the heap, but not for those near the top. Organizations at the bottom of the heap have low-hanging fruit, so they can see how to improve. It is much more difficult for organizations near the top to see how to improve, so the temptation to cheat is greater. Doctors have been effective at curbing over-prescribing by their colleagues using an in-person, patient-specific "postgame rehash" when suspect prescriptions are detected. But: The drawback is that it requires a lot of time and legwork, and even hospitals with antibiotic stewardship teams lack the resources to do this across an entire hospital year-round. So although this approach works, it can't scale up to match the problem of over-prescribing in hospitals, let alone by GPs. And it clearly can't deal with the even more difficult problem of agricultural over-use of antibiotics. Attempts to reduce over-fishing by limiting fishing days and landings haven't been effective. They lead to intensive, highly competitive "derby days" during which immature fish are killed and dumped, and prices crash because the entire quota arrives on the market at the same time.
Instead, the approach of "catch shares", in effect giving the fishermen equity in the fishery, has driven the Gulf Coast red snapper fishery back from near-extinction: The success of catch shares shows that agreements to organize businesses - and wise policy - can encourage collective foresight. Programs that align future interests with the present can, in the words of Buddy Guindon, turn pirates into stewards. It isn't clear that it would have been possible to implement catch shares before the fishery faced extinction. The Global Financial Crisis of 2008 was driven by investors' monomaniacal focus on quarterly results, and thus executives' monomaniacal focus on manipulating them to enhance their stock options and annual bonuses. She responds with the story of Eagle Capital Management, a patient value investment firm which, after enduring years of sub-par performance, flourished during and after the dot-com bust: Eagle fared well and way outperformed the plummeting markets in 1999 and 2000. In just those two years, the gains more than made up for the losses of the previous five. Today, the company has grown to manage more than $25 billion in assets and, on average, earned an annual return of more than 13 percent on its investments between 1998 and 2018. That's more than double the annual return from the S&P 500 during that time. Some of my money is managed by a firm with a similar investment strategy, so I can testify to the need for patience and a long-term view. Value investing has been out of favor during the recovery from the GFC. Note that the whole reason for Eagle's success was that most competitors were doing something different; if everyone had been taking Eagle's long view the GFC wouldn't have happened but Eagle would have been a run-of-the-mill performer. She examines long-lived biological systems, including: The Pando aspen colony in Utah, ... is more than eighty thousand years old, and it has persisted by virtue of self-propagation - cloning itself - and by slow migration to fulfill its needs for water and nutrients from the soil. It even survived the volcanic winter spurred by the massive eruption seventy-five thousand years ago on Sumatra. ... Its strategy - making lots of copies of itself - is one echoed by digital archivist David Rosenthal ... Lots of copies dispersed to different environments and organizations, Rosenthal told me, is the only viable survival route for the ideas and records of the digital age. Rhizocarpon geographicum She is right that systems intended to survive for the long term need high levels of redundancy, and low levels of correlation. She also points out another thing they need: Another secret of some of the oldest living things on Earth is slow growth. Sussman documents what are known as map lichens in Greenland, specimens at least three thousand years old that have grown one centimeter every hundred years - a hundred times slower than the pace of continental drift. The need to force systems to operate relatively slowly by imposing rate limits is something that I've written about several times (as has Paul Vixie), for example in Brittle Systems: The design goal of almost all systems is to do what the user wants as fast as possible. This means that when the bad guy wrests control of the system from the user, the system will do what the bad guy wants as fast as possible. Doing what the bad guy wants as fast as possible pretty much defines brittleness in a system; failures will be complete and abrupt.
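To make the rate-limiting idea concrete, here is a minimal sketch in Python of a token-bucket limiter. This is purely illustrative and is not how LOCKSS implements its rate limits; the class and parameter names are invented for this example. The point is that by refusing to perform a sensitive operation faster than a configured rate, the system bounds how much damage a compromised caller can do in any given time window, so failures become gradual and detectable rather than complete and abrupt.

```python
import time

class TokenBucket:
    """Cap the rate of a sensitive operation so a hijacked caller
    cannot do unbounded damage quickly (illustrative sketch only)."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec       # tokens added per second
        self.capacity = burst          # maximum burst size
        self.tokens = float(burst)     # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to the burst capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Hypothetical usage: allow at most 2 repair requests per second, bursts of 5.
limiter = TokenBucket(rate_per_sec=2.0, burst=5)
if limiter.allow():
    pass  # perform the rate-limited operation
else:
    pass  # back off; even a compromised caller is throttled
```

Whatever the implementation details, the design choice is the same: the system deliberately trades peak speed for a bound on the rate at which things can go wrong.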
Rate limits are essential in the LOCKSS system. Another of her examples is also about rate limits. Gregg Popovich, coach of the San Antonio Spurs: pioneered the practice of keeping star players out of games for rest to prevent later injuries. Harrison's H4 Phantom Photographer The equivalent of "glitter bombs" in this part are prizes; the earliest and perhaps most famous success is the Longitude Prize, a £25,000 prize that motivated John Harrison's succession of marine chronometers (preserved in working order at the Royal Greenwich Observatory). More recent successful prizes include the X-Prize spaceship and DARPA's prizes kick-starting autonomous car technology. But note that none of the recent successful prizes have spawned technologies relevant to solving the Seldon crisis we face. One interesting technique she details is "prospective hindsight": In contrast to the more common practice of describing what will happen in the future, prospective hindsight requires assuming something already happened and trying to explain why. This shifts people's focus away from mere prediction of future events and toward evaluating the consequences of their current choices. In the early days of Vitria Technology, my third startup, we worked with FedEx. One of the many impressive things about the company was their morning routine of reviewing the events of the previous 24 hours to enumerate everything that had gone wrong, and identify the root causes. Explaining why is an extremely valuable process. Part 3: Communities and Society Some of the examples in this part, such as the warnings of potential for terrorism at the Munich Olympics, the siting of the Fukushima reactors, the extraordinary delay in responding to the Ebola outbreak: E-mails later published by the Associated Press revealed that officials knew of the potential danger and scope of the epidemic months before the designation [of a global emergency], and were warned of its scale by Doctors Without Borders ... The World Health Organization's leaders, however, were worried about declaring the emergency because of the possible damage to the economies of the countries at the epicenter of the outbreak. and the Indian Ocean tsunami: After Dr. Smith Dharmasaroja, the head meteorologist of Thailand, advocated in 1998 for creating a network of sirens to warn of incoming tsunamis in the Indian Ocean, the ruling government replaced him. His superiors argued that a coastal warning system might deter tourists, as they would see Thailand as unsafe. Six years later, a massive Indian Ocean tsunami killed more than 200,000 people including thousands in coastal Thailand, many of them tourists. show how the focus on short-term costs has fatal consequences. One reason is "social discounting", the application of a discount rate to estimated future costs to reduce them to a "net present value". This technique might have value in purely economic computations, although as I pointed out in Two Sidelights on Short-Termism: in practice it gives wrong answers: I've often referred to the empirical work of Haldane & Davies and the theoretical work of Farmer and Geanakoplos, both of which suggest that investors using Discounted Cash Flow (DCF) to decide whether an investment now is justified by returns in the future are likely to undervalue the future. ... Now Harvard's Greenwood & Shleifer, in a paper entitled Expectations of Returns and Expected Returns, reinforce this ...
They compare investors' beliefs about the future of the stock market as reported in various opinion surveys, with the outputs of various models used by economists to predict the future based on current information about stocks. They find that when these models, all enhancements to DCF of one kind or another, predict low performance investors expect high performance, and vice versa. If they have experienced poor recent performance and see a low market, they expect this to continue and are unwilling to invest. If they see good recent performance and a high market they expect this to continue. Their expected return from investment will be systematically too high, or in other words they will suffer from short-termism. But as applied to investments in preventing future death and disaster these techniques uniformly fail, partly because they undervalue human life, and partly because they underestimate the risk of death and disaster, because they cannot enumerate all the potential risks. Peter Schwartz founded the Global Business Network, which has decades of experience running scenario planning exercises for major corporations. He: has discovered that people are tempted to try to lock in on a single possible scenario that they prefer or see as the most likely and simply plan for that - defeating the purpose of scenario generation. The purpose being, of course, to get planners to think about the long tail of "black swan" events. The optimistic examples in this part are interesting, especially her account of the fight against the proposed Green Diamond development in the floodplain of Richland County, South Carolina, and Jared Watson's Eagle Scout project to educate the citizens of Mattapoisett, Massachusetts about the risk of flooding. But, as she recounts: In each of these instances, a community's size, or at least its cultural continuity between past and present, has made it easier to create and steward collective heirlooms. Similarly, of the hundreds of stone markers dedicated to past tsunamis in Japan, the two that were heeded centuries later were both in small villages, where oral tradition and school education reinforced the history and passed down the warning over time. Coda Venkataraman finishes with an optimistic coda pointing to the work of Paul Bain and his colleagues who: demonstrated that even climate deniers could be persuaded of the need for "environmental citizenship" if the actions to be taken, such as reducing carbon emissions, were framed as improvements in the way people would treat one another in the imagined future. A collective idea of the future in which people work together on environmental problems, and are more caring and considerate - or a future with greater economic and technological progress - motivated the climate change deniers to support such actions even when they didn't believe that human-caused climate change was a problem. She enumerates the five key lessons she takes away from her work on the book: Look beyond near-term targets. We can avoid being distracted by short-term noise and cultivate patience by measuring more than immediate results. Stoke the imagination. We can boost our ability to envision the range of possibilities that lie ahead. Create immediate rewards for future goals. We can find ways to make what's best for us over time pay off in the present. Direct attention away from immediate urges. We can reengineer cultural and environmental cues that condition us for urgency and instant gratification. Demand and design better institutions. 
We can create practices, laws and institutions that foster foresight. My Reaction It is hard not to be impressed by the book's collection of positive examples, but it is equally hard not to observe that in each case there are great difficulties in scaling them up to match the threats we face. And, in particular, there is a difficulty they all share that is inherent in Venkataraman's starting point: I argue in this book that many decisions are made in the presence of information about future consequences but in the absence of good judgement. The long history of first the tobacco industry's and subsequently the fossil fuel industry's massive efforts to pollute the information environment casts great doubt on the idea that "decisions are made in the presence of information about future consequences" if those consequences affect oligopolies. And research is only now starting to understand how much easier it is for those who have benefited from the huge rise in economic inequality to use social media to the same ends. As just one example: Now researchers led by Penn biologist Joshua B. Plotkin and the University of Houston’s Alexander J. Stewart have identified another impediment to democratic decision making, one that may be particularly relevant in online communities. In what the scientists have termed “information gerrymandering,” it’s not geographical boundaries that confer a bias but the structure of social networks, such as social media connections. Reporting in the journal Nature, the researchers first predicted the phenomenon from a mathematical model of collective decision making, and then confirmed its effects by conducting social network experiments with thousands of human subjects. Finally, they analyzed a variety of real-world networks and found examples of information gerrymandering present on Twitter, in the blogosphere, and in U.S. and European legislatures. “People come to form opinions, or decide how to vote, based on what they read and who they interact with,” says Plotkin. “And in today’s world we do a lot of sharing and reading online. What we found is that the information gerrymandering can induce a strong bias in the outcome of collective decisions, even in the absence of ‘fake news.’” In this light Nathan J. Robinson's The Scale Of What We're Up Against makes depressing reading: It can be exhausting to realize just how much money is being spent trying to make the world a worse place to live in. The Koch Brothers are often mentioned as bogeymen, and invoking them can sound conspiratorial, but the scale of the democracy-subversion operation they put together is genuinely quite stunning. Jane Mayer, in Dark Money, put some of the pieces together, and found that the Charles Koch Foundation had subsidized “pro-business, antiregulatory, and antitax” programs at over 300 institutes of higher education. That is to say, they endowed professorships and think tanks that pumped out a constant stream of phony scholarship. They established the Mercatus Center at George Mason University, a public university in Virginia. All of these professors, “grassroots” groups, and think tanks are dedicated to pushing a libertarian ideology that is openly committed to creating a neo-feudal dystopia. The Kochs provide just a small part of the resources devoted to polluting the information environment. Social networks, as the Cambridge Analytica scandal shows, have greatly improved the productivity of these resources. I'm sorry to end my review of an optimistic book on a pessimistic note.
But I'm an engineer, and much of engineering is about asking What Could Possibly Go Wrong? Posted by David. at 8:00 AM 2 comments: David. said... Facing the Great Reckoning Head-On, danah boyd's speech accepting one of this year's Barlow awards from the EFF, is a must-read. It is in its own way another plea for longer-term thinking: "whether we like it or not, the tech industry is now in the business of global governance. “Move fast and break things” is an abomination if your goal is to create a healthy society. Taking short-cuts may be financially profitable in the short-term, but the cost to society is too great to be justified. In a healthy society, we accommodate differently abled people through accessibility standards, not because it’s financially prudent but because it’s the right thing to do. In a healthy society, we make certain that the vulnerable amongst us are not harassed into silence because that is not the value behind free speech. In a healthy society, we strategically design to increase social cohesion because binaries are machine logic not human logic." September 13, 2019 at 3:06 PM David. said... Last October, Alex Nevala-Lee made the same point about Hari Seldon in What Isaac Asimov Taught Us About Predicting the Future: "Asimov later acknowledged that psychohistory amounted to a kind of emotional reassurance: “Hitler kept winning victories, and the only way that I could possibly find life bearable at the time was to convince myself that no matter what he did, he was doomed to defeat in the end.” The notion was framed as a science that could predict events centuries in advance, but it was driven by a desire to know what would happen in the war over the next few months — a form of wishful thinking that is all but inevitable at times of profound uncertainty. Before the last presidential election, this impulse manifested itself in a widespread obsession with poll numbers and data journalism" September 13, 2019 at 3:26 PM
blog-dshr-org-9202 ---- DSHR's Blog: Venture Capital Isn't Working DSHR's Blog I'm David Rosenthal, and this is a place to discuss the work I'm doing in Digital Preservation. Thursday, April 29, 2021 Venture Capital Isn't Working I was an early employee at three VC-funded startups from the 80s and 90s. All of them IPO-ed and two (Sun Microsystems and Nvidia) made it into the list of the top 100 US companies by market capitalization. So I'm in a good position to appreciate Jeffrey Funk's must-read The Crisis of Venture Capital: Fixing America’s Broken Start-Up System. Funk starts: Despite all the attention and investment that Silicon Valley’s recent start-ups have received, they have done little but lose money: Uber, Lyft, WeWork, Pinterest, and Snapchat have consistently failed to turn profits, with Uber’s cumulative losses exceeding $25 billion.
Perhaps even more notorious are bankrupt and discredited start-ups such as Theranos, Luckin Coffee, and Wirecard, which were plagued with management failures, technical problems, or even outright fraud that auditors failed to notice. What’s going on? There is no immediately obvious reason why this generation of start-ups should be so financially disastrous. After all, Amazon incurred losses for many years, but eventually grew to become one of the most profitable companies in the world, even as Enron and WorldCom were mired in accounting scandals. So why can’t today’s start-ups also succeed? Are they exceptions, or part of a larger, more systemic problem? Below the fold, some reflections on Funk's insightful analysis of the "larger, more systemic problem". Funk introduces his argument thus: In this article, I first discuss the abundant evidence for low returns on VC investments in the contemporary market. Second, I summarize the performance of start-ups founded twenty to fifty years ago, in an era when most start-ups quickly became profitable, and the most successful ones rapidly achieved top-100 market capitalization. Third, I contrast these earlier, more successful start-ups with Silicon Valley’s current set of “unicorns,” the most successful of today’s start-ups. Fourth, I discuss why today’s start-ups are doing worse than those of previous generations and explore the reasons why technological innovation has slowed in recent years. Fifth, I offer some brief proposals about what can be done to fix our broken start-up system. Systemic problems will require systemic solutions, and thus major changes are needed not just on the part of venture capitalists but also in our universities and business schools. Is There A Problem? Funk's argument that there is a problem can be summarized thus: The returns on VC investments over the last two decades haven't matched the golden years of the preceding two decades. In the golden years startups made profits. Now they don't. VC Returns Are Sub-Par This graph from a 2020 Morgan Stanley report shows that during the 90s the returns from VC investments greatly exceeded the returns from public equity. But since then the median VC return has been below that of public equity. This doesn't reward investors for the much higher risk of VC investments. The weighted average VC return is slightly above that of public equity because, as Funk explains: a small percentage of investments does provide high returns, and these high returns for top-performing VC funds persist over subsequent quarters. Although this data does not demonstrate that select VCs consistently earn solid profits over decades, it does suggest that these VCs are achieving good returns. It was always true that VC quality varied greatly. I discussed the advantages of working with great VCs in Kai Li's FAST Keynote: Work with the best VC funds. The difference between the best and the merely good in VCs is at least as big as the difference between the best and the merely good programmers. At nVIDIA we had two of the very best, Sutter Hill and Sequoia. The result is that, like Kai but unlike many entrepreneurs, we think VCs are enormously helpful. One thing that was striking about working with Sutter Hill was how many entrepreneurs did a series of companies with them, showing that both sides had positive experiences. Startups Used To Make Profits Before the dot-com boom, there used to be a rule that in order to IPO a company, it had to be making profits.
This was a good rule, since it provided at least some basis for setting the stock price at the IPO. Funk writes: There was a time when venture capital generated big returns for investors, employees, and customers alike, both because more start-ups were profitable at an earlier stage and because some start-ups achieved high market capitalization relatively quickly. Profits are an important indicator of economic and technological growth, because they signal that a company is providing more value to its customers than the costs it is incurring. A number of start-ups founded in the late twentieth century have had an enormous impact on the global economy, quickly reaching both profitability and top-100 market capitalization. Among these are the so-called FAANMG (Facebook, Amazon, Apple, Microsoft, Netflix, and Google), which represented more than 25 percent of the S&P’s total market capitalization and more than 80 percent of the 2020 increase in the S&P’s total value at one point—in other words, the most valuable and fastest-growing companies in America in recent years. Funk's Table 2 shows the years to profitability and years to top-100 market capitalization for companies founded between 1975 and 2004. I'm a bit skeptical of the details because, for example, the table says it took Sun Microsystems 6 years to turn a profit. I'm pretty sure Sun was profitable at its 1986 IPO, 4 years from its founding. Note Funk's stress on achieving profitability quickly. An important Silicon Valley philosophy used to be: Success is great! Failure is OK. Not doing either is a big problem. The reason lies in the Silicon Valley mantra of "fail fast". Most startups fail, and the costs of those failures detract from the returns of the successes. Minimizing the cost of failure, and diverting the resources to trying something different, is important. Unicorns, Not So Much What are these unicorns? Wikipedia tells us: In business, a unicorn is a privately held startup company valued at over $1 billion. The term was coined in 2013 by venture capitalist Aileen Lee, choosing the mythical animal to represent the statistical rarity of such successful ventures. Back in 2013 unicorns were indeed rare, but as Wikipedia goes on to point out: According to CB Insights, there are over 450 unicorns as of October 2020. Unicorns are breeding like rabbits, but the picture Funk paints is depressing: In the contemporary start-up economy, “unicorns” are purportedly “disrupting” almost every industry from transportation to real estate, with new business software, mobile apps, consumer hardware, internet services, biotech, and AI products and services. But the actual performance of these unicorns both before and after the VC exit stage contrasts sharply with the financial successes of the previous generation of start-ups, and suggests that they are dramatically overvalued. Figure 3 shows the profitability distribution of seventy-three unicorns and ex-unicorns that were founded after 2013 and have released net income and revenue figures for 2019 and/or 2020. In 2019, only six of the seventy-three unicorns included in figure 3 were profitable, while for 2020, seven of seventy were. Hey, they're startups, right? They just need time to become profitable. Funk debunks that idea too: Furthermore, there seems to be little reason to believe that these unprofitable unicorn start-ups will ever be able to grow out of their losses, as can be seen in the ratio of losses to revenues in 2019 versus the founding year.
Aside from a tiny number of statistical outliers ... there seems to be little relationship between the time since a start-up’s founding and its ratio of losses to revenues. In other words, age is not correlated with profits for this cohort. Funk goes on to note that startup profitability once public has declined dramatically, and appears inversely related to IPO valuation: When compared with profitability data from decades past, recent start-ups look even worse than already noted. About 10 percent of the unicorn start-ups included in figure 3 were profitable, much lower than the 80 percent of start-ups founded in the 1980s that were profitable, according to Jay Ritter’s analysis, and also below the overall percentage for start-ups today (20 percent). Thus, not only has profitability dramatically dropped over the last forty years among those start-ups that went public, but today’s most valuable start-ups—those valued at $1 billion or more before IPO—are in fact less profitable than start-ups that did not reach such lofty pre-IPO valuations. Funk uses electric vehicles and biotech to illustrate startup over-valuation: For instance, driven by easy money and the rapid rise of Tesla’s stock, a group of electric vehicle and battery suppliers—Canoo, Fisker Automotive, Hyliion, Lordstown Motors, Nikola, and QuantumScape—were valued, combined, at more than $100 billion at their listing. Likewise, dozens of biotech firms have also achieved billions of dollars in market capitalizations at their listings. In total, 2020 set a new record for the number of companies going public with little to no revenue, easily eclipsing the height of the dot-com boom of telecom companies in 2000. The Alphaville team have been maintaining a spreadsheet of the EV bubble. They determined that there was no way these companies' valuations could be justified given the size of the potential market. Jamie Powell's April 12th Revisiting the EV bubble spreadsheet celebrates their assessment: At pixel time the losses from their respective peaks from all of the electric vehicle, battery and charging companies on our list total some $635bn of market capitalisation, or a fall of just under 38 per cent. Ouch. What Is Causing The Problem This all looks like too much money chasing too few viable startups, and too many me-too startups chasing too few total available market dollars. Funk starts his analysis of the causes of poor VC returns by pointing to the obvious one, one that applies to any successful investment strategy. Its returns will be eroded over time by the influx of too much money: There are many reasons for both the lower profitability of start-ups and the lower returns for VC funds since the mid to late 1990s. The most straightforward of these is simply diminishing returns: as the amount of VC investment in the start-up market has increased, a larger proportion of this funding has necessarily gone to weaker opportunities, and thus the average profitability of these investments has declined. But the effect of too much money is even more corrosive. I'm a big believer in Bill Joy's Law of Startups — "success is inversely proportional to the amount of money you have". Too much money allows hard decisions to be put off. Taking hard decisions promptly is key to "fail fast". Nvidia was an example of this. The company was founded in one of Silicon Valley's recurring downturns. We were the only hardware company funded in that quarter. We got to working silicon on a $2.5M A round. 
Think about it — each of our VCs invested $1.25M to start a company currently valued at $380,000M. Despite delivering ground-breaking performance, as I discussed in Hardware I/O Virtualization, that chip wasn't a success. But it did allow Jen-Hsun Huang to raise another $6.5M. He down-sized the company by 2/3 and got to working silicon of the highly successful second chip with, IIRC, six weeks' money left in the bank. Funk then discusses a second major reason for poor performance: A more plausible explanation for the relative lack of start-up successes in recent years is that new start-ups tend to be acquired by large incumbents such as the faamng companies before they have a chance to achieve top 100 market capitalization. For instance, YouTube was founded in 2004 and Instagram in 2010; some claim they would be valued at more than $150 billion each (pre-lockdown estimates) if they were independent companies, but instead they were acquired by Google and Facebook, respectively. In this sense, they are typical of the recent trend: many start-ups founded since 2000 were subsequently acquired by faamng, including new social media companies such as GitHub, LinkedIn, and WhatsApp. Likewise, a number of money-losing start-ups have been acquired in recent years, most notably DeepMind and Nest, which were bought by Google. But he fails to note the cause of the rash of acquisitions, which is clearly the total Lack Of Anti-Trust Enforcement in the US. As with too much money, the effects of this lack are more pernicious than at first appears. Again, Nvidia provides an example. Just like the founders and VCs of Sun, when we started Nvidia we knew that the route to an IPO and major return on investment involved years and several generations of product. So, despite the limited funding and with the full support of our VCs, we took several critical months right at the start to design an architecture for a family of successive chip generations based on Hardware I/O Virtualization. By ensuring that the drivers in application software interacted only with virtual I/O resources, the architecture decoupled the hardware and software release cycles. The strong linkage between them at Sun had been a consistent source of schedule slip. The architecture also structured the implementation of the chip as a set of modules communicating via an on-chip network. Each module was small enough that a three-person team could design, simulate and verify it. The restricted interface to the on-chip network meant that, if the modules verified correctly, it was highly likely that the assembled chip would verify correctly. Laying the foundations for a long-term product line in this way paid massive dividends. After the second chip, Nvidia was able to deliver a new chip generation every 6 months like clockwork. 6 months after we started Nvidia, we knew over 30 other startups addressing the same market. Only one, ATI, survived the competition with Nvidia's 6-month product cycle. VCs now would be hard to persuade that the return on the initial time and money to build a company that could IPO years later would be worth it when compared to lashing together a prototype and using it to sell the company to one of the FAANMGs. In many cases, simply recruiting a team that could credibly promise to build the prototype would be enough for an "aqui-hire", where a FAANMG buys a startup not for the product but for the people.
Building the foundation for a company that can IPO and make it into the top-100 market cap list is no longer worth the candle. But Funk argues that the major cause of lower returns is this: Overall, the most significant problem for today’s start-ups is that there have been few if any new technologies to exploit. The internet, which was a breakthrough technology thirty years ago, has matured. As a result, many of today’s start-up unicorns are comparatively low-tech, even with the advent of the smartphone—perhaps the biggest technological breakthrough of the twenty-first century—fourteen years ago. Ridesharing and food delivery use the same vehicles, drivers, and roads as previous taxi and delivery services; the only major change is the replacement of dispatchers with smartphones. Online sales of juicers, furniture, mattresses, and exercise bikes may have been revolutionary twenty years ago, but they are sold in the same way that Amazon currently sells almost everything. New business software operates from the cloud rather than onsite computers, but pre-2000 start-ups such as Amazon, Google, and Oracle were already pursuing cloud computing before most of the unicorns were founded. Remember, Sun's slogan in the mid 80s was "The network is the computer"! Virtua Fighter on NV1 In essence, Funk argues that successful startups out-perform by being quicker than legacy companies to exploit the productivity gains made possible by a technological discontinuity. Nvidia was an example of this, too. The technological discontinuity was the transition of the PC from the ISA to the PCI bus. It wasn't possible to do 3D games over the ISA bus; it lacked the necessary bandwidth. The increased bandwidth of the first version of the PCI bus made it just barely possible, as Nvidia's first chip demonstrated by running Sega arcade games at full frame rate. The advantages startups have against incumbents include: An experienced, high-quality team. Initial teams at startups are usually recruited from colleagues, so they are used to working together and know each other's strengths and weaknesses. Jen-Hsun Huang was well-known at Sun, having been the application engineer for LSI Logic on Sun's first SPARC implementation. The rest of the initial team at Nvidia had all worked together building graphics chips at Sun. As the company grows it can no longer recruit only colleagues, so usually experiences what at Sun was called the "bozo invasion". Freedom from backwards compatibility constraints. Radical design change is usually needed to take advantage of a technological discontinuity. Reconciling this with backwards compatibility takes time and forces compromise. Nvidia was able to ignore the legacy of program I/O from the ISA bus and fully exploit the Direct Memory Access capability of the PCI bus from the start. No cash cow to defend. The IBM-funded Andrew project at CMU was intended to deploy what became the IBM PC/RT, which used the ROMP, an IBM RISC CPU competing with Sun's SPARC. The ROMP was so fast that IBM's other product lines saw it as a threat, and insisted that it be priced not to under-cut their existing product's price/performance. So when it finally launched, its price/performance was much worse than Sun's SPARC-based products, and it failed.
Funk concludes this section: In short, today’s start-ups have targeted low-tech, highly regulated industries with a business strategy that is ultimately self-defeating: raising capital to subsidize rapid growth and securing a competitive position in the market by undercharging consumers. This strategy has locked start-ups into early designs and customer pools and prevented the experimentation that is vital to all start-ups, including today’s unicorns. Uber, Lyft, DoorDash, and GrubHub are just a few of the well-known start-ups that have pursued this strategy, one that is used by almost every start-up today, partly in response to the demands of VC investors. It is also highly likely that without the steady influx of capital that subsidizes below-market prices, demand for these start-ups’ services would plummet, and thus their chances of profitability would fall even further. In retrospect, it would have been better if start-ups had taken more time to find good, high-tech business opportunities, had worked with regulators to define appropriate behavior, and had experimented with various technologies, designs, and markets, making a profit along the way. But, if the key to startup success is exploiting a technological discontinuity, and there haven't been any to exploit, as Funk argues earlier, taking more time to "find good, high-tech business opportunities" wouldn't have helped. They weren't there to be found. How To Fix The Problem? Funk quotes Charles Duhigg skewering the out-dated view of VCs: For decades, venture capitalists have succeeded in defining themselves as judicious meritocrats who direct money to those who will use it best. But examples like WeWork make it harder to believe that V.C.s help balance greedy impulses with enlightened innovation. Rather, V.C.s seem to embody the cynical shape of modern capitalism, which too often rewards crafty middlemen and bombastic charlatans rather than hardworking employees and creative businesspeople. And: Venture capitalists have shown themselves to be far less capable of commercializing breakthrough technologies than they once were. Instead, as recently outlined in the New Yorker, they often seem to be superficial trend-chasers, all going after the same ideas and often the same entrepreneurs. One managing partner at SoftBank summarized the problem faced by VC firms in a marketplace full of copycat start-ups: “Once Uber is founded, within a year you suddenly have three hundred copycats. The only way to protect your company is to get big fast by investing hundreds of millions.” VCs like these cannot create the technological discontinuities that are the key to adequate returns on investment in startups: we need venture capitalists and start-ups to create new products and new businesses that have higher productivity than do existing firms; the increased revenue that follows will then enable these start-ups to pay higher wages. The large productivity advantages needed can only be achieved by developing breakthrough technologies, like the integrated circuits, lasers, magnetic storage, and fiber optics of previous eras. And different players—VCs, start-ups, incumbents, universities—will need to play different roles in each industry. Unfortunately, none of these players is currently doing the jobs required for our start-up economy to function properly. Business Schools Success in exploiting a technological discontinuity requires understanding of, and experience with, the technology, its advantages and its limitations.
But Funk points out that business schools, not being engineering schools, need to devalue this requirement. Instead, they focus on "entrepreneurship": In recent decades, business schools have dramatically increased the number of entrepreneurship programs—from about sixteen in 1970 to more than two thousand in 2014—and have often marketed these programs with vacuous hype about “entrepreneurship” and “technology.” A recent Stanford research paper argues that such hype about entrepreneurship has encouraged students to become entrepreneurs for the wrong reasons and without proper preparation, with universities often presenting entrepreneurship as a fun and cool lifestyle that will enable them to meet new people and do interesting things, while ignoring the reality of hard and demanding work necessary for success. One of my abiding memories of Nvidia is Tench Coxe, our partner at Sutter Hill, perched on a stool in the lab playing the "Road Rash" video game about 2am one morning as we tried to figure out why our first silicon wasn't working. He was keeping an eye on his investment, and providing a much-needed calming influence. Focus on entrepreneurship means focus on the startup's business model not on its technology: A big mistake business schools make is their unwavering focus on business model over technology, thus deflecting any probing questions students and managers might have about what role technological breakthroughs play and why so few are being commercialized. For business schools, the heart of a business model is its ability to capture value, not the more important ability to create value. This prioritization of value capture is tied to an almost exclusive focus on revenue: whether revenues come from product sales, advertising, subscriptions, or referrals, and how to obtain these revenues from multiple customers on platforms. Value creation, however, is dependent on technological improvement, and the largest creation of value comes from breakthrough technologies such as the automobile, microprocessor, personal computer, and internet commerce. The key to "capturing value" is extracting value via monopoly rents. The way to get monopoly rents is to subsidize customer acquisition and buy up competitors, until the customers have no place to go. This doesn't create any value. In fact once the monopolist has burnt through the investor's money they find they need a return that can only be obtained by raising prices and holding the customer to ransom, destroying value for everyone. It is true a startup that combines innovation in technology with innovation in business has an advantage. Once more, Nvidia provides an example. Before starting Nvidia, Jen-Hsun Huang had run a division of LSI Logic that traded access to LSI Logic's fab for equity in the chips it made. Based on this experience on the supplier side of the fabless semiconductor business, one of his goals for Nvidia was to re-structure the relationship between the fabless company and the fab to be more of a win-win. Nvidia ended up as one of the most successful fabless companies of all time. But note that the innovation didn't affect Nvidia's basic business model — contract with fabs to build GPUs, and sell them to PC and graphics board companies. A business innovation combined with technological innovation stands a chance of creating a big company; a business innovation with no technology counterpart is unlikely to. 
Research Funk assigns much blame for the lack of breakthrough technologies to Universities: University engineering and science programs are also failing us, because they are not creating the breakthrough technologies that America and its start-ups need. Although some breakthrough technologies are assembled from existing components and thus are more the responsibility of private companies—for instance, the iPhone—universities must take responsibility for science-based technologies that depend on basic research, technologies that were once more common than they are now. Note that Funk accepts as a fait accompli the demise of corporate research labs, which certainly used to do the basic research that led not just to Funk's examples of "semiconductors, lasers, LEDs, glass fiber, and fiber optics", but also, for example, to packet switching, and operating systems such as Unix. As I did three years ago in Falling Research Productivity, he points out that increased government and corporate funding of University research has resulted in decreased output of breakthrough technologies: Many scientists point to the nature of the contemporary university research system, which began to emerge over half a century ago, as the problem. They argue that the major breakthroughs of the early and mid-twentieth century, such as the discovery of the DNA double helix, are no longer possible in today’s bureaucratic, grant-writing, administration-burdened university. ... Scientific merit is measured by citation counts and not by ideas or by the products and services that come from those ideas. Thus, labs must push papers through their research factories to secure funding, and issues of scientific curiosity, downstream products and services, and beneficial contributions to society are lost. Funk's analysis of the problem is insightful, but I see his ideas for fixing University research as simplistic and impractical: A first step toward fixing our sclerotic university research system is to change the way we do basic and applied research in order to place more emphasis on projects that may be riskier but also have the potential for greater breakthroughs. We can change the way proposals are reviewed and evaluated. We can provide incentives to universities that will encourage them to found more companies or to do more work with companies. Funk clearly doesn't understand how much University research is already funded by companies, and how long attempts to change the reward system in Universities have been crashing into the rock comprised of senior faculty who achieved their position through the existing system. He is more enthusiastic but equally misled about how basic research in corporate labs could be revived: One option is to recreate the system that existed prior to the 1970s, when most basic research was done by companies rather than universities. This was the system that gave us transistors, lasers, LEDs, magnetic storage, nuclear power, radar, jet engines, and polymers during the 1940s and 1950s. ... Unlike their predecessors at Bell Labs, IBM, GE, Motorola, DuPont, and Monsanto seventy years ago, top university scientists are more administrators than scientists now—one of the greatest misuses of talent the world has ever seen. Corporate labs have smaller administrative workloads because funding and promotion depend on informal discussions among scientists and not extensive paperwork.
Not understanding the underlying causes of the demise of corporate research labs, Funk reaches for the time-worn nostrums of right-wing economists, "tax credits and matching grants":

We can return basic research to corporate labs by providing much stronger incentives for companies—or cooperative alliances of companies—to do basic research. A scheme of substantial tax credits and matching grants, for instance, would incentivize corporations to do more research and would bypass the bureaucracy-laden federal grant process. This would push the management of detailed technological choices onto scientists and engineers, and promote the kind of informal discussions that used to drive decisions about technological research in the heyday of the early twentieth century. The challenge will be to ensure these matching funds and tax credits are in fact used for basic research and not for product development. Requiring multiple companies to share research facilities might be one way to avoid this danger, but more research on this issue is needed.

In last year's The Death Of Corporate Research Labs I discussed a really important paper from a year earlier by Arora et al, The changing structure of American innovation: Some cautionary remarks for economic growth, which Funk does not cite. I wrote:

Arora et al point out that the rise and fall of the labs coincided with the rise and fall of anti-trust enforcement:

Historically, many large labs were set up partly because antitrust pressures constrained large firms’ ability to grow through mergers and acquisitions. In the 1930s, if a leading firm wanted to grow, it needed to develop new markets. With growth through mergers and acquisitions constrained by anti-trust pressures, and with little on offer from universities and independent inventors, it often had no choice but to invest in internal R&D. The more relaxed antitrust environment in the 1980s, however, changed this status quo. Growth through acquisitions became a more viable alternative to internal research, and hence the need to invest in internal research was reduced.

Lack of anti-trust enforcement, pervasive short-termism, driven by Wall Street's focus on quarterly results, and management's focus on manipulating the stock price to maximize the value of their options killed the labs:

Large corporate labs, however, are unlikely to regain the importance they once enjoyed. Research in corporations is difficult to manage profitably. Research projects have long horizons and few intermediate milestones that are meaningful to non-experts. As a result, research inside companies can only survive if insulated from the short-term performance requirements of business divisions. However, insulating research from business also has perils. Managers, haunted by the spectre of Xerox PARC and DuPont’s “Purity Hall”, fear creating research organizations disconnected from the main business of the company. Walking this tightrope has been extremely difficult. Greater product market competition, shorter technology life cycles, and more demanding investors have added to this challenge. Companies have increasingly concluded that they can do better by sourcing knowledge from outside, rather than betting on making game-changing discoveries in-house.

It is pretty clear that "tax credits and matching grants" aren't the fix for the fundamental anti-trust problem. Not to mention that the idea of "Requiring multiple companies to share research facilities" in and of itself raises serious anti-trust concerns.
After such a good analysis, it is disappointing that Funk's recommendations are so feeble. We have to add inadequate VC returns and a lack of startups capable of building top-100 companies to the long list of problems that only a major overhaul of anti-trust enforcement can fix. Lina Khan's nomination to the FTC is a hopeful sign that the Biden administration understands the urgency of changing direction, but Biden's hesitation about nominating the DOJ's anti-trust chief is not.

Update: Michael Cembalest's Food Fight: An update on private equity performance vs public equity markets has a lot of fascinating information about private equity in general and venture capital in particular. His graphs comparing MOIC (Multiple Of Invested Capital) and IRR (Internal Rate of Return) across vintage years support his argument that:

We have performance data for venture capital starting in the mid-1990s, but the period is so distorted by the late 1990’s boom and bust that we start our VC performance discussion in 2004. In my view, the massive gains earned by VC managers in the mid-1990s are not relevant to a discussion of VC investing today. As with buyout managers, VC manager MOIC and IRR also tracked each other until 2012 after which a combination of subscription lines and faster distributions led to rising IRRs despite falling MOICs. There’s a larger gap between average and median manager results than in buyout, indicating that there are a few VC managers with much higher returns and/or larger funds that pull up the average relative to the median.

The gap is pretty big:

VC managers have consistently outperformed public equity markets when looking at the “average” manager. But to reiterate, the gap between average and median results are substantial and indicate outsized returns posted by a small number of VC managers. For vintage years 2004 to 2008, the median VC manager actually underperformed the S&P 500 pretty substantially.

Another of Cembalest's fascinating graphs addresses this question:

One of the other “food fight” debates relates to pricing of venture-backed companies that go public. In other words, do venture investors reap the majority of the benefits, leaving public market equity investors “holding the bag”? Actually, the reverse has been true over the last decade when measured in terms of total dollars of value creation accruing to pre- and post-IPO investors: post-IPO investor gains have often been substantial.

To show this:

We analyzed all US tech, internet retailing and interactive media IPOs from 2010 to 2019. We computed the total value created since each company’s founding, from original paid-in capital by VCs to its latest market capitalization. We then examined how total value creation has accrued to pre- and post-IPO investors. Sometimes both investor types share the gains, and sometimes one type accrues the vast majority of the gains. Pre-IPO investors earn the majority of the pie when IPOs collapse or flat-line after being issued, and post-IPO investors reap the majority of the pie when IPOs appreciate substantially after being issued. There are three general regions in the chart.

As you can see, the vast majority of the 165 IPOs analyzed resulted in a large share of the total value creation accruing to public market equity investors; nevertheless, there were some painful exceptions (see lower left region on the chart).

Posted by David. at 8:00 AM Labels: anti-trust, intellectual property, venture capital

2 comments:

Blissex2 said...
It is as usual a very informative and interesting post, but for me the main cause of unprofitable unicorns is the example of the first 20 years of *Amazon*, and the second cause is the general "cash is trash" economic climate, where asset price inflation is very high, so anybody with cash desperately tries to exchange it for assets, even mythical assets like unicorn shares.

April 30, 2021 at 6:40 AM

David. said...

Wikipedia agrees with me and disagrees with Funk's Figure 2, stating: "Sun was profitable from its first quarter in July 1982"

May 2, 2021 at 3:18 PM
blog-dshr-org-9557 ---- DSHR's Blog: Alternatives To Proof-of-Work

Tuesday, July 20, 2021

Alternatives To Proof-of-Work

The designers of peer-to-peer consensus protocols such as those underlying cryptocurrencies face three distinct problems. They need to prevent:

- Being swamped by a multitude of Sybil peers under the control of an attacker. This requires making peer participation expensive, such as by Proof-of-Work (PoW). PoW is problematic because it has a catastrophic carbon footprint.
- A rational majority of peers from conspiring to obtain inappropriate benefits. This is thought to be achieved by decentralization, that is a network of so many peers acting independently that a conspiracy among a majority of them is highly improbable. Decentralization is problematic because in practice all successful cryptocurrencies are effectively centralized.
- A rational minority of peers from conspiring to obtain inappropriate benefits. This requirement is called incentive compatibility. This is problematic because it requires very careful design of the protocol.

In the rather long post below the fold I focus on some potential alternatives to PoW, inspired by Jeremiah Wagstaff's Subspace: A Solution to the Farmer’s Dilemma, the white paper for a new blockchain technology.

Careful design of the economic mechanisms of the protocol can in theory ensure incentive compatibility, or as Ittay Eyal and Emin Gun Sirer express it:

the best strategy of a rational minority pool is to be honest, and a minority of colluding miners cannot earn disproportionate benefits by deviating from the protocol

They showed in 2013 that the Bitcoin protocol was not incentive-compatible, but this is in principle amenable to a technical fix. Unfortunately, ensuring decentralization is a much harder problem.

Decentralization

Vitalik Buterin, co-founder of Ethereum, wrote in The Meaning of Decentralization:

In the case of blockchain protocols, the mathematical and economic reasoning behind the safety of the consensus often relies crucially on the uncoordinated choice model, or the assumption that the game consists of many small actors that make decisions independently.

The Internet's basic protocols, TCP/IP, DNS, SMTP, HTTP are all decentralized, and yet the actual Internet is heavily centralized around a few large companies. Centralization is an emergent behavior, driven not by technical but by economic forces. W. Brian Arthur described these forces before the Web took off in his 1994 book Increasing Returns and Path Dependence in the Economy. Similarly, the blockchain protocols are decentralized but ever since 2014 the Bitcoin blockchain has been centralized around 3-4 large mining pools. Buterin wrote:

can we really say that the uncoordinated choice model is realistic when 90% of the Bitcoin network’s mining power is well-coordinated enough to show up together at the same conference?

This is perhaps the greatest among the multiple failures of Satoshi Nakamoto's goals for Bitcoin. The economic forces driving this centralization are the same as those that centralized other Internet protocols. I explored how they act to centralize P2P systems in 2014's Economies of Scale in Peer-to-Peer Networks. I argued that an incentive-compatible protocol wasn't adequate to prevent centralization.
The simplistic version of the argument was:

- The income to a participant in an incentive-compatible P2P network should be linear in their contribution of resources to the network.
- The costs a participant incurs by contributing resources to the network will be less than linear in their resource contribution, because of the economies of scale.
- Thus the proportional profit margin a participant obtains will increase with increasing resource contribution.
- Thus the effects described in Brian Arthur's Increasing Returns and Path Dependence in the Economy will apply, and the network will be dominated by a few, perhaps just one, large participant.

And I wrote:

The advantages of P2P networks arise from a diverse network of small, roughly equal resource contributors. Thus it seems that P2P networks which have the characteristics needed to succeed (by being widely adopted) also inevitably carry the seeds of their own failure (by becoming effectively centralized). Bitcoin is an example of this.

My description of the fundamental problem was:

The network has to arrange not just that the reward grows more slowly than the contribution, but that it grows more slowly than the cost of the contribution to any participant. If there is even one participant whose rewards outpace their costs, Brian Arthur's analysis shows they will end up dominating the network. Herein lies the rub. The network does not know what an individual participant's costs, or even the average participant's costs, are and how they grow as the participant scales up their contribution. So the network would have to err on the safe side, and make rewards grow very slowly with contribution, at least above a certain minimum size. Doing so would mean few if any participants above the minimum contribution, making growth dependent entirely on recruiting new participants. This would be hard because their gains from participation would be limited to the minimum reward. It is clear that mass participation in the Bitcoin network was fuelled by the (unsustainable) prospect of large gains for a small investment. The result of limiting reward growth would be a blockchain with limited expenditure on mining which, as we see with the endemic 51% attacks against alt-coins, would not be secure.

But without such limits, economies of scale mean that the blockchain would be dominated by a few large mining pools, so would not be decentralized and would be vulnerable to insider attacks. Note that in June 2014 the GHash.io mining pool alone had more than 51% of the Bitcoin mining power.
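To make the economies-of-scale argument concrete, here is a minimal numerical sketch. The constants (a 0.8 cost exponent, unit reward and cost) are assumptions chosen purely for illustration, not measurements of any real network; the only point is that any sub-linear cost curve makes margins grow with contribution.

    # Toy model of the economies-of-scale argument above.
    # All constants are illustrative assumptions, not measured values.
    REWARD_PER_UNIT = 1.0   # income per unit of resource contributed (linear)
    UNIT_COST = 0.9         # cost of the first unit of resource
    COST_EXPONENT = 0.8     # costs grow as contribution**0.8, i.e. sub-linearly

    def margin(contribution: float) -> float:
        """Proportional profit margin for a participant of a given size."""
        reward = REWARD_PER_UNIT * contribution
        cost = UNIT_COST * contribution ** COST_EXPONENT
        return (reward - cost) / reward

    for size in (1, 10, 100, 1000):
        print(f"contribution {size:5d}: margin {margin(size):5.1%}")
    # contribution     1: margin 10.0%
    # contribution    10: margin 43.2%
    # contribution   100: margin 64.2%
    # contribution  1000: margin 77.4%

The bigger participant's fatter margin is exactly what feeds Arthur's increasing-returns dynamic: it can out-invest its smaller rivals, and the network drifts towards one or a few dominant players.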
But the major current problem for Bitcoin, Ethereum and cryptocurrencies in general is not vulnerability to 51% attacks. Participants in these "trustless" systems trust that the mining pools are invested in their security and will not conspire to misbehave. Events have shown that this trust is misplaced as applied to smaller alt-coins. Trustlessness was one of Nakamoto's goals, another of the failures. But as regards the major cryptocurrencies this trust is plausible; everyone is making enough golden eggs to preserve the life of the goose.

Alternatives to Proof-of-Work

The major current problem for cryptocurrencies is that their catastrophic carbon footprint has attracted attention. David Gerard writes:

The bit where proof-of-work mining uses a country’s worth of electricity to run the most inefficient payment system in human history is finally coming to public attention, and is probably Bitcoin’s biggest public relations problem. Normal people think of Bitcoin as this dumb nerd money that nerds rip each other off with — but when they hear about proof-of-work, they get angry. Externalities turn out to matter.

Yang Xiao et al's A Survey of Distributed Consensus Protocols for Blockchain Networks is very useful. They:

identify five core components of a blockchain consensus protocol, namely, block proposal, block validation, information propagation, block finalization, and incentive mechanism. A wide spectrum of blockchain consensus protocols are then carefully reviewed accompanied by algorithmic abstractions and vulnerability analyses. The surveyed consensus protocols are analyzed using the five-component framework and compared with respect to different performance metrics.

Their "wide spectrum" is comprehensive as regards the variety of PoW protocols, and as regards the varieties of Proof-of-Stake (PoS) protocols that are the leading alternatives to PoW. Their coverage of other consensus protocols is less thorough, and as regards the various protocols that defend against Sybil attacks by wasting storage instead of computation it is minimal.

The main approach to replacing PoW with something equally good at preventing Sybil attacks but less good at cooking the planet has been PoS, but a recent entrant using Proof-of-Time-and-Space (I'll use PoTaS since the acronyms others use are confusing) to waste storage has attracted considerable attention. I will discuss PoS in general terms and two specific systems, Chia (PoTaS) and Subspace (a hybrid of PoTaS and PoS).

Proof-of-Stake

In PoW as implemented by Nakamoto, the probability of winning the next block is proportional to the number of otherwise useless hashes computed — Nakamoto thought by individual CPUs but now by giant mining pools driven by warehouses full of mining ASICs. The idea of PoS is that the resource being wasted to deter Sybil attacks is the cryptocurrency itself. In order to mount a 51% attack the attacker would have to control more of the cryptocurrency than the loyal peers. In vanilla PoS the probability of winning the next block is proportional to the amount of the cryptocurrency "staked", i.e. effectively escrowed and placed at risk of being "slashed" if the majority concludes that the peer has misbehaved. It appears to have been first proposed in 2011 by Bitcointalk user QuantumMechanic. The first cryptocurrency to use PoS, albeit as a hybrid with PoW, was Peercoin in 2012. There have been a number of pure PoS cryptocurrencies since, including Cardano from 2015 and Algorand from 2017 but none have been very successful.

Ethereum, the second most important cryptocurrency, understood the need to replace PoW in 2013 and started work in 2014. But as Vitalik Buterin then wrote:

Over the last few months we have become more and more convinced that some inclusion of proof of stake is a necessary component for long-term sustainability; however, actually implementing a proof of stake algorithm that is effective is proving to be surprisingly complex. The fact that Ethereum includes a Turing-complete contracting system complicates things further, as it makes certain kinds of collusion much easier without requiring trust, and creates a large pool of stake in the hands of decentralized entities that have the incentive to vote with the stake to collect rewards, but which are too stupid to tell good blockchains from bad.

Buterin was right about making "certain kinds of collusion much easier without requiring trust".
In On-Chain Vote Buying and the Rise of Dark DAOs Philip Daian and co-authors show that "smart contracts" provide for untraceable on-chain collusion in which the parties are mutually pseudonymous. It is obviously much harder to prevent bad behavior in a Turing-complete environment. Seven years later Ethereum is still working on the transition, which they currently don't expect to be complete for another 18 months: Shocked to see that the timeline for Ethereum moving to ETH2 and getting off proof-of-work mining has been put back to late 2022 … about 18 months from now. This is mostly from delays in getting sharding to work properly. Vitalik Buterin says that this is because the Ethereum team isn’t working well together. [Tokenist] Skepticism about the schedule for ETH2 is well-warranted, as Julia Magas writes in When will Ethereum 2.0 fully launch? Roadmap promises speed, but history says otherwise: Looking at how fast the relevant updates were implemented in the previous versions of Ethereum roadmaps, it turns out that the planned and real release dates are about a year apart, at the very minimum. Are there other reasons why PoS is so hard to implement safely? Bram Cohen's talk at Stanford included a critique of PoS: Its threat model is weaker than Proof of Work. Just as Proof of Work is in practice centralized around large mining pools, Proof of Stake is centralized around large currency holdings (which were probably acquired much more cheaply than large mining installations). The choice of a quorum size is problematic. "Too small and it's attackable. Too large and nothing happens." And "Unfortunately, those values are likely to be on the wrong side of each other in practice." Incentivizing peers to put their holdings at stake creates a class of attacks in which peers "exaggerate one's own bonding and blocking it from others." Slashing introduces a class of attacks in which peers cause others to be fraudulently slashed. The incentives need to be strong enough to overcome the risks of slashing, and of keeping their signing keys accessible and thus at risk of compromise. "Defending against those attacks can lead to situations where the system gets wedged because a split happened and nobody wants to take one for the team" Cohen seriously under-played PoS's centralization problem. It isn't just that the Gini coefficients of cryptocurrencies are extremely high, but that this is a self-reinforcing problem. Because the rewards for mining new blocks, and the fees for including transactions in blocks, flow to the HODL-ers in proportion to their HODL-ings, whatever Gini coefficient the systems starts out with will always increase. As I wrote, cryptocurrencies are: a mechanism for transferring wealth from later adopters, called suckers, to early adopters, called geniuses. PoS makes this "ratchet" mechanism much stronger than PoW, and thus renders them much more vulnerable to insider 51% attacks. I discussed one such high-profile attack by Justin Sun of Tron on the Steemit blockchain in Proof-of-Stake In Practice : One week later, on March 2nd, Tron arranged for exchanges, including Huobi, Binance and Poloniex, to stake tokens they held on behalf of their customers in a 51% attack: According to the list of accounts powered up on March. 2, the three exchanges collectively put in over 42 million STEEM Power (SP). 
With an overwhelming amount of stake, the Steemit team was then able to unilaterally implement hard fork 22.5 to regain their stake and vote out all top 20 community witnesses – server operators responsible for block production – using account @dev365 as a proxy. In the current list of Steem witnesses, Steemit and TRON’s own witnesses took up the first 20 slots.

Although this attack didn't provide Tron with an immediate monetary reward, the long term value of retaining effective control of the blockchain was vastly greater than the cost of staking the tokens. I've been pointing out that the high Gini coefficients of cryptocurrencies mean Proof-of-Stake centralizes control of the blockchain in the hands of the whales since 2017's Why Decentralize?, which quoted Vitalik Buterin pointing out that a realistic scenario was:

In a proof of stake blockchain, 70% of the coins at stake are held at one exchange.

Or in this case three exchanges cooperating. Note that economic analyses of PoS, such as More (or less) economic limits of the blockchain by Joshua Gans and Neil Gandal, assume economically rational actors care about the illiquidity of staked coins and the foregone interest. But true believers in "number go up" have a long-term perspective similar to Sun's. The eventual progress of their coin "to the moon!" means that temporary, short-term costs are irrelevant to long-term HODL-ers.

Jude C. Nelson amplifies the centralization point:

PoW is open-membership, because the means of coin production are not tied to owning coins already. All you need to contribute is computing power, and you can start earning coins at a profit. PoS is closed-membership with a veneer of open-membership, because the means of coin production are tied to owning a coin already. What this means in practice is that no rational coin-owner is going to sell you coins at a fast enough rate that you'll be able to increase your means of coin production. Put another way, the price you'd pay for the increased means of coin production will meet or exceed the total expected revenue created by staking those coins over their lifetime. So unless you know something the seller doesn't, you won't be able to profit by buying your way into staking. Overall, this makes PoS less resilient and less egalitarian than PoW. While both require an up-front capital expenditure, the expenditure for PoS coin-production will meet or exceed the total expected revenue of those coins at the point of sale. So, the system is only as resilient as the nodes run by the people who bought in initially, and the only way to join later is to buy coins from people who want to exit (which would only be viable if these folks believed the coins are worth less than what you're buying them for, which doesn't bode well for you as the buyer).

Nelson continues:

PoW requires less proactive trust and coordination between community members than PoS -- and thus is better able to recover from both liveness and safety failures -- precisely because it both (1) provides a computational method for ranking fork quality, and (2) allows anyone to participate in producing a fork at any time. If the canonical chain is 51%-attacked, and the attack eventually subsides, then the canonical chain can eventually be re-established in-band by honest miners simply continuing to work on the non-attacker chain. In PoS, block-producers have no such protocol -- such a protocol cannot exist because to the rest of the network, it looks like the honest nodes have been slashed for being dishonest.
Any recovery procedure necessarily includes block-producers having to go around and convince people out-of-band that they were totally not dishonest, and were slashed due to a "hack" (and, since there's lots of money on the line, who knows if they're being honest about this?). PoS conforms to Mark 4:25: For he that hath, to him shall be given: and he that hath not, from him shall be taken even that which he hath. In Section VI(E) Yang Xiao et al identify the following types of vulnerability in PoS systems: Costless simulation: literally means any player can simulate any segment of blockchain history at the cost of no real work but speculation, as PoS does not incur intensive computation while the blockchain records all staking history. This may give attackers shortcuts to fabricate an alternative blockchain. It is the basis for attacks 2 through 5. Nothing at stake Unlike a PoW miner, a PoS minter needs little extra effort to validate transactions and generate blocks on multiple competing chains simultaneously. This “multi-bet” strategy makes economical sense to PoS nodes because by doing so they can avoid the opportunity cost of sticking to any single chain. Consequently if a significantly fraction of nodes perform the “multi-bet” strategy, an attacker holding far less than 50% of tokens can mount a successful double spending attack. The defense against this attack is usually "slashing", forfeiting the stake of miners detected on multiple competing chains. But slashing, as Cohen and Nelson point out, is in itself a consensus problem. Posterior corruption The key enabler of posterior corruption is the public availability of staking history on the blockchain, which includes stakeholder addresses and staking amounts. An attacker can attempt to corrupt the stakeholders who once possessed substantial stakes but little at present by promising them rewards after growing an alternative chain with altered transaction history (we call it a “malicious chain”). When there are enough stakeholders corrupted, the colluding group (attacker and corrupted once-rich stakeholders) could own a significant portion of tokens (possibly more than 50%) at some point in history, from which they are able to grow an malicious chain that will eventually surpass the current main chain. The defense is key-evolving cryptography, which ensures that the past signatures cannot be forged by the future private keys. Long-range attack as introduced by Buterin: foresees that a small group of colluding attackers can regrow a longer valid chain that starts not long after the genesis block. Because there were likely only a few stakeholders and a lack of competition at the nascent stage of the blockchain, the attackers can grow the malicious chain very fast and redo all the PoS blocks (i.e. by costless simulation) while claiming all the historical block rewards. Evangelos Deirmentzoglou et al's A Survey on Long-Range Attacks for Proof of Stake Protocols provides a useful review of these attacks. Even if there are no block rewards, only fees, a variant long-range attack is possible as described in Stake-Bleeding Attacks on Proof-of-Stake Blockchains by Peter Gazi et al, and by Shijie Zhang and Jong-Hyouk Lee in Eclipse-based Stake-Bleeding Attacks in PoS Blockchain Systems. Stake-grinding attack unlike PoW in which pseudo-randomness is guaranteed by the brute-force use of a cryptographic hash function, PoS’s pseudo-randomness is influenced by extra blockchain information—the staking history. 
Malicious PoS minters may take advantage of costless simulation and other staking-related mechanisms to bias the randomness of PoS in their own favor, thus achieving higher winning probabilities compared to their stake amounts Centralization risk as discussed above: In PoS the minters can lawfully reinvest their profits into staking perpetually, which allows the one with a large sum of unused tokens become wealthier and eventually reach a monopoly status. When a player owns more than 50% of tokens in circulation, the consensus process will be dominated by this player and the system integrity will not be guaranteed. There are a number of papers on this problem, including Staking Pool Centralization in Proof-of-Stake Blockchain Network by Ping He et al, Compounding of wealth in proof-of-stake cryptocurrencies by Giulia Fanti et al, and Stake shift in major cryptocurrencies: An empirical study by Rainer Stütz et al. But to my mind none of them suggest a realistic mitigation. These are not the only problems from which PoS suffers. Two more are: Checkpointing. Long-range and related attacks are capable of rewriting almost the entire chain. To mitigate this, PoS systems can arrange for consensus on checkpoints, blocks which are subsequently regarded as canonical forcing any rewriting to start no earlier than the following block. Winkle – Decentralised Checkpointing for Proof-of-Stake is: a decentralised checkpointing mechanism operated by coin holders, whose keys are harder to compromise than validators’ as they are more numerous. By analogy, in Bitcoin, taking control of one-third of the total supply of money would require at least 889 keys, whereas only 4 mining pools control more than half of the hash power It is important that consensus on checkpoints is achieved through a different mechanism than consensus on blocks. To over-simplify, Winkle piggy-backs votes for checkpoints on transactions; a transaction votes for a block with the number of coins remaining in the sending account, and with the number sent to the receiving account. A checkpoint is final once a set proportion of the coins have voted for it. For the details, see Winkle: Foiling Long-Range Attacks in Proof-of-Stake Systems by Sarah Azouvi et al. Lending. In Competitive equilibria between staking and on-chain lending, Tarun Chitra demonstrates that it is: possible for on-chain lending smart contracts to cannibalize network security in PoS systems. When the yield provided by these contracts is more attractive than the inflation rate provided from staking, stakers will tend to remove their staked tokens and lend them out, thus reducing network security. ... Our results illustrate that rational, non-adversarial actors can dramatically reduce PoS network security if block rewards are not calibrated appropriately above the expected yields of on-chain lending. I believe this is part of a fundamental problem for PoS. The token used to prevent a single attacker appearing as a multitude of independent peers can be lent, and thus the attacker can borrow a temporary majority of the stake cheaply, for only a short-term interest payment. Preventing this increases implementation complexity significantly. In summary, despite PoS' potential for greatly reducing PoW's environmental impact and cost of defending against Sybil attacks, it has a major disadvantage. It is significantly more complex and thus its attack surface is much larger, especially when combined with a Turing-complete execution environment such as Ethereum's. 
It therefore needs more defense mechanisms, which increase complexity further. Buterin and the Ethereum developers realize the complexity of the implementation task they face, which is why their responsible approach is taking so long. Currently Ethereum is the only realistic candidate to displace Bitcoin, and thus reduce cryptocurrencies' carbon footprint, so the difficulty of an industrial-strength implementation of PoS for Ethereum 2.0 is a major problem.

Proof-of-Space-and-Time

Back in 2018 I wrote about Bram Cohen's PoTaS system, Chia, in Proofs of Space and Chia Network. Instead of wasting computation to prevent Sybil attacks, Chia wastes storage. Chia's "space farmers" create and store "plots" consisting of large amounts of otherwise useless data. The technical details are described in Chia Consensus. They are comprehensive and impressively well thought out. Because, like Bitcoin, Chia is wasting a real resource to defend against Sybil attacks it lacks many of PoS' vulnerabilities. Nevertheless, the Chia protocol is significantly more complex than Bitcoin and thus likely to possess additional vulnerabilities. For example, whereas in Bitcoin there is only one role for participants, mining, the Chia protocol involves three roles:

- Farmer, "Farmers are nodes which participate in the consensus algorithm by storing plots and checking them for proofs of space."
- Timelord, "Timelords are nodes which participate in the consensus algorithm by creating proofs of time".
- Full node, which involves "broadcasting proofs of space and time, creating blocks, maintaining a mempool of pending transactions, storing the historical blockchain, and uploading blocks to other full nodes as well as wallets (light clients)."

Another added complexity is that the Chia protocol maintains three chains (Challenge, Reward and Foliage), plus an evanescent chain during each "slot" (think Bitcoin's block time), as shown in the document's Figure 11. The document therefore includes a range of attacks and their mitigations which are of considerable technical interest.

Cohen's praiseworthy objective for Chia was to avoid the massive power waste of PoW because: "You have this thing where mass storage medium you can set a bit and leave it there until the end of time and its not costing you any more power. DRAM is costing you power when its just sitting there doing nothing". Alas, Cohen was exaggerating: A state-of-the-art disk drive, such as Seagate's 12TB BarraCuda Pro, consumes about 1W spun-down in standby mode, about 5W spun-up idle and about 9W doing random 4K reads. Which is what it would be doing much of the time while "space farming". Clearly, PoTaS uses energy, just much less than PoW.

Reporting on Cohen's 2018 talk at Stanford I summarized:

Cohen's vision is of a PoSp/VDF network comprising large numbers of desktop PCs, continuously connected and powered up, each with one, or at most a few, half-empty hard drives. The drives would have been purchased at retail a few years ago.

My main criticism in those posts was Cohen's naiveté about storage technology, the storage market and economies of scale:

There would appear to be three possible kinds of participants in a pool: Individuals using the spare space in their desktop PC's disk. The storage for the Proof of Space is effectively "free", but unless these miners joined pools, they would be unlikely to get a reward in the life of the disk. Individuals buying systems with CPU, RAM and disk solely for mining.
The disruption to the user's experience is gone, but now the whole cost of mining has to be covered by the rewards. To smooth out their income, these miners would join pools. Investors in data-center scale mining pools. Economies of scale would mean that these participants would see better profits for less hassle than the individuals buying systems, so these investor pools would come to dominate the network, replicating the Bitcoin pool centralization.

Thus if Chia's network were to become successful, mining would be dominated by a few large pools. Each pool would run a VDF server to which the pool's participants would submit their Proofs of Space, so that the pool manager could verify their contribution to the pool. The emergence of pools, and dominance of a small number of pools, has nothing to do with the particular consensus mechanism in use. Thus I am skeptical that alternatives to Proof of Work will significantly reduce centralization of mining in blockchains generally, and in Chia Network's blockchain specifically.

As I was writing the first of these posts, TechCrunch reported:

Chia has just raised a $3.395 million seed round led by AngelList’s Naval Ravikant and joined by Andreessen Horowitz, Greylock and more. The money will help the startup build out its Chia coin and blockchain powered by proofs of space and time instead of Bitcoin’s energy-sucking proofs of work, which it plans to launch in Q1 2019.

Even in 2020 the naiveté persisted, as Chia pitched the idea that space farming on a Raspberry Pi was a way to make money. It still persists, as Chia's president reportedly claims that "recyclable hard drives are entering the marketplace". But when Chia Coin actually started trading in early May 2021 the reality was nothing like Cohen's 2018 vision: As everyone predicted, the immediate effect was to create a massive shortage of the SSDs needed to create plots, and the hard drives needed to store them. Even Gene Hoffman, Chia's CEO, admitted that Bitcoin rival Chia 'destroyed' hard disc supply chains, says its boss:

Chia, a cryptocurrency intended to be a “green” alternative to bitcoin has instead caused a global shortage of hard discs. Gene Hoffman, the president of Chia Network, the company behind the currency, admits that “we’ve kind of destroyed the short-term supply chain”, but he denies it will become an environmental drain.

The result of the spike in storage prices was a rise in the vendors' stock:

The share price of hard disc maker Western Digital has increased from $52 at the start of the year to $73, while competitor Seagate is up from $60 to $94 over the same period.

To give you some idea of how rapidly Chia has consumed storage in the two months since launch, it is around 20% of the rate at which the entire industry produced hard disks in 2018.

Chia Pools

Mining pools arose. As I write the network is storing 30.06EB of otherwise useless data, of which one pool, ihpool.com is managing 10.78EB, or 39.3%. Unlike Bitcoin, the next two pools are much smaller, but large enough so that the top four pools have 42% of the space. The network is slightly more decentralized than Bitcoin has been since 2014, and for reasons discussed below is less vulnerable to an insider 51% attack.

Chia "price"

The "price" of Chia Coin collapsed, from $1934.51 at the start of trading to $165.41 Sunday before soaring to $185.78 as I write. Each circulating XCH corresponds to about 30TB.
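A back-of-the-envelope sketch of what those figures imply, using the numbers just quoted plus an assumed retail drive price of roughly $30/TB (the approximately $300-per-10TB figure cited below); the exact multiple depends on which spot price you take and on what you count as farming hardware:

    # Rough comparison of Chia farming hardware cost vs. market cap (illustrative).
    NET_SPACE_EB = 30.06           # network space quoted above
    TB_PER_XCH = 30.0              # "each circulating XCH corresponds to about 30TB"
    PRICE_XCH = 165.41             # USD; the post quotes $165.41 to $185.78
    DRIVE_USD_PER_TB = 30.0        # assumed: roughly $300 per 10TB retail drive

    net_space_tb = NET_SPACE_EB * 1e6             # 1 EB = 1,000,000 TB
    circulating_xch = net_space_tb / TB_PER_XCH   # roughly one million XCH
    market_cap = circulating_xch * PRICE_XCH      # roughly $166M
    drive_cost = net_space_tb * DRIVE_USD_PER_TB  # roughly $0.9B, drives only

    print(f"market cap ~${market_cap / 1e6:.0f}M, raw drive cost ~${drive_cost / 1e9:.2f}B")
    print(f"hardware / market cap ~{drive_cost / market_cap:.1f}x")
    # market cap ~$166M, raw drive cost ~$0.90B
    # hardware / market cap ~5.4x

Counting the plotting SSDs and servers as well as the raw drives pushes the hardware total towards a billion dollars, which is where the "nearly six times" figure that follows comes from.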
The investment in "space farming" hardware vastly outweighs, by nearly six times, the market cap of the cryptocurrency it is supporting. The "space farmers" are earning $1.69M/day, or about $20/TB/year. A 10TB internal drive is currently about $300 on Amazon, so it will be about 18 months before it earns a profit. The drive is only warranted for 3 years. But note that the warranty is limited:

Supports up to 180 TB/yr workload rate Workload Rate is defined as the amount of user data transferred to or from the hard drive.

Using the drive for "space farming" would likely void the warranty and, just as PoW does to GPUs, burn out the drive long before its warranted life. If you have two years, the $300 investment theoretically earns a 25% return before power and other costs. But the hard drive isn't the only cost of space farming.

In order to become a "space farmer" in the first place you need to create plots containing many gigabytes of otherwise useless cryptographically-generated data. You need lots of them; the probability of winning your share of the $2.74M/day is proportional to how big a fraction of the nearly 30EB you can generate and store. The 30EB is growing rapidly, so the quicker you can generate the plots, the better your chance in the near term. To do so in finite time you need in addition to the hard drive a large SSD at extra cost. Using it for plotting will void its warranty and burn it out in as little as six weeks. And you need a powerful server running flat-out to do the cryptography, which both rather casts doubt on how much less power than PoW Chia really uses, and increases the payback time significantly.

In my first Chia post I predicted that "space farming" would be dominated by huge data centers such as Amazon's. Sure enough, Wolfie Zhao reported on May 7th that:

Technology giant Amazon has rolled out a solution dedicated to Chia crypto mining on its AWS cloud computing platform. According to a campaign page on the Amazon AWS Chinese site, the platform touts that users can deploy a cloud-based storage system in as quickly as five minutes in order to mine XCH, the native cryptocurrency on the Chia network.

Two weeks later David Gerard reported that:

The page disappeared in short order — but an archive exists.

Because Chia mining trashes the drives, something else I pointed out in my first Chia post, storage services are banning users who think that renting something is a license to destroy it. In any case, 10TB of Amazon's S3 Reduced Redundancy Storage costs $0.788/day, so it would be hard to make ends meet. Cheaper storage services, such as Wasabi at $0.20/day are at considerable risk from Chia. Although this isn't an immediate effect, as David Gerard writes, because creating Chia plots wears out SSDs, and Chia farming wears out hard disks:

Chia produces vast quantities of e-waste—rare metals, assembled into expensive computing components, turned into toxic near-unrecyclable landfill within weeks.

Miners are incentivized to join pools because they prefer a relatively predictable, frequent flow of small rewards to very infrequent large rewards. The way pools work in Bitcoin and related protocols is that the pool decides what transactions are in the block it hopes to mine, and gets all the pool participants to work on that block. Thus a pool, or a conspiracy among pools, that had 51% of the mining power would have effective control over the transactions that were finalized.
Because they make the decision as to which transactions happen, Nicholas Weaver argues that mining pools are money transmitters and thus subject to the AML/KYC rules. But in Chia pools work differently:

First and foremost, even when a winning farmer is using a pool, they themselves are the ones who make the transaction block - not the pool.

The decentralization benefits of this policy are obvious. The potential future downside is that while Bitcoin miners in a pool can argue that AML/KYC is the responsibility of the pool, Chia farmers would be responsible for enforcing the AML/KYC rules and subject to bank-sized penalties for failing to do so.

In Bitcoin the winning pool receives and distributes both the block reward and the (currently much smaller) transaction fees. Over time the Bitcoin block reward is due to go to zero and the system is intended to survive on fees alone. Alas, research has shown that a fee-only Bitcoin system is insecure. Chia does things differently in two ways. First:

all the transaction fees generated by a block go to the farmer who found it and not to the pool. Trying to split the transaction fees with the pool could result in transaction fees being paid ‘under the table’ either by making them go directly to the farmer or making an anyone can spend output which the farmer would then pay to themselves. Circumventing the pool would take up space on the blockchain. It could also encourage the emergence of alternative pooling protocols where the pool makes the transaction block which is a form of centralization we wish to avoid.

The basic argument is that in Bitcoin the 51% conspiracy is N pools where in Chia it is M farmers (M ≫ N). Chia are confident that this is safe:

This ensures that even if a pool has 51% netspace, they would also need to control ALL of the farmer nodes (with the 51% netspace) to do any malicious activity. This will be very difficult unless ALL the farmers (with the 51% netspace) downloaded the same malicious Chia client programmed by a Bram like level genius.

I'm a bit less confident because, like Ethereum, Chia has a Turing-complete programming environment. In On-Chain Vote Buying and the Rise of Dark DAOs Philip Daian and co-authors showed that "smart contracts" provide for untraceable on-chain collusion in which the parties are mutually pseudonymous. Although their conspiracies were much smaller, similar techniques might be the basis for larger attacks on blockchains with "smart contracts". Second:

This method has the downside of reducing the smoothing benefits of pools if transaction fees come to dominate fixed block rewards. That’s never been a major issue in Bitcoin and our block reward schedule is set to only halve three times and continue at a fixed amount forever after. There will alway be block rewards to pay to the pool while transaction fees go to the individual farmers.

So unlike the Austrian economics of Bitcoin, Chia plans to reward farming by inflating the currency indefinitely, never depending wholly on fees. In Bitcoin the pool takes the whole block reward, but the way block rewards work is different too:

fixed block rewards are set to go 7/8 to the pool and 1/8 to the farmer. This seems to be a sweet spot where it doesn’t reduce smoothing all that much but also wipes out potential selfish mining attacks where someone joins a competing pool and takes their partials but doesn’t upload actual blocks when they find them.
Those sorts of attacks can become profitable when the fraction of the split is smaller than the size of the pool relative to the whole system. Last I checked ihpool.com had almost 40% of the total system.

Rational economics are not in play here. "Space farming" makes sense only at scale or for the most dedicated believers in "number go up". Others are less than happy:

So I tested this Chia thing overnight. Gave it 200GB plot and two CPU threads. After 10 hours it consumed 400GB temp space, didn’t sync yet, CPU usage is always 80%+. Estimated reward time is 5 months. This isn’t green, already being centralised on large waste producing servers.

The problem for the "number go up" believers is that the "size go up" too, by about half-an-exabyte a day. As the network grows, the chance that your investment in hardware will earn a reward goes down because it represents a smaller proportion of the total. Unless "number go up" much faster than "size go up", your investment is depreciating rapidly not just because you are burning it out but because its cost-effectiveness is decaying. And as we see, "size go up" rapidly but "number go down" rapidly. And economies of scale mean that return on investment in hardware will go up significantly with the proportion of the total the farmer has. So the little guy gets the short end of the stick even if they are in a pool.

Chia's technology is extremely clever, but the economics of the system that results in the real world don't pass the laugh test. Chia is using nearly a billion dollars of equipment being paid for by inflating the currency at a rate of currently 2/3 billion dollars a year to process transactions at a rate around five billion dollars a year, a task that could probably be done using a conventional database and a Raspberry Pi. The only reason for this profligacy is to be able to claim that it is "decentralized". It is more decentralized than PoW or PoS systems, but over time economies of scale and free entry will drive the reward for farming in fiat terms down and mean that small-scale farmers will be squeezed out.

The Chia "price" chart suggests that it might have been a "list-and-dump" scheme, in which A16Z and the other VCs incentivized the miners to mine and the exchanges to list the new cryptocurrency so that the VCs could dump their HODL-ings on the muppets seduced by the hype and escape with a profit. Note that A16Z just raised a $2.2B fund dedicated to pouring money into similar schemes. This is enough to fund 650 Chia-sized ventures! (David Gerard aptly calls Andreessen Horowitz "the SoftBank of crypto") They wouldn't do that unless they were making big bucks from at least some of the ones they funded earlier. Chia's sensitivity about their PR led them to hurl bogus legal threats at the leading Chia community blog. Neither is a good look.

Subspace

As we see, the Chia network has one huge pool and a number of relatively miniscule pools. In Subspace: A Solution to the Farmer's Dilemma, Wagstaff describes the "farmer's dilemma" thus:

Observe that in any PoC blockchain a farmer is, by-definition, incentivized to allocate as much of its scarce storage resources as possible towards consensus. Contrast this with the desire for all full nodes to reserve storage for maintaining both the current state and history of the blockchain.
These competing requirements pose a challenge to farmers: do they adhere to the desired behavior, retaining the state and history, or do they seek to maximize their own rewards, instead dedicating all available space towards consensus? When faced with this farmer’s dilemma rational farmers will always choose the latter, effectively becoming light clients, while degrading both the security and decentralization of the network. This implies that any PoC blockchain would eventually consolidate into a single large farming pool, with even greater speed than has been previously observed with PoW and PoS chains.

Subspace proposes to resolve this using a hybrid of PoS and PoTaS:

We instead clearly distinguish between a permissionless farming mechanism for block production and permissioned staking mechanism for block finalization.

Wagstaff describes it thus:

To prevent farmers from discarding the history, we construct a novel PoC consensus protocol based on proofs-of-storage of the history of the blockchain itself, in which each farmer stores as many provably-unique replicas of the chain history as their disk space allows. To ensure the history remains available, farmers form a decentralized storage network, which allows the history to remain fully-recoverable, load-balanced, and efficiently-retrievable. To relieve farmers of the burden of maintaining the state and preforming [sic] redundant computation, we apply the classic technique in distributed systems of decoupling consensus and computation. Farmers are then solely responsible for the ordering of transactions, while a separate class of executor nodes maintain the state and compute the transitions for each new block. To ensure executors remain accountable for their actions, we employ a system of staked deposits, verifiable computation, and non-interactive fraud proofs.

Separating consensus (PoTaS) and computation (PoS) has interesting effects:

- Like Chia, the only function of pools is to smooth out farmer's rewards. They do not compose the blocks.
- Pools will compete on their fees. Economies of scale mean that the larger the pool, the lower the fees it can charge. So, just like Chia, Subspace will end up with one, or only a few, large pools.
- Like Chia, if they can find a proof, farmers assemble transactions into a block which they can submit to executors for finalization.
- Subspace shares with Chia the property that a 51% attack requires M farmers not N pools (M ≫ N), assuming of course no supply chain attack or abuse of "smart contracts".
- Subspace uses a LOCKSS-like technique of electing a random subset of executors for each finalization. Because any participant can unambiguously detect fraudulent execution, and thus that the finalization of a block is fraudulent, the opportunity for bad behavior by executors is highly constrained. A conspiracy of executors has to hope that no honest executor is elected.

Like Chia, the technology is extremely clever but there are interesting economic aspects. As regards farmers, Wagstaff writes:

To ensure the history does not grow beyond total network storage capacity, we modify the transaction fee mechanism such that it dynamically adjusts in response to the replication factor. Recall that in Bitcoin, the base fee rate is a function of the size of the transaction in bytes, not the amount of BTC being transferred. We extend this equation by including a multiplier, derived from the replication factor. This establishes a mandatory minimum fee for each transaction, which reflects its perpetual storage cost. The multiplier is recalculated each epoch, from the estimated network storage and the current size of the history. The higher the replication factor, the cheaper the cost of storage per byte. As the replication factor approaches one, the cost of storage asymptotically approaches infinity. As the replication factor decreases, transaction fees will rise, making farming more profitable, and in-turn attracting more capacity to the network. This allows the cost of storage to reach an equilibrium price as a function of the supply of, and demand for, space.
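The excerpt above doesn't give the actual formula, so the following is a minimal sketch of one functional form with the asymptotics the quote describes: the per-byte cost falls as the replication factor rises, and diverges as the replication factor approaches one. Both the form and the constant are my assumptions for illustration, not the white paper's equation.

    # One possible fee multiplier with the asymptotics described above.
    # Assumed form, not the formula from the Subspace white paper.
    def fee_multiplier(replication_factor: float, base: float = 1.0) -> float:
        """Per-byte storage fee multiplier.

        -> infinity as replication_factor -> 1 (history barely fits on the network)
        -> base     as replication_factor grows (storage is plentiful)
        """
        if replication_factor <= 1.0:
            raise ValueError("history no longer fits: replication factor must exceed 1")
        return base * replication_factor / (replication_factor - 1.0)

    for r in (1.1, 2, 10, 100):
        print(f"replication {r:5}: multiplier {fee_multiplier(r):.2f}")
    # replication   1.1: multiplier 11.00
    # replication     2: multiplier 2.00
    # replication    10: multiplier 1.11
    # replication   100: multiplier 1.01

Any function with those limits would fit the description; the quote only pins down the qualitative behaviour.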
There are some issues here:

- The assumption that the market for fees can determine the "perpetual storage cost" is problematic. As I first showed back in 2011, the endowment needed for "perpetual storage" depends very strongly on two factors that are inherently unpredictable, the future rate of decrease of media cost in $/byte (Kryder rate), and the future interest rate. The invisible hand of the market for transaction fees cannot know these, it only knows the current cost of storage. Nor can Subspace management know them, to set the "mandatory minimum fee". Thus it is likely that fees will significantly under-estimate the "perpetual storage cost", leading to problems down the road.
- The assumption that those wishing to transact will be prepared to pay at least the "mandatory minimum fee" is suspect. Cryptocurrency fees are notoriously volatile because they are based on a blind auction; when no-one wants to transact a "mandatory minimum fee" would be a deterrent, when everyone wants to, fees are unaffordable. Research has shown that if fees dominate block rewards systems become unstable.
- Wagstaff's paper doesn't seem to describe how block rewards work; I assume that they go to the individual farmer or are shared via a pool for smoother cash flow. I couldn't see from the paper whether, like Chia, Subspace intends to avoid depending upon fees.

As regards executors:

For each new block, a small constant number of executors are chosen through a stake-weighted election.

Anyone may participate in execution by syncing the state and placing a small deposit. But the chance that they will be elected and gain the reward for finalizing a block and generating an Execution Receipt (ER) depends upon how much they stake. The mechanism for rewarding executors is:

Farmers split transaction fee rewards evenly with all executors, based on the expected number of ERs for each block.7 For example, if 32 executors are elected, the farmer will take half of all the transaction fees, while each executor will take 1/64. A farmer is incentivized to include all ERs which finalize execution for its parent block because doing so will allow it to claim more of its share of the rewards for its own block. For example, if the farmer only includes 16 out of 32 expected ERs, it will instead receive 1/4 (not 1/2) of total rewards, while each of the 16 executors will still receive 1/64. Any remaining shares will then be escrowed within a treasury account under the control of the community of token holders, with the aim of incentivizing continued protocol development.

Although the role of executor demands significant resources, both in hardware and in staked coins, these rewards seem inadequate. Every executor has to execute the state transitions in every block. But for each block only a small fraction of the executors receive anything, and each of those receives only, in the example above, 1/64 of the fees.
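A minimal sketch of the split in the quoted example. The generalisation, with the farmer's half pro-rated by the fraction of expected ERs it includes, each included executor getting a fixed 1/(2 × expected) share, and the remainder escrowed to the treasury, is my reading of the two quoted cases, not a formula from the paper:

    # Fee split between farmer, executors and treasury, per the quoted Subspace example.
    # The generalisation from the two quoted cases is my own reading, not the paper's.
    def split_fees(expected_ers: int, included_ers: int):
        executor_share = included_ers * (1 / (2 * expected_ers))  # each ER pays 1/(2*expected)
        farmer_share = 0.5 * included_ers / expected_ers          # farmer's half, pro-rated
        treasury = 1.0 - executor_share - farmer_share            # remainder is escrowed
        return farmer_share, executor_share, treasury

    print(split_fees(32, 32))  # (0.5, 0.5, 0.0)   farmer 1/2, 32 executors at 1/64 each
    print(split_fees(32, 16))  # (0.25, 0.25, 0.5) farmer 1/4, 16 executors still 1/64 each

Both outputs match the cases quoted above: include all 32 ERs and the farmer keeps half; include only 16 and the farmer drops to a quarter while the included executors are unaffected and the rest goes to the treasury.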
Note also footnote 7: 7 We use this rate for explanatory purposes, while noting that in order to minimize the plutocratic nature of PoS, executor shares should be smaller in practice. So Wagstaff expects that an executor will receive only a small fraction of a small fraction of 1/64 of the transaction fees. Even supposing the stake distribution among executors was small and even, unlikely in practice, for the random election mechanism to be effective there need to be many times 32 executors. For example, if there are 256 executors, and executors share 1/8 of the fees, each can expect around 0.005% of the fees. Bitcoin currently runs with fees less than 10% of the block rewards. If Subspace had the same split in my example executors as a class would expect around 1.2% of the block rewards, with farmers as a class receiving 100% of the block rewards plus 87.5% of the fees. There is another problem — the notorious volatility of transaction fees set against the constant cost of running an executor. Much of the time there would be relatively low demand for transactions, so a block would contain relatively few transactions that each offered the mandatory minimum fee. Unless the fees, and especially the mandatory minimum fee, are large relative to the block reward it isn't clear why executors would participate. But fees that large would risk the instability of fee-only blockchains. There are two other roles in Subspace, verifiers and full nodes. As regards incentivizing verifiers: we rely on the fact that all executors may act as verifiers at negligible additional cost, as they are already required to maintain the valid state transitions in order to propose new ERs. If we further require them to reveal fraud in order to protect their own stake and claim their share of the rewards, in the event that they themselves are elected, then we can provide a more natural solution to the verifier’s dilemma. As regards incentivizing full nodes, Wagstaff isn't clear. In addition to executors, any full node may also monitor the network and generate fraud proofs, by virtue of the fact that no deposit is required to act as verifier. As I read the paper, full nodes have similar hardware requirements as executors but no income stream to support them unless they are executors too. Overall, Subspace is interesting. But the advantage from a farmer's point of view of Subspace over Chia is that their whole storage resource is devoted to farming. Everything else is not really significant, and all this would be dominated by a fairly small difference in "price". Add to that the fact that Chia has already occupied the market niche for new PoTaS systems, and has high visibility via Bram Cohen and A16Z, and the prospects for Subspace don't look good. If Subspace succeeds, economies of scale will have two effects: Large pools will dominate small pools because they can charge smaller fees. Large farmers will dominate small farmers because their rewards are linear in the resource they commit, but their costs are sub-linear, so their profit is super-linear. This will likely result in the most profitable, hassle-free way for smaller consumers to participate being investing in a pool rather than actually farming. Conclusion The overall theme is that permissionless blockchains have to make participating in consensus expensive in some way to defend against Sybils. Thus if you are expending an expensive resource economies of scale are an unavoidable part of Sybil defense. 
If you want to be "decentralized" to avoid 51% attacks from insiders you have to have some really powerful mechanism pushing back against economies of scale. I see three possibilities, either the blockchain protocol designers: Don't understand why successful cryptocurrencies are centralized, so don't understand the need to push back on economies of scale. Do understand the need to push back on economies of scale but can't figure out how to do it. It is true that figuring this out is incredibly difficult, but their response should be to say "if the blockchain is going to end up centralized, why bother wasting resources trying to be permissionless?" not to implement something they claim is decentralized when they know it won't be. Don't care about decentralization, they just want to get rich quick, and are betting it will centralize around them. In most cases, my money is on #3. At least both Chia and Subspace have made efforts to defuse the worst aspects of centralization. Posted by David. at 8:00 AM Labels: bitcoin, P2P 5 comments: David. said... Chris Dupres, who I should have acknowledged provided me with a key hint about Chia, posted Very detailed post on post-Proof of Work cryptocurrencies, including Chia linking here. Thanks, Chris! July 20, 2021 at 12:20 PM David. said... Brett Scott's I, Token is an excellent if lengthy explanation of why Bitcoin and other currencies lack "moneyness". July 23, 2021 at 2:18 PM Blissex2 said... «Being swamped by a multitude of Sybil peers [...] A rational majority of peers from conspiring to obtain inappropriate benefits. [...] A rational minority of peers from conspiring to obtain inappropriate benefits.» This analysis ids very interesting and seems well done to me, but it is orthogonal to what seems to be the economic appeal of BitCoin, which is not that it is fair, decentralized, trustworthy, but the perception that: * It is pseudonymous and does not have "know my customer" rules. * It is worldwide so it is not subject to capital export/import restrictions. * There is a limited number of BitCoins, so it is "guaranteed" that as demand grows the worth of each BitCoin is going to grow a lot. BitCoin in other words is perceived as a limited-edition, pseudonymous collectible; it is considered as the convenient alternative to a pouch of diamonds for "informal" payments. Considering that international "informal" payments often involve a 10-20% "cleanup" fee, users of BitCoin for "informal" payments don't worry too much about issues like decentralization or perfect fairness and perfect trustworthiness, as long as the BitCoin operators steal less than 10-20% it is just a cost of doing business. July 26, 2021 at 5:48 AM David. said... This post wasn't about Bitcoin, or the reasons people use it. It was about the design of consensus protocols for permissionless peer-to-peer systems. And, to your point, the use of BTC for "payments" is negligible compared to its use for speculation. The use-case for BTC HODL-ers is "to the moon", and the use-case for BTC traders is its volatility and ability to be pumped-and-dumped. July 27, 2021 at 10:19 AM Blissex2 said... «This post wasn't about Bitcoin, or the reasons people use it. It was about the design of consensus protocols for permissionless peer-to-peer systems.» Indeed, as I wrote your post is “orthogonal to what seems to be the economic appeal of BitCoin”. 
But I think that they are relevant: the question is whether “consensus protocols for permissionless peer-to-peer systems” matter if the economic incentives are not aligned with them. Your post also mentioned repeatedly “the multiple failures of Satoshi Nakamoto's goals for Bitcoin”, which goals were economic, and accordingly you also mention as relevant “The economic forces” for those consensus protocols. I am sorry that I was not clear as to why I was writing about economic forces orthogonally to consensus protocol, given that your post also seemed to consider them relevant. «the use of BTC for "payments" is negligible compared to its use for speculation. The use-case for BTC HODL-ers is "to the moon", and the use-case for BTC traders is its volatility and ability to be pumped-and-dumped» Indeed currently, but again you referred several times to “Satoshi Nakamoto's goals for Bitcoin”, and those were about a payment system. Also the current minority of users of BitCoin and other "coins" who use it as a "don't know your customer" alternative to WesternUnion or "hawala" remittances are not irrelevant, because that use of "coins" could be long term. When doing "don't know your customer" transfers, they could be denominated in etruscan guineas or mayan jiaozis while in transit, and "coins" are just then a unit of account, and it is for that main purpose that a “consensus protocol” was designed by Satoshi Nakamoto (even if it turned out not to be that suitable). Maybe you think that these considerations are very secondary, but I wrote about them because I think in time they will matter. July 28, 2021 at 8:26 AM
blog-dshr-org-9604 ---- DSHR's Blog: Stablecoins Part 2
Tuesday, August 3, 2021
Stablecoins Part 2
I wrote Stablecoins about Tether and its "magic money pump" seven months ago. A lot has happened and a lot has been written about it since, and some of it explores aspects I didn't understand at the time, so below the fold at some length I try to catch up.
In the postscript to Stablecoins I quoted David Gerard's account of the December 16th pump that pushed BTC over $20K: We saw about 300 million Tethers being lined up on Binance and Huobi in the week previously. These were then deployed en masse. You can see the pump starting at 13:38 UTC on 16 December. BTC was $20,420.00 on Coinbase at 13:45 UTC.
Notice the very long candles, as bots set to sell at $20,000 sell directly into the pump.
In 2020 BTC had dropped from around $7.9K on March 10th to under $5K on March 11th. It spiked back up on March 18th, then gradually rose to just under $11K by October 10th. During that time Tether issuance went from $4.7B to $15.7B, an increase of over 230%, with large jumps on four occasions:
March 28-29th: $1.6B = $4.6B to $6.2B (a weekend)
May 12-13th: $2.4B = $6.4B to $8.8B
July 20th-21st: $0.8B = $9.2B to $10B (a weekend)
August 19-20th: $3.4B = $10B to $13.4B (a weekend)
Then both BTC and USDT really took off, with BTC peaking April 13th at $64.9K, and USDT issuing more than $30B. BTC then started falling. Tether continued to issue USDT, peaking 55 days later on May 30th after nearly another $16B at $61.8B. Issuance slowed dramatically, peaking 19 days later on June 18th at $62.7B when BTC had dropped to $35.8K, 55% of the peak. Since then USDT has faced gradual redemptions; it is now down to $61.8B. What on earth is going on? How could USDT go from around $6B to around $60B in just over a year?
Tether
In Crypto and the infinite ladder: what if Tether is fake?, the first of a two-part series, Fais Kahn asks the same question: Tether (USDT) is the most used cryptocurrency in the world, reaching volumes significantly higher than Bitcoin. Each coin is supposed to be backed by $1, making it “stable.” And yet no one knows if this is true. Even more odd: in the last year, USDT has exploded in size even faster than Bitcoin - going from $6B in market cap to over $60B in less than a year. This includes $40B of new supply - a straight line up - after the New York Attorney General accused Tether of fraud.
I and many others have considered a scenario in which the admitted fact that USDT is not backed 1-for-1 by USD causes a "run on the bank". Among the latest is Taming Wildcat Stablecoins by Gary Gorton and Jeffery Zhang. Zhang is one of the Federal Reserve's attorneys, but who is Gary Gorton? Izabella Kaminska explains: Over the course of his career, Gary Gorton has gained a reputation for being something of an experts’ expert on financial systems. Despite being an academic, this is in large part due to what might be described as his practitioner’s take on many key issues. The Yale School of Management professor is, for example, best known for a highly respected (albeit still relatively obscure) theory about the role played in bank runs by information-sensitive assets. ... the two authors make the implicit about stablecoins explicit: however you slice them, dice them or frame them in new technology, in the grand scheme of financial innovation stablecoins are actually nothing new. What they really amount to, they say, is another form of information sensitive private money, with stablecoin issuers operating more like unregulated banks.
Gorton and Zhang write: The goal of private money is to be accepted at par with no questions asked. This did not occur during the Free Banking Era in the United States—a period that most resembles the current world of stablecoins. State-chartered banks in the Free Banking Era experienced panics, and their private monies made it very hard to transact because of fluctuating prices. That system was curtailed by the National Bank Act of 1863, which created a uniform national currency backed by U.S. Treasury bonds. Subsequent legislation taxed the state-chartered banks’ paper currencies out of existence in favor of a single sovereign currency.
Unlike me, Kahn is a "brown guy in fintech", so he is better placed to come up with answers than I am. For a start, he is skeptical of the USDT "bank run" scenario: The unbacked scenario is what concerns investors. If there were a sudden drop in the market, and investors wanted to exchange their USDT for real dollars in Tether’s reserve, that could trigger a “bank run” where the value dropped significantly below one dollar, and suddenly everyone would want their money. That could trigger a full on collapse. But when that might actually happen? When Bitcoin falls in the frequent crypto bloodbaths, users actually buy Tether - fleeing to the safety of the dollar. This actually drives Tether’s price up! The only scenario that could hurt is when Bitcoin goes up, and Tether demand drops. But hold on. It’s extremely unlikely Tether is simply creating tokens out of thin air - at worst, there may be some fractional reserve (they themselves admitted at one point it was only 74% backed) that is split between USD and Bitcoin. The NY AG’s statement that Tether had “no bank anywhere in the world” strongly suggests some money being held in crypto (Tether has stated this is true, but less than 2%), and Tether’s own bank says they use Bitcoin to hold customer funds! That means in the event of a Tether drop/Bitcoin rise, they are hedged. Tether’s own Terms of Service say users may not be redeemed immediately. Forced to wait, many users would flee to Bitcoin for lack of options, driving the price up again. Kahn agrees with me that Tether may have a magic "money" pump: It’s possible Tether didn’t have the money at some point in the past. And it’s just as possible that, with the massive run in Bitcoin the last year Tether now has more than the $62B they claim! In that case Tether would seem to have constructed a perfect machine for printing money. (And America has a second central bank.) Of course, the recent massive run down in Bitcoin will have caused the "machine for printing money" to start running in reverse. Matt Levine listened to an interview with Tether's CTO Paolo Ardoino and General Counsel Stuart Hoegner, and is skeptical about Tether's backing: Tether is a stablecoin that we have talked about around here because it was sued by the New York attorney general for lying about its reserves, and because it subsequently disclosed its reserves in a format that satisfied basically no one. Tether now says that its reserves consist mostly of commercial paper, which apparently makes it one of the largest commercial paper holders in the world. There is a fun game among financial journalists and other interested observers who try to find anyone who has actually traded commercial paper with Tether, or any of its actual holdings. The game is hard! As far as I know, no one has ever won it, or even scored a point; I have never seen anyone publicly identify a security that Tether holds or a counterparty that has traded commercial paper with it. USDT reserve disclosure Levine contrasts Tether's reserve disclosure with that of another instrument that is supposed to maintain a stable value, a money market fund: Here is the website for the JPMorgan Prime Money Market Fund. If you click on the tab labeled “portfolio,” you can see what the fund owns. The first item alphabetically is $50 million face amount of asset-backed commercial paper issued by Alpine Securitization Corp. and maturing on Oct. 12. Its CUSIP — its official security identifier — is 02089XMG9. 
There are certificates of deposit at big banks, repurchase agreements, even a little bit of non-financial commercial paper. ... You can see exactly how much (both face amount and market value), and when it matures, and the CUSIP for each holding. JPMorgan is not on the bleeding edge of transparency here or anything; this is just how money market funds work. You disclose your holdings. Binance But the big picture is that USDT pumped $60B into cryptocurrencies. Where did the demand for the $60B come from? In my view, some of it comes from whales accumulating dry powder to use in pump-and-dump schemes like the one illustrated above. But Kahn has two different suggestions. First: One of the well-known uses for USDT is “shadow banking” - since real US dollars are highly regulated, opening an account with Binance and buying USDT is a straightforward way to get a dollar account. The CEO of USDC himself admits in this Coindesk article: “In particular in Asia where, you know, these are dollar-denominated markets, they have to use a shadow banking system to do it...You can’t connect a bank account in China to Binance or Huobi. So you have to do it through shadow banking and they do it through tether. And so it just represents the aggregate demand. Investors and users in Asia – it’s a huge, huge piece of it.” Source Second: Binance also hosts a massive perpetual futures market, which are “cash-settled” using USDT. This allows traders to make leveraged bets of 100x margin or more...which, in laymen’s terms, is basically a speculative casino. That market alone provides around ~$27B of daily volume, where users deposit USDT to trade on margin. As a result, Binance is by far the biggest holder of USDT, with $17B sitting in its wallet. Wikipedia describes "perpetual futures" thus: In finance, a perpetual futures contract, also known as a perpetual swap, is an agreement to non-optionally buy or sell an asset at an unspecified point in the future. Perpetual futures are cash-settled, and differ from regular futures in that they lack a pre-specified delivery date, and can thus be held indefinitely without the need to roll over contracts as they approach expiration. Payments are periodically exchanged between holders of the two sides of the contracts, long and short, with the direction and magnitude of the settlement based on the difference between the contract price and that of the underlying asset, as well as, if applicable, the difference in leverage between the two sides In Is Tether a Black Swan? Bernhard Mueller goes into more detail about Binance's market: According to Tether’s rich list, 17 billion Tron USDT are held by Binance alone. The list also shows 2.68B USDT in Huobi’s exchange wallets. That’s almost 20B USDT held by two exchanges. Considering those numbers, the value given by CryptoQuant appears understated. A more realistic estimate is that ~70% of the Tether supply (43.7B USDT) is located on centralized exchanges. Interestingly, only a small fraction of those USDT shows up in spot order books. One likely reason is that a large share is sitting on wallets to collateralize derivative positions, in particular perpetual futures. The CEX futures market is essentially a casino where traders bet on crypto prices with insane amounts of leverage. And it’s a massive market: Futures trading on Binance alone generated $60 billion in volume over the last 24 hours. 
It’s important to understand that USDT perpetual futures implementations are 100% USDT-based, including collateralization, funding and settlement. Prices are tied to crypto asset prices via clever incentives, but in reality, USDT is the only asset that ever changes hands between traders. This use-case generates significant demand for USDT. Why is this "massive perpetual futures market" so popular? Kahn provides answers: That crazed demand for margin trading is how we can explain one of the enduring mysteries of crypto - how users can get 12.5% interest on their holdings when banks offer less than 1%. Source The high interest is possible because: The massive supply of USDT, and the host of other dollar stablecoins like USDC, PAX, and DAI, creates an arbitrage opportunity. This brings in capital from outside the ecosystem seeking the “free money” making trades like this using a combination of 10x leverage and and 8.5% variance between stablecoins to generate an 89% profit in just a few seconds. If you’re only holding the bag for a minute, who cares if USDT is imaginary dollars? Rollicking good times like these attract the attention of regulators, as Amy Castor reported on July 2nd in Binance: A crypto exchange running out of places to hide: Binance, the world’s largest dark crypto slush fund, is struggling to find corners of the world that will tolerate its lax anti-money laundering policies and flagrant disregard for securities laws. As a result, Laurence Fletcher, Eva Szalay and Adam Samson report that Hedge funds back away from Binance after regulatory assault : The global regulatory pushback “should raise red flags for anyone keeping serious capital at the exchange”, said Ulrik Lykke, executive director at ARK36, adding that the fund has “scaled down” exposure. ... Lykke described it as “especially concerning” that the recent moves against Binance “involve multiple entities from across the financial sphere”, such as banks and payments groups. This leaves some serious money looking for an off-ramp from USDT to fiat. These are somewhat scarce: if USDT holders on centralized exchanges chose to run for the exits, USD/USDC/BUSD liquidity immediately available to them would be relatively small. ~44 billion USDT held on exchanges would be matched with perhaps ~10 billion in fiat currency and USDC/BUSD This, and the addictive nature of "a casino ... with insane amounts of leverage", probably account for the relatively small drop in USDT market cap since June 18th. Amy Castor reported July 13th on another reason in Binance: Fiat off-ramps keep closing, reports of frozen funds, what happened to Catherine Coley?: Binance customers are becoming trapped inside of Binance — or at least their funds are — as the fiat exits to the world’s largest crypto exchange close around them. You can almost hear the echoes of doors slamming, one by one, down a long empty corridor leading to nowhere. In the latest bit of unfolding drama, Binance told its customers today that it had disabled withdrawals in British Pounds after its key payment partner, Clear Junction, ended its business relationship with the exchange. ... There’s a lot of unhappy people on r/BinanceUS right now complaining their withdrawals are frozen or suspended — and they can’t seem to get a response from customer support either. ... Binance is known for having “maintenance issues” during periods of heavy market volatility. 
As a result, margin traders, unable to exit their positions, are left to watch in horror while the exchange seizes their margin collateral and liquidates their holdings. And it isn't just getting money out of Binance that is getting hard, as David Gerard reports: Binance is totally not insolvent! They just won’t give anyone their cryptos back because they’re being super-compliant. KYC/AML laws are very important to Binance, especially if you want to get your money back after suspicious activity on your account — such as pressing the “withdraw” button. Please send more KYC. [Binance] Issues like these tend to attract the attention of the mainstream press. On July 23rd the New York Times' Eric Lipton and Ephrat Livni profiled Sam Bankman-Fried of the FTX exchange in Crypto Nomads: Surfing the World for Risk and Profit: The highly leveraged form of trading these platforms offer has become so popular that the overall value of daily purchases and sales of these derivatives far surpasses the daily volume of actual cryptocurrency transactions, industry data analyzed by researchers at Carnegie Mellon University shows. ... FTX alone has one million users across the world and handles as much as $20 billion a day in transactions, most of them derivatives trades. Like their customers, the platforms compete. Mr. Bankman-Fried from FTX, looking to out promote BitMEX, moved to offer up to 101 times leverage on derivatives trades. Mr. Zhao from Binance then bested them both by taking it to 125. Then on the 25th, as the regulators' seriousness sank in, the same authors reported Leaders in Cryptocurrency Industry Move to Curb the Highest-Risk Trades: Two of the world’s most popular cryptocurrency exchanges announced on Sunday that they would curb a type of high-risk trading that has been blamed in part for sharp fluctuations in the value of Bitcoin and the casino-like atmosphere on such platforms globally. The first move came from the exchange, FTX, which said it would reduce the size of the bets investors can make by lowering the amount of leverage it offers to 20 times from 101 times. Leverage multiplies the traders’ chance for not only profit, but also loss. ... About 14 hours later, Changpeng Zhao [CZ], the founder of Binance, the world’s largest cryptocurrency exchange, echoed the move by FTX, announcing that his company had already started to limit leverage to 20 times for new users and it would soon expand this limit to other existing clients. Early the next day, Tom Schoenberg, Matt Robinson, and Zeke Faux reported for Bloomberg that Tether Executives Said to Face Criminal Probe Into Bank Fraud: U.S. probe into Tether is homing in on whether executives behind the digital token committed bank fraud, a potential criminal case that would have broad implications for the cryptocurrency market. Tether’s pivotal role in the crypto ecosystem is now well known because the token is widely used to trade Bitcoin. But the Justice Department investigation is focused on conduct that occurred years ago, when Tether was in its more nascent stages. Specifically, federal prosecutors are scrutinizing whether Tether concealed from banks that transactions were linked to crypto, said three people with direct knowledge of the matter who asked not to be named because the probe is confidential. Federal prosecutors have been circling Tether since at least 2018. In recent months, they sent letters to individuals alerting them that they’re targets of the investigation, one of the people said. 
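As an aside on the leverage numbers above, here is a back-of-the-envelope sketch, my own illustration, ignoring fees, funding payments and maintenance-margin rules (all of which make things slightly worse), of why 101x or 125x margin trading is a casino:

def wipe_out_move(leverage):
    """Approximate adverse price move, as a fraction of the entry price,
    that consumes the entire initial margin of a leveraged position."""
    return 1.0 / leverage

for leverage in (20, 101, 125):
    print(f"{leverage:>3}x: a ~{wipe_out_move(leverage):.2%} move against you loses the whole margin")
# 20x -> ~5.00%, 101x -> ~0.99%, 125x -> ~0.80%, moves BTC routinely makes within an hour.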
Source Once again, David Gerard pointed out the obvious market manipulation: This week’s “number go up” happened several hours before the report broke — likely when the Bloomberg reporter contacted Tether for comment. BTC/USD futures on Binance spiked to $48,000, and the BTC/USD price on Coinbase spiked at $40,000 shortly after. Here’s the one-minute candles on Coinbase BTC/USD around 01:00 UTC (2am BST on this chart) on 26 July — the price went up $4,000 in three minutes. You’ve never seen something this majestically organic And so did Amy Castor in The DOJ’s criminal probe into Tether — What we know: Last night, before the news broke, bitcoin was pumping like crazy. The price climbed nearly 17%, topping $40,000. On Coinbase, the price of BTC/USD went up $4,000 in three minutes, a bit after 01:00 UTC. After a user placed a large number of buy orders for bitcoin perpetual futures denominated in tethers (USDT) on Binance — an unregulated exchange struggling with its own banking issues — The BTC/USDT perpetual contract hit a high of $48,168 at around 01:00 UTC on the exchange. Bitcoin pumps are a good way to get everyone to ignore the impact of bad news and focus on number go up. “Hey, this isn’t so bad. Bitcoin is going up in price. I’m rich!” Source As shown in the graph, the perpetual futures market is at least an order of magnitude larger than the spot market upon which it is based. and as we saw for example on December 16th and July 26th, the spot market is heavily manipulated. Pump-and-dump schemes in the physical market are very profitable, and connecting them to the casino in the futures market with its insane leverage can juice profitability enormously. Tether and Binance Fais Kahn's second part, Bitcoin's end: Tether, Binance and the white swans that could bring it all down, explores the mutual dependency between Tether and Binance: There are $62B tokens for USDT in circulation, much of which exists to fuel the massive casino that is the perpetual futures market on Binance. These complex derivatives markets, which are illegal to trade in the US, run in the tens of billions and help drive up the price of Bitcoin by generating the basis trade. The "basis trade": involves buying a commodity at spot (taking a long position) and simultaneously establishing a short position through derivatives like options or futures contracts Kahn continues: For Binance to allow traders to make such crazy bets, it needs collateral to make sure if traders get wiped out, Binance doesn’t go bankrupt. That collateral is now an eye-popping $17B, having grown from $3B in February and $10B in May: But for that market to work, Binance needs USDT. And getting fresh USDT is a problem now that the exchange, which has always been known for its relaxed approach to following the laws, is under heavy scrutiny from the US Department of Justice and IRS: so much so that their only US dollar provider, Silvergate Bank, recently terminated their relationship, suggesting major concerns about the legality of some of Binance’s activities. This means users can no longer transfer US dollars from their bank to Binance, which were likely often used to fund purchases of USDT. Since that shutdown, the linkages between Binance, USDT, and the basis trade are now clearer than ever. In the last month, the issuance of USDT has completely stopped: Likewise, futures trading has fallen significantly. This confirms that most of the USDT demand likely came from leveraged traders who needed more and more chips for the casino. 
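Before returning to Kahn, a rough sketch of the arithmetic behind that trade. The numbers are illustrative only, not drawn from the post; on USDT-margined perpetuals the premium is collected as periodic funding payments (the mechanism in the Wikipedia quote above) rather than at an expiry date, but the shape is the same: the long and short legs cancel the price risk, leaving the basis as a roughly market-neutral yield.

def annualized_basis(spot, future, days_to_expiry):
    """Return from buying spot and shorting an equal amount of a dated
    future trading at a premium, expressed as an annual rate."""
    basis = (future - spot) / spot
    return basis * 365.0 / days_to_expiry

# Illustrative figures: BTC at $35,000 spot, a 90-day future trading at $36,400.
print(f"{annualized_basis(35_000, 36_400, 90):.1%}")   # ~16.2% annualized
# Funded with borrowed stablecoins or run with leverage, a basis of a few
# percent is how the double-digit "interest" offers mentioned above get paid.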
Meanwhile, the basis trade has completely disappeared at the same time. Which is the chicken and which is the egg? Did the massive losses in Bitcoin kill all the craziest players and end the free money bonanza, or did Binance’s banking troubles choke off the supply of dollars, ending the game for everyone? Either way, the link between futures, USDT, and the funds flooding the crypto world chasing free money appears to be broken for now. This is a problem for Binance: Right now Tether is Binance’s $17B problem. At this point, Binance is holding so much Tether the exchange is far more dependent on USDT’s peg staying stable than it is on any of its banking relationships. If that peg were to break, Binance would likely see capital flight on a level that would wreak untold havoc in the crypto markets ... Regulators have been increasing the pace of their enforcements. In other words, they are getting pissed, and the BitMex founders going to jail is a good example of what might await. Binance has been doing all it can to avoid scrutiny, and you have to award points for creativity. The exchange was based in Malta, until Malta decided Binance had “no license” to operate there, and that Malta did not have jurisdiction to regulate them. As a result, CZ began to claim that Binance “doesn’t have” a headquarters. Wonder why? Perhaps to avoid falling under anyone’s direct jurisdiction, or to avoid a paper trail? CZ went on to only reply that he is based in “Asia.” Given what China did to Jack Ma recently, we can empathize with a desire to stay hidden, particularly when unregulated exchanges are a key rail for evading China’s strict capital controls. Any surprise that the CFO quit last month? But it is also a problem for Tether: Here’s what could trigger a cascade that could bring the exchange down and much of crypto with it: the DOJ and IRS crack down on Binance, either by filing charges against CZ or pushing Biden and Congress to give them the death penalty: full on sanctions. This would lock them out of the global financial system, cause withdrawals to skyrocket, and eventually drive them to redeem that $17B of USDT they are sitting on. And what will happen to Tether if they need to suddenly sell or redeem those billions? We have no way of knowing. Even if fully collateralized, Tether would need to sell billions in commercial paper on short notice. And in the worst case, the peg would break, wreaking absolute havoc and crushing crypto prices. Alternatively It’s possible that regulators will move as slow as they have been all along - with one country at a time unplugging Binance from its banking system until the exchange eventually shrinks down to be less of a systemic risk than it is. That's my guess — it will become increasingly difficult either to get USD or cryptocurrency out of Binance's clutches, or to send them fiat, as banks around the world realize that doing business with Binance is going to get them in trouble with their regulators. Once customers realize that Binance has become a "roach motel" for funds, and that about 25% of USDT is locked up there, things could get quite dynamic. Kahn concludes: Everything around Binance and Tether is murky, even as these entities two dominate the crypto world. Tether redemptions are accelerating, and Binance is in trouble, but why some of these things are happening is guesswork. And what happens if something happens to one of those two? We’re entering some uncharted territory. But if things get weird, don’t say no one saw it coming. 
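To put the numbers scattered through this section side by side, a trivial sketch of the exit arithmetic: roughly 44B USDT sitting on exchanges, perhaps 10B of immediately available fiat and USDC/BUSD, and Binance alone holding about 17B. These figures are the estimates quoted above, not my own.

usdt_on_exchanges_b = 44          # estimate quoted above, in billions
liquid_fiat_and_stablecoins_b = 10
binance_usdt_b = 17

coverage = liquid_fiat_and_stablecoins_b / usdt_on_exchanges_b
print(f"Immediate liquidity covers ~{coverage:.0%} of exchange-held USDT")   # ~23%
# Binance's own pile alone exceeds all of that liquidity, which is why a forced
# redemption of its ~$17B is the scenario Kahn worries about.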
Policy Responses Gorton and Zhang argue that the modern equivalent of the "free banking" era is fraught with too many risks to tolerate. David Gerard provides an overview of the era in Stablecoins through history — Michigan Bank Commissioners report, 1839: The wildcat banking era, more politely called the “free banking era,” ran from 1837 to 1863. Banks at this time were free of federal regulation — they could launch just under state regulation. Under the gold standard in operation at the time, these state banks could issue notes, backed by specie — gold or silver — held in reserve. The quality of these reserves could be a matter of some dispute. The wildcat banks didn’t work out so well. The National Bank Act was passed in 1863, establishing the United States National Banking System and the Office of the Comptroller of the Currency — and taking away the power of state banks to issue paper notes. Gerard's account draws from a report of Michigan's state banking commissioners, Documents Accompanying the Journal of the House of Representatives of the State of Michigan, pp. 226–258, which makes clear that Tether's lack of transparency as to its reserves isn't original. Banks were supposed to hold "specie" (money in the form of coin) as backing but: The banking system at the time featured barrels of gold that were carried to other banks, just ahead of the inspectors For example, the commissioners reported that: The Farmers’ and Mechanics’ bank of Pontiac, presented a more favorable exhibit in point of solvency, but the undersigned having satisfactorily informed himself that a large proportion of the specie exhibited to the commissioners, at a previous examination, as the bona fide property of the bank, under the oath of the cashier, had been borrowed for the purpose of exhibition and deception; that the sum of ten thousand dollars which had been issued for “exchange purposes,” had not been entered on the books of the bank, reckoned among its circulation, or explained to the commissioners. Gorton and Zhang summarize the policy choices thus: Based on historical lessons, the government has a couple of options: (1) transform stablecoins into the equivalent of public money by (a) requiring stablecoins to be issued through FDIC- insured banks or (b) requiring stablecoins to be backed one-for-one with Treasuries or reserves at the central bank; or (2) introduce a central bank digital currency and tax private stablecoins out of existence. Their suggestions for how to implement the first option include: the interpretation of Section 21 of the Glass-Steagall Act, under which "it is unlawful for a non-bank entity to engage in deposit-taking" the interpretation of Title VIII of the Dodd-Frank Act, under which the Financial Stability Oversight Council could "designate stablecoin issuance as a systemic payment activity". This "would give the Federal Reserve the authority to regulate the activity of stablecoin issuance by any financial institution." Congress could pass legislation that requires stablecoin issuers to become FDIC-insured banks or to run their business out of FDIC-insured banks. As a result, stablecoin issuers would be subject to regulations and supervisory activities that come along with being an FDIC-insured bank. Alternatively, the second option would involve: Congress could require the Federal Reserve to issue a central bank digital currency as a substitute to privately produced digital money like stablecoins ... 
The question then becomes whether policymakers would want to have central bank digital currencies coexist with stablecoins or to have central bank digital currencies be the only form of money in circulation. As discussed previously, Congress has the legal authority to create a fiat currency and to tax competitors of that uniform national currency out of existence. They regard the key attribute of an instrument that acts as money to be that it is accepted at face value "No Questions Asked" (NQA). Thus, based on history they ask: In other words, should the sovereign have a monopoly on money issuance? As shown by revealed preference in the table below, the answer is yes. The provision of NQA money is a public good, which only the government can supply. Posted by David. at 8:00 AM Labels: bitcoin 3 comments: David. said... Katanga Johnson reports for Reuters that U.S. SEC Chair Gensler calls on Congress to help rein in crypto 'Wild West': "Gary Gensler said the crypto market involves many tokens which may be unregistered securities and leaves prices open to manipulation and millions of investors vulnerable to risks. "This asset class is rife with fraud, scams and abuse in certain applications," Gensler told a global conference. "We need additional congressional authorities to prevent transactions, products and platforms from falling between regulatory cracks." ... He also called on lawmakers to give the SEC more power to oversee crypto lending, and platforms like peer-to-peer decentralized finance (DeFi) sites that allow lenders and borrowers to transact in cryptocurrencies without traditional banks." August 3, 2021 at 5:30 PM David. said... Carol Alexander's Binance’s Insurance Fund is a fascinating, detailed examination of Binance's extremely convenient "outage" as BTC crashed on May 19: "How insufficient insurance funds might explain the outage of Binance’s futures platform on May 19 and the potentially toxic relationship between Binance and Tether." August 9, 2021 at 7:04 AM David. said... Gary Gensler's "Wild West" comment is refuted in David Segal's hysterically funny Going for Broke in Cryptoland: "Cryptoland is often likened to the Wild West, but that’s unfair to the Wild West. It had sheriffs, courts, the occasional posse. There isn’t a cop in sight in Cryptoland. If someone steals your crypto, tough." Other snippets include: "The journey from PancakeSwap to my crypto wallet took four and a half hours. Which pointed up another surprise about Cryptoland. It’s absurdly slow." And: "Crypto also offers something deeper and more gratifying than a regular investment. It offers meaning. The more time you spend in a cryptocurrency chat the more elements it seems to share with a religious sect. Belief is required. Heretics, in the form of those Telegram dissenters, are banished. And if you stick around long enough, the proselytizing begins. “Once you start seeing the potential of this project for the rest of the world,” Mr. Danci told me during the Telegram chat, “you will want to start promoting it yourself because it is really game changing.” “Resistance is futile!” someone piped up, with a laugh. Investing in crypto holds out the prospect of a jackpot and the chance to bond over a shared catechism. It’s like a church social in a casino. One attendee said he spent about 10 hours a day on the FEG chat. “I talk to these people more than I talk to the friends I grew up with,” he said." 
August 9, 2021 at 12:33 PM
blog-dshr-org-979 ---- DSHR's Blog: Library of Congress Storage Architecture Meeting
Thursday, January 9, 2020
Library of Congress Storage Architecture Meeting
The Library of Congress has finally posted the presentations from the 2019 Designing Storage Architectures for Digital Collections workshop that took place in early September. I've greatly enjoyed the earlier editions of this meeting, so I was sorry I couldn't make it this time. Below the fold, I look at some of the presentations.
Robert Fontana & Gary Decad As usual, Fontana and Decad provided their invaluable overview of the storage landscape. Their key points include: [Slide 5] The total amount of storage manufactured each year continues its exponential growth at around 20%/yr. The vast majority (76%) of it is HDD, but the proportion of flash (20%) is increasing. Tape remains a very small proportion (4%). [Slide 12] They contrast this 20% growth in supply with the traditionally ludicrous 40% growth in "demand". Their analysis assumes one byte of storage manufactured in a year represents one byte of data stored in that year, which is not the case (see my 2016 post Where Did All Those Bits Go? for a comprehensive debunking). So their supposed "storage gap" is actually a huge, if irrelevant, underestimate. But they hit the nail on the head with: Key Point: HDD 75% of bits and 30% of revenue, NAND 20% of bits and 70% of revenue". [Slide 9] The Kryder rates for NAND Flash, HDD and Tape are comparable; $/GB decreases are competitive with all technologies. But, as I've been writing since at least 2012's Storage Will Be A Lot Less Free Than It Used To Be, the Kryder rate has decreased significantly from the good old days: $/GB decreases are in the 19%/yr range and not the classical Moore’s Law projection of 28%/yr associated with areal density doubling every 2 years As my economic model shows, this makes long-term data storage a significantly greater investment. [Slide 11] In 2017 flash was 9.7 times as expensive as HDD. In 2018 the ratio was 9 times. Thus, despite recovering from 2017's supply shortages, flash has not made significant progress in eroding HDD's $/GB advantage. By continuing current trends, they project that by 2026 flash will ship more bytes than HDD. But they project it will still be 6 times as expensive per byte. So they ask a good question: In 2026 is there demand for 7X more manufactured storage annually and is there sufficient value for this storage to spend $122B more annually (2.4X) for this storage? Jon Trantham Jon Trantham of Seagate confirmed that, as it has been for a decade, the date for volume shipments of HAMR drives is still slipping in real time; "Seagate is now shipping HAMR drives in limited quantities to lead customers". His presentation is interesting in that he provides some details of the extraordinary challenges involved in manufacturing HAMR drives, with pictures showing how small everything is: The height from the bottom of the slider to the top of the laser module is less than 500 um The slider will fly over the disk with an air-gap of only 1-2 nm As usual, I will predict that the industry is far more likely to achieve the 15% CAGR in areal density line on the graph than the 30% line. Note the flatness of the "HDD Product" curve for the last five years or so. Tape The topic of tape provided a point-counterpoint balance. Gary Decad and Robert Fontana from IBM made the point that tape's roadmap is highly credible by showing that: Tape, unlike HDD, has consistently achieved published capacity roadmaps and that: For the last 8 years, the ratio of manufactured EB of tape to manufactured EB of HDD as remained constant in the 5.5% range and that: Unlike HDD, tape magnetic physics is not the limiting issues since tape bit cells are 60X larger than HDD bit cells ... The projected tape areal density in 2025 (90 Gbit/in2) is 13x smaller than today’s HDD areal density and has already been demonstrated in laboratory environments. 
Carl Watts' Issues in Tape Industry needed only a few bullets to make his counterpoint that the risk in tape is not technological:
IBM is the last of the hardware manufacturers: IBM is the only builder of LTO8, and IBM is the only vendor left with enterprise class tape drives. If you only have one manufacturer how do you mitigate risk?
These cloud archival solutions all use tape: Amazon AWS Glacier and Glacier Deep ($1/TB/month), Azure General Purpose v2 storage Archive ($2/TB/month), Google GCP Coldline ($7/TB/month). If it's all the same tape, how do we mitigate risk?
If, as Decad and Fontana claim: Tape storage is strategic in public, hybrid, and private “Clouds” then IBM has achieved a monopoly, which could have implications for tape's cost advantage.
Jon Trantham's presentation described Seagate's work on robots, similar to tape robots and the Blu-Ray robots developed by Facebook, but containing hard disk cartridges descended from those we studied in 2008's Predicting the Archival Life of Removable Hard Disk Drives. We showed that the bits on the platters had similar life to bits on tape. Of course, tape has the advantage of being effectively a 3D medium where disk is effectively a 2D medium.
Cloud Storage
Amazon, Wasabi and Ceph gave useful marketing presentations. Julian Morley reported on Stanford's transition from in-house tape to cloud storage, with important cost data. I reported previously on the economic modeling Morley used to support this decision.
Cold storage pricing (US$/month):
AWS: 1000GB 0.99; 10,000 write operations 0.05; 10,000 read operations 0.004; 1GB retrieval 0.02; early deletion charge: 180 days
Azure: 1000GB 0.99; 10,000 write operations 0.1; 10,000 read operations 5; 1GB retrieval 0.02; early deletion charge: 180 days
Google: 1000GB 1.2; 10,000 operations 0.5; 1GB retrieval 0.12; early deletion charge: 365 days
At The Register, Tim Anderson's Archive storage comes to Google Cloud: Will it give AWS and Azure the cold shoulder? provides a handy comparison of the leading cloud providers' pricing options for archival storage, and concludes: This table, note, is an over-simplification. The pricing is complex; operations are broken down more precisely than read and write; the exact features vary; and there may be discounts for reserved storage. Costs for data transfer within your cloud infrastructure may be less. The only way to get a true comparison is to specify your exact requirements (and whether the cloud provider can meet them), and work out the price for your particular case.
DNA
I've been writing enthusiastically about the long-term potential, but skeptically about the medium-term future, of DNA as an archival storage medium for more than seven years. I've always been impressed by the work of the Microsoft/UW team in this field, and Karin Strauss and Luis Ceze's DNA data storage and computation is no exception. It includes details of their demonstration of a complete write-to-read automated system (see also video), and discussion of techniques for performing "big data" computations on data stored in DNA. Anne Fischer reported on DARPA's research program in Molecular Informatics. One of its antecedents was a DARPA workshop in 2016. Her presentation stressed the diverse range of small molecules that can be used as storage media. I wrote about one non-DNA approach from Harvard last year. In Cost-Reducing Writing DNA Data I wrote about Catalog's approach, assembling a strand from a library of short sequences of bases.
It is a good idea, addressing one of the big deficiencies of DNA as a storage medium, its write bandwidth. But Devin Leake's slides are short on detail, more of an elevator pitch for investment, They start by repeating the ludicrous IDC projection of "bytes generated" and equating it to demand for storage, and in particular archival storage. If you're doing a company you need a much better idea than this about the market you're addressing. Henry Newman The good Dr. Pangloss loved Henry Newman's enthusiasm for 5G networking, but I'm a lot more skeptical. It is true that early 5G phones can demo nearly 2Gb/s in very restricted coverage areas in some US cities. But 5G phones are going to be more expensive to buy, more expensive to use, have less battery life, overheat, have less consistent bandwidth and almost non-existent coverage. In return, you get better peak bandwidth, which most people don't use. Customers are already discovering that their existing phone is "good enough". 5G is such a deal! The reason the carriers are building out 5G networks isn't phones, it is because they see a goldmine in the Internet of Things. But combine 2Gb/s bandwidth with the IoT's notoriously non-existent security, and you have a disaster the carriers simply cannot allow to happen. The IoT has proliferated for two reasons, the Things are very cheap and connecting them to the Internet is unregulated, so ISPs cannot impose hassles. But connecting a Thing to the 5G Internet will require a data plan from the carrier, so they will be able to impose requirements, and thus costs. Among the requirements will have to be that the Things have UL certification, adequate security and support, including timely software updates for their presumably long connected life. It is precisely the lack of these expensive attributes that have made the IoT so ubiquitous and such a security dumpster-fire! Fixity Two presentations discussed fixity checks. Mark Cooper reported on an effort to validate both the inventory and the checksums of part of LC's digital collection. The conclusion was that the automated parts were reliable, the human parts not so much: Content on storage is correct, inventory is not Content custodians working around system limitations, resulting in broken inventory records Content in the digital storage system needs to be understood as potentially dynamic, in particular for presentation and access System needs to facilitate required actions in ways that are logged and versioned Buzz Hayes from Google explained their recommended technique for performing fixity checks on data in Google's cloud. They provide scripts for the two traditional approaches: Read the data back and hash it, which at scale gets expensive in access and bandwidth charges. Hash the data in the cloud that stores it, which involves trusting the cloud to actually perform the hash rather than simply remember the hash computed at ingest. I have yet to see a cloud API that implements the technique published by Mehul Shah et al twelve years ago, allowing the data owner to challenge the cloud provider with a nonce, thus forcing it to compute the hash of the nonce and the data at check time. See also my Auditing The Integrity Of Multiple Replicas. 
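A minimal sketch of that challenge-response technique: because the nonce is fresh each time, the provider cannot satisfy the challenge with a digest remembered from ingest, it has to read the data and hash it at check time. The function names are mine, not any cloud provider's API, and in practice the owner either keeps a replica or pre-computes responses for a stock of nonces.

import hashlib, os

def issue_challenge():
    # A fresh, unpredictable nonce for each audit.
    return os.urandom(32)

def provider_response(nonce, stored_data):
    # The provider must hash nonce || data, which forces it to read the data
    # now rather than replay a hash computed at ingest time.
    return hashlib.sha256(nonce + stored_data).hexdigest()

def owner_check(nonce, response, reference_copy):
    return response == hashlib.sha256(nonce + reference_copy).hexdigest()

data = b"some archival object"
nonce = issue_challenge()
assert owner_check(nonce, provider_response(nonce, data), data)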
Blockchain
Sharmila Bhatia reported on an initiative by NARA to investigate the potential for blockchain to assist government records management, which concluded:
- Authenticity and Integrity: Blockchain distributed ledger functionality presents a new way to ensure electronic systems provide electronic record authenticity / integrity.
- May not help with preservation or long term access and may make these issues more complicated.
It is important to note that what NARA means by "government records" is quite different from what is typically meant by "records", and the legislative framework under which they operate may make applying blockchain technology tricky. Ben Fino-Radin and Michelle Lee pitched Starling, a startup claiming: Simplified & coordinated decentralized storage on the Filecoin network. Their slides describe how the technology works, but give no idea of how much it would cost to use. Just as with DNA and other exotic media, the real issue is economic not technical. I wrote skeptically about the economics of the Filecoin network in The Four Most Expensive Words in the English Language and Triumph Of Greed Over Arithmetic, comparing its possible pricing to Amazon's S3 and S3 RRS. Of course, the numbers would have looked much worse for Filecoin had I compared it with Wasabi's pricing.

A Final Request To The Organizers
This is always a fascinating meeting. But, please, on the call for participation next year make it clear that anyone using projections for "data generated" in their slides as somehow relevant to "data storage", and archival data storage in particular, will be hauled off stage by the hook.

Posted by David. at 8:00 AM Labels: big data, bitcoin, digital preservation, government information, IoT, library of congress, long-lived media, storage media

7 comments:

David. said... In 5G Security, Bruce Schneier points out that, even if the telcos were to enforce strict security for 5G-connected Things, we are still screwed: "Security vulnerabilities in the standards -- the protocols and software for 5G -- ensure that vulnerabilities will remain, regardless of who provides the hardware and software. These insecurities are a result of market forces that prioritize costs over security and of governments, including the United States, that want to preserve the option of surveillance in 5G networks. If the United States is serious about tackling the national security threats related to an insecure 5G network, it needs to rethink the extent to which it values corporate profits and government espionage over security." Go read the whole post and weep. January 14, 2020 at 12:02 PM

David. said... Chris Mellor reports that Hard disk drive shipments fell 50% between 2012 and 2019 as SSD cannibalized everything except nearline. But note from Fontana's graph above that capacity per drive increased faster than unit shipments decreased, so total bytes shipped still increased. January 15, 2020 at 7:42 AM

David. said... Maybe volume shipments of HAMR drives will happen this year. Jim Salter expresses optimism, despite the long history, in HAMR don’t hurt ’em—laser-assisted hard drives are coming in 2020: "Seagate has been trialing 16TB HAMR drives with select customers for more than a year and claims that the trials have proved that its HAMR drives are "plug and play replacements" for traditional CMR drives, requiring no special care and having no particular poor use cases compared to the drives we're all used to." February 7, 2020 at 11:56 AM

David. said...
Kevin Werbach takes the 5G hype to the woodshed in The 'race to 5G' is a myth: "Telecommunications providers relentlessly extol the power of fifth-generation (5G) wireless technology. Government officials and policy advocates fret that the winner of the "5G race" will dominate the internet of the future, so America cannot afford to lose out. Pundits declare that 5G will revolutionize the digital world. It all sounds very thrilling. Unfortunately, the hype has gone too far. 5G systems will, over time, replace today's 4G, just as next year's iPhone 12 will improve on this year's 11. 5G networks offer significantly greater transmission capacity. However, despite all the hype, they won't represent a radical break from the current mobile experience." February 8, 2020 at 8:52 AM

David. said... As an illustration of how broken the security of the Things in the Internet is, Wang Wei's A Dozen Vulnerabilities Affect Millions of Bluetooth LE Powered Devices reports that: "A team of cybersecurity researchers late last week disclosed the existence of 12 potentially severe security vulnerabilities, collectively named 'SweynTooth,' affecting millions of Bluetooth-enabled wireless smart devices worldwide—and worryingly, a few of which haven't yet been patched. All SweynTooth flaws basically reside in the way software development kits (SDKs) used by multiple system-on-a-chip (SoC) have implemented Bluetooth Low Energy (BLE) wireless communication technology—powering at least 480 distinct products from several vendors including Samsung, FitBit and Xiaomi." A lot of the vulnerable products are medical devices ... March 8, 2020 at 3:22 PM

David. said... Karl Bode piles on the 5G debunking with Study Shows US 5G Is An Over-hyped Disappointment. He reports on 5G download speed is now faster than Wifi in seven leading 5G countries, a new study that would be great news for 5G if the one country (out of eight) where it isn't faster weren't the US. Bode writes: "The study, in one swipe, puts to bed claims that 5G is a "race" that the US is somehow winning through sheer ingenuity and industry coddling deregulation, and that 5G will be some sort of competitive panacea (high prices also hamstring it in this area). OpenSignal has a whole separate study on why 5G won't be supplanting WiFi anytime soon. All of this runs, again, in pretty stark contrast to claims by companies like Verizon that 5G is some widely available, near mystical technology that will revolutionize everything from smart vehicles to modern medicine." May 11, 2020 at 12:47 PM

David. said... Karl Bode points out that even Verizon Tries To Temper 5G Enthusiasm After Report Clearly Shows US 5G Is Slow, Lame: "Verizon's problem is that while the company will be deploying a lot of millimeter wave (mmwave) spectrum in key urban markets, that flavor of 5G lacks range and can't penetrate walls particularly well (for 5G conspiracy theorists, that means the technology is less likely to penetrate your skin and harm you, as well). For most users, what you see now with 4G is what you'll get for several years to come ... U.S. consumers, who already pay some of the highest prices in the world for Verizon 4G service, will also need to pay $10 extra a month (you know, just because), and shell out significant cash for early-adoption 5G devices that are fatter, more expensive, and have worse battery life than their current gear."
May 21, 2020 at 4:46 PM

blog-dshr-org-9998 ---- DSHR's Blog: Proof-of-Stake In Practice

Tuesday, March 17, 2020. Proof-of-Stake In Practice. At the most abstract level, the work of Eric Budish, Raphael Auer, Joshua Gans and Neil Gandal is obvious. A blockchain is secure only if the value to be gained by an attack is less than the cost of mounting it.
These papers all assume that actors are "economically rational", driven by the immediate monetary bottom line, but this isn't always true in the real world. As I wrote when commenting on Gans and Gandal: As we see with Bitcoin's Lightning Network, true members of the cryptocurrency cult are not concerned that the foregone interest on capital they devote to making the system work is vastly greater than the fees they receive for doing so. The reason is that, as David Gerard writes, they believe that "number go up". In other words, they are convinced that the finite supply of their favorite coin guarantees that its value will in the future "go to the moon", providing capital gains that vastly outweigh the foregone interest. Follow me below the fold for a discussion of a recent attack on a Proof-of-Stake blockchain that wasn't motivated by the immediate monetary bottom line. Steem was one of the efforts to decentralize the Web discussed in the MIT report, which pointed out that: Right now, the distribution of SP across users in the system is very unequal -- more than 90% of SP tokens are held by less than 2% of account holders in the system. This immense disparity in voting power complicates Steemit's narrative around democratized content curation -- it means that a very small number of users are extremely influential and that the vast majority of users' votes are virtually inconsequential. Now this has proven true. David Gerard reports that: Distributed Proof-of-Stake leaves your blockchain open to takeover bids — such as when Justin Sun of TRON tried to take over the Steem blockchain, by enlisting exchanges such as Binance to pledge their holdings to his efforts. Gerard links to Yulin Cheng's Tron takeover? Steem community in uproar as crypto exchanges back reversal of blockchain governance soft fork, a detailed account of events. First: On Feb. 14, Steemit entered into a "strategic partnership" with Tron that saw Steemit's chairman declare on social media that he had "sold Steemit to [Justin Sun]," referring to Tron's founder. The result was that: Concerns that Tron might possess too much power over the network resulted in a move by the Steem community on Feb. 24 to implement a soft fork. The soft fork deactivated the voting power of a large number of tokens owned by TRON and Steemit. That was soft fork 2.22. One week later, on March 2nd, Tron arranged for exchanges, including Huobi, Binance and Poloniex, to stake tokens they held on behalf of their customers in a 51% attack: According to the list of accounts powered up on March. 2, the three exchanges collectively put in over 42 million STEEM Power (SP). With an overwhelming amount of stake, the Steemit team was then able to unilaterally implement hard fork 22.5 to regain their stake and vote out all top 20 community witnesses – server operators responsible for block production – using account @dev365 as a proxy. In the current list of Steem witnesses, Steemit and TRON's own witnesses took up the first 20 slots. Although this attack didn't provide Tron with an immediate monetary reward, the long term value of retaining effective control of the blockchain was vastly greater than the cost of staking the tokens. I've been pointing out that the high Gini coefficients of cryptocurrencies mean Proof-of-Stake centralizes control of the blockchain in the hands of the whales since 2017's Why Decentralize?, which
quoted Vitalik Buterin pointing out that a realistic scenario was: In a proof of stake blockchain, 70% of the coins at stake are held at one exchange. Or in this case three exchanges cooperating. Apparently, the tokens that soft fork 2.22 blocked from voting were mined before the blockchain went live and retained by Steemit: "The stake was essentially premined and was always said to be for on-boarding and community building. The witnesses decided to freeze it in an attempt to prevent a hostile takeover of the network," [@jeffjagoe] told The Block. "But they forgot Justin has a lot of money, and money buys buddies at the exchanges." Vitalik Buterin commented: "Apparently Steem DPOS got taken over by big exchanges voting with depositors' funds," he tweeted. "Seems like the first big instance of a 'de facto bribe attack' on coin voting (the bribe being exchs giving holders convenience and taking their votes)." As Buterin wrote in 2014, Proof-of-Stake turned out to be non-trivial. Posted by David. at 8:00 AM Labels: bitcoin, distributed web
blog-esilibrary-com-3149 ---- Equinox Open Library Initiative

Equinox provides innovative open source software for libraries of all types. Extraordinary service. Exceptional value. As a 501(c)(3) nonprofit corporation, Equinox supports library automation by investing in open source software and providing technology services for libraries.
News & Events
Press Release: Center for Khmer Studies Goes Live on Koha ILS
Press Release: Vermont Jazz Center Goes Live on Koha ILS
Press Release: Equinox Open Library Initiative Presents “Developing Open Source Tools To Support Libraries During COVID-19” at the 2021 ALA Annual Conference

Products & Services
Koha is the first free and open source library automation package. Equinox’s team includes some of Koha’s core developers.
Evergreen is a unique and powerful open source ILS designed to support large, dispersed, and multi-tiered library networks.
Equinox provides ongoing educational opportunities through equinoxEDU, including live webinars, workshops, and online resources.
Fulfillment is an open source interlibrary loan management system. Fulfillment can be used alongside or in connection with any integrated library system.
CORAL is an open source electronic resources management system. Its interoperable modules allow libraries to streamline their management of electronic resources.
Customized For Your Library: Consulting, Migration, Development, Hosting & Support, Training & Education

Why Choose Equinox?
Equinox is different from most ILS providers. As a non-profit organization, our guiding principle is to provide a transparent, open software development process, and we release all code developed to publicly available repositories. Equinox is experienced with serving libraries of all types in the United States and internationally. We’ve supported and migrated libraries of all sizes, from single library sites to full statewide implementations. Equinox is technically proficient, with skilled project managers, software developers, and data services staff ready to assist you. We’ve helped libraries automating for the first time and those migrating from legacy ILS systems. Equinox knows libraries. More than fifty percent of our team are professional librarians with direct experience working in academic, government, public and special libraries. We understand the context and ecosystem of library software.

"Working with Equinox has been like night and day. It's amazing to have a system so accessible to our patrons and easy to use. It has super-charged our library lending power!" - Brooke Matson, Executive Director, Spark Central
"Equinox Open Library Initiative hosts Evergreen for the SCLENDS library consortium. Their technical support has been both prompt, responsive, and professional in reacting to our support requests during COVID-19. They have been a valuable consortium partner in meeting the needs of the member libraries and their patrons." - Chris Yates, South Carolina State Library
"Working with Equinox was great! They were able to migrate our entire consortium with no down time during working hours. The Equinox team went the extra mile in helping Missouri Evergreen." - Colleen Knight, Missouri Evergreen
Events
Open Source Twitter Chat with Guest Moderator Xiaoyan Song #ChatOpenS - August 18, 2021 (12-1pm ET). Join us on Twitter with the hashtag #ChatOpenS as we discuss CORAL ERM and electronic resource management with guest moderator Xiaoyan Song, Electronic Resources Librarian.
equinoxEDU: Spotlight on CORAL ERM - August 25, 2021 (1-2pm ET). Say goodbye to your spreadsheets and hello to an open source solution for managing electronic resources! Join us to learn how CORAL ERM can help.
Open Source Twitter Chat with Andrea Buntz Neiman #ChatOpenS - July 14, 2021 (12-1pm ET). Join us on Twitter with the hashtag #ChatOpenS as we discuss Fulfillment with Development Project Manager, Andrea Buntz Neiman.

Equinox Open Library Initiative Inc. is a 501(c)3 corporation devoted to the support of open source software for public libraries, academic libraries, school libraries, and special libraries. As the successor to Equinox Software, Inc., Equinox provides exceptional service and technical expertise delivered by experienced librarians and technical staff. Equinox offers affordable, customized consulting services, software development, hosting, training, and technology support for libraries of all sizes and types.

Contact Us: info@equinoxoli.org, 877.OPEN.ILS (877.673.6457), +1.770.709.5555, PO Box 69, Norcross, GA 30091
blog-esilibrary-com-7463 ---- Equinox Open Library Initiative
blog-ethereum-org-4851 ---- Slasher Ghost, and Other Developments in Proof of Stake | Ethereum Foundation Blog
Slasher Ghost, and Other Developments in Proof of Stake. Posted by Vitalik Buterin on October 3, 2014. Research & Development.

Special thanks to Vlad Zamfir and Zack Hess for ongoing research and discussions on proof-of-stake algorithms and their own input into Slasher-like proposals

One of the hardest problems in cryptocurrency development is that of devising effective consensus algorithms. Certainly, relatively passable default options exist. At the very least it is possible to rely on a Bitcoin-like proof of work algorithm based on either a randomly-generated circuit approach targeted for specialized-hardware resistance, or failing that simple SHA3, and our existing GHOST optimizations allow for such an algorithm to provide block times of 12 seconds. However, proof of work as a general category has many flaws that call into question its sustainability as an exclusive source of consensus; 51% attacks from altcoin miners, eventual ASIC dominance and high energy inefficiency are perhaps the most prominent. Over the last few months we have become more and more convinced that some inclusion of proof of stake is a necessary component for long-term sustainability; however, actually implementing a proof of stake algorithm that is effective is proving to be surprisingly complex. The fact that Ethereum includes a Turing-complete contracting system complicates things further, as it makes certain kinds of collusion much easier without requiring trust, and creates a large pool of stake in the hands of decentralized entities that have the incentive to vote with the stake to collect rewards, but which are too stupid to tell good blockchains from bad. What the rest of this article will show is a set of strategies that deal with most of the issues surrounding proof of stake algorithms as they exist today, and a sketch of how to extend our current preferred proof-of-stake algorithm, Slasher, into something much more robust.

Historical Overview: Proof of stake and Slasher
If you’re not yet well-versed in the nuances of proof of stake algorithms, first read: https://blog.ethereum.org/2014/07/05/stake/ The fundamental problem that consensus protocols try to solve is that of creating a mechanism for growing a blockchain over time in a decentralized way that cannot easily be subverted by attackers.
If a blockchain does not use a consensus protocol to regulate block creation, and simply allows anyone to add a block at any time, then an attacker or botnet with very many IP addresses could flood the network with blocks, and particularly they can use their power to perform double-spend attacks - sending a payment for a product, waiting for the payment to be confirmed in the blockchain, and then starting their own “fork” of the blockchain, substituting the payment that they made earlier with a payment to a different account controlled by themselves, and growing it longer than the original so everyone accepts this new blockchain without the payment as truth. The general solution to this problem involves making a block “hard” to create in some fashion. In the case of proof of work, each block requires computational effort to produce, and in the case of proof of stake it requires ownership of coins - in most cases, it’s a probabilistic process where block-making privileges are doled out randomly in proportion to coin holdings, and in more exotic “negative block reward” schemes anyone can create a block by spending a certain quantity of funds, and they are compensated via transaction fees. In any of these approaches, each chain has a “score” that roughly reflects the total difficulty of producing the chain, and the highest-scoring chain is taken to represent the “truth” at that particular time. For a detailed overview of some of the finer points of proof of stake, see the above-linked article; for those readers who are already aware of the issues I will start off by presenting a semi-formal specification for Slasher:
- Blocks are produced by miners; in order for a block to be valid it must satisfy a proof-of-work condition. However, this condition is relatively weak (eg. we can target the mining reward to something like 0.02x the genesis supply every year).
- Every block has a set of designated signers, which are chosen beforehand (see below). For a block with valid PoW to be accepted as part of the chain it must be accompanied by signatures from at least two thirds of its designated signers.
- When block N is produced, we say that the set of potential signers of block N + 3000 is the set of addresses such that sha3(address + block[N].hash) < block[N].balance(address) * D2, where D2 is a difficulty parameter targeting 15 signers per block (ie. if block N has less than 15 signers it goes down, otherwise it goes up); a sketch of this test and the double-signing penalty follows the list. Note that the set of potential signers is very computationally intensive to fully enumerate, and we don't try to do so; instead we rely on signers to self-declare.
- If a potential signer for block N + 3000 wants to become a designated signer for that block, they must send a special transaction accepting this responsibility and that transaction must get included between blocks N + 1 and N + 64. The set of designated signers for block N + 3000 is the set of all individuals that do this. This "signer must confirm" mechanism helps ensure that the majority of signers will actually be online when the time comes to sign. For blocks 0 ... 2999, the set of signers is empty, so proof of work alone suffices to create those blocks.
- When a designated signer adds their signature to block N + 3000, they are scheduled to receive a reward in block N + 6000.
- If a signer signs two different blocks at height N + 3000, then if someone detects the double-signing before block N + 6000 they can submit an "evidence" transaction containing the two signatures, destroying the signer's reward and transferring a third of it to the whistleblower.
- If there is an insufficient number of signers to sign at a particular block height h, a miner can produce a block with height h+1 directly on top of the block with height h-1 by mining at an 8x higher difficulty (to incentivize this, but still make it less attractive than trying to create a normal block, there is a 6x higher reward). Skipping over two blocks has higher factors of 16x diff and 12x reward, three blocks 32x and 24x, etc.
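As a minimal sketch of two of these rules, the signer-eligibility test and the double-signing penalty, consider the Python below. It is illustrative only: sha3 is approximated with hashlib's sha3_256, and the data structures (balances, pending rewards) are stand-ins for real chain state, not the actual client implementation.

```python
import hashlib

# Illustrative sketch of two Slasher rules (not real client code).

def eligible_to_sign(address: bytes, block_n_hash: bytes,
                     balance: int, d2: int) -> bool:
    # An address may sign block N+3000 if its hash against block N's hash
    # falls below a threshold proportional to its balance at block N.
    h = int.from_bytes(hashlib.sha3_256(address + block_n_hash).digest(), "big")
    return h < balance * d2

def apply_double_sign_evidence(pending_rewards: dict, signer: bytes,
                               sig_a: dict, sig_b: dict,
                               whistleblower: bytes) -> None:
    # Evidence is valid if the same signer signed two *different* blocks at
    # the same height (signature verification itself is omitted here).
    if (sig_a["height"] == sig_b["height"]
            and sig_a["block_hash"] != sig_b["block_hash"]):
        reward = pending_rewards.pop(signer, 0)
        # A third of the destroyed reward goes to the whistleblower;
        # the remaining two thirds simply disappear.
        pending_rewards[whistleblower] = (
            pending_rewards.get(whistleblower, 0) + reward // 3)
```

The eligibility threshold scales with balance, so the expected number of qualifying addresses tracks the stake distribution; D2 is then retargeted so that about 15 addresses qualify per block.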
Essentially, by explicitly punishing double-signing, Slasher in a lot of ways, although not all, makes proof of stake act like a sort of simulated proof of work. An important incidental benefit of Slasher is the non-revert property. In proof of work, sometimes after one node mines one block some other node will immediately mine two blocks, and so some nodes will need to revert back one block upon seeing the longer chain. Here, every block requires two thirds of the signers to ratify it, and a signer cannot ratify two blocks at the same height without losing their gains in both chains, so assuming no malfeasance the blockchain will never revert. From the point of view of a decentralized application developer, this is a very desirable property as it means that “time” only moves in one direction, just like in a server-based environment. However, Slasher is still vulnerable to one particular class of attack: long-range attacks. Instead of trying to start a fork from ten blocks behind the current head, suppose that an attacker tries to start a fork starting from ten thousand blocks behind, or even the genesis block - all that matters is that the depth of the fork must be greater than the duration of the reward lockup. At that point, because users’ funds are unlocked and they can move them to a new address to escape punishment, users have no disincentive against signing on both chains. In fact, we may even expect to see a black market of people selling their old private keys, culminating with an attacker single-handedly acquiring access to the keys that controlled over 50% of the currency supply at some point in history. One approach to solving the long-range double-signing problem is transactions-as-proof-of-stake, an alternative PoS solution that does not have an incentive to double-sign because it’s the transactions that vote, and there is no reward for sending a transaction (in fact there’s a cost, and the reward is outside the network); however, this does nothing to stop the black key market problem. To properly deal with that issue, we will need to relax a hidden assumption.

Subjective Scoring and Trust
For all its faults, proof of work does have some elegant economic properties. Particularly, because proof of work requires an externally rivalrous resource, something which exists and is consumed outside the blockchain, in order to generate blocks (namely, computational effort), launching a fork against a proof of work chain invariably requires having access to, and spending, a large quantity of economic resources. In the case of proof of stake, on the other hand, the only scarce value involved is value within the chain, and between multiple chains that value is not scarce at all.
No matter what algorithm is used, in proof of stake 51% of the owners of the genesis block could eventually come together, collude, and produce a longer (ie. higher-scoring) chain than everyone else. This may seem like a fatal flaw, but in reality it is only a flaw if we implicitly accept an assumption that is made in the case of proof of work: that nodes have no knowledge of history. In a proof-of-work protocol, a new node, having no direct knowledge of past events and seeing nothing but the protocol source code and the set of messages that have already been published, can join the network at any point and determine the score of all possible chains, and from there the block that is at the top of the highest-scoring main chain. With proof of stake, as we described, such a property cannot be achieved, since it’s very cheap to acquire historical keys and simulate alternate histories. Thus, we will relax our assumptions somewhat: we will say that we are only concerned with maintaining consensus between a static set of nodes that are online at least once every N days, allowing these nodes to use their own knowledge of history to reject obvious long-range forks using some formula, and new nodes or long-dormant nodes will need to specify a “checkpoint” (a hash of a block representing what the rest of the network agrees is a recent state) in order to get back onto the consensus. Such an approach is essentially a hybrid between the pure and perhaps harsh trust-no-one logic of Bitcoin and the total dependency on socially-driven consensus found in networks like Ripple. In Ripple’s case, users joining the system need to select a set of nodes that they trust (or, more precisely, trust not to collude) and rely on those nodes during every step of the consensus process. In the case of Bitcoin, the theory is that no such trust is required and the protocol is completely self-contained; the system works just as well between a thousand isolated cavemen with laptops on a thousand islands as it does in a strongly connected society (in fact, it might work better with island cavemen, since without trust collusion is more difficult). In our hybrid scheme, users need only look to the society outside of the protocol exactly once - when they first download a client and find a checkpoint - and can enjoy Bitcoin-like trust properties starting from that point. In order to determine which trust assumption is the better one to take, we ultimately need to ask a somewhat philosophical question: do we want our consensus protocols to exist as absolute cryptoeconomic constructs completely independent of the outside world, or are we okay with relying heavily on the fact that these systems exist in the context of a wider society? Although it is indeed a central tenet of mainstream cryptocurrency philosophy that too much external dependence is dangerous, arguably the level of independence that Bitcoin affords us in reality is no greater than that provided by the hybrid model. The argument is simple: even in the case of Bitcoin, a user must also take a leap of trust upon joining the network - first by trusting that they are joining a protocol that contains assets that other people find valuable (eg. how does a user know that bitcoins are worth $380 each and dogecoins only $0.0004? Especially with the different capabilities of ASICs for different algorithms, hashpower is only a very rough estimate), and second by trusting that they are downloading the correct software package. 
In both the supposedly “pure” model and the hybrid model there is always a need to look outside the protocol exactly once. Thus, on the whole, the gain from accepting the extra trust requirement (namely, environmental friendliness and security against oligopolistic mining pools and ASIC farms) is arguably worth the cost. Additionally, we may note that, unlike Ripple consensus, the hybrid model is still compatible with the idea of blockchains “talking” to each other by containing a minimal “light” implementation of each other’s protocols. The reason is that, while the scoring mechanism is not “absolute” from the point of view of a node without history suddenly looking at every block, it is perfectly sufficient from the point of view of an entity that remains online over a long period of time, and a blockchain certainly is such an entity. So far, there have been two major approaches that followed some kind of checkpoint-based trust model:
- Developer-issued checkpoints - the client developer issues a new checkpoint with each client upgrade (eg. used in PPCoin)
- Revert limit - nodes refuse to accept forks that revert more than N (eg. 3000) blocks (eg. used in Tendermint)
The first approach has been roundly criticized by the cryptocurrency community for being too centralized. The second, however, also has a flaw: a powerful attacker can not only revert a few thousand blocks, but also potentially split the network permanently. In the N-block revert case, the strategy is as follows. Suppose that the network is currently at block 10000, and N = 3000. The attacker starts a secret fork, and grows it by 3001 blocks faster than the main network. When the main network gets to 12999, and some node produces block 13000, the attacker reveals his own fork. Some nodes will see the main network’s block 13000, and refuse to switch to the attacker’s fork, but the nodes that did not yet see that block will be happy to revert from 12999 to 10000 and then accept the attacker’s fork. From there, the network is permanently split. Fortunately, one can actually construct a third approach that neatly solves this problem, which we will call exponentially subjective scoring. Essentially, instead of rejecting forks that go back too far, we simply penalize them on a graduating scale. For every block, a node maintains a score and a “gravity” factor, which acts as a multiplier to the contribution that the block makes to the blockchain’s score. The gravity of the genesis block is 1, and normally the gravity of any other block is set to be equal to the gravity of its parent. However, if a node receives a block whose parent already has a chain of N descendants (ie. it’s a fork reverting N blocks), that block’s gravity is penalized by a factor of 0.99^N, and the penalty propagates forever down the chain and stacks multiplicatively with other penalties. That is, a fork which starts 1 block ago will need to grow 1% faster than the main chain in order to overtake it, a fork which starts 100 blocks ago will need to grow 2.718 times as quickly, and a fork which starts 3000 blocks ago will need to grow 12428428189813 times as quickly - clearly an impossibility with even trivial proof of work. The algorithm serves to smooth out the role of checkpointing, assigning a small “weak checkpoint” role to each individual block. If an attacker produces a fork that some nodes hear about even three blocks earlier than others, those two chains will need to stay within 3% of each other forever in order for a network split to maintain itself.
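A minimal sketch of the gravity bookkeeping described above, assuming a per-block score of 1 and the 0.99 penalty base from the text; the block representation is purely illustrative.

```python
PENALTY_BASE = 0.99

def gravity_of(parent_gravity: float, blocks_reverted: int) -> float:
    # A block extending the chain head inherits its parent's gravity; a block
    # forking off a parent that already has N descendants is penalized by
    # 0.99**N, and the penalty propagates to all of its own descendants.
    return parent_gravity * (PENALTY_BASE ** blocks_reverted)

def chain_score(gravities: list[float], per_block_score: float = 1.0) -> float:
    # Each block contributes its score multiplied by its gravity.
    return sum(per_block_score * g for g in gravities)

# A fork reverting 3000 blocks must grow roughly 1.2e13 times faster than the
# main chain before it can overtake it:
print(1 / PENALTY_BASE ** 3000)
```

Note how the 1.2e13 figure printed at the end matches the "12428428189813 times as quickly" number in the text.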
There are other solutions that could be used aside from, or even alongside ESS; a particular set of strategies involves stakeholders voting on a checkpoint every few thousand blocks, requiring every checkpoint produced to reflect a large consensus of the majority of the current stake (the reason the majority of the stake can’t vote on every block is, of course, that having that many signatures would bloat the blockchain).

Slasher Ghost
The other large complexity in implementing proof of stake for Ethereum specifically is the fact that the network includes a Turing-complete financial system where accounts can have arbitrary permissions and even permissions that change over time. In a simple currency, proof of stake is relatively easy to accomplish because each unit of currency has an unambiguous owner outside the system, and that owner can be counted on to participate in the stake-voting process by signing a message with the private key that owns the coins. In Ethereum, however, things are not quite so simple: if we do our job promoting proper wallet security right, the majority of ether is going to be stored in specialized storage contracts, and with Turing-complete code there is no clear way of ascertaining or assigning an “owner”. One strategy that we looked at was delegation: requiring every address or contract to assign an address as a delegate to sign for them, and that delegate account would have to be controlled by a private key. However, there is a problem with any such approach. Suppose that a majority of the ether in the system is actually stored in application contracts (as opposed to personal storage contracts); this includes deposits in SchellingCoins and other stake-based protocols, security deposits in probabilistic enforcement systems, collateral for financial derivatives, funds owned by DAOs, etc. Those contracts do not have an owner even in spirit; in that case, the fear is that the contract will default to a strategy of renting out stake-voting delegations to the highest bidder. Because attackers are the only entities willing to bid more than the expected return from the delegation, this will make it very cheap for an attacker to acquire the signing rights to large quantities of stake. The only solution to this within the delegation paradigm is to make it extremely risky to dole out signing privileges to untrusted parties; the simplest approach is to modify Slasher to require a large deposit, and slash the deposit as well as the reward in the event of double-signing. However, if we do this then we are essentially back to entrusting the fate of a large quantity of funds to a single private key, thereby defeating much of the point of Ethereum in the first place. Fortunately, there is one alternative to delegation that is somewhat more effective: letting contracts themselves sign. To see how this works, consider the following protocol:
- There is now a SIGN opcode added.
- A signature is a series of virtual transactions which, when sequentially applied to the state at the end of the parent block, results in the SIGN opcode being called. The nonce of the first VTX in the signature must be the prevhash being signed, the nonce of the second must be the prevhash plus one, and so forth (alternatively, we can make the nonces -1, -2, -3 etc. and require the prevhash to be passed in through transaction data so as to be eventually supplied as an input to the SIGN opcode).
When the block is processed, the state transitions from the VTXs are reverted (this is what is meant by "virtual") but a deposit is subtracted from each signing contract and the contract is registered to receive the deposit and reward in 3000 blocks. Basically, it is the contract’s job to determine the access policy for signing, and the contract does this by placing the SIGN opcode behind the appropriate set of conditional clauses. A signature now becomes a set of transactions which together satisfy this access policy. The incentive for contract developers to keep this policy secure, and not dole it out to anyone who asks, is that if it is not secure then someone can double-sign with it and destroy the signing deposit, taking a portion for themselves as per the Slasher protocol. Some contracts will still delegate, but this is unavoidable; even in proof-of-stake systems for plain currencies such as NXT, many users end up delegating (eg. DPOS even goes so far as to institutionalize delegation), and at least here contracts have an incentive to delegate to an access policy that is not likely to come under the influence of a hostile entity - in fact, we may even see an equilibrium where contracts compete to deliver secure blockchain-based stake pools that are least likely to double-vote, thereby increasing security over time. However, the virtual-transactions-as-signatures paradigm does impose one complication: it is no longer trivial to provide an evidence transaction showing two signatures by the same signer at the same block height. Because the result of a transaction execution depends on the starting state, in order to ascertain whether a given evidence transaction is valid one must prove everything up to the block in which the second signature was given. Thus, one must essentially “include” the fork of a blockchain inside of the main chain. To do this efficiently, a relatively simple proposal is a sort of “Slasher GHOST” protocol, where one can include side-blocks in the main chain as uncles. Specifically, we declare two new transaction types: [block_number, uncle_hash] - this transaction is valid if (1) the block with the given uncle_hash has already been validated, (2) the block with the given uncle_hash has the given block number, and (3) the parent of that uncle is either in the main chain or was included earlier as an uncle. During the act of processing this transaction, if addresses that double-signed at that height are detected, they are appropriately penalized. [block_number, uncle_parent_hash, vtx] - this transaction is valid if (1) the block with the given uncle_parent_hash has already been validated, (2) the given virtual transaction is valid at the given block height with the state at the end of uncle_parent_hash, and (3) the virtual transaction shows a signature by an address which also signed a block at the given block_number in the main chain. This transaction penalizes that one address. Essentially, one can think of the mechanism as working like a “zipper”, with one block from the fork chain at a time being zipped into the main chain. Note that for a fork to start, there must exist double-signers at every block; there is no situation where there is a double-signer 1500 blocks into a fork so a whistleblower must “zip” 1499 innocent blocks into a chain before getting to the target block - rather, in such a case, even if 1500 blocks need to be added, each one of them notifies the main chain about five separate malfeasors that double-signed at that height. 
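As a rough illustration of the virtual-transactions-as-signatures idea, here is a minimal Python sketch of how a client might check such a signature. The state/VM interface (copy, apply, sign_called_by) and the deposit bookkeeping are hypothetical stand-ins; only the nonce rule, the revert-the-virtual-effects behavior, and the deposit-plus-delayed-reward scheme are taken from the description above.

```python
SIGNING_DEPOSIT = 1000      # illustrative value, not from the proposal
PAYOUT_DELAY = 3000         # blocks until the deposit and reward are returned

def verify_vtx_signature(parent_state, prevhash, vtxs, signer):
    """Check whether a sequence of virtual transactions signs `prevhash`."""
    scratch = parent_state.copy()             # hypothetical: work on a throwaway copy
    for i, vtx in enumerate(vtxs):
        # Nonces must be prevhash, prevhash + 1, prevhash + 2, ... (treated as integers)
        if vtx.nonce != prevhash + i:
            return False
        scratch.apply(vtx)                     # hypothetical VM call
    if not scratch.sign_called_by(signer):     # the sequence must end up invoking SIGN
        return False
    # The virtual effects are discarded (the scratch copy is dropped), but the real
    # state charges the signing contract a deposit and schedules its later return.
    parent_state.subtract_balance(signer, SIGNING_DEPOSIT)
    parent_state.schedule_payout(signer, delay=PAYOUT_DELAY)
    return True
```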
One somewhat complicated property of the scheme is that the validity of these “Slasher uncles” depends on whether or not the node has validated a particular block outside of the main chain; to facilitate this, we specify that a response to a “getblock” message in the wire protocol must include the uncle-dependencies for a block before the actual block. Note that this may sometimes lead to a recursive expansion; however, the denial-of-service potential is limited since each individual block still requires a substantial quantity of proof-of-work to produce. Blockmakers and Overrides Finally, there is a third complication. In the hybrid-proof-of-stake version of Slasher, if a miner has an overwhelming share of the hashpower, then the miner can produce multiple versions of each block, and send different versions to different parts of the network. Half the signers will see and sign one block, half will see and sign another block, and the network will be stuck with two blocks with insufficient signatures, and no signer willing to slash themselves to complete the process; thus, a proof-of-work override will be required, a dangerous situation since the miner controls most of the proof-of-work. There are two possible solutions here: Signers should wait a few seconds after receiving a block before signing, and only sign stochastically in some fashion that ensures that a random one of the blocks will dominate. There should be a single "blockmaker" among the signers whose signature is required for a block to be valid. Effectively, this transfers the "leadership" role from a miner to a stakeholder, eliminating the problem, but at the cost of adding a dependency on a single party that now has the ability to substantially inconvenience everyone by not signing, or unintentionally by being the target of a denial-of-service attack. Such behavior can be disincentivized by having the signer lose part of their deposit if they do not sign, but even still this will result in a rather jumpy block time if the only way to get around an absent blockmaker is using a proof-of-work override. One possible solution to the problem in (2) is to remove proof of work entirely (or almost entirely, keeping a minimal amount for anti-DDoS value), replacing it with a mechanism that Vlad Zamfir has coined “delegated timestamping”. Essentially, every block must appear on schedule (eg. at 15 second intervals), and when a block appears the signers vote 1 if the block was on time, or 0 if the block was too early or too late. If the majority of the signers votes 0, then the block is treated as invalid - kept in the chain in order to give the signers their fair reward, but the blockmaker gets no reward and the state transition gets skipped over. Voting is incentivized via schellingcoin - the signers whose vote agrees with the majority get an extra reward, so assuming that everyone else is going to be honest everyone has the incentive to be honest, in a self-reinforcing equilibrium. The theory is that a 15-second block time is too fast for signers to coordinate on a false vote (the astute reader may note that the signers were decided 3000 blocks in advance so this is not really true; to fix this we can create two groups of signers, one pre-chosen group for validation and another group chosen at block creation time for timestamp voting). 
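The following is a minimal sketch of the “delegated timestamping” vote, assuming a simple dict of votes; the data structures and the reward constant are illustrative, while the 15-second schedule, the majority rule, and the schelling-style reward for agreeing with the majority come from the description above.

```python
BLOCK_INTERVAL = 15     # seconds between scheduled blocks
TOLERANCE = 7.5         # a block counts as "on time" if seen within this window
VOTE_REWARD = 1         # illustrative reward unit

def timestamp_vote(arrival_time, genesis_time, height):
    """Vote 1 if the block arrived close to its scheduled time, else 0."""
    scheduled = genesis_time + height * BLOCK_INTERVAL
    return 1 if abs(arrival_time - scheduled) <= TOLERANCE else 0

def settle_timestamp_votes(votes):
    """votes: dict mapping timestamper -> 0 or 1. Returns (block_counted, rewards)."""
    majority = 1 if sum(votes.values()) * 2 > len(votes) else 0
    # Schelling-style incentive: only voters who agree with the majority are paid.
    rewards = {t: (VOTE_REWARD if v == majority else 0) for t, v in votes.items()}
    # If the majority votes 0, the block stays in the chain (so signers are paid),
    # but the blockmaker gets nothing and the state transition is skipped over.
    return majority == 1, rewards
```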
Putting it all Together Taken together, we can thus see something like the following working as a functional version of Slasher: Every block has a designated blockmaker, a set of designated signers, and a set of designated timestampers. For a block to be accepted as part of the chain it must be accompanied by virtual-transactions-as-signatures from the blockmaker, two thirds of the signers and 10 timestampers, and the block must have some minimal proof of work for anti-DDoS reasons (say, targeted to 0.01x per year) During block N, we say that the set of potential signers of block N + 3000 is the set of addresses such that sha3(address + block[N].hash) < block[N].balance(address) * D2 where D2 is a difficulty parameter targeting 15 signers per block (ie. if block N has less than 15 signers it goes down otherwise it goes up). If a potential signer for block N + 3000 wants to become a signer, they must send a special transaction accepting this responsibility and supplying a deposit, and that transaction must get included between blocks N + 1 and N + 64. The set of designated signers for block N + 3000 is the set of all individuals that do this, and the blockmaker is the designated signer with the lowest value for sha3(address + block[N].hash). If the signer set is empty, no block at that height can be made. For blocks 0 ... 2999, the blockmaker and only signer is the protocol developer. The set of timestampers of the block N + 3000 is the set of addresses such that sha3(address + block[N].hash) < block[N].balance(address) * D3, where D3 is targeted such that there is an average of 20 timestampers each block (ie. if block N has less than 20 timestampers it goes down otherwise it goes up). Let T be the timestamp of the genesis block. When block N + 3000 is released, timestampers can supply virtual-transactions-as-signatures for that block, and have the choice of voting 0 or 1 on the block. Voting 1 means that they saw the block within 7.5 seconds of time T + (N + 3000) * 15, and voting 0 means that they received the block when the time was outside that range. Note that nodes should detect if their clocks are out of sync with everyone else's clocks on the blockchain, and if so adjust their system clocks. Timestampers who voted along with the majority receive a reward, other timestampers get nothing. The designated signers for block N + 3000 have the ability to sign that block by supplying a set of virtual-transactions-as-a-signature. All designated signers who sign are scheduled to receive a reward and their returned deposit in block N + 6000. Signers who skipped out are scheduled to receive their returned deposit minus twice the reward (this means that it's only economically profitable to sign up as a signer if you actually think there is a chance greater than 2/3 that you will be online). If the majority timestamper vote is 1, the blockmaker is scheduled to receive a reward and their returned deposit in block N + 6000. If the majority timestamper vote is 0, the blockmaker is scheduled to receive their deposit minus twice the reward, and the block is ignored (ie. the block is in the chain, but it does not contribute to the chain's score, and the state of the next block starts from the end state of the block before the rejected block). 
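A minimal sketch of the sortition rule above, ie. an address qualifies if sha3(address + block hash) is below a threshold proportional to its balance. The hashing helper and the way addresses and hashes are serialized are assumptions, and real retargeting of the difficulty parameter is omitted.

```python
from hashlib import sha3_256

def sha3_int(*parts) -> int:
    """Hash the concatenation of the parts and read the digest as an integer."""
    h = sha3_256()
    for p in parts:
        h.update(p if isinstance(p, bytes) else str(p).encode())
    return int.from_bytes(h.digest(), "big")

def is_eligible(address, block_hash, balance, difficulty) -> bool:
    # difficulty plays the role of D2 (signers) or D3 (timestampers) above, and is
    # retargeted so that roughly 15 signers / 20 timestampers qualify per block.
    return sha3_int(address, block_hash) < balance * difficulty

def pick_blockmaker(designated_signers, block_hash):
    """The blockmaker is the designated signer with the lowest hash value."""
    if not designated_signers:
        return None     # no block can be made at this height
    return min(designated_signers, key=lambda a: sha3_int(a, block_hash))
```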
If a signer signs two different blocks at height N + 3000, then if someone detects the double-signing before block N + 6000 they can submit an "evidence" transaction containing the two signatures to either or both chains, destroying the signer's reward and deposit and transferring a third of it to the whistleblower. If there is an insufficient number of signers to sign or the blockmaker is missing at a particular block height h, the designated blockmaker for height h + 1 can produce a block directly on top of the block at height h - 1 after waiting for 30 seconds instead of 15.

After years of research, one thing has become clear: proof of stake is non-trivial - so non-trivial that some even consider it impossible. The issues of nothing-at-stake and long-range attacks, and the lack of mining as a rate-limiting device, require a number of compensatory mechanisms, and even the protocol above does not address the issue of how to randomly select signers. With a substantial proof-of-work reward, the problem is limited, as block hashes can be a source of randomness and we can mathematically show that the gain from holding back block hashes until a miner finds a hash that favorably selects future signers is usually less than the gain from publishing the block hashes. Without such a reward, however, other sources of randomness such as low-influence functions need to be used. For Ethereum 1.0, we consider it highly desirable to both not excessively delay the release and not try too many untested features at once; hence, we will likely stick with ASIC-resistant proof of work, perhaps with non-Slasher proof of activity as an addon, and look at moving to a more comprehensive proof-of-stake model over time.

blog-ethereum-org-6559 ---- On Stake | Ethereum Foundation Blog

On Stake. Posted by Vitalik Buterin on July 5, 2014. Research & Development

The topic of mining centralization has been a very important one over the past few weeks. GHASH.io, the Bitcoin network’s largest mining pool, has for the past month directed over 40% of the Bitcoin network’s hashpower, and two weeks ago briefly spiked over 50%, theoretically giving it monopoly control over the Bitcoin network. Although miners quickly left the pool and reduced its hashpower to 35%, it’s clear that the problem is not solved. At the same time, ASICs threaten to further centralize the very production of mining hardware. One approach to solving the problem is the one I advocated in my previous post: create a mining algorithm that is guaranteed to remain CPU-friendly in the long term.
Another, however, is to abolish mining entirely, and replace it with a new model for seeking consensus. The primary contender to date has been a strategy called “proof of stake”, the intuition behind which is as follows. In a traditional proof-of-work blockchain, miners “vote” on which transactions came at what time with their CPU power, and the more CPU power you have the proportionately larger your influence is. In proof-of-stake, the system follows a similar but different paradigm: stakeholders vote with their dollars (or rather, the internal currency of the particular system). In terms of how this works technically, the simplest setup is a model that has been called the “simulated mining rig”: essentially, every account has a certain chance per second of generating a valid block, much like a piece of mining hardware, and this chance is proportional to the account’s balance. The simplest formula for this is:

SHA256(prevhash + address + timestamp) <= 2^256 * balance / diff

Here, prevhash is the hash of the previous block, address is the address of the stake-miner, timestamp is the current Unix time in seconds, balance is the account balance of the stake-miner and diff is an adjustable global difficulty parameter. If a given account satisfies this equation at any particular second, it may produce a valid block, giving that account some block reward. Another approach is to use not the balance, but the “coin age” (ie. the balance multiplied by the amount of time that the coins have not been touched), as the weighting factor; this guarantees more even returns but at the expense of potentially much easier collusion attacks, since attackers have the ability to accumulate coin age, and possible superlinearity; for these reasons, I prefer the plain balance-based approach in most cases, and we will use this as our baseline for the rest of this discussion. Other solutions to “proof of X” have been proposed, including excellence, bandwidth, storage and identity, but none are particularly convenient as consensus algorithms; rather, all of these systems have many of the same properties of proof of stake, and are thus best implemented indirectly - by making them purely mechanisms for currency distribution, and then using proof of stake on those distributed coins for the actual consensus. The only exception is perhaps the social-graph-theory based Ripple, although many cryptocurrency proponents consider such systems to be far too trust-dependent in order to be considered truly “decentralized”; this point can be debated, but it is best to focus on one topic at a time and so we will focus on stake.

Strengths and Weaknesses

If it can be implemented correctly, in theory proof of stake has many advantages. Three in particular stand out: It does not waste any significant amount of electricity. Sure, there is a need for stakeholders to keep trying to produce blocks, but no one gains any benefit from making more than one attempt per account per second; hence, the electricity expenditure is comparable to any other non-wasteful internet protocol (eg. BitTorrent). It can arguably provide a much higher level of security. In proof of work, assuming a liquid market for computing power, the cost of launching a 51% attack is equal to the cost of the computing power of the network over the course of two hours - an amount that, by standard economic principles, is roughly equal to the total sum of block rewards and transaction fees provided in two hours.
In proof of stake, the threshold is theoretically much higher: 51% of the entire supply of the currency. Depending on the precise algorithm in question it can potentially allow for much faster blockchains (eg. NXT has one block every few seconds, compared to one per minute for Ethereum and one per ten minutes for Bitcoin) Note that there is one important counterargument that has been made to #2: if a large entity credibly commits to purchasing 51% of currency units and then using those funds to repeatedly sabotage the network, then the price will fall drastically, making it much easier for that entity to puchase the tokens. This does somewhat mitigate the benefit of stake, although not nearly fatally; an entity that can credibly commit to purchasing 50% of coins is likely also one that can launch 51% attacks against proof of work. However, with the naive proof of stake algorithm described above, there is one serious problem: as some Bitcoin developers describe it, “there is nothing at stake”. What that means is this: in the context of a proof-of-work blockchain, if there is an accidental fork, or a deliberate transaction reversal (“double-spend”) attempt, and there are two competing forks of the blockchain, then miners have to choose which one they contribute to. Their three choices are either: Mine on no chain and get no rewards Mine on chain A and get the reward if chain A wins Mine on chain B and get the reward if chain B wins As I have commented in a previous post, note the striking similarity to SchellingCoin/Truthcoin here: you win if you go with what everyone else goes with, except in this case the vote is on the order of transactions, not a numerical (as in SchellingCoin) or binary (as in TruthCoin) datum. The incentive is to support the chain that everyone else supports, forcing rapid convergence, and preventing successful attacks provided that at least 51% of the network is not colluding. In the naive proof of stake algorithm, on the other hand, the choices of whether or not to vote on A and whether or not to vote on B are independent; hence, the optimal strategy is to mine on any fork that you can find. Thus, in order to launch a successful attack, an attacker need only overpower all of the altruists who are willing to vote only on the correct chain. The problem is, unfortunately, somewhat fundamental. Proof of work is nice because the property of hash verification allows the network to be aware of something outside of itself - namely, computing power, and that thing serves as a sort of anchor to ensure some stability. In a naive proof of stake system, however, the only thing that each chain is aware of is itself; hence, one can intuitively see that this makes such systems more flimsy and less stable. However, the above is merely an intuitive argument; it is by no means a mathematical proof that a proof-of-stake system cannot be incentive-compatible and secure, and indeed there are a number of potential ways to get around the issue. The first strategy is the one that is employed in the Slasher algorithm, and it hinges on a simple realization: although, in the case of a fork, chains are not aware of anything in the outside world, they are aware of each other. Hence, the way the protocol prevents double-mining is this: if you mine a block, the reward is locked up for 1000 blocks, and if you also mine on any other chain then anyone else can submit the block from the other chain into the original chain in order to steal the mining reward. 
Note, however, that things are not quite so simple, and there is one catch: the miners have to be known in advance. The problem is that if the algorithm given above is used directly, then the issue arises that, using a probabilistic strategy, double mining becomes very easy to hide. The issue is this: suppose that you have 1% stake, and thus every block there is a 1% chance that you will be able to produce (hereinafter, “sign”) it. Now, suppose there is a fork between chain A and chain B, with chain A being the “correct” chain. The “honest” strategy is to try to generate blocks just on A, getting an expected 0.01 A-coins per block. An alternative strategy, however, is to try to generate blocks on both A and B, and if you find a block on both at the same time then discarding B. The payout per block is one A-coin if you get lucky on A (0.99% chance), one B-coin if you get lucky on B (0.99% chance) and one A-coin, but no B-coins, if you get lucky on both; hence, the expected payout is 0.01 A-coins plus 0.0099 B-coins if you double-vote. If the stakeholders that need to sign a particular block are decided in advance, however (ie. specifically, decided before a fork starts), then there is no possibility of having the opportunity to vote on A but not B; you either have the opportunity on both or neither. Hence, the “dishonest” strategy simply collapses into being the same thing as the “honest” strategy. The Block Signer Selection Problem But then if block signers are decided in advance, another issue arises: if done wrong, block signers could “mine” their blocks, repeatedly trying to create a block with different random data until the resulting block triggers that same signer having the privilege to sign a block again very soon. For example, if the signer for block N+1000 was simply chosen from the hash of block N, and an attacker had 1% stake, then the attacker could keep rebuilding the block until block N+1000 also had the attacker as its signer (ie. an expected 100 iterations). Over time, the attacker would naturally gain signing privilege on other blocks, and thus eventually come to completely saturate the blockchain with length-1000 cycles controlled by himself. Even if the hash of 100 blocks put together is used, it’s possible to manipulate the value. Thus, the question is, how do we determine what the signers for future blocks are going to be? The solution used in Slasher is to use a secure decentralized random number generator protocol: many parties come in, first submit to the blockchain the hashes of their values, and then submit their values. There is no chance of manipulation this way, because each submitter is bound to submit in the second round the value whose hash they provided in the first round, and in the first round no one has enough information in order to engage in any manipulation. The player still has a choice of whether or not to participate in the second round, but the two countervailing points are that (1) this is only one bit of freedom, although it becomes greater for large miners that can control multiple accounts, and (2) we can institute a rule that failing to participate causes forfeiture of one’s mining privilege (miners in round N choose miners for round N+1 during round N-1, so there is an opportunity to do this if certain round-N miners misbehave during this selection step). 
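Here is a minimal sketch of the two-round commit-reveal randomness described above. How the revealed values are combined is not specified in the text, so the XOR-of-hashes combination below is an assumption, as are the class and method names; the forfeiture rule for non-participants is noted in a comment but not implemented.

```python
from hashlib import sha3_256

def commitment(value: bytes) -> bytes:
    return sha3_256(value).digest()

class CommitRevealBeacon:
    def __init__(self):
        self.commitments = {}   # party -> hash submitted in round one
        self.reveals = {}       # party -> value revealed in round two

    def submit_commitment(self, party, c: bytes):
        self.commitments[party] = c

    def submit_reveal(self, party, value: bytes):
        # A reveal only counts if it matches the round-one commitment, so no one
        # can change their value after seeing what anyone else submitted.
        if self.commitments.get(party) == commitment(value):
            self.reveals[party] = value
        # Parties that committed but never reveal would, per the text, forfeit
        # their mining privilege (not modelled here).

    def output(self) -> int:
        # Combine the reveals; XOR of hashes is one simple choice (an assumption),
        # under which the result can only be steered by withholding a reveal -
        # the single bit of freedom noted in the text.
        acc = 0
        for value in self.reveals.values():
            acc ^= int.from_bytes(sha3_256(value).digest(), "big")
        return acc
```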
Another idea, proposed by Iddo Bentov and others in their “Cryptocurrencies Without Proof of Work” paper, is to use something called a “low-influence” function - essentially, a function such that there is only a very low chance that a single actor will be able to change the result by changing the input. A simple example of an LIF over small sets is majority rule; here, because we are trying to pick a random miner, we have a very large set of options to choose from, so majority rule per bit is used (eg. if you have 500 parties and you want to pick a random miner out of a billion, assign them into thirty groups of 17, and have each group vote on whether their particular bit is zero or one, and then recombine the bits as a binary number at the end). This removes the need for a complicated two-step protocol, allowing it to potentially be done much more quickly and even in parallel, reducing the risk that the pre-chosen stake-miners for some particular block will get together and collude. A third interesting strategy, used by NXT, is to use the addresses of the stake-miners for blocks N and N+1 to choose the miner for block N+2; this by definition gives only one choice for the next miner in each block. Adding a criterion that every miner needs to be locked in for 1440 blocks in order to participate prevents sending transactions as a form of double-mining. However, having such rapid stake-miner selection also compromises the nothing-at-stake resistance property due to the probabilistic double-mining problem; this is the reason why clever schemes to make miner determination happen very quickly are ultimately, beyond a certain point, undesirable. Long-Range Attacks While the Slasher approach does effectively solve the nothing-at-stake problem against traditional 51% attacks, a problem arises in the case of something called a “long-range attack”: instead of an attacker starting mining from ten blocks before the current block, the attacker starts ten thousand blocks ago. In a proof-of-work context, this is silly; it basically means doing thousands of times as much work as is necessary to launch an attack. Here, however, creating a block is nearly computationally free, so it’s a reasonable strategy. The reason why it works is that Slasher’s process for punishing multi-mining only lasts for 1000 blocks, and its process for determining new miners lasts 3000 blocks, so outside the “scope” of that range Slasher functions exactly like the naive proof-of-stake coin. Note that Slasher is still a substantial improvement; in fact, assuming users never change it can be made fully secure by introducing a rule into each client not to accept forks going back more than 1000 blocks. The problem is, however, what happens when a new user enters the picture. When a new user downloads a proof-of-stake-coin client for the first time, it will see multiple versions of the blockchain: the longest, and therefore legitimate, fork, and many pretenders trying to mine their own chains from the genesis. As described above, proof-of-stake chains are completely self-referential; hence, the client seeing all of these chains has no idea about any surrounding context like which chain came first or which has more value (note: in a hybrid proof-of-stake plus social graph system, the user would get initial blockchain data from a trusted source; this approach is reasonable, but is not fully decentralized). The only thing that the client can see is the allocation in the genesis block, and all of the transactions since that point. 
Thus, all “pure” proof-of-stake systems are ultimately permanent nobilities where the members of the genesis block allocation always have the ultimate say. No matter what happens ten million blocks down the road, the genesis block members can always come together and launch an alternate fork with an alternate transaction history and have that fork take over. If you understand this, and you are still okay with pure proof of stake as a concept (the specific reason why you might still be okay is that, if the initial issuance is done right, the “nobility” should still be large enough that it cannot practically collude), then the realization allows for some more imaginative directions in terms of how proof of stake can play out. The simplest idea is to have the members of the genesis block vote on every block, where double-mining is punished by permanent loss of voting power. Note that this system actually solves nothing-at-stake issues completely, since every genesis block holder has a mining privilege that has value forever into the future, so it will always not be worth it to double-mine. This system, however, has a finite lifespan - specifically, the maximum life (and interest) span of the genesis signers, and it also gives the nobility a permanent profit-making privilege, and not just voting rights; however, nevertheless the existence of the algorithm is encouraging because it suggests that long-range-nothing-at-stake might be fundamentally resolvable. Thus, the challenge is to figure out some way to make sure voting privileges transfer over, while still at the same time maintaining security. Changing Incentives Another approach to solving nothing-at-stake comes at the problem from a completely different angle. The core problem is, in naive proof-of-stake, rational individuals will double-vote. The Slasher-like solutions all try to solve the problem by making it impossible to double-vote, or at the very least heavily punishing such a strategy. But what if there is another approach; specifically, what if we instead remove the incentive to do so? In all of the proof of stake systems that I described above, the incentive is obvious, and unfortunately fundamental: because whoever is producing blocks needs an incentive to participate in the process, they benefit if they include a block in as many forks as possible. The solution to this conundrum comes from an imaginative, out-of-the-box proposal from Daniel Larimer: transactions as proof of stake. The core idea behind transactions as proof-of-stake is simple: instead of mining being done by a separate class of individuals, whether computer hardware owners or stakeholders, mining and transaction sending are merged into one. The naive TaPoS algorithm is as follows: Every transaction must contain a reference (ie. hash) to the previous transaction A candidate state-of-the-system is obtained by calculating the result of a resulting transaction chain The correct chain among multiple candidates is the one that has either (i) the longest coin-days-destroyed (ie. number of coins in the account * time since last access), or (ii) the highest transaction fees (these are two different options that we will analyze separately) This algorithm has the property that it is extremely unscalable, breaking down beyond about 1 transaction per 2-5 seconds, and it is not the one that Larimer suggests or the one that will actually be used; rather, it’s simply a proof of concept that we will analyze to see if this approach is valid at all. 
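To make the naive TaPoS fork-choice concrete, here is a minimal sketch of scoring a candidate transaction chain under the two criteria named above; the transaction fields and the last-access bookkeeping are illustrative assumptions, not part of Larimer's actual proposal.

```python
def coin_days_destroyed(tx, last_access):
    """Coins moved multiplied by how long the sender's coins sat untouched."""
    return tx.amount * (tx.timestamp - last_access.get(tx.sender, tx.timestamp))

def chain_score(txs, mode="coin_days"):
    """Score a candidate chain of transactions; the highest-scoring chain wins."""
    score, prev_hash, last_access = 0, None, {}
    for tx in txs:
        # Every transaction must reference (hash) the previous transaction.
        if tx.prev_ref != prev_hash:
            raise ValueError("broken transaction chain")
        if mode == "coin_days":
            score += coin_days_destroyed(tx, last_access)
            last_access[tx.sender] = tx.timestamp
        else:  # mode == "fees": score by total transaction fees instead
            score += tx.fee
        prev_hash = tx.hash
    return score
```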
If it is, then there are likely ways to optimize it. Now, let’s see what the economics of this are. Suppose that there is a fork, and there are two competing versions of the TaPoS chain. You, as a transaction sender, made a transaction on chain A, and there is now an upcoming chain B. Do you have the incentive to double-mine and include your transaction in chain B as well? The answer is no - in fact you actually want to double-spend your recipient, so you would not put the transaction on another chain. This argument is especially potent in the case of long-range attacks, where you already received your product in exchange for the funds; in the short term, of course, the incentive still exists to make sure the transaction is sent, so senders do have the incentive to double-mine; however, because the worry is strictly time-limited this can be resolved via a Slasher-like mechanism.

One concern is this: given the presence of forks, how easy is it to overwhelm the system? If, for example, there is a fork, and one particular entity wants to double-spend, under what circumstances is that possible? In the transaction-fee version, the requirement is pretty simple: you need to spend more txfees than the rest of the network. This seems weak, but in reality it isn’t; we know that in the case of Bitcoin, once the currency supply stops increasing, mining will rely solely on transaction fees, and the mechanics are exactly the same (since the amount that the network will spend on mining will roughly correspond to the total amount of txfees being sent in); hence, fee-based TaPoS is in this regard at least as secure as fee-only PoW mining. In the second case, we have a different model: instead of mining with your coins, you are mining with your liquidity. Anyone can 51% attack the system if and only if they have a sufficiently large quantity of coin-days-destroyed on them. Hence, the cost of spending a large txfee after the fact is replaced by the cost of sacrificing liquidity before the fact.

Cost of Liquidity

The discussion around liquidity leads to another important philosophical point: security cannot be cost-free. In any system where there is a block reward, the thing that is the prerequisite for the reward (whether CPU, stake, or something else) cannot be free, since otherwise everyone would be claiming the reward ad infinitum, and in TaPoS transaction senders need to be providing some kind of fee to justify security. Furthermore, whatever resource is used to back the security, whether CPU, currency sacrifices or liquidity sacrifices, the attacker need only get their hands on the same quantity of that resource as the rest of the network. Note that, in the case of liquidity sacrifices (which is what naive proof of stake is), the relevant quantity here is actually not 50% of coins, but rather the privilege of accessing 50% of coins for a few hours - a service that, assuming a perfectly efficient market, might only cost a few hundred thousand dollars. The solution to this puzzle is that marginal cost is not the same thing as average cost. In the case of proof of work, this is true only to a very limited extent; although miners do earn a positive nonzero profit from mining, they all pay a high cost (unless they’re CPU miners heating their homes, but even there, there are substantial efficiency losses; laptops running hash functions at 100%, though effective at heating, are necessarily less effective than systems designed for the task).
In the case of currency sacrifices, everyone pays the same, but the payment is redistributed as a dividend to everyone else, and this profit is too dispersed to be recovered via market mechanisms; thus, although the system is costly from a local perspective, it is costless from a global perspective. The last option, liquidity sacrifice, is in between the two. Although liquidity sacrifice is costly, there is a substantial amount of disparity in how much people value liquidity. Some people, like individual users or businesses with low savings, heavily value liquidity; others, like savers, do not value liquidity at all (eg. I could not care less if I lost the ability to spend ten of my bitcoins for some duration). Hence, although the marginal cost of liquidity will be high (specifically, necessarily equal to either the mining reward or the transaction fee), the average cost is much lower. Hence, there is a leverage effect that allows the cost of an attack to be much higher than the inefficiency of the network, or the amount that senders spend on txfees. Additionally, note that in Larimer’s scheme specifically, things are rigged in such a way that all liquidity that is sacrificed in consensus is liquidity that was being sacrificed anyway (namely, by not sending coins earlier), so the practical level of inefficiency is zero.

Now, TaPoS does have its problems. First, if we try to make it more scalable by reintroducing the concept of blocks, then there ideally needs to be some reason to produce blocks that is not profit, so as not to reintroduce the nothing-at-stake problem. One approach may be to force a certain class of large transaction senders to create blocks. Second, attacking a chain is still theoretically “cost-free”, so the security assurances are somewhat less nice than they are in proof of work. Third, in the context of a more complicated blockchain like Ethereum, and not a currency, some transactions (eg. finalizing a bet) are actually profitable to send, so there will be an incentive to double-mine on at least some transactions (though not nearly all, so there is still some security). Finally, it’s a genesis-block-nobility system, just like all proof-of-stake necessarily is. However, as far as pure proof-of-stake systems go, it does seem a much better backbone than the version of proof of stake that emulated Bitcoin mining.

Hybrid Proof of Stake

Given the attractiveness of proof of stake as a solution for increasing efficiency and security, and its simultaneous deficiencies in terms of zero-cost attacks, one moderate solution that has been brought up many times is hybrid proof of stake, in its latest incarnation called “proof of activity”. The idea behind proof of activity is simple: blocks are produced via proof of work, but every block randomly assigns three stakeholders that need to sign it. The next block can only be valid once those signatures are in place. In this system, in theory, an attacker with 10% stake would see 999 of his 1000 blocks not being signed (only 0.1^3 = 0.001 of blocks draw all three signers from his own stake), whereas in the legitimate network 729 out of 1000 blocks would be signed (0.9^3 = 0.729); hence, such an attacker would be penalized in mining by a factor of 729. However, there is a problem: what motivates signers to sign blocks on only one chain? If the arguments against pure proof of stake are correct, then most rational stake-miners would sign both chains.
Hence, in hybrid PoS, if the attacker signs only his chain, and altruists only sign the legitimate chain, and everyone else signs both, then if the attacker can overpower the altruists on the stake front, the attacker can overtake the chain with less than a 51% attack on the mining front. If we trust that altruists as a group are more powerful in stake than any attacker, but we don’t trust that too much, then hybrid PoS seems like a reasonable hedge option; however, given the reasoning above, if we want to hybridize, one might ask if hybrid PoW + TaPoS might not be the more optimal way to go. For example, one could imagine a system where transactions need to reference recent blocks, and a blockchain’s score is calculated based on proof of work and coin-days-destroyed counts.

Conclusion

Will we see proof of stake emerge as a viable alternative to proof of work in the next few years? It may well be. From a pure efficiency perspective, if Bitcoin, or Ethereum, or any other PoW-based platform gets to the point where it has a market cap similar to gold, silver, the USD, EUR or CNY, or any other mainstream asset, then over a hundred billion dollars worth of new currency units will be produced per year. Under a pure-PoW regime, an amount of economic power approaching that will be spent on hashing every year. Thus, the cost to society of maintaining a proof-of-work cryptocurrency is about the same as the cost of maintaining the Russian military (the analogy is particularly potent because militaries are also proof of work; their only value to anyone is protecting against other militaries). Under hybrid PoS, that might safely be dropped to $30 billion per year, and under pure PoS it would be almost nothing, except depending on implementation maybe a few billion dollars of cost from lost liquidity. Ultimately, this boils down to a philosophical question: exactly how much does decentralization mean to us, and how much are we willing to pay for it? Remember that centralized databases, and even quasi-centralized ones based on Ripple consensus, are free. If perfect decentralization is indeed worth $100 billion, then proof of work is definitely the right way to go. But arguably that is not the case. What if society does not see decentralization as a goal in itself, and the only reason why it’s worth it to decentralize is to get the increased benefits of efficiency that decentralization brings? In that case, if decentralization comes with a $100 billion price tag, then we should just centralize and let a few governments run the databases. But if we have a solid, viable proof-of-stake algorithm, then we have a third option: a system which is both decentralized and cost-free (note that useful proof of work also fits this criterion, and may be easier); in that case, the dichotomy does not exist at all and decentralization becomes the obvious choice.

blog-iandavis-com-4110 ---- Internet Alchemy, the blog of Ian Davis
Mon, Oct 23, 2017. Serverless: why microfunctions > microservices

This post follows on from a post I wrote a couple of years back called Why Service Architectures Should Focus on Workflows. In that post I attempted to describe the fragility of microservice systems that were simply translating object-oriented patterns to the new paradigm. These systems were migrating domain models and their interactions from in-memory objects to separate networked processes. They were replacing in-process function calls with cross-network RPC calls, adding latency and infrastructure complexity. The goal was scalability and flexibility but, I argued, the entity modelling approach introduced new failure modes. I suggested a solution: instead of carving up the domain by entity, focus on the workflows. If I were writing that post today I would say “focus on the functions” because the future is serverless functions, not microservices. Or, more brashly: microfunctions > microservices.

The industry has moved apace in the last 3 years with a focus on solving the infrastructure challenges caused by running hundreds of intercommunicating microservices. Containers have matured and become the de-facto standard for the unit of microservice deployment, with management platforms such as Kubernetes to orchestrate them and frameworks like gRPC for robust interservice communication. The focus still tends to be on interacting entities though: when placing an order the “order service” talks to the “customer service”, which reserves items by talking to the “stock service” and the “payment service”, which talks to the “payment gateway” after first checking with the “fraud service”. When the order needs to be shipped, the “shipping service” asks the “order service” for orders that need to be fulfilled, tells the “stock service” to remove the reservation, then talks to the “customer service” to locate the customer, etc. All of these services are likely to be persisting state in various backend databases. Microservices are organized as vertical slices through the domain.

The same problems still exist: if the customer service is overwhelmed by the shipping service then the order service can’t take new orders. The container manager will, of course, scale up the number of customer service instances and register them with the appropriate load balancers, discovery servers, monitoring and logging. However, it cannot easily cope with a critical failure in this service, perhaps caused by a repeated bad request that panics the service and prevents multiple dependent services from operating properly. Failures and slowdowns in response times are handled within client services through backoff strategies, circuit breakers and retries. The system as a whole increases in complexity but remains fragile.

By contrast, in a serverless architecture, the emphasis is on the functions of the system. For this reason serverless is sometimes called FaaS – Functions as a Service. Systems are decomposed into functions that encapsulate a single task in a single process. Instead of each request involving the orchestration of multiple services, the request uses an instance of the appropriate function. Rather than the domain model being exploded into separate networked processes, its entities are provided in code libraries compiled into the function at build time.
Calls to entity methods are in-process so don’t pay the network latency or reliability taxes. In this paradigm the “place order” function simply calls methods on customer, stock and payment objects, which may then interact with the various backend databases directly. Instead of a dozen networked RPC calls, the function relies on 2-3 database calls. Additionally, if a function is particularly hot it can be scaled directly without affecting the operation of other functions and, crucially, it can fail completely without taking down other functions. (Modulo the reliability of databases, which affect both styles of architecture identically.) Microfunctions are horizontal slices through the domain.

The advantages I wrote last time still hold up when translated to serverless terminology:

Deploying or retiring a function becomes as simple as switching it on or off, which leads to greater freedom to experiment.
Scaling a function is limited to scaling a single type of process horizontally, and the costs of doing this can be cleanly evaluated.
The system as a whole becomes much more robust. When a function encounters problems it is limited to a single workflow, such as issuing invoices. Other functions can continue to operate independently.
Latency, bandwidth use and reliability are all improved because there are fewer network calls. The function still relies on the database and other support systems such as lock servers, but most of the data flow is controlled in-process.
The unit of testing and deployment is a single function, which reduces the complexity and cost of maintenance.

One major advantage that I missed is the potential for extreme cost savings through scale, particularly the scale attainable by running on public shared infrastructure. Since all the variability of microservice deployment configurations is abstracted away into a simple request/response interface, the microfunctions can be run as isolated shared-nothing processes, billed only for the resources they use in their short lifetime. Anyone who has costed redundant microservices simply for basic resilience will appreciate the potential here. Although there are a number of cloud providers in this space (AWS Lambda, Google Cloud Functions, Azure Functions), serverless is still an emerging paradigm with the problems that come with immaturity. Adrian Colyer recently summarized an excellent paper and presentation dealing with the challenges of building serverless systems which highlights many of these, including the lack of service level agreements and loose performance guarantees. It seems almost certain though that these will improve as the space matures and overtakes the microservice paradigm.

blog-librarything-com-3306 ---- The Thingology Blog

Monday, April 20th, 2020. New Syndetics Unbound Feature: Mark and Boost Electronic Resources

ProQuest and LibraryThing have just introduced a major new feature to our catalog-enrichment suite, Syndetics Unbound, to meet the needs of libraries during the COVID-19 crisis. Our friends at ProQuest blogged about it briefly on the ProQuest blog.
This blog post goes into greater detail about what we did, how we did it, and what efforts like this may mean for library catalogs in the future. What it Does The feature, “Mark and Boost Electronic Resources,” turns Syndetics Unbound from a general catalog enrichment tool to one focused on your library’s electronic resources—the resources patrons can access during a library shutdown. We hope it encourages libraries to continue to promote their catalog, the library’s own and most complete collection repository, instead of sending patrons to a host of partial, third-party eresource platforms. The new feature marks the library’s electronic resources and “boosts,” or promotes, them in Syndetics Unbound’s discovery enhancements, such as “You May Also Like,” “Other Editions,” “Tags” and “Reading Levels.” Here’s a screenshot showing the feature in action. How it Works The feature is composed of three settings. By default, they all turn on together, but they can be independently turned off and on. Boost electronic resources chooses to show electronic editions of an item where they exist, and boosts such items within discovery elements. Mark electronic resources with an “e” icon marks all electronic resources—ebooks, eaudio, and streaming video. Add electronic resources message at top of page adds a customizable message to the top of the Syndetics Unbound area. “Mark and Boost Electronic Holdings” works across all enrichments. It is particularly important for “Also Available As” which lists all the other formats for a given title. Enabling this feature sorts electronic resources to the front of the list. We also suggest that, for now, libraries may want to put “Also Available As” at the top of their enrichment order. Why We Did It Your catalog is only as good as your holdings. Faced with a world in which physical holdings are off-limits and electronic resources essential, many libraries have discouraged use of the catalog, which is dominated by non-digital resources, in favor of linking directly to Overdrive, Hoopla, Freegal and so forth. Unfortunately, these services are silos, containing only what you bought from that particular vendor. “Mark and Boost Electronic Resources” turns your catalog toward digital resources, while preserving what makes a catalog important—a single point of access to ALL library resources, not a vendor silo. Maximizing Your Electronic Holdings To make the best use of “Mark and Boost Electronic Resources,” we need to know about all your electronic resources. Unfortunately, some systems separate MARC holdings and electronic holdings; all resources appear in the catalog, but only some are available for export to Syndetics Unbound. Other libraries send us holding files with everything, but they are unable to send us updates every time new electronic resources are added. To address this issue, we have therefore advanced a new feature—”Auto-discover electronic holdings.” Turn this on and we build up an accurate representation of your library’s electronic resource holdings, without requiring any effort on your part. Adapting to Change “Mark and Boost Electronic Resources” is our first feature change to address the current crisis. But we are eager to do others, and to adapt the feature over time, as the situation develops. We are eager to get feedback from librarians and patrons! 
— The ProQuest and LibraryThing teams Labels: new features, new product, Syndetics Unbound posted by Tim @3:12 pm 0 Comments » Share Thursday, October 27th, 2016 Introducing Syndetics Unbound Short Version Today we’re going public with a new product for libraries, jointly developed by LibraryThing and ProQuest. It’s called Syndetics Unbound, and it makes library catalogs better, with catalog enrichments that provide information about each item, and jumping-off points for exploring the catalog. To see it in action, check out the Hartford Public Library in Hartford, CT. Here are some sample links: The Raven Boys by Maggie Stiefvater Alexander Hamilton by Ron Chernow Faithful Place by Tana French We’ve also got a press release and a nifty marketing site. UPDATE: Webinars Every Week! We’re now having weekly webinars, in which you can learn all about Syndetics Unbound, and ask us questions. Visit ProQuest’s WebEx portal to see the schedule and sign up! Long Version The Basic Idea Syndetics Unbound aims to make patrons happier and increase circulation. It works by enhancing discovery within your OPAC, giving patrons useful information about books, movies, music, and video games, and helping them find other things they like. This means adding elements like cover images, summaries, recommendations, series, tags, and both professional and user reviews. In one sense, Syndetics Unbound combines products—the ProQuest product Syndetics Plus and the LibraryThing products LibraryThing for Libraries and Book Display Widgets. In a more important sense, however, it leaps forward from these products to something new, simple, and powerful. New elements were invented. Static elements have become newly dynamic. Buttons provide deep-dives into your library’s collection. And—we think—everything looks better than anything Syndetics or LibraryThing have done before! (That’s one of only two exclamation points in this blog post, so we mean it.) Simplicity Syndetics Unbound is a complete and unified solution, not a menu of options spread across one or even multiple vendors. This simplicity starts with the design, which is made to look good out of the box, already configured for your OPAC and look. The installation requirements for Syndetics Unbound are minimal. If you already have Syndetics Plus or LibraryThing for Libraries, you’re all set. If you’ve never been a customer, you only need to add a line of HTML to your OPAC, and to upload your holdings. Although it’s simple, we didn’t neglect options. Libraries can reorder elements, or drop them entirely. We expect libraries will pick and choose, and evaluate elements according to patron needs, or feedback from our detailed usage stats. Libraries can also tweak the look and feel with custom CSS stylesheets. And simplicity is cheap. To assemble a not-quite-equivalent bundle from ProQuest’s and LibraryThing’s separate offerings would cost far more. We want everyone who has Syndetics Unbound to have it in its full glory. Comprehensiveness and Enrichments Syndetics Unbound enriches your catalog with some sixteen enrichments, but the number is less important than the options they encompass. These include both professional and user-generated content, information about the item you’re looking at, and jumping-off points to explore similar items. Quick descriptions of the enrichments: Boilterplate covers for items without covers. Premium Cover Service. 
Syndetics offers the most comprehensive cover database in existence for libraries—over 25 million full-color cover images for books, videos, DVDs, and CDs, with thousands of new covers added every week. For Syndetics Unbound, we added boilerplate covers for items that don’t have a cover, which include the title, author, and media type. Summaries. Over 18 million essential summaries and annotations, so patrons know what the book’s about. About the Author. This section includes the author biography and a small shelf of other items by the author. The section is also adorned by a small author photo—a first in the catalog, although familiar elsewhere on the web. Look Inside. Includes three previous Syndetics enrichments—first chapters or excerpts, table of contents and large-size covers—newly presented as a “peek inside the book” feature. Series. Shows a book’s series, including reading order. If the library is missing part of the series, those covers are shown but grayed out. You May Also Like. Provides sharp, on-the-spot readers advisory in your catalog, with the option to browse a larger world of suggestions, drawn from LibraryThing members and big-data algorithms. In this and other enrichments, Syndetics Unbound only recommends items that your library owns. The Syndetics Unbound recommendations cover far more of your collection than any similar service. For example, statistics from the Hartford Public Library show this feature on 88% of items viewed. Professional Reviews includes more than 5.4 million reviews from Library Journal, School Library Journal, New York Times, The Guardian, The Horn Book, BookList, BookSeller + Publisher Magazine, Choice, Publisher’s Weekly, and Kirkus. A la carte review sources include Voice of Youth Advocates: VOYA, Doody’s Medical Reviews and Quill and Quire. Reader Reviews includes more than 1.5 million vetted, reader reviews from LibraryThing members. It also allows patrons and librarians to add their own ratings and reviews, right in your catalog, and then showcase them on a library’s home page and social media. Also Available As helps patrons find other available formats and versions of a title in your collection, including paper, audio, ebook, and translations. Exploring the tag system Tags rethinks LibraryThing’s celebrated tag clouds—redesigning them toward simplicity and consistency, and away from the “ransom note” look of most clouds. As data, tags are based on over 131 million tags created by LibraryThing members, and hand-vetted by our staff librarians for quality. A new exploration interface allows patrons to explore what LibraryThing calls “tag mashes”—finding books by combinations of tags—in a simple faceted way. I’m going to be blogging about the redesign of tag clouds in the near future. Considering dozens of designs, we decided on a clean break with the past. (I expect it will get some reactions.) Book Profile is a newly dynamic version of what Bowker has done for years—analyzing thousands of new works of fiction, short-story collections, biographies, autobiographies, and memoirs annually. Now every term is clickable, and patrons can search and browse over one million profiles. Explore Reading Levels Reading Level is a newly dynamic way to see and explore other books in the same age and grade range. Reading Level also includes Metametrics Lexile® Framework for Reading. Click the “more” button to get a new, super-powered reading-level explorer. This is one my favorite features! (Second and last exclamation point.) 
Awards highlights the awards a title has won, and helps patrons find highly-awarded books in your collection. Includes biggies like the National Book Award and the Booker Prize, but also smaller awards like the Bram Stoker Award and Oklahoma's Sequoyah Book Award. Browse Shelf gives your patrons the context and serendipity of browsing a physical shelf, using your call numbers. Includes a mini shelf-browser that sits on your detail pages, and a full-screen version, launched from the detail page. Video and Music adds summaries and other information for more than four million video and music titles including annotations, performers, track listings, release dates, genres, keywords, and themes. Video Games provides game descriptions, ESRB ratings, star ratings, system requirements, and even screenshots. Book Display Widgets. Finally, Syndetics Unbound isn't limited to the catalog, but includes the LibraryThing product Book Display Widgets—virtual book displays that go on your library's homepage, blog, LibGuides, Facebook, Twitter, Pinterest, or even in email newsletters. Display Widgets can be filled with preset content, such as popular titles, new titles, DVDs, journals, series, awards, tags, and more. Or you can point them at a web page, RSS feed, or list of ISBNs, UPCs, or ISSNs. If your data is dynamic, the widget updates automatically. Here's a page of Book Display Widget examples. Find out More Made it this far? You really need to see Syndetics Unbound in action. Check it Out. Again, here are some sample links of Syndetics Unbound at Hartford Public Library in Hartford, CT: The Raven Boys by Maggie Stiefvater, Alexander Hamilton by Ron Chernow, Faithful Place by Tana French. Webinars. We hold webinars every Tuesday and walk you through the different elements and answer questions. To sign up for a webinar, visit this Webex page and search for "Syndetics Unbound." Interested in Syndetics Unbound at your library? Go here to contact a representative at ProQuest. Or read more at the Syndetics Unbound website. Or email us at ltflsupport@librarything.com and we'll help you find the right person or resource. Labels: librarything for libraries, new feature, new features, new product posted by Tim @10:45 am 4 Comments » Share Thursday, January 7th, 2016 ALAMW 2016 in Boston (and Free Passes)! Abby and KJ will be at ALA Midwinter in Boston this weekend, showing off LibraryThing for Libraries. Since the conference is so close to LibraryThing headquarters, chances are good that a few other LT staff members may appear, as well! Visit Us. Stop by booth #1717 to meet Abby & KJ (and potential mystery guests!), get a demo, and learn about all the new and fun things we're up to with LibraryThing for Libraries, TinyCat, and LibraryThing. Get in Free. Are you in the Boston area and want to go to ALAMW? We have free exhibit-only passes. Click here to sign up and get one! Note: It will get you just into the exhibit hall, not the conference sessions themselves. Labels: Uncategorized posted by Kate @4:05 pm 0 Comments » Share Thursday, June 25th, 2015 For ALA 2015: Three Free OPAC Enhancements For a limited time, LibraryThing for Libraries (LTFL) is offering three of its signature enhancements for free! There are no strings attached. We want people to see how LibraryThing for Libraries can improve your catalog. Check Library. The Check Library button is a "bookmarklet" that allows patrons to check if your library has a book while on Amazon and most other book websites.
Unlike other options, LibraryThing knows all of the editions out there, so it finds the edition your library has. Learn more about Check Library Other Editions Let your users know everything you have. Don’t let users leave empty-handed when the record that came up is checked out. Other editions links all your holdings together in a FRBR model—paper, audiobook, ebook, even translations. Lexile Measures Put MetaMetrics’ The Lexile Framework® for Reading in your catalog, to help librarians and patrons find material based on reading level. In addition to showing the Lexile numbers, we also include an interactive browser. Easy to Add LTFL Enhancements are easy to install and can be added to every major ILS/OPAC system and most of the minor ones. Enrichments can be customized and styled to fit your catalog, and detailed usage reporting lets you know how they’re doing. See us at ALA. Stop by booth 3634 at ALA Annual this weekend in San Francisco to talk to Tim and Abby and see how these enhancements work. If you need a free pass to the exhibit hall, details are in this blog post. Sign up We’re offering these three enhancements free, for at least two years. We’ll probably send you links showing you how awesome other enhancements would look in your catalog, but that’s it. Find out more http://www.librarything.com/forlibraries or email Abby Blachly at abby@librarything.com. Labels: alaac15, Lexile measures, librarything for libraries, ltfl posted by Abby @1:31 pm 0 Comments » Share Tuesday, June 23rd, 2015 ALA 2015 in San Francisco (Free Passes) Our booth. But this is Kate, not Tim or Abby. She had the baby. Tim and I are headed to San Francisco this weekend for the ALA Annual Conference. Visit Us. Stop by booth #3634 to talk to us, get a demo, and learn about all the new and fun things we’re up to with LibraryThing for Libraries! Stay tuned this week for more announcements of what we’ll be showing off. No, really. It’s going to be awesome. Get in Free. In the SF area and want to go to ALA? We have free exhibit only passes. Click here to sign up and get one. It will get you just into the exhibit hall, not the conference sessions themselves. Labels: ala, alaac15 posted by Abby @2:17 pm 4 Comments » Share Monday, February 9th, 2015 New “More Like This” for LibraryThing for Libraries We’ve just released “More Like This,” a major upgrade to LibraryThing for Libraries’ “Similar items” recommendations. The upgrade is free and automatic for all current subscribers to LibraryThing for Libraries Catalog Enhancement Package. It adds several new categories of recommendations, as well as new features. We’ve got text about it below, but here’s a short (1:28) video: What’s New Similar items now has a See more link, which opens More Like This. Browse through different types of recommendations, including: Similar items More by author Similar authors By readers Same series By tags By genre You can also choose to show one or several of the new categories directly on the catalog page. Click a book in the lightbox to learn more about it—a summary when available, and a link to go directly to that item in the catalog. Rate the usefulness of each recommended item right in your catalog—hovering over a cover gives you buttons that let you mark whether it’s a good or bad recommendation. Try it Out! 
Click "See more" to open the More Like This browser in one of these libraries: Spokane County Library District Arapahoe Public Library Waukegan Public Library Cape May Public Library SAILS Library Network Find out more Find more details for current customers on what's changing and what customizations are available on our help pages. For more information on LibraryThing for Libraries or if you're interested in a free trial, email abby@librarything.com, visit http://www.librarything.com/forlibraries, or register for a webinar. Labels: librarything for libraries, ltfl, recommendations, similar books posted by Abby @2:02 pm 2 Comments » Share Thursday, February 5th, 2015 Subjects and the Ship of Theseus I thought I might take a break to post an amusing photo of something I wrote out today: The photo is a first draft of a database schema for a revamp of how LibraryThing will do library subjects. All told, it has 26 tables. Gulp. About eight of the tables do what a good cataloging system would do: Distinguishes the various subject systems (LCSH, Medical Subjects, etc.) Preserves the semantic richness of subject cataloging, including the stuff that never makes it into library systems. Breaks subjects into their facets (e.g., "Man-woman relationships — Fiction" has two subject facets) Most of the tables, however, satisfy LibraryThing's unusual core commitments: to let users do their own thing, like their own little library, but also to let them benefit from and participate in the data and contributions of others.(1) So it: Links to subjects from various "levels," including book-level, edition-level, ISBN-level and work-level. Allows members to use their own data, or "inherit" subjects from other levels. Allows for members to "play librarian," improving good data and suppressing bad data.(2) Allows for real-time, fully reversible aliasing of subjects and subject facets. The last is perhaps the hardest. Nine years ago (!) I compared LibraryThing to the "Ship of Theseus," a ship which is "preserved" although its components are continually changed. The same goes for much of its data, although "shifting sands" might be a better analogy. Accounting for this makes for some interesting database structures, and interesting programming. Not every system at LibraryThing does this perfectly. But I hope this structure will help us do that better for subjects.(3) Weird as all this is, I think it's the way things are going. At present most libraries maintain their own data, which, while generally copied from another library, is fundamentally siloed. Like an evolving species, library records descend from each other; they aren't dynamically linked. The data inside the records are siloed as well, trapped in a non-relational model. The profession that invented metadata, and indeed invented sharing metadata, is, at least as far as its catalogs go, far behind. Eventually that will end. It may end in a "Library Goodreads," every library sharing the same data, with global changes possible, but reserved for special catalogers. But my bet is on a more LibraryThing-like future, where library systems will both respect local cataloging choices and, if they like, benefit instantly from improvements made elsewhere in the system. When that future arrives, we've got the schema! 1. I'm betting another ten tables are added before the system is complete. 2. The system doesn't presume whether changes will be made unilaterally, or voted on.
Voting, like much else, exists in a separate system, even if it ends up looking like part of the subject system. 3. This is a long-term project. Our first steps are much more modest–the tables have an order-of-use, not shown. First off we're going to duplicate the current system, but with appropriate character sets and segmentation by thesaurus and language. Labels: cataloging, subjects posted by Tim @7:44 pm 3 Comments » Share Tuesday, January 20th, 2015 LibraryThing Recommends in BiblioCommons Does your library use BiblioCommons as its catalog? LibraryThing and BiblioCommons now work together to give you high-quality reading recommendations in your BiblioCommons catalog. You can see some examples here. Look for "LibraryThing Recommends" on the right side. Not That Kind of Girl (Daniel Boone Regional Library) Carthage Must Be Destroyed (Ottawa Public Library) The Martian (Edmonton Public Library) Little Bear (West Vancouver Memorial Library) Station Eleven (Chapel Hill Public Library) The Brothers Karamazov (Calgary Public Library) Quick facts: As with all LibraryThing for Libraries products, LibraryThing Recommends only recommends other books within a library's catalog. LibraryThing Recommends stretches across media, providing recommendations not just for print titles, but also for ebooks, audiobooks, and other media. LibraryThing Recommends shows up to two titles up front, with up to three displayed under "Show more." Recommendations come from LibraryThing's recommendations system, which draws on hundreds of millions of data points in readership patterns, tags, series, popularity, and other data. Not using BiblioCommons? Well, you can get LibraryThing recommendations—and much more—integrated in almost every catalog (OPAC and ILS) on earth, with all the same basic functionality, like recommending only books in your catalog, as well as other LibraryThing for Libraries features, like reviews, series and tags. Check out some examples on different systems here. SirsiDynix Enterprise (Saint Louis Public Library) SirsiDynix Horizon Information Portal (Hume Libraries) SirsiDynix eLibrary (Spokane County Public Library) III Encore (Arapahoe Public Library) III WebPac Pro (Waukegan Public Library) Polaris (Cape May County Library) Ex Libris Voyager (University of Wisconsin-Eau Claire) Interested? BiblioCommons: email info@bibliocommons.com or visit http://www.bibliocommons.com/AugmentedContent. See the full specifics here. Other Systems: email abby@librarything.com or visit http://www.librarything.com/forlibraries. Labels: Uncategorized posted by Tim @12:43 pm 0 Comments » Share Thursday, October 16th, 2014 NEW: Annotations for Book Display Widgets Our Book Display Widgets is getting adopted by more and more libraries, and we're busy making it better and better. Last week we introduced Easy Share. This week we're rolling out another improvement—Annotations! Book Display Widgets is the ultimate tool for libraries to create automatic or hand-picked virtual book displays for their home page, blog, Facebook or elsewhere. Annotations allows libraries to add explanations for their picks. Some Ways to Use Annotations 1. Explain Staff Picks right on your homepage. 2. Let students know if a book is reserved for a particular class. 3. Add context for special collections displays. How it Works Check out the LibraryThing for Libraries Wiki for instructions on how to add Annotations to your Book Display Widgets. It's pretty easy. Interested?
Watch a quick screencast explaining Book Display Widgets and how you can use them. Find out more about LibraryThing for Libraries and Book Display Widgets. And sign up for a free trial of either by contacting ltflsupport@librarything.com. Labels: Book Display Widgets, librarything for libraries, new feature, new features, widgets posted by KJ @10:21 am 0 Comments » Share Tuesday, October 14th, 2014 Send us a programmer, win $1,000 in books. We just posted a new job post Job: Library Developer at LibraryThing (Telecommute). To sweeten the deal, we are offering $1,000 worth of books to the person who finds them. That's a lot of books. Rules! You get a $1,000 gift certificate to the local, chain or online bookseller of your choice. To qualify, you need to connect us to someone. Either you introduce them to us—and they follow up by applying themselves—or they mention your name in their email ("So-and-so told me about this"). You can recommend yourself, but if you found out about it from someone else, we hope you'll do the right thing and make them the beneficiary. Small print: Our decision is final, incontestable, irreversible and completely dictatorial. It only applies when an employee is hired full-time, not part-time, contract or for a trial period. If we don't hire someone for the job, we don't pay. The contact must happen in the next month. If we've already been in touch with the candidate, it doesn't count. Void where prohibited. You pay taxes, and the insidious hidden tax of shelving. Employees and their families are eligible to win, provided they aren't work contacts. Tim is not. » Job: Library Developer at LibraryThing (Telecommute) Labels: jobs posted by Tim @10:04 am 1 Comment » Share Thingology is LibraryThing's ideas blog, on the philosophy and methods of tags, libraries and suchnot.
blog-librarything-com-8558 ---- The Thingology Blog New Syndetics Unbound Feature: Mark and Boost Electronic Resources ProQuest and LibraryThing have just introduced a major new feature to our catalog-enrichment suite, Syndetics Unbound, to meet the needs of libraries during the COVID-19 crisis. Our friends at ProQuest blogged about it briefly on the ProQuest blog. This blog post goes into greater detail about what we did, how we did it, and what […]
blog-library-villanova-edu-3004 ---- Falvey Memorial Library :: The collection of blogs published by Falvey Memorial Library, Villanova University Falvey Library Blogs Library Staff Decks the Halls at RA Fair August 13, 2021 | Library News | falvey memorial library, RA Fair, Residence Hall, Villanova University ... Read More Content Roundup – First Two Weeks – August 2021 August 12, 2021 | Blue Electrode: Sparking between Silicon and Paper | Content Roundup Just a few new Dime Novels and Story Papers for your reading and research needs as the summer heat continues! Also completed is the full and annotated transcription of the Joseph McGarrity Collection Minute book of the Friends of Irish: ... Read More Meet New E-ZBorrow Platform: ReShare August 12, 2021 | Library News The new E-ZBorrow platform, ReShare, allows Falvey Memorial Library to improve service to our patrons by using the latest library technology. The open source code gives libraries more customization options and allows them to more quickly adapt to ... Read More Villanovan Patrick Tiernan Offers Lesson in Resilience at the Olympics August 11, 2021 | Library News | Cross Country, falvey memorial library, Patrick Tiernan, Summer Olympics, Villanova University By Shawn Proctor Every Olympics is filled with storied rises to victory, when an athlete snatches the gold despite seemingly insurmountable odds. Yet for every medal there's another tale. Last second losses. Injuries. And, for Villanovan ... Read More Foto Friday: Chair-ish The Weekend August 6, 2021 | Library News | foto friday, summer 2021 Sit back and relax, Wildcats! Enjoy these last few weeks of summer break! #FotoFriday Kallie Stahl '17 MA is Communication and Marketing Specialist at Falvey Memorial Library. ... Read More Welcome to Falvey: Emily Horn Joins Resource Management and Description August 5, 2021 | Library News | Emily Horn, Falvey staff, Resource Management & Description Emily Horn recently joined Resource Management and Description as Resource Management and Description Coordinator. Helping to build and cultivate Falvey Library's collection, Horn assists with acquisitions, licensing, description, discovery, ... Read More Audit Analytics Accounting & Oversight August 5, 2021 | Library News | Audit Analytics, Business, falvey memorial library, resources The Library in partnership with the Villanova School of Business has added the Accounting & Oversight module to our basic Audit Analytics subscription. Audit Analytics structured data facilitate scholarly research on governance, shareholder ...
Read More Euromonitor (Passport) Trial August 5, 2021 | Library News, Resources | Euromonitor Passport, falvey memorial library, resources, trial resources We currently have a trial to three new modules of Euromonitor (Passport): Cities, Industrial and Mobility. At its core, Euromonitor provides data and analysis by and for consumer goods and services markets, internationally. Our basic subscription ... Read More Service Alert—EZProxy Upgrade: 8/4 August 3, 2021 | Library News | databases, EZProxy, falvey library, Law Library, Service Alert Some Falvey Library and Law Library databases may be temporarily unavailable on Wednesday, Aug. 4, between 6:30–8:30 a.m., due to routine server maintenance. If you encounter a problem during this time, please keep trying; we will endeavor to ... Read More blog-library-villanova-edu-658 ---- Falvey Memorial Library Blog The collection of blogs published by Falvey Memorial Library, Villanova University blog-libux-co-2913 ---- Library User Experience Community - Medium A blog and slack community organized around design and the user experience in libraries, non-profits, and the higher-ed web. - Medium A Library System for the Future This is a what-if story. Continue reading on Library User Experience Community » Alexa, get me the articles (voice interfaces in academia) Thinking about interfaces has led me down a path of all sorts of exciting/mildly terrifying ways of interacting with our devices — from… Continue reading on Library User Experience Community » Accessibility Information on Library Websites Is autocomplete on your library home page? Writing for the User Experience with Rebecca Blakiston First look at Primo's new user interface What users expect Write for LibUX On the User Experience of Ebooks Unambitious and incapable men in librarianship blog-libux-co-4982 ---- Library User Experience Community Homepage Practical Design Thinking for Libraries Library User Experience Community Guest Write ( - we pay!) Our Slack Community A Library System for the Future This is a what-if story. Kelly Dagan, Feb 25, 2018 Latest Alexa, get me the articles (voice interfaces in academia) Thinking about interfaces has led me down a path of all sorts of exciting/mildly terrifying ways of interacting with our devices — from… Kelly Dagan, Feb 11, 2018 Accessibility Information on Library Websites An important part of making your library accessible is advertising that your library's spaces and services are accessible and inclusive. Carli Spina, Nov 17, 2017 Is autocomplete on your library home page?
Literature and some testing I've done this semester convinces me that autocomplete fundamentally improves the user experience Jaci Paige Wilkinson, Aug 20, 2017 Writing for the User Experience with Rebecca Blakiston 53:25 | Rebecca Blakiston — author of books on usability testing and writing with clarity; Library Journal mover and shaker — talks shop in… Michael Schofield, Aug 1, 2017 Write for LibUX We should aspire to push the #libweb forward by creating content that sets the bar for the conversation way up there, and I would love your… Michael Schofield, Apr 28, 2017 First look at Primo's new user interface Impressions of some key innovations of Primo's new UI as well as challenges involved making customizations. Ron Gilmour, Feb 27, 2017 Today, I learned about the Accessibility Tree If you didn't think your grip on web accessibility could get any looser. Michael Schofield, Feb 18, 2017 What users expect We thought it would be fun to emulate some of our favorite sites in a lightweight concept discovery layer we call Libre. Trey Gordner, Jan 29, 2017 Critical Librarianship in the Design of Libraries Design decisions position libraries to more deliberately influence the user experience toward advocacy — such as communicating moral or… Michael Schofield, Jan 10, 2017 The Non-Reader Persona Michael Schofield, Dec 1, 2016 IU Libraries' Redesign and the descending hero search Michael Schofield, Aug 8, 2016 Accessible, sort of — #a11eh Michael Schofield, Jul 21, 2016 Create Once, Publish Everywhere Michael Schofield, Jul 17, 2016 Web education must go further than a conference budget Michael Schofield, May 8, 2016 Blur the Line Between the Website and the Building Michael Schofield, Nov 2, 2015 Say "Ok Library" Michael Schofield, Oct 28, 2015 Unambitious and incapable men in librarianship Michael Schofield, Oct 25, 2015 On the User Experience of Ebooks So, when it comes to ebooks I am in the minority: I prefer them to the real thing. The aesthetic or whats-it about the musty trappings of… Michael Schofield, Oct 5, 2015 blog-openlibrary-org-3657 ---- The Open Library Blog A web page for every book Open Library Tags Explained—for Readers Seeking Buried Treasure As part of an open-source project, the Open Library blog has a growing number of contributors: from librarians and developers to designers, researchers, and book lovers. Each contributor writes from their perspective, sharing contributions they're making to the Open Library catalog. As such, the Open Library blog has a versatile tagging system to help patrons […] Introducing the Open Library Explorer Try it here! If you like it, share it.
Bringing 100 Years of Librarian-Knowledge to Life By Nick Norman with Drini Cami & Mek At the Library Leaders Forum 2020 (demo), Open Library unveiled the beta for what it's calling the Library Explorer: an immersive interface which powerfully recreates and enhances the experience of navigating […] Importing your Goodreads & Accessing them with Open Library's APIs by Mek Today Joe Alcorn, founder of readng, published an article (https://joealcorn.co.uk/blog/2020/goodreads-retiring-API) sharing news with readers that Amazon's Goodreads service is in the process of retiring their developer APIs, with an effective start date of last Tuesday, December 8th, 2020. The topic stirred discussion among developers and book lovers alike, making the front-page of the […] On Bookstores, Libraries & Archives in the Digital Age The following was a guest post by Brewster Kahle on Against The Grain (ATG) – Linking Publishers, Vendors, & Librarians By: Brewster Kahle, Founder & Digital Librarian, Internet Archive Back in 2006, I was honored to give a keynote at the meeting of the Society of American Archivists, when the president of the Society presented me with a […] Amplifying the Voices Behind Books With the Power of Data Exploring how Open Library uses author data to help readers move from imagination to impact By Nick Norman, Edited by Mek & Drini According to René Descartes, a creative mathematician, "The reading of all good books is like a conversation with the finest [people] of past centuries." If that's true, then who are some of […] Giacomo Cignoni: My Internship at the Internet Archive This summer, Open Library and the Internet Archive took part in Google Summer of Code (GSoC), a Google initiative to help students gain coding experience by contributing to open source projects. I was lucky enough to mentor Giacomo while he worked on improving our BookReader experience and infrastructure. We have invited Giacomo to write a […] Google Summer of Code 2020: Adoption by Book Lovers by Tabish Shaikh & Mek OpenLibrary.org, the world's best-kept library secret: Let's make it easier for book lovers to discover and get started with Open Library. Hi, my name is Tabish Shaikh and this summer I participated in the Google Summer of Code program with Open Library to develop improvements which will help book lovers discover […] Open Library for Language Learners By Guyrandy Jean-Gilles 2020-07-21 A quick browse through the App Store and aspiring language learners will find themselves swimming in useful programs. But for experienced linguaphiles, the never-ending challenge is finding enough raw content and media to consume in their adopted tongue. Open Library can help. Earlier this year, Open Library added reading levels to […] Meet the Librarians of Open Library By Lisa Seaberg Are you a book lover looking to contribute to a warm, inclusive library community? We'd love to work with you: Learn more about Volunteering @ Open Library Behind the scenes of Open Library is a whole team of developers, data scientists, outreach experts, and librarians working together to make Open Library better […] Re-thinking Open Library's Book Pages by Mek Karpeles, Tabish Shaikh We've redesigned our Book Pages: Before → After. Please share your feedback with us. A web page for every book… This is the mission of Open Library: a free, inclusive, online digital library catalog which helps readers find information about any book ever published.
Millions of books in Open Library's catalog […] blog-openlibrary-org-6880 ---- The Open Library Blog | A web page for every book Open Library Tags Explained—for Readers Seeking Buried Treasure By nicknorman | Published: June 29, 2021 As part of an open-source project, the Open Library blog has a growing number of contributors: from librarians and developers to designers, researchers, and book lovers. Each contributor writes from their perspective, sharing contributions they're making to the Open Library catalog. As such, the Open Library blog has a versatile tagging system to help patrons navigate such a diverse and wide range of content. Read More » Posted in Community, Librarianship, Open Source | Tagged Community, features, Nick Norman | Leave a comment Introducing the Open Library Explorer By mek | Published: December 16, 2020 Try it here! If you like it, share it. Bringing 100 Years of Librarian-Knowledge to Life By Nick Norman with Drini Cami & Mek At the Library Leaders Forum 2020 (demo), Open Library unveiled the beta for what it's calling the Library Explorer: an immersive interface which powerfully recreates and enhances the experience of navigating a physical library. If the tagline doesn't grab your attention, wait until you see it in action: Drini showcasing Library Explorer at the Library Leaders Forum Get Ready to Explore In this article, we'll give you a tour of the Open Library Explorer and teach you how one may take full advantage of its features. You'll also get a crash course on the 100+ years of library history which led to its innovation and an opportunity to test-drive it for yourself. So let's get started!   Read More » Posted in Community, Interface/Design, Librarianship | Tagged features, Nick Norman, openlibrary | Comments closed Importing your Goodreads & Accessing them with Open Library's APIs By mek | Published: December 13, 2020 by Mek Today Joe Alcorn, founder of readng, published an article (https://joealcorn.co.uk/blog/2020/goodreads-retiring-API) sharing news with readers that Amazon's Goodreads service is in the process of retiring their developer APIs, with an effective start date of last Tuesday, December 8th, 2020. A screenshot taken from Joe Alcorn's post The topic stirred discussion among developers and book lovers alike, making the front-page of the popular Hacker News website. Hacker News at 2020-12-13 1:30pm Pacific. Read More » Posted in Uncategorized | Tagged APIs | Comments closed On Bookstores, Libraries & Archives in the Digital Age By Brewster Kahle | Published: October 7, 2020 The following was a guest post by Brewster Kahle on Against The Grain (ATG) – Linking Publishers, Vendors, & Librarians Read More » Posted in Discussion, Librarianship, Uncategorized | Tagged founder | Comments closed Amplifying the Voices Behind Books With the Power of Data By mek | Published: September 2, 2020 Exploring how Open Library uses author data to help readers move from imagination to impact By Nick Norman, Edited by Mek & Drini Image Source: Pexels / Pixabay from popsugar According to René Descartes, a creative mathematician, "The reading of all good books is like a conversation with the finest [people] of past centuries." If that's true, then who are some of the people you're talking to?
Read More » Posted in Community, Cultural Resources, Data | Tagged Drini, features, Mek, Nick Norman | Comments closed blog-reeset-net-8577 ---- Terry's Worklog – On my work (programming, digital libraries, cataloging) and other stuff that perks my interest (family, cycling, etc) MarcEdit Update Round-up A handful of updates have been posted related to MarcEdit 7.5 since the program came out of beta. These have been mostly bug fixes and small enhancements. Here's the full list: Bug Fix: OCLC Search – multiple terms would result in an error if 'OR' was used with specific search indexes. Fixed: 6/22 Enhancement: OCLC… Continue reading MarcEdit Update Round-up Published July 7, 2021. Categorized as MarcEdit MarcEdit 7.5.x/MarcEdit Mac 3.5.x: Coming out of beta MarcEdit 7.5/MarcEdit Mac 3.5 is officially out of beta. It has been my primary version of MarcEdit for about 6 months and is where all new development has taken place since Dec. 2020. Because there are significant changes (including the supported framework) – MarcEdit 7.5/3.5 are not in-place upgrades. Previous versions of MarcEdit can be installed… Continue reading MarcEdit 7.5.x/MarcEdit Mac 3.5.x: Coming out of beta Published June 20, 2021. Categorized as MarcEdit Exploring BibFrame workflows in MarcEdit Update: 6/21/2021: I uploaded a video with sound that demonstrates the process. You can find it here: During this past year while working on MarcEdit 7.5.x/3.5.x, I've been giving some thought to how I might be able to facilitate some workflows to allow users to move data to and from BibFrame.
While the tool has… Continue reading Exploring BibFrame workflows in MarcEdit Published June 20, 2021. Categorized as BibFrame, MarcEdit Thoughts on NACOs proposed process on updating CJK records I would like to take a few minutes and share my thoughts about an updated best practice recently posted by the PCC and NACO related to an update on CJK records. The update is found here: https://www.loc.gov/aba/pcc/naco/CJK/CJK-Best-Practice-NCR.docx. I'm not certain if this is active or simply a proposal, but I've been having a number… Continue reading Thoughts on NACOs proposed process on updating CJK records Published April 20, 2021. Categorized as Cataloging, MarcEdit MarcEdit 7.5 Update ChangeLog: https://marcedit.reeset.net/software/update75.txt Highlights Preview Changes One of the most requested features over the years has been the ability to preview changes prior to running them. As of 7.5.8 – a new preview option has been added to many of the global editing tools in the MarcEditor. Currently, you will find the preview option attached to… Continue reading MarcEdit 7.5 Update Published April 3, 2021. Categorized as MarcEdit How do I generate MARC authority records from the Homosaurus vocabulary? Step by step instructions here: https://youtu.be/FJsdQI3pZPQ Ok, so last week, I got an interesting question on the listserv where a user asked specifically about generating MARC records for use in one's ILS system from a JSONLD vocabulary. In this case, the vocabulary in question was Homosaurus (Homosaurus Vocabulary Site) – and the questioner was specifically… Continue reading How do I generate MARC authority records from the Homosaurus vocabulary? Published April 3, 2021. Categorized as MarcEdit (A rough scripted sketch of the same idea appears below.) MarcEdit: State of the Community *2020-2021 * Sigh – original title said 2019-2020. Obviously, this is for this past year (Jan. 2020-Dec. 31, 2020).
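The Homosaurus post above points to a MarcEdit-based video walkthrough; purely as an illustration of the same idea outside MarcEdit, here is a minimal Ruby sketch using the ruby-marc gem. The inline concept hash is a hand-made stand-in for one SKOS-style term from a JSON-LD vocabulary dump, and the choice of authority fields (150 for the preferred term, 450 for variants) is an assumption made for this example, not a recommendation from the post.

require 'marc'

# Hypothetical stand-in for a single vocabulary concept; a real script would
# parse the JSON-LD file downloaded from the vocabulary site instead.
concept = {
  "prefLabel" => "Example preferred term",
  "altLabel"  => ["Example variant term", "Another variant"]
}

record = MARC::Record.new
# 150: topical term heading carrying the preferred label
record.append(MARC::DataField.new('150', ' ', ' ', ['a', concept["prefLabel"]]))
# 450: see-from tracing for each variant label
concept["altLabel"].each do |variant|
  record.append(MARC::DataField.new('450', ' ', ' ', ['a', variant]))
end

# Write a binary MARC file that an ILS could import (leader/008 details omitted for brevity).
writer = MARC::Writer.new('authority_records.mrc')
writer.write(record)
writer.close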
Change: Remove com.sun dependency related to dns and httpserver Change: Changed AppData Path Change: First install automatically imports settings from MarcEdit 7.0-2.x Change: Field Count – simplify UI (consolidate elements) Change: 008 Windows — update help urls to oclc Change: Generate FAST… Continue reading MarcEdit 7.5 Change/Bug Fix list Published January 20, 2021Categorized as MarcEdit Posts navigation Page 1 … Page 94 Older posts Search… Terry's Worklog Proudly powered by WordPress. Dark Mode: blog-reeset-net-9750 ---- Terry's Worklog Terry's Worklog On my work (programming, digital libraries, cataloging) and other stuff that perks my interest (family, cycling, etc) MarcEdit Update Round-up A handful of updates have been posted related to MarcEdit 7.5 since the program cam out of beta.  These have been mostly bug fixes and small enhancements.  Here’s the full list: Bug Fix: OCLC Search – multiple terms would result in an error if ‘OR’ was used with specific search indexes. Fixed: 6/22 Enhancement: OCLC… Continue reading MarcEdit Update Round-up MarcEdit 7.5.x/MarcEdit Mac 3.5.x: Coming out of beta MarcEdit 7.5/MarcEdit Mac 3.5 is officially out of beta.  It has been my primary version of MarcEdit for about 6 months and is where all new development has taken place since Dec. 2020.  Because there are significant changes (including framework supported) – MarcEdit 7.5/3.5 are not in-place upgrades.  Previous versions of MarcEdit can be installed… Continue reading MarcEdit 7.5.x/MarcEdit Mac 3.5.x: Coming out of beta Exploring BibFrame workflows in MarcEdit Update: 6/21/2021: I uploaded a video with sound that demonstrates the process.  You can find it here: During this past year while working on MarcEdit 7.5.x/3.5.x, I’ve been giving some thought to how I might be able to facilitate some workflows to allow users to move data to and from BibFrame.  While the tools has… Continue reading Exploring BibFrame workflows in MarcEdit Thoughts on NACOs proposed process on updating CJK records I would like to take a few minutes and share my thoughts about an updated best practice recently posted by the PCC and NACO related to an update on CJK records. The update is found here: https://www.loc.gov/aba/pcc/naco/CJK/CJK-Best-Practice-NCR.docx. I’m not certain if this is active or a simply a proposal, but I’ve been having a number… Continue reading Thoughts on NACOs proposed process on updating CJK records MarcEdit 7.5 Update ChangeLog: https://marcedit.reeset.net/software/update75.txt Highlights Preview Changes One of the most requested features over the years has been the ability to preview changes prior to running them.  As of 7.5.8 – a new preview option has been added to many of the global editing tools in the MarcEditor.  Currently, you will find the preview option attached to… Continue reading MarcEdit 7.5 Update How do I generate MARC authority records from the Homosaurus vocabulary? Step by step instructions here: https://youtu.be/FJsdQI3pZPQ Ok, so last week, I got an interesting question on the listserv where a user asked specifically about generating MARC records for use in one’s ILS system from a JSONLD vocabulary.  In this case, the vocabulary in question as Homosaurus (Homosaurus Vocabulary Site) – and the questioner was specifically… Continue reading How do I generate MARC authority records from the Homosaurus vocabulary? MarcEdit: State of the Community *2020-2021 * Sigh – original title said 2019-2020.  Obviously, this is for this past year (Jan. 2020-Dec. 31, 2020).   
Per usual, I wanted to take a couple minutes and look at the state of the MarcEdit project. This is something that I try to do once a year to gauge the current health of the community,… Continue reading MarcEdit: State of the Community *2020-2021 MarcEdit 7.3.x/7.5.x (beta) Updates Versions are available at: https://marcedit.reeset.net/downloads Information about the changes: 7.3.10 Change Log: https://marcedit.reeset.net/software/update7.txt 7.5.0 Change Log: https://marcedit.reeset.net/software/update75.txt If you are using 7.x – this will prompt as normal for update. 7.5.x is the beta build, please be aware I expect to be releasing updates to this build weekly and also expect to find some issues.… Continue reading MarcEdit 7.3.x/7.5.x (beta) Updates MarcEdit 7.5.x/MacOS 3.5.x Timelines I sent this to the MarcEdit Listserv to provide info about my thoughts around timelines related to the beta and release. Here's the info. Dear All, As we are getting close to Feb. 1 (when I'll make the 7.5 beta build available for testing) – I wanted to provide information about the update process going… Continue reading MarcEdit 7.5.x/MacOS 3.5.x Timelines MarcEdit 7.5 Change/Bug Fix list * Updated; 1/20 Change: Allow OS to manage supported Security Protocol types. Change: Remove com.sun dependency related to dns and httpserver Change: Changed AppData Path Change: First install automatically imports settings from MarcEdit 7.0-2.x Change: Field Count – simplify UI (consolidate elements) Change: 008 Windows — update help urls to oclc Change: Generate FAST… Continue reading MarcEdit 7.5 Change/Bug Fix list blog-saeloun-com-7300 ---- Rails 6 adds ActiveSupport::ParameterFilter | Saeloun Blog Rails 6 adds ActiveSupport::ParameterFilter Dec 3, 2019, by Romil Mehta 1 minute read There are cases when we do not want sensitive data like passwords, card details etc. in log files. Rails provides filter_parameters to achieve this. For example, if we have to filter secret_code of user then we need to set filter_parameters in the application.rb as below: config.filter_parameters += ["secret_code"] After sending a request to the server, our request parameters will look like this: Parameters: {"authenticity_token"=>"ZKeyrytDDqYbjgHm+ZZicqVrKU/KetThIkmHsFQ/91mQ/eGmIJkELhypgVvAbAg1OR+fN5TA8qk0PrOzDOtAKA==", "user"=>{"first_name"=>"First Name", "last_name"=>"Last Name", "email"=>"abc@gmail.com", "password"=>"[FILTERED]", "password_confirmation"=>"[FILTERED]", "secret_code"=>"[FILTERED]"}, "commit"=>"Create User"} Now if we do User.last then: > User.last #=> #<User id: 1, first_name: "First Name", ..., secret_code: "12345"> We can see that the secret_code of the user is not filtered and is visible. Rails 6 has moved ParameterFilter from ActionDispatch to ActiveSupport to solve the above security problem. In Rails 6: > User.last #=> #<User id: 1, first_name: "First Name", ..., secret_code: [FILTERED]> Now we can see that secret_code is filtered. Instead of defining them as filter_parameters, we can also define attributes as filter_attributes. > User.filter_attributes = [:secret_code, :password] #=> [:secret_code, :password] > User.last #=> #<User id: 1, first_name: "First Name", ..., password: [FILTERED], secret_code: [FILTERED]> If we have filter_attributes or filter_parameters in regex or proc form, Rails 6 has added support for that also. > User.filter_attributes = [/name/, :secret_code, :password] #=> [/name/, :secret_code, :password] > User.last #=> #<User id: 1, first_name: [FILTERED], last_name: [FILTERED], ..., password: [FILTERED], secret_code: [FILTERED]> Share this post!
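As a supplement to the post above (not part of the original article), here is a minimal sketch of using ActiveSupport::ParameterFilter directly, outside of a controller or model. It assumes Rails 6+ (the activesupport gem); the parameter names echo the post's example and the values are illustrative.

require "active_support"
require "active_support/parameter_filter"

# Filters can be symbols/strings, regular expressions, or procs.
filter = ActiveSupport::ParameterFilter.new([:secret_code, /password/])

params = {
  "first_name"  => "First Name",
  "email"       => "abc@gmail.com",
  "password"    => "s3cret",
  "secret_code" => "12345"
}

filtered = filter.filter(params)
puts filtered.inspect
# {"first_name"=>"First Name", "email"=>"abc@gmail.com",
#  "password"=>"[FILTERED]", "secret_code"=>"[FILTERED]"}

The class also accepts a custom mask, e.g. ActiveSupport::ParameterFilter.new([:secret_code], mask: "***"), and this is the same mechanism Rails uses when it renders [FILTERED] in request logs and in ActiveRecord's inspect output.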
If you enjoyed this post, you might also like: Rails 6 - Action Mailbox tryout November 11, 2019 Rails 7 adds disable_joins: true option to has_many :through association May 4, 2021 Rails 7 adds disable_joins: true option to has_one :through association June 1, 2021 blog-vlib-mpg-de-1575 ---- Max Planck vLib News sfx link resolver MPG/SFX SERVER MAINTENANCE, THURSDAY 10 JUNE, 5-6 PM 9. June 2021 eia The MPG/SFX server will undergo scheduled maintenance due to a hardware upgrade. The downtime will start at 5 pm. Services are expected to be back after approximately one hour. We apologize for any inconvenience. outage sfx link resolver MPG/SFX server maintenance, Tuesday 01 December, 5-6 pm 30. November 2020 eia The database of the MPG/SFX server will undergo scheduled maintenance. The downtime will start at 5 pm. Services are expected to be back after 30 minutes. We apologize for any inconvenience. outage resources, sfx link resolver How to get Elsevier articles after December 31, 2018 20. December 2018 inga The Max Planck Digital Library has been mandated to discontinue their Elsevier subscription when the current agreement expires on December 31, 2018. Read more about the background in the full press release. Nevertheless, most journal articles published until that date will remain available, due to the rights stipulated in the MPG contracts to date. To fulfill the content needs of Max Planck researchers when Elsevier shuts off access to recent content at the beginning of January, the Max Planck libraries and MPDL have coordinated the setup of a common document order service. This will be integrated into the MPG/SFX interface and can be used as follows: Step 1: Search in ScienceDirect, start in any other database, or enter the article details into the MPG/SFX citation linker. Step 2: Click the MPG/SFX button. Note: In ScienceDirect, it appears in the "Get Access" section at the top of those article pages for which the full text is no longer available: Step 3: Check the options in the service menu presented to you, e.g. freely available full text versions (if available). Step 4: To order the article via your local library or the MPDL, select the corresponding link, e.g. "Request document via your local library". Please note that the wording might differ slightly according to your location. Step 5: Add your personal details to the order form in the next screen and submit your document request. The team in your local library or at the MPDL will get back to you as soon as possible. Please feel free to contact us if you face any problem or want to raise a question. Update, 06.06.2019: Check out our new flyer "How to deal with no subscription DEAL" prepared in cooperation with Max Planck's PhDnet. elsevier document-delivery resources Aleph multipool search: parallel searching of MPG library catalogs 2. November 2018 inga Update, 07.12.2018: The multipool search is now also available as a web interface. The multipool expert mode in the Aleph cataloging client is designed for quickly searching several databases at once. The databases can either reside directly on the Aleph server or be connected as external resources via the Z39.50 protocol. In addition to the local libraries, the MPI library catalog in the GBV is already preconfigured on the Aleph server.
Die Multipool-Funktion ist im Aleph Katalogisierungs-Client im Recherche-Bereich zu finden (2. Tab): Unterhalb des Bereichs zur Auswahl der relevanten Datenbanken kann man die Suchanfrage eintragen. Hinweise zur verwendeten Kommandosprache finden sich in der Aleph-Hilfe. Nach dem Absenden der Suchanfrage wird die Ergebnisliste mit den Datenbanken und der jeweiligen Treffermenge im unteren Rahmen angezeigt: Zum Öffnen eines einzelnen Sets genügt ein Doppelklick: Bei gemeinsamen Katalogen – wie z.B. dem MPI Bibliothekskatalog im GBV – findet sich der Hinweis auf die bestandshaltende Bibliothek in der Datensatz-Vollanzeige: Zur Einrichtung der Multipool-Suche müssen die vom lokalen Aleph-Client genutzten Konfigurationsdateien (library.ini und searbase.dat) erweitert werden. Bei Bedarf stellen wir die von uns genutzten Dateien gerne zur Verfügung. Weiterführende Informationen finden sich auch im Aleph Wiki: Download und Installation des Aleph Clients Einrichtung weiterer Z39.50-Zugänge Aleph vLib portal Goodbye vLib! Shutdown after October 31, 2018 24. October 2018 inga In 2002 the Max Planck virtual Library (vLib) was launched, with the idea of making all information resources relevant for Max Planck users simultaneously searchable under a common user interface. Since then, the vLib project partners from the Max Planck libraries, information retrieval services groups, the GWDG and the MPDL invested much time and effort to integrate various library catalogs, reference databases, full-text collections and other information resources into MetaLib, a federated search system developed by Ex Libris. With the rise of large search engines and discovery tools in recent years, usage slowly shifted away and the metasearch technology applied was no longer fulfilling user’s expection. Therefore, the termination of most vLib services was announced two years ago and now we are approaching the final shutdown: The vLib portal will cease to operate after the 31th of October 2018. As you know, there are many alternatives to the former vLib services: MPG.ReNa will remain available for browsing and discovering electronic resources available to Max Planck users. In addition, we’ll post some information on how to cross search Max Planck library catalogs soon. Let us take the opportunity to send a big "Thank you!" to all vLib users and collaborators within and outside the Max Planck Society. It always was and will continue to be a pleasure to work with and for you. Goodbye!… and please feel free to contact us in case of any further question. MPG.eBooks, sfx link resolver HTTPS only for MPG/SFX and MPG.eBooks 17. November 2017 eia As of next week, all http requests to the MPG/SFX link resolver will be redirected to a corresponding https request. The Max Planck Society electronic Book Index is scheduled to be switched to https only access the week after, starting on November 27, 2017. Regular web browser use of the above services should not be affected. Please thoroughly test any solutions that integrate these services via their web APIs. Please consider re-subscribing to MPG.eBooks RSS feeds. ebookshttpsrss sfx link resolver HTTPS enabled for MPG/SFX 27. June 2016 inga The MPG/SFX link resolver is now alternatively accessible via the https protocol. The secure base URL of the productive MPG/SFX instance is: https://sfx.mpg.de/sfx_local. HTTPS support enables secure third-party sites to load or to embed content from MPG/SFX without causing mixed content errors. 
Please feel free to update your applications or your links to the MPG/SFX server. https resources Citation Trails in Primo Central Index (PCI) 2. June 2016 inga The May 2016 release brought an interesting functionality to the Primo Central Index (PCI): The new "Citation Trail" capability enables PCI users to discover relevant materials by providing cited and citing publications for selected article records. At this time the only data source for the citation trail feature is CrossRef, thus the number of citing articles will be below the "Cited by" counts in other sources like Scopus and Web of Science. Further information: Short video demonstrating the citation trail feature (by Ex Libris). Detailed feature description (by Ex Libris) pciprimo-central-indexscopusweb-of-science sfx link resolver MPG/SFX server maintenance, Wednesday 20 April, 8-9 am 20. April 2016 inga The MPG/SFX server updates to a new database (MariaDB) on Wednesday morning. The downtime will begin at 8 am and is scheduled to last until 9 am. We apologize for any inconvenience. outage resources ProQuest Illustrata databases discontinued 15. April 2016 inga Last year, the information provider ProQuest decided to discontinue its "Illustrata Technology" and "Illustrata Natural Science" databases. Unfortunately, this represents a preliminary end to ProQuest’s long-year investment into deep indexing content. In a corresponding support article ProQuest states that there "[…] will be no loss of full text and full text + graphics images because of the removal of Deep Indexed content". In addition, they announce to "[…] develop an even better way for researchers to discover images, figures, tables, and other relevant visual materials related to their research tasks". The MPG.ReNa records for ProQuest Illustrata: Technology and ProQuest Illustrata: Natural Science have been marked as "terminating" and will be deactivated soon. proquest Posts navigation 1 2 … 10 Next → In short In this blog you'll find updates on information resources, vendor platform and access systems provided by the Max Planck Digital Library. Use MPG.ReNa to search and browse through the journal collections, eBook collections and databases available to MPG researchers. New Resources in MPG.ReNa Brill Scholarly Editions 27. June 2021 Current Digest of the Russian Press (East View) 21. June 2021 Ogonek Digital Archive (East View) 21. June 2021 F1000 Research 20. June 2021 Confidential Print: Middle East, 1839-1969 16. June 2021 MPDL News   News Categories COinS (4) exLibris (2) localization (6) materials (7) MPG.eBooks (1) MPG.ReNa (3) question and answer (6) resources (21) sfx link resolver (45) tools (10) vLib portal (38) Related Blogs FHI library MPIs Stuttgart Library PubMan blog Proudly powered by WordPress blog-vlib-mpg-de-487 ---- Max Planck vLib News Max Planck vLib News   MPG/SFX SERVER MAINTENANCE, THURSDAY 10 JUNE, 5-6 PM The MPG/SFX server will undergo scheduled maintenance due to a hardware upgrade. The downtime will start at 5 pm. Services are expected to be back after approximately one hour. We apologize for any inconvenience. MPG/SFX server maintenance, Tuesday 01 December, 5-6 pm The database of the MPG/SFX server will undergo scheduled maintenance. The downtime will start at 5 pm. Services are expected to be back after 30 minutes. We apologize for any inconvenience. 
How to get Elsevier articles after December 31, 2018 The Max Planck Digital Library has been mandated to discontinue their Elsevier subscription when the current agreement expires on December 31, 2018. Read more about the background in the full press release. Nevertheless, most journal articles published until that date will remain available, due to the rights stipulated in the MPG contracts to date. To … Continue reading How to get Elsevier articles after December 31, 2018 → Aleph Multipool-Recherche: Parallele Suche in MPG-Bibliothekskatalogen Update, 07.12.2018: Die Multipool-Suche gibt es jetzt auch als Webinterface. Der Multipool-Expertenmodus im Aleph Katalogisierungs-Client dient der schnellen Recherche in mehreren Datenbanken gleichzeitig. Dabei können die Datenbanken entweder direkt auf dem Aleph-Server liegen oder als externe Ressourcen über das z39.50-Protokoll angebunden sein. Zusätzlich zu den lokalen Bibliotheken ist der MPI Bibliothekskatalog im GBV auf dem … Continue reading Aleph Multipool-Recherche: Parallele Suche in MPG-Bibliothekskatalogen → Goodbye vLib! Shutdown after October 31, 2018 In 2002 the Max Planck virtual Library (vLib) was launched, with the idea of making all information resources relevant for Max Planck users simultaneously searchable under a common user interface. Since then, the vLib project partners from the Max Planck libraries, information retrieval services groups, the GWDG and the MPDL invested much time and effort … Continue reading Goodbye vLib! Shutdown after October 31, 2018 → HTTPS only for MPG/SFX and MPG.eBooks As of next week, all http requests to the MPG/SFX link resolver will be redirected to a corresponding https request. The Max Planck Society electronic Book Index is scheduled to be switched to https only access the week after, starting on November 27, 2017. Regular web browser use of the above services should not be … Continue reading HTTPS only for MPG/SFX and MPG.eBooks → HTTPS enabled for MPG/SFX The MPG/SFX link resolver is now alternatively accessible via the https protocol. The secure base URL of the productive MPG/SFX instance is: https://sfx.mpg.de/sfx_local. HTTPS support enables secure third-party sites to load or to embed content from MPG/SFX without causing mixed content errors. Please feel free to update your applications or your links to the MPG/SFX … Continue reading HTTPS enabled for MPG/SFX → Citation Trails in Primo Central Index (PCI) The May 2016 release brought an interesting functionality to the MPG/SFX server maintenance, Wednesday 20 April, 8-9 am The MPG/SFX server updates to a new database (MariaDB) on Wednesday morning. The downtime will begin at 8 am and is scheduled to last until 9 am. We apologize for any inconvenience. ProQuest Illustrata databases discontinued Last year, the information provider ProQuest decided to discontinue its "Illustrata Technology" and "Illustrata Natural Science" databases. Unfortunately, this represents a preliminary end to ProQuest’s long-year investment into deep indexing content. In a corresponding support article ProQuest states that there "[…] will be no loss of full text and full text + graphics images because … Continue reading ProQuest Illustrata databases discontinued → books-google-co-uk-2987 ---- Documents Accompanying the Journal of the House of Representatives of the ... - Michigan. Legislature. 
House of Representatives - Google Books Search Images Maps Play YouTube News Gmail Drive More » Sign in Books Try the new Google Books Check out the new look and enjoy easier access to your favorite features Try it now No thanks Try the new Google Books Try the new Google Books My library Help Advanced Book Search Download EPUB Download PDF Plain text eBook - FREE Get this book in print AbeBooks.co.uk Find in a library All sellers » 0 ReviewsWrite review Documents Accompanying the Journal of the House of Representatives of the ... By Michigan. Legislature. House of Representatives   About this book Terms of Service    Plain text PDF EPUB catalog-hathitrust-org-7089 ---- Catalog Record: The golden cocoon; a novel | HathiTrust Digital Library Skip to main Skip to similar items Home Menu About Welcome to HathiTrust Our Partnership Our Digital Library Our Collaborative Programs Our Research Center News & Publications Collections Help Feedback Search HathiTrust Log in HathiTrust Digital Library Search full-text index Search Field List All Fields Title Author Subject ISBN/ISSN Publisher Series Title Available Indexes Full-text Catalog Full view only Search HathiTrust Advanced full-text search Advanced catalog search Search tips The golden cocoon; a novel, by Ruth Cross. Description Tools Cite this Export citation file Main Author: Cross, Ruth. Language(s): English Published: New York, Harper & brothers, 1924. Edition: First edition. Physical Description: 4 p. L., 341 p. 20 cm. Locate a Print Version: Find in a library Viewability Item Link Original Source Full view University of Michigan View HathiTrust MARC record Similar Items Golden cocoon Author Lindstrom, Virginia K., 1918- Published 1995 Enchantment, a novel Author Cross, Ruth, b. 1887. Published 1930 The steel cocoon, a novel Author Plagemann, Bentz, 1913- Published 1958 Soldier of good fortune : an historical novel Author Cross, Ruth, b. 1887. Published 1936 The golden poppy; a novel Author Deprend, Jeffrey. Published 1920 The golden poppy a novel Author Deprend, Jeffrey. Published 1920 The golden house. A novel Author Warner, Charles Dudley, 1829-1900. Published 1895 The golden ones : a novel Author Slaughter, Frank G. (Frank Gill), 1908-2001. Published 1957 The golden bowl; a novel Author Manfred, Frederick Feikema, 1912-1994. Published 1969 The golden honeycomb : a novel Author Markandaya, Kamala, 1924-2004. Published 1977 Home About Collections Help Feedback Accessibility Take-Down Policy Privacy Contact cbeci-org-1557 ---- Cambridge Bitcoin Electricity Consumption Index (CBECI) Cambridge Bitcoin Electricity Consumption Index Bitcoin Mining Map Note: average monthly hashrate share by country and region for the selected period, based on geolocational mining pool data. Updates are scheduled on a monthly basis subject to data availability (generally with a delay of one to three months). All changes and updates are listed in the  Change Log.   Download data in CSV format Download data in CSV format Note: seasonal variance in renewable energy production causes a pattern where mining operations are moving between regions within China to benefit from cheap and abundant power.  All information on this page is based on an exclusive sample of geolocational mining facility data collected in partnership with several Bitcoin mining pools (please visit the Methodology page for further information). We would like to thank BTC.com, Poolin, ViaBTC, and Foundry for their contribution to this research project.  
If you are a mining pool operator and would like to contribute to this research, please get in touch. Figure 5: Mining Provinces Figure 4: Mining Countries Cambridge Centre for Alternative Finance © 2021 cbeer-info-8395 ---- blog.cbeer.info Chris Beer chris@cbeer.info cbeer _cb_ May 25, 2016 Autoscaling AWS Elastic Beanstalk worker tier based on SQS queue length We are deploying a Rails application (for the Hydra-in-a-Box project) to AWS Elastic Beanstalk. Elastic Beanstalk offers us easy deployment, monitoring, and simple auto-scaling with a built-in dashboard and management interface. Our application uses several potentially long-running background jobs to characterize, checksum, and create derivatives for uploaded content. Since we’re deploying this application within AWS, we’re also taking advantage of the Simple Queue Service (SQS), using the active-elastic-job gem to queue and run ActiveJob tasks. Elastic Beanstalk provides settings for “Web server” and “Worker” tiers. Web servers are provisioned behind a load balancer and handle end-user requests, while Workers automatically handle background tasks (via SQS + active-elastic-job). Elastic Beanstalk provides basic autoscaling based on a variety of metrics collected from the underlying instances (CPU, Network, I/O, etc). While that is sufficient for our “Web server” tier, we’d like to scale our “Worker” tier based on the number of tasks waiting to be run. Currently, though, the ability to auto-scale the worker tier based on the underlying queue depth isn’t enabled through the Elastic Beanstalk interface. However, as Beanstalk merely manages and aggregates other AWS resources, we have access to the underlying resources, including the autoscaling group for our environment. We should be able to attach a custom auto-scaling policy to that auto scaling group to scale based on additional alarms. For example, let’s say we want to add additional worker nodes if there are more than 10 tasks waiting for more than 5 minutes (and, to save money and resources, also remove worker nodes when there are no tasks available). To create the new policy, we’ll need to: find the appropriate auto-scaling group by finding the one with the elasticbeanstalk:environment-id that matches the worker tier environment id; find the appropriate SQS queue for the worker tier; add auto-scaling policies that add (and remove) instances in the autoscaling group; create a new CloudWatch alarm that fires when the SQS queue exceeds our configured depth (10 tasks for 5 minutes) and triggers the auto-scaling policy to add worker instances; and, conversely, create a new CloudWatch alarm that fires when the SQS queue hits 0 and triggers the auto-scaling policy to remove worker instances. Even though there are several manual steps, they aren’t too difficult (other than discovering the various resources we’re trying to orchestrate), and using Elastic Beanstalk is still valuable for the rest of its functionality. But, we’re in the cloud, and really want to automate everything. With a little CloudFormation trickery, we can even automate creating the worker tier with the appropriate autoscaling policies.
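For reference, the manual wiring described above might look roughly like the following with the Ruby aws-sdk. This is a hedged sketch, not code from the post; the environment name, queue name, and thresholds are placeholders that simply mirror the example numbers.

require "aws-sdk-elasticbeanstalk"
require "aws-sdk-autoscaling"
require "aws-sdk-cloudwatch"

# Look up the autoscaling group Elastic Beanstalk created for the worker environment
eb = Aws::ElasticBeanstalk::Client.new
asg_name = eb.describe_environment_resources(environment_name: "myapp-workers")
             .environment_resources.auto_scaling_groups.first.name

# Attach a scale-out policy to that group
autoscaling = Aws::AutoScaling::Client.new
policy = autoscaling.put_scaling_policy(
  auto_scaling_group_name: asg_name,
  policy_name: "scale-out-on-queue-depth",
  adjustment_type: "ChangeInCapacity",
  scaling_adjustment: 1,
  cooldown: 60
)

# Fire the policy when the queue backs up (>= 10 visible messages for 5 one-minute periods)
cloudwatch = Aws::CloudWatch::Client.new
cloudwatch.put_metric_alarm(
  alarm_name: "worker-queue-backlog",
  namespace: "AWS/SQS",
  metric_name: "ApproximateNumberOfMessagesVisible",
  dimensions: [{ name: "QueueName", value: "myapp-worker-queue" }],
  statistic: "Average",
  period: 60,
  evaluation_periods: 5,
  threshold: 10,
  comparison_operator: "GreaterThanOrEqualToThreshold",
  alarm_actions: [policy.policy_arn]
)

A matching scale-in policy and an alarm on a queue depth of 0 would follow the same pattern.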
First, knowing that the CloudFormation API allows us to pass in an existing SQS queue for the worker tier, let’s create an explicit SQS queue resource for the workers: "DefaultQueue" : { "Type" : "AWS::SQS::Queue", } And wire it up to the Beanstalk application by setting the aws:elasticbeanstalk:sqsd:WorkerQueueURL (not shown: sending the worker queue to the web server tier): "WorkersConfigurationTemplate" : { "Type" : "AWS::ElasticBeanstalk::ConfigurationTemplate", "Properties" : { "ApplicationName" : { "Ref" : "AWS::StackName" }, "OptionSettings" : [ ..., { "Namespace": "aws:elasticbeanstalk:sqsd", "OptionName": "WorkerQueueURL", "Value": { "Ref" : "DefaultQueue"} } } } }, "WorkerEnvironment": { "Type": "AWS::ElasticBeanstalk::Environment", "Properties": { "ApplicationName": { "Ref" : "AWS::StackName" }, "Description": "Worker Environment", "EnvironmentName": { "Fn::Join": ["-", [{ "Ref" : "AWS::StackName"}, "workers"]] }, "TemplateName": { "Ref": "WorkersConfigurationTemplate" }, "Tier": { "Name": "Worker", "Type": "SQS/HTTP" }, "SolutionStackName" : "64bit Amazon Linux 2016.03 v2.1.2 running Ruby 2.3 (Puma)" ... } } Using our queue we can describe one of the CloudWatch::Alarm resources and start describing a scaling policy: "ScaleOutAlarm" : { "Type": "AWS::CloudWatch::Alarm", "Properties": { "MetricName": "ApproximateNumberOfMessagesVisible", "Namespace": "AWS/SQS", "Statistic": "Average", "Period": "60", "Threshold": "10", "ComparisonOperator": "GreaterThanOrEqualToThreshold", "Dimensions": [ { "Name": "QueueName", "Value": { "Fn::GetAtt" : ["DefaultQueue", "QueueName"] } } ], "EvaluationPeriods": "5", "AlarmActions": [{ "Ref" : "ScaleOutPolicy" }] } }, "ScaleOutPolicy" : { "Type": "AWS::AutoScaling::ScalingPolicy", "Properties": { "AdjustmentType": "ChangeInCapacity", "AutoScalingGroupName": ????, "ScalingAdjustment": "1", "Cooldown": "60" } }, However, to connect the policy to the auto-scaling group, we need to know the name for the autoscaling group. Unfortunately, the autoscaling group is abstracted behind the Beanstalk environment. 
To gain access to it, we’ll need to create a custom resource backed by a Lambda function to extract the information from the AWS APIs: "BeanstalkStack": { "Type": "Custom::BeanstalkStack", "Properties": { "ServiceToken": { "Fn::GetAtt" : ["BeanstalkStackOutputs", "Arn"] }, "EnvironmentName": { "Ref": "WorkerEnvironment" } } }, "BeanstalkStackOutputs": { "Type": "AWS::Lambda::Function", "Properties": { "Code": { "ZipFile": { "Fn::Join": ["\n", [ "var response = require('cfn-response');", "exports.handler = function(event, context) {", " console.log('REQUEST RECEIVED:\\n', JSON.stringify(event));", " if (event.RequestType == 'Delete') {", " response.send(event, context, response.SUCCESS);", " return;", " }", " var environmentName = event.ResourceProperties.EnvironmentName;", " var responseData = {};", " if (environmentName) {", " var aws = require('aws-sdk');", " var eb = new aws.ElasticBeanstalk();", " eb.describeEnvironmentResources({EnvironmentName: environmentName}, function(err, data) {", " if (err) {", " responseData = { Error: 'describeEnvironmentResources call failed' };", " console.log(responseData.Error + ':\\n', err);", " response.send(event, context, resource.FAILED, responseData);", " } else {", " responseData = { AutoScalingGroupName: data.EnvironmentResources.AutoScalingGroups[0].Name };", " response.send(event, context, response.SUCCESS, responseData);", " }", " });", " } else {", " responseData = {Error: 'Environment name not specified'};", " console.log(responseData.Error);", " response.send(event, context, response.FAILED, responseData);", " }", "};" ]]} }, "Handler": "index.handler", "Runtime": "nodejs", "Timeout": "10", "Role": { "Fn::GetAtt" : ["LambdaExecutionRole", "Arn"] } } } With the custom resource, we can finally get access the autoscaling group name and complete the scaling policy: "ScaleOutPolicy" : { "Type": "AWS::AutoScaling::ScalingPolicy", "Properties": { "AdjustmentType": "ChangeInCapacity", "AutoScalingGroupName": { "Fn::GetAtt": [ "BeanstalkStack", "AutoScalingGroupName" ] }, "ScalingAdjustment": "1", "Cooldown": "60" } }, The complete worker tier is part of our CloudFormation stack: https://github.com/hybox/aws/blob/master/templates/worker.json Mar 8, 2015 LDPath in 3 examples At Code4Lib 2015, I gave a quick lightning talk on LDPath, a declarative domain-specific language for flatting linked data resources to a hash (e.g. for indexing to Solr). LDPath can traverse the Linked Data Cloud as easily as working with local resources and can cache remote resources for future access. The LDPath language is also (generally) implementation independent (java, ruby) and relatively easy to implement. The language also lends itself to integration within development environments (e.g. ldpath-angular-demo-app, with context-aware autocompletion and real-time responses). For me, working with the LDPath language and implementation was the first time that linked data moved from being a good idea to being a practical solution to some problems. Here is a selection from the VIAF record [1]: <> void:inDataset <../data> ; a genont:InformationResource, foaf:Document ; foaf:primaryTopic <../65687612> . <../65687612> schema:alternateName "Bittman, Mark" ; schema:birthDate "1950-02-17" ; schema:familyName "Bittman" ; schema:givenName "Mark" ; schema:name "Bittman, Mark" ; schema:sameAs , ; a schema:Person ; rdfs:seeAlso <../182434519>, <../310263569>, <../314261350>, <../314497377>, <../314513297>, <../314718264> ; foaf:isPrimaryTopicOf . 
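The post goes on to pull individual values out of this graph with short LDPath programs. As a hedged sketch only (the exact expressions used in the post may differ), extracting the person's name and, via the dbpedia sameAs link, the subject labels might look like:

name = foaf:primaryTopic / schema:name :: xsd:string ;
subjects = foaf:primaryTopic / schema:sameAs / dcterms:subject / rdfs:label :: xsd:string ;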
We can use LDPath to extract the person’s name: So far, this is not so different from traditional approaches. But, if we look deeper in the response, we can see other resources, including books by the author. <../310263569> schema:creator <../65687612> ; schema:name "How to Cook Everything : Simple Recipes for Great Food" ; a schema:CreativeWork . We can traverse the links to include the titles in our record: LDPath also gives us the ability to write this query using a reverse property selector, e.g: books = foaf:primaryTopic / ^schema:creator[rdf:type is schema:CreativeWork] / schema:name :: xsd:string ; The resource links out to some external resources, including a link to dbpedia. Here is a selection from record in dbpedia: dbpedia-owl:abstract "Mark Bittman (born c. 1950) is an American food journalist, author, and columnist for The New York Times."@en, "Mark Bittman est un auteur et chroniqueur culinaire américain. Il a tenu une chronique hebdomadaire pour le The New York Times, appelée The Minimalist (« le minimaliste »), parue entre le 17 septembre 1997 et le 26 janvier 2011. Bittman continue d'écrire pour le New York Times Magazine, et participe à la section Opinion du journal. Il tient également un blog."@fr ; dbpedia-owl:birthDate "1950+02:00"^^ ; dbpprop:name "Bittman, Mark"@en ; dbpprop:shortDescription "American journalist, food writer"@en ; dc:description "American journalist, food writer", "American journalist, food writer"@en ; dcterms:subject , , , , , , ; LDPath allows us to transparently traverse that link, allowing us to extract the subjects for VIAF record: [1] If you’re playing along at home, note that, as of this writing, VIAF.org fails to correctly implement content negotiation and returns HTML if it appears anywhere in the Accept header, e.g.: curl -H "Accept: application/rdf+xml, text/html; q=0.1" -v http://viaf.org/viaf/152427175/ will return a text/html response. This may cause trouble for your linked data clients. Mar 13, 2013 Building a Pivotal Tracker IRC bot with Sinatra and Cinch We're using Pivotal Tracker on the Fedora Futures project. We also have an IRC channel where the tech team hangs out most of the day, and let each other know what we're working on, which tickets we're taking, and give each other feedback on those tickets. In order to document this, we try to put most of our the discussion in the tickets for future reference (although we are logging the IRC channel, it's not nearly as easy to look up decisions there). Because we're (lazy) developers, we wanted updates in Pivotal to get surfaced in the IRC channel. There was a (neglected) IRC bot, Pivotal-Tracker-IRC-bot, but it was designed to push and pull data from Pivotal based on commands in IRC (and, seems fairly abandoned). So, naturally, we built our own integration: Pivotal-IRC. This was my first time using Cinch to build a bot, and it was a surprisingly pleasant and straightforward experience: bot = Cinch::Bot.new do configure do |c| c.nick = $nick c.server = $irc_server c.channels = [$channel] end end # launch the bot in a separate thread, because we're using this one for the webapp. 
Thread.new { bot.start } And we have a really tiny Sinatra app that can parse the Pivotal Webhooks payload and funnel it into the channel: post '/' do message = Pivotal::WebhookMessage.new request.body.read bot.channel_list.first.msg("#{message.description} #{message.story_url}") end It turns out we also send links to Pivotal tickets not infrequently, and building two-way communication (using the Pivotal REST API, and the handy pivotal-tracker gem) was also easy. Cinch exposes a handy DSL that parses messages using regular expressions and capturing groups: bot.on :message, /story\/show\/([0-9]+)/ do |m, ticket_id| story = project.stories.find(ticket_id) m.reply "#{story.story_type}: #{story.name} (#{story.current_state}) / owner: #{story.owned_by}" end Mar 9, 2013 Real-time statistics with Graphite, Statsd, and GDash We have a Graphite-based stack of real-time visualization tools, including the data aggregator Statsd. These tools let us easily record real-time data from arbitrary services with minimal fuss. We present some curated graphs through GDash, a simple Sinatra front-end. For example, we record the time it takes for Solr to respond to queries from our SearchWorks catalog, using this simple bash script: tail -f /var/log/tomcat6/catalina.out | ruby solr_stats.rb (We rotate these logs through truncation; you can also use `tail -f --retry` for logs that are moved away when rotated.) And the ruby script that does the actual parsing: require 'statsd.rb' STATSD = Statsd.new(...,8125) # Listen to stdin while str = gets if str =~ /QTime=([^ ]+)/ # extract the QTime ms = $1.to_i # record it, based on our hostname STATSD.timing("#{ENV['HOSTNAME'].gsub('.', '-')}.solr.qtime", ms) end end From this data, we can start asking questions like: Is our load-balancer configured optimally? (Hint: not quite; for a variety of reasons, we've sacrificed some marginal performance benefit for this non-invasive, simpler load-balance configuration.) Why are our 90th-percentile query times creeping up? (time in ms) (Answers to these questions and more in a future post, I'm sure.) We also use this setup to monitor other services, e.g.: What's happening in our Fedora instance (and, which services are using the repository)? Note the red line ("warn_0") in the top graph. It marks the point where our (asynchronous) indexing system is unable to keep up with demand, and updates may appear at a delay. Given time (and sufficient data, of course), this also gives us the ability to forecast and plan for issues: Is our Solr query time getting worse? (Ganglia can perform some basic manipulation, including taking integrals and derivatives) What is the rate of growth of our indexing backlog, and, can we process it in a reasonable timeframe, or should we scale the indexer service? Given our rate of disk usage, are we on track to run out of disk space this month? This week? If we build graphs to monitor those conditions, we can add Nagios alerts to trigger service alerts. GDash helpfully exposes a REST endpoint that lets us know if a service has crossed those WARN or CRITICAL thresholds. We currently have a home-grown system monitoring system that we're tempted to fold into here as well. I've been evaluating Diamond, which seems to do a pretty good job of collecting granular system statistics (CPU, RAM, IO, disk space, etc). Mar 8, 2013 Icemelt: A stand-in for integration tests against AWS Glacier One of the threads we've been pursuing as part of the Fedora Futures project is integration with asynchronous and/or very slow storage.
We've taken on AWS Glacier as a prime, generally accessable example. Uploading content is slow, but can be done synchronously in one API request: POST /:account_id/vaults/:vault_id/archives x-amz-archive-description: Description ...Request body (aka your content)... Where things get radically different is when requesting content back. First, you let Glacier know you'd like to retrieve your content: POST /:account_id/vaults/:vault_id/jobs HTTP/1.1 { "Type": "archive-retrieval", "ArchiveId": String, [...] } Then, you wait. and wait. and wait some more; from the documentation: Most Amazon Glacier jobs take about four hours to complete. You must wait until the job output is ready for you to download. If you have either set a notification configuration on the vault identifying an Amazon Simple Notification Service (Amazon SNS) topic or specified an Amazon SNS topic when you initiated a job, Amazon Glacier sends a message to that topic after it completes the job. [emphasis added] Icemelt If you're iterating on some code, waiting hours to get your content back isn't realistic. So, we wrote a quick Sinatra app called Icemelt in order to mock the Glacier REST API (and, perhaps taking less time to code than retrieving content from Glacier ). We've tested it using the Ruby Fog client, as well as the official AWS Java SDK, and it actually works! Your content gets stored locally, and the delay for retrieving content is configurable (default: 5 seconds). Configuring the official SDK looks something like this: PropertiesCredentials credentials = new PropertiesCredentials( TestIcemeltGlacierMock.class .getResourceAsStream("AwsCredentials.properties")); AmazonGlacierClient client = new AmazonGlacierClient(credentials); client.setEndpoint("http://localhost:3000/"); And for Fog, something like: Fog::AWS::Glacier.new :aws_access_key_id => '', :aws_secret_access_key => '', :scheme => 'http', :host => 'localhost', :port => '3000' Right now, Icemelt skips a lot of unnecessary work (e.g. checking HMAC digests for authentication, validating hashes, etc), but, as always, patches are very welcome. Next » clir-informz-net-5854 ---- Template Name: Characters remaining: Template Description: Characters remaining: Name: Filename: Select the destination folder: Select a folder Report Edit Test Activate Deactivate Save As Template Copy Delete Undelete Archive Set as In Progress Form Content Only Unsubscribe Form Custom Some non-required fields on this form are blank. Do you wish to save with blank values? Loading... Join the DLF Forum Newsletter mailing list! Email: * First Name * Last Name * Institution Title Opt-in to join the list DLF Forum News Please verify that you are not a robot. Sign Up DLF never shares your information. But we do like to share information with you! coactproject-eu-9604 ---- Gender Equality – CoAct About What is CoAct? Our Ethical Values Partners Advisory board R&I Actions Mental Health Care Youth Employment Environmental Justice Open Calls on Gender Equality Resources Toolkits Publications Readings Project reports Communication materials Citizen Social Science CSS School The Community Get involved Blog and Events Menu Menu About What is CoAct? 
Our Ethical Values Partners Advisory board R&I Actions Mental Health Care Youth Employment Environmental Justice Open Calls on Gender Equality Resources Toolkits Publications Readings Project reports Communication materials Citizen Social Science CSS School The Community Get involved Blog and Events Menu Gender Equality CoAct Open Calls on Gender Equality Contents About the Open Calls Why should I apply How to apply Timeline and deadlines Contact and FAQ Read the call in German, Polish or Czech We are launching three open calls to foster CSO-led Citizen Social Science projects on Gender Equality. Any non-profit organisation registered in the eligible countries can apply. If you are a CSO working on Gender Equality, apply between July 1st and Sept 30th, 2021. Before applying, we recommend that you read the Applicant Guide and review the Application Form. Apply here Download the Applicant Guide Review the Application Form About the Open Calls CoAct is launching a call for proposals, inviting civil society initiatives to apply for our cascading grants with max. 20.000,- Euro to conduct Citizen Social Science research on the topic of Gender Equality. A maximum of four (4) applicants will be selected across three (3) different open calls. Applications from a broad range of backgrounds are welcome, including feminist, LGTBQ+, none-binary and critical masculinity perspectives. Gender Equality & Sustainable Cities and Communities CSOs in Berlin & Brandenburg area Gender Equality & Decent Work and Economic Growth CSOs in Eastern Europe Gender Equality & Opportunities and Risks of Digitalization International CSOs in the EU We understand Citizen Social Science as participatory research co-designed or directly driven by citizen groups that share a particular social concern. In CoAct projects citizens act as co-researchers throughout the entire research process and are recognized as in-the-field competent experts being equal actors in all phases. Citizen science in general, and our open calls in particular, are relevant to civic organisations which incorporate citizen engagement, community building or any kind of collective action as part of their projects. Why should I apply CoAct will provide funding for a research project (10 months max), alongside dedicated activities, resources and tools to set up and run the research project. CoAct will provide a research mentoring program for your team. In collaborative workshops you will be supported to co-design and explore available tools, working together with the CoAct team to achieve your goals. CoAct will connect you to a community of people and initiatives, tackling similar challenges and contributing to common aims. You will have the opportunity to discuss your projects with the other grantees and, moreover, are invited to join CoAct´s broader Citizen Social Science network. You should apply if you: are an ongoing Citizen Social Science project looking for support, financial and otherwise, to grow and become sustainable; are a community interested in co-designing research to generate new knowledge about gender equality topics, broadly defined; are a not-for-profit organization focusing on community building, increasing the visibility of specific communities, increasing civic participation, and being interested in exploring the use of Citizen Social Science in your work. How to apply All the information presented here can be found in the Guide for Applicants. 
To apply for any of the three Open Calls, you should: Be a not-for-profit organization, legally registered and operating in the European Union Select the Open Call relevant to your work and interest Verify the specific eligibility criteria of the Open Call Use the online form to submit your application Below you can find descriptions of each Open Call, including the specific eligibility criteria: Open Call 1 “Sustainable cities and communities” (SDG 11) is addressing initiatives in the Berlin and Brandenburg region that aim for making cities inclusive, safe, resilient and sustainable for all its inhabitants. Proposals should examine gender inequalities in affordable housing and/or urban planning as well as projects that promotes social, economic, environmental sustainability through community building around the topic of gender equality in its broadest sense. Eligibility criteria: – Applicant should be a non-profit – The candidate organisation should be registered in the Berlin and Brandenburg region. Open Call 2 “Decent work and economic growth” (SDG 8) is addressing organisations in Eastern Europe. Whereas women in the EU earn on average over 16% less per hour than men this figure becomes is even higher in Eastern Europe Countries. Trans and intersexual and none-binary people are facing even harder forms of discrimination regarding their work opportunities (EC 2018). Eligibility criteria: – Applicant should be a non-profit – The candidate organisation should be registered in one of the following countries: Bulgaria, Croatia, Czech Republic, Estonia, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, Slovenia. Open Call 3 “Opportunities and risks of digitalization” is open to international civic organisations operating in the EU. It has been pointed out that digital spaces are gendered spaces which hinder for example the participation of young women and that digital norms are exacerbated online (EIGE 2019). Proposals should address the issues related to gender inequalities in online spaces, due, in part, to issues such as the gender dynamics of online platforms and the exposure to online harassment. Eligibility criteria: Applicant should be a non-profit The candidate organisation should have operations in at least two European countries. The candidate organisation should be registered in a member country of the European Union. Click here to apply Timeline and deadlines Opening date: July 1st 2021, 12:00am GMT Closing date: September 30th 2021, 12:00am GMT Contact and FAQ To contact us for questions and clarifications, send an email to opencalls@coactproject.eu General What is CoAct? CoAct stands for Co-designing Citizen Social Science for collective action. It is a research project that explores the field of Citizen Social Science funded by the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 873048). CoAct proposes a new understanding of Citizen Social Science (CSS) as participatory research co-designed and directly driven by citizen groups sharing a social concern. CoAct aims to provide and further develop methodologies supporting an understanding of research that can equally be led by academic researchers or citizen groups. Doing so, the project seeks to create an environment that provides a more equal “seat at the table” in process, which are oftentimes dominated by academic researchers. 
CoAct is running three so-called Research and Innovation Actions (R&I Actions) in which citizens act as co-researchers, actively participating in all phases of the research, from the design to the interpretation of the results and their transformation into concrete actions. Simultaneously, with the CoAct Open Call provides funding for citizen groups to lead their own participatory research, inviting academic researchers in. Who are the partners of the CoAct consortium? CoAct is a transdisciplinary collaboration of research institutions and civil society organisations. The consortium brings together experts from different disciplines and fields of practice, such as Participatory Action Research, Computational Social Science, Citizen Science, Research Policy and Development, Digital Transformation, Social Movement Studies and Participatory Development Communication. For further information check this link: https://coactproject.eu/partners/. What is Citizen Social Science? We understand Citizen Social Science as participatory research co-designed or directly driven by citizen groups that share a particular social concern. In CoAct’s R&I Actions citizens act as co-researchers throughout the entire research process and are recognized as in-the-field competent experts being equal actors in all phases. In the co-designed research, the citizens explore their lived experiences regarding the specific social concerns that motivate the research actions. In these R&I Actions, we focus on the topics of mental health care, youth employment and environmental justice and gender equality. Such an approach enables them to address pressing social issues from the bottom up, embedded in their social contexts. Co-designed research provides the foundation for socially robust evidence-based knowledge that strives for sustainable impact and social change. Why an CoAct Open Call? CoAct’s Open Call seeks to move beyond its own co-research activities and invite further actors to benefit from the project and its support mechanisms. We want to support civic organizations in making use of CSS methods and best practices in their own projects, directly from within civil society. Civil society organizations are directly dealing with specific social topics of concern and are mostly organized around these. Therefore, the CoAct Open Call seeks to connect to expert work at the grassroots level to explore the opportunities and challenges of citizen-led research. Why a call on Gender Equality? Gender equality is an ongoing major societal topic that constantly affects our daily life. The United Nations made “Gender Equality” the fifth Sustainable Development Goal (SDG) and define it as follows: (1) End all forms of discrimination against all women and girls everywhere; (2) Eliminate all forms of violence against all women and girls in the public and private spheres, including trafficking and sexual and other types of exploitation. In CoAct, we take SDG5 as the starting point for this Open call but we want to consider gender equality in a wider and inclusive manner, including all perspectives and collectives, such as LGBTQ+ communities for example. All perspectives related to any perceived gender identity, including non-binary ones, are thus welcome. The COVID-19 pandemic has clearly brought the strongly rooted traditional role patterns in our system to light again, particularly regarding care work. 
Simultaneously, we are witnessing new manifestations and visibilities, and—at least in some locations—more attention from policy and society of the different feminist and LGBTQ+ movements with claims for equity appearing in various forms, for example in huge demonstrations (300,000 people in Barcelona on 8th of March of 2019), the #Metoo movement or also intersectional movements like Black Lives Matter. There is a vast variety of different attempts to tackle the social construction and structural embeddedness of gender inequality and many types of actors can play a relevant role. Movements range from demands for a women’s quota in decision making positions, to human rights movements against discrimination and violence up to more radical transformative approaches that criticize the basic exclusionary foundations of capitalism. From our perspective, Citizen Social Science can represent a powerful grassroots approach to this global issue. In our understanding of Citizen Social Science, citizens in vulnerable situations need to be at the centre of the research cycle, defining the focus on a specific social issue. This way, unprecedented scientific data related to gender inequalities could be collected, possibly leading to new scientific evidence-informed reactions and the proposal of new collective actions or policymaking. Therefore, we want to invite civil society organizations to apply for a short-term grant to investigate issues with a Citizen Social Science approach. Funded projects will receive financial backing as well as support via mentoring by partners from CoAct, including academic researchers, global networks, NGOs and others. Application Process How long is the application process open? The application process is open for three (3) months from 1st July to 30th September 2021 (12 am GMT). To whom is the Open Call aiming to? CoAct’s Open Calls are inviting: (A) ongoing Citizen Social Science projects looking for support, financial and otherwise, to grow and become sustainable; (B) communities interested in co-designing research to generate new knowledge about gender equality topics; (C) organizations in the third sectors that focus on community building, increasing the visibility of specific communities, increasing civic participation and who are interested in exploring the use of Citizen Social Science in their work. The funding is available to ​legal entities​ and ​consortia​ established in a country or territory eligible to receive Horizon 2020 grants. Only organizations legally registered and operating in an EU member state or associated country are eligible for funding from CoAct. For consortia of different organisations, all participants must be eligible. In this case, the participants also need to choose a research project lead, which will submit the application and engage with CoAct on behalf of the consortium. Every entity is allowed to participate in one application, either on its own or as part of a consortium as described above​. CoAct has the following conflict of interest policy: ​Immediate family, domestic and non-domestic partners and those with financial ties to members of the CoAct consortium members are prohibited to apply. If you have a prior relationship with anyone contributing to CoAct that you feel may constitute a conflict of interest, please email ​opencalls@coactproject.eu for clarification. What is provided by CoAct? 
CoAct will provide: (A) funding for a research project (10 months max), alongside dedicated activities, resources and tools to set up and run the research project; (B) a research mentoring program for your team. In collaborative workshops you will be supported to co-design and explore available tools working together with the CoAct team to achieve your goals; connections to a community of people and initiatives, tackling similar challenges and contributing to common aims. You will have the opportunity to discuss your projects with the other grantees and moreover are invited to join CoAct´s broader Citizen Social Science network. What are the topics of the Open Call? In the CoAct Open Calls gender equality is combined with three secondary thematic topics that are corresponding with specific regional foci: “Sustainable cities and communities”, Berlin and Brandenburg Area, Germany “Decent work and economic growth”, Eastern Europe (Bulgaria, Croatia, Czech Republic, Estonia, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, Slovenia) “Opportunities and risks of digitalization”, across all EU countries. Consequently, applicants will have to show the relevance of their project to both gender equality and the specific focus of the Open Call of their choice. It is possible for one organization to apply to several Open Calls, for different projects. How much funding is available? The funding will be set at a maximum of €20,000 for Call 1 and 3, for which only one applicant will be selected. Call 2 will select two proposals, which will share the €20,000 grant, with a maximum of €15,000 for a single organisation. The funding can be spent on salaries, equipment, consumables, travel, subcontracting to other entities, and indirect expenditure (calculated as 25% of the total direct costs ), in accordance with Horizon 2020 guidelines. What is funded? The budget you submit will have to include different cost categories, which are explained below. There is a general distinction between direct costs, subcontracting, and indirect costs (also known as overheads). Indirect costs are calculated at 25% of the direct costs; no indirect costs can be charged on subcontracting. All costs, except for purchased equipment, can be booked to the project’s budget covered by the grant. Indirect costs, which are charged on top of the total direct costs, should be included. All costs should be stated inclusive of any irrecoverable VAT. Direct costs Personnel Applicants can spend CoAct funds on the staff directly involved in the execution of the project. Equipment Equipment with a useful life in excess of the project duration can only be reimbursed to the extent the asset would be depreciated for the ten-month project period. Therefore, the standard rate allowed under the contracted project will be 15% of the total costs of the asset for a ten-month period. Indirect costs can be applied to the 15% of costs charged to the project. The costs of equipment rental for the project period can be charged at full cost, as long as the rental cost is not greater than the depreciation cost had the equipment been purchased. Consumables, other goods and services Applicants can spend on consumables and other goods and services (including travel) if they are directly relevant for the achievement of the project. There is no hard-and-fast rule about the distinction between the equipment and other costs; small items such as moderation cards may be budgeted as ‘other goods and services. 
Subcontracting Applicants may subcontract some of their activities to other parties as long as they are also from an H2020 eligible country. No indirect costs (overhead) can be charged on subcontracting costs. Note that we expect the applicant to carry out most of the tasks of the project – subcontracting cannot be used to carry out key tasks in the project. Indirect costs Indirect costs are within the €20,000 or €15,000 limit and cover items such as rent, admin, printing, photocopying, amenities etc. These costs are eligible if they are declared on the basis of the flat rate of 25% of the eligible costs, from which are excluded: Costs of subcontracting and Costs of in-kind contributions provided by third parties which are not used on the applicant’s premises. How can I apply? Submission is done online via an online form available on the Open Call page of the CoAct website​. Applicants will be asked to describe their project proposal but also a series of questions about their eligibility to apply for funding, and their ability to conduct the research project. Only complete applications submitted before the deadline will be considered for review. All information provided must be in English. Before applying, we recommend that you read the Applicant Guide and review the Application Form. How many projects will be funded? A maximum of four (4) projects will be funded. What is the expected outcome of the research projects? Each research project is expected to provide a final report of the findings of the research at the end of the project. Furthermore, results can also manifest as videos, manuals, handbooks, exhibitions etc. will be funded. Eligibility Who can apply? The funding is available to ​legal entities​ and ​consortia​ established in a country or territory eligible to receive Horizon 2020 grants. Only organizations legally registered and operating in an EU member state or associated country are eligible for funding from CoAct. Every entity is allowed to participate, either on its own or as part of a consortium​. Can individuals apply? No, individuals cannot apply. Can consortia apply? For consortia of different organisations, the lead organisation must be eligible. Consortium members need to choose a project lead, which will submit the application and engage with CoAct on behalf of the consortium. Which costs are eligible? The €20,000 grant may be spent only on eligible costs. These are costs that meet the following criteria: – Incurred by the applicant in connection with or during the project; – Identifiable and verifiable in the applicant’s accounts; – Compliant with national law; – Reasonable, justified, in accordance with sound financial management (economy and efficiency); – Indicated in the budget you submitted with the short proposal. CoAct will provide training and guidance to all funded projects on financial matters. There is a general distinction between direct costs, subcontracting, and indirect costs (also known as overheads). Indirect costs are calculated at 25% of the direct costs; no indirect costs can be charged on subcontracting. All costs, except for purchased equipment, can be booked to the project’s budget covered by the grant. Indirect costs, which are charged on top of the total direct costs, should be included. All costs should be stated inclusive of any irrecoverable VAT. Submission Can I submit more than one application? Yes, you can submit one application for each call. Does CoAct offer a pre-proposal check? Unfortunately, we cannot offer a pre-proposal check. 
Can I submit an application if I am already receiving funds from another public programme? Yes, you can but activities you plan to carry out with CoAct cannot receive double funding​. Synergies with other sources of funding, including other Horizon 2020 projects, are encouraged if the grants are used for complementary, not overlapping purposes. Can I submit documents that are not in English? No, all submitted documents must be written in English. Can I change my application once it was submitted? Once submitted, you cannot change your application because we start immediately with the review. Can I apply for all three calls? Yes, you can apply for all three calls with different proposals. Responsible Research and Innovation Who keeps the intellectual property rights? By default, you will be the sole owner of the results and outcomes of your project and all associated intellectual property. However, we expect all proposals to follow an open approach, sharing results and experiences widely with the community, as in any EU project. We will only accept proposals with a well-articulated plan that includes an open data approach. In addition, CoAct or the European Commission may ask you to present your work as part of our public relations and networking events, to showcase and discuss the benefits and challenges of the CoAct approach. What happens with the data? Applicants will have to be clear in their proposal about the data that they expect to collect, generate and manage through the project. The processing of that data should follow the general data protection regulation (GDPR). As noted earlier, we will only accept proposals that are committed to making their data, methods and outputs publicly available for reuse, following an open science approach. For that CoAct will provide technical, legal and operational support to successful applicants. In addition, CoAct will require Citizen Social Science projects funded through the programme to collect, manage and share data with and for the CoAct team for co-evaluation purposes. The specifics of the data to be collected will be defined with each project team. Do I need to share everything openly? Yes, within the limits of data protection laws. We will only accept proposals that are committed to making their data, methods and outputs publicly available for reuse, following an open science approach. For that CoAct will provide technical, legal and operational support to successful applicants. In addition, CoAct will require Citizen Social Science projects funded through the programme to collect, manage and share data with and for the CoAct team for co-evaluation purposes. The specifics of the data to be collected will be defined with each project team. What about ethics? CoAct expects all successful applicants to the Open Calls to follow the Responsible Research and Innovation guidelines set by the European Commission for the Horizon2020-programme: projects will be expected to: – Ensure the informed consent of any human participants, who should be provided clear and concise instructions on what is expected of them, what personal or sensitive data will be gathered about them and how they can request the deletion of this data. – Ensure adequate data protection practises are in place to securely store any data gathered by or about volunteers including, but not limited to: pseudonymisation, anonymisation and aggregation of data; encryption and use of secure servers. 
Although selected grantees will be provided support to make sure that their project follows the Responsible Research and Innovation guidelines, we will favour applicants who are able to identify and outline remediation strategies in response to the potential ethical challenges that their project may face. Selection Process How do we select applications? For all applications we follow the same selection procedure: (1) Eligibility check of the organisation and the proposal; (2) Review and shortlist of the proposals by at least two reviewers according to the selection criteria on a 3-point scale; (3) Interview with shortlisted applicants between 6th and 8th October 2021; (4) Decision for applicants acceptance by the reviewers and CoAct team latest until 15th October 2021; (5) Negotiation including due diligence checks, work plan and budget agreement, (6) CoAct facilitation for accepted applicants during the ten-month project by the CoAct team. Who is reviewing the applications? Applications will be reviewed by a panel made of CoAct team members and external experts, selected for the relevance of their work to each open call. What are the review criteria? Idea Relevance to the call: Does the proposal match the focus of the specific call it was submitted to? Does it include activities compatible with Citizen Social Science? Project design: Are the planned activities realistic given the proposed budget and time constraints? Does the scope and complexity of the project match the profiles of the project’s team? Impact Link to a broader agenda: Is the project linked to a broader agenda/programme carried out by the applicant? Does the project have a chance to continue beyond the length of the programme? Is the idea reusable by other organizations working on similar topics? Does the project present a participatory evaluation and impact assessment strategy? Documentation & Dissemination: Which documentation strategies are planned? Is there a commitment to publish the data and results? Where would the results of the projects be disseminated? Ethics & Safety Ethical considerations: Are there any ethical considerations relevant to the project, and if so how are they taken into account? Is there a clear commitment to data protection and anonymization where relevant? How does the applicant plan to ensure that their activities are as inclusive as possible? Health & Safety: If the project requires physical meetings, what processes are put in place to protect the health of participants in the contest of the Covid19 pandemic? When will I know if I am shortlisted? You will be contacted by the CoAct team between 1st and 5th October 2021. Applicants who were not shortlisted will be informed at this stage as well. What will be topics of the interview? The interview will be an opportunity for the applicant to expand on their written application and answer questions that it may have raised. Will I receive feedback? Yes, we will provide feedback to applicants to improve their project. Unfortunately, due to the high number of applications anticipated, we will have a very limited capacity to reply to any queries on unsuccessful applications. When will be the interviews? The interviews will take place between 6th and 8th October 2021. Shortlisted candidates will be proposed several interview slot options in order to make it possible for everyone to attend. If none of the slots are possible for the applicant, we will be forced to reject the application. Can I reschedule the interview?Can I reschedule the interview? 
Once you have agreed on a date, it is not possible to reschedule the interview. Please ensure that at least two people are available for the interview, so that at least one of them can attend.

When will I know if my application was approved or rejected?
All applicants will be informed of whether they have been accepted or rejected by 15th October 2021 at the latest.

Costs, Payment and Legal

How much funding is available for each project?
Funding is set at a maximum of €20,000 for Calls 1 and 3, for which only one applicant will be selected. Call 2 will select two proposals, which will share the €20,000 grant, with a maximum of €15,000 for a single organisation.

What can I spend the funding on?
The funding can be spent on salaries, equipment, consumables, travel, subcontracting to other entities, and indirect expenditure (calculated as 25% of the total direct costs), in accordance with Horizon 2020 guidelines.

How are payments scheduled?
You will receive two payments: one at the beginning of the project, and a second once the CoAct team has reviewed the interim project report after the first five months.

Is subcontracting allowed?
Yes, subcontracting is allowed.

What do I have to do to receive the final payment?
You will have to show that the project is progressing, in an interim project report delivered to the CoAct team after the first five months of the research.

The CoAct project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement number 873048. This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
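As a worked example of the budget arithmetic in the Costs, Payment and Legal section above: with the 25% flat rate for indirect costs, a project aiming at the €20,000 ceiling can plan at most €16,000 of direct costs, since €16,000 + 25% × €16,000 = €20,000. The figures in the sketch below are hypothetical and only illustrate the calculation; they are not a recommended budget.

```python
# Hypothetical direct-cost figures, used only to illustrate the 25% indirect
# rate and the €20,000 ceiling quoted in the call text above.
direct_costs = {
    "salaries": 11_000,
    "equipment": 2_000,
    "consumables": 1_000,
    "travel": 2_000,
}

total_direct = sum(direct_costs.values())   # €16,000
indirect = 0.25 * total_direct              # 25% flat rate -> €4,000
total_requested = total_direct + indirect   # €20,000

assert total_requested <= 20_000, "exceeds the maximum grant for Calls 1 and 3"
print(f"direct €{total_direct:,.0f} + indirect €{indirect:,.0f} = €{total_requested:,.0f}")
```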
code4lib-org-5244 ---- Code4Lib | We are developers and technologists for libraries, museums, and archives who are dedicated to being a diverse and inclusive community, seeking to share ideas and build collaboration. Code4Lib.org was migrated from Drupal to Jekyll in June 2018. Some links may still be broken. To report issues or help fix see: https://github.com/code4lib/code4lib.github.io Posts Nov 25, 2020 Code4Lib 2021 Sep 5, 2019 Code4Lib 2020 Aug 27, 2018 Code4Lib 2019 Apr 17, 2018 Code4Lib Journal Issue 41 Call for Papers Oct 18, 2017 Issue 38 of the Code4Lib Journal Aug 8, 2017 Code4Lib 2018 Jul 18, 2017 Issue 37 of the Code4Lib Journal Jun 12, 2017 Code4Lib Journal Issue 38 Call for Papers Oct 28, 2016 Code4Lib Journal #34 Oct 14, 2016 C4L17: Call for Presentation/Panel proposals Oct 13, 2016 Code4Lib 2017 Jul 19, 2016 Code4Lib Journal #33 Apr 26, 2016 Code4Lib Journal #32 Sep 17, 2015 jobs.code4lib.org studied Aug 10, 2015 Code4Lib 2016 Jul 27, 2015 Code4Lib Northern California: Stanford, CA Jul 15, 2015 Code4Lib Journal #29 Apr 15, 2015 Code4Lib Journal #28: Special Issue on Diversity in Library Technology Mar 7, 2015 Code4Lib 2016 will be in Philadelphia Mar 7, 2015 Code4Lib 2016 Conference Proposals Feb 21, 2015 Code4Lib North 2015: St. Catharines, ON Feb 21, 2015 Code4Lib 2015 videos Jan 31, 2015 2015 Code of Conduct Dec 12, 2014 Code4Lib 2015 Diversity Scholarships Dec 5, 2014 Your code does not exist in a vacuum Dec 5, 2014 Your Chocolate is in My Peanut Butter! Mixing up Content and Presentation Layers to Build Smarter Books in Browsers with RDFa, Schema.org, and Linked Data Topics Dec 5, 2014 You Gotta Keep 'em Separated: The Case for "Bento Box" Discovery Interfaces Dec 5, 2014 Refinery — An open source locally deployable web platform for the analysis of large document collections Dec 5, 2014 Programmers are not projects: lessons learned from managing humans Dec 5, 2014 Our $50,000 Problem: Why Library School?
Dec 5, 2014 Making your digital objects embeddable around the web Dec 5, 2014 Leveling Up Your Git Workflow Dec 5, 2014 Level Up Your Coding with Code Club (yes, you can talk about it) Dec 5, 2014 How to Hack it as a Working Parent: or, Should Your Face be Bathed in the Blue Glow of a Phone at 2 AM? Dec 5, 2014 Helping Google (and scholars, researchers, educators, & the public) find archival audio Dec 5, 2014 Heiðrún: DPLA's Metadata Harvesting, Mapping and Enhancement System Dec 5, 2014 Got Git? Getting More Out of Your GitHub Repositories Dec 5, 2014 Feminist Human Computer Interaction (HCI) in Library Software Dec 5, 2014 Dynamic Indexing: a Tragic Solr Story Dec 5, 2014 Docker? VMs? EC2? Yes! With Packer.io Dec 5, 2014 Digital Content Integrated with ILS Data for User Discovery: Lessons Learned Dec 5, 2014 Designing and Leading a Kick A** Tech Team Dec 5, 2014 Consuming Big Linked Open Data in Practice: Authority Shifts and Identifier Drift Dec 5, 2014 BYOB: Build Your Own Bootstrap Dec 5, 2014 Book Reader Bingo: Which Page-Turner Should I Use? Dec 5, 2014 Beyond Open Source Dec 5, 2014 Awesome Pi, LOL! Dec 5, 2014 Annotations as Linked Data with Fedora4 and Triannon (a Real Use Case for RDF!) Dec 5, 2014 American (Archives) Horror Story: LTO Failure and Data Loss Dec 5, 2014 A Semantic Makeover for CMS Data Dec 4, 2014 Code4lib 2007 Lighting Talks Nov 16, 2014 Store Nov 11, 2014 Voting for Code4Lib 2015 Prepared Talks is now open. Nov 10, 2014 Keynote voting for the 2015 conference is now open! Sep 23, 2014 Code4Lib 2015: Call for Proposals Sep 21, 2014 Code4Lib North (Ottawa): Tuesday October 7th, 2014 Sep 10, 2014 code4libBC: November 27 and 28, 2014 Sep 6, 2014 2015 Conference Schedule Jul 22, 2014 Code4Lib Journal issue 25 Jul 15, 2014 Code4Lib NorCal 28 July in San Mateo Jul 2, 2014 Code4Lib 2015 Apr 18, 2014 Code4Lib 2014 Trip Report - Zahra Ashktorab Apr 18, 2014 Code4Lib 2014 Trip Report- Nabil Kashyap Apr 18, 2014 Code4Lib 2014 Trip Report - Junior Tidal Apr 18, 2014 Code4Lib 2014 Trip Report - Jennifer Maiko Kishi Apr 18, 2014 Code4Lib 2014 Trip Report - J. (Jenny) Gubernick Apr 18, 2014 Code4Lib 2014 Trip Report - Emily Reynolds Apr 18, 2014 Code4Lib 2014 Trip Report - Coral Sheldon Hess Apr 18, 2014 Code4Lib 2014 Trip Report - Christina Harlow Apr 18, 2014 CODE4LIB 2014 Trip Report - Arie Nugraha Mar 10, 2014 Call for proposals: Code4Lib Journal, issue 25 Feb 3, 2014 2014 Code of Conduct Jan 30, 2014 Code4Lib 2015 Call for Host Proposals Jan 24, 2014 Code4Lib 2014 Sponsors Jan 21, 2014 WebSockets for Real-Time and Interactive Interfaces Jan 21, 2014 We Are All Disabled! 
Universal Web Design Making Web Services Accessible for Everyone Jan 21, 2014 Visualizing Solr Search Results with D3.js for User-Friendly Navigation of Large Results Sets Jan 21, 2014 Visualizing Library Resources as Networks Jan 21, 2014 Under the Hood of Hadoop Processing at OCLC Research Jan 21, 2014 Towards Pasta Code Nirvana: Using JavaScript MVC to Fill Your Programming Ravioli Jan 21, 2014 Sustaining your Open Source project through training Jan 21, 2014 Structured data NOW: seeding schema.org in library systems Jan 21, 2014 Quick and Easy Data Visualization with Google Visualization API and Google Chart Libraries Jan 21, 2014 Queue Programming -- how using job queues can make the Library coding world a better place Jan 21, 2014 PhantomJS+Selenium: Easy Automated Testing of AJAX-y UIs Jan 21, 2014 Personalize your Google Analytics Data with Custom Events and Variables Jan 21, 2014 Organic Free-Range API Development - Making Web Services That You Will Actually Want to Consume Jan 21, 2014 Next Generation Catalogue - RDF as a Basis for New Services Jan 21, 2014 More Like This: Approaches to Recommending Related Items using Subject Headings Jan 21, 2014 Lucene's Latest (for Libraries) Jan 21, 2014 Discovering your Discovery System in Real Time Jan 21, 2014 Dead-simple Video Content Management: Let Your Filesystem Do The Work Jan 21, 2014 Building for others (and ourselves): the Avalon Media System Jan 21, 2014 Behold Fedora 4: The Incredible Shrinking Repository! Jan 21, 2014 All Tiled Up Jan 21, 2014 A reusable application to enable self deposit of complex objects into a digital preservation environment Jan 21, 2014 A Book, a Web Browser and a Tablet: How Bibliotheca Alexandrina's Book Viewer Framework Makes It Possible Jan 21, 2014 2014 Conference Schedule Jan 17, 2014 Code4Lib 2014 Conference Diversity Scholarship Recipients Nov 19, 2013 Code4lib 2014 Diversity Scholarships (Application Deadline: Dec. 13, 2013, 5pm EST) Nov 12, 2013 Code4Lib 2014 Keynote Speakers Sep 30, 2013 Code4Lib 2014 Jun 10, 2013 Code4Lib 2014 Conference Prospectus for Sponsors Mar 28, 2013 Code4Lib 2014 Conference Proposals Jan 31, 2013 Ask Anything! Dec 5, 2012 Code4Lib 2014 Call for Host Proposals Dec 4, 2012 The Care and Feeding of a Crowd Dec 4, 2012 The Avalon Media System: A Next Generation Hydra Head For Audio and Video Delivery Dec 4, 2012 Solr Update Dec 4, 2012 REST IS Your Mobile Strategy Dec 4, 2012 Practical Relevance Ranking for 10 million books. Dec 4, 2012 Pitfall! Working with Legacy Born Digital Materials in Special Collections Dec 4, 2012 n Characters in Search of an Author Dec 4, 2012 Linked Open Communism: Better discovery through data dis- and re- aggregation Dec 4, 2012 Hybrid Archival Collections Using Blacklight and Hydra Dec 4, 2012 HTML5 Video Now! Dec 4, 2012 Hands off! 
Best Practices and Top Ten Lists for Code Handoffs Dec 4, 2012 Hacking the DPLA Dec 4, 2012 Google Analytics, Event Tracking and Discovery Tools Dec 4, 2012 Evolving Towards a Consortium MARCR Redis Datastore Dec 4, 2012 EAD without XSLT: A Practical New Approach to Web-Based Finding Aids Dec 4, 2012 De-sucking the Library User Experience Dec 4, 2012 Data-Driven Documents: Visualizing library data with D3.js Dec 4, 2012 Creating a Commons Dec 4, 2012 Citation search in SOLR and second-order operators Dec 4, 2012 Browser/Javascript Integration Testing with Ruby Dec 4, 2012 ARCHITECTING ScholarSphere: How We Built a Repository App That Doesn't Feel Like Yet Another Janky Old Repository App Dec 4, 2012 All Teh Metadatas Re-Revisited Dec 4, 2012 Actions speak louder than words: Analyzing large-scale query logs to improve the research experience Nov 30, 2012 Code4Lib 2013 Scholarship (deadline: December 14, 2012) Nov 2, 2012 Code4Lib 2013 Nov 2, 2012 Code4Lib 2013 Schedule Oct 2, 2012 Code4Lib Conference 2013 Call for Propoosals Sep 5, 2012 Keynote voting for the 2013 conference is now open! Jul 11, 2012 Dates Set for Code4Lib 2013 in Chicago May 29, 2012 Code4Lib Journal - Call for Proposals May 7, 2012 ruby-marc 0.5.0 released Apr 10, 2012 Code4Lib Journal: Editors Wanted Feb 3, 2012 Code4Lib Journal Issue 16 is published! Feb 3, 2012 Ask Anything! – Facilitated by Carmen Mitchell- Code4Lib 2012 Jan 26, 2012 Relevance Ranking in the Scholarly Domain - Tamar Sadeh, PhD Jan 26, 2012 Kill the search button II - the handheld devices are coming - Jørn Thøgersen, Michael Poltorak Nielsen Jan 25, 2012 Stack View: A Library Browsing Tool - Annie Cain Jan 25, 2012 Search Engine Relevancy Tuning - A Static Rank Framework for Solr/Lucene - Mike Schultz Jan 25, 2012 Practical Agile: What's Working for Stanford, Blacklight, and Hydra - Naomi Dushay Jan 25, 2012 NoSQL Bibliographic Records: Implementing a Native FRBR Datastore with Redis - Jeremy Nelson Jan 25, 2012 Lies, Damned Lies, and Lines of Code Per Day - James Stuart Jan 25, 2012 Indexing big data with Tika, Solr & map-reduce - Scott Fisher, Erik Hetzner Jan 25, 2012 In-browser data storage and me - Jason Casden Jan 25, 2012 How people search the library from a single search box - Cory Lown Jan 25, 2012 Discovering Digital Library User Behavior with Google Analytics - Kirk Hess Jan 25, 2012 Building research applications with Mendeley - William Gunn Jan 23, 2012 Your UI can make or break the application (to the user, anyway) - Robin Schaaf Jan 23, 2012 Your Catalog in Linked Data - Tom Johnson Jan 23, 2012 The Golden Road (To Unlimited Devotion): Building a Socially Constructed Archive of Grateful Dead Artifacts - Robin Chandler Jan 23, 2012 Quick and Dirty Clean Usability: Rapid Prototyping with Bootstrap - Shaun Ellis Jan 23, 2012 “Linked-Data-Ready” Software for Libraries - Jennifer Bowen Jan 23, 2012 HTML5 Microdata and Schema.org - Jason Ronallo Jan 23, 2012 HathiTrust Large Scale Search: Scalability meets Usability - Tom Burton-West Jan 23, 2012 Design for Developers - Lisa Kurt Jan 23, 2012 Beyond code: Versioning data with Git and Mercurial - Charlie Collett, Martin Haye Jan 23, 2012 ALL TEH METADATAS! 
or How we use RDF to keep all of the digital object metadata formats thrown at us - Declan Fleming Dec 29, 2011 Discussion for Elsevier App Challenge during Code4Lib 2012 Dec 14, 2011 So you want to start a Kindle lending program Dec 1, 2011 Code4Lib 2013 Call for Host Proposals Nov 29, 2011 Code4Lib 2012 Scholarship (deadline: December 9, 2011) Oct 21, 2011 code4lib 2012 Sponsor Listing Oct 19, 2011 Code4Lib 2012 Schedule Jul 28, 2011 Code4Lib 2012 Feb 11, 2011 Code4Lib 2012 Sponsorship Jan 26, 2011 VuFind Beyond MARC: Discovering Everything Else - Demian Katz Jan 26, 2011 One Week | One Tool: Ultra-Rapid Open Source Development Among Strangers - Scott Hanrath Jan 26, 2011 Letting In the Light: Using Solr as an External Search Component - Jay Luker and Benoit Thiell Jan 26, 2011 Kuali OLE: Architecture for Diverse and Linked Data - Tim McGeary and Brad Skiles Jan 26, 2011 Keynote Address - Diane Hillmann Jan 26, 2011 Hey, Dilbert. Where's My Data?! - Thomas Barker Jan 26, 2011 Enhancing the Mobile Experience: Mobile Library Services at Illinois - Josh Bishoff - Josh Bishoff Jan 26, 2011 Drupal 7 as Rapid Application Development Tool - Cary Gordon Jan 26, 2011 Code4Lib 2012 in Seattle Jan 26, 2011 2011 Lightning Talks Jan 26, 2011 2011 Breakout Sessions Jan 25, 2011 (Yet Another) Home-Grown Digital Library System, Built Upon Open Source XML Technologies and Metadata Standards - David Lacy Jan 25, 2011 Why (Code4) Libraries Exist - Eric Hellman Jan 25, 2011 Visualizing Library Data - Karen Coombs Jan 25, 2011 Sharing Between Data Repositories - Kevin S. Clarke Jan 25, 2011 Practical Relevancy Testing - Naomi Dushay Jan 25, 2011 Opinionated Metadata (OM): Bringing a Bit of Sanity to the World of XML Metadata - Matt Zumwalt Jan 25, 2011 Mendeley's API and University Libraries: Three Examples to Create Value - Ian Mulvany Jan 25, 2011 Let's Get Small: A Microservices Approach to Library Websites - Sean Hannan Jan 25, 2011 GIS on the Cheap - Mike Graves Jan 25, 2011 fiwalk With Me: Building Emergent Pre-Ingest Workflows for Digital Archival Records using Open Source Forensic Software - Mark M Jan 25, 2011 Enhancing the Performance and Extensibility of the XC’s MetadataServicesToolkit - Ben Anderson Jan 25, 2011 Chicago Underground Library’s Community-Based Cataloging System - Margaret Heller and Nell Taylor Jan 25, 2011 Building an Open Source Staff-Facing Tablet App for Library Assessment - Jason Casden and Joyce Chapman Jan 25, 2011 Beyond Sacrilege: A CouchApp Catalog - Gabriel Farrell Jan 25, 2011 Ask Anything! – Facilitated by Dan Chudnov Jan 25, 2011 A Community-Based Approach to Developing a Digital Exhibit at Notre Dame Using the Hydra Framework - Rick Johnson and Dan Brubak Dec 12, 2010 Code4Lib 2011 schedule Dec 10, 2010 Code4Lib 2012 Call for Host Proposals Nov 17, 2010 Scholarships to Attend the 2011 Code4Lib Conference (Deadline Dec. 6, 2010) Sep 23, 2010 Code4Lib 2011 Sponsorship Jun 28, 2010 Issue 10 of the Code4Lib Journal Mar 23, 2010 Location of code4lib 2011 Mar 23, 2010 Code4Lib 2011: Get Ready for the Best Code4lib Conference Yet! Mar 22, 2010 Issue 9 of the Code4Lib Journal Mar 12, 2010 Vote on Code4Lib 2011 hosting proposals Feb 24, 2010 You Either Surf or You Fight: Integrating Library Services With Google Wave - Sean Hannan - Code4Lib 2010 Feb 24, 2010 Vampires vs. 
Werewolves: Ending the War Between Developers and Sysadmins with Puppet - Bess Sadler - Code4Lib 2010 Feb 24, 2010 The Linked Library Data Cloud: Stop talking and start doing - Ross Singer - Code4Lib 2010 Feb 24, 2010 Taking Control of Library Metadata and Websites Using the eXtensible Catalog - Jennifer Bowen - Code4Lib 2010 Feb 24, 2010 Public Datasets in the Cloud - Rosalyn Metz and Michael B. Klein - Code4Lib 2010 Feb 24, 2010 Mobile Web App Design: Getting Started - Michael Doran - Code4Lib 2010 Feb 24, 2010 Metadata Editing – A Truly Extensible Solution - David Kennedy and David Chandek-Stark - Code4Lib 2010 Feb 24, 2010 Media, Blacklight, and Viewers Like You (pdf, 2.61MB) - Chris Beer - Code4Lib 2010 Feb 24, 2010 Matching Dirty Data – Yet Another Wheel - Anjanette Young and Jeff Sherwood - Code4Lib 2010 Feb 24, 2010 library/mobile: Developing a Mobile Catalog - Kim Griggs - Code4Lib 2010 Feb 24, 2010 Keynote #2: catfish, cthulhu, code, clouds and Levenshtein distance - Paul Jones - Code4Lib 2010 Feb 24, 2010 Keynote #1: Cathy Marshall - Code4Lib 2010 Feb 24, 2010 Iterative Development Done Simply - Emily Lynema - Code4Lib 2010 Feb 24, 2010 I Am Not Your Mother: Write Your Test Code - Naomi Dushay, Willy Mene, and Jessie Keck - Code4Lib 2010 Feb 24, 2010 How to Implement A Virtual Bookshelf With Solr - Naomi Dushay and Jessie Keck - Code4Lib 2010 Feb 24, 2010 HIVE: A New Tool for Working With Vocabularies - Ryan Scherle and Jose Aguera - Code4Lib 2010 Feb 24, 2010 Enhancing Discoverability With Virtual Shelf Browse - Andreas Orphanides, Cory Lown, and Emily Lynema - Code4Lib 2010 Feb 24, 2010 Drupal 7: A more powerful platform for building library applications - Cary Gordon - Code4Lib 2010 Feb 24, 2010 Do It Yourself Cloud Computing with Apache and R - Harrison Dekker - Code4Lib 2010 Feb 24, 2010 Cloud4Lib - Jeremy Frumkin and Terry Reese - Code4Lib 2010 Feb 24, 2010 Becoming Truly Innovative: Migrating from Millennium to Koha - Ian Walls - Code4Lib 2010 Feb 24, 2010 Ask Anything! – Facilitated by Dan Chudnov - Code4Lib 2010 Feb 24, 2010 A Better Advanced Search - Naomi Dushay and Jessie Keck - Code4Lib 2010 Feb 24, 2010 7 Ways to Enhance Library Interfaces with OCLC Web Services - Karen Coombs - Code4Lib 2010 Feb 22, 2010 Code4Lib 2010 Lightning Talks Feb 22, 2010 Code4Lib 2010 Breakout Sessions Feb 21, 2010 Code4Lib 2010 Participant Release Form Feb 5, 2010 Code4Lib 2011 Hosting Proposals Solicited Jan 16, 2010 2010 Code4lib Scholarship Recipients Jan 12, 2010 Code4Lib North Dec 21, 2009 Scholarships to Attend the 2010 Code4Lib Conference Dec 16, 2009 Code4Lib 2010 Registration Dec 14, 2009 2010 Conference info Dec 10, 2009 Code4Lib 2010 Schedule Dec 4, 2009 Code4Lib 2010 Sponsorship Nov 16, 2009 2010 Code4Lib Conference Prepared Talks Voting Now Open! Oct 30, 2009 Code4Lib 2010 Call for Prepared Talk Proposals Sep 21, 2009 Vote for code4lib 2010 keynotes! Jul 10, 2009 Code4Lib 2010 Jun 26, 2009 Code4Lib Journal: new issue 7 now available May 15, 2009 Visualizing Media Archives: A Case Study May 15, 2009 The Open Platform Strategy: what it means for library developers May 15, 2009 If You Love Something...Set it Free May 14, 2009 What We Talk About When We Talk About FRBR May 14, 2009 The Rising Sun: Making the most of Solr power May 14, 2009 Great facets, like your relevance, but can I have links to Amazon and Google Book Search? 
May 14, 2009 FreeCite - An Open Source Free-Text Citation Parser May 14, 2009 Freebasing for Fun and Enhancement May 14, 2009 Extending biblios, the open source web based metadata editor May 14, 2009 Complete faceting May 14, 2009 A New Platform for Open Data - Introducing ‡biblios.net Web Services May 13, 2009 Sebastian Hammer, Keynote Address May 13, 2009 Blacklight as a unified discovery platform May 13, 2009 A new frontier - the Open Library Environment (OLE) May 8, 2009 The Dashboard Initiative May 8, 2009 RESTafarian-ism at the NLA May 8, 2009 Open Up Your Repository With a SWORD! May 8, 2009 LuSql: (Quickly and easily) Getting your data from your DBMS into Lucene May 8, 2009 Like a can opener for your data silo: simple access through AtomPub and Jangle May 8, 2009 LibX 2.0 May 8, 2009 How I Failed To Present on Using DVCS for Managing Archival Metadata May 8, 2009 djatoka for djummies May 8, 2009 A Bookless Future for Libraries: A Comedy in 3 Acts May 1, 2009 Why libraries should embrace Linked Data Mar 31, 2009 Code4Lib Journal: new issue 6 now available Feb 28, 2009 See you next year in Asheville Feb 20, 2009 Code4Lib 2009 Lightning Talks Feb 19, 2009 code4lib2010 venue voting Feb 17, 2009 OCLC Grid Services Boot Camp (2009 Preconference) Feb 16, 2009 Code4Lib 2010 Hosting Proposals Jan 29, 2009 Code4Lib Logo Jan 29, 2009 Code4Lib Logo Debuts Jan 28, 2009 Code4Lib 2009 Breakout Sessions Jan 16, 2009 Call for Code4Lib 2010 Hosting Proposals Jan 11, 2009 2009 Code4lib Scholarship Recipients Jan 5, 2009 Code4lib 2009 T-shirt Design Contest Dec 17, 2008 code4lib2009 registration open! Dec 15, 2008 Code4Lib Journal Issue 5 Published Dec 5, 2008 Code4lib 2009 Gender Diversity and Minority Scholarships Dec 5, 2008 Calling all Code4Libers Attending Midwinter Dec 3, 2008 Logo Design Process Launched Dec 3, 2008 Code4Lib 2009 Schedule Dec 2, 2008 2009 Pre-Conferences Nov 25, 2008 Voting On Presentations for code4lib 2009 Open until December 3 Nov 18, 2008 drupal4lib unconference (02/27/2009 Darien, CT) Oct 24, 2008 Call for Proposals, Code4Lib 2009 Conference Oct 10, 2008 ne.code4lib.org Sep 30, 2008 code4lib2009 keynote voting Sep 23, 2008 Logo? You Decide Sep 17, 2008 solrpy google code project Sep 3, 2008 Code4Lib 2009 Sep 3, 2008 Code4Lib 2009 Sponsorship Aug 27, 2008 Code4LibNYC Aug 22, 2008 Update from LinkedIn Jul 15, 2008 LinkedIn Group Growing Fast Jul 3, 2008 code4lib group on LInkedIn Apr 17, 2008 ELPUB 2008 Open Scholarship: Authority, Community and Sustainability in the Age of Web 2.0 Mar 4, 2008 Code4libcon 2008 Lightning Talks Mar 3, 2008 Brown University to Host Code4Lib 2009 Feb 26, 2008 Desktop Presenter software Feb 25, 2008 Presentations from LibraryFind pre-conference Feb 21, 2008 Vote for Code4Lib 2009 Host! Feb 19, 2008 Karen Coyle Keynote - R&D: Can Resource Description become Rigorous Data? Feb 6, 2008 Code4libcon 2008 Breakout Sessions Feb 1, 2008 Call for Code4Lib 2009 Hosting Proposals Jan 30, 2008 Code4lib 2008 Conference T-Shirt Design Jan 7, 2008 Code4lib 2008 Registration now open! 
Dec 27, 2007 Zotero and You, or Bibliography on the Semantic Web Dec 27, 2007 XForms for Metadata creation Dec 27, 2007 Working with the WorldCat API Dec 27, 2007 Using a CSS Framework Dec 27, 2007 The Wayback Machine Dec 27, 2007 The Making of The Code4Lib Journal Dec 27, 2007 The Code4Lib Future Dec 27, 2007 Show Your Stuff, using Omeka Dec 27, 2007 Second Life Web Interoperability - Moodle and Merlot.org Dec 27, 2007 RDF and RDA: declaring and modeling library metadata Dec 27, 2007 ÖpënÜRL Dec 27, 2007 OSS Web-based cataloging tool Dec 27, 2007 MARCThing Dec 27, 2007 Losing sleep over REST? Dec 27, 2007 From Idea to Open Source Dec 27, 2007 Finding Relationships in MARC Data Dec 27, 2007 DLF ILS Discovery Interface Task Force API recommendation Dec 27, 2007 Delivering Library Services in the Web 2.0 environment: OSU Libraries Publishing System for and by Librarians Dec 27, 2007 CouchDB is sacrilege... mmm, delicious sacrilege Dec 27, 2007 Building the Open Library Dec 27, 2007 Building Mountains Out of Molehills Dec 27, 2007 A Metadata Registry Dec 17, 2007 Code4lib 2008 Gender Diversity and Minority Scholarships Dec 12, 2007 Conference Schedule Nov 20, 2007 Code4lib 2008 Keynote Survey Oct 31, 2007 Code4lib 2008 Call for Proposals Oct 16, 2007 Code4Lib 2008 Schedule Jul 18, 2007 code4lib 2008 conference Jul 6, 2007 Random #code4lib Quotes Jun 13, 2007 Request for Proposals: Innovative Uses of CrossRef Metadata May 16, 2007 Library Camp NYC, August 14, 2007 Apr 3, 2007 Code4Lib 2007 - Video, Audio and Podcast Available Mar 14, 2007 Code4Lib 2007 - Day 1 Video Available Mar 13, 2007 Erik Hatcher Keynote Mar 12, 2007 My Adventures in Getting Data into the ArchivistsToolkit Mar 9, 2007 Karen Schneider Keynote "Hurry up please it's time" Mar 9, 2007 Code4Lib Conference Feedback Available Mar 9, 2007 Code4Lib 2007 Video Trickling In Mar 1, 2007 Code4Lib.org Restored Feb 24, 2007 Code4Lib 2008 will be in Portland, OR Feb 13, 2007 Code4Lib Blog Anthology Feb 9, 2007 The Intellectual Property Disclosure Process: Releasing Open Source Software in Academia Feb 6, 2007 Polling for interest in a European code4lib Feb 5, 2007 Call for Proposals to Host Code4Lib 2008 Feb 5, 2007 2007 Code4lib Scholarship Recipients Feb 3, 2007 Delicious! Flare + SIMILE Exhibit Jan 30, 2007 Open Access Self-Archiving Mandate Jan 17, 2007 Evergreen Keynote Jan 17, 2007 Code4Lib 2007 T-Shirt Contest Jan 16, 2007 Stone Soup Jan 10, 2007 #code4lib logging Jan 2, 2007 Two scholarships to attend the 2007 code4lib conference Dec 20, 2006 2007 Conference Schedule Now Available Dec 19, 2006 code4lib 2007 pre-conference workshop: Lucene, Solr, and your data Dec 18, 2006 Traversing the Last Mile Dec 18, 2006 The XQuery Exposé: Practical Experiences from a Digital Library Dec 18, 2006 The BibApp Dec 18, 2006 Smart Subjects - Application Independent Subject Recommendations Dec 18, 2006 Open-Source Endeca in 250 Lines or Less Dec 18, 2006 On the Herding of Cats Dec 18, 2006 Obstacles to Agility Dec 18, 2006 MyResearch Portal: An XML based Catalog-Independent OPAC Dec 18, 2006 LibraryFind Dec 18, 2006 Library-in-a-Box Dec 18, 2006 Library Data APIs Abound! Dec 18, 2006 Get Groovy at Your Public Library Dec 18, 2006 Fun with ZeroConfMetaOpenSearch Dec 18, 2006 Free the Data: Creating a Web Services Interface to the Online Catalog Dec 18, 2006 Forget the Lipstick. This Pig Just Needs Social Skills. 
Dec 18, 2006 Atom Publishing Protocol Primer Nov 27, 2006 barton data Nov 21, 2006 MIT Catalog Data Oct 29, 2006 Code4Lib Downtime Oct 16, 2006 Call for Proposals Aug 24, 2006 Code4Lib2006 Audio Aug 15, 2006 book club Jul 4, 2006 Code4LibCon Site Proposals Jul 1, 2006 Improving Code4LibCon 200* Jun 28, 2006 Code4Lib Conference Hosting Jun 22, 2006 Learning to Scratch Our Own Itches Jun 15, 2006 2007 Code4Lib Conference Jun 15, 2006 2007 Code4Lib Conference Schedule Jun 15, 2006 2007 Code4Lib Conference Lightning Talks Jun 15, 2006 2007 Code4Lib Conference Breakouts Mar 31, 2006 Results of the journal name vote Mar 22, 2006 #dspace Mar 20, 2006 #code4lib logging Mar 14, 2006 regulars on the #code4lib irc channel Mar 14, 2006 Code4lib Journal Name Vote Mar 14, 2006 code4lib journal: mission, format, guidelines Mar 14, 2006 #code4lib irc channel faq Feb 27, 2006 CUFTS2 AIM/AOL/ICQ bot Feb 24, 2006 code4lib journal: draft purpose, format, and guidelines Feb 21, 2006 2006 code4lib Breakout Sessions Feb 17, 2006 unapi revision 1 Feb 15, 2006 code4lib 2006 presentations will be available Feb 14, 2006 planet update Feb 13, 2006 Weather in Corvallis for Code4lib Feb 13, 2006 Holiday Inn Express Feb 9, 2006 conference wiki Jan 31, 2006 Portland Hostel Jan 27, 2006 Lightning Talks Jan 23, 2006 Code4lib 2006 T-Shirt design vote! Jan 19, 2006 Portland Jazz Festival Jan 13, 2006 unAPI version 0 Jan 13, 2006 conference schedule in hCalendar Jan 12, 2006 code4lib 2006 T-shirt design contest Jan 11, 2006 Conference Schedule Set Jan 11, 2006 code4lib 2006 registration count pool Jan 10, 2006 WikiD Jan 10, 2006 The Case for Code4Lib 501c(3) Jan 10, 2006 Teaching the Library and Information Community How to Remix Information Jan 10, 2006 Practical Aspects of Implementing Open Source in Armenia Jan 10, 2006 Lipstick on a Pig: 7 Ways to Improve the Sex Life of Your OPAC Jan 10, 2006 Generating Recommendations in OPACS: Initial Results and Open Areas for Exploration Jan 10, 2006 ERP Options in an OSS World Jan 10, 2006 AHAH: When Good is Better than Best Jan 10, 2006 1,000 Lines of Code, and other topics from OCLC Research Jan 9, 2006 What Blog Applications Can Teach Us About Library Software Architecture Jan 9, 2006 Standards, Reusability, and the Mating Habits of Learning Content Jan 9, 2006 Quality Metrics Jan 9, 2006 Library Text Mining Jan 9, 2006 Connecting Everything with unAPI and OPA Jan 9, 2006 Chasing Babel Jan 9, 2006 Anatomy of aDORe Jan 6, 2006 Voting on Code4Lib 2006 Presentation Proposals Jan 3, 2006 one more week for proposals Dec 19, 2005 code4lib card Dec 15, 2005 planet facelift Dec 6, 2005 Registration is Open Dec 3, 2005 planet code4lib & blogs Dec 1, 2005 Code4lib 2006 Call For Proposals Nov 29, 2005 code4lib Conference 2006: Schedule Nov 21, 2005 panizzi Nov 21, 2005 drupal installed Nov 21, 2005 code4lib 2006 subscribe via RSS Code4Lib Code4Lib code4lib code4lib.social code4lib code4lib We are developers and technologists for libraries, museums, and archives who are dedicated to being a diverse and inclusive community, seeking to share ideas and build collaboration. 
coffeecode-net-9763 ---- Coffee|Code: Dan Scott's blog - coding Coffee|Code: Dan Scott's blog - coding Librarian · Developer Our nginx caching proxy setup for Evergreen Details of our nginx caching proxy settings for Evergreen Enriching catalogue pages in Evergreen with Wikidata An openly licensed JavaScript widget that enriches library catalogues with Wikidata data Wikidata, Canada 150, and music festival data At CAML 2017, Stacy Allison-Cassin and I presented our arguments in favour of using Wikidata is a good fit for communities who want to increase the visibility of Canadian music in Wikimedia Foundation projects. Wikidata workshop for librarians Interested in learning about Wikidata? I delivered a workshop for librarians and archivists at the CAML 2017 preconference. Perhaps you will find the materials I developed useful for your own training purposes. Truly progressive WebVR apps are available offline! I've been dabbling with the A-Frame framework for creating WebVR experiences for the past couple of months, ever since Patrick Trottier gave a lightning talk at the GDG Sudbury DevFest in November and a hands-on session with AFrame in January. The @AFrameVR Twitter feed regularly highlights cool new WebVR apps … schema.org, Wikidata, Knowledge Graph: strands of the modern semantic web My slides from Ohio DevFest 2016: schema.org, Wikidata, Knowledge Graph: strands of the modern semantic web And the video, recorded and edited by the incredible amazing Patrick Hammond: In November, I had the opportunity to speak at Ohio DevFest 2016. One of the organizers, Casey Borders, had invited me … Google Scholar's broken Recaptcha hurts libraries and their users Update 2016-11-28: The brilliant folk at UNC figured out how to fix Google Scholar using a pre-scoped search so that, if a search is launched from the library web site, it will automatically associate that search with the library's licensed resources. No EZProxy required! For libraries, proxying user requests is … PHP's File_MARC gets a new release (1.1.3) Yesterday, just one day before the anniversary of the 1.1.2 release, I published the 1.1.3 release of the PEAR File_MARC library. The only change is the addition of a convenience method for fields called getContents() that simply concatenates all of the subfields together in order, with … PHP's File_MARC gets a new release (1.1.3) Yesterday, just one day before the anniversary of the 1.1.2 release, I published the 1.1.3 release of the PEAR File_MARC library. The only change is the addition of a convenience method for fields called getContents() that simply concatenates all of the subfields together in order, with … Chromebooks and privacy: not always at odds On Friday, June 10th I gave a short talk at the OLITA Digital Odyssey 2016 conference, which had a theme this year of privacy and security. My talk addressed the evolution of our public and loaner laptops over the past decade, from bare Windows XP, to Linux, Windows XP with … Chromebooks and privacy: not always at odds On Friday, June 10th I gave a short talk at the OLITA Digital Odyssey 2016 conference, which had a theme this year of privacy and security. 
My talk addressed the evolution of our public and loaner laptops over the past decade, from bare Windows XP, to Linux, Windows XP with … Library stories: 2020 vision: "Professional research tools" For a recent strategic retreat, I was asked to prepare (as homework) a story about a subject that I'm passionate about, with an idea of where we might see the library in the next three to five years. Here's one of the stories I came up with, in the form … Library stories: 2020 vision: "Professional research tools" For a recent strategic retreat, I was asked to prepare (as homework) a story about a subject that I'm passionate about, with an idea of where we might see the library in the next three to five years. Here's one of the stories I came up with, in the form … Querying Evergreen from Google Sheets with custom functions via Apps Script Our staff were recently asked to check thousands of ISBNs to find out if we already have the corresponding books in our catalogue. They in turn asked me if I could run a script that would check it for them. It makes me happy to work with people who believe … Querying Evergreen from Google Sheets with custom functions via Apps Script Our staff were recently asked to check thousands of ISBNs to find out if we already have the corresponding books in our catalogue. They in turn asked me if I could run a script that would check it for them. It makes me happy to work with people who believe … That survey about EZProxy OCLC recently asked EZProxy clients to fill out a survey about their experiences with the product and to get feedback on possible future plans for the product. About half-way through, I decided it might be a good idea to post my responses. Because hey, if I'm working to help them … That survey about EZProxy OCLC recently asked EZProxy clients to fill out a survey about their experiences with the product and to get feedback on possible future plans for the product. About half-way through, I decided it might be a good idea to post my responses. Because hey, if I'm working to help them … "The Librarian" - an instruction session in the style of "The Martian" I had fun today. A colleague in Computer Science has been giving his C++ students an assignment to track down an article that is only available in print in the library. When we chatted about it earlier this year, I suggested that perhaps he could bring me in as a … "The Librarian" - an instruction session in the style of "The Martian" I had fun today. A colleague in Computer Science has been giving his C++ students an assignment to track down an article that is only available in print in the library. When we chatted about it earlier this year, I suggested that perhaps he could bring me in as a … We screwed up: identities in loosely-coupled systems A few weeks ago, I came to the startling and depressing realization that we had screwed up. It started when someone I know and greatly respect ran into me in the library and said "We have a problem". I'm the recently appointed Chair of our library and archives department, so … We screwed up: identities in loosely-coupled systems A few weeks ago, I came to the startling and depressing realization that we had screwed up. It started when someone I know and greatly respect ran into me in the library and said "We have a problem". 
I'm the recently appointed Chair of our library and archives department, so … Research across the Curriculum The following post dates back to January 15, 2007, when I had been employed at Laurentian for less than a year and was getting an institutional repository up and running.... I think old me had some interesting thoughts! Abstract The author advocates an approach to university curriculum that re-emphasizes the … Research across the Curriculum The following post dates back to January 15, 2007, when I had been employed at Laurentian for less than a year and was getting an institutional repository up and running.... I think old me had some interesting thoughts! Abstract The author advocates an approach to university curriculum that re-emphasizes the … Library and Archives Canada: Planning for a new union catalogue Update 2015-03-03: Clarified (in the Privacy section) that only NRCan runs Evergreen. I attended a meeting with Library and Archives Canada today in my role as an Ontario Library Association board member to discuss the plans around a new Canadian union catalogue based on OCLC's hosted services. Following are some … Library and Archives Canada: Planning for a new union catalogue Update 2015-03-03: Clarified (in the Privacy section) that only NRCan runs Evergreen. I attended a meeting with Library and Archives Canada today in my role as an Ontario Library Association board member to discuss the plans around a new Canadian union catalogue based on OCLC's hosted services. Following are some … Library catalogues and HTTP status codes I noticed in Google's Webmaster Tools that our catalogue had been returning some Soft 404s. Curious, I checked into some of the URIs suffering from this condition, and realized that Evergreen returns an HTTP status code of 200 OK when it serves up a record details page for a record … Library catalogues and HTTP status codes I noticed in Google's Webmaster Tools that our catalogue had been returning some Soft 404s. Curious, I checked into some of the URIs suffering from this condition, and realized that Evergreen returns an HTTP status code of 200 OK when it serves up a record details page for a record … Dear database vendor: defending against sci-hub.org scraping is going to be very difficult Our library receives formal communications from various content/database vendors about "serious intellectual property infringement" on a reasonably regular basis, that urge us to "pay particular attention to proxy security". Here is part of the response I sent to the most recent such request: We use the UsageLimit directives that … Dear database vendor: defending against sci-hub.org scraping is going to be very difficult Our library receives formal communications from various content/database vendors about "serious intellectual property infringement" on a reasonably regular basis, that urge us to "pay particular attention to proxy security". Here is part of the response I sent to the most recent such request: We use the UsageLimit directives that … Putting the "Web" back into Semantic Web in Libraries 2014 I was honoured to lead a workshop and speak at this year's edition of Semantic Web in Bibliotheken (SWIB) in Bonn, Germany. 
It was an amazing experience; there were so many rich projects being described with obvious dividends for the users of libraries, once again the European library community fills … Putting the "Web" back into Semantic Web in Libraries 2014 I was honoured to lead a workshop and speak at this year's edition of Semantic Web in Bibliotheken (SWIB) in Bonn, Germany. It was an amazing experience; there were so many rich projects being described with obvious dividends for the users of libraries, once again the European library community fills … Social networking for researchers: ResearchGate and their ilk The Centre for Research in Occupational Safety and Health asked me to give a lunch'n'learn presentation on ResearchGate today, which was a challenge I was happy to take on... but I took the liberty of stretching the scope of the discussion to focus on social networking in the context of … Social networking for researchers: ResearchGate and their ilk The Centre for Research in Occupational Safety and Health asked me to give a lunch'n'learn presentation on ResearchGate today, which was a challenge I was happy to take on... but I took the liberty of stretching the scope of the discussion to focus on social networking in the context of … How discovery layers have closed off access to library resources, and other tales of schema.org from LITA Forum 2014 At the LITA Forum yesterday, I accused (presentation) most discovery layers of not solving the discoverability problems of libraries, but instead exacerbating them by launching us headlong to a closed, unlinkable world. Coincidentally, Lorcan Dempsey's opening keynote contained a subtle criticism of discovery layers. I wasn't that subtle. Here's why … How discovery layers have closed off access to library resources, and other tales of schema.org from LITA Forum 2014 At the LITA Forum yesterday, I accused (presentation) most discovery layers of not solving the discoverability problems of libraries, but instead exacerbating them by launching us headlong to a closed, unlinkable world. Coincidentally, Lorcan Dempsey's opening keynote contained a subtle criticism of discovery layers. I wasn't that subtle. Here's why … DCMI 2014: schema.org holdings in open source library systems My slides from DCMI 2014: schema.org in the wild: open source libraries++. Last week I was at the Dublin Core Metadata Initiative 2014 conference, where Richard Wallis, Charles MacCathie Nevile and I were slated to present on schema.org and the work of the W3C Schema.org Bibliographic Extension … My small contribution to schema.org this week Version 1.91 of the http://schema.org vocabulary was released a few days ago, and I once again had a small part to play in it. With the addition of the workExample and exampleOfWork properties, we (Richard Wallis, Dan Brickley, and I) realized that examples of these CreativeWork example … My small contribution to schema.org this week Version 1.91 of the http://schema.org vocabulary was released a few days ago, and I once again had a small part to play in it. With the addition of the workExample and exampleOfWork properties, we (Richard Wallis, Dan Brickley, and I) realized that examples of these CreativeWork example … Posting on the Laurentian University library blog Since returning from my sabbatical, I've felt pretty strongly that one of the things our work place is lacking is open communication about the work that we do--not just outside of the library, but within the library as well. 
I'm convinced that the more that we know about the demands … Posting on the Laurentian University library blog Since returning from my sabbatical, I've felt pretty strongly that one of the things our work place is lacking is open communication about the work that we do--not just outside of the library, but within the library as well. I'm convinced that the more that we know about the demands … Cataloguing for the open web: schema.org in library catalogues and websites tldr; my slides are href="http://stuff.coffeecode.net/2014/understanding_schema">here, and the slides from Jenn and Jason are also available from href="http://connect.ala.org/node/222959">ALA Connect. On Sunday, June 29th Jenn Riley, Jason Clark, and I presented at the ALCTS/LITA jointly sponsored session … Cataloguing for the open web: schema.org in library catalogues and websites tldr; my slides are href="http://stuff.coffeecode.net/2014/understanding_schema">here, and the slides from Jenn and Jason are also available from href="http://connect.ala.org/node/222959">ALA Connect. On Sunday, June 29th Jenn Riley, Jason Clark, and I presented at the ALCTS/LITA jointly sponsored session … Linked data interest panel, part 1 Good talk by Richard Wallis this morning at the ALA Annual Conference on publishing entities on the web. Many of his points map extremely closely to what I've been saying and will be saying tomorrow during my own session (albeit with ten fewer minutes). I was particularly heartened to hear … Linked data interest panel, part 1 Good talk by Richard Wallis this morning at the ALA Annual Conference on publishing entities on the web. Many of his points map extremely closely to what I've been saying and will be saying tomorrow during my own session (albeit with ten fewer minutes). I was particularly heartened to hear … RDFa introduction and codelabs for libraries My RDFa introduction and codelab materials for the ALA 2014 preconference on Practical linked data with open source are now online! And now I've finished leading the RDFa + schema.org codelab that I've been stressing over and refining for about a month at the American Library Association annual conference Practical … RDFa introduction and codelabs for libraries My RDFa introduction and codelab materials for the ALA 2014 preconference on Practical linked data with open source are now online! And now I've finished leading the RDFa + schema.org codelab that I've been stressing over and refining for about a month at the American Library Association annual conference Practical … Dropping back into the Semantic Web I've been at the 2014 Extended (formerly European) Semantic Web Conference ( ESWC) in Anissaras, Greece for four days now. My reason for attending was to present my paper Seeding structured data by default in open source library systems (presentation) (paper). It has been fantastic. As a librarian attending a conference … Dropping back into the Semantic Web I've been at the 2014 Extended (formerly European) Semantic Web Conference ( ESWC) in Anissaras, Greece for four days now. My reason for attending was to present my paper Seeding structured data by default in open source library systems (presentation) (paper). It has been fantastic. As a librarian attending a conference … RDFa, schema.org, and open source library systems Two things of note: I recently submitted the camera-ready copy for my ESWC 2014 paper, Seeding Structured Data by Default via Open Source Library Systems (**preprint**). 
The paper focuses on the work I've done with Evergreen, Koha, and VuFind to use emerging web standards such as RDFa Lite and schema … RDFa, schema.org, and open source library systems Two things of note: I recently submitted the camera-ready copy for my ESWC 2014 paper, Seeding Structured Data by Default via Open Source Library Systems (**preprint**). The paper focuses on the work I've done with Evergreen, Koha, and VuFind to use emerging web standards such as RDFa Lite and schema … Mapping library holdings to the Product / Offer mode in schema.org Back in August, I mentioned that I taught Evergreen, Koha, and VuFind how to express library holdings in schema.org via the http://schema.org/Offer class. What I failed to mention was how others can do the same with their own library systems (well, okay, I linked to the … Mapping library holdings to the Product / Offer mode in schema.org Back in August, I mentioned that I taught Evergreen, Koha, and VuFind how to express library holdings in schema.org via the http://schema.org/Offer class. What I failed to mention was how others can do the same with their own library systems (well, okay, I linked to the … What would you understand if you read the entire world wide web? On Tuesday, February 4th, I'll be participating in Laurentian University's Research Week lightning talks. Unlike most five-minute lightning talk events in which I've participated, the time limit for each talk tomorrow will be one minute. Imagine 60 different researchers getting up to summarize their research in one minute each, and … What would you understand if you read the entire world wide web? On Tuesday, February 4th, I'll be participating in Laurentian University's Research Week lightning talks. Unlike most five-minute lightning talk events in which I've participated, the time limit for each talk tomorrow will be one minute. Imagine 60 different researchers getting up to summarize their research in one minute each, and … Ups and downs Tuesday was not the greatest day, but at least each setback resulted in a triumph... First, the periodical proposal for schema.org--that I have poured a good couple of months of effort into--took a step closer to reality when Dan Brickley announced on the public-vocabs list that he had … Ups and downs Tuesday was not the greatest day, but at least each setback resulted in a triumph... First, the periodical proposal for schema.org--that I have poured a good couple of months of effort into--took a step closer to reality when Dan Brickley announced on the public-vocabs list that he had … Broadening support for linked data in MARC The following is an email that I sent to the MARC mailing list on January 24, 2014 that might be of interest to those looking to provide better support for linked data in MARC (hopefully as just a transitional step): In the spirit of making it possible to express linked … Broadening support for linked data in MARC The following is an email that I sent to the MARC mailing list on January 24, 2014 that might be of interest to those looking to provide better support for linked data in MARC (hopefully as just a transitional step): In the spirit of making it possible to express linked … Want citations? Release your work! Last week I was putting the finishing touches on the first serious academic paper I have written in a long time, and decided that I wanted to provide backup for some of the assertions I had made. Naturally, the deadline was tight, so getting any articles via interlibrary loan was … Want citations? 
Release your work! Last week I was putting the finishing touches on the first serious academic paper I have written in a long time, and decided that I wanted to provide backup for some of the assertions I had made. Naturally, the deadline was tight, so getting any articles via interlibrary loan was … File_MARC: 1.0.1 release fixes data corruption bug I released File_MARC 1.0.1 yesterday after receiving a bug report from the most excellent Mark Jordan about a basic (but data corrupting) problem that had existed since the very early days (almost seven years ago). If you generate MARC binary output from File_MARC, you should upgrade immediately. In … File_MARC: 1.0.1 release fixes data corruption bug I released File_MARC 1.0.1 yesterday after receiving a bug report from the most excellent Mark Jordan about a basic (but data corrupting) problem that had existed since the very early days (almost seven years ago). If you generate MARC binary output from File_MARC, you should upgrade immediately. In … Talk proposal: Structuring library data on the web with schema.org: we're on it! I submitted the following proposal to the Library Technology Conference 2014 and thought it might be of general interest. Structuring library data on the web with schema.org: we're on it! Abstract Until recently, there has been a disappointing level of adoption of schema.org structured data in traditional core … Talk proposal: Structuring library data on the web with schema.org: we're on it! I submitted the following proposal to the Library Technology Conference 2014 and thought it might be of general interest. Structuring library data on the web with schema.org: we're on it! Abstract Until recently, there has been a disappointing level of adoption of schema.org structured data in traditional core … File_MARC makes it to stable 1.0.0 release (finally!) Way back in 2006, I thought "It's a shame there is no PHP library for parsing MARC records!", and given that much of my most recent coding experience was in the PHP realm, I thought it would be a good way of contributing to the world of code4lib. Thus File_MARC … File_MARC makes it to stable 1.0.0 release (finally!) Way back in 2006, I thought "It's a shame there is no PHP library for parsing MARC records!", and given that much of my most recent coding experience was in the PHP realm, I thought it would be a good way of contributing to the world of code4lib. Thus File_MARC … Finally tangoed with reveal.js to create presentations ... and I have enjoyed the dance. Yes, I know I'm way behind the times. Over the past few years I was generating presentations via asciidoc, and I enjoyed its very functional approach and basic output. However, recently I used Google Drive to quickly create a few slightly prettier but much … Finally tangoed with reveal.js to create presentations ... and I have enjoyed the dance. Yes, I know I'm way behind the times. Over the past few years I was generating presentations via asciidoc, and I enjoyed its very functional approach and basic output. However, recently I used Google Drive to quickly create a few slightly prettier but much … RDFa and schema.org all the library things TLDR: The Evergreen and Koha integrated library systems now express their record details in the schema.org vocabulary out of the box using RDFa. Individual holdings are expressed as Offer instances per the W3C Schema Bib Extension community group proposal to parallel commercial sales offers. 
RDFa and schema.org all the library things TLDR: The Evergreen and Koha integrated library systems now express their record details in the schema.org vocabulary out of the box using RDFa. Individual holdings are expressed as Offer instances per the W3C Schema Bib Extension community group proposal to parallel commercial sales offers. And I have published a … A Flask of full-text search in PostgreSQL Update: More conventional versions of the slides are available from Google Docs or on Speakerdeck (PDF). On August 10, 2013, I gave the following talk at the PyCon Canada 2013 conference: I'm a systems librarian at Laurentian University. For the past six years, my day job and research … Parsing the schema.org vocabulary for fun and frustration For various reasons I've spent a few hours today trying to parse the schema.org vocabulary into a nice, searchable database structure. Unfortunately, for a linked data effort that's two years old now and arguably one of the most important efforts out there, it's been an exercise in frustration. OWL … Linked data irony, example one of probably many I'm currently ramping up my knowledge of the linked data world, and ran across the Proceedings of the WWW2013 Workshop on Linked Data on the Web. Which are published on the web (yay!) as open access (yay!) in PDF (what?). Thus, the papers from the linked data workshop at the W3 … PyCon Canada 2013 - PostgreSQL full-text search and Flask On August 10, 2013, I'll be giving a twenty-minute talk at PyCon Canada on A Flask of full-text search with PostgreSQL. I'm very excited to be talking about Python, at a Python conference, and to be giving the Python audience a peek at PostgreSQL's full-text search capabilities. With a twenty …
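For the PostgreSQL full-text search talks mentioned above, here is a minimal sketch of the kind of query involved, run from Python with psycopg2; the connection string, table and column names ("docs", "title", "body") are placeholders rather than anything taken from the talk.

```python
# Minimal PostgreSQL full-text search example from Python, assuming a 'docs' table
# with 'title' and 'body' columns (placeholder names).
import psycopg2

conn = psycopg2.connect("dbname=example")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT title,
               ts_rank_cd(to_tsvector('english', body), query) AS rank
        FROM docs, plainto_tsquery('english', %s) AS query
        WHERE to_tsvector('english', body) @@ query
        ORDER BY rank DESC
        LIMIT 10
        """,
        ("full text search",),
    )
    for title, rank in cur.fetchall():
        print(f"{rank:.3f}  {title}")
```

In practice the to_tsvector() value would usually be stored in an indexed column rather than recomputed per query, but the sketch keeps the moving parts visible.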
CARLCore Metadata Application Profile for institutional repositories A long time ago, in what seemed like another life, I attended the Access 2006 conference as a relatively new systems librarian at Laurentian University. The subject of the preconference was this totally new-to-me thing called "institutional repositories", which I eventually worked out were basically web applications oriented towards content … Making the Evergreen catalogue mobile-friendly via responsive CSS Back in November the Evergreen community was discussing the desire for a mobile catalogue, and expressed a strong opinion that the right way forward would be to teach the current catalogue to be mobile-friendly by applying principles of responsive design. In fact, I stated: Almost all of this can be … Structured data: making metadata matter for machines Update 2013-04-18: Now with video of the presentation, thanks to the awesome #egcon2013 volunteers! I've been attending the Evergreen 2013 Conference in beautiful Vancouver. This morning, I was honoured to be able to give a presentation on some of the work I've been doing on implementing linked data via schema … Introducing version control & git in 1.5 hours to undergraduates Our university offers a Computer Science degree, but the formal curriculum does not cover version control (or a number of other common tools and practices in software development). Students that have worked for me in part-time jobs or summer positions have said things like: if it wasn't for that one … Triumph of the tiny brain: Dan vs. Drupal / Panels A while ago I inherited responsibility for a Drupal 6 instance and a rather out-of-date server. (You know it's not good when your production operating system is so old that it is no longer getting security updates). I'm not a Drupal person. I dabbled with Drupal years and years ago …
Finding DRM-free books on the Google Play store John Mark Ockerbloom recently said, while trying to buy a DRM-free copy of John Scalzi's Redshirts on the Google Play Store: "The catalog page doesn't tell me what format it's in, or whether it has DRM; it instead just asks me to sign in to buy it." I … First Go program: converting Google Scholar XML holdings to EBSCO Discovery Service holdings Update 2012-06-19: And here's how to implement stream-oriented XML parsing Many academic libraries are already generating electronic resource holdings summaries in the Google Scholar XML holdings format, and it seems to provide most of the metadata you would need to provide a discovery layer summary in a nice, granular format … What does a system librarian do? Preface: I'm talking to my daughter's kindergarten class tomorrow about my job. Exciting! So I prepped a little bit; it will probably go entirely different, but here's how it's going to go in my mind... My name is Dan Scott. I'm Amber's dad. I'm a systems librarian … Farewell, old Google Books APIs Since the announcement of the new v1 Google Books API, I've been doing a bit of work with it in Python (following up on my part of the conversation). Today, Google announced that many of their older APIs were now officially deprecated. Included in that list are the Google Books … The new Google Books API and possibilities for libraries On the subject of the new Google Books API that was unveiled during the Google IO 2011 conference last week, Jonathan Rochkind states: Once you have an API key, it can keep track of # requests for that key — it's not clear to me if they rate limit you, and … Creating a MARC record from scratch in PHP using File_MARC In the past couple of days, two people have written me email essentially saying: "Dan, this File_MARC library sounds great - but I can't figure out how to create a record from scratch with it! Can you please help me?" Yes, when you're dealing with MARC, you'll quickly get all weepy … Access Conference 2011 in beautiful British Columbia The official announcement for the Canadian Library Association (CLA) Emerging Technology Interest Group (ETIG)-sponsored Access Conference for 2011 went out back in November, announcing Vancouver, British Columbia, as the host. Note that the schedule has changed from its original dates to October 19-22! I've told a number of people … Troubleshooting Ariel send and receive functionality I'm posting the following instructions for testing the ports required by Ariel interlibrary loan software. I get requests for this information a few times a year, and at some point it will be easier to find on my blog than to dig through my email archives from over 3 years …
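The Go post above points to stream-oriented XML parsing as the way to handle large institutional holdings files without reading them wholly into memory. A comparable sketch in Python with xml.etree.ElementTree.iterparse is shown below; the element names ("item", "title", "issn") are assumptions about the holdings format, not details taken from the post.

```python
# Stream-oriented XML parsing sketch: iterate over a (potentially large) holdings
# file one element at a time. Element names here are illustrative assumptions.
import xml.etree.ElementTree as ET

def iter_holdings(path):
    for event, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == "item":
            yield {
                "title": elem.findtext("title", default=""),
                "issn": elem.findtext("issn", default=""),
            }
            elem.clear()  # release the subtree we just processed

for holding in iter_holdings("institutional_holdings.xml"):
    print(holding["issn"], holding["title"])
```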
Chilifresh-using libraries: are you violating copyright? When I was preparing my Access 2010 presentation about social sharing and aggregation in library software, I came across Chilifresh, a company that aggregates reviews written by library patrons from across libraries that subscribe to the company's review service. I was a bit disappointed to see that the service almost … On avoiding accusations of forking a project Sometimes forking a project is necessary to reassert community control over a project that has become overly dominated by a single corporation: see OpenIndiana and LibreOffice for recent examples. And in the world of distributed version control systems, forking is viewed positively; it's a form of evolution, where experimental … Library hackers want you to throw down the gauntlet On October 13th, a very special event is happening: the Access Hackfest. A tradition since Access 2002, the Hackfest brings together library practitioners of all kinds to tackle challenges and problems from the mundane to the sublime to the ridiculous. If you can imagine a spectrum with three axes, you … File_MARC 0.6.0 - now offering two tasty flavours of MARC-as-JSON output I've just released the PHP PEAR library File_MARC 0.6.0. This release brings two JSON serialization output methods for MARC to the table: toJSONHash() returns JSON that adheres to Bill Dueber's proposal for the array-oriented MARC-HASH JSON format at New interest in MARC-HASH JSON toJSON() returns JSON that adheres … In which I perceive that gossip is not science Marshall Breeding published the results of his 2009 International Survey of Library Automation a few days ago. Juicy stuff, with averages, medians, and modes for the negative/positive responses on a variety of ILS and vendor-related questions, and some written comments from the respondents. One would expect the library geek … PKG_CHECK_MODULES syntax error near unexpected token 'DEPS,' The next time you bash your brains against autotools for a while wondering why your perfectly good PKG_CHECK_MODULES() macro, as cut and paste directly from the recommended configure.ac entry for the package you're trying to integrate (in this case libmemcached), and you get the error message PKG_CHECK_MODULES syntax error … MARC library for C# coders C# isn't in my go-to list of programming languages, but I can understand why others would be interested in developing applications in C#. So it's good news to the C# community of library developers (it would be interesting to find out how many of you are out there) that there … Doing useful things with the TXT dump of SFX holdings, part 1: database There must be other people who have done much more intelligent things than me with the TXT dump of SFX holdings that you can generate via the Web administration interface, but as I've gone through this process at least twice and rediscovered it each time, perhaps I'll save myself an hour … Transparent acquisitions budgets and expenditures for academic libraries In my most recent post over at the Academic Matters site, after a general discussion about "new books lists" in academic libraries, I tackle one of the dirty laundry areas for academic libraries: exposing how collection development funds are allocated to departments.
Here's a relevant quote: For 2008-2009, we decided … Making Skype work in a Windows XP VirtualBox guest instance If you, like me, install Skype in a Windows XP VirtualBox guest instance running on an Ubuntu host on a ThinkPad T60 with an Intel 2300 dual-core 32-bit processor, it might throw Windows exceptions and generate error reports as reported in VirtualBox ticket #1710. If you then go into your … In which my words also appear elsewhere I'm excited to announce the availability of my first post as an invited contributor to the More than Bookends blog over at the revamped Academic Matters web site. My fellow contributors are Anne Fullerton and Amy Greenberg, and I'm delighted to be included with them in our appointed task of … Presentation: LibX and Zotero Direct link to the instructional presentation on LibX and Zotero at Laurentian University (ODT) (PDF) I had the pleasure of giving an instructional session to a class of graduate students on Monday, November 24th. The topic I had been asked to present was an extended version of the Artificially Enhanced … Archive of OCLC WorldCat Policy as posted 2008-11-02 I noticed last night (Sunday, November 2nd, 2008) that the new and much-anticipated / feared OCLC WorldCat Policy had been posted. As far as the clarified terms went, I was willing to give them the benefit of the doubt until they were actually posted. I was first alerted to the freshly … Dear Dan: why is using Flash for navigation a bad idea? I received the following email late last week, and took the time to reply to it tonight. I had originally been asked by a friend to help diagnose why his organization's site navigation wasn't working in some of his browsers. I noticed that the navigation bar was implemented in Flash … Boss me around, s'il vous plait My place of work, Laurentian University, is looking for a new Director of the J.N. Desmarais Library. The call for applications closes October 30th. I think our library has done some impressive work (participating in the food security project for the Democratic Republic of Congo, building the Mining Environment … Software Freedom Day 2008 - Sudbury I opted to do something out of the unusual (for me) this year when I learned about Software Freedom Day; I signed up to organize an event in Sudbury. Given everything that was already on my plate, it was pure foolishness to do so - but it was also important to … In which digital manifestations of myself plague the Internets Over the past few months, I've been fortunate enough to participate in a few events that have been recorded and made available on the 'net for your perpetual amusement. Well - amusing if you're a special sort of person. Following are the three latest such adventures, in chronological order: CouchDB: delicious … Test server strategies Occasionally on the #OpenILS-Evergreen IRC channel, a question comes up what kind of hardware a site should buy if they're getting serious about trying out Evergreen. I had exactly the same chat with Mike Rylander back in December, so I thought it might be useful to share the strategy we … Inspiring confidence that my problem will be solved Hmm. I think I'm in trouble if the support site itself is incapable of displaying accented characters properly. Corrupted characters in a problem report about corrupted characters. Oh dear. 
My analysis of the problem is that the content in the middle is contained within a frame, and is actually encoded … CouchDB: delicious sacrilege Well, the talk about CouchDB (an open-source document database similar in concept to Lotus Notes, but with a RESTful API and JSON as an interchange format) wasn't as much of a train wreck as it could have been. I learned a lot putting it together, and had some fun with … Oooh... looks like I've got (even more) work cut out for me PHP is getting a native doubly-linked list structure. This is fabulous news; when I wrote the File_MARC PEAR package, I ended up having to implement a linked list class in PEAR to support it. File_MARC does its job today (even though I haven't taken it out of alpha yet), but … Geek triumph What a night. I upgraded Serendipity, DokuWiki, Drupal, involving four different servers and three different Linux distros, and shifted one application from one server to another (with seamless redirects from the old server to the new) with close to no downtime. I think this is the first time I've completed … A chance to work at Laurentian University library Hey folks, if you're interested in working at Laurentian University, we've got a couple of tenure-track positions looking for qualified people who can stand the thought of working with me... (nothing like narrowing the field dramatically, ah well). The following position descriptions are straight out of the Employment Vacancies page … Ariel: Go back to your room, NOW! I've been working on automating the delivery of electronic documents to our patrons; most of the work over the summer was spent in ensuring that we had our legal and policy bases covered. I read through the documentation for Ariel, our chosen ILL software, to ensure that everything we wanted … "A canonical example of a next-generation OPAC?" Ooh, yes, I remember writing that now. Not about Evergreen, which has book bags and format limiters and facets and whiz-bangy unAPI goodness whose potential for causing mayhem has barely been scratched - but about Fac-Back-OPAC, the Django-and-Solr-and-Jython beast that Mike Beccaria and I picked up from Casey Durfee's scraps pile … The pain: discovery layer selection I returned from a week of vacation to land solidly in the middle of a discovery layer selection process -- not for our library, yet, but from a consortial perspective clearly having some impact on possible decisions for us further on down the road. As the systems librarian, I was nominated … Access 2007 draft program is online! I had been getting anxious about the lack of news on the Access 2007 conference front, but just saw in my trusty RSS feed that the draft program schedule is now available. I'm already looking forward to Jessamyn West's opening keynote and Roy Tennant's closing keynote. They always bring … Evergreen VMWare image available for download After much iteration and minor bug-squashing in my configuration, I am pleased to announce the Evergreen on Gentoo VMWare image is available for download. The download itself is approximately 500MB as a zipped image; when you unzip the image, it will require approximately 6GB of disk space. 
(1) Basic instructions … In which I make one apology, and two lengthy explanations I recently insulted Richard Wallis and Rob Styles of Talis by stating on Dan Chudnov's blog: To me it felt like Talis was in full sales mode during both Richard's API talk and Rob's lightning talk I must apologize for using the terms "sales mode" and "sales pitch" to describe … FacBackOPAC: making Casey Durfee's code talk to Unicorn For the past couple of days, I've been playing with Casey Durfee's code that uses Solr and Django to offer a faceted catalogue. My challenge? Turn a dense set of code focused on Dewey and Horizon ILS into a catalogue that speaks LC and Unicorn. Additionally, I want it to … Lightning talk: File_MARC for PHP I gave a lightning talk at the code4lib conference today on “File_MARC for PHP” introducing the File_MARC library to anybody who hasn't already heard about it. I crammed nine slides of information into five minutes, which was hopefully enough to convince people to start using it and provide feedback on … Google Summer of code4lib? Google just announced that they will start accepting applications in March for the Google Summer of Code (GSoC) 2007. In 2006, over 100 organizations participated in the GSoC, and Google expects to have a similar number participating in 2007. There are no lack of potential open-source development projects in the … Long time, no wild conjecture So here's the first of two posts based on purely wild conjecture. In a lengthening chain of trackbacks, Ryan Eby mentioned Christina's observation that Springlink has started displaying Google ads, presumably to supplement their subscription and pay-per-article income. Ryan goes on to wonder: Will vendors continue with the subscription model … A short-term SirsiDynix prediction The second of tonight's wild conjecture-based predictions. One of the things that I was thinking about as I was shovelling the snow off our driveway on Monday (other than yes! finally some snow... one of these days Amber is going to go rolling around in it) was the position that … Reflections at the start of 2007 2006 was a year full of change - wonderful, exhausting change. Here's a month-by-month summary of the highlights of 2006: January I did a whole lot of work on the PECL ibm_db2 extension, reviewed a good book on XML and PHP, and finally fixed up my blog a little bit. I've … Oh, Vista has _acquired_ SirsiDynix... A little over a week ago, I made the following prediction following the extremely under-the-radar press release on December 22nd that Vista Equity Partners was investing in SirsiDynix: I'll go out on a limb and say that a merger or acquisition of SirsiDynix in 2007 is unlikely (33% confidence), but … Musing about SirsiDynix's new investment partner Sirsi Corporation merged with Dynix Corporation in June 2005. Now SirsiDynix has announced that Vista Equity Partners is investing in their company. Let's take a look at Vista's investment philosophy: *We invest in companies that uniquely leverage technology to deliver best-of-class products or services.* I wonder if Vista confused "most … Save your forehead from flattening prematurely I gave up on trying to get Ubuntu 6.10 (Edgy Eft) to run ejabberd today; it looks like there are some fundamental issues going on between the version of erlang and the version of ejabberd that get bundled together. 
That was a fairly serious setback to my "Evergreen on … BiblioCommons wireframe walk-through After the Future of the ILS Symposium wrapped up, Beth Jefferson walked some of us through the current state of the BiblioCommons mocked-up Web UI for public library catalogs; the project grew out of a youth literacy project designed to encourage kids to read through the same sort of social … Future of the ILS Symposium: building our community and a business case I headed down to Windsor early on Tuesday morning for the Future of the ILS Symposium hosted by the Leddy Library at the University of Windsor. It was a good thing I decided to take the 12 hours of bus + train approach to getting there, as Sudbury's airport was completely … Neat-o: Archimède uses Apache Derby A while back I mentioned on the DSpace-devel mailing list that I was interested in adapting DSpace to use embedded Apache Derby as the default database, rather than PostgreSQL, as a means of lowering the installation and configuration barriers involved with setting up access to an external database. I haven't … PEAR File_MARC 0.1.0 alpha officially released Just a short note to let y'all know that I received the thumbs-up from my fellow PEAR developers to add File_MARC as an official PEAR package. What does this mean? Well, assuming you have PHP 5.1+ and PEAR installed, you can now download and install File_MARC and its prerequisite … Belated Access 2006 notes: Saturday, Oct. 14th Final entry in publishing my own hastily jotted Access 2006 conference notes--primarily for my own purposes, but maybe it will help you indirectly find some real content relating to your field of interest at the official podcast/presentation Web site for Access 2006. Contents include: consortial updates from ASIN, Quebec … Getting the Goods: Libraries and the Last Mile In my continuing series of publishing my Access 2006 notes, Roy Tennant's keynote on finishing the task of connecting our users to the information they need is something to which every librarian should pay attention. If you don't understand something I've written, there's always the podcast of Roy's talk. In … Access 2006 notes: October 12 My continuing summaries from Access 2006. Thursday, October 12th was the first "normal" day of the conference featuring the following presentations: Open access, open source, content deals: who pays? (Leslie Weir) Our Ontario: Yours to Recover (Art Rhyno, Walter Lewis) Improving the Catalogue Interface using Endeca (Tito Sierra) Lightning talks … Library Geeks in human form So, I think I read somewhere on #code4lib that Dan Chudnov, the most excellent host of the Library Geeks podcast, refused to make human-readable links to the MP3 files for the podcasts available in plain old HTML because he had bought into the stodgy old definition of podcasts (hah! "stodgy … Double-barreled PHP releases I'm the proud parent of two new releases over the past couple of days: one official PEAR release for linked list fans, and another revision of the File_MARC proposal for library geeks. Structures_LinkedList A few days ago marked the first official PEAR release of the Structures_LinkedList. Yes, it's only at … Feeling sorry for our vendor So I'm here in rainy Alabama (the weather must have followed me from Ottawa) taking a training course from our ILS vendor. 
I'm getting some disturbing insights into the company that are turning my general state of disbelief at the state of the system that we're paying lots of money … Backlog of Access 2006 notes Following on my plea for access to Access presentations, I'm in the process of posting the notes I took at the CARL institutional repository pre-conference and Access 2006. I probably should have posted these to a wiki so that others (like the presenters) could go ahead and make corrections/additions … Calling for access to all future Access presentations It's a bit late now, but as the guy in the corner with the clicky keyboard desperately trying to take notes during the presentations (when not stifling giggles and snorts from #code4lib), I would be a lot more relaxed if I was certain that the presentations were going to be … Secretssss of Free WiFi at Access 2006 The bulk of the Access 2006 conference is being held at a hotel-that-shall-not-be-named-for-reasons-that-will-become-apparent-shortly in Ottawa this week. I was at the CARL Pre-Conference on Institutional Repositories today and a kind man (Wayne Johnston from the University of Guelph) tipped me off that the hotel's pay-for-wifi system is a little bit … Laundry list systems librarians On the always excellent Techessence, Dorothea Salo posted Hiring a systems librarian. The blog post warned against libraries who put together a "laundry-list job description" for systems librarians: Sure, it'd be nice to have someone who can kick-start a printer, put together a desktop machine from scraps, re-architect a website … File_MARC and Structure_Linked_List: new alpha releases Earlier in the month I asked for feedback on the super-alpha MARC package for PHP. Most of the responses I received were along the lines of "Sounds great!" but there hasn't been much in the way of real suggestions for improvement. In the mean time, I've figured out (with Lukas … Super-alpha MARC package for PHP: comments requested Okay, I've been working on this project (let's call it PEAR_MARC, although it's not an official PEAR project yet) in my spare moments over the past month or two. It's a new PHP package for working with MARC records. The package tries to follow the PEAR project standards (coding, documentation …
coinmarketcap-com-4395 ---- Tether price today, USDT live marketcap, chart, and info | CoinMarketCap
USDT Price Live Data The live Tether price today is $1.00 USD with a 24-hour trading volume of $78,039,058,695 USD. Tether is down 0.01% in the last 24 hours. The current CoinMarketCap ranking is #5, with a live market cap of $63,262,499,206 USD. It has a circulating supply of 63,246,734,131 USDT coins and the max. supply is not available. If you would like to know where to buy Tether, the top exchanges for trading in Tether are currently Binance, Tokocrypto, OKEx, CoinTiger, and Huobi Global. You can find others listed on our crypto exchanges page. What Is Tether (USDT)? USDT is a stablecoin (stable-value cryptocurrency) that mirrors the price of the U.S. dollar, issued by a Hong Kong-based company Tether. The token's peg to the USD is achieved via maintaining a sum of commercial paper, fiduciary deposits, cash, reserve repo notes, and treasury bills in reserves that is equal in USD value to the number of USDT in circulation. Originally launched in July 2014 as Realcoin, a second-layer cryptocurrency token built on top of Bitcoin's blockchain through the use of the Omni platform, it was later renamed to USTether, and then, finally, to USDT. In addition to Bitcoin's, USDT was later updated to work on the Ethereum, EOS, Tron, Algorand, and OMG blockchains. The stated purpose of USDT is to combine the unrestricted nature of cryptocurrencies — which can be sent between users without a trusted third-party intermediary — with the stable value of the US dollar. Who Are The Founders Of Tether? USDT — or as it was known at the time, Realcoin — was launched in 2014 by Brock Pierce, Reeve Collins and Craig Sellars. Brock Pierce is a well-known entrepreneur who has co-founded a number of high-profile projects in the crypto and entertainment industries. In 2013, he co-founded a venture capital firm Blockchain Capital, which by 2017 had raised over $80 million in funding. In 2014, Pierce became the director of the Bitcoin Foundation, a nonprofit established to help improve and promote Bitcoin. Pierce has also co-founded Block.one, the company behind EOS, one of the largest cryptocurrencies on the market. Reeve Collins was the CEO of Tether for the first two years of its existence. Prior to that, he had co-founded several successful companies, such as the online ad network Traffic Marketplace, entertainment studio RedLever and gambling website Pala Interactive.
As of 2020, Collins is heading SmarMedia Technologies, a marketing and advertising tech company. Other than working on Tether, Craig Sellars has been a member of the Omni Foundation for over six years. Its Omni Protocol allows users to create and trade smart-contract based properties and currencies on top of Bitcoin’s blockchain. Sellars has also worked in several other cryptocurrency companies and organizations, such as Bitfinex, Factom, Synereo and the MaidSafe Foundation. What Makes Tether Unique? USDT’s unique feature is the fact that its value is guaranteed by Tether to remain pegged to the U.S. dollar. According to Tether, whenever it issues new USDT tokens, it allocates the same amount of USD to its reserves, thus ensuring that USDT is fully backed by cash and cash equivalents. The famously high volatility of the crypto markets means that cryptocurrencies can rise or fall by 10-20% within a single day, making them unreliable as a store of value. USDT, on the other hand, is protected from these fluctuations. This property makes USDT a safe haven for crypto investors: during periods of high volatility, they can park their portfolios in Tether without having to completely cash out into USD. In addition, USDT provides a simple way to transact a U.S. dollar equivalent between regions, countries and even continents via blockchain — without having to rely on a slow and expensive intermediary, like a bank or a financial services provider. However, over the years, there have been a number of controversies regarding the validity of Tether’s claims about their USD reserves, at times disrupting USDT’s price, which went down as low as $0.88 at one point in its history. Many have raised concerns about the fact that Tether’s reserves have never been fully audited by an independent third party. Related Pages: Looking for market and blockchain data for BTC? Visit our block explorer. Want to buy crypto? Use CoinMarketCap’s guide. How Many Tether (USDT) Coins Are There In Circulation? There is no hard-coded limit on the total supply of USDT — given the fact that it belongs to a private company, theoretically, its issuance is limited only by Tether’s own policies. However, because Tether claims that every single USDT is supposed to be backed by one U.S. dollar, the amount of tokens is limited by the company’s actual cash reserves. Moreover, Tether does not disclose its issuance schedules ahead of time. Instead, they provide daily transparency reports, listing the total amount of their asset reserves and liabilities, the latter corresponding to the amount of USDT in circulation. As of September 2020, there are over 14.4 billion USDT tokens in circulation, which are backed by $14.6 billion in assets, according to Tether. How Is the Tether Network Secured? USDT does not have its own blockchain — instead, it operates as a second-layer token on top of other cryptocurrencies’ blockchains: Bitcoin, Ethereum, EOS, Tron, Algorand, Bitcoin Cash and OMG, and is secured by their respective hashing algorithms. Where Can You Buy Tether (USDT)? It is possible to buy Tether / USDT on a large number of cryptocurrency exchanges. In fact, USDT’s average daily trading volume is often on par or even exceeds that of Bitcoin. It is especially prominent on those exchanges where fiat-to-crypto trading pairs are unavailable, as it provides a viable alternative to USD. 
Here are some of the most popular exchanges that support Tether trading: Binance OKEx HitBTC Huobi Global
coinmarketcap-com-7001 ---- Bitcoin price today, BTC live marketcap, chart, and info | CoinMarketCap
BTC Price Live Data The live Bitcoin price today is $46,403.52 USD with a 24-hour trading volume of $33,030,738,408 USD. Bitcoin is down 0.22% in the last 24 hours. The current CoinMarketCap ranking is #1, with a live market cap of $871,721,138,080 USD. It has a circulating supply of 18,785,668 BTC coins and a max. supply of 21,000,000 BTC coins. If you would like to know where to buy Bitcoin, the top exchanges for trading in Bitcoin are currently Binance, Tokocrypto, OKEx, CoinTiger, and Bybit. You can find others listed on our crypto exchanges page. What Is Bitcoin (BTC)? Bitcoin is a decentralized cryptocurrency originally described in a 2008 whitepaper by a person, or group of people, using the alias Satoshi Nakamoto. It was launched soon after, in January 2009. Bitcoin is a peer-to-peer online currency, meaning that all transactions happen directly between equal, independent network participants, without the need for any intermediary to permit or facilitate them. Bitcoin was created, according to Nakamoto's own words, to allow "online payments to be sent directly from one party to another without going through a financial institution." Some concepts for a similar type of a decentralized electronic currency precede BTC, but Bitcoin holds the distinction of being the first-ever cryptocurrency to come into actual use. Who Are the Founders of Bitcoin? Bitcoin's original inventor is known under a pseudonym, Satoshi Nakamoto. As of 2020, the true identity of the person — or organization — that is behind the alias remains unknown. On October 31, 2008, Nakamoto published Bitcoin's whitepaper, which described in detail how a peer-to-peer, online currency could be implemented. They proposed to use a decentralized ledger of transactions packaged in batches (called "blocks") and secured by cryptographic algorithms — the whole system would later be dubbed "blockchain." Just two months later, on January 3, 2009, Nakamoto mined the first block on the Bitcoin network, known as the genesis block, thus launching the world's first cryptocurrency. However, while Nakamoto was the original inventor of Bitcoin, as well as the author of its very first implementation, over the years a large number of people have contributed to improving the cryptocurrency's software by patching vulnerabilities and adding new features.
Bitcoin’s source code repository on GitHub lists more than 750 contributors, with some of the key ones being Wladimir J. van der Laan, Marco Falke, Pieter Wuille, Gavin Andresen, Jonas Schnelli and others. What Makes Bitcoin Unique? Bitcoin’s most unique advantage comes from the fact that it was the very first cryptocurrency to appear on the market. It has managed to create a global community and give birth to an entirely new industry of millions of enthusiasts who create, invest in, trade and use Bitcoin and other cryptocurrencies in their everyday lives. The emergence of the first cryptocurrency has created a conceptual and technological basis that subsequently inspired the development of thousands of competing projects. The entire cryptocurrency market — now worth more than $300 billion — is based on the idea realized by Bitcoin: money that can be sent and received by anyone, anywhere in the world without reliance on trusted intermediaries, such as banks and financial services companies. Thanks to its pioneering nature, BTC remains at the top of this energetic market after over a decade of existence. Even after Bitcoin has lost its undisputed dominance, it remains the largest cryptocurrency, with a market capitalization that fluctuated between $100-$200 billion in 2020, owing in large part to the ubiquitousness of platforms that provide use-cases for BTC: wallets, exchanges, payment services, online games and more. Related Pages: Looking for market and blockchain data for BTC? Visit our block explorer. Want to buy Bitcoin? Use CoinMarketCap’s guide. Should you buy Bitcoin with PayPal? What is wrapped Bitcoin? Will Bitcoin volatility ever reduce? How to use a Bitcoin ATM How Much Bitcoin Is in Circulation? Bitcoin’s total supply is limited by its software and will never exceed 21,000,000 coins. New coins are created during the process known as “mining”: as transactions are relayed across the network, they get picked up by miners and packaged into blocks, which are in turn protected by complex cryptographic calculations. As compensation for spending their computational resources, the miners receive rewards for every block that they successfully add to the blockchain. At the moment of Bitcoin’s launch, the reward was 50 bitcoins per block: this number gets halved with every 210,000 new blocks mined — which takes the network roughly four years. As of 2020, the block reward has been halved three times and comprises 6.25 bitcoins. Bitcoin has not been premined, meaning that no coins have been mined and/or distributed between the founders before it became available to the public. However, during the first few years of BTC’s existence, the competition between miners was relatively low, allowing the earliest network participants to accumulate significant amounts of coins via regular mining: Satoshi Nakamoto alone is believed to own over a million Bitcoin. Mining Bitcoins can be very profitable for miners, depending on the current hash rate and the price of Bitcoin. While the process of mining Bitcoins is complex, we discuss how long it takes to mine one Bitcoin on CMC Alexandria — as we wrote above, mining Bitcoin is best understood as how long it takes to mine one block, as opposed to one Bitcoin. How Is the Bitcoin Network Secured? Bitcoin is secured with the SHA-256 algorithm, which belongs to the SHA-2 family of hashing algorithms, which is also used by its fork Bitcoin Cash (BCH), as well as several other cryptocurrencies. What Is Bitcoin’s Role as a Store of Value? 
Bitcoin is the first decentralized, peer-to-peer digital currency. One of its most important functions is that it is used as a decentralized store of value. In other words, it provides for ownership rights as a physical asset or as a unit of account. However, the latter store-of-value function has been debated. Many crypto enthusiasts and economists believe that high-scale adoption of the top currency will lead us to a new modern financial world where transaction amounts will be denominated in smaller units. The top crypto is considered a store of value, like gold, for many — rather than a currency. This idea of the first cryptocurrency as a store of value, instead of a payment method, means that many people buy the crypto and hold onto it long-term (or HODL) rather than spending it on items like you would typically spend a dollar — treating it as digital gold. Crypto Wallets The most popular wallets for cryptocurrency include both hot and cold wallets. Cryptocurrency wallets vary from hot wallets to cold wallets. Hot wallets are able to be connected to the web, while cold wallets are used for keeping large amounts of coins outside of the internet. Some of the top crypto cold wallets are Trezor, Ledger and CoolBitX. Some of the top crypto hot wallets include Exodus, Electrum and Mycelium. How Is Bitcoin's Technology Upgraded? A hard fork is a radical change to the protocol that makes previously invalid blocks/transactions valid, and therefore requires all users to upgrade. For example, if users A and B are disagreeing on whether an incoming transaction is valid, a hard fork could make the transaction valid to users A and B, but not to user C. A hard fork is a protocol upgrade that is not backward compatible. This means every node (computer connected to the Bitcoin network using a client that performs the task of validating and relaying transactions) needs to upgrade before the new blockchain with the hard fork activates and rejects any blocks or transactions from the old blockchain. The old blockchain will continue to exist and will continue to accept transactions, although it may be incompatible with other newer Bitcoin clients. A soft fork is a change to the Bitcoin protocol wherein only previously valid blocks/transactions are made invalid. Since old nodes will recognise the new blocks as valid, a soft fork is backward-compatible. This kind of fork requires only a majority of the miners upgrading to enforce the new rules. Some examples of prominent cryptocurrencies that have undergone hard forks are the following: Bitcoin's hard fork that resulted in Bitcoin Cash, and Ethereum's hard fork that resulted in Ethereum Classic (see https://coinmarketcap.com/alexandria/article/bitcoin-vs-bitcoin-cash-vs-bitcoin-sv). Bitcoin Cash has been hard forked since its original forking, with the creation of Bitcoin SV. What Is the Lightning Network? The Lightning Network is an off-chain, layered payment protocol that operates bidirectional payment channels which allows instantaneous transfer with instant reconciliation. It enables private, high volume and trustless transactions between any two parties. The Lightning Network scales transaction capacity without incurring the costs associated with transactions and interventions on the underlying blockchain. How Much Is Bitcoin? The current valuation of Bitcoin is constantly moving, all day every day. It is a truly global asset. From a start of under one cent per coin, BTC has risen in price by thousands of percent to the numbers you see above.
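The halving schedule described earlier under "How Much Bitcoin Is in Circulation?" (50 BTC per block, halved every 210,000 blocks) is exactly what caps the supply just under 21 million coins; a quick sketch of that arithmetic:

```python
# Quick check of the supply cap implied by the halving schedule described above:
# 50 BTC per block, halved every 210,000 blocks, tracked in whole satoshis.
BLOCKS_PER_ERA = 210_000
reward_satoshi = 50 * 100_000_000   # initial block reward, in satoshis

total = 0
while reward_satoshi > 0:
    total += BLOCKS_PER_ERA * reward_satoshi
    reward_satoshi //= 2            # integer division: the protocol rounds down

print(total / 100_000_000)          # ~20,999,999.9769 BTC, just under 21 million
```

The same series also matches the article's figure of a 6.25 BTC reward after three halvings (50, 25, 12.5, 6.25).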
The prices of all cryptocurrencies are quite volatile, meaning that anyone's understanding of how much is Bitcoin will change by the minute. However, there are times when different countries and exchanges show different prices and understanding how much is Bitcoin will be a function of a person's location. Where Can You Buy Bitcoin (BTC)? Bitcoin is, in many regards, almost synonymous with cryptocurrency, which means that you can buy Bitcoin on virtually every crypto exchange — both for fiat money and other cryptocurrencies. Some of the main markets where BTC trading is available are: Binance Coinbase Pro OKEx Kraken Huobi Global Bitfinex If you are new to crypto, use CoinMarketCap's own easy guide to buying Bitcoin.
cointelegraph-com-8236 ---- When will Ethereum 2.0 fully launch?
Roadmap promises speed, but history says otherwise Julia Magas, Dec. 16, 2020 The new Ethereum 2.0 roadmap review: What updates have been added, and how soon can they be implemented? On Dec. 2, shortly after the long-awaited release of Ethereum 2.0, platform founder Vitalik Buterin announced an updated roadmap. At first glance, it does not differ much from the previous version from March. However, it brought some clarity on current progress and further stages, giving grounds for estimating how soon a full-fledged transition to proof-of-stake and the launch of sharding can be expected. Just a spoiler: The full implementation of Ethereum 2.0 will not be coming soon. Formally Ethereum 2.0, but not yet Dec. 1 marked a pivotal event for the entire crypto industry as the first block of the new Ethereum network was generated, the one developers had been preparing to see through for the past few years. Ethereum 2.0 is expected to become a super-fast, reliable version of the previous blockchain, all thanks to so-called sharding and the transition to the PoS consensus algorithm. In fact, the update that came out under the name Ethereum 2.0 is not entirely what its namesake claims to be, and the Beacon Chain, its first phase, is actually Phase 0. The Beacon Chain is needed exclusively for the development and testing of innovations that, if successful, will be introduced into the main Ethereum 2.0 network. Thus, the second upgrade is more fundamental, as the platform will finally let go of proof-of-work and will be fully supported by the stakers. Simply put, Phase 0 — aka the Beacon Chain — lays a basis for implementing staking and sharding in the next upgrade, or as figuratively explained by the Ethereum team, serves as "a new engine" of the future spacecraft. Even though Ethereum formally switched to version 2.0, the network still depends on the computing power of miners. The developers also launched PoS in parallel, gradually recruiting the stakers necessary to ensure the stable operation of the network.
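One concrete way to watch the Beacon Chain recruit those stakers is to query a consensus client directly. The sketch below assumes a locally running node (for example Lighthouse) exposing the standard Beacon Node API on localhost:5052; the URL and port are assumptions, not details from the article.

```python
# Sketch: count active validators and total staked ETH via the standard Beacon
# Node API. Assumes a local consensus client listening on port 5052.
import requests

BEACON = "http://localhost:5052"

resp = requests.get(f"{BEACON}/eth/v1/beacon/states/head/validators", timeout=30)
resp.raise_for_status()
validators = resp.json()["data"]

active = [v for v in validators if v["status"].startswith("active")]
staked_eth = sum(int(v["balance"]) for v in active) / 1_000_000_000  # balances are in Gwei

print(f"{len(active)} active validators, ~{staked_eth:,.0f} ETH staked")
```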
Praneeth Srikanti, investment principal at ConsenSys Ventures, discussed with Cointelegraph the structure and functionality of the Beacon Chain: "The new beacon chain runs on Casper POS for itself and the shard chains — and would ultimately be managing validators, choosing a block proposer for each shard and organizing validator groups (in the form of committees) for voting on the proposed blocks and managing consensus rules." Srikanti added that the PoS mechanism is already live on the Beacon Chain and that it requires attestations for shard blocks and PoS votes for the Beacon Chain blocks. The network is now ready enough for users to join and become validators. To do so, they need to have 32 Ether (ETH) in their accounts, locked for transfer and exchange until the network fully transitions to new characteristics. The rewards that validators receive for supporting the new blockchain will also be locked until the release of the next phase, meaning that stakers will probably be able to access their funds no earlier than 2021 or 2022. Commenting on how the changes in the Ethereum 2.0 roadmap can affect stakers, Jay Hao, CEO of OKEx, told Cointelegraph: "While it does most likely mean that users will have to wait longer until they can withdraw their ETH from staking, there are still many advantages to staking ETH. To start with, stakers are supporting the move to ETH 2.0 and the ETH community. They will earn generous rewards when they do withdraw and, it is always possible (especially in this fast-paced industry) that other solutions will appear that expedite this new timeline." The implementation of shards — another unique invention of Ethereum, thanks to which the network will be able to provide services to hundreds of millions of users — will also be available only in future versions of the blockchain. It's expected that there will be 65 of them in the new Ethereum network, with the Beacon Chain acting as a control blockchain. The paradox is that sharding is not applied to the Beacon Chain, which will actually be the focal point of the network. Current progress The Ethereum development team has been repeatedly criticized for missing deadlines and constantly delaying the updates. So, what is the real state of affairs at this time? Judging by the progress bar that the developers of Ethereum have added to the new roadmap, the implementation of the second update is not expected anytime soon. Work on the most important tasks necessary for a full transition to a new network — namely, Eth1/Eth2 merge implementation — is in its early stages, with about 15% completed. Things are more positive on the sharding frontier, with about half of the work already done, judging by the progress bar. The good news is that the new roadmap is missing Phases 1.5, 2 and others that were present in previous versions of the document. This means that a full-fledged transition to a new network can be expected sooner and that the next phase will be the final one, combining all of the most important updates. Earlier, it was expected that shard chains would appear in Phase 1, and only after that, in the second phase, would SNARK/STARK transactions become possible. Now, all of these updates are expected to be launched under Phase 1, and some progress has already been made toward that end. The organization of the teams' work has also changed from step-by-step to parallel.
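The 32 ETH deposit per validator mentioned above can also be checked against the totals quoted at the end of this article (more than 33,000 stakers and almost 1.1 million ETH staked); a small sanity check using only the article's own numbers:

```python
# Sanity check relating the 32 ETH validator deposit to the staking totals
# quoted later in the article (~33,000 stakers, ~1.1 million ETH staked).
DEPOSIT_ETH = 32
total_staked_eth = 1_100_000

implied_validators = total_staked_eth // DEPOSIT_ETH
print(implied_validators)            # 34,375 -- consistent with "more than 33,000"
print(33_000 * DEPOSIT_ETH)          # 1,056,000 ETH locked by 33,000 validators
```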
The new roadmap suggests that the execution of each task is organized autonomously and is not disrupted in the event of difficulties with the other segments. In other words, different teams can work on different tasks at the same time, which may speed up the transition to the new network. Some of the tasks can be expected soon, as indicated by the roadmap. In particular, the developers have already done the bulk of the work on implementing the EIP 1559 protocol, aimed at stabilizing transaction fee costs on the network and repricing gas. In addition, work is underway on EVM384, which will allow for faster operations of the Ethereum Virtual Machine as the EVM transitions to a more advanced version called "Ethereum-flavored WebAssembly," or Ewasm. Interestingly, Ewasm is the only major implementation missing in the new roadmap. It will probably come as part of the upgrade called "VM upgrades," and its implementation will not be carried out in the next phase. It's expected that Ewasm will manage the work of smart contracts and make the network more decentralized. Layer-two solutions advancing scalability and security, such as SNARK/STARK operations, post-quantum crypto and the launch of CBC Casper — an improved version of the protocol that will mark the final transition of the network to the staking model — remain among the solutions that are likely to appear much later on.

When will Eth2 fully launch?

Looking at how fast the relevant updates were implemented in the previous versions of Ethereum roadmaps, it turns out that the planned and real release dates are about a year apart, at the very minimum. Thus, for example, according to the estimates made by the developers of large blockchain software company ConsenSys in May 2019, the release of the Beacon Chain blockchain was supposed to have happened back in 2019; in reality, it arrived on Dec. 1, 2020. Regarding Ewasm, the full-scale launch of the machine is supposed to occur in 2020 or 2021. Applying the same slippage, it should be expected to come no earlier than 2021 to 2022 — the time frame that coincides with the one set by the Ethereum developer team for the Ethereum 2.0 mainnet release. Still, the full scope of work that needs to be done before the Ethereum 2.0 blockchain becomes fully complete can make it challenging to set predictions. Meanwhile, some suggest that upgrade releases could be delayed for an even longer period of time. YouTube crypto blogger Boxmining recommended adding one to two years to the previous estimates, suggesting that the market will see Casper and sharding in full glory only in 2022 to 2023. A more pessimistic forecast suggests that it might take years before the market will see the final version of Ethereum 2.0. Himanshu Bisht, marketing head at Razor Network — which operates on a PoS consensus algorithm — told Cointelegraph that such a timeframe is realistic: "Mainnet Ethereum will need to 'merge' with the beacon chain at some point. This will be the start of a new phase of the Ethereum ecosystem in a true sense. However, we might not be able to see this before February, 2022." Nir Kshetri, a professor at the University of North Carolina-Greensboro and a research fellow at Kobe University, agreed that the Ethereum 2.0 transition is likely to take a fair bit of time. According to him, the EVM upgrade is a challenging process, as he further told Cointelegraph: "Organizations are likely to be effectively locked in EVM and it is difficult to break the self-reinforcing mechanism.
There are already millions of existing smart contracts and enormous amounts of tools and languages, optimizations. On top of that convincing Ethereum users that the PoS is safe and secure is a challenge of another magnitude." Paolo Ardoino, chief technology officer of crypto exchange Bitfinex, told Cointelegraph that the full transition to Ethereum 2.0 could take three years, although he doesn't rule out a faster development: "I think that after this initial phase, it is likely that the pace of Ethereum 2.0 development will improve over the coming year. We wonder if full Ethereum 2.0 transition will be complete up to 3 years from now, but we expect token transfers will likely be available earlier than that." On the other hand, a streamlined organization of Ethereum client operations and the work of developers, as well as immense assistance from the community, can significantly reduce the time frame of the roadmap. In general, as the Beacon Chain explorer shows, the deployment of the new PoS network is proceeding successfully. At the moment, more than 33,000 users have become stakers, with almost 1.1 million ETH staked so far.

commonplace-net-7555 ---- commonplace.net – Data. The final frontier.

Infrastructure for heritage institutions – Open and Linked Data
June 1, 2021, by Lukas Koster, in Data, Infrastructure, Library
In my June 2020 post in this series, "Infrastructure for heritage institutions – change of course", I said: "The results of both Data Licences and the Data Quality projects (Object PID's, Controlled Vocabularies, Metadata Set) will go into the new Data Publication project, which will be undertaken in the second half of 2020. This project is aimed at publishing our collection data as open and linked data in various formats via various channels. A […] Read more

Infrastructure for heritage institutions – ARK PID's
November 3, 2020 (updated November 11, 2020), by Lukas Koster, in Data, Infrastructure, Library
In the Digital Infrastructure program at the Library of the University of Amsterdam we have reached a first milestone. In my previous post in the Infrastructure for heritage institutions series, "Change of course", I mentioned the coming implementation of ARK persistent identifiers for our collection objects. Since November 3, 2020, ARK PID's are available for our university library Alma catalogue through the Primo user interface.
Implementation of ARK PID's for the other collection description systems […] Read more

Infrastructure for heritage institutions – change of course
June 23, 2020, by Lukas Koster, in Data, Infrastructure, Library
In July 2019 I published the first post about our planning to realise a "coherent and future proof digital infrastructure" for the Library of the University of Amsterdam. In February I reported on the first results. As frequently happens, since then the conditions have changed, and naturally we had to adapt the direction we are following to achieve our goals. In other words: a change of course, of course. Projects: I will leave aside the […] Read more

Infrastructure for heritage institutions – first results
February 24, 2020 (updated February 25, 2020), by Lukas Koster, in Data, Infrastructure, Library
In July 2019 I published the post Infrastructure for heritage institutions in which I described our planning to realise a "coherent and future proof digital infrastructure" for the Library of the University of Amsterdam. Time to look back: how far have we come? And time to look forward: what's in store for the near future? Ongoing activities: I mentioned three "currently ongoing activities": monitoring and advising on infrastructural aspects of new projects; maintaining a structured dynamic overview […] Read more

Infrastructure for heritage institutions
July 11, 2019 (updated January 11, 2020), by Lukas Koster, in Data, Infrastructure, Library
During my vacation I saw this tweet by LIBER about topics to address, as suggested by the participants of the LIBER 2019 conference in Dublin: It shows a word cloud (yes, a word cloud) containing a large number of terms. I list the ones I can read without zooming in (so the most suggested ones, I guess), more or less grouped thematically: Open science, Open data, Open access, Licensing, Copyrights, Linked open data, Open education, Citizen science; Scholarly communication, Digital humanities/DH, Digital scholarship, Research assessment, Research […] Read more

Ten years linked open data
June 4, 2016 (updated February 13, 2020), by Lukas Koster, in Data, Library
This post is the English translation of my original article in Dutch, published in META (2016-3), the Flemish journal for information professionals. Ten years after the term "linked data" was introduced by Tim Berners-Lee it appears to be time to take stock of the impact of linked data for libraries and other heritage institutions in the past and in the future. I will do this from a personal historical perspective, as a library technology professional, […] Read more

Maps, dictionaries and guidebooks
August 3, 2015 (updated February 3, 2020), by Lukas Koster, in Data
Interoperability in heterogeneous library data landscapes. Libraries have to deal with a highly opaque landscape of heterogeneous data sources, data types, data formats, data flows, data transformations and data redundancies, which I have earlier characterized as a "data maze". The level and magnitude of this opacity and heterogeneity varies with the amount of content types and the number of services that the library is responsible for. Academic and national libraries are possibly dealing with more […] Read more

Standard deviations in data modeling, mapping and manipulation
June 16, 2015 (updated February 3, 2020), by Lukas Koster, in Data
Or: Anything goes. What are we thinking? An impression of ELAG 2015. This year's ELAG conference in Stockholm was one of many questions. Not only the usual questions following each presentation (always elicited in the form of yet another question: "Any questions?"). But also philosophical ones (Why? What?).
And practical ones (What time? Where? How? How much?). And there were some answers too, fortunately. This is my rather personal impression of the event. For a […] Read more

Analysing library data flows for efficient innovation
November 27, 2014 (updated February 14, 2020), by Lukas Koster, in Library
In my work at the Library of the University of Amsterdam I am currently taking a step forward by actually taking a step back from a number of forefront activities in discovery, linked open data and integrated research information towards a more hidden, but also more fundamental enterprise in the area of data infrastructure and information architecture. All for a good cause, for in the end a good data infrastructure is essential for delivering high […] Read more

Looking for data tricks in Libraryland
September 5, 2014 (updated January 12, 2020), by Lukas Koster, in Library
IFLA 2014 Annual World Library and Information Congress, Lyon – Libraries, Citizens, Societies: Confluence for Knowledge. After attending the IFLA 2014 Library Linked Data Satellite Meeting in Paris I travelled to Lyon for the first three days (August 17-19) of the IFLA 2014 Annual World Library and Information Congress. This year's theme "Libraries, Citizens, Societies: Confluence for Knowledge" was named after the confluence or convergence of the rivers Rhône and Saône where the city of […] Read more
commons-emich-edu-9785 ---- "The CRAAP Test" by Sarah Blakeslee
LOEX Quarterly, Vol. 31, No. 3 (2004), Article 4
Article Title: The CRAAP Test
Author: Sarah Blakeslee
Publication Date: Fall 2004
Recommended Citation: Blakeslee, Sarah (2004) "The CRAAP Test," LOEX Quarterly: Vol. 31, No. 3, Article 4. Available at: https://commons.emich.edu/loexquarterly/vol31/iss3/4

commons-wikimedia-org-6927 ---- File:Altair.jpg - Wikimedia Commons
Altair.jpg (512 × 511 pixels, file size: 12 KB, MIME type: image/jpeg)
Captions: Russian: The star Altair ("Звезда Альтаир")
Summary
Description: English: The star Altair
Source: http://photojournal.jpl.nasa.gov/catalog/PIA04204
Author: NASA/JPL/Caltech/Steve Golden
This image or video was catalogued by the Jet Propulsion Laboratory of the United States National Aeronautics and Space Administration (NASA) under Photo ID: PIA04204.
This tag does not indicate the copyright status of the attached work. A normal copyright tag is still required. See Commons:Licensing.

Licensing
Public domain. This file is in the public domain in the United States because it was solely created by NASA. NASA copyright policy states that "NASA material is not protected by copyright unless noted". (See Template:PD-USGov, the NASA copyright policy page or the JPL Image Use Policy.)
Warnings:
Use of NASA logos, insignia and emblems is restricted per U.S. law 14 CFR 1221.
The NASA website hosts a large number of images from the Soviet/Russian space agency and other non-American space agencies. These are not necessarily in the public domain.
Materials based on Hubble Space Telescope data may be copyrighted if they are not explicitly produced by the STScI. See also {{PD-Hubble}} and {{Cc-Hubble}}.
The SOHO (ESA & NASA) joint project implies that all materials created by its probe are copyrighted and require permission for commercial non-educational use.
Images featured on the Astronomy Picture of the Day (APOD) web site may be copyrighted.
The National Space Science Data Center (NSSDC) site has been known to host copyrighted content. Its photo gallery FAQ states that all of the images in the photo gallery are in the public domain "unless otherwise noted."

Original upload log
21:53, 18 November 2005, User:Makary, 512×511, 12 KB (Altair_PIA04204.jpg. Credit: NASA/JPL/Caltech/Steve Golden. Source: http://photojournal.jpl.nasa.gov/catalog/PIA04204)

File history
Current version: 03:53, 31 March 2008, 512 × 511 (12 KB), uploaded by BetacommandBot; move approved by User:LERK. This image was moved from Image:PIA04204.jpg.

File usage on Commons
There are no pages that use this file.
File usage on other wikis
The following other wikis use this file: ar.wikipedia.org (النسر الطائر (نجم)), ast.wikipedia.org (Altair), be-tarask.wikipedia.org (Альтаір), bg.wikipedia.org (Алтаир), ca.wikipedia.org (Altair), de.wikipedia.org (Altair), en.wikipedia.org (Altair, Wikipedia:Userboxes/Science/Astronomy and several user pages), eo.wikipedia.org (Altairo), et.wikipedia.org (Altair), fa.wikipedia.org (کرکس پرنده), fi.wikipedia.org (Altair), fr.wikipedia.org (Altaïr), ga.wikipedia.org (Altair), gl.wikipedia.org (Alpha Aquilae), id.wikipedia.org (Altair), incubator.wikimedia.org (Wt/mnc, Wp/ckt pages), it.wikipedia.org (Altair), ja.wikipedia.org (アルタイル), ko.wikipedia.org (독수리자리 알타이르), la.wikipedia.org (Altair), lt.wikipedia.org (Altayras), mk.wikipedia.org (Алтаир), ml.wikipedia.org (ആൾട്ടയർ), nn.wikipedia.org (Altair), no.wikipedia.org (Altair), oc.wikipedia.org (Agla (constellacion)), pl.wikipedia.org (Altair (gwiazda)), pl.wiktionary.org, pt.wikipedia.org (Altair (estrela)), ro.wikipedia.org (Altair), ru.wikipedia.org (Альтаир), ru.wikiquote.org (Альтаир), ru.wiktionary.org.

Category: Altair
Retrieved from https://commons.wikimedia.org/w/index.php?title=File:Altair.jpg&oldid=359986829 (page last edited on 30 July 2019).

conaltuohy-com-1058 ---- Conal Tuohy's blog – The blog of a digital humanities software developer

Analysis & Policy Online
Notes for my Open Repositories 2017 conference presentation. I will edit this post later to flesh it out into a proper blog post.
Follow along at: conaltuohy.com/blog/analysis-policy-online/ background Early discussion with Amanda Lawrence of APO (which at that time stood for "Australian Policy Online") about text mining, at the 2015 LODLAM Summit in Sydney. They … Continue reading Analysis & Policy Online

A tool for Web API harvesting
As 2016 stumbles to an end, I've put in a few days' work on my new project Oceania, which is to be a Linked Data service for cultural heritage in this part of the world. Part of this project involves harvesting data from cultural institutions which make their collections available via so-called "Web APIs". There are … Continue reading A tool for Web API harvesting

Oceania
I am really excited to have begun my latest project: a Linked Open Data service for online cultural heritage from New Zealand and Australia, and eventually, I hope, from our other neighbours. I have called the service "oceania.digital". The big idea of oceania.digital is to pull together threads from a number of different "cultural" data … Continue reading Oceania

Australian Society of Archivists 2016 conference #asalinks
Last week I participated in the 2016 conference of the Australian Society of Archivists, in Parramatta. I was very impressed by the programme and the discussion. I thought I'd jot down a few notes here about just a few of the presentations that were most closely related to my own work. The presentations were all … Continue reading Australian Society of Archivists 2016 conference #asalinks

Linked Open Data Visualisation at #GLAMVR16
On Thursday last week I flew to Perth, in Western Australia, to speak at an event at Curtin University on visualisation of cultural heritage. Erik Champion, Professor of Cultural Visualisation, who organised the event, had asked me to talk about digital heritage collections and Linked Open Data ("LOD"). The one-day event was entitled "GLAM VR: … Continue reading Linked Open Data Visualisation at #GLAMVR16

Visualizing Government Archives through Linked Data
Tonight I'm knocking back a gin and tonic to celebrate finishing a piece of software development for my client the Public Record Office Victoria; the archives of the government of the Australian state of Victoria. The work, which will go live in a couple of weeks, was an update to a browser-based visualization tool which … Continue reading Visualizing Government Archives through Linked Data

Taking control of an uncontrolled vocabulary
A couple of days ago, Dan McCreary tweeted: Working on new ideas for NoSQL metadata management for a talk next week. Focus on #NoSQL, Documents, Graphs and #SKOS. Any suggestions? — Dan McCreary (@dmccreary) November 14, 2015. It reminded me of some work I had done a couple of years ago for a project which … Continue reading Taking control of an uncontrolled vocabulary

Bridging the conceptual gap: Museum Victoria's collections API and the CIDOC Conceptual Reference Model
This is the third in a series of posts about an experimental Linked Open Data (LOD) publication based on the web API of Museum Victoria. The first post gave an introduction and overview of the architecture of the publication software, and the second dealt quite specifically with how names and identifiers work in the LOD … Continue reading Bridging the conceptual gap: Museum Victoria's collections API and the CIDOC Conceptual Reference Model

Names in the Museum
My last blog post described an experimental Linked Open Data service I created, underpinned by Museum Victoria's collection API.
Mainly, I described the LOD service's general framework, and explained how it worked in terms of data flow. To recap briefly, the LOD service receives a request from a browser and in turn translates that request … Continue reading Names in the Museum

Linked Open Data built from a custom web API
I've spent a bit of time just recently poking at the new Web API of Museum Victoria Collections, and making a Linked Open Data service based on their API. I'm writing this up as an example of one way — a relatively easy way — to publish Linked Data off the back of some existing … Continue reading Linked Open Data built from a custom web API

conaltuohy-com-9832 ---- Conal Tuohy's blog – The blog of a digital humanities software developer

Analysis & Policy Online
Notes for my Open Repositories 2017 conference presentation. I will edit this post later to flesh it out into a proper blog post. Follow along at: conaltuohy.com/blog/analysis-policy-online/ Continue reading Analysis & Policy Online
Posted on June 28, 2017 (updated June 29, 2017). Tags: Data mining, Linked Data, LODLAM, oai-pmh, SPARQL, XML, XProc, XProc-Z.

A tool for Web API harvesting
(Image: A medieval man harvesting metadata from a medieval Web API.)
As 2016 stumbles to an end, I've put in a few days' work on my new project Oceania, which is to be a Linked Data service for cultural heritage in this part of the world. Part of this project involves harvesting data from cultural institutions which make their collections available via so-called "Web APIs". There are some very standard ways to publish data, such as OAI-PMH, OpenSearch, SRU, RSS, etc, but many cultural heritage institutions instead offer custom-built APIs that work in their own peculiar way, which means that you need to put in a certain amount of effort in learning each API and dealing with its specific requirements. So I've turned to the problem of how to deal with these APIs in the most generic way possible, and written a program that can handle a lot of what is common in most Web APIs, and can be easily configured to understand the specifics of particular APIs. Continue reading A tool for Web API harvesting
Posted on December 31, 2016. Tags: oceania.digital, REST, trove, Web API, XML, XPath.

Oceania
I am really excited to have begun my latest project: a Linked Open Data service for online cultural heritage from New Zealand and Australia, and eventually, I hope, from our other neighbours.
I have called the service "oceania.digital". The big idea of oceania.digital is to pull together threads from a number of different "cultural" data sources and weave them together into a single web of data which people can use to tell a huge number of stories. There are a number of different aspects to the project, and a corresponding number of stages to go through… Continue reading Oceania
Posted on December 28, 2016 (updated December 30, 2016). Tags: Linked Data, LODLAM, oceania.digital.

Australian Society of Archivists 2016 conference #asalinks
Last week I participated in the 2016 conference of the Australian Society of Archivists, in Parramatta. (Image: #ASALinks poster.) I was very impressed by the programme and the discussion. I thought I'd jot down a few notes here about just a few of the presentations that were most closely related to my own work. The presentations were all recorded, and as the ASA's YouTube channel is updated with newly edited videos, I'll be editing this post to include those videos. Continue reading Australian Society of Archivists 2016 conference #asalinks
Posted on October 25, 2016 (updated December 20, 2016). Tags: Archives, Data mining, Linked Data, LODLAM, Museum, Transcription, Visualization.

Linked Open Data Visualisation at #GLAMVR16
On Thursday last week I flew to Perth, in Western Australia, to speak at an event at Curtin University on visualisation of cultural heritage. Erik Champion, Professor of Cultural Visualisation, who organised the event, had asked me to talk about digital heritage collections and Linked Open Data ("LOD"). The one-day event was entitled "GLAM VR: talks on Digital heritage, scholarly making & experiential media", and combined presentations and workshops on cultural heritage data (GLAM = Galleries, Libraries, Archives, and Museums) with advanced visualisation technology (VR = Virtual Reality). The venue was the Curtin HIVE (Hub for Immersive Visualisation and eResearch); a really impressive visualisation facility at Curtin University, with huge screens and panoramic and 3d displays. There were about 50 people in attendance, and there would have been over a dozen different presenters, covering a lot of different topics, though with common threads linking them together. I really enjoyed the experience, and learned a lot. I won't go into the detail of the other presentations, here, but quite a few people were live-tweeting, and I've collected most of the Twitter stream from the day into a Storify story, which is well worth a read and following up. Continue reading Linked Open Data Visualisation at #GLAMVR16
Posted on August 30, 2016 (updated September 23, 2016). Tags: Linked Data, LODLAM, LODLive, SPARQL, Visualization.

Visualizing Government Archives through Linked Data
Tonight I'm knocking back a gin and tonic to celebrate finishing a piece of software development for my client the Public Record Office Victoria; the archives of the government of the Australian state of Victoria. The work, which will go live in a couple of weeks, was an update to a browser-based visualization tool which we first set up last year. In response to user testing, we made some changes to improve the visualization's usability. It certainly looks a lot clearer than it did, and the addition of some online help makes it a bit more accessible for first-time users.
The visualization now looks like this (here showing the entire dataset, unfiltered, which is not actually that useful, though it is quite pretty): Continue reading Visualizing Government Archives through Linked Data
Posted on April 5, 2016 (updated September 23, 2016). Tags: Archives, CIDOC-CRM, Linked Data, LODLAM, oai-pmh, SPARQL, Visualization.

Taking control of an uncontrolled vocabulary
A couple of days ago, Dan McCreary tweeted: Working on new ideas for NoSQL metadata management for a talk next week. Focus on #NoSQL, Documents, Graphs and #SKOS. Any suggestions? — Dan McCreary (@dmccreary) November 14, 2015. It reminded me of some work I had done a couple of years ago for a project which was at the time based on Linked Data, but which later switched away from that platform, leaving various bits of RDF-based work orphaned. One particular piece which sprung to mind was a tool for dealing with vocabularies. Whether it's useful for Dan's talk I don't know, but I thought I would dig it out and blog a little about it in case it's of interest more generally to people working in Linked Open Data in Libraries, Archives and Museums (LODLAM). Continue reading Taking control of an uncontrolled vocabulary
Posted on November 16, 2015 (updated November 17, 2015). Tags: LODLAM, SKOS, SPARQL, Vocabularies, XForms.

Bridging the conceptual gap: Museum Victoria's collections API and the CIDOC Conceptual Reference Model
(Image: A Museum Victoria LOD graph about a teacup, shown using the LODLive visualizer.) This is the third in a series of posts about an experimental Linked Open Data (LOD) publication based on the web API of Museum Victoria. The first post gave an introduction and overview of the architecture of the publication software, and the second dealt quite specifically with how names and identifiers work in the LOD publication software. In this post I'll cover how the publication software takes the data published by Museum Victoria's API and reshapes it to fit a common conceptual model for museum data, the "Conceptual Reference Model" published by the documentation committee of the International Council of Museums. I'm not going to exhaustively describe the translation process (you can read the source code if you want the full story), but I'll include examples to illustrate the typical issues that arise in such a translation. Continue reading Bridging the conceptual gap: Museum Victoria's collections API and the CIDOC Conceptual Reference Model
Posted on October 22, 2015 (updated June 11, 2017). Tags: CIDOC-CRM, Linked Data, LODLAM, LODLive, Museum, Web API.

Names in the Museum
My last blog post described an experimental Linked Open Data service I created, underpinned by Museum Victoria's collection API. Mainly, I described the LOD service's general framework, and explained how it worked in terms of data flow. To recap briefly, the LOD service receives a request from a browser and in turn translates that request into one or more requests to the Museum Victoria API, interprets the result in terms of the CIDOC CRM, and returns the result to the browser. The LOD service does not have any data storage of its own; it's purely an intermediary or proxy, like one of those real-time interpreters at the United Nations. I call this technique a "Linked Data proxy". I have a couple more blog posts to write about the experience.
In this post, I'm going to write about how the Linked Data proxy deals with the issue of naming the various things which the Museum's database contains. Continue reading Names in the Museum
Posted on October 1, 2015 (updated April 27, 2016). Tags: CIDOC-CRM, Linked Data, LODLAM, Museum.

Linked Open Data built from a custom web API
I've spent a bit of time just recently poking at the new Web API of Museum Victoria Collections, and making a Linked Open Data service based on their API. I'm writing this up as an example of one way — a relatively easy way — to publish Linked Data off the back of some existing API. I hope that some other libraries, archives, and museums with their own API will adopt this approach and start publishing their data in a standard Linked Data style, so it can be linked up with the wider web of data. Continue reading Linked Open Data built from a custom web API
Posted on September 7, 2015 (updated April 27, 2016). Tags: CIDOC-CRM, JSON, Linked Data, LODLAM, Museum, proxy, REST, Web API, XProc-Z.

confluence-ucop-edu-5093 ---- Background: "Toward a National Archival Finding Aid Network" Planning Initiative (2018-19) - Building a National Finding Aid Network
Created by Adrian Turner, last modified on Feb 05, 2021

Summary
With crucial funding support from the US Institute of Museum and Library Services (IMLS) under the provisions of the Library Services and Technology Act (LSTA), administered in California by the State Librarian, the CDL coordinated a 1-year collaborative planning initiative (October 2018 – September 2019) with the following key objectives:
Identify key challenges facing finding aid aggregators.
Uncover and validate high-level stakeholder (archivists, researchers, etc.) requirements and needs for finding aid aggregations.
Explore the possibilities of shared infrastructure and services among current finding aid aggregators, to test the theory that collaboration will benefit our organizations, contributors, and end users. If so, identify potential shared infrastructure and service models.
Determine if there is collective interest and capacity to collaborate on developing shared infrastructure.
Develop a concrete action plan for next steps based on the shared needs, interests and available resources within the community of finding aid aggregators. Discussions of viable collaboration models and sustainability strategies will be included.
Developing a collective understanding of requirements and challenges was a necessary first step for establishing the trajectory of any future finding aid aggregation effort. The planning initiative produced the following key deliverables.
Key Deliverables
See Project Reports and Resources.

Partners and Roles

Core Partners (member resources: Roster)
This group comprised representatives from state and regional archival description aggregators. Members did not represent their institution but rather leveraged their background and experience within the field to inform the group's work. Expectations for Core Partners:
Identify one or more individuals who can provide complete, objective, and correct information on the past and/or present program, for incorporation into a profile of the current US archival description aggregator landscape.
Review the profile and findings, in advance of the symposium.
Prepare for and attend the symposium.
Serve on one or more working groups after the symposium and participate actively in the work of those groups.
Participate actively in formulating and vetting the action plan that results from this work.

Advisory Partners (member resources: Roster)
This group comprised representatives from state and regional projects or programs that provide some form of support for finding aid and archival description aggregation, but who did not have capacity at this time to be Core Partners. This group also included those entities no longer providing services, as well as those planning to do so in the future. Expectations for Advisory Partners:
Identify one or more individuals who can provide complete, objective, and correct information on the past and/or present program, for incorporation into a profile of the current US archival description aggregator landscape.
Review and optionally provide feedback on the action plan that results from this project.

Expert Advisors (member resources: Roster)
Advisors were invited to the project symposium to advise, inspire, and contribute to the discussions in a variety of key areas. This group included representatives from organizations that provide services that are part of the archival description ecosystem, as well as organizational development, community engagement, and sustainability. Expectations:
Review the profile of the current US archival description aggregator landscape, in advance of the symposium.
Prepare for and attend the symposium in spring 2019.
Constructively comment on the action plan that results from this project.

Project Team
Jodi Allison-Bunnell (AB Consulting): Jodi's responsibilities included providing leadership and research analyst support services for surveying and establishing a profile of finding aid aggregators and related organizations; facilitating stakeholder discussions; and assisting in the creation of a final report and recommendations.
Adrian Turner (California Digital Library, Senior Product Manager): Adrian's responsibilities included coordinating project activities; supporting CDL's administration of the LSTA-funded project; and serving as a Core Partner, drawing from his experience with CDL's Online Archive of California (OAC) service.

Grant Proposal
LSTA (2018-19) Project Proposal 40-8847

coopcloud-tech-238 ---- The Co-op Cloud: Public interest infrastructure. An alternative to corporate clouds built by tech co-ops.

Benefits
Collaborative: Democratic development process, centred on libre software licenses, community governance and a configuration commons.
Simple: Quick, flexible, and intuitive with low resource requirements, minimal overhead, and extensive documentation.
Private: Control your hosting: use on-premise or virtual servers to suit your needs. Encryption as standard.
Transparent: Following established open standards and best practices, and building on existing tools.

FAQ

What is Co-op Cloud? Co-op Cloud aims to make hosting libre software applications simple for small service providers such as tech co-operatives who are looking to standardise around an open, transparent and scalable infrastructure. It uses the latest container technologies, and configurations are shared into the commons for the benefit of all.

Is this a good long-term choice? Co-op Cloud re-uses upstream libre software project packaging efforts (containers) so that we can meet projects where they are at and reduce duplication of effort. The project proposes the notion of more direct coordination between distribution methods (app packagers) and production methods (app developers).

What libre apps are available? Co-op Cloud helps deploy and maintain applications that you may already use in your daily life: Nextcloud, Jitsi, Mediawiki, Rocket.chat and many more! These are tools that are created by volunteer communities who use libre software licenses in order to build up the public software commons and offer more digital alternatives.

What about other alternatives? Co-op Cloud helps fill a gap between the personal and the industrial-scale: it's easier to use than Kubernetes or Ansible, does more to support multi-server, multi-tenant deployments than Cloudron, and is much easier than manual deployments. See all the comparisons with other tools.

Who is involved? Autonomic is a worker-owned co-operative, dedicated to using technology to empower people making a positive difference in the world. We offer service hosting using Co-op Cloud and other platforms, custom development, and infrastructure set-up. Visit: autonomic.zone. This is a community project.

courses-lumenlearning-com-3920 ---- Limits of Resolution: The Rayleigh Criterion | Physics (Wave Optics)

Learning Objectives
By the end of this section, you will be able to: Discuss the Rayleigh criterion.

Light diffracts as it moves through space, bending around obstacles, interfering constructively and destructively. While this can be used as a spectroscopic tool—a diffraction grating disperses light according to wavelength, for example, and is used to produce spectra—diffraction also limits the detail we can obtain in images. Figure 1a shows the effect of passing light through a small circular aperture. Instead of a bright spot with sharp edges, a spot with a fuzzy edge surrounded by circles of light is obtained. This pattern is caused by diffraction similar to that produced by a single slit. Light from different parts of the circular aperture interferes constructively and destructively. The effect is most noticeable when the aperture is small, but the effect is there for large apertures, too.

Figure 1. (a) Monochromatic light passed through a small circular aperture produces this diffraction pattern. (b) Two point light sources that are close to one another produce overlapping images because of diffraction. (c) If they are closer together, they cannot be resolved or distinguished.
How does diffraction affect the detail that can be observed when light passes through an aperture? Figure 1b shows the diffraction pattern produced by two point light sources that are close to one another. The pattern is similar to that for a single point source, and it is just barely possible to tell that there are two light sources rather than one. If they were closer together, as in Figure 1c, we could not distinguish them, thus limiting the detail or resolution we can obtain. This limit is an inescapable consequence of the wave nature of light. There are many situations in which diffraction limits the resolution. The acuity of our vision is limited because light passes through the pupil, the circular aperture of our eye. Be aware that the diffraction-like spreading of light is due to the limited diameter of a light beam, not the interaction with an aperture. Thus light passing through a lens with a diameter D shows this effect and spreads, blurring the image, just as light passing through an aperture of diameter D does. So diffraction limits the resolution of any system having a lens or mirror. Telescopes are also limited by diffraction, because of the finite diameter D of their primary mirror. Take-Home Experiment: Resolution of the Eye Draw two lines on a white sheet of paper (several mm apart). How far away can you be and still distinguish the two lines? What does this tell you about the size of the eye’s pupil? Can you be quantitative? (The size of an adult’s pupil is discussed in Physics of the Eye.) Just what is the limit? To answer that question, consider the diffraction pattern for a circular aperture, which has a central maximum that is wider and brighter than the maxima surrounding it (similar to a slit) (see Figure 2a). It can be shown that, for a circular aperture of diameter D, the first minimum in the diffraction pattern occurs at [latex]\theta=1.22\frac{\lambda}{D}\\[/latex] (providing the aperture is large compared with the wavelength of light, which is the case for most optical instruments). The accepted criterion for determining the diffraction limit to resolution based on this angle was developed by Lord Rayleigh in the 19th century. The Rayleigh criterion for the diffraction limit to resolution states that two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other. See Figure 2b. The first minimum is at an angle of [latex]\theta=1.22\frac{\lambda}{D}\\[/latex], so that two point objects are just resolvable if they are separated by the angle [latex]\displaystyle\theta=1.22\frac{\lambda}{D}\\[/latex], where λ is the wavelength of light (or other electromagnetic radiation) and D is the diameter of the aperture, lens, mirror, etc., with which the two objects are observed. In this expression, θ has units of radians. Figure 2. (a) Graph of intensity of the diffraction pattern for a circular aperture. Note that, similar to a single slit, the central maximum is wider and brighter than those to the sides. (b) Two point objects produce overlapping diffraction patterns. Shown here is the Rayleigh criterion for being just resolvable. The central maximum of one pattern lies on the first minimum of the other. Making Connections: Limits to Knowledge All attempts to observe the size and shape of objects are limited by the wavelength of the probe. Even the small wavelength of light prohibits exact precision. 
When extremely small wavelength probes, as with an electron microscope, are used, the system is disturbed, still limiting our knowledge, much as making an electrical measurement alters a circuit. Heisenberg's uncertainty principle asserts that this limit is fundamental and inescapable, as we shall see in quantum mechanics.

Example 1. Calculating Diffraction Limits of the Hubble Space Telescope

The primary mirror of the orbiting Hubble Space Telescope has a diameter of 2.40 m. Being in orbit, this telescope avoids the degrading effects of atmospheric distortion on its resolution. (1) What is the angle between two just-resolvable point light sources (perhaps two stars)? Assume an average light wavelength of 550 nm. (2) If these two stars are at the 2 million light year distance of the Andromeda galaxy, how close together can they be and still be resolved? (A light year, or ly, is the distance light travels in 1 year.)

Strategy: The Rayleigh criterion stated in the equation [latex]\theta=1.22\frac{\lambda}{D}\\[/latex] gives the smallest possible angle θ between point sources, or the best obtainable resolution. Once this angle is found, the distance between stars can be calculated, since we are given how far away they are.

Solution for Part 1: The Rayleigh criterion for the minimum resolvable angle is [latex]\theta=1.22\frac{\lambda}{D}\\[/latex]. Entering known values gives [latex]\begin{array}{lll}\theta&=&1.22\frac{550\times10^{-9}\text{ m}}{2.40\text{ m}}\\\text{ }&=&2.80\times10^{-7}\text{ rad}\end{array}\\[/latex]

Solution for Part 2: The distance s between two objects a distance r away and separated by an angle θ is s = rθ. Substituting known values gives [latex]\begin{array}{lll}s&=&\left(2.0\times10^6\text{ ly}\right)\left(2.80\times10^{-7}\text{ rad}\right)\\\text{ }&=&0.56\text{ ly}\end{array}\\[/latex]

Discussion: The angle found in Part 1 is extraordinarily small (less than 1/50,000 of a degree), because the primary mirror is so large compared with the wavelength of light. As noted, diffraction effects are most noticeable when light interacts with objects having sizes on the order of the wavelength of light. However, the effect is still there, and there is a diffraction limit to what is observable. The actual resolution of the Hubble Telescope is not quite as good as that found here. As with all instruments, there are other effects, such as non-uniformities in mirrors or aberrations in lenses, that further limit resolution. However, Figure 3 gives an indication of the extent of the detail observable with the Hubble because of its size and quality, and especially because it is above the Earth's atmosphere.

Figure 3. These two photographs of the M82 galaxy give an idea of the observable detail using the Hubble Space Telescope compared with that using a ground-based telescope. (a) On the left is a ground-based image. (credit: Ricnun, Wikimedia Commons) (b) The photo on the right was captured by Hubble. (credit: NASA, ESA, and the Hubble Heritage Team (STScI/AURA))

The answer in Part 2 indicates that two stars separated by about half a light year can be resolved. The average distance between stars in a galaxy is on the order of 5 light years in the outer parts and about 1 light year near the galactic center. Therefore, the Hubble can resolve most of the individual stars in the Andromeda galaxy, even though it lies at such a huge distance that its light takes 2 million years to reach us.
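The arithmetic in Example 1 is easy to check numerically. The short Python sketch below (the function and variable names are ours, chosen for illustration only) evaluates the Rayleigh criterion θ = 1.22λ/D and the separation s = rθ for the Hubble figures used above.

```python
def rayleigh_angle(wavelength_m, aperture_m):
    """Minimum resolvable angle (in radians) for a circular aperture."""
    return 1.22 * wavelength_m / aperture_m

# Example 1: Hubble Space Telescope
wavelength = 550e-9        # average visible wavelength, in metres
mirror_diameter = 2.40     # primary mirror diameter, in metres
theta = rayleigh_angle(wavelength, mirror_diameter)
print(f"theta = {theta:.2e} rad")              # about 2.80e-07 rad

# Separation of two just-resolvable stars at the Andromeda distance
distance_ly = 2.0e6                            # distance r, in light years
separation_ly = distance_ly * theta            # s = r * theta (same unit as r)
print(f"separation = {separation_ly:.2f} ly")  # about 0.56 ly
```

Because θ is dimensionless (radians), s comes out in whatever unit r is given in, which is why the answer appears directly in light years.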
Figure 4 shows another mirror used to observe radio waves from outer space.

Figure 4. A 305-m-diameter natural bowl at Arecibo in Puerto Rico is lined with reflective material, making it into a radio telescope. It is the largest curved focusing dish in the world. Although D for Arecibo is much larger than for the Hubble Telescope, it detects much longer wavelength radiation and its diffraction limit is significantly poorer than Hubble's. Arecibo is still very useful, because important information is carried by radio waves that is not carried by visible light. (credit: Tatyana Temirbulatova, Flickr)

Diffraction is not only a problem for optical instruments but also for the electromagnetic radiation itself. Any beam of light having a finite diameter D and a wavelength λ exhibits diffraction spreading. The beam spreads out with an angle θ given by the equation [latex]\theta=1.22\frac{\lambda}{D}\\[/latex]. Take, for example, a laser beam made of rays as parallel as possible (angles between rays as close to θ = 0º as possible): instead, it spreads out at an angle [latex]\theta=1.22\frac{\lambda}{D}\\[/latex], where D is the diameter of the beam and λ is its wavelength. This spreading is impossible to observe for a flashlight, because its beam is not very parallel to start with. However, for long-distance transmission of laser beams or microwave signals, diffraction spreading can be significant (see Figure 5). To avoid this, we can increase D. This is done for laser light sent to the Moon to measure its distance from the Earth. The laser beam is expanded through a telescope to make D much larger and θ smaller.

Figure 5. The beam produced by this microwave transmission antenna will spread out at a minimum angle [latex]\theta=1.22\frac{\lambda}{D}\\[/latex] due to diffraction. It is impossible to produce a near-parallel beam, because the beam has a limited diameter.

In most biology laboratories, resolution is presented when the use of the microscope is introduced. The ability of a lens to produce sharp images of two closely spaced point objects is called resolution. The smaller the distance x by which two objects can be separated and still be seen as distinct, the greater the resolution. The resolving power of a lens is defined as that distance x. An expression for resolving power is obtained from the Rayleigh criterion. In Figure 6a we have two point objects separated by a distance x. According to the Rayleigh criterion, resolution is possible when the minimum angular separation is [latex]\displaystyle\theta=1.22\frac{\lambda}{D}=\frac{x}{d}\\[/latex] where d is the distance between the specimen and the objective lens, and we have used the small angle approximation (i.e., we have assumed that x is much smaller than d), so that tan θ ≈ sin θ ≈ θ. Therefore, the resolving power is [latex]\displaystyle{x}=1.22\frac{\lambda{d}}{D}\\[/latex]

Figure 6. (a) Two points separated by a distance x and positioned a distance d away from the objective. (credit: Infopro, Wikimedia Commons) (b) Terms and symbols used in discussion of resolving power for a lens and an object at point P. (credit: Infopro, Wikimedia Commons)

Another way to look at this is by re-examining the concept of Numerical Aperture (NA) discussed in Microscopes. There, NA is a measure of the maximum acceptance angle at which the fiber will take light and still contain it within the fiber. Figure 6b shows a lens and an object at point P. The NA here is a measure of the ability of the lens to gather light and resolve fine detail. The angle subtended by the lens at its focus is defined to be θ = 2α.
From the Figure and again using the small angle approximation, we can write [latex]\displaystyle\sin\alpha=\frac{\frac{D}{2}}{d}=\frac{D}{2d}\\[/latex] The NA for a lens is NA = n sin α, where n is the index of refraction of the medium between the objective lens and the object at point P. From this definition for NA, we can see that [latex]\displaystyle{x}=1.22\frac{\lambda{d}}{D}=1.22\frac{\lambda}{2\sin\alpha}=0.61\frac{\lambda{n}}{NA}\\[/latex] In a microscope, NA is important because it relates to the resolving power of a lens. A lens with a large NA will be able to resolve finer details. Lenses with larger NA will also be able to collect more light and so give a brighter image. Another way to describe this situation is that the larger the NA, the larger the cone of light that can be brought into the lens, and so more of the diffraction modes will be collected. Thus the microscope has more information to form a clear image, and so its resolving power will be higher. One of the consequences of diffraction is that the focal point of a beam has a finite width and intensity distribution. Consider focusing when only considering geometric optics, shown in Figure 7a. The focal point is infinitely small with a huge intensity and the capacity to incinerate most samples irrespective of the NA of the objective lens. For wave optics, due to diffraction, the focal point spreads to become a focal spot (see Figure 7b) with the size of the spot decreasing with increasing NA. Consequently, the intensity in the focal spot increases with increasing NA. The higher the NA, the greater the chances of photodegrading the specimen. However, the spot never becomes a true point. Figure 7. (a) In geometric optics, the focus is a point, but it is not physically possible to produce such a point because it implies infinite intensity. (b) In wave optics, the focus is an extended region. Section Summary Diffraction limits resolution. For a circular aperture, lens, or mirror, the Rayleigh criterion states that two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other. This occurs for two point objects separated by the angle [latex]\theta=1.22\frac{\lambda}{D}\\[/latex], where λ is the wavelength of light (or other electromagnetic radiation) and D is the diameter of the aperture, lens, mirror, etc. This equation also gives the angular spreading of a source of light having a diameter D. Conceptual Questions A beam of light always spreads out. Why can a beam not be created with parallel rays to prevent spreading? Why can lenses, mirrors, or apertures not be used to correct the spreading? Problems & Exercises The 300-m-diameter Arecibo radio telescope pictured in Figure 4 detects radio waves with a 4.00 cm average wavelength. (a) What is the angle between two just-resolvable point sources for this telescope? (b) How close together could these point sources be at the 2 million light year distance of the Andromeda galaxy? Assuming the angular resolution found for the Hubble Telescope in Example 1, what is the smallest detail that could be observed on the Moon? Diffraction spreading for a flashlight is insignificant compared with other limitations in its optics, such as spherical aberrations in its mirror. To show this, calculate the minimum angular spreading of a flashlight beam that is originally 5.00 cm in diameter with an average wavelength of 600 nm. 
(a) What is the minimum angular spread of a 633-nm wavelength He-Ne laser beam that is originally 1.00 mm in diameter? (b) If this laser is aimed at a mountain cliff 15.0 km away, how big will the illuminated spot be? (c) How big a spot would be illuminated on the Moon, neglecting atmospheric effects? (This might be done to hit a corner reflector to measure the round-trip time and, hence, distance.) A telescope can be used to enlarge the diameter of a laser beam and limit diffraction spreading. The laser beam is sent through the telescope in opposite the normal direction and can then be projected onto a satellite or the Moon. (a) If this is done with the Mount Wilson telescope, producing a 2.54-m-diameter beam of 633-nm light, what is the minimum angular spread of the beam? (b) Neglecting atmospheric effects, what is the size of the spot this beam would make on the Moon, assuming a lunar distance of 3.84 × 108 m? The limit to the eye’s acuity is actually related to diffraction by the pupil. (a) What is the angle between two just-resolvable points of light for a 3.00-mm-diameter pupil, assuming an average wavelength of 550 nm? (b) Take your result to be the practical limit for the eye. What is the greatest possible distance a car can be from you if you can resolve its two headlights, given they are 1.30 m apart? (c) What is the distance between two just-resolvable points held at an arm’s length (0.800 m) from your eye? (d) How does your answer to (c) compare to details you normally observe in everyday circumstances? What is the minimum diameter mirror on a telescope that would allow you to see details as small as 5.00 km on the Moon some 384,000 km away? Assume an average wavelength of 550 nm for the light received. You are told not to shoot until you see the whites of their eyes. If the eyes are separated by 6.5 cm and the diameter of your pupil is 5.0 mm, at what distance can you resolve the two eyes using light of wavelength 555 nm? (a) The planet Pluto and its Moon Charon are separated by 19,600 km. Neglecting atmospheric effects, should the 5.08-m-diameter Mount Palomar telescope be able to resolve these bodies when they are 4.50 × 109 km from Earth? Assume an average wavelength of 550 nm. (b) In actuality, it is just barely possible to discern that Pluto and Charon are separate bodies using an Earth-based telescope. What are the reasons for this? The headlights of a car are 1.3 m apart. What is the maximum distance at which the eye can resolve these two headlights? Take the pupil diameter to be 0.40 cm. When dots are placed on a page from a laser printer, they must be close enough so that you do not see the individual dots of ink. To do this, the separation of the dots must be less than Raleigh’s criterion. Take the pupil of the eye to be 3.0 mm and the distance from the paper to the eye of 35 cm; find the minimum separation of two dots such that they cannot be resolved. How many dots per inch (dpi) does this correspond to? Unreasonable Results. An amateur astronomer wants to build a telescope with a diffraction limit that will allow him to see if there are people on the moons of Jupiter. (a) What diameter mirror is needed to be able to see 1.00 m detail on a Jovian Moon at a distance of 7.50 × 108 km from Earth? The wavelength of light averages 600 nm. (b) What is unreasonable about this result? (c) Which assumptions are unreasonable or inconsistent? Construct Your Own Problem. Consider diffraction limits for an electromagnetic wave interacting with a circular object. 
Construct a problem in which you calculate the limit of angular resolution with a device, using this circular object (such as a lens, mirror, or antenna) to make observations. Also calculate the limit to spatial resolution (such as the size of features observable on the Moon) for observations at a specific distance from the device. Among the things to be considered are the wavelength of electromagnetic radiation used, the size of the circular object, and the distance to the system or phenomenon being observed. Glossary Rayleigh criterion: two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other Selected Solutions to Problems & Exercises 1. (a) 1.63 × 10−4 rad; (b) 326 ly 3. 1.46 × 10−5 rad 5. (a) 3.04 × 10−7 rad; (b) Diameter of 235 m 7. 5.15 cm 9. (a) Yes. Should easily be able to discern; (b) The fact that it is just barely possible to discern that these are separate bodies indicates the severity of atmospheric aberrations. Licenses and Attributions CC licensed content, Shared previously College Physics. Authored by: OpenStax College. Located at: http://cnx.org/contents/031da8d3-b525-429c-80cf-6c8ed997733a/College_Physics. License: CC BY: Attribution.
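As a quick cross-check of the selected solutions above, here is a minimal Python sketch that re-derives the answers to the first and third exercises from the Rayleigh criterion; it uses only the numbers stated in those problems.

def rayleigh_angle(wavelength_m, diameter_m):
    # Minimum resolvable angle, in radians, for a circular aperture of diameter D.
    return 1.22 * wavelength_m / diameter_m

# Exercise 1: the 300-m radio telescope observing 4.00 cm radio waves.
theta_radio = rayleigh_angle(0.0400, 300.0)
print(theta_radio)            # about 1.63e-4 rad
print(theta_radio * 2.0e6)    # about 326 ly at the 2-million-light-year distance

# Exercise 3: spreading of a 5.00-cm-diameter flashlight beam of 600 nm light.
print(rayleigh_angle(600e-9, 0.0500))   # about 1.46e-5 rad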
csirt-divd-nl-1063 ---- Kaseya Case Update 2 | DIVD CSIRT
Kaseya Case Update 2 04 Jul 2021 - Victor Gevers During the last 48 hours, the number of Kaseya VSA instances that are reachable from the internet has dropped from over 2,200 to less than 140 in our last scan today. And, by working closely with our trusted partners and national CERTs, the number of servers in The Netherlands has dropped to zero. This is a good demonstration of how a cooperative network of security-minded organizations can be very effective during a nasty crisis. By now, it is time to be a bit more clear on our role in this incident. First things first: yes, Wietse Boonstra, a DIVD researcher, has previously identified a number of the zero-day vulnerabilities [CVE-2021-30116] which are currently being used in the ransomware attacks. And yes, we have reported these vulnerabilities to Kaseya under responsible disclosure guidelines (aka coordinated vulnerability disclosure). Our research into these vulnerabilities is part of a larger project in which we investigate vulnerabilities in tools for system administration, specifically the administrative interfaces of these applications. These are products like Vembu BDR, Pulse VPN, and Fortinet VPN, to name a few. We are focusing on these types of products because we spotted a trend where more and more of the products that are used to keep networks safe and secure are showing structural weaknesses. After this crisis, there will be the question of who is to blame. From our side, we would like to mention that Kaseya has been very cooperative. Once Kaseya was aware of our reported vulnerabilities, we have been in constant contact and cooperation with them. When items in our report were unclear, they asked the right questions. Also, partial patches were shared with us to validate their effectiveness. During the entire process, Kaseya has shown that they were willing to put maximum effort and initiative into this case, both to get this issue fixed and to get their customers patched. They showed a genuine commitment to do the right thing. Unfortunately, we were beaten by REvil in the final sprint, as they could exploit the vulnerabilities before customers could even patch. After the first reports of ransomware occurred, we kept working with Kaseya, giving our input on what happened and helping them cope with it. This included giving them lists of IP addresses and customer IDs of customers that had not responded yet, which they promptly contacted by phone.
So, in summary: DIVD has been in a Coordinated Vulnerability Disclosure process with Kaseya, who was working on a patch. Some of these vulnerabilities were used in this attack. Kaseya and DIVD collaborated to limit the damage wherever possible. As more details become available, we will report them on our blog and case file.

csirt-divd-nl-3484 ---- DIVD-2021-00011 - Kaseya VSA Limited Disclosure | DIVD CSIRT
DIVD-2021-00011 - Kaseya VSA Limited Disclosure
Our reference: DIVD-2021-00011
Case lead: Frank Breedijk
Author: Lennaert Oudshoorn
Researcher(s): Wietse Boonstra, Lennaert Oudshoorn, Victor Gevers, Frank Breedijk, Hidde Smit
CVE(s): CVE-2021-30116, CVE-2021-30117, CVE-2021-30118, CVE-2021-30119, CVE-2021-30120, CVE-2021-30121, CVE-2021-30201
Product: Kaseya VSA
Versions: All on-premise Kaseya VSA versions.
Recommendation: All on-premises VSA servers should continue to remain offline until further instructions from Kaseya about when it is safe to restore operations. A patch, together with a set of recommendations on how to increase your security posture, will need to be installed prior to restarting the VSA.
Status: Open
Summary One of our researchers found multiple vulnerabilities in Kaseya VSA, which we were in the process of disclosing to Kaseya under responsible disclosure (or Coordinated Vulnerability Disclosure); before all of these vulnerabilities could be patched, a ransomware attack happened using Kaseya VSA. Ever since we released the news that we indeed notified Kaseya of a vulnerability used in the ransomware attack, we have been getting requests to release details about these vulnerabilities and the disclosure timeline. In line with the guidelines for Coordinated Vulnerability Disclosure we have not disclosed any details so far. And, while we feel it is time to be more open about this process and our decisions regarding this matter, we will still not release the full details. The vulnerabilities We notified Kaseya of the following vulnerabilities:
CVE-2021-30116 - A credentials leak and business logic flaw, to be included in 9.5.7
CVE-2021-30117 - An SQL injection vulnerability, resolved in May 8th patch.
CVE-2021-30118 - A Remote Code Execution vulnerability, resolved in April 10th patch. (v9.5.6)
CVE-2021-30119 - A Cross Site Scripting vulnerability, to be included in 9.5.7
CVE-2021-30120 - 2FA bypass, to be resolved in v9.5.7
CVE-2021-30121 - A Local File Inclusion vulnerability, resolved in May 8th patch.
CVE-2021-30201 - An XML External Entity vulnerability, resolved in May 8th patch.
What you can do All on-premises VSA servers should continue to remain offline until further instructions from Kaseya about when it is safe to restore operations. A patch, together with a set of recommendations on how to increase your security posture, will need to be installed prior to restarting the VSA. Kaseya has released a detection tool to help determine if a system has been compromised. Cado Security has made a GitHub repository with Resources for DFIR Professionals Responding to the REvil Ransomware Kaseya Supply Chain Attack. We recommend that any Kaseya server is carefully checked for signs of compromise before taking it back into service, including, but not limited to, the IoCs published by Kaseya. What we are doing The Dutch Institute for Vulnerability Disclosure (DIVD) performs a daily scan to detect vulnerable Kaseya VSA servers and notify the owners directly or via the known abuse channels, Gov-CERTs, and other trusted channels. We have identified these servers by downloading the paths ‘/’, ‘/api/v1.5/cw/environment’ and ‘/install/kaseyalatestversion.xml’ and matching patterns in these files. In the past few days we have been working with Kaseya to make sure customers turn off their systems, by tipping them off about customers that still have systems online, and we hope to be able to continue to work together to ensure that their patch is installed everywhere.
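To illustrate the kind of fingerprinting described above, here is a minimal Python sketch that probes the three paths named in the case file and looks for an identifying string in the responses. The match pattern and the example host below are assumptions for illustration only; DIVD has not published the actual patterns it matches on.

import requests

PATHS = ["/", "/api/v1.5/cw/environment", "/install/kaseyalatestversion.xml"]
MARKERS = ["kaseya"]   # assumed identifying string, not DIVD's published patterns

def looks_like_kaseya_vsa(base_url, timeout=5.0):
    # Fetch each well-known path and report whether any response contains a marker.
    for path in PATHS:
        try:
            resp = requests.get(base_url.rstrip("/") + path, timeout=timeout)
        except requests.RequestException:
            continue   # host unreachable or path not served; try the next one
        body = resp.text.lower()
        if any(marker in body for marker in MARKERS):
            return True
    return False

# Hypothetical host, for illustration only.
print(looks_like_kaseya_vsa("https://vsa.example.com"))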
Timeline
01 Apr 2021 - Research start
02 Apr 2021 - DIVD starts scanning internet-facing implementations.
04 Apr 2021 - Start of the identification of possible victims (with internet-facing systems).
06 Apr 2021 - Kaseya informed.
10 Apr 2021 - Vendor starts issuing patches (v9.5.5), resolving CVE-2021-30118.
8 May 2021 - Vendor issues another patch (v9.5.6), resolving CVE-2021-30117, CVE-2021-30121, CVE-2021-30201.
04 Jun 2021 - DIVD CSIRT hands over a list of identified Kaseya VSA hosts to Kaseya.
26 Jun 2021 - 9.5.7 on SaaS, resolving CVE-2021-30116 and CVE-2021-30119.
02 Jul 2021 - DIVD responds to the ransomware by scanning for Kaseya VSA instances reachable via the internet and sends out notifications to network owners.
07 Jul 2021 - Limited publication (after 3 months).
More information: the official advisory from Kaseya, the DoublePulsar blog post, the Sophos blog post, and the CISA-FBI Guidance for MSPs and their Customers Affected by the Kaseya VSA Supply-Chain Ransomware Attack.

csirt-divd-nl-68 ---- Kaseya VSA Limited Disclosure | DIVD CSIRT
Kaseya VSA Limited Disclosure 07 Jul 2021 - Frank Breedijk Why we are only disclosing limited details on the Kaseya vulnerabilities Last weekend we found ourselves in the middle of a storm. A storm created by the ransomware attacks executed via Kaseya VSA, using a vulnerability which we confidentially disclosed to Kaseya, together with six other vulnerabilities. Ever since we released the news that we indeed notified Kaseya of a vulnerability used in the ransomware attack, we have been getting requests to release details about these vulnerabilities and the disclosure timeline. In line with the guidelines for Coordinated Vulnerability Disclosure we have not disclosed any details so far. And, while we feel it is time to be more open about this process and our decisions regarding this matter, we will still not release the full details. Why the secrecy? As the ransomware attack using Kaseya VSA software has shown, the effects of a malicious actor knowing the full details of a vulnerability can be devastating. This immediately poses a dilemma to anybody that discovers a critical vulnerability in a critical piece of software: do we disclose the details or not? Let’s use an analogy. Say a security researcher discovers a vulnerability in a high-end car. When you kick the left rear bumper in just the right way, the car doors open, and the engine starts. What should the researcher do? Tell everybody, tell all the owners of this type of car, or inform the manufacturer so they can recall and fix the car? If the full details are made public, it is evident that many cars will get stolen very soon. If you inform the owners, this will likely happen too. The chances of the details remaining secret are slim if you inform a broad audience. Even if you limit the details to ‘a security issue involving the bumper’, you might tip off the wrong people. If you tell the manufacturer, there is a good chance they will come up with a fix before large-scale car thefts start happening, and you can consider whether the owners need to be told to keep their cars behind closed doors in the meantime. How does this relate to Kaseya VSA? When we discovered the vulnerabilities in early April, it was evident to us that we could not let these vulnerabilities fall into the wrong hands. After some deliberation, we decided that informing the vendor and awaiting the delivery of a patch was the right thing to do. We hypothesized that, in the wrong hands, these vulnerabilities could lead to the compromise of large numbers of computers managed by Kaseya VSA. As we stated before, Kaseya’s response to our disclosure has been on point and timely, unlike some other vendors we have previously disclosed vulnerabilities to. They listened to our findings and addressed some of them by releasing a patch resolving a number of these vulnerabilities, followed by a second patch resolving even more.
We’ve been in contact with Kaseya ahead of the release of both these patches, allowing us to validate that these vulnerabilities had indeed been resolved by the patch in development. Unfortunately, the worst-case scenario came true on Friday the 2nd of July. Kaseya VSA was used in an attack to spread ransomware, and Kaseya was compelled to use the nuclear option: shutting down their Kaseya Cloud and advising customers to turn off their on-premise Kaseya VSA servers. A message that unfortunately arrived too late for some of their customers. We later learned that one of the two vulnerabilities used in the attack was one we had previously disclosed to Kaseya. What can we tell? In this blog post and DIVD case DIVD-2021-00011 we publish the timeline and limited details of the vulnerabilities we notified Kaseya of. Full disclosure? Given the serious nature of these vulnerabilities and the obvious consequences of abuse of Kaseya VSA, we will not disclose the full details of the vulnerabilities until such time that Kaseya has released a patch and this patch has been installed on a sufficient number of systems, something for which we have the monitoring scripts. In the past few days we have been working with Kaseya to make sure customers turn off their systems, by tipping them off about customers that still have systems online, and we hope to be able to continue to work together to ensure that their patch is installed everywhere. We have no indication that Kaseya is hesitant to release a patch. Instead, they are still working hard to make sure that after their patch the system is as secure as possible, to avoid a repeat of this scenario. Therefore we do not feel the need to lay down any kind of deadline for full disclosure at this point in time. A properly patched and secure Kaseya VSA is in the best interest of the security of Kaseya customers and the internet at large. The vulnerabilities We notified Kaseya of the following vulnerabilities:
CVE-2021-30116 - A credentials leak and business logic flaw, resolution in progress.
CVE-2021-30117 - An SQL injection vulnerability, resolved in May 8th patch.
CVE-2021-30118 - A Remote Code Execution vulnerability, resolved in April 10th patch. (v9.5.6)
CVE-2021-30119 - A Cross Site Scripting vulnerability, resolution in progress.
CVE-2021-30120 - 2FA bypass, resolution in progress.
CVE-2021-30121 - A Local File Inclusion vulnerability, resolved in May 8th patch.
CVE-2021-30201 - An XML External Entity vulnerability, resolved in May 8th patch.
Timeline
01 Apr 2021 - Research start
02 Apr 2021 - DIVD starts scanning internet-facing implementations.
04 Apr 2021 - Start of the identification of possible victims (with internet-facing systems).
06 Apr 2021 - Kaseya informed.
10 Apr 2021 - Vendor starts issuing patches (v9.5.5), resolving CVE-2021-30118.
8 May 2021 - Vendor issues another patch (v9.5.6), resolving CVE-2021-30117, CVE-2021-30121, CVE-2021-30201.
04 Jun 2021 - DIVD CSIRT hands over a list of identified Kaseya VSA hosts to Kaseya.
26 Jun 2021 - 9.5.7 on SaaS, resolving CVE-2021-30116 and CVE-2021-30119.
02 Jul 2021 - DIVD responds to the ransomware by scanning for Kaseya VSA instances reachable via the internet and sends out notifications to network owners.
07 Jul 2021 - Limited publication (after 3 months).
cynthiang-ca-193 ---- Learning (Lib)Tech – Stories from my Life as a Technologist
Reflection: My third year at GitLab and becoming a non-manager leader Wow, 3 years at GitLab. Since I left teaching, because almost all my jobs were contracts, I haven’t been anywhere for more than 2 years, so I find it interesting that my longest term is not only post-librarian-positions, but at a startup! Year 3 was full on pandemic year and it was a busy one. Due to the travel restrictions, I took less vacation than previous years, and I’ll be trying to make up for that a little by taking this week off. When I started It’s only been 3 years, but the company has grown considerably. When I started, there were 279 people (with 183 team members currently who started before me), and GitLab now has over 1300 team members. July 2019 at @gitlab is starting off with some exciting news 🎉. The GitLab team is now 700+ strong. To put that into perspective, that is 50% growth since Feb 2019 and 118% growth from 1 yr ago 😍📈 — nadiavatalidis (@NadiaVat) July 1, 2019 With time, I learnt what it meant to live by the GitLab values, and even though the company has grown and there are "growing pains", the company hasn’t changed a lot. I’m thankful to all the people who have had a major influence on what I think it means to "live the values" though many are no longer at GitLab. I have to say @lkozloff who pushed me to be the best that I could be, giving me what I needed to always keep moving forward. Also those who lead by example and whom I’ve learnt a lot about #gitlab values from: @sytses @_victorwu_ @RemoteJeremy @ClemMakesApps @MvRemmerden — Cynthia "Arty" Ng (@TheRealArty) March 17, 2021 Despite all the changes around me and the pandemic, I’ve only ever had two managers while at GitLab, which helped me settle in, grow, and influence others in turn. If you want to learn more about my first two years at GitLab, please check out my previous reflections: My first year at GitLab and becoming Senior My second year at GitLab and on becoming Senior again Working on (O)KRs After my promotion to Senior last year, I started being more involved in OKRs (Objective Key Results). The OKRs are owned by managers and they’re accountable for them, but as managers are supposed to do, much of the work itself can be delegated to others. I was responsible for tracking progress on a joint docs OKR last year, and running the retrospective on how it went in Support. I also helped revamp Support onboarding by: merging the Support Agent (deprecated position) with outdated .com Engineer onboarding, creating a console training module, reorganizing onboarding content to remove redundancies, filling in missing content, and making tasks more consistent for new team members, creating a Documentation training module, creating a Code contribution training module, and probably more that I’ve forgotten.
More recently, I’ve started working with my manager to come up with KRs that are (not OKRs for the team, but) goals for me to fulfill within a specified time span (typically a quarter). For example, as part of our efforts to "coalesce" the team so that everyone works on tickets in both products (SaaS and Self-managed), I have an epic to ensure everyone is cross-trained. Training was considered out of scope of the Areas of Focus workgroup that came up with the changes that are coming, but it’s definitely a requirement for fully implementing the changes we want to make. So, I took on the epic as something I would lead this quarter. It’s a bit stalled at the moment waiting on managers, but I’m sure I’ll get it done within the timeline that the managers expect. While looking at the level of work I was doing, shortly after my re-promotion to senior, my manager suggested that we start working on the next promotion document. On not getting promoted to Staff Earlier this year, I wrote about choosing not to pursue the management track at least for now. The main reason was that going into management means shifting your focus to your direct reports, while I believe I can be successful as a manager (having been a manager before), I know that shifting back to a technical individual contributor role might be difficult for various reasons. As a result, I began to pursue a promotion to Staff Support Engineer. Discussions on #CareerDevelopment are much more open @gitlab than anywhere else I've been largely, thanks to our #transparency value. If you want a glimpse into becoming a Staff #Support team member, the video of my discussion with @lkozloff is public! https://t.co/X3hZwaBD4Q — Cynthia "Arty" Ng (@TheRealArty) January 5, 2021 After submitting my promotion document, it got rejected. It hit me fairly hard at the time. Despite being told I was doing great work, getting the promotion rejected was a blow to my confidence. I also got the news while my manager was away, so I’m grateful to Izzy Fee, another support manager, who stepped in to have the "hard" conversation with me. In a follow up meeting with my senior manager, it turns out, we weren’t aligned on what we believed are the company needs for Support. So, it wasn’t surprising that it got rejected. Partly, it was my manager and my own fault for not writing the document to talk about how my promotion would fill a company need. So, pro-tip, if a promotion is reliant on not just merit, but also a "company need", then talk to your manager’s manager before writing your promotion document, not after. It was a really good discussion nonetheless. We agreed that while I wouldn’t be fulfilling the biggest need that we currently have (being a Staff level engineer who is an expert and can lead training and ramp up of others for supporting Kubernetes and highly-available self-managed instances), I could potentially fill a less urgent but still important need on increasing efficiency. The idea is to help answer question of > how could we handle double the support tickets with the same number of team members we have now? I’ve been working on a number of "Support Efficiency" pieces, and helping my manager with some of his OKRs that focus on that as well. The best example of increased efficiency, especially with low effort, is when I figured out that limiting self-serve deletion drove up our account deletion requests. In cutting it down by at least half, I’ve saved the company an estimated support engineer’s salary’s worth. 
At the moment, I haven’t gone back to revising my promotion document. Instead, I’ve been focused on moving ahead with training, onboarding and training others, and working on the project-type work. I’m also hoping that some of the work I’m doing this quarter will add to the document. The more I discuss the possible promotion to Staff though, the more I wonder if I really want it. I’m happy doing what I do now, and I don’t want the time I’m not working on tickets and mentoring others to be solely focused on the company need that I’m supposed to be fulfilling. So, we’ll see. We’ve always said that there’s nothing wrong with staying an intermediate, so I simply need to remind myself that there’s nothing wrong with staying a senior. Becoming a non-manager leader I believe my main accomplishment of the last year is becoming a leader (without having moved to the managerial level). At GitLab, we have a list of leadership competencies with a list on how they apply at the various individual contributor levels within Support. I don’t review the list regularly, so it’s not like I consider it a list where I check off the boxes to say I’m doing those things. I like to think that instead, I’ve internalized a lot of the GitLab values and became a leader through helping, mentoring, and coaching others; thinking about and actioning on improvements; and most of all, leading by example. As usual, I continued to help improve training in general such as creating self-assessment quizzes for SAML training and SCIM training, and doing a SCIM deep dive session. More than that though, I take opportunities to help others identify when they can take what they’ve learnt and spread the knowledge as well. Primarily, I do this by helping others make documentation and handbook contributions, and I often do documentation reviews, especially for newer team members. Of course, that’s just one example. I also regularly meet with other team members to talk about how they’re doing, and particularly, career path and growth. While team members are generally talking to their managers about their career path, most of the support managers we currently have started within the last 1.5 years, so don’t necessarily have the knowledge of learning opportunities, promotions, and career paths within GitLab that I’ve gained. For example, many at GitLab don’t know that we have an internship for learning program where you can "intern" for another team. Being an individual contributor can also be a very different experience for some, because we are expected to be a manager of one and many aren’t used to self-leadership and very importantly, making their work known to their manager, especially at performance review time. I’m sure I could go on, but I’ll move on instead. Being a leader outside of the "work" It’s important to contribute and be a leader outside of the expected work as well, which in Support means outside of answering tickets, process improvements, and contributing to the product. I strive to be one of the people who are influential and make an impact as others have done for me, like those I mentioned near the beginning of this post (who outside of my manager were not part of my team). This year, I become a member of the Women’s Team Member Resource Group’s Learning and Development subcommittee with helping to facilitate learning pathways we’ve created. There’s no official list of members, but I started attending the meetings and help with the initiatives we decide on. 
While I hesitated at first since I wasn’t sure I could make it, I’ve signed up to be a Contribute Ambassador for our next event in September. Ambassadors are volunteers that do everything they can to assist the organizers make the event a success. Interestingly, one of the organizers messaged me on the last day for sign ups that she was hoping to see me apply. It was certainly great to hear. I also wanted to do a fun, more social thing for the team, so at the end of last year, I organized a "secret santa" gift exchange for the Support team. Again, there’s probably more, and certainly some which I started in previous years that I’ve continued, but I decided to pull out just a small number of examples. The feedback on being a non-manager leader While it can be hard to gauge how much of an impact you truly have, I’ve gotten bits and pieces of feedback from both individual contributors and managers in the past few months. A number of the team members have expressed their appreciation in having a non-manager’s perspective on how to prioritize work, record work done, make work visible to their manager, and move forward in learning and career development. I often have a "manager’s perspective" because I’ve worked as one, and I do my best to apply coaching skills as well. So, I sometimes have a fairly different perspective and approach to these discussions than other individual contributors on the team. A couple of the managers have occasionally shared stories with me as well on the impact that I’ve had on their direct reports. Once it was how I strategized and provided options on a possible solution to customer problem. Another time was how welcoming I made a new team member feel. Of course, the best anecdotes are the ones from the individuals themselves. I’ve had a team member say they look up to me and aspire to work like me. My manager told me he learns a lot from me. Someone from another team recently said they remember how welcoming I was to them when they started (more than a year ago). The most memorable is still this one: Chatting with one of the new Support team members and had them tell me their decision to join #GItLab was due to my blog posts. Apparently they read all of them and decided "that looks like a great place to work" <3https://t.co/EpDrTiTn8B — Cynthia "Arty" Ng (@TheRealArty) August 13, 2020 From the positive feedback I’ve received so far, I like to think that the whole aspiring to be a non-manager leader thing is working. External contributions and podcasts I have always made sure to partake in some external contributions including conference volunteering and mentorship programs. Somehow this past year ended up being one for podcasts as well: Co-guest with Amy Quails from the Technical Writing team on the WritetheDocs podcast The Journey into Tech Series Guest speaker on another GitLab team member’s podcast, Brown Tech Bae Sharing knowledge and training in support engineering on Customers Support Leaders podcast I hope to continue being involved in the technical writing, library, and support communities as the years go on. Here’s to Year 4! This year’s post is not as linear of a story, but I believe that’s a result of going from a new(er) team member still developing and growing in a role to someone who is more settled into the role, picking out highlights of the year. We’ll see what Year 4’s reflection is like to prove or disprove my theory. If you made it to the end, thanks for reading! Hope to see you again next year! Your prize is a happy quokka. 
=D
UBC iSchool Career Talk Series: Journey from LibTech to Tech The UBC iSchool reached out to me recently asking me to talk about my path from getting my library degree to ending up working in a tech company. Below is the script for my portion of the talk, along with a transcription of the questions I answered.
Choosing not to go into management (again) Often, to move up and get a higher pay, you have to become a manager, but not everyone is suited to become a manager, and sometimes given the preference, it’s not what someone wants to do. Thankfully at GitLab, in every engineering team including Support, we have two tracks: technical (individual contributor), and management.
Prioritization in Support: Tickets, Slack, issues, and more I mentioned in my GitLab reflection that prioritization has been quite different working in Support compared to other previous work I’ve done. In most of my previous work, I’ve had to take “desk shifts” but those are discreet where you’re focused on providing customer service during that period of time and you can focus on other things the rest of the time. In Support, we have to constantly balance all the different work that we have, especially in helping to ensure that tickets are responded to within the Service Level Agreement (SLA). It doesn’t always happen, but I ultimately try to reach inbox 0 (with read-only items possibly left), and GitLab to-do 0 by the end of the every week. People often ask me how I manage to do that, so hopefully this provides a bit of insight.
Reflection Part 2: My second year at GitLab and on becoming Senior again This reflection is a direct continuation of part 1 of my time at GitLab so far. If you haven’t, please read the first part before beginning this one.
Reflection Part 1: My first year at GitLab and becoming Senior About a year ago, I wrote a reflection on Summit and Contribute, our all staff events, and later that year, wrote a series of posts on the GitLab values and culture from my own perspective. There is a lot that I mention in the blog post series and I’ll try not to repeat myself (too much), but I realize I never wrote a general reflection at year 1, so I’ve decided to write about both years now but split into 2 parts.
Is blog reading dead? There was a bit more context to the question, but a friend recently asked me: What you do think? Is Blogging dead?
Working remotely at home as a remote worker during a pandemic I’m glad that I still have a job, that my life isn’t wholly impacted by the pandemic we’re in, but to say that nothing is different just because I was already a remote worker would be wrong. The effect the pandemic is having on everyone around you has affects your life. It seems obvious to me, but apparently that fact is lost on a lot of people. I’d expect that’s not the case for those who read my blog, but I thought it’d be worth reflecting on anyway.
Code4libBC Lightning Talk Notes: Day 2 Code4libBC Day 2 lightning talk notes!
Code4libBC Lightning Talk Notes: Day 1 Code4libBC Day 1 lightning talk notes!

cynthiang-ca-9550 ---- Learning (Lib)Tech Stories from my Life as a Technologist
Reflection: My third year at GitLab and becoming a non-manager leader Wow, 3 years at GitLab. Since I left teaching, because almost all my jobs were contracts, I haven't been anywhere for more than 2 years, so I find it interesting that my longest term is not only post-librarian-positions, but at a startup! Year 3 was full on pandemic year and it was a busy one. Due to the travel restrictions, I took less vacation than previous years, and I'll be trying to make up for that a little by taking this week off. UBC iSchool Career Talk Series: Journey from LibTech to Tech The UBC iSchool reached out to me recently asking me to talk about my path from getting my library degree to ending up working in a tech company.
Below is the script for my portion of the talk, along with a transcription of the questions I answered. Context To provide a bit of context (and … Continue reading "UBC iSchool Career Talk Series: Journey from LibTech to Tech" Choosing not to go into management (again) Often, to move up and get a higher pay, you have to become a manager, but not everyone is suited to become a manager, and sometimes given the preference, it’s not what someone wants to do. Thankfully at GitLab, in every engineering team including Support, we have two tracks: technical (individual contributor), and management. Progression … Continue reading "Choosing not to go into management (again)" Prioritization in Support: Tickets, Slack, issues, and more I mentioned in my GitLab reflection that prioritization has been quite different working in Support compared to other previous work I’ve done. In most of my previous work, I’ve had to take “desk shifts” but those are discreet where you’re focused on providing customer service during that period of time and you can focus on … Continue reading "Prioritization in Support: Tickets, Slack, issues, and more" Reflection Part 2: My second year at GitLab and on becoming Senior again This reflection is a direct continuation of part 1 of my time at GitLab so far. If you haven’t, please read the first part before beginning this one. Becoming an Engineer (18 months) The more time I spent working in Support, the more I realized that the job was much more technical than I originally … Continue reading "Reflection Part 2: My second year at GitLab and on becoming Senior again" Reflection Part 1: My first year at GitLab and becoming Senior About a year ago, I wrote a reflection on Summit and Contribute, our all staff events, and later that year, wrote a series of posts on the GitLab values and culture from my own perspective. There is a lot that I mention in the blog post series and I’ll try not to repeat myself (too … Continue reading "Reflection Part 1: My first year at GitLab and becoming Senior" Is blog reading dead? There was a bit more context to the question, but a friend recently asked me: What you do think? Is Blogging dead? I think blogging the way it used to work is (mostly) dead. Back in the day, we had a bunch of blogs and people who subscribe to them via email and RSS feeds. … Continue reading "Is blog reading dead?" Working remotely at home as a remote worker during a pandemic I’m glad that I still have a job, that my life isn’t wholly impacted by the pandemic we’re in, but to say that nothing is different just because I was already a remote worker would be wrong. The effect the pandemic is having on everyone around you has affects your life. It seems obvious to … Continue reading "Working remotely at home as a remote worker during a pandemic" Code4libBC Lightning Talk Notes: Day 2 Code4libBC Day 2 lightning talk notes! Code club for adults/seniors – Dethe Elza Richmond Public Library, Digital Services Technician started code clubs, about 2 years ago used to call code and coffee, chain event, got little attendance had code codes for kids, teens, so started one for adults and seniors for people who have done … Continue reading "Code4libBC Lightning Talk Notes: Day 2" Code4libBC Lightning Talk Notes: Day 1 Code4libBC Day 1 lightning talk notes! 
Scraping index pages and VuFind implementation – Louise Brittain Boisvert Systems Librarian at Legislative collection development policy: support legislators and staff, receive or collect publications, many of them digital but also some digitized (mostly PDF, but others) accessible via link in MARC record previously, would create an index page … Continue reading "Code4libBC Lightning Talk Notes: Day 1"

daily-jstor-org-9924 ---- Franz Kafka's The Trial—It's Funny Because It's True | JSTOR Daily
Arts & Culture Franz Kafka’s The Trial—It’s Funny Because It’s True Just because you’re paranoid doesn’t mean they’re not out to get you. Jeremy Irons plays Kafka in Steven Soderbergh's Kafka. © Miramax 1991. By: Benjamin Winterhalter July 2, 2019 In Franz Kafka’s novel The Trial, first published in 1925, a year after its author’s death, Josef K. is arrested, but can’t seem to find out what he’s accused of. As K. navigates a labyrinthine network of bureaucratic traps—a dark parody of the legal system—he keeps doing things that make him look guilty. Eventually his accusers decide he must be guilty, and he is summarily executed. As Kafka puts it in the second-to-last chapter, “The Cathedral:” “the proceedings gradually merge into the judgment.” Kafka’s restrained prose—the secret ingredient that makes this story about a bank clerk navigating bureaucracy into an electrifying page-turner—trades on a kind of dramatic irony. As the novelist David Foster Wallace noted in his essay “Laughing with Kafka,” this is Kafka’s whole schtick, and it’s what makes him so funny. By withholding knowledge from the protagonist and the reader, Kafka dangles the promise that all will be revealed in the end. But with every sentence the reader takes in, it feels increasingly likely that the reason for K.’s arrest will remain a mystery. As The Trial follows its tragic path deeper into K.’s insular, menacing, and sexualized world, it gradually becomes clear that the answer was never forthcoming. In fact, Kafka hints at the narrator’s ignorance at the very beginning: “Someone must have slandered Josef K., for one morning, without having done anything wrong, he was arrested.” Why the conjecture about what “must have” happened unless the narrator, who relates the story in the past tense, doesn’t know? That ignorance sets up the humor: During certain particularly insane moments in K.’s journey, the frustration of not knowing why he’s enduring it all becomes unbearable, at which point there’s no choice but to laugh. Into the Unreal Many commentators on The Trial have observed a sense of unreality in the novel, a feeling that something is somehow “off” that hangs like a fog over Kafka’s plotline.
The philosopher Hannah Arendt, for instance, wrote in her well-known essay on Kafka: “In spite of the confirmation of more recent times that Kafka’s nightmare of a world was a real possibility whose actuality surpassed even the atrocities he describes, we still experience in reading his novels and stories a very definite feeling of unreality.” For Arendt, this impression of unreality derives from K.’s internalization of a vague feeling of guilt. That all-pervasive guilt becomes the means to secure K.’s participation in a corrupt legal system. She writes: This feeling, of course, is based in the last instance on the fact that no man is free from guilt. And since K., a busy bank employee, has never had time to ponder such generalities, he is induced to explore certain unfamiliar regions of his ego. This in turn leads him into confusion, into mistaking the organized and wicked evil of the world surrounding him for some necessary expression of that general guiltiness… In other words, Arendt reads The Trial as a kind of controlled descent into madness and corruption, ending in a violent exaggeration of the knowledge that nobody’s perfect. The literary scholar Margaret Church expanded on these psychological themes in an article in Twentieth Century Literature, pointing to “the dreamlike quality of time values and the assumption of an interior time” employed throughout The Trial. According to Church, this unsteady temporality implies that most of what happens in The Trial isn’t real—or isn’t fully real, at any rate. In fact, she suggests, it makes as much sense to assume that “the characters are projections of K.’s mind.” Likewise, the literary scholar Keith Fort remarked in an article in The Sewanee Review that it’s “the nightmarish quality of unreality that has made Kafka’s name synonymous with any unreal, mysterious force which operates against man.” In other words, this type of unreality has become so closely associated with Kafka that the best word to describe it is, circularly, Kafkaesque. Jeremy Irons in Steven Soderbergh’s Kafka (1991). License to Kafka Writing in Critical Inquiry, however, the art historian Otto Karl Werckmeister charged that many Kafka interpreters, including Arendt, were engaging with a politicized fantasy of what Kafka represents, rather than with the real Kafka. For Werckmeister, this fantastical reading of Kafka’s life was best captured in Steven Soderbergh’s movie Kafka, which casts Jeremy Irons as “Kafka the brooding office clerk turned underworld agent”—a device that Werckmeister calls “Kafka 007” (after, of course, the James Bond franchise). Some of these interpreters were motivated by a desire to dismiss Kafka as a handwringing bourgeois do-nothing—or even a “pre-fascist.” Others expressed a willingness to excuse Kafka’s alleged “indifference to social policy” by appealing to his loose associations with socialist or anarchist circles. All of them, Werckmeister argues, are missing the obvious: They underestimate Kafka’s role as a lawyer at the Worker’s Accident Insurance Institute, in Prague, where he imposed workplace safety regulations on unwilling industrial employers: Thus at the highest echelons of a semipublic, government-sanctioned institution enacting social policy, Kafka’s job was to regulate the social conduct of employers vis-à-vis the working class… The employers under Kafka’s supervision tenaciously resisted the application of recent Austrian social policy laws, which were adapted from Bismarck’s legislation in Germany. 
They contested their risk classifications, disregarded their safety norms, tried to thwart plant inspections, and evade their premium payments. The department headed by Kafka was pitted against them in an adversarial relationship, no matter how conciliatory the agency's mission was meant to be. Kafka's tales are a reflection of the deep obstacles to progress he perceived in the social reality of his time. He even "anticipated the political self-critique of literature to the point of its nonpublication," keeping most of his writing private, then asking that the manuscripts for The Trial and The Castle be destroyed upon his death (a wish that was, thankfully, not honored). But the critical focus has been trained on the received version of Kafka, an interpretation of his life derived under exigent circumstances—the eruption of fascism in Europe—for ideological purposes. It has been trained, so to speak, on the Kafkaesque, instead of on Kafka himself. For Werckmeister, it's true that Kafka's fiction ultimately offers the most reliable guide to his political orientation, provided we understand that fiction in the context of his professional life. At work, he took the side of the working class—indeed, he represented its interests in a struggle against capital. He was "a man who tried to live his life according to principles of humanism, ethics, even religion." As a direct result of that experience, he learned the disturbing truth that, in the law, "Lies are made into a universal system," as he wrote in the penultimate chapter of The Trial. The best he could manage within the law still would be a far cry from real justice (which, Kafka also knew, would have to include sexual justice to be anywhere near complete). Maybe Werckmeister is right about the political motives of critics like Arendt, but what about the plunging sense of unease—like a feeling of falling—that no one can quite seem to shake when they first encounter Kafka's stories? "The Trial" by Wolfgang Letti (1981). "It's Funny Because It's True" I'm here to suggest, following Werckmeister, that this feeling results from the fact that Kafka's stories, despite their bizarre premises, are unnervingly real. Although there is undoubtedly an element of the absurd in the worlds Kafka creates, his style—unpretentious and specific, yet free from slang—renders those worlds with such painful accuracy that they seem totally familiar while we're in them, like déjà vu or a memory of a bad dream: K. turned to the stairs to find the room for the inquiry, but then paused as he saw three different staircases in the courtyard in addition to the first one; moreover, a small passage at the other end of the courtyard seemed to lead to a second courtyard. He was annoyed that they hadn't described the location of the room more precisely; he was certainly being treated with strange carelessness or indifference, a point he intended to make loudly and clearly. Then he went up the first set of stairs after all, his mind playing with the memory of the remark the guard Willem had made that the court was attracted by guilt, from which it actually followed that the room for the inquiry would have to be located off whatever stairway K. chanced to choose.
The time-bending nature of Kafka's prose, then, shouldn't be seen as a pathological formalism—a linguistically engineered unreality—but as a reflection of Kafka's intuitive understanding of the emerging principles of modern physics, in which time itself is relative. A 1956 article by the renowned physicist Werner Heisenberg suggests that a high-level awareness of modern physics defines and structures the modernist sensibility in art. While "there is little ground for believing that the current world view of science has directly influenced the development of modern art," still "the changes in the foundations of modern science are an indication of profound transformations in the fundamentals of our existence." Now that the cat is out of Schrödinger's bag (or rather, box): The old compartmentalization of the world into an objective process in space and time, on the one hand, and the soul in which this process is mirrored, on the other… is no longer suitable as the starting point for the understanding of modern science. In the field of view of this science there appears above all the network of relations between man and nature, of the connections through which we as physical beings are dependent parts of nature and at the same time, as human beings, make them the object of our thought and actions. The "unreality" in Kafka that has captivated so many commentators is what best aligns, ironically, with the current scientific worldview, which sees its own understanding of reality as necessarily partial, limited, and relative. If time in The Trial seems nonlinear, that's only because the novel is so thoroughly modern; the uneven flow of time in the novel captures the dawning scientific realization that time is neither absolute nor universal. Isn't it, after all, the sense that Kafka—the voice on the page—is firmly in touch with reality that makes it feel acceptable to laugh at the deranged goings-on in The Trial? His jokes are technical achievements, yes, but they also speak to a feeling of loneliness that typifies the modern condition. Kafka himself couldn't resist laughing when asked to read aloud from his work. To orchestrate this kind of laughter—to borrow a word from Wallace—might have offered relief from the relentless (and political) self-criticism that drove Kafka to conceal his writings. Kafka's suppression of information gets us to let our emotional guard down. He contrives narrative tension so that he can shock us, confronting us anew with injustices to which we've become numb.
Resources: JSTOR is a digital library for scholars, researchers, and students. JSTOR Daily readers can access the original research behind our articles for free on JSTOR.
From "The Trial" By: Franz Kafka and Breon Mitchell. Conjunctions, No. 30, PAPER AIRPLANE: The Thirtieth Issue (1998), pp. 7-24. Conjunctions.
Time and Reality in Kafka's The Trial and The Castle By: Margaret Church. Twentieth Century Literature, Vol. 2, No. 2 (Jul., 1956), pp. 62-69. Duke University Press.
The Function of Style in Franz Kafka's "The Trial" By: Keith Fort. The Sewanee Review, Vol. 72, No. 4 (Autumn, 1964), pp. 643-651. The Johns Hopkins University Press.
Kafka 007 By: O. K. Werckmeister. Critical Inquiry, Vol. 21, No. 2 (Winter, 1995), pp. 468-495. The University of Chicago Press.
The Representation of Nature in Contemporary Physics By: Werner Heisenberg. Daedalus, Vol. 87, No. 3, Symbolism in Religion and Literature (Summer, 1958), pp. 95-108. The MIT Press on behalf of American Academy of Arts & Sciences.
dancohen-org-8836 ---- Dan Cohen – Vice Provost, Dean, and Professor at Northeastern University datasciencebydesign-org-6282 ---- Events – Data Science by Design Events Overview of DSxD events DSxD Creator Conference May 27 & 28, 2021 9 - 3 Pacific, 12 - 6 Eastern Apply to attend: here. Applications now closed. About Do you have a data story to tell but need help broadening your audience? Do you wish you had some time and support to exercise the creative side of your data-loving brain? On May 27 and 28th, we are hosting a conference spread over two half days (5 hours each day), where we will come together to get over that activation energy to start (or finish) creative data-related projects. We'll learn about others' creative processes and storytelling techniques and work through design exercises that will push us to get creating. We'll hear from expert storytellers, creators, and designers about how they: brainstorm and find inspiration, get started bringing an idea to life, continue to make progress and refine an idea, pitch ideas to get buy-in from others, and share their final products with a broad audience. As we learn, we'll be coming up with our own ideas and thinking about what resources and support we need from this creator community to bring them to fruition. And we'll be doodling all the while. This conference is part of our Data Science by Design (DSxD) initiative aimed at bringing together data enthusiasts of all kinds to use creative mediums to communicate data-related work and establish new collaborations across domains. The Creator Conference's role is to kick off the eventual creation of personal essays, drawings, explainers, or how-to guides on research best practices, findings, methodology, or even work culture. Featuring Joining us in the sessions will be researchers, educators, artists, computer scientists, and more! Below is a sneak peek into the amazing crew of organizers, panelists, session leads, and speakers who will be joining us this year. Full Schedule HERE. Ijeamaka Anyene she/her data analyst and data viz Rebecca Barter she/her researcher, data scientist, and data viz Anna E.
Cook she/her accessibility designer Erin Davis she/her data visualization and analysis Frank Elavsky he/him accessibility and data experience engineer Julia Evans she/her programmer and maker Sharla Gelfand they/them data wrangler, developer, and data viz Allison Horst she/her professor, artist, and researcher Patricia Kambitsch she/her visual storyteller and artist Kevin Koy he/him human-centered design x data science at IDEO Sean Kross he/him researcher, educator, and developer Giorgia Lupi she/her information designer and partner at Pentagram Ciera Martinez she/her researcher, biologist, and data scientist Sarah Mirk she/her writer, editor, and good ideas Grishma Rao she/her artist, writer, emerging technology designer Tara Robertson she/her equity and inclusion advocate and rogue librarian Sara Stoudt she/her professor, stats communicator Chris Walker he/him data x design Valeri Vasquez she/her researcher and data scientist Peter Winter he/him Data as a creative medium for design Shirley Wu she/her data visualization designer, developer, and author We Want You to Attend! Maybe you identify as a student, an educator, a researcher, a designer, an artist, an analyst, an engineer, or as something else entirely. All career stages - everyone is welcome! As organizers of Data Science by Design, we are committed to fostering a supportive community among participants. One of our priorities for this event is to increase the amount of people who see themselves in data-related fields. Therefore, we strongly encourage applications from women and other underrepresented genders, people of color, people who are LGBTQ, people with disabilities or any other underrepresented minorities in data-related fields. To ensure an inclusive experience for everyone who participates, we will follow a code of conduct. Apply to Attend Apply here: https://forms.gle/kECLXC9zACLyt2Js9 Apply by May 10 to ensure your application is reviewed. We want attendees to come with ideas. Each applicant is asked to submit a short application that responds to at least two of the following. The application can feature creative mediums (e.g., a page of illustrations) but we ask that you do write out in English (1) your full name (2) your email address and (3) what items below you are focusing on: Pitch us. What project are you already working on OR would you love to work on, given the time and resources, at the intersection of data science and the creative arts? See examples below! Share with us. Are there two examples of work at the intersection of data science and the creative arts that inspire/motivate/excite you? Please include links or references, where applicable. Think with us. What would your goals be coming out of this experience? What are the contributions you feel you’re best placed to make to enrich the experience of other participants? Application Help: Project Examples and Scope The scope for the Creator Conf application includes anything relevant to communicating about data or working with data (e.g., best practices) using creative mediums. We are looking for people who want to work on projects that they are passionate about, including (but definitely not limited to!) data visualizations. Below are some examples of project ideas that are within scope: Format Examples short essays or stories interviews websites tutorials data story/presentations (data viz, sonification, sculpture, etc.) zine software library or package or something else! 
Topic Examples:
- exploration of a dataset
- how to create community in data, computer science or related fields
- personal narratives about working in or wanting to work in data, computer science or related fields
- data science workflows
- concepts explained/taught
- diversity and inclusion
- computing challenges
- algorithmic bias
- accessibility
- burn out
- best practices
- or something else!
Questions? Contact us at datasciencebydesign@gmail.com
Schedule (all times Pacific)
Thursday, May 27
- 9:00 AM: Welcome to DSxD Creator Conf! (DSxD Team)
- 9:15 AM: Future of Data Science Discussion With Live Sketching. How do we envision the field of Data Science moving forward? In this conversation we invite everyone to share what we want more of (or less of!) in the field of data and computational sciences. Featuring live sketching from artist Patricia Kambitsch, with advice on how she approaches effective and creative notetaking. (Ciera Martinez and Patricia Kambitsch)
- 10:00 AM: Break
- 10:15 AM: Visual Storytellers Panel. Conversation on the practice and process of communicating complex information visually. (Moderator: Sharla Gelfand; Panelists: Julia Evans, Giorgia Lupi, and Shirley Wu)
- 11:00 AM: Break
- 11:30 AM: Creativity to Learn, Creativity to Teach (w/ activity). How can we engage students (and ourselves) to get a broader audience excited about data tools and techniques? (Leads: Sean Kross, Allison Horst, and Sara Stoudt)
- 1:00 PM: Break
- 1:15 PM: Designing for Accessibility and Inclusivity. Short talks and discussion on how to instill inclusion and accessibility practices into how we work. (Lead: Valeri Vasquez; Speakers: Anna Cook, Frank Elavsky, and Tara Robertson)
- 2:15 PM: Day 1 Closing (DSxD Team)
Friday, May 28
- 10:00 AM: Day 2 Opener (DSxD Team)
- 10:15 AM: Coffee and Collabs. Grab a coffee and 1. Introduce yourself 2. Talk about your dream project and 3. get help and maybe even find some summer collaborators! If you don't want to chat on camera, go to the #projects Slack channel and post / react to others' projects. (DSxD Team)
- 11:00 AM: Break
- 11:15 AM: The Pitching Process (w/ activity). Going from "thinking about thinking about it" to "thinking about it" to actually "doing it". (Speakers: Erin Davis; Lead: Sara Stoudt)
- 12:15 PM: Break
- 12:45 PM: Self Publishing and the Power of Zines (w/ activity). You don't need permission to publish. Learn about an old-school approach to open source information sharing: zines! We'll learn how to make a one-page zine, admire zines from around the world, and do a short creative exercise to each create a data-driven zine. (Lead: Sarah Mirk)
- 2:00 PM: Break
- 2:15 PM: DSxD Summer to Create. Next steps, future projects, and funding opportunities with DSxD. (DSxD Team)
davidgerard-co-uk-1000 ---- Tether is 'too big to fail' — the entire cryptocurrency industry utterly depends on it – Attack of the 50 Foot Blockchain Tether is 'too big to fail' — the entire cryptocurrency industry utterly depends on it 13th December 2020 / 22nd March 2021 - by David Gerard - 12 Comments. Tether is a US dollar substitute token, issued by Tether Inc., an associate of cryptocurrency exchange Bitfinex. Tether is a favourite of the crypto trading markets — it moves at the speed of crypto, without all that tedious regulation and monitoring that actual dollars attract. Also, exchanges that are too dodgy to get dollar banking can use tethers instead. Bryce Weiner has written a good overview of how Tether works in relation to the cryptocurrency industry. His piece says nothing that any regular reader of this blog won't already know, but he says it quite well. [Medium] Weiner's thesis: the whole crypto industry depends on Tether staying up. Tether is Too Big To Fail. The purpose of the crypto industry, and all its little service sub-industries, is to generate a narrative that will maintain and enhance the flow of actual dollars from suckers, and keep the party going. Increasing quantities of tethers are required to make this happen. We just topped twenty billion alleged dollars' worth of tethers, sixteen billion of those just since March 2020. If you think this is sustainable, you're a fool.   Bitcoin-Tether volumes are now magnitudes greater than Bitcoin-Dollar volumes. #BTC #USDT pic.twitter.com/ZiZUEItKwR — Kaiko (@KaikoData) December 4, 2020   Pump it! Does crypto really need Tether? Ask the trading market — stablecoins are overwhelmingly tethers by volume. I suspect the other stablecoins are just a bit too regulated for the gamblers. Tether is functionally unregulated. In fact, the whole crypto market is overwhelmingly tethers by volume — Tether has more trading volume than the next three coins, Bitcoin, Ethereum and XRP, combined. [Decrypt] In March, when the price of Bitcoin crashed, a pile of exchanges and whales (large holders) put money, or maybe bitcoins, into Tether to keep the system afloat. At least some percentage of the backing for tethers might exist! (All are now complicit in a manner that would be discoverable in court.) This is why crypto is so up in arms about the proposed STABLE Act, which would require stablecoin issuers to become banks — the Act would take out Tether immediately, given Tether's extensive and judicially-recognised connections to New York State, and there is no way on earth that a company that comports itself in the manner of Tether is getting a banking charter.
(The Tether printer was quiet for about a week after the STABLE Act came out — and the price of Bitcoin slid down about $3,000.) The Bitcoin price is visibly pumped by releases of tethers — particularly on weekends. I find it strangely difficult to believe real-money institutional investors are so keen to get on the phone to Tether on a Saturday.   A snapshot of Coinbase BTC/USD this weekend. Spot when tethers were deployed on Binance or Huobi.   Clap if you believe in Tether But it’s far wider than the traders. Every person making money from crypto — not just Bitcoin, but everything else touching crypto — is painfully aware they need Tether to keep the market pumped. All the little service sub-industries are vested in making crypto look real — and not just handwaving nonsense held up by a hilariously obvious fraud. So Tether must be propped up, at all cost — in the face of behaviour that, at any real financial institution, would have had the air-raid sirens going off long ago. The big question is: what are all those tethers backed by? Tether used to confidently claim that every tether was backed by a dollar held in a bank account. This turned out not to be the case — so now tethers are backed by dollars, or perhaps bitcoins, or loans, or maybe hot air. Various unfortunately gullible journalists have embarrassed themselves by taking what Tether tells them at face value. Matthew Leising from Bloomberg confidently declared in December 2018, based on documents supplied to him by Tether, that Tether seemed fully backed and solvent! [Bloomberg] Then four months later, the New York Attorney General revealed that Tether had admitted to them that tethers were no more than 74% backed, and the backing had failed in October 2018  — Tether had snowed Leising. I don’t recall Leising ever speaking of this again, even to walk back his claim. Larry Cermak from The Block fell for the same trick recently, from Tether and from people working closely with Tether. [Twitter] Bitfinex/Tether was supposed to give the NYAG a pile of documents by now. The NYAG is talking to the companies about document production, and just what documents they do and don’t have — so proceedings have been delayed until 15 January 2021. [letter, PDF]     The end game Dan Davies’ book Lying For Money (UK, US) talks about the life cycle of frauds. A fraud may start small — but it has to keep growing, to cover up the earlier fraud. So a fraud will grow until it can’t. I did a chat with the FT Alphaville Unofficial Telegram a few weeks ago. Someone asked a great question: “Who’s going to make out like a bandit when/if Bitcoin collapses?” Most scam collapses involve someone taking all the money. In the case of Bitcoin and Tether, I think the answer is … nobody. A whole lot of imaginary value just vanishes, like morning dew. I can’t think of a way for Tether or the whales to exit scam with a large pile of actual dollars — because a shortage of actual dollars is crypto’s whole problem. I mean, I’m sure someone will do well. But there’s no locked-up pile of money to plunder. Crypto “market cap” is a marketing fiction — there’s very little realisable money there. What was the “market cap” of beanie babies in 2000? Imaginary, that’s what. So how does this end? The main ways out I see are NYAG or the CFTC finally getting around to doing something. Either of those are a long way off — because regulators move at the speed of regulators. Even the NYAG proceeding is just an investigation at this stage, not a case as such. 
Everyone in crypto's service industries has a job that's backed by the whales. Perhaps the whales will keep funding stuff? I'm not counting on it, given all the redundancies and shut-down projects over 2018 and 2019. This will keep going until it can't. Remember that it took seventeen years to take down Bernie Madoff. He got institutional buyers in, too.   Past asset bubble veteran Inch the Inchworm, his Ty slightly askew, hitting the sauce after seeing his crypto portfolio   12 Comments on "Tether is 'too big to fail' — the entire cryptocurrency industry utterly depends on it" Brendan says: 13th December 2020 at 11:52 pm Tether's design is so flawed it has to fail and when it does it will take BTC and the rest of crypto with it. I predicted this over 2 years ago and got out of Crypto and into BSV – the real Bitcoin which is not dependent on scammy fraudsters running a counterfeit USD operation to pump the price. David Gerard says: 14th December 2020 at 12:15 am they had us in the second half meme Eloi says: 14th December 2020 at 3:37 am BSV is crypto… Mark Chamberlain says: 14th December 2020 at 12:12 am It would be interesting to know if there are any institutional or hedge fund players who understand this enough to be short crypto. Plus how would it work? If you borrow the currency to sell it short and the price goes to zero and stops trading, how do you close your position? Dylan says: 16th December 2020 at 6:29 pm This depends on a reliable way of shorting crypto, and I wouldn't trust any of the existing exchanges enough to give money to them (unless I was ok losing that money). And I trust the "trustless" smart contract tools for doing this sort of thing even less. Plus, you'd have to be confident about when the crypto market explodes, which is difficult to call given how manipulated it is. The market can stay irrational longer than you can stay solvent. David Gerard says: 17th December 2020 at 2:31 pm yeah, in crypto the platforms themselves are part of the threat model. I expect you could deal with, say, CME in confidence. Mark Chamberlain says: 17th December 2020 at 2:39 pm The news today is about how the shorts were sold out in this new move. The better idea might be to short the new index fund BITW (if I was crazy enough to try). It's now trading at an insane premium to NAV, when you can instead just buy Bitcoin itself through PayPal. The kind of thing that happens at blow off tops…. Greg ALLEN says: 27th December 2020 at 12:38 pm Not really – see articles on Alphaville etc post-Lehman, on how the move to mandate exchange settlement of derivatives just moved the counterparty risk from other banks to a single centralized exchange. I wouldn't assume the exchange is sufficiently capitalized to help, and I wouldn't even expect it to be on the hook for the other side of your futures contract massimo says: 13th January 2021 at 8:35 am interesting thesis, could you deepen the subject please? Mark Bloomfield says: 10th January 2021 at 5:13 pm Please keep on at this: the message needs to get out. Sesang says: 11th January 2021 at 6:48 am Nice read. Ty David says: 17th January 2021 at 10:42 pm Excellent article. There's no worst blind that those who don't want to see.
davidgerard-co-uk-2213 ---- News: Everybody hates Chia, Defi100 rugpull, China versus mining, China versus crypto – Attack of the 50 Foot Blockchain News: Everybody hates Chia, Defi100 rugpull, China versus mining, China versus crypto 22nd May 2021 / 23rd May 2021 - by David Gerard - 4 Comments. You can support my work by signing up for the Patreon — $5 or $20 a month is like a few drinks down the pub while we rant about cryptos once a month. It really does help. [Patreon] The Patreon also has a $100/month Corporate tier — the number is bigger on this tier, and will look more impressive on your analyst newsletter expense account. [Patreon] And tell your friends and colleagues to sign up for this newsletter by email! [scroll down, or click here] Below: a screenshot of me in Dead Man's Switch, the documentary about the collapse of Quadriga, looking like a skinhead thug crammed into a suit. "ew see guv oi fink we need a wurd abaht dese 'criptose' yuse gasbaggin on abaht"     Chia petard This blog lives on Hetzner Cloud. I was already a happy customer, and now I'm an even happier one — Hetzner's told crypto miners and Chia farmers to just bugger off. [Twitter; Twitter] Rough translation: Yes, it's true, we have extended the terms and conditions and banned crypto mining. We have received many orders for our servers with large hard drives. However, large storage servers are increasingly being rented for this [mining]. This leads to problems with bandwidth on the storage systems. With Chia mining, there is also the problem that the hard drives are extremely stressed by the many read and write cycles, and will break. The aggrieved replies from never-customers are the usual coinerism. Particular points to the guy who thought that renting something meant he had the absolute right to thrash it to death. Meanwhile, I'm told that bids for bulk storage on Hetzner are already over three times what they were just a few months ago. [Twitter] To "farm" Chia, you plot and save as many bingo cards as you can. The system calls a number; if you have the right bingo card, you win. You compete by holding petabytes of bingo cards, and writing more as fast as possible. Holding the bingo cards triples the price of large hard disks, and writing the bingo cards burns out an SSD in six weeks instead of ten years. Chia was known to be ridiculous in 2018, because it was stupidly obvious that Amazon Web Services would beat all comers at masses of raw storage space. [DSHR, 2018] In 2021, Amazon Web Services beats all comers at masses of storage space. So AWS China had, for a short time, a page offering to rent you space for Chia farming.
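The "bingo card" description a few sentences back is an analogy for Chia's proof of space. Purely to make the analogy concrete, here is a hypothetical toy sketch in Python. It is not Chia's actual proof-of-space-and-time construction, and all the names in it are invented for illustration; it only shows the shape of the economics: plotting is a one-off burst of heavy writes, answering a challenge is a cheap lookup, and your chance of winning each round is roughly proportional to how much you keep on disk.

```python
# Toy sketch of the "bingo card" analogy for proof of space.
# Not Chia's real construction; an illustration of why more precomputed
# data on disk means more chances to win each round.
import hashlib
import os

def h(data: bytes) -> int:
    # Stand-in hash function; the real protocol uses its own constructions.
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def make_plots(seed: bytes, count: int) -> list:
    # "Plotting": one burst of heavy writes, then the results sit on disk.
    return [h(seed + i.to_bytes(8, "big")) for i in range(count)]

def best_distance(plots, challenge: int) -> int:
    # Answering a challenge is a cheap lookup: find your closest "bingo card".
    return min(card ^ challenge for card in plots)

laptop = make_plots(b"laptop", 1_000)         # a small farm
warehouse = make_plots(b"warehouse", 50_000)  # fifty times the disk

wins = {"laptop": 0, "warehouse": 0}
for _ in range(200):
    challenge = h(os.urandom(32))             # the network "calls a number"
    if best_distance(laptop, challenge) < best_distance(warehouse, challenge):
        wins["laptop"] += 1
    else:
        wins["warehouse"] += 1

print(wins)  # the bigger farm wins in rough proportion to its share of plots
```

The real protocol is far more involved than this, but the hardware story is the same: the drives get thrashed once at plot time, and then each plot sits there as a lottery ticket. The AWS China page just mentioned was renting out hardware for exactly this workload.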
The enterprising marketer responsible for this suggested using an “i3.2xlarge” high-I/O server (8 CPUs, 61GB RAM, 10Gbit networking) with a 1.9 TB SSD for plotting, and bulk storage on S3 to store your completed plots. They also gave basic set-up instructions for Chia farming. The page disappeared in short order — but an archive exists. [The Block; AWS, archive, in Chinese; CoinDesk] Bram Cohen, the inventor of Chia, is striking back at the “fashionable fud” that Chia trashes hard drives! He starts a Twitter thread by saying that the claim is false — though by the end of the thread, he admits it’s true. But it’s your own fault for using consumer SSDs, because the rule in crypto is always to blame the user. [Twitter thread] Never mind that the Chia FAQ still tells people they can farm Chia on a desktop computer (likely SSD), laptop (likely SSD) or a mobile phone. [Chia, archive] Decentralisation by “proof of X” will always be an engine of pointless destruction. It’s pointless because decentralisation always recentralises — because centralisation is more financially efficient. “Decentralisation” only exists as a legal construct — “can’t sue me bro” — and not at all as a functional description. Bitcoin mining is completely centralised. Ethereum mining is completely centralised, and the network has a hard dependency on Consensys. Chia was launched centralised, because Chia Network recruited large crypto mining companies before launch. “Decentralisation” is chasing a phantom. There is not a country’s worth of CO2 value in “decentralisation” as a legal construct. Saying that you can see hypothetical value in “decentralisation” is a grossly insufficient excuse for the destruction it observably results in. Update: my piece on Chia for Foreign policy is just out!   the blockchain is just a human centipede for math — rstevens 🐳💨 (@rstevens) May 3, 2021   Q. What do you call unsmokeable mushrooms? A. Non-Tokeable Fungi NFTs are the same grift as ICO tokens, altcoins and Bitcoin before them: invent a new form of magic bean, sell it for actual money. Frequently the same grifters, too. If anyone ever tells you NFTs are new: In 1962, the French neo-avant-garde artist Yves Klein began dealing in what he declared to be Zones of Immaterial Pictorial Sensibility. In exchange for a sum of solid gold, Klein would imbue a patch of thin air with his artistic aura and provide a receipt. One such “zone” was bequeathed to the Los Angeles County Museum of Art where it exists only as a photograph of the transaction, taken as the receipt was set ablaze and half the gold tossed into the Seine. [Guardian] There’s more detail on musician Imogen Heap’s planned NFT collection. She’s going to use a ton of electricity, and pump out kilograms of carbon dioxide, to … buy carbon credits. [DeZeen] The article says “This helped to remove a total of 20 tonnes of carbon dioxide from the atmosphere.” This is true only for redefinitions of “remove” that are so weasely, you’d think they were on a blockchain. Carbon credits don’t remove a damn thing, and it’s a lie to claim they do. It’s a bad excuse for doing a bad thing that shouldn’t be done in the first place. [Greenpeace] Dapper Labs has been sued by Rosen Law Firm on behalf of plaintiff Jeeun Friel and others, alleging that NBA Top Shot Moments are unregistered securites. Dapper issued Top Shots, Dapper runs the marketplace, Dapper controls everything about the market, and the company is notoriously slow at giving people their payouts. CoinDesk has the complaint. 
[CoinDesk] Dominic Cummings, the Poundshop Rasputin at the heart of British politics for most of 2019 and 2020, is offering to do an NFT of documents he has to submit to a Parliamentary committee. I’m honestly surprised he wasn’t deep in crypto already, and editing past blog posts to say how he gave Satoshi the idea for BitGold. [FT, paywalled] Newsy: NFT art auctions have a piracy problem. Includes bits from me. [WPTV; YouTube] Metakovan’s B.20 token scheme sold shares in a pile of Beeple NFTs, including the $69 million JPEG. It’s not going well so far. [Artnet] (“Non-Tokeable Fungi” joke courtesy Ingvar Mattson Daniel Dern.) [File 770]   this is just to say i have attached an NFT to the plums that were in the ice box (and which in many senses, still are) forgive me can you help me pay the electric bill — Crowsa Luxemburg (@quendergeer) April 28, 2021   It has been [0] days since the last DeFi rugpull Defi100 is a decentralised finance protocol running on the Binance Smart Chain — a private permissioned Ethereum instance run by popular crypto casino Binance to attract DeFi to a blockchain that isn’t clogged to uselessness, the way the main Ethereum public chain is. Today, we are all Defi100, as the site puts up an important customer service message: “WE SCAMMED YOU GUYS AND YOU CANT DO S—T ABOUT IT  HA HA. All you moon bois have been scammed and you cant do s—t about it.” A heartwarming step up from just changing the site to the word “penis.” They took $32 million in users’ cryptos with them. [Defi100, archive] Reddit users on /r/defi100 warned about the protocol previously — “They are either a scam, or they are completely incompetent” — but the subreddit moderator took care to remove it. The subreddit has seen no posts in the past two months — or none that were left up, at least. [Twitter; Reddit] DeFi lender BlockFi has self-rugpulled. In BlockFi’s March promotion, the company ran a giveaway where they would give customers a few dollars’ bonus for sufficient trading volumes — but accidentally gave them a few bitcoins’ worth instead. BlockFi reversed the transactions, but sent legal threats to customers who had already withdrawn the bonus. You don’t get that sort of customer service for free. [CoinDesk]   The business model of crypto is to provide a platform for crooks to scam muppets without running the risk of jail time. Few understand this. https://t.co/vFeyosRKPE — Trolly🐴 McTrollface 🌷🥀💩 (@Tr0llyTr0llFace) May 7, 2021   Bitcoin in the enterprise The Health Service Executive in Ireland has been ransomwared. [CNBC; RTE] The same attack hit Tusla, the Irish child and family agency, whose systems are linked to the HSE’s. [RTE] Toshiba Tec, meanwhile, refuses to accept that crypto is the future of payments! The Toshiba subsidiary told Dark Side, the ransomware gang that shut down Colonial Pipeline, to just bugger off. [CNBC] Luddites in hospitals in Waikato, New Zealand also refuse to embrace Bitcoin as a payment platform — they won’t be paying the ransomware either. [Stuff] Insurer AXA halts new policies to reimburse ransomware payments in France — because the authorities told them to stop directly motivating ransomware. [ABC News] How a CyberNews reporter applied for a job with a ransomware gang. [CyberNews] Sophos has published its 2021 report on the state of ransomware. [Sophos]   Art. pic.twitter.com/3Q9urXwddV — Kenneth Finnegan (@KWF) May 1, 2021   I heard it on the blockchain Good news for Ethereum in the enterprise — it’s evolved beyond the capabilities of Microsoft Azure! 
Azure Blockchain as a Service (BaaS) is shutting down in September 2021. It apparently still has any customers; they’ve been invited to move to Consensys. [ZDNet; Microsoft] Is Blockchain Ready for the Enterprise? LOL, no. The bit on the Decentralized Identifiers standard is hilarious — they wrote the standard, but fobbed the hard bit off onto another standards committee. [blog post] Enterprise blockchain was always a proxy for the price of Bitcoin. It was a way of saying “Bitcoin” without touching a Bitcoin. This was cool when number was going up in 2017, and not so cool when it wasn’t. Not a single one of these projects ever had a use case. Not a single one, ever, did any job better than existing technologies, rather than worse. “Smart contracts” are far less impressive when you realise they’re literally just “database triggers” or “stored procedures,” and that there are excellent reasons we try not to write code right there in the database in actual enterprise computing.   Using a blockchain when a database would work pic.twitter.com/uSwvom8P8o — Nathaniel Whittemore (@nlw) May 16, 2021   Baby’s on fire China has announced, yet again, that it’s kicking out the crypto miners — to “crack down on Bitcoin mining and trading behavior, and resolutely prevent the transmission of individual risks to the social field,” according to the Financial Stability and Development Committee of China’s State Council. The People’s Bank of China hates crypto a whole lot, and the Chinese government’s been trying to push the crypto miners out since 2018 — but perhaps they’ll make it stick this time. [Twitter; Reuters] Inner Mongolia has set up a hotline to report suspected crypto mining. This is the sort of regulation the world needs. Inner Mongolia has been told by Beijing to cut its CO2 production considerably, and they’d rather use their quota for steel and similar useful things. [Twitter; Sixth Tone; FT, paywalled] I am told that miners are calling around trying to get the hell out of China ASAP. Mining facilities are set up in containers to be moved around quickly, so this isn’t hard. The hard part is that Bitcoin now uses as much electricity as the Netherlands. Where do you put a medium-sized country’s worth of electricity usage at short notice? Bitcoin is green, hypothetically! In real life, here’s a gas-powered plant that restarted just to mine bitcoins. [Architect’s Newspaper, archive] The state of New York is suggesting banning bitcoin mining — new operations would be frozen, pending environmental review. [CoinDesk] Bankers have heard bitcoiners’ concerns about all the energy banks use — the Bank of Italy notes specifically that the Target Instant Payment Settlement (TIPS) system uses one 40,000th the energy of Bitcoin. [Bank of Italy] The European Central Bank describes Bitcoin’s “exorbitant carbon footprint” as “grounds for concern.” [ECB] The Financial Times has a detailed writeup of Bitcoin’s dirty energy problem. [FT, paywalled] NiceHash suspends all withdrawals due to a “security incident”. Funds are safe, I’m sure. [NiceHash] The Wyoming Blockchain grifters are at it again. David Dodson says: burn electricity for Wyoming’s prosperity! He quotes me saying “Bitcoin is literally anti-efficient,” but he puts this forward as a point in its favour. If coiners understood externalities, they wouldn’t be coiners. At least he spelt Attack of the 50 Foot Blockchain  right. [WyoFile]   any defense of bitcoin inevitably involves an argument where no sentence has anything to do with the proceeding sentence. 
at its most advanced levels, even clauses stop agreeing with each other https://t.co/X9zvcPppyE — Gorilla Warfare (again) (@MenshevikM) April 30, 2021   Lie dream of a casino soul Crypto exchanges are already forbidden in China. So instead they run "over the counter" desks, matching buyers and sellers, which is different because, uh, reasons. There's a fresh rumour that China is closing down, or has closed down, Okex and Huobi's OTC desks. Chinese USDT holders are also rumoured to be dumping their tethers. In probably-related news, China has also placed even tighter restrictions on dealing in cryptos for actual money. [Reuters] Turkey has put crypto exchanges under MASAK, its financial regulator. The exchanges are daunted at the work required not to be incompetent money-laundering chop shops. Turkey needed to put in the new rules to follow FATF guidance on virtual asset service providers, but having not one but two exchanges messily implode just recently was probably quite motivating. [Resmî Gazete, PDF, in Turkish; Decrypt; Reuters] Thailand mandates improved customer service for crypto traders! In-person ID checks will now be required to get a new account at an exchange. [Bangkok Post] The IRS' plans to subpoena the Kraken crypto exchange about its users have been approved by the court. [Department of Justice] The IRS and the Department of Justice are just seeking information from Binance Holdings Ltd, you understand. Binance has not, as yet, been specifically accused of wrongdoing. [Bloomberg] The US Treasury has called for crypto transfers of value greater than $10,000 to be reported to the IRS. [Bloomberg]   decided to put my savings into cryptcurrency pic.twitter.com/8aWiCAcPN7 — Matt Round (@mattround) May 19, 2021   Things happen You can browse in absolute privacy on the Tor network! Except that last year, one in four Tor exit nodes was compromised — the attackers were replacing Bitcoin addresses in Bitcoin mixer web pages with the attackers' address. The attacker group is still at it — last year they had 23% of nodes, this year they have 27%. [The Record] Two years after SingularDTV shut down Breaker Mag, the site archive has finally rugpulled — all article URLs give "404 not found" or just don't load. I'm pretty sure most of it is on the Internet Archive. [Breaker, archive of 12 May 2021] The suit in which Dave Kleiman's estate is suing Craig Wright over the bitcoins that Wright claimed he mined with Kleiman is finally going to trial, starting on 1 November 2021. [Order, PDF] Goldman Sachs has finally reopened a very limited crypto trading desk, apparently due to client demand. The desk is dealing only in futures and non-deliverable trades. It has executed at least two trades. [FT, paywalled] Institutional investors speak on Bitcoin! "Our stance with clients is the 10-foot pole rule: stay away from it." [FT, paywalled] The People's Bank of China has run another DC/EP trial in Shenzhen, and the users respond! With a resounding "meh." Again. It still offers nothing to end-users over Alipay or WeChatPay, as they already found in Shenzhen in November 2020. [Bloomberg] Robinhood appears to procure its Dogecoin from Binance, for what that's worth. [Twitter] Is there anything Bitcoin can't do? In Knoxville, Tennessee, a user applies Bitcoin to marriage counselling — and it's transparent and cryptographically verifiable! Specifically, Nelson Paul Replogle paid a hitman in Bitcoin to kill his wife, and the FBI traced the transaction through Coinbase to his IP.
Repogle is now facing murder-for-hire charges. [WATE TV, archive] Bitcoins only get you out of divorce settlements if you never spend them: “Courts have the power to re-open divorce settlements years afterwards if non-disclosure can be proven; if you find out five years down the line that your ex has made a big purchase from unknown funds, it may be possible to re-open a previous divorce settlement.” [FT, paywalled]   i don’t think it’s enough of a pyramid scheme, which is why i encourage crypto enthusiasts to buy my “GROW YOUR COIN” intro course $4200.69. if you bring in THREE friends we’ll throw in our intermediate course for 30% off—then, may we also interest you in our compound just by alb https://t.co/gCczcBnVDi — hannah gais (@hannahgais) May 6, 2021   Hot takes Frances Coppola on Tether’s smoke and mirrors: “It’s not reserves we should be worrying about, it’s capital. And the total lack of transparency.” [blog post] Charlie Munger and Warren Buffett are not fans of Bitcoin. “I don’t welcome a currency that’s so useful to kidnappers and extortionists and so forth, nor do I like just shuffling out of your extra billions of billions of dollars to somebody who just invented a new financial product out of thin air. I think I should say modestly that the whole damn development is disgusting and contrary to the interests of civilization.” [CNBC]   Instead of enrolling in the U of M's online cryptocurrency course, try this: • Send me $18k of your parents money. • I'll send them an email saying you're very smart and totally weren't playing video games the whole time. pic.twitter.com/QRR2Il7v6P — Lemon 🍋 (@AhoyLemon) May 4, 2021   Living on video Expert on Facebook’s Latest Digital Currency Attempt — me on NTD, in my capacity as the guy who literally wrote the book on Libra. [NTD] I spoke to Radio Free Asia about the Bitcoin crash. The Google translation is usable. [Radio Free Asia, translation] The Daily Mail quotes me, accurately, on Bitcoin’s environmental impact. [Daily Mail, archive] Libra Shrugged, reviewed by Bill Ryan: Battling Megaliths: Facebook and the American Government Battle Over Digital Currency and the Future of Money. [Medium] This Harry & Paul sketch from 2010 is not about crypto and Tether, except it totally is: [YouTube]     The underside of the Washington St bridge has taken a strong anti-NFT stance pic.twitter.com/NsRjpllXg3 — Karl Stomberg (@KFosterStomberg) April 30, 2021   anyone: *mentions financial derivatives* me, nodding: d$/dx — Dr. Anna Hughes (@AnnaGHughes) April 12, 2021   So finally @saifedean's book landed on my desk. And I have to say I'm now a convert. Any asset with a scarce supply, not controlled by humans, must inevitably become the new world reserve currency. If you don't get this please learn monetary economics. pic.twitter.com/XsIK1EHrb8 — Bernhard 'Game Theory Optimal Trading' Mueller (@muellerberndt) May 7, 2021   Meanwhile, the Iraqi Dinar had a bad 2020, but has been steady this whole year. pic.twitter.com/sswm6m1Pz5 — Travis View (@travis_view) May 13, 2021   tell me Elon's tweeting again without telling me Elon's tweeting again pic.twitter.com/FEMRO46gsM — Kyla (@kylascan) May 16, 2021   #Dogelon 😂 pic.twitter.com/wlU7B6gdh7 — King Bach (@KingBach) May 12, 2021   Your subscriptions keep this site going. Sign up today! 
4 Comments on “News: Everybody hates Chia, Defi100 rugpull, China versus mining, China versus crypto” Austin George Loomis says: 23rd May 2021 at 3:10 am In exchange for a sum of solid gold, Klein would imbue a patch of thin air with his artistic aura and provide a receipt. For everyone who thought Michael Craig-Martin’s An Oak Tree was too tangible. David Gerard says: 23rd May 2021 at 12:08 pm notice he only threw half the gold into the Seine Ingvar says: 23rd May 2021 at 1:16 pm Original source (or, at least, where I saw it) was on File770, attributed to Daniel Dern. But, as you were writing about NFTs at the time, it was imperative to pass it on. David Gerard says: 23rd May 2021 at 1:40 pm credit updated! 
davidgerard-co-uk-2233 ---- Stablecoins through history — Michigan Bank Commissioners report, 1839 – Attack of the 50 Foot Blockchain 21st January 2021 (updated 23rd May 2021) - by David Gerard - 1 Comment A “stablecoin” is a token that a company issues, claiming that the token is backed by currency or assets held in a reserve. The token is usually redeemable in theory — and sometimes in practice. Stablecoins are a venerable and well-respected part of the history of US banking! 
Previously, the issuers were called “wildcat banks,” and the tokens were pieces of paper.   Genuine as a three-dollar bill — from the American Numismatic Society blog.   The wildcat banking era, more politely called the “free banking era,” ran from 1837 to 1863. Banks at this time were free of federal regulation — they could launch just under state regulation. Under the gold standard in operation at the time, these state banks could issue notes, backed by specie — gold or silver — held in reserve. The quality of these reserves could be a matter of some dispute. The wildcat banks didn’t work out so well. The National Bank Act was passed in 1863, establishing the United States National Banking System and the Office of the Comptroller of the Currency — and taking away the power of state banks to issue paper notes. Advocates of Austrian economics often want to bring back “free banking” in this manner, because they despise the Federal Reserve. They come up with detailed theory as to how letting free banking happen again will surely work out well this time. [Mises Institute search] On 15 March 1837, the “general banking law” was passed in Michigan. Bray Hammond’s classic “Banks and Politics in America” from 1957 (UK, US) tells how this all worked out (p. 601): Of her free-banking measure, Michigan’s Governor said: “The principles under which this law is based are certainly correct, destroying as they do the odious features of a bank monopoly and giving equal rights to all classes of the community.” Within a year of the law’s passage, more than forty banks had been set up under its terms. Within two years, more than forty were in receivership. Thus America grew great. Hammond quotes another source on the notes themselves: “Get a real furioso plate, one that will take with all creation — flaming with cupids, locomotives, rural scenery, and Hercules kicking the world over.” The ICO white papers of their day. After the Michigan law allowing free banking had been in effect for two years, Michigan’s state banking commissioners reported to the legislature on how it was all going. The whole report is available as a scan in Google Books — Documents Accompanying the Journal of the House of Representatives of the State of Michigan, pp. 226–258. [Google Books] There’s also a bad OCR — ctrl-F to “Bank Commissioners’ Report.” [Internet Archive] This is not your normal bureaucratic report from civil servants to the legislature. It’s a work of thundering Victorian passion, excoriating the criminals and frauds the commissioners found themselves responsible for dealing with. We should have more official reports that you could do a dramatic reading of: The peculiar embarrassments which they have had to encounter, and the weighty responsibilities consequent thereupon, clothes this duty with a new character. It becomes an act of justice to themselves, and to those who have honored them with so important a trust. At the period the commissioners entered upon their labors, every portion of the state was flooded with a paper currency, issued by the institutions created under the general banking law. New organizations were daily occurring, and the public mind was everywhere agitated with apprehension and distrust. The state was in the midst of the evils consequent upon an excessive and doubtful circulation. Rumors of the most frightful and reckless frauds were daily increasing. In this emergency, prompt and vigorous action was imperiously demanded, as well by the public voice as the urgent necessity of the case. 
Upon a comparison of opinions, the commissioners united in the conclusion that their duty was of a two fold character. The first, and most obvious one, was to take immediate and decided measures in ascertaining and investigating the affairs of every institution suspected of fraud, and closing the door against the evil without delay. The second was a duty of far more difficult and delicate a nature, and involving the assumption of a deep responsibility. The report outlines the problems in each particular district, and lists the local troubled banks. The commissioners tried to distinguish fraudulent banks from merely inept ones, and help the second sort get back on their feet for the public good. Most of it’s tedious detail. But there’s considerable parallels to our wonderful world of crypto: The loan of specie from established corporations, became an ordinary traffic, and the same money, set in motion a number of institutions. Specie certificates, verified by oath, were every where exhibited, although these very certificates had been cancelled at the moment of their creation, by a draft for a similar amount; and yet such subterfuges were pertinaciously insisted upon, as fair business transactions, sanctioned by custom and precedent. Stock notes were given, for subscriptions to stock, and counted as specie, and thus not a cent of real capital actually existed, beyond the small sums paid in by the upright and unsuspecting farmer and mechanic, whose little savings and honest name were necessary to give confidence and credit. The notes of institutions thus constituted, were spread abroad upon the community, in every manner, and through every possible channel; property, produce, stock, farming utensils, every thing which the people of the country were tempted, by advanced prices, to dispose of, were purchased and paid for in paper, which was known by the utterers to be absolutely valueless. Large amounts of notes were hypothecated for small advances, or loans of specie, to save appearances. Quantities of paper were drawn out by exchange checks, that is to say, checked out of the banks, by individuals who had not a cent in bank, with no security, beyond the verbal understanding that notes of other banks should be returned, at some future time. The banking system at the time featured barrels of gold that were carried to other banks, just ahead of the inspectors: The singular spectacle was presented, of the officers of the state, seeking for banks in situations the most inaccessible and remote from trade, and finding at every step, an increase of labor, by the discovery of new and unknown organizations. Before they could be arrested, the mischief was done; large issues were in circulation, and no adequate remedy for the evil. Gold and silver flew about the country with the celerity of magic; its sound was heard in the depths of the forest, yet like the wind, one knew not whence it came or whither it was going. Such were a few of the difficulties against which the Commissioners had to contend. The vigilance of a regiment of them would have been scarcely adequate, against the host of bank emissaries, which scoured the country to anticipate their coming, and the indefatigable spies which hung upon their path, to which may be added perjuries, familiar as dicers’ oaths, to baffle investigation. Bray Hammond’s book elaborates on these stories: Their cash reserves were sometimes kegs of nails and broken glass with a layer of coin on top. 
Specie exhibited to the examiners at one bank was whisked through the trees to be exhibited at another the next day. Banknotes increased liquidity— they helped value flow faster through the economy. Who benefited from this increase in liquidity? Mostly the fraudulent banknote issuers: It has been said, with some appearance of plausibility, that these banks have at least had the good effect of liquidating a large amount of debt. This may be true; but whose debts have they liquidated? Those of the crafty and the speculative — and by whom? Let every poor man, from his little clearing and log hut in the woods, make the emphatic response by holding up to view, as the rewards of his labor, a handful of promises to pay, which, for his purposes, are as valueless as a handful of the dry leaves at his feet. Were this the extent of the evil, the indomitable energy and spirit of our population, who have so manfully endured it, would redeem the injury. But when it is considered how much injury is inflicted at home, by the sacrifice of many valuable farms, and the stain upon the credit of the state abroad, the remedy is neither so easy nor so obvious. When we reflect, too, that the laws are ineffective in punishing the successful swindler, and that the moral tone of society seems so far sunk as to surround and protect the dishonest and fraudulent with countenance and support, it imperatively demands that some legislative action should be had, to enable the prompt and rigorous enforcement of the laws, and the making severe examples of the guilty, no matter how protected and countenanced. Passing around the corporate shell after you’ve scoured it has long been the fashion: So that the singular exhibition has been made of banks passing from hand to hand like a species of merchandize, each successive purchaser less conscientious than the preceding, and resorting to the most desperate measures for reimbursement on his speculation. The stablecoins of the day depreciated horribly, even while the institutions were still up and running, and it was the innocent members of the public stuck with the tokens who paid: Under the present law, the order in which the means and securities are to be realized and exhausted, will protract the payment of their liabilities to an indefinite period, and make them utterly useless to the great body of the bill holders, whose daily necessities compel them to sell at an enormous loss. The banks themselves, through their agents, are thus enabled to buy up their circulation at an immense depreciation, and their debtors to pay their liabilities in the notes of banks, purchased at a great discount. The daily advertisements for the purchase of safety fund notes in exchange for land and goods, and the placards every where to be seen in the windows of merchants and brokers, is a sufficient argument for the necessity of the measure proposed. The Commissioners pause here to examine the rationale for having free banking at all — principles of freedom, versus how it actually worked out in practice. They quote William M. Gouge’s A Short History of Paper-money and Banking in the United States, 1833, p. 230: [Google Books] A reform will not be accomplished in banking, as some suppose, by granting charters to all who apply for them. It would be as rational to abolish political aristocracy, by multiplying the number of nobles. The one experiment has been tried in Germany, the other in Rhode Island. 
Competition in that which is essentially good, in farming, in manufactures, and in regular commerce, is productive of benefit ; but competition in that which is essentially evil, may not be desirable. No one has yet proposed to put an end to gambling by giving to every man the privilege of opening a gambling house. This story reminds me of recent stablecoin “attestations,” and documents waved at apparently credulous journalists: The Farmers’ and Mechanics’ bank of Pontiac, presented a more favorable exhibit in point of solvency, but the undersigned having satisfactorily informed himself that a large proportion of the specie exhibited to the commissioners, at a previous examination, as the bona fide property of the bank, under the oath of the cashier, had been borrowed for the purpose of exhibition and deception; that the sum of ten thousand dollars which had been issued for “exchange purposes,” had not been entered on the books of the bank, reckoned among its circulation, or explained to the commissioners. What do you do with bankers so bad you can’t tell if they’re crooks or just bozos? You put their backsides in jail, with personal liability for making good: Upon officially visiting the Berrien county bank, the undersigned found its operations suspended by his predecessor, Col. Fitzgerald. On investigation of its affairs, with that gentleman, much was exhibited betraying either culpable mismanagement, or gross ignorance of banking. Col. Fitzgerald, however, with the usual vigilance and promptitude characteristic of all his official acts, had, previous to my arrival, caused the arrest of some of the officers of the institution, under the provisions of the act of December 30th, 1837; and required of the proprietors to furnish real estate securities to a considerable amount, conditioned to be released on the entire re-organization of the bank, and its being placed on a sound and permanent basis, or suffer a forfeiture of the lands pledged, which, together with their assets in bank, individual responsibility and the real estate security, given in conformity to law, must in the worst event, be more than sufficient to satisfy and pay all their liabilities. In crypto, not even the frauds are new. Fortunately, we have suitable remedies to hand — such as the STABLE Act. Put the stablecoin issuers under firm regulation. Your subscriptions keep this site going. Sign up today! One Comment on “Stablecoins through history — Michigan Bank Commissioners report, 1839” Steve Brown says: 25th January 2021 at 7:45 am Perhaps you’ll be interested in my 2003 article: How Wildcat Notes were Distributed: The Carpet-Baggers The term ‘carpet-bagger’ does not refer to Northerners migrating south after the civil war but refers to the fact that circulators of wildcat notes traveled with a carpet bag full of their dubious offerings. 
The idea was to transport notes from inaccessible Town A to inaccessible Town B where the distributor purchased whatever goods and livestock that he could, assuming that anyone from Town B accepted the notes. When the notes get to Town B’s bank the bank happily takes the notes for debt – but not for coin – and the bank pays the notes out to their unsuspecting customers in lieu of coin whenever and wherever possible. This way of business carries on until all of the carpet-bagger’s currency is in circulation in Town B. [A reverse of this very situation is likely played out in the reverse direction, eg a Town B carpet-bagger distributes Bank B’s dubious notes in Town A.] While the business of the town is carried on in this way quite legitimately for some period of time, eventually it becomes worthwhile for Town B’s bank to replenish it’s specie simply to ensure it’s own liquidity when Town B customers begin to demand coin, for example with a tightening of credit. In this case the Town B bank opts for the very simple expedient of refusing to honor Town A’s bank notes and forces holders to sell the notes at a significant discount to a local broker for specie. The local broker and the bank then split the difference with regard to these transactions and the bank then replenishes it’s coin. While the scenario described above seems complex, the underlying principle is quite simple and can be reduced in money terms to a usurious form of interest charged on the bank’s customers, where usury is an exorbitant or unlawful rate of interest. Usury was illegal in all states until the Marquette Nat. Bank of Minneapolis v. First of Omaha Service Corp’s supreme court decision reinstated the abuse in 1976; the modern analogy concerns credit swaps and is not illegal. The old credit swap system was very common during coin shortages when banks and their customers were forced to exercise their wits amidst an improvised local monetary crisis, and useful when paper notes could be exchanged for real goods and services, until credit tightening created demand for coin. A simpler but riskier proposition worked in larger metropolitan areas in the East, this idea was simply to transport notes from a far-off bank to a broker in another state who was willing to accept the notes at a significant discount. The risk here involved acceptance of the notes, as this was never guaranteed. As technology improved we can see how counterfeiting evolved into a significant industry by 1860, but it was not counterfeiting alone that constricted the economy of the United States. The major impediment to financial progress was greed and self-interest within a developing capitalist society which was largely unregulated, and incapable of policing itself in order to ensure the ascendance of the common financial good, the problem addressed beginning with the Treasury Act of 1863. In conclusion we see that the term ‘wildcat’ began with highly dubious banking practices engendered by coin shortages, poor communications and bad roads characteristic of the early nineteenth century. The end of the Civil War, telegraphic communication and railroad development all occurred at approximately the same time, along with the beginning of the Industrial Revolution, and even though wildcat banks would disappear, dubious banking practices did not and wildcat mines, oil, and many other types of questionable investments would take the wildcat bank’s place in history with significant modern examples easily identifiable today. 
@newsypaperz davidgerard-co-uk-4453 ---- News: Stopping ransomware, China hates miners, Ecuador CBDC history, NFTs still too hard to buy – Attack of the 50 Foot Blockchain 
16th June 2021 (updated 21st June 2021) - by David Gerard - 3 Comments. Libra Shrugged is in the Smashwords Father’s Day promotion until July 22 — and you can get it cheap with a coupon. Tell your friends! Tell your father! You can support my work by signing up for the Patreon — $5 or $20 a month is like a few drinks down the pub while we rant about cryptos once a month. It really does help. [Patreon] The Patreon also has a $100/month Corporate tier — the number is bigger on this tier, and will look more impressive on your analyst newsletter expense account. [Patreon] And tell your friends and colleagues to sign up for this newsletter by email! [scroll down, or click here] I consult, and take freelance writing commissions — I have a huge one I need to get to once I can get out from under all this El Salvador news … Rats! Cassie and Cygnus, sisters and littermates.   Bitcoin in the enterprise Is ransomware a plague yet? Insurance companies are already recoiling as ransomware attacks “skyrocket.” [FT, paywalled] But the most important thing in dealing with ransomware is to work on every part of the problem except the payment channels — because bitcoins are too precious to be hampered in any way. Also, it’s not possible to do more than one thing at the same time, if it’s about a problem related to crypto. Fix all the corporate networks in the US — then look at the payment channels. If ever. As the largest actual-dollar exchange in the US, Coinbase directly makes money when ransomware victims buy the coins to pay their ransom. Davide de Cillo, Coinbase’s head of product, is helpfully weighing in on Twitter threads and explaining why you certainly shouldn’t do anything about the payment channels for ransomware. “Thank god that didn’t ask for truly untraceable cash. Can you imagine if we had to ban USD bills?” Nice to see that the response “what about this other thing that isn’t the topic, huh” is fully industry-endorsed. [Twitter, archive] Others don’t go along with the crypto view of the problem. Nicholas Weaver: The Ransomware Problem Is a Bitcoin Problem. [Lawfare] Jacob Silverman: Want to Stop Ransomware Attacks? Ban Bitcoin and Other Cryptocurrencies. [New Republic] J. P. Koning has a more creative solution: shoot the hostage. Instead of banning crypto to stop ransoms … punish victims who pay up! [AIER] Although Koning does point out why ransomware gangs like bitcoin — it’s the censorship resistance. This doesn’t really make the case against taking out the payment channels. [Moneyness] Governments aren’t taking the crypto industry line either. The US will be giving ransomware attacks similar priority to terrorism. Jake Sullivan, President Biden’s national security adviser, said that ransomware needs to be a “priority” for NATO and the G7 nations. And GCHQ says that ransomware attacks are now a bigger cyber threat to the UK than hostile states. [Reuters; Independent; FT, paywalled] The future of ransomware may be cyber warfare rather than obtaining bitcoins. 
NotPetya is already thought to have been primarily a Russian attack on Ukrainian interests, and only secondarily about acquiring bitcoins. [War On The Rocks]   Wait for ransomware, and then sell assets as beyond economical recovery. Structural equivalent of writing off a car and all you have to do is park it somewhere you know it'll get phished. It's better than burning the place down for the insurance because you can sell the building. — crossestman (@crossestman) June 4, 2021   Baby’s on fire, but not so much in China Imagine Bitcoin after the apocalypse, when we’re finally free of computers: “I guess … three” “Oh, well done! You win the 0.097 bitcoins!” China is thoroughly sick of the crypto miners. The province of Qinghai is not permitting new mining projects, and will be closing down existing mining operations. [CoinDesk] Yunnan province was reported to be kicking the miners out. But what it’s actually doing is forcing the miners to connect to the official grid, and not just strike deals for cheaper electricity with power plants directly, who then don’t pay the state their cut. The Yunnan Energy Bureau is enforcing this with inspections of mining operations. Paying official prices may, of course, effectively push the miners out. [Fortune; The Block] HashCow will no longer sell mining rigs in China. Sichuan Duo Technology put its machines up for sale on WeChat. BTC.TOP, which does 18% of all Bitcoin mining, is suspending operations in China, and plans to mine mainly in North America. [Time] Mining rigs are for sale at 20–40% off. Chinese miners are looking to set up elsewhere. Some are looking to Kazakhstan. [Wired] Some have an eye on Texas — a state not entirely famous for its robust grid and ability to keep the lights on in bad weather. [CNBC] Crypto miners still mean software developers can’t have nice things — cloud computing service Docker is discontinuing their free tier specifically because of abusive miners, as of 18 June. “In the last few months we have seen a massive growth in the number of bad actors who are taking advantage of this service with the goal of abusing it for crypto mining … In April we saw the number of build hours spike 2X our usual load and by the end of the month we had already deactivated ~10,000 accounts due to mining abuse. The following week we had another ~2200 miners spin up.” [Docker] Are crypto miners money transmission businesses? Well, FinCEN explicitly says that creating fresh coins and distributing them to your pool is not. But miners also process transactions — and the trouble is that they pick and choose which ones they process. One mining pool, Marathon, was blocking transmissions that OFAC didn’t like — but stopped because of …  complaints from pool participants. Nicholas Weaver points out that this completely gives the game away: miners have always been able to comply with money transmission rules, they just got away with not doing it. [The Block; Lawfare] Why You Should Care About Bitcoin if You Care About the Planet: “Bitcoin is bringing dirty power plants out of retirement. Earthjustice is fighting this new trend in order to put an end to fossil fuels once and for all.” [Earthjustice] Shocked to see that the timeline for Ethereum moving to ETH2 and getting off proof-of-work mining has been put back to late 2022 … about 18 months from now. This is mostly from delays in getting sharding to work properly. Vitalik Buterin says that this is because the Ethereum team isn’t working well together. 
[Tokenist] Detect the malware — or become the malware? Norton 360 Antivirus will soon have a function to mine Ethereum on the user’s graphics card. Nobody knows why they thought this was in any way a good idea. [press release] Chia farming malware may have been spotted in the wild, infecting QNAP file servers. [QNAP forums]   Cops accidentally shut down operation actually harmful to society https://t.co/aC8BgVSWZz — Tom Hatfield (@WordMercenary) May 28, 2021   Regulatory clarity Bitcoin was invented as a new form of money, free of government coercion! The trouble there is when it interacts with the government-regulated world of actual money. Governments are stupid and incompetent in a great many ways — I mean, I live in the UK. My natural inclination is that governments can bugger off out of my face. Which is why it annoys me about Bitcoin that it keeps making the case for statism. I had someone call me out on this: my books are way too much like advocacy of the existing system. And it’s true — because both Bitcoin and Facebook came up with ideas that were even worse than what we have now. Getting your money out past awful governments is absolutely a crypto use case. I remain sceptical of many of the people advocating it, because a lot are just scammy crooks — or at best, craven number-go-up guys pretending to care about human rights. The Biden administration is proposing to collect data on foreign crypto investors active in the US, to “bolster international cooperation” and crack down on tax evasion. [Bloomberg] UK legacy fiat banksters can’t cope with the demands of the burgeoning crypto economy, and they’re blocking payments to exchanges, claiming “high levels of suspected financial crime” or some such nonsense. This time it’s Barclays, but also the “challenger” fintech banks Monzo and Starling. Reddit also reports Revolut blocking transactions. [Telegraph; Reddit] Why are crypto businesses not getting registered with the FCA in the UK? “A significantly high number of businesses are not meeting the required standards under the Money Laundering Regulations.” The deadline is now 31 March 2022. [FCA] CFTC Commissioner Dan Berkovitz is unhappy with DeFi — specifically, that it’s an unregulated market for the purpose of trading derivatives of commodities. [CFTC] Thailand has banned crypto exchanges from trading “gimmick” meme tokens, NFTs, and tokens issued by “digital asset exchanges or related persons.” [Bangkok Post] The SEC has updated its List of Unregistered Soliciting Entities, who use questionable information to solicit investment. Lots of crypto firms here. [SEC press release; Unregistered Soliciting Entities, PDF]   I engage in complex trading strategies; in real terms I’m consistently down but my imaginary gains are always limitless. — Josh Cincinnati (@acityinohio) May 30, 2021   Central banking, not on the blockchain The Bank of England discussion paper “New Forms of Digital Money” is about Libra/Diem-style coins and central bank digital currencies (CBDCs). It’s not at all about crypto trading coins like Tether, though that’s what the press has latched onto, all citing Jemima Kelly in the Financial Times laughing the Tether reserves pie charts out of the room. Anyway, the Bank looks forward to your comments — get them in by 7 September. [Bank of England; FT, paywalled] In 2015, Ecuador tried to do a CBDC, based on US dollars: Sistema de Dinero Electrónico. It failed pretty hard. I blogged about it here, and wrote it up in chapter 15 of Libra Shrugged. 
Now there’s a detailed history in the Latin American Journal of Central Banking, by Andrés Arauz, Rodney Garratt and Diego F. Ramos. [Science Direct] Frances Coppola: “I’ve written up my analysis of the BIS’s proposed capital regulations for cryptocurrencies and stablecoins. With a primer on bank capital and reserves, since people still don’t seem to know what these are and how they differ from each other.” This is about banks’ own capital, not liabilities to customers. [blog post]   I have previously simplified that down to: "If someone says 'Blockchain solves X', substitute database for blockchain. If it still makes sense, use a database. If it doesn't make sense, Blockchain doesn't solve it." — Myz Lilith (@MyzLilith) May 31, 2021   Sales receipt fan fiction Protos: “The NFT market has imploded over the past month, with sales in every single category almost entirely drying up.” This was based on data from nonfungible.com — which they say Protos misinterpreted. Amy Castor, in ArtNet, concurs with nonfungible.com — the NFT market is down, but not to the extent Protos painted it. [Protos; nonfungible.com; Artnet, paywalled] BBC: Buying a pink NFT cat was a crypto nightmare — a normal person tries to buy an NFT, and discovers that, after 12 years, crypto is still basically unusable garbage for normal humans. With quotes from me. Crypto still has the usability of a bunch of wires on a lab bench. Crypto pumpers really hate having that pointed out, and will always blame the victim. In thirty years, the crypto bros will be saying “it’s early days, give it thirty years, time will tell.” [BBC] TechMonitor: ‘The apotheosis of ownership’: What is the future of NFTs? With quotes from me. [TechMonitor]   I have an idea for a data structure, hear me out. A linked list where every node contains a hash of all the data in the nodes behind it, and every time you want to add a new node, you need about 200.000 other computers to say ok and consume the power equivalent of a small nation — Ólafur Waage (@olafurw) May 6, 2018   (A minimal sketch of that data structure appears at the end of this post.) Things happen BBC on crypto day trading: “This is the crack cocaine of gambling because it is so fast. It’s 24/7. It’s on your phone, your laptop, it’s in your bedroom.” [BBC] Someone just told me what IBM Blockchain was charging for “managed blockchain”: on the order of $1000 per node per month. “I was speechless after receiving the quote.” IBM didn’t quite make the case against PostgreSQL. This is a hilarious story from the 1990s, as told in 2012: How The Government Set Up A Fake Bank To Launder Drug Money. (Podcast plus transcript.) [Public Radio East] Beginning 3 August, Google will no longer accept ads for DeFi protocols, decentralised exchanges, ICOs, crypto loans or similar financial products. [Google Support] Wealth manager Ruffer handles large investments for “institutions, wealthy individuals and charities.” Ruffer got into Bitcoin in November 2020, and assured the crypto world that they were in this for the long haul. It turns out Ruffer sold up in April 2021, for a tidy profit. The “speculative frenzy” was making them nervous. [FT, paywalled]   I wish people would stop saying “crypto” when they mean “cybercrypto”. Words have meaning, people. — matt blaze (@mattblaze) June 7, 2021   Hot takes New Bitfinex’ed blog post just dropped: Tether is setting a New Standard for Transparency, that is Untethered from facts. A catch-up on the Tether situation, as the rest of the world finally starts noticing there might be a problem here. 
[Medium] There’s another podcast series about the Quadriga crypto exchange and its allegedly-deceased founder, Gerry Cotten. This one is “Exit Scam”. For once, I’m not in this one. [Exit Scam] Cas Piancey: Michael Saylor of MicroStrategy has always been like this — what actually went down at MicroStrategy, and then at the SEC, from 1998 to 2000. [Medium] The Marshall Islands SOV Deconstructed, by IMF researcher Sonja Davidson — Sonja was one of the people who helped look over the CBDC chapter in Libra Shrugged. [Global Fintech Intelligencer] Requiem for a Bright Idea (1999) — a post-mortem on David Chaum’s DigiCash, a predecessor of Bitcoin. [Forbes, 1999] At last, a worthy successor to brazil.txt, but this time with grown-ups — the founder of Zebi, the “Indian ETH,” pumps, and dumps. [Reddit]   help I'm in an abusive relationship pic.twitter.com/VIE3RLiewr — CryptoFungus (@crypt0fungus) June 14, 2021   Living on video I’ll talk to almost any podcast or media in good faith, at the drop of a hat. (Email me!) But I keep being asked to talk about crypto on Clubhouse. Unfortunately, Clubhouse isn’t on Android yet — and my A5 2016 is apparently too boomer to run the Clubhouse for Android preview. So if you want me to debate cryptos on Clubhouse, you’ll need to buy me an unlocked iPhone 12 or Galaxy S21. Demonstrate your proof-of-stake on this one. (I AM BEING SILENCED by BAD FAITH CULTURAL MARXIST WOKEISTS not buying me a new top-end phone.) When the Music Stops is Aviv Milner’s new skeptical podcast about cryptocurrency. I went on to talk about Chia and the various disk-space coins. [Anchor.fm] NTD: China’s Robinhoods Eye US Market with Cryptos — with me, 10:50 on. Not very crypto news, this is more about the daytrading, i.e., gambling market, with Chinese stock daytrading companies looking to get into the US and offer cryptos — which they can’t do in China. Because China hates cryptos. [YouTube] I also did a pile of press on El Salvador — it’s disconcerting to discover that my blog posts and Foreign Policy article seem to be the primary sources of information on the scheme. But those will be in my next El Salvador post! As well as Bukele, there’s the weird Strike and Bitcoin Beach factions. The El Salvador Bitcoin Caper would make a hilarious slapstick comedy about crook vs. crook vs. crook — if it wasn’t real life, with six and a half million victims.   Already loving this book “Attack of the 50 Foot blockchain” by @davidgerard pic.twitter.com/1J0KcNEQb2 — Bichael (@MikeLewisATX) June 5, 2021   Your subscriptions keep this site going. Sign up today! 
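An aside on that Ólafur Waage tweet: the “linked list where every node contains a hash of all the data in the nodes behind it” is, stripped of the economics, the core blockchain data structure, and it fits in a few lines of Python. The sketch below is purely illustrative: the names and payloads are made up, it is nothing like production Bitcoin code, and it deliberately leaves out the “about 200.000 other computers” part, which is the proof-of-work consensus. What it does show is why tampering with an earlier entry breaks every hash after it.

import hashlib
import json

class Node:
    """One entry in a hash-linked list: a payload plus the hash of everything before it."""
    def __init__(self, payload, prev_hash):
        self.payload = payload
        self.prev_hash = prev_hash
        # This node's hash commits to its payload and to the previous hash,
        # so changing any earlier node changes every hash after it.
        self.hash = hashlib.sha256(
            json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True).encode()
        ).hexdigest()

def append_entry(chain, payload):
    """Add a node to the end of the chain; no other computers required."""
    prev_hash = chain[-1].hash if chain else "0" * 64
    chain.append(Node(payload, prev_hash))

def verify(chain):
    """Recompute every link; returns False if any earlier entry was tampered with."""
    prev_hash = "0" * 64
    for node in chain:
        expected = hashlib.sha256(
            json.dumps({"payload": node.payload, "prev": prev_hash}, sort_keys=True).encode()
        ).hexdigest()
        if node.prev_hash != prev_hash or node.hash != expected:
            return False
        prev_hash = node.hash
    return True

chain = []
append_entry(chain, "alice pays bob 5")      # entirely made-up example payloads
append_entry(chain, "bob pays carol 3")
print(verify(chain))                          # True
chain[0].payload = "alice pays mallory 500"   # tamper with an earlier entry
print(verify(chain))                          # False

The part the sketch leaves out, the consensus mechanism, is what decides who gets to append the next entry.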
3 Comments on “News: Stopping ransomware, China hates miners, Ecuador CBDC history, NFTs still too hard to buy” Ingvar says: 18th June 2021 at 7:54 am Fuel for future write-ups: https://irony-97882.medium.com/the-melting-of-iron-89469b01e083 Allan says: 21st June 2021 at 6:38 am The link to Frances Coppola’s blog post does not work. Need to add www subdomain, here is working link: https://www.coppolacomment.com/2021/06/bank-capital-and-cryptocurrencies.html David Gerard says: 21st June 2021 at 8:17 am well, it worked before! Fixed 🙂 
davidgerard-co-uk-5238 ---- News: El Salvador, Binance vs Malaysia, Goldman Sachs non-blockchain ETF, Virgil Griffith – Attack of the 50 Foot Blockchain 30th July 2021 - by David Gerard - 3 Comments. Get your signed copies of Attack of the 50 Foot Blockchain and Libra Shrugged, for £25 the pair plus postage — £6 UK, £10 Europe, £12 rest of world. See the post for details. You can support my work by signing up for the Patreon — $5 or $20 a month to support this world of delights! [Patreon] And remind your friends and colleagues to sign up to get my newsletter by email. [scroll down, or click here]   El Salvador calls Cardano El Faro got hold of a presentation from the Cardano Foundation and Whizgrid — a white-label crypto exchange provider — to President Bukele’s Bitcoin team on 4 June 2021, the day before Bukele announced El Salvador’s embrace of Bitcoin. The Zoom videos leaked last week revealed the plan to release a new electronic Colón-Dollar. This fresh leak outlines how to introduce the Colón-Dollar, pay welfare subsidies with the system, and use QR codes for daily transactions in shops. That is: they presented a completely generic electronic payment system, that doesn’t benefit from cryptocurrency in any way. (Reader tv comments: “The first slide in that presentation is so generic and slapped together that they neglected to replace the Euro sign with a dollar sign.”) The story is in Spanish, but the leaked slides are in English. The presentation is nine slides at the bottom of the page on El Faro — you need to swipe sideways to go to the next slide. [El Faro] “Cryptoassets as National Currency? A Step Too Far” by Tobias Adrian and Rhoda Weeks-Brown, IMF. 
It’s clear from how Adrian and Weeks-Brown talk about “cryptoassets,” and that the one they name is “Bitcoin,” that this is about El Salvador and nothing else. This is a perfect response from the people who know about these things, and Adrian in particular — who wrote a paper in May 2019 that pretty closely anticipated Facebook’s June 2019 plan for Libra. I don’t expect President Bukele to pay very much attention. [IMF] Everybody still hates Binance How cryptocurrency exchanges work: Withdrawals are temporarily unavailable due to unscheduled scheduled maintenance. Funds are absolutely safe. Scurrilous rumours that we have been hacked, or “hacked,” are entirely untrue, and we will sue anyone spreading such. To file as an unsecured creditor, please contact our receivers, Grabbit, Skimm & Runne Accountants, at their Mail Boxes Etc in the Caymans. Good news for Binance — the Securities Commission of Malaysia officially recognises Binance! And CZ personally! The Securities Commission has announced actions against Binance for “illegally operating a Digital Asset Exchange (DAX)” — Binance is not registered in Malaysia. [press release] The action is against Binance Holdings Limited (Cayman Islands), Binance Digital Limited (UK), Binance UAB (Lithuania), Binance Asia Services Pte Ltd (Singapore), and Changpeng Zhao himself. Binance must disable binance.com and its mobile app in Malaysia by 8 August, stop all marketing and email to Malaysians, and restrict Malaysians from Binance’s Telegram group. CZ is personally ordered to make sure these all happen. Binance was included on the Securities Commission’s Investor Alert List in July 2020. The list also contains a pile of “potential clone entities” using “Binance” in their names. [SC, archive] The Securities Commission finishes: “Those who currently have accounts with Binance are strongly urged to immediately cease trading through its platforms and to withdraw all their investments immediately.” Couldn’t put it better myself. Binance is stopping margin trading against euros, pounds or Australian dollars from 10 August, and has stopped all margin trading in any currency in Germany, Italy and the Netherlands. [Reuters; Reuters] Binance is totally not insolvent! They just won’t give anyone their cryptos back because they’re being super-compliant. KYC/AML laws are very important to Binance, especially if you want to get your money back after suspicious activity on your account — such as pressing the “withdraw” button. Please send more KYC. [Binance] CZ is looking for a patsy to stand up front while he keeps collecting the money — a replacement CEO for Binance, with a “strong regulatory background,” so CZ can “contribute to Binance and the BNB ecosystem. I don’t have to be CEO to do that.” [CoinDesk] The Long Blockchain of Bitcoin ETFs Goldman Sachs proposes a “DeFi and Blockchain Equity” exchange-traded fund! [SEC filing] The Goldman Sachs Innovate DeFi and Blockchain Equity ETF (the “Fund”) seeks to provide investment results that closely correspond, before fees and expenses, to the performance of the Solactive Decentralized Finance and Blockchain Index (the “Index”). Solactive doesn’t have an index of that name. Solactive told CryptoNews that Goldman would be using their “Solactive Blockchain Technology Performance Index” — which is a list of global tech companies that have, at most, put out a blockchain-themed press release. No Riot Blockchain, no Microstrategy, no Coinbase — instead, we have Nokia (who?) and Accenture. 
When a client thinks they want “blockchain” exposure, the advisor could just recommend this. [CryptoNews] Solactive later told CryptoNews that they were putting together a new index for Goldman’s ETF. Though the SEC filing says that the ETF is “not sponsored, promoted, sold or supported in any other manner by Solactive AG.” We’ll see what the eventual list looks like. Regulatory clarity The New Jersey cease and desist order against BlockFi offering its BlockFi Interest Accounts has been delayed, and will not go into effect before 2 September 2021. [Twitter] Vermont’s Department of Financial Regulation has also asked BlockFi to show cause within 30 days why the company should not be issued a cease and desist on its BlockFi Interest Accounts. Vermont asked BlockFi about this back in January 2021, and considered BlockFi’s response “unsatisfactory.” [CoinDesk] South Korea changes its tax rules to make it easier to seize crypto to pay back taxes. [Reuters] The new Australian Securities and Investments Commission chair, Joseph Longo, warns about unregulated cryptocurrency trading “and other economic threats to pandemic recovery.” He means investment scams. [ABC] I fought the law For Tether’s criminal investigation by the Department of Justice, and the coincidental spike in the price of Bitcoin, see yesterday’s post. Coiners’ continuing assertions that anyone should assume good faith in Bitfinex or Tether when they’re known bad actors in extensively documented, and indeed legally binding, detail is Upton Sinclair’s law — “It is difficult to get a man to understand something, when his salary depends on his not understanding it” — on PCP. Here’s me on a Turkish crypto blog talking about Tether and USDC. Google Translate gives a reasonable rendition of my original answers. [PA Sosyal] Virgil Griffith is the Ethereum Foundation developer who evangelised Ethereum to North Korea and got himself arrested in 2019. Griffith was out on bail, but violated his bail conditions by trying to access his Coinbase account — he got his mother to log in, which totally doesn’t count, right? “Though the defendant is a bright well-educated man, his method of circumvention of the Order was neither clever nor effective,” said the judge. Griffith is back behind bars until his hearing in September. [Amy Castor; order, PDF] Na-no Some Nano fan on Twitter wanted me to listen to a podcast featuring Nano Foundation director George Coxon. I said I’d listen for $50 — very cheap consulting indeed. After one guy tried paying me in Nano, and multiple people attempted to explain to him in small words why digital pogs do not in fact constitute dollars, another guy sent $50 in actual money via PayPal. As a man of my word, I live-tweeted the ordeal. You’ll be unsurprised to hear that this podcast did not, in fact, sell me on Nano. Nano is an ambitious but insignificant research coin, and not even a very fast one — at 70 transactions per second (for comparison, a private Ethereum instance can do 300 TPS), it’s not quite taking over the world any time soon. And even without proof-of-work, Nano still wants to use Bitcoin’s broken and incompetent conspiracy theory economics — it’s all Austrian economics, Bitcoin variant. If this podcast is the sort of marketing that convinces Nano’s bloody awful Twitter pumpers, I can see why they’re like they are. The nicest thing I can say is they’re not as bad as the XRP or Iota pumpers from back in the day. The next time someone wants me to listen to their crypto podcast, it’ll be $100. 
That’s actual-money USD, not your altcoin. A third podcast will probably be $200. I’ll double the price from there until the requests stop. [Twitter] Things happen No, Amazon is not accepting Bitcoin, you idiots. This rumour was entirely based on a single blue-sky job advertisement for a “Digital Currency and Blockchain Product Lead” to “develop the case for the capabilities which should be developed” — which crypto promoters then pushed as far as they thought they could. [Reuters; Amazon, archive] MicroStrategy has announced its Q2 2021 earnings. Businesses still buy their software — the useful thing that MSTR does — giving the company “one of our best operational quarters in our software business in years, highlighted by 13% revenue growth” to $125.4 million. However, “Digital asset impairment losses” are $424.8 million — it seems that Bitcoin went down. But that’s fine — “Going forward, we intend to continue to deploy additional capital into our digital asset strategy.” Good luck with that. [press release] The Huobi and OKCoin crypto exchanges are closing their mainland Chinese subsidiaries — both companies’ operations are now thoroughly Hong Kong-based. [SCMP] Not your JSON, not your coins. But the Ethereum Foundation turns out to be a touchable — and suable — entity. Someone bought 3,000 ETH in the original Ethereum crowdfunding in 2015, the download of the private key failed, and the Foundation can’t give them a backup copy. The Foundation offered the buyers 2,750 ETH in a settlement, but they want all 3,000 ETH. [WJLA] Hot takes Bruce Schneier: Bitcoin isn’t usable as a currency. So if you’re getting bitcoins in bulk from anywhere other than buying them, you’re likely a criminal. [blog post] Notes on Mondex, an early stored-value card, and how a lot of what they did in the 1990s is relevant to CBDCs today. [CHYP] Samantha Keeper: The NFT Rube Goldberg Machine, or, Why is NFT Art So Lazy? “Art and automation’s merger long predates cryptoart’s use of procedural generation. You’ll never hear NFT sellers talk seriously about that history, though, cause it reveals not just NFT art’s contradictions, but also its cynical laziness.” [Storming The Ivory Tower]   Even darkweb black markets don't want to deal with antivaxxers and covid deniers. pic.twitter.com/n9Pt2Q3Njz — MalwareTech (@MalwareTechBlog) July 25, 2021   Hard to believe it's only been six months since the GameStop short squeeze kicked off a populist uprising that will fundamentally change capitalism forever. — Osita Nwanevu (@OsitaNwanevu) July 25, 2021   Your subscriptions keep this site going. Sign up today! 
3 Comments on “News: El Salvador, Binance vs Malaysia, Goldman Sachs non-blockchain ETF, Virgil Griffith”
tv says: 30th July 2021 at 7:51 pm The first slide in that presentation is so generic and slapped together that they neglected to replace the Euro sign with a dollar sign.
David Gerard says: 30th July 2021 at 8:04 pm lol, well spotted!
Elsie H. says: 30th July 2021 at 11:32 pm You’re not familiar with Nokia, the world-famous manufacturer of rubber boots? I’m not sure what a rubber boots have to do with the blockchain, but they’d probably be a good investment for Goldman Sachs regardless.
davidgerard-co-uk-5818 ---- Tether criminally investigated by Justice Department — When The Music Stops podcast – Attack of the 50 Foot Blockchain
Tether criminally investigated by Justice Department — When The Music Stops podcast 29th July 2021 - by David Gerard Number go up! Because there’s trouble at Tether. Specifically: [Bloomberg] A U.S. probe into Tether is homing in on whether executives behind the digital token committed bank fraud, a potential criminal case … the Justice Department investigation is focused on conduct that occurred years ago, when Tether was in its more nascent stages. Specifically, federal prosecutors are scrutinizing whether Tether concealed from banks that transactions were linked to crypto, said three people with direct knowledge of the matter who asked not to be named because the probe is confidential. That’s the entire new information in the story. We don’t know precisely what “years ago” means here — but I’d be surprised if the New York Attorney General didn’t helpfully supply a pile of information from their recently-concluded investigation. Remember that Bitfinex/Tether were lying to their banks all through 2017 and 2018, with banks kicking them off as soon as they found out their customer was iFinex. This week’s “number go up” happened several hours before the report broke — likely when the Bloomberg reporter contacted Tether for comment. BTC/USD futures on Binance spiked to $48,000, and the BTC/USD price on Coinbase spiked at $40,000 shortly after. Here’s the one-minute candles on Coinbase BTC/USD around 01:00 UTC (2am BST on this chart) on 26 July — the price went up $4,000 in three minutes.
You’ve never seen something this majestically organic: Janet Yellen, the Secretary of the Treasury, met on 19 July with the Presidential Oh-Sh*t Working Group of regulators to talk about “stablecoins.” The meeting was closed-door, but a report has leaked. They’re not happy about Libra/Diem-style plans, or about Tether: [Bloomberg] Acting Comptroller of the Currency Michael Hsu said regulators are scrutinizing Tether’s stockpile of commercial paper to see whether it fulfills the company’s pledge that each token is backed by the equivalent of one U.S. dollar. Amy Castor wrote up the present saga for her blog, [Amy Castor] and we both went on Aviv Milner’s podcast When The Music Stops to talk about Tether’s new round of troubles. It’s 51 minutes. [Anchor.fm] Your subscriptions keep this site going. Sign up today!
davidgerard-co-uk-690 ---- Number go up! New Bitcoin peak, exactly three years after the last — what’s happening here – Attack of the 50 Foot Blockchain
Number go up! New Bitcoin peak, exactly three years after the last — what’s happening here 18th December 2020 (updated 19th December 2020) - by David Gerard - 1 Comment
So firstly, I called it two years ago: I'd expect the bubbly headlines to start again 2021-2022, it feels to me like that's about how long it will take to grow a fresh crop of Greater Fools. In general – there's always going to be people who are desperate to buy into this year's version of ostrich farming. — David Gerard 🐍👑 (@davidgerard) November 25, 2018
Not that this was any feat of prognostication. Bitcoin is a speculative commodity without a use case — there is nothing it can do, other than bubble, or fail to be bubbling. It was always destined to stagger along, and then jump at some point, almost certainly for stupid reasons. Remember that the overarching goal of the entire crypto industry is to get those rare actual-dollars flowing in again. That’s what this is about. We saw about 300 million Tethers being lined up on Binance and Huobi in the week previously. These were then deployed en masse. Pump it! You can see the pump starting at 13:38 UTC on 16 December. BTC was $20,420.00 on Coinbase at 13:45 UTC. Notice the very long candles, as bots set to sell at $20,000 sell directly into the pump. Tether hit 20 billion, I hit 10,000 Twitter followers, BTC hit $20,000. It is clearly as JFK Jr. foretold. A series of peaks followed, as the pumpers competed with bagholders finally taking their chance to cash out — including $21,323.97 at 21:54 UTC 16 December, $22,000.00 precisely at 2:42 UTC 17 December, and the peak as I write this, $23,750.00 precisely at 17:08 UTC 17 December.
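If you want to eyeball those candles yourself rather than take my word for it, here is a minimal sketch. It assumes the third-party ccxt library (pip install ccxt) and that the exchange still serves one-minute history that far back; the exchange id and symbol are the ones ccxt used in 2021 and may have changed since.

# Minimal sketch: pull the one-minute BTC/USD candles around the start of the
# 16 December 2020 pump. Any other OHLCV source works just as well.
import ccxt

exchange = ccxt.coinbasepro()                       # ccxt's id for Coinbase's trading API, as of 2021
since = exchange.parse8601("2020-12-16T13:30:00Z")  # a few minutes before the pump starts
candles = exchange.fetch_ohlcv("BTC/USD", timeframe="1m", since=since, limit=30)

for ts, o, high, low, close, vol in candles:
    print(exchange.iso8601(ts), f"open {o:.2f}  high {high:.2f}  low {low:.2f}  close {close:.2f}")

The very long candles described above should show up between 13:38 and 13:45 UTC on 16 December.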
This was exactly three years after the previous high of $19,783.06 on 17 December 2017. [Fortune]   Coinbase BTC/USD chart at the moment of the peak. Dull, isn’t it?   Can you cash out? The Coinbase chart showed quite a lot of profit-taking, as bagholders could finally dump. When you see a pump immediately followed by a drop, that’s what’s happening. Approximately zero of the people saying “BEST INVESTMENT OF THE LAST DECADE” got in a decade ago — they bought at a peak, and are deliriously happy that they can finally cash out. Seriously: if you have Bitcoin bags, this is the time to at least make up your cost basis — sell enough BTC to make up the ActualMoney you put in. Then the rest is free money. Binance and Coinbase showed the quality we’ve come to expect of cryptocurrency exchanges — where you can make them fall over by using them. But I’m sure Coinbase will get right onto letting you have your actual dollars. [CoinTelegraph]     So does everyone love Bitcoin now? Not on the evidence. The Google chart is still dead — see above. That peak in 2017 is what genuine organic retail interest looks like. This isn’t that, at all. Retail still isn’t diving into Bitcoin — even with Michael Saylor at MicroStrategy and Jack Dorsey at Square spending corporate funds on marketing Bitcoin as an investment for actual institutions, and getting holders at hedge funds to do the same. But the marketing will continue. Remember that there’s a lot of stories happening in crypto right now. BitMEX is still up, but Arthur Hayes is a fugitive. The Department of Justice accuses BitMEX — and specifically Hayes — of trading directly against their own customers. iFinex (Bitfinex/Tether) are woefully short of the documents that the New York Attorney General subpoenaed and that they’ve been fighting against producing for two years now, and that they must produce by 15 January 2021. Remember: if iFinex had the documents, they’d have submitted them by now. So there’s a pile of people in trouble — who have coins they need to dump. (And John McAfee did promise “I will eat my dick on national television” if BTC didn’t hit $500,000 within three years of 2017. So maybe this is a last ditch McAfee penis pump.) [The Dickening] A pumped Bitcoin peak is just one story among many going on right now — and a completely expected one. UPDATE: Here’s the smoking gun that this was a  coordinated pump fueled by stablecoins — 127 different addresses trying to deposit stablecoins to exchanges in one block of transactions on Ethereum, just a few minutes before the first price peak. [Twitter]   Lots of people deposited stablecoins to exchanges 7 mins before breaking $20k. Price is all about consensus. I guess the sentiment turned around to buy $BTC at that time. This indicator is helpful to see buying power. Set Alert 👉 https://t.co/xjY9mvaa15 pic.twitter.com/gV4J1N4R9g — Ki Young Ju 주기영 (@ki_young_ju) December 16, 2020   Start the new year by finding a way to create a little joy, no matter how small or fleeting pic.twitter.com/fiiCB4BW3c — foone (@Foone) January 1, 2018 Your subscriptions keep this site going. Sign up today! 
One Comment on “Number go up! New Bitcoin peak, exactly three years after the last — what’s happening here”
Sherman McCoy says: 21st December 2020 at 4:30 am I am curious what pair are the arbitrage bots arbitraging? They end up holding tether I presume after selling into the pump, what then?
davidgerard-co-uk-6965 ---- Bitcoin myths: immutability, decentralisation, and the cult of “21 million” – Attack of the 50 Foot Blockchain
Bitcoin myths: immutability, decentralisation, and the cult of “21 million” 27th June 2021 (updated 4th July 2021) - by David Gerard - 8 Comments
The Bitcoin blockchain is famously promoted as “immutable.” Is the structure of Bitcoin itself immutable? Is the 21 million BTC limit out of the control of any individual? Is Bitcoin a decentralised entity of its own, the essence of its operating parameters unalterable by mere fallible humans? Is Bitcoin truly trustless? Well, no, obviously. Even though bitcoiners literally argue all the above — most notably Saifedean Ammous in The Bitcoin Standard, and Andreas Antonopoulos in his books on Bitcoin — and these ideas are standard in the subculture. Decentralisation was always a phantom. At most it’s a way to say “can’t sue me, bro.” Every process in Bitcoin tends to centralisation — because Bitcoin runs on economic incentives, and centralised systems are more economically efficient. Trustlessness is also a phantom. Bitcoin had to create an entire infrastructure of trusted entities to operate in the world. Something called “Bitcoin” will be around for decades. All you need is the software, the blockchain data, and two or more enthusiasts. But Bitcoin’s particular mythology and operating parameters are entirely separate questions. Bitcoin’s basic operating parameters are unlikely to change in the near future — but this is entirely based on trust in the humans who run it. Their actions are based in whether changes risk spooking the suckers with the precious actual-dollars. The Bitcoin Cash debacle destroyed any hope of substantive change to Bitcoin for a while, leaving Bitcoin as just a speculative trading commodity with nothing else going for it. But if a new narrative is needed, all bets are off. Social convention is entirely normal, and how everything else works — but it’s not the promise of immutable salvation through code. That was always delusion at best. Image by Mike In Space How Bitcoin is marketed Bitcoin is not about the technology. It’s never been about the technology. Bitcoin is about the psychology of getting rich for free. People will say and do anything if you tell them they can get rich for free. You don’t even have to deliver. Bitcoin also has an elaborate political mythology — which is largely delusional and literally based in conspiracy theories.
The marketing pitch is that the actual-money economy will surely collapse any moment now! And if you get into Bitcoin, you can get rich from this. If you want to get rich for free, take on this weird ideology. Don’t worry if you don’t understand the ideology yet — just keep doing the things, and you’ll get rich for free! That the mythology is so clearly at odds with reality is a feature, not a bug — it just proves the world’s out to get you, and you need to stick with the tribe. So the key ingredient in Bitcoin is mythology. And job number one is: don’t spook the suckers. Also, say “21 million” a lot. It’s a mantra to remind the believers to keep the faith. The centralisation of mining When Satoshi Nakamoto designed Bitcoin, he wanted the process of generating new bitcoins to be distributed. He distributed coins to whoever processed a block of transactions. (These blocks were chained together to form the ledger — which was called the “block chain” in the original Bitcoin source code.) Just giving away bitcoins to anyone who asked wouldn’t work because of the sybil problem — where you couldn’t tell if a thousand people asking for coins were really just one guy with a thousand sockpuppets. Satoshi’s way around this was to require some form of unfakeable commitment before you’d be allowed to validate the transactions, and win the coins. He came up with using an old idea called “proof-of-work” — which is really proof of waste. You waste electricity to show your commitment, and the competitors win bitcoins in proportion to how much electricity they waste. This is called “bitcoin mining,” in an analogy to gold mining. Miners guess numbers and calculate a hash as fast as they can; if their guess hashes to a small enough number, they win the bitcoins! Satoshi envisioned widely distributed Bitcoin mining — “Proof-of-work is essentially one-CPU-one-vote.” [Bitcoin white paper, PDF] The problem here is that mining has economies of scale. The bigger a mining operation you have, the more you can optimise the process, and calculate more hashes with each watt-hour of energy. This means that proof-of-work mining naturally centralises. And this is what we see happening in practice. For the first year, Satoshi personally did a lot of the mining — accumulating a stash of around a million bitcoins, that he never moved. More individual CPU users joined in over 2009 and 2010; but by late 2010, people started mining on video cards, which could calculate hashes much faster — to Satoshi’s shock. [thread] When you have a tiny share of all the mining, rewards are sporadic — you might not see a bitcoin for months. The solution was to join together in mining pools, who would share the rewards. The first was Slushpool in late 2010. The Deepbit pool ran from 2011 to 2013; at its peak, it controlled 45% of mining. By late 2013, application-specific ICs (ASICs) that did nothing but mine bitcoins as fast as possible were being deployed; you couldn’t compete without using ASICs. The most successful early ASIC manufacturer was Bitmain — who also controlled a large mining pool. The doomsday scenario in early Bitcoin was a “51% attack” — if you had 51% of mining, you could block anyone else’s transactions and accept only those you wanted, and the Bitcoin blockchain would read the way you wanted it to. If anyone achieved 51%, it was game over! GHash.io achieved 51% in July 2014. [Guardian, 2014] The GHash.io pool promptly split apart, to calm the upset Bitcoin fan base. Nobody spoke of the 51% problem in Bitcoin again. 
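To make the hash-guessing lottery described earlier in this section concrete, here is a toy sketch in Python. Real mining hashes an 80-byte block header with double SHA-256 against a 256-bit target set by the difficulty adjustment; this toy keeps only the shape of it, and the difficulty_bits knob is made up for illustration.

# Toy proof-of-work: guess nonces until the double SHA-256 hash of the block
# data lands below a target. Whoever can make the most guesses per second wins
# most often, which is the whole centralisation problem in one line.
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 20) -> int:
    target = 2 ** (256 - difficulty_bits)    # smaller target = more electricity wasted per win
    nonce = 0
    while True:
        payload = block_data + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce                     # a "winning" guess: this miner collects the block reward
        nonce += 1

print(mine(b"a block of transactions"))      # about 2**20 guesses on average at this toy difficulty

At difficulty_bits=20 that is roughly a million guesses; the real network, as of 2021, needs on the order of 2^76 guesses per block, hence the country-sized electricity bill.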
(Though altcoins have frequently suffered 51% attacks.) Even Ammous talked about 51% attacks in The Bitcoin Standard and somehow forgot to mention that this had already happened. But from 2014 on, Bitcoin mining was indisputably centralised. In 2015, the men controlling 80% of Bitcoin mining stood on stage together at a conference. Three or four entities have run Bitcoin mining since then.  The only thing preventing miner misbehaviour is wanting to avoid spooking the suckers — it’s completely trust-based. Bitcoin now uses a country’s worth of electricity [Digiconomist; CBECI] for no actual reason. You could do the transactions on a 2007 iPhone. Controlling the mining chips also controls mining. You can’t afford to piss off Bitmain — as Sia found out, when they couldn’t get chips for their minor altcoin fabricated in China because Bitmain didn’t like it. The centralisation of development Bitcoin’s operating parameters get tweaked all the time. There’s even a standard process for suggesting improvements. Peter Ryan’s essay “Bitcoin’s Third Rail: The Code is Controlled” details how the Bitcoin development process works in practice: you submit changes to a core group, who then decide whether this is going in. The core group then decides what goes in and how it will work. [Ryan Research] The thing is, this is completely normal. This is how real open source projects work. The essay presents this basic reality in a straightforward fashion; I’d take issue only with the headline, which paints this as in any way surprising or shocking. It really isn’t. There are multiple bitcoin wallets — the official bitcoin.org wallet and many, many others. So the wallet software is nicely varied. You are, of course, trusting the developers — because approximately zero Bitcoin users are capable of auditing the code, let alone doing so. Hardware wallets such as Ledger have also been exploited plenty of times. What if you don’t like what the developers do? The fundamental promise of free software, or open source software, is that you have the freedom to change the code and do your own version, and the original developers can’t stop you. If you don’t like the official version of Linux, you can go off and do your own. In the world of cryptocurrency, this means you can take the Bitcoin code, tweak a few numbers, and start your own altcoin. And thousands did. What this doesn’t get you is control of the existing network that runs a crypto-token that’s called “BTC” and sold for a lot of actual money on crypto exchanges. That’s the prize. The centralisation of trading Crypto exchanges also benefit from economies of scale — there’s more liquidity and volume on the biggest exchanges. However, there’s now a reasonable variety of exchanges to choose from — not like 2014, when everyone’s coins were in Mt. Gox. You can even pick whether you want a pro-regulation exchange like Gemini, or a free-for-all offshore casino! Exchanges collectively hold one important power: what they trade as the token with the ticker symbol “BTC.” As we’ll see later, this power needs consideration. Crypto remains utterly dependent on the US dollar. Bitcoin maxis who profess to hate dollars will never shut up about the dollar price of their holding. (Unless number goes down — then they’re suddenly into Bitcoin for the technology.) The point of crypto trading is to cash out at some point — and actual dollars have regulation. FinCEN are interested in what you do with actual money. 
Dollars pretty much always pass through the New York banking system, which creates a number of issues for crypto companies. A few exchanges let you get actual dollars in and out. Many more — including some of the most popular — are too dodgy to get proper banking. These use tethers, a stablecoin supposedly worth one dollar. Crypto trading solved the tawdry nuisance of dollars being regulated by using tethers instead, and cashing out your bitcoin winnings through Coinbase or Bitstamp as gateway exchanges. If you can believe the numbers reported by the tether exchanges — which, to be fair, you probably can’t without a massive fudge factor — then the overwhelming majority of trading against Bitcoin, or indeed any other crypto, is in tethers. This becomes a systemic issue for the crypto markets when Tether’s backing turns out to be ludicrously questionable. The crypto market pumpers seem to think Tether is in trouble, and now the USDC stablecoin is issuing dollar-equivalent tokens at a rate of billions a month. USDC’s accountant attestations recently changed from saying that every USDC is backed by dollars in a bank account to saying that they may also be backed by “approved investments” — though not what those investments might be, or what the proportion is. [March attestation, PDF] I’m sure it’ll be fine. Scaling bitcoin Bitcoin doesn’t work at scale, and can’t work — because they took Satoshi’s paper-and-string proof of concept and pressed it into production. This approach never goes well. But insisting the fatal flaws are actually features has pacified the suckers so far. Bitcoin is not very fast. It can process a theoretical maximum of about 7 to 10 transactions per second (TPS) — total, world-wide, across the whole network. In practice, it’s usually around 4 to 5 TPS. For comparison, Visa claims up to 65,000 TPS. [Visa, PDF] In mid-2015, the Bitcoin network finally filled its tiny transaction capacity. Transactions became slow, expensive and clogged. By October 2016, Bitcoin regularly had around 40,000 unconfirmed transactions waiting, and in May 2017 it peaked at 200,000 stuck in the queue. [FT, 2017, free with login] Nobody could agree how to fix this, and everyone involved despised each other. The possible solutions were:
1. Increase the block size. This would increase centralisation even further. (Though that ship really sailed in 2013.)
2. The Lightning Network: bolt on a completely different non-Bitcoin network, and do all the real transactions there. This only had the minor problem that the Lightning Network’s design couldn’t possibly fix the problem.
3. Do nothing. Leave the payment markets to use a different cryptocurrency that hasn’t clogged yet. (Payment markets, such as the darknets, ended up moving to other coins that worked better.)
Bitcoin mostly chose option 3 — though 2 is talked up, just as if saying “But, the Lightning Network!” solves the transaction clog. But, the Lightning Network! The Lightning Network was proposed in February 2015 as a solution to the clog that everyone could see was coming. It’s really clearly an idea someone made up off the top of their heads — but it was immediately seized upon as a possible solution, because nobody had any better ideas. Lightning doesn’t work as advertised, and can’t work. This is not a matter of a buggy or incomplete implementation — this is a matter of the blitheringly incompetent original design: prepaid channels, and a mesh network that would literally require new mathematics to implement.
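To make "prepaid channels" concrete before the detail below, here is a toy sketch. It is not the Lightning protocol (no signatures, no HTLCs, no routing), and the class and method names are made up for illustration; the point is just that everything you might ever spend gets locked up front, and only balance updates move off-chain.

# Toy prepaid payment channel: both sides lock funds up front ("funding
# transaction"), pass balance updates back and forth off-chain, and only the
# final balances go back to the slow, expensive chain ("closing transaction").
class ToyChannel:
    def __init__(self, alice_deposit: int, bob_deposit: int):
        # everything either side can ever spend in this channel is locked here
        self.balances = {"alice": alice_deposit, "bob": bob_deposit}

    def pay(self, sender: str, receiver: str, amount: int) -> None:
        if self.balances[sender] < amount:
            raise ValueError("channel exhausted: top up on-chain, or find another route")
        self.balances[sender] -= amount      # off-chain: just a newly agreed state
        self.balances[receiver] += amount

    def settle(self) -> dict:
        return dict(self.balances)           # what gets written back to the chain

channel = ToyChannel(alice_deposit=50_000, bob_deposit=0)   # satoshis locked in the funding transaction
channel.pay("alice", "bob", 700)                            # one coffee
print(channel.settle())                                     # {'alice': 49300, 'bob': 700}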
Users have to set up a pre-funded channel for Lightning transactions, by doing an expensive transaction on the Bitcoin blockchain. This contains all the money you think you’ll ever spend in the channel. This is ridiculously impractical. You’re not going to send money back-and-forth with a random coffee shop — you want to give them money and get your coffee. Lightning’s promise of thousands of transactions per second can obviously only work if you have large centralised entities who almost everyone opens channels with. You could call them “banks,” or “money transmitters.” Lightning originally proposed a mesh network — you send money from A to B via C, D and E, who all have their own funded channels set up. But routing transactions across a mesh network, from arbitrary point A to arbitrary point B, without a centralised map or directory, is an unsolved problem in computer science. This is even before the added complication of the network’s liquidity changing with every transaction. This basic design parameter of Lightning requires a solution nobody knows how to code. Again, this drives Lightning toward central entities as the only way to get your transactions across the network. There are other technical issues in the fundamental design of Lightning, which make it a great place to lose your money to bugs, errors or happenstance. [Reddit] Frances Coppola has written several essays on Lightning’s glaring design failures as a payment system. [Forbes, 2016; blog post, 2018; blog post, 2018; CoinDesk, 2018] Lightning is an incompetent banking system, full of credit risks, set up by people who had no idea that’s what they were designing, and that they’re still in denial that that’s what Lightning is. The only purpose Lightning serves is as an excuse for Bitcoin’s miserable failure to scale, with the promise it’ll be great in eighteen months. Lightning has been promising that it’s eighteen months from greatness since 2015. A good worked example of Lightning as an all-purpose excuse can be seen in the present version of El Salvador’s nascent Bitcoin system. Strike loudly proclaims it uses Lightning — but their own FAQ says they don’t pass transactions from the rest of the network. Bitcoin Beach uses Lightning — but reports indicate it doesn’t reliably pass money back out again, except via the slow and expensive Bitcoin blockchain. With sufficient thrust, pigs fly just fine. [RFC 1925] The Lightning fans strap a rocket to Porky and then try to sell you on his graceful aerobatics. The Bitcoin Cash split So why not just make the Bitcoin blocks bigger? If Bitcoin sucks, we can fix it, right? Bitcoin Cash, in 2017, was the last time there was a serious attempt to fix Bitcoin’s operating parameters. Bitcoin developers could see the blocks filling in early 2015. Some proposed a simple fix: raise the size of a block of transactions from 1 megabyte to 2 or 8 megabytes. But the Bitcoin community was sufficiently dysfunctional that even this simple proposal led to community schisms, code forks, retributive DDOS attacks, death threats, a split between the Chinese miners and the American core programmers … and plenty of other clear evidence that this and other problems in the Bitcoin protocol could never be fixed by a consensus process. “Trustlessness” just ends up attracting people who can’t be trusted. [New York Times] This didn’t make the problems go away. 
Finally, in late 2017, large holder Roger Ver, in concert with large mining pool and mining hardware manufacturer Bitmain, promoted Bitcoin Cash, with large blocks, as the replacement for the deprecated Bitcoin software. This wasn’t just starting a fresh altcoin — it was a fork of the blockchain itself. So everyone who had a large Bitcoin holding suddenly had the same holding of Bitcoin Cash as well. Free money! The Bitcoin Cash split was a decentralised Judgement of Solomon — wherein an exasperated Solomon says “All right, we’ll just cut the baby in two then,” and the mothers think that’s a great idea, and start fighting bitterly over whether to slice the kid horizontally or vertically. Bitcoin Cash launched in August 2017, and wanted to take over the “BTC” ticker on exchanges. This failed — it had to go with BCH. But Bitcoin Cash did have a chance at the “BTC” ticker for a while there; exchanges were watching to see if Bitcoin Cash became more popular. Bitmain mined BCH furiously instead of mining BTC — and BTC’s block times went from ten minutes to over an hour. As it happened, Bitcoin was in the middle of a bubble — so nobody much noticed or cared, because all the number-go-up action was happening on exchanges, and not on chain. Ver owned bitcoin.com, and furiously promoted BCH as a Bitcoin that you could use for cash transactions. This was a worthy ambition — but nobody much cared. Mostly, the few retail users of Bitcoin would get confused between the two, and end up sending money to an address on the wrong chain. Bitcoin Cash completely failed to gain traction as a retail crypto. Most glaringly, it failed to get any takeup in the darknet markets — the first real use case for Bitcoin, and the people having the most practical trouble with Bitcoin’s transaction clog. The BTC version of Bitcoin also lost all traction as a retail crypto — the average transaction fee peaked at around $55 in December 2017. It was around this time that Bitcoin advocates stopped trying to pretend that Bitcoin would ever work as currency. They went in hard on the “digital gold” narrative, and pretended this had always been the intention — and never mind that the Bitcoin white paper is literally titled “Bitcoin: A Peer-to-Peer Electronic Cash System.” There were other completely ridiculous fights to the death for insanely low stakes around this time — Segwit (which was eventually adopted, making blocks slightly larger), Segwit2x (which wasn’t), and UASF (where non-miners thought they could sabotage protocol changes they didn’t like). These were different in technical terms, but not different in bitterness or stupidity. None of the disputes were really technical — it was all the politics of who got to make money. Everyone involved hated everyone else, and characterised their opponents as working in bad faith to sabotage Bitcoin. They figured that if they shouted enough abuse at each other, they’d get rich faster, or something. Bitcoin Cash fell flat on its face. Bitmain ended up with 1 million BCH “inventory” — that is, a pile of coins there was no market for — and fired its entire BCH team in late 2018. [CCN] BCH continues as just another altcoin with hopes and dreams — and not as any serious prospect to take over from BTC. The main thing keeping it alive is that it’s listed on Coinbase. Also, you can still send BCashers into conniptions just by calling Bitcoin Cash “BCash” — the BCashers decided that calling BCH-Coin “BCash” was a slur.
Though their real objection was that this term leaves out the word “bitcoin.” It’s unfortunate that BCash’s graphic designer never told them about the effects of putting a transparent logo PNG on a white background, so it looks like it says “B Cash.” [bitcoincash.org, archive] Jonathan Bier has written a book about this slice of Bitcoin history: The Blocksize War (US, UK). Peter Ryan has reviewed the book; he thinks the chronology is correct, but the presentation is horribly slanted. [Twitter] I concur, and it’s glaring right from the intro — it’s a book-length blog rant about people who really pissed Bier off, and it’s incomprehensible if you weren’t following all of this at the time. It’s on Kindle Unlimited.   THIS IS WHY YOU GET A PROPER GRAPHIC ARTIST, AND PAY THEM.   But the blockchain’s immutable, right? The blockchain is probably immutable — there’s no real way to change past entries without redoing all the hash-guessing that got to the present. But the blockchain is data interpreted by software. In 2016, Ethereum suffered the collapse of The DAO — a decentralised organisation, deliberately restricted from human interference, running on “the steadfast iron will of unstoppable code.” (Bold in original.) The DAO was hacked — taking 14% of all ether at the time with it. How did the Ethereum developers get around this? They changed how the software interpreted the data! Immutability lasted precisely until the big boys were in danger of losing money. There’s already legal rumblings in this direction — Craig Wright, the man who has previously failed to be Satoshi Nakamoto, has sent legal letters to Bitcoin developers demanding that they aid him in recovering 111,000 bitcoins that he doesn’t have the keys for. [Reuters] Has history ended, then? There are people who would quite like the 21 million Bitcoin limit to change: the miners. The 2020 halving dropped the issuance of bitcoins from 12.5 BTC per block to 6.25 BTC. In 2024, that’ll drop again, to 3.125 BTC. The question is power — miners have a lot of power in the Bitcoin system. That power is shaky at present, because so much mining just got kicked out of China. Can they swing a change to Bitcoin issuance? The bit where proof-of-work mining uses a country’s worth of electricity to run the most inefficient payment system in human history is finally coming to public attention, and is probably Bitcoin’s biggest public relations problem. Normal people think of Bitcoin as this dumb nerd money that nerds rip each other off with — but when they hear about proof-of-work, they get angry. Externalities turn out to matter. Ethereum is the other big crypto that’s relatively convertible to and from actual money. If Ethereum can pull off a move away from proof-of-work, that will create tremendous pressure on Bitcoin to change, or be hobbled politically and risk the all-important interfaces to actual money. (That said, Ethereum just put off the change from proof-of-work to … about eighteen months away, where it’s been since 2014. Ah well.) Bitcoin mythology has changed before, and it’ll change again. “21 million” will be broken — if number-go-up requires it. The overriding consideration for any change to Bitcoin is: it must not risk shaking the faith of the most dedicated suckers. They supply the scarce actual-dollars that keep the casino going.   
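One footnote on where the magic number even comes from: there is no "21 million" constant sitting anywhere in the consensus rules. It is just the sum of the halving schedule, 210,000 blocks per subsidy era starting at 50 BTC, with integer satoshi division, and a schedule is exactly the kind of thing a rule change can alter. A quick check in Python (the result matches the figure DES quotes in the comments below):

# "21 million" is just the sum of the block subsidy schedule: 210,000 blocks
# per era, starting at 50 BTC (5,000,000,000 satoshis), integer-halved each
# era until the subsidy rounds down to zero.
subsidy = 50 * 100_000_000        # satoshis per block in the first era
total = 0
while subsidy > 0:
    total += 210_000 * subsidy
    subsidy //= 2

print(total)                      # 2,099,999,997,690,000 satoshis
print(total / 100_000_000)        # ~20,999,999.9769 BTC: the famous "21 million"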
This article was a commission: to write about “the fallacy that Bitcoin is immutable, when apparently a vote can increase the 21M cap or do anything else.” If you have a question you really want in-depth coverage of, and that I haven’t got around to doing a deep dive on — I write for money! Your subscriptions keep this site going. Sign up today! Share this: Click to share on Twitter (Opens in new window) Click to share on Facebook (Opens in new window) Click to share on LinkedIn (Opens in new window) Click to share on Reddit (Opens in new window) Click to share on Telegram (Opens in new window) Click to share on Hacker News (Opens in new window) Click to email this to a friend (Opens in new window) Taggedbitcoinbitcoin cashbitcoin corebookcraig wrightethereumghash.iojonathan bierlightning networkminingmt goxpeter ryansatoshi nakamototetherthe blocksize warthe daotradingusdcvisa Post navigation Previous Article News: Stopping ransomware, China hates miners, Ecuador CBDC history, NFTs still too hard to buy Next Article El Salvador: Chivo Wallet, Strike speaks, users test the Bitcoin system so far 8 Comments on “Bitcoin myths: immutability, decentralisation, and the cult of “21 million”” Paul J says: 27th June 2021 at 10:27 pm Just to note that in the linked article – https://www.ryanresearch.co/post/bitcoin-s-third-rail-the-code-is-controlled – he refers to BIP 42 without realizing it was an April Fool’s joke (that Satoshi failed to set a limit, so we will in this BIP in 2014). That has confused me a little. Is the 21million number in the original code or added later and if so do you know when? Reply Brandon says: 28th June 2021 at 5:20 am The 21 mil limit was original. It’s a natural consequence of the reward decay rules though. This makes it an indirect limit which is why the April fools joke is technically not a joke, there really is no coded upper limit. Reply DES says: 28th June 2021 at 9:00 am BIP-42 was not a joke, just tongue-in-cheek. It fixed a bug that would have caused the subsidy (aka block reward) to roll back around to 50 BTC after 64 halving intervals or ~256 years (~2264). And to answer your question, the issuance limit is the sum of subsidies up to the point where the block reward goes to 0 (after 32 halvings or around ~2136), specifically 2,099,999,997,690,000 sat or 20,999,999.9769 BTC. The curve is exponential, so 99% of that will already have been mined by the 7th halving in ~2036. This can be changed but I find it highly unlikely; if BTC is still a thing by 2036 the whales are unlikely to agree to a change that will immediately devalue their HODLings, Reply Mark Rimer says: 28th June 2021 at 4:33 am Great review on butt-history, and especially the Lightning Network, David! It’s the definition of, “ That’s so stupid; you must be explaining it wrong!” Goes right up there with the log-scale graphs that lie about data right on same graph! Reply David Gerard says: 28th June 2021 at 8:08 am I had this argument with someone on Twitter about Lightning this week – they were convinced it was just a matter of further development, and not a matter of a design that couldn’t work unless they discovered new mathematics. Reply Brandon says: 28th June 2021 at 5:38 am Another big myth is that Bitcoin has no monetary authority. It actually does but it is a democratic authority. All of the things that a central planner can do are possible in Bitcoin as well. 
Bitcoin wallets can be blacklisted/blocked from transacting, the coin cap can be changed, taxes/fees can be introduced, pretty much anything that a bank or government can do to a US dollar account can be done to a Bitcoin wallet by the developers, nodes, and miners.
David Gerard says: 28th June 2021 at 8:12 am Not so much democratic as plutocratic.
Blake says: 28th June 2021 at 6:26 pm Exactly, Bitcoin Banks and financial services will act like all other banks and financial services, in fact, they will probably be bought by the existing banks, if successful.
davidgerard-co-uk-971 ---- The origin of ‘number go up’ in Bitcoin culture – Attack of the 50 Foot Blockchain
The origin of ‘number go up’ in Bitcoin culture 27th May 2019 (updated 23rd March 2021) - by David Gerard - 6 Comments
This post is ridiculously popular, for unclear reasons. Please check out the news blog, buy my book — it got good reviews! — or sponsor the site! You can also sign up (for free) at the “Get blog posts by email” box at your right! Bitcoin believers — often the same Bitcoin believers — will say whatever they think is good news for Bitcoin at that moment. From anarcho-capitalism and replacing the International Bankers, to stablecoins and institutional investors. Because “NUMBER GO UP” is the only consistent Bitcoin ideology. The phrase itself — without a “the,” for ungrammatical effect — has long been used wherever Goodhart’s law applies — “When a measure becomes a target, it ceases to be a good measure.” But who first applied “number go up” to cryptocurrency? This came up in a Twitter discussion today, when it was mentioned by Chris DeRose. So I went looking. The earliest “number go up” I’ve found in crypto discussion is from SomethingAwful — not the Buttcoin thread, but one of the other SomethingAwful crypto threads at the time — in a comment by user Blade Runner, on 5 October 2017 (archive), as the crypto bubble was in full swing: It’s legitimately amusing to see a Ham shoe shine boy incessantly screeching with no knowledge of how Buttcoin works beyond “NUMBER GO UP, ME COULD HAVE HAD BIG NUMBER, GOON BAD” There are many reasons that it’s a terrible investment, and these have been explained over and over again, and yet the man continues to screech while pointing at an imaginary number inflated by ridiculous amounts of margin trading on an exchange that can’t be withdrawn from It was promptly adopted both on that thread and the main Buttcoin thread. Particularly by me. The earliest crypto Twitter “number go up” without a “the” that I can find is from @VexingCrypto talking about their WaltonCoins, on 15 November 2017. Though it wasn’t in the present sense:
I just locked in an order for $Wtc on #Binance at .0006500 and I'm not changing it as long as $BTC is going up. LOCK YOUR NUMBER NOW! Change order when $BTC drops then your shares for same number go UP. We will see what happens. #Waltonchain #Bitcoin — Vexing Crypto 💦 (@VexingCrypto) November 15, 2017
The first usage I can find on Twitter in the present sense seems to be from me, on 9 December 2017, about proof-of-work crypto mining: PoW is a kludge, a terrible one. it was literally never an actually good idea. and bitcoin had substantially recentralised after five years anyway. now it's just a complete waste. and as other coins have shown, nobody actually cares about 51% as long as number go up.
— David Gerard 🐍👑 🌷 (@davidgerard) December 9, 2017

@buttcoin used it the next day, about Iota:

because number go up look,, i don't think you,,, really understand,,,, the blockchain,,,,, https://t.co/dSxFDengng — Buttcoin (@ButtCoin) December 10, 2017

Since then, the number of “number go up” continue only go up. BULLISH! TO THE MOON!!

Your subscriptions keep this site going. Sign up today!

6 Comments on “The origin of ‘number go up’ in Bitcoin culture”

Ranulph Flambard says: 28th May 2019 at 2:43 am
Numbers go up IS GOOD. Numbers go down is bad. Simples.

Satoshi Nakamoto says: 28th May 2019 at 4:59 pm
Fly away goon, all the way back to GBS.

David Gerard says: 28th May 2019 at 7:53 pm
as a cultured denizen of PYF, C-SPAM and YOSPOS, I, er, never mind.

Chris DeRose says: 3rd June 2019 at 8:41 pm
Love it. The unironic adoption by the bitcoin community, of this mantra, is hilarious.

David Gerard says: 3rd June 2019 at 9:13 pm
NUMERALIS ASCENDUS!

Flibbertygibbet says: 4th May 2021 at 3:02 am
“This post is ridiculously popular, for unclear reasons.” It’s the top google result for “number go up”. You somehow stumbled into SEO.
digitallibrarian-org-8652 ---- The Digital Librarian – Information. Organization. Access.

Libraries and the state of the Internet By jaf Posted on June 27, 2016 Posted in digital libraries No Comments Mary Meeker presented her 2016 Internet Trends report earlier this month. If you want a better understanding of how tech and the tech industry is evolving, you should watch her talk and read her slides. This year’s talk was fairly …

Meaningful Web Metrics By jaf Posted on January 3, 2016 Posted in Web metrics No Comments This article from Wired magazine is a must-read if you are interested in more impactful metrics for your library’s web site. At MPOE, we are scaling up our need for in-house web product expertise, but regardless of how much we …

Site migrated By jaf Posted on October 1, 2012 Posted in blog No Comments Just a quick note – digitallibrarian.org has been migrated to a new server. You may see a few quirks here and there, but things should be mostly in good shape. If you notice anything major, send me a Challah. Really.
…

The new iPad By jaf Posted on March 18, 2012 Posted in Apple, Hardware, iPad 1 Comment I decided that it was time to upgrade my original iPad, so I pre-ordered a new iPad, which arrived this past Friday. After a few days, here are my initial thoughts / observations: Compared to the original iPad, the new …

3rd SITS Meeting – Geneva By jaf Posted on August 3, 2011 Posted in Conferences, digital libraries, Uncategorized, workshops No Comments Back in June I attended the 3rd SITS (Scholarly Infrastructure Technical Summit) meeting, held in conjunction with the OAI7 workshop and sponsored by JISC and the Digital Library Federation. This meeting, held in lovely Geneva, Switzerland, brought together library technologists … Tagged with: digital libraries, DLF, SITS

David Lewis’ presentation on Collections Futures By jaf Posted on March 2, 2011 Posted in eBooks, Librarianship 1 Comment Peter Murray (aka the Disruptive Library Technology Jester) has provided an audio-overlay of David Lewis’ slideshare of his plenary at last June’s RLG Annual Partners meeting. If you are at all interested in understanding the future of academic libraries, … Tagged with: collections, future, provisioning

Librarians are *the* search experts… By jaf Posted on August 19, 2010 Posted in Librarianship No Comments …so I wonder how many librarians know all of the tips and tricks for using Google that are mentioned here?

What do we want from Discovery? Maybe it’s to save the time of the user…. By jaf Posted on August 18, 2010 Posted in Uncategorized 1 Comment Just a quick thought on discovery tools – the major newish discovery services being vended to libraries (WorldCat local, Summon, Ebsco Discovery Service, etc.) all have their strengths, their complexity, their middle-of-the-road politician trying to be everything to everybody features. …

Putting a library in Starbucks By jaf Posted on August 12, 2010 Posted in digital libraries, Librarianship No Comments It is not uncommon to find a coffee shop in a library these days. Turn that concept around, though – would you expect a library inside a Starbucks? Or maybe that’s the wrong question – how would you react to … Tagged with: coffee, digital library, library, monopsony, starbucks, upsell

1 week of iPad By jaf Posted on April 14, 2010 Posted in Apple, eBooks, Hardware, iPad 1 Comment It has been a little over a week since my iPad was delivered, and in that time I have had the opportunity to try it out at home, at work, and on the road. In fact, I’m currently typing this … Tagged with: Apple, digital lifestyle, iPad, mobile, tablet

digital-library-upenn-edu-7430 ---- The Dream Coach. A Celebration of Women Writers

The Dream Coach By Anne Parrish, 1888-1957 and Dillwyn Parrish, 1894-1941. New York: The Macmillan Company, 1924. Copyright not renewed. A Newbery Honor Book, 1925.

The Dream Coach

THE MACMILLAN COMPANY NEW YORK ˙ BOSTON ˙ CHICAGO ˙ DALLAS ATLANTA ˙ SAN FRANCISCO MACMILLAN & CO., LIMITED LONDON ˙ BOMBAY ˙ CALCUTTA MELBOURNE THE MACMILLAN CO. OF CANADA, LTD.
TORONTO

THE DREAM COACH * FARE: FORTY WINKS * COACH LEAVES EVERY NIGHT FOR NO ONE KNOWS WHERE * AND HERE IS TOLD HOW A PRINCESS, A LITTLE CHINESE EMPEROR, A FRENCH BOY, & A NORWEGIAN BOY TOOK TRIPS IN THIS GREAT COACH * BY ANNE AND DILLWYN PARRISH * WITH PICTURES & A MAP By THE AUTHORS * NEW YORK * THE MACMILLAN COMPANY * MCMXXIV

COPYRIGHT, 1924. BY THE MACMILLAN COMPANY. Set up and electrotyped. Published, September, 1924. Printed in the United States of America

TO EVERETT AND ROLAND JACKSON

CONTENTS
THE DREAM COACH
THE SEVEN WHITE DREAMS OF THE KING'S LITTLE DAUGHTER
GORAN'S DREAM
A BIRD-CAGE WITH TASSELS OF PURPLE AND PEARLS (Three Dreams of a Little Chinese Emperor)
"KING" PHILIPPE'S DREAM

The Dream Coach

THE DREAM COACH

If you have been unhappy all the day,
Wait patiently until the night:
When in the sky the gentle stars are bright
The Dream Coach comes to carry you away.

Great Coach, great Coach, how fat and bright your sides,
To please the child who rides!
Painted with funny men – see that one's hose,
How blue! How red and long is that one's nose!
And under this one's arm a flapping cock!
Great dandelions tell us what o'clock
With silver globe much bigger than the moon —
Dream Coach, come soon! Come soon!

What pretty pictures! Angels at their play,
And brown and lilac butterflies, and spray
Of stars, and animals from far away,
Grey elephants, a bright pink water bird;
Things lovely and absurd.

As the wheels turn, they wake to lovely sound,
Musical boxes – as the wheels go round
They play a little silver spray of notes:
"Swift Runs the River" – "Bluebells in the Wood" —
"The Waterfall" – "The Child Who Has Been Good" —
Like splash of foam at keel of little boats.

Under a sky of duck-egg green
Have you not seen
The hundred misty horses that delight
To draw the coach all night,
And the queer little Driver sitting high
And singing to the sky?
His hat is as tall as a cypress tree,
His hair is as white as snow;
His cheeks and his nose are as red as can be;
He sings: "Come along! Come along with me!"
Let us go! Let us go!

His coat is speckledy red and black,
His boots are as green as a beetle's back,
His beard has a fringe of silver bells
And scarlet berries and small white shells,
And as through the night the Dream Coach gleams,
The song he sings like a banner streams:
"Nothing is real in all the world,
Nothing is real but dreams."

(From Knee-High to a Grasshopper.)

WHEN the Driver of the Dream Coach reached the last small star in the sky, he unharnessed his hundred misty horses and put them out to pasture in the great blue meadow of Heaven. It was well he reached the end of his journey when he did, for in another moment a mounting wave of sunlight and wind, rushing up from the world far below, blew out the silver-white flame of the star so that no one could follow the strange Driver and his strange Coach to their resting place. Resting place? What a mistake! The Driver of the Dream Coach never rests. You see, there are so many things to do even when he is carrying no passengers.
There are new dreams to invent: queer dreams, funny dreams, fairy dreams, goblin dreams, happy dreams, exciting dreams, short dreams, long dreams, brightly colored dreams, and dreams made out of shadows and mist that vanish as soon as one opens one's eyes. Then there is the very bothersome matter of keeping the records straight, records of those who deserve good dreams, those who need cheering with ridiculous dreams, and those, alas, who have been bad and naughty and have to be punished (how the little Driver hates this!) with nightmares. It is hard to keep all those dreams from getting mixed up, there are so many of them. Indeed, sometimes, they do get mixed up, and a good child, who was meant to have a dream as pretty as a pansy or as funny as a frog, gets a nightmare by mistake. But the Driver of the Dream Coach tries as hard as he possibly can never to let this happen. He has so very much to do that he never would catch up with his work no matter how quickly his beautiful horses galloped from star to star, from world to world, if there was not some one to help him. There are little angels who help the Driver of the Dream Coach. In their gold and white book they keep a record of every one on earth. As soon as the Driver of the Dream Coach had unharnessed his horses he went to these angels and planned his next trip. What a busy night it was to be! If I should use all the paper and all the pencils in the world I could not begin to tell you about all the dreams he arranged to carry to the sleeping world. And yet there was one child who was nearly forgotten, a little Princess whose name had been written at the top of a new page which the Driver had neglected to turn in his hurry. "Surely you are not going to forget the little Princess on her birthday!" pleaded the little angels, turning the page. "Oh, dear!" said the Driver. "That will never do; now, will it? And yet – I simply can't pack another dream into the Coach. I'm sorry, but I'm afraid – " "Oh, dear!" echoed the angels. "Perhaps – " Just then one of the youngest angels, who happened to be leaning over the parapet of Paradise, saw the Princess begin to cry, and took in the situation instantly. So he hurried to the others and suggested that he himself should carry a dream to the little Princess. The Driver of the Dream Coach thought this was a splendid idea and thanked him again and again for his help. That is how the seven white dreams of the King's little daughter were carried to her by an angel, and as you know (or if you don't, I will tell you) the dreams carried in the moonbeam basket of the angels are the most beautiful of all. What did the Princess dream? That you shall hear. I CANNOT remember all the names of the King's little daughter, and indeed few can. The Archbishop who christened her says that he can, but he is so great and so deaf a dignitary that no one would think of asking him to prove it. They are all there, twelve pages of them, in the great book where are recorded the baptisms of all the Royal babies, so that you can look for yourself if none of the ones I can remember – Angelica Mary Delphine Violet Candida Pamelia Petronella Victoire Veronica Monica Anastasia Yvonne – happen to please you. It was the fifth birthday of the little Princess, and there were to be great celebrations in her honor. 
Fireworks would blossom in the night sky, and in the gardens lanterns were hung like bubbles of colored light from white rose tree to red, while the great fountains would turn from pink to mauve, from mauve to azure, to amber, and to green, as they flung up slender stems and great spreading lacy fronds of water. Every one from the King down to the smallest kitchen-maid had new clothes for the occasion, and the Chief Cook had created a birthday cake iced with fairy grottoes and gardens of spun sugar, so huge and so heavy that the Princess's ten pages in their new sky-blue and silver liveries, staggered under the weight of it. The little Princess had a new gown of white satin, sewn so thickly with pearls that it was perfectly stiff, and stood as well without her as when she was inside it. It was standing by her bedside when the bells of the city awoke her on her birthday morning, together with her silver bath shaped like a great shell, and her nine lace petticoats, and her hoops to go over the petticoats, and her little white slippers on a cushion of cloth-of-silver, and her whalebone stays, and her cobweb stockings, and her ten Ladies-In-Waiting, Grand Duchesses every one. When she opened her blue eyes they all swept her the deepest curtsies, their skirts of bright brocade billowing up about them, and said together: "Long Life and Happiness to Your Serene Highness!" and then the first Grand Duchess popped her out of bed and into her bath, where she got a great deal of soap in the Princess's eyes while she conversed in a most respectful and edifying manner. The second Grand Duchess, who was Lady-In-Waiting-In-Charge-Of-The-Imperial-Towel, was even more respectful, and nearly rubbed the Princess's tiny button of a nose entirely off her face. The third Grand Duchess brushed and combed the little duck tails of yellow silk that covered the Royal head; and oh, how she did pull! The fourth Grand Duchess was Lady-In-Waiting-In-Charge-Of-The-Imperial-Shift, and as she was rather old and slow, although extremely noble, the Princess grew cold indeed before the shift covered up her little pink body. The fifth Grand Duchess put on the rigid stays. The sixth put on the stockings and slippers. The seventh was very important and gave herself airs, for the nine lace petticoats were her concern. The eighth Grand Duchess was Lady-In-Waiting-In-Charge-Of-The-Imperial-Hoops. The ninth put on the little Princess the dress of satin and pearls, that glowed softly like moonlit drops of water. And the tenth Grand Duchess, the oldest and ugliest and noblest and crossest and most respectful of them all, placed on the yellow head the little frosty crown of diamonds. Then the Princess's Father Confessor, a very noble Prince of the Church, dressed in violet from top to toe, came in between two little boys in lace, and said a long prayer in Latin. It was so long that, I am sorry to have to tell you, right in the middle the Princess yawned, so of course another long prayer had to be said to ask Heaven to overlook such shocking wickedness on the part of Her Highness. Then the Chief-Steward-In-Attendance-On-The-Princess brought her breakfast – bread and milk in a silver porringer. The little Princess had hoped for strawberries, as it was her birthday, but the Chief Gardener was saving every strawberry in the Royal gardens for the great Birthday Banquet that was to be held that evening. Then the little Princess went to say good morning to her Mother and Father, and this is the way she went. 
First came two heralds in forest green, blowing on silver trumpets. Then came the Father Confessor and his little lace-covered boys. Then came the Ladies-In-Waiting in their bright brocades, with feathers in their powdered hair, and after each lady came a little black page to carry her handkerchief on a satin cushion. The ten pages of the Princess were next, and after them came the Royal Baby's Own Regiment of Dragoons in white and scarlet. And last came four gigantic blacks wearing white loin cloths and enormous turbans of flamingo pink, and carrying a great canopy of cloth-of-silver fringed with pearls, and under this, very tiny, and looking, in her spreading gown, like a little white hollyhock out for a walk, came the Princess. After she had curtsied, and kissed the hands of her Royal parents, her Father gave her a rope of milk-white pearls and her Mother gave her a ruby as big as a pigeon's egg, both of which were instantly locked up in the Royal treasury. They then bestowed upon her, in addition to her other titles, that of Grand Duchess of Pinchpinchowitz, which took so long to do that when she had said thank you it was time for lunch, which was just the same as breakfast, except that this time the porringer was gold. After lunch the Prime Minister read the Princess an illuminated Birthday Greeting from her loyal subjects, which ran along so that the Ladies-In-Waiting nearly yawned their heads off behind their painted fans, and the Princess had a nice little nap, and dreamed that there would be strawberries for supper. But instead there was bread and milk in a porringer covered with turquoises and moonstones. Then, as the younger Ladies-In-Waiting were thinking of the Gentlemen-Of-The-Court who would be waiting for them among the rose trees and yew hedges, to watch the colored water of the fountains and listen to the harps and flutes, and as the older Ladies-In-Waiting were thinking of comfortable seats out of a draught in the State Ball Room, and having the choicest morsels of roasted peacock and larks' tongue pie and frozen nectarines, they popped the Princess into bed pretty promptly – indeed, an hour earlier than usual – and went off to celebrate her birthday. The room in which the little Princess lay was as big as a church, and the great bed was as big as a chapel. Four carved posts as tall as palm trees in a tropic jungle, held a canopy of needlework where hunters rode and hounds gave chase and deer fled through dark forests. Below this lay the broad smooth expanse of silken sheet and counterpane, and in the midst, as little and alone as a bird in an empty sky, lay the King's little daughter. One large tear rolled down her round pink cheek, and then another. The long dull day had tired her, and the great dim room frightened her, and she wanted to see the fireworks she had heard her pages whispering about. She sat up among her lace pillows, and her tears went splash, splash, on the embroidered flowers and leaves of her coverlet. One of the youngest angels happened to be leaning over the parapet of Paradise when the Princess began to cry, and he took in the situation instantly, and hurried off to his Heavenly playmates to tell them about it. "It is her birthday," he said, "and no one has given her as much as a red apple or a white rose – only silly old rubies and pearls that she wasn't even allowed to play marbles with! And now they have left her to weep in the dark while they dance and feast!
I shall go down to her and sit by her bed till her tears are dry, and take her a white dream as a gift." "Oh, let me send a dream too!" cried another angel. "And let me!" "And let me!" So that by the time the little angel was ready to start to earth there were seven white dreams to be taken as birthday gifts from Heaven, and he had to weave a basket of moonbeams to carry them in. That night the Princess dreamed that she was a daisy in a field, dancing delicately in the wind among other daisies as thick as the stars in the Milky Way. Feathery grasses danced with them, and yellow butterflies danced above, and the larks in the sky flung down cascades of lovely notes that scattered like spray on the joyous wind. Some poor little girls were playing in the field. Their feet were bare and their faded frocks were torn, but they danced and sang too. There came a rumbling like thunder, and through a gap in the hawthorn hedge the children and the daisies saw the King's little daughter driven past in her great scarlet coach drawn by eight dappled horses. They could see the little Princess sitting up very straight with her crinoline puffing about her and her crown on her head, and after she had passed all the children played that they were princesses, making daisy crowns for their heads, and hoops of brier boughs to hold out their limp little petticoats. The next day the Princess looked in vain for a daisy as she took her morning constitutional in the Royal gardens. There were roses and lilies, blue irises, and striped red and yellow carnations tied to stakes, all stiff and straight. "Hold up your head, Serene Highness!" snapped one of the Ladies-In-Waiting, who had had too many cherry tarts at too late an hour the night before. But daisies danced in the Princess's heart.

The next night the Princess dreamed that she was a little white cloud afloat in the bright blue sky. She floated over the blue sea and the white sand, and over black forests of whispering pines, and over a land where fields of tulips bloomed for miles, in squares of lovely colors, delicate rose and mauve and purple, coppery pink and creamy yellow, with canals running through them like strips of old, dark looking-glass. She floated over rye fields turning silver in the wind, and over nuns at work in their walled gardens, and finally over a great grim palace where a King's little daughter lived. "I would rather be free and afloat in the sky," thought the small white cloud.

When she took the air the next day, she looked up to see if any white clouds were in the sky. "Her Highness is growing very proud," said the Ladies-In-Waiting. "She holds her nose up in the air as a King's daughter should." On the third night, the Princess dreamed she was a little lamb skipping and nibbling the new green grass in a meadow where hundreds of lilies of the valley were in bloom. They were still wet and sparkling with rain, but now the sun shone and a beautiful rainbow arched above the meadow and the lilies of the valley and the happy little lamb. Through the rest of her life the gentleness of the lamb lay in the heart of the Princess.

The next night she dreamed that she was a white butterfly drifting with other butterflies among the tree ferns and orchids of the jungle, gentle and safe from harm, although serpents lay among the branches of the trees and lions and tigers roamed through the green shadows. A white butterfly flew in at her window the next day. "A moth! A moth!" cried the Ladies-In-Waiting.
"Camphor and boughs of cedar must be procured instantly, or the dreadful creature will eat up Her Highness's ermine robes!" But the little Princess knew better than that.   On the fifth night she dreamed that she was a tiny white egg lying in a nest that a humming bird had hung to a spray of fern by a rope of twisted spider's web. The nest was softly and warmly lined with silky down, and above her was the soft warmth of the mother bird's breast. On the sixth night she was a snowflake. It was Christmas night, and the towns and villages were gay. Rosy light poured from every window, blurred by the falling snow, and the air was full of the sound of bells. High up on the mountain was a lonely wayside shrine with carved and painted wooden figures of the Mother and Her Child whose Birthday it was. There were no bells there, nor yellow candle light, but only snow and dark evergreen trees. The snowflake, whirling and dancing down from the sky, a tiny frosty star, gave its life as a birthday gift to the Holy Child, lying for its little moment in His outstretched hand. The angel was distressed to find, on the seventh night, that the seventh dream had slipped through a hole in the moonbeam basket and was lost. Careless little angel! But it really did not matter, for instead of a dream, he showed himself to the Princess. And she liked that the best of all, for she had never had any one to play with before, and there is no playmate equal to an angel. But the seventh dream is still drifting about the world – I wonder where? Perhaps it will be upon my pillow to-night – perhaps upon yours. Who knows? CRACK! went the Driver's whip, but it did not hurt the galloping misty horses, for it was only a ribbon of rainbow that he liked to use because both he and his horses thought it so pretty. And away went the great Coach, over the forests and over the seas, over the cities and plains, to a country where the sea thrusts long silver fingers into the land, where mountains are white with snow at the same time that the meadows are bright with wild flowers, and where in summer the sun never sets, and in winter it never rises. And here the Dream Coach drew up beside a cottage where a lonely little Norwegian boy was falling asleep. "Come, Goran!" called the Driver. "Come, climb into the Coach and find the dream I have brought for you!" Who was Goran? What dream did he find? That you shall hear. LITTLE Goran and his grandmother lived in a tiny house in Norway, high above the deep waters of a fjord. When Goran was a baby they used to tie one end of a rope around his waist and the other to the door, so that if he toddled over the edge he could be hauled back like a fish on a line. But now he was no longer a baby, but a big boy, six years old, and he tried to take care of his grandmother as a big boy should. It was a lovely spot in summer, when the waterfalls went pouring down milk-white into the green fjord, sending up so much spray that they looked as if they were steaming hot; when rainbows hung in the sky; when the small steep meadows were bright with wild flowers, and even the sod roof of the cottage was like a little wild garden of harebells and pansies and strawberries that Goran gathered for breakfast sometimes. 
He was happy all day then, fishing in the fjord, making a little cart for Nanna, the goat, to pull, trying to teach Gustava, the hen, to sing, putting on his fingers the pink and purple hats that he picked from the tall spires of wild foxglove and monkshood, and making them dance and bow, and listening to the loud music of the waterfalls after rain. And in the evening after supper Goran's grandmother would tell him splendid stories while they sat together in the doorway making straw beehives, sewing the rounds of straw together with split blackberry briers. The sun would shine on the straw and make it look so yellow and glistening that Goran would pretend he was making a golden beehive for the Queen Bee's palace. For where Goran lived the sun never sets at all in the middle of summer, and it is bright daylight not only all day, but all night as well. You and I would never have known when to go to bed, but Goran and his grandmother were used to it, and even Gustava, the hen, knew enough to put her head under her wing and make her own dark night. But with winter, changes came. The flowers slept under the earth until spring's call should wake them, and yawning and stretching, s-t-r-e-t-c-h-i-n-g, they should stretch up into the air and sunlight. The waterfalls no longer flung up clouds of spray like smoke, but built roofs of ice over themselves. And, strangest of all, the winter darkness came, so that the days were like the nights, and you and I would never have known when to get up. "I must go to the village for our winter supplies before the snow falls and cuts us off," his grandmother said to Goran one day. "Neighbor Skylstad has offered me a seat in his rowboat to-morrow, and will bring me back the next day. You won't be afraid to stay here alone, will you, Goran?" "No, Grandmother," said Goran. He pretended to be tremendously interested in poking his finger into the earth in a geranium pot, so that his grandmother shouldn't see that his eyes were full of tears and his lower lip was trembling. For to tell you the truth he was frightened. The little house was so far from any other house, and then Goran had never spent a night alone. Last year when the winter's supplies were bought, he had gone to the village with his grandfather, and he had told Nanna and Gustava and Mejau, the cat, all about what a wonderful place it was, a thousand times over; the warm shop, with its great cheeses in wooden boxes painted with bright birds and flowers, and its glowing stove, as tall and slim as a proud lady in a black dress, with a wreath of iron ferns upon her head; the other children who had let him play with them while grandfather exchanged the socks and mittens knitted by grandmother for potatoes and candles. And they had slept at the inn under a feather bed so heavy that you would have thought by morning they would have been pressed as flat as the flowers in grandmother's big Bible. But they weren't! They got up just as round as ever, and had a wonderful breakfast of dark grayish-brown goats'-milk cheese, cold herring, and stewed bilberries. Grandfather had gone to Heaven since then, and Goran wondered if he could possibly be finding it as delightful as the village. How he did want to go this time! But of course he knew that some one must stay behind to feed Nanna and Gustava and Mejau, to tend the fire and water the geraniums and wind the clock. So he said as bravely as he could: "I'll take care of everything, Grandmother."   Soon after his grandmother left, the snow began to fall. 
How that frightened Goran! Suppose it snowed so hard that she could never get back to him! For when winter really began, the little house was often up to its chimney in snow, and they could get to no one, and no one could get to them. How poor little Goran's heart began to hammer at the thought! He fell to work to make himself forget the snow. First, seizing a broom made of a bundle of twigs, he swept the hard earth floor, which in summer had so pretty a carpet of green leaves, strewn fresh every day by Goran and his grandmother. Then he poured some water on the geraniums in the window, only spilling a little on himself. Then he stroked Mejau, who was purring loudly in front of the fire; and all this made him feel much better. "Time for dinner, Goran!" said the old clock on the wall. At least it said: "Ding! Ding! Ding! Ding! Ding! Ding! Ding! Ding! Ding! Ding! Ding! Ding!" which meant the same thing. So Goran ate the goats'-milk cheese and black bread that his grandmother had left for him; and then, and not before, he summoned up enough courage to look out to see if the snow was still falling. It was snowing harder than ever, and already everything had a deep fluffy covering. Oh, would his grandmother ever be able to get back to him? But he must be brave, and not cry, for he was six years old. He said a little prayer, as his grandmother had taught him to do whenever he was frightened or unhappy, and his heavy heart grew lighter. "I'll make a snowman," Goran decided. Perhaps then the time would seem shorter. Grandfather and he had made a splendid snowman after the first snowfall last winter. It was not late enough in the year to have the day as dark as night. It was only as dark as a deep winter twilight, and the white snow seemed to give out a light of its own for Goran to work by. First he found an old broomstick and thrust it into the snow so that it stood upright. Then he pushed the heavy wet snow around it, patting on here, scooping out there, until there was a body to hold the big snowball he rolled for the head. A bent twig pressed in made a pleasant smile, and for eyes Goran ran indoors and took from the little box that held his treasures two marbles of sky-blue glass that his grandfather had given him once for his birthday. What a beautiful snowman! With his sky-blue eyes he gazed through the falling snow at little Goran.   "Ding! Ding! Ding! Ding! Ding!" called the old clock, and that was the same as saying: "Time for supper, Goran!" The fire lit up the room with a warm glow, painted the curtains crimson, and made wavering gigantic shadows on the walls. The water bubbled in the pot, and the boiling potatoes knocked against the lid. "Prr-prrr!" said Mejau, blinking in front of the blaze, and the old clock answered: "Tock! Tick! Tock!" Goran had given their supper to Nanna and Gustava and Mejau, and had taken one good-night look at his snowman. Now he put his bowl of boiled potatoes on the table in front of the fire, and pulled up his chair. Lying on the floor where she had fallen from his box when he was getting his snowman's blue eyes was a playing card, the Queen of Clubs. His grandfather had found it lying in the road in the village, and had brought it home as a present for Goran. The little boy thought the Queen was very splendid, with her crown and her veil, and her red dress trimmed with bands of blue and leaves and stars and rising suns of yellow. In one hand she held on high a little yellow flower. 
Now he picked her up and put her on a chair beside him, pretending the Queen had come for supper to keep him from being lonely. Each mouthful of potato he first offered her, with great politeness, but the delicate lady only gazed off into space. Goran's supper made his insides feel as if a soft blanket had been tucked cozily about them, and he was warm and sleepy. "Was there anything else Grandmother told me to do before I went to bed?" he murmured. "Tick! Tock! Yes, there was," the Clock replied. "She told you to wind me up. Climb on a chair and do it carefully. Don't shake me. I can't stand that, for I'm not as young as I used to be." "And I want a drink!" cried the youngest geranium, who was little, and had been hidden by the bigger pots when Goran watered them. Knock, knock, knock! What a knocking at the door! Goran ran to open it, and the firelight fell on Nanna the Goat and Gustava the Hen against a background of whirling snow. Nanna was wearing Grandmother's quilted jacket – where in the world had she found that? And Gustava had wrapped Goran's muffler about herself and the little basket she carried on her wing. "Good evening!" began Nanna, rather timidly for her. "May Gustava and I come in and sit by the fire? We thought you might be lonely, and then it is so cold in the shed. I did have a muffler like Gustava's, but I absent-mindedly ate it. I'm growing very absent-minded. We've come with an important message for you, but I can't remember what it is. Can you, Gustava?" "Cluck! Clu-uck! No, I can't. But I've brought my beautiful child to call on you," said Gustava; and she lifted her wing and showed Goran the brown egg in her basket. "Shut the door! Shut the door!" several Geraniums called indignantly. "We are very delicate, and we shall catch our deaths of cold!" So in came Nanna and Gustava and Gustava's Egg, and Goran shut the door. "Present my subjects!" commanded the Queen of Clubs, and Goran saw that she was no longer a little card, but a lady as big as his grandmother. In front she still wore her blue and red and yellow dress, but in back she was all blue, every inch of her, with a pattern of gilt stars, and when she turned sideways she seemed to vanish, for she was only as thick as cardboard. But she was so proud and grand that Goran wished he had on his Sunday suit, with the long black trousers and the short black jacket with its big silver buttons, the waistcoat all covered with needlework flowers, and the raspberry pink neckerchief. "This is Nanna, our Goat, your Majesty," he said. "Goat, you may kiss my hand," said the Queen. "I don't know whether I want to," replied rude Nanna, who had never been presented to a Queen before, and didn't know the proper way to behave. "Mercy on us! What manners!" cried the Geraniums, blushing deep red that the Queen should be spoken to in that manner, in what they thought of as their house. "But I wouldn't mind eating your yellow flower," continued Nanna. "I like to eat flowers." And she looked at the Geraniums, who nearly fainted. "Your turn next," said the Queen to Gustava. She had heard gentlemen say that so often when they were playing Skat with her and her companions that she always repeated it when she could think of nothing else to say. "Squawk! Cluck!" cried Gustava. "Would your Majesty like to see my beautiful child?" and she showed the Queen her Egg. "Just look, your Majesty! Have you ever seen anything more lovely? Such a pale brown color! Such an innocent expression! Perhaps your Majesty is also a mother?" "Tick! Tock! 
Don't forget to wind me!" said the old Clock. "Gustava Hen talks too much," the fat Teapot in the corner cupboard told her daughters the Teacups. "When the Queen speaks to you, just say 'Yes, your Majesty,' and 'No, your Majesty,' and I dare say she will take you all to Court and find you handsome husbands among the Royal Coffeecups." "Your Majesty should see my beautiful home," went on Gustava. "A nest of pure gold!" (She thought it was gold, but it was really yellow straw.) "Just like my throne," replied the Queen. "Speaking of beautiful homes, you should see my Palace! There are fifty-three rooms!" (She said this because it was the highest number she knew, for there are fifty-three cards in the pack, counting the Joker who keeps all the cards amused when they are shut up in their box. And she had seen a room in the Palace, because she had been used in a game of Skat there, once in her early youth. But that was long, long ago.) "My throne and the King's throne are pure gold, just like your nest, my good Gustava. And the walls are painted red and white, in swirls, like strawberries and cream. The stove has such a tall slender figure, and wears a golden crown. And then, just imagine, all the lamps are dripping with icicles at the same time that the floor is covered with blooming roses!" (For that is how she thought of the glass lusters on the lamps and the carpet on the floor.) "Icicles! Ice! Freezing! That reminds me of our important message!" cried Nanna. "Your Snowman, Goran. He looks so dreadfully cold out there, we were afraid he would perish." "Oh, yes! How could we have forgotten for so long! Cluck! Cluck! Cluck! He will certainly be frozen to death unless something is done quickly!" "Do you mean to tell me that any one is out of doors on such a night as this?" questioned the Queen. "Have him brought in at once! Your turn next!" And she looked so severely at Goran that he felt his ears getting red. So Goran and Nanna brought the Snowman in, while the Queen gave orders from the doorway, Gustava sat on her darling Egg to keep it warm, Mejau walked away with his tail as big as a bottle brush, and the Geraniums cried in chorus: "Shut the door! Shut the door! We shall all catch cold!"

The Queen and the Snowman.

"Poor thing! How pale he is!" exclaimed the Queen. "And how dreadfully cold! Put him in a chair by the fire!" The Snowman looked out of wondering sky-blue glass eyes, but said never a word, for he was very shy; and as he had only been born that afternoon, everything in the world was new to him. "I want a drink!" cried the youngest Geranium; and: "Tick! Tock! Tick! Don't forget to wind me!" the old Clock repeated; but no one paid any attention to them. "Your turn next!" said the Queen to Nanna. "Make a blaze, for this poor creature is nearly frozen." So with a clatter of tiny hoofs, Nanna built up the fire, only pausing to eat a twig or two, until even Mejau was nearly roasted. But the poor Snowman was worse instead of better. His twig mouth still smiled bravely, and his blue eyes remained wide open, but tears seemed to pour down his cheeks, and he was growing thinner before their very eyes. "If you please," he said in a timid voice, "I'm — " "Give him a drink of something hot," advised the fat Teapot, and that reminded the youngest Geranium, who began screaming: "I want a drink! I want a drink! I want a drink!" "I'll be delighted to oblige with some nice warm milk," Nanna offered, so Goran milked a bowlful.
But the Snowman could not drink it, and the tears ran faster and faster down his face. "If you please – " he began again, faintly. "We must put him to bed," the Queen interrupted, with a stern look at Gustava who was sitting on her darling Egg in the center of Grandmother's feather bed. "Your turn next!" Grandmother's bed was built into the wall, like a cupboard. It was all carved with harebells and pine-cones and kobolds and nixies. The kobolds are the elves who live in the mountain forests, and the nixies are water fairies who sit under the waterfalls playing upon their harps and making the sweetest music in the world. There was a big white feather bed on Grandmother's bed, and a big red feather bed on top of that, and two fat pillows stuffed with goose feathers. And above all this was a little shelf with two smaller feather beds and two smaller pillows, and that was Goran's bed. On dreadfully cold nights they pulled two little wooden doors shut, and there they were, quite warm and cozy – even quite stuffy, you and I might think! The doors of the bed were painted with pink tulips and red hearts, and Grandmother said it made her feel quite young and warm to look at them, and Goran said it made him feel quite young and warm too. And Gustava the Hen thought they were beautiful, so there she sat on her darling Egg, and as she could never think of more than one thing at a time, she had forgotten all about the Snowman, and was happily clucking this song to her Egg:

"Make a wreath, I beg,
For my darling Egg!

"Flowers blue as cloudless sky
When the summer Sun is high,
Harebells, little cups of blue,
Holding drops of crystal dew.

"Rain-wet pinks as sweet as spice,
Lilies white as snow and ice,
Lemon-colored lilies, too,
And the flax-flower's lovely blue.

"Strawberries sweet and red and small,
And the purple monkshood tall;
Let the moon-white daisies shine,
Bring the coral columbine.

"Weave the shining buttercup,
Bind the sweet wild roses up;
Poppies, red as coals of fire,
And the speckled foxglove spire.

"And the iris blue that gleams
Knee-deep in the foamy streams.
Bring the spruce cones brown and long."
(Thus ran on Gustava's song).
"Make a wreath, I beg,
For my darling Egg!"

"Make a wreath, I beg,
For Gustava's Egg,"
broke in Nanna the Goat impatiently:
"Why leave the Geraniums out?
Add the Teapot's broken spout,
Cheese, and brown potatoes, too;
Anything at all will do.

"Feathers from the feather bed,
Goran's mittens, warm and red,
And the flower the Queen holds up,
And the cracked blue china cup.

"But the Queen has said
Kindly leave that bed!"

So Gustava had to flop off the bed with a squawk, while Goran handed her her Egg, and then they put the poor Snowman, what was left of him, into Grandmother's bed, and pulled the eiderdown quilts over him. "If you please," said the Snowman in a feeble whisper, "oh, if you please, I'm — " "I know this is the right thing to do, because it is the way we always treat Snowmen at the Palace," broke in the Queen. To tell you the truth, she had never seen a Snowman in her life before, but she would never admit that she didn't know all about everything. The Snowman looked at them with despairing sky-blue eyes, while his tears poured down, soaking Grandmother's pillow. He had tried desperately to tell them something, but they would none of them listen. Suddenly Goran knew what it was. "I believe we're melting him," said Goran. "He needs air." "I need air," said the Snowman, his face shining with hope. "Air?" said the Queen. "Nonsense!
He's had too much air. He needs a hot brick at his feet!" "I need air," faltered the Snowman. "Air? Nonsense!" cried the fat Teapot and all her Teacup daughters, hoping the Queen would hear, and take them back to the Palace with her. "I need air," sighed the Snowman, and now he looked discouraged. "Air? Brrr-rrr!" And Mejau squeezed himself under the chest of drawers, much annoyed with every one. "I need air," breathed the Snowman, looking at Goran with imploring eyes. "Air? Mercy on us, that will mean opening the door again!" And the Geraniums shivered in every leaf and petal. But Goran had helped the poor Snowman, now nearly melted away, out of bed, and was leading him to the door. "I need – " whispered the Snowman, and his voice was so faint that Goran could hardly hear it. And there, because he was melting away so fast, his mouth fell out and lay on the floor, just a little bent twig. Poor Snowman! Oh, poor Snowman! He could not make a sound now – he could only look at them, so sadly, so sadly! But a little Mouse peeping with bright eyes out of its hole saw what had happened, and, since Mejau was nowhere in sight, ventured to squeak: "Oh, please, Ma'am! Oh, please, Sir! The poor gentleman's mouth is lying on the floor!" So the Queen picked it up and pressed it into place again, but by mistake she put it on wrong side up, so that instead of a pleasant smile the Snowman had the crossest mouth in the world, pulled far down at each corner. And what a change it made in him! Before, his voice had been a gentle whisper – now it was an angry bellow that made the Teacups shiver on their shelf and the Geraniums turn quite pale, and the little Mouse dive back with a squeak into her hole, thinking to herself: "Well, I never!" "Here, you!" shouted the Snowman. "Get me out of here, and get me out quick. Hop along, my girl, and open the door! Your turn next!" (This was to the astonished Queen.) "Now, then, carry me out!" "Tick! Tock! I'm feeling dreadfully run down," said the old Clock.

"Tick! Tock!
Wind the Clock!
Tock! Tick!
Wind it quick!

"Tick – Tock" and he stopped talking.

The astonished Queen meekly threw open the door, and Goran carried the Snowman into the snowy darkness. Brr-rr! It was bitter cold! "Now bring some snow and build me up," the Snowman ordered. "Leave the door open so that you can see – don't dawdle!" The firelight from the open door shone on his blue glass eyes, and made two angry red sparks gleam in them. Goran and the Queen, Gustava and Nanna, scooped up handfuls and hoof-fuls and wing-fuls of newly fallen snow, and patted it on to the Snowman until he stuck out his chest more proudly than he had done in the first place, and he was so fat that he looked as if he were wearing six white fur coats, one on top of another. And all the time when he wasn't frightening the Queen half out of her wits by shouting: "Your turn next!" he kept muttering away to himself: "Melt me over the fire! Smother me in a feather bed! Put a hot brick at my feet!" It was when Goran was patting a little fresh snow on the Snowman's nose that he accidentally knocked his twig mouth off again. And this time it was put back right side up, so that the Snowman was as smiling as he had been in the beginning. He stopped roaring. He stopped muttering. Did the fire die down? For the red sparks no longer gleamed in his gentle sky-blue eyes. "Oh, thank you so much!" said the Snowman. "You have been so kind to me! And I know that you were trying to help me in the house. Forgive me for having been so cross!
Will you please forgive me?" And the Snowman looked so anxiously at Goran and the Queen and Nanna and Gustava that they all answered: "Yes, yes, of course we will! And will you please forgive us for nearly melting you?" "And now go in, for this lovely air is cold for you, I know." "Oh, it is bitter cold!" agreed the Queen. "Brr-rrr! It is bitter cold."   Brr-rr! It was bitter cold! Goran rubbed his eyes. Only a few red embers glowed in the fireplace. How stiff he was! He must have slept in his chair all night, but he could not tell how late it was, for the Clock had stopped. He had forgotten to wind it, he remembered now. There sat the Queen in her chair, but she was just a little card again. Then he remembered the Snowman. He ran out of doors. There the Snowman stood, as roly-poly as ever, with his twig mouth smiling and his sky-blue eyes wide open. He said nothing, but Goran felt they two understood each other. What a night it had been! Could it all have been a dream? But now the night was over, and the storm was over; and, best of all, through the dim twilight he saw on the fjord far below him Neighbor Skylstad's rowboat, and seated in it, wrapped in her red shawl, his own dear grandmother coming home to him. THE Driver of the Dream Coach paused as he turned over the pages of the great white and gold book in which are kept the names of all those who have ridden or are to be given rides in the brightly painted Coach. "I see," he said, addressing the little angels who helped him keep these records, "I see the name of the Little Chinese Emperor. And there is a cross opposite his name. Has he been naughty?" he asked. "Has he been picking the sacred lotus flowers of his honorable ancestors? Has he – ?" "Oh, please," interrupted one of the smallest angels, "I put that cross there to remind me to tell you something about the Little Emperor. You see he hasn't been naughty — not exactly – but he's made a mistake. He doesn't understand," said the smallest angel, with his eyes round and serious. "And can I help the Little Emperor understand?" asked the Driver of the Dream Coach. "Of course you can!" cried the smallest angel, beaming brightly. "It's this way. The Little Chinese Emperor has a friend of mine fastened up in a cage, where he is very sad – " "An angel in a cage?" asked the Driver. "I never heard of such a thing!" "Well, not exactly an angel, a – " But what it was, and how the Driver helped the little angel's friend – That you shall hear. THE Little Emperor was dreadfully bored. He yawned so that his round little face, as round and yellow as a full moon, grew quite long, and his nose wrinkled up into soft yellow creases, like cream that is being pushed back by the skimmer from the top of a bowl of milk. His slanting black eyes shut up tight, and when they opened they were so full of tears that they sparkled like blackthorn berries wet with rain. "Oh, dear! Oh, dear!" cried his aunt, Princess Autumn Cloud. "The Little Emperor is bored! What shall we do, oh, what shall we do to amuse him? For when he is bored, he very soon grows naughty, and when he is naughty – oh, dear!" And she began to cry. But then she was always crying. When she was born her father and mother named her Bright Yellow Butterfly Floating In The Sunshine, but she cried so much that by the time she was five years old they saw that name wouldn't do at all, and changed it to Autumn Cloud Pouring Down Rain Upon The Sad Gray Sea. She cried about anything. 
If her Lady-In-Waiting brought her a bowl of tea with honeysuckle blossoms in it, she would cry because they weren't jasmine flowers. If they were jasmine, she would cry because they weren't honeysuckle. When the peach trees bloomed she would cry because that meant that spring had come, and that meant summer would soon follow, and then autumn, and then the cold winter. "And oh, how cold the wind will be then, and how fast the snow will fall!" sobbed Princess Autumn Cloud, looking through her tears at the bright pink peach blossoms. She cried because her sea-green jacket was embroidered with storks instead of bamboo trees. She cried because they brought her shark-fin soup in a bowl of green lacquer with a gold dragon twisting around it, instead of a red lacquer bowl with a silver dragon. She cried if the weather was hot. She cried if the weather was cold. And hardest of all she cried whenever the Little Emperor was naughty. Whenever she began to cry a Lady-In-Waiting knelt in front of her and caught her tears in a golden bowl, for it never would have done to let them run down her cheeks, like an ordinary person's tears; they would have washed such deep roads through the thick white powder on her face. Every morning Princess Autumn Cloud (and, indeed, every lady in the Court of the Little Emperor) covered her face with honey in which white jasmine petals had been crushed to make it smell sweet, then when she was all sticky she put on powder until her face was as white as an egg. Then she painted on very surprised-looking black eyebrows and a little mouth as red and round as a blob of sealing wax. It looked just as if her mouth were an important letter that had to be sealed up to keep all sorts of secrets from escaping. Princess Autumn Cloud and Princess Gentle Breeze and Lady Gleaming Dragonfly and Lady Moon Seen Through The Mist and all the rest of them would have thought it as shocking to appear without paint and powder covering up their faces as they would have thought it to appear without any clothes. So Princess Autumn Cloud leaned over as if she were making a deep bow, and let her tears fall in a golden bowl, and then, because they were Royal tears, they were poured into beautiful porcelain bottles that were sealed up and placed, rows and rows and rows of them, in a room all hung with silk curtains embroidered with weeping willows. "Oh, what shall be done to amuse the Little Emperor?" sobbed Princess Autumn Cloud. "Perhaps he would like some music!" And she clapped her hands, with their long, long fingernails covered with gold fingernail protectors. So four fat musicians, dressed in vermilion silk and wearing big horn-rimmed spectacles to show how wise they were, came and kowtowed to the Little Emperor. That is, they got down on their knees, which was hard for them to do because they were so fat, and then, all together, knocked their heads on the floor nine times apiece to show their deep respect. Then one beat on a drum, boom boom, and one clashed cymbals of brass together, crash bang, and one rang little bells of green and milk-white jade, and the oldest and fattest beat with mallets up and down the back of a musical instrument carved and painted to look like a life-sized tiger with glaring eyes and sharp white teeth. The Little Emperor sprawled back in his big dragon throne under the softly waving peacock feather fans, stretched out his arms and legs, and yawned harder than ever. "Oh! Oh! Oh! What shall be done to amuse him?"
wailed Princess Autumn Cloud, bursting into tears afresh. "Can no one suggest anything?" And although the Mandarins and the Court Ladies thought to themselves that what they would really like to suggest for such a spoiled little boy would be to send him to bed without his supper, they none of them dared say so, but tried to look very solemn and sympathetic. "Would the Little Old Ancestor enjoy some sweetmeats?" suggested Lady Lotus Blossom. "Old Ancestor" is what you call the Emperor if you are properly brought up, and polite, and Chinese. So Gentlemen-In-Waiting came and kowtowed and offered the Little Emperor lacquered boxes of crystallized ginger, of sugared sunflower seeds, and of litchi nuts. But do you think he was interested? Not at all. He would not even look at them. "The wind is blowing hard. Would it amuse the Little Old Ancestor to watch the kites fly?" asked old Lord Mighty Swishing Dragon's Tail. The Little Emperor didn't know whether it would or not. However, he couldn't be more bored than he was already, so he climbed down from his throne and went out into the windy autumn garden. First marched the musicians, beating on drums to let every one know that the Emperor was coming. Then came the Court Ladies tottering along on their "golden lilies," which is what they call their tiny feet that have been bound up tightly to keep them small ever since the ladies were babies. Then the Mandarins with their long pigtails and their padded silk coats whose big sleeves held fans and tobacco and bags of betel nuts and sheets of pale green and vermilion writing paper. Then Princess Autumn Cloud in a jade green gown embroidered with a hundred lilac butterflies, a lilac jacket, and pale rose-colored trousers tied with lilac ribbons. In her ears, around her arms, and on her fingers were jade and pearls, and her rose-colored shoes were trimmed with tassels of pearls and were so tiny that she could hardly hobble. In her shiny black hair she wore on one side a big peony, the petals made of mother-of-pearl and the leaves of jade. Each petal and leaf was on a fine wire so that when she moved her head they trembled as real flowers do when the wind blows over them. On the other side were two jade butterflies that trembled too. In front of her, walking backward, went her Lady-In-Waiting holding the golden tear bowl, in case the Princess should suddenly begin to cry. And last of all, surrounded by his Gentlemen-In-Waiting, came the Little Emperor, dressed from head to foot in yellow, the Imperial color, so that he looked like a yellow baby duckling. And as he came every one in the Palace and in the Garden had to stop whatever they were doing – gossiping, teasing the Royal monkeys, chewing betel nuts, or sweeping up dead leaves – and kneel down and knock their heads on the ground until he had passed. How the wind was blowing! It sent the willow branches streaming, it wrinkled the lake water and turned the lotus leaves wrong side out, it scattered the petals of the chrysanthemums. It tossed the kites high in the air. How brightly their colors shone against the gray sky! Some were made to look like pink and yellow melons with trailing leaves, some were like warriors in vermilion, some were golden fish, others were black bats, and the biggest one of all was a great blue-green dragon. As for the Little Emperor, he took one look at them and then yawned so hard that they were afraid he would dislocate his jaw. 
A little brown bird the color of a dead leaf had been hopping about on the ground under the chrysanthemums looking for something for its supper, and now suddenly flew up into a willow tree and began to sing.   The Little Emperor clapped his hands, and all his servants dropped on their knees and began to kowtow. "Catch me that little brown bird with the beautiful song!" he said. He stopped yawning, and his eyes grew bright with eagerness. "But, Little Old Ancestor, that is such a plain little bird," said his aunt timidly. "Surely you would rather have a cockatoo as pink as a cloud at dawn, or a pair of lovebirds as green as leaves in spring – " The rude Little Emperor paid not the slightest attention to her, but stamped his foot and shouted: "Catch me that little brown bird!" So his servants chased the poor little fluttering bird with butterfly nets. The wind whipped their bright silk skirts, and their pigtails streamed out behind, and they puffed and panted, for they were most of them very fat. And at last the bird was caught, and put in a cage trimmed with tassels of purple silk and pearls, with drinking cup and seed cup made like the halves of plums from purple amethysts on brown amber twigs with green jade leaves. For a time the Little Emperor was delighted with his new pet, and every day he carried it in its cage when he went for a walk. But it never sang, only beat against the bars of its cage, or huddled on its perch, so presently he grew tired of it, and it was hung up in its cage in a dark corner of one of the Palace rooms, where he soon forgot all about it.   How could the little bird sing? It was sick for the wide blue roads of the air, for wet green rice fields where the coolies stand with bare legs, sky-blue shirts, and bamboo hats as big as umbrellas, for the yellow rivers, and the mountains bright with red lilies. How could it sing in a cage? But sometimes it tried to cry to them: "Let me out! Please, please let me out! I have never done anything to harm you! I am so unhappy I think my heart is breaking! Please let me go free!" "What a sweet song!" everybody would say. "Run and tell the Little Emperor that his bird is singing again." After a while the little bird realized that they did not understand, and it tried no longer, but drooped, dull-eyed and silent, in its cage.   One night the Little Emperor had a dream. Perhaps you won't wonder when I tell you what he had for supper. First he had tea in a bowl of jade as round and white as the moon, heaped up with honeysuckle flowers. Then, in yellow lacquer boxes, sugared seeds, sunflower and lotus flower and watermelon seeds, boiled walnuts, and lotus buds. Then velvety golden peaches and purple plums with a bloom of silver on them. Pork cooked in eleven different ways: chopped, cold, with red beans and with white beans, with bamboo shoots, with onions, and with cherries, with eggs, with mushrooms, with cabbage, and with turnips. Ducks and chickens stuffed with pine needles and roasted. Smoked fish. Shrimps and crabs, fried together. Shark fins. Boiled birds' nests. Porridge of tiny yellow seeds like bird seed. Cakes in the shapes of seashells, fish, dragons, butterflies, and flowers. Chrysanthemum soup, steaming in a yellow bowl with a green dragon twisting around it. Not one other thing did that poor Little Emperor have for his supper!
When he was so full that he couldn't hold anything more, not even one sugared watermelon seed, they took off his silk napkin embroidered with little brown monkeys eating pink and orange persimmons. He was so sleepy that he did not even stamp his feet when they washed his face and hands. Then they took off his red silk gown embroidered with gold dragons and blue clouds and lined with soft gray fur, his yellow silk shirt and his red satin shoes with their thick white soles. But he went to bed in his pale yellow pantaloons, tied around the ankles with rose-colored ribbons. I must tell you about his bed. It was made of brick, and inside of it a small fire was built to keep the Little Emperor warm. On top of this three yellow silk mattresses were placed, then silk sheets, red, yellow, green, blue, and violet, then a coverlet of yellow satin embroidered with stars. Under his head were pillows stuffed with tea leaves; and above him was a canopy of yellow silk, embroidered with a great round moon whose golden rays streamed down the yellow silk curtains drawn around him. He fell asleep, and this is what he dreamed.   The long golden rays seemed to turn into the bars of a cage. Yes, he was in a huge cage! He tried frantically to get out! He beat against the bars! Then he saw what looked like the roots of trees, and brown tree trunks, a grove all around the cage. But the trees moved and stepped about, and, looking up the trunks, instead of leaves he saw feathers, and still farther, sharp beaks, and then bright eyes looking at him. They were birds! What he had thought were the roots of trees were their claws, and the trunks of the trees were their legs. But what enormous birds! They were as big as men, while he was as small as a bird. "Let me out!" he shouted. "Don't you know I am the Emperor, and every one must obey me? Let me out, I say!" "Ah, he is beginning to sing," said one bird to another. "Not a very musical song. Too shrill by far! Take my advice, wring his neck and roast him. He would make a tender, juicy morsel for our supper." "Oh, let me out! Please, please let me out!" cried the poor Little Emperor in terror. "He is singing more sweetly now," remarked one of the birds. "Too loud! Quite ear-splitting!" said a lady bird, fluffing out her breast feathers and lifting her wings to show how sensitive she was. "If he were mine I should pluck him. His little yellow silk trousers would line my nest so softly." "Oh, please, please set me free!" "Really, his song is growing quite charming! But one can't stand listening to it all day." And with a great whir and flap and rustle of wings the birds flew away and left him in his cage, alone. He called for help and threw himself against the bars until he was exhausted. Then bruised, panting, his heart nearly breaking out of his body, he lay on the floor of the cage. Finally, growing hungry and thirsty, he looked in his seed and water cups, drank a little lukewarm water, and ate a dry bread crumb. Now and then birds came and looked at him. Some of them tried to catch his pigtail with their beaks or claws.   Next day the Little Emperor was thoughtful. Could it be, he wondered, that a little bird's nest was as dear to it as his own bed with its rainbow coverlets and its moon and stars was to him? That a little bird liked ripe berries and cold brook water as much as he liked ripe peaches and tea with jasmine flowers?
That a little bird was as frightened when he tried to catch its tail in his fingers as he was when the birds tried to catch his pigtail? And then he thought of how he had felt when the lady bird had wanted his pantaloons to line her nest, and, hot with shame, he remembered his glistening jewel-bright blue cloak made of thousands of kingfishers' feathers. It had made him miserable to think of their taking his clothes, but suppose his clothes grew on him as their feathers did on them? How would he have felt then, hearing the bird say: "I should pluck him. His little silk trousers would line my nest so softly"? He went to bed thinking about his little brown bird, and before he shut his eyes he made up his mind to set it free in the morning.   Then he fell asleep, and once again he dreamed that he was in the golden cage. Whir-rr! One of the great birds flew down by the cage door. With his claw he unfastened it – opened it! Oh, how exciting! The Little Emperor tore out, so afraid he would be stopped and put back in the cage! Oh, how he ran across the room and through the open door! Free! He was free! Tears rushed to his eyes, and his heart felt as if it would burst with happiness. But it was winter. The garden was deep in snow that was falling as if it would never stop. The peaches and plums were gone, and the lotus pond was frozen hard as stone. The Little Emperor had never been out in the snow before except when he was dressed in his warm padded clothes, with one Gentleman-In-Waiting carrying his porcelain stove, and another bringing tea, and a third with cakes in a box of yellow lacquer, and a fourth holding between the snowflakes and the Imperial head a great, moss-green umbrella. So small and helpless in so big and cold a world, what could a little boy find to eat or drink? Where could he warm himself? He ran frantically through the snow. The rose-colored ribbons that tied his pantaloons came untied and trailed behind him, and the cold snow went up his bare legs. Pausing to catch his sobbing breath, he looked up to see the thick snow sliding from a pine tree branch, and jumped aside just in time to keep from being buried beneath it. Then on he plunged again, growing with each step more weak and cold and hungry; stopping now and then to call for help in a quavering voice that grew feebler every time; blinking back the tears that froze on his lashes as he tried to remember that emperors must never cry; then struggling on through the blinding snow, a little boy lost and alone. Then, as it began to grow dark, he saw two great lanterns shining through the snow, coming slowly nearer. Perhaps his aunt and his Chief Gentleman-In-Waiting, Lord Mighty Swishing Dragon's Tail (Lord Dragon Tail, for short) had missed him and had come with lanterns to look for him! He tried to go toward them, to call, but he was too exhausted to move or make a sound. And then, imagine his terror when he realized that the glowing green lights were not lanterns at all, but the eyes of a great crouching animal – a cat! Gathering all his strength for one last desperate effort, he tried to run. But with a leap the cat was after him, and with a paw now rolled into a velvet ball, now unsheathing sharp curved claws, tapped him first on one side, then on the other, nearly let him go, caught him again with one bound, and with a harder blow sent him spinning into stars and darkness.   Some one was shaking him. Was it the cat? 
The Little Emperor opened his eyes and saw the frightened face of Princess Autumn Cloud bending over him, as yellow as a lemon, for she had jumped up out of bed when she heard him cry out in his sleep, and there hadn't been time to put on the honey and the powder, to paint on the surprised black eyebrows or the round red mouth. "Wake up, wake up, Little Old Ancestor!" she was crying as she shook him. "You're having a bad dream!" "Aren't you the cat?" asked the Little Emperor, who wasn't really awake yet. "Certainly not, Little Old Ancestor!" replied his aunt, rather offended. The Little Emperor climbed out of his bed. The room was full of the still white light that comes from snow, and looking out of the window he saw that the plum trees and the cherry trees looked as if they had blossomed in the night, the snow lay so white and light on every twig. Softly the snow fell, deep, deep it lay, and the people who passed by his windows went as silently as though they were shod in white velvet. The Little Emperor thought of his dream, and decided that his little bird might suffer and die if he let it go free before winter was over. But he explained to the bird, and tried to make it happier. "When summer comes, you shall fly away into the sky," he told it. He brought it fruit and green leaves to peck at, talking to it gently. And the little bird seemed to understand. The dull eyes grew brighter; and though it never sang it sometimes chirped as if it were trying to say: "Thank you."   On the first night of summer when the moon lay like a great round pearl in the deep blue sea of the sky, the Little Emperor slept, and dreamed again that the cage door opened for him and let him go free. But oh, what happiness now, happiness almost too great for a little boy to bear. Peonies were in bloom, each petal like a big seashell, and blue butterflies floated over them in the warm sunshine. Half hidden in the grass the Little Emperor found a great purple fruit – a mulberry. How good it was! The dewy spider webs glistened like the great tinsel Bridge to Heaven they built for him on every birthday. How happy he was! How happy! Free and safe! With the sun to warm him and the breeze to cool him; with food tumbling down from Heaven or the mulberry trees, he wasn't sure which, with a crystal clear dewdrop to drink on every blade of grass. How happy he was! The lake was full of great rustling leaves and big pink lotus flowers. Venturing out on one of the leaves, he paddled his feet over its edge in the gently lapping water. Then, climbing into one of the pink blossoms, he lay, so happy, so happy, looking up at the blue-green dragon flies darting overhead, and rocking gently in his rosy boat. No, it was not the lotus flower that rocked him on the water. It was Princess Autumn Cloud who was gently shaking him, and saying: "Wake up, if you please, dear Little Old Ancestor!" And hard as it is to believe, she was really smiling. The Little Emperor had been so good lately, and then it was such a beautiful day! He could not wait until after breakfast to let his little brown bird go free. As soon as he was dressed he ran as fast as he could to the room where the bird cage hung. Pat-a-pat-pat went his little feet in their blue satin shoes, and thud, thud! puff, puff! his fat old Gentlemen-In-Waiting lumbered along behind him. "I've come to set you free!" he whispered, as he carried the cage with its tassels of purple and pearls out into the beautiful day. 
For one minute he wanted to cry, for he had grown to love the little bird. But he remembered again that emperors must not cry. He opened the door of the cage. "Little Old Ancestor's bird has flown away!" cried the Mandarins. "It has flown so high in the sky that we can hardly see it," the Court Ladies answered; and they all wished that the Little Emperor would stop gazing up into the sky at the little dark speck, so that they might go in and have their breakfasts. But the Little Emperor, the empty bird cage in his hand, still looked up. High, high in the sky! And now, really, he could no longer see it. But a thread of song dropped down to him, a silver thread of song, a golden thread of love between the hearts of a little bird and a little boy. "Thank you, oh, thank you, my Little Emperor!" UP into the sky rose the hundred horses and their great Coach, until the roof of the Little Emperor's Palace with its bright yellow tiles looked only as big as a yellow autumn leaf – as a jasmine petal – as nothing at all! And along the Road of Stars they galloped, while notes of music sprayed from the wheels of the Coach, and, dropping to earth, gave the nightingales ideas for beautiful new songs. On through the sky and above the earth until the night was over, and at last, instead of a road, the hundred horses were galloping along a river. All along the river bank tall poplars rustled and whispered in the wind of the Coach's passing, and little waves, stirred up by the horses' hoofs, slapped against the small houses that rose from the water, small pink houses and blue houses and white red-roofed houses, each with its rowboat tied to its steps. White swans and green ducks rocked on the ripples, their feathers gilded by sunshine, for it was bright day now, and the rain that had been pouring down had stopped. It was bright day, and yet no one saw the Dream Coach except a little French boy, whose eyes were falling shut in one little pink cottage. "Philippe! Philippe!" the Driver called. "One last dream is left for you!" What was Philippe's dream? That you shall hear. "HOLD still then, my little monkey!" "But mother," wailed Philippe, "I have the soap in my eye!" "Soap is it, my angel?" asked his mother, lifting his face in her two wet hands. "Oh, but there is really no soap at all to speak about, just a bubble or two of suds. There!" and with the corner of her apron she wiped away the thick white lather around his eyelashes, so that Philippe looked like a little boy made of snow, except for his eyes which were large and brown and filled with tears from the painful smarting. From head to ankles he was covered with a froth of soapsuds, and his feet had stirred the warm water in the bottom of the wooden tub into rainbow-tinted mounds of bubbles which grew and grew and cascaded over the sides with a tiny fizzing sound. "You are giving our young one a very thorough tubbing," remarked Philippe's father. He was sitting under the narrow window of their cottage, cutting the yellow-white sprouts from a bag of potatoes which he intended to plant in the dark of the next moon. "Indeed I am. I shall scrub and rub and polish until he looks like a wax image, or as pink and shining as the inside of the seashell his Uncle Pablôt sent him from Paimpol." Philippe's father held a large brown potato at arm's length, and, regarding it with his head cocked to one side, said: "Very fine! Yes, very fine!" 
"A good size," agreed his wife, looking over her shoulder, while she absently bored into the ear of her long-suffering son with a bit of soapy rag. "Yes – but I was thinking rather of Philippe's Uncle Pablôt. It is he who is very fine, a grand gentleman who carries a gold-headed cane and has traveled far – to the very borders of our beloved France, and even beyond, so I hear." "Oh, very much beyond! He has been in every country in the world, according to the wonderful stories he tells, and the world, Pierre, I understand to be of a tremendous bigness; indeed, if what I am told is the truth, it must be three or four times as big as our own country!" "Is that so?" replied Pierre doubtfully, starting to cut the pallid sprouts again with quick motions of his work-hardened hands. "It may all be the truth, my good wife, but I have always taken the words of Pablôt with a grain of salt; I think, for that matter, that he is a little inclined to blow." "'Blow'?" asked Philippe from his tub. "I thought it was only the wind that could blow." But of course no one answered him, for he was only a little boy, and not expected to understand; instead, his father bent over his bag of potatoes to hide his smile, and his mother remembered that the pot-au-feu (which is a thick soup made of odds and ends and bits and scraps and almost everything you can think of mixed with water in a large pot and left on the fire to bubble sluggishly for many hours) needed stirring right away. "Take care," warned her husband, "that you do not drop soap into the soup from your wet hands, for I know of nothing that gives it a more curious flavor." "Just the same," said Philippe's mother, turning from the hearth, her cheeks flushed rosy red by the bright, hot embers, "just the same, it is a good thing that our little one should be invited to meet such a fine gentleman. It will teach him how to say the most ordinary thing elegantly, and how to carry his head high as if he were a born dandy. Philippe, repeat to your father the little speech you are to say when you meet your uncle." "Good health to you, my dear and illustrated uncle! It gives — " "No, no, my pet, 'my dear and illustrious uncle,' and was there not something that you forgot?" "Yes, Mother. I forgot to make my bow. Shall I make a new beginning?" "Do so." Whereupon Philippe bent nearly double over the edge of the tub, scattering drops of water upon the floor. "Good health to you, my dear and illustrious uncle. It gives me the most great pleasure to have – eugh! soap in my mouth. . . . Ptu! – " "Wait, then, until you are dressed in the new suit I have sewn for you," and his mother, taking an earthen jar of water from the side of the fire where it had been put to warm, poured it over his head, leaving him no longer a snow boy, but a boy made of the shiniest china you can imagine. "Is that pleasant, my brave one?" "It is warm, like rain," said Philippe, lifting his arms above his head. "I will not need another washing for a long, long time, will I, Mother?" Philippe's grandparents lived the distance of twelve fields, a small woods, three stiles, and the width of a brook from his own home. Just how far that is, is hard to say. You see it makes such a difference whom you ask. Ask the swallows and they will tell you airily that it is no distance at all, just a flick of the wing, and you are there. 
But ask the snails who live under the broad leaves of the flowering mulleins, and after pondering a long time, they will tell you that it gives them a headache to think of such a tremendous distance, that it would surely take several lifetimes to travel so far, and as for themselves, they would consider it very foolish to start out on such a dangerous adventure when there were plenty of young lettuces so close at hand! To a small boy of eight, it was quite a long journey, taken alone, particularly when he could not take the short cut by wriggling through the tangled copse for fear of tearing his new suit, or being covered with last year's burrs and barbed seeds of the undergrowth. But he reached his Grandparents' house at last. It was a little house built by the side of a river, actually touching the water on one side, so that you could step out of a door, down a step, and into a rowboat. And there were white swans and yellow-breasted ducks with bronze-green backs swimming in the reflection of the pink walls. On the land side was a poplar tree, very tall and dressed in silvery blue leaves, standing erect like a giant soldier on guard before a toy house. Once Philippe's Grandfather had explained to him how he could tell the time of day by the shadow this tree cast: when it struck across the chimney at the corner of the house, it was time to go into the fields; when it crossed the front door, it was time to enter therein for the midday meal, and when it pointed out toward the fields, that was a signal for Grandmother to ring the great bell that would call the workers home. "And what," Philippe had asked, "do you do, Grandfather, when the sun is under the clouds, and there is no shadow to tell the time?" "Well, then we must needs look at the clock which ticks on the mantelshelf over the fire," Grandfather said with a twinkle of his old, blue eyes, eyes half hidden by the tufts of white eyebrows. Although the day had commenced unusually fine, and the calm, blue sea of sky had been without an island reef or bar of cloud to wreck the golden galleon of the sun, by the time Philippe had been tubbed, scrubbed, dressed in his best, had been rehearsed in his address to his uncle, kissed good-by, and given a little nosegay of pansies and lilies of the valley in a paper twist for his Grandmother, and had crossed the twelve fields and picked his way carefully through the woods to avoid the sharp brambles that reached out after him with long and sinuous arms – by the time all this had come to pass, and Philippe was actually in sight of his grandparents' cottage, it began to rain from a sky as heavily gray as it had been brightly blue before. It started so suddenly that Philippe had to run across the last field to keep the big drops from ruining his new black velvet cap. The inside of the house was very dark, with only two windows, like half-closed eyes, looking out on the world. Through these windows entered shafts of pale, watery light that cut blue paths in the wreaths of wood smoke creeping around the rafters. Pots, pans, and kettles of burnished copper hung from hooks in the ceiling, and mirrored in tiny points the flames leaping on the hearth. It was like another world, small but complete, inside Grandmother's and Grandfather's house: the floor was the earth itself, trampled until it was as hard as brick, the wreaths of smoke were thin clouds flung across a dark sky where yellow and red stars winked and twinkled.
At one end of the room, where Grandmother and Anjou, the cat, were busy preparing dinner over the bright fire, it was gay and warm: Day; but at the farther end, where Grandfather sat stroking his long white beard, it was dark and chilly: Night. When Philippe entered, he had to blink his eyes for some time before he could adjust himself to the darkness. Then he handed his Grandmother the bouquet he had carried so carefully, politely wishing her health and happiness. There were tears in Grandmother's eyes as she bent over and kissed her Grandson's pink and shining cheek, but then there were always tears in Grandmother's eyes – why, Philippe never could understand. Did she weep because of the stinging smoke that the chimney seemed too small to carry off? Or because she was sad? Not sad, thought Philippe, or Grandmother would not be all the time smiling. "Hey-O!" sang Grandmother in her high little voice, dropping a tear in the yellow heart of a purple pansy. "What pretty flowers you have brought me, my Philippe, and see, here is a raindrop in one of them shining as prettily as a glass bead!" Philippe did not like to tell her that it was her own tear. "Then it is raining out?" she asked. "It will make a wet home-coming for your uncle, but it is lovely, nevertheless, and if it comes down hard enough, it will make the river flow along more happily than it has for a long day. Won't that be beautiful, Philippe?" "Yes, Grandmother Marianne," Philippe agreed politely, and then asked: "When will my Uncle Pablôt be here? Mother has taught me what to say when I make my bow to him, and if he is too long in coming, I am afraid that I may forget it." "He will come," said Grandmother, "when he has a mind to." "And is he coming from a great distance, maybe all the way from Paris?" (Philippe thought that Paris was the only city in the world, built on the world's very edge.) "Maybe, and then maybe not," Grandmother told him. "There is no telling where your uncle will come from; he is apt to blow in from any quarter." "Ah, then that explains it!" remarked Philippe innocently. "Father said he always thought Uncle Pablôt was a little inclined to blow." "Now did he!" Grandmother was frowning and smiling at one and the same time. "Have you spoken to your Grandfather yet?" "I did not know that Grandfather Joseph was home; I did not see him," said Philippe truthfully. "Use your young eyes sharply and look into every corner," advised Grandmother. "Anjou!" she cried warningly, "you will burn your nose if you get too close to that roasting duck." Philippe gazed into the farthest corner of the room where he saw two dim spots of white glowing like snow in the night; he had to advance quite near before he could be sure that what he saw was the long white hair and the long white beard of Grandfather. "Good day, Grandfather Joseph," said Philippe, bowing low before the old man who sat huddled in a chair, the arms of which were worn shiny by the grip of thin fingers. "'Good day'? A very bad day, Grandson. Though I no longer hear nor see as I used to, I can feel that it is raining. Tell me, is it raining?" "Yes, Grandfather," replied Philippe from the top of a churn where he had climbed to look out of the small window at the river. "It is falling so hard that the raindrops are bouncing from the surface of the water." Remembering what his Grandmother had told him, he added, "It will make the river flow along more happily than it has for a long time, and that will be very beautiful!" "Horrible!"
said Grandfather with a sigh that was almost too soft to be heard. "It makes me feel weak clear through," he continued. "Give me the sharp cold and the sparkling frost when the river freezes so hard that it cracks and roars like a cannon. When I was a boy, I used to spread my cape and let the wind push me across the slippery ice — This soft weather will be the end of me!" There were three people living in the house that Philippe visited; besides Grandmother and Grandfather, there was little Avril, their grandniece, and therefore Philippe's cousin. Avril was a child of tender beauty, younger than Philippe, quite a baby in the sight of eyes that were eight long years old. Avril was very shy, so shy that she had hidden under the table when Philippe had entered the door, and it was not until he had paid his respects to Grandmother and Grandfather that he saw her there, peeking out at him like a flower from the dark shadow of a garden wall. "Hello, my little cousin," said Philippe with a grand and grown-up air. "Would you like to play a very important game with me that I have just thought of?" Avril laughed her pleasure. It was a most excellent game, so Philippe thought. He was King, enthroned on the churn, and Avril was his slave, and had to bring him anything he might request, with the penalty of having her head chopped off if she failed. King Philippe had just commanded the brightest star in the heavens to be brought him, when there was all at once a loud rapping and rattling of the wooden latch. The door flew open before anyone had time to answer, and a gust of chilly wind swept through the room, breaking the weaving rings of smoke, making the fire leap up the chimney, causing Grandmother in her excitement to drop the wooden spoon into the pudding, and even waving Grandfather's beard like a white flag. "Behold! I am here!" cried Uncle Pablôt from the threshold, withdrawing his right arm from the voluminous folds of his cape and making a magnificent sweeping gesture ending with his fingertips being pressed lightly against his expanded chest. "So I see," said Grandfather in a thin, complaining voice from his dark corner. "Close the door," he pleaded, tucking the end of his waving beard into his blue smock. "Close the door – the rain makes me feel very weak – " But no one paid the least bit of attention to him. Grandmother ran forward with squeaking noises of delight, throwing her arms around the newcomer, draping him with a link of sausage, which she had forgotten to put down in her hurry, in the manner of a necklace. Avril shyly retreated beneath the table again, and Philippe tried desperately to remember the pretty sentences with which he was to address the great man. He was in the very middle of trying to remember when his Grandmother took him by the hand. "And here is your little nephew," said Grandmother, "who has come all by himself a great distance to welcome you." Philippe stared dumbly, wishing that he had had the presence of mind to slip under the table with Avril. "Come! What do you say to your uncle, Philippe?" asked Grandmother. "I forget what I say," answered Philippe miserably, "but I am very glad to see you, my – my — Ah! Now it comes to me!" And he started again: "Good health to you, my dear and illustrious uncle. It gives me the most — " "Fiddlesticks!" interposed Uncle Pablôt, laughing. " – the most great pleasure to welcome you, and — " "Yes, yes – " said Uncle Pablôt, cutting him short again. "But what do you say to this?" 
and he reached into the folds of his cape and handed Philippe something small and shining. "What is it?" asked Philippe. "Ho! That is better. At least you did not learn that by heart, did you, my boy? Here, I will show you." Whereupon he put the bright present to his lips and blew a shrill blast that rattled the pots and made Grandmother drop her sausages in alarm. (She dusted them very carefully before putting them in the hot pan that was waiting to cook them.) "A whistle!" shouted Philippe, dancing with joy. Then he ducked under the table to show his beautiful new present to Avril. "And here is a present for the other little one," said Uncle Pablôt, handing the shyly smiling girl a toy spade with a bright green handle and a wreath of early spring flowers painted on the tiny blade. What a feast they had in honor of their distinguished guest! "I suppose," said Grandmother to Uncle Pablôt, "that you have traveled a great distance since last you visited us?" "Yes, yes," said Uncle Pablôt, flourishing the wing of a duck. "I have breezed about a bit, here, there, and everywhere. Would you like to hear a little about my travels?" "Oh, please!" begged Philippe, although the question had not been addressed to him. "Now there is India," commenced Uncle Pablôt, "a very hot country, but as gay as a circus — " And over the roast duck he told them many things in his soft and flowing voice, of elephants, their enormous bodies painted brilliantly in curlicues, circles, and zigzags, swaying through narrow streets like clumsy ships of the land, ridden by dark-skinned potentates robed in ivory satin and scarlet brocades, wearing precious jewels more sparkling than broken bits of colored glass . . . of softly stepping and treacherous tigers prowling in deep jungles, of lions and leopards, crouching panthers and laughing hyenas and all manner of beasts . . . of birds with emerald crests, sapphire wings, breasts of flaming orange, long, sweeping tails and screaming falsetto voices that seemed to shatter the air into sharp and hurtling splinters . . . of gorilla fathers with so terrible a power in their long arms that they could uproot a tree as easily as one would pick a dandelion, and gorilla mothers holding babies to their breasts as gently and lovingly as any human mothers . . . of chattering pink monkeys shouting in derisive laughter from their hiding places in the tree tops at passers-by. Leaving the wildness of the tropic forests, he told them of queer-shaped temples and pagodas, lifting to the blue of the sky, made of stone carved as beautifully as lace, where lived the leering and laughing gods of the heathen. By the time Grandmother had put the crisp green lettuces on the table, Uncle Pablôt had carried his little audience to far-away China and, without so much as a "by your leave," into the gardens of mandarins and emperors where jasmine filled the air with sweetness, and rose and white peonies bowed their heavy heads around the lily ponds. Far away and far away they flew on Uncle Pablôt's winged words: over snowy mountains tinted with the pink and lavender radiance of the dawn, through the fiery furnace of desert sands where haughty camels plodded their weary course to the beat of Arab drum and the mystical rhythm of Arab song, up broad rivers where crocodiles basked in the sun . . . 
past cities with towers and turrets, through the courtyards of palace and castle, into the riot of crowded markets with their laughter and shouting, buying and selling, into a land where the streets were water, where the buildings had wings that turned and turned, where the men and boys wore tight little jackets of velvet fastened with brass buttons, and trousers as big as two sacks sewn together. "Oh, yes," said Uncle Pablôt, "and they all wear wooden shoes so that they can walk safely across the streets of water without sinking." "Remarkable!" said Grandmother. "If true," said Grandfather, but he spoke so low that every one thought that he was merely choking, and paid no attention to him. "More!" pleaded Philippe. "And I was in England the other day," continued Uncle Pablôt, who needed little urging, "where I visited the Royal Family. That is nothing," he said, in answer to a look of proud astonishment from Grandmother. "I have a great many acquaintances in all walks of life. Once I mussed up the hair of a prince and ran off with the parasol of a duchess, just by way of a little joke, you know. Did I ever tell you — " But if he ever had, he told them again, and at such length that, though the dinner had come to an end, and Grandmother had cleared away the dishes and given Anjou a saucer of milk and a bone, he was still telling them this and other monstrous adventures in his quick, easy voice. How thrilling it all was to Philippe. It seemed to him that the gay words flew from his uncle's mouth and over his head like flocks of wild birds. Some of them were quite ordinary little words, as sparrows are ordinary little birds, but others were long and strange like the queer birds his uncle had told him about. Or again – this tale of other lands and peoples was like music to which the crackling of the fire and the drip, drip of the rain outside made a soothing accompaniment. He tried hard to keep his eyes and ears wide open, but, to tell the truth, he had eaten very heartily of Grandmother's delicious dinner, and that, with the darkness of the room, the lullaby singsong of his uncle's voice, and the soft purring of Anjou, made him heavy-headed and in danger of falling into sleep at any moment. Voices came to him through the fog of smoke, sounding far, far away. He heard his uncle say, "But you, Grandfather Joseph, you should go about the world a bit and see for yourself these wonderful things." "I am content," replied a soft, old voice. "Yes, you are content to stay where you are put, or at best to drift around a bit, eh?" And then the old man saying, "I drift – I drift – I drift – "   Maybe it was then that Philippe went to sleep, or, on the other hand, maybe it was then that Philippe overcame his drowsiness and woke up to a new interest in things. Certainly, strange and exciting happenings took place in rapid succession. It started with Grandmother going to the window where she stood on tiptoe and looked out at the river. "Oh," she cried, and her voice was younger and happier than Philippe had ever heard it before. "Oh! The river has grown up; never before have I seen my darling child so strong and beautiful. And how he runs and laughs! In another minute he will be at the sill of the window. I will open the door and invite him in." "No, no!" cried Grandfather weakly, jumping up from the chair and staring wildly about the room. "It will be the end of me." "But think, Joseph, how my child will love it! 
He will splash and laugh – why, even now I can see him creeping under the door in his eagerness." Without a word, gathering the baby Avril into his arms, Grandfather dashed out of the other door; and they watched him running across the fields and meadows, his white hair and beard flying back over his shoulders in the mad speed of his flight. "Now there is a strange man," Grandmother said to Uncle Pablôt. Pablôt only whistled softly and looked wise. "One would think," continued Grandmother, "that he would be grateful for a nice trip on the back of my child. He will come to my way of thinking all in good time." She looked around her critically. "The fire!" she said. "How fiercely the fire is burning! It quite makes me boil with anger; I won't have it, I hate it!" and she ran upon it, scattering the embers with a great hissing sound. "There now!" turning again to Pablôt. "Do you think that the room is in readiness for my son? Shall I open the floodgates and let him in?" "How about Anjou?" asked Uncle Pablôt. "Anjou can ride in his basket." "And Philippe?" "The little cradle by the bed that Avril sleeps in – an excellent boat! Jump in, Philippe, run and jump in, for we are going to make a voyage. I – let me see - this tub will suit me nicely; I have a fondness for tubs; and you, Pablôt, can run along the bank. Into your basket, Anjou, quick! You look strangely unhappy, my pet. Are we all ready? Enter, my son!" Grandmother unlatched the door facing on the river; it flew back against the wall with a crash. What happened next was very confused in the mind of the startled Philippe. There was a great, swishing roar as the water of the river, swollen to unheard-of heights by the hard rain, leaped and tumbled into the room in masses and billows of silver foam. Tightly he clutched the rail of the crib as his strange boat tossed and turned and ducked and pitched and bobbed and spun around and around in the currents and cross currents and boiling waves. At last, when the water in the room had reached the level of the water outside, and therefore had suddenly quieted, he dared to look about him. Uncle Pablôt had disappeared; Grandmother was calmly sitting in her tub with a rapturous smile on her old face. "So impulsive!" she remarked conversationally to Philippe. "My son, the River," she explained. "He is so very glad to see me. Did you notice how he jumped and romped when I let him in? It made me very proud! But we must not waste our time floating idly here; there is to be a very important reunion of my whole family." And with that they were caught in an eddying current and swept out of the door: Anjou, with tail as erect as a mast; Philippe, wide-eyed and silent in his cradle boat; and Grandmother in her wooden tub, pleased and proud, the happy tears streaming down her cheeks. Once you get over being frightened, it is really great good fun, so Philippe found, to go racing along a swift-flowing river in a little boat that nods to each passing wave. They passed tall reeds and rushes that waved gracefully to them from the shore, weeping willow trees, their wands gray-green and crystal with rain, gently caressing the surface of the water, emerald fields patterned with yellow flowers shining wet, mallows by the River's edge, white with glowing hearts of deep pink, deep pink with hearts of white. 
Sometimes swiftly, sometimes slowly, but always and ever onward, "Grandmother's Son" carried them on his strong back; now through lowlands, and now between high banks of dark chocolaty mud, where, from the black portals of burrows and tunnels, the bright eyes of water animals gazed at them in astonishment. Yes, it was thoroughly delightful, but it was puzzling to Philippe; there were many things that he did not understand. He decided that he would ask Grandmother, who was floating close to him in her wooden tub. "Grandmother Marianne," he called to her, "why do you call the river your son?" "Look at me, Philippe. Have I not changed?" asked Grandmother. "I am no longer Grandmother Marianne," she said, "I am Grandmother Rain! . . . Without me there would be no puddles, no pools, no lakes, no ponds, no rills and runs and rivulets, no brooks and streams, no waterfalls, no rivers – their lovely and happy voices would die from the land. They are all my children. And if it were not for my children, there would be no ocean." "What is the ocean?" asked Philippe, who had never been to the seashore. "That, my Philippe," said Grandmother Rain, "is where I was born, and where all my children return. It is a beautiful place! And how your uncle loves to play there – a decidedly worthy man, your uncle, though at times a trifle flighty." They passed a grove of trees, their bright branches reaching out over the water. "How fresh and strong they look," cried Grandmother Rain. "They are always glad to see me, I can assure you. Oh, I have strange adventures, Philippe. Sometimes I am buried in the soft, brown earth, and you would think that would be the end of me, now wouldn't you? But no! I creep back into the air through trunks of trees, through blades of grass and stalks of flowers, and through the shoots of young corn. I trickle through an endless maze of underground passages into deep wells, or until I find a place where I can come bubbling up to the surface. Every living thing needs me and every living thing loves me, except sometimes little boys kept in from play – eh?" Philippe felt guilty, and was about to apologize when Grandmother Rain put him at rest. "That is not quite true. There are others," she said, "who do a good deal of complaining about me; they say that I am an old spoil-sport just because I try to make myself pleasant at their parties and picnics. But if I were to leave them forever – " she made an odd little gesture of despair. "Would you like me to sing you a song?" she asked unexpectedly. "It might serve to pass the time." "Please," said Philippe, who was getting a bit tired of floating aimlessly and never arriving anywhere. "Very well." And this is what she sang:

GRANDMOTHER RAIN'S SONG

"Pitapat, pitapat, drip, drip, drip –
Pitapat, pitapat, slip, slip, slip,
Over roofs and windows, over garden walls,
Over fields and meadows – the gray rain falls!

"I fall upon the countryside, upon the city square;
I tap the silk umbrellas that are opened everywhere;
I wash away the dirt and dust that cloud the flower's face;
I fall on royal palaces, and in the market place –
For no one is too regal, and no one is too low
To receive the crystal blessing that I scatter as I go.
I freshen up the thirsty world, and make it clean and green,
The grass grows tall, and flowers bloom wherever I have been.
Although I lie in gutters, and slip through hole and crack,
And sometimes have my little joke by running down your back,
I make small children happy, for on me they may float
Their shiny bright, their red and white, their little new toy boat.
So think not that because I fall like tears I may be sad:
The sparkle in each drop of me is proof that I am glad!

"Pitapat, pitapat, drip, drip, drip –
Pitapat, pita –

"Ah! There he comes!" cried Grandmother Rain excitedly, forgetting to finish her song. "Who?" asked Philippe, curious, like most boys. "Who indeed?" replied Grandmother. "Look up the shore. Now we will have some sport!" Philippe did as he was told, and saw a small figure hurrying toward them at a great pace. As the figure drew nearer, he saw that it was Uncle Pablôt, running along the edge of the water and stirring it to frenzy. "Hold tight!" warned Grandmother from her tub. Philippe needed no warning, for as Uncle Pablôt drew opposite to them, waves broke the smooth surface of the river and tossed his little crib about like a cockle shell. He could see, as he was twisted about, that the rising waves were creeping over the edge of Grandmother Rain's tub and swamping it – it was sinking lower and lower. "Be careful, Grandmother!" he cried frantically. "This is what I call delightful!" replied that remarkable woman, tipping her tub until the water ran in and filled it with a deep gurgle. As she sank into the river she clapped her hands, whereupon there was a blinding flash and a peal of sharp thunder. A bigger wave than the rest washed Philippe, cradle and all, upon the shore. He was too dazed to understand for some moments just what had happened, but at length he spied Grandmother, already at some distance, riding the waves and swimming strongly with the current. "Now I shall be in high time for the reunion!" she called back to him, the growing space between them making her voice very faint. Poor, dear Grandmother! Whatever would become of her? She would drown most surely. But perhaps Uncle Pablôt, who had raced on down the bank, could save her – But no! He was strolling back; he had given up. Philippe ran to meet his uncle with tears in his eyes. "Hello! So there you are, safe and sound and high and dry, eh? You see, I veered about; I thought we might take a little stroll together," explained Uncle Pablôt airily. "Save her!" pleaded Philippe tearfully. "Who? Grandmother Rain? Be calm, my boy, she is quite in her element." "But unless we do something, the river will carry her far away!" "Which is exactly what she wishes. She will be back again, never worry. She makes these little trips to the ocean quite frequently. Look, Philippe, the sun is coming out! The sun and Grandmother Rain do not get along well together; he always hides as soon as she has made her appearance, and when she has gone, he goes about mopping up the whole countryside." Uncle Pablôt's calmness gave Philippe some comfort. He was grown-up, and therefore wise; perhaps he knew the meaning of these strange things. "Do they always disagree, Grandmother and the sun?" asked Philippe. "Not always. Sometimes, though rarely, you may see them together, and then they hang a rainbow flag across the sky as a sign of their truce. But come! We have much land to cover, we must hurry a little more." "Where are we going, Uncle Pablôt?" "What a silly question! How am I to know? I go wherever it pleases me at the moment, sometimes for days in one direction, and at other times this way and that quicker than you can think.
And please do not call me Uncle Pablôt; I am your Uncle Wind." Philippe felt rebuked; he trotted silently beside the tall, lean fellow, thinking him a not very pleasant companion. He would gladly have walked home alone, but he had no idea where he was, and he was afraid to be left alone. At length his Uncle Wind spoke to him: "Do not think unkindly of me, little Philippe. If I was cross to you, it is because I am given to complaining at times, but I am a good fellow at heart. With Grandmother Rain's help, I keep the world a nice clean place to live in. And do you know, Philippe, the best part of it is that I am such a humorous fellow; I am all the time playing the most amusing jokes! Why – once I mussed up the hair of a prince and ran off with the parasol of a duchess. . . . There now! I think I told you that once before, didn't I? But where and when it is quite past my ability to remember. Well, that gives you the idea. Hats? There is nothing quite so much fun as hats! Snatch a hat and run, drop it until its owner is just about to pick it up, and then snatch and run again. There's nothing that draws such a large and appreciative audience as the hat trick. Though, of course, umbrellas are great sport – but I need Grandmother Rain to help me with that trick. Maybe you think I am only a practical joker? Not at all! Do you remember that day you were sick, and your head felt as if it were on fire? Do you remember how I came and cooled it for you, and played with the tassels of the curtains until sundown to keep you amused? If I get a bit angry and rough at times, I am gentleness itself at others, and particularly am I loved in places that are hot and stuffy and saddened by ill health. I am one of the housekeepers of the earth, and I must be everlastingly at it to make things comfortable and shipshape. Oh! The dirt and the dust, the smoke and the foul smells people throw into my face in the cities, little dreaming that if it were not for me the earth would be unfit to live on. But I am strong without end and do my best. Yes, Philippe, I may bluster and blow and play tricks but for all that I am a very excellent fellow. And I am a traveler and adventurer over land and sea, such as one has never read of in the most thrilling books! No one has seen more of the world than I. I have seen strange parts of the world, looked behind walls of ice, where no living thing has ever been. Only the other day – " On and on talked Uncle Wind, and on and on traveled the two together. Over more meadows they went than Philippe thought could possibly be crowded into the world, and past innumerable herds of cows and flocks of sheep. It had grown warm with the coming of the sun, and often would workers in the fields spread wide their arms and speak words of welcome as they passed. The grass and the yellow wheat bowed as they stepped lightly over them and even the trees nodded in friendly recognition. Birds, stretching their wings, took rides on Uncle Wind's shoulders. At times Uncle Wind would go quite fast, so that Philippe had to run, and again, so slowly that they were scarcely creeping, until, after a long time, they stopped quite still on the top of a high hill. "I often lie down and rest at sunset," explained Uncle Wind in a voice that was scarcely above a whisper. Far, far away, Philippe saw, through a twilight haze of gold, what he had never seen before: the deep ocean where Grandmother Rain was holding her family reunion. The crimson sun was rolling over the blue edge of the world into its sparkling heart. 
He sat down in the crevice of a rock and thought long and wonderingly of the things that had come to pass that day, and he tried to see, in the land that was spread like a map before his eyes, the red roof and clump of trees that would be his own home. He did so long to be with his darling mother again! And very soon it would be dark. . . . Silver stars began to shine in a pale green sky. . . . Golden stars were lit in a sky of deepening purple. . . . More and more stars in a sky dark blue. Night had suddenly closed in around him, and he was frightened and started to cry. "Uncle Pablôt – I mean, Uncle Wind – I want to go home!" But where was Uncle Wind? There was no answer, no sound, and search as carefully as he would, Philippe could find no trace of him. It was as if he had utterly vanished, which, indeed, he had, for the time being. What was poor Philippe to do? The hilltop stones that surrounded him took menacing forms; he was sure that he saw the shining eyes, green and glowing, of prowling beasts. He summoned all his courage and bravely started to walk – where? Downhill, for he remembered that Grandmother Rain had told him, as they floated along the river, that that was the only way any sensible person would ever care to travel. Besides, when you were on the top of a hill, unless you stayed there, there was no other choice. Where else he was bound for he had no idea, but anything would be better than the unbroken stillness of the haunted rocks. How far he walked, at times ran, through the dark night, falling over roots and tearing his way through scratching brambles, pursued by unseen terrors of darkness, before he came to the old man, he had no idea. At first he was timid of approaching the bent figure sitting huddled on a stump, so dim under the starlight. But loneliness and the longing for companionship overcame his fear. "Please, sir," he said, drawing slowly closer, "please, sir, could you tell me – Grandfather Joseph! Grandfather Joseph!" – and he flung his arms around Grandfather's neck, the hot tears streaming down his cheeks. But how cold Grandfather was! The touch of Grandfather's face against Philippe's burned like ice. "Watch out!" said Grandfather sharply, "You are so insufferably warm you will melt me, if I do not succeed in freezing you first. And, young Philippe, be careful the names you call people. Look carefully at me again; do you not know me?" Philippe was doubtful. Surely it was Grandfather Joseph, and yet – Grandfather had never been so cold, nor so strange in his behavior. Did he know him? "Yes – no," answered Philippe, not being able to decide. "Yes, Snow, that is right! I am Grandfather Snow." "It's very upsetting!" remarked the puzzled boy. "Is it?" replied Grandfather Snow coldly. "But I may stay here with you, Grandfather? I was so frightened alone in the black night. I was out walking with Uncle Wind, and – and he seemed to disappear, and then I lost my way." "You may stay if you do not come too close. So Uncle Wind vanished, did he? Your Uncle Wind is a fickle, changeable, unreliable fellow, but he has a will of his own and will turn up in time. I am very dependent on Uncle Wind; I can do nothing but lie around, without him." "He is very nice, isn't he, Grandfather?" ventured Philippe. "Aye, sometimes," replied the old man. "He was all gentleness this afternoon, but wait until you see him to-night! If I'm not mistaken in the signs, he will be in a fury. Then watch out for yourself, Young Impudence! 
When Uncle Wind is in a fury, he is a hard master and drives every one before him with a stinging lash. You'll see!" Since Grandfather was in such a chilling mood, Philippe did not bother to talk with him, but sat at a little distance, thankful for companionship, and watched the winking of the stars, which, even as he watched them, sparkled and went out like sparks in the soot of a chimney, or as if a black curtain were being drawn across the black sky. After a long while, after the last star had vanished and the noiseless quiet of the night hemmed them in like an invisible wall, Grandfather Snow sprang to his feet and stood tensely listening with his hand to his ear. "What is it, Grandfather?" Philippe asked, alarmed. "Hush! . . . Hush! . . . Ah – now I hear it plainly!" Philippe put his hand to his ear as he had seen Grandfather do, and listened intently, holding his breath that he should not miss the tiniest sound. Nothing. Yes – a far away and tiny sound. It sounded to Philippe like the little gasping noises he had made when he was learning to whistle, before ever he had been able to attempt a tune, the noise of air breathed in and out through rounded lips. "He is coming!" Grandfather told him in a voice trembling with excitement. "And he is perfectly furious; seldom have I heard him whistle more beautifully. Listen!" Philippe no longer had to strain to hear the far-away whistling; it was growing nearer every second, and as it approached it became high and shrill. "Is that my Uncle Wind making all that noise, Grandfather?" "Aye!" said Grandfather shortly, crouching close to the ground in the position of a runner about to start a race. "I shall run and meet him," cried Philippe, delighted at the idea of seeing his old friend again, who was now evidently very close. He had not run twelve steps when something spinning through the dark ran squarely into him, bowled him off his feet and rolled him along the ground as easily as if he had been made of thistledown. It was a terrific struggle he had to gain his feet again, and even when he had, and would have liked to stop to catch his breath and dust off the new suit his mother had made for him, he found himself being shoved roughly from behind. "Faster! Faster! Faster!" screamed a voice in his very ears. And if he tried to slow up ever so little, "Rush! Rush! Rush!" the voice would command. "Faster! faster! faster!" "Please, Uncle Wind – oh, please, Uncle Wind – I can't go any faster – my legs aren't long enough!" "Faster!" screamed Uncle Wind in anger, prodding poor Philippe so hard that he was fairly lifted off his feet. Above them, and all around them, there was the noise of tearing leaves and crashing branches, there was the groaning of tortured trees as Uncle Wind lashed them with his invisible cat-o'-nine-tails. Dim shadows streaked past like flying beasts. "Rush!" shrieked Uncle Wind, "R-U-SHSHSH-shshshshsh–" Something cold and stinging struck across Philippe's face, and it was then, in spite of his breathless panic at the mad flight, that he wanted to burst out laughing, for he saw that Grandfather, who had all this time been running at his side, was going so fast that he was actually losing his whiskers! "Your whiskers, Grandfather! The wind is tearing your whiskers off!" But the old man, who was speeding along more lightly than any rabbit, paid no attention. 
In truth, it seemed no great calamity, for as fast as Uncle Wind would tear off his whiskers and his hair and scatter them on the ground, new would grow immediately – and so thick and fast they grew that the ground became covered with white. But whiskers were not cold and wet when they brushed across one's face: they scratched and tickled, as Philippe had found out on occasions when he had kissed Grandfather. This was snow! Grandfather Snow was spreading his white blanket over the earth. All night long Uncle Wind and Grandfather Snow sped across the dark country like mad men, and when little Philippe grew too tired to stand it any longer, Uncle Wind would lift him up in his strong arms and carry him. And the snow grew deep, and eddied and twisted into great mounds and high drifts with sharp, curved edges like the thin crests of waves – so that in the cold, pale light of the coming morning, the world looked like a beautiful dream cut from marble. And with the coming of dawn, Uncle Wind suddenly stopped driving them. "That was a great run!" said Uncle Wind. "It has left me completely out of puff. Philippe, my boy, I hope it hasn't tired you too much? Grandfather Snow, didn't I drive you beautifully?" "Aye." "And you have not done so badly. It will be some days before we are in shape for another run like that. Well, good-by! I think I shall do my famous vanishing act again. How about you, Grandfather?" "Not quite yet. I shall linger on a bit. There are a few touches, a few light touches I neglected in my hurry last night that I would like to attend to this morning. You see," he explained to Philippe when Uncle Wind had vanished, "I am quite an artist. Some people think I am very little use and only good for lying around. Not at all! I make excellent snowballs, for one thing, and Uncle Wind is not the only member of our family who has knocked a hat off! But of course I would never tell you of such a thing if I did not know that you were too much of a gentleman to use me for such a purpose. No, no, my child, I work as hard for the things that grow, in my own way, as Grandmother Rain does in hers, but chiefly I delight to make things beautiful. See that naked gray tree? How bare and cold it looks! It needs a few high lights that I could not stop to give it last night – " whereupon Grandfather Snow touched each branch and twig with a powdering from his white beard, and the twig and branch of every tree around, until the whole world above the level of the ground was a tracery of gleaming, fairy lace. "Not bad, Philippe, not a bit bad! Can you see anything else that needs touching up? Speak out before it is too late, for my supply is nearly exhausted." "Please, Grandfather, it is beautiful, but I am cold and tired, and I would like to go where it is warm." "Of course you would, my child. Look! Below us in the valley it is green, and even from here one can see that there are flowers. Run on down — " "I don't want to run; I'm tired of running!" "Well, well," laughed Grandfather, "walk then, if you wish. After a while, when the warm sun comes to view my handiwork, I, too, will slip down into the valley, but I shall not stop there. No, I have a long way to travel before I join Grandmother Rain once more." Philippe turned slowly away, touched by the purity and peace that surrounded him. "Good-by . . . Good-by . . . " said Grandfather Snow gently, very, very gently! As Philippe reached the green valley below, the sun broke through a thin veil of silver clouds. 
It had risen brilliant and white from its all night dip into the distant ocean, and its cheering warmth was gratefully received by the tired adventurer. A fragrance, mingled of evergreens and flowers, herbs and damp earth, filled the motionless air, and from the end of the grass-grown lane, along which he walked lazily, there was an amazing confusion of sounds, as if thousands of birds were singing at one time. The lane led him to a gate, and on the gate was a sign which said: PHILIPPE'S GARDEN "I must have been away a long time for my garden to have grown so big," Philippe told himself. Standing inside the gate was little Avril in a new green smock prettily embroidered with wreaths and garlands of flowers. She curtsied so low before him that the hem of her dress brushed the young shoots of grass; and she smiled at him tenderly. "And who are you?" asked Philippe warily. "Why, Philippe! Don't you know me?" "Yes, I think I do; but I thought that I knew Grandmother Marianne and she turned out to be Grandmother Rain. Uncle Pablôt, it seems, was not Uncle Pablôt at all, but Uncle Wind. And my Grandfather Joseph is Grandfather Snow and lies just above us on the hill. It is very puzzling; can I be sure that you have not changed your name?" "I have quite a number of names," explained the little girl. "Some call me Spring, some call me Flora, but you may call me Avril. Avril: April – it is all the same. Would you like me to show you your garden? It is very lovely, and I have worked hard to get it all in readiness for your coming." "You?" "Yes. I am your gardener, but I have had a lot of help. Every one has been so kind! Uncle Wind helped me plant it, Grandfather Snow prepared the ground in fine shape, and Grandmother Rain has been here often and often, giving my little plant babies their bottles. It has been a lot of worry and care, Philippe," Avril told him in a curiously grown-up voice, "but when you see my beautiful children, I am sure that you will think that it was worth while. "Now here," she said, smiling happily and taking him by the hand, "are some of my first babies: the snowdrops, named in honor of their godfather, Grandfather Snow. And here – " [Illustration: From flower to flower they wandered.] From bed to bed, from border to border they wandered, looking at the flowers, breathing the sweet perfume, and watching the clumsy but clever bees, out marketing for honey which they would pay for with golden pollen dust carried on their velvet backs. There were soft-petaled pansies as dark as midnight, as purple as a queen's dress, as yellow as the sun, and sometimes of many colors curiously combined to form impish and laughing faces. There were lilies of the valley and violets, stonecrop and candytuft, peonies and roses, larkspur and bridal wreath – so many flowers that Philippe could not remember their names, but gave himself up to the enjoyment of their soft and gorgeous colors, their delicate and magnificent shapes. Farther along the maze of paths where he was led by Avril, the flowers were still furled in tight buds, and at length they came to beds where the dark loam was scarcely more than broken by lifting sprouts. "These are for later," explained his fairylike guide. "And these?" asked Philippe, when they had entered into a new part of the garden where straight rows of green-growing things were marked off in beds of checkerboard design.
"These funny little fellows," Avril told him, "are not as beautiful and proud as the flowers; they hold their heads less high, but they are all extremely worthy and one would find it difficult to get along without them." "They look good enough to eat," said Philippe, who was beginning to feel very empty. "They are," said Avril. "And is all this garden mine?" asked Philippe. "Yes," answered the little girl, curtsying again before him, and added: "All yours — King Philippe!" "Oh, you mustn't call me 'King,' that is, when we're not playing games, you know," Philippe warned her, rather shocked. "Kings are grand people with treasures hidden away in strong chests, and they wear crowns of gold and have thousands of servants. I know, because I have read all about them in a book which my mother gave to me. I am a farmer's son, and can never be so wonderful a person as a King." His companion looked at him very thoughtfully, and at last spoke: "You are a King, Philippe. Sun, Moon, and Stars shine down upon your head a crown; the whole earth is yours, the great strong chest of hidden treasures. From the time the first small star hung like a lonely spark in space, your servants have been preparing for you a kingdom, the kingdom of Earth, than which there is only one greater. And that kingdom, too, will be yours some day if you rule wisely and well in this, and are kind, and strong – and gentle." "It may be true," said Philippe, rather bewildered by the wonderful things he was hearing. "But I am quite sure that I have no servants; why – little though I am, even I must help my father in the fields." "We are all your servants. Is it not true, Grandmother Rain?" A shower suddenly passed over the garden, decking the flowers in crystal splendor, and from a small cloud overhead Philippe could distinctly hear the voice of Grandmother: "Yes. I have worked for Philippe's father and his grandfathers from the very beginning of things, and I hope to work for his children and his children's children for time evermore. Do not think badly of me, Philippe, if I do not come and go just to your liking, for I am very busy, with much important work to attend to." "Is it not true, Grandfather Snow?" "Aye, so it is!" came a voice from the bright hill beyond the garden wall. "Is it not true, Uncle Wind?" "Well, well! I am just in time," remarked Uncle Wind, sauntering up the garden path, the flowers nodding to him as he passed. He had cast aside his great cloak, but even then looked a little warm. "Just wandered up from the Southlands," he continued. "Yes, my little darling, it is true enough what you are telling Philippe, but of course we are not to be bossed about like ordinary servants; we serve and yet we keep our independence; we have been at our various tasks so long that we know exactly what to do without being told, and if we seem a little lazy at times, or a little too enthusiastic at others, remember that we may have our own very good reasons. Yes, indeed," he went on, commencing to bluster a bit, "there are often reasons hidden in the strangest things we do.
Did I ever tell you how once I mussed up the hair of a prince and ran off with the parasol of a duchess – " "The wind is capable of being a little monotonous at times," Avril whispered into Philippe's ear, but he could hardly hear her, for the garden was being filled with other voices, coming from here, there, and everywhere – from the grass, and the flowers, and the vegetables, and the trees, from the stones, and even from the brown earth itself, and they all were saying in their own way, the one thing: "We serve!" "Please listen to us a moment," pleaded the fragile voices of the flowers. "We serve too, though many consider us too delicate and concerned about our own looks to be of much use. But do not forget us, Philippe! Do not forget us when you are grown up and your mind is crowded with worries and cares and a lot of things that will seem more important to you than they really are. Keep a place for us in your mind and heart, and we will repay you in our mysterious way a hundredfold and more. Do as we ask; treasure beauty, purity, and truth – for though you may love us now, you will not understand the full importance of our message until you have grown up. Do not forget – " "The flowers are very talkative to-day," remarked one little lettuce to another. "The flattery of the bees has quite turned their heads," agreed a radish who was notably sharp, whereupon some of the more sensitive flowers who had overheard blushed deeply. But Philippe heard none of this chatter of the vegetables, for it seemed that the whole world, the ox and the ass, the horse and the cow, the tame beasts of the fields and the wild beasts of the spaces beyond, the fox and the rabbit, the mouse and the beetle, the creatures that crawled and the creatures that ran, the cricket and the grasshopper and the inhabitants of air and ocean, the little hills and high hills, the valleys and forests, the voice of water through the land, sky and earth – all, all were joining in a great, droning chant: "We serve – we serve – we serve – " "What utter nonsense!" shouted a little bird saucily, flying from the low branches of a tulip tree. "I serve no one; I just have lots of fun, and I'm going to have an exciting fly – and that's something little boys can't do, for they haven't even any pin feathers!" The cocky way the little bird flapped her wings and tossed her head made Philippe double up with laughter. "See!" said the little rebel's mate, flying close. "You have made the King laugh, so your empty boasting has broken like a bubble, for laughter is one of the greatest services in the world! And as for going on your wild flight, have you forgotten our pretty blue eggs in their soft brown nest?" "I am a King!" said Philippe in a daze of wonderment. "My darling Avril, tell me what I can do to show my gratitude to all my servants." "They love nothing better than that you use them, Philippe. Use them wisely and well, and not only for yourself – but for others." And gentle Spring kissed him upon the lips, filling his heart with love and happiness. "It is high time," said Philippe's mother to Philippe's father, "that our little one was back. Soon it will be dark." She went to the doorway and gazed across the fields. "Here comes Pablôt," she called back into the room, "and he is carrying the child in his arms." "Sh-h-h-h-h!" breathed Uncle Pablôt, drawing close. "Take your son gently into your arms; he has been sleeping bravely all the way from his grandparents'. 
And here," said Uncle Pablôt, "is his little silver whistle, by which I hope that he will remember me when he wakes up and finds me gone." About This Edition Illustrations may differ in size and location from those in the original book. Edited by Mary Mark Ockerbloom discoverysedge-mayo-edu-640 ---- A Line in the Sand – Mayo Clinic's role in insulin research | Discovery's Edge Millions of patients with diabetes lead normal lives because they take insulin on a daily basis to control their condition. Prior to 1922, their lives would have been much more tenuous. Mayo Clinic was at the forefront of the first clinical trials on insulin back then, ensuring that the new drug was safe and determining the proper doses for patients. Bessie Bakke had been perfectly healthy for 28 years until she suddenly became fatigued, lost weight and developed fainting spells. Misdiagnosed as having anemia, her condition deteriorated until she was nearly comatose. In October 1920, her parents brought her to Rochester, Minnesota, where she was diagnosed by Mayo Clinic endocrinologist and researcher Russell M. Wilder, M.D., as having juvenile diabetes, as type 1 diabetes was then known. [Photo: Russell Wilder, M.D., 1935. Mayo's modest diabetes pioneer was considered by his peers to be "the ultimate gentleman."] It was before the discovery of insulin, when patients with diabetes often were subjected to extreme treatments and had little hope for the future. "Bessie B.," as she became known in the scientific literature, stayed at St. Marys Hospital where she was placed on a ketogenic diet, similar to the one pioneered by Frederick Allen, M.D., of the Rockefeller Institute. The diet consisted of a precise proportion of carbohydrates, proteins and fats determined by her weight and her blood glucose levels, and it included a series of adjustments to customize it for her specific metabolism. Dr. Wilder and his colleague, physiologist Walter M. Boothby, M.D., tried Bessie B. on 12 different diets during her 74-day stay in the hospital. It was one of the first attempts at individualized medicine. Every element of her diet was prepared in a special kitchen under the direction of trained dietitians. Dr. Wilder and his staff accounted for and analyzed everything that went into and was eliminated from her body. Laboratory tests determined how quickly she was metabolizing food so that doctors could properly balance and time her feedings. The amount of food she received was small, just 850 to 900 calories a day. In order to survive, patients with diabetes during that time had to nearly starve. Insulin, normally produced by the pancreas, helps control glucose, or sugar, levels in the blood. When pancreatic islets or cells slow or stop producing insulin, diabetes mellitus results. The blood is flooded with glucose, potentially causing a coma and death. Hired by William J. Mayo, M.D., in 1919 to run Mayo Clinic's diabetes unit, Dr. Wilder had few options at his disposal to treat Bessie B. Putting her on an extreme diet was crucial. Dr. Wilder was not alone in attempting to control diabetes through diet.
A handful of doctors in New York, Chicago and Boston also were having mixed success with these specialized diets. Some patients were living for three, four or five years after the onset of diabetes, provided they adhered strictly to the feeding regimen. One of those patients was Elizabeth Hughes, daughter of Charles Evans Hughes, the former U.S. chief justice and, in 1920, secretary of state. The "Hughes Child," as Dr. Wilder would later refer to her, was kept alive by Dr. Allen's special ketogenic diet. After several years on the treatment, she weighed 45 pounds and was barely able to walk. Another patient was a 12-year-old boy from Chicago named Randall Sprague, who was kept alive by Rollin Woodyatt, M.D., Dr. Wilder's mentor at the University of Chicago. What happened next was to change medicine forever. A new era [Photo: Frederick Banting, M.D., and Charles Best with one of their test dogs on the roof of the Medical Building at the University of Toronto, August 1921. Photo courtesy of the Thomas Fisher Rare Book Library, University of Toronto.] As they made their hospital rounds in the fall of 1921, Dr. Wilder and his colleagues at Mayo Clinic heard the news. University of Toronto researcher Dr. Frederick Banting and medical student Charles Best had been experimenting with dogs in the university laboratory of physiology professor Dr. J.J.R. Macleod. They had isolated, refined and proved the effectiveness of insulin in an arduous series of animal studies. The formal presentation of their discovery was made at a research conference that December in New Haven, Connecticut. "What a Christmas gift that was — an extract of the pancreas developed at Toronto, which effectively controls the symptoms of diabetes!" writes Dr. Wilder in his memoirs. "We learned still more about it at the meeting of the Association of American Physicians in the spring of 1922. Excitement prevailed." In early January 1922, using a modified formula for the dog-derived insulin developed by biochemist James Collip, Dr. Banting injected a 14-year-old boy named Leonard Thompson. Daily doses of insulin allowed him to live another 13 years. Drs. Banting and Macleod received the Nobel Prize for Physiology or Medicine in 1923 for the discovery of insulin. Correct insulin dosage critical [Photo: A stark contrast: The same patient on Dec. 15, 1922, and again on Feb. 15, 1923, after receiving insulin treatment. Photo courtesy of Eli Lilly and Company Archives.] The Canadian doctors and researchers were instrumental in the first use of insulin, but still no one knew the best strategy to administer insulin to the wide variety of patients who were hanging on to life, some with multiple complications. Getting the dosage right was critical to keeping the patients alive. Too much insulin would cause a person to go hypoglycemic, with abnormally low blood glucose levels. With too little insulin, the body could no longer move glucose from the blood into the cells, causing high blood glucose levels. Both conditions could lead to diabetic comas and even death in severe cases.
The Toronto group turned to a handful of top clinical researchers for help in determining optimal insulin dosing through experimental studies on patients. One of those was Dr. Russell Wilder at Mayo Clinic. "Samples of insulin were first received at Mayo Clinic in the early spring of 1922," reports Dr. Wilder. "They were for experimental trials ... but an adequate amount of insulin to insure everyone getting it who needed it was not available until the autumn of 1922, and October 1 of that year is the date which divides for us the insulin era from the pre-insulin era." It was as if someone had drawn an arbitrary line in the sands of time. Once clinical studies and mass production would make insulin available, the line would be crossed. That was when the vast majority of patients with childhood diabetes had their first chance at a relatively normal life. Most people today have no idea how dire a diagnosis of diabetes was in that time. "We had 32 children with diabetes in the Mayo Clinic between October 1, 1919, and October 1, 1922, a three-year period," Dr. Wilder writes. "One was moribund on arrival, 28 received satisfactory training and a dietary regimen. Nine survived long enough to benefit from insulin. The others died before it came." Meeting of the minds In November 1922, Dr. Wilder traveled to Ontario to attend a meeting of North America's foremost clinical experts in diabetes at the time. In addition to the Toronto hosts, also present were experts from New York City, Chicago, Boston, Rochester, New York, and Indianapolis — and from Eli Lilly and Company. The international pharmaceutical manufacturer would play a major role in the first mass production of insulin. Dr. Wilder's enthusiasm about the meeting leaps from the page of his notes: "Never again was I to experience a thrill equal to that of being invited to attend the meeting in Toronto of a small committee of experts, called together by Professor J.J.R. Macleod to undertake an extensive clinical evaluation of the product, insulin." Over a cold weekend, the experts compared experiences on how to treat patients with the newly discovered insulin, what worked best to revive them from diabetic comas, and what symptoms might indicate potentially fatal complications. Dr. Wilder's notes show that the doctors were trying to establish dosage standards for insulin: "Allen has given one dose every six hours. Campbell gives his dosage 3/4 hour before meals." "Woodyatt — reported a death contributed to by overdosage — no post mortem. Four other patients receiving same dosages had symptoms but no ill effects." "Gilchrist — has been testing potency of preparations on himself. Reaction — fatigue — preparation increased pulse rate, tremor sensation." "Banting — four children tell that they feel shaky." The physicians also discussed hyperglycemia and how to treat it and any unusual cases. If one had observed something, the others had to know. It was their only guide at this point. And they discussed the handful of patient deaths — from heart failure, sepsis and tuberculosis. Autopsies were important, so they would know if insulin fatally interacted with other conditions. At the time, Eli Lilly and his managers had been urging Drs. Banting and Macleod to patent their insulin formula so that it could be standardized and safely manufactured.
Patenting discoveries was not routine in academic circles at the time. In fact, it was frowned on, and Dr. Macleod was reluctant. Yet colleagues told him it was the only way dosages could be mass-produced for further study. Without a patented formula, others might develop weak or ineffective versions. In the middle of Dr. Wilder's notes is a stand-alone statement, apparently written as part of this effort and shared at the meeting: In my opinion, the course being pursued by the University of Toronto in offering a patent to control the manufacture of insulin is wise and commendable. Without such control it will be impossible to protect humanity from dangerous preparations. Dr. W. J. Mayo concurs in the above. — Russell M. Wilder The last three pages of notes are devoted to a clinical description of the Hughes Child. Any doubt of her identity is erased by Wilder's margin note: "This is the daughter of the Chief Justice of the Supreme Ct." The notes continue: "In August, at age 15, weight 45 lbs., height 60 inches. In November, the weight is 74 lbs., height 61 inches." The notes reveal that she was now on a different and more substantial diet, tolerating the insulin well, and thriving. After the discussion, the doctors were invited into her rooms, where they formed a semicircle around her as she was about to eat lunch. It was one of the most remarkable medical rounds in history. Beyond the line [Photo: Randall Sprague, M.D., one patient who survived long enough to benefit from the discovery of insulin and become a physician at Mayo Clinic.] Dr. Wilder returned to Minnesota after that November meeting but kept in close correspondence with his colleagues, continuing his clinical studies in an attempt to refine insulin dosing. After 40 patients, they published a seminal paper in 1923 in the Journal of Metabolism Research, indicating that a range of 10 to 30 insulin units was needed to transition a patient to a normal diet. They dosed before breakfast and kept patients to a strict eating schedule, with meals at 8 a.m., noon and 5:30 p.m. They presented detailed charts on patients of varying ages, offered first aid (epinephrine and orange juice) for patients slipping into lethargy, and concluded that physicians needed to treat patients qualitatively, watching them closely to establish the right dose for the individual. Soon, 200 case studies confirmed their early findings, along with two more major papers. After the arrival of insulin, 167 children were admitted to the hospital at Mayo Clinic in those first six years of the new era. Only 17 were known to have died later of complications, and most of those Dr. Wilder attributed to care issues outside his influence. For Bessie Bakke, however, insulin came too late. After spending two and a half months at Mayo Clinic, she was doing well and had gone home, only to die a month later in October of 1921. Dr. Wilder, Dr. Walter Boothby and chemist Carol Beeler had fully documented Bessie B.'s complete metabolic activity for the 74 consecutive days, the first detailed clinical study of its kind. They presented their findings that December and published them in January 1922 in the Journal of Biological Chemistry. In contrast, Elizabeth Hughes, who was treated with insulin, lived long enough to become one of Dr. Banting's first patients. When she died, she had received 42,000 insulin injections in 58 years.
Young Randall Sprague made it across the line into the insulin era, as well. In fact, he thrived, attended medical school and eventually became a Mayo Clinic physician and a world-class endocrinologist. One of Dr. Wilder's closest colleagues in later years, Dr. Sprague wrote his friend's obituary in 1959. The Mayo Clinic team treated diabetic patients, saving most of them thanks to insulin, while developing one of the best-tested metabolic diets for patients with diabetes in the nation and an excellent diabetes education program. They added papers on insulin use for pregnant women and patients with comorbidities. Today, Mayo Clinic still trains patients about the proper care for their diabetes using many of the methods established by Dr. Wilder. The diabetic handbook that bore Dr. Wilder's name — The Primer for Diabetic Patients — went through nine editions. He also launched The Mayo Clinic Diet Manual, which had six editions, the last authors being Drs. Michael Jensen and Cliff Gastineau and nutritionist Jennifer Nelson. The American Diabetes Association (ADA) now awards the Banting Medal for Scientific Achievement as its highest scientific honor. Three Mayo Clinic physicians have received the award: Drs. Wilder, Leonard Rowntree and Robert Rizza. And four have headed the ADA, including Dr. Wilder. Dr. Wilder, who treated and saved thousands of patients and aided thousands more through his pioneering clinical research, remained the modest gentleman. In one of his last presentations before his death he said, "I can lay no claim to any great discovery, but I was a member of the crew and several of the ships engaged in exploration … and I must admit to a degree of pleasure in recalling these adventures." March 2015, by Robert Nellis. distantreader-org-7516 ---- The Distant Reader - Project Gutenberg to study carrel Use this page to: 1) search a subset (30,000 items) of the venerable Project Gutenberg, 2) refine the results, and 3) create a study carrel from the list of found items. The content of Project Gutenberg is strong on English literature, American literature, and Western philosophy, but just about any word or phrase will return something. This is a quick and easy way to create carrels whose content represents the complete works of a given author or an introduction to a given subject. Enter a word, a few words, or a few words surrounded by quote marks to begin. The index supports an expressive query language, described in a blog posting. Stymied? Search for everything and use the resulting page to limit your query.
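The workflow just described is a point-and-click one: type a query, refine the result set, and gather the found items into a study carrel. Purely as an illustration of that search-then-refine idea, the short Python sketch below scripts the same steps against a hypothetical search endpoint; the URL, the parameter names (q, rows), and the response fields are invented for the example and are not part of any documented Distant Reader API.

"""Illustrative sketch only.

The Distant Reader is used through its web pages; this script does not
call any documented Distant Reader API. The endpoint URL, the query
parameters, and the response fields below are hypothetical stand-ins,
shown only to make the search-then-refine workflow concrete.
"""
import requests

SEARCH_URL = "https://example.org/search"  # hypothetical endpoint


def search(query, rows=50):
    """Send a query string and return a list of matching item identifiers."""
    response = requests.get(SEARCH_URL, params={"q": query, "rows": rows})
    response.raise_for_status()
    return [hit["id"] for hit in response.json().get("hits", [])]


if __name__ == "__main__":
    # Start broad, then refine with a quoted phrase, mirroring the advice to
    # enter "a word, a few words, or a few words surrounded by quote marks."
    broad = search("Theodore Dreiser")
    narrow = search('"Sister Carrie"')
    print(len(broad), "items match the broad query")
    print(len(narrow), "items match the refined query")
    # The refined list of identifiers is what one would hand off to the
    # carrel-building step in the real, web-based workflow.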
distantreader-org-8506 ---- The Distant Reader - COVID-19 literature to study carrel Use this page to search a collection of more than 750,000 journal articles on the topic of COVID-19. Once you have identified a subset of the articles you would like to read more closely, you can create a study carrel from the results. The index supports an expressive query language, described in a blog posting. The index is populated with the content of a data set called CORD-19, a collection of scholarly scientific journal articles. The data set is updated on a regular basis, and we strive to keep the index up-to-date. Stymied? Enter a word, a few words, or a few words surrounded by quote marks to begin. dla-library-upenn-edu-3031 ---- Theodore Dreiser papers, circa 1890-1965 (bulk dates 1897-1955) University of Pennsylvania Finding Aids Theodore Dreiser papers Ms. Coll. 30 This is a finding aid. It is a description of archival material held at the University of Pennsylvania. Unless otherwise noted, the materials described below are physically available in our reading room, and not digitally available through the web. Summary Information Repository: University of Pennsylvania: Kislak Center for Special Collections, Rare Books and Manuscripts Creator: Dreiser, Theodore, 1871-1945 Title: Theodore Dreiser papers Date: circa 1890-1965 (bulk dates 1897-1955) Call Number: Ms. Coll.
30 Extent: 244 linear feet (503 boxes) Language: English Abstract: Contains 22 series, including correspondence (118 boxes); legal matters (7 boxes); writings (260 boxes), comprising books, essays, short stories, poems, plays, screenplays, radio scripts, addresses, lectures, interviews, introductions, and prefaces; journals edited by Dreiser (6 boxes); notes (9 boxes); diaries (5 boxes); biographical material (1 box); memorabilia (41 boxes), comprising scrapbooks, photographs (many of which are available online), art work, promotional material, postcards, and miscellanea; financial records (5 boxes); clippings (23 boxes); works by others (12 boxes); and oversize materials (2 boxes). Also includes materials regarding various family members: brother Paul Dresser (8 boxes of correspondence, sheet music and lyric sheets, clippings and memorabilia, and two plays written by Dresser); second wife Helen Dreiser (4 boxes of diaries and other writings); and niece Vera Dreiser (2 boxes of correspondence). Cite as: Theodore Dreiser papers, Kislak Center for Special Collections, Rare Books and Manuscripts, University of Pennsylvania Finding Aid's Permanent URL: http://hdl.library.upenn.edu/1017/d/ead/upenn_rbml_MsColl30 PDF Version: Return to Top » Biography/History During the Congress on Literature at the Chicago World's Fair of 1893, Hamlin Garland expressed America's need for a new kind of literature. Garland called this new literature "veritism" and "local color"—something authentically American rather than derivative of Europe. At the same time, twenty-two-year-old Theodore Dreiser was in Chicago covering the World's Fair as a reporter for the St. Louis Republic. Although Dreiser did not attend the Congress on Literature, he was to play a principal role in the fulfillment of Garland's dream for American literature in the decades that followed. (Herman) Theodore Dreiser (1871-1945) was born in Terre Haute, Indiana on 27 August 1871. He was a sickly child, the ninth in a family of ten surviving children (three older boys had died in infancy). Theodore's mother, Sarah Maria Schänäb, of Czech ancestry, was reared in the Mennonite faith on a farm near Dayton, Ohio. His father, John Paul Dreiser, was a German immigrant, who left Mayen in 1844 at the age of twenty-three to avoid conscription. He eventually traveled to America to follow his trade as a weaver, ending up at a mill in Dayton, Ohio, where he met the then seventeen-year-old Sarah. John Paul Dreiser was a devout Catholic, Sarah Schänäb, somewhat Protestant and decidedly pagan in her approach to the world—she was extremely superstitious and romantic. The couple ran off together and married in 1851, Sarah not quite eighteen, John Paul then twenty-nine. Sarah was immediately disowned by her family, militant anti-Catholics. The couple settled first in Fort Wayne, Indiana and then in Terre Haute, where John Paul became quite successful in the woolen business. There were six children in the family in 1867 when the Dreisers moved to Sullivan, Indiana and John Paul borrowed significantly in the hopes of becoming an independent wool manufacturer. These hopes were destroyed in 1869 when his factory burned to the ground. John Paul was injured severely by falling timber as he tried to save his dream. By the time he recovered and moved his family back to Terre Haute, the Dreisers were deep in debt, for John Paul insisted on paying back every dollar that he owed. 
Discouraged to the point of despair, he abandoned his career and became obsessed with religion and the salvation of his family. When Theodore Dreiser was born in 1871, his family was settled firmly in the depths of poverty. There were eight older siblings: Paul, Marcus Romanus (known as Rome), Mary Frances (Mame), Emma, Theresa, Sylvia, Al, and Claire. Younger brother Ed would follow two years later. Dreiser's father was only sporadically employed. The older children were out of the home, picking up what work they could, mostly getting into trouble. The family had a reputation in Terre Haute for being behind in their bills with wild sons and flirty daughters. Each morning they knelt around the father as he asked for a blessing for the day, and there was a similar blessing each night. Despite these prayers and stern punishments at the hand of John Paul, it was too late. The older boys ran away from home; the older girls were involved in affairs. The Dreiser family was out of control, abetted by Sarah's leniency toward her children. Young Theodore Dreiser grew up in this environment of uncertainty. He often went to bed hungry. There was no money for coal, and Theodore would go with his older brother Al to pick some up along the tracks of the railroad. His mother took in washing and worked at scrubbing and cleaning. Always sensitive, Theodore was humiliated to wear ragged clothing and to sneak coal from the tracks. He stuttered; he cried easily; he was a homely child, with protruding teeth and a cast in one eye. Thin, pale, bullied by other boys, he spent his days alone for the most part. Yet Dreiser was also intensely curious about life, watching sunrises, observing birds in flight, exploring the Indiana countryside. He hated his father's world of censored joy and authority and loved his mother's romantic dreams. Dreiser realized that his family was poor and that they were looked down upon; he dreamed of having a home like those of the wealthy on Wabash Avenue, of having money and fine clothing. Within Theodore Dreiser's harsh world of poverty there was always a contrasting element of the fantastic. First it was his mother's world of fancy—the family constantly moved at her whim, for she was always certain that something better was just over the horizon. As he grew older, the world of the wealthy town became his fantasy. Then there was the fantastic success of his oldest brother, Paul Dreiser. Paul had left home, joined a minstrel troupe, and achieved much success with his musical talents. Writing, singing, and performing in minstrel shows, he even changed his name to Paul Dresser, which he felt would be more memorable to his public. When Theodore was twelve he moved with his mother to Chicago where his older sisters had secured an apartment. Again there was the fantastic contrast of his old life in a small Indiana town to the city, with its size, its activity, and its color. The ways of the city would continue to fascinate Dreiser throughout his life. When the venture in Chicago failed, Theodore's mother moved him to Warsaw, Indiana, near where she had some land that had been left to her by her father. It was in Warsaw that Theodore first attended a non-Catholic school. Instead of the fear and trepidation of his earlier education, he found encouragement, first in the person of twenty-one-year old May Calvert, his seventh grade teacher. Miss Calvert took an interest in Theodore, encouraging him to use the local library and his imagination. She remained his life-long friend and confidant. 
At the age of seventeen, in a hardware store in Chicago where Theodore had found work, he met up with a former teacher, Mildred Fielding, now principal of a Chicago high school. Miss Fielding had seen promise in him as well, thought him deserving, and wanted to send him to Indiana University at her own expense. In the fall of 1889 Dreiser arrived at the Bloomington campus. Dreiser spent only a year at Indiana University. The experience showed him a world of possibilities, but he felt socially outcast and unsuccessful and was not really stimulated by any of his courses. Theodore returned home, now almost nineteen years old, and found a job in a real estate office. He enjoyed some success in this field and gained a bit of confidence. That fall, however, his mother became ill. On 14 November 1890, Theodore came home for lunch to find her in bed. As he helped her sit up, she went limp: Sarah Dreiser died in her son's arms at the age of fifty-seven. Theodore, always his mother's favorite because he was so slight and sensitive, felt alone in the world. The Dreiser family, only held together at this point by Sarah's love for all, fell irreparably apart. Theodore drifted into one job after another: driver for a laundry; collector for a furniture store. While these jobs provided him with an income, none allowed for the expression of ambition and artistic ability that he felt within. In his memoirs Dreiser stated that it occurred to him at that time that newspaper reporters were men of importance and dignity, who by dint of interviewing the great were perceived their equal. It was now 1892 and Theodore had returned to Chicago, which was preparing for the upcoming World's Fair and the Democratic National Convention. Dreiser was curious enough about these events to write his own news stories about them, finding his to be as good as those published in the papers. In June of 1892—after much determined footwork on his part—Theodore Dreiser landed a job on the Chicago Globe. Dreiser's intense curiosity about life was well-suited to work as an investigative journalist. In Chicago and later, in 1893 when he went to St. Louis to work for the Globe-Democrat and the  Republic, Dreiser became known for his human interest pieces and "on-the-scene" reporting style: his articles were written in a manner that put the reader at the tragedy of a local fire or the action of a public debate. It was at the Republic in 1893 that Dreiser was given the job of escorting twenty female St. Louis school teachers to the Chicago World's fair and to write about their activities on the journey. One of these was Sara Osborne White, twenty-four and two years older than Dreiser. She came from Montgomery City, seventy-five miles west of St. Louis. Dreiser fell in love with her figure, dark eyes, and thick red hair (it was this last feature which led her friends and family to call her by the nickname "Jug," for her hair was so thick around her face that it was said to resemble a red jug). Dreiser, desiring her and aching for a chance to fulfill his always pressing sexual needs, took little time to propose. Dreiser, however, was also driven by a desire for fame. His brother Paul showed up in St. Louis, and his talk of New York was alluring. Theodore was ready for a change. A young reporter friend on the Republic told him of a country weekly in his home town of Grand Rapids, Ohio, which could be purchased for very little. Dreiser thought that he could have great success on his own. 
In 1894, with promises to send for Jug soon, Dreiser boarded a train for Ohio. He arrived to find that the paper was small, with a subscribership of less than five hundred. The office was a shambles. There wasn't enough to it to even attempt to make a go, Dreiser thought. He moved on to Toledo, where he asked for a job from the city editor of the Toledo Blade, twenty-six year old Arthur Henry. The two men got along quite well, and Henry found a few reporting assignments for Dreiser. Henry was an aspiring poet and novelist; Dreiser was aspiring to be a playwright. The men spent hours in talk about their literary dreams. Unfortunately, no permanent opening materialized at the  Blade, and Dreiser moved on to Cleveland to look for work. After doing some feature work for the  Leader, he moved to Pittsburgh in the same year, where he immersed himself in research and articles concerning labor disputes that had culminated in the Great Strike of 1892 at Homestead. From there he went to New York and received a job at Pulitzer's paper,  The World, which was leading the fight in the yellow journalism war against Hearst's  Journal. He covered a streetcar strike in Brooklyn by actually going out and riding the rails during the strike to see angry workers confronting scab drivers. He later incorporated these impressions into his first novel,  Sister Carrie. Dreiser was drawn to the contrasts between the wealthy and the poverty stricken in New York. He quit his job at The World after only a few months, because he wasn't being allowed to produce the type of human interest stories that he thought should be told. He then lived, partly by choice and partly by necessity, on the streets of New York, where he took in the life of the downcast. At last he turned up at the New York offices of Howley, Haviland & Company, the music publishing firm run by his brother Paul and associates. He proposed to the men the idea of selling a magazine of popular songs, stories, and pictures. He would edit the magazine and it would help sell the company's songs. Thus, in 1895 Dreiser became "Editor-Arranger" for  Ev'ry Month, "the Woman's Magazine of Literature and Music." In addition to writing his own "Reflections" column for each issue—in which he set forth his philosophies on such varied topics as the possibility of life on Mars, working conditions in the sweat shops, yellow journalism, and the plight of New York's poor—Dreiser also solicited syndicated stories by the better known American writers of his day, such as Stephen Crane and Bret Harte. After Ev'ry Month turned into a losing venture in 1897, Dreiser freelanced articles for various magazines. He was one of the original contributors to  Success magazine, for which he interviewed the successful men of his time: Andrew Carnegie, Marshall Field, Philip D. Armour, Thomas Edison, and Robert Todd Lincoln. As the twentieth century approached, Dreiser wrote articles on the advances of technology, with titles like "The Horseless Age" and "The Harlem River Speedway" for some of the most popular magazines of the day, such as  Leslie's,  Munsey's,  Ainslee's,  Metropolitan,  Cosmopolitan, and  Demorest's. He compiled the first article ever written about Alfred Stieglitz, who seemed to combine in one Dreiser's interest in art and technology. This writing set him in good straits financially. He now could afford to marry Jug, a marriage that, in spite of second thoughts on his part, he undertook in a very small ceremony in Washington, D.C., on 28 December 1898. 
The Dreisers took up residence in New York, but in the summer of 1899, at the request of Arthur Henry, made an extended visit to Ohio. Henry thought that it was time for Dreiser to work on his fiction. Together the two men spent the summer churning out articles and splitting the money that they earned fifty-fifty, thus giving each the time to work on his literary endeavors. It was here that Dreiser began Sister Carrie. At the same time he became interested in the plight of workers in the South. He did a series of special articles for  Pearson's Magazine, which included investigations of a "Model Farm" in South Carolina, Delaware's "Blue Laws," and Georgia's "Chain Gangs." All three dealt with society's punishment of those who transgressed, a theme that Dreiser would investigate thoroughly in his novels. In addition, Dreiser wrote six special articles on the inventor Elmer Gates, who had invested the money gained from his inventions on a facility for psychological research: it was called the Elmer Gates Laboratory of Psychology and Psychurgy. Gate's studies of learning, perception, the physiological effects of the emotions, and the will underlay the ways in which Dreiser shaped Hurstwood's actions in  Sister Carrie. Journalism remained a steady source of income for Dreiser throughout his life and supported his literary endeavors—he became a top editor for Butterick's Delineator in 1907, a silent publisher of the  Bohemian in 1909, and in the 1930s an editor of  The American Spectator. The events that led up to the publication of  Sister Carrie in 1900, however, began a new phase in Dreiser's career—that of the heavily-edited novelist. Before the book was published, Dreiser was forced to change all names that could be attached to any existing firms or corporations. All "swearing" was to be removed. Frank Doubleday demanded that the novel have a more romantic title, and on the original contract the work bears the name "The Flesh and the Spirit," with Dreiser's "Sister Carrie" penciled in beside it. Editing was performed even after Dreiser returned the author's proofs to Doubleday, Page & Co. When Frank Doubleday read the final draft (after, by the way, Page had already signed the contract with Dreiser), he pronounced the book "immoral" and "badly written" and wanted to back out of its publication. Dreiser held Doubleday, Page to its word, however, and  Sister Carrie was printed; but only 1,000 copies rolled off the presses, and 450 of these remained unbound. It was not listed in the Doubleday, Page catalogue. The firm refused to advertise the work in any way. A London edition of  Sister Carrie (published in 1901), however, did well and was favorably reviewed. The  London Daily Mail said: "At last a really strong novel has come from America." Dreiser would spend his entire literary career struggling with editors, publishers, and various political agencies, all of whom desired to make his works "suitable for the public." Although Dreiser began his second novel, Jennie Gerhardt (1911), upon completion of  Sister Carrie, his intense dissatisfaction with the changes and complaints that the publishers had made, combined with the treatment that  Sister Carrie was receiving, caused him to lose his health and delayed completion of  Jennie Gerhardt for nearly ten years. In 1916 Dreiser, along with H. L. Mencken, fought against the New York Society for the Suppression of Vice when its president, John Sumner, forced withdrawal of  The "Genius" (published in 1915) from bookstore shelves. 
The fight dragged on through 1918, and The "Genius" remained in storerooms until 1923, when it was re-issued by Horace Liveright. In 1927 Liveright was to become involved in Dreiser's biggest battle for freedom of literary expression, when Dreiser's An American Tragedy (1925), the story of the Chester Gillette-Grace Brown murder case, was banned in Boston. Clarence Darrow was a witness for the defense. The case lingered in the courts, at great expense to both Dreiser and the Liveright firm. Between beginning the writing of The "Genius" and publishing An American Tragedy, Dreiser was prolific. He published the first two novels in his Cowperwood trilogy, The Financier (1912) and The Titan (1914); a book of travel articles entitled A Traveler At Forty (1913); a collection of plays, Plays of the Natural and Supernatural (1916); and a travelogue of his experiences on a car trip through his home state of Indiana, A Hoosier Holiday (1916). These were followed with Free and Other Stories in 1918; Twelve Men in 1919; The Hand of the Potter (a Tragedy in Four Acts) also in 1919; Hey Rub-a-Dub-Dub in 1920; A Book About Myself, 1922; and The Color of A Great City in 1923. In the meantime, Dreiser was beginning a third phase in his career, champion of freedom in all aspects of life. He made his first trip to Europe in 1912, and in London he picked up a prostitute and cross-examined her about life. He visited the House of Commons and was sickened by the slums of the East End. This experience, combined with a seeming inferiority complex on his part at the self-assurance apparently inborn in the British, caused Dreiser to develop a life-long hatred of the British and may have had something to do with his sympathy for Germany during World War I. Back home in the United States he tried to organize a society to subsidize art and championed the causes of oppressed artists like himself. After the publication of An American Tragedy, Dreiser was more highly sought after by political organizations than before. In 1926, while visiting Europe, he commented on the events occurring in Germany: "Can one indict an entire people?" The answer, he felt, was yes. In 1927 Dreiser was invited to the U.S.S.R. by the Soviet Government. The Soviets thought that Dreiser's opinion of their nation would have weight in America and that he would be favorable to their system of government (Dreiser's books sold well in the Soviet Union). During the visit Dreiser met with Soviet heads of state, Russian literary critics, movie directors, and even Bill Haywood, former American labor leader. Dreiser kept extensive journals of the trip. He approved of the divorce of religion from the state, praised new schools and hospitals, but was repelled by the condition of hundreds of stray children scattered about the country. In 1928 Dreiser visited London, where he met with Winston Churchill, with whom he discussed Russia's social and military importance. He also took time to criticize the working conditions of mill workers in England. Dreiser escalated these political involvements throughout his life. He helped bring former Hungarian premier Count Michael Károly to the United States after the Communist takeover in 1930. During the 1930s he addressed protest rallies on behalf of Tom Mooney, whom he visited in San Quentin, where Mooney was serving a term for his alleged participation in a bombing incident in San Francisco.
Dreiser met with Sir Rabindranath Tagore in 1930 to discuss the success of the Soviet government and the hopes of India. In 1931 Dreiser cooperated with the International Labor Defense Organization and took an active part in the social reform program of the American Writers' League, of which he would later become president. In 1931, as chairman of the National Committee for the Defense of Political Prisoners, Dreiser organized a special committee to infiltrate Kentucky's Harlan coal mines to investigate allegations of crimes and abuses against striking miners. Dreiser's life was threatened for calling attention to the matter. Dreiser, John Dos Passos, and others on the "Dreiser Committee," as it was called, were indicted by the Bell County Grand Jury for criminal syndicalism, and a warrant was issued for Dreiser's arrest. Franklin D. Roosevelt, Governor of New York at the time, said he would grant Dreiser an open hearing, and John W. Davis agreed to defend the Committee. Due to widespread publicity and public sentiment, however, all formal charges against Dreiser and the Committee were dropped. Dreiser became even more involved with social reform after this incident. In 1932 he met with members of the Communist Party in the United States. Dreiser criticized the U.S. Communist Party for being too disorganized. That year he was invited to write for a new literary magazine that would be free of advertising, the American Spectator. Dreiser became and remained associate editor of the magazine until the other editors agreed to accept advertising, at which point he resigned. In 1937 Dreiser attended an international peace conference in Paris, because he was interested in the outcome of the Spanish Civil War. When he returned from Europe, he visited with President Roosevelt to discuss the problem and to try to influence him to send aid to Spain. In 1939 Dreiser again traveled to Washington, D.C. and to New York to lecture for the Committee for Soviet Friendship and American Peace Mobilization. He published pamphlets at his own expense and gave radio addresses. He published America Is Worth Saving, a work concerning economics and intended to convince Americans to avoid involvement in World War II. In 1945, just before his death, Dreiser joined the Communist Party to signify his protest against America's involvement in the war. During these years, Dreiser was still publishing—articles, poems, pamphlets, leaflets, and novels. In 1926 he brought out an edition of poetry, Moods: Cadenced and Declaimed. Chains followed in 1927, a book of short stories and "lesser novels." Other works include: Dreiser Looks at Russia (1928); The Carnegie Works at Pittsburgh (1929); A Gallery of Women (1929); My City (1929); Fine Furniture (1930); Dawn (1931); Tragic America (1931); and America Is Worth Saving (1941). In addition, Dreiser was working on several things at the time of his death, some of which were published posthumously: The Bulwark (1946); The Stoic (1947); and a philosophical and scientific treatise that would later be edited and published by Marguerite Tjader and John J. McAleer and titled Notes on Life (1974). There were many sides to Theodore Dreiser, beyond his literary and political efforts. He was greatly interested in scientific research and development; he collected a great many books and much information on the latest scientific concerns. In 1928 he met Jacques Loeb of the Rockefeller Institute and visited the Marine Biological Laboratory in Woods Hole, Massachusetts.
Later visits to the Mt. Wilson Observatory in California and the California Institute of Technology would impress him greatly. He had a longstanding correspondence with Dr. A. A. Brill, the psychologist who was largely responsible for introducing Jungian and Freudian analysis to New York. He also championed the works of Charles Fort, a "free-thinker" who was determined to establish that science was "unscientific" and that his own vision of the universe as a place where "anything could happen and did" (Swanberg, 224) was the correct one. Dreiser was particularly fascinated with genetics, which he felt explored the true "mysteries of life." In 1933, he attended the Century of Progress Exposition in Chicago, specifically with the intent of working on a number of scientific essays, which he continued to compile over his lifetime (and which would later find their way into Notes on Life). Another area of special interest for Dreiser was philosophy, a subject that he explored in great detail and about which he collected and wrote extensively. His tastes ranged from Spencer to Loeb and from Social Darwinism to Marxism. His published and unpublished writings indicate that Dreiser drew heavily on such philosophers and philosophies to confirm his own views of the nature of man and life. No biography of Theodore Dreiser would be complete, however, if it did not touch upon his personal life: as one friend put it, it is hard to understand how Dreiser could be so concerned about humanity and at the same time so utterly cruel to an individual human being. His marriage to Sara Osborne White was on shaky ground from the start: he never seemed able to devote himself to one woman. As Sara herself put it: "All his life [Theo] has had an uncontrollable urge when near a woman to lay his hand upon her and stroke her or otherwise come into contact with her" (Swanberg, 137). The two separated in 1910, with Sara returning to Missouri for a time (she would later move to New York on her own) and Dreiser moving on to other women. In 1919, Helen Patges Richardson, a second cousin to Dreiser (her grandmother and Dreiser's mother were sisters), showed up at his doorstep, making the long journey from her home state of Oregon to meet her New York cousins. She would become Dreiser's companion for the rest of his life; they eventually married in 1944. Their relationship was stormy at best: Dreiser never changing his ways with regard to other women, Helen persisting—perhaps beyond all reason—in her devotion to his genius. As she phrased it: "He expected his complete freedom, in which he could indulge to the fullest, at the same time expecting my undivided devotion to him" (Swanberg, 290). In November 1951 Helen had the first of several strokes that would eventually incapacitate her; she moved to Oregon to live with her sister, Myrtle Butcher, and died in 1955. In addition to his infidelities with regard to women, Dreiser's professional relationships were periodically marred by scandal. He was in the habit of lifting material directly from sources and including it, for the most part unchanged, in his works. Many readers of An American Tragedy, for example, who lived in the Herkimer County area (where the Chester Gillette-Grace Brown incident had occurred), wrote to Dreiser concerned that his book contained sentences lifted directly from court documents or local newspapers.
In 1926 a knowing reader announced that Dreiser's poem "The Beautiful," published in the October issue of Vanity Fair, was a plagiarism of Sherwood Anderson's poem "Tandy." Since Dreiser and Anderson were friends, the incident blew over rather quickly. Such was not the case, however, in 1928, when Dorothy Thompson accused Dreiser of plagiarizing her serialized newspaper articles regarding her trip to Russia (she and Dreiser had been there together) in his book Dreiser Looks at Russia (Ms. Thompson had published these articles in her own collected work, The New Russia, two months prior to Dreiser's publication). Ms. Thompson filed suit against Dreiser, and the press took Dreiser to task on this and earlier cribs. Although Dorothy Thompson eventually dropped her suit, it colored the opinion of some of Dreiser's colleagues towards his works. It also led to another ugly incident in 1931, when, at a dinner at the Metropolitan Club honoring the visiting Russian novelist Boris Pilnyak, Sinclair Lewis (Dorothy Thompson's husband and that year's winner of the Nobel Prize in literature) stood up to speak to the gathered literary notables and, after stating his pleasure at meeting Mr. Pilnyak, added: "But I do not care to speak in the presence of one man who has plagiarized 3,000 words from my wife's book on Russia" (Swanberg, 372). At the end of the reception that followed, Dreiser walked over to Lewis and demanded an explanation. Lewis repeated his accusation, at which point Dreiser slapped his face. Lewis, undaunted, repeated the accusation a third time and received a second slap. Again, the incident was widely publicized in the papers and fueled an aversion on the part of many to Dreiser's private self. Yet despite his personal and public scandals, Dreiser's achievements in establishing a truly American literature and his one-man crusade for social justice set standards for those of his time and those who would follow. Sherwood Anderson, John Dos Passos, James T. Farrell, Edgar Lee Masters, H. L. Mencken, Upton Sinclair—these and many others—acknowledged publicly or privately a debt owed to the example of Dreiser. In a final tribute to Dreiser, upon his death in 1945, H. L. Mencken wrote: "… no other American of his generation left so wide and handsome a mark upon the national letters. American writing, before and after his time, differed almost as much as biology before and after Darwin. He was a man of large originality, of profound feeling, and of unshakeable courage. All of us who write are better off because he lived, worked and hoped" (Swanberg, 527). Scope and Contents The Theodore Dreiser collection at the University of Pennsylvania Library is the principal repository for books and documents concerning Dreiser's personal and literary life. The Collection at large includes Dreiser's own library and comprehensive holdings in both American and foreign editions of his writings, as well as secondary works. At the heart of the Collection, however, are the Theodore Dreiser Papers. They comprise 503 boxes and include correspondence; manuscripts of published and unpublished writings; notes; diaries; journals edited by Dreiser; biographical material; memorabilia, including scrapbooks, photographs, postcards, promotional material, art, and personal possessions; financial and legal records; clippings covering Dreiser's literary life, beginning with his career as a newspaper reporter in the 1890s; and microfilms of material housed in this and other collections.
Also contained in the Papers are correspondence, works, and memorabilia of Dreiser's brother, Paul Dresser; his second wife, Helen Patges (Richardson) Dreiser; and his niece, Vera Dreiser Scott. Finally, the Papers include works of fiction, nonfiction, and poetry that were sent to Dreiser, as well as works that were written about him. Although the Papers contain documents dated as early as 1858 and as late as 1982, the bulk of the materials falls between the years 1897 and 1955. Dreiser's initial bequest of materials to the University of Pennsylvania occurred in 1942; shipments continued until 1955, the last following Helen Dreiser's death. Gifts and purchases have enriched Penn's Dreiser collection, including the Papers, to such an extent that little of significance regarding Dreiser's life and work is unavailable to the researcher working at Penn. It is no accident that the University of Pennsylvania became the home for Theodore Dreiser's papers. Historically, the study of American literature was undervalued by English literature departments, which often exhibited a provincial subservience to English letters.[1] At the University of Pennsylvania, however, pioneers like Arthur Hobson Quinn began teaching courses in the American novel in 1912 and in American drama in 1917. Dr. Quinn believed that one reason for the neglect of American writing in colleges was that "the literature had been approached as though it were in a vacuum, divorced from unique historical and economic conditions which had produced it."[2] Emphasizing the necessity for an historical approach to the subject, he was instrumental in the adoption in 1939 of a curriculum in American studies by the graduate school of the University of Pennsylvania and in 1942 by the undergraduate school. Other Penn faculty, such as E. Sculley Bradley and Robert Spiller, shared Dr. Quinn's devotion to and assessment of American studies. They actively sought to acquire the research materials that they deemed essential to an historical approach. In the late 1930s, Robert Elias, a graduate student in the English Department at Penn, sought out Dreiser in order to use Dreiser's papers for his doctoral dissertation. Penn faculty then approached Dreiser about depositing his collection with the University. Dreiser was aware of his place in the evolution of American literature and of the value of his papers to scholars and collectors. His first literary bequest was the manuscript of Sister Carrie, which was a gift to his friend H. L. Mencken. Dreiser and Mencken often discussed the final disposition of their papers and agreed that settling on one institution for an entire collection was better than dividing it among several. Unfortunately, during periods of financial insecurity throughout his lifetime, Dreiser offered various pieces of his literary legacy to collectors or auctioneers in return for ready cash. Some of the manuscripts that were sold have found their way back to his own collection at Penn through donations or purchases, but writings not accounted for here or in other collections are presumed to be in private hands or lost. It is unlikely that Dreiser himself destroyed them, although others close to him may have done so to protect their privacy. He blamed his first wife, Sara White Dreiser, for the destruction of the first manuscript of The "Genius", and it is known that she and her relatives destroyed some of his letters to her and bowdlerized others that are held by Indiana University.
Although the University of Pennsylvania has the largest and most comprehensive collection of Dreiser's papers, there are some gaps in its coverage. Over the years, Penn has acquired photocopies and microfilms of some holdings from other collections, which are mentioned either in the container list or in an appendix. A study of the series description and the container list confirms that, with few exceptions, even those writing projects for which gaps exist are represented by enough material to give the researcher a sense of Dreiser's plan for the work and its evolution as he worked it out from manuscript to publication. An annotated list of institutions with significant holdings on Dreiser can be found in Theodore Dreiser: A Primary Bibliography and Reference Guide (2nd ed.), by Donald Pizer, Richard W. Dowell, and Frederic E. Rusch (Boston: G. K. Hall & Co., 1991). Dreiser was a prolific writer and correspondent and one who saved almost everything he wrote, from the initial notes for a piece of writing to the discarded pages from revised manuscripts. In addition to preserving his manuscripts, Dreiser saved incoming personal and business correspondence and made carbons of outgoing correspondence, especially after he began to have regular secretarial help in the 1920s. He was a compulsive rewriter of his own work and enlisted the aid of friends, associates, and professional editors in the work of revision. After a manuscript was transformed into a typescript, carbons of it were often circulated among his associates for their editorial suggestions. Many of these copies, in addition to the drafts Dreiser revised himself, are housed in this collection, so it is possible to determine some of the influences on Dreiser's work and to better understand the way Dreiser carried out the process of writing. Correspondence is arranged alphabetically by correspondent and then chronologically within each correspondent's file. Items of incoming and outgoing correspondence are interfiled. Care should be taken by researchers not to remove or misplace the white interleaving sheets found in many folders; this paper acts as a barrier to keep carbons of outgoing correspondence from acid-staining original letters housed next to them. Unidentified correspondence is housed immediately after the alphabetical correspondence files. Following the "Unidentified Correspondence" are two additional series of correspondence, one entitled "Miscellaneous Correspondence," the other "Legal Matters." "Miscellaneous Correspondence" comprises two case files, one of materials relating to or collected by Estelle Kubitz Williams, the other of correspondence relating to exhibitions or the collecting of Dreiser's works by the Los Angeles Public Library. "Legal Matters" consists of six distinct files pertaining to various legal matters involving Dreiser. The governing criterion for separating correspondence from the alphabetical correspondence file was whether the material in a file was collected primarily by Theodore or Helen Dreiser or by someone else. This rule explains why two other series, entitled "Paul Dresser Materials" and "Vera Dreiser Correspondence," have been separated from the alphabetical correspondence files and housed later in the collection under the general title "Family Members."
(It should be noted that, while "Paul Dresser Materials" contains a large addition of materials from outside sources, many items in it were indeed collected by Theodore and Helen Dreiser; this file became so large, however, and contained so much material that was not correspondence that the decision was made to separate it from the main body of correspondence.) In organizing the manuscripts in this collection, consideration was given to Dreiser's habits of writing, his own presumed plan or arrangement of his papers, the scope of Penn's actual holdings, and the needs of researchers. The fact that the bulk of this collection has been at the University of Pennsylvania since the late 1940s and was opened to scholars before being completely processed makes Dreiser's own organizational schema difficult to determine in 1990. It is known that even before his papers were shipped to the University of Pennsylvania they were reordered several times by his wife or assistants. It is also known that during the preliminary sorting at Penn related items that had arrived clipped together were separated, and no record was kept of their original arrangement. Over the years users of the collection have rearranged files and papers to suit the purposes of their own research and have neglected to restore what they moved to its original order. Most unfortunately, some papers that arrived with the collection in the 1940s have disappeared. How did Dreiser's habits of research and writing influence the final arrangement of the papers? It is important to remember that he was an extremely productive writer in many genres: novels, essays, short stories, poetry, play scripts, and screenplays. Because his funds were often low, he wanted to recycle his publications so that they generated more than one income. For example, he wrote novel-length works but hoped to sell to the periodicals short pieces adapted from these longer works and thus to collect a book royalty as well as a payment for the extracted piece. He followed this process in reverse: manuscripts originally sold and published as essays, poems, or short stories were often combined later and sold as book-length units. Some books, such as An American Tragedy, were adapted into play scripts and motion picture screenplays and thus could be marketed again. How to order these related writings both to preserve their integrity as particular genres and to show their relationship to one another was an important consideration in processing Dreiser's papers. Because many of Dreiser's essays, short stories, poems, and play scripts were published both individually in periodicals and later as parts of collections of similar works, they could have been filed with others of the same genre or collected under the book title Dreiser eventually chose for them. Researchers should check the container list under TD Writings: Books and the appendices for other relevant genres because sometimes a piece of writing, or versions of it, will be found in both locations. For example, the stories that comprise Free and Other Stories and Chains are filed alphabetically in TD Writings: Short Stories because the University of Pennsylvania Dreiser Papers lacks the "book manuscript" for these stories that is known to have existed at one time.
By contrast, Penn does have manuscripts, typescripts, and typesetting copy for the studies that were published in A Gallery of Women, and Dreiser's lists and correspondence indicate that he wanted these studies to be published as a unit even though he published some of them first in periodicals. Thus, the researcher will find some of these essays in two places: tearsheets from the periodical publication of the essay filed alphabetically in TD Writings: Essays and manuscripts and typescripts of the essays labeled by Dreiser A Gallery of Women housed under that title in TD Writings: Books. In addition to recycling published works into other publications, Dreiser sometimes used the same title for writings in two different genres. For example, an essay and a short story are both entitled "Kismet"; "The Factory" is the title for both an essay and a poem; "Credo" is an essay but "The Credo" is a short story; three poems bear the title "Love" and two "Life." Using the same story line, Dreiser wrote a playscript and a screenplay called "The Choice." He wrote a playscript "Solution" based on his short story of the same title. The appendices for all the genres should be consulted for titles so that the researcher does not overlook any relevant adaptations. The autobiographical character of much of Dreiser's writing occasionally makes the distinction between an essay and a short story a problematic one. Unless Dreiser specified directly, his intent is impossible to recover at this point because the policy followed for distinguishing between the two when the collection underwent its preliminary sorting in the 1940s is unknown. With the exception of a few obvious misfilings, the stories and essays have been left in their pre-1990 processing genre. Researchers should check both TD Writings: Essays and TD Writings: Short Stories for titles. Dreiser's work habits and filing practices also meant that some flexibility was required in defining authorship of the papers in this collection. Sometimes Dreiser developed an idea or a theme for a series of articles, whereupon he would contact lesser-known writers and ask them to compose essays on this theme, with the understanding that he would edit and perhaps rewrite the essays and have the series published under his name. Occasionally the original writer of these pieces cannot be determined because Dreiser had the essay retyped under his name before submitting it to a publisher. Because Dreiser was the author of the idea for the series, as well as the author of one or more of the essays, all manuscripts in the series are housed in TD Writings: Essays under the name of the series, with the name of the actual author of the essay (if known) noted on the folder. The same policy was followed for other works inspired by Dreiser's ideas or writings. Dreiser's own identifying terminology is used to describe the contents of a folder unless it is clearly incorrect. Most of the manuscript material from the Dreisers was wrapped in brown paper or manila envelopes with a notation by Dreiser or Helen Dreiser describing the contents. Unfortunately, when the papers arrived at Penn and were rehoused in the preliminary sort, some sources of identification were not documented on the folders. Sources of identification that are questionable for any reason are so indicated on the folders. If the item was not identified originally or was identified incorrectly, a descriptive term has been supplied.
In processing the Theodore Dreiser Papers, extensive use was made of the biographies Dreiser (1965), by W. A. Swanberg, and the two-volume study Theodore Dreiser: At the Gates of the City, 1871-1907 (1986) and Theodore Dreiser: An American Journey, 1908-1945 (1990), by Richard Lingeman; the biographical study Forgotten Frontiers: Dreiser and the Land of the Free (1932), by Dorothy Dudley; the memoirs My Life with Dreiser, by Helen Dreiser (1951), Theodore Dreiser: A New Dimension, by Marguerite Tjader (1965), and My Uncle Theodore, by Vera Dreiser with Brett Howard (1976); the collections Letters of Theodore Dreiser: A Selection (3 vols.), edited by Robert H. Elias (1959), Dreiser-Mencken Letters: The Correspondence of Theodore Dreiser & H. L. Mencken 1907-1945 (2 vols.), edited by Thomas P. Riggio (1986), and Theodore Dreiser: American Diaries 1902-1926, edited by Thomas P. Riggio (1982); and the reference work Theodore Dreiser: A Primary Bibliography and Reference Guide (2nd ed.), by Donald Pizer, Richard W. Dowell, and Frederic E. Rusch (1991). The last-mentioned work comprises not only a primary bibliography of the works of Theodore Dreiser, but also an annotated bibliography of writings about Dreiser from 1900 to 1989. Endnotes [1] In American Literature and the Academy Kermit Vanderbilt reviews in depth "the embattled campaign to build respect for America's authors and create standards of excellence in the study and teaching of our own literature." His book was published in 1986 by the University of Pennsylvania Press. [2] Neda M. Westlake, "Arthur Hobson Quinn, Son of Pennsylvania," The University of Mississippi Studies in English, Volume 3, 1982, p. 15. Administrative Information Publication Information University of Pennsylvania: Kislak Center for Special Collections, Rare Books and Manuscripts, 1992 Finding Aid Author Finding aid prepared by Julie A. Reahard and Lee Ann Draud Sponsor The processing of the Theodore Dreiser Papers and the preparation of this register were made possible in part by a grant from the National Endowment for the Humanities and by the financial support of the Walter J. Miller Trust. Use Restrictions Copyright restrictions may exist. For most library holdings, the Trustees of the University of Pennsylvania do not hold copyright. It is the responsibility of the requester to seek permission from the holder of the copyright to reproduce material from the Kislak Center for Special Collections, Rare Books and Manuscripts. Source of Acquisition Gift of Theodore and Helen Dreiser with additional donations from Myrtle Butcher; Louise Campbell; Harold J. Dies; Ralph Fabri; Mrs. William White Gleason [Dreiser-E. H. Smith correspondence]; Hazel Mack Godwin; Paul D. Gormley; Marguerite Tjader Harris; R. Sturgis Ingersoll [manuscript for Jennie Gerhardt]; Los Angeles Public Library; F. O. Matthiessen; Vera Dreiser Scott; Lorna D. Smith; Robert Spiller [galleys for The Bulwark]; and Estelle Kubitz Williams plus purchased additions, 1942-1991.
Controlled Access Headings Form/Genre(s) Clippings (information artifacts) Contracts Correspondence Diaries Essays Financial records Manuscripts, American--20th century Memorabilia Plays (performed works) Poems Short stories, American--19th century Speeches Writings (documents) Personal Name(s) Dreiser, Helen Patges, -1955 Dresser, Paul, 1858-1906 Subject(s) Authors Authors, American Authors, American--20th century Families Literature Other Finding Aids For a complete listing of correspondents, do the following title search in Franklin: Theodore Dreiser Papers Collection Inventory I. Correspondence. Series Description This first extensive series contains letters written to and from Theodore and Helen Dreiser, arranged alphabetically by correspondent, of which there are approximately 6,000. Within each correspondence file, letters are arranged chronologically. Incoming and outgoing correspondence has been interfiled. The researcher should keep in mind that letters may have crossed in the mail, especially in the case of foreign correspondence; a given letter may not have been received by Dreiser or his correspondent when one of a later date was sent. At the end of the alphabetical correspondence files is the unidentified correspondence, arranged in chronological order where possible. The majority of Dreiser's correspondence is work-related, pertaining to the various projects that he was working on at any given time. Still, the list of names of those having significant personal correspondence with Dreiser reads like a Who's Who among writers, artists, publishers, social critics, and notables of his time, for example, Sherwood Anderson, Harry Elmer Barnes, Jerome Blum, Franklin Booth, A. A. Brill, Pearl Buck, Bruce Crawford, Floyd Dell, Ben Dodge, John Dos Passos, Angna Enters, Wharton Esherick, Ralph Fabri, James T. Farrell, Ford Madox Ford, Charles Fort, Waldo Frank, Hutchins Hapgood, Dorothy Dudley Harvey, Ripley Hitchcock, B. W. Huebsch, Otto Kyllmann, William C. Lengel, Horace Liveright, Edgar Lee Masters, H. L. Mencken, Frank Norris, John Cowper and Llewelyn Powys, Grant Richards, Kathryn D. Sayre, Hans Stengel, George Sterling, Dorothy Thompson, Carl Van Vechten, and Charles Yost. Helen Dreiser's correspondence appears in the files with Theodore Dreiser's, because she often served as principal contact for Dreiser's friends and business associates: Dreiser was often either ill or busy attempting to complete book projects (especially in the later years of his life, 1943 to 1945). While the larger correspondence files relating to Dreiser's brother, Paul Dresser, and his niece, Vera Dreiser, have been moved to another section of the Papers, the alphabetical correspondence series does contain family correspondence and some significant correspondence with personal friends of Dreiser, such as that with his teacher, May Calvert Baker, and friends Lillian Rosedale Goodman and Kirah Markham. The Department of Special Collections has obtained some photocopies of Dreiser letters housed in other repositories: these are filed just as if they were original documents. All such photocopies are so marked. Receipts, canceled checks, and income tax returns are housed as series filed later in the papers. While some royalty statements do reside in the alphabetical correspondence section (when they came enclosed in letters from various publishing firms), the bulk is housed in the series titled "Financial Records." Box Folder A & C Black, Ltd.
- Alleman, Marta. 1 1-77 Allen, Ben - American Federation of Labor (1929-1931 July 14). 2 78-128 American Federation of Labor (1931 July 17-23) - American Society of Composers, Authors and Publishers. 3 129-173 American Spectator - Anderson, Sherwood. 4 174-220 Andrea, Leonardo - Austrian, Delia. 5 221-314 Author's and Writer's Who's Who - Baker & Taylor Co. 6 315-364 Balch, Jean Allen - Beard, Lina. 7 365-454 Beck, Clyde - Bicknell, George. 8 455-537 Big Brothers of America - Bland, H. Raymond. 9 538-568 Blau, Perlman & Polakoff - Boni & Liveright (1917-1921). 10 569-616 Boni & Liveright, 1922-1933. 11 617-627 Boni & Liveright (1934-1938) - Bowdoin College. 12 628-670 Bowen, Croswell - Brandt & Brandt. 13 671-719 Brandt Theatres - Brodsky, Nauda Auslien. 14 720-770 Brody, Paul A. - Burns, Lee. 15 771-864 Burnside, L. Brooks - Campbell, Louise (1917-1929). 16 865-920 Campbell, Louise, 1930-1963, undated. 17 921-930 Campbell, Mary - Chadwick Productions. 18 931-1005 Chalian, Edward - Church Management: Journal of Parish Administration. 19 1006-1076 Churchill, Judith Chase - Cluett, Peabody & Co. 20 1077-1133 Coakley, Elizabeth - Commonwealth College (Mena, Ark.). 21 1134-1197 Communist Party of the United States of America - Constable & Company (1929-1934). 22 1198-1224 Constable & Company (1935-1947) - Cotton, Mother Emma. 23 1225-1273 Coulter, Ernest Kent - The Crusaders. 24 1274-1331 Crutcher, Ernest - Curtis Brown, Ltd. (1907-1933). 25 1332-1364 Curtis Brown, Ltd. (1934-1940) - Davidson, Jo. 26 1365-1413 Davies, Marion - Delteil, Caroline Dudley. 27 1414-1469 DeMille, Cecil B. - Dimock & Fink Company. 28 1470-1529 Dinamov, Sergei - Doty, Douglas Zabriskie. 29 1530-1569 Doubleday, Doran & Company - Dreier, Thomas. 30 1570-1601 Dreiser, Albert J. - Dreiser, Helen Patges. 31 1602-1617 Dreiser, Henry - Dyer, Francis John. 32 1618-1690 E. P. Dutton - Emeline Fairbanks Memorial Library, Terre Haute, Ind. 33 1691-1772 Emergency Committee for Southern Political Prisoners - Ettelson, Samuel A. 34 1773-1831 Ettinge, James A. - Fabri, Ralph (1929-1933). 35 1832-1870 Fabri, Ralph, 1934-1943. 36 1871-1880 Fabri, Ralph (1944-1955) - Fasola, F. B. 37 1881-1915 Fassett, Lillian - Fischl, George. 38 1916-1978 Fischler, Joseph - Ford Hall Forum (Boston, Mass.). 39 1979-2032 Foreign Policy Association - Freedman, May Brandstone. 40 2033-2092 Freeman, Helen - Geisel, K. 41 2093-2182 Gelfand, Hyman A. - Goldberg, Isaac. 42 2183-2273 Golden, John - Graham, Marcus. 43 2274-2336 Grand Army of the Republic - Gunther, Ferdinand. 44 2337-2426 Guthrie, William Norman - Hampshire County Progressive Club. 45 2427-2487 Hampton, David B. - Harper & Brothers (1899-1920). 46 2488-2537 Harper & Brothers (1921-1946) - Hartwell Stafford, Publisher. 47 2538-2584 Hartwick, Harry - Hedrick, T. K. (Tubman K.). 48 2585-2638 Heilbrunn, L. V. (Lewis Victor) - Herdan, Gerald S. 49 2639-2682 Hergesheimer, Joseph - Hoffmann, W. 50 2683-2761 Hofschulte, Frank - Howe, L. V. 51 2762-2843 Howell, E. L. - Hume, Cameron & Paseltiner (1920-1933). 52 2844-2880 Hume, Cameron & Pasteltiner (1934-1942) - Ilhardt, Emil, Mrs. 53 2881-2928 Illes, Bela - International League of Leavers of Footprints in the Sands of Time. 54 2929-2975 International Literary Bureau - Isbey, H. E. F. 55 2976-3000 Isham, Frederic Stewart - Jenkins, William W. 56 3001-3057 Jenks, George C. - Johns Hopkins University. 57 3058-3098 Johnson, A. D. - Juggler(Notre Dame, Ind.). 58 3099-3173 Jules C. Goldstone Agency - Kelley, F. F. 59 3174-3250 Kelly, Fred C. 
(Fred Charters) - Kerpel, Eugen (1936). 60 3251-3286 Kerpel, Eugen (1937-1941) - The Knoxville News-Sentinel. 61 3287-3353 Knudsen, Paol - Labor Research Association (U.S.). 62 3354-3420 Labor Temple School (New York, N.Y.) - Larrimer, Mary. 63 3421-3469 Larsh, Theodora - Lemon, Willis S. 64 3470-3550 Lengel, William C., 1910-1957. 65 3551-3562 Lenitz, Josephine H. - Liesee, Edith M. 66 3563-3640 Life(New York, N.Y.) - Livraria Garnier. 67 3641-3690 Llona, Victor - Lyons & Carnahan. 68 3691-3787 M. Witmark & Sons - McCoy, Esther (1924-1933). 69 3788-3824 McCoy, Esther (1934-1977) - Mack, Hazel (1936-1944, April). 70 3825-3869 Mack, Hazel (1944 May-1946) - Malmin, Lucius J. M. 71 3870-3939 Management Ernest Briggs (Firm) - Mason, Walt. 72 3940-4006 Masseck, C. J. - Masters, Edgar Lee. 73 4007-4024 Masters, Marcia Lee - Meltzer, E., Mrs. 74 4025-4081 Mencken, H. L. (Henry Louis), 1907-1917. 75 4082-4093 Mencken, H. L. (Henry Louis), 1918-1935. 76 4094-4105 Mencken, H. L. (Henry Louis), 1936-1954, undated. 77 4106-4117 Mendelson, Edna G. - Milwaukee Writers Union. 78 4118-4202 Mind, Inc. - Monahan, Yvette. 79 4203-4239 Monatshefte für deutschen Unterricht - Motuby, Betty. 80 4240-4303 Mount, Richard - National Committee for the Defense of Political Prisoners (1931). 81 4304-4379 National Committee for the Defense of Political Prisoners (1932-1937) - Nervous and Mental Disease Publishing Co. 82 4380-4439 Nesbit, Wilbur D. - New York Library Association. 83 4440-4503 New York Mirror(New York, N.Y.) - Norstedts tryckeri. 84 4504-4567 The North American - 130 Washington Place West Holding Corp. 85 4568-4654 O'Neil, James - Oxford University Press. 86 4655-4712 P.E.N. Czechoslovakia - Patterson, William Morrison. 87 4713-4780 Pauker, Edmond - Pennsylvania Railroad. 88 4781-4825 People's Forum of Philadelphia - Piwonka, Hubert. 89 4826-4910 Plantin Press - Powys, John Cowper. 90 4911-4971 Powys, Llewelyn - Quintanilla, Luis. 91 4972-5062 R - Revue Internationale des Questions Politiques Diplomatiques et Economiques. 92 5063-5160 Rey, John B. - Roberts, William. 93 5161-5236 Robertson, John Wooster - Rossman, Carl. 94 5237-5325 The Rotarian - Salzman, Maurice. 95 5326-5421 Sampson, Emma - Schilling, Theodore. 96 5422-5486 Schindler, H. - Seldes, George. 97 5487-5570 Seldon, Lynde - Simon, Nelly. 98 5571-5653 Simon and Schuster, Inc. - Sinclair, Elsie. 99 5654-5673 Sinclair, Upton - Smith, Edward H. (1913-1921). 100 5674-5719 Smith, Edward H. (1922-1927) - Smith Book Company. 101 5720-5728 Smyser, William Leon - Stalin, Joseph. 102 5729-5852 Stanchfield & Levy - Stoddart, Dayton. 103 5853-5932 Stokely, James - Swarthmore College. 104 5933-6020 Sweeney, Ben - Telephone Subscribers Protective League. 105 6021-6084 Temple University Woman's Club - Tomas, D. 106 6085-6176 Toner, Williams McCulloch - United Press International. 107 6177-6276 United States. Assistant Secretary of State - University of Iowa. 108 6277-6332 University of Michigan - Veritas Press. 109 6333-6392 Verlag J. Engelhorns Nachf. Stuttgart - Wake, B. H. 110 6393-6458 Walburn, Nancy - Weiss, Rudolph. 111 6459-6557 Weissenberger, M. C. - Whitlock, Douglas. 112 6558-6644 Whitman, Charles Sidney - Willson, Bob William. 113 6645-6718 Wilson, Charles Morrow - Wood, Robert Scofield. 114 6719-6797 Woodbourne Correctional Facility - Woythaler, Erich. 115 6798-6844 Wrenn, Charles I. - Youngblood, Jean. 116 6845-6902 Your LifeZweiger, William L. & unidentified. 117 6903-6935 Return to Top » II.  Miscellaneous correspondence. 
Series Description This series is divided into two sections: Estelle Kubitz Williams materials and materials relating to the Los Angeles Public Library's exhibitions and acquisitions of Dreiser materials. Estelle Kubitz Williams materials include correspondence between Ms. Williams and her sister Marion; her husband Arthur P. Williams; and Harold Hersey. Each of these is housed in a separate folder, organized chronologically. Other titles in this series (all collected by Ms. Williams) are: recipes; jokes; typed facts about European history; excerpts from books; poetry; lists of names; travel notes on Jews and Jerusalem; proverbs from different countries; and miscellaneous materials. The Los Angeles Public Library correspondence is housed in two folders arranged chronologically. One folder contains correspondence between the Library and Helen Dreiser, the other between the Library and Lorna D. Smith. Box Folder Materials collected by or related to Estelle Kubitz Williams. 118 6936-6952 Files relating to the Los Angeles Public Library concerning Dreiser exhibition and acquisitions, 1946-1951. 118 6953-6954 III. Legal matters. Series Description This series divides as follows: Theodore Dreiser's Will, 1/2 box; publishers' contracts, arranged alphabetically by publisher name, and copyrights arranged by book title, 1 1/2 boxes; foreign language contracts, 1 box; Dreiser's legal dealings with Horace Liveright Theatrical Productions, 1 box; Dreiser's legal battles with Erwin Piscator, 1 box; Dreiser's lawyers' files concerning various cases (including: Dreiser v. Dreiser; The "Genius"; the Paramount cases regarding An American Tragedy; and South American lawsuits pertaining to the publishing of America Is Worth Saving and Jennie Gerhardt), 1 box. Finally, legal papers involving the trial of the book An American Tragedy in Boston and The "Genius" protest, 1 box. Box Folder Theodore Dreiser's Last Will and Testament. 119 6955 Contracts: Horace Liveright, Inc., 1929-1938. 119 6956 Contracts: G. P. Putnam's Sons, 1934-1942. 119 6957 Contracts: Simon & Schuster, Inc., 1939-1941. 119 6958-6959 Contracts: World Publishing Company, 1946-1949. 119 6960 Contracts: University of Pennsylvania, 1942-1949. 119 6961 Copyrights: "An Address to Caliban" - "Epitaph". 119 6962-6975 Copyrights: The Financier - "You, the Phantom". 120 6976-7010 Contracts: Argentina. 121 7011 Contracts: Austria. 121 7012 Contracts: Canada. 121 7013 Contracts: Czechoslovakia. 121 7014 Contracts: Denmark. 121 7015 Contracts: England. 121 7016 Contracts: Finland. 121 7017 Contracts: France. 121 7018 Contracts: Germany. 121 7019 Contracts: Holland. 121 7020 Contracts: Hungary. 121 7021 Contracts: Italy. 121 7022 Contracts: Japan. 121 7023 Contracts: Norway. 121 7024 Contracts: Poland. 121 7025 Contracts: Portugal. 121 7026 Contracts: Russia. 121 7027 Contracts: South America. 121 7028 Contracts: Sweden. 121 7029 Contracts: Switzerland. 121 7030 Contracts & Correspondence: Horace Liveright Theatrical Productions, 1926-1932. 122 7031-7037 Correspondence & Accounts: Piscator-Bühne (Dramaturgie), 1929-1937. 123 7038-7048 Lawyers' Files: Dreiser v. Dreiser, 1926. 124 7049 Lawyers' Files: "The Genius", 1929. 124 7050 Lawyers' Files: Paramount Publix Corp. cases, 1931-1938. 124 7051-7054 Notes & Clippings: Paramount Publix Corp./An American Tragedy case, 1930-1932. 124 7055-7056 South American Lawsuits: America Is Worth Saving & Jennie Gerhardt, 1941-1943.
124 7057-7058 An American Tragedy: trial of the book in Boston, Commonwealth of Mass. v. Donald S. Friede, 1929. 125 7059-7061 The "Genius" : protest, 1916. 125 7062-7066 The "Genius" : lawsuit, Theodore Dreiser v. John Lane Co., 1921. 125 7067-7073 The "Genius" : memorandum of law re proposed moving picture production, 1929. 125 7074 Return to Top » IV.  TD Writings: Books. Series Description This series includes everything Dreiser himself labeled a book manuscript, all works that were adapted by Dreiser or someone else from one of his books, and secondary material used to promote his books or related works. The order of arrangement for each title is chronological, following the process of writing from initial planning to publication: notes and outlines, pamphlets, and other research materials; manuscripts; typescripts; printers' proofs; book jackets, dummies, and advertising copy; discarded manuscript fragments; and adaptations from the book. Thus, under An American Tragedy, researchers will find not only all manuscripts, typescripts, proofs, and dust jackets for the book, but also a tabloid and a condensed version of the novel; all the playscripts in English and other languages, plus playbills and programs from any of these versions that were actually produced; a scenario for an opera; and movie scripts from the 1931  An American Tragedy and the 1951  A Place in the Sun. This series also includes all the material that Dreiser filed under "Philosophical Notes." He intended to publish a book that clarified his philosophy of the meaning of life and the workings of the universe: these notes represent his research and efforts thereon. Dreiser, however, died before finishing all the manuscripts for the project. Because these materials ultimately did form the basis of a published book, Notes on Life (1973), they are located in this series.  Notes on Life represents a selection of the material found here and was edited by Marguerite Tjader. Her papers for this work follow Dreiser's notes. Not included in this series, however, are a few "false starts" or beginnings of fictional works that Dreiser may have intended to expand into novels but that remained unfinished, e.g., "Mea Culpa," "Our Neighborhood," and "The Rake." These titles are located in the series Notes Written and Compiled by TD in boxes 396 and 397 under the heading "Novels, unfinished." Also not included in this series are published reviews of Dreiser's books. Reviews can be found in several locations. Box 468 contains miscellaneous clippings of reviews organized chronologically by title, but researchers should note the location of other reviews in the container list under the respective book titles. The amount of material listed for each title varies. Penn's Dreiser Papers does not contain all of Dreiser's book manuscripts in their original form, but the collection does include photocopies of some manuscript materials held by other institutions or individuals. Such material is noted on the container list. As mentioned in the Scope and Content Note, some books that contain previously published essays or stories (e.g., Free and Other Stories) are not included in TD Writings: Books, because Penn's collection does not have an actual book manuscript as identified by Dreiser. Manuscripts for these shorter pieces are housed under their respective genre titles (e.g., short stories, plays). 
When Dreiser's manuscripts were typed, he usually asked for an original and several carbons, which he then distributed to his friends for their comments and editorial suggestions. Thus, some typescripts in the Dreiser Papers may contain revisions in a hand other than Dreiser's; when this handwriting could be identified, the information was noted on the folder. The manuscripts, typescripts, and proofs are given Dreiser's term of identification unless it is obviously incorrect. If no identifying term was assigned by Dreiser, an arbitrary term has been supplied, based on the item's chronological position within Penn's holdings for that book. Therefore, if several typescripts of a book were unidentified or were all identified as "revised typescripts," they have been arranged chronologically and given designations such as "Typescript A, B, C‥" if they are different typescripts or "Typescript A," "Typescript A, revised," and so forth, if they are revised versions of the same typescript. A.  Sister Carrie. Note For reviews of Sister Carrie, see Box 420 Box Folder Sister Carrie: 1st typescript (chaps. I-XLVII). chaps. I-XLVII. 126 7075-7098 Sister Carrie: book jackets. 126 7099 Sister Carrie (Pa. ed.): emendations in the copy-text by James L. W. West III (chaps. I-XXIX). Description Letter from West to Neda Westlake; note on comparison of handwriting of Arthur Henry and Sara White Dreiser on the typescript. 126 7100 Sister Carrie (Pa. ed.): emendations in the copy-text by West (chaps. XXX-L). 126 7101 Sister Carrie (Pa. ed.): rejected proof alterations and sample historical collation. 126 7102 Sister Carrie: two outlines by?. 127 7103 Sister Carrie: dramatization by H. S. Kraft (dramatic outline; acts I, II, III). 127 7104-7106 Sister Carrie: dramatization by H. S. Kraft (?) (acts I, II, III). 127 7107-7109 Sister Carrie: dramatization by John Howard. 127 7110 Sister Carrie: dramatization by Kathryn Sayre (synopsis of scenes; prologue, acts, I, 2, 3). 127 7111-7114 Sister Carrie: dramatization by Kathryn Sayre (prologue, acts 1, 2, 3). 127 7115-7117 Sister Carrie: synopsis by Elizabeth Kearney. 127 7118 Sister Carrie: screen adaptation by Helen Richardson. 127 7119 B.  Jennie Gerhardt. Box Folder Jennie Gerhardt ("The Transgressor"). Description Sample front cover and title page; 2 typeset pages; ms from which typeset pages were made; note from James L. W. West III; note about sale of ms. 128 7120 Jennie Gerhardt: early ms (chaps. I-X). 128 7121-7133 Jennie Gerhardt: early ms (chap. X-XII). 128 7134 Jennie Gerhardt: early ms (chap. XII (conc.); chap. XIII; earlier version of chap. XII; fragment of early version of chap. XII). 128 7135 Jennie Gerhardt: early ms (chaps. XIV-XXV)). 129 7136-7141 Jennie Gerhardt: early ms (chaps. XXVI; XVIII; another version of XXVI?). 129 7142 Jennie Gerhardt: early ms (unnumbered chap. that follows chap. XXVI). 129 7143 Jennie Gerhardt: early ms (chaps.XXVII-XXIX). 129 7144-7146 Jennie Gerhardt: early ms (chap. XXX; also other chaps.?). 129 7147 Jennie Gerhardt: ms (chaps. XIV-XXXVI). 130 7148-7170 Jennie Gerhardt: ms (chaps. XXXVII-LX). 131 7171-7194 Jennie Gerhardt: annotated typescript (chaps. I-XIII). 132 7195-7204 Jennie Gerhardt: typescript (chaps. I-XXX). 132 7205-7218 Jennie Gerhardt: book jackets. 132 7219 Jennie Gerhardt: lists of people to receive complimentary copies. 132 7220 Jennie Gerhardt: outline for a play?. 132 7221 "The Story of Jennie," playscript by? (acts I,II). 132 7222-7223 C.  The Financier, The Titan, and  The Stoic. 
Box Folder Dates TD worked on The Financier, The Titan, and  The Stoic. 133 7224 Notes on characters in The Financier. 133 7225 Notes on characters in The Titan. 133 7226 Notes for The Financier and  The Titan. 133 7227-7243 Notes for The Financier and  The Titan. 134 7244-7262 The Financier: original ms. (chaps. I-XLIII), 1912. 135 7263-7305 The Financier: original mas. (chaps. XLIV-LI), 1912. 136 7306-7313 The Financier: original ms. (chaps. 48-56), 1912. 136 7314-7322 The Financier: original ms. (chaps. 62-70), 1912. 136 7323-7331 The Financier: original ms. (chaps. LXXI-80), 1912. 137 7332-7341 The Financier: typescript carbon (chaps. I-XXXVIII), 1912. 137 7342-7379 The Financier: page proofs, 1912. 138 7380 The Financier: typescript carbon (chaps. I-LXX), 1927. 139 7381-7406 The Financier: 1st galleys, 1927. 140 7407 The Financier: revised galleys, 1927. 140 7408 "The Cowperwood Story," a streamlined plot synopsis of  The Financier, The Titan, and  The Stoic, version 1. 141 7409 "The Cowperwood Story," version 2. 141 7410-7412 The Financier and  The Titan: synopses by?. 141 7413-7418 The Financier: synopsis by Alvin G. Manuel, annotated by TD. 141 7419 The Financier: synopsis by Lorna D. Smith. 141 7420 The Financier and  The Titan: synopses by Elizabeth Kearney. 141 7421-7424 The Financier: book jackets. 141 7425 The Financier: advertising copy, with additions by Anna Tatum. 141 7426 The Financier: dramatization by Rella Abell Armstrong of  The Financier & The Titan,annotated by TD. 141 7427-7430 The Financier: dramatization by Rella Abell Armstrong of  The Financier and  The Titan. 141 7431-7432 The Financier: scenario by Rella Abell Armstrong. 141 7433 D.  A Traveler at Forty. Note For reviews of A Traveler at Forty, see Box 421. Box Folder A Traveler at Forty: diary notes, 1911 Nov. 25-16 Jan. 1912. 142 7434-7439 A Traveler at Forty: diary notes, 1912 Jan.17-March 18. 143 7440-7454 A Traveler at Forty: drawings made for TD by other travelers. 143 7455 A Traveler at Forty: diary notes, 1912 March 19- April 25. 144 7456-7466 A Traveler at Forty: newspaper clippings re the sinking of  The Titanic, 1912 April 23-24 . 144 7467 A Traveler at Forty: typescript (chaps. I-XLVI). 145 7468-7514 A Traveler at Forty: typescript (chaps. XLVII-103). 146 7515-7571 A Traveler at Forty: revised typescript (chaps. 1-XI). 147 7572-7584 A Traveler at Forty: revised typescript (chaps. 36-37). 147 7585-7587 A Traveler at Forty: revised typescript,  "The Quest for My Ancestral Home" . 147 7588 A Traveler at Forty: revised typescript,  "The Berlin Public Service" . 147 7589 A Traveler at Forty: revised typescript,  "Night-Life in Berlin" . 147 7590 A Traveler at Forty: revised typescript. 147 7591-7592 A Traveler at Forty: excerpts for advertising purposes?. 147 7593 A Traveler at Forty: advertising or review copy?. 147 7594 E.  The Titan. Box Folder The Titan: ms (chaps. I-26). 148 7595-7621 The Titan: ms (chaps. XXVII-L). 149 7622-7645 The Titan: ms (chaps. LI-LXXIV). 150 7646-7669 The Titan: ms (chaps. LXXV-XC). 151 7670-7686 The Titan: ms (chaps. 67-71). 151 7687-7691 The Titan: ms (chaps. 72-77). 152 7692-7697 The Titan: ms (chaps. XCI-XCII). 152 7698-7699 The Titan: ms (chaps. CII-CIII). 152 7700-7701 The Titan: typescript carbon (chaps. I-29); with editing by Anna Tatum (typed from ms in Boxes 148 and 149). 153 7702-7714 The Titan: chap. 66; revised typescript and retyped version, with editing by Anna Tatum. 153 7715-7716 The Titan: chap. 67 (ms); chap. 67 (typescript typed from ms chap. 67). 
153 7717-7718 The Titan: chap. 68 (ms); chap. 68 (typescript typed from ms chap. 68, 2 pages missing). 153 7719-7720 The Titan: chap. 69 (ms); chap. 72 (typescript typed from ms chap. 69). 153 7721-7722 The Titan: chap. 70 (ms); chap. 73 (typescript typed from ms chap. 70). 153 7723-7724 The Titan: chap. 71 (ms); chap. 74 (typescript typed from ms chap. 71). 153 7725-7726 The Titan: chap. 72. 153 7727 The Titan: chaps. 67-77. 153 7728-7733 The Titan: chaps. CII, CIII. 153 7734 The Titan: 1st revised galleys. 154 7735 The Titan: 2nd revised galleys. 154 7736 The Titan: ms and typescript fragments from various versions. 155 7737-7771 The Titan: book jacket. 155 7772 "Law and Lawyers," written for  The Titan?. 155 7773 The Titan: scenes to make a play. 155 7774 F.  The "Genius" . Note For reviews of The "Genius", see Box 423. Box Folder The "Genius": ms (chaps. I-XXX). 156 7775-7804 The "Genius": ms (chaps. XXXI-LX). 157 7805-7834 The "Genius": ms (chaps. LXI-XC). 158 7835-7864 The "Genius": ms (chaps. XCI-CV). 159 7865-7879 The "Genius": lst typescript A (chaps. I-LXXIX [1st typescripts A and B begin to diverge at chap. LXXVIII]). Description 1st typescripts A and B begin to diverge at chap. LXXVIII. 160 7880-7914 The "Genius": 1st typescript A (chaps. LXXX-CIII). 161 7915-7928 The "Genius": revised typescript (chap. CIV). 161 7929 The "Genius": 1st typescript A (chap. CV). 161 7930 The "Genius": 1st typescript B (chaps. I-XLVI). 162 7931-7966 The "Genius": 1st typescript B (chaps. XLXII-CIV). 163 7967-7977 The "Genius": revised typescript. 164 7978-8012 The "Genius": book jackets. 164 8013 The "Genius": 1st German printing. 164 8014 The "Genius": galley proofs. 165 8015 The "Genius": long and short résumés of the book by Lorna D. Smith; synopsis of a screen adaptation by?. 166 8016 The "Genius": ideas for dramatization. 166 8017 The "Genius": letter to Louise Campbell with versions of dramatizations. 166 8018 The "Genius": proposals by TD for a play or movie version; newspaper clipping. 166 8019 "The Stuff of Dreams" ( The "Genius") play: 1st draft. 166 8020-8022 The "Genius": summary of a play version by TD. 166 8023 The "Genius": proposal for a play version by TD; prologue. 166 8024-8027 The "Genius": play version by TD. 166 8028-8032 The "Genius": dramatic adaptation by?. 166 8033-8034 The "Genius": dramatization by?. 167 8035-8040 The "Genius": a play based on TD's novel by Odin Gregory. 167 8041-8044 The "Genius": discarded fragments and versions from acts I and II of typescripts in Boxes 166 and 167. 168 8045-8061 The "Genius": discarded fragments and versions from acts III and IV and final scene. 169 8062-8069 The "Genius": criticism and comments on the novel. 169 8070 The "Genius": pages from a scrapbook with clippings of reviews. 169 8071 The "Genius": documents pertaining to the book's suppression. 169 8072 The "Genius": miscellaneous. 169 8073 The "Genius": magazine version, published in  Metropolitan Magazine, 1923. 170 8074-8083 G.  A Hoosier Holiday. Note See Box 455 for the postcards that TD collected on his trip to Indiana, which was the basis of A Hoosier Holiday. Box Folder A Hoosier Holiday: diary notes. 171 8084-8085 A Hoosier Holiday: maps and schedules re trip to Indiana. Note See Box 484, folder 14680 for oversize map. 171 8086 A Hoosier Holiday: ms. 171 8087-8121 A Hoosier Holiday: ms. 172 8122-8154 A Hoosier Holiday: typescript with additions by TD and?. 173 8155-8187 A Hoosier Holiday: sample copy of jacket; corrections for galleys. 
173 8188 A Hoosier Holiday: book jacket. 173 8189 A Hoosier Holiday: miscellaneous. 173 8190 "From , by Theodore Dreiser," printed version of article in  The Hoosier, 1917. 173 8191 A Hoosier Holiday: 1st galleys (?). 174 8192 A Hoosier Holiday: revised galleys (?). 174 8193 H.  Twelve Men. Note For reviews of Twelve Men, see Box 423. Box Folder Twelve Men:  "My Brother Paul," printed version. 175 8194 Twelve Men: notes and essays relating to  "The Country Doctor" . 175 8195-8205 Twelve Men:  "Heart Bowed Down" (  "The Village Feudists" ). 175 8206 Twelve Men:  "The Village Feudists,"  reprint published in Famous Story Magazine. 175 8207 Twelve Men:  "Sonntag-A Record" (  "W.L.S." ). 175 8208 Twelve Men:  "W.L.S.," printed version. 175 8209 Twelve Men: notes and clippings on the Robin case used for  "Vanity, Vanity Saith the Preacher" . 175 8210-8216 Twelve Men: book jackets. 175 8217 Twelve Men: corrected page proofs. 176 8218 I.  Hey Rub-a-Dub-Dub. Box Folder Hey Rub-a-Dub-Dub: notes. 177 8219 Hey Rub-a-Dub-Dub:  "Hey Rub-a-Dub-Dub" . 177 8220-8221 Hey Rub-a-Dub-Dub:  "Change," version published in  New York Call (1918). 177 8222 Hey Rub-a-Dub-Dub:  "Change" . 177 8223-8224 Hey Rub-a-Dub-Dub:  "Some Aspects of Our National Character" . 177 8225 Hey Rub-a-Dub-Dub:  "The Dream" . 177 8226 Hey Rub-a-Dub-Dub:  "The American Financier" . 177 8227-8228 Hey Rub-a-Dub-Dub: (  "The Toil of the Laboring Man" ). 177 8229 Hey Rub-a-Dub-Dub:  "The Toil of the Laborer" (  "The Toil of the Laboring Man" ). 177 8230 Hey Rub-a-Dub-Dub:  "Personality" . 177 8231-8232 Hey Rub-a-Dub-Dub:  "Secrecy" . 177 8233 Hey Rub-a-Dub-Dub:  "Neurotic America and the Sex Impulse" . 177 8234 Hey Rub-a-Dub-Dub:  "Ideals, Morals, and the Daily Newspaper" . 177 8235-8237 Hey Rub-a-Dub-Dub:  "Equation Inevitable" . 177 8238-8239 Hey Rub-a-Dub-Dub:  "Ashtoreth" . 177 8240-8241 Hey Rub-a-Dub-Dub:  "The Reformer" . 177 8242 Hey Rub-a-Dub-Dub:  "Marriage and Divorce: An Interview" . 177 8243-8244 Hey Rub-a-Dub-Dub: (  "More Democracy or Less? An Inquiry" ). 177 8245 Hey Rub-a-Dub-Dub:  "More Democracy or Less? An Inquiry" . 177 8246-8247 Hey Rub-a-Dub-Dub:  "The Essential Tragedy of Life" . 177 8248-8250 Hey Rub-a-Dub-Dub:  "Life, Art, and America" . 177 8251 Hey Rub-a-Dub-Dub:  "The Court of Progress" . 177 8252 Hey Rub-a-Dub-Dub:  "Neurotic America and the Sex Impulse" and  "Some Aspects of Our National Character," printed versions. 177 8253 J.  Newspaper Days. Note For reviews of Newspaper Days, see Box 423. Box Folder Newspaper Days: topics to be covered; notes for catalog copy. 178 8254 Newspaper Days: miscellaneous. 178 8255 Newspaper Days: ms. 178 8256-8288 Newspaper Days: ms. 179 8289-8329 Newspaper Days: 1st typescript. 180 8330-8364 Newspaper Days: typescript 1A with TD's corrections. 181 8365-8370 Newspaper Days: "Yellow Manuscript". 181 8371-8380 Newspaper Days: 2nd typescript. 182 8381-8423 Newspaper Days: unrevised 2nd typescript. 183 8424-8466 Newspaper Days: copy of typesetting copy (chaps. I-XLV). 184 8467-8511 Newspaper Days: copy of typesetting copy (chaps. XLVI-LXXX). 185 8512-8546 Newspaper Days: index to 1st edition of  A Book about Myself (  Newspaper Days) edited by T. D. Nostwichitle, 1922. 185 8547 Newspaper Days: book jackets for  A Book about Myself (  Newspaper Days). 185 8548 Newspaper Days: foreword and author's note to edition, 1931. 185 8549 Newspaper Days: corrected galley proofs and note. 186 8550 Newspaper Days: uncorrected galley proofs, with missing pages from chap. XXXVI included. 
186 8551 Newspaper Days: bound Vol. 1 of corrected page proofs. 187 8552 Newspaper Days: bound Vol. 2 of corrected page proofs. 188 8553 K.  The Color of a Great City. Note For reviews of The Color of a Great City, see Box 423. Box Folder The Color of a Great City: proposed chapter order. 189 8554 The Color of a Great City: foreword by TD. 189 8555 The Color of a Great City: "A Week with Ocean Pilots" (version of "Log of a Harbor Pilot"). 189 8556 The Color of a Great City: "Bums". 189 8557 The Color of a Great City: "The Car Yard". 189 8558 The Color of a Great City: "The Flight of Pidgeons". 189 8559 The Color of a Great City: "On Being Poor". 189 8560 The Color of a Great City: "Six o'Clock". 189 8561 The Color of a Great City: "The Toilers of the Tenements" ("The Inspector"). 189 8562 The Color of a Great City: "The Inspector". 189 8563 The Color of a Great City: ("The End of a Vacation"). 189 8564 The Color of a Great City: "The Track Walker". 189 8565 The Color of a Great City: "The Realization of an Ideal". 189 8566-8567 The Color of a Great City: "The Pushcart Man". 189 8568-8569 The Color of a Great City: "The Bread Line". 189 8570-8571 The Color of a Great City: "Our Red Slayer". 189 8572-8573 The Color of a Great City: "Whence the Song". 189 8574 The Color of a Great City: "Characters". 189 8575-8576 The Color of a Great City: "The Beauty of Life". 189 8577-8578 The Color of a Great City: "The Way Place of the Fallen". 189 8579 The Color of a Great City: "A Way Place of the Fallen". 189 8580 The Color of a Great City: "Bayonne" (a version of "A Certain Oil Refinery"). 189 8581 The Color of a Great City: "The Bowery Mission". 189 8582-8583 The Color of a Great City: "The Wonder of the Water". 189 8584 The Color of a Great City: "The Man on the Bench". 189 8585-8586 The Color of a Great City: "The Men in the Dark". 189 8587-8588 The Color of a Great City: "The Men in the Snow". 189 8589 The Color of a Great City: "The Freshness of the Universe". 189 8590 The Color of a Great City: "The Freshness of the Universe". 189 8591 The Color of a Great City: "The Cradle of Tears". 189 8592 The Color of a Great City: "The Sandwich Man". 189 8593 The Color of a Great City: "The Sandwich Man". 189 8594 The Color of a Great City: "The Love Affairs of Little Italy". 189 8595 The Color of a Great City: "Christmas in the Tenements". 189 8596 The Color of a Great City: "Christmas in the Tenements". 189 8597 The Color of a Great City: "The Rivers of the Nameless Dead". 189 8598 The Color of a Great City: "The Rivers of the Nameless Dead". 189 8599 The Color of a Great City: foreword by TD. 190 8600 The Color of a Great City: "The City of My Dreams". 190 8601 The Color of a Great City: "The City Awakes". 190 8602 The Color of a Great City: "The Waterfront". 190 8603 The Color of a Great City: "The Log of a Harbor Pilot". 190 8604 The Color of a Great City: "Bums". 190 8605-8606 The Color of a Great City: "The Michael J. Powers Association". 190 8607 The Color of a Great City: "The Fire". 190 8608 The Color of a Great City: "The Flight of Pigeons". 190 8609 The Color of a Great City: "On Being Poor". 190 8610 The Color of a Great City: "Six o'Clock". 190 8611 The Color of a Great City: "The Toilers of the Tenements". 190 8612 The Color of a Great City: "The End of a Vacation". 190 8613 The Color of a Great City: "The Track Walker". 190 8614 The Color of a Great City: "The Realization of an Ideal". 190 8615 The Color of a Great City: "The Pushcart Man". 
190 8616 The Color of a Great City: "Manhattan Beach" ("A Vanished Seaside Resort"). 190 8617 The Color of a Great City: "The Bread Line". 190 8618 The Color of a Great City: "Our Red Slayer". 190 8619 The Color of a Great City: "When the Sails Are Furled". 190 8620 The Color of a Great City: "Characters". 190 8621 The Color of a Great City: "The Beauty of Life". 190 8622 The Color of a Great City: "The Way Place of the Fallen". 190 8623 The Color of a Great City: "Hell's Kitchen". 190 8624 The Color of a Great City: "A Certain Oil Works Refinery". 190 8625 The Color of a Great City: "The Bowery Mission". 190 8626 The Color of a Great City: "The Wonder of the Water". 190 8627 The Color of a Great City: "The Man on the Bench". 190 8628 The Color of a Great City: "The Men in the Dark". 190 8629 The Color of a Great City: "The Men in the Storm". 190 8630 The Color of a Great City: "The Men in the Snow". 190 8631 The Color of a Great City: "The Freshness of the Universe". 190 8632 The Color of a Great City: "The Cradle of Tears". 190 8633 The Color of a Great City: "The Sandwich Man". 190 8634 The Color of a Great City: "The Love Affairs of Little Italy". 190 8635 The Color of a Great City: "Christmas in the Tenements". 190 8636 The Color of a Great City: "The Rivers of the Nameless Dead". 190 8637 The Color of a Great City: typesetting version; note from TD. 191 8638-8676 The Color of a Great City: book jacket. 191 8677 The Color of a Great City: early galleys, with illustrations attached by TD, 1923 Oct. 192 8678 The Color of a Great City: early galleys, proofreader's copy(?). 192 8679 The Color of a Great City: early galleys, with TD's corrections. 192 8680 The Color of a Great City: 3rd revised galleys, with original and substituted preface, 1923 Oct. 192 8681 The Color of a Great City: 3rd revised galleys, unmarked, missing p. 2 of foreword and some pages from last essay. 192 8682 L.  An American Tragedy. Box Folder An American Tragedy: original ms (chaps. IV-XX), 1920-1921. 193 8683-8700 An American Tragedy: typescript of ms (chaps. I-XX), 1920-1921. 193 8701-8710 An American Tragedy: Book I, ms (chaps. I-32). 194 8711-8744 An American Tragedy: Book II, ms (chaps. I-20). 195 8745-8770 An American Tragedy: Book II, ms (chaps. 21-40). 196 8771-8794 An American Tragedy: Book II, ms (chaps. 41-57). 197 8795-8821 An American Tragedy: Book II, ms (chaps. 58-71). 198 8822-8841 An American Tragedy: Book III, ms (chaps. 1-14). 199 8842-8859 An American Tragedy: Book III, ms (chaps. 15-24). 200 8860-8874 An American Tragedy: Book III, ms (chaps. 25-35). 201 8875-8894 An American Tragedy: Book II, typescript B (chaps. XXX-LIV). 203 8928-8954 An American Tragedy: Book II, typescript B (fragments). Description Although chapter numbering is not continuous, events discussed in typescript B follow immediately the events discussed in typescript A in Box 202; some editing of typescript B by Sally Kussell. 203 8955 An American Tragedy: Book II, revised typescript A (chaps. I-XXI) revised by Louise Campbell; few additions by TD. 204 8956-8969 An American Tragedy: Book III, typescript C (chaps. I-II). Description Some revisions of chaps. in this box by Louise Campbell and ?. 205 8970-8971 An American Tragedy: Book III, revised typescript C (chap. II). 205 8972 An American Tragedy: Book III, revised typescript C, with corrections (chap. II and a fragment). 205 8973 An American Tragedy: Book III, typescript C (chaps. 3-XXI). 205 8974-9005 An American Tragedy: Book III, typescript C (chaps. XXII-XXXV). 
206 9006-9025 An American Tragedy: Book I, 1st typescript (chaps. I, II). 207 9026 An American Tragedy: Book I, final revised typescript? (chaps. I-XXIX). 207 9027-9039 An American Tragedy: Book II, final revised typescript? (chaps. I-XLIX) revisions by TD, Louise Campbell, Helen Dreiser, T. R. Smith, and?. 208 9040-9075 An American Tragedy: Book III, revised typescript C (chaps. I-XXXIV). 209 9076-9099 An American Tragedy: front matter pages for typesetting. 210 9100 An American Tragedy: Book I, typesetting copy (chaps. I-XIX). 210 9101-9112 An American Tragedy: Book II, typesetting copy (chaps. I-XXXIV). 210 9113-9128 An American Tragedy: Book II, typesetting copy (chaps. XXXV-XLVIII). 211 9129-9135 An American Tragedy: Book III, typesetting copy (chaps. I-XXXV). Description Gap in chapter numbering, but nothing missing. 211 9136-9153 An American Tragedy: book jackets and hard cover. 211 9154 An American Tragedy: condensed version, published in Bestsellers, 1946 Oct. 211 9155 An American Tragedy: Book II, revised typesetting carbon (chaps. I-XI, XIII-XLV, XLVII-XLIX). 212 9156-9180 An American Tragedy: Book I, author's galleys. 213 9181 An American Tragedy: Book II, author's galleys. 213 9182 An American Tragedy: Book III, author's galleys. 213 9183 An American Tragedy: Book I, revised pages. 214 9184 An American Tragedy: Book II, 1st pages. 214 9185 An American Tragedy: Book II, revised pages. 214 9186 An American Tragedy: Book III, 1st pages. 214 9187 An American Tragedy: dramatization by Frederick Thon. 215 9188-9189 An American Tragedy: dramatization by Patrick Kearney. 215 9190-9211 An American Tragedy: dramatization by Georges Jamin and Jean Servais. 215 9212-9217 An American Tragedy: tabloid version. 215 9218 An American Tragedy: Dezso D'Antalffy scenario for an opera. 215 9219 An American Tragedy: dramatization by Erwin Piscator. 216 9220-9235 An American Tragedy: dramatization by Erwin Piscator and Lina Goldschmidt. 216 9236-9249 Case of Clyde Griffiths [An American Tragedy]: dramatization by Piscator and Goldschmidt. 216 9250 An American Tragedy: dramatization by Erwin Piscator and Lina Goldschmidt. 216 9251 Eine amerikanische Tragödie: dramatization by Erwin Piscator. 217 9252-9266 The Law of Lycurgus (An American Tragedy): dramatization by H. Basilewsky. 217 9267-9268 De Tragedie van Clyde Griffiths (An American Tragedy): Dutch-language dramatization. 217 9269 An American Tragedy: film scenario by S. M. Eisenstein, G. V. Alexandrov, and Ivor Montagu. 218 9270-9278 An American Tragedy: Josef Von Sternberg-Samuel H. Hoffenstein film. Description 1st yellow script, annotated by ?, 30 Jan. 1931; synopsis by Eleanor McGeary; sequences A-Z, AA-HH. 218 9279-9283 An American Tragedy: Sternberg-Hoffenstein film. Description White script, 12 Feb. 1931, sequences A-Z, AA-II. 218 9284-9287 An American Tragedy: Sternberg-Hoffenstein film. Description Form #3, release dialogue script, 27 July 1931, reels 1-10. 218 9288-9290 A Place in the Sun (An American Tragedy): Harry Brown and Michael Wilson film, final white film script with changes, 1949 Sept. 30. 218 9291-9296 An American Tragedy: miscellaneous notes. 218 9297 M. Moods. Box Folder Moods: typesetting copy for 1926 and 1928 editions. 219 9298-9308 Moods (1928 ed.): typesetting copy for poems added to this ed. 219 9309-9311 Moods (1928 ed.): galley proofs, with revisions, of poems added to this ed. 220 9312 Moods (1928 ed.): page proofs, with revisions, of poems added to this ed.
220 9313 Moods (1935 ed.): typesetting copy, introduction by Sulamith Ish-Kishor; contents pages. 221 9314 Moods (1935 ed.): contents page. 221 9315 Moods (1935 ed.): typesetting copy for poems. 221 9316-9332 Moods (1935 ed.): poems rejected for this ed. (never published). 221 9333 N.  Dreiser Looks at Russia. Box Folder Dreiser Looks at Russia: diary kept by TD in Russia, and used in writing this work, 1927-1928. 222 9334 Dreiser Looks at Russia: contents page; "Russia ", 1928. 223 9335 Dreiser Looks at Russia: "Russia ", 1928. 223 9336 Dreiser Looks at Russia: "The Tyranny of Communism". 223 9337 Dreiser Looks at Russia: "The Capital of Communism". 223 9338-9343 Dreiser Looks at Russia: "Moscow". 223 9344-9345 Dreiser Looks at Russia:"Communism Theory and Practice". 223 9346 Dreiser Looks at Russia: "The Tyranny of Communism". 223 9347 Dreiser Looks at Russia: "A Former Capital of Tyranny". 223 9348 Dreiser Looks at Russia: "Some Russian Factories and Industries". 223 9349 Dreiser Looks at Russia: "Religion in Russia". 223 9350 Dreiser Looks at Russia: "Present Day Art in Russia". 223 9351 Dreiser Looks at Russia: "Bolshevik Art Literature Music (A)". 223 9352 Dreiser Looks at Russia:"Bolshevik Art, Literature, Music (B)". 223 9353 Dreiser Looks at Russia:"Three Russian Restaurants". 223 9354 Dreiser Looks at Russia:"Russian Restaurants—Three". 223 9355 Dreiser Looks at Russia: "Propaganda Plus". 223 9356 Dreiser Looks at Russia: fragment of chap. on propaganda. 223 9357 Dreiser Looks at Russia: fragment of chap. on peasant problem. 223 9358 Dreiser Looks at Russia: "Russian Vignettes". 223 9359 Dreiser Looks at Russia:"The Russian versus the American Spirit". 223 9360 Dreiser Looks at Russia:"The Russian versus the American Temperament". 223 9361 Dreiser Looks at Russia:"Random Reflections". 223 9362 Dreiser Looks at Russia:"The Current Soviet Economic Plan". 223 9363 Dreiser Looks at Russia: typesetting copy (chaps. I-XVIII). 223 9364-9381 Dreiser Looks at Russia: book jacket and hard cover. 223 9382 Dreiser Looks at Russia: revised galley proofs. 224 9383 Dreiser Looks at Russia: 2nd revised galley proofs. 224 9384 Dreiser Looks at Russia: page proofs. 224 9385 O.  A Gallery of Women. Box Folder A Gallery of Women: proposed chapters. 225 9386 A Gallery of Women: "Mary Pyne" ("Esther Norn"). 225 9387-9389 A Gallery of Women: "M.T." ("Regina C—"). 225 9390 A Gallery of Women: "Yvonne (Ellen) Adams Wrynn". 225 9391-9393 A Gallery of Women: "Ida Hauchawout". 225 9394-9395 A Gallery of Women: "Gloom". 225 9396 A Gallery of Women: "Lucia". 225 9397 A Gallery of Women: "Ernita". 225 9398-9399 A Gallery of Women: "Albertine". 225 9400-9407 A Gallery of Women: "Dinan". 225 9408 A Gallery of Women: "M.J.C." ("Emanuela"). 226 9409-9412 A Gallery of Women: "Mrs. Hevessy" ("Bridget Mullanphy"). 226 9413-9416 A Gallery of Women: "A Daughter of the Puritans". Note Not used in book; see also "This Madness: The Story of Elizabeth," in TD Writings: Essays. 227 9417-9427 A Gallery of Women: "Ernestine". 228 9428-9430 A Gallery of Women: "Mary Pyne" ("Esther Norn"). 228 9431 A Gallery of Women: "Esther Norn". 228 9432 A Gallery of Women: "Rella". 228 9433-9438 A Gallery of Women: "Reina". 228 9439-9440 A Gallery of Women: "Regina C—". 228 9441-9442 A Gallery of Women: "Yvonne (Ellen) Adams Wrynn". 228 9443-9447 A Gallery of Women: "Ellen Adams Wrynn". 228 9448 A Gallery of Women: "A Daughter of the Puritans". 229 9449-9453 A Gallery of Women: "Spaff" ("Giff"). 229 9454-9458 A Gallery of Women: "Giff". 
229 9459 A Gallery of Women: "Out of the City of the Prophet" ("Olive Brand"). 229 9460-9461 A Gallery of Women: "Olive Brand". 229 9462-9464 A Gallery of Women: "Lolita". 229 9465-9466 A Gallery of Women: "Ida Hauchawout". 229 9467-9468 A Gallery of Women: "Gloom". 229 9469 A Gallery of Women: "Loretta". 230 9470-9475 A Gallery of Women: notes on psychology of women, parts of which were used in "Loretta". 230 9476 A Gallery of Women: "Lucia". 230 9477-9478 A Gallery of Women: "Ernita". 230 9479-9480 A Gallery of Women: "Albertine". 230 9481-9483 A Gallery of Women: "Emanuela". 230 9484-9487 A Gallery of Women: "Mrs. Mullanphy" ("Bridget Mullanphy"). 230 9488 A Gallery of Women: "Bridget Mullanphy". 230 9489 A Gallery of Women: "Bridget Mullanphy". 230 9490 A Gallery of Women: "Rona Murtha". 231 9491-9503 A Gallery of Women: 1st galley proofs with author's corrections. 232 9504 A Gallery of Women: 2nd galley proofs. 232 9505 A Gallery of Women: Vol. I. 233 9506-9507 A Gallery of Women: Vol. II. 233 9508-9509 A Gallery of Women: book jackets. 234 9510 A Gallery of Women: hard covers for book. 234 9511-9513 A Gallery of Women: preface to the Russian edition by Sergey Dinamov. 234 9514 "A Gallery of Women:" radio adaptation by William Watters. 234 9515 "A Gallery of Women:" screen adapt. by Helen Mitchell, 1934. 234 9516 P.  My City. Box Folder My City: clipping and xerox. 235 9517 My City: color proofs of etchings by Max Pollak used in book. 235 9518 Q.  Dawn. Box Folder Dawn: xerox of ms at Lilly Library (chaps. I-XX), editing on ms by TD and Anna Tatum. 236 9519-9538 Dawn: xerox of ms at Lilly Library (chaps. XXI-XL). 237 9539-9558 Dawn: xerox of ms at Lilly Library (chaps. XLI-LX). 238 9559-9578 Dawn: xerox of ms at Lilly Library (chaps. LXI-LXXVII). 239 9579-9595 Dawn: xerox of ms at Lilly Library (chaps. LXXIX-LXXX) and note from Helen Dreiser re chap. LXXVIII. 239 9596-9597 Dawn: xerox of ms at Lilly Library (chaps. LXXXI-XCVII). 240 9598-9614 Dawn: xerox of ms at Lilly Library (chaps. XCVIII-CVI). 241 9615-9623 Dawn: xerox of 1st rough emended typescript at Lilly Library (chaps. I-III). 242 9624 Dawn: xerox of 1st rough emended typescript at Lilly Library (chap. IV). 242 9625 Dawn: xerox of 1st rough emended typescript at Lilly Library (chap. V). 242 9626 Dawn: xerox of 1st rough emended typescript at Lilly Library (chaps. VI-XXXII). 242 9627-9639 Dawn: 1st typescript (chaps. XXX-[XCIII]). Arrangement The chapters in this box follow consecutively those in Box 242 even though the numbering system does not. 243 9640-9675 Dawn: 2nd(?) typescript (chaps. I-XXXIV). 244 9676-9698 Dawn: note from Kathryn Sayre, circa 1931. 244 9699 Dawn: sample pages, typeset. 245 9700 Dawn: book jacket and 2 book dummies. 245 9701 Dawn: 1st bound copy. 245 9702 Dawn: French translation (chaps. 17-23 and 3 unnumbered). 245 9703-9705 Dawn: French translation (unnumbered chaps.). 245 9706-9710 Dawn: new French translation (chaps. I-XXIX), 1935. 245 9711-9721 R.  Tragic America. Box Folder Tragic America: plan(s) of book and partial outline of topics to be covered. 246 9722 Tragic America: "Preface". 246 9723 Tragic America: "As America Looks Now" ("The American Scene"). 246 9724 Tragic America: "I Visit an Actual Mill Town" [part of "Present Day Living Conditions for Many"]. 246 9725 Tragic America: "Exploitation—Rule by Force" ("Exploitation—the American Rule by Force"). 246 9726 Tragic America: "Our Banks and Corporations as Government (A)" (version 1). 
246 9727-9728 Tragic America: "Our Banks and Corporations as Government (A)" (versions 2 and 3). 246 9729-9730 Tragic America: "Our Banks and Corporations as Government (B)". 246 9731 Tragic America: "The Profits of Our American Railways from Their Inertia (A)" ("Our American Railways--Their Profits and Greed"). 246 9732 Tragic America: "The Profits of Our American Railway from Their Inertia (B)" ("Our American Railways—Their Profits and Greed"). 246 9733 Tragic America: "Government Operation of the Express Companies for Private Profit". 246 9734 Tragic America: "The Supreme Court as a Corporation Service Station" ("The Supreme Court as a Corporation-Minded Institution"). 246 9735 Tragic America: "The Constitution as a Scrap of Paper". 246 9736 Tragic America: "The Position of Labor". 246 9737 Tragic America: "The Growth of Police Power". 246 9738 Tragic America: "Abuse to the Individual" ("The Abuse of the Individual") (version 1). 246 9739-9740 Tragic America: "Abuse to the Individual" ("The Abuse of the Individual") (version 2). 246 9741 Tragic America: "Charity and Wealth in America" (version 1). 246 9742 Tragic America: "Charity and Wealth in America" (version 2). 246 9743-9744 Tragic America: "Crime and Why". 246 9745 Tragic America: "Why the Ballot?". 246 9746 Tragic America: "Why Government Ownership?". 246 9747 Tragic America: "Analysis of Statecraft for the Future" ("Suggestions toward a New Statecraft"). 246 9748-9749 Tragic America: "What the Meaning of Education Should Be". 246 9750 Tragic America: correspondence re "A Sample Trust". Description Extra chap. meant for 2nd edition of Tragic America. 246 9751 Tragic America: "A Sample Trust". Description Chapter not used in book, written by Kathryn Sayre. 246 9752-9754 Tragic America: "A Sample Trust". Description By Kathryn Sayre, edited by Anna Tatum (typescript); xerox of Tatum letter. 246 9755 Tragic America: "A Sample Trust". Description By Kathryn Sayre, 11 Jan. 1933, with comments by Evelyn Light (typescript). 246 9756 Tragic America: typesetting copy. 247 9757-9781 Tragic America: translator's note comparing American wages with American living costs. 247 9782 Tragic America: corrections to be made in future printings. 247 9783 Tragic America: corrections sent to TD by Kathryn Sayre. 247 9784 Tragic America: book jackets. 247 9785 Tragic America: miscellaneous. Note See also Box 484, folder 14681, for excerpts of Tragic America in Italian in Ottobre. 247 9786 Tragic America: translation into French of chap. 20 ("Who Owns America?") and chap. 21 ("Is America Dominant?"). 247 9787 Tragic America: carbon of typesetting copy. 248 9788-9808 Tragic America: 1st galley proofs, revised. 249 9809 Tragic America: 1st galley proofs with corrections. 249 9810 Tragic America: 2nd galley proofs. 249 9811 Tragic America: 2nd galley proofs with corrections. 249 9812 Tragic America: page proofs. 250 9813 S. America Is Worth Saving. Box Folder America Is Worth Saving: letter and notes from Oskar Piest; plan of book and copies of Piest's notes as revised by TD. 251 9814 America Is Worth Saving: "Are the Masses Worth While". 251 9815 America Is Worth Saving: notes and clippings for "Will American Democracy Endure?". 251 9816-9832 America Is Worth Saving: notes and clippings for "What Should Be the Objectives of the American People?". 251 9833 America Is Worth Saving: notes and clippings for "Has America a 'Save the World' Complex?". 251 9834-9835 America Is Worth Saving: notes and clippings for "What Are the Defects of American Democracy?".
251 9836-9837 America Is Worth Saving: notes and clippings for "What Is Democracy?". 252 9838 America Is Worth Saving: notes and clippings for "Scarcity and Plenty". 252 9839 America Is Worth Saving: notes and clippings for "Europe and Its Entanglements". 252 9840 America Is Worth Saving: notes for "English Critics of English Imperialism". 252 9841 America Is Worth Saving: notes for "Can the British Endure?". 252 9842 America Is Worth Saving: notes and clippings for "Has England Democratized the Peoples of Its Empire?". 252 9843 America Is Worth Saving: "Have English and American Finance Cooperated with Hitler to Destroy Democracy?". 252 9844 America Is Worth Saving: notes and clippings for "Does England Love Us as We Love England?". 252 9845 America Is Worth Saving: notes for "How Democratic Is England?". 252 9846 America Is Worth Saving: notes and clippings for chapters on England. 252 9847-9857 America Is Worth Saving: notes and clippings for Russia. 252 9858-9860 America Is Worth Saving: notes and clippings for "The Lesson of France". 252 9861-9864 America Is Worth Saving: notes and clippings for "Practical Reasons for Keeping Out of War". 253 9865-9873 America Is Worth Saving: notes and clippings for "A Few Kind Words for Your Uncle Samuel". 253 9874 America Is Worth Saving: notes and clippings for chaps. on America. 253 9875-9885 America Is Worth Saving: clippings on Tom Mooney case. 253 9886 America Is Worth Saving: foreword. 254 9887 America Is Worth Saving: contents and chap. 1, "Does the World Move?". 254 9888 America Is Worth Saving: chap. 2, "Scarcity and Plenty". 254 9889 America Is Worth Saving: chap. 3, "Europe and Its Entanglements". 254 9890 America Is Worth Saving: chap. 4, "Has America a 'Save the World' Complex?". 254 9891 America Is Worth Saving: chap. 5, "Practical Reasons for Keeping Out of War". 254 9892 America Is Worth Saving: chap. 6, "Does England Love Us as We Love England?". 254 9893 America Is Worth Saving: chap. 7, "How Democratic Is England?". 254 9894 America Is Worth Saving: chap. 8, "Has England Democratized the Peoples of Its Empire?". 254 9895 America Is Worth Saving: chap. 9, "English Critics on [of] English Imperialism". 254 9896 America Is Worth Saving: chap. 10, "Has England Done More for Its People Than Nazism [Fascism] or Communism [Socialism]?". 254 9897 America Is Worth Saving: chap. 11, "What Is Democracy?". 254 9898 America Is Worth Saving: chap. 12, "What Are the Defects of American Democracy?". 254 9899 America Is Worth Saving: chap. 13, "What Are the Objectives of American Finance?". 254 9900 America Is Worth Saving: chap. 14, "Have English and American Finance Cooperated with Hitler to Destroy Democracy?". 254 9901 America Is Worth Saving: chap. 15, "Can The British Empire Endure?" ("Can the British Endure?"). 254 9902 America Is Worth Saving: chap. 16, "Will American Democracy Endure?". 254 9903 America Is Worth Saving: chap. 17, "The Lesson of France". 254 9904 America Is Worth Saving: chap. 18 [19], "What Should Be the Objectives of the American People?". 254 9905 America Is Worth Saving: chap. 16 [18], "A Few Kind Words for Your Uncle Samuel". 254 9906 America Is Worth Saving: chap. 19 [18], "A Few Kind Words for Your Uncle Samuel. 254 9907 America Is Worth Saving: typesetting copy of book revisions by TD, Helen Dreiser, William Lengel, and?. 254 9908-9926 America Is Worth Saving: discarded typescript fragments. 254 9927 America Is Worth Saving: lawyer's list of potentially libelous statements and TD's responses. 
254 9928 America Is Worth Saving: 1st unrevised galley proofs containing material later omitted. 255 9929 America Is Worth Saving: 1st page proofs. 255 9930 T.  The Bulwark. Box Folder The Bulwark: xerox of letter from Louise Campbell re origin of early ms; synopsis of characters. 256 9931 The Bulwark: early ms (chaps. I, II). 256 9932-9933 The Bulwark: early ms (chap. III). 256 9934-9935 The Bulwark: early ms (chap. IV). 256 9936-9937 The Bulwark: early ms (chap. V). 256 9938 The Bulwark: early ms (chap. VI). 256 9939-9942 The Bulwark: early ms (chap. VII). 256 9943 The Bulwark: early ms (chap. VIII). 256 9944-9947 The Bulwark: early ms (chap. X). 256 9948-9951 The Bulwark: early ms (chap. XI). 256 9952-9953 The Bulwark: early ms (chap. XII). 256 9954 The Bulwark: early ms (chap. XIII). 256 9955 The Bulwark: early ms (chap. XIV). 256 9956-9957 The Bulwark: early ms (chap. XV). 256 9958-9959 The Bulwark: early ms (chap. XVI). 256 9960-9961 The Bulwark: early ms (chap. XVII). 256 9962 The Bulwark: early ms. 256 9963-9969 The Bulwark: copy meant for publicity for 1920 publication. 256 9970 The Bulwark: financial version (?) (chaps. I-IV); notes by TD and Marguerite Tjader Harris. Description Some chaps. incomplete; numbers at bottom of pages should be disregarded. 257 9971-9974 The Bulwark: financial version(?) (chap. V). 257 9975-9976 The Bulwark: financial version(?) (chap. VI?). 257 9977 The Bulwark: financial version(?) (chaps. XI-XXIV). 257 9978-9993 The Bulwark: financial version(?) (chaps. XXVI-XXVII). 257 9994-9995 The Bulwark: financial version(?) (ms fragments [some written by Estelle Kubitz). 257 9996 The Bulwark: financial version(?) (chaps. I-XXVII). 257 9997-10013 The Bulwark: green hard cover and pages found inside. 258 10014-10015 The Bulwark: red hard cover; early typeset version of chap. I. 258 10016 The Bulwark: papers found inside red hard cover. 258 10017-10023 The Bulwark: notes and fragments on Quakerism; some copied by Helen Dreiser. 258 10024-10025 The Bulwark: ms (chaps. II-XXXVII). 258 10026-10063 The Bulwark: order and contents for chaps. for Part II; typed summary of end of Part I. Description Includes chaps. that were originally marked for Part II. 259 10064 The Bulwark: ms (Part II). Description Some chaps. incomplete; notes on ms by Marguerite Tjader Harris; numbers on bottom of pages should be disregarded. 259 10065-10085 The Bulwark: ms (Part II). 260 10086-10102 The Bulwark: ms (Part III). 261 10103-10121 The Bulwark: discarded ms fragments (Part I). 261 10122 The Bulwark: discarded ms fragments (Part II). 261 10123 The Bulwark: discarded ms fragments (Part III), some dictated by TD to Marguerite Tjader Harris. 261 10124 The Bulwark: early typescript (Part I). 262 10125-10132 The Bulwark: early typescript (Part II, chaps. 39-41). 262 10133 The Bulwark: early typescript (Part II, chaps. 42-69). 262 10134-10143 The Bulwark: early typescript (Part III, chaps. 1-20, finis). 262 10144-10150 The Bulwark: typescript, 1941-1942. Description Dates TD worked on this version after beginning again in the 1940s [The 1941-1942 typescript extends into 1943; Parts I and II are divided differently in the final version; numbers on the bottom of pages should be disregarded.]. 263 10151 The Bulwark: typescript, 1941-1942. Description Sample chaps. I-IV sent to Balch of G. P. Putnam's Sons, 1942. 263 10152-10153 The Bulwark: typescript (Part I, chaps. I-XXXV), 1941-1942. General note (multiple versions of some chaps.) [handwritten corrections on these chaps. 
by TD, Helen Dreiser, Marguerite Tjader Harris] 263 10154-10191 The Bulwark: typescript (Part II, chaps. A (XXXVI)-E), 1941-1942. 263 10192-10198 The Bulwark: revised typescript (Part I: chaps. I-24); corrections by TD, Helen Dreiser, Marguerite Tjader Harris, 1941-1942. 264 10199-10221 The Bulwark: outline of plots and chapters as planned with note about completion of  The Bulwark, 1944 Oct. . 265 10222 The Bulwark: unedited 1945 typescript. Description Folder and note by Marguerite Tjader Harris [Part I typed by Helen Dreiser; Parts II and III typed by Marguerite Tjader Harris]. 265 10223 The Bulwark: unedited typescript (Part I: introduction, chaps. I-XXIV), 1945. 265 10224-10231 The Bulwark: unedited typescript (Part II: chaps. XXV-LI), 1945. 265 10232-10239 The Bulwark: unedited typescript (Part II: chap. LII; Part III: chaps. LIII-LVI), 1945. 265 10240 The Bulwark: unedited typescript (Part III: chaps. LVII-LXX, finis), 1945. 265 10241-10244 The Bulwark: edited typescript (Part I: introduction, chaps. I-II), 1945. Description Note from Marguerite Tjader Harris [corrections in 1945 edited typescript by Helen Dreiser, Marguerite Tjader Harris, Louise Campbell; Part I typed by Helen Dreiser; Parts II and III typed by Marguerite Tjader Harris]. 265 10245 The Bulwark: edited typescript (Part I: chaps. IV [III]-XXI), 1945. 265 10246-10251 The Bulwark: edited typescript (Part II: chaps. XXII-XLIV), 1945. 265 10252-10258 The Bulwark: edited typescript (Part II: chaps. XLV-LII(XLVII); Part III: LIII(?)), 1945. 265 10259 The Bulwark: edited typescript (Part III: chaps. XLVIII-LXIV, finis), 1945. 265 10260-10264 The Bulwark: typesetting version (front matter; reviewer's proof; note by Marguerite Tjader Harris). 266 10265 The Bulwark: typesetting version (introduction, Part I: chaps. 1-24). 266 10266-10271 The Bulwark: typesetting version (Part II: chaps. 25-49). 266 10272-10278 The Bulwark: typesetting version (Part III: chaps. 50-67, finis). 266 10279-10283 The Bulwark: book jackets. 266 10284 "The Bulwark": U.S. State Department radio script, presented , as a book review, 1946 Sept. 17. 266 10285-10286 The Bulwark: condensed version, published in  Omnibook, 1946 July. 266 10287 The Bulwark: condensed version in French ("Le Rempart') in  Omnibook (Paris: Edition Française, Mars 1948). 266 10288 The Bulwark: 1st galley proofs. 267 10289 The Bulwark: 1st galley proofs, uncorrected. 267 10290 The Bulwark: discarded typescript fragments from all versions; corrections by TD, Louise Campbell, Marguerite Tjader Harris. 268 10291-10325 U.  The Stoic. Box Folder The Stoic: publisher's summary of  The Stoic and "The Trilogy of Desire"; list of persons, businesses, and places mentioned, 1932. 269 10326 The Stoic: notes on Cowperwood and London subway system. 269 10327-10330 The Stoic: summary of Cowperwood. 269 10331-10332 The Stoic: summary of Berenice and Aileen. 269 10333 The Stoic: summary of Ethel Yerkes and Gladys Unger. 269 10334 The Stoic: summary of all characters. 269 10335 The Stoic: summary of settlement of Cowperwood's property and affairs. 269 10336 The Stoic: queries, M.E.L. on typescript, 30 June 1932; note. 269 10337 The Stoic: notes and clippings on book's characters and events. 269 10338-10354 The Stoic: notes and clippings on book's characters and events. 270 10355-10375 The Stoic: notes and clippings on book's characters and events. 271 10376-10378 The Stoic: typed versions of some original notes in other folders. 
271 10379-10380 The Stoic: court records relating to the will of Charles Yerkes. 271 10381 The Stoic: notes on architecture, furniture, art, musicians, books, writers, actors (for  The Stoic ?). 271 10382 The Stoic: miscellaneous. 271 10383-10384 The Stoic:  National Geographic with article on Norway marked by TD, 1930 July. 271 10385 The Stoic: notes on characters and surviving manuscripts and typescripts by Evelyn Light. 271 10386 The Stoic: auction catalogue of the Charles T. Yerkes art collection, 1910. 271 10387 The Stoic: Supreme Court brief on behalf of Louis Owsley, executor of Charles Yerkes; note. 271 10388 The Stoic: Housman et al. v. Owsley, brief for plaintiffs, 1910. 271 10389 The Stoic: Housman et al. v. Owsley, referee's opinion, 1910. 271 10390 The Stoic: early ms (chaps. I-X, 2 versions each of chaps. 1, 3, 5); some dictated by TD to Clara Clark(?); see chaps. XVI (third version), XVII, XVIII. 272 10391-10404 The Stoic: 1st, 2nd, and 3rd early typescripts, revised (chap. X). 272 10405-10407 The Stoic: ms (chap. XI). 272 10408-10409 The Stoic: 1st and 2nd early typescripts, revised (chaps. XI, XII). 272 10410-10413 The Stoic: ms (chap. XIV). 272 10414 The Stoic: early typescript (chap. XV[XIV?]). 272 10415 The Stoic: ms (chap. XV). 272 10416 The Stoic: 1st and 2nd(?) early typescript (chap. XV). 272 10417-10418 The Stoic: ms (chap. XVI). 272 10419-10421 The Stoic: ms (chaps. XVII-XXV). 273 10422-10440 The Stoic: early revised typescript (chap. XXXVI). 273 10441 The Stoic: ms (chap. XXXVI). 273 10442-10443 The Stoic: ms (chap. XXXVII). 273 10444-10445 The Stoic: ms (chap. XXXVIII). 273 10446-10447 The Stoic: ms (chap. XXIX). 273 10448 The Stoic: ms (chap. XL); note from TD. 274 10449 The Stoic: ms (chaps. XLI, 42). 274 10450-10451 The Stoic: early revised typescript (chap. XLIII). 274 10452 The Stoic: ms (chaps. XLIIII-XLVIIII). 274 10453-10458 The Stoic: ms (chaps. LI-LIV). 274 10459-10462 The Stoic: typescript A (chaps. I-54, no chap. 42) with corrections by TD, Helen Dreiser, and Louise Campbell. 275 10463-10487 The Stoic: typescript A carbon, with corrections (chaps. I-54, no chap. 42). 276 10488-10513 The Stoic: typescript B (chaps. I-90) with corrections by TD and Helen Dreiser. 277 10514-10549 The Stoic: corrected typescript B (chaps. 1-91) P.S. Concerning Good and Evil, with corrections by TD and Helen Dreiser. 278 10550-10593 The Stoic: typescript edited by Anna Tatum (chaps. I-48, no chaps. 11, 37). 279 10594-10617 The Stoic: Louise Campbell typescript (chaps. 1-78, no chap. 27) P.S. Concerning Good and Evil, with revisions by LC, Helen Dreiser, and?. 280 10618-10659 The Stoic: (chap. 91) prepared by Helen Dreiser from notes by TD(?); chap. fragments. 280 10660 The Stoic: revised Louise Campbell typescript, typed by her (chaps. 1-18). 280 10661-10668 The Stoic: revised Louise Campbell typescript, typed by her (chaps. 19-78). 281 10669-10691 The Stoic: typesetting copy (chaps. 1-79, appendix). 282 10692-10718 The Stoic: synopsis. 282 10719 The Stoic: literary criticism written for publicity? (ms in Helen Dreiser's handwriting). 282 10720 The Stoic: galley proofs, with corrections by Helen Dreiser, 1947. 283 10721 The Stoic: front matter and page proofs, with corrections by Helen Dreiser, 1947. 283 10722 The Stoic: discarded fragments and chaps. from various versions. 284 10723-10741 The Stoic: early chaps. edited by Louise Campbell. 284 10742-10748 V.  Philosophical Notes. 
Arrangement TD's outline of categories for this material has been followed, but his original order of papers within the categories cannot be reconstructed, because the papers have been reorganized by at least two people since his death: Sydney Horovitz and Marguerite Tjader Harris. Some of the material in these folders has been typed and annotated by Harris. The early folders within each category contain the material that she selected for use in her book Notes on Life (see Boxes 330-333). TD's long manuscripts in each category have been placed at the beginning of their respective categories, preceding the notes and clippings. Box Folder Philosophical Notes: notes and outlines by Sydney Horovitz, 1953. 285 10749 Philosophical Notes: TD's outlines. 285 10750 Philosophical Notes: introduction by John Cowper Powys. 285 10751 Philosophical Notes: early articles expressing TD's philosophy: "The Force of a Great Religion" and "What I Believe," note by Marguerite Tjader Harris. 285 10752 Philosophical Notes: I1. Mechanism Called the Universe, "Mechanism Called the Universe". 285 10753 Philosophical Notes: I1. Mechanism Called the Universe, "The Mighty Atom". 285 10754 Philosophical Notes: I1. Mechanism Called the Universe, notes, clippings, mss. 285 10755-10767 Philosophical Notes: I1. Mechanism Called the Universe, notes, clippings, mss. 286 10768-10784 Philosophical Notes: I1. Mechanism Called the Universe, notes, clippings, mss. 287 10785-10799 Philosophical Notes: I2. Mechanism Called Life, notes, clippings, mss. 288 10800-10820 Philosophical Notes: I2. Mechanism Called Life, notes, clippings, mss. 289 10821-10838 Philosophical Notes: I2. Mechanism Called Life, notes, clippings, mss. 290 10839-10848 Philosophical Notes: I3. Necessity for Repetition, notes, clippings, mss. 290 10849 Philosophical Notes: I4. Material Base of Form—"The Problem of Form". 290 10850 Philosophical Notes: I4. Material Base of Form, outline and notes for an essay on form; note from Marguerite Tjader Harris. 290 10851-10853 Philosophical Notes: I4. Material Base of Form, notes, clippings. 290 10854-10858 Philosophical Notes: I4. Material Base of Form, notes, clippings, mss. 291 10859-10867 Philosophical Notes: I5. The Factor Called Time, notes, clippings, mss. 291 10868-10874 Philosophical Notes: I6. The Factor Called Chance, notes, clippings, mss. 291 10875-10881 Philosophical Notes: I6. The Factor Called Chance, notes, clippings, mss. 292 10882-10888 Philosophical Notes: I7. Weights and Measures, notes, clippings, mss. 292 10889-10897 Philosophical Notes: I8. Mechanism Called Man, "You, the Phantom," typescript, note, and printed version. 292 10898 Philosophical Notes: I8. Mechanism Called Man, notes, clippings, mss. 292 10899-10903 Philosophical Notes: I8. Mechanism Called Man, notes, clippings, mss. 293 10904-10923 Philosophical Notes: I8. Mechanism Called Man, notes, clippings, mss. 294 10924-10934 Philosophical Notes: I9. Physical and Chemical Character of His Actions, "Us". 294 10935 Philosophical Notes: I9. Physical and Chemical Character of His Actions, notes, clippings, mss. 294 10936-10945 Philosophical Notes: I10. Mechanism Called Mind, notes, clippings, mss. 295 10946-10966 Philosophical Notes: I10. Mechanism Called Mind, notes, clippings, mss. 296 10967-10986 Philosophical Notes: I10. Mechanism Called Mind, notes, clippings, mss. 297 10987-11002 Philosophical Notes: I11. The Emotions, notes, clippings, mss. 298 11003-11024 Philosophical Notes: I11. The Emotions, notes, clippings, mss. 
299 11025-11034 Philosophical Notes: I12. The So-called Progress of Mind, notes, clippings, mss. 299 11035-11037 Philosophical Notes: I13. Mechanism Called Memory, notes, clippings, mss. 299 11038-11042 Philosophical Notes: I14. Myth of Individuality—"The Myth of Individuality". 300 11043 Philosophical Notes: I14. Myth of Individuality, notes, clippings, mss. 300 11044-11060 Philosophical Notes: I15. Myth of Individual Thinking, "It". 300 11061 Philosophical Notes: I15. Myth of Individual Thinking, notes, clippings, mss. 300 11062-11066 Philosophical Notes: I15. Myth of Individual Thinking, notes, clippings, mss. 301 11067-11090 Philosophical Notes: I16. Myth of Free Will"—Suggesting the Possible Substructure of Ethics," "old" typescript and "new" typescript. 302 11091-11092 Philosophical Notes: I16. Myth of Free Will, notes, clippings, mss. 302 11093-11109 Philosophical Notes: I17. Myth of Individual Creative Power—"Myth of the Creative Mind". 302 11110-11111 Philosophical Notes: I17. Myth of Individual Creative Power, notes, clippings, mss. 302 11112-11116 Philosophical Notes: I17. Myth of Individual Creative Power, notes, clippings, mss. 303 11117-11134 Philosophical Notes: I18. Myth of Individual Possession. 304 11135-11136 Philosophical Notes: I18. Myth of Individual Possession, notes, clippings, mss. 304 11137-11141 Philosophical Notes: I19. Myth of Individual Responsibility,"If Man Is Free, So Is All Matter". 304 11142 Philosophical Notes: I19. Myth of Individual Responsibility, "Kismet". 304 11143 Philosophical Notes: I19. Myth of Individual Responsibility, "Responsibility". 304 11144 Philosophical Notes: I19. Myth of Individual Responsibility, notes, clippings, mss. 304 11145-11150 Philosophical Notes: I20. Myth of Individual and Race Memory, notes, clippings, mss. 304 11151-11157 Philosophical Notes: I21. The Force Called Illusion, "Concerning Mycteroperca Bonaci". 305 11158 Philosophical Notes: I21. The Force Called Illusion, "Man and Romance". 305 11159 Philosophical Notes: I21. The Force Called Illusion—"The Myth of Reality". 305 11160-11163 Philosophical Notes: I21. The Force Called Illusion, notes, clippings, mss. 305 11164-11184 Philosophical Notes: I21. The Force Called Illusion, notes, clippings, mss. 306 11185-11191 Philosophical Notes: I22. Varieties of Force, "The Force of a Great Religion". 306 11192 Philosophical Notes: I22. Varieties of Force, "On the Dreams of Our Childhood". 306 11193 Philosophical Notes: I22. Varieties of Force, "Some Additional Comments on the Life Force, or God". 306 11194 Philosophical Notes: I22. Varieties of Force, notes, clippings, mss. 306 11195-11210 Philosophical Notes: I22. Varieties of Force, notes, clippings, mss. 307 11211-11216 Philosophical Notes: I23. Transmutation of Personality—"Transmutation of Personality". 307 11217-11219 Philosophical Notes: I23. Transmutation of Personality, notes, clippings, mss. 307 11220-11231 Philosophical Notes: I24. The Problem of Genius, notes, clippings, mss. 307 11232-11236 Philosophical Notes: II1. The Theory That Life Is a Game, notes, clippings, mss. 308 11237-11262 Philosophical Notes: II2. Special and Favoring Phases of the Solar System, notes, clippings, mss. 309 11263 Philosophical Notes: II3. Necessity for Contrast, "Peace and War". 309 11264 Philosophical Notes: II3. Necessity for Contrast, notes, clippings, mss. 309 11265-11284 Philosophical Notes: II4. The Necessity for Limitation—"Concerning the Multiplicity of Things". 310 11285 Philosophical Notes: II4. 
The Necessity for Limitation, notes, clippings, mss. 310 11286-11293 Philosophical Notes: II5. The Necessity for Change, "Change". 310 11294 Philosophical Notes: II5. The Necessity for Change, notes, clippings, mss. 310 11295-11299 Philosophical Notes: II6. The Necessity for Interest and Reward, notes, clippings, mss. 310 11300-11301 Philosophical Notes: II7. The Necessity for Ignorance, notes, clippings, mss. 310 11302-11313 Philosophical Notes: II8. The Necessity for Secrecy, notes, clippings, mss. 311 11314-11318 Philosophical Notes: II9. The Necessity for Youth and Age, Old and New, notes, clippings, mss. 311 11319 Philosophical Notes: II10. Scarcity and Plenty, notes, clippings, mss. 311 11320-11328 Philosophical Notes: II11. Strength and Weakness—"The Strong and the Weak". 311 11329 Philosophical Notes: II11. Strength and Weakness, notes, clippings, mss. 311 11330-11333 Philosophical Notes: II12. Courage and Fear, "Courage and Fear". 312 11334-11336 Philosophical Notes: II12. Courage and Fear, notes, clippings, mss. 312 11337-11342 Philosophical Notes: II13. Mercy and Cruelty, "The Right to Kill". 312 11343 Philosophical Notes: II13. Mercy and Cruelty, notes, clippings, mss. 312 11344-11358 Philosophical Notes: II14. Beauty and Ugliness, general plan, outline, notes, and partial early typescript for an essay on beauty. 313 11359 Philosophical Notes: II14. Beauty and Ugliness, "The Problem of Beauty". 313 11360 Philosophical Notes: II14. Beauty and Ugliness, "The Problem of Beauty". 313 11361 Philosophical Notes: II14. Beauty and Ugliness, "The Value of Beauty". 313 11362 Philosophical Notes: II14. Beauty and Ugliness, notes, clippings, mss. 313 11363-11370 Philosophical Notes: II15. Order and Disorder, notes, clippings, mss. 313 11371-11379 Philosophical Notes: II16. Good and Evil, "Can There Be Good in Evil". 314 11380 Philosophical Notes: II16. Good and Evil, "Concerning Good and Evil". 314 11381 Philosophical Notes: II16. Good and Evil, "Concerning Good and Evil," note from Helen Dreiser. 314 11382 Philosophical Notes: II16. Good and Evil, "Good and Evil". 314 11383 Philosophical Notes: II16. Good and Evil, "Good and Evil," typescript A. 314 11384 Philosophical Notes: II16. Good and Evil, "Good and Evil," typescript B. 314 11385 Philosophical Notes: II16. Good and Evil, "Good and Evil," typescript B revised [by William Lengel?]. 314 11386 Philosophical Notes: II16. Good and Evil, "Good and Evil," typescripts C and D. 314 11387-11388 Philosophical Notes: II16. Good and Evil, "Good and Evil," typescript E. 314 11389 Philosophical Notes: II16. Good and Evil, notes, clippings, mss. 314 11390-11403 Philosophical Notes: II16. Good and Evil, notes, clippings, mss. 315 11404-11406 Philosophical Notes: II17. Problem of Knowledge—"Education". 315 11407 Philosophical Notes: II17. Problem of Knowledge, notes, clippings, mss. 315 11408-11426 Philosophical Notes: II17. Problem of Knowledge, notes, clippings, mss. 316 11427-11445 Philosophical Notes: II17. Problem of Knowledge, notes, clippings, mss. 317 11446-11455 Philosophical Notes: II18. The Equation Called Morality, notes, clippings, mss. 317 11456-11468 Philosophical Notes: II18. The Equation Called Morality, notes, clippings, mss. 318 11469-11476 Philosophical Notes: II19. The Compromise Called Justice—"The Ultimate Justice of Life". 318 11477-11478 Philosophical Notes: II19. The Compromise Called Justice, notes, clippings, mss. 318 11479-11487 Philosophical Notes: II20. The Salve Called Religion—"Religion—Theory—Dogma".
318 11488 Philosophical Notes: II20. The Salve Called Religion—"Saving the World". 318 11489 Philosophical Notes: II20. The Salve Called Religion, notes, clippings, mss. 318 11490-11494 Philosophical Notes: II20. The Salve Called Religion, notes, clippings, mss. 319 11495-11501 Philosophical Notes: II21. The Problem of Progress and Purpose, notes, clippings, mss. 319 11502-11516 Philosophical Notes: II21. The Problem of Progress and Purpose, notes, clippings, mss. 320 11517-11535 Philosophical Notes: II21. The Problem of Progress and Purpose, notes, clippings, mss. 321 11536-11540 Philosophical Notes: II22. The Myth of the Perfect Social Order, notes, clippings, mss. 321 11541-11553 Philosophical Notes: II22. The Myth of the Perfect Social Order, notes, clippings, mss. 322 11554-11569 Philosophical Notes: II23. The Essential Tragedy of Life—"A Counsel to Perfection". 322 11570-11571 Philosophical Notes: II23. The Essential Tragedy of Life—"The Essential Tragedy of Life". 322 11572-11573 Philosophical Notes: II23. The Essential Tragedy of Life, notes, clippings, mss. 322 11574 Philosophical Notes: II24. The Problem of Death—"Life after Death". 323 11575 Philosophical Notes: II24. The Problem of Death, notes, clippings, mss. 323 11576-11582 Philosophical Notes: II25. Equation Inevitable—"Equation Inevitable" (parts 2, 3, V). 323 11583-11585 Philosophical Notes: II25. Equation Inevitable—"Equation Inevitable: A Variant in Philosophic Viewpoint" (typescript A, typescript B, revised typescript B). 323 11586-11588 Philosophical Notes: II25. Equation Inevitable, notes, clippings, mss. 323 11589-11590 Philosophical Notes: II26. Laughter, "An Address All to Electrons, Protons, Neutrons, Deutrons, Quantums". 323 11591 Philosophical Notes: II26. Laughter, "An Address All to Electrons, Protons, Neutrons, Deutrons, Quantums". 323 11592 Philosophical Notes: II26. Laughter, notes, clippings, mss. 323 11593-11598 Philosophical Notes: II27. Music, notes, clippings, mss. 324 11599-11601 Philosophical Notes: "My Creator", 1943 Nov. 18. 324 11602 Philosophical Notes: "My Creator", 1943 Oct. 324 11603 Philosophical Notes: "My Creator" inscribed by Myrtle Butcher, Nov. 1943; corrections on typescript by Helen Dreiser. 324 11604 Philosophical Notes: TD's notebook containing handwritten selections from many categories. 324 11605 Philosophical Notes: Art and Science, notes, clippings, mss. 324 11606 Philosophical Notes: Medicine, notes, clippings, mss. 324 11607-11609 Philosophical Notes: The Myth of Complete Understanding, notes. 324 11610 Philosophical Notes: The Myth of Pure Reason, notes. 324 11611 Philosophical Notes: Necessity for Union, notes, clippings, mss. 324 11612 Philosophical Notes: On Friendship, notes. 324 11613 Philosophical Notes: On the Credibility of the Senses, notes. 324 11614 Philosophical Notes: Pleasure and Pain, notes, clippings, mss. 324 11615-11616 Philosophical Notes: The Wisdom of the Unconscious, notes, clippings. 324 11617 Philosophical Notes: Notes from the Vedas and the Upanishads. 325 11618-11633 Philosophical Notes: Unclassified notes (Menninger). 325 11634-11635 Philosophical Notes: Unclassified notes (Dr. Wm. J. Robinson). 325 11636-11638 Philosophical Notes: Unclassified notes (Wm. Moulton Marston, "Monkey Thinking"). 325 11639 Philosophical Notes: Unclassified notes (Henry Thomas, The Story of the Human Race). 325 11640-11642 Philosophical Notes: Unclassified notes (Robert Chambers, The Life of the Cell).
325 11643 Philosophical Notes: Unclassified notes (Riddle of the Universe). 325 11644-11645 Philosophical Notes: Unclassified notes (Remy de Gourmont). 325 11646-11648 Philosophical Notes: Unclassified notes (Green Laurels). 325 11649 Philosophical Notes: Unclassified notes (Loeb). 325 11650-11651 Philosophical Notes: Unclassified notes ("Lesson No. 2: The Nature of the Human Animal"). 326 11652 Philosophical Notes: Unclassified notes (Data of Ethics). 326 11653 Philosophical Notes: Unclassified notes (Henry Adams, "The Rule of Phase Applied to History"). 326 11654 Philosophical Notes: Unclassified notes (Crile). 326 11655-11657 Philosophical Notes: Unclassified notes (Carrel). 326 11658-11660 Philosophical Notes: Unclassified notes (William James, A Pluralistic Universe). 326 11661-11662 Philosophical Notes: Unclassified notes (Townsend). 326 11663 Philosophical Notes: Unclassified notes (Jules de Gaultier, Bovarism). 326 11664-11669 Philosophical Notes: Unclassified notes (Thomas Henry Huxley, Essays Selected from Lay Sermons). 326 11670-11671 Philosophical Notes: Unclassified notes (August Strindberg, Zones of the Spirit). 326 11672 Philosophical Notes: Unclassified notes (Gustave Le Bon, The Crowd). 327 11673-11674 Philosophical Notes: Unclassified notes (Oliver Lodge, Ether and Reality). 327 11675-11678 Philosophical Notes: Unclassified notes (Man, the Unknown). 327 11679-11682 Philosophical Notes: Unclassified notes (Outposts of Science). 327 11683 Philosophical Notes: Unclassified notes (March of Science). 327 11684 Philosophical Notes: Unclassified notes (Schrödinger). 327 11685 Philosophical Notes: Unclassified notes (Clendening). 327 11686-11689 Philosophical Notes: Unclassified notes (Sigmund Freud, The Future of an Illusion). 327 11690 Philosophical Notes: Unclassified notes (Robert A. Millikan, Time, Matter, and Values). 327 11691 Philosophical Notes: Unclassified notes (Lemon, From Galileo to Cosmic Rays). 327 11692-11693 Philosophical Notes: Unclassified notes (P. W. Bridgman, The Logic of Modern Physics). 327 11694-11696 Philosophical Notes: Unclassified notes. 327 11697-11699 Philosophical Notes: Unclassified notes. 328 11700-11721 Philosophical Notes: 2 reprints by Dr. Albert F. Blakeslee: "Demonstration of Differences between People in the Sense of Smell" and "A Dinner Demonstration of Threshold Differences in Taste and Smell", 1935. 329 11722 Philosophical Notes: A. A. Brill, "The Psychopathology of Noise," 1916; "The Psychopathology of Selections of Vocations," 1918. 329 11723 Philosophical Notes: C. L. Christensen, "Man and Woman in Prehistory," 1937; Edwin G. Conklin, "A Generation's Progress in the Study of Evolution," 1934. 329 11724 Philosophical Notes: Sigmund Freud, "Three Contributions to the Theory of Sex", 1916. 329 11725 Philosophical Notes: Basil C. H. Harvey, "The Nature of Vital Processes According to Rignano", 1909. 329 11726 Philosophical Notes: Purl Holzer, Mind and Consciousness, v. 1, 1948. 329 11727 Philosophical Notes: Jacques Loeb, "The Mechanistic Conception of Life", 1912. 329 11728 Philosophical Notes: J. W. Miller, "Accidents Will Happen," 1937 and "The Paradox of Cause," 1935; Thomas Hunt Morgan, "The Relation of Genetics to Physiology and Medicine," 1934. 329 11729 Philosophical Notes: Oscar Riddle, "The Confusion of Tongues," 1936 and "The Relative Claims of Natural Science and of Social Studies to a Core Place in the Secondary School Curriculum: A.—for Natural Science," 1937. 329 11730 Philosophical Notes: Wm.
Seifriz, "The Structure of Protoplasm," 1935 H. Riley Spitler, "Some Circulatory Changes Caused by Ocular Fixation of Selected Light Frequencies in t he Visible Range," 1935. 329 11731 Philosophical Notes: Leonard Thompson Troland, "The Chemical Origin and Regulation of Life", 1914. 329 11732 Philosophical Notes: Arthur Waley, "Zen Buddhism and Its Relation to Art", 1922. 329 11733 W.  Notes on Life. Box Folder Notes on Life: "Memo on a Project for Editing Dreiser's  Notes on Life, " by Marguerite Tjader Harris, submitted to the University of Pennsylvania Dreiser Committee,, 1965 March 26. 330 11734 Notes on Life: Report of the material taken from the University of Pennsylvania Library in by M. T. Harris, 1965 Aug. . 330 11735 Notes on Life: 2 readers' reports. 330 11736 Notes on Life: TD's outline, annotated by M. T. Harris. 330 11737 Notes on Life: Miscellaneous notes re contents of book and introductory statements by M. T. Harris. 330 11738 Notes on Life: "Editorial Report," by M. Tjader. 330 11739 Notes on Life: "Editorial Report," by M. Tjader and John McAleer. 330 11740 Notes on Life: Notes by Dr. Frank Muhlfeld; note to Muhlfeld from M. T. Harris. 330 11741 Notes on Life: Editor's foreword by M. Tjader,, 1966 April. 330 11742 Notes on Life: End notes and letter to M. T. Harris, 1971 Dec. 3. 330 11743 Notes on Life: Tentative rough draft and outline (Part I); Introductory material, Mechanism Called the Universe, Mechanism Called Life, 1965 Summer-Autumn. 330 11744 Notes on Life: Necessity for Repetition, Material Base of Form, Factor Called Time. 330 11745 Notes on Life: Factor Called Chance, Weights and Measures, Mechanism Called Man. 330 11746 Notes on Life: Physical and Chemical Character of His Actions, Mechanism Called Mind. 330 11747 Notes on Life: The Emotions, The So-called Progress of Mind, Mechanism Called Memory. 330 11748 Notes on Life: Myth of Individuality, Myth of Individual Thinking, Myth of Free Will. 330 11749 Notes on Life: Myth of Individual Creative Power, Myth of Individual Possession, Myth of Individual Responsibility. 330 11750 Notes on Life: Myth of Individual and Race Memory, The Force Called Illusion. 330 11751 Notes on Life: Varieties of Force. 330 11752 Notes on Life: Transmutation of Personality, The Problem of Genius. 330 11753 Notes on Life: Part II: Theory That Life Is a Game, Special and Favoring Phases of the Solar System. 330 11754 Notes on Life: Necessity for Contrast, Necessity for Limitation, Necessity for Change. 330 11755 Notes on Life: Necessity for Interest and Reward; Necessity for Ignorance; Necessity for Secrecy; Necessity for Youth and Age, Old and New. 330 11756 Notes on Life: Scarcity and Plenty, Strength and Weakness, Courage and Fear, Mercy and Cruelty. 330 11757 Notes on Life: Beauty and Ugliness, Order and Disorder, Good and Evil. 330 11758 Notes on Life: Problem of Knowledge, Equation Called Morality, Compromise Called Justice. 330 11759 Notes on Life: Salve Called Religion, Problem of Progress and Purpose, Myth of a Perfect Social Order. 330 11760 Notes on Life: Essential Tragedy of Life, Problem of Death. 330 11761 Notes on Life: Equation Inevitable. 330 11762 Notes on Life: Laughter, Music. 330 11763 Notes on Life: typescript sent to M. T. Harris's agent. 331 11764-11780 Notes on Life: edited by Marguerite Tjader Harris and John McAleer. 332 11781-11803 Notes on Life, edited by Marguerite Tjader and John McAleer. 333 11804-11830 X.  An Amateur Laborer. Box Folder An Amateur Laborer: note from TD; fragment from chap. I. 
334 11831 An Amateur Laborer: "The Cruise of the Idlewild". 334 11832 An Amateur Laborer: "The Mighty Burke". 334 11833 An Amateur Laborer: "The Toil of the Laborer". 334 11834 An Amateur Laborer (chaps. I-XXIII). 334 11835-11851 An Amateur Laborer: (chaps. XXIII-XXV). 335 11852-11854 An Amateur Laborer: ms fragments. 335 11855-11874 An Amateur Laborer (Pa. ed.): the Pennsylvania edition, contents, acknowledgments, preface. 336 11875 An Amateur Laborer (Pa. ed.): introduction by Richard W. Dowell. 336 11876 An Amateur Laborer (Pa. ed.): editorial principles by James L. W. West III. 336 11877 An Amateur Laborer (Pa. ed.): textual apparatus. 336 11878 An Amateur Laborer (Pa. ed.) (chaps. I-XXV). 336 11879-11890 An Amateur Laborer (Pa. ed.): fragments. 336 11891-11895 An Amateur Laborer (Pa. ed.): explanatory notes. 336 11896 An Amateur Laborer (Pa. ed.): illustration page, word division, design specifications. 336 11897 An Amateur Laborer (Pa. ed.): fragments not used in book. 336 11898-11901 V. TD Writings: Essays. Series Description This series includes Dreiser's published and unpublished essays, reviews, and letters to the editor. Some photostats of articles that Dreiser wrote as a newspaper reporter are filed here as well; printed versions of other Dreiser newspaper articles are located in the clippings file or on microfilm. In addition, essays for series developed by Dreiser, whether written by him or by someone else, are housed here. They are collected together under the series title (e.g., "Baa! Baa! Black Sheep," "I Remember, I Remember"). The essay title and author are listed on the folder. The order of filing the holdings for each essay is the same as that followed in TD Writings: Books: notes, manuscripts, typescripts, proofs, and printed versions. For published essays, the journal and year of first publication are noted on the folder. The essays are filed alphabetically by the title on the first page of the essay; the title used for publication is also noted on the folder with the other publication information when it differs from the first-page title. If the publication title is radically different from the original title, researchers can find in Appendix A a cross-reference under the publication title to the essay's title in the collection. Some of Dreiser's published essays were later included in his nonfiction book publications: A Traveler at Forty, Twelve Men, Hey Rub-a-Dub-Dub, Newspaper Days (A Book about Myself), The Color of a Great City, Dreiser Looks at Russia, A Gallery of Women, My City, and America Is Worth Saving. Researchers interested in some of these essays should check for holdings in both TD Writings: Books and TD Writings: Essays, because versions of the essay may be found in both locations. Box Folder A. 337 11902-11924 "Baa! Baa! Black Sheep" series for Esquire. 338 11925-11949 Bal - Com. 339 11950-11983 Con - El. 340 11984-12023 Em - Go. 341 12024-12054 Gr - H. 342 12055-12082 I - "I Find...". 343 12083-12106 "I Remember! I Remember!" series - Is. 344 12107-12136 It - L. 345 12137-12174 Ma. 346 12175-12206 Me - On. 347 12207-12244 Ou - P. 348 12245-12267 R. 349 12268-12291 S - "This Florida...". 350 12292-12333 "This Madness:" "Aglaia"; "Elizabeth". 351 12334-12362 "This Madness:" "Sidonie". 352 12363-12391 "This Madness:" "Camilla". 353 12392-12418 "This Madness:" "Aglaia," "Elizabeth," "Sidonie". 354 12419-12424 Tho - "Why Help...". 355 12425-12470 "Why I..." - Z and untitled. 356 12471-12489 VI.
TD Writings: Short stories. Series Description Dreiser wrote many more short stories than were ever published and started many stories that he never completed. He often recorded and filed ideas for them: sometimes a title with a plot summary, sometimes only a title. Friends and researchers that he employed would also send him newspaper clippings describing crimes with an unusual psychological twist and inexplicable events involving humans or phenomena in the natural world: he collected and filed such information under "ideas for stories." Also included are clippings that describe crimes that Dreiser considered using as the basis for what would later become An American Tragedy. The first boxes contain all completed and unfinished short stories (arranged alphabetically), including those consisting only of a title and plot summary. [Appendix B comprises an alphabetical list of the short stories.] Filed next are two boxes of ideas for short stories; they contain lists of titles only or clippings that he collected or were sent to him. As in the previous series, the order of arrangement for the manuscripts for each title is chronological: notes, manuscripts, typescripts, proofs, and printed version. First publication data are noted on the folder of published stories. Box Folder A - D. 357 12490-12512 E - Hei. 358 12513-12544 Her - Lo. 359 12545-12576 Ly - P. 360 12577-12607 R - S. 361 12608-12635 T - Z and untitled. 362 12636-12667 Ideas for short stories. 363 12668-12686 Ideas for short stories (Wynkoop murder case). 364 12687-12699 Return to Top » VII. TD Writings: Poems. Series Description Because poems are filed in two locations in the Dreiser Papers, researchers should check both in this series and in TD Writings: Books under "Moods" (Boxes 219-221). Copies or versions of some poems are found in both locations. Dreiser began writing poetry in the 1890s and continued throughout his lifetime; the collection contains poems from the entire period. In Boxes 365 through 369 the poems are arranged alphabetically by title. This grouping includes poems written by Dreiser but scored for music by someone else: they are filed under the title of the poem, with the name of the composer of the music listed on the folder. Boxes 369 and 370 contain selections of Dreiser's poems, chosen by Dreiser and others, on particular themes or for specific purposes. [Appendix C comprises an alphabetical list of the poems.] Box Folder A - For. 365 12700-12789 Fou - L. 366 12790-12873 M - Q. 367 12874-12946 R - Y. 368 12947-13052 Selected poems for a small book of poetry. 369 13053-13056 Rhymed verse. 369 13057-13058 Selection of poems by TD for?. 369 13059-13060 "Sonnets in Recollection". 369 13061 Verses, 1895. 369 13062 Selection of poems typed by?. Description For inclusion in Robert Palmer Saalbach, Selected Poems from Moods by Theodore Dreiser, 1969? 369 13063 Poems by TD translated into German by F. C. Steinermayr and Lind Goldschmidt. 370 13064-13065 Poems by TD typed by Estelle Kubitz. 370 13066-13069 Return to Top » VIII. TD Writings: Plays. Series Description One of Dreiser's first pieces of creative writing was a playscript, Jeremiah I, which is in this collection. Dreiser enjoyed writing plays and often had ideas for playscripts, which he would briefly summarize with the intent of developing them later. Sometimes he collaborated with another person in translating his idea into a playscript. This series contains both fully developed playscripts and Dreiser's ideas for plays, arranged alphabetically.
Some of Dreiser's plays were scored for music, in which case the play is filed under its title and the name of the composer is listed on the folder. In addition to the plays in this series, the researcher should see Boxes 166-168, which contain playscripts of The "Genius," some of which were written by Dreiser. [Appendix D comprises an alphabetical list of the plays.] Box Folder A - C. 371 13070-13095 D - J. 372 13096-13128 L - Z and untitled fragments. 373 13129-13149 Return to Top » IX. TD Writings: Screenplays and radio scripts. Series Description Even before his arrival in California in 1919, Dreiser had been impressed by the popularity of motion pictures and by the size of the potential audience for movies compared with that for books. He believed that screenwriting could boost his income dramatically. In addition to creating new screenplays, Dreiser also saw possibilities for screen adaptations of his novels and short stories. During his lifetime, motion picture versions of An American Tragedy, Jennie Gerhardt, and My Gal Sal were produced, although Dreiser himself did not write any of these screenplays. Dreiser encouraged other writers who wanted to adapt his novels and short stories. In fact, he often worked with other writers on screenplays: he presented an idea or a plot and his collaborator translated it into an actual screenplay. He followed a similar pattern with radio scripts. No screenplays written by Dreiser were ever produced. This series includes (1) screenplays and radio scripts written by Dreiser, (2) those written by a collaborator based on an idea by Dreiser, and (3) Dreiser's ideas for screenplays that were never developed. The file on "Revolt or Tobacco" also includes notes and clippings on the tobacco industry and photographs from a field trip to Tennessee that were used as background material in writing the script, as well as incorporation papers and bylaws for Super Pictures, Inc., the company created to produce the movie. [Appendix E comprises an alphabetical list of the screenplays and radio scripts.] Box Folder A - K. 374 13150-13182 L - P. 375 13183-13206 "Revolt or Tobacco". 376 13207-13230 "Revolt or Tobacco". 377 13231-13262 "Revolt or Tobacco". Note See also Box 468, folder 14358 for reviews of Borden Deal's 1965 book, The Tobacco Men, which was based on TD's notes for this screenplay. 378 13263-13294 S - Z and untitled. 379 13295-13321 Return to Top » X. TD Writings: Addresses, lectures, interviews. Series Description The writings in this series are filed chronologically. Some addresses and interviews were published; thus, the holdings in this series range from notes to printed versions. Dreiser received many requests for interviews and for answers to specific questions. After replying, he often filed these requests under "Questions and Answers" without indicating the source or the date. If the year can be determined or estimated approximately, the material is filed using that year; if not, the material is filed at the end of the chronologically arranged folders. Box Folder 1912-1934. 380 13322-13367 Miscellaneous questions and answers, 1935-1944. 381 13368-13418 Return to Top » XI. TD Writings: Introductions, prefaces. Series Description Writings in this series include everything from research notes to printed versions and range in length from a few paragraphs to a long essay.
In addition to traditional introductions to books, Dreiser wrote introductory material for catalogs of painti ngs, new literary journals, labor pamphlets, and film series. Notes for the introductions of Harlan Miners Speak and  The Living Thoughts of Thoreau are extensive and varied in character; some of them were collected by others but annotated by Dreiser. Box Folder 1914-1932. 382 13419-13461 TD's Introduction to Harlan Miners Speak, 1932. 383 13462 1933-1938 May . 384 13463-13474 1938 Nov.-1941. 385 13475-13500 Return to Top » XII.  Journals edited by TD. Series Description Before his novel-writing career really took hold, Dreiser was editor of Ev'ry Month,   Smith's Magazine,   Broadway Magazine,   The Delineator , and  Bohemian Magazine. In the 1930s, when he became more involved in political issues, he agreed to be an editor of  American Spectator. Holdings in this series include some notes, financial data, production material, and proposed articles for Broadway Magazine, Bohemian Magazine, and  American Spectator; they also include som e issues of  Ev'ry Month, Broadway Magazine, Bohemian Magazine, and  American Spectator. Researchers interested in Dreiser's career at  The Delineator should also se e folder 13812 (Box 405) and Box 421, which contains a scrapbook of clippings documenting Dreiser's editorship of this journal. Box Folder Notes: contents and cost sheets for the issues of Broadway Magazine, 1906 July and August. 386 13501 Notes: production material and proposed articles for Bohemian Magazine. 386 13502-13524 Notes: American Spectator: New York Times editorial, ; policy statements; potential contributors, 1932 Oct. 20. 387 13525 Notes: American Spectator: ideas for articles. 387 13526 Notes: American Spectator: suggestions for articles. 387 13527 Notes: American Spectator: articles written and expected. 387 13528 Notes: American Spectator: comments re contributors or articles from Evelyn Light to TD. 387 13529 Notes: American Spectator: "The Editors Believe" material. 387 13530 Notes: American Spectator: material submitted for publication. 387 13531-13533 Notes: American Spectator: information on distribution, advertising, printing, and financial matters supplied to TD by Evelyn Light. 387 13534 Notes: American Spectator: radio broadcast, 1933. 387 13535 Notes: American Spectator: miscellaneous. 387 13536 Copies: Ev'ry Month, 1895 October. 388 13537 Copies: Ev'ry Month, 1896 Nov-Dec. 388 13538-13539 Copies: Ev'ry Month, 1897 Jan. 388 13540 Copies: Ev'ry Month, 1897 March-May . 388 13541-13543 Copies: Ev'ry Month, 1897 Nov-Dec. 388 13544-13545 Copies: Ev'ry Month, 1898 March . 388 13546 Copies: Ev'ry Month, 1896 April-1897 May. 389 13547 Copies: Ev'ry Month, 1898 June-1899 May. 389 13547 Copies: Broadway Magazine, 1906. 390 13548 Copies: Bohemian Magazine, 1909. 390 13548 Copies: American Spectator, 1932 Nov.-1933 Oct. Note These copies are very fragile. 391 13549 Return to Top » XIII.  Notes written and compiled by TD. Series Description Dreiser's note-taking habits probably began during his days as a newspaper reporter. He took notes (or hired others to do so), kept diaries, and collected clippings as an aide-mémoire for his writing projects. Dreiser's habit was to file the notes wit h the relevant manuscripts and typescripts for a piece of writing, and his practice has been followed in organizing this collection. 
Notes on the life and career of Charles Yerkes, for example, are housed with the manuscripts for T he Financier, The Titan, and  The Stoic, because they were an integral source of information for the writing of those works. The material filed in this series indicates the breadth of Dreiser's interests and concerns and the kinds of sources that he consulted when doing research. The notes in this series may have been collected with particular projects in mind that were nev er written or published; they may represent information Dreiser wanted for general purposes; they may have been kept by chance or for idiosyncratic reasons. They probably had multiple uses: what Dreiser labeled "notes on the American scene" and "capital and labor" might have been used in any number of his political writings in the 1930s and 1940s, including his book Tragic America. Notes are filed alphabetically by subject, so researchers should check the container list fo r topics of interest. The quantity of notes on any subject varies from a paragraph to more than a box. Because of the fragmentary nature of the holdings, the categories "Novels, proposed" and "Novels, unfinished" are housed in this series rather than in TD Writings: Books. One of the unfinished novels, "The Rake," was Dreiser's early attempt to write what eventually became An American Tragedy. Dreiser collected clippings and notes and wrote a prologue and several chapters for this work but decided at some point that this was not the story that he wanted to write. A.  Notes: A - Cap. Box Folder Notes on the American scene: includes notes on political parties, corporations, charity, banks, revision of the New York constitution [many of these notes probably were collected for the writing of Tragic America ]. 392 13550-13555 Notes on amnesia; idea for a story about an amnesia victim. 392 13556 Notes on TD's books. 392 13557 Notes on capital and labor (many of these notes were probably collected for the writing of Tragic America). 392 13558-13564 B.  Notes: Cap. Box Folder Notes on capital and labor. 393 13565-13574 Notes on capital and labor: United States v. Haywood et al., 1929 Aug. 9-13. 393 13575-13592 C.  Notes on the Catholic Church. Box Folder "Sex". 394 13593 "Adultery, the Church and Law", after 1931. 394 13594 "The Catholic Church and the Labor Movement," by David J. Saposs. 394 13595 "Catholics in Education": outline and division into chapters by Esther McCoy(?). 394 13596 "Catholic's Progress," by ?. 394 13597 Miscellaneous notes on the Catholic church. 394 13598-13606 "The Church and Double-Quick Time". 394 13607 Version of "The Church and Wealth in America" in Tragic America. 394 13608 "Church Support in the U.S.," from a thesis by Michael N. Kremer. 394 13609 "Church Support in the United States". 394 13610 "Church Support in the United States," by Michael N. Kremer. 394 13611 "Concerning Mr. Guthrie's Opinion on Church and State in Mexico," by Charles C. Marshall. 394 13612 "The Holy Roman Church". 394 13613 Letters re the Catholic church. 394 13614 "My Quarrel with the Catholic Church". 394 13615 "A Roman Catholic and the Presidency," by Charles C. Marshall. 394 13616 "The Roman Catholic Church as a Business and Political Organization," by ?. 394 13617 "Simony: An Historical Synopsis and Commentary," by Rev. Raymond A. Ryder. 394 13618 "The Support of the Catholic Church" restatement of data from "Church Support in the United States," by Michael N. Kremer. 394 13619 D.  Notes: Ce - L. Box 395 Box Folder Notes on censorship. 
395 13620 Notes on dictatorship: European, Central and South American countries, and U.S. 395 13621 Notes on dreams: accounts of TD's dreams. 395 13622-13623 Notes and articles re the Federal Arts Program. 395 13624-13626 Notes on and by Charles Fort; autobiographical statement; list of his writings; reviews of his works; Fort memorabilia. 395 13627-13631 Notes on Germany. 395 13632 Notes on Emma Goldman. 395 13633-13634 Notes on Alexander Hamilton, Grover Cleveland, and James G. Blaine. 395 13635 Notes on insurance by ?. 395 13636 Notes on interdependence. 395 13637 Notes on Japan, 1932-1934. 395 13638 Notes on the Jewish question. 395 13639 Notes for an article on Los Angeles. 395 13640-13642 E. Notes: M - N. Box Folder Notes on the Mechanics & Traders-Union bank scandal, Brooklyn, 1906-1915. 396 13643-13658 Notes on music. 396 13659 Lists of names and word substitutions. 396 13660 Novels, proposed: outlines. 396 13661 Novels, unfinished: "Mea Culpa". 396 13662-13668 Novels, unfinished: "Our Neighborhood: A Book of Present Day Life," by C. T. Allison (written in TD's hand: foreword; chaps. I, II, III). Note See also "Hollywood Now," Box 342. 396 13669-13670 F. Notes: N. Box Folder Novels, unfinished: "The Rake": list of incidents; prologue; 7 chaps. (some incomplete); notes; related clippings. 397 13671-13683 G. Notes: O - P. Box Folder Ouija board notes. 398 13684 Notes on philosophers. 398 13685 Notes on philosophy and science typed by Estelle Kubitz. 398 13686-13694 Notes on production and machinery taken from Howard Scott of Technocracy. 398 13695 H. Notes: R - Z. Box Folder TD's notes on reading. 399 13696-13700 Notes on realism and other literature. 399 13701 Notes on Russia, 1932-1934. 399 13702-13703 Notes on Russian writers. 399 13704 Notes on relief for Spain; copies of The War in Spain; copies of Voice of Spain, 1939. 399 13705 Miscellaneous notes. 399 13706 Philadelphia diary: prescriptions, 1902-1903. Description Xerox of originals at Lilly Library, Univ. of Indiana. 400 13707 Philadelphia diary, 1902 Oct. 22-1903 Feb. 17. 400 13708-13713 Philadelphia diary: explanatory letters and transcription by Neda Westlake for entries for 1902 Oct. 22-1903 Feb. 17. 400 13714-13719 Return to Top » XIV. TD diaries. Series Description Dreiser kept two types of diaries at irregular intervals throughout his lifetime: the kind that noted his daily activities, thoughts, and contacts and the kind that recorded events, people, places, and reflections that he intended to use in a piece of writing. This series contains the former type of diary; examples of the latter are housed with the published work that they helped to generate. For example, the diaries from Dreiser's European tour in 1911-1912, used while writing A Traveler at Forty, are stored with the typescripts for that book; likewise, the diary that Dreiser kept on his trip to Russia in 1927-1928 is located with the typescripts for Dreiser Looks at Russia. Dreiser's private diaries contain more than pages of notes; he often pasted in postcards, prescriptions for medicine, letters, menus, and souvenirs. Sometimes he made drawings of certain architectural details or designs that he liked. At the end of the container list for this series is a note regarding the location of other diaries in the collection. Box Folder Diary fragments, 1913-1919. 400 13720 Savannah diary, 1916 26 Jan.-18 Feb. 400 13721-13726 Savannah diary: transcription by Neda Westlake for entries for 1916 26 Jan.-18 Feb.
400 13727 Greenwich Village diary: xerox of letters establishing provenance of diary; entries for 1917 May 15-1918 March 4. 400 13728-13731 Indiana diary, 1919 June 15-July 2. 400 13732-13733 Diary of trip to Grove and Asbury Park, New Jersey, 1919 July 12-14. 400 13734 Helen diary, 1919 July 26-1924 July 2. 401 13735-13761 Florida diary, maps, bills, guides, telegrams, miscellaneous, 1925-1926. 402 13762 Florida diary, 1925 Dec. 8-1926 Jan. 25. 402 13763-13766 Florida diary: copy of Sunland magazine, 1926 Jan. 402 13767 Florida diary: newspaper clippings re real estate development in Florida, 1925 Dec. 13, 28, 29; 1926 Jan. 24. 402 13768-13769 European diary, 1926 June 22-Oct. 21. 403 13770 Theodore Dreiser: American Diaries (Thomas P. Riggio, editor; James L. W. West III, textual editor) (Philadelphia: University of Pennsylvania Press, ): suggested illustrations, 1902-1926, 1982. 404 13771 American Diaries (Pa. ed.): copies of correspondence re publication. 404 13772 American Diaries (Pa. ed.): front matter. 404 13773 American Diaries (Pa. ed.): introduction by Riggio. 404 13774-13775 American Diaries (Pa. ed.): editorial principles by West. 404 13776 American Diaries (Pa. ed.): Philadelphia diary; notes, 1902 Oct. 22-1903 Feb. 17. 404 13777-13778 American Diaries (Pa. ed.): Savannah diary; notes, 1916. 404 13779 American Diaries (Pa. ed.): Greenwich Village diary; notes, 1917 May 15-1918 March 4. 404 13780-13784 American Diaries (Pa. ed.): Home to Indiana; notes, 1919. 404 13785 American Diaries (Pa. ed.): A Trip to the Jersey Shore; notes, 1919. 404 13786 American Diaries (Pa. ed.): Helen, Hollywood, and the Tragedy; notes, 1919 July 19-1924 July 2. 404 13787-13793 American Diaries (Pa. ed.): Motoring to Florida; notes, 1925 Dec. 8-1926 Jan. 25. 404 13794-13795 American Diaries (Pa. ed.): appendix—diary fragments, 1914-1918. 404 13796 American Diaries (Pa. ed.): textual apparatus. Note For other TD diaries, see Boxes 142, 143, 144 (European diary, 1911-1912, used in writing A Traveler at Forty); Box 171 (diary notes for A Hoosier Holiday); and Box 222 (Russian diary, 1927-1928, used in writing Dreiser Looks at Russia). 404 13797 Return to Top » XV. Biographical material. Series Description This material is difficult to categorize, as it ranges from pages from the Dreiser family Bible to a copy of Dreiser's memorial service on 3 January 1946. Housed here, for example, are some short autobiographical works; biographical summaries by others; lists of Dreiser's writings, addresses, and places of employment; addresses of associates; papers and books stored in warehouses; personal manuscripts for sale; invitees to a Simon & Schuster reception at Mt. Kisco; and awards. The container list provides more details. Box Folder Pages from Dreiser family Bible; title page from Dawn. 405 13798 List of TD domiciles and places of employment. 405 13799 "A Dreiser Chronology," by John G. Moore, 1946 Feb. 22. 405 13800 Autobiographical sketch by TD for Household Magazine, 1929 Nov. 405 13801 TD's account of his life for Eric Possell, 1928 March 16. 405 13802 List of TD's writings in various forms and their owners as of (?); later lists of TD manuscripts for auction, 1922. 405 13803 List of TD's magazine articles and other writings. 405 13804 Writings by or about TD in the State Library, Salem, Oregon, after 1940. 405 13805 Accident reports: TD hit by auto and auto accident involving TD, Helen Richardson, and Clara Clark, 1919, 1932.
405 13806 List of invitees for Simon & Schuster reception for TD at Iroki, Mt. Kisco, N.Y., 1934 Oct. 405 13807 TD address list. 405 13808 Miscellaneous addresses of TD associates. 405 13809 Biographies of TD in reference books. 405 13810 Miscellaneous biographical data. 405 13811 Press release announcing TD's appointment as editor of The Delineator. 405 13812 TD's plan for making money after being fired from The Delineator (?). 405 13813 TD horoscopes. 405 13814 TD's proposal for a society to help young authors (?), 1919 Jan. 23. 405 13815 "A Literary Apprenticeship," autobiographical ms (incomplete) and notes; notes for an autobiographical work, "Literary Experience". 405 13816 Architect's sketches of Iroki [TD's Mt. Kisco home]; advertisement for sale of Iroki; directions to Iroki; furniture advertisement with note from Evelyn Light, 1930 March 12. Note See Box 484, folder 14691, for map of Mt. Kisco. 405 13817 Inventory of TD's papers at Mt. Kisco and Manhattan Storage, 1933. 405 13818 Inventory of TD's papers at Mt. Kisco and Manhattan Storage, revised later by TD and Helen Dreiser, 1938. 405 13819 Inventory of TD material at Manhattan Storage, annotated by Helen Dreiser and Harriet Bissell, 1938. 405 13820 Lists and receipts of transfers of material in storage at Mt. Kisco and Manhattan Storage, and other inventoried papers, 1931-1946. 405 13821 Miscellaneous lists. 405 13822 TD awards; obituaries. 405 13823 Memorial service for TD, 1946 Jan. 3. 405 13824 Miscellaneous items re Dreiser family members: Edward Dreiser, Mary Frances Dreiser Brennan, John Paul Dreiser. 405 13825 TD notes and souvenirs from trips. Note See Box 484, folder 14692, for souvenir map of Big Moose Lake, New York. 405 13826 Return to Top » XVI. Family members. A. Paul Dresser Materials. Description & Arrangement This subseries begins with two boxes of Theodore Dreiser correspondence, which deals exclusively with business concerns related to the music of his brother, Paul Dresser. The first is correspondence between Dreiser and several music publishing firms (i.e., Paul Dresser Music, Richmond Music, Edward B. Marks, and Paull-Pioneer). The second houses correspondence with Theodore and Helen Dreiser from many private and corporate correspondents concerning the making of the movie about Paul Dresser's life, My Gal Sal (this box is arranged chronologically). The remainder of the material comprises: Paul Dresser sheet music, filed alphabetically by title, with miscellaneous sheet music and lyric sheets following (3 boxes; a list of titles of these works may be found in Appendix F); a scrapbook of articles related to Paul Dresser (1 box); Paul Dresser Memorabilia and Clippings (1 box); two plays written by Paul Dresser, After Many Years and Timothy and Clover (1/2 box); and Dresser memorabilia collected by Paul Gormley, including photos, clippings and cards (1/2 box). Box Folder Biographical information on Paul Dresser, written by TD. 406 13827 TD correspondence pertaining to Paul Dresser music. 406 13828-13834 TD correspondence pertaining to My Gal Sal. 407 13835-13845 Paul Dresser sheet music: original board; "After the Battle," "Her Tears Drifted Out with the Tide". 408 13846-13871 Paul Dresser sheet music: "I Long to Hear from Home," "The Old Flame Flickers and I Wonder Why". 409 13872-13898 Paul Dresser sheet music: "On the Banks of the Wabash Far Away," "You're Just a Little Nigger..." miscellaneous sheet music, lyric sheets. 410 13899-13927 Paul Dresser scrapbook.
411 13928-13997 Paul Dresser memorabilia and clippings. 412-416 13998-14002 Paul Dresser material: Paul Gormley's collected memorabilia; plays: "After Many Years," "Timothy and Clover". 412-416 14003-14006 B. Helen Dreiser Diaries and Other Writings. Description & Arrangement Because the Theodore Dreiser Papers contains so much material by and about Helen, and because she and Dreiser were associated for so many years in a business as well as a personal relationship, her writings have been gathered in a separate series. In addition to Helen Dreiser's daybooks, kept between 1938 and 1951, this series contains typescripts and notes from her My Life with Dreiser (1951) and a movie script for a sequel to My Gal Sal--"Sal o' My Heart." Helen sometimes worked with Dreiser on screenplays; her work is housed with Dreiser's writings when she adapts one of his works. See, for example, her screen adaptation of Sister Carrie in Box 127, folder 7119 and her work on My Gal Sal in Box 375. Box Folder Helen Dreiser's daybooks, 1938-1941, 1943-1944. 412-416 14007 Helen Dreiser's daybooks, 1945-1947. 412-416 14008 Helen Dreiser's daybooks, 1948-1951. 412-416 14009 Genealogical chart of Patges lineage. 417 14010 Miscellaneous notes and clippings. 417 14011 "Journey Eternelle". 417 14012 My Life with Dreiser (chaps. I-LI, Epilogue). 417 14013-14022 My Life with Dreiser (fragments from chaps. 2-28). 417 14023-14024 My Life with Dreiser, miscellaneous notes and corrections. 417 14025-14027 My Life with Dreiser, promotional material. 417 14028 Helen Richardson [Dreiser] and Lucile Nelson, "The Blessed Damozel," synopsis for a movie, 1942. 417 14029-14030 "A Few Notes on The Dream, Manuscript Which Was Inspired by Charles Fort's First Full Length Manuscript 'X'". 417 14031 "Sal o' My Heart," movie script, 1943. 417 14032 "Sal o' My Heart," movie script with songs by Clare Kummer, 1943. 417 14033 C. Vera Dreiser Correspondence. Description & Arrangement This material includes personal correspondence between Vera Dreiser and others, mainly concerning her two famous uncles, Theodore Dreiser and Paul Dresser. Files are ordered alphabetically by correspondent and chronologically within each folder; incoming and outgoing letters are interfiled. Following the correspondence are a few subject folders; they comprise: articles and information about Dreiser; Vera's diary concerning Theodore; Dreiser family history; notes concerning Paul Dresser; and memorabilia. Box Folder Correspondents A - P. 418 14034-14104 Correspondents R - Z; miscellaneous notes; memorabilia. 419 14105-14135 Return to Top » XVII. Memorabilia. A. Scrapbooks. Description & Arrangement These scrapbooks were not all compiled by Dreiser, but they all focus on his activities and interests. They are arranged chronologically, with the earliest scrapbook presenting reviews of Sister Carrie and the last one, kept by Lorna Smith between 1963 and 1966, containing clippings and souvenirs of Dreiser and Helen. Six scrapbooks hold reviews of Dreiser's books. In addition to the one for Sister Carrie, there are scrapbooks for A Traveler at Forty, The "Genius", "Twelve Men," Newspaper Days (A Book about Myself), and The Color of a Great City. The last four are book dummies filled with blank pages, onto which clippings of book reviews are pasted. Hazel Godwin kept a scrapbook of clippings regarding Dreiser's visit to Toronto in 1942.
Helen Dreiser compiled six scrapbooks between 1926 and 1950 that contained Christmas and other holiday cards sent to Dreiser and herself; clippings about Dreiser's activities and speeches and world events; programs and other souvenirs; reviews of and music from  My Gal Sal; telegrams, cards, and letters that she received after Dreiser's death; reviews of  The Bulwark and  The Stoic; and accounts of h er speeches and activities. Scrapbooks covering Dreiser's career with  The Delineator, his activities between 1914 and 1916 and miscellaneous literary selections, and the All Russian Ballet project are also housed here. Box Folder Sister Carrie: scrapbook of reviews, 1901-1911. 420 14136 Sister Carrie: folder of loose reviews found in scrapbook but not pasted in first page of scrapbook of letters, 1907-1912. 420 14137 Miscellaneous clippings re TD at The Delineator. 421 14138 A Traveler at Forty: clippings of reviews, 1913-1916. 421 14139 Scrapbook kept by Kirah Markham of writings, some by or about TD, circa 1914-1916. 422 14140 Loose items found in scrapbook. 422 14140 Book dummies of The "Genius",  Twelve Men,  Newspaper Days (A Book about My self), and  The Color of a Great City, each containing pasted-in reviews of the respective books, 1915-1923. 423 14141 Scrapbook kept by Helen Dreiser of clippings re TD and current events, Christmas cards, and souvenirs, 1926-1938. 424 14142 All Russian Ballet, Inc.: scrapbook empty except for letter to Arthur Carter Hume , copy of woodcut of TD, and few items relating its incorporation, 1934 Nov. 7. 425 14143 Scrapbook kept by Helen Dreiser of clippings re TD and current events, reviews of My Gal Sal, souvenirs, and programs, 1938-1942. 426 14144 Scrapbook kept by Hazel Godwin re TD's trip to Toronto, Canada,, 1942 October . 427 14145 Scrapbook kept by Helen Dreiser of clippings re TD and current events, music from and reviews of My Gal Sal, Christmas and other holiday cards, programs, and souvenirs, 1941-1944. 428 14146 Scrapbook kept by Helen Dreiser of clippings re TD and current events, programs, holiday cards, souvenirs, copies of her speeches about TD, a few clippings re TD's death, 1944-1948. 428 14147 "The Passing of Theodore Dreiser": scrapbook kept by Helen Dreiser, containing letters, telegrams, and cards from friends; clippings; and other memorabilia re the death of TD. 429 14148 Scrapbook kept by Helen Dreiser of clippings re TD and his writings; some reviews of The Bulwark and  The Stoic and of books written about TD, 1948-1950. 430 14149 Scrapbook kept by Lorna Smith with clippings and souvenirs re TD and Helen Dreiser, 1963-1966. 431 14150 B.  Photographs. Description The photographs (many of which may be viewed online) in this series range from informal snapshots to formal portraits and provide extensive documentation of the personal lives and careers of Theodore and Helen Dreiser and Vera Dreiser Scott (Dreiser's niece). In addition to collecting in dividual photographs, Helen compiled photograph albums that pictured her friends and relatives as well as her activities and travels with Dreiser. All photographs in the collection are housed in this series with two exceptions: (1) photographs that were enclosed with correspondence originally and that were still housed with that correspondence in 1990 and (2) photographs that Dreiser filed with research notes (these photographs have been left in place). 
Theodore and Helen Dreiser, Myrtle Butcher (Helen's sister), Vera Dreiser Scott, and Ralph Fabri are the major donors of photographs to the Dreiser Papers. This series comprises photographs of Dreiser alone and with others; persons associated with Dreiser; Dreiser's parents and siblings; Helen Patges Richardson Dreiser, alone and with others; Helen Richardson's family album; photograph albums compiled by Helen; Dreiser residences; artistic representations of Dreiser; Edward Dreiser, Mai Skelly Dreiser, Vera Dreiser, and their friends and relatives; identifiable friends or associates of Vera Dreiser; and publicity photographs of associates of Vera Dreiser who were involved in musical or theatrical productions. In addition, there are photographs that have been used in publications about Dreiser and to promote motion pictures based on his works. Box Folder Photographs of TD, 1894-1942. 432 14151 Photographs of TD with others, 1888-1945. 433 14152 Photographs of persons associated with TD. Description Does not include photographs of Helen Dreiser or of TD's parents and siblings. 434 14153 Photographs of TD's parents and siblings. 435 14154 Photographs of Helen Patges Richardson Dreiser, alone and with others, circa 1895-1953. Note Photographs of Helen with TD can be found in Boxes 433, 438, and 439. 436 14155 Helen Richardson family album, 1914-1919. 437 14156 Photo album compiled by Helen Richardson, containing photographs of herself, TD, friends, family, residences, and places visited, 1920-1933. 438 14157 Photograph albums compiled by Helen Richardson, containing photos of herself, TD, friends, family, residences, and places visited, 1927-1937. 439 14158 Photographs of Dreiser residences, 1871-1945. 440 14159 Photographs of artistic representations of TD. 441 14160 Photographs that have been used in publications about TD and to promote motion pictures based on his works. Description Illustrations from Pennsylvania Dreiser Edition of Sister Carrie, An Amateur Laborer, Theodore Dreiser: American Diaries, 1902-1926, Dreiser-Mencken Letters; motion picture stills from Jennie Gerhardt and My Gal Sal. 442 14161 Photographs that have been used in periodical publications re TD or his writings. 443 14162-14172 Photographs of Edward Dreiser, Mai Skelly Dreiser, Vera Dreiser, and their friends and relatives, late 1800s-1939. 444 14173 Photographs of Edward Dreiser, Mai Skelly Dreiser, Vera Dreiser, and their friends and relatives, 1940-1980s. 445 14174 Identifiable friends or associates of Vera Dreiser. 446 14175 Publicity photographs of associates of Vera Dreiser who were involved in musical or theatrical productions, A - K. 447 14176 Publicity photographs of associates of Vera Dreiser who were involved in musical or theatrical productions, L - Z and unidentified. 448 14177 Oversize photographs of TD and his friends, relatives, and associates. Arrangement Arranged chronologically. 449 14178-14192 Oversize photographs of Vera Dreiser and her family. Arrangement Arranged chronologically. 449 14193-14202 C. Art Work. Description These boxes contain prints, drawings, and caricatures, some of which are originals, some copies. Original prints by Wharton Esherick, some inscribed to Dreiser, are housed here, as is the original of the bookplate made for Dreiser by Franklin Booth. The container list outlines specific holdings. Box Folder Adams, Wayman: reproductions of second painting of TD, 1927. 450 14203 Amick, Robert: sketches of TD. 450 14204 1909 Aug. 17.
450 14205 Davis, Hubert: "The Essence of Irony" and "The Griffith Family in Kansas City". 450 14206 Dürer, Albrecht: "The Arraignment of Jesus before Pilot" and "The Resurrection". 450 14207 Esherick, Wharton, 1925-1933. Contents * "Map showing good old Barnegat Bay and the happy ports for great sloop `Kitnkat'" (annotated by Esherick re TD's visit 13 June 1925) * "Free" (1925) * "The Lee Rail" (1925) * "Of a Great City" (1925)(multiple copies, including ones ins cribed to TD, Louise Campbell, and Burton Rascoe and metal plate used in printing) * "Chick's Ship" (1929) * illustration for  Tristram and Iseult (1930) * August (1933) * "The Bid" (1933) (lithographs) * "As I Watched the Ploughman Ploughing" by Walt Whitman (1928) (woodcuts by Esherick) 451 14208 King, Alexander: caricature of TD and Sherwood Anderson, circa 1925. Description Inscribed "Theodore Dreiser and Sherwood Anderson peeping at Misery." 452 14209 Kelly, James E. and John W. Evans: drawings of Thomas Edison and Oscar Wilde by Kelly, from engravings made by Evans; letter from Evans to TD re Wilde drawing. 453 14210 Kolski, Gan, 1928-1929, undated. Contents * "Sunrise at Provincetown" (1928) * "Steam under Bridge" (1929) * "After the Storm" (undated)(lithographs) 453 14211 Kubitz, Estelle: cartoon drawing of TD and Estelle Kubitz. 453 14212 Lubbers, Adrian: drawings, 1929. Contents * "Brooklyn Bridge" (1929) * "South Ferry" (1929) * "Times Square from Times Building" (1929) 453 14213 Miller, D.: Marguerite Tjader Harris. 453 14214 Reich, A.: prints,, 1912, undated. Contents * "Amberg, Martinskirche u. Schiffersteg" (1912) * "Auf der Landstrasse" (n.d.) * "Aus dem Oberpfälzer Jura" (1912) * "Aus Neustadt a./Waldnaab" (1912) * "Die Ruine" (1912) * "Schloss Prunn im Altmühltal" (1912) 453 14215 Rivera, Diego: details of murals, 1927. 453 14216 Rivera, Diego: mural and detail from mural, 1933. 453 14217 Siporin: illustration for "Kismet". 453 14218 Stengel, Hans: caricature of TD with women, 1923. 453 14219 Duddy, Lynn: Vera Dreiser. 453 14220 ?, Elaine: Vera Dreiser. 453 14221 Drawing of a house by?, 1916 Spring . 453 14222 D.  Promotional material. Description & Arrangement Dreiser saved advertisements, programs, and other types of promotional material for his books, political causes, activities of his friends, and items that he wanted to buy. The promotional material for Dreiser's books has been filed alphabetically by publisher; other promotional material has been ordered chronologically. Box Folder Promotional material for TD's books by B. W. Dodge & Co., Boni & Liveright (later Horace Liveright), and Cin (Czechoslovakian publisher). 454 14223 Promotional material for TD's books by Constable & Co. 454 14224 Promotional material for TD's books by Doubleday & Co., Ediciones Hoy (Spanish publisher), Golden Book News, G. P. Putnam's, Heron Press. 454 14225 Promotional material for TD's books by John Lane Co. 454 14226 Promotional material for TD's books by Limited Editions Club, Longman's Modern Age, Népszava Könyvkereskedés (Hungarian publisher). 454 14227 Promotional material for TD's books by Paul Zsolnay Verlag (German publisher), Samuel French, World Publishing Co. 454 14228 Promotional material for books of interest to or about TD,, 1911-1949. 454 14229 Promotional material for various products and causes of interest to TD. 454 14230 Promotional material: programs, 1911-1919. 454 14231 Promotional material: programs, 1920-1935. 454 14232 Promotional material: programs, 1936-1947 and undated. 454 14233 E.  Postcards. 
Description & Arrangement Dreiser collected postcards during his travels in the United States, Cuba, Europe, Turkey, and Russia. Most of them are unmarked, but some have annotations on the back by either Theodore or Helen Dreiser. Postcards of the United States are filed by s tate, and the others are filed by country of origin, with one exception. Box 455 contains the postcards that Dreiser collected on his round trip from New York to Indiana, the experiences from which were the basis of his book A Hoosier Holiday. He stored these postcards together as a group, as they remain in this collection. Box Folder Postcards from "Hoosier Holiday" trip, Arizona, New Mexico, Texas, Georgia, Florida. 455 14234 Postcards from California, Oregon, Washington, Yellowstone National Park, Montana, New Jersey, Pennsylvania, Kentucky, Maryland, Virginia, West Virginia, Illinois, Minnesota, North Dakota, New York, miscellaneous United States, France, England. 456 14235 Postcards from Austria, Czechoslovakia, Denmark, Scandinavia, Germany, Monaco, Monte Carlo, Russia, Switzerland. 457 14236 Postcards from Belgium, Cuba, Italy, The Netherlands, Turkey. 458 14237 F.  Miscellaneous. Description Various small personal items belonging to Theodore and Helen Dreiser are stored here, including their passports, flowers from Dreiser's memorial service, and the newspaper clipping announcing Helen's first marriage to Frank Richardson. The memorabilia are arranged chronologically, with Theodore's first, followed by Helen's. In addition, there is a 33-1/3 LP recording of a 1939 interview with Dreiser. Box Folder TD memorabilia: TD's passport, 1926 May 24. 459 14238 TD memorabilia: souvenirs from trip to Russia, 1927-1928. 459 14239 TD memorabilia: framed photograph of Charles Fort. 459 14240 TD memorabilia: desk diary sent to TD by John H. Mackey, 1937. 459 14241 TD memorabilia: miscellaneous papers. 459 14242 TD memorabilia: miscellaneous cards, including TD-Kirah Markham "at home" card. 459 14243 TD memorabilia: TD signatures. 459 14244 Helen Dreiser memorabilia: newspaper account of double wedding of Hazel Patges (Helen's sister) to David Pettie and of Helen Patges to Francis Richardson; memorial booklet from funeral of Hazel Pettie, 1916?, 1917. 459 14245 Helen Dreiser memorabilia: proposal to paint Ida Patges's (Helen's mother's) house. 459 14246 Helen Dreiser memorabilia: promotional literature ("Theodore Dreiser: America's Foremost Novelist") given to Helen by TD on the day they met, 1919 Sept. 459 14247 Helen Dreiser memorabilia: Helen Richardson's passport, 1926 June 2. 459 14248 Helen Dreiser memorabilia: bird feather from "Hopsie," a one-legged bird. 459 14249 Helen Dreiser memorabilia: roses from the scarf covering TD's casket, roses sent to Helen on another occasion, 1946 Jan. 3. 459 14250 Helen Dreiser memorabilia: Helen's Metropolitan Museum of Art (New York) lifetime membership certificate and card. 459 14251 Helen Dreiser memorabilia: program and 3 tickets for premiàre of A Place in the Sun, 1951 Aug. 14. 459 14252 Helen Dreiser memorabilia: cards re flowers sent to memorial service for Helen Dreiser, 1955 September. 459 14253 Interview with TD, 1939 Feb. 13. 460 14254 Return to Top » XVIII.  Financial records. A.  Authors Royalties/ Authors Holding Company. Description This box contains statements of expenses for this company from October 1926 through October 1932. There is also an account book covering the period June 1926-December 1931. 
Box Folder Authors Royalties/Authors Holding Company statements, 1926 Oct. 27 - 1932 Oct. . 461 14255-14261 Authors Royalties Co., Inc.: account book, 1926 June - 1931 Dec. 461 14262 B.  Book sales statistics and reprint rights. Description Housed here are sales statistics for all of Dreiser's books from 1900 to 1932 and sales statistics for his books in the United States from 1900 to 1933. Also filed here are miscellaneous notes about reprint rights. Box Folder Sales statistics on TD's books, 1900-1932. 462 14263-14265 Sales statistics on TD's books in the United States, 1900 - 1933 June. Note See Box 484, folder 14693, for sales statistics for 3/1/34. 462 14266 Reprint rights for TD's writings, 1934 and undated. 462 14267 C.  Receipts. Description & Arrangement Bills sent to and receipts received by Dreiser are filed alphabetically in this box. Box Folder Receipts. 463 14268-14321 D.  Taxes. Description This box contains various state and federal tax forms for Theodore Dreiser for 1919 through 1928, as well as 1931, and for Helen Dreiser for 1945 through 1948. Bills, receipts, and lists of expenses and income accompany the forms for 1945 through 1948. Box Folder TD: U.S. individual income tax returns, 1919-1928, 1931. 464 14322 TD: New York State income tax returns, 1924-1928. 464 14323 Authors Royalties Co., Inc.: corporation income tax returns, 1926-1928. 464 14324 Brief for Appellant: People of the State of New York, on relation of Elmer L. Rice v. Mark Graves et al. as Tax Commissioners (New York), Court of Appeals, 1933. 464 14325 TD and Helen Dreiser: U.S. and California individual income tax returns for ; U.S. estimated tax return for; statement of income and expenses, 1945-1946. 464 14326 Receipts, bills, and royalty statements used in preparing tax returns, 1945. 464 14327 TD (estate) and Helen Dreiser: U.S. and California individual income tax returns for ; U.S. partnership return; California fiduciary return; estimated income tax forms; statements of income, 1946. 464 14328 Receipts, bills, and royalty statements used in preparing tax returns, 1946. 464 14329-14330 TD (estate) and Helen Dreiser: U.S. and California individual, partnership, and fiduciary income tax returns; statements of income, 1947. 464 14331 Receipts, bills, and royalty statements used in preparing tax returns, 1947. 464 14332-14333 TD (estate) and Helen Dreiser: income and expenses, 1948. 464 14334 E.  Canceled checks. Description The checks in this box were written by Dreiser during 1922-1923 and 1925-1926. Box Folder TD canceled checks, 1922. 465 14335 TD canceled checks, 1923. 465 14336-14337 TD canceled checks, 1925. 465 14338 TD canceled checks, 1926. 465 14339 Return to Top » XIX.  Clippings. Description & Arrangement Dreiser and Helen saved clippings themselves but also subscribed to clipping services and received clippings from friends and associates. The largest group of these in the Dreiser Papers has been organized into categories and microfilmed. The clippings in the four boxes in this series duplicate some of those in the larger microfilmed collection. Two of the boxes contain miscellaneous clippings from 1900 to 1984 that mention some aspect of Dreiser's life or work. Another box contains clippings of reviews of Dreiser's books or books about Dreiser, arranged chronologically. Included in this box are reviews of Borden Deal's 1965 book The Tobacco Men, which was based on Dreiser's notes for his screenplay, "Revolt or Tobac co." 
The final box contains clippings of reviews of motion pictures based on Dreiser's works: The Prince Who Was a Thief, A Place in the Sun, and Carrie. Box Folder Clippings about TD, 1900-1959. 466 14340-14347 Clippings about TD, 1960-1984. 467 14348-14350 Clippings: reviews of Sister Carrie, Jennie Gerhardt, The Financier, A Traveler at Forty, The Titan, The "Genius," The Hand of the Potter. 468-469 14351 Clippings: reviews of The Color of a Great City, Newspaper Days (A Book about Myself), An American Tragedy, Moods. 468-469 14352 Clippings: reviews of A Gallery of Women, Tragic America, Dawn, America Is Worth Saving, Best Short Stories of Theodore Dreiser. 468-469 14353 Clippings: reviews of The Bulwark. 468-469 14354 Clippings: reviews of Theodore Dreiser: Apostle of Nature, by Robert H. Elias, and The Letters of Theodore Dreiser, edited by Robert H. Elias. 468-469 14355 Clippings: reviews of Theodore Dreiser by F. O. Matthiessen, and My Life with Dreiser by Helen Dreiser. 468-469 14356 Clippings: reviews of Dreiser by W. A. Swanberg, and Letters to Louise by Louise Campbell. 468-469 14357 Reviews of The Tobacco Men by Borden Deal, which was based on TD's notes for his screenplay "Revolt or Tobacco", 1965. 468-469 14358 Reviews or articles on The Prince Who Was a Thief, 1951. 468-469 14359 Reviews or articles on Carrie, 1952. 468-469 14360 Reviews or articles on A Place in the Sun, 1951. 468-469 14361-14370 Return to Top » XX. Works by others. Series Description Beginning during his career as a magazine editor and continuing throughout his lifetime, Dreiser was a willing and helpful critic to writers who asked his advice about their work. This series consists of (1) manuscripts, typescripts, printer's proofs, and printed versions of writings that these aspiring writers, as well as Dreiser's friends and associates, sent him during his lifetime and (2) writings about Dreiser that the Dreiser Collection has received since his papers were deposited here. These writings are filed alphabetically, and researchers should check Appendix G for specific authors and titles. Box Folder A - B. 470 14371-14405 C - D. 471 14406-14444 E - Go. 472 14445-14475 Gr - Har. 473 14476-14495 Harvey Dudley, Dorothy: galleys and book jacket for Forgotten Frontiers: Dreiser and the Land of the Free, 1932. 474 14496 Haz - Hu. 475 14497-14511 I - McD. 476 14512-14545 Mar - Mo. 477 14546-14574 N - P. 478 14575-14591 Powys, John Cowper: bound page proofs for Wolf Solent, 1929. 479 14592 R - S. 480 14593-14627 T - Z and untitled. 481 14628-14666 Cassette tape of lecture on TD by Fred C. Harrison, and letter re lecture from Harrison to Myrtle Butcher, 1974 Nov. 19. 482 14667 Videotape of "Murder on Big Moose?" and note from Trina Carman, 1988 Sept. 28. 482 14668 Return to Top » XXI. Oversize. Description & Arrangement The first box in this series contains oversize periodical publications, arranged chronologically. Some were owned by Dreiser; some contain works by him. The second box includes oversize items from several different series in the Theodore Dreiser Papers and is arranged in series order. Researchers should consult the Container List for specific holdings. Box Folder Russian magazine on the building of the Moscow metro, 1935. 483 14669 USSR in Construction, nos. 9-12, 1937. 483 14670 L'Illustration, 1937 Dec. 4. 483 14671 "The Tithe of the Lord": printed version in Esquire, 1938 July. 483 14672 "The Story of Harry Bridges": printed version in Friday, 1940 Oct. 4.
483 14673 Brandt & Brandt correspondence, 1930? Dec. 484 14674 Butcher, Myrtle Patges correspondence: Christmas card from TD, Helen Richardson, and Ida Patges, 1931. 484 14675 Gredler correspondence: Christmas card to TD, undated. 484 14676 Heinl, Robert D. correspondence, : galleys for "Bill," by Paul Dresser, 1934. 484 14677 Masters, Edgar Lee correspondence: galleys for "Masters—on the Mason County Hills: Butterfly Hid in the Room". 484 14678 Paul Zsolnay correspondence: foreign accounts, 1930 Dec. 484 14679 Map of automobile routes used by TD on "Hoosier Holiday" trip to Indiana, 1915. 484 14680 Issues of Ottobre containing excerpts from  Tragic America, 1933. 484 14681 "Concerning Dives and Lazarus": broadside, 1940. 484 14682 "Editor and Publisher": broadside, 1940. 484 14683 "Humanitarianism in the Scottsboro Case": printed version in Contempo, 1931. 484 14684 "The Pushcart Man": printed version in New York Call Magazine, 1919 March 30. 484 14685 "The Standard Oil Works at Bayonne": printed version in New York Call Magazine, 1919 March 16. 484 14686 "Toilers of the Tenements": printed version in New York Call Magazine, 1919 Aug. 24. 484 14687 "Women Can Take It": reprint of "Women Are the Realists" in New York Journal-American, Saturday Home Magazine, 1946. 484 14688 "Butcher Rogaum's Door" : printed version in Reedy's Mirror, 1901 Dec. 12. 484 14689 "Solution" : printed version in Women's Home Companion, 1933 Nov. 484 14690 Map of TD's property, Mt. Kisco, N.Y. 484 14691 Souvenir map of Big Moose Lake, N.Y. 484 14692 Randolph Bourne Award, presented to TD by American Writers Congress, 1941 June 6. 484 14693 Sales statistics on TD's books, 1934 March 1. 484 14694 Lyon, Harris Merton: "The Chorus Girl". 484 14695 Return to Top » XXII.  Clippings (originals for microfilm). Description This series comprises clippings that Theodore and Helen Dreiser collected, as well as those sent to them by their friends and by various clipping services that the Dreisers used. These clippings are very fragile; some folders of clippings have disappeared, and many clippings are unreadable in their current condition. The entire clipping collection was microfilmed, and the microfilm is available to readers. Box Folder Biographical: miscellaneous personal items. 485 14696-14725 Biographical: newspaper photographs; caricatures; TD trip to Europe, 1911-1912; TD trip to Europe, 1926-1927; TD trip to Russia, 1927-1928; TD tour of U.S., 1930; Coal mine strikes, 1931-1932, . 486 14726-14768 Biographical: death notices, 1945; Helen Dreiser activities, 1945-1950; early periodical stories; interviews with TD. 487 14769-14794 Biographical: Miscellaneous opinions; forewords, introductions; poems. Literary criticism: in newspapers and periodicals; reviews and notices of books on TD by Burton Rascoe, Vrest Orton, Dorothy Dudley, Robert Elias, F. O. Matthiessen, Helen Dreiser. 488 14795-14832 Literary criticism: general literary comment. 489 14833-14866 Literary criticism: general literary comment (cont.). 490 14867-14899 Literary criticism: poems; Sister Carrie; "The Mighty Burke;"  Jennie Gerhardt; "The Men in the Dark". 491 14900-14939 Literary criticism: The Financier; A Traveler at Forty; "An Episode;" "The First Voyage Over;" "An Uncommercial Traveler in London;"  The Girl in the Coffin; "Paris;" "Impressions of the Old World". 492 14940-14976 Literary criticism: The Titan; The "Genius". 
493 14977-15023 Literary criticism: The "Genius" (cont.);  The Blue Sphere; In the Dark; Laughing Gas; Plays of the Natural and the Supernatural; The Rag Pickers; "Epic of Desire;"  The Light in the Window; "The Lost Phoebe;"  The Bulwark. 494 15024-15070 Literary criticism: The Bulwark (cont.);  A Hoosier Holiday. 495 15071-15111 Literary criticism: "Life, Art and America;" "Married;" "Change;" Free and Other Stories; "The Right to Kill;" "The Country Doctor;"  The Hand of the Potter; Twelve Men; "The Pushcar t Man;" "Love;" "Ashtoreth;"  Hey Rub-a-Dub-Dub; "More Democracy or Less;"  A Book about Myself; "Indiana;"  The Color of a Great City. 496 15112-15143 Literary criticism: An American Tragedy. 497 15144-15177 Literary criticism: An American Tragedy (cont.);  Chains; "Mildred My Mildred;"  Moods; A Gallery of Women; Dreiser Looks at Russia; "This Madness;" "Epitaph;"  Dawn; Newspaper Days; Tragic America. 498 15178-15219 Literary criticism: The Stoic; "Winterton;"  Moods; the Edwards case;  The Living Thoughts of Thoreau; America Is Worth Saving; World Publishing Co. reprints;  Best Short Stories of Theodore Dreiser; "St. Columba and the River;" "The Prince of Thieves." Items of Special Interest to TD: source material. 499 15220-15268 Items of Special Interest to TD: source material (cont.). 500 15269-15298 Items of Special Interest to TD: "On the Banks of the Wabash;" John Cowper Powys lecture on TD; H. L. Mencken; Edgar Lee Masters; Windy McPherson's Son, by Sherwood Anderson;  Contemporary Portraits by Frank Harris;  American Literature of the Present by Herman G. Scheffauer;  My Gal Sal. Foreign Language and Influence: foreign influence; British. 501 15299-15341 Foreign Language and Influence: British (cont.); Czechoslovakian; Danish; Dutch; French; German. 502 15342-15401 Foreign Language and Influence: Philippine; Italian; Mexican; Russian; Spanish; Swedish; Yiddish. Sheri Scott folder. 503 15402-15440 Return to Top » Appendices. Appendix A: Location List of Essays by Theodore Dreiser Title (Folders) "An Address to Caliban" (11902-11905) "Ah! Robert Taylor" (11906) "All Life Is Sacred. Oh Yes" (11907) "America" (11908-11909) "America: A Chain of Phylacteries" (11910) "America and the Artist" (11911) "America—and War" (11912) "American Democracy Against Fascism" (11913) "American Restlessness" (11914) "American Tragedies" (11915) "American Tragedies" [book review] (11916) "America's Foremost Author Protests Against Suppression of Great Books and Art by Self-Constituted Moral Censors" (11917) "America's Only Genius—Boosting" (11918) "And the Greatest of These" (11919) "Appearance and Reality" (11920) "Arbeitslose in New York" (11921) "Are the Masses Worth Saving" (11922) "Armenia Today" (11923) "The Artistic Temperament" (11923) "As If in Old Toledo" (11924) "Ashtoreth" (see Box 177, folders 8240-8241) "Baa! Baa! Black Sheep" series: "Johnny" (11925-11927) "Baa! Baa! Black Sheep" series: "Otie" (11928-11929) "Baa! Baa! Black Sheep" series: "Bill Brown" [by Hazel Godwin] (11930-11931) "Baa! Baa! Black Sheep" series: "Ethelda" (11932-11933) "Baa! Baa! Black Sheep" series: "Clarence" (11934-11935) "Baa! Baa! Black Sheep" series: "Harrison Barr" (11936-11937) "Baa! Baa! Black Sheep" series: "Arthur Baker" [not used] (11938) "Baa! Baa! Black Sheep" series: "Artie and Jean" [not used] (11939) "Baa! Baa! Black Sheep" series: "Christine Marsten" [not used] (11940) "Baa! Baa! Black Sheep" series: "George" [not used] (11941) "Baa! Baa! 
Black Sheep" series: "Jimmy and the Pituitary Gland" [by Marcia Lee Masters?; not used] (11942) "Baa! Baa! Black Sheep" series: "Louisa" [not used] (11943) "Baa! Baa! Black Sheep" series: "The Meanest Man" [by Marcia Lee Masters; not used] (11944) "Baa! Baa! Black Sheep" series: "Orville Signs the Checks" [not used] (11945-11946) "Baa! Baa! Black Sheep" series: "Our Way of Life" [not used] (11947) "Baa! Baa! Black Sheep" series: "This Is Ida" [not used] (11948) "Baa! Baa! Black Sheep" series: "Uncle Jeffry" [not used] (11949) "The Balance for Right" (11950) "The Beauty of the Tree" (11951) "Berlin" (11952) "The Best Motion Picture Interview Ever Written" (see "Mack Sennett") [Comment on] Books in Brief (11953) "The Bread Line" (see Box 189, folders 8570-8571; Box 190, folder 8618; Box 191, folder 8654) "Brown Fell Dead" (11954) "California Committee Against Initiative Proposition No. 1" (11955) "A Call for a True Relationship" (11956) "Challenge to the Creative Man" (11957-11958) "Change" (see Box 177, folders 8222-8224) "Chaos" (11959-11960) "Charles Fort" (11961) "Chauncey M. Depew" (11962-11967) "A Certain Oil Refinery" (see "The Standard Oil Works at Bayonne") [Chicago] (11968) "Chile as a Prey to American Imperialism" (11969) [China] (11970) "Christmas in the Tenements" (see Box 189, folders 8596-8597; Box 190, folder 8636; Box 191, folder 8675) [The Church and Wealth in America] (11971) "Citizens of Moscow" (11972) [see also Box 223, folders 9354-9355, 9379] "Civilization Where? What?" (11973-11974) "The Cliff Dwellers" (11975-11976) "Cold Spring Harbor" (11977) "The Color of To-day" (11978) [see also "Sonntag—A Record," Box 175, folders 8208-8209] "Come All Ye Who Are Weary and Heavy Laden" (11979) "Comment on Experimental Cinema" (11980) "Commercial Exploitation in America" (11981-11982) [Communist Party] (11983) "Concerning Dives and Lazarus" (see Box 484, folder 14682) "Concerning Our Helping England Again" (11984) "Concerning the Elemental" (11985) "Concerning the Joy of Living and Doing" (11986) "Concerning Religious Charities" (11987) "A Confession of Faith" (11988) "The Control of Sex" (11989) "A Conversation" [between TD and John Dos Passos] (11990) [Comment on] "Co-op," by Upton Sinclair (11991) "The Country Doctor" (11992) [see also Box 175, folders 8195-8205] "The Cradle of Tears" (see Box 189, folder 8592; Box 190, folder 8633; Box 191, folder 8652) "Credo" (11993) [Review of] Crime and Punishment, by F. Dostoievsky (11994) "Crime and Punishment Here" (11995) "A Cripple Whose Energy Gives Inspiration" (see "The Noank Boy") "The Crowding of the Cities" (11996-11997) "Curious Shifts of the Poor" (see "The Old Captain") "Daily News Ears Batted Down by Dreiser" (11998) "The Dawn Is in the East" (11999-12000) "The Day of Surfeit" (12001) "The Democracy of the Funny Bone" (12002) "The Descent of the Horse" (12003) "A Doer of the Word" (12004) "Down Hill and Up: Part I—Down" (12005-12008) "Down Hill and Up: Part II—Up" (12009-12012) "The Dream" (see Box 177, folder 8226) "Dreiser Defends Norris on Power" (see "Reply to Mr. Paul S. 
Clapp") "Dreiser Describes Spain's Tense Air" (12013) "Dreiser Discusses Dewey Plan" (12014) "Dreiser Finds Morale of Barcelonians High" (12015) "Dreiser on Scottsboro" (see "Public Opinion and the Negro") "Dreiser Sees No Progress" (12016) "Earl Browder—July 9, 1931" (12017) "Earl Browder—Terre Haute" (12018) [The Early Adventures of "Sister Carrie"] (12019) "Editor and Publisher" (see Box 484, folder 14683) "Editorial Conference" (12020) "Edmund Clarence Stedman at Home" (12021) "Education and Civilization" (12022) "Electricity in the Household" (12023) [Emergency Unemployment Relief Committee] (12024) "The Epic Sinclair" (12025-12028) "Epic Technologists Must Plan" (12029) "The Factory (12030-12031) "Fall River" (12032) "Fifty Million Frenchmen" (12033) "Flies and Locusts" (12034) "The Flight of Pigeons" (see Box 189, folder 8559; Box 190, folder 8609; Box 191, folder 8647) "Fools of Love" (12035) "The Fools of Love and the Fools of Success" (12036-12037) "'Free the Class War Prisoners in Boss Jails'—Dreiser" (12038) "Freedom for the Honest Writer" (12039-12040) "Fruit Growing in America" (12041-12042) [review of] Gandbi: The Magic Man (12043) "A Garbled Report" (12043) [The Genesis of the Peach Crop] (12044) [George Ade] (12045) [German temperament] (12046) "The god Forgotten" (12047-12048) "Good and Evil" (12049) "The Gordian Knot" (12050-12054) "The Great American Novel" (12055) [Comment on] The Great Hunger, by Johan Bojer (12056) "Great Problems of Organization. III. The Chicago Packing Industry" (12057) "Greenwich Village" (12058) "Greetings to the Canadian Workers in Their Struggle for Freedom" (12059) "The Harp" (12060) "The Haunts of Bayard Taylor" (12061) "Helen" (12062) "Henry L. Mencken and Myself" (12063) "Hey, Rub-a-Dub-Dub" (12064) [see also Box 177, folders 8219-8221] "Heywood Broun" (12065) "The Hidden God" (12066) "Hitler, Fascism and the Jews" (12067) [Hitler's invasion of Russia, 1941] (12068) "Hollywood: Its Morals and Manners" [parts 1-4] (12069-12073) "Hollywood Now" (12074-12077) "The Holy Roman Church" (12078) "Hoover and the Red Cross: Russia 1918-1922" (12079) "How Russia Handles the Sex Question" (12080) "How the Great Corporations Rule the United States" (12081) "Humanitarianism in the Scottsboro Case" (see Box 484, folder 14684) "Hungary and the Hungarians" (12082) "I Am Grateful to Soviet Russia" (12083) "I Find the Real American Tragedy" (12084-12102) [I Find the Real American Tragedy] [testimony of Robert Allan Edwards on cross-examination from 1934 trial] (12103-12106) "I Hope the War Will Blow Our Minds Clear of the Miasma of Puritanism" (see "What the War Should Do for American Literature") "I Remember! I Remember!" series: contributions by TD, Louise Campbell, Marcia Masters, Mary Donovan, Dagmar Deering, Lulla Adler, and Yvette Szekely (12107-12113) "Ida Hauchawout" (12114-12115) [see also Box 225, folders 9394-9395; Box 229, folders 9467-9468] "If Man Is Free, So Is All Matter" (12116) "Illinois" (12117-12118) "In Mizzouri" (12119) "Incentive—a Problem Essay" (12120) "Indiana" (12121-12123) "Intellectual Unemployment" (12124) "Interdependence" (12125) "Interview between Theodore Dreiser and Harry Bridges" (12126-12128) [see also Box 483, folder 14673] "An Interview with Ty Cobb" (12129-12131) "The Irish Section Foreman Who Taught Me How to Live" (12132) "Is American Freedom of the Press to End?" (12133) "Is Fascism Coming to America?" (12134) "Is There a Future for American Letters?" 
(12135-12136) "It Is Official Lawlessness in America That Makes Government Regulation or Aid in Any Quarter Wholly Futile" (12137) "It Is Parallels That Are Deadly" (12138-12143) [see also "The Coward" in TD Writings: Short Stories] "J. Q. A. Ward" (12144-12145) "John Reed Club Answer" (12146) "Judge Jones, the Harlan Miners and Myself" (12146) [Comment on] Judgment Day, by Elmer Rice (12147) "Just How Our Corporations Work and Rule" (12148) "Keep Moving [or Starve]" (12149-12150) [Kentucky coal miners and situation in Harlan County] (12151) "Kismet" (12152-12153) "The Laziest Man. A Case of Real Idleness" (12154) "A Lesson from the Aquarium" (12155-12156) "Lessons I Learned from an Old Man" (12157) "Let the Dead Bury the Dead" (12157) "Let Us Look Honestly at the Cause of Sex Crimes" (12158) "A Letter about Stephen Crane" (12159) "A Letter from Rex Beach & the Authors' League of America to T. Dreiser and an Answer" (12160) [Letter to editor re TD's reply of 25 Sept. 1942 to Writers War Board, 6 Oct. 1942] (12161) "Letter to Governor Young" [re Tom Mooney] (12162) [Letter to New York World Telegram in reply to TD re American Federation of Labor] (12163) [Letter to the president and congress of the United States States re the Communist party] (12164) "Letters and Opinions on the Land of the Soviets" (12165) "Libel à la Mode" (12166) "'Liberty': What Price?" (12167) "Life After Death" (12167) "Life, Art and America" (see Box 177, folder 8251) "Life at Sixty-seven" (12168-12169) "Literary Immorality" (12170) "Literature and Journalism" (12171) "The Log of an Ocean Pilot" (12172) [see also Box 189, folder 8556; Box 190, folder 8604; Box 191, folder 8642] "The Loneliness of the City" (12173) "The Love Affairs of Little Italy" (see Box 189, folder 8595; Box 190, folder 8635; Box 191, folder 8662) "Loyalists Tell Dreiser They Will Not Surrender" (12174) "Mack Sennett" (12175-12178) "The Making of Small Arms" (12179) "The Making of Stained-Glass Windows" (12180) "Man and Romance" (12181) "The Man on the Bench" (see Box 189, f. 8585-8586; Box 190, f. 8628; Box 191, f. 8664) "The Man on the Sidewalk" (12182-12183) "The Man Who Bakes Your Bread" (12184-12185) "The Man Who Wanted to Be a Poet" (12186) "Manhattan Beach" (12187) "The Mansions of the Father" (12188-12189) [Marden, Orison Swett, and Success magazine] (12190) "Mark the Double Twain" (12191-12193) "Mark Twain—Three Contacts" (12194-12201) [Massie crime in Hawaii] (12202) "Mathewson" (12203-12205) "The Matter of Labor's Share" (12206) "Meaning of the USSR in the World Today" (12207-12208) "The Men in the Dark" (12209) [see also Box 189, folders 8587-8588; Box 190, folder 8629; Box 191, folder 8665] "The Men in the Snow" (see Box 189, folder 8589; Box 190, folder 8631; Box 191, folder 8667) "The Men in the Storm" (see Box 190, folder 8630; Box 191, folder 8666) "The Mighty Burke" (12210) "Miss Fielding" (12211) "A Modern Advance in the Novel" (12212) "Mooney and America" (12213) [Essay on Tom Mooney] (12214) "More Democracy or Less? 
An Inquiry" (see Box 177, folders 8245-8247) "The Most Successful Ballplayer of Them All" (see "An Interview with Ty Cobb") "My City" (12215-12216) [see also Box 235] "My Creator" (12217-12218) "My Favorite Fiction Character" (12219) "Myself and the Movies" (12220-12222 "The Myth of Individuality" (12223) "The New and the Old" (12224) "The New Day" (12225) "The New Humanism" (12226) [ New Masses ] (12227) [New York] (12228) "New York" (12229) "Nigger Jeff" (12230) "Nikolai Lenin" (12231) "No Advice to Young Writers" (12232) "No Cars Running" (12233) [Review of] No for an Answer, by Marc Blitzstein (12234) "The Noank Boy" (12235) "The Noise of the Strenuous" (12236) [Review of] Of Human Bondage (12237) "The Old Captain" (12238) "An Old Spanish Custom" (12239) "Olive Brand" (12240) [see also Box 229, folders 9460-9464] "On Doctors" and "On Physicians" (12241) "On—Myself" (12242) "One Day" (12243) [Review of] One Man, by Robert Steele (12244) "Our Amazing Illusioned Press" (see "What Is the Matter with the American Newspaper") "Our American Press and Our Political Prisoners" (12245) "Our Creator" (12246) "Our Democracy: Will It Endure?" (see Box 254, folders 9903, 9923) "Our Greatest Writer Tells What's Wrong with Our Newspapers" (12247) "Our Red Slayer" (see Box 189, folders 8572-8573; Box 190, folder 8619; Box 191, folder 8656) "Out of My Newspaper Days. I. Chicago" (12248) [see also Box 184, folder 8467] "Out of My Newspaper Days. II. St. Louis" (12249) [see also Box 184, folders 8491-8492] "Out of My Newspaper Days. III. 'Red' Galvin" (12250) [see also Box 185, folders 8512-8513] "Out of My Newspaper Days. IV. The Bandit" (12251) [see also Box 185, folder 8514] "Out of My Newspaper Days. V. I Quit the Game" (12252) [see also Box 185, folders 8544-8546] "An Overcrowded Entryway" (see "Hollywood: Its Morals and Manners," Box 342, folder 12069) "Overland [Journey]" (12253-12255) "Paris—1926" (12256) "Policy of National Committee for Defense of Political Prisoners" (12257) "Portrait of a Woman" (12258) [see also "Ernestine" in Box 228, folders 9428-9430] "Portrait of an Artist" (12259) "The Position of Labor" (12260) [Present revolt of the arts in America] (12261) "The Problem of Distribution" (12262) "The Professional Intellectual and His Present Place" (12263) "The Profit-makers Are Thieves" (12264) "Prosperity for Only One Percent of the People" (12265) "Public Opinion and the Negro" (12266) "The Pushcart Man" (see Box 484, folder 14685) [see also Box 189, folders 8568-8569; Box 190, folder 8616; Box 191, folder 8653] "Pushkin" (12267) "Rally Round the Flag" (12268-12269) "The Real Sins of Hollywood" (12270) "The Realistic Parade" (12271) "Rebellious Women and Marriage" (12272-12273) "The Red Cross Brings Poverty and Misery" (12274) "Regina C—" (12275) [see also Box 225, folder 9390; Box 228, folders 9441-9442 "Reina." See also Box 228, folders 9439-9440 (12276) "Rella." See also Box 228, folders 9433-9438 (12277) "Reply to Mr. Paul S. 
Clapp" (12278) "The Right to Revolution" (12279) "The Rivers of the Nameless Dead" (see Box 189, folders 8598-8599; Box 190, folder 8637; Box 191, folder 8676) "Robison Cars Running" (12280) "The Romance of Power" (12281-12285) "Running the Railroads" (12286-12287) [see also "A Splash of Cold Water on the Railroads"] "Rural America in Wartime" (12288-12289) "Russia: The Great Experiment" (see Box 223, folder 9366) "The Russian Advance" (12290) "Russian Vignettes" (12291) [see also Box 223, folders 9359, 9380] "The Saddest Story" [review of The Good Soldier, by Ford Madox Hueffner (Ford)] (12292) "Samuel Butler" (12292) "Sarah Schanab" (12293) "Scenes in a Cartridge Factory" (12294) "The Scope of Fiction" (12295) "A Sea Marsh" (12296) "The Seventh Commandment" (12297-12299) "Sex Crimes and Morals" (12300-12302) "Sherwood Anderson" (12303-12304) "Should Capitalistic United States Treat Latin America Imperialistically?" (12305-12306) "Should Communism Be Outlawed in America" (12307) "Should the Government Compete in Business with Private Individuals?" (12308) "Should Hungary Have Been Crunched Under Heel?" (12309-12311) "The Silent Worker" (12312) "Six o'Clock" (12313) [See also Box 189, folder 8561; Box 190, folder 8611; Box 191, folder 8649] "The Six Worst Pictures of the Year" (12314) [Sombre Annals], review of Undertow, by Henry K. Marks (12315) [Soviet Union] (12316) "Speaking of Censorship" (12317) "The Spider and the Fly" (12318) "A Splash of Cold Water on the Railroads" (12319) [see also "Running the Railroads"] "Stamp Out Want" (12320) "A Stand in Life" (12320) "The Standard Oil Works at Bayonne" (see Box 484, folder 14686) [see also Box 189, folder 8581; Box 190, folder 8625; (Box 191, folder 8661>] [A Start in Life] (12321-12322) "A Statement by Theodore Dreiser" (see "Comment on Experimental Cinema") [Sterling, George] (12323) "The Story of Harry Bridges" (see "Interview between Theodore Dreiser and Harry Bridges") [see also Box 483, folder 14673] "The Story of the States: No. 
III—Illinois" (see "Illinois") "The Strike To-day" (12324) "Strikers Arrested" (12325) "A Suggestion for the Communist Party" (12326) "The Superstition of My Birth" (12327) "Symposium on the Medical Profession" (see "On Doctors") "Take a Look at Our Railroads" (see "Running the Railroads" and "A Splash of Cold Water on the Railroads") "Temperaments—Artistic and Otherwise" (12328) "Theodore Dreiser and the Free Press" (12329) "Theodore Dreiser Condemns War" (see "War") "Theodore Dreiser's Interview of Anna Fort" (12330-12331) "Theodore Dreiser Picks the Six Worst Pictures of the Year" (see "The Six Worst Pictures ‥" "They Shall Not Die" (12332) "This Florida Scene" (12333) "This Madness" series: "Introduction" (12334-12336) "This Madness" series: "Aglaia" (12337-12357) "This Madness" series: "Elizabeth" (12358-12362) [see also "A Daughter of the Puritans," Box 227; Box 229, folders 9449-9453] "This Madness" series: "Sidonie" (12363-12391) "This Madness" series: "Camilla" [not used] (12392-12418) "This Madness" series: "Aglaia" [printed version] (12419-12420) "This Madness" series: "The Story of Elizabeth" [printed version] (12421-12422) "This Madness" series: "The Book of Sidonie" [printed version] (12423-12424) [Thompson family] (12425) "The Threat of War and the Youth" (12426) [Time capsule, TD's message for] (12427) "The Tippicanoe" (12428) "The Titan in England" (12429) "To Be or Not to Be" (12429) "To Those Whom It Should Concern" (12430) "The Toil of the Laborer: A Trilogy" [see also Box 177, folder 8229-8230] (12431) "Toilers of the Tenement" (see Box 484, folder 14687) [see also Box 189, folders 8562-8563; Box 190, folder 8612; Box 191, folder 8668] [Toilers of the Tenement: untitled article similar to the one with this title] (12432) "The Training of the Senses" (12433) "The Treasure House of Natural History" (12434) "The Trial of the Negro Communists" (12435) [Tribute to Gorky] (12436) [Unemployment and the WPA] (12437) "Unemployment in America" (12438-12439) "Unemployment in New York" (12440-12441) "U[nited].S[tates]. Must Not Be Bled for Imperial Britain"(12442) "Upton Sinclair" (12443) "War" (12444-12445) [War: TD's denunciation of, 1930s] (12446) "War is a Racket" (12447) "War Is a Racket" (12447) "War or No War" (12448) "The Waterfront" (see Box 190, folder 8603; Box 191, folder 8641) "We Hold These Truths...," (12449) "What Are America's Powerful Motion Picture Companies Doing?" (12450) "What Has the Great War Taught Me?" (12451) "What I Believe: Living Philosophies--III" (12452) [see also "Credo"] "What Is Americanism?" (12453) "What Is Democracy?" (12454) [see also Box 252, folder 9838; Box 254, folders 9898, 9918] "What Is the Matter with the American Newspaper" (12455-12458) [see also "Our Greatest Writer Tells What's Wrong with Our Newspapers"] "What My Mother Meant to Me" (12459) "What the War Should Do for American Literature" (12460) "What to Do" (12461) "When the Sails Are Furled: Sailor's Snug Harbor" (12462) [see also Box 190, folder 8620; Box 191, folder 8657] "When Will the Next War Start?" (12463) "Whence the Song" (see Box 189, folder 8574; Box 191, folder 8643) "Where is Labor's Share?" (12464) "Where Is Leadership for the Workingman?" (12465) "White Magic" (12466-12467) "Whom God Hath Joined Together" (12468-12469) "Why Help Russia?" (12470) "Why I Believe the Daily Worker Should Live?" (12471) "Why I Like the Russian People" (12472) "Why I Propose to Vote for the Communist Ticket" (12473) "Why Physical Morality?" (12474) "Will Fascism Come to America?" 
(see "Is Fascism Coming to America?") "Winterton" (12475) "Women Are the Realists" (12476-12477) [see Box 484, folder 14688 for reprint] "Woods Hole and the Marine Biological Laboratory" (12478) "A Word Concerning Birth Control" (12479) "Work of Mrs. Kenyon Cox" (12480) "Work of Vengeance" (12481) "Writers Declare: 'We Have a War to Win'" (12482) "Writers Take Sides" (12483) "The Yield of the Rivers" (12484) "You, the Phantom" (12485-12486) 3 untitled essays (12487-12489) Appendix B: Location List of Short Stories by Theodore Dreiser Title (Folders) "Ambling Sam" (12490) "Art for Art's Sake" (12491) "As the Hart Panteth after the Roe" (12492) "The Bargainers—Mrs. P.A.s Romance" (12493) "Beauty" (12494) "Bleeding Hearts" (12495) "The Building of New York's First Apartment Hotel" (12496) ["The Door of the] Butcher Rogaum" (12497) [see also Box 484, folder 14689] "Chains" [story plus proposed table of contents for book of short stories using this title] (12498) "Choosing" (12499) [see also Newspaper Days : ms, chaps. XXV-XXX] ["The Power of] Convention" (12500-12503) "The Coward" (12504) [see also "It Is Parallels That Are Deadly" in TD Writings: Essays] "The Credo (I Believe)" (12505) "The Crime" (12506) "The Cruise of the Idlewild" (12507-12509) "Cut Out" (12510) "De Lusco" (12511-12512) "The Empty Nest" (12513) "Enchantment" (12514) "The End of the Day" (12515) "The Ex Governor" (12516) "The Failure" (12517) "The Failure—the Other One" (12518) "The Fairy" (12519) "Father" (12520) "The Father" (12521) "The Favor" (12522) "Fine Feathers" (12522) "Fine Feathers" (12523) "Fine Furniture" (12524-12530) "Fulfillment" (12531-12532) "The Fur Merchant" (12533) "The Gentler Sex" (12534) "A Girl" (12535) "Gold Teeth" (12536-12538) "The Gulls" (12539) "The Hand" (12540-12541) "The Happy Marriage" (12542) "The Hedonist" (12543) "The Heir" (12544) "Her Boy" (12545-12558) "Her Problem" (12559) "The Hermit" (12560) "His Sister" (12561) "The Homely Woman" (12562) "How She Won—the Girl Who Woke Up" (12563) "In Memory" (12564) "Irrepressible Edward" (12565) "Is Life Worth Living" (12566) "It Shall Not Be" (12567) "Jealousy" (12568) [see also "The Shadow"] "Khat" (12569) "The King of Shadows" (12570) "Kismet" (12571) "The Last Sip" (12572) "Let the Dead Bury the Dead" (12573) "The Lost Father" (12574) "The Lost Phoebe" (12575-12576) "The Man Who Wanted to Be a Poet" (12577) "Marriage—for One" (12578) "The Mercy of God" (12579-12582) "Mrs. George Sweeny" (12583) "Mr. Grillsnider" (12584-12585) "Mobgallia" (12586) "Nemesis" (12587) ["The Lynching of] Nigger Jeff" (12588-12593) "No Sale" (12594) "The Old Neighborhood" (12595-12600) "Old Rogaum and His Theresa" (see ["The Door of the] Butcher Rogaum") [Olga and her "true" love] (12601) "Oolah, Boolah, Boolah!" 
(12602) "Paternity" (12603) "Phantom Gold" (12604) "The Prince Who Was a Thief" (12605-12606) "Pure Chemistry" (12607) "The Reigning Success" (12608) "Revenge" (12609-12610) "The Reward" (12611) "The Rivals" (12612) "The Road to Happiness" (12613) "The Sailor Who Would Not Sail" (12614-12616) "Sanctuary" (12617) "The Second Choice" (12618-12620) "The Second Motive" (12621) "A Sentimental Journey" (12622) "The Shadow" (12623) [see also "Jealousy"] "Shadows" (12624) "So Nice of You" (12625) "Solution" (12626-12629) [see also Box 484, folder 14690, and "Solution" in TD Writings: Screenplays and Radio Scripts ] "A Story of Stories" (12630-12632) "The Strangers" (12633) "Surcease" (12634) "Sympathy in Grey" (12635) "Tabloid Tragedy" (12636) "That Which I Feared" (12637) "Three Hundred Dollars" (12638) "The Tithe of the Lord" (13639-12640 [see also Box 483, folder 14672] "The Total Stranger" (12641-12642) "Transubstantiation" (12643) "Two Hundred Dollars" (12644) "Typhoon" (12645-12652) "The Virtues of Abner Nail" (12653) "The Voice from Heaven" (12654) "The Wages of Sin" (see "Typhoon") "What's Right" (12655) "When the Old Century Was New" (12656-12657) "Willard and Claire" (12658) "The Writer" (12659) Untitled story manuscripts (12660-12663) Untitled story of an unfaithful wife (12664) Untitled story outline (12665) Untitled story outline [related to "Revenge"?] (12666) Untitled story typescript (12667) Appendix C: Location List of Poems by Theodore Dreiser Title (Folders) "An Address to the Sun" (12700) "All" (12701) "All in All" (12702) "All Thought—All Sorrow" (12703) "Allegory" (12704) "Ambition" (12705) "Amid the Ruins of My Dreams" (12706) "And Continueth Not" (12707) "Arizona" (12708) "As a Lone Horseman, Waiting" (12709) "As with a Finger in Water" (12710) "The Ascent" (12711) "Asia" (12712) "The Aspirant" (12713) "Avatar" (12714) "The `Bad' House" (12715) "The Balance" (12716) "Bayonne" (12717) "The Beauty" (12718) "Before the Accusing Faces of Billions" (12719) "Bells" (12720) "Beyond the Tracks" (12721) "The Blurred of Vision" (12722) "Boom—Boom—Boom" (12723) "Borealis" (12724) "Brahma" (12725) "The Brief Moment" (12726) "The Broken Ship" (12727) "The Brook" (12728) "Brooklyn Bridge" (12729) "By the Waterside" (12730) "Cattails—November" (12731) "The Cattle Train" (12732) "Chief Strong Bow Speaks" (12733) "The City" (12734) "City's Accidents" (12735) "The City's Night" (12736) "The Coal Shute" (12737) "Commune" (12738) "Conclusion" (12739) "Confession" ["I!"] (12740) "Confession" ["Love has done this for me:"] (12741) "Contest" (12742) "Crowds" (12743) "Crows" (12744) "The Dancers" (12745) "The Dark Hazard" (12746) "Darkling Desires" (12747 "Dawn" (12748) "The Deathless Princess" (see "I Am Repaid") "Decadence" (12749) "Defeat" (12750) "Demogorgon" (12751) "Demons" (12752) "Desire—Ecstasy" (12753) "Die Sensucht" (12754) "Dives Advises" (12755) "Divine Fire" (12756) "Dreams" ["Always within the heart,"] (12757) "Dreams" ["Transitory dreams"] (12758) "Driven" (12759) "Elegy" (12760) "Epitaph" (12761) "Epitaph" [scored for music by Walter Grondstay] (12762) "Equation" (see "Exchange") "Escape" (12763) "Etching" (see "Pastel" ["The hills flow like waves"]) "Eunuch" (12764) "The Evanescent Moment" (see "The Brief Moment") "Evening—Mountains" (12765) "Evensong" (12766) "Everything" (12767) "The Evil Treasure" (12768) "Exchange" (12769) "The Excuse" ["It has been my lacks"] (12770) "The Excuse" ["Those things"] (12771) "Eyes" (12772) "The Factory" (12773) "Factory Walls" (12774) "The 
Failure" ["Always a man will take color from his work"] (12775) "The Failure" ["The unconscious that drove me"] (12776) "Fata Morgana" (12777) "The Favorite" (12778) "The Fire of Hell" (12779) "Five Moods in Minor Key" [Includes "Tribute," "The Loafer," "Improvisation," "Machine," and "Escape"] (12780) "Five Poems by Theodore Dreiser" [includes "Tall Towers," "The Poet," "In a Country Graveyard," "The Hidden God," and "The New Day"] (12781) "Flower and Rain" (12782) "The Fomentor" (12783) "The Fool" (12784) "For a Moment the Wind Died" (12785) "For a Moment the Wind Died" [scored for music by Lillian Rosedale Goodman] (12786 "For I Have Made Me a Garden" (12787) "The Forest" (12788) "Foreword" (12789) "Four Poems" [includes "Wood Note, "For a Moment the Wind Died," "They Shall Fall as Stripped Garments," and "Ye Ages, Ye Tribes!"] (12790) "14th Street" (12791) "Freedom" (12792) "Frustrated Desire" (12793) "Fugue" (12794) "The Funeral" (12795) "The Furred and Feathery" (12796) "The Galley Slave" (12797) "The Garden" (12798) "Geddo Street" (12799) "The Ghetto" (12800) "The Gift" (12801) "The Gifted Company" (12802) "The Gladiator" (12803) "Gold" (12804) "Good Fortune" (12805) "The Granted Dream" (12806) "Grant's Tomb" (12807) "The Great Face" (12808) "The Great Lack" (12809) "The Great Silence" (12810) "The Great Voice" (12811) "The Greater Need" (12812) "Harbor—Evening" (12813) "Heaven" (12814) "Heights" (12815) "Hell Gate" (12816) "Hey Rube!" (12817) "The Hidden Poet" (12818) "His Mother" (12819) "Home" (12820) "The Home Maker" (12821) "Honest Katie" (12822) "The House of Dreams" (12823) "The Hudson" (12824) "The Hudson—Morning" (12825) "The Hudson—West Shore—Evening" (12826) "The Husbandman" (12827) "I Am Repaid" (12828) "If Beauty Would But Dwell with Me" (12829) "The Image of Our Dreams" (12830) "Improvisation" (12831) "In a Negro Graveyard" (12832) "In Rebuttal" (12833) [In the Park] (12834) "In This Park" (12835) "In the Seaside Auditorium" (12836) "Individuality" (12837) "Innocence" (12838) "Inquiry" (12839) "Interrogation" (12840) "Intruders" (12841) "It" (12842) ["It is with these living"] (12843) "Ita Est" (12844) "Job and You" (12845) "Kansas and Nebraska" (12846) "Karma" (12847) "The Kiln" (12848) "Laborer—Mexico" (12849) "The Lack" (12850) "The Last Voice" (12851) "Let Me Know More of Thee" (12852) "Liberty" (12853) "Life"—2 versions: (1) ["Ever a greater illusion"] and (2) ["It is so beautiful"], scored for music by Lillian Rosedale Goodman (12854) "Light and Shadow" (12855) "Lillies and Roses" (12856) "Links" (12857) "Little Dreams, Little Wishes" (12858) "The Little Flower of Love and Wonder" (12859) "The Little Home" (12860) "Little Keys" (12861) "Little Moonlight Things of Song" (12862) "The Little Shops" (12863) "The Loafers" (12864) "Love" ["I am but a spoonful of honey"] (12865) "Love" ["I stood in the rain"] (12866) "Love" ["Like a cactus in a desert"] (12867) "The Love-Death" (12868) "Love Song" ["To me"] (12869) "Love Song" ["To me"] [scored for music by Hermann Erdlen; German libretto for baritone and string quartet by Lina Goldschmidt] (12870) "Love Song" ["You have entered my dreams!"] (12871) "The Lovers" ["Today!"] (12872) "The Lovers" ["Two resplendent flames"] (12873) "Machine" (12874) "Machines" (See "Summer") "Man" (12875) "The March" (12876) "Marriage" (12877) "Marsh Bubbles" (12878) "The Martyr" (12879-12880) "The Masque" (12881) "Material' Possessions" (12882) "The Meadows" (12883) "A Mean Street" (12884) "Melody" (12885) "The Merging" (12886) "Messenger" 
(12887) "The Miracle" (12888) "Mirage" (12889) "Miserere" (12890) "Mood Music" (12891) "Moon Moth" (12892) "Morning—East River" (12893) "Morning in the Woods" (12894) "Morning—North River 1." (12895) "Morning—North River 2." (12896) "Morning—the Whistle" (12897) "Mortuarium" (12898) "Mothers" (12899) "The Mourner" (12900) "The Muffled Oar" (12901-12902) "The Multitude" (12903) "The Mysterious Master" (12904) "Mystery" (12905) "The Myth of Possessions" (12906-12907) "Nature" (12908) "The Nestlings" (12909) "The New Day" (12910) "New Faces for Old" (12911) "The New World" (12912) "Newark Bay" (12913) "Nocturne—North River" (12914) "Not Forgotten" (12915) "Nothing" (12916) "Obliteration" (12917) "October" (12918) "Oh Urgent Seeking Soul" (12919) "The Old South" (12920) "The One and Only" (see "Die Sensucht") "Orchestra" (12921) "The Orient" (12922) "Out of? In?" (12923) "Outcast" (12924) "Passion" (12925) "Pastel" ["A grey day—"] (12926) "Pastel" ["The hills flow like waves"] (12927) "Pastel: Twilight" (12928) "The Perfect Room" (12929) "The Pervert" (12930) "Phantasm" (12931) "Phantasmagoria" (12932) "Pierrot" (12933) "Pigeons" (12934) "Polarity" (12935) "The Possible" (12936) "The Prisoner" (12937) "The Process" (12938) "Proclamation" (12939) "The Prophet" (12940) "Proteus" (12941) [see also "The Fomentor"] "The Psychic Wound" (12942) "Question" (12943) "The Question" ["More life for more people—"] (12944) "The Question" ["No gratitude?"] (12945) "The Questioner" (12946) "Rain" (12947) "Rain—November" (12948) "' Reality, '" (12949) "Recent Poems of Life and Labour" [includes "The Factory," "The Stream," and "Geddo Street"] (12950) "The Reformer Speaks" (12951) "Regret" (12952) "Religion" (12953) "Requiem" (12954) "Requiem" [scored for music by Vera Dreiser] (12955) "Resignation" (12956) "Revenge" (12957) "Revery" (12958) "Revolt" (12959) "Reward" (12960) "The Riddle" (12961) "The River Dirge" (12962) "River Scene" (12963) "The Sailor" (12964) "St. Francis to His God" (12965) "St. George's Ferry" (12966) "St. John" (12967) "St. Lukes" (12968) "Sanctuary" (12969) "The Savage" (12970) "Schimpfen Sie" (12971) "Search Song" (12972) "Selah" (12973) "The Self-Liberator" (12974) "Seraphim" (12975) "Shadow" (12976) "The Shadow" (12977) "Shimtu" (12978) "Siderial" (12979) "The Singer" (12980) "Something Is Thinking" (12981) "Song" ["Blow winds of summer, blow"] (12982) "Song" ["Old woman"] (12983) "Song—Rain" (12984) "The Sons of Prometheus" (12985) "Soo-ey" (12986) "The Sower" (12987) "The Sowing" (12988) "Static" (12989) "The Storm" (12990) "The Stranger" (12991) "The Stylist" (12992) "Summer" (12993) "A Summer Evening" (12994) "Sun and Flowers and Rats" (12995) ["Sunday again the city will sleep late"] (12996) "Sunset" (12997) "Sunset and Dawn" (12998) "Supplication" (12999) "Sutra" (13000) "Take Hands" [scored for music by Carl E. 
Gehring] (13001) "Tenantless" (13002) "That Accursed Symbol" (13003) "They Have Conferred with Me in Solemn Counsel" (13004) ["The things of death are bitter and complete"] (13005) "The Thinker" ["Majestic"] (13006) "The Thinker" ["Out of Boost Pegram's poolroom"] (13007) "Thought" (13008) "Thoughts" (13009) "Through All Adversity" (13010) "Tigress and Zebra" (13011) "Time" (13012) [see also "The New World"] "The Time-Keeper" (13013) "Times Square (Midnight)" (13014) "Tis Thus You Torture Me" (13015) "To a Windflower" (13016) "To a Wood Dove" [scored for music by Lillian Rosedale Goodman] (13017) "To Make Him Know" (13018) "To Oscar Wilde" (13019) "To You" (13020) "The Torrent" (13021) "The Tower" (13022) "The Toymaker" (13023) "The Traveler" (13024) "Trees" (13025) "Tribute" (13026) "The Triumph" (13027) "The Troubadour" (13028) "Tryst" (13029) "Two by Two (13030) "The Ultimate" (13031) "The Ultimate Necessity" (13032) "The Unterrified" (see "Love" ["Like a cactus in a desert"]) "Us" (13033) "The Victor" (13034) "The Vigil" (13035) "The Voyage" (13035) "Walls" (13036) "The Wanderer" (13037) "The Watch" (13038) "The Waterside" (13039) "What" (13040) "What to Do" (13041) "Who Lurks in the Shadow?" (13042) "Winter" (13043) "With Whom Is Shadow of Turning" (13044) "Wood Tryst" (13045) "Words" (13046) "Wounded by Beauty" (13047) "The Wraith" (13048) "You Are the Silence" (13049) "The Young Girl" (13050) "Young Love" (13051) "Youth" (13052) Appendix D: Location List of Plays by Theodore Dreiser Title (Folders) "The Bargainers—a Modern Drama" (13070-13071) "The Bell" (13072) "The Best People" (13073) "The Blue Sphere" (13074-13080) ["The Blue Sphere"] "Die blaue Kugel" [scored for music by Hermann Erdlen; translation by Lina Goldschmidt and Hans Bodenstedt] (13081-13084) "The Choice" (13085-13095) [see also "The Choice" in TD Writings: Screenplays and Radio Scripts.] 
"The Dream" (13096-13100) "The End: A Reading Play in Scenes" (13101) "Fidelity" (13102) "The Fool: A Tragedy" (13103) "The Girl in the Coffin" (13104-13109) "Gorm: A Tragedy" (13110) "The Hand of the Potter" (13111-13124) "The Herald" (13125) "In the Dark" (13126-13127) "Jeremiah I" (13128) "Laughing Gas" (13129-13130) "Laughing Gas" [scored for music by Ivan Boutnikoff] (13131) "The Legacy" (13132) "The Light in the Window" (13133-13134) ["The Light in the Window"] "Das Licht im Fenster" [German translation by Lina Goldschmidt] (13135) Mildred—My Mildred" (13136-13140) "The Neer-do-Well" (13141) "Old Rag Picker" (13142) "Phantasmagoria" (13143) "The Spring Recital" (13144) "The Spring Recital" (ballet-pantomime) [music by Ivan Boutnikoff] (13145) "Town and Country" (13146) "The Voice" (13147) Fragments and outlines (13148-13149) Appendix E: Location List of Screenplays and Radio Scripts by Theodore Dreiser Title (Folders) Memorandum re possible movie material in TD's work (13150) List of movie scenarios by TD or of TD's works (13151) "Arda Cavanaugh" [screen adaptation by Elizabeth Coakley] (13152) [see also "Cinderella the Second" "Big Town: Death Weather" [radio adaptation by Marian Spitzer and Milton Merlin] (13153-13157) "Box Office" [screen adaptation by Elizabeth Coakley] (13158-13159) "Chaduji" (13160-13162) "The Choice" (13163-13165) [see also "The Choice" in TD Writings: Plays ] "Cinderella the Second" [screen adaptation by Elizabeth Coakley] (13166-13168) [see also "Arda Cavanaugh"] "The Clod" (13169) "Culhane, the Solid Man" (13170-13171) "The Door of the Trap" (13172-13174) "Hadassah or Ishtar or Esther" (13175) "The Hand" (13176-13177) "Helen of Troy" (13178) "Home Is the Sailor" [outline for movie script by Esther McCoy] (13179-13182) "Lady bountiful, Jr." (13183-13184) "The Long Long Trail" (13185-13187) "The Lorlei" (13188) "My Gal Sal" (13189-13193) "My Gal Sal" [outline for a movie script by Helen Dreiser] (13194-13196) "My Gal Sal" [by?] (13197-13198) "My Gal Sal" [a review by C. J. Dyer] (13199) "Our America" [proposal for radio series] (13200-13202) "The Prince Who Was a Thief" (13203-13206) "Revolt or Tobacco" [source material] (13207-13221) "Revolt or Tobacco" [synopses, outline, and summary] (13222-13225) "Revolt or Tobacco" [photographs from trip] (13226) "Revolt or Tobacco" [notes from trip] (13227) "Revolt or Tobacco" [material on Super Pictures, Inc.] 
(13228-13230) "Revolt or Tobacco" (13231-13294) "Sanctuary" [screen adaptation by Helen Dreiser] (13295) "Solution" [outline, synopsis by Elizabeth Kearney, screen adaptation] (13296-13304) [see also "Solution" in TD Writings: Short Stories ] "Storm Tossed" (13305) "Stuck with the Glue: A Detective Drama" (13306) "Suggested script for Anna Sten" (13307) "Suicide Clinic" [screen adaptation by Esther McCoy] (13308-13309) "The Tables Turned" (13310) "The Tiger" (13311) "The Tithe of the Lord" [synopsis for a motion picture by Elizabeth Coakley] (13312) "The Twenty Wishes" (13313) "Vaitua" (13314-13316) "Women Always Knit" [by Ladislas Foodor, with comments and suggestions by TD and Elizabeth Coakley] (13317) Untitled ideas for screenplays (13318-13321) Appendix F: Manuscript and Sheet Music by Paul Dresser "After the Battle" (1905) - 2 copies "The Army of Half-Starved Men" (1902) - includes advertisement for "Glory to God" inside front cover "Ave Maria" (1908) "A Baby Adrift at Sea, Song and Chorus" (1890) "Baby's Tears, Song and Chorus" (1889) "The Battery" (1895) "The Boys are Coming Home To-day" (1903) "Come Tell Me What's Your Answer, Yes or No" (1908) - 2 copies "Coontown Capers, Two-Step March (A Negrosyncrasy)" (1907) - by Theo. F. Morse with characteristic verse by Paul Dresser "The Curse of the Dreamer, Descriptive Solo for Baritone or Mezzo-Soprano" (1908) "The Day That You Grew Colder, A Retrospective Ballad" (1904) - includes advertisement for "Mary Mine" "Days Gone By, Song and Chorus" (1900) "Did You Ever Hear a Nigger Say 'Wow'" (1900) - 2 copies "Don't Forget Your Parents" (1889) - minor lyric changes and key change from 1887 version "Don't Forget Your Parents at Home" (1887) "A Dream of my Boyhood's Days" (1906) "Every Night There's a Light, or, The Light in the Window Pane" (1908) "Gath'ring Roses for Her hair, Sentimental Song" (?) "Glory to God, Sacred Song" (1902) "The Green Above the Red" (1900) - 2 copies, both include advertisement for "In Good Old New York Town" on p. 5 "He Brought Home Another" (1896) - 2 copies, one published by Howley, Haviland and Co., the other by Herbert H. Taylor, inc. "He Didn't Seem Glad to See Me" (1903) "He Fought for the Cause He Thought was Right" (1906) "He Loves Me, He Loves Me Not" (1906) "He Was a Soldier" (1902) "Her Tears Drifted Out With the Tide" (1900) "I Long To Hear from You" (1888) "I Send to Them My Love" (1888) "I Was Looking for My Boy, She Said; or Decoration Day" (1905) - 2 copies "I Wish that You Were Here Tonight" (1896) "I Wonder If She'll Ever Come Back To Me" (1906) "I Wonder If There's Someone Who Loves Me" (1900) "If You See My Sweetheart" (1907) "I'm Going Far Away, Love" (1902) "In Dear Old Illinois" (1902) "In the Sweet Summer Time" (1907) - 2 copies "Jim Judson (From the Town of Hackensack)" (1905) "The Judgement is at Hand (Paul Dresser's Last Song)" (1906) "Just to See Mother's Face Once Again" (1901) "The Limit Was Fifty Cents" (1900) "Little Fanny McIntyre, Waltz Song" (1900) "Little Jim" (1900) "The Lone Grave" (1900) "Love's Promise" (1887) "Mary Mine" (1904) - 2 copies "Mother Will Stand By Me" (1889) "Mr. Volunteer; or, You Don't Belong to the Regulars, You're Just a Volunteer" (1901) - includes advertisement for "The Voice of the Hudson" on p. 4 "My Flag! My Flag!" 
(1902) "My Gal Sal; or, They Called Her Frivolous Sal" (1905) - includes sample quartet chorus inside front cover "My Sweetheart of Long, Long Ago" (1901) "Never Speak Again" (1887) "Niggah Loves His Possum; or, Deed, He Do, Do, Do" (1905) "The Old Flame Flickers, and I Wonder Why" (1908) "On the Banks of the Wabash, Far Away" - one copy is missing the music but has P. Dresser's autograph inside back cover, signature dated Jan. 6, 1899; another copy (copyright, 1907) is complete and includes a sample of "You Mother Wants You Home, Bo y (And She Wants You Mighty Bad)" inside front cover; 2 other copies (copyright, 1912) and another (1922) which touts silent screen star Madge Evans "On the Shore of Havana, Far Away (A Paraphrase)": to the melody of the Famous Song "On the Banks of the Wabash" (1908) "Once Every Year" (1908) - 2 copies "Our Country, May She Always Be Right, But Our Country Right or Wrong" (1908) "Perhaps You'll Regret Someday" (1908) - 2 copies "A Sailor's Grave by the Sea" (1907) - 2 copies "Say Yes, Love!" (1907) - 2 copies, one with front cover missing "Show Me the Way, Sacred Song" (1906) "The Songs We Loved, Dear Tom" (1888) "A Stitch in Time Saves Nine" (1889) "The Story of the Winds" (1888) "Sweet Savannah" (1908) - 2 copies "Take a Seat Old Lady" (1901) "There's a Ship" (1902) - 2 copies "We are Coming Cuba Coming" (1908) "We'll Fight Tomorrow Mother" (1908) "When I'm Away From You, Dear" (1904) "When Mammy's By Yo' Side" (1900) "When Zaza Sits on the Piazza" (1905) - words by Jos. Farrell and music by Henry Frantzen; includes advertisement for "Jim Judson (From the Town of Hackensack)" inside front cover; on p.3 a note by Theodore Dreiser (T.D.) states that Paul Dresser w rote both the music and the lyrics "White Apple Blossoms" (1901) "Wrap Me in the Stars and Stripes" (1900) "Your God Comes First, Your Country Next, Then Mother Dear" (1908) "Your Mother Wants You Home, Boy (And She Wants You Mighty Bad)" (1908) "You're Going Far Away, Lad; or, I'm Still Your Mother Dear" (1907) "You'se Just a Little Nigger, Still You'se Mine All Mine" (1908) Additional Material Letter - from Emily Grant von Tetzel to the editor of "The World"; includes Dresser's verses "The Wolves of Finance", dated March 15, 1917 Clippings of lyrics - "Mother Told Me So" and "The Letter that Never Came" Clipping - Paul Dresser's obituary, February 10, 1906 Lyric Sheets - typed and handwritten - "Drink to Your Sweethearts Dear," "I Hate to Leave You Behind" and "The Judgement is at Hand"; 2 sheets have notes by Theodore Dreiser Picture of Paul Dresser Cards from Paul Dresser's funeral (also "Mementos") Copyright certificate for "You Are My Sunshine Sue" made in the name of Theodore Dreiser, dated 6/26/43 Ms. - "Baby Mine" Ms. - "The Great Old Organ" Ms. - "Marching through Georgia" - includes typed lyric sheet for same Ms. - "The People are Marching By" Ms. - "Would I Were a Child Again" Ms. - "You are my Sunshine Sue" Appendix G: Works by Others in the Theodore Dreiser Papers Description (Folders) Adams, Henry. "The Rule of Phase Applied to History" [1909] (14371) American Civil Liberties Union. "Legal Tactics for Labor's Rights" [1930] (14372) "American Literature in the U.S.S.R. (1939-1940)" (14373) Andrews, John William. "Georgia Transport" [1937] (14374) "Apostle of Naturalism" [1971] (14375) "An Appreciation of Dreiser's Dawn " [1931] (14376) Aragon, Louis. "When We Met Dreiser"; Burgum, Edwin Berry. "Dreiser and His America" [1946] (14377) Auchincloss, Louis. 
"Introduction" [to Sister Carrie ] [1969] (14378) Auerbach, Joseph. "Authorship and Liberty" [1918] (14379) Avary, Myrta Lockett. "Success—and Dreiser" [1938] (14380) Bardeleben, Renate von. "Dreiser's English Virgil" [1992] (14381) Bardeleben, Renate von. "Personal, Ethnic, and National Identity: Theodore Dreiser's Difficult Heritage" [1991] (14382) Bardeleben, Renate von. "The Thousand and Second Nights in 19th-century American Writing" [1991] (14383) Barnett, James. "Speeding Up the Workers" [1930] (14384) Becker, George J. "Theodore Dreiser: The Realist as Social Critic" [1955] (14385) Beerman, Herman, and Emma S. Beerman. "A Meeting of Two Famous Benefactors of the Library of the University of Pennsylvania—Louis Adolphus Duhring and Theodore Dreiser" [1974] (14386) Bein, Albert. "Straight from the Heart" [1938] (14387) Benezet, Carol. "To Theodore Dreiser" [poem] (14388) Beverly, Judith de. "The Genius: An Appreciation of Theodore Dreiser" [1921] [poem] (14389) Bingham, Robert W. "Buffalo's Mark Twain" [1935] (14390) Bird, Carol. "Dreiser on Censorship" [1949] (14391) Birinsky, Leon, and Kurt Siodmek. "Whitechapel" (14392) Bloom, Marion. [account of a nurse's experiences in World War I] (14393) Book Find News [issue in tribute to TD, March 1946] (14394) Book Find News, January 1947 (14395) Book Find News [issues with ads for TD's books, May and December 1946, April 1947] (14396) "Books of the Month: Floyd Dell and Theodore Dreiser" [1921] (14397) Bornstein, Josef. "Ein Dichter besichtigt Russland" [1929] (14398) Bourne, Randolph. "The Art of Theodore Dreiser" [1917] (14399) Bowman, Heath. Hoosier, chap. 18 [1941] (14400) Boyd, Willilam Riley. "A Contrast between the Whipping Post of 'Darkest Delaware' and the Convict Camps of Georgia" [1901] [speech] (14401) Braley, Berton. "Three--Minus One" [1920] (14402) Brand, Milton. [review of The Outward Room ] (14403) Braziller, George. "How Will Dreiser Be Honored?" [1946] (14404) Bulletin of the League of American Writers. [announcement of a dinner honoring TD, 1938] (14405) C.K. "To a Realist" [poem; see Harvey, Dorothy, "To T.D."] (14406) Campbell, Louise. "An Afternoon in a Boardwalk Auction Shop" (14407) Campbell, Louise. "Career" (14408) Campbell, Louise. "I'm Seventeen To-day!" (14409) [N.B.: other writings by Louise Campbell are in her correspondence file] Čapek, J. B. "Interview o Theodoru Dreiserovi" [1930] (14410) Carringer, Robert, and Scott Bennett. "Dreiser to Sandberg: Three Unpublished Letters" (14411) Čelakovský, F. L. Ohlasy Písní Českých [1925] (14412) [T]Chekhov, Anton. A Bear [1909] (14413) Chekhov, Anton. The Cherry Garden [1908] (14414) Chevalier, Haekon M. "The Intellectual in the American Community" [1933] (14415) [Clark, Clara L.]. "Challenge" [1933] (14416) [Clark, Clara L.]. "My Solitude" [1933] (14417) Clark, Clara L. [review of Beyond Women, by Maurice Samuel, 1934] (14418) Coakley, Elizabeth. [ideas for scenes for a movie, 1943] (14419) Conrad, Lawrence. "Theodore Dreiser" [1930] (14420) Cosulich, Gilbert. "Mr. Dreiser Looks at Probation" [1938] (14421) Cosulich, Gilbert. "Recent Data on Female Criminals" [1937] (14422) Cowley, Malcolm. "The Slow Triumph of Sister Carrie" [1947] (14423) Cunard, Nancy. "Black Man and White Ladyship" [1931] (14424) Cuthbert, Clifton. "An American Tragedy" [1930] (14425) Dash, Mike. "Charles Fort and a Man Named Dreiser" [after 1986] (14426) "David, the Story of a Soul" (14427-14429) [Davis, Mrs.]. [outline and script for a movie?] (14430) De Kruif, Paul. 
"Jacques Loeb" [fragment, 1925] (14431) Dietrich, John H. "Personal Beliefs of Noted Men" [1932] (14432) [Dostoyevsky, Fyodor]. "The Idiot" [playscript by Powys?] (14433-14435) Douglas, George. "For Theodore Dreiser" (14436) Dowell, Richard W. "'On the Banks of the Wabash': A Musical Whodunit" [1970] (14437) Dowell, Richard W. "'You Will Not Like Me, I'm Sure" [1970] (14438) Dreiser, Edward M. "Theodore Dreiser" [1946] (14439) "Dreiser: Detroit's Favorite Author" [1926?] (14440) "Dreiser in Passaic" [1932] (14441) Duis, Perry. Chicago: Creating New Traditions [1976] (14442) Dumont, Henry. [introduction to a biography of George Sterling, with additional material by Henry von Sabern] (14443) Dunsany, Lord. "A Night at an Inn" [1916] (14444) Elias, Robert. "Dreiser: Bibliography and the Biographer" [1971] (14445) Elias, Robert. "The Library's Dreiser Collection" [1950] (14446) Elias, Robert. "Theodore Dreiser: A Classic of Tomorrow" [ca. 1937] (14447) Esherick, Wharton. "He Helps Me Build a Building" (14448) "F." "Our Civilization" (14449) Farrell, James T. "The Fate of Writing in America" [1946] (14450) Farrell, James T. "A Night in August, 1928" (14451) Farrell, James T. "Some Correspondence with Theodore Dreiser" [1951] (14452) Farrell, James T. "Theodore Dreiser" [1946] (14453) Fast, Howard. [introduction to Best Short Stories of Theodore Dreiser, 1947] (14454) Fawcett, James Waldo. "The Genius" [poem] (14455) Ficke, Arthur Davison. "Memory of Theodore Dreiser" [1933] (14456) Ficke, Arthur Davison. "To Theodore Dreiser on Reading 'The Genius'" [1915] (14457) [review of The Financier ] (14458) Fort, Charles. "Had to Go Somewhere" [1910] (14459) Fox, George L. "The Panama Canal as a Business Venture" [1919?] (14460) Freeman, John. "An American Tragedy" [review of TD's book, 1927] (14461) "The French in Syria" [after 1926] (14462) Friedman, Stanley J. "Theodore Dreiser and the Dispossessed" [1948] (14463) Furmańczyk, Wiesĺaw. "A Naturalist's View of Ethics" [1979] (14464) Furmańczyk, Wiesĺaw. "Theodore Dreiser's Views on Religion in the Light of His Philosophical Papers" [1977] (14465) Gerber, Philip L. "Dreiser Meets Balzac at the 'Allegheny Carnegie'" [1972] (14466) Gerber, Philip L. "Dreiser's Financier: A Genius" [1971] (14467) Gerson, Thomas. "For Theodore Dreiser" [poem] (14468) Gibson, Pauline. "The Ghost of Benjamin Sweet" [1938] (14469) Gilman, Lawrence. "An Author's Famous Friends" (14470) Glaenzer, Richard Butler. "Dreiser" [1917] [poem] (14471) Goldschmidt, Alfonso. "Holitscher und Dreiser" [1929] (14472) Goldschmidt, Alfonso and Lina Goldschmidt. [comments on TD, in Spanish, 1928] (14473) Goldschmidt, Lina. "Theodore Dreiser" [in German] (14474) Goodman, Lillian Rosedale. "You Have My Heart" [song] (14475) Griffin, Joseph. "Butcher Rogaum's Door': Dreiser's Early Tale of New York" [1984] (14476) Griffin, Joseph. "Dreiser Revealed and Restored" [1984] (14477) Griffin, Joseph. "Theodore Dreiser Visits Toronto" [1983] (14478) Grosch, Anthony R. "Social Issues in Early Chicago Novels" [1975] (14479) Halstead, Blanche. "And Yet?" [poem] (14480) Halstead, Blanche. "To a Rose" [poem] (14481) Hamilton, James Burr (ed.). "The Whipping Block: A Study of English Education" [1941?] (14482) Hapgood, Hutchins. "Out of the Darkness" [a dialogue] (14483) Hapgood, Hutchins. "The Primrose Path" [play] (14484) "Harlan County" and "Revolt or Tobacco" (14485) Harris, Marguerite Tjader. "Call for a Re-issuing of Dreiser's Bulwark " [after 1965] (14486) Harris, Marguerite Tjader. 
"Dreiser's Popularity in Russia" [1963] (14487) Harris, Marguerite Tjader. "Dreiser's Style" (14488-14490) Harris, Marguerite Tjader. "God as Looser" (14491) Harris, Marguerite Tjader. "Theodore Dreiser Loved Science" [in Russian, 1964] (14492) Hartmann, Sadakichi. "Passport to Immortality" [1927] (14493) [Harvey, Alexander]. [on the suppression of The "Genius," 1916] (14494) Harvey, Dorothy Dudley. "To T.D." (14495) [Harvey], Dorothy Dudley. Forgotten Frontiers: Dreiser and the Land of the Free [galleys, 1932] (14496) Hazlitt, Henry. "Our Greatest Authors: How Great Are They?" [1932] (14497) Hidaka, Masayoshi. [5 articles on TD in Japanese] (14498) Hill, Lawrence. [paper written for English course at Yale University, 1933] (14499) Hoffman, Helene. "This Myth Virginity" (14500) [Holloway, Mrs.?]. Ancient Cosmologies and Symbolisms (14501-14505) Holtz, Sophie. "A Devil Personified" (14506) Huddleston, Sisley. [essay in Back to Montparnasse ] (14507) Hurst, Fannie. "Back Street" [outline for a movie script by?] (14508) Huth, John E., Jr. "Dreiser and Success: An Additional Note" [1938] (14509) Huth, John E., Jr. "Theodore Dreiser, Success Monger" [1938] (14510) Huth, John E., Jr. "Theodore Dreiser: `The Prophet'" [1937] (14511) International Labor Defense. "The International Labor Defense: Its Constitution and Organization Resolution" [1929] and "Death Penalty" [1930] (14512) [introductory remarks by? on appearance together of Rabindranath Tagore and Ruth St. Denis] (14513) Jarmuth, Edith DeLong. "To Theodore Dreiser" [poem] (14514) Jerome, Helen. "Dreiser: The Man of Sorrow" [poem] (14515) Kalinka, Maga. "To T.D." [poem] (14516) Kapustka, Bruce. "Shadows of Dreams and Souls" [poem] (14517) Kazin, Alfred. "The Lady and the Tiger: Edith Wharton and Theodore Dreiser" [1941] (14518) Keeffe, Grace M. "Novelistas de la nueva generación: Louis Bromfield" [1930] (14519) King, Alexandra C. "Theodore Dreiser: An Impression" [poem] (14520) Knight, Eric M. "Pimpery—Twentieth Century" (14521) Kraft, H. S. "Dreiser's War in Hollywood" [1946] (14522) Kussell, Sally. "The Cheat" (14523) [Kussell, Sally.]. "The Love of Lizzie Morris" (14524) Kuttner, Alfred B. "The Lyrical Mr. Dreiser" [1912] (14525) La Follette, Suzanne. "The Modern Maecenas" [1925] [fragment] (14526) Latour, Marian. "To T.D." [poem] (14527) LeBerthon, Ted. "This Side of Nirvana" [1930s] (14528) Le Clercq, J. G. C., and W. H. Chamberlin. "Books, Art and Morality" [1917] (14529) Lee, Gerald Stanley. [from "The Lost Art of Reading," 1912/1913?] (14530) Lengel, William C. "Books That Made Me What I Am Today" [1930] (14531) Lengel, William C. "The `Genius' Himself" [1938] (14532) Lengel, William C. "Theodore Dreiser" [poem] (14533) Llona, Victor. "Les U.S.A. jugés par Théodore Dreiser" [1932] (14534) Logan, Chass. "Sister Carrie" [review] (14535) Lord, David. "Dreiser Today" [1941] (14536) Lyon, Harris Merton. "The Chorus Girl" (see Box 484, folder 14694) Lyon, Harris Merton. "Eve and the Walled-In Boy" (14537) Lyon, Harris Merton. "From Fancy's Point of Views" (14538) Lyon, Harris Merton. "An Unused Pattlesnake" (14539) Lyon, Harris Merton. "The Weaver Who Clad the Summer" (14540) [McCord, Donald P.]. "One Night" [by "Michael Vivadieu"] (14541) [McCord, Donald P.]. "We, The People" [by "Michael Vivadieu"] (14542) McCord, P[eter] B. "Niangua's Tears" (14543) McCoy, Esther. "Outward Journey" (14544) [N.B.: other writings by Esther McCoy are in her correspondence file] McDonald, Edward. 
"Dreiser before `Sister Carrie'" [1928] (14545) Markham, Kirah. "K.M. to Th.D." and "To My Love" (14546) [Markham, Kirah?]. "Sisters" [play] (14547-14549) [Markham, Kirah?]. [untitled play] (14550) Masters, Edgar Lee. "The Return" [1938] (14551) Masters, Edgar Lee. "Taking Dreiser to Spoon River" [1939] (14552) Masters, Edgar Lee. "Theodore Dreiser—a Portrait" [1915] (14553) Masters, Edgar Lee. "Theodore the Poet" (14554) Masters, Marcia Lee. "Ghostwriting for Theodore Dreiser" [1991] (14555) Mencken, H. L. "American Street Names [1948] (14556) Mencken, H. L. "The Birth of New Verbs" [after 1948] (14557) Mencken, H. L. "Bulletin on `Hon'" [1946] (14558) Mencken, H. L. "Designations for Colored Folk" [after 1944] (14559) Mencken, H. L. [review of A Gallery of Women, 1930] (14560) Mencken, H. L. "Names for Americans" [1947] (14561) Mencken, H. L. "Some Opprobrious Nicknames" [1949] (14562) Mencken, H. L. "War Words in England" [1944] (14563) Mencken, H. L. "What the People of American Towns Call Themselves" [1948] (14564) Mencken, H. L. [statement used in TD's memorial service, 1946] (14565) Michail Gourakin, by Lappo Danileveskaya [book review by?] (14566) Miller, William E., and Neda M. Westlake (eds.). "Essays in Honor of Theodore Dreiser's Sister Carrie " [special issue of  Library Chronicle, 1979] (14567) Minor, Robert. [address to 6 Dec. 1931 meeting of the National Committee for the Defense of Political Prisoners] (14568) Mizuguchi, Shigeo. "The Dreiser Collection at the University of Pennsylvania" [in Japanese] (14569) Mizuguchi, Shigeo. [article on TD in Japanese, 1970] (14570 Mooney, Martin. [statement on his firing by Universal Studios, after March 1932] (14571) Mordell, Albert. "My Relations with Theodore Dreiser" [1951] (14572-14573 Mouri, Itaru. [4 articles on TD in Japanese, with synopses for 3 of the 4 in English, 1969-1973] (14574) National Grays Harbor Committee. Defend Civil Rights in Grays Harbor County" [1940] (14575) "Notes of Mr. Theodore Dreiser's Ideas on: The Stabilizing of Personal Emotions " (14576) Oppenheim, James. "Theodore Dreiser" [poem] (14577) Palmer, Erwin. "Theodore Dreiser, Poet" [1971] (14578) "The Passing of Pan" [poem] (14579) Patel, Rajni. "Brother India" [1940] [preface by Paul Robeson] (14580) Paz, Magdeleine. "Vue sur l'Amerique" [after 1931] (14581) Perdeck, A. "Realism in Modern American Fiction" [1931] (14582) Perfilieff, Vladimir. [untitled account of incidents in the Far North among the Eskimo] (14583) [Perfilieff, Vladimir]. [untitled essay] (14584) Pizer, Donald. "Dreiser's Novels: The Editorial Problem" [1971] (14585) Poe, Edgar Allen. "The Tell Tale Heart" [radio dramatization by ?, 1937] (14586) [poetry by ?] (14587) "Policy" and "Note of separate comment" (14588) "The Pool" [poem] (14589) Powys, John Cowper. "Nietzsche" [notebook] (14590) Powys, John Cowper. Wolf Solent [1929] [bound page proofs] (14592) "Public Sucker Number One" [by I. N. Weber or William C. Lengel, after 1933] (14591) Raja, L. Jeganatha (ed.). Journal of Life, Art and Literature [special issue on Theodore Dreiser, 1984] (14593) Reilly, William J. "Of the Screen By the Screen and For the Screen" [1926] (14594) Reis, Irving. "St. Louis Blues" [1937] [radio play] (14595) Riggs, Lynn. "The Lonesome West" [1928] [play] (14596-14597) Robinson, LeRoy. "John Howard Lawson's Struggle with Sister Carrie " [1983] (14598) "Romance" [plot for a play] (14599) Roosevelt, Franklin Delano. "Our Realization of Tomorrow" [1945] (14600) Root, Waverly Lewis. 
[review of French translation of "Nigger Jeff," by Victor Llona, in Contemporary Foreign Novelists, 1931] (14601) Rosenthal, Elias. "Theodore Dreiser's 'Genius' Damned" [1916] (14602) Salzman, Jack. "The Publication of Sister Carrie : Fact and Fiction" [1967] (14603) Salzman, Jack. (ed.). Modern Fiction Studies [special issue on Theodore Dreiser, 1977] (14604) [Sayre, Kathryn]. "A Cosmos of Women" (14605) [Sayre, Kathryn]. "The Themes of Dreiser" (14606) [N.B.: other writings by Kathryn Sayre are in her correspondence file] [Scottsboro trial, press release and notes, 1931] (14607) Scudder, Raymond. "Samuel F. B. Morse" [1938] (14608) Sebestyén, Karl. "Theodore Dreiser at Home" [1930] (14609) Seymour, Katherine. "Famous Loves: Cleopatra: Episode No. 1" [1929] (14610) Seymour, Katherine. "Famous Loves: Episode 11: Heloise and Abelard" (14611) "Seymour Seligman on 'Theodore Dreiser and His Gallery of Women'" (14612) "Shaw on Dreiser" [1942] (14613) Shively, Henry L. "How Hickey Escaped the Fate of Lot's Wife" (14614) [review of Sister Carrie in  Style and American Dressmaker, 1907] (14615) Smith, Edward H. "Dreiser—after Twenty Years" [1921] (14616) Smith, Lorna. "Theodore Dreiser" [2 essays] (14617) Smith, Mary Elizabeth. "Theodore Dreiser: A Great American" (14618) Spector, Frank. "Story of the Imperial Valley" [1930] (14619) "Stars at a Glance" (14620) Sterling, George. "Everest" [poem] (14621) Sterling, George. "Intimations of Infinity" (14622) Sterling, George. "Sonnets to Craig" [1928] (14623) Sterling, George. "Strange Waters" [poem] (14624) Stevenson, Lionel. "George Sterling's Place in Modern Poetry" [1929] (14625) "Story for a Musical Comedy" (14626) "Suggestions for Radio Playwrights: Campana's 'First Nighter' 'Grand Hotel' Broadcasts" (14627) Tatum, Anna P. "Christ Petrified" [poem] (14628) Taylor, G. R. Stirling. "Theodore Dreiser" [1926] (14629) "Theodore Dreiser" (14630) "Theodore Dreiser" [poem] (14631) "Theodore Dreiser: Court Reporter" (14632) "Theodore Dreiser Centenary Exhibit" [catalog, 1971] (14633) Theodore Dreiser Centenary Issue of The Library Chronicle [1972] (14634) Thomas, Norman. "Will Fascism Come to America?" [1934] (14635) "To Theodore Dreiser author of 'Chains'" [poem] (14636) "Tom Kromer's Autobiography" (14637) Troy, William. "The Eisenstein Muddle" [1933] (14638) "Under Currents" (14639) Wadsworth, P. Beaumont. "America Ueber Alles" [1929] (14640) Warren, Whitney. "'The Vicious Circle'" [1932] (14641) Weaver, Raymond. "A Complete Handbook of Opinion" [1927] (14642) "The Weavers" [play] (14643) [Williams, Alexander]. [essay in response to Tragic America ] (14644) [Williams, Estelle Kubitz?]. "An Aristocrat" (14645) [Williams, Estelle Kubitz?]. "The Austrian Tangle" (14646) [Williams, Alexander]. [autobiographical account written after 1923] (14647) [Williams, Alexander]. "Bee" (14648) [Williams, Alexander]. [diary notes from 24 July - 1 Sept. 1912] (14649) [Williams, Alexander]. [diary notes from 1-11 March 1919] (14650) [Williams, Alexander]. "A Dream" (14651) [Williams, Estelle Kubitz?]. "The Heir" (14652) [Williams, Alexander]. "An Idyl" (14653) [Williams, Alexander]. "Misplaced Ambition" (14654) [Williams, Alexander]. "My Stage Experiences" [by "Miss Nonentity"] (14655) [Williams, Alexander]. "The One Hundred Hoddy-Doddys" (14656) [Williams, Alexander]. [poems, jokes] (14657) [Williams, Estelle Kubitz?]. "Tissemao and the Cuttlefish" (14658) [Williams, Estelle Kubitz?]. [untitled story] (14659) Woljeska, Helen. 
"The End of the Ideal" [1916) [play] (14660) Yewdall, Merton S. "Theodore Dreiser—Man and Scientific Mystic" (14661) Zanine, Louis J. "From Mechanism to Mysticism: Theodore Dreiser and the Religion of Science" [1981] (14662-14663) [3 untitled typescripts] (14664-14666) Cassette tape of lecture on TD by Fred C. Harrison, and note to Myrtle Butcher, 19 Nov. 1974 (14667) "Murder on Big Moose?": videotape and note from Trina Carman, 28 Sept. 1988 (14668) Return to Top » © University of Pennsylvania | dmcknigh@pobox.upenn.edu     dlfteach-pubpub-org-5025 ---- #DLFteach Toolkit Volume 3 CFP · #DLFteach Skip to main content SearchDashboardcaret-downLogin or Signup Home #DLFteach Toolkits Teaching with Digital Primary Sources #DLFTeach Toolkit Volume 3: Call for Proposals The DLF Digital Library Pedagogy group invites all interested digital pedagogy practitioners to contribute to a literacy and competency centered #DLFteach Toolkit, an online, open resource focused on lesson plans and concrete instructional strategies. We welcome contributors from academic and other educational institutions, including public and special libraries, in any setting, role, and career stage.   The DLF Digital Library Pedagogy group (aka #DLFteach) is a grassroots community of practice within the larger Digital Library Federation that is open to anyone interested in learning about or collaborating on digital library pedagogy. Toolkit Volume 3 will emphasize the teaching of literacies and competencies foundational for digital scholarship and digital humanities work and/or literacies and competencies acquired through the act of engaging in such work. By “literacies” we mean visual literacy, digital literacy, data literacy, and information literacy, etc. By “competencies” we mean foundational digital skills that provide both a practical and critical understanding of digital technologies (See Bryn Mawr's Digital Competencies for more information).  Example lessons include: A semester long digital exhibit project that has students creating metadata for visual materials related to the topic of their course. In addition to a librarian-led session on the mechanics of the platform, students learn about critical approaches to metadata and must include a reflective analysis on the potential for bias in their choice of descriptive keywords and subject terms. The project engages with information literacy frames like "Authority is Constructed and Contextual" as well as visual literacy competencies like "acquires and organizes images and source information."  A project training session in which team members (faculty, staff, and students) design a data model that can address the project's research questions. The exercise works for multiple audiences and establishes a foundation in data literacy for all participants. Concepts like tidy data, data types, and query languages are introduced. A blog post assignment that asks students to verbally interpret a privacy statement of an online company or institution to another student. Students must include an audio file of their conversation with a peer (and fill out a consent form). This assignment has students engaging with multiple digital competencies: managing a digital identity, privacy, and security; collaborative communication; and digital writing and publishing. “Lesson plans” are activities, basic exercises, assignments, project instruction, and the like that are used in situations ranging from one-off library sessions to multi-day workshops to semester-long courses. 
Lesson plans can be designed for synchronous or asynchronous/remote instruction. The works we seek to include will be creative and critical in nature. Ideally, they will push the boundaries of traditional approaches and frameworks. For instance, we are interested in lessons that highlight the intersections of literacies from different frameworks, not just alignment with the ACRL Framework for Information Literacy. Areas of critical importance include: Transferable skills. How are the literacy skills gained transferable beyond that particular lesson? How might your lesson promote digital citizenship in an age of misinformation? Attention to "implementation fidelity." How have you improved your lesson in response to assessment? Accessibility and the digital divide. How is accessibility baked into your lesson? How do we teach technology-based lessons when not everyone has equal access? Our aim is to provide practitioners with lessons that can be adapted for a variety of curricular contexts and instructional roles. We highly encourage submissions that demonstrate collaborations between library staff in different roles or with instructors outside the library. Proposals are due by September 1st, 2021 and should be limited to 250 words. When writing your proposals, please consider the toolkit template. Proposals should include: a description of your lesson; learning outcomes; a statement on the literacies involved in your lesson; note any collaborators (collaboration with other instructional partners is encouraged!). Toolkit 3 proposal submission form dlfteach-pubpub-org-9351 ---- #DLFteach Toolkits · #DLFteach Welcome to the #DLFteach Toolkits: Lesson Plans for Digital Library Instruction! This series of openly available, peer-reviewed lesson plans and concrete instructional strategies is the result of a project led by the professional development and resource sharing subgroup. This publication emerged from #DLFteach workshops, office hours, Twitter chats, and open meetings. Community members and digital pedagogy practitioners expressed interest in lesson plans and session outlines which they could use as a jumping-off point for their own instruction and adapt for local contexts. #DLFteach Toolkit 1.0 was published in 2019 and contains twenty-one lesson plans on a variety of topics related to digital library instruction. In 2020, the organizers of Immersive Pedagogy: A Symposium on Teaching and Learning with 3D, Augmented and Virtual Reality, along with #DLFteach members, initiated Volume 2 of the toolkit with a focus on immersive technologies. To come in 2022 is a third edition of the toolkit series focused on the intersection of literacies in digital library instruction. All lessons include learning goals, preparation, and a session outline. Additional materials — including slides, handouts, assessments, and datasets — are hosted in the DLF OSF repository as well as linked from each lesson. Download slides to see notes for presenters, and data that is too large to render in preview. There you will also find markdown versions of each lesson plan for you to use.
#DLFteach Toolkit 1.0 #DLFteach Toolkit Volume 2: Lesson Plans on Immersive Pedagogy #DLFteach Toolkit Volume 3 CFP #DLFteach RSS Legal Published with dltj-org-1701 ---- Disruptive Library Technology Jester Skip to primary navigation Skip to content Skip to footer Disruptive Library Technology Jester About Resume Toggle search Toggle menu Peter Murray Library technologist, open source advocate, striving to think globally while acting locally Follow Columbus, Ohio Email Mastodon Twitter Keybase / PGP key GitHub LinkedIn StackOverflow ORCID Email Recent Posts DLTJ Now Uses Webmention and Bridgy to Aggregate Social Media Commentary Posted on July 11, 2021 2 minute read When I converted this blog from WordPress to a static site generated with Jekyll in 2018, I lost the ability for readers to make comments. At the time, I t... Digital Repository Software: How Far Have We Come? How Far Do We Have to Go? Posted on June 23, 2021 5 minute read Bryan Brown’s tweet led me to Ruth Kitchin Tillman’s Repository Ouroboros post about the treadmill of software development/deployment. And wow do I have th... Thoughts on Growing Up Posted on May 28, 2021 less than 1 minute read It ‘tis the season for graduations, and this year my nephew is graduating from high school. My sister-in-law created a memory book—”a surprise Book of Advice... More Thoughts on Pre-recording Conference Talks Posted on April 8, 2021 7 minute read Over the weekend, I posted an article here about pre-recording conference talks and sent a tweet about the idea on Monday. I hoped to generate discussion abo... Should All Conference Talks be Pre-recorded? Posted on April 3, 2021 6 minute read The Code4Lib conference was last week. That meeting used all pre-recorded talks, and we saw the benefits of pre-recording for attendees, presenters, and con... Previous 1 2 3 … 131 Next Enter your search term... Twitter GitHub Feed © 2021 Peter Murray. Powered by Jekyll & Minimal Mistakes. doc-rust-lang-org-2701 ---- Introduction - The Cargo Book Introduction 1. Getting Started 1.1. Installation 1.2. First Steps with Cargo 2. Cargo Guide 2.1. Why Cargo Exists 2.2. Creating a New Package 2.3. Working on an Existing Package 2.4. Dependencies 2.5. Package Layout 2.6. Cargo.toml vs Cargo.lock 2.7. Tests 2.8. Continuous Integration 2.9. Cargo Home 2.10. Build Cache 3. Cargo Reference 3.1. Specifying Dependencies 3.1.1. Overriding Dependencies 3.2. The Manifest Format 3.2.1. Cargo Targets 3.3. Workspaces 3.4. Features 3.4.1. Features Examples 3.5. Profiles 3.6. Configuration 3.7. Environment Variables 3.8. Build Scripts 3.8.1. Build Script Examples 3.9. Publishing on crates.io 3.10. Package ID Specifications 3.11. Source Replacement 3.12. External Tools 3.13. Registries 3.14. Dependency Resolution 3.15. SemVer Compatibility 3.16. Unstable Features 4. Cargo Commands 4.1. General Commands 4.1.1. cargo 4.1.2. cargo help 4.1.3. cargo version 4.2. Build Commands 4.2.1. cargo bench 4.2.2. cargo build 4.2.3. cargo check 4.2.4. cargo clean 4.2.5. cargo doc 4.2.6. cargo fetch 4.2.7. cargo fix 4.2.8. cargo run 4.2.9. cargo rustc 4.2.10. cargo rustdoc 4.2.11. cargo test 4.3. Manifest Commands 4.3.1. cargo generate-lockfile 4.3.2. cargo locate-project 4.3.3. cargo metadata 4.3.4. cargo pkgid 4.3.5. cargo tree 4.3.6. cargo update 4.3.7. cargo vendor 4.3.8. cargo verify-project 4.4. Package Commands 4.4.1. cargo init 4.4.2. cargo install 4.4.3. cargo new 4.4.4. cargo search 4.4.5. cargo uninstall 4.5. Publishing Commands 4.5.1. cargo login 4.5.2. cargo owner 4.5.3. 
cargo package 4.5.4. cargo publish 4.5.5. cargo yank 5. FAQ 6. Appendix: Glossary 7. Appendix: Git Authentication Light (default) Rust Coal Navy Ayu The Cargo Book The Cargo Book Cargo is the Rust package manager. Cargo downloads your Rust package's dependencies, compiles your packages, makes distributable packages, and uploads them to crates.io, the Rust community’s package registry. You can contribute to this book on GitHub. Sections Getting Started To get started with Cargo, install Cargo (and Rust) and set up your first crate. Cargo Guide The guide will give you all you need to know about how to use Cargo to develop Rust packages. Cargo Reference The reference covers the details of various areas of Cargo. Cargo Commands The commands will let you interact with Cargo using its command-line interface. Frequently Asked Questions Appendices: Glossary Git Authentication Other Documentation: Changelog — Detailed notes about changes in Cargo in each release. Rust documentation website — Links to official Rust documentation and tools. docs-google-com-1947 ---- Register your Interest: Open Knowledge Justice Programme Community Meetups Register your Interest: Open Knowledge Justice Programme Community Meetups The Open Knowledge Justice Programme is kicking off a series of free, monthly community meetups to talk about Public Impact Algorithms. Do you want to learn more about PIAs, what they are, how to spot them, and how they may affect your clients? Join us: to listen to each other and our guest speakers; perhaps to share and learn about this fast-changing issue. When? Lunch time every second Thursday of the month How? Register your interest using the form below More info: www.thejusticeprogramme.org/community * Required Name * Your answer Email * Your answer Affiliation Organisation you're working with. If not affiliated to any organisation, please write 'No affiliation' Your answer Are you a... * Barrister Solicitor Student Academic Civil Society Organisation Other: Country * Your answer Why are you interested in joining the community meetups? * Your answer Sign up to be added to the Open Knowledge Justice Programme Mailing list. You will receive an occasional newsletter with curated news on Public Impact Algorithms and announcements of upcoming trainings. Please add me to the mailing list Anything else you would like to share? Your answer I agree with Open Knowledge Foundation retaining the provided personal information in order to communicate with me as part of the Open Knowledge Justice Programme * We only retain minimal information for the purpose of facilitating the community meetups and share updates on the programme. Agreeing to this use of your data allows us to send you a calendar invite and register your name and email in the list of participants. Yes Required Submit Never submit passwords through Google Forms. This form was created inside of Open Knowledge Foundation.  Forms     docs-google-com-4954 ---- ePADD Discovery Module Collection Contributor Guide - Google Docs JavaScript isn't enabled in your browser, so this file can't be opened. Enable and reload. ePADD Discovery Module Collection Contributor Guide        Share Sign in The version of the browser you are using is no longer supported. 
Please upgrade to a supported browser.Dismiss File Edit View Tools Help Accessibility Debug See new changes docs-google-com-7294 ---- #DLFteach Pedagogy Toolkit 3 CFP Request edit access #DLFteach Pedagogy Toolkit 3 CFP See CFP: https://dlfteach.pubpub.org/dlfteach-toolkit-3-cfp -Proposals are due by September 1st, 2021 and should be limited to 250 words. -When writing your proposals, please consider the toolkit template: https://rb.gy/huhgc9 -Proposals should include: * A description of your lesson * Learning outcomes * A statement on the literacies involved in your lesson * Note any collaborators (collaboration with other instructional partners is encouraged!) * Required Participation * I am submitting a proposal. I would like to be a peer reviewer. I am submitting a proposal and would like to be a peer reviewer. Contributor Name (main contact) * Your answer Email Address * Your answer Institution * Your answer Additional Contributors and Email Addresses Format as follows and use commas between contributors: First Name Last Name, Email Address Your answer Proposal (250 words max) Your answer Submit Never submit passwords through Google Forms. This form was created inside of BC. Report Abuse  Forms     docs-google-com-7392 ---- FT Alphaville’s Electric Vehicle Bubble Watch v15 - Google Sheets JavaScript isn't enabled in your browser, so this file can't be opened. Enable and reload. FT Alphaville’s Electric Vehicle Bubble Watch v15        Share Sign in The version of the browser you are using is no longer supported. Please upgrade to a supported browser.Dismiss File Edit View Insert Format Data Tools Form Add-ons Help Accessibility Unsaved changes to Drive See new changes                     Accessibility     View only               A B C D E F G H I J K L M N O P Q R S T U V W X Y Z AA AB AC AD AE AF AG AH AI 1 FT ALPHAVILLE'S ELECTRIC VEHICLE BUBBLE WATCH 2 ONCE YOU POP, YOU CAN'T STOP 3 4 Latest update 13.08.21 - Added: EVGO. Earnings from: RIDE, ARVL, PTRA, BLNK, SOLO, NIO, BEEM, XL, HYZN, GPV 5 6 Company name Ticker SPAC? 
Price 52 week high % from high Annual return 90-day return 30-day return Daily change Shares (m) Market cap ($m) Net debt ($m) Enterprise value ($m) TTM Revenues ($m) TTM EBITDA ($m) EV/ Sales Rev growth FY22 Revenues ($m) EV/ '22 Sales Implied Sales CAGR Gross Margin TTM Capex ($m) TTM R&D ($m) Peak Market Cap ($m) Notes 7 Manufacturers 8 Tesla TSLA No 717.2 900.4 -20.3% 117.2% 24.3% 10.2% -0.7% 990.2 710,142 -3,613 706,529 41,862 5,751 16.9 63% 67,370 10.5 48% 22% 6,560 2,130 891,576 9 Nio NIO No 41.0 67.0 -38.8% 213.2% 21.4% -6.1% -3.4% 1,638.5 67,228 -4,000 63,229 4,259 -293 14.8 182% 9,286 6.8 95% 18% 174 455 109,764 10 Nikola NKLA Yes 9.5 54.6 -82.5% -79.3% -26.5% -33.2% -4.2% 397.0 3,783 -618 3,165 0 -502 85,542 n/a 170 18.6 10208% n/a 80 242 21,658 11 Xpeng XPEV No 40.2 74.5 -46.1% n/a 53.7% 4.3% -2.1% 780.9 31,353 -4,843 26,510 1,299 -637 20.4 296% 4,188 6.3 121% 8.0% 150 301 58,169 12 Arrival ARVL Yes 10.8 37.2 -71.1% n/a -41.7% -20.4% -9.4% 620.4 6,670 -472 6,198 0 -129 n/a n/a 983 6.3 n/a n/a 36 22 23,068 13 Arcimoto FUV No 14.1 36.8 -61.7% 96.2% 76.6% 1.1% -20.6% 35.8 505 -43 462 3 -21 184.8 84% 41 11.3 270% -232% 3 5 1,317 14 Li Auto LI No 28.7 47.7 -39.9% 96.3% 52.0% -7.5% -4.8% 904.6 25,927 -4,051 21,876 1,912 -73 11.4 972% 5,103 4.3 91% 17% 140 217 43,151 15 Canoo GOEV Yes 7.0 24.9 -71.8% -34.9% -7.0% -20.4% -8.1% 237.5 1,667 -620 1,047 3 -264 418.8 n/a 155 6.8 672% 74% 19 163 5,913 16 Fisker FSR Yes 14.4 32.0 -54.9% 18.0% 28.3% -7.3% -5.0% 296.0 4,263 -942 3,321 0 -126 67,770 n/a 308 10.8 988% 37% 66 93 9,461 17 Lordstown RIDE Yes 5.4 31.8 -83.1% -58.6% -36.1% -39.6% -6.5% 176.6 948 -366 582 0 0 n/a n/a 1,045 0.6 n/a n/a 90 157 5,616 18 Workhorse Group WKHS No 9.4 43.0 -78.1% -38.4% 14.1% -19.4% -5.4% 123.9 1,166 -137 1,029 3 -58 354.9 1792% 112 9.2 648% n/a 8 12 5,325 19 Electrameccanica SOLO No 3.5 13.6 -74.4% 27.5% 5.1% -2.0% -6.2% 113.0 393 -248 145 1 -40 145.4 93% 29 5.0 694% -7% 4 12 1,537 20 Lion Electric LEV Yes 14.1 35.3 -60.1% n/a -15.3% -7.1% -2.2% 188.6 2,650 123 2,773 27 -78 102.0 n/a 486 5.7 309% n/a 2 15 6,648 21 Electric Last Mile ELMS Yes 8.7 15.3 -43.4% n/a -12.5% -16.9% 1.3% 142.4 1,233 -229 1,004 0 0 n/a n/a 613 1.6 n/a n/a n/a n/a 2,179 22 Lightning eMotors ZEV Yes 9.0 17.4 -48.1% n/a 24.3% 32.5% -9.9% 73.2 660 80 740 13 -16 56.9 n/a 352 2.1 525% -20% 2 2 1,271 23 Kandi Technologies KNDI No 5.0 17.5 -71.1% -45.4% -1.2% -4.0% -2.3% 75.9 383 -186 197 97 -26 2.0 -19% n/a n/a n/a 18% 23 31 1,324 24 Greenpower Motor GPV.TO No 19.0 43.6 -56.4% 87.5% -2.4% -14.9% -2.6% 21.5 327 -8 319 12 -7 26.6 -8% 50 6.3 72% 31% 0.3 1 939 25 Proterra PTRA Yes 10.6 31.1 -66.0% n/a -29.2% -12.0% -4.8% 207.6 2,190 -650 1,540 214 -93 7.2 n/a 408 3.8 45% 2% 18 39 6,448 26 BYD Co BYDDY No 70.1 72.9 -3.9% 272.2% 77.4% 27.4% -0.6% 2,861.0 200,413 5,574 205,987 32,948 3,456 6.3 52% 42,983 4.8 14% 18% 1,860 1,224 208,596 27 Niu NIU No 21.3 53.4 -60.2% 2.2% -26.3% -26.1% -5.4% 76.3 1,622 -145 1,477 383 35 3.9 18% 972 1.5 62% 23% 17 16 4,071 28 Faraday Future FFIE Yes 11.0 20.8 -47.2% n/a -0.8% -11.7% -3.6% 337.0 3,690 -748 2,942 0 0 n/a 0 504 5.8 n/a n/a n/a n/a 6,993 29 Hyzon Corp HYZN Yes 7.5 20.0 -62.4% n/a -24.4% -25.0% 7.3% 247.0 1,852 19.5 1,872 0 -28 n/a 0 185 10.1 n/a n/a n/a 6 4,927 30 Lucid Motors LCID Yes 23.5 64.9 -63.8% n/a 28.2% 2.3% -1.6% 1,599.7 37,561 -4,500 33,061 0 0 n/a 0 2,219 14.9 n/a n/a n/a n/a 103,757 31 32 Charging 33 Blink Charging BLNK No 32.5 64.5 -49.6% 193.9% 5.7% 3.7% -4.7% 42.2 1,370 -193 1,177 10 -29 117.7 129% 29 40.2 71% 25% 7 n/a 2,719 34 EVBox 
Group TPGY Yes 10.4 34.3 -69.6% n/a -25.7% -15.4% -2.3% 139.0 1,448 -425 1,023 83 n/a 12.4 17% 265 3.9 79% 24% n/a n/a 4,765 35 Nuvve NVVE Yes 11.1 22.7 -51.1% n/a n/a 2.0% -14.7% 20.2 224 -70 154 6 n/a 25.7 57% 93.4 1.7 312% 16% n/a n/a 459 36 ChargePoint CHPT Yes 24.0 49.5 -51.5% n/a 3.9% 3.1% -4.7% 277.8 6,664 603 7,267 147 -111 49.4 1% 339 21.4 52% 23% -12 75 13,744 37 Beam Global BEEM No 31.3 75.9 -58.8% 141.5% 35.5% 6.2% 0.0% 8.9 278 -23 255 7 -6 37.0 37% 22 11.6 91% -17% 0.3 0.3 675 38 Fastned FAST.AS No 62.0 111.0 -44.1% 453.6% 17.0% 8.4% 0.0% 17.0 1,244 -68 1,176 10 -12 121.6 63% 38 31.0 152% 81% 12 n/a 1,888 39 Compleo Charging C0M.F No 105.5 116.0 -9.1% n/a 44.5% 18.8% -0.9% 3.9 485 -27 458 41 -8 11.1 107% 157 2.9 125% 24% 1 6 452 40 Volta SNPR Yes 10.0 18.3 -45.6% n/a n/a 0.7% -0.1% 130.0 1,296 -610 686 0 0 n/a n/a 141 4.9 n/a n/a n/a n/a 2,383 41 Alfen ALFEN.AS No 90.0 95.5 -5.8% 125.0% 46.6% 17.3% -1.6% 21.7 2,303 -38 2,265 223 23 10.2 32% 409 5.5 263% 37% 5.4 n/a 2,071 42 EVgo EVGO Yes 10.3 24.3 -57.7% n/a -6.2% -14.7% -6.5% 68.7 708 59 766 17 -44 45.9 20% 59 13.0 38% -75% 35 n/a 1,673 43 44 Batteries/ Cells 45 Plug Power PLUG No 25.0 75.5 -66.9% 119.2% 0.2% -6.7% -5.4% 584.0 14,594 -597 13,997 -100 -535 -140.0 n/a 721 19.4 56% n/a 48 51 44,086 46 QuantumScape QS Yes 21.7 132.7 -83.6% n/a -21.5% -8.2% -3.5% 414.6 9,014 -1,503 7,511 0 0 n/a n/a n/a n/a n/a n/a 58 105 55,032 47 Romeo Power RMO Yes 6.6 38.9 -83.1% n/a -10.6% -11.9% -4.8% 130.5 858 -279 579 8 -52 77.2 n/a 208 2.8 335% -155% 2 10 5,078 48 CBAK Energy Technology CBAT No 3.2 11.4 -71.9% 285.5% -13.7% -18.8% -7.0% 79.2 253 3 257 37 -8.5 6.9 69% n/a n/a n/a 7% 2 2 902 49 FuelCell Energy FCEL No 6.1 29.4 -79.1% 124.9% -22.8% -11.5% -8.2% 322.4 1,980 -18 1,962 69.5 -26 28.2 17% 119 16.5 31% -15% 29 6 9,492 50 Ballard Power Systems BLDP No 15.1 42.3 -64.2% 5.2% 5.8% -0.6% -4.1% 297.5 4,498 -1,221 3,277 95 -39 34.5 -14% 148 22.2 25% 19% 10 31 12,578 51 Flux Power FLUX No 9.7 22.5 -56.8% n/a -3.3% 11.1% -0.5% 13.0 126 1 127 24 -12 5.3 107% 60 2.1 58% 21% 1 6 293 52 FREYR FREY Yes 9.4 15.3 -38.8% n/a n/a n/a -6.1% 137.7 1,287 849 2,136 0 0 n/a n/a 11 194.2 n/a n/a n/a n/a 2,104 53 Microvast MVST Yes 11.5 25.2 -54.4% n/a 8.6% -13.4% 5.3% 300.5 3,450 -601 2,849 101 -12 28.2 n/a 460 6.2 113% n/a n/a n/a 7,573 54 55 Other 56 Hyliion Holdings HYLN Yes 8.7 58.7 -85.1% n/a -2.6% -11.1% -4.0% 172.8 1,507 -448 1,059 0 -55 n/a n/a 57 18.6 n/a n/a 1 30 10,136 57 XL Fleet XL Yes 6.1 35.0 -82.5% n/a -1.8% -10.7% -13.8% 139.1 851 -379 472 22 -30 21.5 258% 51 9.3 126% 14% 1 7 4,869 58 59 Aggregate 1,161,067 1,135,459 83,835 13.5 140,950 8.1 9,455 5,472 1,706,681 60 Peak market capitalisation 1,706,681 61 Losses 545,615 62 % -31.97% 63 Traditional OEMs 64 General Motors GM 53.7 64.3 -16.6% 92.6% -4.3% -5.8% -1.78% 1,451.7 77,882 33,392 111,274 139,639 20,226 0.8 21% 153,941 0.7 13% 15% 27,033 6,200 R&D is only given in annual reports so figure is for 2020 FY 65 Ford F 13.6 16.5 -17.4% 93.0% 11.9% -3.0% -2.23% 3,994.8 54,289 31,103 85,392 127,144 10,573 0.7 5% 160,713 0.5 17% 10% 5,668 7,100 66 Stellantis STLA 18.6 18.7 -0.7% n/a 18.6% 16.2% 0.61% 3,131.5 68,746 34,175 102,921 134,160 16,553 0.8 n/a 199,026 0.5 22% 19% 2,885 6,200 67 Volkswagen VOW3.F 207.4 250.0 -17.1% 49.9% -2.9% -1.7% -0.55% 501.3 155,535 20,430 175,966 256,422 36,142 0.7 -7% 266,463 0.7 2% 17% 13,298 16,379 68 BMW BMW.F 82.9 96.3 -13.9% 43.1% -2.1% -5.2% -0.41% 659.7 63,608 5,490 69,098 131,087 20,116 0.5 12% 139,305 0.5 3% 18% 6,969 6,711 R&D/ Capex is only given in 
annual reports so figure is for 2020 FY 69 Toyota TM 181.3 186.0 -2.5% 35.1% 14.3% 1.3% 0.22% 2,785.8 294,272 -16,588 277,684 278,773 41,593 1.0 14% 299,328 0.9 4% 19% 34,850 10,132 R&D is only given in annual reports so figure is for 2020 FY 70 Daimler DAI.F 75.8 80.4 -5.6% 79.9% 1.9% 3.9% 0.04% 1,069.8 112,221 22,441 134,662 202,191 26,241 0.7 9% 213,687 0.6 3% 21% 6,297 7,630 71 Honda HMC 32.5 33.4 -2.6% 28.0% 8.8% 1.5% -0.64% 1,726.8 58,739 -1,777 56,961 133,509 20,336 0.4 12% 149,031 0.4 6% 21% 2,871 6,873 72 Nissan NSANY 11.2 12.7 -12.5% 44.4% 13.2% 8.7% -2.11% 3,913.1 23,379 -1,579 21,800 79,360 3,172 0.3 0% 86,474 0.3 4% 15% 11,631 7,338 R&D is only given in annual reports so figure is for 2020 FY 73 Renault RNO.PA 33.7 41.4 -18.7% 37.8% -0.1% 7.4% -0.61% 271.8 11,713 764 12,477 57,102 4,486 0.2 5% 62,199 0.2 4% 18% 2,511 2,956 74 75 Aggregate 920,384 ** 1,048,236 1,539,388 0.7 1,730,167 0.6 114,014 77,519 76 77 Sources: Google Finance, CapIQ, Refinitiv, Company documents 78 For SPACs that have yet to merge, the figures presented are calculated from the Pro Forma figures presented in each SPAC"s investor presentation deck and will change once the mergers are completed. 79 For the OEMs, net debt has been calculated as net debt + financing debt - financing receivables. 80 **For Japenese OEMs -- Google keeps swtiching the market cap between JPY and USD values -- so if they look weird that's whats happened. 81 82 FX Rates 83 CAD/USD 0.79922 84 EUR/USD 1.179639 85 CNY/USD 0.15438293 86 JPY/USD 0.009125337 87 HKD/USD 0.12848598 88 89 90 91 92 93 94 95 96 97 98 99 100 Quotes are not sourced from all markets and may be delayed up to 20 minutes. Information is provided 'as is' and solely for informational purposes, not for trading purposes or advice.Disclaimer       Sheet1     A browser error has occurred. Please press Ctrl-F5 to refresh the page and try again. A browser error has occurred. Please hold the Shift key and click the Refresh button to try again. docs-google-com-7773 ---- Chia Consensus - Google Docs JavaScript isn't enabled in your browser, so this file can't be opened. Enable and reload. Chia Consensus        Share Sign in The version of the browser you are using is no longer supported. Please upgrade to a supported browser.Dismiss File Edit View Tools Help Accessibility Debug See new changes documentation-solarwinds-com-1118 ---- Ruby on Rails SolarWinds uses cookies on its websites to make your online experience easier and better. By using our website, you consent to our use of cookies. For more information on cookies, see our Cookie Policy. Continue Visit SolarWinds.com Documentation Contact Us Customer Portal Toggle navigation Academy SOLARWINDS ACADEMY CLASSES GUIDED CURRICULUM ELEARNING CERTIFICATION SOLARWINDS ACADEMY The SolarWinds Academy offers education resources to learn more about your product. The curriculum provides a comprehensive understanding of our portfolio of products through virtual classrooms, eLearning videos, and professional certification. See What's Offered AVAILABLE RESOURCES Virtual Classrooms Calendar eLearning Video Index SolarWinds Certified Professional Program VIRTUAL CLASSROOMS Attend virtual classes on your product and a wide array of topics with live instructor sessions or watch on-demand videos to help you get the most out of your purchase. 
Documentation for Papertrail: Ruby on Rails
To send Ruby on Rails request logs, either:
- use Papertrail's tiny remote_syslog2 daemon to read an existing log file (like production.log), or
- change Rails' environment config to use the remote_syslog_logger gem.
We recommend remote_syslog2 because it works for other text files (like nginx and MySQL), has no impact on the Rails app, and is easy to set up. Also see Controlling Verbosity.
Send log file with remote_syslog2
Install remote_syslog2: Download the current release. To extract it and copy the binary into a system path, run:
  $ tar xzf ./remote_syslog*.tar.gz
  $ cd remote_syslog
  $ sudo cp ./remote_syslog /usr/local/bin
RPM and Debian packages are also available.
Configure: Paths to log file(s) can be specified on the command-line, or save log_files.yml.example as /etc/log_files.yml. Edit it to define: the path to your Rails log file (such as production.log) and any other log file(s) that remote_syslog2 should watch, and the destination host and port provided under log destinations. If no destination port was provided, set host to logs.papertrailapp.com and remove the port config line to use the default port (514). The remote_syslog2 README has complete documentation and more examples.
Start: Start the daemon:
  $ sudo remote_syslog
Logs should appear in Papertrail within a few seconds of being written to the on-disk log file. Problem? See Troubleshooting. remote_syslog requires read permission on the log files it is monitoring.
Auto-start: remote_syslog2 can be automated to start at boot using init scripts (examples) or your preferred daemon invocation method, such as monit or god. See remote_syslog --help or the full README on GitHub.
Troubleshooting: See remote_syslog2 troubleshooting.
Send events with the remote_syslog_logger gem
Install Remote Syslog Logger: The easiest way to install remote_syslog_logger is with Bundler. Add remote_syslog_logger to your Gemfile. If you are not using a Gemfile, run:
  $ gem install remote_syslog_logger
Configure Rails environment: Change the environment configuration file to log via remote_syslog_logger. This is almost always in config/environment.rb (to affect all environments) or config/environments/<environment>.rb, such as config/environments/production.rb (to affect only a specific environment). Add this line:
  config.logger = RemoteSyslogLogger.new('logsN.papertrailapp.com', XXXXX)
You can also specify a program name other than the default rails:
  config.logger = RemoteSyslogLogger.new('logsN.papertrailapp.com', XXXXX, :program => "rails-#{RAILS_ENV}")
where logsN and XXXXX are the name and port number shown under log destinations. Alternatively, to point the logs to your local system, use localhost instead of logsN.papertrailapp.com, 514 for the port, and ensure that the system's syslog daemon is bound to 127.0.0.1. A basic rsyslog config would consist of the following lines in /etc/rsyslog.conf:
  $ModLoad imudp
  $UDPServerRun 514
Verify configuration: To send a test message, start script/console in an environment which has the syslog config above (for example, RAILS_ENV=production script/console). Run:
  RAILS_DEFAULT_LOGGER.error "Salutations!"
The message should appear on the system's message history within 1 minute.
Verbosity: For more information on improving the signal:noise ratio, see the dedicated help article here.
Lograge: We recommend using lograge in lieu of Rails' standard logging. Add lograge to your Gemfile and smile.
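The lograge recommendation above stops at adding the gem. As a minimal sketch of what enabling it typically looks like (this block is not part of the original Papertrail article and assumes a conventional Rails app layout), the gem is declared in the Gemfile and switched on in the environment config:
  # Gemfile (assumed project setup, not from the original article)
  gem 'lograge'

  # config/environments/production.rb: minimal lograge configuration sketch
  Rails.application.configure do
    # Collapse Rails' default multi-line request logging into one summary line per request.
    config.lograge.enabled = true
    # Optional: emit that summary as JSON, which pairs well with Papertrail's JSON search syntax.
    config.lograge.formatter = Lograge::Formatters::Json.new
  end
Either logging route described above (remote_syslog2 tailing production.log, or remote_syslog_logger writing to syslog) will then ship these one-line summaries to Papertrail.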
Log user ID, customer ID, and more: Use lograge to include other attributes in log messages, like a user ID or request ID. The README has more. Here's a simple example which captures 3 attributes:
  class ApplicationController < ActionController::Base
    before_filter :append_info_to_payload
    def append_info_to_payload(payload)
      super
      payload[:user_id] = current_user.try(:id)
      payload[:host] = request.host
      payload[:source_ip] = request.remote_ip
    end
  end
The 3 attributes are then logged in production.rb with this block:
  config.lograge.custom_options = lambda do |event|
    event.payload
  end
The payload hash populated during the request above is automatically available as event.payload. payload automatically contains the params hash as params. Here's another production.rb example which only logs the request params:
  config.lograge.custom_options = lambda do |event|
    params = event.payload[:params].reject do |k|
      ['controller', 'action'].include? k
    end
    { "params" => params }
  end
Troubleshooting
Colors and/or ANSI character codes appear in my log messages: By default, Rails generates colorized log messages for non-production environments and monochromatic logs in production. Papertrail renders any ANSI color codes it receives (see More colorful logging with ANSI color codes), so you can decide whether to enable this for any environment. To enable or disable ANSI logging, change this option in your environment configuration file (such as config/environment.rb or config/environments/staging.rb). The example below disables colorized logging.
Rails >= 3.x:
  config.colorize_logging = false
Rails 2.x:
  config.active_record.colorize_logging = false
See: http://guides.rubyonrails.org/configuring.html#rails-general-configuration
documentation-solarwinds-com-8045 ---- JSON search syntax
Documentation for Papertrail: JSON search syntax
In addition to using the Google-esque search syntax to find things in your logs, Papertrail can parse a JSON object that appears at the end of a log line. Each line can contain arbitrary string data before the JSON. For example:
  2019-12-02 03:04:05 DEBUG {"a":123,"b":456}
This is a beta feature and the final syntax might change. If you have any questions or suggestions, please contact us.
JSON Search Syntax
Root level search:
  json.orgId:1193
Example matches: exact match { "orgId": 1193 }; substring match { "orgId": 11933962 }
Nested search:
  json.user.name:pete
Example matches: exact match { "user": {"name": "Pete"} }; substring match { "user": {"name": "Peter" } }
Exact match:
  json.orgId:"11933962"
Example matches: exact match { "orgId": 11933962 }
Negation:
  json.cursor.tail:false AND -json.orgId:15884562
Example matches: different value for orgId { "orgId": 11933962, "cursor": {"tail": false} }; orgId not present { "cursor": {"tail": false} }
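Connecting this to the Rails logging page earlier in this collection: Papertrail only needs the JSON object to sit at the end of the line, so an application can append it to an otherwise ordinary log message. A small illustrative Ruby sketch (not from the documentation; the orgId and cursor fields simply reuse the example values above):
  require 'json'
  require 'logger'

  logger = Logger.new($stdout)  # in the Rails setup above this would be config.logger

  # Any prefix text is allowed; Papertrail parses the JSON object at the end of the line.
  event = { orgId: 11933962, cursor: { tail: false } }
  logger.debug("request finished #{event.to_json}")
  # Emits something like: D, [...] DEBUG -- : request finished {"orgId":11933962,"cursor":{"tail":false}}
  # A search such as json.orgId:11933962 AND json.cursor.tail:false would match this line.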
doi-org-7136 ---- An alternative approach to nucleic acid memory | Nature Communications
Article | Open Access | Published: 22 April 2021
An alternative approach to nucleic acid memory
George D. Dickinson, Golam Md Mortuza, William Clay, Luca Piantanida, Christopher M. Green, Chad Watson, Eric J. Hayden, Tim Andersen, Wan Kuang, Elton Graugnard, Reza Zadegan & William L. Hughes
Nature Communications, volume 12, Article number: 2371 (2021)
Subjects: DNA computing and cryptography; DNA nanostructures; Information storage; Super-resolution microscopy
Abstract
DNA is a compelling alternative to non-volatile information storage technologies due to its information density, stability, and energy efficiency. Previous studies have used artificially synthesized DNA to store data and automated next-generation sequencing to read it back. Here, we report digital Nucleic Acid Memory (dNAM) for applications that require a limited amount of data to have high information density, redundancy, and copy number. In dNAM, data is encoded by selecting combinations of single-stranded DNA with (1) or without (0) docking-site domains. When self-assembled with scaffold DNA, staple strands form DNA origami breadboards. Information encoded into the breadboards is read by monitoring the binding of fluorescent imager probes using DNA-PAINT super-resolution microscopy.
To enhance data retention, a multi-layer error correction scheme that combines fountain and bi-level parity codes is used. As a prototype, fifteen origami encoded with ‘Data is in our DNA!\n’ are analyzed. Each origami encodes unique data-droplet, index, orientation, and error-correction information. The error-correction algorithms fully recover the message when individual docking sites, or entire origami, are missing. Unlike other approaches to DNA-based data storage, reading dNAM does not require sequencing. As such, it offers an additional path to explore the advantages and disadvantages of DNA as an emerging memory material. Download PDF Introduction As outlined by the Semiconductor Research Corporation, memory materials are approaching their physical and economic limits1,2. Motivated by the rapid growth of the global datasphere3, and its environmental impacts, new non-volatile memory materials are needed. As a sustainable alternative, DNA is a viable option because of its information density, significant retention time, and low energy of operation4. While synthesis and sequencing cost curves drive innovations in the field, divergent approaches to nucleic acid memory (NAM) have been constrained because of the ease of using sequencing to recover stored digital information5,6,7,8,9,10,11,12,13. Here, we report digital Nucleic Acid Memory (dNAM) as an alternative to sequencer-based DNA memory. Inspired by progress in DNA nanotechnology14, dNAM uses advancements in super-resolution microscopy (SRM)15 to access digital data stored in short oligonucleotide strands that are held together for imaging using DNA origami. In dNAM, non-volatile information is digitally encoded into specific combinations of single-stranded DNA, commonly known as staple strands, that can form DNA origami nanostructures when combined with a scaffold strand. When formed into origami, the staple strands are arranged at addressable locations (Fig. 1) that define an indexed matrix of digital information. This site-specific localization of digital information is enabled by designing staple strands with nucleotides that extend from the origami. Extended staple strands have two domains: the first domain forms a sequence-specific double helix with the scaffold and determines the address of the data within the origami; the second domain extends above the origami and, if present, provides a docking site for fluorescently labeled single-stranded DNA imager strands. Binary states are defined by the presence (1) or absence (0) of the data domain, which is read with a super-resolution microscopy technique called DNA-Points Accumulation for Imaging in Nanoscale Topography (DNA-PAINT)16. Unique patterns of binary data are encoded by selecting which staple strands have, or do not have, data domains. As an integrated memory platform, data is entered into dNAM when the staple strands encoding 1 or 0 are selected for each addressable site. The staple strands are then stored directly, or self-assembled into DNA origami and stored. Editing data is achieved by replacing specific strands or the entire content of a stored structure. To read the data, the origami is optically imaged below the diffraction limit of light using DNA-PAINT (Fig. S1). Fig. 1: Binary dNAM overview. The test message (a) for optically reading dNAM was ‘Data is in our DNA!’. The message was encoded and then synthesized into 15 dNAM origami. For clarity, only one of the 15 designs is shown in (b). 
The data domain colors correspond to their bit values as follows: droplet (green), parity (blue), checksum (yellow), index (red), and orientation (magenta). Site-specific localization is enabled by extending or not extending the structural staple strands of the origami to create physical representations of 1s and 0s. The presence, absence, and identity of a data strand’s docking sequence defines the state of each data strand and is assessed by monitoring the binding of data imager strands via DNA-PAINT in (c). AFM images of an origami nanostructure are depicted in (d), with both the expected raft honeycomb structure (left) and data strands (right) visible. The scale bar is 25 nm in the AFM images and the color scale ranges from 0–1 nm in height. To ‘read’ the encoded message, 4 μL of the DNA origami mixture, containing 0.33 nM of each origami, was imaged via DNA-PAINT. Two representative origami cropped from the final rendered image are shown in (e), scale bar, 10 nm. All structures identified as origami in the rendered image were converted to a matrix of 1s and 0s corresponding to the pattern of localizations seen at each data domain in (f). The red boxes in (f) indicate errors. The decoding algorithm performed error correction where possible in (g) and successfully retrieved the entire message when sufficient data droplets and indexes were recovered in (a). The blue boxes in (g) indicate corrected errors. Key to dNAM's error-free data recovery are our error-correcting algorithms. Detection of individual DNA molecules using DNA-PAINT is routinely limited by incomplete staple strand incorporation, defective imager strands, fluorophore bleaching, and/or background fluorescence17. Although it is possible to improve the signal-to-noise ratio by averaging multiple images of identical structures17, this approach comes at a significant cost to the read speed and information density. To overcome these challenges, we created dNAM-specific information encoding and decoding algorithms that combine fountain codes with a custom, bi-level, parity-based, and orientation-invariant error detection scheme. Fountain codes enable transmission of data over noisy channels18. They work by dividing a data file into smaller units called droplets and then sending the droplets at random to a receiver. Droplets can be read in any order and still be decoded to recover the original file19, so long as a sufficient number of droplets are sent to ensure that the entire file is received. We encode each droplet onto a single origami and add additional bits of information for error correction to ensure that individual droplets will be recovered, in the presence of high noise, from individual DNA origami. Together, the error-correction and fountain codes increase the probability that the message is fully recovered while reducing the number of origami that must be observed. In this report, we describe a working prototype of dNAM. As a proof of concept, we encoded the message ‘Data is in our DNA!\n’ into origami and recovered the message using DNA-PAINT. We divided the message into 15 digital droplets, each encoded by a separately synthesized origami with addressable staple strands that space out data domains approximately 10 nm apart. A single DNA-PAINT recording recovered the message from 20 fmoles of origami, with approximately 750 origami needing to be read to reach a 100% probability of full data retrieval.
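To make the droplet idea concrete, the sketch below is a minimal illustration of the fountain-code principle described above, not the authors' released implementation (which is linked under Code availability). It converts the test message into 16-bit segments and XORs randomly chosen segments into droplets; the degree selection here is a crude stand-in for the Soliton-based rule described later in the Methods.

```python
# Minimal fountain-code illustration: split the message into 16-bit segments
# and form each droplet as the XOR of a randomly chosen subset of segments.
import random

message = "Data is in our DNA!\n"                              # 20 ASCII characters = 160 bits
bits = "".join(f"{ord(c):08b}" for c in message)                # message as a 160-bit string
segments = [bits[i:i + 16] for i in range(0, len(bits), 16)]    # ten 16-bit segments

def make_droplet(segment_ids):
    """XOR the chosen segments together to form one 16-bit droplet."""
    droplet = 0
    for i in segment_ids:
        droplet ^= int(segments[i], 2)
    return f"{droplet:016b}"

random.seed(0)
# Fifteen droplets, each built from 1-3 randomly chosen segments
# (a simplification of the Soliton-based degree selection used in the paper).
droplets = [make_droplet(random.sample(range(len(segments)), random.randint(1, 3)))
            for _ in range(15)]
print(len(segments), "segments ->", len(droplets), "droplets")
```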
By combining the spatial control of DNA nanotechnology with our error-correction algorithms, we demonstrate dNAM as an alternative approach to prototyping DNA-based storage for applications that require a limited amount of data to have high information density, redundancy, and copy number. Results Recovery of a message encoded into dNAM To test our dNAM concept, we encoded the message ‘Data is in our DNA!\n’ into 15 distinct DNA origami nanostructures (Fig. 1a). Each origami was designed with a unique 6 × 8 data matrix that was generated by our encoding algorithm with data domains positioned ~10 nm apart. For encoding purposes, the message was converted to binary code (ASCII) and then segmented into 15 overlapping data droplets that were each 16 bits. Inspired in part by digital encoding formats like QR-codes, the 48 addressable sites on each origami were used to encode one of the 16-bit data droplets, as well as information used to ensure the recovery of each data droplet. Specifically, each origami was designed to contain a 4-bit binary index (0000–1110), twenty bits for parity checks, four bits for checksums, and four bits allocated as orientation markers (Fig. 1b). To fully recover the encoded message, we then synthesized each origami separately and deposited an approximately equal mixture of all 15 designs (~20 fmoles of total origami) onto a glass coverslip. The data domains were accessible for binding via fluorescently labeled imager probes because they faced the bulk solution and not the coverslip (Fig. 1c). High-resolution atomic force microscopy (AFM) was used in tapping mode to confirm the structural integrity of the origami and the presence of the data domains (Fig. 1d). 40,000 frames from a single field of view were recorded using DNA-PAINT (~4500 origami identified in 2982 µm2). The super-resolution images of the hybridized imager strands were then reconstructed from blinking events identified in the recording to map the positions of the data domains on each origami (Fig. 1e). Using a custom localization processing algorithm, the signals were translated to a 6 × 8 grid and converted back to a 48-bit binary string—which was passed to the decoding algorithm for error correction, droplet recovery, and message reconstruction (Fig. 1f, g). The process enabled successful recovery of the dNAM encoded message from a single super-resolution recording. Quality control of dNAM We evaluated all of the origami structures using AFM to confirm that the 15 different designs were successfully synthesized, with their data domains in the correct location. Automated image processing algorithms were developed to identify, orient, and average multiple images of each origami from the DNA-PAINT recording of the mixture (Fig. 2). Although the edges of the origami were more sensitive to data strand insertion failures (Fig. S2), the results confirmed that all of the data domains, in each of the origami designs, were detectable in each of the three separate experiments. The AFM images further confirmed that the general shapes of all 15 origami designs were as expected with properly positioned data domains (Fig. 1d, Fig. S3). The results indicate that the extended staple strands do not prevent the synthesis of the 15 unique origami designs. Fig. 2: DNA-PAINT imaging of dNAM indicates all sites are recovered in a single read. 
dNAM origami from a DNA-PAINT recording were identified and classified by aligning and template matching them with the 15 design matrixes (Design) in which all potential docking sites are shown. Filled circles indicate sites encoded ‘0’ (dark gray) or ‘1’ (white). Colored boxes indicate the regions of the matrixes used for the droplet (green), parity (blue), checksum (yellow), index (red), and orientation (magenta). For clarity, only the first design image includes the colored matrix sites. Averaged images of 4560 randomly selected origami, grouped by index, are depicted (DNA-PAINT). Scale bar, 10 nm. Full size image Further AFM analysis of dNAM origami As an additional quality control step, we used AFM to examine origami deposited onto a glass coverslip immediately following SRM imaging. We were not able to resolve individual docking sites in these images, most likely due to the increased roughness of glass, as compared to mica. However, it was possible to count the number of origami in a field of view for comparison with SRM. The densities of origami estimated from the images were 2.4 and 1.4 origami/µm2 for AFM and SRM, respectively, suggesting that ~60% of the total origami deposited on glass have their data domains facing away from the coverslip and available for imager strand binding. To further investigate the variance in error rates between origami designs, we resynthesized the most error-prone origami (origami index 2). DNA-PAINT imaging indicated that the new batch showed 9.7 ± 2 false-negative errors per origami, consistent with the original experiment, while the second batch showed 7.1 ± 2 false-negative errors (Fig. 3). This suggests that at least a portion of the variance in error rates is independent of origami design and may be caused by variations in mixing, folding, and purification conditions. Fig. 3: All 15 dNAM data strings were recovered from a single read. (a) plots the numbers of each origami index observed in a single recording, based on template matching. The mean counts are shown as gray bars, with the percentage of the total origami indicated on the secondary axis. In (b), the mean number of total errors (top) for each structure is shown, based on template matching. The same errors are also shown after being grouped into false negatives (middle) and false positives (bottom). (c) depicts the percent of origami passed to the decoding algorithm that had both their indexes and data strings correctly identified. In (d), the percentage of each origami decoded is plotted against the mean number of errors for each structure. (e) shows histograms of the total mean numbers of errors found in origami identified by template matching (open bars) and the decoding algorithm (gray bars). The difference between the two is plotted in blue. Mean values for three experiments are depicted in all graphs, error bars indicate ±SD. Individual data points are plotted as small black circles. Full size image Data encoding/decoding strategy for dNAM Our encoding approach added 24 error-correction bits of data to every origami structure so that data droplets can be determined from individual origami even when data domains are incorrectly resolved, and the entire message recovered if some droplets are missed entirely. To evaluate the performance of the decoding algorithm, we examined the frequency and types of errors in the DNA-PAINT images and the effect of these errors on our decoding outcomes. 
We used a template matching strategy where each of the 15 origami designs was considered a template, and each individual origami in the field of view was compared to these designs to find the best match. We identified the total number of origami that matched or did not match, each design (Fig. 3a, b). We then determined the number of each design identified by the decoding algorithm when recovering the message (Fig. 3c): a process independent of template matching and blind to the droplet data contained in the DNA origami. We observed a clear negative correlation between the number of errors detected in a specific design and the number of corresponding origami that were successfully decoded by the algorithm (Fig. 3d). The results indicate that, even though there was a low relative abundance of several origami in the deposition mixture (particularly origami index 2) and a mean of 7.3 ± 1.2 false errors per origami across the different designs, our error-correction scheme enabled successful message recovery. False positives were much less common in our experiments, with a mean of 1.7 ± 0.5 (Fig. 3b). Furthermore, the mean number of errors overcome by the decoding algorithm (5.5 ± 0.1) was lower than the mean number of errors observed across all the origami (7.7 ± 0.1), demonstrating the challenge of decoding origami when several fluorescent signals are missing (Fig. 3e). Nevertheless, the ability of our data encoding and decoding strategy to recover the message despite errors in individual origami is promising, and the results provide useful guidelines for evaluating and optimizing origami performance for future dNAM designs. Sampling analysis of dNAM Given the observed frequency of missing data points, we then used a random sampling approach to determine the number of origami needed to decode the ‘Data is in our DNA!\n’ message under our experimental conditions. We started with all the decoded binary output strings that were obtained from the single-field-of-view recordings and took random subsamples of 50–3000 binary strings. We passed each random subsample of strings through the decoding algorithm and determined the number of droplets that were recovered (Fig. 4). Based on the algorithmic settings used in the experiment, we found that only ~750 successfully decoded origami were needed to recover the message with near 100% probability. This number is largely driven by the presence of origami in our sample that were prone to high error rates and thus rarely decoded correctly (i.e., origami index 2). Fig. 4: Number of dNAM origami required to recover the message. The mean number of unique dNAM origami correctly decoded for randomly selected subsamples of decoded binary strings are shown. The analysis was broken out by the number of errors corrected for each origami, three examples are plotted (1, 4, and 9). Black filled circles depict the mean results for nine error corrections, which is the ‘maximum allowable number of errors’ parameter used in the decoding algorithm for all other analysis reported here. The horizontal lines indicate the probability of recovering the message with different numbers of unique droplets. With fourteen or more droplets, the message should always be recovered (thick green line, and above indicates 100% chance of recovery) and with nine or fewer droplets the message will never be recovered (thick red line and below indicates 0% chance of recovery). Mean values for three experiments are shown. Error bars indicate ±SD. 
Individual data points are plotted behind as smaller gray symbols. Full size image Simulations of dNAM Simulations were run to determine the size efficiency of the encoding scheme, as well as its ability to recover from errors. As shown in Fig. 5a, the number of origami required to encode a message of length n increases roughly at a linear rate up to n = 5000 bytes of data. Larger message sizes require more bits to be devoted to indexing, decreasing the number of available data bits per origami—creating a practical limit of 64 kB of data for the prototype described in this work. This limit can be increased by increasing the number of bits per origami. To determine the ability of the decoding and error correction algorithm to recover information in the presence of increasing error rates, in silico origami that encoded randomly generated data were subjected to increasing bit error rates. The decoding algorithm robustly recovers the entire message for all tested message sizes when the average number of errors per origami is less than 7.4 (Fig. 5b). At 7.4 errors per origami, the message recovery rate drops to 97.5%, and as expected decreases rapidly with higher error rates (55% recovery at 8.2 errors per origami, and 7.5% at 9 errors per origami). An important feature of our algorithm is that the origami recovery rate can be low (as low as 63% in these experiments) and still recover the entire message 100% of the time. Fig. 5: dNAM origami and message recovery rates in the presence of increasing errors. Simulations were performed to determine the theoretical success rates for correctly decoding individual dNAM origami and recovering encoded messages. In (a), the mean number of dNAM origami needed to successfully recover messages of increasing length with (circles) or without (squares) redundant bits are plotted. In (b), the mean success for recovering both individual origami (triangles) and the entire message (diamonds) are plotted against the mean number of errors per origami (errors were randomly generated for simulated data). Simulation recovery rates are averages of all message sizes tested (160 to 12,800 bits). For comparison, the mean success rate for experimental data is also plotted (open circles). For experimental data, the mean success was estimated by comparing the decode algorithm’s results with that of the template-matching algorithm. All simulations were repeated 40 times. Experimental data were derived from 3 independent DNA-PAINT recordings. Full size image Discussion Our results demonstrate a proof of concept for writing and reading digital information encoded in oligonucleotides. Because of the durability of DNA, dNAM has long-term future potential for archival information storage. Currently, the most widely used material for this purpose is magnetic tape. Recent advancements in tape report a two-dimensional areal information density up to 31 Gbit/cm220, though the current commercially available material typically has lower density8. Although relevant only for reading throughput, not storage, the information density of tape can be compared to the dNAM origami, which contains data domains spaced at 10 nm intervals to achieve an areal density of about 1000 Gbit/cm2. After accounting for using ~2/3 of the bits for indexing and error correction, this results in an areal data density of 330 Gbit/cm2. It is possible to increase dNAM areal density by placing a data domain at every turn in the DNA helix (~3.5 nm spacing), a distance that has been resolved by SRM21. 
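The areal-density figures quoted above follow from simple geometry. The short check below uses only numbers given in the text (10 nm site pitch, roughly one third of sites carrying data, and the hypothetical ~3.5 nm pitch mentioned as an upper bound).

```python
# Back-of-envelope check of the areal densities quoted above.
NM2_PER_CM2 = 1e14                  # 1 cm^2 = 1e14 nm^2

def areal_density_gbit_per_cm2(pitch_nm, data_fraction=1.0):
    """Bits per cm^2 (in Gbit) for a square grid of docking sites."""
    bits_per_cm2 = NM2_PER_CM2 / (pitch_nm ** 2) * data_fraction
    return bits_per_cm2 / 1e9

print(areal_density_gbit_per_cm2(10))           # ~1000 Gbit/cm^2 raw, 10 nm pitch
print(areal_density_gbit_per_cm2(10, 1 / 3))    # ~330 Gbit/cm^2 after ~2/3 overhead
print(areal_density_gbit_per_cm2(3.5))          # ~8000 Gbit/cm^2 at ~3.5 nm pitch
```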
Other avenues to increasing density are also available, such as previously reported multiplexing techniques with multiple fluorophores and orthogonal binding sequences with different binding kinetics22, and incorporation of each of these approaches is expected to impact reading throughput. In terms of durability, typical magnetic tape lasts for 10–30 years, while double-stranded DNA is estimated to be stable for millions of years under optimal environmental conditions7. With our optical microscope setup and origami deposition protocol, we can image the 7500 unique origami designs needed to store 5 kB of data (Fig. 5), albeit in several recordings. We conservatively estimate it would take ~30 recordings to ensure a 100% probability of successful data recovery given our current error rates. To efficiently handle larger datasets, it is necessary to improve the data capacity of individual origami, which will allow a larger range of indexing values and increase the proportion of bits dedicated to the data as compared to indexing, error-correction, and orientation. This could be achieved by engineering larger origami or by increasing data density—either by placing data sites closer together or by using multiplexing techniques to augment bit depth at each site (see SI, Supplemental Calculations). Our results also indicate that advancements in origami-based information storage and reading will require a coordinated effort between improvements in origami synthesis, substrate deposition, DNA-PAINT, and coding algorithms. For example, our subsampling approach (Fig. 4) showed that a decoding algorithm that corrected up to nine errors easily recovered our entire message, while algorithms that corrected only five or fewer errors are much less computationally expensive but rarely recovered our full message. This makes sense, given that most of the origami detected had more than five errors (Fig. 3e). We anticipate that reducing the number of errors by improving origami design and optimizing imager strand performance would allow more efficient algorithms for data recovery, which would, in turn, decrease the number of bits dedicated to error correction and thus increase information density. Our fountain code algorithm is robust to randomly lost packets of information, as long as the receiver receives K + ε packets, where K is the minimum number of packets required to encode the file under perfect conditions (i.e., K is equal to the file size) and ε is the number of additional packets received. The probability of being able to decode the file is then (1−δ), where δ is upper-bounded by 2^{−Kε} (ref. 23). This equation implies that, all things being equal, the larger the file size, the greater the likelihood of successfully recovering the file at the receiver. Normally, the transmitter continues to transmit droplets in a fountain code until the receiver acknowledges successful file recovery. In the case of dNAM, this is not possible since the number of droplets must be fixed ahead of time to equal the number of origami. Reducing the error rates, or improving error correction/detection, would have the added benefit of reducing the number of droplets and hence origami discarded by the fountain code. These improvements would make it easier to determine the minimum number of droplets per DNA origami needed to ensure robust file recovery while increasing information density even further. The lower abundance and higher error rate of origami index 2 (Fig.
3) indicate that some designs have defects that we could not detect by AFM and/or SRM. Careful defect analysis indicates that incorporated but inactive data domains play a greater role in producing errors than unincorporated staple strands24. Future dNAM research should focus on sequence optimization to minimize variation in hybridization rates and the formation of off-target structures25. It should also include the use of larger DNA origami and increased bit depth through multiplexing. Future work on dNAM will also need to address scalability if dNAM is to compete with established memory storage systems. In this report, we describe the storage of a small amount of data in order to illustrate the potential of dNAM. Scaling to much larger data sets requires substantial engineering improvements in both write and read speeds (see Fig. S8 and Supplemental Calculations for further comparisons). For writing, the rate-limiting step is the selection of the oligonucleotide data strands. In our lab, we use an EpMotion 5075 liquid-handling system to pipette oligonucleotides. While this machine could handle thousands of sample transfers per day, it limits the write speed to thousands of bits per day as each data strand encodes 1 bit. As far as we are aware, the fastest liquid-transfer system available is the Echo ® 520 Liquid Handler, which is reported by the manufacturer to process ~750,000 samples per day, allowing ~0.1 MB per day for 1-bit data strands. For dNAM to reach write speeds equivalent to tape (hundreds of MB per second) using laboratory hardware, significant increases in either the number of bits per strand and the rate of transfer of samples or the rate at which DNA oligonucleotides can be synthesized will be necessary. While writing information into DNA at a competitive rate is a sincere challenge that is facing the entire DNA-memory field5, and is likely to undergo rapid innovation as the market for synthesized DNA increases, the approach we have used here, in which a library of premade oligonucleotides are drawn on, is currently the fastest approach for dNAM. Due to the inherently parallel nature of DNA-PAINT imaging, the read speed of dNAM is arguably less of a challenge to scale up to deal with large amounts of data. The rate-limiting factors for DNA-PAINT are the camera integration time needed to collect sufficient photons to resolve an emitter and the number of emitters that can be identified in a single frame of a recording. The latest report on DNA-PAINT by Strauss and Jungmann describes a 100-fold speed-up in data collection for origami very similar to those we imaged in dNAM26. In their experiments, 5 nm resolution of the binding site was demonstrated with 100 ms camera integration times. Another recent innovation, using deep learning to rapidly identify the centroids of overlapping emitter blink events (Deep-STORM27), has been shown to be able to process dense SRM data (~6 emitters/µm2). Taken together we estimate that by using densely-deposited dNAM origami28 with data strands placed 5 nm apart, an EMCCD camera with a 1024 × 1024 imaging array, the Deep-STORM algorithm, and Straus and Jungmann’s 100-fold speed-up methodology, we could currently collect data at a rate of ~700 MB per day (see SI, Supplemental Calculations). 
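The throughput figures quoted in this section are straightforward to sanity-check. The snippet below reproduces the write-rate estimate using only numbers given in the text (one bit per dispensed data strand at ~750,000 transfers per day) and, for scale, the transfer count a tape-like write speed would imply; the 100 MB/s figure is an assumed round number within the "hundreds of MB per second" range mentioned above.

```python
# Sanity check of the write-rate estimate quoted above: with one bit per
# dispensed data strand, ~750,000 liquid transfers per day corresponds to
# roughly 0.1 MB of data per day.
transfers_per_day = 750_000          # manufacturer figure cited in the text
bits_per_transfer = 1                # one data strand encodes one bit
bytes_per_day = transfers_per_day * bits_per_transfer / 8
print(f"{bytes_per_day / 1e6:.2f} MB/day")            # ~0.09 MB/day, i.e. ~0.1 MB/day

# For scale, an assumed tape-like write speed of ~100 MB/s would require
# on the order of 1e14 one-bit transfers per day.
print(f"{100e6 * 8 * 86400:.1e} bit-transfers/day")    # ~6.9e13
```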
Further improvements in reading speed could be achieved by increasing the imaging array area—via larger sensors or multiple cameras and using multicolored probes or three-dimensional information to collect multiple bits worth of data simultaneously from one site. Our hope is that this dNAM prototype will motivate this work and more. DNA is an emerging material for data storage due to its high information density, high durability, low energy of operation, and the declining costs of synthesis1. The traditional approach in the field is to design and synthesize unique oligonucleotides that encode data directly into their sequence. This data is recovered by reading the pool of oligonucleotides using sequencing. In contrast, dNAM takes advantage of another property of DNA—its programmability. By encoding binary data into DNA origami and reading it as spatially and temporally distinct hybridization events, dNAM decouples information recovery from sequencing. Editing the data is trivial through the inclusion or exclusion of sequence extensions from a library of staple strands. Data strands can be stored directly or incorporated into origami and then stored; separating the 3D storage density from the 2D reading density. In addition, dNAM is a massively parallel process because the large optical field of view affords tens of thousands of origami to be imaged simultaneously, and the number of optical read heads is proportional to the concentration of the imager strands in solution. Rather than averaging thousands of DNA-PAINT images together to resolve the digital data17, individual origami were read here using custom encoding, decoding, and error-correction algorithms. Our algorithms combined fountain codes with bi-level parity codes to significantly enhance our data retention—creating a multi-layer error correction scheme that encoded index, orientation, parity, and checksum bits into the origami. As a proof of concept, several bytes of data were recovered in a single DNA-PAINT recording. Even when the DNA origami recovery rate was poor (as low as 63%), the message was recovered 100% of the time. As an alternative platform for testing DNA-memory technology, dNAM offers a pathway to explore the advantages and disadvantages of DNA as a material for information storage and encryption, as previously demonstrated by Zhang et al.29. Because of the scaling challenges of using DNA as a memory material, this is particularly true for applications like barcoding that require a limited amount of data to have high information density, redundancy, and copy number. Methods The materials purchased for this study, and their respective vendors, are outlined in Table 1. All other reagents were obtained from Sigma. Table 1 Materials.Full size table Buffers As previously described17, two buffers were used to prepare and image DNA origami: a deposition buffer and an imaging buffer. The deposition buffer contained 0.5× TBE and 18 mM MgCl2. The imaging buffer contained the deposition buffer with the supplement of 60 nM PCD, 1 mM Trolox, 3 nM imager strands, and 10 mM PCA. PCA was added to the imaging buffer immediately before the start of a DNA-PAINT recording. Encoding algorithm The encoding algorithm used a multi-layer error correction scheme to encode message data bits along with the index, orientation, and error correction bits onto multiple origami (Fig. S4). At the message level, the algorithm used a fountain code to encode the data. Let m be a message string composed of a sequence of n bits. 
The fountain code algorithm first divides m into k equally sized and non-overlapping substrings s1, s2, …, sk, where the concatenation s1s2…sk = m, and then systematically combines one to many segments using the binary XOR operation to form multiple data blocks called droplets. The number of segments d used to form each droplet are typically drawn from a distribution based on the Soliton distribution: $$p\left( 1 \right) = 1/k$$ (1) The Soliton distribution ensures that the algorithm encodes the optimal number of single-segment droplets necessary for the decode step. Once the number of segments d for a droplet is determined, the droplet is formed by XOR’ing d randomly selected, unique segments from m, with each segment being selected with probability 1/k. For our experiments, we divided the message ‘Data is in our DNA!\n’ into 10 segments of 16 bits each. The segments were then combined via an XOR in different combinations using the fountain code algorithm to form the 15 droplets. While the theoretical minimum number of 16-bit droplets required to decode the message is 10, the redundancy provided by the additional droplets ensured that the message would be recoverable in all cases involving the loss of one droplet, and in some cases with the loss of up to five droplets (Fig. 4). After generating the droplets using fountain codes, the encoding algorithm encoded each droplet onto fifteen 6 × 8 matrixes, and sequentially added index and orientation marker bits, computed and added checksum bits, and then added parity bits (Fig. 1b). These matrixes were used to construct 15 origami structures, with a one-to-one mapping between the matrixes and the origami’s data domains. Figure 1b shows the layout of how droplet information was encoded onto each origami, composed of 16 bits of droplet data (green coloring in Fig. 1b), four indexing bits (red), four orientation bits (magenta), four checksum bits (yellow), and twenty parity bits (blue). It is important to note that the layout of the data, orientation, and index bits relative to the corresponding parity and checksum bits is invariant to rotation, which made it possible for the error correction algorithm to perform error detection and recovery before determining the orientation (Fig. S4). This led to more robust data recovery. DNA origami folding Rectangular DNA origami structures (~90 × 70 nm) were designed based on previous work by Rafat et al.30 with 48 potential docking strand sites arranged in a 6 × 8 matrix with 10 nm spacing. Then, using the protocol described by Schnitzbauer et al.17 a mixture of extended and unmodified staple strands (SI Tables S1 and S2) were selected to fold the M13 scaffold into the designed shape, with extended strands located at the ‘1’ positions described in the design matrix (SI Table S4). As described in the introduction, an extended staple strand has a binding site for the M1 imager strand, unmodified strands bind solely to the scaffold DNA to induce folding. Using this method, 15 origami designs were created that matched the 15 matrixes output by the encoding algorithm. We assembled individual origami designs by combining 22 nM M13mp18 with 10× unmodified stands, 50× extended strands, 1× TAE and 18 mM MgCl2 (in nuclease-free water; 100 µL total volume) and folding in a Mastercycler nexus thermal cycler (Eppendorf) using the following heating cycle: [1 min 90 °C, 2 min 80 °C, then from 80 °C to 25 °C over 12 h]. 
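Selecting the strands for a given design can be pictured as a simple lookup: a '1' at a site calls for the extended staple with a docking domain, a '0' for the unmodified staple. The sketch below is purely illustrative; the design pattern and strand names are placeholders standing in for SI Tables S1, S2, and S4, which are not reproduced here.

```python
# Hypothetical illustration of staple selection for one origami design: for
# each of the 48 addressable sites, a '1' in the design matrix selects the
# extended staple (with a docking domain) and a '0' selects the unmodified
# staple. The dictionaries stand in for the SI staple tables.
design_matrix = [[1, 0, 1, 0, 0, 1, 1, 0] for _ in range(6)]             # placeholder 6 x 8 pattern

extended_staples = {site: f"EXT_{site:02d}" for site in range(48)}       # placeholder names
unmodified_staples = {site: f"STD_{site:02d}" for site in range(48)}     # placeholder names

pick_list = []
for row in range(6):
    for col in range(8):
        site = row * 8 + col
        strand = extended_staples[site] if design_matrix[row][col] else unmodified_staples[site]
        pick_list.append(strand)

print(len(pick_list), "staples selected for this design")
```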
We purified the origami by running them on an ice-cooled 0.8% agarose gel containing 0.5× TBE and 8 mM MgCl2, excising the single sharp band, and collecting the exudate of the crushed gel piece. Sharp triangle origami used as fiducial markers were prepared similarly, as previously described31 (see S1 Table S3 for oligonucleotide sequences). All purified origami were stored in the dark at 4 °C until use. Glass coverslip preparation Borosilicate glass coverslips (25 × 75 and 22 × 22 mm, #1 Gold Seal Coverglass) were sonicated in 0.1% (v/v) Liquinox and nano-pure water (1 min in each) to remove contaminants and dried at 40 °C for at least 30 min. Fiducial markers (200 µL of 0.2 pM AuNPs) were deposited onto the coverslips for 10 min at room temperature. The labeled coverslips were rinsed with methanol and nano-pure water and stored at 40 °C prior to use. DNA origami deposition onto coverslips The glow discharge technique previously described by Green24 was used to deposit DNA origami onto glass coverslips using an air-plasma vacuum glow-discharge system. Briefly, coverslips that had been cleaned and labeled with fiducial markers were exposed to glow discharge generated using an electrode coupled 115 V Electro-Technic BD-10A High-Frequency Generator under 2 Torr of vacuum for 75 s. For DNA-PAINT analysis, a sticky-Slide flow cell (~50 µL channel volume) was glued to the coverslip, DNA origami were then deposited by introducing 200 µL of 0.05 nM origami (a mixture of dNAM origami and sharp triangle origami31 added as additional fiducial markers, in deposition buffer) into the flow chamber and incubated for 30 min at room temperature. After deposition, the flow chamber was rinsed with 1 mL of deposition buffer (no DNA origami) and refilled with imaging buffer. When performing AFM measurements on samples previously used for DNA-PAINT, a custom fluid chamber, modified from Jungmann et al.32, was used. A 22 × 22 mm coverslip was glued to a microscope slide using double-sided sticky tape with the addition of a thin layer of gel sealant—to both seal any gaps and weaken the binding of tape to the glass. Once DNA-PAINT imaging had been performed the sealant allowed the coverslip to be easily removed for further AFM analysis. Fluorescence microscopy DNA origami was imaged below the diffraction limit of light via DNA-PAINT17 using an inverted Nikon Eclipse Ti2 microscope from Nikon Instruments in total internal reflectance fluorescence (TIRF) mode. The images were acquired using: an optical feedback focal-drift correction system developed in-house or the Perfect Focus System from Nikon Instruments; an oil-immersion CFI Apochromat ×100 TIRF objective with a 1.49 numerical aperture, plus an extra ×1.5 magnification from Nikon Instruments; and a 405/488/561/647 nm Laser Quad Band Set TIRF filter cube from Chroma. A 561 nm laser source excited fluorescence from the DNA-PAINT imager strands within an evanescent field extending a few hundred nanometers above the surface of the glass coverslip. The emitted fluorescence was imaged onto the full chip with 512 × 512 pixels (1 pixel = 16 μm) using a ProEM EMCCD camera from Princeton Instruments at a 300 ms exposure time (~3 frames/s). During an experimental recording, each of the individual data strands, within a dNAM origami’s matrix, transiently and repeatedly bound an imager strand, which emits a signal, creating a series of blinks. 
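As a point of reference, the effective pixel size at the sample implied by the optics described above (16 μm camera pixels behind a 100× objective with an extra 1.5× magnification) works out to roughly 107 nm; this is a derived figure, not one stated in the paper.

```python
# Effective pixel size at the sample implied by the optics described above.
camera_pixel_um = 16.0
total_magnification = 100 * 1.5
effective_pixel_nm = camera_pixel_um / total_magnification * 1000
print(f"{effective_pixel_nm:.0f} nm per pixel")   # ~107 nm
```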
Images with blinking events were recorded into a stack (typically 40,000 frames per recording) using Nikon NIS-Elements version 5.20.00 (Nikon Instruments) or LightField version 5 (Princeton Instruments) prior to processing and analysis. DNA-PAINT fluorophore localization After recording a DNA-PAINT stack, the center position of signals (localizations) emitted by imager probes, transiently binding to DNA origami docking strands, were identified using the ImageJ ThunderSTORM plugin33. The localizations were rendered and then drift corrected using the Picasso-Render software package, as described by Schnitzbauer et al.17. Data visualization and peak fitting of image data for PSF analysis were performed using OriginPro Version 2019b (OriginLab). Localization data processing A custom algorithm was developed for identifying clusters of localizations, determining the maximum likelihood position of the emitters, and generating binary matrix data. The algorithm selected localization clusters at random from the localization list. To do this, it sampled random points in the list, determined the average position of nearby localizations, and counted the localizations within a radius (R) and the localizations within a band R < r < 2R. The algorithm accepted clusters if the counts in the inner circle were greater than a threshold and the counts in the outer band were less than 15% of the counts in the inner band. This ensured selection of bright clusters that were isolated from other clusters. The algorithm then fits the cluster localizations to a grid of emitters. An idealized grid was created using the average DNA-PAINT image produced by several thousand individual origami structures of the same architecture used in this work. The algorithm performed fitting using a maximum likelihood estimation for the likelihood function: $$L\left( {I,x_c,y_c,\theta ,{\mathrm{{\Delta}}}x_g^2,B} \right) = \mathop {\prod }\limits_i \left( {\mathop {\sum }\limits_k \frac{{I_k}}{a}\exp \left( { - \frac{{\left( {x_i - x_k\left( {x_c,y_c,\theta } \right)} \right)^2 + \left( {y_i - y_k\left( {x_c,y_c,\theta } \right)} \right)^2}}{{{\mathrm{{\Delta}}}x_i^2 + {\mathrm{{\Delta}}}x_g^2}}} \right)} \right) \ast \frac{B}{A} \ast P(N,I,B)$$ (2) Where Ik is the intensity of the kth emitter, (xc, yc) is the center position of the grid, θ is the rotation angle of the grid, Δxg is the global lateral uncertainty caused by an error in drift correction, B is the background, Δxi is the lateral position uncertainty of localization i reported by the ThunderSTORM analysis described above, (xi, yi) is the position of the ith localization, (xk, yk) is the position of the kth emitter, as a function of the center position and rotation of the grid, A is the area of the cluster, and N is the number of localizations found in the cluster. a is a normalization constant given by: $$a = 2\pi \left( {{\mathrm{{\Delta}}}x_i^2 + {\mathrm{{\Delta}}}x_g^2} \right)$$ (3) P(N,I,B) is the probability of finding N localizations given the intensity of each grid point and the background intensity, determined from the Poisson distribution of mean value N. This likelihood function determines the probability of finding localizations at all of the observed sites given a set of point emitters at the grid sites with intensity Ik and background intensity B. The optimization utilized the L-BFGS-B method of the minimize function provided by Scipy34 to minimize −log(L) subject to the constraint that all intensities are positive. 
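The grid fit lends itself to a compact sketch. The code below is a simplified stand-in for the localization fit described above: it assumes Gaussian localization uncertainty and a flat background, and it omits the global drift and Poisson-count terms of Eq. (2). It is not the authors' released code, which is available in the GitHub repository cited under Code availability.

```python
# Simplified sketch of the grid-fitting step: localizations in a cluster are
# fit, by maximum likelihood, to a 6 x 8 grid of candidate emitters whose
# center, rotation, and per-site intensities are optimized with L-BFGS-B.
import numpy as np
from scipy.optimize import minimize

PITCH = 10.0                                                   # nm between docking sites
GRID = np.array([[(c - 3.5) * PITCH, (r - 2.5) * PITCH]
                 for r in range(6) for c in range(8)])          # 48 ideal site positions

def neg_log_likelihood(params, locs, sigma):
    """-log L for localizations 'locs' (N x 2, nm) given grid pose and intensities."""
    xc, yc, theta, log_bg = params[:4]
    intensities = params[4:]                                    # one weight per grid site (>= 0)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    sites = GRID @ rot.T + np.array([xc, yc])                   # rotated/translated site positions
    d2 = ((locs[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
    gauss = np.exp(-d2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    density = gauss @ intensities + np.exp(log_bg)              # emitter mixture + flat background
    return -np.log(density + 1e-12).sum()

def fit_cluster(locs, sigma=5.0):
    x0 = np.concatenate(([locs[:, 0].mean(), locs[:, 1].mean(), 0.0, -5.0],
                         np.full(len(GRID), 1.0 / len(GRID))))
    bounds = [(None, None)] * 4 + [(0.0, None)] * len(GRID)     # intensities constrained >= 0
    res = minimize(neg_log_likelihood, x0, args=(locs, sigma),
                   method="L-BFGS-B", bounds=bounds)
    return res.x[4:]                                            # fitted per-site intensities

# Each site is then called '1' or '0' by thresholding its fitted intensity.
```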
Signals that did not align to the 6 × 8 grid were filtered to minimize fragmented origami and to reduce inadvertent assimilation of the triangular origami fiducial markers into the results. The algorithm then assigned the emitters a binary value (1 or 0) using an empirically derived threshold value. This binary matrix data was decoded using the decoding algorithm described below. In parallel with this blind cluster analysis, the processing algorithm also carried out a template matching step to more reliably identify individual origami and analyze their errors. This additional step used the known origami designs as templates, matching the observed origami to the best fit, based on the total number of errors. This method was more robust to higher error rates than the blind cluster analysis and allowed more origami to be identified for image averaging and error analysis (Fig. 3). It should be noted, however, that the template matching method cannot be considered as a data reading method because it requires a priori knowledge of the data being analyzed. For this reason, none of the analysis of the recovery rates or data density discussed here used data obtained from pattern matching. Decoding algorithm The decoding algorithm (Fig. S5) utilized a multi-layer error correction/encoding scheme to recover the data in the presence of errors. The algorithm first works at the dNAM origami level (Step 1, below), using the parity and checksum bits, to attempt to identify and correct errors and recover the correct matrix. After recovery, the algorithm uses binary operations to recover the original data segments from the droplets (Step 2, below). Decoding algorithm: Step 1–error correction Given raw binary matrix data M for a single dNAM origami, the output from the localization data processing step, the matrix decoding algorithm determined which, if any, bits were associated with checksum and parity errors by calculating the bi-level matrix parity and checksum values, as described in Fig. S4. Any discrepancies between the calculated parity and checksum values and the values recovered from the origami were noted, and a weight for each of the bits associated with the errant parity/checksum calculation was deduced. If no parity/checksum errors were detected for a particular matrix, then the data was assumed to be accurate, and the algorithm proceeded to extract the message data. To determine the site(s) of likely errors, the decoding algorithm first determined a weight for every cell in M, beginning with data cells (the cells containing droplet, index, or orientation bits) and proceeding to parity and checksum cells. Let \(P_{c_{ij}}\) be the set of parity functions calculated over a given data cell cij. Then for each data cell cij: $$x_{ij} = \mathop {\sum }\limits_{f_{c_{pq}} \in P_{c_{ij}}} \left| {c_{pq} - f_{c_{pq}}\left( {\mathbf{M}} \right)} \right|$$ (4) Where cpq is the parity cell where the expected binary value of f is stored. The weight for each parity cell cij was then calculated based on the number of non-zero weights greater than 1 for the data cells associated with it. More formally, let cij be a parity cell and \(D_{c_{ij}}\) be the set of data cells used in the calculation of cij. Then the weight xij for each parity cell cij is: $$x_{ij} = \mathop {\sum }\limits_{c_{pq} \in D_{c_{ij} \wedge x_{pq} > 1}} {\mathop{\rm{sgn}}} \left( {x_{pq}} \right)$$ (5) The higher the weight value, the higher the probability that the corresponding cell had an error. 
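The weighting of Eqs. (4) and (5) can be sketched in a few lines. The parity map below is a toy, hypothetical mapping of data cells to parity cells (the real layout is given in Fig. S4); each parity cell is assumed to store the XOR of its assigned data cells.

```python
# Illustrative computation of the cell weights in Eqs. (4) and (5), using a
# hypothetical parity map. Cells are addressed as (row, col) tuples.
import numpy as np

def cell_weights(matrix, parity_map):
    """matrix: 6x8 array of 0/1. parity_map: {parity_cell: [data_cells]}."""
    data_weights = {}
    for p_cell, d_cells in parity_map.items():
        expected = np.bitwise_xor.reduce([matrix[c] for c in d_cells])
        mismatch = int(expected != matrix[p_cell])          # |c_pq - f(M)| term of Eq. (4)
        for c in d_cells:
            data_weights[c] = data_weights.get(c, 0) + mismatch
    parity_weights = {}                                     # Eq. (5): count associated cells with weight > 1
    for p_cell, d_cells in parity_map.items():
        parity_weights[p_cell] = sum(1 for c in d_cells if data_weights.get(c, 0) > 1)
    return data_weights, parity_weights

# Toy example with two parity checks; the flipped cell (0, 0) gets the highest weight.
M = np.zeros((6, 8), dtype=int)
M[0, 0] = 1
toy_map = {(5, 0): [(0, 0), (0, 1)], (5, 1): [(0, 0), (1, 0)]}
print(cell_weights(M, toy_map))
```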
An overall score for the matrix was then calculated by summing over all xij and normalizing by the sum of the correctly matched parity bits. This value was designated as the overall weight of the matrix. Higher values of this weight correspond to matrixes with more errors. $${\mathrm{Overall}}\,{\mathrm{matrix}}\,{\mathrm{weight}} = \frac{{\mathop {\sum }\nolimits_{i = 0}^6 \mathop {\sum }\nolimits_{j = 0}^8 x_{ij}}}{{\# {\mathrm{number}}\,{\mathrm{of}}\,{\mathrm{matched}}\,{\mathrm{parity}}\,{\mathrm{bits}}}}$$ (6) The algorithm then performed a greedy search to correct the errors using a priority queue ordered by the overall matrix weight (Fig. S6). The algorithm began by iteratively altering each of the probable site errors and computing the overall matrix weight of the modified matrix for each, placing each potential bit flip into a priority queue where the flips that produced the lowest overall weights had the highest priority. At each step, the algorithm selected the bit flip associated with the highest priority in the queue and then repeated this process on the resulting matrix. This process was continued until the algorithm produced a matrix with no mismatches or until it reached the maximum number of allowed bit flips (9 for our simulation/experiment). If it reached the maximum number of flips, it returned to the queue to pursue the next highest priority path. If the algorithm found a matrix with no mismatches, it then checked the orientation bits and oriented the matrix accordingly. The droplet and index data were then extracted and passed to the next step. If the queue was emptied without finding a correct matrix, the algorithm terminated in failure. Decoding algorithm: Step 2–fountain code decoding After extracting the droplet and index data from multiple origami the algorithm attempted to recover the full message (Fig. S7). Once decoded, each droplet had one or multiple segments XORed in it. Using the recovered indexes the algorithm determined how many and which segments were contained in each droplet. To decode the message, the algorithm maintained a priority queue of droplets based on the number of segments they contained (their degree), with the lowest degree droplets having the highest priority. The algorithm looped through the queue, removing the lowest degree droplet, attempting to use it to reduce the degree of the remaining droplets using XOR operations, and re-queuing the resulting droplets. Upon finding a droplet of ‘degree one’ it stored it as a segment for the final message. If all segments were recovered, the algorithm terminated successfully. Data simulation test To test the robustness of our encoding and decoding algorithms, origami data were simulated with randomly generated messages and errors. First, random binary messages of size m were created (for m = 160 to 12,800 bits, at 320-bit intervals). These messages were then divided into m/b equally sized segments, where b is the number of data bits to be encoded onto an individual origami. For fixed-size origami, larger messages necessitated a smaller b, as more bits had to be dedicated to the index. In these cases, b varied between eight (for m = 12,800) and twelve (for m = 160). After determining message segments, droplets were formed using the fountain code algorithm and encoded onto origami, along with the corresponding index, orientation, and error-correcting bits. Ten in silico copies of each unique origami were created, and 0–9 bits flipped at random to introduce errors. 
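For reference, the droplet-recovery step (Step 2 above) reduces to a short peeling loop. The sketch below is a simplified, list-based version of the priority-queue procedure described above, not the released implementation: droplets are (value, index-set) pairs, and degree-one droplets are repeatedly used to recover segments and XOR them out of the remaining droplets.

```python
# Minimal peeling decoder for the droplet-recovery step (Step 2).
def peel_decode(droplets, n_segments):
    segments = {}
    pending = [[value, set(idx)] for value, idx in droplets]
    progress = True
    while progress and len(segments) < n_segments:
        progress = False
        remaining = []
        for value, idx in pending:
            for i in list(idx):
                if i in segments:                 # XOR out segments we already know
                    value ^= segments[i]
                    idx.discard(i)
            if len(idx) == 1:                     # 'degree one' droplet yields a new segment
                segments[idx.pop()] = value
                progress = True
            elif idx:
                remaining.append([value, idx])
        pending = remaining
    return segments if len(segments) == n_segments else None

# Toy example: three 16-bit segments, four droplets (one redundant).
segs = [0xD41A, 0x7461, 0x2069]
drops = [(segs[0], {0}), (segs[0] ^ segs[1], {0, 1}),
         (segs[1] ^ segs[2], {1, 2}), (segs[2], {2})]
print(peel_decode(drops, 3) == {0: segs[0], 1: segs[1], 2: segs[2]})   # True
```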
The origami was decoded as described above. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Code availability DNA-PAINT images were analyzed using custom and publicly available codes (as indicated). The encoding/decoding algorithms were written in-house using Python, version 3.7.3. The source codes for the encoding, decoding, and localization algorithms are available on GitHub at https://github.com/BoiseState/NAM. The schematic in Fig. 1c of digital Nucleic Acid Memory was derived from a model created using Nanodesign (www.autodeskresearch.com/projects/nanodesign). Data availability The original DNA-PAINT recordings and drift-corrected centroid localization data that support the findings of this study have been deposited in the Zenodo repository with the identifier “https://doi.org/10.5281/zenodo.4672665”. Source data are provided with this paper. Any other relevant data are available from the authors upon reasonable request.
References
1. Victor, Z. 2018 Semiconductor Synthetic Biology Roadmap. 1–36, https://doi.org/10.13140/RG.2.2.34352.40960 (2018).
2. ITRS. International Technology Roadmap for Semiconductors, 2015 Results. ITRPV vol. 0, 1–37, https://www.semiconductors.org/wp-content/uploads/2018/06/0_2015-ITRS-2.0-Executive-Report-1.pdf. Accessed 1 March 2021 (2016).
3. Reinsel, D., Gantz, J. & Rydning, J. The Digitization of the World-From Edge to Core. IDC White Paper US44413318, https://www.seagate.com/files/www-content/our-story/trends/files/idc-seagate-dataage-whitepaper.pdf. Accessed 1 March 2021 (2018).
4. Zhirnov, V., Zadegan, R. M., Sandhu, G. S., Church, G. M. & Hughes, W. L. Nucleic acid memory. Nat. Mater. 15, 366–370 (2016).
5. Organick, L. et al. Random access in large-scale DNA data storage. Nat. Biotechnol. 36, 242–248 (2018).
6. Goldman, N. et al. Towards practical, high-capacity, low-maintenance information storage in synthesized DNA. Nature 494, 77–80 (2013).
7. Grass, R. N., Heckel, R., Puddu, M., Paunescu, D. & Stark, W. J. Robust chemical preservation of digital information on DNA in silica with error-correcting codes. Angew. Chem. Int. Ed. 54, 2552–2555 (2015).
8. Bornholt, J. et al. A DNA-based archival storage system. ACM SIGARCH Comput. Archit. News 44, 637–649 (2016).
9. Shipman, S. L., Nivala, J., Macklis, J. D. & Church, G. M. Molecular recordings by directed CRISPR spacer acquisition. Science 353, aaf1175-1–aaf1175-10 (2016).
10. Erlich, Y. & Zielinski, D. DNA Fountain enables a robust and efficient storage architecture. Science 355, 950–954 (2017).
11. Blawat, M. et al. Forward error correction for DNA data storage. Procedia Comput. Sci. 80, 1011–1022 (2016).
12. Yazdi, S. M. H. T., Gabrys, R. & Milenkovic, O. Portable and error-free DNA-based data storage. Sci. Rep. 7, 1–6 (2017).
13. Lee, H., Kalhor, R., Goela, N., Bolot, J. & Church, G. Enzymatic DNA synthesis for digital information storage. bioRxiv 348987, https://doi.org/10.1101/348987 (2018).
14. Wang, P., Meyer, T. A., Pan, V., Dutta, P. K. & Ke, Y. The beauty and utility of DNA origami. Chem 2, 359–382 (2017).
15. Nieves, D. J., Gaus, K. & Baker, M. A. B. DNA-based super-resolution microscopy: DNA-PAINT. Genes 9, 1–14 (2018).
16. Jungmann, R. et al. Single-molecule kinetics and super-resolution microscopy by fluorescence imaging of transient binding on DNA origami. Nano Lett. 10, 4756–4761 (2010).
17. Schnitzbauer, J., Strauss, M. T., Schlichthaerle, T., Schueder, F. & Jungmann, R. Super-resolution microscopy with DNA-PAINT. Nat. Protoc. 12, 1198–1228 (2017).
18. Luby, M. LT codes. In Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science 271–280 (IEEE, 2002).
19. MacKay, D. J. C. Fountain codes. IEE Proc. Commun. 152, 1062–1068 (2005).
20. Greengard, S. The future of data storage. Commun. ACM 62, 12 (2019).
21. Gwosch, K. C. et al. MINFLUX nanoscopy delivers 3D multicolor nanometer resolution in cells. Nat. Methods 17, 217–224 (2020).
22. Wade, O. K. et al. 124-color super-resolution imaging by engineering DNA-PAINT blinking kinetics. Nano Lett. 19, 2641–2646 (2019).
23. Langari, S. M. M., Yousefi, S. & Jabbehdari, S. Fountain-code aided file transfer in vehicular delay tolerant networks. Adv. Electr. Comput. Eng. 13, 117–124 (2013).
24. Green, C. M. Nanoscale optical and correlative microscopies for quantitative characterization of DNA nanostructures. https://doi.org/10.18122/td/1639/boisestate (Boise State University Theses and Dissertations, 2019).
25. Hata, H., Kitajima, T. & Suyama, A. Influence of thermodynamically unfavorable secondary structures on DNA hybridization kinetics. Nucleic Acids Res. 46, 782–791 (2018).
26. Strauss, S. & Jungmann, R. Up to 100-fold speed-up and multiplexing in optimized DNA-PAINT. Nat. Methods 17, 789–791 (2020).
27. Nehme, E., Weiss, L. E., Michaeli, T. & Shechtman, Y. Deep-STORM: super-resolution single-molecule microscopy by deep learning. Optica 5, 458 (2018).
28. Takabayashi, S. et al. Boron-implanted silicon substrates for physical adsorption of DNA origami. Int. J. Mol. Sci. 19, 2513 (2018).
29. Zhang, Y. et al. DNA origami cryptography for secure communication. Nat. Commun. 10, 5469 (2019).
30. Aghebat Rafat, A., Pirzer, T., Scheible, M. B., Kostina, A. & Simmel, F. C. Surface-assisted large-scale ordering of DNA origami tiles. Angew. Chem. Int. Ed. 53, 7665–7668 (2014).
31. Rothemund, P. W. K. Folding DNA to create nanoscale shapes and patterns. Nature 440, 297–302 (2006).
32. Dai, M., Jungmann, R. & Yin, P. Optical imaging of individual biomolecules in densely packed clusters. Nat. Nanotechnol. 11, 798–807 (2016).
33. Ovesný, M., Křížek, P., Borkovec, J., Švindrych, Z. & Hagen, G. M. ThunderSTORM: a comprehensive ImageJ plug-in for PALM and STORM data analysis and super-resolution imaging. Bioinformatics 30, 2389–2390 (2014).
34. Oliphant, T. E. Python for scientific computing. Comput. Sci. Eng. 9, 10–20 (2007).
Acknowledgements This research was funded in part by the National Science Foundation (ECCS 1807809), the Semiconductor Research Corporation, and the State of Idaho through the Idaho Global Entrepreneurial Mission and Higher Education Research Council.
Author information Author notes Christopher M. Green Present address: Center for Bio/Molecular Science and Engineering, U.S. Naval Research Laboratory, Washington, DC, USA. Reza Zadegan Present address: Department of Nanoengineering, Joint School of Nanoscience and Nanoengineering, North Carolina A&T State University, Greensboro, NC, USA. These authors contributed equally: George D. Dickinson, Golam Md Mortuza, William Clay, Luca Piantanida. Affiliations Micron School of Materials Science and Engineering, Boise State University, Boise, ID, USA: George D. Dickinson, William Clay, Luca Piantanida, Christopher M. Green, Chad Watson, Elton Graugnard, Reza Zadegan & William L. Hughes. Department of Computer Science, Boise State University, Boise, ID, USA: Golam Md Mortuza & Tim Andersen. Department of Biological Sciences, Boise State University, Boise, ID, USA: Eric J. Hayden. Department of Electrical and Computer Engineering, Boise State University, Boise, ID, USA: Wan Kuang. Contributions W.L.H. conceived the concept. E.J.H., T.A., W.K., E.G., R.Z., and W.L.H. designed the study. C.W., E.J.H., T.A., W.K., E.G., and W.L.H. supervised the work. C.W. managed the research project. G.D.D. and L.P. synthesized the DNA origami and performed DNA-PAINT imaging. L.P. carried out AFM imaging and analysis. T.A. and G.M.M. developed the encoding-decoding algorithms and necessary software, performed data processing, and generated the simulations. G.D.D. and W.C. developed the image-analysis software and analyzed the DNA-PAINT recordings. C.M.G. performed preliminary experiments and contributed critical suggestions to experimental design. All authors prepared the manuscript. Corresponding author Correspondence to William L. Hughes. Ethics declarations Competing interests The authors declare no competing interests. Additional information Peer review information Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information: Supplementary Information, Peer Review File, Reporting Summary. Source data: Source Data. Rights and permissions Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. About this article Cite this article Dickinson, G.D., Mortuza, G.M., Clay, W. et al. An alternative approach to nucleic acid memory. Nat Commun 12, 2371 (2021). https://doi.org/10.1038/s41467-021-22277-y Received: 08 February 2021. Accepted: 04 March 2021. Published: 22 April 2021. DOI: https://doi.org/10.1038/s41467-021-22277-y
doi-org-7697 ---- Project MUSE - Chucking the Checklist: A Contextual Approach to Teaching Undergraduates Web-Site Evaluation. Marc Meola. portal: Libraries and the Academy, Johns Hopkins University Press, Volume 4, Number 3, July 2004, pp. 331-344. DOI: 10.1353/pla.2004.0055. Abstract This paper criticizes the checklist model approach (authority, accuracy, objectivity, currency, coverage) to teaching undergraduates how to evaluate Web sites. The checklist model rests on faulty assumptions about the nature of information available through the Web, mistaken beliefs about student evaluation skills, and an exaggerated sense of librarian expertise in evaluating information. The checklist model is difficult to implement in practice and encourages a mechanistic way of evaluating that is at odds with critical thinking. A contextual approach is offered as an alternative. A contextual approach uses three techniques: promoting peer- and editorially-reviewed resources, comparison, and corroboration. The contextual approach promotes library resources, teaches information literacy, and encourages reasoned judgments of information quality.
doi-org-8273 ---- Reflections on Archival User Studies. Hea Lim Rhee. Reference & User Services Quarterly, Vol 54, No 4 (2015). DOI: https://doi.org/10.5860/rusq.54n4.29. Abstract This study is the first to focus on how developments in research trends, technology, and other factors have changed archival user studies. How have they changed in the past thirty years? How have they been conducted? This study examines and analyzes the US and Canadian literature on archival user studies to trace their past, characterize their present, and uncover the issues and challenges facing the archival community in conducting user studies. It discusses findings and gives suggestions for further archival user studies.
doi-org-9149 ---- Improving college students’ fact-checking strategies through lateral reading instruction in a general education civics course | Cognitive Research: Principles and Implications. Original article, Open Access, Published: 31 March 2021. Jessica E. Brodsky (ORCID: 0000-0001-9654-6806), Patricia J. Brooks (ORCID: 0000-0001-8030-8811), Donna Scimeca, Ralitsa Todorova, Peter Galati, Michael Batson, Robert Grosso, Michael Matthews, Victor Miller & Michael Caulfield. Cognitive Research: Principles and Implications volume 6, Article number: 23 (2021). Abstract College students lack fact-checking skills, which may lead them to accept information at face value. We report findings from an institution participating in the Digital Polarization Initiative (DPI), a national effort to teach students lateral reading strategies used by expert fact-checkers to verify online information. Lateral reading requires users to leave the information (website) to find out whether someone has already fact-checked the claim, identify the original source, or learn more about the individuals or organizations making the claim. Instructor-matched sections of a general education civics course implemented the DPI curriculum (N = 136 students) or provided business-as-usual civics instruction (N = 94 students). At posttest, students in DPI sections were more likely to use lateral reading to fact-check and correctly evaluate the trustworthiness of information than controls. Aligning with the DPI’s emphasis on using Wikipedia to investigate sources, students in DPI sections reported greater use of Wikipedia at posttest than controls, but did not differ significantly in their trust of Wikipedia. In DPI sections, students who failed to read laterally at posttest reported higher trust of Wikipedia at pretest than students who read at least one problem laterally. Responsiveness to the curriculum was also linked to numbers of online assignments attempted, but unrelated to pretest media literacy knowledge, use of lateral reading, or self-reported use of lateral reading. Further research is needed to determine whether improvements in lateral reading are maintained over time and to explore other factors that might distinguish students whose skills improved after instruction from non-responders. Introduction Young adults (ages 18–29 years) and individuals with at least some college education are the highest Internet users in the USA (Pew Research Center, 2019a). These groups are also most likely to use at least one social media site (Pew Research Center, 2019b). Despite their heavy Internet and social media use, college students rarely “read laterally” to evaluate the quality of the information they encounter online (McGrew et al., 2018). That is, students do not attempt to seek out the original sources of claims, research the people and/or organizations making the claims, or verify the accuracy of claims using fact-checking websites, online searches, or Wikipedia (Wineburg & McGrew, 2017).
The current study reports findings from one of eleven colleges and universities participating in the Digital Polarization Initiative (DPI), a national effort by the American Democracy Project of the American Association of State Colleges and Universities to teach college students information-verification strategies that rely on lateral reading for online research (American Democracy Project, n.d.; Caulfield, 2017a). The DPI curriculum was implemented across multiple sections of a general education civics course, while other sections taught by the same instructors received the “business-as-usual” civics curriculum. We evaluated the impact of the DPI curriculum on students’ use of lateral reading to accurately assess the trustworthiness of online information, as well as their use and trust of Wikipedia. We also examined factors that might influence whether students showed gains in response to the curriculum, such as their prior media literacy knowledge. How do fact-checkers assess the trustworthiness of online information? Fact-checking refers to a process of verifying the accuracy of information. In journalism, this process occurs internally before publication as well as externally via articles evaluating the accuracy of publicly available information (Graves & Amazeen, 2019). Ethnographic research on the practices of professional fact-checkers found that fact-checking methodology involves five steps: “choosing claims to check, contacting the speaker, tracing false claims, dealing with experts, and showing your work” (Graves, 2017, p. 524). Interest in the cognitive processes and strategies of professional fact-checkers is not surprising in light of concerns about the rapid spread of false information (i.e., “fake news”) via social media platforms (Pennycook et al., 2018; Vosoughi et al., 2018), as well as the emergence of fact-checking organizations during the twenty-first century, especially in the USA (Amazeen, 2020). When assessing the credibility of online information, professional fact-checkers first “take bearings” by reading laterally. This means that they “[leave] a website and [open] new tabs along the browser’s horizontal axis, drawing on the resources of the Internet to learn more about a site and its claims” (Wineburg & McGrew, 2018, p. 53). This practice allows them to quickly acquire background information about a source. When reading laterally, professional fact-checkers also practice “click restraint,” meaning that they review search engine results before selecting a result and rely on their “knowledge of digital sources, knowledge of how the Internet and searches are structured, and knowledge of strategies to make searching and navigating effective and efficient” (Wineburg & McGrew, 2018, p. 55). In contrast to professional fact-checkers, both historians and college students are unlikely to read laterally when evaluating online information (Wineburg & McGrew, 2017). How do college students assess the trustworthiness of online information? How individuals assess the credibility of information has been studied across a variety of fields, including social psychology (e.g., work on persuasion), library and information science, communication studies, and literacy and discourse (see Brante & Strømsø, 2018 for a brief overview). When assessing the trustworthiness of online social and political information, college students tend to read vertically.
This means that they look at features of the initial webpage for cues about the reliability of the information, such as its scientific presentation (e.g., presence of abstract and references), aesthetic appearance, domain name and logo, and the usefulness of the information (Brodsky et al., 2020; McGrew et al., 2018; Wineburg & McGrew, 2017; Wineburg et al., 2020). College students’ use of non-epistemic judgments (i.e., based on source features) rather than epistemic judgments (i.e., based on source credibility or corroboration with other sources) has also been observed in the context of selecting sources to answer a question and when ranking the reliability of sources (List et al., 2016; Wiley et al., 2009). When provided with opportunities to verify information, adults (including college students) rarely engage in online searches and when they do, they usually stay on Google’s search results page (Donovan & Rapp, 2020). While looking for information, college students rely on the organization of search engine results and prior trust in specific brands (e.g., Google) for cues about the credibility of the information (Hargittai et al., 2010). Low search rates, superficial search behaviors, and reliance on cognitive heuristics (e.g., reputation, endorsement by others, alignment with expectations) may be indicative of a lack of ability or lack of motivation to engage in critically evaluating the credibility of online information. According to the dual processing model of credibility assessment, use of more effortful evaluation strategies depends on users’ knowledge and skills, as well as their motivation (Metzger, 2007; Metzger & Flanagin, 2015). Drawing on the heuristic-systematic model of information processing (Chen & Chaiken, 1999), Metzger and colleagues argue that the need for accuracy is one factor that motivates users to evaluate the credibility of information. Users are more likely to put effort into evaluating information whose accuracy is important to them. In cases where accuracy is less important, they are likely to use less effortful, more superficial strategies, if any strategies at all. Teaching college students to read laterally The current study focuses on teaching college students to read laterally when assessing the trustworthiness of online information. However, a number of other approaches have already been used to foster students’ credibility evaluation knowledge and skills. Lateral reading contrasts with some of these approaches and complements others. For example, teaching students to quickly move away from the original content to consult other sources contrasts with checklist approaches that encourage close reading of the original content (Meola, 2004). One popular checklist approach is the CRAAP test, which provides an extensive list of questions for examining the currency, relevance, authority, accuracy, and purpose of online information (Blakeslee, 2004; Musgrove et al., 2018). On the other hand, lateral reading complements traditional sourcing interventions that teach students how to identify and leverage source information when assessing multiple documents (Brante & Strømsø, 2018). More specifically, lateral reading instruction emphasizes that students need to assemble a collection of documents in order to be able to assess information credibility, identify biases, and corroborate facts. Lateral reading also aligns with aims of media, news, and information literacy instruction. 
Media literacy instruction teaches students how to access, analyze, evaluate, create, reflect, and act on media messages as means of both protecting and empowering them as media consumers and producers (Hobbs, 2010, 2017). Media literacy interventions can increase students’ awareness of factors that may affect the credibility of media messages, specifically that media content is created for a specific audience, is subject to bias and multiple interpretations, and does not always reflect reality (Hobbs & Jensen, 2009; Jeong et al., 2012). These media literacy concepts also apply in the context of news media (Maksl et al., 2017). Lateral reading offers a way for students to act on awareness and skepticism fostered through media and news literacy interventions by leaving the original messages in order to investigate sources and verify claims. While media and news literacy instruction focuses on students’ understanding of and interactions with media content, information literacy instruction teaches students how to search for and verify information online (Koltay, 2011). Being information literate includes understanding that authority is constructed and contextual and “us[ing] research tools and indicators of authority to determine the credibility of sources, understanding the elements that might temper this credibility” (Association of College & Research Libraries, 2015, p. 12). Lateral reading offers one means of investigating the authority of a source, including its potential biases (Faix & Fyn, 2020). Lateral reading is also a necessary component of “civic online reasoning” during which students evaluate online social and political information by researching a source, assessing the quality of evidence, and verifying claims with other sources (McGrew et al., 2018). McGrew et al. (2019) conducted a pilot study of a brief in-class curriculum for teaching undergraduate students civic online reasoning. One session focused explicitly on teaching lateral reading to learn more about a source, while the second session focused on examining evidence and verifying claims. Civic online reasoning was assessed using performance-based assessments similar to those used in their 2018 study (McGrew et al., 2018). Students who received the curriculum were more likely to make modest gains in their use of civic online reasoning, as compared to a control group of students who did not receive the curriculum. Aligning with this approach, the American Democracy Project of the American Association of State Colleges and Universities organized the Digital Polarization Initiative (DPI; American Democracy Project, n.d.) as a multi-institutional effort to teach college students how to read laterally to fact-check online information. Students were instructed to practice four fact-checking “moves”: (1) “look for trusted work” (search for other information on the topic from credible sources), (2) “find the original” (search for the original version of the information, particularly if it is a photograph), (3) “investigate the source” (research the source to learn more about its agenda and biases), and (4) “circle back” (be prepared to restart your search if you get stuck) (Caulfield, 2017a). Because emotionally arousing online content is more likely to be shared (Berger & Milkman, 2012), students were also taught to “check their emotions,” meaning that they should make a habit of fact-checking information that produces a strong emotional response. 
In the current study, we were interested in fostering students’ use of lateral reading to accurately assess the trustworthiness of online content. Therefore, we focused specifically on students’ use of the first three fact-checking “moves.” These moves are all examples of lateral reading, as they require students to move away from original content and conduct searches in a new browser window (Wineburg & McGrew, 2017), and align with the practices of professional fact-checkers. While the DPI curriculum also taught the move of “circling back” and encouraged students to adopt the habit of “checking their emotions,” this move and habit are difficult to assess through performance-based measures and were not the focus of the assessments or analyses presented here. Research objectives We present results from an efficacy study that used the American Democracy Project’s DPI curriculum to teach college students fact-checking strategies through lateral reading instruction. Students in several sections of a first-year, general education civics course received the DPI curriculum in-class and completed online assignments reinforcing key information and skills, while other sections received the “business-as-usual” civics instruction. We were interested in whether students who received the DPI curriculum would be more likely to use lateral reading to correctly assess the trustworthiness of online content at posttest, as compared to “business-as-usual” controls. Additionally, we wanted to know the extent to which attempting the online assignments, which reviewed the lateral reading strategies and provided practice exercises, contributed to students’ improvement. As part of the analyses, we controlled for prior media literacy knowledge. Even though media literacy has not been tied directly to the ability to identify fake news (Jones-Jang et al., 2019), students with greater awareness of the media production process and skepticism of media coverage may be more motivated to investigate online content. As part of the team implementing the DPI curriculum, we were provided with performance-based assessments like the ones used by McGrew et al. (2018) and McGrew et al. (2019) to assess students’ lateral reading at pretest and posttest. These types of assessments are especially critical given findings that college students’ self-reported information evaluation strategies are often unrelated to their observed behaviors (Brodsky et al., 2020; Hargittai et al., 2010; List & Alexander, 2018). In light of previous research on the disconnect between students’ self-reported and observed information-evaluation behaviors, we also examined whether students who received the DPI curriculum were more likely to self-report use of lateral reading at posttest, as compared to “business-as-usual” controls. In the DPI curriculum, one of the sources that students are encouraged to consult when reading laterally is Wikipedia. Even though they are often told by secondary school teachers, librarians, and other college instructors that Wikipedia is an unreputable source (Garrison, 2018; Konieczny, 2016; Polk et al., 2015), students may rely on Wikipedia to acquire background information on a topic at the start of their searches (Head & Eisenberg, 2010). Therefore, we were interested in whether college students who received the DPI curriculum would report higher use of and trust of Wikipedia at posttest, as compared to “business-as-usual” controls. 
Lastly, for students who received the DPI curriculum, we explored factors that might distinguish students who used lateral reading to correctly assess the trustworthiness of online content at posttest from their classmates who did not read laterally. In an effort to distinguish groups, we compared students on their use of lateral reading at pretest and their self-reported use of lateral reading at pretest. We also examined group differences in general media literacy knowledge at pretest, use of and trust of Wikipedia at pretest, and number of online homework assignments attempted. Methods Participants First-year college students (N = 230) enrolled in a general education civics course at a large urban public university in the northeastern USA took part in the study. The university has an open-admission enrollment policy and is designated as a Hispanic-serving institution. Students took classes at main and satellite campuses, both serving mostly commuter students. Participants’ self-reported demographics are presented in Table 1. Almost half (47.8%) were first-generation students (i.e., neither of their parents attended college). Table 1 Participants’ self-reported demographics for matched sections (N = 230; N_DPI = 136, N_Control = 94). Prior to the outset of the semester, the course instructors received training in the DPI curriculum and met regularly throughout the semester to go over lesson plans and ensure fidelity of instruction. Four instructors taught “matched” sections of the civics course, i.e., at least one section that received the DPI curriculum and at least one section that was a “business-as-usual” control. Two of the instructors taught one DPI section and one control section at the main campus, one instructor taught one DPI and one control section at the satellite campus, and one instructor taught one DPI and one control section at the main campus and one DPI section at the satellite campus. Across the matched sections, we had N = 136 students in the five DPI sections and N = 94 students in the four control sections. The research protocol was classified as exempt by the university’s institutional review board. The DPI curriculum Students in DPI and control sections completed the online pretest in Week 3 and online posttest in Week 10 of a 15-week semester. The pretest and posttest were given as online assignments and were graded based on completion. For the pretest and posttest, materials were presented in the following order: lateral reading problem set, demographic questions, Wikipedia use and trust questions, self-reported use of lateral reading strategies, general media literacy scale, and language background questions. All materials are described below. In the DPI sections, instructors spent three class sessions in Weeks 4 and 5 introducing students to the four fact-checking “moves” using two slide decks provided by developers of the DPI curriculum to colleges and universities participating in this American Democracy Project initiative. A script accompanying the slide decks guided instructors through explaining and demonstrating the moves to students. The slide decks included many examples of online content for instructors and students to practice fact-checking during class. The in-class DPI curriculum drew heavily on concepts and materials from Caulfield (2017a). In the first slide deck, students were introduced to the curriculum as a way to help them determine the trustworthiness of online information.
The four moves (look for trusted work, find the original, investigate the source, and circle back) were framed as “quick skills to help you verify and contextualize web content.” Students learned about the difference between vertical and lateral reading in the context of investigating the source. They also practiced applying three of the moves (looking for trusted work, finding the original, and investigating the source) to fact-check images, news stories, and blog posts by using the following techniques: checking Google News and fact-checking sites to find trusted coverage of a claim, using reverse image search to find the original version of an image, and adding Wikipedia to the end of a search term to investigate a source on Wikipedia. In the second slide deck, students reviewed the three moves of looking for trusted work, finding the original, and investigating the source, as well as their associated techniques. Students were reminded that the fourth move, circle back, involved restarting the search if their current search was not productive. Students then learned that, in addition to using a reverse search to find the original version of an image, they could find the original source of an article by clicking on links. For investigating the source, students were told that they could also learn more about a source by looking for it in Google News. The remainder of the slide deck provided a variety of online content for students to practice fact-checking information using the four moves. In Weeks 7 and 8, students in DPI sections spent three class sessions practicing evaluating online content related to immigration. This topic was chosen because it aligned with course coverage of social issues in the USA. Students were also given three online assignments to review and practice the strategies at home using online content related to immigration. These online assignments were graded based on completion and are described in detail below. Aside from giving the pretest and posttest as online assignments, instructors in control sections followed the standard civics curriculum (i.e., “business as usual”), which focused on the US government, society, and economy, with no mention of lateral reading strategies and/or how to evaluate online content. As students in the control sections did not complete the three interim online homework assignments, the instructors implemented their regular course assignments, such as group projects. Pretest, posttest, and online assignments were all administered via Qualtrics software with the links posted to the Blackboard learning management system. The script, slide decks, and online homework assignments are publicly available in an online repository (Footnote 1). Lateral reading problems Two sets of lateral reading problems (problem sets A and B) were provided by the developers of the DPI curriculum to all 11 campuses. Problems were adapted from the Stanford History Education Group’s civic online reasoning curriculum (Stanford History Education Group, n.d.) and from the Four Moves blog (Caulfield, 2017b). To ensure fidelity of implementation across campuses, we did not make any changes to the problem sets. Students completed one of the lateral reading problem sets (A or B) as a pretest and the other problem set as a posttest.
Set order was counterbalanced across instructors: students in sections taught by two instructors received problem set A at pretest and problem set B at posttest, and students in sections taught by the other two instructors received problem set B at pretest and problem set A at posttest. Each problem set consisted of one of each of four types of lateral reading problems determined by the developers of the DPI curriculum. The problems in each set included some problems with accurate online content, while other problems featured online content that was less trustworthy. Each problem was labeled by its problem type in order to frame the problem, but students could use multiple lateral reading strategies to fact-check each problem. For each problem, students indicated their level of trust in the online content using a Likert scale ranging from 1 = Very Low to 5 = Very High. Students could also indicate that they were Unsure (− 9). Students were then prompted to “Explain the major factors in deciding your level of trust” using an open-response textbox. See Table 2 for a list of each problem type, problem set, online content used, and correct trust assessments and Fig. 1 for screenshots of two example problems. Table 2 Problem type, online content, and correct trust assessment for problem sets A and B. Fig. 1 Screenshots of two of the lateral reading problems. Note: The left panel shows the Sourcing Evidence problem from problem set A, and the right panel shows the Clickbait Science and Medical Disinformation problem from problem set B. Scoring of lateral reading problems The DPI provided a rubric for scoring student responses to the prompt “Explain the major factors in deciding your level of trust”: 0 = made no effort, 1 = reacted to or described original content, 2 = indicated investigative intent, but did not search laterally, 3 = conducted a lateral search using online resources such as search engines (e.g., Google), Wikipedia, or fact-checking sites (e.g., Snopes, PolitiFact) but failed to correctly evaluate the trustworthiness of the content (i.e., came to the incorrect conclusion or focused on researching an irrelevant aspect of the content to inform their decision), or 4 = conducted a lateral search and correctly evaluated the trustworthiness of the content. We established inter-rater reliability using the DPI’s rubric by having two authors independently score a randomly selected 16.5% of the responses for each lateral reading problem in each problem set (Footnote 2). Since we used an ordinal scoring scheme ranging from 0 to 4, we calculated weighted Cohen’s Kappa k = 0.93 as a measure of inter-rater agreement, which takes into account the closeness of ratings (Cohen, 1968). All disagreements were resolved through discussion. The authors then divided and independently coded the remaining responses. Given the volume of responses, we decided to verify manual scores of 4 using an automated approach. First, we identified keywords that were indicative of use of lateral reading and searched each response for those keywords. Keywords were determined using a top-down and bottom-up approach, meaning that some words came from the curriculum, while other words were selected by scanning students’ responses. Table 3 presents keywords and sample responses for keywords. Responses that used at least one keyword were scored as 1, indicating that the student read laterally. Responses that did not use any keywords were scored 0, indicating that the student did not read laterally.
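In effect, this automated screen is a single string-matching pass over the open-ended explanations. The following R sketch illustrates the idea only; the data frame, column names, and keyword list are hypothetical and are not taken from the paper's materials (the actual keywords are listed in Table 3).

# Minimal sketch of keyword-based flagging of lateral reading.
# Assumes a data frame `responses` with one open-ended explanation per row
# in a column `explanation` (hypothetical names); keywords are illustrative only.
keywords <- c("google", "googled", "searched", "wikipedia", "snopes",
              "politifact", "fact-check", "fact check", "another source")
pattern <- paste(keywords, collapse = "|")
responses$keyword_score <- as.integer(grepl(pattern, tolower(responses$explanation)))
# 1 = at least one keyword present (treated as lateral reading), 0 = no keywords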
Next, we scored responses on the Likert scale asking about the trustworthiness of the online content as 0 for incorrect trust assessment and 1 for correct trust assessment (see Table 2). Lastly, we combined the keyword and trust scores so that 0 indicated no use of lateral reading or use of lateral reading but with an incorrect trust assessment, and 1 indicated use of lateral reading with a correct trust assessment, which was equivalent to a manual score of 4. Table 3 Keywords used to automatically score responses for lateral reading. We next reviewed responses where manual and automated scores did not match (58 out of 1787 responses = 3.2%, Cohen’s Kappa k = 0.80) (Footnote 3). Twenty-three were false positives (i.e., had an automated score of 1 and a manual score of 3 or less), and 35 were false negatives (i.e., had an automated score of 0 and a manual score of 4). In six of the false-negative responses, students expressed a trust assessment in their open-ended response that explicitly contradicted their trust assessment on the Likert scale. All disagreements were resolved in favor of the manual scoring. Self-reported use of lateral reading strategies Students used a 5-point Likert scale ranging from 1 = Never to 5 = Constantly to respond to the prompt “How frequently do you do the following when finding information online for school work?” for the three fact-checking moves requiring lateral reading and the habit of checking their emotions. Each move was described using layman’s terms in order to make it clear for students in control sections who were not exposed to the DPI curriculum. Look for trusted work was presented as “check the information with another source,” find the original was presented as “look for the original source of the information,” and investigate the source was presented as two items: “find out more about the author of the information” and “find out more about who publishes the website (like a company, organization, or government).” Check your emotions was presented as “consider how your emotions affect how you judge the information,” but was not included in analyses because it reflects a habit, rather than a lateral reading strategy. The four-item scale showed good internal consistency at pretest (α = .80). Use of Wikipedia Students were asked to respond to the question “How often do you use Wikipedia to check if you can trust information on the Internet?” using a 5-point Likert scale ranging from 1 = Never to 5 = Constantly. Trust of Wikipedia Students were asked to respond to the question “To what extent do you agree with the statement that ‘people should trust information on Wikipedia’?” using a 5-point Likert scale ranging from 1 = Strongly Disagree to 5 = Strongly Agree. General media literacy knowledge scale Students completed an 18-item scale (6 reverse-scored items) assessing general and news media literacy knowledge (adapted from Ashley et al., 2013, and Powers et al., 2018). For each statement, students indicated the extent to which they agreed or disagreed with the statement using a 5-point Likert scale ranging from 1 = Strongly Disagree to 5 = Strongly Agree. The 18-item scale showed adequate internal consistency at pretest (α = .76); reliability increased after removing an item with low item-rest correlation (–.08) (α = .80). The 17-item scale was used in analyses.
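The reliability step described here corresponds to a routine Cronbach's alpha computation with item-rest correlations. A hedged R sketch, assuming the 18 items sit in hypothetical columns ml_01 through ml_18 of a data frame `survey`, with reverse-scored items already recoded (the paper does not publish its analysis code, so this is illustrative only):

# Sketch: internal consistency of the media literacy scale (psych package).
library(psych)
ml_items <- survey[, paste0("ml_", sprintf("%02d", 1:18))]
rel <- psych::alpha(ml_items)
rel$total$raw_alpha        # overall alpha (.76 reported in the text)
rel$item.stats$r.drop      # item-rest correlations; the item near -.08 was removed
ml_17 <- ml_items[, rel$item.stats$r.drop > 0]   # keep the remaining 17 items
psych::alpha(ml_17)$total$raw_alpha              # alpha for the 17-item scale (.80)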
An exploratory principal components analysis conducted using IBM SPSS Statistics (version 27) found four components with clustering primarily based on whether or not the item was reverse-scored (Footnote 4). Therefore, we interpreted clustering based on reverse-coding to be a statistical artifact and treated the scale as unidimensional. See “Appendix” for students’ agreement on each item by condition at pretest. To determine accuracy of students’ media literacy knowledge, scores were recoded such that scores of 1 through 3 were recoded as 0 (inaccurate) and scores of 4 and 5 were recoded as 1 (accurate). “Appendix” also reports accuracy on each item by condition at pretest. Online homework assignments Students in the DPI sections completed three online assignments to practice the lateral reading strategies covered in class. For each assignment, students were prompted to recall the four moves and a habit for reading laterally, saw slides and videos reviewing the four moves and a habit, and practiced using the four moves and a habit to investigate the validity of online content related to immigration, a topic covered in the civics course. Online content was selected from the Four Moves blog (Caulfield, 2017b). The first homework assignment asked students to investigate an article from City Journal magazine titled “The Illegal-Alien Crime Wave” (Caulfield, 2018c), the second assignment asked students to investigate a photograph that purported to show a child detained in a cage by US Immigration and Customs Enforcement (Caulfield, 2018b), and the last assignment asked students to investigate a Facebook post claiming that Border Patrol demanded that passengers on a Greyhound bus show proof of citizenship (Caulfield, 2018a). The online assignments are publicly available in an online repository (Footnote 5). Results Results are organized by research questions. All analyses were run in R (version 3.6.2; R Core Team, 2018; RStudio Team, 2016). Preliminary analyses of lateral reading at pretest Prior to conducting analyses to compare students who received the DPI curriculum with “business-as-usual” controls on lateral reading at posttest, we ran a series of preliminary analyses on the pretest data to assist us in formulating the models used to evaluate posttest performance. We first examined whether students’ average scores on lateral reading problems differed by instructor or condition at pretest. For this set of analyses the dependent variable was each student’s average score across the four problems, as assessed via the DPI rubric (0 to 4). Students’ average scores at pretest did not differ significantly by condition (M_DPI = 1.21, SD = 0.35 and M_Control = 1.22, SD = 0.42; t(228) = 0.18, p = .855), see Table 4 for breakdown by problem and condition. A one-way between-group ANOVA with the instructor as the between-group variable and average score across the four problems as the dependent variable indicated that pretest performance did not differ by instructor (F(3, 226) = 1.47, p = .223, η_p² = 0.02). Table 4 Mean score for students in each condition for each problem at pretest and posttest (N = 230; N_DPI = 136, N_Control = 94). At the level of individual students, 7.0% of students received a score of 4 (i.e., read laterally and correctly assessed trustworthiness) for at least one problem at pretest (5.9% of students in the DPI sections and 8.5% in the control sections; see Table 5 for breakdown by problem type and condition).
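These preliminary checks are standard independent-samples comparisons. A sketch of how they might look in R, under assumed variable names (`students`, `avg_pretest`, `condition`, and `instructor` are hypothetical, one row per student):

# Sketch of the pretest comparisons.
# Condition comparison; df = 228 in the text suggests a pooled-variance t test.
t.test(avg_pretest ~ condition, data = students, var.equal = TRUE)
# Instructor comparison via a one-way between-group ANOVA.
summary(aov(avg_pretest ~ instructor, data = students))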
There was no significant difference across conditions, χ²(1) = 0.26, p = .612, or instructor, Fisher’s exact test p = .603. Therefore, to evaluate the effectiveness of the DPI curriculum, we chose to examine differences in students’ scores only at posttest. For the posttest models, we created a control variable to indicate whether or not the student had engaged in lateral reading and drew the correct conclusion about the trustworthiness of the online content on one or more problems at pretest. We also included a control variable for the instructor to account for possible differences in the fidelity of implementation of the DPI curriculum. Table 5 Percentage of students in each condition who received a score of 4 (i.e., read laterally and drew the correct conclusion about the trustworthiness of the online content) on each problem type at pretest and posttest (N = 230; N_DPI = 136, N_Control = 94). We next examined whether problem sets A and B and the four types of problems were of equal difficulty at pretest. Students’ average score across the four problems did not differ significantly by problem set (M_set A = 1.25, SD = 0.38 and M_set B = 1.18, SD = 0.37; t(228) = 1.36, p = .175). To examine differences in scores by problem type, we conducted a one-way repeated-measures ANOVA with problem type as a within-subject variable and score as the dependent variable. With a Greenhouse–Geisser correction for lack of sphericity, there was a main effect of problem type, F(2.95, 657.48) = 2.66, p = .048, η_p² = .01. Post hoc tests with Tukey adjustment for multiple comparisons indicated that the Fake News problem type was harder than the Photo Evidence problem type (p = .040). All other problem types were of comparable difficulty. For each problem type, sets A and B were of comparable difficulty, except for the Sourcing Evidence problem type, where set A had an easier problem (M = 1.35, SD = 0.64) than set B (M = 1.10, SD = 0.43), t(218.28) = 3.55, p < .001. We retained problem type as a control variable in the posttest models. Problem set order was counterbalanced at the level of instructor and therefore fully confounded with instructor (see above); hence, we chose not to include problem set as a control variable in order to be able to retain instructor as a control variable in the posttest models. Differences in online homework attempts Among students who received the DPI curriculum, 6.6% of students attempted no online homework assignments, 14.7% attempted one homework assignment, 44.1% attempted two assignments, and 34.6% attempted all three online homework assignments. On average, students in the DPI sections attempted 2.07 assignments (SD = 0.87). Given different rates of engagement with the assignments, we included the number of assignments attempted in the posttest models. Differences in general media literacy knowledge Across both conditions, students demonstrated high general media literacy knowledge at pretest (M_agreement = 3.92, SD = 0.42; M_accuracy = 74.0%, SD = 20.5%). Students’ agreement as assessed via the Likert scale did not differ significantly by condition (M_DPI = 3.90, SD = 0.42 and M_Control = 3.95, SD = 0.43; t(228) = 0.80, p = .425). The accuracy of students’ knowledge also did not differ significantly by condition (M_DPI = 73.4%, SD = 20.3% and M_Control = 74.7%, SD = 20.7%; t(228) = 0.49, p = .624). See “Appendix” for mean agreement and accuracy per question at pretest by condition.
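The problem-type comparison is a one-way repeated-measures ANOVA with a sphericity correction and Tukey-adjusted pairwise contrasts. One way to sketch it in R uses the afex and emmeans packages; the data frame and column names below are hypothetical (pretest scores in long format), and this is not the authors' code:

# Sketch of the problem-type ANOVA with a Greenhouse-Geisser correction.
library(afex)     # afex applies a GG correction to within-subject effects by default
library(emmeans)
fit <- aov_ez(id = "student_id", dv = "score", data = pretest_long,
              within = "problem_type")
fit                                                      # main effect of problem type
pairs(emmeans(fit, ~ problem_type), adjust = "tukey")    # Tukey-adjusted post hoc tests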
Changes in lateral reading at posttest At posttest, students in DPI sections had an average score of M = 2.22 (SD = 0.92) across the four problems and received a score of 4 on an average of 1.07 problems (SD = 1.07). In contrast, students in control sections had an average score of M = 1.15 (SD = 0.30) and received a score of 4 on an average of 0.03 problems (SD = 0.23). To address our primary research question, we ran a mixed-effects ordinal logistic regression model with a logit link using the clmm function of the ordinal package (Christensen, 2019) in R (R Core Team, 2018; RStudio Team, 2016); see Table 6. For each posttest problem, our ordinal dependent variable was the student’s score on the 0–4 scale from the DPI rubric. We included an intercept-only random effect for students. Our fixed effects were media literacy knowledge at pretest, use of lateral reading to make a correct assessment at pretest, instructor, problem type, condition (DPI vs. control), and the number of online assignments attempted. Table 6 Mixed-effects ordinal logistic regression model used to predict score for each problem on a scale of 0 to 4 (N = 230). Overall, the full model with all fixed effects and the random effect of student fit significantly better than the null model with only the random effect of student (χ²(10) = 137.46, p < .001). For each fixed effect, we compared the fit of the full model to the fit of the same model with the fixed effect excluded. This allowed us to determine whether including the fixed effect significantly improved model fit; see Table 6 for model comparisons. All control variables (i.e., media literacy knowledge at pretest, use of lateral reading to make a correct assessment at pretest, instructor, and problem type) significantly improved model fit or approached significance as predictors of students’ scores on lateral reading problems. Controlling for all other variables, students in the DPI sections were more likely to score higher on lateral reading problems than students in the control sections. Attempting more homework assignments was also significantly associated with higher scores. Therefore, we dichotomized manual scores by recoding scores of 4 as 1 to indicate that the response provided evidence of lateral reading with a correct conclusion about the trustworthiness of the online content; all other scores were recoded as 0. We then re-ran the model above with the dichotomized version of the dependent variable to see whether findings differed. For each posttest problem, our dependent variable indicated whether or not students received a score of 4, i.e., whether they read laterally and also drew the correct conclusion about the trustworthiness of the online content. We used a mixed-effects logistic regression model with a binomial logit link using the glmer function of the lme4 package (Bates et al., 2014) in R (R Core Team, 2018; RStudio Team, 2016); see Table 7. Table 7 Mixed-effects logistic regression model used to predict use of lateral reading and correct trustworthiness conclusion on each problem (N = 230). Overall, the full model with all fixed effects and the random effect of student fit significantly better than the null model with only the random effect of student (χ²(10) = 161.30, p < .001). For each fixed effect, we again compared the fit of the full model to the fit of the same model with the fixed effect excluded; see Table 7 for model comparisons.
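The two mixed-effects models map onto the clmm and glmer calls named above. A hedged sketch under assumed long-format data (one row per student × problem; all column names are hypothetical, and the exact coding of predictors in the published models may differ):

# Sketch of the posttest models described in the text.
library(ordinal)
library(lme4)
posttest_long$score_ord <- factor(posttest_long$score, ordered = TRUE)
# Ordinal model of the 0-4 rubric score with a random intercept per student.
full_ord <- clmm(score_ord ~ media_literacy_pre + lateral_pre + instructor +
                   problem_type + condition + hw_attempted + (1 | student_id),
                 data = posttest_long, link = "logit")
null_ord <- clmm(score_ord ~ 1 + (1 | student_id), data = posttest_long, link = "logit")
anova(null_ord, full_ord)        # likelihood-ratio test of the full model against the null
# Binary model of "read laterally and judged trustworthiness correctly" (score == 4).
full_bin <- glmer(score4 ~ media_literacy_pre + lateral_pre + instructor +
                    problem_type + condition + hw_attempted + (1 | student_id),
                  data = posttest_long, family = binomial(link = "logit"))
anova(update(full_bin, . ~ . - condition), full_bin)   # drop-one comparison for condition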
All control variables except media literacy knowledge at pretest significantly improved model fit, indicating that they were significant predictors of scoring 4, i.e., of reading laterally and drawing a correct conclusion about trustworthiness. Controlling for all other variables, students in the DPI sections were significantly more likely to receive a score of 4 than students in the control sections. Students who attempted more homework assignments were also significantly more likely to score 4.

Changes in self-reported lateral reading at posttest

Descriptive statistics for students' self-reported use of lateral reading strategies at pretest and posttest are presented in Table 8. At pretest, students in the control and DPI sections did not differ in the frequency with which they self-reported using lateral reading strategies when finding information online for school work, t(228) = –1.30, p = .196. On average, students at pretest reported using lateral reading strategies between Sometimes and Often.

Table 8. Descriptive statistics for self-reported use of lateral reading strategies by time and condition (N = 230; N_DPI = 136, N_Control = 94).

To examine whether students who received the DPI curriculum were more likely to self-report use of lateral reading at posttest, as compared to controls, we conducted a 2 × 2 repeated-measures ANOVA with time (pretest vs. posttest) as a within-subject variable, condition (DPI vs. control) as a between-subject variable, and mean self-reported use of lateral reading as the dependent variable. There was a significant main effect of time, F(1, 228) = 4.67, p = .032, ηp² = 0.02, with students reporting higher use of lateral reading at posttest (M = 3.44, SD = 0.87) than at pretest (M = 3.30, SD = 0.84). There was also a significant main effect of condition, F(1, 228) = 4.13, p = .043, ηp² = 0.02, with students in the DPI sections reporting higher use of lateral reading (M = 3.45, SD = 0.84) than students in the control sections (M = 3.25, SD = 0.88). The interaction of time and condition was not significant, F(1, 228) = 1.06, p = .304, ηp² = 0.01.

Changes in use and trust of Wikipedia at posttest

Descriptive statistics for students' use and trust of Wikipedia at pretest and posttest are presented in Table 9. Since we used single items with ordinal scales to measure these variables, we used the nonparametric Wilcoxon–Mann–Whitney test to compare students' use and trust of Wikipedia across conditions at pretest and posttest (UCLA Statistical Consulting Group, n.d.).

Table 9. Percentage of students who indicated each response for use and trust of Wikipedia by time and condition (N = 230; N_DPI = 136, N_Control = 94).

At pretest, students in DPI sections did not differ from students in control sections in their responses to the question "How often do you use Wikipedia to check whether you can trust information on the Internet?", Median = 2 (Rarely) for both conditions, W = 6135.5, p = .591. However, at posttest, students in DPI sections reported using Wikipedia more often to fact-check information (Median = 3, Sometimes) as compared to controls (Median = 2, Rarely), W = 5358.5, p = .030. At pretest, students in DPI and control sections did not differ in their responses to the question "To what extent do you agree with the statement that 'people should trust information on Wikipedia'?", Median = 2 (Disagree) for both conditions, W = 6492, p = .835.
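The self-report and Wikipedia analyses reported in this section (and the posttest comparison that follows) are again standard tests; a sketch under the same caveats as above is shown below. The paper does not state which R functions were used for the 2 × 2 mixed ANOVA or the Wilcoxon–Mann–Whitney tests, so afex and base R's wilcox.test stand in here, and the data frames and columns (self_long, wiki_post, self_report, wiki_use, wiki_trust) are hypothetical.

```r
# Sketch only: hypothetical data layouts.
library(afex)

# self_long: one row per student x time (pretest/posttest), with mean self-reported
# lateral reading in self_report and condition (DPI vs. control).
aov_ez(id = "student", dv = "self_report",
       between = "condition", within = "time", data = self_long)

# wiki_post: one row per student at posttest; single ordinal items coded numerically
# (e.g., 1 = Never ... 5 = Always for use; 1 = Strongly Disagree ... 5 = Strongly Agree for trust).
wilcox.test(wiki_use ~ condition, data = wiki_post)    # reported as W in the text
wilcox.test(wiki_trust ~ condition, data = wiki_post)
```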
At posttest, students in DPI sections tended to report a higher level of trust in information on Wikipedia (Median = 3, No opinion) than students in the control sections (Median = 2, Disagree), but the difference in trust was not significant, W = 5753.5, p = .181.

Individual differences in lateral reading for students in DPI sections

To better understand individual differences in students' responses to the DPI curriculum, we compared students who scored 4 (i.e., used lateral reading and correctly assessed trustworthiness) on at least one problem at posttest (n = 83, or 61.0% of students in DPI sections) with their peers who did not receive a score of 4 on any of the lateral reading problems at posttest (n = 53, or 39.0% of students in DPI sections). We first looked at group differences in whether or not students read laterally and drew the correct conclusion about the trustworthiness of the online content on at least one problem at pretest and in their self-reported use of lateral reading at pretest. The groups did not differ in use of lateral reading on pretest problems or in self-reported use of lateral reading at pretest. Next, we examined whether the groups differed in their general media literacy knowledge at pretest and their use and trust of Wikipedia at pretest. There was no difference between groups in general media literacy knowledge (agreement and accuracy) at pretest or in their use of Wikipedia at pretest. However, students in DPI sections who used lateral reading on at least one problem at posttest reported significantly lower trust of Wikipedia at pretest (Median = 2, Disagree) than students who failed to read laterally (Median = 3, No opinion), W = 2790, p = .006. Lastly, we examined whether the groups differed in the number of online homework assignments attempted. Students in DPI sections who used lateral reading on at least one problem at posttest attempted more online homework assignments (M = 2.23, SD = 0.83) than students who did not read laterally at posttest (M = 1.81, SD = 0.88), t(134) = –2.80, p = .006.
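The individual-difference comparisons within the DPI sections are simple two-group tests; under the same assumptions as the earlier sketches (a hypothetical dpi_students data frame with an indicator read4_post for scoring 4 on at least one posttest problem), they might look like this.

```r
# Sketch only: hypothetical data frame of DPI-section students.
# read4_post: TRUE if the student scored 4 on at least one posttest problem, else FALSE.
t.test(hw_attempted ~ read4_post, data = dpi_students, var.equal = TRUE)  # t(134) in text
wilcox.test(wiki_trust_pre ~ read4_post, data = dpi_students)             # pretest Wikipedia trust
```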
Discussion

The current study examined the efficacy of the Digital Polarization Initiative's (DPI) curriculum for teaching students the fact-checking strategies used by professional fact-checkers. In particular, we examined whether students in sections that administered the curriculum showed greater use of lateral reading at posttest than "business-as-usual" controls. We also examined whether conditions differed in self-reported use of lateral reading and in use and trust of Wikipedia at posttest. Additionally, to explore possible individual differences in student responses to the curriculum, we examined whether use of lateral reading to correctly assess the trustworthiness of online content at pretest, self-reported use of lateral reading at pretest, general media literacy knowledge at pretest, use and trust of Wikipedia at pretest, and number of online homework assignments attempted distinguished students who read laterally on at least one posttest problem from their classmates who did not read laterally at posttest.

At posttest, students who received the DPI curriculum were more likely to read laterally and accurately assess the trustworthiness of online content, as compared to their peers in the control classes. Notably, there were no differences at pretest, as students almost universally lacked these skills prior to receiving the DPI curriculum. These findings are in keeping with previous work by McGrew et al. (2019), showing that targeted instruction in civic online reasoning (including lateral reading) can improve college students' use of these skills. We also observed that the number of online assignments attempted was associated with use of lateral reading at posttest, with students in DPI sections who read laterally on at least one problem at posttest attempting more online homework assignments than students in DPI sections who failed to read laterally at posttest. This correlation suggests that time devoted to practicing the skills helped to consolidate them. However, we cannot confirm that the homework was the critical factor, as students who were more diligent with their homework may also have had better in-class attendance and participation or better comprehension skills. Students who put more time or effort into the homework assignments may also have provided more written justifications on the posttest problems that could be scored using the DPI rubric (Bråten et al., 2018).

While 61.0% of students read laterally and accurately assessed the trustworthiness of online content on at least one problem after receiving the DPI curriculum, students rarely received a score of 4 on all four problems at posttest. This finding echoes previous research showing that, even when explicitly told that they can search online for information, adults, including college students, rarely do so (Donovan & Rapp, 2020). It is possible that students were more motivated to use lateral reading on certain problems based on their interest or on how much they valued having accurate information on the topic (Metzger, 2007; Metzger & Flanagin, 2015). It is also possible that, for problems that produced a strong emotional response, students may have struggled to "check their emotions" sufficiently to read laterally and draw a correct conclusion about the trustworthiness of the online content (Berger & Milkman, 2012). Neither of these concerns would have emerged at pretest, as students were almost uniformly unaware of lateral reading strategies.

Since the DPI curriculum was delivered in class, students' responsiveness to the curriculum and their performance on the posttest may also have been affected by course-related factors. We observed an effect of instructor in the current study, which speaks to the importance of providing professional development and training for instructors teaching students lateral reading strategies. Another course-related factor that we could not account for was students' attendance during the class sessions when the curriculum was taught. Moving delivery of the DPI curriculum to an online format, e.g., by incorporating the instruction into the online homework assignments, may help ensure fidelity of implementation of the curriculum and facilitate better tracking of student participation and effort.

On average, students answered the majority (74.0%) of general media literacy knowledge items correctly at pretest. While general media literacy knowledge at pretest significantly predicted scores on the 0–4 scale at posttest, it was not a significant predictor of the dichotomized score distinguishing students who did and did not receive a score of 4 (i.e., those who did vs. did not use lateral reading to draw correct conclusions about the trustworthiness of the online content). Also, notably, students in DPI sections who received a score of 4 on at least one problem at posttest did not differ in their media literacy knowledge from students in DPI sections who never scored 4.
These findings suggest that understanding of persuasive intent and bias in media messages may have helped students recognize the need to investigate or assess the credibility of the information, but that such understanding was not sufficient to motivate them to use the fact-checking strategies to draw the correct conclusions. Traditional media literacy instruction may also be too focused on the media message, rather than on the media environment (Cohen, 2018). Students may benefit from instruction that fosters understanding of how their online behaviors and features of the Internet (e.g., use of algorithms to personalize search results) shape the specific media messages that appear in their information feeds. The need for additional instruction about the online information environment is also reflected in recent findings from Jones-Jang et al. (2019) documenting a significant association between information literacy knowledge (i.e., knowledge of how to find and evaluate online information) and the ability to identify fake news.

In addition to examining students' performance on the lateral reading problems, we also asked students to self-report their use of lateral reading (e.g., by checking information with another source or finding out more about the author of the information). At pretest, students in both conditions reported using lateral reading strategies between Sometimes and Often, even though very few students in either condition demonstrated lateral reading on any of the pretest problems. Although students in the DPI sections self-reported greater use of lateral reading as compared to controls, the DPI students who read at least one problem laterally at posttest did not differ in their self-reported use of lateral reading strategies from DPI students who failed to read laterally at posttest. These findings align with the dissociation between students' perceived and actual use of lateral reading skills observed in prior studies of students' information evaluation strategies (Brodsky et al., 2020; Hargittai et al., 2010; List & Alexander, 2018). The observed dissociation may be due to students' lack of awareness and monitoring of the strategies they use when evaluating online information (Kuhn, 1999). Instruction should aim to foster students' metastrategic awareness, as this may improve both the accuracy of their self-reported use of lateral reading and their actual use of lateral reading.

Several other explanations for this dissociation are also possible. Some students may have accurately reported their use of lateral reading at posttest, but did not receive any scores of 4 on the lateral reading problems because their trustworthiness assessments were all incorrect. Alternatively, List and Alexander (2018) suggest that the dissociation between students' self-reported and observed behaviors may be due to self-report measures reflecting students' self-efficacy and attitudes toward these behaviors or their prior success in evaluating the credibility of information, rather than their actual engagement in the target behaviors. Overall, although performance-based measures may be more time-consuming and resource-intensive than self-report assessments (Hobbs, 2017; List & Alexander, 2018; McGrew et al., 2019), they are necessary for gaining insight into students' actual fact-checking habits.
Despite the emphasis of the DPI curriculum on using Wikipedia to research sources, and Wikipedia's popularity among professional fact-checkers (Wineburg & McGrew, 2017), students in the DPI sections reported only modestly higher Wikipedia use at posttest as compared to controls, and no difference in trust. Difficulties with changing students' use and trust of Wikipedia may reflect the influence of prior experiences with secondary school teachers, librarians, and college instructors who considered Wikipedia to be an unreliable source (Garrison, 2018; Konieczny, 2016; Polk et al., 2015). While McGrew et al. (2017) argue that students should be taught how to use Wikipedia "wisely," for example, by using the references in a Wikipedia article as a jumping-off point for their lateral reading, this approach may require instructors teaching fact-checking skills to change their own perceptions of Wikipedia and familiarize themselves with how Wikipedia works. In future implementations, the DPI curriculum may benefit from incorporating strategies for conceptual change (Lucariello & Naff, 2010) to overcome instructors' and students' misconceptions about Wikipedia. Notably, our analysis of individual differences in response to the curriculum indicated that DPI students who demonstrated lateral reading at posttest were less trusting of information on Wikipedia at pretest than their peers who failed to use lateral reading at posttest. This unexpected result suggests that the lateral reading strategies were more memorable for DPI students who initially held more negative views about trusting information on Wikipedia, possibly because using Wikipedia as part of the DPI curriculum induced cognitive conflict, which can foster conceptual change (Lucariello & Naff, 2010).

Looking ahead, additional research is needed to parse out individual differences in students' responses to the DPI curriculum. Over a third of students did not read laterally on any of the problems at posttest, but this was unrelated to their use of lateral reading to correctly assess the trustworthiness of online content at pretest, their self-reported lateral reading at pretest, or their self-reported use of Wikipedia at pretest to check whether information should be trusted. Given prior work on the roles of developmental and demographic variables, information literacy training, cognitive styles, and academic performance in children's and adolescents' awareness and practice of online information verification (Metzger et al., 2015), it may be fruitful to examine the role of these variables in predicting students' responsiveness to lateral reading instruction. In addition, students' reading comprehension and vocabulary knowledge should be taken into consideration, as language abilities may impact students' success in verifying online content (Brodsky et al., 2020). Future research also needs to examine the extent to which gains in lateral reading are maintained over time and whether students use the strategies for fact-checking information outside of the classroom context.

Conclusion

The current study, conducted with a diverse sample of college students, examined the efficacy of the DPI curriculum in teaching students to fact-check online information by reading laterally. Compared to another study of college students' civic online reasoning (McGrew et al., 2019), we used a larger sample and a more intensive curriculum to teach students these skills.
Our findings indicate that the DPI curriculum increased students' use of lateral reading to draw accurate assessments of the trustworthiness of online information. Our findings also indicate the need for performance-based assessments of information verification skills, as we observed that students overestimated the extent to which they actually engaged in lateral reading. The modest gains that students made in Wikipedia use at posttest highlight an important challenge in teaching lateral reading, as college students as well as instructors may hold misconceptions about the reliability of Wikipedia and ways to use it as an information source (Garrison, 2018; Konieczny, 2016). Lastly, the lack of a relation between general media literacy knowledge and use of lateral reading to draw correct conclusions about the trustworthiness of online information suggests that understanding and skepticism of media messages alone are not sufficient to motivate fact-checking. Instead, teaching lateral reading as part of general education courses can help prepare students to navigate today's complex media landscape by offering them a new set of skills.

Availability of data and materials

The R Markdown file, analysis code, and instructional materials used in the current study are available in the Open Science Framework repository at https://osf.io/9rbkd/.

Notes

1. https://osf.io/9rbkd/.
2. Only 13.5% of the responses for the Sourcing Evidence problem in Set B were scored due to missing data or responses stating that the YouTube video was unavailable.
3. Thirty-nine additional responses had clerical errors in the manual scoring that were corrected prior to reliability calculations. There were also 53 responses that were either missing data or that stated that the YouTube video was unavailable. These responses are not included in reliability calculations.
4. Given that we expected components to be correlated, we used a direct oblimin rotation with Kaiser normalization (Costello & Osborne, 2005). For the four components with eigenvalues greater than 1.00, seven non-reverse-scored items clustered on the first component, four reverse-scored items clustered on the second component, two non-reverse-scored items clustered on the third component, and one reverse-scored item clustered on the fourth component. Three items were below our criterion of .40 for the minimum factor loading (Stevens, 2002, as cited in Field, 2009).
5. https://osf.io/9rbkd/.

References

Amazeen, M. A. (2020). Journalistic interventions: The structural factors affecting the global emergence of fact-checking. Journalism, 21(1), 95–111. https://doi.org/10.1177/1464884917730217
American Democracy Project (n.d.). Digital Polarization Initiative. American Association of State Colleges and Universities. Retrieved June 26, 2020, from https://www.aascu.org/AcademicAffairs/ADP/DigiPo/
Ashley, S., Maksl, A., & Craft, S. (2013). Developing a news media literacy scale. Journalism & Mass Communication Educator, 68(1), 7–21. https://doi.org/10.1177/1077695812469802
Association of College & Research Libraries. (2015). Framework for information literacy for higher education. Chicago: Association of College & Research Libraries. Retrieved March 2, 2021, from http://www.ala.org/acrl/files/issues/infolit/framework.pdf
Bates, D., Maechler, M., Bolker, B., & Walker, S. (2014). lme4: Linear mixed-effects models using Eigen and S4. CRAN R Package, 1(7), 15–23.
Berger, J., & Milkman, K. L. (2012). What makes online content viral? Journal of Marketing Research, 49(2), 192–205. https://doi.org/10.1509/jmr.10.0353
Blakeslee, S. (2004). The CRAAP test. LOEX Quarterly, 31(3), 6–7.
Bråten, I., Brante, E. W., & Strømsø, H. I. (2018). What really matters: The role of behavioural engagement in multiple document literacy tasks. Journal of Research in Reading, 41(4), 680–699. https://doi.org/10.1111/1467-9817.12247
Brante, E. W., & Strømsø, H. I. (2018). Sourcing in text comprehension: A review of interventions targeting sourcing skills. Educational Psychology Review, 30, 773–799. https://doi.org/10.1007/s10648-017-9421-7
Brodsky, J. E., Barshaba, C. N., Lodhi, A. K., & Brooks, P. J. (2020). Dissociations between college students' media literacy knowledge and fact-checking skills [Paper session]. AERA Annual Meeting, San Francisco, CA. Retrieved March 2, 2021, from http://tinyurl.com/saedj5t (conference canceled).
Caulfield, M. (2017a). Web literacy for student fact-checkers...and other people who care about facts. Pressbooks. Retrieved March 2, 2021, from https://webliteracy.pressbooks.com/
Caulfield, M. (2017b). Four moves: Adventures in fact-checking for students. Retrieved March 2, 2021, from https://fourmoves.blog/
Caulfield, M. (2018a). Greyhound Border Patrol. Four moves: Adventures in fact-checking for students. Retrieved March 2, 2021, from https://fourmoves.blog/2018/01/27/greyhound-border-patrol/
Caulfield, M. (2018b). Detained by ICE? Four moves: Adventures in fact-checking for students. Retrieved March 2, 2021, from https://fourmoves.blog/2018/06/16/detained-by-ice/
Caulfield, M. (2018c). Immigration crime wave? Four moves: Adventures in fact-checking for students. Retrieved March 2, 2021, from https://fourmoves.blog/2018/06/25/immigration-crime-wave/
Chen, S., & Chaiken, S. (1999). The heuristic-systematic model in its broader context. In S. Chaiken & Y. Trope (Eds.), Dual-process theories in social psychology (pp. 73–96). Guilford Press.
Christensen, R. H. B. (2019). ordinal—Regression models for ordinal data. R package version 2019.12-10. Retrieved March 2, 2021, from https://CRAN.R-project.org/package=ordinal
Cohen, J. (1968). Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4), 213–220. https://doi.org/10.1037/h0026256
Cohen, J. N. (2018). Exploring echo-systems: How algorithms shape immersive media environments. Journal of Media Literacy Education, 10(2), 139–151. https://doi.org/10.23860/JMLE-2018-10-2-8
Costello, A. B., & Osborne, J. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research & Evaluation, 10, 1–9.
Donovan, A. M., & Rapp, D. N. (2020). Look it up: Online search reduces the problematic effects of exposures to inaccuracies. Memory and Cognition, 48, 1128–1145. https://doi.org/10.3758/s13421-020-01047-z
Faix, A., & Fyn, A. (2020). Framing fake news: Misinformation and the ACRL Framework. portal: Libraries and the Academy, 20(3), 495–508. https://doi.org/10.1353/pla.2020.0027
Field, A. (2009). Discovering statistics using SPSS (and sex and drugs and rock'n'roll) (3rd ed.). Sage.
Garrison, J. C. (2018). Instructor and peer influence on college student use and perceptions of Wikipedia. The Electronic Library, 36(2), 237–257. https://doi.org/10.1108/EL-02-2017-0034
Graves, L. (2017). Anatomy of a fact check: Objective practice and the contested epistemology of fact checking. Communication, Culture and Critique, 10(3), 518–537. https://doi.org/10.1111/cccr.12163
Graves, L., & Amazeen, M. (2019). Fact-checking as idea and practice in journalism. Oxford University Press. https://doi.org/10.1093/acrefore/9780190228613.013.808
Hargittai, E., Fullerton, L., Menchen-Trevino, E., & Thomas, K. Y. (2010). Trust online: Young adults' evaluation of web content. International Journal of Communication, 4, 468–494.
Head, A. J., & Eisenberg, M. B. (2010). How today's college students use Wikipedia for course-related research. First Monday. https://doi.org/10.5210/fm.v15i3.2830
Hobbs, R. (2010). Digital and media literacy: A plan of action. The Aspen Institute. Retrieved March 2, 2021, from https://eric.ed.gov/?id=ED523244
Hobbs, R. (2017). Measuring the digital and media literacy competencies of children and teens. In F. C. Blumberg & P. J. Brooks (Eds.), Cognitive development in digital contexts (pp. 253–274). Elsevier. https://doi.org/10.1016/B978-0-12-809481-5.00013-4
Hobbs, R., & Jensen, A. (2009). The past, present, and future of media literacy education. Journal of Media Literacy Education, 1(1), 1–11.
Jeong, S. H., Cho, H., & Hwang, Y. (2012). Media literacy interventions: A meta-analytic review. Journal of Communication, 62(3), 454–472. https://doi.org/10.1111/j.1460-2466.2012.01643.x
Jones-Jang, S. M., Mortensen, T., & Liu, J. (2019). Does media literacy help identification of fake news? Information literacy helps, but other literacies don't. American Behavioral Scientist. https://doi.org/10.1177/0002764219869406
Koltay, T. (2011). The media and the literacies: Media literacy, information literacy, digital literacy. Media, Culture & Society, 33(2), 211–221. https://doi.org/10.1177/0163443710393382
Konieczny, P. (2016). Teaching with Wikipedia in a 21st-century classroom: Perceptions of Wikipedia and its educational benefits. Journal of the Association for Information Science and Technology, 67(7), 1523–1534. https://doi.org/10.1002/asi.23616
Kuhn, D. (1999). A developmental model of critical thinking. Educational Researcher, 28(2), 16–46. https://doi.org/10.3102/0013189X028002016
List, A., & Alexander, P. A. (2018). Corroborating students' self-reports of source evaluation. Behaviour & Information Technology, 37(3), 198–216. https://doi.org/10.1080/0144929X.2018.1430849
List, A., Grossnickle, E. M., & Alexander, P. A. (2016). Undergraduate students' justifications for source selection in a digital academic context. Journal of Educational Computing Research, 54(1), 22–61. https://doi.org/10.1177/0735633115606659
Lucariello, J., & Naff, D. (2010). How do I get my students over their alternative conceptions (misconceptions) for learning. American Psychological Association. Retrieved March 2, 2021, from http://www.apa.org/education/k12/misconceptions
Maksl, A., Craft, S., Ashley, S., & Miller, D. (2017). The usefulness of a news media literacy measure in evaluating a news literacy curriculum. Journalism & Mass Communication Educator, 72(2), 228–241. https://doi.org/10.1177/1077695816651970
McGrew, S., Breakstone, J., Ortega, T., Smith, M., & Wineburg, S. (2018). Can students evaluate online sources? Learning from assessments of civic online reasoning. Theory & Research in Social Education, 46(2), 165–193. https://doi.org/10.1080/00933104.2017.1416320
McGrew, S., Ortega, T., Breakstone, S., & Wineburg, S. (2017). The challenge that's bigger than fake news: Civic reasoning in a social media environment. American Educator, 41(3), 4–9.
McGrew, S., Smith, M., Breakstone, J., Ortega, T., & Wineburg, S. (2019). Improving university students' web savvy: An intervention study. British Journal of Educational Psychology, 89(3), 485–500. https://doi.org/10.1111/bjep.12279
Meola, M. (2004). Chucking the checklist: A contextual approach to teaching undergraduates web-site evaluation. portal: Libraries and the Academy, 4(3), 331–344. https://doi.org/10.1353/pla.2004.0055
Metzger, M. J. (2007). Making sense of credibility on the Web: Models for evaluating online information and recommendations for future research. Journal of the American Society for Information Science and Technology, 58, 2078–2091. https://doi.org/10.1002/asi.20672
Metzger, M. J., & Flanagin, A. J. (2015). Psychological approaches to credibility assessment online. In S. S. Sundar (Ed.), The handbook of the psychology of communication technology (pp. 445–466). Wiley. https://doi.org/10.1002/9781118426456.ch20
Metzger, M. J., Flanagin, A. J., Markov, A., Grossman, R., & Bulger, M. (2015). Believing the unbelievable: Understanding young people's information literacy beliefs and practices in the United States. Journal of Children and Media, 9(3), 325–348. https://doi.org/10.1080/17482798.2015.1056817
Musgrove, A. T., Powers, J. R., Rebar, L. C., & Musgrove, J. G. (2018). Real or fake? Resources for teaching college students how to identify fake news. College & Undergraduate Libraries, 25(3), 243–260. https://doi.org/10.1080/10691316.2018.1480444
Pennycook, G., Cannon, T. D., & Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General, 147(12), 1865–1880. https://doi.org/10.1037/xge0000465
Pew Research Center. (2019a). Internet/broadband fact sheet [Fact sheet]. Retrieved March 2, 2021, from https://www.pewresearch.org/internet/fact-sheet/internet-broadband/#who-uses-the-internet
Pew Research Center. (2019b). Social media fact sheet [Fact sheet]. Retrieved March 2, 2021, from https://www.pewresearch.org/internet/fact-sheet/social-media/
Polk, T., Johnston, M. P., & Evers, S. (2015). Wikipedia use in research: Perceptions in secondary schools. TechTrends, 59, 92–102. https://doi.org/10.1007/s11528-015-0858-6
Powers, K. L., Brodsky, J. E., Blumberg, F. C., & Brooks, P. J. (2018). Creating developmentally-appropriate measures of media literacy for adolescents. In Proceedings of the Technology, Mind, and Society (TechMindSociety'18) (pp. 1–5). Association for Computing Machinery. https://doi.org/10.1145/3183654.3183670
R Core Team. (2018). R: A language and environment for statistical computing. R Foundation for Statistical Computing. Retrieved March 2, 2021, from https://www.R-project.org
RStudio Team. (2016). RStudio: Integrated development for R. Boston, MA: RStudio, Inc. Retrieved March 2, 2021, from http://www.rstudio.com/
Stanford History Education Group (n.d.). Civic online reasoning. https://cor.stanford.edu/
UCLA Statistical Consulting Group (n.d.). Choosing the correct statistical test in SAS, SPSS, and R. Retrieved December 4, 2020, from https://stats.idre.ucla.edu/other/mult-pkg/whatstat/
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
Wiley, J., Goldman, S. R., Graesser, A. C., Sanchez, C. A., Ash, I. K., & Hemmerich, J. A. (2009). Source evaluation, comprehension, and learning in Internet science inquiry tasks. American Educational Research Journal, 46(4), 1060–1106. https://doi.org/10.3102/0002831209333183
Wineburg, S., & McGrew, S. (2017). Lateral reading: Reading less and learning more when evaluating digital information (Stanford History Education Group Working Paper No. 2017-A1). Retrieved March 2, 2021, from https://ssrn.com/abstract=3048994
Wineburg, S., & McGrew, S. (2018). Lateral reading and the nature of expertise: Reading less and learning more when evaluating digital information (Stanford Graduate School of Education Open Archive). Retrieved March 2, 2021, from https://searchworks.stanford.edu/view/yk133ht8603
Wineburg, S., Breakstone, J., Ziv, N., & Smith, M. (2020). Educating for misunderstanding: How approaches to teaching digital literacy make students susceptible to scammers, rogues, bad actors, and hate mongers (Stanford History Education Group Working Paper No. A-21322). Retrieved March 2, 2021, from https://purl.stanford.edu/mf412bt5333

Acknowledgements

We thank Jay Verkuilen of The Graduate Center, City University of New York, for statistical consultation. Preliminary results were presented at the APS-STP Teaching Institute at the Annual Convention of the Association for Psychological Science held in May 2019 and at the American Psychological Association's Technology, Mind, and Society Conference held in October 2019.

Funding

The authors have no sources of funding to declare.

Author information

Affiliations
The Graduate Center, CUNY, 365 5th Ave, New York, NY, 10016, USA: Jessica E. Brodsky & Patricia J. Brooks
The College of Staten Island, CUNY, 2800 Victory Blvd, Staten Island, NY, 10314, USA: Jessica E. Brodsky, Patricia J. Brooks, Donna Scimeca, Peter Galati, Michael Batson, Robert Grosso, Michael Matthews & Victor Miller
Lehman College, CUNY, 250 Bedford Park Boulevard West, Bronx, NY, 10468, USA: Ralitsa Todorova
Washington State University Vancouver, 14204 NE Salmon Creek Ave, Vancouver, WA, 98686, USA: Michael Caulfield
Contributions
JEB and PJB prepared the online homework assignments, analyzed the data, and prepared the manuscript. JEB and RT coded students' open responses for use of lateral reading. DS, PG, MB, RG, MM, and VM contributed to the design of the online homework assignments and implemented the DPI curriculum in their course sections. MC developed the in-class instructional materials and the lateral reading problems. All authors read and approved the final manuscript.

Corresponding author
Correspondence to Jessica E. Brodsky.

Ethics declarations
Ethics approval and consent to participate: The research protocol was classified as exempt by the university's institutional review board.
Consent for publication: Not applicable.
Competing interests: The authors declare that they have no competing interests.

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Percentage of students with accurate media literacy knowledge by item and condition at pretest (N = 230; N_DPI = 136, N_Control = 94). Agreement scores are on a scale of 1 = Strongly Disagree to 5 = Strongly Agree.

Item | Agreement DPI M (SD) | Agreement Control M (SD) | Accuracy DPI % (SD) | Accuracy Control % (SD)
A news story that has good pictures is less likely to get published. (reverse-scored) | 2.77 (0.91) | 2.64 (0.83) | 45.6% (50.0) | 41.5% (49.5)
People who advertise think very carefully about the people they want to buy their product | 3.94 (0.99) | 4.11 (0.91) | 74.3% (43.9) | 84.0% (36.8)
When you see something on the Internet the creator is trying to convince you to agree with their point of view | 3.78 (0.76) | 3.70 (0.80) | 69.1% (46.4) | 64.9% (48.0)
People are influenced by news whether they realize it or not | 4.04 (0.81) | 4.16 (0.79) | 80.1% (40.0) | 83.0% (37.8)
Two people might see the same news story and get different information from it | 4.10 (0.81) | 4.12 (0.82) | 86.0% (34.8) | 85.1% (35.8)
Photographs your friends post on social media are an accurate representation of what is going on in their life. (reverse-scored) | 2.29 (1.03) | 2.18 (0.99) | 64.7% (48.0) | 67.0% (47.3)
People pay less attention to news that fits with their beliefs than news that doesn't. (reverse-scored) | 3.08 (1.11) | 3.11 (0.97) | 32.4% (47.0) | 26.6% (44.4)
Advertisements usually leave out a lot of important information | 3.90 (0.90) | 3.94 (0.88) | 73.5% (44.3) | 75.5% (43.2)
News makers select images and music to influence what people think | 3.98 (0.79) | 4.01 (0.71) | 79.3% (40.7) | 81.9% (38.7)
Sending a document or picture to one friend on the Internet means no one else will ever see it. (reverse-scored) | 1.74 (0.80) | 1.71 (0.88) | 83.8% (37.0) | 84.0% (36.8)
Individuals can find news sources that reflect their own political values | 3.93 (0.77) | 4.05 (0.68) | 80.1% (40.0) | 81.9% (38.7)
A reporter's job is to tell the truth (a) | 3.11 (1.20) | 3.07 (1.20) | 37.5% (48.6) | 39.4% (49.1)
News companies choose stories based on what will attract the biggest audience | 4.23 (0.80) | 4.20 (0.85) | 84.6% (36.3) | 84.9% (36.0)
When you see something on the Internet you should always believe that it is true. (reverse-scored) | 1.76 (0.92) | 1.60 (0.69) | 83.8% (37.0) | 92.6% (26.4)
Two people may see the same movie or TV show and get very different ideas about it | 4.40 (0.69) | 4.31 (0.76) | 92.6% (26.2) | 91.5% (28.1)
News coverage of a political candidate does not influence people's opinions. (reverse-scored) | 2.13 (1.00) | 2.26 (0.97) | 69.1% (46.4) | 70.2% (46.0)
People are influenced by advertisements, whether they realize it or not | 4.13 (0.79) | 4.20 (0.73) | 86.8% (34.0) | 87.1% (33.7)
Movies and TV shows don't usually show life like it really is | 3.66 (1.01) | 3.78 (0.96) | 62.5% (48.6) | 69.1% (46.4)
Overall Mean (17 items) | 3.90 (0.42) | 3.95 (0.43) | 73.4% (20.3) | 74.7% (20.7)

Items were reverse-scored prior to calculating overall means and standard deviations. (a) Item removed due to low item-rest correlation.

Rights and permissions
Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article
Cite this article: Brodsky, J.E., Brooks, P.J., Scimeca, D. et al. Improving college students' fact-checking strategies through lateral reading instruction in a general education civics course. Cogn. Research 6, 23 (2021). https://doi.org/10.1186/s41235-021-00291-4
Received: 30 June 2020. Accepted: 17 March 2021. Published: 31 March 2021.
Keywords: Fact-checking instruction, Lateral reading, Media literacy, Wikipedia, College students
Associated Content: Collection "The Psychology of Fake News"
doi-org-9671 ---- A Survey on Long-Range Attacks for Proof of Stake Protocols | IEEE Journals & Magazine | IEEE Xplore

dorothyparker-com-5416 ---- 'News Item' and 'Résumé' Enter Public Domain January 1 | Dorothy Parker Society

Posted on December 18, 2020 (updated February 2, 2021) by Kevin Fitzpatrick

Do you celebrate New Year's Day or Public Domain Day? For Dorothy Parker fans, why not both? Just as we reported last year, the turning of the calendar pages of U.S. copyright law means that on January 1, 2021, more works of art, film, music, poetry, and writing will enter the public domain. This milestone covers works published in 1925 whose copyrights have been lifted. While in some quarters the news that The Great Gatsby is now out of copyright will be celebrated, Dorothy Parker makes the list with 25 poems, including her "greatest hits" collection. This is colloquially known as Public Domain Day. What is happening, how U.S. law is interpreted, and what the hell Sonny Bono and Mickey Mouse have to do with copyright law is explained here.

What this means is that what Parker was doing in 1925 during the Speakeasy Era matters in the Covid Era. Parker published 25 poems and pieces of free verse in 1925 that can now be used without paying her estate, which is controlled by the NAACP. Incredible as it may sound, this also includes the most famous ones that she gave away to her friend and mentor, Franklin P. Adams, to be published in the New York World on the same day, August 16, 1925, in his "Conning Tower" column. Under Parker's heading "Some Beautiful Letters" were six of her most beloved pieces: "Observation," "Social Note," "News Item" ("Men seldom make passes/At girls who wear glasses"), "Interview," "Comment," and possibly Parker's most well-known, "Résumé."

These 25 can now be used in any manner; some are already on tattoos, of course. "Observation" attained some acclaim when it was included in Mrs. Parker and the Vicious Circle, performed by Jennifer Jason Leigh.

Observation
If I don't drive around the park,
I'm pretty sure to make my mark.
If I'm in bed each night by ten,
I may get back my looks again,
If I abstain from fun and such,
I'll probably amount to much,
But I shall stay the way I am,
Because I do not give a damn.

These are the 25 Dorothy Parker poems that will enter the public domain in the United States on January 1, 2021. All were published for the first time in 1925, and their copyrights will expire.
"Song of Perfect Propriety"
"Balto"
"Cassandra Drops Into Verse"
"I Shall Come Back"
"Biographies"
"A Dream Lies Dead"
"Story of Mrs. W–"
"Little Song"
"Braggart"
"Epitaph"
"Threnody"
"Epitaph For A Darling Lady"
"Some Beautiful Letters": "Observation," "Social Note," "News Item," "Interview," "Comment," "Résumé"
"Convalescent"
"Wail"
"Testament"
"Recurrence"
"August"
"Hearthside"
"Rainy Night"

The only other Parker writing published in 1925 was a few reviews for The New Yorker, which debuted in February 1925. This means everything from the first year of the magazine also enters the public domain on January 1. (If you produce any coffee mugs or tote bags, please send them our way.) In 1925 Parker did not sell any short stories or essays, as far as we know.

Other 1925 books by big names besides F. Scott Fitzgerald will also be out of copyright. These include Theodore Dreiser's An American Tragedy, Ernest Hemingway's In Our Time, John Dos Passos's Manhattan Transfer, and Virginia Woolf's Mrs. Dalloway. Among the films are Harold Lloyd's The Freshman and The Merry Widow, and Buster Keaton's Go West, His People, and Lovers in Quarantine (extremely appropriate today, even if this film's quarantine is only one week). The Big Parade (directed by King Vidor), the first major WWI movie and the biggest box office success of the decade, will also be public domain material. So is Charlie Chaplin's short, The Gold Rush.

Edward Hopper, House By The Railroad (Museum of Modern Art Collection)

Among the hundreds, if not thousands, of pieces of music are "Always," by Irving Berlin; "Sweet Georgia Brown," by Ben Bernie, Maceo Pinkard & Kenneth Casey; and works by Gertrude 'Ma' Rainey, the "Mother of the Blues," including "Army Camp Harmony Blues" (with Hooks Tilford) and "Shave 'Em Dry" (with William Jackson). In visual art, paintings include Edward Hopper's House by the Railroad (owned by the Museum of Modern Art, New York) and Picasso's Les Trois Danseuses (The Three Dancers) at the Tate Gallery, London.

Public Domain Day ties into the first two aims of the Dorothy Parker Society, founded in 1999: "To promote the work of Dorothy Parker" and "To introduce new readers to the work of Dorothy Parker." While 2021 is good for public domain works, 2022 looks to be special too.
drive-google-com-6440 ---- Subspace: A solution to the farmer's dilemma.pdf - Google Drive

drive-google-com-6909 ---- Inclusive Terminology Guide Glossary - Carissa Chew - NLS - 1.0.pdf - Google Drive

duraspace-org-4590 ---- Fedora Migration Paths and Tools Project Update: May 2021 - Duraspace.org

Fedora Migration Paths and Tools Project Update: May 2021
Posted on May 28, 2021 by David Wilcox

This is the eighth in a series of monthly updates on the Fedora Migration Paths and Tools project – please see last month's post for a summary of the work completed up to that point. This project has been generously funded by the IMLS.

The University of Virginia has completed their data migration and successfully indexed the content into a new Fedora 6.0 instance deployed in AWS using the fcrepo-aws-deployer tool. They have also tested the fcrepo-migration-validator tool and provided some initial feedback to the team for improvements. Some work remains to update front-end indexes for the content in Fedora 6.0, and the team will also investigate some performance issues that were encountered while migrating and indexing content in the Amazon AWS environment in order to document any relevant recommendations for institutions wishing to migrate to a similar environment.
Based on this work, we will be offering an initial online workshop on Migrating from Fedora 3.x to Fedora 6.0. This workshop is free to attend with limited capacity so please register in advance. This is a technical workshop pitched at an intermediate level. Prior experience with Fedora is preferred, and participants should be comfortable using a Command Line Interface and Docker. The workshop will take place on June 22 at 11am ET.

The Whitman College team has been busy iterating on test migrations of representative collections into a staging server using the islandora_workbench tool. The team has been making updates to the migration tool, site configuration, and documentation along the way to better support future migrations. In particular, the work the team has done to iterate on the spreadsheets until they were properly configured for ingest will be very useful to other institutions interested in following a similar path. Once the testing and validation of functional requirements is complete we will begin the full migration into the production site.

We are nearing the end of the pilot phase of the grant, after which we will finalize a draft of the migration toolkit and share it with the community for feedback. While this toolkit will be openly available for anyone who would like to review it, we are particularly interested in working with institutions with existing Fedora 3.x repositories that would like to test the tools and documentation and provide feedback to help us improve the resources. If you would like to be more closely involved in this effort please contact David Wilcox for more information.

Tags: Blog, Fedora, Fedora Repository, News

duraspace-org-4777 ---- Home - Duraspace.org
duraspace-org-4777 ---- Home - Duraspace.org
Help us preserve and provide access to the world's intellectual, cultural and scientific heritage.
Latest News: 8.02.21 DSpace 7 Press Release; 7.28.21 Fedora Migration Paths and Tools Project Update: July 2021; 7.27.21 VIVO Announces Dragan Ivanović as Technical Lead.
Our Global Community: The community DuraSpace serves is alive with ideas and innovation aimed at collaboratively meeting the needs of the scholarly ecosystem that connects us all. Our global community contributes to the advancement of DSpace, Fedora and VIVO. At the same time, subscribers to DuraSpace services are helping to build best practices for delivery of high-quality customer service. We are grateful for our community's continued support and engagement in the enterprise we share as we work together to provide enduring access to the world's digital heritage.
Open Source Projects: The Fedora, DSpace and VIVO community-supported projects provide more than 2,500 users in more than 120 countries with freely available open source software. Fedora is a flexible repository platform with native linked data capabilities. DSpace is a turnkey institutional repository application. VIVO creates an integrated record of the scholarly work of your organization.
Our Services: ArchivesDirect, DSpaceDirect, and DuraCloud services from DuraSpace provide access to institutional resources, preservation of treasured collections, and simplified data management tools. Our services are built on solid open source software platforms, can be set up quickly, and are competitively priced. Staff experts work directly with customers to provide personalized onboarding and superb customer support. DuraCloud is a hosted service that lets you control where and how your content is preserved in the cloud. DSpaceDirect is a hosted turnkey repository solution. ArchivesDirect is a complete, hosted archiving solution.
duraspace-org-6381 ---- Fedora Migration Paths and Tools Project Update: July 2021 - Duraspace.org
Posted on July 28, 2021 by David Wilcox
This is the latest in a series of monthly updates on the Fedora Migration Paths and Tools project; please see the previous post for a summary of the work completed up to that point. This project has been generously funded by the IMLS.
We completed some final performance tests and optimizations for the University of Virginia pilot. Both the migration to their AWS server and the Fedora 6.0 indexing operation were much slower than anticipated, so the project team tested a number of optimizations, including:
- adding more processing threads
- increasing the size of the server instance
- using a separate, larger database server
- using locally attached flash storage
These improvements made a big difference: ingest speed, for example, increased from 6.8 resources per second to 45.6 resources per second. In general, this means that institutions with specific performance targets can reach them through a combination of parallel processing and increased computational resources. Feedback from this pilot has been incorporated into the migration guide, along with updates to migration-utils to improve performance, updates to the aws-deployer tool to provide additional options, and improvements to the migration-validator to handle errors.
The Whitman College team has begun its production migration using Islandora Workbench. Initial benchmarking showed that running Workbench from the production server rather than locally on a laptop achieves much better performance, so this is the recommended approach. The team is working collection by collection, using CSV files and a tracking spreadsheet to record each collection as it is ingested and ready to be tested. They have also developed a quality-control checklist to make sure everything is working as intended; we anticipate doing detailed checks on the first few collections and spot checks on subsequent collections.
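The practical impact of that ingest-speed improvement is easy to see with a back-of-the-envelope calculation. The repository size below is a hypothetical figure; the two rates are the ones reported above.

RESOURCES_TO_MIGRATE = 2_000_000                        # hypothetical number of resources
RATES = {"before tuning": 6.8, "after tuning": 45.6}    # resources per second, as reported

for label, rate in RATES.items():
    hours = RESOURCES_TO_MIGRATE / rate / 3600
    print(f"{label}: {hours:,.1f} hours ({hours / 24:.1f} days)")

# before tuning: ~81.7 hours (3.4 days); after tuning: ~12.2 hours (0.5 days)

At that scale, the tuned configuration turns a multi-day migration window into roughly half a day.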
As we near the end of the pilot phase of the grant work, we are focused on documentation for the migration toolkit. We plan to complete a draft of this documentation over the summer, after which it will be shared with the broader community for feedback. We will organize meetings in the fall to give community members opportunities to provide additional feedback on the toolkit and suggest improvements.
Tags: Blog, Fedora, Fedora Repository, News, Open source
duraspace-org-9385 ---- News – Duraspace.org
Recent posts:
- Fedora Migration Paths and Tools Project Update: July 2021: We completed some final performance tests and optimizations for the University...
- All Aboard for Fedora 6.0: As you may have heard, earlier this month the Fedora 6.0 Release Candidate was announced, which means we are moving full steam ahead toward an official full production release of the software. After 2 long years of laying down the tracks to guide us toward a shiny new Fedora, this train is nearly ready to...
- Fedora Migration Paths and Tools Project Update: May 2021: The University of Virginia has completed their data migration and successfully...
- Fedora Migration Paths and Tools Project Update: April 2021: Born Digital has set up both staging and production servers for...
- Meet the Members: Welcome to the first in a series of blog posts aimed at introducing you to some of the movers and shakers who work tirelessly to advocate, educate and promote Fedora and other community-supported programs like ours. At Fedora, we are strong because of our people and without individuals like this advocating for continued development we...
- Fedora Migration Paths and Tools Project Update: January 2021: The grant team has been focused on completing an initial build...
- Fedora Migration Paths and Tools Project Update: December 2020: The Principal Investigator, David Wilcox, participated in a presentation for CNI...
- Fedora 6 Alpha Release is Here: Today marks a milestone in our progress toward Fedora 6 – the Alpha Release is now available for download and testing! Over the past year, our dedicated Fedora team, along with an extensive list of active community members and committers, have been working hard to deliver this exciting release to all of our users. So...
- Fedora Migration Paths and Tools Project Update: October 2020: This is the first in a series of monthly blog posts that will provide updates on the IMLS-funded Fedora Migration Paths and Tools: a Pilot Project. The first phase of the project began in September with kick-off meetings for each pilot partner: the University of Virginia and Whitman College. These meetings established roles and responsibilities...
- Fedora in the time of COVID-19: The impacts of coronavirus disease 2019 are being felt around the world, and access to digital materials is essential in this time of remote work and study. The Fedora community has been reflecting on the value of our collective digital repositories in helping our institutions and researchers navigate this unprecedented time. Many member institutions have...
eadiva-com-1144 ---- Occupation | EADiva
<occupation> is a term identifying a type of work, profession, trade, business, or avocation significantly reflected in the described materials.
Not all occupations need to be tagged, but those for which greater retrieval should be provided may be tagged. The <occupation> element may be used to tag an occupation within a paragraph, with the NORMAL attribute supplying the proper form of the term. Occupations that feature significantly in the material should be listed within <controlaccess> even if they are indicated elsewhere.
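To make the markup concrete, here is a small sketch that builds an <occupation> element with a NORMAL attribute using Python's standard library. The <controlaccess> wrapper and the particular term are assumptions chosen for the example rather than requirements from the entry above.

import xml.etree.ElementTree as ET

# Build a <controlaccess> wrapper holding one <occupation> access term.
controlaccess = ET.Element("controlaccess")
occupation = ET.SubElement(
    controlaccess,
    "occupation",
    attrib={"normal": "Screenwriters"},  # NORMAL carries the proper (normalized) form of the term
)
occupation.text = "screenwriter"         # the form as it appears in the running text

print(ET.tostring(controlaccess, encoding="unicode"))
# <controlaccess><occupation normal="Screenwriters">screenwriter</occupation></controlaccess>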