Thursday, January 18, 2018

GitHub Giveth; Wikipedia Taketh Away


One of the joys of administering Free-Programming-Books, the second most popular repo on GitHub, has been accepting pull requests (edits) from new contributors, including contributors who have never contributed to an open source project before. I always say thank you. I imagine that these contributors might go on to use what they've learned to contribute to other projects, and perhaps to start their own projects. We have some hoops to jump through: there's a linter run by Travis CI that demands alphabetical order, even for Cyrillic and CJK names, though I'm not entirely sure how those get "alphabetized". But I imagine that new and old contributors get some satisfaction when their contribution gets "merged into master", no matter how much that sounds like yielding to the hierarchy.
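Out of curiosity about how such a check works, here's a minimal sketch of an alphabetical-order lint, assuming a markdown file of "* [Title](url)" list items. The file name and the use of Intl.Collator are my guesses, not the repo's actual linter code:

```typescript
// Minimal sketch of an alphabetical-order check, in the spirit of the
// repo's linter (hypothetical; not the actual Free-Programming-Books code).
import { readFileSync } from "fs";

// Locale-aware comparison; a plain "<" on strings compares raw code
// points, which is one reason Cyrillic and CJK ordering can surprise you.
const collator = new Intl.Collator();

// Pull list-item titles like "* [Title](url)" out of a markdown file.
const titles = readFileSync("free-programming-books.md", "utf8")
  .split("\n")
  .filter((line) => line.startsWith("* ["))
  .map((line) => line.slice(3, line.indexOf("]")));

// Flag any adjacent pair that's out of order.
for (let i = 1; i < titles.length; i++) {
  if (collator.compare(titles[i - 1], titles[i]) > 0) {
    console.error(`Out of order: "${titles[i]}" should come before "${titles[i - 1]}"`);
  }
}
```

A naive code-point comparison is what makes non-Latin ordering feel mysterious; a locale-aware collator at least makes the rule reproducible.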

Contributing to Wikipedia is a different experience. Wikipedia accepts whatever edits you push to it, unless the topic has been locked down. No one says thank you. It's a rush to see your edit live on the most consulted and trusted site on the internet. But then someone comes and reverts or edits your edit. And instantly the emotional state of a new Wikipedia editor changes from enthusiasm to bitter disappointment and annoyance at the legalistic (and typically white male) Wikipedian.

Psychologists know that rewards are more effective motivators than punishments, so maybe the workflow used on GitHub is kinder than that used on Wikipedia. Vandalism and spam are a difficult problem for truly open systems, and contention is even harder. Wikipedia wastes a lot of energy on contentious issues. The GitHub workflow simplifies the avoidance of contention and vandalism, but sacrifices a bit of openness by depending a lot on the humans with merge privileges. There are still problems: every programmer has had the horrible experience of a harsh or petty code review, but at least there are tools that facilitate and document discussion.

The saving grace of the GitHub workflow is that if the maintainers of a repo are mean or incompetent, you can just fork the repo and try to do better. In Wikipedia, controversy gets pushed up a hierarchy of privileged clerics. The Wikipedia clergy does an amazingly good job, considering what they're up against, and their workings are in the open for the most part, but the lowly wiki-parishioner rarely experiences joy when they get involved. In principle, you can fork Wikipedia, but what good would it do you?

The miracle of Wikipedia has taught us a lot; as we struggle to modernize our society's methods of establishing truth, we need to also learn from GitHub.

Update 1/19: It seems this got picked up by Hacker News. The comment by @avian is worth noting. The flip side of my post is that Wikipedia offers immediate gratification, while a poorly administered GitHub repo can let contributions languish forever, resulting in frustration and disappointment. That's something repo admins need to learn from Wikipedia!

Friday, December 29, 2017

2017: Not So Prime

Mathematicians call 2017 a prime year because 2017 has no factors other than 1 and itself. Those crazy number theorists.

I try to write at least one post here per month. I managed two in January. One of them raged at a Trump executive order that compelled federal libraries to rat on their users. Update: Trump is still president.  The second pointed out that Google had implemented cookie-like user tracking on previously un-tracked static resources like Google Fonts, jQuery, and Angular. Update: Google is still user-tracking these resources.

For me, the highlight of January was marching in Atlanta's March for Social Justice and Women with a group of librarians.  Our chant: "Read, resist, librarians are pissed!"



In February, I wrote about how to minimize the privacy impact of using Google Analytics. Update: Many libraries and publishers use Google Analytics without minimizing privacy impact.

In March, I bemoaned the intense user tracking that scholarly journals force on their readers. Update: Some journals have switched to HTTPS (good) but still let advertisers track every click their readers make.

I ran my first-ever half-marathon!



In April, I invented CC-licensed "clickstream poetry" to battle the practice of ISPs selling my clickstream.  Update: I sold an individual license to my poem!

Science March NYC 2017
I dressed up as the "Trump Resistor" for the Science March in New York City. For a brief moment I trended on Twitter. As a character in Times Square, I was more popular than the Naked Cowboy!

In May, I tried to explain Readium's "lightweight DRM". Update: No one really cares - DRM is a fig-leaf anyway.

In June, I wrote about digital advertising and how it has eviscerated privacy in digital libraries.  Update: No one really cares - as long as PII is not involved.

I took on the administration of the free-programming-books repo on GitHub.  At almost 100,000 stars, it's the 2nd most popular repo on all of GitHub, and it amazes me. If you can get 1,000 contributors working together towards a common goal, you can accomplish almost anything!

In July, I wrote that works "ascend" into the public domain. Update: I'm told that Saint Peter has been reading the ascending-next-monday-but-not-in-the-US "Every Man Dies Alone".

I went to Sweden, hiked up a mountain in Lappland, and saw many reindeer.



In August, I described how the National Library of Medicine lets Google connect Pubmed usage to Doubleclick advertising profiles. Update: the National Library of Medicine still lets Google connect Pubmed usage to Doubleclick advertising profiles.

In September, I described how user interface changes in Chrome would force many publishers to switch to HTTPS to avoid shame and embarrassment.  Update: Publishers such as Elsevier, Springer and Proquest switched services to HTTPS, avoiding some shame and embarrassment.

I began to mentor two groups of computer-science seniors from Stevens Institute of Technology, working on projects for Unglue.it and Gitenberg. They are a breath of fresh air!

In October, I wrote about new ideas for improving user experience in ebook reading systems. Update: Not all book startups have died.

In November, I wrote about how the Supreme Court might squash an improvement to the patent system. Update: no ruling yet.

I ran a second half marathon!


In December, I'm writing this summary. Update: I've finished writing it.

On the bright side, we won't have another prime year until 2027. 2018 is twice a prime year. That hasn't happened since 1994, the year Yahoo was launched and the year I made my first web page!
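If you'd rather trust code than mental arithmetic, here's a throwaway sketch that checks these claims (the helper function is mine):

```typescript
// Quick check of the prime-year arithmetic above.
function isPrime(n: number): boolean {
  if (n < 2) return false;
  for (let d = 2; d * d <= n; d++) {
    if (n % d === 0) return false;
  }
  return true;
}

console.log(isPrime(2017)); // true: 2017 is a prime year
console.log(isPrime(2027)); // true: the next prime year
console.log(isPrime(2018 / 2), isPrime(1994 / 2)); // true true: both twice a prime

// No year between 1994 and 2018 is twice a prime:
for (let y = 1995; y < 2018; y++) {
  if (y % 2 === 0 && isPrime(y / 2)) console.log(y);
} // prints nothing
```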

Sunday, November 26, 2017

Inter Partes Review is Improving the Patent System

Today (Monday, November 27), the Supreme Court is hearing a case, Oil States Energy Services, LLC v. Greene’s Energy Group, LLC, that seeks to end a newish procedure called inter partes review (IPR). The arguments in Oil States will likely focus on arcane constitutional principles and crusty precedents from the Privy Council of England; go read the SCOTUSblog overview if that sort of thing interests you. Whatever the arguments, if the Court decides against IPR proceedings, it will be a big win for patent trolls, so it's worth understanding what these proceedings are and how they are changing the patent system. I've testified as an expert witness in some IPR proceedings, so I've had a front row seat for this battle over technology and innovation.

A bit of background: the inter partes review was introduced by the "America Invents Act" of 2011,  which was the first major update of the US patent system since the dawn of the internet. To understand how it works, you first have to understand some of the existing patent system's perverse incentives.

When an inventor brings an idea to a patent attorney, the attorney will draft a set of "claims" describing the invention. The claims are worded as broadly as possible, often using incomprehensible language. If the invention was a clever shelving system for color-coded magazines, the invention might be titled "System and apparatus for optical wavelength keyed information retrieval". This makes it difficult for the patent examiner to find "prior art" that would render the idea unpatentable. The broad language is designed to prevent a copycat from evading the core patent claims via trivial modifications.

The examination proceeds like this: The patent examiner typically rejects the broadest claims, citing some prior art. The inventor's attorney then narrows the patent claims to exclude prior art cited by the examiner, and the process repeats itself until the patent office runs out of objections. The inventor ends up with a patent, the attorney runs up the billable hours, and the examiner has whittled the patent down to something reasonable.

As technology has become more complicated and the number of patents has increased, this examination process has broken down. Patents with very broad claims slip through, often because, with the addition of the internet, the relevant prior art was either un-patented or went unrecognized because of obsolete terminology. These bad patents are bought up by "non-practicing entities" or "patent trolls" who extort royalty payments from companies unwilling or unable to challenge the patents. The old system for challenging patents didn't allow the challengers to participate in the reexamination. So the patent system needed a better way to correct the inevitable mistakes in patent issuance.

In an inter partes review, the challenger participates in the challenge. The first step in drafting a petition is proposing a "claim construction". For example, if the patent claims "an alphanumeric database key allowing the retrieval of information-package subject indications", the challenger might "construct" the claim as "a call number in a library catalog", and point out that call numbers in library catalogs predated the patent by several decades. The patent owner might respond that the patent was never meant to cover call numbers in library catalogs. (Ironically, in an infringement suit, the same patent owner might have pointed to the broad language of the claim, asserting that of course the patent applies to call numbers in library catalogs!) The administrative judge would then have the option of accepting the challenger's construction and opening the claim to invalidation, or accepting the patent owner's construction and letting the patent stand (but with the patent owner having agreed to a narrow claim construction!)
Disposition of IPR Petitions in the first 5 years. From USPTO.

In the 5 years that IPR proceedings have been available, 1,153 patents have been completely invalidated and 287 others have had some claims cancelled. 331 patents that have been challenged have been found to be completely valid. (See this statistical summary.) This is a tiny percentage of patents; it's likely that only the worst patents have been challenged; in the same period, about one and a half million patents have been granted.

It was hoped that the IPR process would be more efficient and less costly than the old process; I don't know if this has been true, but patent litigation is still very costly. At least the cases I worked on had correct outcomes.

Some companies in the technology space have been using the IPR process to oppose the patent trolls. One notable effort has been Cloudflare's Project Jengo. Full disclosure: They sent me a T-shirt!


Update (November 28): Read Adam Liptak's news story about the argument at the New York Times
  • Apparently Justices Gorsuch and Roberts were worried about patent property being taken away by administrative proceedings. This seems odd to me, since in the case of bad patents, the initial grant of a patent amounts to a taking of property away from the public, including companies who rely on prior art to assure their right to use public property.
  • Some news stories are characterizing the IPR process as lopsided against patent owners. (Reuters: "In about 1,800 final decisions up to October, the agency’s patent board canceled all or part of a patent around 80 percent of the time.") Apparently the news media has difficulty with sampling bias - given the expense of an IPR filing, of course only the worst of the worst patents are being challenged; more than 99.9% of patents are untouched by challenges!


Sunday, October 29, 2017

Turning the page on ereader pagination

Why bother paginating an ebook? Modern websites encourage you to "keep on swiping" but if you talk to people who read ebooks, they rather like pages. I'll classify their reasons into "backward looking" and "practical".

Backward looking reasons that readers like pagination
  • pages evoke the experience of print books
  • a tap to turn a page is easier than swiping
Practical reasons that readers like pagination
  • pages divide reading into easier-to-deal-with chunks
  • turning the page gives you a feeling of achievement
  • the thickness of the turned pages helps the reader measure progress
Reasons that pagination sucks
  • sentences are chopped in half
  • paragraphs are chopped in half
  • figures and such are sundered from their context
  • footnotes are ... OMG footnotes!
How would you design a long-form reading experience for computer screens if you weren't tied to pagination? Despite the entrenchment of Amazon and iPhones, people haven't stopped taking fresh looks at the reading experience.

Taeyoon Choi and his collaborators at the School for Poetic Computation recently unveiled their "artistic intervention" into the experience of reading. (Choi and a partner founded the Manhattan-based school in 2013 to help artists learn and apply technology.) You can try it out at http://poeticcomputation.info/



On viewing the first chapter, you immediately see two visual cues that some artistry is afoot. On the right side, you see something that looks like a stack of pages. On the left is some conventional-looking text, and to its right is some shrunken text. Click on the shrunken text to expand references for the now-shrunken main text. This conception of long-form text as existing in two streams seems much more elegant than the usual pop-up presentation of references and footnotes in ebook readers. Illustrations appear in both streams, and when you swipe one stream up or down, the other stream moves with it.
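Here's roughly how two scroll-linked streams can be wired up in a browser. This is a minimal sketch assuming two scrollable columns with made-up ids, not the actual Poetic Computation Reader code:

```typescript
// Sketch of two scroll-linked streams (hypothetical markup; not the
// actual Poetic Computation Reader code).
const main = document.getElementById("main-text") as HTMLElement;
const refs = document.getElementById("references") as HTMLElement;

// Track which pane the user is actually in, so mirrored scrolls
// (which also fire scroll events) don't feed back on themselves.
let active: HTMLElement | null = null;

function link(source: HTMLElement, target: HTMLElement): void {
  source.addEventListener("mouseenter", () => (active = source));
  source.addEventListener("scroll", () => {
    if (active !== source) return;
    // Mirror the position proportionally, so streams of different
    // lengths stay aligned.
    const fraction = source.scrollTop / (source.scrollHeight - source.clientHeight);
    target.scrollTop = fraction * (target.scrollHeight - target.clientHeight);
  });
}

link(main, refs);
link(refs, main);
```

Tracking the hovered pane avoids the classic feedback loop where each pane's mirrored scroll re-triggers the other's scroll handler.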

The experience of the Poetic Computation Reader on a smartphone adapts to the smaller screen. One or the other of the two streams is always off-screen, and little arrows, rather than shrunken images, indicate the other's existence.

 * * *

On larger screens, something very odd happens when you swipe down a bit. You get to the end of the "page". And then it starts moving the WRONG way, sideways instead of up and down. Keep swiping, and you've advanced the page! The first time this happened, I found it really annoying. But then, it started to make sense. "Pages" in the Poetic Computation Reader are intentional, not random breaks imposed by the size of the reader's screen and the selected typeface. The reader gets a sense of achievement, along with an indication of progress.

In retrospect, this is a completely obvious thing to do. In fact, authors have been inserting intentional breaks into books since forever. Typesetters call these breaks "asterisms" after the asterisks that are used to denote them. They look rather stupid in conventional ebooks. Turning asterisms into text-breaking animations is a really good idea. Go forth and implement them, ye ebook-folx!
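To make that concrete, here's one way an animated text break might be triggered. This is a sketch under assumptions: the markup and class name are invented, and the actual animation would live in CSS:

```typescript
// Sketch: turn asterism markers (<hr class="asterism">) into animated
// text breaks. Markup and class names are hypothetical.
const observer = new IntersectionObserver(
  (entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        // Add a class that a CSS transition can animate, e.g. a
        // slide or page-curl effect on the break marker.
        entry.target.classList.add("break-animated");
        observer.unobserve(entry.target); // animate only once
      }
    }
  },
  { threshold: 1.0 } // fire when the marker is fully in view
);

document.querySelectorAll("hr.asterism").forEach((el) => observer.observe(el));
```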

On a smartphone, the Poetic Computation Reader ignores the "page breaks" and omits the page edges. Perhaps a zoom animation and a thickened border would work.

Also, check out the super-slider on the right edge. Try to resist sliding it up and down a couple of times. You can't!

 * * *

Another interesting take on the reading experience is provided by Slate, the documentation software written by Robert Lord. On a desktop browser, Slate also presents text in parallel streams. The center stream can be thought of as the main text. On the left is the hierarchical outline (i.e. a table of contents), on the right is example code. I like the way you can scroll either the outline or the text stream and the other stream follows. The outline expands and contracts accordion-style as you scroll, resulting in effortless navigation. But Slate uses a responsive design framework, so on a smartphone, the side streams reconfigure into inline figures or slide-aways.

"Clojure by Example", generated by Slate.

There are no "pages" in Slate. Instead, the animated outline is always aware of where you are and indicates your progress. The outline is a small improvement on the static outline produced by documentation generators like Sphinx, but the difference in navigability and usability is huge.
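An always-aware outline like Slate's can be approximated with a "scrollspy": observe the section headings, and highlight the matching outline entry as each one crosses the viewport. A rough sketch with assumed markup, not Slate's actual code:

```typescript
// Scrollspy sketch: highlight the outline entry for the heading
// currently in view. Markup (h2 ids, nav links) is hypothetical.
const headings = document.querySelectorAll<HTMLElement>("article h2[id]");

const spy = new IntersectionObserver(
  (entries) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      // Clear the old highlight, then mark the outline link whose
      // href matches the heading that just scrolled into view.
      document.querySelectorAll("nav a.current").forEach((a) =>
        a.classList.remove("current")
      );
      document
        .querySelector(`nav a[href="#${entry.target.id}"]`)
        ?.classList.add("current");
    }
  },
  { rootMargin: "0px 0px -80% 0px" } // trigger near the top of the viewport
);

headings.forEach((h) => spy.observe(h));
```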

As standardization and corporate hegemony seem to be ossifying digital reading experiences elsewhere,  independent experiments and projects like these give me hope that a next generation of ebooks will put some new wind in the sails of our digital reading journey.

Notes:
  1. The collaborators on the Poetic Computation Reader include Molly Kleiman, Shannon Mattern, Taeyoon Choi and HAWRAF. Also, these footnotes are awkward.


Monday, September 11, 2017

Prepare Now for Topical Storm Chrome 62

Sometime in October, probably the week of October 17th, version 62 of Google's Chrome web browser will be declared "stable". When that happens, users of Chrome will get their software updated to version 62 when they restart.

One of the small but important changes that will occur is that many websites that have not implemented HTTPS to secure their communications will be marked in a subtle way as "Not Secure". When such a website presents a web form, typing into the form will change the appearance of the website URL. Here's what it will look like:

Unfortunately, many libraries, and the vendors and publishers that serve them, have not yet implemented HTTPS, so many library users that type into search boxes will start seeing the words "Not Secure" and may be alarmed.

What's going to happen? Here's what I HOPE happens:
  • Libraries, Vendors, and Publishers that have been working on switching their websites for the past two years (because usually it's a lot more work than just pushing a button) are motivated to fix the last few problems, turn on their secure connections, and redirect all their web traffic through their secure servers before October 17.
          So instead of this:

           ... users will see this:

  • Library management and staff will be prepared to answer questions about the few remaining problems that occur. The internet is not a secure place, and Chrome's subtle indicator is just a reminder not to type sensitive information, like passwords, personal names and identifiers, into "not secure" websites.
  • The "Not Secure" animation will be noticed by many users of libraries, vendors, and publishers that haven't devoted resources to securing their websites. The users will file helpful bug reports and the website providers will acknowledge their prior misjudgments and start to work carefully to do what needs to be done to protect their users.
  • Libraries, vendors, and publishers will work together to address many interactions and dependencies in their internet systems.


Here's what I FEAR might happen:
  • The words "Not Secure" will cause people in charge to think their organizations' websites "have been hacked". 
  • Publishing executives seeing the "Not Secure" label will order their IT staff to "DO SOMETHING" without the time or resources to do a proper job.
  • Library directors will demand that Chrome be replaced by Firefox on all library computers because of a "BUG in CHROME". (creating an even worse problem when Firefox follows suit in a few months!) 
  • Library staff will put up signs instructing patrons to "ignore security warnings" on the internet. Patrons will believe them.
Back here in the real world, libraries are under-resourced and struggling to keep things working. The industry in general has been well behind the curve of HTTPS adoption, needlessly putting many library users at risk. The complicated technical environment, including proxy servers, authentication systems, federated search, and link servers has made the job of switching to secure connections more difficult.

So here's my forecast of what WILL happen:
  • Many libraries, publishers and vendors, motivated by Chrome 62, will finish their switch-over projects before October 17. Users of library web services will have better security and privacy. (For example, I expect OCLC's WorldCat, shown above in secure and not secure versions, will be in this category.)
  • Many switch-over projects will be rushed, and staff throughout the industry, both technical and user-facing, will need to scramble and cooperate to report and fix many minor issues.
  • A few not-so-thoughtful voices will complain that this whole security and privacy fuss is overblown, and blame it on an evil Google conspiracy.

Here are some notes to help you prepare:
  1. I've been asked whether libraries need to update links in their catalog to use the secure version of resource links. Yes, but there's no need to rush. Website providers should be using HTTP redirects to force users into the secure connections, and should use HSTS headers to make sure that their future connections are secure from the start (see the sketch after this list).
  2. Libraries using proxy servers MUST update their software to reasonably current versions, and update proxy settings to account for secure versions of provider services. In many cases this will require acquisition of a wildcard certificate for the proxy server.
  3.  I've had publishers and vendors complain to me that library customers have asked them to retain the option of insecure connections ... because reasons. Recently, I've seen reports on listservs that vendors are being asked to retain insecure server settings because the library "can't" update their obsolete and insecure proxy software. These libraries should be ashamed of themselves - their negligence is holding back progress for everyone and endangering library users. 
  4. Chrome 62 is expected to reach beta next week. You'll then be able to install it from the beta channel. (Currently, it's in the dev channel.) Even then, you may need to set the #mark-non-secure-as flag to see the new behavior. Once Chrome 62 is stable, you may still be able to disable the feature using this flag.
  5. A screen capture using Chrome 62 now might help convince your manager, your IT department, or a vendor that a website really needs to be switched to HTTPS.
  6. Mixed content warnings are the result of embedding not-secure images, fonts, or scripts in a secure web page. A malicious actor can insert content or code in these elements, endangering the user. Much of the work in switching a large site from HTTP to HTTPS consists of finding and addressing mixed content issues.
  7. Google's Emily Schechter gives an excellent presentation on the transition to HTTPS, and how the Chrome UI is gradually changing to more accurately communicate to users that non-HTTPS sites may present risks: https://www.youtube.com/watch?v=GoXgl9r0Kjk&feature=youtu.be (discussion of Chrome 62 changes starts around 32:00)
  8. (added 9/15/2017) As an example of a company that's been working for a while on switching, Elsevier has informed its ScienceDirect customers that ScienceDirect will be switching to HTTPS in October. They have posted instructions for testing proxy configurations.
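As a concrete illustration of the redirect-plus-HSTS pattern in note 1, here's a toy Node server. The hostname and the one-year max-age are examples of mine, and real sites would usually do this in web server or CDN configuration rather than application code:

```typescript
// Toy illustration of note 1: redirect HTTP to HTTPS, and tell browsers
// (via HSTS) to use HTTPS from the start on future visits.
// Hostname and max-age are hypothetical examples.
import * as http from "http";

http
  .createServer((req, res) => {
    // Permanent redirect from http://... to https://...
    res.writeHead(301, {
      Location: `https://www.example-library.org${req.url}`,
    });
    res.end();
  })
  .listen(80);

// Meanwhile, the HTTPS server adds an HSTS header to every response:
//   Strict-Transport-Security: max-age=31536000; includeSubDomains
// After seeing it once, the browser will refuse to connect insecurely
// for the next year, even if the user types an http:// URL.
```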