Too Big to Know wins two international “best book of the year” awards

I’m thrilled that in October 2012, Too Big to Know won two “best book of the year” awards.

The first was from getAbstract, and was awarded at the Frankfurt Book Fair.

The second was from the World Technology Awards. I won in the category of Media & Journalism for Too Big to Know.

Thank you both so much!

Categories: reviews


[errata] Inconsistent reference

The next day: Double d’oh! mccomstock points out that Pauling didn’t have a stinking vaccine. So, it’s wrong, not just inconsistent. Thanks again, mccomstock!


mccomstock on Twitter points out that when I do a callback to Salk, I instead reference Pauling:

“It has been rare and hard-won such as Darwin with his barnacles and Linus Pauling with his vaccine” (page 176)

Too late to fix it in the paperback. At least it’s not factually wrong, just infelicitous. Thanks, mccomstock!

Categories: errata


[2b2k] What we can learn from what we don’t know

I wrote a piece early yesterday afternoon about what we can learn from watching how we fill in the blanks when we don’t know stuff…in this case, when we don’t know much about Suspects #1 and #2. It’s about the narratives that shape our understanding.

For example, it turns out that I only have three Mass Murderer Narratives: Terrorist, Anti-Social, or Delusional. As we learned more about Suspect #2 yesterday, he seemed not to fit well into any of them. Perhaps he will once we know more, or perhaps my brain will cram him into one even if he doesn’t fit. Anyway, you can read the post at CNN.

I find myself unwilling to use Suspect #2’s name today because Martin Richard is too much with me.

Categories: boston, cnn, journalism, marathon, narratives, stories


[misc][2b2k] Making Twitter better for disasters

I had both CNN and Twitter on all afternoon yesterday, looking for news about the Boston Marathon bombings. I have not done a rigorous analysis (nor will I, nor have I ever), but it felt to me that Twitter put forward more claims, and more varied claims, about the situation, and reacted faster to misstatements. CNN plodded along, but didn’t feel more reliable overall. This seems predictable given the unfiltered (or post-filtered) nature of Twitter.

But Twitter also ran into some scaling problems for me yesterday. I follow about 500 people on Twitter, which gives my stream a pace and variety that I find helpful on a normal day. But yesterday afternoon, the stream roared by, and approached filter failure. A couple of changes would help:

First, let us sort by most retweeted. When I’m in my “home stream,” let me choose a frequency of tweets so that the scrolling doesn’t become unwatchable; use the frequency to determine the threshold for the number of retweets required. (Alternatively: simply highlight highly re-tweeted tweets.)

Second, let us mute based on hashtag or by user. Some Twitter cascades I just don’t care about. For example, I don’t want to hear play-by-plays of the World Series, and I know that many of the people who follow me get seriously annoyed when I suddenly am tweeting twice a minute during a presidential debate. So let us temporarily suppress tweet streams we don’t care about.
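
Here’s a rough sketch of both ideas in Python. Everything in it is invented for illustration (the Tweet record, the rate numbers); none of it is Twitter’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Tweet:
    author: str
    text: str
    retweets: int
    hashtags: set = field(default_factory=set)

def filter_stream(tweets, max_per_minute, arrivals_per_minute,
                  muted_tags=frozenset(), muted_users=frozenset()):
    """Mute unwanted users and hashtags, then thin what's left so that
    roughly max_per_minute tweets survive, keeping the most-retweeted."""
    kept = [t for t in tweets
            if t.author not in muted_users and not (t.hashtags & muted_tags)]
    if kept and arrivals_per_minute > max_per_minute:
        # The faster the stream roars by, the higher the retweet bar.
        n = max(1, int(len(kept) * max_per_minute / arrivals_per_minute))
        threshold = sorted((t.retweets for t in kept), reverse=True)[n - 1]
        kept = [t for t in kept if t.retweets >= threshold]
    return kept
```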

It is a lesson of the Web that as services scale up, they need to provide more and more ways of filtering. Twitter had “follow” as an initial filter, and users then came up with hashtags as a second filter. It’s time for a new round as Twitter becomes an essential part of our news ecosystem.

Categories: boston marathon, everythingismisc, everythingIsMiscellaneous, journalism, twitter


Elsevier acquires Mendeley + all the data about what you read, share, and highlight

I liked the Mendeley guys. Their product is terrific — read your scientific articles, annotate them, be guided by the reading behaviors of millions of other people. I’d met with them several times over the years about whether our LibraryCloud project (still very active but undergoing revisions) could get access to the incredibly rich metadata Mendeley gathers. I also appreciated Mendeley’s internal conflict about the urge to openness and the need to run a business. They were making reasonable decisions, I thought. At the very least, they felt bad about the tension :)

Thus I was deeply disappointed by their acquisition by Elsevier. We could have a fun contest to come up with the company we would least trust with detailed data about what we’re reading and what we’re attending to in what we’re reading, and maybe Elsevier wouldn’t win. But Elsevier would be up there. The idea of my reading behaviors adding economic value to a company making huge profits by locking scholarship behind increasingly expensive paywalls is, in a word, repugnant.

In tweets back and forth with Mendeley’s William Gunn [twitter: mrgunn], he assures us that Mendeley won’t become “evil” so long as he is there. I do not doubt Bill’s intentions. But there is no more perilous position than standing between Elsevier and profits.

I seriously have no interest in judging the Mendeley folks. I still like them, and who am I to judge? If someone offered me $45M (the minimum estimate that I’ve seen) for a company I built from nothing, and especially if the acquiring company assured me that it would preserve the values of that company, I might well take the money. My judgment is actually on myself. My faith in the ability of well-intentioned private companies to withstand the brute force of money has been shaken. After all this time, I was foolish to have believed otherwise.

MrGunn tweets: “We don’t expect you to be joyous, just to give us a chance to show you what we can do.” Fair enough. I would be thrilled to be wrong. Unfortunately, the real question is not what Mendeley will do, but what Elsevier will do. And in that I have much less faith.

I’ve been getting the Twitter handles of Mendeley and Elsevier wrong. Ack. The right ones: @Mendeley_com and @ElsevierScience. Sorry!

Categories: annotations, copyright, culture, elsevier, mendeley, open access, too big to know


[2b2k] Back when not every question had an answer

Let me remind you young whippersnappers what looking for knowledge was like before the Internet (or “hiphop” as I believe you call it).

Cast your mind back to 1982, when your Mommy and Daddy weren’t even gleams in each other’s eyes. I had just bought my first computer, a KayPro II.

I began using WordStar and ran into an issue pretty quickly. For my academic writing, I needed to create end notes. Since the numbering of those notes would change as I took advantage of WordStar’s ability to let me move blocks of text around (^KB and ^KK, I believe, marked the block), I’d have to go back and re-do the numbering both in the text and in the end notes section. What a bother!

I wanted to learn how to program anyway, so I sat down with the included S-Basic manual. S-Basic shared syntax with BASIC, but it assumed you’d write functions, not just lines of code to be executed in numbered order. This made it tougher to learn, but that’s not what stopped me at first. The real problem I had was figuring out how to open a file so that I could read it. (My program was going to look for anything between a “[[” and a “]]”, which would designate an in-place end note.) The manual assumed I knew more than I did, what with its file handlers and strange parameters for what type of file I was reading and what types of blocks of data I wanted to read.

I spent hours and hours and hours, mainly trying random permutations. I was so lacking the fundamental concepts that I couldn’t even figure out what to play with. I was well and truly stuck.

“Simple!” you say. “Just go on the Internet…and…oh.” So, it’s 1982 and you have a programming question. Where do you go? The public library? It was awfully short on programming manuals at that time, and S-Basic was an oddball language. To your local bookstore? Nope, no one was publishing about S-Basic. Then, how about to…or…well…no…then?…nope, not for another 30 years.

I was so desperate that I actually called the Boston University switchboard, and got connected to a helpful receptionist in the computer science division (or whatever it was called back then), who suggested a professor who might be able to help me. I left a message along the lines of “I’m a random stranger with a basic question about a programming language you probably never heard of, so would you mind calling me back? kthxbye.” Can you guess who never called me back?

Eventually I did figure it out, if by “figuring out” you mean “guessed.” And by odd coincidence, as I contemplate moving to doing virtually all my writing in a text editor, I’m going to be re-writing that little endnoter pretty soon now.
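
For the record, here’s roughly what that little endnoter boils down to in Python, a sketch of the same idea rather than the S-Basic original: swap each [[…]] for a sequential number and append the collected notes.

```python
import re

def endnote(text):
    """Replace each [[note]] with a sequential reference number and
    append the numbered notes at the end of the document."""
    notes = []

    def replace(match):
        notes.append(match.group(1))
        return f"[{len(notes)}]"

    body = re.sub(r"\[\[(.*?)\]\]", replace, text, flags=re.DOTALL)
    listing = "\n".join(f"[{i}] {note}" for i, note in enumerate(notes, 1))
    return f"{body}\n\nNotes:\n{listing}\n" if notes else body

# Renumbering after moving blocks around is free: just run it again
# on the original marked-up text.
print(endnote("Barnacles are underrated.[[See Darwin.]] Really.[[Trust me.]]"))
```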

But that’s not my point. My point is that YOU HAVE NO IDEA HOW LUCKY YOU ARE, YOU LITTLE BASTARDS.

For those of you who don’t know what it’s like to get a programming question answered in 2013, here are some pretty much random examples:

Categories: kaypro, old fart, programming, tech


[annotation][2b2k] Critique^it

Ashley Bradford of Critique-It describes his company’s way of keeping review and feedback engaging.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

To what extent can and should we allow classroom feedback to be available in the public sphere? The classroom is a type of Habermasian civic society. Owning one’s discourse in that environment is critical. It has to feel human if students are to learn.

So, you can embed text, audio, and video feedback in documents, videos, and images. It translates docs into HTML. To make the feedback feel human, it uses stamps. You can also type in comments, marking them as neutral, positive, or critique. A “critique panel” follows you through the doc as you read it, so you don’t have to scroll around. It rolls up comments and stats for the student or the faculty.
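
As a rough sketch of the model being described (my guess at its shape, not Critique-It’s actual schema): each comment carries a type, and the rollup is a tally by type.

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    NEUTRAL = "neutral"
    POSITIVE = "positive"
    CRITIQUE = "critique"

@dataclass
class Comment:
    author: str
    kind: Kind
    text: str

def rollup(comments):
    """Summarize feedback counts for a student or faculty view."""
    return Counter(c.kind.value for c in comments)

feedback = [Comment("ashley", Kind.POSITIVE, "Strong opening."),
            Comment("ashley", Kind.CRITIQUE, "Cite your source here.")]
print(rollup(feedback))  # Counter({'positive': 1, 'critique': 1})
```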

It works the same in different doc types, including Powerpoint, images, and video.

Critiques can be shared among groups. Groups can be arbitrarily defined.

It uses HTML5 and is written in JavaScript and PHP, with MySQL as the database.

“We’re starting with an environment. We’re building out tools.” Ashley aims for Critique^It to feel very human.

Categories: annotation, interop, liveblog, too big to know


[annotation][2b2k] Opencast-Matterhorn

Andy Wasklewicz and Jeff Austin from Entwine [twitter:entwinemedia] describe a multi-institutional project to build a platform-agnostic tool for enriching video through note-taking, structured annotation, and sharing. It uses HTML5 and allows for structured tagging, time-based annotation, and more.
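
As a sketch of what a time-based annotation amounts to (my illustration, not Entwine’s actual data model):

```python
from dataclasses import dataclass, field

@dataclass
class VideoAnnotation:
    video_id: str
    start_seconds: float              # where in the video the note is anchored
    end_seconds: float                # time-based annotations cover a span
    note: str                         # free-form note-taking
    tags: list = field(default_factory=list)  # structured tagging

clip_note = VideoAnnotation("lecture-04", 312.0, 340.5,
                            "Compare with last week's demo", tags=["demo"])
```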

Categories: annotations, interop, misc


[annotation][2b2k] Mediathread

Jonah Bossewitch and Mark Philipson from Columbia University talk about Mediathread, an open source project that makes it easy to annotate various digital sources. It’s used in many courses at Columbia, as well as around the world.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

It comes from Columbia’s Center for New Media Teaching and Learning. It began with Vital, a video library tool. It let students clip and save portions of videos, and comment on them. Mediathread connects annotations to sources by bookmarking, via a bookmarklet that interoperates with a variety of collections. The bookmarklet scrapes the metadata because “We couldn’t wait for the standards to be developed.” Once an item is in Mediathread, it embeds the metadata as well.
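
Here’s a sketch of that scraping step, in standard-library Python rather than the bookmarklet’s JavaScript; the meta-tag handling is generic, not necessarily what Mediathread actually reads:

```python
from html.parser import HTMLParser

class MetaScraper(HTMLParser):
    """Collect <title> and <meta name/property=... content=...> pairs."""
    def __init__(self):
        super().__init__()
        self.metadata = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("content"):
            key = a.get("name") or a.get("property")
            if key:
                self.metadata[key] = a["content"]
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.metadata["title"] = data.strip()

scraper = MetaScraper()
scraper.feed('<html><head><title>Clip 7</title>'
             '<meta property="og:video" content="http://example.edu/v.mp4">'
             '</head></html>')
print(scraper.metadata)  # {'title': 'Clip 7', 'og:video': 'http://example.edu/v.mp4'}
```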

It has always been conceived of as a “small-group sharing and collaboration space.” It’s designed for classes. You can only see the annotations by people in your class. It does item-level annotation, as well as regions.

Mediathread connects assignments and responses, as well as other workflows. [He's talking quickly :)]

Mediathread’s bookmarklet approach requires it to accommodate the particularities of individual sites. They are aiming at making the annotations interoperable in standard forms.

Categories: annotation, interop, liveblog, too big to know


[annotation][2b2k] Paolo Ciccarese on the Domeo annotation platform

Paolo Ciccarese begins by reminding us just how vast the scientific literature is. We can’t possibly read everything we should. But “science is social” so we rely on each other, and build on each other’s work. “Everything we do now is connected.”

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Today’s media do provide links, but not enough. Things are so deeply linked. “How do we keep track of it?” How do we communicate with others so that when they read the same paper they get a little bit of our mental model, and see why we found the article interesting?

Paolo’s project — Domeo [twitter:DomeoTool] — is a web app for “producing, browsing, and sharing manual and semi-automatic (structured and unstructured) annotations, using open standards.” Domeo shows you an article and lets you annotate fragments. You can attach a tag or an unstructured comment. The tag can be defined by the user or by a defined ontology. Domeo doesn’t care which ontologies you use, which means you could use it for annotating recipes as well as science articles.
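
To make that ontology-agnosticism concrete, here’s a sketch with invented names (Domeo’s real serialization uses RDF-based open standards): a tag is either a free string or a term pinned to an ontology URI, and the same structure works for a recipe or a neuroscience paper.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tag:
    label: str
    ontology_uri: Optional[str] = None   # None means a free, user-defined tag

@dataclass
class Annotation:
    document_uri: str
    fragment: str                        # the selected span of text
    tags: list                           # Tag values, free or ontology-backed
    comment: Optional[str] = None        # unstructured comment

note = Annotation(
    document_uri="http://example.org/articles/42",
    fragment="amyloid plaques",
    tags=[Tag("amyloid beta", ontology_uri="http://example.org/onto/amyloid-beta")],
    comment="Key evidence for the authors' hypothesis.",
)
```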

Domeo also enables discussions; it has a threaded messaging facility. You can also run text mining and entity recognition systems (Calais, etc.) that automatically annotate the work with those words, which helps with search, understanding, and curation. This too can be a social process. Domeo lets you keep the annotation private or share it with colleagues, groups, communities, or the Web. Also, Domeo can be extended. In one example, it produces information about experiments that can be put into a database where it can be searched and linked up with other experiments and articles. Another example: “hypothesis management” lets readers add metadata to pick out the assertions and the evidence. (It uses RDF.) You can visualize the network of knowledge.

It supports open APIs for integrating with other systems, including the Neuroscience Information Framework and Drupal. “Domeo is a platform.” It aims at supporting rich sources, and will add the ability to follow authors and topics, enabling mashups.

Categories: annotation, interop, liveblog, platforms, too big to know


[annotation][2b2k] Neel Smith: Scholarly annotation + Homer

Neel Smith of Holy Cross is talking about the Homer Multitext project, a “long term project to represent the transmission of the Iliad in digital form.”

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

He shows the oldest extant manuscript of the Iliad, which includes 10th-century notes. “The medieval scribes create a wonderful hypermedia” work.

“Scholarly annotation starts with citation.” He says we have a good standard: URNs, which can point to, for example, an ISBN number. His project uses URNs to refer to texts in a FRBR-like hierarchy [works at various levels of abstraction]. These are semantically rich and machine-actionable. You can google a URN and get the object. You can put a URN into a URL for direct Web access. You can embed an image into a Web page via its URN [using a service, I believe].
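
To make the hierarchy concrete: Homer Multitext uses CTS URNs, which nest a work hierarchy and a passage reference. A minimal parse (the URN below is the standard citation for Iliad book 1, line 1; see the CTS spec for the full grammar):

```python
def parse_cts_urn(urn):
    """Split a CTS URN into its parts.
    Form: urn:cts:<namespace>:<textgroup>.<work>[.<version>][:<passage>]"""
    parts = urn.split(":")
    assert parts[:2] == ["urn", "cts"], "not a CTS URN"
    namespace, work = parts[2], parts[3]
    passage = parts[4] if len(parts) > 4 else None
    return {"namespace": namespace,
            "work_hierarchy": work.split("."),   # FRBR-like levels
            "passage": passage}

# Iliad, book 1, line 1:
print(parse_cts_urn("urn:cts:greekLit:tlg0012.tlg001:1.1"))
# {'namespace': 'greekLit', 'work_hierarchy': ['tlg0012', 'tlg001'], 'passage': '1.1'}
```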

An annotation is an association. In a scholarly notation, it’s associated with a citable entity. [He shows some great examples of the possibilities of cross linking and associating.]

The metadata is expressed as RDF triples. Within the Homer project, they’re inductively building up a schema of the complete graph [network of connections]. For end users, this means you can see everything associated with a particular URN. Building a facsimile browser, for example, becomes straightforward, mainly requiring the application of XSL and CSS to style it.
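
And “see everything associated with a particular URN” is then a single triple-pattern lookup. A sketch using the rdflib library, with invented predicates:

```python
from rdflib import Graph, Literal, URIRef

g = Graph()
passage = URIRef("urn:cts:greekLit:tlg0012.tlg001:1.1")

# Invented predicates, purely for illustration:
g.add((passage, URIRef("http://example.org/hasImage"),
       URIRef("http://example.org/images/folio-12r.jpg")))
g.add((passage, URIRef("http://example.org/hasScholion"),
       Literal("a 10th-century marginal note")))

# Everything associated with this URN is one triple-pattern query:
for _, predicate, obj in g.triples((passage, None, None)):
    print(predicate, obj)
```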

Another example: mise en page, or automated layout analysis. This in-progress project analyzes the layout of annotation info on the Homeric pages.

Categories: annotation, homer, interop, libraries, liveblog, too big to know
