
“How Much of the Web is Archived?” (3 pages; PDF)
by Scott G. Ainsworth, Ahmed AlSum, Hany SalahEldeen, Michele C. Weigle, and Michael L. Nelson
Presented at JCDL 2011 Earlier This Month
From a Web Science and Digital Libraries Research Group (Old Dominion University) Blog Post:

There are many questions to ask about web archiving and digital preservation: Why is archiving important? What should be archived? What is currently being archived? How often should pages be archived?

The short paper “How Much of the Web is Archived?” (Scott G. Ainsworth, Ahmed AlSum, Hany SalahEldeen, Michele C. Weigle, and Michael L. Nelson), published at JCDL 2011, is our first step toward determining to what extent the web is being archived and by which archives.

To address this question, we sampled URIs from four sources to estimate the percentage of archived URIs and the number and frequency of archived versions. We chose 1000 URIs from each of the following sources:

  1. Open Directory Project (DMOZ) – sampled from all URIs (July 2000 – Oct 2010)
  2. Delicious – random URIs from the Recent Bookmarks list
  3. Bitly – random hash values generated and dereferenced (see the sketch after this list)
  4. Search engine caches (Google, Bing, Yahoo!) – random sample of URIs from queries of 5-grams (using Google’s N-gram data)
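
As a rough illustration of the Bitly sampling step, here is a minimal Python sketch that generates random short-URI hashes and dereferences them to recover long URIs. The hash length, alphabet, and retry loop are assumptions for illustration, not the paper’s exact procedure.

```python
import random
import string
import requests

# Assumed alphabet and length for Bitly hashes (illustrative only).
BITLY_ALPHABET = string.ascii_letters + string.digits

def random_bitly_uri(hash_length=6):
    """Build a short URI from a randomly generated hash."""
    h = "".join(random.choice(BITLY_ALPHABET) for _ in range(hash_length))
    return "http://bit.ly/" + h

def sample_bitly_uris(n=1000):
    """Dereference random hashes, keeping the long URIs that resolve.

    Most random hashes will not resolve, so this loops until it has
    collected n valid long URIs.
    """
    uris = []
    while len(uris) < n:
        try:
            # A valid hash redirects to the long URI; follow the chain.
            resp = requests.head(random_bitly_uri(),
                                 allow_redirects=True, timeout=10)
            if resp.status_code == 200:
                uris.append(resp.url)
        except requests.RequestException:
            continue  # invalid hash or network error; try another
    return uris
```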

For each of the sample URIs (4000 in all), we used Memento to discover archived versions, or mementos, of the URI.
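Under the Memento protocol, an archive exposes a TimeMap listing every memento it holds for a URI. The following is a minimal sketch of that discovery step, assuming the Wayback Machine’s public TimeMap endpoint; the study itself queried multiple archives through Memento, so treat the single endpoint here as an illustrative simplification.

```python
import re
import requests

def count_mementos(uri,
                   timemap_base="http://web.archive.org/web/timemap/link/"):
    """Fetch a Memento TimeMap (application/link-format) for a URI
    and count the links whose rel value marks them as mementos."""
    resp = requests.get(timemap_base + uri, timeout=30)
    if resp.status_code != 200:
        return 0  # no TimeMap means no known mementos at this archive
    # TimeMap entries carry rel values like "memento", "first memento",
    # or "last memento"; non-memento links use "original", "self", etc.
    return len(re.findall(r'rel="[^"]*memento[^"]*"', resp.text))

print(count_mementos("http://example.com/"))
```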

We categorize the archives as Internet Archive (using the classic Wayback Machine), search engine caches (Google, Bing, and Yahoo!), and other (e.g., Diigo, Archive-It, UK National Archives, WebCite).
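One way to picture that categorization step is to map each memento’s host to one of the three groups. The host-to-category table below is an illustrative assumption, not the paper’s exact grouping.

```python
from urllib.parse import urlparse

# Illustrative mapping of known archive hosts to the three categories;
# the paper's actual grouping may differ.
CATEGORIES = {
    "web.archive.org": "Internet Archive",
    "webcache.googleusercontent.com": "search engine cache",
    "cc.bingj.com": "search engine cache",
    "webcitation.org": "other",
    "diigo.com": "other",
    "archive-it.org": "other",
}

def categorize_memento(memento_uri):
    """Assign a memento URI to an archive category by its host."""
    host = urlparse(memento_uri).netloc.lower()
    for known_host, category in CATEGORIES.items():
        if host.endswith(known_host):
            return category
    return "other"  # unrecognized archives fall into "other"
```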

The blog post continues with a few observations about what was learned while performing the research.

Direct to Complete Paper (3 pages; PDF)

Via INFOdocket
