E-Books, One Laptop Per Child project, plenty else could benefit from new memory chips if they pan out

Good-bye hard disks? Hello, your own Library of Congress? Well, we’re not there yet. But in the next few years, a new technology could lead to thumb-sized solid-state drives storing a terabyte each. Power consumption might be one-thousandth that of flash memory, and the cost perhaps one-tenth. Just the ticket for multimedia e-books, eh? Or even high-res movies inside them?

In between his CSSing for the TeleBlog, Jon Noring took time out for some calculations. He figured that about 20 million books exist in the world and that 18,000 of these drives would do the trick for high-res scans of all of them.

If nothing else, imagine the benefits for the One Laptop Per Child project. Even without WiFi, kids in mountains and remote jungles could enjoy immediate access to huge collections of knowledge—well, budgets and copyright gods permitting. Perhaps the already-available info would be the equivalent of a cache, reducing the need for new downloading when WiFi was available.

The gobbledygook for the technology is programmable metallization cell (PMC), and Wired News has the details, inspiring the inevitable Slashdotting.

3 Comments on E-Books, One Laptop Per Child project, plenty else could benefit from new memory chips if they pan out

  1. My “calculation” was done solely to provide a rough order-of-magnitude estimate. I assumed there are about 20 million unique books in the world (come to think of it, it’s higher than that, maybe as many as 100 million, but a factor of 5 isn’t too far off). I assumed each book has 300 pages, and that a 600 dpi, full-color scan of a page, when compressed using JPEG2000 at 75% compression, runs to around 3 megs.

    Now, if we convert all these books to structured and proofed digital text (like what Distributed Proofreaders produces), each book, including hi-res scans of the images and graphics we find in many books, should average around 3 megs. In that case, simply divide the 18,000 by 300, and it works out to only 60 of these terabyte drives.

    OK, assuming the quoted cost and storage are achievable in the not-so-distant-future marketplace (one never knows; this technology may die on the vine), that works out to around $1,000 per terabyte, which is still more expensive than hard drives today, but not overly so.

    Looking beyond flash memory to the general area of file storage, it almost boggles the mind where things are heading; we see no letup in growing storage capacities and steadily falling costs. If some of the new optical storage technologies prove viable, we might even see 50 terabytes packed into a cheap plastic, sugar-cube-sized storage medium (yes, the potential has been around for years, but from what I’ve read, optical still looks commercially feasible). Where will it end? Hard to say, really, but it is clear we haven’t plateaued yet. A number of new candidate storage technologies are vying to push the limits; all it takes is for one of them to prove commercially feasible and the game continues.

  2. Jon, thanks for the additional context. On the negative side, here’s one scary thought: suppose the technology is not as reliable for long-term storage as people think; imagine all the books that could be lost.

    I’d hope, then, that parallel storage systems would exist, and that people would constantly monitor the integrity of all systems and not be fooled by the lack of moving parts.

    We know that Flash memory can degrade, even if the recent tech is better than before.

    Monitoring seems the logical thing to do, then, but as the lack of preparation for Katrina shows us, “logical” and “actual” are not always the same thing.

    Of course, a systematic TeleRead approach would help the cause of monitoring.

    I’d like to see something with structure rather than just helter-skelter or simply relying on corporations with quarterly profit goals—perhaps causing them to skimp on archival precautions.

    Thanks,
    David

  3. There is definitely an issue with the longevity of digital data. But if storage becomes cheap and compact enough, storage redundancy is more easily achieved.
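Jon’s back-of-envelope figures in comment 1 are easy to sanity-check. Here is a rough sketch in Python using the numbers he assumed (20 million books, 300 pages each, roughly 3 megs per scanned page or per proofed text version of a book); these are order-of-magnitude assumptions from the comment, not measurements:

```python
# Rough check of the storage estimates from comment 1.
# All constants are assumptions taken from the comment thread above.

BOOKS = 20_000_000         # assumed unique books in the world
PAGES_PER_BOOK = 300       # assumed average page count
MB_PER_SCANNED_PAGE = 3    # 600 dpi full-color page, JPEG2000-compressed
MB_PER_TEXT_BOOK = 3       # structured, proofed text incl. image scans
MB_PER_TB = 1_000_000      # megabytes per terabyte (decimal)

# Total storage for page scans of every book, in terabytes
scanned_tb = BOOKS * PAGES_PER_BOOK * MB_PER_SCANNED_PAGE / MB_PER_TB

# Total storage if every book were proofed digital text instead
text_tb = BOOKS * MB_PER_TEXT_BOOK / MB_PER_TB

print(f"Scanned pages: {scanned_tb:,.0f} one-terabyte drives")  # 18,000
print(f"Proofed text:  {text_tb:,.0f} one-terabyte drives")     # 60
```

The factor of 300 between the two cases is just the page count: dropping from one scan per page to one text file per book divides the 18,000 drives down to 60.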
