We see a lot of new ebooks being released that are riddled with editorial and formatting problems. From the publisher’s side, the problem is that to proofread ebooks after conversion, especially after OCR (scanning) conversion, is expensive — contrary to what the naysayers believe, it is not a job for a high school graduate who thinks Twittering is the be-all and end-all of language literacy, but a job for a skilled professional — especially when it cannot be known with certainty how many ebook sales will be made.

Perhaps the time has come to rethink how and what gets published. I don’t mean which books but which formats. Perhaps the time has come to publish only hardcover and ebook formats, dropping the mass market paperback from the mix and keeping the trade paperback for those pbooks that do not justify a hardcover print run (although considering that the cost differential is slight between paperback and hardcover, I see no particular need to retain even the trade paperback).

Before the coming of the paperback, books were available in hardcover only. That limitation was the impetus for several innovations, including the public library. But the limitation served a good market purpose. It kept the price high relative to incomes; created an educated class to which people aspired; allowed nearly all print runs to be profitable; created the first commercial publishing class (as opposed to scholarly class) of books; created the respected profession of editor; and limited the number of books available for purchase. As a side effect, it created secondary and tertiary markets for books: secondary being the used-book market and tertiary being the collector’s market.

Today, the publishing world runs wild with no discipline imposed either directly or indirectly on the publishing world and process. Consider the growth of books published in the United States alone in the past decade: In 2002, 215,000 books were published traditionally (which largely means through the old-style process of vetting, editing, and so on by an established publisher) and 33,000 nontraditionally (which largely means self-published). Jump ahead a mere seven years to 2009 and the numbers are 302,000 and 1.33 million, respectively. One year later, 2010, the respective numbers are 316,000 traditionally published and 2.8 million — more than double — nontraditionally published! I’m not sure I want to know the numbers for 2011.

The jump in nontraditional publishing numbers is simply a testament to the rise of the ebook. The numbers do not imply or correlate with sales, quality, price, or anything other than raw numbers of suddenly available books. If I read one book a day, every day, or 365 books a year (vacationing from reading only on the extra day in leap years) for 60 years, I could read 21,900 books, which represents a mere 0.78% of the 2.8 million nontraditional books published in 2010. The likelihood of my being able to read a significant percentage of all books available to me is nonexistent.

How does this tie into the idea of dropping paperbacks? It runs a convoluted course like this: As I cannot possibly read all of the books published in 2010 alone, I would prefer to march publishing backward and be less egalitarian and open access and more unequal and closed. I want to make what reading I do count with minimal search-and-find effort on my part. I want to see more profitability for authors and publishers in exchange for better vetting of books and significantly better production quality control. One way to do this is to control market access.

eBooks are already eroding pbook sales, so let’s help that erosive process by guiding it. If a person must read or buy a pbook, make the only pbook version available the hardcover version. Book buyers are already accustomed, from centuries of ingrained experience, to paying a premium price for a hardcover book. Book buyers perceive value — whether that value is real or not makes no difference; buyers believe it exists, which is sufficient for it to, in fact, exist — in hardcover versions. One side effect of that perception is that buyers of hardcovers tend to treat the books more carefully than they treat paperbacks, thus creating a secondary market with some value. Thus, let’s satisfy the pbook market need by providing a better-quality hardcover.

By limiting the pbook to hardcover only, we are also changing the secondary market. A used hardcover will now have more value because there is no pbook alternative. And it wouldn’t take a great deal of effort to figure out a way for authors and publishers to receive a small royalty from secondary market sales. Eliminate the paperback and there will be more incentive for that solution to be found.

The other benefit of eliminating paperbacks is that the ebook can easily replace it. More effort and money can be put into production of the ebook version and a more realistic price can be charged. Right now, much of the price grumbling about ebooks is a result of comparing the ebook to the paperback. Why should an ebook cost more than the paperback version? (The question is rhetorical here.) Eliminating the paperback removes the yardstick against which the ebook price is currently measured. The market will settle, just as it did for paperback pricing, around a few price points for ebooks, which will be less than the hardcover price. Within a relatively short period of time, that price stabilization will be accepted by most book buyers and what we will see is the return of the market we had before ebooks, but with ebooks in the role of paperbacks.

One other consideration is that by eliminating the paperback, traditional publishers are eliminating a major debit to their balance sheets. To offer a paperback version means you actually have to do a print run — the product has to be available in that form — which also means that the direct and ancillary costs (e.g., returns, warehousing) have to be incurred. And if the paperback is a decent seller, it means that the costs have to be incurred multiple times. In contrast, with an ebook production costs only have to be incurred once; any cost of duplication of the electronic file, once perfected, is minimal.

Will elimination of the paperback cause pain in the market? Sure it will, just as any established market change and upheaval does. But this is an opportune moment to make that change. Publishers need to move paperback readers to ebooks. They also need to enhance the value of both ebooks and hardcovers in the consumer’s thoughts. The easiest and most effective way to do this is for publishers to take their lumps now and eliminate the paperback from the equation (think of the shift from videotape to DVD and vinyl record/audiotape to CD). The period of rapid growth of ebooks is the time to reshape the market, not when the idea of coavailability of the three formats is entrenched.

Via Rich Adin’s An American Editor

25 COMMENTS

  1. I understand this excellent approach to sustainable book quality. But it should be mentioned that binding format is not linked to editorial control, and hardcover books nowadays are simply cased paperbacks.

    Genre-ranked sales do suggest that the terrain is now divided between screen and print rather than between hardback and paperback as it was before ebooks. The current flood of paperback production is dominated by self-publishing, since that entire sector is produced on high-speed copiers (“on-demand printing”) that have no in-line hardback option.

    Another background development is the migration of monographic publishing of all kinds from offset lithographic (wet ink) to electrostatic (dry ink, in copiers) production. A smaller development relevant to the topic is that the legacy specialty of library binding is now pioneering high-quality cloth hardback binding for print-on-demand (paperback binding) technologies.

  2. The poster stated: “From the publisher’s side, the problem is that to proofread ebooks after conversion, especially after OCR (scanning) conversion, is expensive — contrary to what the naysayers believe, it is not a job for a high school graduate who thinks Twittering is the be-all and end-all of language literacy, but a job for a skilled professional — especially when it cannot be known with certainty how many ebook sales will be made.”

    This is nonsense. Proofreading previously published text against a newly minted file is a comparison task. It requires attention to detail, patience, and a little practice. Add in checking the formatting against a set of house rules (how to handle indents, ends of sections, etc.), especially if this is an “upgrade” or “side grade” from the previous edition … that’s a bit of a skill bump.

    To suggest an intelligent, detail-oriented high school graduate can’t do this is silly. It is also ideal work for offshore English-as-a-second-language workers, where wages are lower and good white-collar jobs are rarer. It is appalling that a company like Penguin, with operations in many countries, can release ebook editions of backlist titles like “The First Rumpole Omnibus” riddled with OCR typos. A thorough read-and-compare would take a few hours, plus an hour to correct the file. At $15/hr that adds an almost imperceptible amount to production costs.

    Editing, I agree, is a different and refined skill set. But that’s not what the issue is with backlist conversions.

  3. “..to proofread ebooks after conversion, especially after OCR (scanning) conversion, is expensive — contrary to what the naysayers believe, it is not a job for a high school graduate who thinks Twittering is the be-all and end-all of language literacy, but a job for a skilled professional ”

    That really is a joke. My son has not finished high school yet and could proofread any fiction book to an extremely high level, far higher than the utter rubbish quality of ebooks being sold at full price by the big publishers, which has been documented repeatedly hereabouts. Skilled profession? Nutz!

  4. In publishing, content is the dog, delivery format is the tail.

    Stop staring at the tail. Stop obsessing about the tail. Fiddling with the tail is not going to solve the myriad problems of the dog.

  5. Ah, but it’s the tail, the long tail, that is the worst of the problem.

    How many copies of the aforementioned Rumpole Omnibus do you think will sell in whichever flavor of epub you were reading? 10? 100? No more, probably. Maybe 300 in .azw (Kindle format), but probably not. It’s OLD. It’s not popular anymore. There are literally MILLIONS of books out there trying to get to the same group of eyes.

    Now, the obvious solution is to simply NOT ISSUE those ebooks. But then how much of the market should we cede to pirates? Should we let people get comfortable with going to pirates, because they can’t find the book elsewhere?

    And if we do pay $15 per hour, which is in no way a living wage, and the proofreader can handle a very fast 4 standard pages (of 250 words each) per hour, and the book has 100,000 words, which is 400 manuscript pages, that’s $1500.

    If the book is priced at $15, which is outrageously high, and the publisher gets 35% of that ($5 and change), which is normal, and if the royalties are 25% of receipts, which is also normal, then it’s going to have to sell about 400 copies in EACH FORMAT to break even, even without any other direct costs.

    Publishing is a lot less simple than it looks from the outside.
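
    [Ed.: The break-even arithmetic in the comment above can be sketched as a quick back-of-the-envelope calculation. The $15/hr rate, 35% publisher share, and 25% royalty are the figures assumed in the comment, not industry constants.]

```python
import math

def breakeven_copies(words, words_per_hour, hourly_rate,
                     list_price, publisher_share, royalty_rate):
    """Copies needed to recoup the proofreading cost alone."""
    proofing_cost = words / words_per_hour * hourly_rate
    # Publisher's net per copy after the retailer's cut and the author royalty.
    net_per_copy = list_price * publisher_share * (1 - royalty_rate)
    return math.ceil(proofing_cost / net_per_copy)

# 100,000 words at 4 pages (1,000 words) per hour at $15/hr;
# $15 list price, 35% to the publisher, 25% royalty on receipts.
copies = breakeven_copies(100_000, 1_000, 15, 15, 0.35, 0.25)
print(copies)  # 381, i.e. "about 400 copies" as the comment estimates
```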

  6. @Marion: I don’t want to fuss over the details too much but … I’ll fuss a little.

    Proofreading in this context — checking that the new file matches the old printed text word for word — can be done MUCH quicker than 1000 words/hr. Nor is $15/hr a bad wage for an entry level student or, as I was suggesting, off-shore English language speakers where such a wage is a fortune. (In fact, if it can’t be done for less than $500 per title, something has gone terribly wrong.)

    Nor does the file need to be rechecked from epub to azw — although the entire file needs a quick check to ensure the formatting has been made right; that requires a page-by-page scan only for the second format.

    If an ebook by a best selling author like John Mortimer isn’t worth investing a few hundred dollars for an international release, knowing that the ongoing cost of keeping that book in print is approximately $0 for the next decade … then the author is not worth publishing at all. (And I am not talking about outrageous charges like $15 for a 30-year old backlist title either.)

  7. There are a ton of entry-level jobs which do not pay a living wage and are not meant to, Marion. My college-age brother is apparently spending his July slinging chicken wings into a deep fryer at a sports bar, and getting little above minimum wage for it. Nobody would suggest that he try to support a family on this; it isn’t that kind of job. But there are clearly plenty of people out there to whom this sort of work and pay is acceptable, given the terms: you choose the hours, so you can fit in school or other life things on the side; you get a fixed minimum level of pay; and if you want more pay, you do more work. And if you want to support a family, you upgrade your skills and get a ‘real’ job.

    Alexander is correct in saying that this is not an editing task. The editor has already had his day with the book. This is simply a clean-up, and I promise you that if you went to a college campus, you would have no trouble recruiting reasonably skilled English majors to do this kind of thing for $15 an hour, to an acceptable level. We’re talking about weeding out obvious OCR errors here, not actual, creative editing.

    I can’t speak for whether eliminating the paperback is going to be the answer, but I think a good first step would be to develop a format, be it epub or something else, where you really can get a good output out of your final word doc or whatever. If it’s taking you 500 steps to generate—badly—each new format, then you need to refine your toolset so that this workload is reduced. I’d wager that any expense in developing such a system would be more than offset by the long-term savings from having it.

  8. @Marion, you’re missing the point.

    I used to be an application developer working in SGML back in the early 1990s. I know what software does well and what it doesn’t do well (e.g., proofing and editing). The idea that publishing would not, after all this time, be preparing ONE electronic file, proofed and edited ONE time, that could then be used to produce the hardback, [mass|trade] paperback, multiple ebook formats, laser-printing onto papyrus or stone tablets…it’s insane. Generation of multiple formats could/should be the easy, mechanical part of publishing. Yet people like Rich–and apparently you–seem to think that formats are the heart of the problem. Is it that you aren’t aware of the technology, or simply don’t care because “that’s not how we do things in publishing”?

    If you’re generating content off of a single source, you can make money off of one epub or a thousand. The generation from the common source is mechanical. It doesn’t cost you anything extra, because choosing to generate an epub or mobi or azw or prc or insert-next-big-format-here is choosing what colored bag to put the product in. It’s trivial. Why on earth would you and Rich think that the bag is the point?
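
    [Ed.: The single-source idea in this comment can be illustrated with a minimal sketch. The tag names and the two renderers here are invented for illustration; real workflows use schemas like DocBook or TEI with dedicated toolchains.]

```python
import xml.etree.ElementTree as ET

# One marked-up source; presentation is decided separately per output format.
SOURCE = """<book><title>Example</title>
<chapter><head>One</head><p>It was a dark and stormy night.</p></chapter>
</book>"""

def to_html(root):
    """Render the content tree as HTML (the 'bag' for a web/epub pipeline)."""
    parts = [f"<h1>{root.findtext('title')}</h1>"]
    for ch in root.iter("chapter"):
        parts.append(f"<h2>{ch.findtext('head')}</h2>")
        parts += [f"<p>{p.text}</p>" for p in ch.iter("p")]
    return "\n".join(parts)

def to_plain(root):
    """Render the same tree as plain text (the 'bag' for a text pipeline)."""
    parts = [root.findtext("title").upper()]
    for ch in root.iter("chapter"):
        parts.append(ch.findtext("head"))
        parts += [p.text for p in ch.iter("p")]
    return "\n\n".join(parts)

root = ET.fromstring(SOURCE)
html = to_html(root)
plain = to_plain(root)
```

    [Ed.: The point of the sketch is that neither renderer touches the words; proofing the one SOURCE file proofs every format generated from it.]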

  9. There’s a reason that Distributed Proofreaders (http://www.pgdp.net) try to make 3 proofing passes over OCR documents. It’s very easy to miss ‘scannos’ like tbe. Now a spell-check would catch that one, but I’ve spell-checked a 100+ page technical document and it took several hours. Even a fiction book can kick up a lot of spelling errors depending on its contents.

    Even with a common, well-checked source document, the minute you have to convert it into multiple formats, you can introduce grammar, spelling, and formatting issues. Well, not ‘you’ but the conversion software. It can happen because of a formatting mistake in the original document, bugs in the conversion programs, hidden characters that trigger something in the conversion, etc.
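
    [Ed.: A trivial version of the scanno check described above might look like this. The substitution list is illustrative only; Distributed Proofreaders maintains much longer lists, and some scannos such as “arid” (for “and”) are real words that a plain spell-check will never flag.]

```python
import re

# Common OCR confusions: non-word "tbe" plus word-shaped errors a
# dictionary-based spell-check would pass over.
SCANNOS = {"tbe": "the", "arid": "and", "modem": "modern"}

def flag_scannos(text):
    """Return (word, position) pairs for suspected OCR errors."""
    hits = []
    for m in re.finditer(r"[A-Za-z]+", text):
        if m.group().lower() in SCANNOS:
            hits.append((m.group(), m.start()))
    return hits

print(flag_scannos("It was tbe best of times"))  # [('tbe', 7)]
```

    [Ed.: Word-shaped scannos make every hit a candidate, not a certainty, which is exactly why multiple human proofing passes are still needed.]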

  10. @John Doe — You wrote: “Yet people like Rich–and apparently you–seem to think that formats are heart of the problem. Is it that you aren’t aware of the technology, or simply don’t care because ‘that’s not how we do things in publishing’”?

    John, I want to clarify a few things. Formats are not the heart of the problem and not the reason why I advocate dropping paperbacks. I am quite aware of the technology and for years prepared manuscripts with SGML coding and now with XML coding. But the coding is not, as you suggest, a panacea for all of a book’s ills.

    Part of the problem is that publishers outsource the document creation process to compositors. This work is not done inhouse in most instances because doing it inhouse is not cost effective for large multinational publishers who need to produce quarterly returns for short-sighted shareholders. A consequence of outsourcing document creation is that the publisher doesn’t get the electronic files. This is changing, but very slowly, because publishers are not yet certain what to do with or how to handle the electronic files.

    Plus there is the problem of defining who owns the electronic files: is it the compositor? the publisher? the copyright owner? In my case, my policy has always been that my client — the publisher — owns the final electronic files, but even when I give them the files, they have no means to properly store or catalogue the thousands of such files for their books. Imagine how many files a Simon & Schuster or Random House would have to deal with each year. Archiving and cataloging alone would add considerably to the production costs.

    More importantly, even though my clients are given the electronic files, they do not have the inhouse means of dealing with them. Few have any production staff that perform tasks outside the usual editorial tasks, so what will they do with thousands, if not millions, of InDesign files?

    One other thing to note is that the SGML/XML coded files are the copyedited files but not the final files used to produce the book. Things get changed between copyediting and printing. Additionally, the coding is more a generic code than a specific code. SGML coding will indicate a head level but you still need the designer to define what that head level will look like. Consequently, there are more things than just the coded files that have to be kept together and implemented together. And design defines much more than simply saying the A-level head (@H1) is Garamond. It defines the specific foundry Garamond, the size, the density, and myriad other attributes.

    Creating the “bag” is not a trivial task. Moving a file, for example, from InDesign to ePub is not as simple as choosing to export the ID file to ePub because the process is not flawless. Some day it may be flawless, but today it isn’t. The translation creates errors. Similarly, simply because something was done in SGML or XML doesn’t make it smoothly translatable to any desired output.

    I’ve been in publishing for 27 years and agree that there are many things that publishers could do to improve their processes, but it is naive to think that production of hundreds of thousands of books is as simple as pressing a button to generate flawless products. If you take the micro view and focus on a single book, the solutions seem simplistic and those who protest otherwise seem to be making a mountain out of a molehill. But a macro perspective that involves hundreds of thousands of books confirms that the mountain really is there.

  11. Constantly hearing how hard it is for publishers to deal with the digital age is getting really old. So, it’s hard for the poor publishers to format and proofread their books! It’s hard for the poor publishers to deal with rampant copyright infringement. It’s hard for the poor publishers to make a good return on their backlist. And so on and so forth.

    Well, I don’t care. Get the job done or fade away. There are vast numbers of clever people who are very skilled at dealing with digital media and finding efficient means to publish ebooks.

    There’s just one thing that stands in their way – a bunch of antiquated, whiny, self-obsessed publishing companies that are too clumsy, too scared and too wedded to the past to be able to adapt to the digital world.

  12. @Rich, thanks for your response.

    I was not under the impression that there’s a button that publishers can press that would immediately enable books to be generated in any format (like conspiracy theorists who believe Detroit has been sitting on a 200MPG engine for the past 30 years). Just as you’ve been in publishing for donkey’s years, I’ve been in software development. I appreciate that specifying the formatting is a detail-oriented, intricate process.

    But the point of SGML/XML, as I’m sure you’re aware, is that you can separate content from how it is used. And an advantage is that you can specify multiple sets of tags within the same content. That level of tech was available more than 20 years ago. If you don’t have a single “source of record” for your content, you’re doing it wrong. Period.

    What you’re describing: not getting electronic files, issues of ownership rights, not knowing how to handle them, doing additional markup that you’re not storing…those are process issues. And they’re examples of publishers doing things badly, if not incompetently. You’re seriously trying to tell me that:

    “they have no means to properly store or catalogue the thousands of such files for their books. Imagine how many files a Simon & Schuster or Random House would have to deal with each year. Archiving and cataloging alone would add considerably to the production costs.”

    Large hard drives, version control, and metadata are not difficult concepts to grasp, and believe it or not, dealing with your content and formatting properly would have the effect of *lowering* your production costs. And, by the way? Dealing intelligently with the archiving of the content (and intermediate work product) that publishers make their money on? It’s their JOB.

    Publishing is at least 20 years behind the times when it comes to the use of technology. You’ll forgive me if I have little sympathy for an industry that seems to invest zero in infrastructure and instead focuses its energies on complaining about how hard everything is.

  13. JohnDoe – an absolutely excellent post. The state of publishers’ IT incompetence is staggering, and yet they have had years to prepare and years of warning. Now they still don’t seem to know what they are doing and are trying to pass the cost of their incompetence on to the reading customer.
    “..publishers are not yet certain what to do with or how to handle the electronic files.”
    Staggering. Completely staggering.

  14. Richard—files? Really? With all due respect, you simply will not convince me that an inability to keep track of FILES is really the problem here. Every business has files. My dad is in advertising and they have a server with a file for every client. You create a document relating to that client’s account, you save it on the server in their folder. Same with my business. I wrote 300 report cards this year. Each class has a folder, each kid within the class has a folder, and you save your contributions in the appropriate spot. In the worst-case scenario, where you really can’t keep track of an electronic document (or you have to archive it to prevent server overload, as we do), you burn it to a CD-ROM and file it in the same place you used to file your paper stuff back in the day. If you are seriously telling me that the publishers cannot produce a quality ebook because they don’t know how to FILE—well, then the publishing industry perhaps deserves whatever failures are coming their way. I mean, seriously—you’re justifying ebooks with obvious OCR errors because they don’t know how to file?

  15. @John Doe — You wrote: “Dealing intelligently with the archiving of the content (and intermediate work product) that publishers make their money on? It’s their JOB.”

    This may be the crux of the problem. Until recently, it was not the publisher’s “job” to archive the content. It was the publisher’s job to find new authors, coddle quality existing authors, and shepherd a book from author concept to finished product in consumer hands. Publishers still believe, and I’m not convinced wrongly, that such remains their primary job.

    It is only within the last 3 years that the ebook “revolution” has really occurred, and as much as we would like to think that publishers are agile enough to change course at the whiff of change, they aren’t. Change takes time, perhaps more than the consumer wants it to take, but these problems are not immediately solvable by diktat from the top.

    You also wrote: “But the point of SGML/XML, as I’m sure you’re aware, is that you can separate content from how it is used. And an advantage is that you can specify multiple sets of tags within the same content. That level of tech was available more than 20 years ago. If you don’t have a single “source of record” for your content, you’re doing it wrong. Period. ”

    Yes, I am aware of the versatility of SGML/XML and was aware of it 15+ years ago. But 2 decades ago, that versatility wasn’t needed and so no one learned how to make use of it. Again, the time when it did become useful was in very recent years, when ebooks really began taking off and when customizable textbooks came into demand.

    Until the real takeoff of ebooks, there was no need for publishers to have control over much more than the final PDF that is used to create the printed book. Now that there is a need, there also needs to be a process in place that the publisher can adhere to. It is not as simple as just taking control of the InDesign files used to create the book.

    Remember this, too: Once we step outside the world of fiction everything in the book production process becomes more complicated. However, the solution that a publisher puts in place has to be a universal solution, one that will handle all of the publisher’s output. It may be pretty easy to archive and catalogue a novel and its single file, but it isn’t so easy to do the same with a 2600-page multivolume medical text or even a much smaller how-to book for teachers that is riddled with forms, tables, and illustrations, and has different needs from the fiction book. Yet for the Elseviers and Bertelsmanns of the publishing ecosystem, a single company-wide system needs to be created.

    I don’t disagree that it should be done; I merely am suggesting that it isn’t as easy a task to do as is often suggested.

  16. @Joanna — You wrote: “Richard—files? Really? With all due respect, you simply will not convince me that an inability to keep track of FILES is really the problem here.”

    If we were talking about a few hundred files, I would agree. But we are talking about millions of files. That’s problem number 1. For example, a book I recently did for a client has more than 300 files, and that’s for one book.

    On the micro level everything is simple; on the macro level things become more complex. A publisher like Elsevier or Oxford University Press will have hundreds of millions if not hundreds of billions of files that have to be dealt with.

    To solve the problem you need a single workable system, not multiple systems that are simply local solutions.

    More importantly, up to a couple of years ago publishers had no need to retain source files. Source files often were paper manuscripts that were sent to a typesetter who rekeyed the files. They weren’t easy-to-manipulate Word-type electronic files. All the publisher kept were the plates and today the PDFs that are the plate equivalents. Once a book was published, the most likely scenario is that there would be an identical reprint in the same form. Again, it is only within the past 3 years that the electronic files have gained importance.

    Even so, source files change over the years. Fifteen years ago, I prepared source files for publishers primarily using Ventura Publisher. But Ventura died on the vine about 10 years ago, though its files remained usable until about 5 years ago. Before publishers even saw a need for retention of source files, the Ventura source files had essentially become wholly unusable and inaccessible, because the computer industry moved on.

    Today, most publishers make use of InDesign. Yet there is no guarantee that in 5 years InDesign files will be accessible, just like 8-track tapes — they may exist but can’t be played.

    You think of a publisher’s role differently than publishers think of their role. They are coming to grips with the new paradigm, but they cannot make the necessary adjustments at the drop of a hat. Unlike your father’s office, where he can decide tomorrow to wholly revamp his computer system and have it done a week later, international conglomerates cannot move that quickly. There is much more that they have to take into consideration.

    Your article about end-of-the-year library cleanup illustrates the problem. You had to sit around until someone showed you how to scan the books in because you had not done it before. Why didn’t you learn how to do it 6 months ago, in case you needed the skill at the end of the year or 5 years from now? That is what you are complaining about as regards the publishers: that they didn’t foresee 10 years ago precisely what they would need today and create a system immediately to deal with it.

    Until relatively recently they had no need to catalogue and archive source files. Preserving them was the role of the printer/typesetter and was a way for the printer/typesetter to get repeat business.

  17. Richard, I do understand your point that it is a more involved issue than it might seem. But where it falls apart for me is that an issue such as saving a file really is not a very hard problem to solve compared to other problems they might be dealing with. Changing your whole business model due to a paradigm shift? Fine, that might take a while. But learning how to save a file? Can we not just deal with it so we can move on to other issues? If their business really is so inflexible that they can’t even solve the little problems, how is anything ever going to get done? We might as well just give up now and only read public domain freebies. I, for one, am not willing to pay full retail price for error-riddled ebooks just because some suit somewhere can’t figure out how to file a document.

  18. @Richard, I’ll move on (or go back under the bridge, depending on your view of me). But I wanted to make one last point.

    E-books are new. The necessity of the publishing industry to manage the packaging of its content into multiple formats is not. How long has the industry had to manage issuing a single book in hardback, trade paperback, mass market paperback, book club edition, large-print edition, Braille, and probably other formats that I’m forgetting? How well have they done in introducing technology to ease movement from one format to another?

    I’m not saying it’s easy (and I know that non-fiction introduces a whole new set of problems). There are plenty of technical obstacles to overcome. But if you never even start, it’s impossible.

  19. Rich, what you’re describing is data life-cycle management, and every modern business has to deal with it. The fact that a multi-billion-dollar industry didn’t come up with a better solution 10 years ago is unbelievable, but here we are.

    You don’t have to save everything. Data life-cycle management is expensive, and if it costs more to manage the asset than you think the asset is worth, you dump it (unless there are regulations that require you to keep it).

    If the publishing industry’s data management plan was to store its assets on paper, that’s a valid option. The medium has a long life span, but it does have lots of disadvantages. The one we’re discussing is that it’s expensive to convert to quality digital text (it’s expensive to republish in any form).

    I just don’t see what this has to do with the price of tea in China, or whether we still need paperbacks. If paperbacks generate enough demand to pay for themselves then you continue to produce them.

  20. I think what frustrates a lot of people here is that the publishing industry acts like their data management issues are unique. They aren’t. Other industries have been dealing with these issues for years, and there are solutions.

  21. Every novel I’ve ever written involved two files (the novel and the cover), plus sometimes a third file of other available books. I suppose textbooks can easily have a few hundred files involved, but most fiction will not. And when you brought up hardbacks and paperbacks, I didn’t imagine you were talking about textbooks.

    Or is there some other reason why a fiction book would require 300 files?

  22. I still haven’t encountered an ebook that couldn’t have been produced with HTML 1.0.

    “Oh, but my carefully-selected font! My individualized kerning for each separate letter of every chapter heading! My unique little swirly graphics used to indicate scene transitions! My creative use of whitespace! How am I supposed to get all these things in bare HTML?”

    Well…maybe those things aren’t needed.

  23. Nobody yet has brought up what for me is the obvious solution: crowdsourcing. Build into your e-reading software the capacity for the reader to flag typos and report them back to the distributor. Do a deal: for every 100 typos flagged, the reader gets a voucher for $5 off a new book. Be seen as responsive; process the corrections quickly and make corrected versions available to purchasers at no cost.
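
    [Ed.: As a sketch, the reporting side of such a scheme needs little more than a counter per reader. The 100-typo threshold and $5 voucher are the figures proposed in the comment; the class and method names here are invented for illustration.]

```python
from collections import defaultdict

VOUCHER_THRESHOLD = 100  # flags needed per voucher, per the proposal
VOUCHER_VALUE = 5        # dollars off a new book

class TypoTracker:
    """Collects typo reports from readers and tallies voucher credit."""

    def __init__(self):
        self.flags = defaultdict(set)    # reader -> {(book, location), ...}
        self.reports = defaultdict(set)  # (book, location) -> {readers}

    def flag(self, reader, book, location):
        """Record one report; duplicate flags from a reader don't count twice."""
        self.flags[reader].add((book, location))
        self.reports[(book, location)].add(reader)

    def vouchers_earned(self, reader):
        """Dollar value of vouchers this reader has accumulated."""
        return len(self.flags[reader]) // VOUCHER_THRESHOLD * VOUCHER_VALUE

tracker = TypoTracker()
for i in range(250):                     # one reader flags 250 distinct typos
    tracker.flag("alice", "rumpole-omnibus", i)
print(tracker.vouchers_earned("alice"))  # 10, i.e. two $5 vouchers
```

    [Ed.: The per-location reader sets also give the distributor the “be seen as responsive” half of the proposal: locations flagged by many independent readers are the ones to correct first.]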
