Slate has a piece on the use of Google's Book Search in detecting plagiarism. It is an interesting area, as it represents technology removing yet another barrier to something that was previously quite hard to do.

Previously, there were a number of barriers to detecting plagiarism. The scope of detection was limited by people's access to the original books (hence the popularity of plagiarising material published overseas, or long out of print, or both), their ability to read them (that is, to have the time to read them, and to translate the work if necessary), their recollection of what they had read, and the likelihood of their finding and reading the infringing work. And in most cases, a reader would just get a feeling that something was amiss, rather than knowing straight away which portions had been copied, and from where.

Now, it is a simple matter of setting a powerful computer loose on a massive database.
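The basic technique is simple enough to sketch. One common approach (an illustration only; nothing here reflects how Google's system actually works) is to break each text into overlapping word n-grams, or "shingles", and report any shingles a suspect text shares with works already in the database. The corpus, the suspect text, and the five-word shingle size below are all made up for the example:

```python
def shingles(text, n=5):
    # Break a text into overlapping n-word sequences ("shingles").
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def matching_passages(suspect, corpus):
    # For each work in the corpus, report the shingles it shares
    # with the suspect text (empty matches are dropped).
    suspect_shingles = shingles(suspect)
    return {
        title: sorted(suspect_shingles & shingles(source))
        for title, source in corpus.items()
        if suspect_shingles & shingles(source)
    }

# Illustrative data: a tiny "database" and a suspect sentence.
corpus = {
    "Original work": "it was the best of times it was the worst of times",
}
suspect = "he wrote that it was the best of times and moved on"

print(matching_passages(suspect, corpus))
# → {'Original work': ['it was the best of', 'was the best of times']}
```

Real systems are far more elaborate (they must cope with paraphrase, translation, and scale), but the core idea is the same: exact or near-exact overlap becomes trivial to find once the texts are machine-readable.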

My suspicion is that a great number of similar instances of plagiarism will be detected as more and more books are loaded into the database, particularly cases of copying that would never have been practical to detect before (e.g. a 19th-century English author copying a little-known 18th-century French work).

Borrowing even between well-known works has often gone (apparently) undetected. According to recent research, Churchill consciously borrowed phrases from HG Wells for his speeches, although he expressly told Wells of the fact, and told him that he was a "great fan" of his work. Despite the great popularity of both men, and the fact that the works in question were published only a year or so apart, no-one seems to have picked it up earlier.