
Cryptography and Content Protection

One common application of cryptography is to prevent copies, or at least digital copies, from being made of computer programs, music, pictures, or movies.

Since these things can't be used while in encrypted form, however, works protected in this fashion must still be accompanied by all the information needed to decrypt them, either as they are distributed or within the device on which they will legitimately be played or used. Thus, it appears that someone attempting to overcome such protection will always have an alternative to cryptanalysis as a means of attack: prying the key out of wherever it is hidden.

However, if a key is hidden inside the circuitry of a microchip, prying it out requires specialized equipment; that, in itself, would be more reassuring if many hackers weren't college students with access to such equipment, but techniques the military uses can make extraction more difficult, such as painting chips with chemicals that will catch fire if exposed to the air. Because this limitation does mean that no content protection method can be technically perfect, it is not surprising, whether or not one approves of it, that industries relying on copyright have asked for (and in many cases received, as with the Digital Millennium Copyright Act in the United States) specific legal protection for content protection schemes: laws making it illegal to attempt to defeat them, or to reveal the hidden keys to others once they are found.

To allow a protected movie or song, for example, to be played on a computer without the content ever having to move along the computer's buses in decrypted form, one idea that has been advanced, and which does seem necessary, is to put the decryption inside each display device: inside video cards (or, more recently, inside monitors with digital inputs), sound cards, and printers (so that one can print a copy of a book without being able to access its text in machine-readable form).

It may also be noted that a number of content protection schemes include some measure of protection against the manufacturers of display devices.

One could begin by envisioning that a song or movie, encoded digitally, could simply be encrypted by a key built into every playback device.

The next step might be to think of the song or movie being encrypted by a key which is chosen randomly, with that key then included, in encrypted form, in a header.

Then, it becomes possible for the header to contain several copies of the key in encrypted form, each encrypted with a different device manufacturer's key. The idea is that if a manufacturer were careless with its key, new recordings might no longer include the recording key encrypted under that manufacturer's key. This, of course, would affect the many consumers who had purchased that manufacturer's earlier products.

If one envisages that a very large number of manufacturers might be involved, then instead of including on each recording a very large number of encrypted versions of the recording key, each encrypted with one manufacturer's master key, some more economical scheme could be used: for example, giving each manufacturer several of the keys used to encrypt recording keys, but with no two manufacturers holding exactly the same combination of keys. Each content provider, likewise, might hold only some of the existing keys of that type. Splitting the recording key into several pieces, for example by encrypting several values which, when XORed together, yield the recording key, might be useful as well.
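
As a concrete illustration, here is a minimal sketch, in Python, of the sort of header this suggests: the recording key is split into two XOR shares, and each share is encrypted under several manufacturers' master keys, with no two manufacturers holding the same combination. The "cipher" is just a hash-derived XOR keystream standing in for a real one, and names such as mfr_1 are purely illustrative, not part of any actual scheme.

# Minimal sketch: a randomly chosen recording key, split into XOR shares,
# each share encrypted under several manufacturer master keys.
# The toy XOR "cipher" below stands in for a real block cipher.

import hashlib
import secrets

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Stand-in cipher: XOR with a hash-derived keystream (16-byte data only).
    stream = hashlib.sha256(key).digest()
    return bytes(d ^ s for d, s in zip(data, stream))

toy_decrypt = toy_encrypt   # XOR is its own inverse

# The recording key, split so that both shares are needed to reconstruct it.
recording_key = secrets.token_bytes(16)
share_a = secrets.token_bytes(16)
share_b = bytes(r ^ a for r, a in zip(recording_key, share_a))

# Each manufacturer has a master key; no two hold the same combination of
# the keys used to protect the shares.
master_keys = {name: secrets.token_bytes(16)
               for name in ("mfr_1", "mfr_2", "mfr_3")}

header = {
    "share_a": {m: toy_encrypt(k, share_a)
                for m, k in master_keys.items() if m in ("mfr_1", "mfr_2")},
    "share_b": {m: toy_encrypt(k, share_b)
                for m, k in master_keys.items() if m in ("mfr_2", "mfr_3")},
}

# A player built by mfr_2 can recover both shares, and hence the recording
# key; if mfr_2's key ever leaked, future releases could simply omit the
# header entries encrypted under it.
a = toy_decrypt(master_keys["mfr_2"], header["share_a"]["mfr_2"])
b = toy_decrypt(master_keys["mfr_2"], header["share_b"]["mfr_2"])
assert bytes(x ^ y for x, y in zip(a, b)) == recording_key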

An additional step, also requested by content providers, and one which is more controversial, is for the law to mandate watermark detection in display or recording devices. Using the techniques of steganography, an indication of the copyright status of an audio or video signal can be embedded within it. Mandating that display and recording devices in general recognize these watermarks imposes both costs and the risk of false positives on uses of those devices that have nothing to do with copyrighted entertainment media.

Software, if protected by encryption, could be protected in two different ways. It could be distributed with a dongle that decrypts an important part of the software, preventing copying outright. Or the encryption could use a key jointly derived from the user's serial number or name and a corresponding secret value: the two together would produce the constant key under which the software on the CD-ROM is encrypted, but it would be made difficult to find and use this key directly, so that unauthorized copies would normally identify their source.
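
A sketch of that second option follows, under the assumption (made here only for the sake of the example; the text does not specify it) that the serial number and secret value are combined by XORing the secret with a hash of the serial number.

# Sketch: the constant key is never shipped directly; each user receives a
# secret value which, combined with his serial number, reproduces it.
# Combining via XOR with a hash is an assumption of this example.

import hashlib
import secrets

CONSTANT_KEY = secrets.token_bytes(16)   # key under which the CD-ROM is encrypted

def issue_secret(serial: str) -> bytes:
    # Vendor side: compute the value that complements a hash of the serial.
    h = hashlib.sha256(serial.encode()).digest()[:16]
    return bytes(k ^ x for k, x in zip(CONSTANT_KEY, h))

def derive_key(serial: str, secret: bytes) -> bytes:
    # User side: serial number and secret together yield the constant key.
    h = hashlib.sha256(serial.encode()).digest()[:16]
    return bytes(s ^ x for s, x in zip(secret, h))

secret_for_alice = issue_secret("ALICE-0001")
assert derive_key("ALICE-0001", secret_for_alice) == CONSTANT_KEY
# An unauthorized copy would still carry "ALICE-0001" and its secret value,
# identifying its source.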

Allowing Everyone to Produce Protected Content

I remember that, some years ago, there was a news story about a new microprocessor that had, built into it, the capability of running programs that were encrypted. Actually, two chips had this feature; they were the NEC V25 Software Guard and the NEC V35 Software Guard. These chips were 8086-compatible chips; the V35 (which also existed in a plain form without this feature), in addition, had features that allowed it to address 16 Megabytes of RAM with a 24-bit address, but in a simpler fashion than that which later became the standard with Intel's 80286 chip.

The encryption provided was, however, somewhat limited. Customers could specify a 256-byte translation table, and when the chip was executing encrypted software, this table was used to decrypt the first opcode byte of instructions.

Since the address portion of an instruction usually appears in the clear on the address bus in a later cycle, it made sense not to encrypt it, as doing so would only have provided a window into the translation table for anyone able to monitor the computer's bus.

One could imagine slightly enhancing this kind of encryption, while keeping its time requirements comparable to those involved in address calculation:

Here, bytes being fetched by the CPU go through two translation tables or S-boxes, and in between are XORed with a quantity calculated from the least significant two bytes of the address from which they were fetched.

Four different S-boxes are present in each position. Another table, not shown in the diagram, would determine which S-box is to be used for various types of memory access, and it might look something like this:

00000 First opcode byte                   00 00 00 00
00001 Other opcode bytes                  01 01 01 01
00010 8-bit displacement                X
00011 Address field                     X
00100 (not used)                        X
00101 one-byte data                       10 10 10 10
00110 16-bit data, first byte             11 11 11 11
00111 16-bit data, second byte            00 01 10 11
01000 32-bit integer, first byte          01 10 11 00
01001 32-bit integer, second byte         10 11 00 01
01010 32-bit integer, third byte          11 00 01 10
01011 32-bit integer, fourth byte         11 10 01 00
01100 32-bit floating, first byte         10 01 00 11
01101 32-bit floating, second byte        01 00 11 10
01110 32-bit floating, third byte         00 11 10 01
01111 32-bit floating, fourth byte        00 11 00 11

11000 64-bit floating, first byte         01 10 01 10
11001 64-bit floating, second byte        10 01 10 01
...

so there would be nine bits for each entry, one turning off encryption, the other eight specifying the four S-boxes to use. One could add another two bits, so that the two XOR steps shown in the diagram could individually be switched to addition.
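
The following Python sketch shows one plausible reading of this arrangement. Since the diagram's exact layout is not spelled out in the text, the details are assumptions: here, each of the two low address bytes passes through an S-box of its own, the results are XORed together (the first XOR step), and that value is XORed into the data byte between its two S-boxes (the second XOR step); the optional switch of the XOR steps to addition is omitted.

# Sketch of the per-byte transformation applied by the chip to fetched
# bytes, with four candidate S-boxes in each of four positions and an
# access-type table selecting among them (or turning encryption off).
# The derivation of the address-dependent mask is an assumption.

import random

rng = random.Random(0)

def make_sbox():
    p = list(range(256))
    rng.shuffle(p)
    return p

# Four candidate S-boxes for each position.
DATA_SBOX_1  = [make_sbox() for _ in range(4)]
ADDR_LO_SBOX = [make_sbox() for _ in range(4)]
ADDR_HI_SBOX = [make_sbox() for _ in range(4)]
DATA_SBOX_2  = [make_sbox() for _ in range(4)]

# One row per access type, as in the table above: None means "fetched in
# the clear" (the X rows); otherwise four 2-bit selectors, one per S-box.
ACCESS_TABLE = {
    "first opcode byte":       (0, 0, 0, 0),
    "other opcode bytes":      (1, 1, 1, 1),
    "address field":           None,
    "one-byte data":           (2, 2, 2, 2),
    "16-bit data, first byte": (3, 3, 3, 3),
}

def decrypt_fetch(byte: int, address: int, access_type: str) -> int:
    sel = ACCESS_TABLE[access_type]
    if sel is None:
        return byte
    mask = (ADDR_LO_SBOX[sel[1]][address & 0xFF]
            ^ ADDR_HI_SBOX[sel[2]][(address >> 8) & 0xFF])
    return DATA_SBOX_2[sel[3]][DATA_SBOX_1[sel[0]][byte] ^ mask]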

To allow a standard part to be used, the chip could contain the ability to do public-key cryptography, so that it could load in the contents for all these tables from the outside.

But even with the additional complications shown, it seems like quite a mismatch to start off by using something as powerful as public-key cryptography, and then protect software with such an elementary type of cryptography.

So, instead of (or in addition to) using the chipmaker's public key to encrypt S-boxes for use in this elementary fashion, it ought to be used to allow decryption of executable code, which, in decrypted form, would be kept in memory on the chip itself, and not allowed to leave there.

The program so decrypted could be a small one, including a key, which would then serve to decrypt, by any conventional algorithm, additional program code to be placed in this internal memory as well. This would reduce the amount of dedicated encryption hardware needed on the chip, but might create problems in connection with what I propose below.
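
A compact sketch of this two-stage arrangement might look as follows, with toy stand-ins for both ciphers (textbook RSA with tiny parameters, applied byte by byte, and a hash-derived XOR keystream); a real chip would of course use full-strength algorithms, and only the decrypted results would be held in its internal memory.

# Stage 1: the chip's built-in private key recovers a small loader (here,
# nothing more than a conventional key). Stage 2: that key decrypts the
# bulk of the protected code into the chip's internal memory.
# The RSA parameters and byte-at-a-time use are purely illustrative.

import hashlib
import secrets

N, E, D = 3233, 17, 2753          # toy RSA modulus 61*53; D is held on-chip

def rsa_decrypt_block(c: int) -> int:
    return pow(c, D, N)

def xor_keystream(key: bytes, data: bytes) -> bytes:
    # Stand-in for a conventional cipher: hash-derived XOR keystream.
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, out))

def load_protected_program(pk_encrypted_loader, encrypted_bulk):
    loader_key = bytes(rsa_decrypt_block(c) for c in pk_encrypted_loader)
    return xor_keystream(loader_key, encrypted_bulk)   # goes to internal memory

# A vendor encrypts a 16-byte conventional key with the chip's public key,
# and the program body with that key.
key = secrets.token_bytes(16)
pk_loader = [pow(b, E, N) for b in key]
bulk = xor_keystream(key, b"...machine code destined for internal memory...")
assert load_protected_program(pk_loader, bulk).startswith(b"...machine")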

Decrypting a program by a secure algorithm, and only storing the result inside the microprocessor chip for use, would be quite secure.

But this raises another issue.

Do we allow every software maker to protect its own software in this fashion? Or will making use of the mechanism be restricted to large, respected companies that the chipmaker will trust to abide by a non-disclosure agreement?

Using public-key cryptography would mean that the chipmaker could disclose the public key corresponding to the private key built into every chip without compromising the security. But what happens when writers of viruses and trojan-horse programs use it to protect their efforts? Of course, the chipmaker would use its knowledge of its private key to assist efforts to combat viruses, but this would still allow such code to be far more damaging, and harder to detect.

In a USENET post, I proposed a scheme that would allow a facility of this nature to be made openly available and yet have additional protection against misuse.

Under that scheme, the only way that a program containing encrypted parts could successfully execute on a user's computer would be if that user had activated the program, by using a utility to superencrypt the program's encrypted parts with his own personal key.

This would be a fair approach to content protection: it would provide a level playing field for software writers, and it would also leave the user in control of his computer, since he could decide which programs he will trust to execute in encrypted form on it.

Note that this proposal requires on-chip symmetric encryption capability, to handle the user's key. Programs to be loaded into protected memory using this encryption might also be required to be superencrypted with the user's key, in addition to requiring this for the block encrypted with public-key techniques.
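
A small sketch of the activation step might look like the following, again with a toy XOR keystream standing in for real ciphers (with the side effect that the layers commute here, which a real design would avoid): the distributed program carries its protected block encrypted for the chip, the user's activation utility superencrypts that block with his personal key, and the chip strips the user-key layer before the maker-key layer, so a block that was never activated does not decrypt to anything usable.

# Sketch of user activation by superencryption; the XOR layers below are
# stand-ins for the real public-key and symmetric ciphers.

import hashlib
import secrets

def xor_layer(key: bytes, data: bytes) -> bytes:
    stream, i = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + bytes([i])).digest()
        i += 1
    return bytes(d ^ s for d, s in zip(data, stream))

CHIP_KEY = secrets.token_bytes(16)   # stands in for the chipmaker's key pair
USER_KEY = secrets.token_bytes(16)   # the owner's personal key, held on-chip

# As distributed: the protected block, encrypted for the chip.
protected_block = xor_layer(CHIP_KEY, b"secret code for internal memory")

# Activation: the user superencrypts the distributed block with his key.
activated = xor_layer(USER_KEY, protected_block)

# On the chip: remove the user-key layer, then the maker-key layer.
loaded = xor_layer(CHIP_KEY, xor_layer(USER_KEY, activated))
assert loaded == b"secret code for internal memory"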

(There is no need to require this for programs which aren't decrypted once and loaded into chip-internal memory, but are instead executed in regular memory using the simple scheme illustrated in the diagram above, once the block containing the S-boxes for the program has been activated. Although much less secure, this kind of ability might still be thought worth including on a chip that runs secured software, so that all of a program can be protected to some degree, providing an additional nuisance to hackers, in addition to the stronger protection given to the small pieces of the program loaded into the chip's internal memory. Possibly also useful would be a secondary user key, used to activate programs which are only allowed to use the multiple S-box method of external protection, and which are not loaded, even in part, into the chip's internal memory.)

But even this would not be a foolproof way of preventing a protected program from accepting other programs as protected in a fashion that bypasses the requirement of explicit user activation, since a program could always be loaded, in the form of P-code, into an on-chip data area: a program which is to be hidden needs the ability to work with data in private as well. This is particularly likely to be a problem if the computer's operating system makes use of this protection; but if the operating system were activated with the type of secondary user key proposed above, so that it was only protected by the simple scheme in the illustration, it would have no direct access to the internal memory. That wouldn't stop it from accepting programs written in encrypted P-code for execution, of course.

Also note that a protected program, using either type of protection, would have to be treated like an interrupt service routine by the computer, so that it could only be called at the entry points explicitly specified when it was loaded. That does not mean, however, that such programs should be privileged; limiting externally protected programs to user mode, and further limiting those executing on-chip to a fixed area of memory, so that they can only serve as computational subroutines, is another way to combat misuse of the security feature, although, again, it is not foolproof.

The Atomic Handshake

Another issue limiting the acceptance of Digital Rights Management is that a protected work is usually tied to a particular device. One can play a CD in any CD player. One can lend a book to a friend.

The basic problem with making downloads transferable between devices while preventing copying has to do with how handshake protocols work. To prevent an accident during a transfer from costing the owner a copy he has paid for, the protocol has to make sure that the new copy can be, and will be, activated on the new device before the copy on the old device is deactivated and removed. But in that case, what is there to stop a hacker from interrupting the protocol at just the right time, leading the old device to retain its copy, thinking the transfer had failed, and the new device to allow use of its copy, thinking the transfer had succeeded?

A solution does exist, though, that should offer adequate security for this purpose.

The protocol for exchanging the right to view content between two devices could work like this:

First, the two devices establish an encrypted communications link. The content consortium has provided all authorized devices with keys, so that a man-in-the-middle attack is not possible.

The protocol will consist of the exchange of a thousand messages back and forth. This would still take only a fraction of a second at electronic speeds.

Which of those messages will be the one from the recipient of the transferred work announcing that it has completed activation of the content, and that the content is now to be deactivated on the source device, is chosen at random at the start of the protocol; and that message will only be sent upon successful receipt of the message from the source immediately preceding it.

Thus, interrupting the transfer at a random point would result either in a failed transfer or in a normal successful one 999 times out of 1000, and only once in duplication of the content. Reasonable restrictions on repeated transfers would be enough to prevent that small probability of success from being exploited, through many attempts, to achieve unrestricted duplication.
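
A simulation of this protocol in Python is sketched below, with an adversary who cuts the link just before a randomly chosen message. Since the text does not say which side chooses the committing message, having it drawn at random at the start and known to both devices over the already-encrypted link is an assumption of this example.

# Simulation: 1000 messages in total, back and forth; the recipient's
# reply in one randomly chosen round is the committing message.

import random

ROUNDS = 500   # 1000 messages in total

def run_transfer(cut_before=None):
    # cut_before: index (1..1000) of the first message the adversary
    # suppresses, or None for an undisturbed transfer.
    commit_round = random.randrange(1, ROUNDS + 1)
    source_has, recipient_has = True, False
    for r in range(1, ROUNDS + 1):
        src_msg, reply = 2 * r - 1, 2 * r
        if cut_before is not None and src_msg >= cut_before:
            break                      # the source's message never arrives
        if r == commit_round:
            recipient_has = True       # recipient activates its copy...
        if cut_before is not None and reply >= cut_before:
            break                      # ...but the committing reply is lost
        if r == commit_round:
            source_has = False         # reply received: source deactivates
    return source_has, recipient_has

# Cutting the link at a random message almost always produces either a
# failed transfer (source keeps its copy) or a normal success; duplication
# requires hitting the single committing reply, roughly a 1-in-1000 chance.
trials = [run_transfer(cut_before=random.randrange(1, 2 * ROUNDS + 1))
          for _ in range(10000)]
print(sum(s and r for s, r in trials), "duplications in", len(trials), "trials")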

