Over the centuries, cryptographers have devised many ways to encode messages to make them unreadable to all but their intended recipients. They range from simple substitution, in which each letter of the alphabet is replaced by another letter or combination of letters, to mathematical treatments that transform and jumble the letters of a message in complex ways. But no matter what methods are used, all cryptography systems make use of keys that enable recipients to unlock and read encrypted text.
Traditionally, the only way to ensure that encrypted messages remained secure was to keep the keys secret, which usually meant hand-delivering them to the intended recipients. For small organizations like banks with just a few branches, or for organizations with established methods of transporting sensitive information, such as the military or the diplomatic corps, that was not a problem. But when the senders and receivers of encrypted messages became millions of computer users all over the globe, keeping keys secret became a seemingly impossible feat.
In 1976, Martin Hellman, a professor at Stanford University, and Whitfield Diffie, a graduate student, proposed a solution to this thorny problem. In a paper titled "New Directions in Cryptography," published in IEEE Transactions on Information Theory, they introduced the idea of public-key encryption, which uses two keys: a private key that only the user knows, and a public key that is distributed freely. To send a secure message to another user, one encrypts it using the recipient's public key. The message can then be decrypted only by the recipient, using his private key. An additional advantage of public-key encryption is that it can be used to create unforgeable digital signatures. A user creates a digital signature using her private key; it can be verified only with her public key. Since only someone who knows the private key could have created the signature, it confirms the sender's identity.
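The two-key workflow can be sketched with a toy keypair. The numbers below are tiny, hand-picked illustrations (this keypair is trivially breakable; real keys run to hundreds of digits), but the roles of the public and private keys are the same:

```python
# Toy illustration of public-key encryption and digital signatures.
# The keypair is hand-picked and far too small for real use.
n, e = 3233, 17      # public key: published freely
d = 2753             # private key: known only to its owner

# Encryption: anyone can encrypt with the PUBLIC key...
message = 65                      # a message, encoded as a small number
ciphertext = pow(message, e, n)   # modular exponentiation
# ...but only the holder of the PRIVATE key can decrypt it.
assert pow(ciphertext, d, n) == message

# Signatures reverse the roles: sign with the private key...
signature = pow(message, d, n)
# ...and anyone can check the signature with the public key.
assert pow(signature, e, n) == message
```

Because only the key's owner could have produced a signature that the public key accepts, verifying it confirms the sender's identity, as the paragraph above describes.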
The first popular implementation of public-key cryptography was RSA (named for the first letters of the last names of its inventors, MIT researchers Ron Rivest, Adi Shamir and Leonard Adleman). RSA, developed in 1977, makes use of trap-door one-way mathematical functions.
A one-way function is a mathematical operation that can be performed easily in one direction but is virtually impossible to carry out in the opposite direction. For example, two very large prime numbers can be multiplied easily; however, someone who knows only the product will find it virtually impossible to work backward to the original primes if the number is large enough. A trap door is an additional piece of information that, if known, turns a one-way function into a two-way function, one that can be computed easily in both directions. In simple terms, the public key in RSA is derived from the product of two large primes, while the private key is derived from the primes themselves, which serve as the trap door.
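The asymmetry shows up even at modest scale. The sketch below multiplies two small primes (chosen as an assumption, purely for illustration) and then recovers them by naive trial division, the "hard direction":

```python
def factor(n: int) -> tuple[int, int]:
    """Recover two odd prime factors of n by naive trial division.

    This is the hard direction of the one-way function: the loop takes
    on the order of sqrt(n) steps, which is hopeless for the
    hundreds-of-digits products used in practice.
    """
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    raise ValueError("no odd factor found")

p, q = 10007, 10009   # small primes, chosen for illustration
n = p * q             # easy direction: a single multiplication
print(factor(n))      # hard direction: about 5,000 loop iterations here
```

For the five-digit primes above the search finishes instantly, but each extra digit in the primes multiplies the work; at the sizes RSA actually uses, trial division would not finish in the lifetime of the universe.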
For the nascent on-line business community, the availability of public-key cryptography was a godsend. It helped overcome one of the greatest fears of potential consumers by making it possible for the first time to send e-mail, credit card numbers and personal information securely over the world's most insecure network, the Internet. But, notes Christof Paar, assistant professor of electrical and computer engineering, it can sometimes be difficult to say just what "secure" means when it comes to encryption.
"Cryptography is a weird field," he explains. "In practice, you can never really prove that an algorithm is secure. You publish it and you hope a lot of people try to break it, and don't succeed. The recent history of modern cryptography is rich in examples of promising algorithms that were quickly broken."
The future of on-line commerce depends, in no small measure, on whether potential customers have enough confidence in the unbreakability of modern cryptographic algorithms to entrust their credit card numbers and personal information to the unprotected byways of the Internet. But as Paar notes, using any cryptographic algorithm is an act of faith.
Recent developments in the on-line world have demonstrated that even tried and true encryption systems can be vulnerable to attacks by a persistent hacker or by a researcher with the right resources and enough time. In June, a scientist at Bell Laboratories discovered a hole in a system used widely on the World Wide Web to encrypt passwords, credit card numbers and other sensitive information. Called the Secure Sockets Layer, or SSL, it is the software behind the little padlock icon in Netscape and Internet Explorer that tells users their communications are secure.
The flaw might have enabled a hacker to determine how to unscramble secret information by analyzing the patterns in the error messages returned after failed attempts to read encrypted data. The researcher found that he could unlock the code after anywhere from one million to four million unsuccessful tries. Although it is unlikely that anyone has been able to take advantage of the bug, a group of major software companies quickly developed a fix that plugged the hole.
Less than a month later, a group of cryptographers found a way to crack the Data Encryption Standard (DES), perhaps the most widely used data scrambling algorithm in the world. Using brute computing strength in the form of a custom-built, $250,000 cracking computer, they worked through 90 billion possible keys every second for 56 hours straight until they found one that worked. In all, they tried about a quarter of all possible keys.
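The attack was pure exhaustive search: decrypt the target with every possible key until recognizable plaintext appears. In miniature, with a 16-bit XOR "cipher" standing in for 56-bit DES (a toy sketch, not the actual cracker's design), the idea looks like this:

```python
def toy_encrypt(plaintext: bytes, key: int) -> bytes:
    # A 16-bit XOR "cipher" standing in for 56-bit DES.
    # XOR is its own inverse, so this function also decrypts.
    ks = key.to_bytes(2, "big")
    return bytes(b ^ ks[i % 2] for i, b in enumerate(plaintext))

def brute_force(ciphertext: bytes, crib: bytes) -> int:
    # Exhaustive key search: try every key until the decryption starts
    # with expected plaintext (a "crib"). The custom DES cracker did the
    # same over 2**56 keys at roughly 90 billion keys per second.
    for key in range(2**16):
        if toy_encrypt(ciphertext, key).startswith(crib):
            return key
    raise ValueError("key not found")

ciphertext = toy_encrypt(b"ATTACK AT DAWN", 0xBEEF)
assert brute_force(ciphertext, b"ATTACK") == 0xBEEF

# Scale check against the article's figures:
# 0.25 * 2**56 keys / 9e10 keys per second is about 2e5 seconds, or 56 hours.
```

The toy search sweeps 65,536 keys in a blink; the figures in the comment show why a quarter of the 56-bit DES key space took the $250,000 machine about 56 hours.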
The widely publicized feat heated up the debate over the U.S. government's policy on the export of encryption systems. Currently, the federal government, considering cryptography a military technology, restricts the export of encryption systems to foreign governments and companies. Although the rules are complicated, in general exported software is limited to 40-bit encryption (vs. the 56-bit to 128-bit systems commonly used in the United States), unless the systems give the government access to the keys needed to decipher messages. The relative ease with which a 56-bit DES key could be deciphered has made many observers ask whether the government is justified in relegating foreign computer users to much weaker 40-bit algorithms.
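The gap between those key lengths is easy to quantify, since each extra bit doubles the key space. A back-of-envelope sketch, assuming the cracking machine's reported rate of roughly 90 billion keys per second:

```python
RATE = 9e10  # keys per second, the DES cracker's reported speed

# Time to sweep an entire key space of the given length at that rate.
for bits in (40, 56, 128):
    seconds = 2**bits / RATE
    print(f"{bits}-bit key space: {seconds:,.0f} seconds "
          f"(~{seconds / 86400:,.1f} days)")
```

At that rate an exported 40-bit key space falls in seconds, the full 56-bit DES space in days, while a 128-bit space would take on the order of 10^20 years, which is the arithmetic behind the observers' complaint.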
The encryption debate is not limited to software exports. The government, particularly the FBI, has always been wary of making it too easy for criminals to scramble their secret communications with unbreakable ciphers. Accordingly, the Clinton Administration would like to require that all strong encryption schemes have a sort of back door that would let the government unscramble messages, or that users place their private keys in an escrow account with a third party, where the government could obtain them as part of a criminal investigation. The software industry and organizations like the Electronic Frontier Foundation believe that the government has no business being able to snoop on private citizens and that solutions like key escrow could prove highly expensive for the government and the private sector.
As Net commerce continues to grow, and as the value of the goods and services bought and sold over the global information network skyrockets, the demand for stronger and more confidence-inspiring cryptography will likely rise in step, as will efforts to thwart and control it.
Last Updated: 11/20/98 13:25:39 EST