DESIGNING A HIGH-SECURITY CIPHER
Frank Rubin
July 7, 1995


Abstract:   In this paper I describe the steps that a cryptographer might take in designing a high-security cipher. The goal is to show the designer's mental process. The byproduct, of course, is an ideal cipher: simple, fast, compact, portable, and secure against today's computers, with built-in expansion to keep ahead of tomorrow's computers.

Keywords:   cryptography, cryptology, polyalphabetic cipher, Vigenère cipher, Gromark cipher, random number, pseudo-random number generator, linear feedback shift register, chained addition.


INTRODUCTION

       When I was first asked by Greg Mellen to write an article for this 20th anniversary issue of Cryptologia, my immediate thought was to write a paper showing just how easy it is to design a secure cipher. The more I worked on it, however, the less easy it became. There was always one more problem to overcome, one more analysis to perform, one more old reference to locate.

       So I have decided to take a different approach. I am going to write about the process. I will describe the steps I took to obtain a cipher that I believe is secure.

       In real life you design a cipher to meet a particular need. Is the cipher for hand use, for computer use, or to be implemented in some specific type of hardware? Is it likely that characters will be garbled in transmission? Who are the potential opponents, and what are their cryptographic capabilities? And so forth.

       In this case, since I started out to show how easy it is to design a cipher, I didn't try to invent anything new. Rather, I began with what is probably the oldest cipher still in active use, the polyalphabetic cipher, first described by Leon Battista Alberti in 1466, intending to adapt it for modern computer use. Therefore, I have retrofitted the requirements to suit the intended solution. Polyalphabetic ciphers are inherently fast, in most cases require little storage, and are easy to implement in hardware or software. The hard part is designing one that is secure. Those, then, are my public goals: fast, compact, portable, secure. The first three I get free, provided I don't complicate things too much, and the last is the real target.

       In the rest of this paper I will pursue the quest of making the cipher secure. The process is simple: you start with a basic cipher, then you examine all of the known ways that such ciphers have been broken in the past, and you devise ways to defeat those attacks. Then you imagine what other attacks might be used, and you foil those, too. Finally, you build in some extra protection against future attacks and future improvements in computers.


THE STARTING POINT

       Let us start with the most common of the polyalphabetic ciphers, the Vigenère cipher invented by Giovan Battista Bellaso in 1553. It can be represented as
 
C_i = P_i + K_i   (mod 26),

where P_i is the i-th plaintext character, K_i is the corresponding key character, and C_i is the resulting ciphertext character. The key stream is just a short key word repeated indefinitely.

       For modern versions using computers, characters would be represented in 8-bit EBCDIC or ASCII code, and the arithmetic would be done modulo 256 rather than modulo 26, namely
 
C_i = P_i + K_i   (mod 256).
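       As a concrete illustration, here is a minimal sketch of this byte-oriented Vigenère in Python; the function names and the sample key are mine, purely for illustration.

    def vigenere_encrypt(plaintext: bytes, key: bytes) -> bytes:
        # C_i = P_i + K_i (mod 256); the short key repeats indefinitely.
        return bytes((p + key[i % len(key)]) % 256
                     for i, p in enumerate(plaintext))

    def vigenere_decrypt(ciphertext: bytes, key: bytes) -> bytes:
        # P_i = C_i - K_i (mod 256) undoes the addition.
        return bytes((c - key[i % len(key)]) % 256
                     for i, c in enumerate(ciphertext))

For example, vigenere_encrypt(b"ATTACK AT DAWN", b"LEMON") repeats the 5-byte key across the 14 message bytes.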

       The first step in making our cipher secure is to hit the books, and find out what attacks have been successful against similar ciphers. I found that these attacks could be divided into three classes: separability attacks, electronic attacks, and depth attacks. In the next three sections I will discuss these at length.


SEPARABILITY ATTACKS

       One of the most common forms of stream cipher is to add or exclusive-or key characters generated by a pseudo-random number generator to successive message characters. The first such cipher was invented by Gilbert S. Vernam about 1915. Such ciphers are easily defeated by known-plaintext attacks [4,5]. Several elaborations have been tried: adding several random number streams [7], using every n-th bit from each of n different random number streams, combining several streams with linear or nonlinear logic elements such as J-K flip-flops [6], and combining the bits of a register through 3 levels of nonlinear circuit elements [14]. These ciphers are similarly defeated [8,9,10,14].

       These ciphers all fail because it is possible to separate the key stream from the message stream. Combining the plaintext and key with addition or exclusive-or is easily reversed. To defeat these types of attacks, you need to combine the plaintext and the key in a way that cannot be reversed. This can be done by masking either the plaintext bytes or the ciphertext bytes.

       The simplest way to provide masking is to use mixed alphabets, or equivalently, to perform simple substitution on either the plaintext or the ciphertext. A simple substitution is a permutation of the alphabet, or of the numbers representing the characters. Since you are aiming for a high-security cipher you should do both. Such a cipher was invented in 1563 by Giovanni Battista Porta, and is now known as a Type IV Quagmire cipher by hobbyists in the American Cryptogram Association. This cipher can be represented as
 
C_i = T(S(P_i) + K_i)   (mod 256).

S(x) and T(x) represent the substitutions S and T applied to the character x.

       These substitutions can prevent an enemy from breaking a cipher, even with large amounts of known plaintext. Knowledge of P_i does not lead to knowledge of S(P_i), and knowledge of C_i does not reveal the value of S(P_i) + K_i, which is T^-1(C_i). This is the same approach that is recommended in [8] and [11].

       If these substitutions are fixed, for example embedded in fixed tables in the cipher device, or in the software program implementing the cipher, then they can be stripped off. To be secure, each substitution must be chosen by use of an independent key.
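       One plausible way to realize keyed substitutions in software, sketched below in Python, is to let each independent key seed a reproducible shuffle of the 256 byte values. This is my own illustration, not a construction from the literature; Python's random module stands in for what should be a cryptographically strong expansion of the key.

    import random

    def keyed_substitution(key: int) -> list:
        # Derive a permutation of the byte values 0..255 from an
        # independent key by seeding a pseudo-random shuffle.
        table = list(range(256))
        random.Random(key).shuffle(table)
        return table

    def invert_substitution(table: list) -> list:
        # Build the inverse permutation, needed for deciphering.
        inverse = [0] * 256
        for x, y in enumerate(table):
            inverse[y] = x
        return inverse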


ELECTRONIC ATTACKS

       Electronic attacks look at the waveforms or signals that make up the individual bits of the ciphertext. Material on these attacks is classified, so you need to talk to individual electronics engineers whom you trust. In doing so, expect differences of opinion.

       All of the engineers I consulted agree on this first point. If the last logic operation in producing the ciphertext was an exclusive-or, it may be possible to distinguish from the waveform a 0-0 from a 1-1, and a 1-0 from a 0-1. It is also probable that this can be done through several generations of exclusive-ors, depending on the type of circuits used. Therefore, the exclusive-or operation generally should be avoided. When it is used it should always be followed by a table look-up operation.

       In the proposed cipher, so far, this is handled by using addition instead of exclusive-or to combine the plaintext and key, and following this with the T substitution, which is performed by table look-up. Some electronics engineers, however, believe that even table look-up reveals a trace of the argument or input value in the resulting waveforms. Since the low-order bit of an addition is precisely an exclusive-or, it would follow that an enemy could determine the low-order bits of the key stream by examining the waveforms of the ciphertext bytes.

       If you are aiming at truly high security, you should protect against even this remote possibility. This can be done with two additional masking steps. First, add another simple substitution to mask the key bytes:
 
C_i = U(S(P_i) + T(K_i))   (mod 256)

where S, T and U are three independent simple substitutions. The extra substitution which is applied to the key bytes is really equivalent to using a different key sequence, T(K_i) in place of K_i in the Quagmire cipher. The T substitution also masks off any mathematical regularity in the generation of the key sequence. If a portion of the key sequence were recovered somehow by an enemy, this substitution would prevent recovery of the entire key sequence.

       The second defense is to follow the U substitution by four inversion operations, that is, operations that replace x by its bitwise complement. My preference is to invert once by exclusive-oring with all ones, invert back through table look-up, then repeat the operation. This will attenuate any trace of the argument value that might otherwise be detectable from the waveform after the U substitution.
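       Gathering the pieces so far, one encipherment step might be sketched as follows. The inversions leave the byte value unchanged; their purpose is electronic, so in pure software the loop is only a faithful transcription of the hardware recommendation.

    # INV[x] == x ^ 0xFF, so a table look-up undoes the preceding
    # exclusive-or while masking the waveform of its argument.
    INV = [x ^ 0xFF for x in range(256)]

    def encipher_byte(p: int, k: int, S, T, U) -> int:
        # C_i = U(S(P_i) + T(K_i))  (mod 256)
        c = U[(S[p] + T[k]) % 256]
        for _ in range(2):       # four inversion operations in all
            c ^= 0xFF            # invert by exclusive-oring with all ones
            c = INV[c]           # invert back through table look-up
        return c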

       There must be enough choices for the three substitutions so that an enemy cannot try them all by a brute force attack. Even with only a 16-bit key determining each substitution there would be 2^48, or about 2.8x10^14, choices for the set of three substitutions. This is enough to prevent exhaustive trial. Again, you should choose a longer key to maintain a margin of safety, and to protect against the possibility that exhaustively trying 2 of the 3 substitutions would be enough.

       The three substitutions also must not have any predictable linearity. That is, there should be no correlation between any linear function of x and any linear function of S(x), T(x) or U(x) which holds across all choices of the key.

       Another wise precaution when implementing a cipher in software is not to store the ciphertext back into the same buffer that contained the plaintext. Otherwise a trace of the prior contents may be detectable. Likewise, when writing the ciphertext to external storage, such as a tape or disk, you want to overwrite the plaintext with the ciphertext, so that no copy of the plaintext remains after encipherment. At the same time you need to erase all traces of the prior contents, perhaps by first writing all ones into each record, then all zeroes, then writing the ciphertext record on top of that.
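       In software, the overwrite discipline might look something like the following sketch. The flushing calls are my assumption about how to push each pass toward the medium; whether the bytes actually reach the physical surface still depends on the operating system and the device.

    import os

    def overwrite_with_ciphertext(path: str, ciphertext: bytes) -> None:
        # Overwrite the plaintext file in place: all ones, then all
        # zeroes, then the ciphertext, flushing each pass to storage.
        length = os.path.getsize(path)
        with open(path, "r+b") as f:
            for fill in (b"\xff", b"\x00"):
                f.seek(0)
                f.write(fill * length)
                f.flush()
                os.fsync(f.fileno())
            f.seek(0)
            f.truncate()
            f.write(ciphertext)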

       If there is any possibility that the enemy could obtain the physical medium containing the message, such as intercepting a diskette in the mail, even stronger precautions are advised. You should use a diskette that has never contained the plaintext of any message or file you might want protected.


DEPTH ATTACKS

       When you use the cipher proposed above, an enemy can still apply depth attacks. The basis for cracking such a polyalphabetic cipher is to discover several sections of text that have been enciphered with the same part of the key stream. Auguste Kerckhoffs noted, around 1880, that the characters in corresponding positions in each such section of text will all be enciphered with the same simple substitution. These can then be solved [12] using letter frequencies, pair frequencies and short words, like any simple substitution. In order to make the polyalphabetic cipher secure, you must either prevent such repeated use of sections of the key stream, or you must prevent an enemy from detecting them.

       I suggested earlier that there should be at least 2^48 choices for the substitutions S, T and U. This means that there is a very low probability that any two messages would be enciphered with the same set of substitutions. However, since you are looking for high security, you don't want to depend on just one such factor to protect you. You should proceed as though all the messages were enciphered with a single set of substitutions. This could happen in practice through operator error, because an installation did not change keys often enough, or because an enemy might have some means of distinguishing messages that use the same substitutions.

       You can make it very expensive for an enemy to detect overlapping sections of the key stream simply by making the stream sufficiently long. You need to look at the methods an enemy could use to discover reused sections of the key stream, and see how long the key stream must be to defeat these attacks.

       The first method is due to Maj. Friedrich W. Kasiski (Die Geheimschriften und die Dechiffrir-Kunst, 1863). The idea is to look at repeated pairs or longer sequences of letters in a message. These often are the result of common letter sequences in the plaintext enciphered with the same part of the key. Therefore the distance between such repeated sequences will be a multiple of the period, and the period can be discovered by factoring the distances.

       The Kasiski method is defeated by making the key stream longer than any possible message. For example, a length of 10^7 bytes should be enough, unless you are transmitting bulk data files.

       The second method is the Index of Coincidence, due to William F. Friedman (The Index of Coincidence and Its Applications in Cryptography, 1922). If you match up sections of two messages, and they were enciphered with two different portions of the key stream, then two corresponding characters would be equal with a probability of 1/256. If the sections were enciphered with the same part of the key, then corresponding characters would be equal with a probability more like 1/10 to 1/20, depending on language, use of mixed case, use of blanks and punctuation, etc. In a section of 1000 characters, nonmatching keys will result in about 4 equal characters, but matching keys would yield 50 to 100 equal characters. This clear dichotomy means that you can distinguish matching keys with great accuracy by comparing as little as 100 bytes of each message.
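       The test itself is tiny; here is a minimal sketch in Python.

    def coincidence_rate(a: bytes, b: bytes) -> float:
        # Fraction of corresponding positions holding equal bytes.
        n = min(len(a), len(b))
        return sum(x == y for x, y in zip(a, b)) / n

Rates near 1/256 (about 0.4%) indicate different portions of the key stream; rates of 5% or more indicate the same portion.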

       With N messages of average length M, there are about N²/2 pairs of messages that could potentially overlap. Each pair could overlap in about 2M relative positions, so there are about N²M different overlaps to test. Suppose that each test takes a fixed time to perform, and you estimate that an enemy could make 1,000,000 such tests per second. If you believe that an enemy would be willing to invest one year of computer time to break this cipher, then N²M could be as large as 3.15x10^13. The expected number of genuine overlaps is roughly N²M divided by the length of the key stream, so you can assure a 99% probability that no overlaps will be found by making the length of the key stream at least 100 times N²M, or about 3.2x10^15.

       This will not prevent messages from ever being enciphered with overlapping portions of the key stream, but it will make such overlaps very costly for an enemy to detect.

       In addition to these well-known methods, you need to consider unknown and unpublished methods. That is, you should examine the proposed cipher as though you were an enemy cryptanalyst, and look for possible attacks. In this case, I was able to devise such an attack as I was revising this paper. I call it the trigram filter. Possibly it is original.

       Suppose you have two sections of plaintext, and you choose trigrams, that is, 3 consecutive characters, in corresponding positions. There is about 1 chance in 1,000 that the two trigrams will be equal, again depending on language, subject matter, etc. Now if the two sections of text were enciphered with the same section of the key stream, then whenever the 2 plaintext trigrams are equal, their corresponding 3 bytes of ciphertext will also be equal. This happens about 1 time in 1,000, so if the two sections of text are at least 1,000 bytes long there is about a 63% chance they will contain at least one equal trigram, since 1 - (1 - 1/1000)^1000 is about 0.63. This jumps to 86% for 2,000 bytes, 95% for 3,000 bytes, and so forth.

       On the other hand, if the two sections of text are not enciphered with the same section of the key, then any two corresponding ciphertext trigrams will be equal with probability 2^-24. The two sections would contain equal trigrams with probability 0.006% if they are 1,000 characters long, or 0.012% for 2,000 characters.

       To exploit this, an enemy could collect a large number of ciphertext messages. All of the trigrams could be tagged with their corresponding message numbers and collected in a data file. Then the file would be sorted. Whenever a set of equal trigrams is found, the corresponding set of messages would be tested with the index of coincidence. Instead of testing every possible pair of messages, this filters the pairings down to the few candidates that might actually match.
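       A sketch of the filter in Python, using a hash table in place of the sort file (the filtering effect is the same):

    from collections import defaultdict

    def trigram_filter(messages: list) -> set:
        # Index every ciphertext trigram by (message, position). Equal
        # trigrams in two different messages are candidate overlaps;
        # the difference in positions gives the alignment to verify
        # with the index of coincidence.
        index = defaultdict(list)
        for m, text in enumerate(messages):
            for i in range(len(text) - 2):
                index[text[i:i+3]].append((m, i))
        candidates = set()
        for occurrences in index.values():
            for j, (m1, i1) in enumerate(occurrences):
                for m2, i2 in occurrences[j+1:]:
                    if m1 != m2:
                        candidates.add((m1, i1, m2, i2))
        return candidates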

       The limiting factor in this case would be the storage. Suppose you estimate that an enemy could devote 10^11 bytes of storage to this enterprise. For simplicity, assume this consists solely of 1,000-byte messages. Now each trigram will occupy 6 bytes in the sort file, for the trigram and its message number tag. So each message will occupy 1,000 bytes in the message file and 6,000 bytes in the sort file, or 7,000 bytes altogether. This means an enemy can accommodate about 14,000,000 such messages. Typically you need at least 2 copies of the data during sorting, but let's assume that isn't necessary here.

       You can assure that there is a 99% probability that no overlaps will occur among the 14,000,000 messages if you make the key stream at least 2.0x10^19 bytes long. Again, this does not prevent overlaps from ever occurring, but it means an enemy would have to devote significant resources to finding such an overlap.

       The discussion of the various depth attacks gives an estimate of how long the key sequence must be. These figures are only intelligent guesses, since you really cannot know how much computer time or storage an enemy might be willing to commit to the task of breaking the cipher. On the one hand, you might not insist on a 99% chance of no overlaps, or you might never send 14,000,000 messages with the same S, T and U substitutions, so you would need far less than 2x10^19. On the other hand, an enemy might have more computer power or storage capacity than you anticipated, and 2x10^19 might be insufficient. In any case, it gives us a ballpark figure.

       Remember that we have assumed the enemy has some means of distinguishing messages sent with the same set of substitutions, which might not be the case. Also, note that finding two overlapping messages won't be enough to mount a depth attack. In fact, 10 overlapping messages may not be sufficient. So you really have a pretty good safety margin when you make the key stream length 2x10^19.


PSEUDO-RANDOM NUMBERS

       At this point you are faced with the problem of how to achieve this huge key stream. It is plainly infeasible for a pair of correspondents to generate, transmit and store a sequence of 2x10^19 bytes. The solution is to use a sequence that is generated by some mathematical algorithm, so sender and receiver can generate any section of the key stream they need independently. There are several algorithms that generate such sequences with statistical properties that approximate random numbers. They are therefore called pseudo-random numbers.

       In most cases, the mathematical algorithm requires storing only a few constants, and the key is merely a number giving the starting point in the sequence. However, achieving a period on the order of 2x10^19 requires a modulus larger than the word size of most computers. This makes multiplicative-congruential and linear-congruential pseudo-random generators impractical. Combining several such generators is costly and opens the possibility of a divide-and-conquer attack, such as [3].

       For a hardware implementation, you can achieve periods this long with linear feedback shift registers. These have good randomness properties, and are well-suited to this purpose. However, 8 cycles of the register are needed to produce the 8 bits for enciphering each character of the message, so the register must run through at least 8 times 2x10^19, or 1.6x10^20, cycles; a register of 68 or more bits will suffice.
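       For illustration, a 68-bit Fibonacci-style register might be sketched in software as follows. The tap positions here are arbitrary placeholders; a real design must use taps corresponding to a primitive polynomial so that the period reaches 2^68 - 1.

    def lfsr_keystream(state: int, nbits: int = 68, taps=(68, 9)):
        # Each cycle shifts out one bit; eight cycles yield one key byte.
        mask = (1 << nbits) - 1
        state &= mask
        assert state, "initial state must be nonzero"
        while True:
            byte = 0
            for _ in range(8):
                out = state & 1                  # bit shifted out
                fb = 0
                for t in taps:                   # feedback from the taps
                    fb ^= (state >> (t - 1)) & 1
                state = (state >> 1) | (fb << (nbits - 1))
                byte = (byte << 1) | out
            yield byte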

       Ideally, you want a method that can be implemented easily in either hardware or software, on machines with various word sizes. One such method is chained addition. It is fast, requires little storage, and can have an arbitrarily long period and an arbitrarily large key. For example, using a table of 80 single-byte numbers, the generator
 
K_i = K_{i-79} + K_{i-80}   (mod 256)

has a period of (2^80 - 1) x 2^7, or about 1.55x10^26, provided that the 80 seed numbers K_1 through K_80 are not all even. This long period leaves a large safety margin as computers become faster in the future, or in case an enemy uses multiple computers. This sequence generates each of the values 0 to 255 with almost exactly the same frequency.
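       A sketch of this generator in Python; the seeding interface is my own assumption.

    from collections import deque

    def chained_addition_keystream(seed: list):
        # Chained addition (lagged-Fibonacci) generator:
        #     K_i = K_{i-79} + K_{i-80}   (mod 256)
        # The 80 seed bytes are the key. They must not all be even,
        # or the low-order bit of every output would be stuck at zero.
        assert len(seed) == 80 and any(k & 1 for k in seed)
        window = deque(seed, maxlen=80)
        while True:
            k = (window[1] + window[0]) % 256   # K_{i-79} + K_{i-80}
            window.append(k)                    # appending drops K_{i-80}
            yield k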

       Actually, the situation is even better than that. Any given initial set of 80 bytes, not all even, will generate a sequence of about 2^87 values. However, there are 2^553, or about 2.95x10^166, different sequences. So the chance of any two messages being enciphered with the same part of the same sequence is infinitesimal, unless the same key is used.

       The concept of chained addition is used in a hobbyist cipher called the Gromark cipher [2,13]. Gromark stands for Gronsfeld with Mixed Alphabet and Running Key. The Gronsfeld cipher is a degenerate form of the Vigenère cipher, where the key values Ki are restricted to the decimal digits 0 to 9. Mixed Alphabet, in our terms, is the U substitution, and Running Key refers to the method of generating the digits by chained addition. The Gromark typically uses a 5-digit decimal key, compared to the 80-byte key used above, and does not have an S or T substitution. So, in a sense, the proposed cipher is the Gromark cipher scaled up from hobbyist size to practical size. Note that the T substitution foils Blackman's attack [13].

       An advantage of using a chained addition generator is that it has a large internal state. Multiplicative- and linear-congruential generators have an internal state consisting of a single integer. Learn that integer, and the entire sequence can be generated. By contrast, the chained addition generator has an internal state of 80 integers. All 80 must be known to generate the sequence. If fewer than 75 are known, then exhaustive trial of all possible sequences becomes impractical.

       It might be instructive to compare the new cipher to the German Enigma cipher of World War II fame. The rotors of the Enigma machine play the role of the substitutions S, T and U in our cipher. Using 3 rotors from an 8-rotor set gives only 336 different substitutions. The turning of the rotors generates the key stream. Since each rotor has 26 possible positions, the length of the key stream is 26^3, or 17,576. By comparison, the proposed cipher has at least 2.8x10^14 different substitutions, and a key stream of length 1.55x10^26.

       An important feature of the proposed cipher is that its security can be increased to any arbitrary level without making the cipher any slower or using materially more storage. For example, the three substitutions S, T and U could be determined by 32-bit keys, and the additive pseudo-random number generator could be increased from 80 to 96 stages, which would make the key stream slightly over 10^31 bytes long, with no effect on speed, and only a 16-byte increase in storage.


CONCLUSIONS

       Sometimes classical cryptographic methods and hobbyist ciphers can be adapted for practical use, offering the same security as far more complex and costly ciphers. This paper shows the steps needed to adapt the classical polyalphabetic cipher and the hobbyist Gromark cipher for modern computer use. It also shows some of the analysis needed to determine how long to make the key stream.

       In the cipher that is developed, security is provided through three simple means: known-plaintext attacks and electronic attacks are prevented by performing substitutions on the plaintext, key and ciphertext characters; depth attacks, such as index of coincidence, are prevented by using a very long key stream; and divide-and-conquer attacks are prevented by using a single pseudo-random number generator instead of combining several.

       The importance of taking a belt-and-suspenders approach is stressed. For high-security ciphers you cannot depend on just one obstacle to each potential threat. For example, a large number of substitutions is provided, yet the key stream is made so long that the cipher would remain secure even if all the messages were sent using the same substitutions.

       It is a general belief in the cryptographic community that "random numbers don't work." A secondary result of this paper is to show how random numbers can work when you make an adequate analysis to assure that you use them properly. The cipher proposed here is many times faster than the DES or any of the public-key cryptographic methods, yet offers arbitrarily high security.


REFERENCES
1 David Kahn 1967. The Codebreakers. New York: Macmillan.
2 W. J. Hall 1969. The Gromark Cipher. The Cryptogram 35(2): 25.
3 Bryant Tuckerman 1970. A Study of the Vigenère-Vernam Single and Multiloop Enciphering Systems. IBM Research Report RC-2879.
4 Carl H. Meyer and Walter L. Tuchman 1972. Pseudorandom Codes can be Cracked. Electronic Design 23: 74-76.
5 James Reeds 1977. "Cracking" a Random Number Generator. Cryptologia 1(1): 20-26.
6 Vera S. Pless 1977. Encryption Schemes for Computer Confidentiality. IEEE Transactions on Computers 26: 1133-6.
7 Names withheld 1978. Encryption Challenge. Cryptologia 2(2): 168-171.
8 Frank Rubin 1978. Computer Methods for Decrypting Random Stream Ciphers. Cryptologia 2(3): 215-231.
9 Frank Rubin 1979. Solving a Cipher Based on Multiple Random Number Streams. Cryptologia 3(3): 155-7.
10 Frank Rubin 1981. Decrypting a Stream Cipher Based on J-K Flip-flops. Cryptologia 5(1): 51-57.
11 Frank Rubin 1987. Foiling an Exhaustive Key-Search Attack. Cryptologia 11(2): 102-7.
12 John M. Carroll and Lynda Robbins 1987. The Automated Cryptanalysis of Polyalphabetic Ciphers. Cryptologia 11(4): 193-205.
13 Deane R. Blackman 1989. The Gromark Cipher, and Some Relatives. Cryptologia 13(3): 273-282.
14 Ross J. Anderson 1991. Tree Functions and Cipher Systems. Cryptologia 15(3): 194-202.