An Old Mystery Solved: Project C-43 and Public Key Encryption

by Steve Wildstrom   |   June 13th, 2013

For most of history, it was believed that the only way to encrypt a message was for the sender and the receiver to share the secret of scrambling and unscrambling the text. That view changed sharply in 1976, when Stanford computer scientists Martin E. Hellman and Whitfield Diffie published a paper called “New Directions in Cryptography” that described what is now known as public key encryption (PKE). Two years later, Ron Rivest, Adi Shamir, and Len Adleman of MIT described a simpler method. When the web came along, the Diffie-Hellman and RSA algorithms became the bedrock of secure communications.

But PKE had an unknown pre-history. As early as the 1960s, James H. Ellis of GCHQ/CESG, the British equivalent of the National Security Agency’s Central Security Service, was experimenting with ideas about “non-secret encryption.” He described his work in a 1970 paper entitled “The Possibility of Secure Non-Secret Digital Encryption,” but it remained classified until 1997. In the 1970s, CESG researchers Clifford Cocks and Malcolm Williamson found ways to implement PKE, but this work, too, stayed secret for more than two decades. A 1999 Wired story by Steven Levy gives a detailed account of the British efforts. In his account, “The History of Non-Secret Encryption,” Ellis drops a fascinating hint of earlier work. Reflecting on the “obvious” impossibility of secret communications without a shared secret, he wrote:

The event which changed this view was the discovery of a wartime Bell Telephone report by an unknown author describing an ingenious idea for secure telephone speech… The relevant point is that the receiver needs no special position or knowledge to get secure speech. No key is provided; the interceptor can know all about the system; he can even be given the choice of two independent identical terminals. If the interceptor pretends to be the recipient, he does not receive; he only destroys the message for the recipient by his added noise. This is all obvious. The only point is that it provides a counter example to the obvious principle of paragraph 4. The reason was not far to seek. The difference between this and conventional encryption is that in this case the recipient takes part in the encryption process. Without this the original concept is still true. So the idea was born. Secure communication was, at least, theoretically possible if the recipient took part in the encipherment.1

Ellis refers to a document titled “Final Report on Project C-43” without any additional identifying information. For years, this passing reference has intrigued the cryptographic community with the possibility that Bell Labs researchers might have made important progress on public key encryption as early as the 1940s. It turns out that the mysterious Final Report exists and is available (if obscurely) online.23

Some background on Project C-43 is needed to make sense of this. The name refers to a wartime contract between Bell Labs and the National Defense Research Committee for work on systems for secret speech transmission. The goal was both to devise methods of secure communication for U.S. forces and, more urgently, to find ways to unscramble German and Japanese transmissions. (Because voice communications at the time were analog signals, the digital techniques used for encrypting text were not available; purely audio techniques had to be devised.) AT&T’s work ranged from theoretical projects at its West Street lab in Manhattan to running radio intercept stations in Holmdel, N.J., and Point Reyes, Calif.

Project C-43 ran in parallel to, but apparently with little or no contact with, a better known Bell Labs secret speech effort, Project X. This project, which produced a cumbersome but effective method for secure speech transmissions between fixed locations, is described in detail in the official history of Bell Labs.4 Bell researchers submitted regular progress reports on C-43 to NDRC, and at the end of the contract in 1944, Walter Koenig Jr., the engineer who headed the project, compiled these into a final report. (I don’t know why Ellis spoke of an “unknown author”; Koenig’s name appears on the title page.)

One obvious way to secure speech is to hide it with noise that can then be removed at the receiving end by a technique similar to what is used in today’s noise cancellation systems. But the approach is fraught with difficulties, not the least of which is securely transmitting to the recipient a copy of the noise that is to be subtracted. The Project X method required courier distribution of noise tracks on phonograph records. Because the noise had to be as long as the speech it masked and each track could only be used once–it was the audio equivalent of a Vernam cipher or a one-time pad–the system was exceedingly cumbersome. In the course of a discussion of masking methods, Koenig, almost as an aside, describes what seems to have been a thought experiment:

Another masking system is shown in figure 21, which uses only one line. In this system, noise is added to the line at the receiving end instead of at the sending end. Again, the noise can be perfectly random. Since the noise is generated at the receiving end, the process of cancellation can, theoretically, be made very exact. This system, however, cannot be used for radio at all because the level of the noise decreases with distance from the receiving station, while the level of the signal increases. The interceptor, therefore, will get good speech signals if he is close to the transmitter. With telephone lines this differential can be kept small.5

[Figure 21 from the C-43 final report]

This is what so intrigued Ellis. Alice could speak and transmit in the clear, while Bob would simultaneously inject noise into the same circuit. An adversary intercepting the conversation would hear only the masking noise. Bob, knowing the exact characteristics of the noise, could cancel it and retrieve the signal: encryption with no shared secret. Alas, the scheme proved unusable for the reasons stated in the report. Notably, while Project X desperately needed a solution to its key-distribution problem, its purpose was to secure long-range radiophone transmissions, initially between Washington and London, so a wireline-only scheme could not help.
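Koenig's one-line scheme is easy to simulate. The sketch below is my own toy model, not anything from the report, and it assumes an idealized lossless wire with no propagation delay: the sender's speech and the receiver's noise simply add on the line, an interceptor hears only the sum, and the receiver subtracts the noise it generated.

```python
import random

random.seed(1)

# Sender's speech, modeled as a short run of signal samples.
speech = [0.2, -0.5, 0.9, 0.1, -0.3]

# The receiver injects loud random noise into the same line.
noise = [random.uniform(-10.0, 10.0) for _ in speech]

# An interceptor tapping the line hears the speech buried in noise.
on_line = [s + n for s, n in zip(speech, noise)]

# The receiver knows its own noise exactly and cancels it.
recovered = [x - n for x, n in zip(on_line, noise)]

assert all(abs(r - s) < 1e-9 for r, s in zip(recovered, speech))
```

No key ever travels between the parties; the "secret" exists only at the receiving end. The report's radio caveat also falls out of the model: near the transmitter an interceptor receives the speech at high level and the distant noise at low level, so the masking fails.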

It’s safe to say that beyond the inspiration it gave Ellis, this early Bell Labs work did not contribute materially to the development of PKE. Other than the lack of a shared secret, the audio approach bears no resemblance to any public-key method; there is no concept of a public key involved. It remained for Ellis, Cocks, and Williamson, and then, independently, Hellman, Diffie, Rivest, Shamir, and Adleman to discover the mathematics that allow a piece of publicly shared information to be used for secure data communications.

  1. Ellis, J.H., “The History of Non-Secret Encryption,” p. 1
  2. The original document is probably in the archives of the Defense Technical Information Center at Fort Belvoir, Va. Preliminary and progress reports on C-43 are at the National Archives & Records Administration’s Archives II facility in College Park, Md., but the final report is not with them.
  3. Be patient with the download. It’s a 6 MB scanned PDF and can take a while to load.
  4. Fagen, M.D., ed., A History of Engineering and Science in the Bell System: National Service in War and Peace (1925-1975), Bell Telephone Laboratories, 1978, pp. 296-312.
  5. Koenig, “Final Report on Project C-43,” pp. 23-24.

Steve Wildstrom

Steve Wildstrom is a veteran technology reporter, writer, and analyst based in the Washington, D.C. area. He created and wrote BusinessWeek’s Technology & You column for 15 years. Since leaving BusinessWeek in the fall of 2009, he has written his own blog, Wildstrom on Tech, and has contributed to corporate blogs, including those of Cisco and AMD; he also consults for major technology companies.
  • Diego Aranha

    I disagree that the noise-based encryption system described does not require shared secrets, as the noise characteristics are exactly the shared secret and need to be transmitted through a secure channel.

    • Guest

      This is clarified in the article: the noise only needs to be introduced at the receiving end.

    • steve_wildstrom

The noise-based system as used in Project X definitely required a shared secret (and noise was only one of a number of methods used to provide secrecy). But the system described by Koenig did not: noise was injected only by the recipient, the sender needed no knowledge of the noise, and the obscured signal could be transmitted over an open channel (though, for the reasons cited, it required a wireline channel). Note that if the sender is listening on the line, he would (in theory) hear only the noise and would have no way of removing it. But this isn't important. Obviously, this system describes a one-way signal; a two-way conversation would require a second line, with the sender and receiver switching roles.

  • Guest

    In http://sqgroup.iwarp.com/Kelvin1687/SLSq%20Short%20ITS%20Bio%202011%20v1.pdf, Stephen Squires (ex-DARPA, ex-NSA) makes the following assertion: “Shortly after returning to NSA, he [Squires] developed a prototype of the first operational public key system using advanced computational complexity theory results and that was experimentally used on the internal NSA ARPAnet based system by the mid 1970s.”

It would be interesting to know how that work fits in with the early history of public key cryptography. The “advanced computational complexity results” don’t sound much like D-H or RSA (or like Ellis, Cocks, and Williamson).

    • steve_wildstrom

      I can’t quite grok what that means. Complexity theory is important in assessing the difficulty of the problem on which the trapdoor function of a PKE algorithm depends–factoring for RSA, discrete logarithms for D-H. But I don’t see what complexity theory by itself does for you.

Oddly enough, I got involved in this issue because of some work I am doing on a history of mathematics at Bell Labs, which started by looking at Ron Graham’s work in complexity theory in the 1960s and ’70s.

      • Stephen L Squires

        The fundamental theory and algorithms had already emerged from NSA Crypto Math before GCHQ and the later D-H. The role of computational complexity theory was to develop optimal algorithms. In this particular case, having optimal big number fast multiply was important. The high performance implementation of the algorithms including fast big number multiply on a Burroughs D-Machine was an accelerator for the PDP-10 configured as a server on the internal version of the ARPA-net. The result was a prototype public key system.

        • steve_wildstrom

          Do you have a citation for an NSA implementation of public key encryption prior to the publication of the Diffie-Hellman paper? The computational algorithms were necessary but nowhere close to sufficient.

          • Stephen L Squires

            The advanced algorithms we had were sufficient when used in the advanced system context that I had in my lab.

    • Stephen L Squires

The computational complexity theory results were for fast multiply. The fundamentals for public key had already been developed by NSA Crypto Mathematicians by the late 1960s. A micro and nano programmable computer had been invented by Burroughs as the D-machine in the late 1960s. Using the computational complexity results for fast multiply, a big number library was developed for the D-machine. In 1973 a D-machine was connected to a DEC PDP-10 in the NSA Computer Science laboratory. The system was used to prototype advanced crypto math algorithms using the D-machine big number library as an accelerator. The DEC PDP-10 with D-Machine accelerator was a node on the internal ARPAnet based network in NSA at Fort Meade. The result was a prototype public key system by 1973. With access to advanced mathematics, advanced computer science, advanced computer architectures and systems in the advanced research environment of NSA at the time — as a kind of “time machine” — it was easy. //SLSq

  • Tom Knight

    This is incredibly insecure. Sampling the noise at any two distinct points on the line allows one to distinguish the forward from the reverse signal, due to the finite speed of transmission. Once you have the reverse signal, subtracting it is trivial.
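Knight's two-tap attack can be sketched numerically. The simulation below is my own construction, assuming an idealized lossless line with integer sample delays: speech travels one way, the receiver's noise the other, so the two taps hear each component at different delays. Time-aligning the speech components and subtracting leaves a telescoping difference of the noise alone, which the interceptor unwinds and cancels.

```python
import random

random.seed(7)

T = 40          # number of time samples
L = 9           # one-way line delay, in samples
x1, x2 = 2, 5   # tap positions, as delay from the sender
d = x2 - x1

def at(a, i):
    """Sample a signal that is zero outside the simulated window."""
    return a[i] if 0 <= i < len(a) else 0.0

speech = [random.uniform(-1.0, 1.0) for _ in range(T)]   # sender's signal
noise = [random.uniform(-10.0, 10.0) for _ in range(T)]  # receiver's noise

# Each tap hears the speech delayed by its distance from the sender
# plus the noise delayed by its distance from the receiver.
o1 = [at(speech, t - x1) + at(noise, t - (L - x1)) for t in range(T)]
o2 = [at(speech, t - x2) + at(noise, t - (L - x2)) for t in range(T)]

# Align the speech components of the two taps and subtract; what is
# left is the noise minus itself at lag 2*d ...
D = [o2[t] - at(o1, t - d) for t in range(T)]

# ... which telescopes, reconstructing the noise as heard at tap 2.
m = [0.0] * T
for t in range(T):
    m[t] = D[t] + (m[t - 2 * d] if t - 2 * d >= 0 else 0.0)

# Subtracting the reconstructed noise leaves the interceptor with speech.
recovered = [o2[u + x2] - m[u + x2] for u in range(T - x2)]

assert all(abs(r - s) < 1e-9 for r, s in zip(recovered, speech))
```

The attack needs no knowledge of the noise itself, only two observation points and the line's propagation delay, which is why the finite speed of transmission is the fatal detail.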