2. Background


2.1 Revisiting IP networks


The network situations that we will consider are typical scenarios for IP networks. We assume that the reader is familiar with IP networks; there are numerous books on data networks that contain in-depth descriptions of IP networks and the basic protocols. The examples that we consider use IPv4, but most of the material also applies, mutatis mutandis, to the corresponding IPv6 variants. There are however some exceptions, such as how packet fragmentation is handled. In our examples we will use the following network nodes: client, gateway, NAT gateway, and server. In our setups we assume that these nodes are connected to the network using a standard Ethernet network interface card (NIC). This implies that the maximum transmission unit (MTU) is 1500 bytes and that IP packets larger than 1500 bytes are fragmented into smaller packets at the IP interface level. The IP packets/fragments are transmitted and finally received at the destination, where the fragments are reassembled into a complete IP packet and delivered.

Normally the IP packets come from either the TCP layer or the UDP layer. The UDP layer is also referred to as the datagram layer, as it is used to transmit independent data packets. The UDP layer does not guarantee that data packets are received in the same order as they are transmitted, and in fact it has no mechanism to recover packets that are not delivered; all these control functions are left to the application layer. The TCP layer implements a reliable data transport service that deals with packets being out of order, lost, or received twice. It also avoids fragmentation problems by adjusting the size of the packets given to the IP layer so that these are not fragmented again. The IP layer knows to which protocol to pass the received data by looking at the protocol byte of the IP packet. Figure 2.1 illustrates how the IP layer delivers the received packets.



Figure 2.1: Data delivery decided by the value of the protocol byte in the IP header.
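As a small illustration of this demultiplexing, the following Python sketch reads the protocol byte, the 10th byte (offset 9) of the IPv4 header, and maps it to the protocol the payload is handed to. The header bytes are an arbitrary example; the protocol numbers come from the IANA protocol-number registry.

    # Minimal sketch: demultiplex on the IPv4 protocol byte.
    PROTOCOLS = {1: "ICMP", 6: "TCP", 17: "UDP", 50: "ESP", 51: "AH"}

    def deliver_to(ip_header: bytes) -> str:
        # The protocol byte is the 10th byte (offset 9) of the IPv4 header.
        return PROTOCOLS.get(ip_header[9], "unknown")

    header = bytes.fromhex("4500003c1c4640004006b1e6c0a80001c0a800c7")  # example IPv4 header
    assert deliver_to(header) == "TCP"  # protocol byte 0x06 = TCP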

It is illustrative to look at the complete data flow from the application down to the physical layer. The Internet protocol model differs from the ISO-OSI protocol model: the TCP/IP stack, which implements the Internet model, has fewer layers than the 7-layer ISO-OSI model. In this book we will mostly discuss the Transport and Network layers. We hardly discuss the Link layer, except for some specific impacts on the routing that is needed for some of the protocols which we will discuss. Readers who want to know more about the Internet protocol layers are referred to books on Internet and computer networks.






Figure 2.2: The ISO-OSI and Internet Protocol Models and the TCP/IP Stack.

The link and physical layers are very different for mobile networks. We will not consider such networks here, but many mobile networks support TCP/IP, so many aspects that we mention apply to mobile networks as well. In most operating systems that have a separation between user space and kernel space, the TCP/UDP layer and the layers below it belong to kernel space. This means that normal users can only use the available interfaces but cannot modify the software of the layers in kernel space. SSL and TLS functionality is often also part of kernel space, but packages like OpenSSL provide SSL/TLS support in user space. Since IPsec is located at the IP layer, IPsec support is more of a system issue than SSL/TLS, which can be handled by the user. We come back later to how IPsec can be realized in the protocol stack.



2.2 Cryptographic algorithms


2.2.1 General aspects

The secure connections that we will study have several common characteristics. First, one makes a distinction between the establishment of the connection and the transmission of the actual payload. When the payload is transmitted, its data can be confidentiality protected, integrity protected, or both, depending on what features are supported by the protocol. Confidentiality protection is achieved by encrypting the data, and integrity protection is achieved by using a cryptographic checksum computed with a cryptographic hash function. The cryptographic checksum is referred to as a Message Authentication Code (MAC) and, like the encryption, its computation depends on a key. MAC computation and encryption are two cryptographic primitives from a wide range of cryptographic primitives; Figure 2.3 shows an overview with several primitives that are relevant in our setting. Most protocols follow the good practice of using different keys for MACing and encryption. During the establishment phase the keys for MACing and encryption are computed. Besides this key derivation process, one often authenticates one or both sides of the communication channel.





Figure 2.3: Characterization of cryptographic primitives.


2.2.2 Symmetric key encryption

Encryption transforms data into other data. The transformation is controlled by a specific input called the key. One speaks of a good encryption algorithm if, without knowledge of the key value, it is practically infeasible to recover the input data from the output data. One can make the notion of a good algorithm more precise, and the construction and study of encryption algorithms is an important subject in the area of cryptology. Furthermore, the encryption transformation has an inverse: a decryption transformation that, given the key value, transforms the encrypted data back into its original form (also called the plaintext). One distinguishes between symmetric encryption and asymmetric encryption. We have symmetric encryption when the key for encryption and decryption is the same. Asymmetric encryption refers to the case where the encryption transformation is controlled by one key and the decryption by another, related key, such that the decryption key cannot (with realistic effort) be derived from the key used for encryption. One refers to the two related keys in such an asymmetric setup as a key pair, to the encryption key as the public key, and to the decryption key as the private key. The private key is kept secret, and hence in such an asymmetric encryption setup the public key can be given to everybody: it cannot be used to compute the private key, and only the owner of the private key can decrypt the data that has been encrypted. Symmetric encryption and decryption are usually fast, which means they fit the need of handling large payloads. Asymmetric encryption could in principle also be used here, but since all known asymmetric algorithms are computationally very demanding, they are rarely used for this in practice.
Besides the distinction between asymmetric and symmetric algorithms, one distinguishes between stream ciphers and block ciphers. Many security protocols use block ciphers: the plaintext symbols are divided into blocks, and these blocks of symbols are transformed into cipher blocks, which are concatenated into the complete ciphertext. Most block ciphers are operated in the so-called Cipher Block Chaining (CBC) mode. In this mode each plaintext block is first added (XORed) to the previous ciphertext block before encryption takes place. Figure 2.4 shows the encryption and decryption processes for a sequence of plaintext blocks P0, P1, ....




Figure 2.4: Principle of CBC Encryption and Decryption.
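A minimal sketch of CBC encryption and decryption in Python, assuming the cryptography package is available (the key, IV, and message are arbitrary examples):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)                  # 128-bit AES key
    iv = os.urandom(16)                   # initialization vector, plays the role of C_{-1}
    plaintext = b"sixteen byte blk" * 2   # length is a multiple of the 16-byte block size

    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()

    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    assert decryptor.update(ciphertext) + decryptor.finalize() == plaintext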

A particular problem that often arises when using block ciphers is that the cleartext has a total length that is not a multiple of the block length of the cipher. To cope with this, protocols like TLS and IPsec use a technique called padding, where the cleartext is expanded so that its length becomes a multiple of the block length. The padding symbols are discarded at the receiving side. Padding is simple when the original length of the cleartext is known at the receiving side, e.g. because it is explicitly transmitted. There also exist padding techniques where the message length is handled implicitly. PKCS #5 [4] and RFC 1423 [7] define often used padding mechanisms. For example, if the block length is 8 and one applies PKCS #5, one always pads the message M with a padding Q to obtain the plaintext P = M || Q. If L = length(M) mod 8 then, because PKCS #5 is a padding scheme in which one always pads, Q is determined as follows:

if L == 0, Q = 08 08 08 08 08 08 08 08
if L == 1, Q = 07 07 07 07 07 07 07
if L == 2, Q = 06 06 06 06 06 06
if L == 3, Q = 05 05 05 05 05
if L == 4, Q = 04 04 04 04
if L == 5, Q = 03 03 03
if L == 6, Q = 02 02
if L == 7, Q= 01
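The following Python sketch implements this always-pad rule (the function names are ours):

    def pkcs5_pad(msg: bytes, block: int = 8) -> bytes:
        # PKCS #5 always pads: a message that already fills a block
        # gets a full extra block of padding (the L == 0 case above).
        n = block - (len(msg) % block)
        return msg + bytes([n]) * n

    def pkcs5_unpad(padded: bytes) -> bytes:
        # The value of the last byte says how many padding bytes to drop.
        return padded[:-padded[-1]]

    assert pkcs5_unpad(pkcs5_pad(b"hello")) == b"hello"
    assert len(pkcs5_pad(b"12345678")) == 16   # full block of padding added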

IPsec also requires padding because of the requirement that the data be 32-bit word aligned. The padding in IPsec consists of 0 to 255 bytes, and the length of the padding is transmitted explicitly, see RFC 4303 [8].

Stream ciphers operate on the basis of an autonomous finite state machine that is loaded with the ciphering key and that produces a stream of symbols which are combined, symbol by symbol, with the plaintext symbols. The combining operation is often the exclusive-or (XOR) function, which is an involution: encrypting twice gives the original (plain) text back, see Figure 2.5. Stream ciphers are often used in wireless systems, e.g. GSM, UMTS, and Bluetooth 1.1, but otherwise this type of algorithm is less frequently used in internet protocols.




Figure 2.5: Principle of a stream cipher.
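A toy illustration of this principle in Python; the keystream generator here is a stand-in built from a hash of a counter, not a real stream cipher:

    from hashlib import sha256
    from itertools import count

    def keystream(key: bytes):
        # Toy autonomous state machine: hash a counter together with the key.
        for i in count():
            yield from sha256(key + i.to_bytes(8, "big")).digest()

    def stream_cipher(key: bytes, data: bytes) -> bytes:
        return bytes(d ^ k for d, k in zip(data, keystream(key)))

    msg = b"attack at dawn"
    ct = stream_cipher(b"key", msg)
    assert stream_cipher(b"key", ct) == msg  # XOR combining: encrypting twice restores the text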


2.2.3 Hash and MAC functions


Hash functions are cryptographic transformations that map an arbitrary-length input string x to a string y of fixed length, with the properties that

1. given x it is easy to compute y = h(x);
2. given y it is hard to compute an x such that y = h(x);
3. given x and y such that y = h(x), it is hard to find an x' ≠ x such that h(x) = h(x').

Condition 3 is referred to as the weak-collision property. Sometimes one wants a slightly stronger condition to be satisfied:

3'. it is hard to find any pair x ≠ x' such that h(x) = h(x').

Hash functions are used in many protocols and serve as building blocks in various more complex functions. Most hash functions have an iterative structure in which blocks of fixed length are processed. The length of these blocks is referred to as the block size of the hash function.

One important usage of hash functions is in digital signatures, where they are used to map the message to the fixed-length string that is then further processed. Another use is to build MACs from hash functions. Finally, hash functions are used as pseudo-random functions (PRFs) in many protocols to derive (session) keys from other keys.
MAC functions are very similar to hash functions and should satisfy the same properties. However, MAC functions also use a key as an input. In fact, hash functions can be used to construct MAC functions. One example is the HMAC construction that, given any hash function h, constructs the MAC

MAC(key, x) = HMAC(key, x) = h((key ⊕ opad) || h((key ⊕ ipad) || x))


where opad and ipad are predefined strings, opad = 0x5c5c5c...5c5c and ipad = 0x363636...3636, each with a length equal to the block size of the hash function.
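A sketch of this construction in Python with SHA-256 as the underlying hash, checked against Python's built-in hmac module:

    import hashlib
    import hmac

    def hmac_sha256(key: bytes, msg: bytes) -> bytes:
        block = 64                              # SHA-256 block size in bytes
        if len(key) > block:
            key = hashlib.sha256(key).digest()  # keys longer than a block are hashed first
        key = key.ljust(block, b"\x00")         # then padded to the block size
        opad = bytes(b ^ 0x5C for b in key)
        ipad = bytes(b ^ 0x36 for b in key)
        return hashlib.sha256(opad + hashlib.sha256(ipad + msg).digest()).digest()

    assert hmac_sha256(b"key", b"msg") == hmac.new(b"key", b"msg", hashlib.sha256).digest()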

In Table 2.1 the most frequently used hash functions and their block and output sizes are given.

Name      Block size (bits)   Output size (bits)   Reference
MD2       128                 128                  RFC 1319
MD5       512                 128                  RFC 1321
SHA-1     512                 160                  FIPS 180-2 Secure Hash Standard
SHA-256   512                 256                  FIPS 180-2 Secure Hash Standard


Table 2.1: Frequently used hash functions and their block and output sizes.
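These output sizes can be checked with Python's hashlib (MD2 is not available in modern hashlib builds):

    import hashlib

    for name, bits in [("md5", 128), ("sha1", 160), ("sha256", 256)]:
        digest = hashlib.new(name, b"example").digest()
        assert len(digest) * 8 == bits  # output size from Table 2.1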

MD5, MD2, and SHA-1 are considered to be broken. SHA-256 is to be used instead of SHA-1 until a new hash algorithm is standardized. In practice, MD5 and SHA-1 are still frequently used, as existing server infrastructure only slowly adapts to these recommendations.
We refer to [2] for more details on hash and MAC functions.


2.2.4 Asymmetric encryption

In an asymmetric cryptographic system there is a public key and a private (secret) key. The public key is, as the word indicates, a key that is (in principle) public. For that reason one refers to these cryptosystems as public-key cryptosystems. Several public-key cryptosystems are known. In our setting two systems are important: the RSA encryption scheme and the Diffie-Hellman (DH) key exchange system. Both use arithmetic with very large numbers, which puts a considerable burden on the processing device. RSA is based on the difficulty of factoring large numbers, and DH is based on the so-called discrete log(arithm) problem. The latter is the problem of computing x from y = g^x mod p: knowing only y, g, and the prime number p, no efficient algorithm to recover x is known, except for some special instances. We refer the interested reader to [2] for more details on these public-key cryptosystems.
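The Python sketch below shows the DH principle with toy parameters; a real exchange uses standardized groups with primes of 2048 bits or more:

    import secrets

    p = 2**127 - 1   # a (Mersenne) prime, far too small for real use
    g = 3

    a = secrets.randbelow(p - 2) + 2   # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 2   # Bob's secret exponent

    A = pow(g, a, p)   # Alice sends A = g^a mod p
    B = pow(g, b, p)   # Bob sends B = g^b mod p

    # Both sides derive the same shared secret g^(ab) mod p; an eavesdropper
    # who sees only A and B faces the discrete logarithm problem.
    assert pow(B, a, p) == pow(A, b, p)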
The RSA system is important because it is used, for example, in TLS in the key negotiation process. Furthermore, RSA encryption and decryption play an important role in digital signatures. Digital signatures are crucial to attest the integrity and ownership of public keys. In practice a public key that belongs to a public and private key pair is usually associated with an owner of the key pair. Without being able to verify that a public key indeed belongs to a claimed owner, it becomes hard to make use of public-key cryptography. Technically this is solved by issuing a digital certificate, which consists of a public key, an identifier for the key owner, an identifier for the certificate issuer, and a signature over the mentioned certificate data. The certificate issuer uses its own private key for creating the signature of the certificate, and the issuer's public key is somehow made available to all users. In practice the latter is often done by issuing a certificate that contains the issuer's public key and that is signed using the issuer's own private key. Such a certificate can be verified using the public key it itself contains, and for this reason such certificates are also called self-signed certificates. The self-signed certificate of the issuer is also referred to as the root certificate, and the issuer itself is called the Certificate Authority or just CA. This setup allows us to verify public keys as long as the certificate the public key is distributed with is signed by an issuer that, via signatures, can be traced to a root key of a known CA. Figure 2.6 illustrates a self-signed certificate that is used to verify the public key in another certificate.




Figure 2.6: Illustration of a self-signed certificate that can be used to verify a second certificate.

The most common certificate types are
  1. self-signed or so-called root certificates
  2. server certificates
  3. user certificates
Root certificates are usually distributed in a manual or out-of-band fashion to create an infrastructure of trusted root (public) keys that can be used to verify certificates. Server certificates are used by servers to prove that they are indeed the correct server one wants to connect to. User certificates are certificates with a public key that are kept together with the private key that goes along with that public key. Server and user certificates are very similar in the way they are generated: for both one generates the public-private key pair, and the public key is signed by a certificate issuer. The difference lies in how the certificates are used: server certificates are used to support verification and generation of signatures or wrapping (encryption) of keys or nonces, whereas user certificates are usually used for non-repudiation (digital signing of documents). One sometimes uses the term client certificate, especially in connection with TLS and SSL. In that case one is referring to the blob that contains a certificate and its corresponding private key. An example of the latter is a PKCS #12 file, see [6].
A common format used for implementing certificates is X.509, with the profile given in RFC 2459: Internet X.509 Public Key Infrastructure. Besides X.509 there are other formats and encodings, for example PEM. A simple freeware tool to generate X.509 certificates is OpenSSL. Via its command line interface one can generate public and private key pairs and subsequently generate certificates of different types, as the scripts in the following figures show.
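As an aside, the same kind of self-signed CA certificate can also be produced programmatically. The sketch below uses the Python cryptography package; the subject name "fuga", the validity dates, and the output file name are examples chosen to match the scripts in the figures:

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"fuga")])

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                     # subject == issuer: self-signed
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime(2008, 10, 15))
        .not_valid_after(datetime.datetime(2010, 10, 15))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())            # signed with its own private key
    )

    with open("CAcert.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))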
To generate a CA (self-signed) certificate with OpenSSL one can use the script in Figure 2.7:




Figure 2.7: Script file to generate self-signed CA certificate (see Appendix for the openssl.cnf configuration file).

Next we generate a server certificate and the corresponding private key. The certificate is signed by the CA. The script in Figure 2.8 is used for this purpose.




Figure 2.8: Script file to generate server certificate, private key and have the CA as issuer. The server is called Fuga.

The FugaCert.pem file will later be used in our SSL/TLS example. In Figure 2.9 we give a listing of the Fuga certificate using the capabilities of the x509 command in openssl.




Figure 2.9: The Fuga server certificate signed by CA=fuga.

This certificate is set to be used between the validity limits Not Before: Oct 15 15:26:47 2008 GMT and Not After: Oct 15 15:26:47 2010 GMT. It has a basic constraint: X509v3 Basic Constraints: CA:FALSE.
This means that the certificate is not a CA certificate and thus cannot be used to sign other certificates in a chain of certificate verifications. The CAcert.pem certificate has CA:TRUE here. The X509v3 Authority Key Identifier: A3:EE:94:F9:05:9F:25:F4:F4:C1:E2:6F:F8:8D:5C:3A:7A:52:9B:28 is the key identifier of the signer's certificate, in this case the CA certificate.

The private key corresponding to the Fuga certificate should be protected. A standard procedure is to use password-based encryption, which results in the following FugaKey.pem file:



Figure 2.10: The password protected private key file for Fuga.


2.3 Security protocol principles


The purpose of a security protocol between two end points is usually to ensure that the data that is exchanged is protected against malicious manipulation by attackers. What type of protection a security protocol can provide is one of its main characteristics. The most common security protocols, like IPsec, SSL/TLS, and SSH, provide message confidentiality through data encryption and message integrity through cryptographic integrity protection. It is common for a security protocol to allow its users to choose among a given set of protection features using different types of cryptographic mechanisms for data encryption and data integrity. Protocols normally use symmetric key cryptographic algorithms for data confidentiality and data integrity: the computational complexity of these algorithms is smaller than that of asymmetric key algorithms, and therefore they are more suitable for operating on large amounts of data. However, symmetric key algorithms require that the end points share the same secret key. For that reason many security protocols have built-in mechanisms to establish a secret key between the end points. Alternatively, the protocol assumes that the end points already have a shared secret key, and implementations have an interface through which the shared key can be entered into the protocol engine. One can say that the former protocol types are complete and the latter protocol types are modular, as they can be combined quite freely with other key establishment mechanisms. IPsec is an example of the latter type and TLS is an example of the former type.
The shared secret key can, in principle, be used directly in an encryption mechanism such as AES-CBC and in a MAC mechanism such as HMAC based on SHA-1. But one should be careful here:
  • What is the lifetime of the key?
  • What happens if either the encryption or the MAC algorithm is broken?
When the key is used for a long period of time, the amount of clear/cipher text material becomes large, which may help an attacker. A weakness in the encryption algorithm could allow an attacker to find the secret key and thus also to compromise the data integrity protection. To mitigate these effects, protocols often do not use the shared key directly. Instead two session keys, one for ciphering and the other for MACing, are derived from the shared secret. On top of that, the lifetime of session keys can be constrained, which implies that a protocol can replace session keys with new fresh ones.
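A generic Python sketch of deriving separate session keys from one shared secret, using HMAC as a PRF with distinct labels; this shows the general pattern, not the exact key derivation of any particular protocol:

    import hashlib
    import hmac

    def derive_key(shared_secret: bytes, label: bytes, length: int = 16) -> bytes:
        # HMAC(shared_secret, label) used as a pseudo-random function (PRF).
        return hmac.new(shared_secret, label, hashlib.sha256).digest()[:length]

    shared = b"negotiated shared secret"
    enc_key = derive_key(shared, b"encryption")   # session key for ciphering
    mac_key = derive_key(shared, b"mac")          # session key for MACing
    assert enc_key != mac_key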


2.4 Key establishment principles

Key establishment mechanisms are important in many protocols. One distinguishes pre-shared keys, key transport (sending a key), and Diffie-Hellman based establishment. The purpose of a key establishment operation is to provide the two end points with a shared secret key.



