Implementing Cisco Networking Solutions

Transmission Control Protocol (TCP)

"The single biggest problem with communication is the illusion that it has taken place."
- George Bernard Shaw

As discussed in the previous section, IP provides a connectionless service. There is no acknowledgement mechanism at the IP layer, and IP packets are routed independently at every hop from the source to the destination. It is therefore possible that some packets sent by the transmitting node are lost on the network due to errors, or are discarded by intermediate devices due to congestion. In the absence of a feedback mechanism, the receiving node would never receive the lost packets.

Further, if there are multiple paths on the network from the source to the destination, packets may take different paths depending upon the routing topology at a given time. This implies that packets can reach the receiving node out of sequence with respect to the order in which they were transmitted.

The TCP layer ensures that whatever was transmitted is correctly received. Its purpose is to present the application layer on the receiving host with a continuous stream of data, exactly as sent by the transmitting node, as though the two were connected by a direct wire. Since TCP provides this service to the application layer on top of the connectionless service of the IP layer, TCP is called a connection-oriented protocol.
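To see this byte-stream service from the application's point of view, the following is a minimal sketch using Python's standard socket module. The loopback address and port are illustrative values chosen for the example; the socket API hides the handshake, acknowledgements, and retransmissions performed by the TCP layer underneath.

```python
import socket

# Illustrative address and port for a local TCP exchange
HOST, PORT = "127.0.0.1", 5000

# Listening side
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)

# Connecting side: connect() triggers the SYN / SYN-ACK / ACK handshake
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((HOST, PORT))

conn, _ = server.accept()
client.sendall(b"hello over a reliable byte stream")
print(conn.recv(1024))   # data arrives complete and in order, or not at all

client.close()
conn.close()
server.close()
```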

A typical TCP segment is shown in Figure 8, where the different fields of the TCP header are shown along with their lengths in bits in parentheses. A brief description of the functions of the various fields follows (a header-parsing sketch after the list illustrates the field layout and sizes):

Figure 8: Transmission Control Protocol (TCP) segment structure
  • Source Port/Destination Port: As discussed in the earlier sections, the transport layer multiplexes various data connections over a single network layer. The source port and destination port fields are 16-bit identifiers used to distinguish the upper-layer applications. Some of the common TCP port numbers are shown in the following figure:
Figure 9: Common TCP Port Numbers
  • Sequence Number: This 32-bit field numbers the starting byte of the payload data in this TCP segment relative to the overall data stream being transmitted as a part of the TCP session.
  • Acknowledgement Number: This 32-bit field is a part of the feedback mechanism to the sender and is used to acknowledge how many bytes of the stream have been received successfully, and in sequence. The acknowledgement number identifies the next byte that the receiving node is expecting on this TCP session.
  • Data Offset: This 4-bit field is used to convey how far from the start of the TCP header the actual message starts. Hence, this value indicates the length of the TCP header in multiples of 32-bit words. The minimum value of this field is 5.
  • Reserved: These 3 bits are not used and are reserved for future use.
  • Control flags: There are 9 bits reserved in the TCP header for control flags, that is, 9 one-bit flags, as shown in Figure 10. Although these flags appear in a fixed order in the header, we describe them in a different order for ease of understanding:
Figure 10: TCP control Flags
  • SYN: This 1-bit flag is used to initiate a TCP connection during the three-way handshake process.
  • FIN: This 1-bit flag is used to signify that there is no more data to be sent on this TCP connection, and can be used to terminate the TCP session.
  • RST: This 1-bit flag is used to reset the connection, for example to reject a segment that does not belong to an existing session, and thereby keep the TCP state between the two hosts synchronized.
  • PSH: Push (PSH) is a 1-bit flag that tells the TCP receiver not to wait for the buffer to be full, but to send the data gathered so far to the upper layers.
  • ACK: This 1-bit flag is used to signify that the Acknowledgement field in the header is significant.
  • URG: Urgent (URG) is also a 1-bit flag; when set, it signifies that this segment contains urgent data and that the Urgent Pointer field defines the location of that data.
  • ECE: This 1-bit flag (ECN Echo) signals to the TCP peer that the host is capable of using Explicit Congestion Notification, or that a segment marked with Congestion Experienced in the ECN bits of the IP header was received. This flag is not a part of the original TCP specification, but was added by RFC 3168.
  • CWR: This is also a 1-bit flag added by RFC 3168. The Congestion Window Reduced (CWR) flag is set by the sending host to indicate that it received a TCP segment with the ECE flag set.
  • NS (1 bit): This 1-bit flag is defined by an experimental RFC 3540, with the primary intention that the sender can verify the correct behavior of the ECN receiver.
  • Window Size: This 16-bit field indicates the number of data octets, beginning with the one indicated in the acknowledgement field, that the sender of this segment is willing to accept. It is used to prevent buffer overruns at the receiving node.
  • Checksum: This 16-bit field is used for checking the integrity of the received TCP segment.
  • Urgent Pointer: The urgent pointer field is often set to zero and ignored, but in conjunction with the URG control flag, it can be used as a data offset to identify a subset of a message that requires priority processing.
  • Options: These are used to carry additional TCP options, such as the Maximum Segment Size (MSS) that the sender of the segment is willing to accept.
  • Padding: This field is used to pad the TCP header to make its length a multiple of 4 bytes, as the definition of the data offset field mandates that the TCP header length be a multiple of 4 bytes.
  • Data: This is the data being carried in the TCP segment and includes the application layer headers.
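To make the field layout concrete, the following is a minimal Python sketch that unpacks the fixed 20-byte TCP header from raw bytes using the field sizes listed above. The sample SYN segment at the end, along with its port, sequence, and window values, is purely illustrative.

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the fixed 20-byte TCP header (field sizes per the list above)."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent_ptr) = struct.unpack("!HHIIHHHH", segment[:20])

    data_offset = (offset_flags >> 12) & 0xF   # header length in 32-bit words
    flags = offset_flags & 0x01FF              # 9 flag bits: NS..FIN
    flag_names = ["NS", "CWR", "ECE", "URG", "ACK", "PSH", "RST", "SYN", "FIN"]
    set_flags = [name for i, name in enumerate(flag_names) if flags & (1 << (8 - i))]

    return {
        "src_port": src_port,
        "dst_port": dst_port,
        "sequence": seq,                  # 32-bit sequence number
        "acknowledgement": ack,           # 32-bit acknowledgement number
        "header_bytes": data_offset * 4,  # data offset in bytes (minimum 20)
        "flags": set_flags,
        "window": window,
        "checksum": hex(checksum),
        "urgent_pointer": urgent_ptr,
    }

# A hand-crafted SYN segment from port 54321 to port 80 (illustrative values):
# data offset 5 (no options), only the SYN flag set, window 65535
syn = struct.pack("!HHIIHHHH", 54321, 80, 1000, 0, (5 << 12) | 0x002, 65535, 0, 0)
print(parse_tcp_header(syn))
```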

Most of the traffic that we see on the internet today is TCP traffic. TCP ensures that application data is delivered from the source to the destination in the sequence in which it was transmitted, thus providing a connection-oriented service to the application. To this end, TCP uses acknowledgement and congestion control mechanisms built on the header fields described earlier. At a very high level, if segments arrive at the receiving TCP layer out of sequence, the TCP layer buffers those segments and waits for the missing ones, asking the source to resend the data if required. This buffering and re-sequencing consumes processing resources and adds delay at the receiver.
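As a rough illustration of this buffering, the following Python sketch releases data to the application only once it is contiguous, keyed by sequence number. It is a deliberate simplification: a real TCP receiver also generates acknowledgements, requests retransmission, and runs timers, and the sequence numbers and payloads here are made up for the example.

```python
def reassemble(expected_seq, segments):
    """Toy in-order delivery: buffer out-of-order segments and release
    only a contiguous byte stream, as a TCP receiver would."""
    buffered = {}        # seq -> payload, held until the gap before it is filled
    stream = b""
    for seq, payload in segments:
        buffered[seq] = payload
        while expected_seq in buffered:      # release only contiguous data
            data = buffered.pop(expected_seq)
            stream += data
            expected_seq += len(data)        # next expected byte (the ACK number)
    return stream, expected_seq

# Segments arriving out of order: bytes 5-9 arrive before bytes 0-4
segments = [(5, b"world"), (0, b"hello")]
print(reassemble(0, segments))               # (b'helloworld', 10)
```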

We live in a world where data and information are time sensitive and lose value if delivered late; consider finding the previous day's newspaper at your doorstep one morning. Similarly, certain types of traffic lose their value if delayed, typically voice and video traffic encapsulated in IP. Such traffic is time sensitive, and there is no point in providing acknowledgements and adding to the delay. Hence, this type of traffic is carried over the User Datagram Protocol (UDP), which is a connectionless protocol and does not use any retransmission mechanism. We will explore this further during our discussions on designing and implementing QoS.
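For contrast with the TCP sketch earlier, the following is a minimal UDP exchange using Python's standard socket module. There is no handshake and no retransmission: a datagram is delivered once or not at all. The loopback address, port, and payload are illustrative values.

```python
import socket

# Illustrative address and port for a local UDP exchange
HOST, PORT = "127.0.0.1", 5001

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind((HOST, PORT))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"time-sensitive sample, e.g. a voice packet", (HOST, PORT))

data, addr = receiver.recvfrom(1024)   # arrives once, or is simply lost
print(data, addr)

sender.close()
receiver.close()
```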