by Arpit Kumar
06 Aug, 2023
7 minute read
Understanding TCP Protocol and Backpressure

Delving into the fundamentals of TCP (Transmission Control Protocol) and how it manages data flow, and exploring backpressure and its crucial role in keeping network communication stable by regulating the rate of data transmission

TCP (Transmission Control Protocol) is a reliable, connection-oriented protocol used in computer networks to ensure the reliable delivery of data between devices. It is an essential part of the TCP/IP protocol suite, which is the foundation of the internet.

Let’s understand in simple terms how TCP works. Sending data over TCP involves several steps.

Connection Establishment

  • Before data transmission can occur, a connection must be established between the sender and receiver. This process is known as the three-way handshake.
  • The client (initiator) sends a SYN (Synchronize) packet to the server, indicating its intention to establish a connection.
  • The server responds with a SYN-ACK (Synchronize-Acknowledge) packet, acknowledging the request and indicating its readiness to establish a connection.
  • Finally, the client sends an ACK (Acknowledgement) packet to confirm the server’s acknowledgment. At this point, the connection is established.
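In application code the handshake is invisible: the kernel performs the SYN/SYN-ACK/ACK exchange inside `connect()` and `accept()`. A minimal loopback sketch in Python (the port is chosen by the OS):

```python
import socket

# Server side: listen() tells the kernel to answer incoming SYNs
# with SYN-ACK on our behalf.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

# Client side: connect() sends the SYN and blocks until the
# three-way handshake completes (or fails).
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))

# accept() hands us a connection for which the handshake has
# already finished.
conn, addr = server.accept()
print("connection established with", addr)

conn.close()
client.close()
server.close()
```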

TCP connection

Data Transmission

  • After the connection is established, data transmission can begin. Data is broken down into smaller chunks known as segments.
  • Each segment is assigned a sequence number, allowing the receiver to reassemble the data in the correct order.
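A toy illustration (not real kernel code) of how a byte stream could be split into segments and tagged with sequence numbers so the receiver can reassemble them in order:

```python
# Toy maximum segment size; a real MSS is typically around 1460 bytes.
MSS = 4

def segmentize(data: bytes, initial_seq: int, mss: int = MSS):
    """Yield (sequence_number, payload) pairs for a byte stream."""
    for offset in range(0, len(data), mss):
        yield initial_seq + offset, data[offset:offset + mss]

segments = list(segmentize(b"hello world!", initial_seq=1000))
# Each segment's sequence number is the stream offset of its first
# byte, so sorting by sequence number restores the original order
# even if segments arrive shuffled.
reassembled = b"".join(payload for _, payload in sorted(segments))
print(segments)
print(reassembled)  # b'hello world!'
```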

Flow Control

  • TCP implements flow control to ensure that the sender does not overwhelm the receiver with data. It uses a sliding window mechanism to control the number of unacknowledged segments that can be sent at a time.
  • The receiver advertises its window size, indicating the amount of data it can currently accept.
  • The sender adjusts the rate of transmission based on the receiver’s window size, ensuring efficient data transfer.
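The sliding-window rule above can be sketched as a small calculation (a simplified model; real TCP also tracks a separate congestion window):

```python
def max_sendable(next_seq: int, last_acked: int, window: int) -> int:
    """Bytes the sender may still transmit before the next ACK arrives."""
    in_flight = next_seq - last_acked  # sent but not yet acknowledged
    return max(0, window - in_flight)

# The receiver advertised a 10-byte window; the sender has transmitted
# up to byte 107 and bytes through 100 have been acknowledged.
print(max_sendable(next_seq=107, last_acked=100, window=10))  # 3
# When an ACK covering byte 105 arrives, the window "slides" forward:
print(max_sendable(next_seq=107, last_acked=105, window=10))  # 8
```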


Acknowledgment and Retransmission

  • When the receiver successfully receives a segment, it sends an acknowledgment (ACK) back to the sender.
  • If the sender does not receive an ACK within a specified timeout period, it assumes the segment was lost and retransmits it.
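A heavily simplified sketch of the timeout-and-retransmit loop (the callbacks, timeout, and retry limit here are made up for illustration):

```python
def transmit(segment, send, wait_for_ack, timeout=1.0, max_retries=5):
    """Send a segment, retransmitting until it is acknowledged."""
    for attempt in range(max_retries):
        send(segment)
        if wait_for_ack(timeout):   # True once the ACK arrives in time
            return attempt + 1      # total transmissions needed
    raise TimeoutError("segment lost too many times; giving up")

# Simulate a lossy network that drops the first two transmissions.
acks = iter([False, False, True])
sent = []
attempts = transmit("seg-1", sent.append, lambda t: next(acks))
print(attempts)  # 3 transmissions before the ACK was received
```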

Connection Termination

  • When data transmission is complete, the connection needs to be terminated gracefully. This is achieved using a four-way handshake.
  • The client sends a FIN (Finish) packet to the server, indicating its desire to close the connection.
  • The server responds with an ACK to acknowledge the FIN packet.
  • The server then sends its own FIN packet to the client.

  • Finally, the client responds with an ACK to acknowledge the server’s FIN packet, and the connection is closed.
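In socket code the four-way close is again driven by the kernel. Calling `shutdown(SHUT_WR)` sends our FIN ("no more data from me") while still letting us read the peer's remaining data, a half-close. A loopback sketch:

```python
import socket

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

client = socket.socket()
client.connect(("127.0.0.1", server.getsockname()[1]))
conn, _ = server.accept()

client.sendall(b"last bytes")
client.shutdown(socket.SHUT_WR)   # client -> server: FIN

data = conn.recv(1024)            # b'last bytes'
eof = conn.recv(1024)             # b'': the client's FIN shows up as EOF
conn.close()                      # server -> client: FIN (and final ACK)
client.close()
server.close()
print(data, eof)
```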

TCP’s reliability comes from these mechanisms, which ensure that data is delivered accurately and in the correct order, with lost segments retransmitted and congestion kept in check. However, this reliability comes at the cost of higher overhead than unreliable protocols like UDP (User Datagram Protocol), which does not guarantee delivery. TCP is commonly used for web browsing, file transfer, email, and other scenarios where data integrity and reliability are crucial.

What & Why of TCP Backpressure

TCP backpressure is essential for maintaining the stability and efficiency of a network: it prevents a receiver from being overwhelmed with more data than it can handle. Backpressure occurs when a receiving node (e.g., a server) cannot process incoming data as fast as the sending node (e.g., a client) transmits it. This can happen for various reasons, such as limited processing capacity, full buffers, or temporary congestion in the network.

Here are the reasons why TCP backpressure is important:

  • Congestion Control: TCP backpressure helps prevent network congestion. If the receiver cannot keep up with the incoming data, it will start dropping packets. When TCP detects packet loss, it interprets it as a sign of network congestion and throttles back the sending rate. This mechanism ensures that the network’s capacity is not exceeded, and data is transmitted at a sustainable rate, preventing a domino effect of congestion across the network.
  • Avoiding Buffer Overflows: Backpressure ensures that the receiver’s buffers do not overflow. When data arrives faster than the receiver can process it, it might not be able to store it all in its buffers, leading to buffer overflow. Buffer overflow can result in data loss and potential application crashes. TCP backpressure regulates the flow of data, preventing this situation from occurring.
  • Fairness: TCP’s backpressure mechanism allows for fair sharing of the network’s resources among different connections. If a single connection tries to send data too quickly, it may dominate the available bandwidth, causing other connections to suffer from poor performance. Backpressure helps maintain fairness by slowing down aggressive senders and giving other connections a chance to utilize the network resources.
  • Quality of Service (QoS): In systems where certain applications or services have priority over others, backpressure helps enforce QoS policies. By throttling back the sending rate for specific connections or applications, critical services can be given the necessary bandwidth to function smoothly, even during periods of network congestion.
  • Stability: TCP backpressure contributes to the overall stability of the network. By preventing data overload at the receiver’s end, it avoids scenarios where the receiver gets overwhelmed and crashes or becomes unresponsive, leading to degraded user experience or service interruptions.


How TCP backpressure works

TCP backpressure is a feedback-driven mechanism in which the receiver regulates the flow of data by advertising its available buffer space to the sender. This lets TCP maintain efficient, reliable data transfer between the two endpoints, preventing data loss and preserving network stability. The overall scheme works as follows:

  • Sender’s Data Transmission: The sender continuously sends data packets to the receiver.
  • Receiver’s Window Size: The receiver maintains a receive buffer, which represents the available space to store incoming data. It advertises this buffer space as the “receive window size” in the TCP header of the acknowledgment (ACK) packets it sends back to the sender.
  • Window Adjustment: As the receiver processes and consumes data from the buffer, the available space in the buffer increases. It dynamically adjusts the advertised window size in the ACK packets to reflect this increased buffer space.
  • Flow Control: When the sender receives ACK packets with a reduced window size, it knows that the receiver’s buffer is filling up. The sender responds by slowing down its data transmission rate, limiting the amount of unacknowledged data in flight.
  • Resuming Data Transmission: As the receiver continues to process and free up space in its buffer, it advertises a larger window size in the ACK packets. The sender then increases its data transmission rate accordingly.

The cycle of window adjustment, flow control, and data transmission continues throughout the data transfer process, ensuring that the sender does not overwhelm the receiver with data.
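This behavior can be observed from user space. In the sketch below (the buffer sizes are assumptions, and the exact byte count is OS-dependent), the receiver never reads, so a non-blocking sender is eventually pushed back by the kernel once the receive window and local send buffer fill up:

```python
import socket

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

sender = socket.socket()
sender.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)
sender.connect(("127.0.0.1", server.getsockname()[1]))
receiver, _ = server.accept()

sender.setblocking(False)
total = 0
try:
    while True:
        # The receiver never calls recv(), so its buffer only fills.
        total += sender.send(b"x" * 1024)
except BlockingIOError:
    # Backpressure: the advertised window (plus the local send buffer)
    # is full, and the kernel refuses more data until the receiver
    # drains its buffer.
    print(f"backpressure hit after {total} bytes")

sender.close()
receiver.close()
server.close()
```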

Dissecting the TCP Header: How the Window Size Reaches the Sender

TCP header

The TCP (Transmission Control Protocol) header is a fundamental part of the TCP packet used for data transmission over the internet. The TCP header contains several fields that provide crucial information for reliable data delivery. Here’s an explanation of each component of the TCP header:

  • Source Port (16 bits): It identifies the port number of the sender application or process. The port number is a 16-bit integer ranging from 0 to 65535.
  • Destination Port (16 bits): It identifies the port number of the receiver application or process, indicating the destination of the TCP packet. Like the source port, it is a 16-bit integer.
  • Sequence Number (32 bits): It is a 32-bit number that represents the sequence number of the first data octet in the current TCP segment. Sequence numbers are used to reassemble the data correctly at the receiving end.
  • Acknowledgment Number (32 bits): When the ACK flag is set, this field holds the next sequence number the receiver expects, implicitly acknowledging receipt of all data up to (but not including) that number.
  • Data Offset (4 bits): This field specifies the size of the TCP header in 32-bit words. Since the TCP header length can vary due to optional fields, this field is used to determine where the data starts in the TCP segment.
  • Reserved (3 bits): These bits are reserved for future use and should be set to zero.

Control Flags (9 bits): The control flags (also known as TCP flags or control bits) are used to indicate the purpose and state of the TCP segment. The most common flags are:

  • URG: Urgent pointer field is significant.
  • ACK: Acknowledgment number field is significant.
  • PSH: Push function—data should be delivered to the application immediately.
  • RST: Reset the connection.
  • SYN: Synchronize sequence numbers to initiate a connection.
  • FIN: Finish the connection.
  • Window Size (16 bits): The window size indicates the amount of receive buffer space currently available at the receiver. It enables flow control, allowing the sender to adjust its transmission rate based on the receiver’s buffer availability.
  • Checksum (16 bits): The checksum is a value calculated over the TCP header and data. It is used for error detection during transmission to ensure data integrity.
  • Urgent Pointer (16 bits): If the URG flag is set, this field points to the last urgent data octet in the TCP segment. It is used when the urgent data needs special handling.
  • Options (Variable, optional): The options field is optional and used to accommodate various TCP extensions. These extensions include Maximum Segment Size (MSS), Selective Acknowledgment (SACK), Timestamp, Window Scale, and others.

Padding (Variable, optional): The padding field is optional and used to align the TCP header to a 32-bit boundary when additional options are present.

Each of these components plays a crucial role in TCP’s reliable data delivery, flow control, and connection management mechanisms.

The window size field in this header is how the receiver reports its available receive-buffer space to the sender, and it is this feedback that drives flow control through backpressure.
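As a quick illustration, the advertised window can be read straight out of a raw TCP header with Python's struct module (the header bytes below are hand-built for the example, not captured traffic):

```python
import struct

def tcp_window_size(header: bytes) -> int:
    """Return the advertised receive window from a raw TCP header."""
    # Fixed 20-byte header layout, network byte order: source port,
    # destination port, sequence number, acknowledgment number,
    # data offset/flags, window size, checksum, urgent pointer.
    (src, dst, seq, ack, offset_flags,
     window, checksum, urgent) = struct.unpack("!HHIIHHHH", header[:20])
    return window

header = struct.pack(
    "!HHIIHHHH",
    443,                # source port
    51000,              # destination port
    1000,               # sequence number
    2000,               # acknowledgment number
    (5 << 12) | 0x010,  # data offset = 5 words, ACK flag set
    65535,              # advertised window size
    0,                  # checksum (left zero in this illustration)
    0,                  # urgent pointer
)
print(tcp_window_size(header))  # 65535
```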

In the next post, I will cover the AIMD (Additive Increase/Multiplicative Decrease) algorithm, which controls TCP traffic congestion across internet nodes.

Thanks for reading Sum of Bytes! Subscribe for free to receive new posts and support my work.
