
The Keyboard Banger Guide To The TCP Protocol

I have gathered my study notes on the TCP protocol in one big blog post. I hope you are going to enjoy it!

TCP Protocol service model

TCP is used by the vast majority of Internet applications.

A TCP segment carries zero or more bytes of application data (a pure ACK, for example, carries none).

The layer-4 PDU (the TCP segment) built at the source system is not inspected by the routers on the network path. Once it reaches the destination system, the latter reads the segment header and passes the data to the application layer.

For each application, there is a process running in memory.

TCP application multiplexing and demultiplexing

The transport layer, and more precisely the TCP protocol, at the source system inserts data from different application processes into segments. This mechanism is called “application multiplexing”. Each TCP connection (and therefore the target application process) is identified by the 4-tuple: source IP address, source port, destination IP address, destination port. The source system picks a source port number that keeps this 4-tuple unique.

The transport layer (again, the TCP protocol) at the end system extracts application data from received segments and sorts the messages out to the right application process, based on the same 4-tuple: source IP address, source port, destination IP address, destination port. This mechanism is called “application demultiplexing”.
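Here is a minimal Python sketch of the demultiplexing idea (illustrative only, not a real TCP stack): the operating system keeps a table keyed by the connection 4-tuple and uses an arriving segment’s addressing fields to look up the owning socket. All addresses, ports and process names below are made up.

    # Hypothetical socket table: 4-tuple -> owning application process
    sockets = {
        ("10.0.0.5", 51002, "192.0.2.10", 80): "browser tab 1",
        ("10.0.0.5", 51003, "192.0.2.10", 80): "browser tab 2",
        ("10.0.0.5", 51004, "198.51.100.7", 22): "ssh session",
    }

    def demultiplex(src_ip, src_port, dst_ip, dst_port):
        """Return the process that owns this connection, if any."""
        return sockets.get((src_ip, src_port, dst_ip, dst_port),
                           "no matching socket -> reset the connection")

    print(demultiplex("10.0.0.5", 51003, "192.0.2.10", 80))  # browser tab 2
    print(demultiplex("10.0.0.5", 60000, "192.0.2.10", 80))  # no matching socket -> reset the connection

Two connections to the same server and port still map to different processes because their source ports differ.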

TCP port numbers

Port numbers are 16-bit numbers, ranging from 0 to 65535. Those in the range [0-1023] are called well-known port numbers.

TCP services

  • Emulates a bidirectional byte stream of data, by using sequence numbers and acknowledgements
  • Reliable delivery through the following techniques: acknowledgements, segment retransmission, sequence numbers, timers, initial connection setup, and integrity verification (by using a checksum)
  • Flow control: paces the sender so that it does not overwhelm the receiver’s buffer (congestion control, a separate mechanism, deals with overloaded switch and router links).

TCP is full duplex; application processes can send and receive data at the same time.

TCP is point to point; data is transferred between only two hosts.

The transport layer guarantees reliable delivery of segments in spite of the best-effort, unreliable service model provided by the network layer.

Transport layer protocols provide a logical communication between processes. Network layer protocols provide a logical communication between hosts.

TCP protocol is described by a Finite State Machine that lays out all the states that the protocol can be in and all the transitions.

TCP MSS

  • The MSS (Maximum Segment Size) has a confusing name: it determines the maximum amount of application data a segment can carry, NOT the maximum size of the whole segment.
  • It appears in the Options field of the TCP header.
  • The MSS announced by device A tells device B the maximum amount of data B may put in a single segment sent towards A.
  • It counts data bytes only; it includes neither the TCP header nor the IP header.
  • When device A has data to send -such as a JPEG file- the transport layer splits the data into chunks of at most MSS bytes each; the last segment simply carries whatever is left over, which is usually less than the MSS (see the sketch after this list).
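Here is a minimal Python sketch of that chunking step, assuming the application data is already available as a single bytes object. The MSS value of 1460 is just a common example (a 1500-byte Ethernet MTU minus 20 bytes of IP header and 20 bytes of TCP header).

    MSS = 1460  # example value; negotiated per connection in reality

    def chunk_into_segments(data: bytes, mss: int = MSS):
        """Split application data into chunks of at most MSS bytes."""
        return [data[i:i + mss] for i in range(0, len(data), mss)]

    payload = b"x" * 4000          # e.g. part of a JPEG file
    print([len(s) for s in chunk_into_segments(payload)])   # [1460, 1460, 1080]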

TCP buffers

For each direction of data transfer, TCP maintains:

  • a “send buffer”, on the source host
  • a “receive buffer”, on the destination host.

This means, for two hosts A and B exchanging data:

  • host A has a “send buffer” to send data to B, and a “receive buffer” to receive data from B,
  • host B has a “receive buffer” to receive data from A, and a “send buffer” to send data to A.

Application layer and TCP buffers

When the application layer on A has data to send, the data is placed in A’s TCP “send buffer”. TCP takes a chunk of it, puts it in a segment and hands the segment to the network layer.

On host B, TCP ensures that the TCP “receive buffer” contains in-order, sequenced, uncorrupted and complete stream of bytes, identical to the sender’s byte stream. The application process reads data from the TCP “receive buffer” in its original form.
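The following Python sketch mimics one direction of transfer (A to B) with those two buffers. It is deliberately simplified: no loss, no reordering, and the “network” is just a function call; the names are hypothetical.

    from collections import deque

    MSS = 1460
    send_buffer = deque()      # on host A, filled by A's application
    recv_buffer = bytearray()  # on host B, drained by B's application

    def app_write(data: bytes):
        """A's application hands data to TCP."""
        send_buffer.append(data)

    def tcp_transfer():
        """TCP drains A's send buffer in chunks of at most MSS bytes and
        delivers them, in order, into B's receive buffer."""
        while send_buffer:
            data = send_buffer.popleft()
            for i in range(0, len(data), MSS):
                recv_buffer.extend(data[i:i + MSS])

    app_write(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
    tcp_transfer()
    print(recv_buffer.decode())   # B's application reads the original byte stream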

TCP Connection establishment

To establish a connection in the TCP protocol there is what is known as the three-way handshake.

Identifying a TCP connection

The combination of the following fields uniquely identifies a TCP connection:

    • TCP source port
    • TCP destination port
    • IP source address
    • IP destination address
    • Protocol ID (the “Protocol” field of the IP header)

Other notes

  • There are no explicit “negative ACKs”. The only way for the receiver to signal a problem is by sending duplicate ACKs (sending the same ACK again)
  • With each segment that the TCP protocol gives to IP, it starts a timer. When the timer expires and no ACK has been received, an “interrupt event” is generated and the segment that caused the timeout is re-sent.
  • TCP Fast Retransmit is an RFC standard.
  • a sender can send many segments without waiting for an ACK. This is called pipelining and will be discussed further when we talk about the Sliding Window.
  • The TCP protocol can make much better use of the available bandwidth with pipelining, especially when the time to transmit a single segment is small compared to the Round Trip Time (otherwise the link sits idle most of the time).
  • In a Telnet or SSH session, each character typed corresponds to one byte of data carried in its own TCP segment. So the segment size equals the data plus the TCP header overhead: 1 byte + 20 bytes = 21 bytes (before the IP header is added).

TCP Three-way handshake

In the TCP protocol, we talk about two hosts or devices that communicate together. One is the Active opener (the requester) and the other is the Passive opener (usually the server).

The three-way handshake is the first step to open a TCP byte stream between two openers. Here are the three steps:

  1. client sends a Synchronize segment [SYN]
  2. server responds with a segment where both SYN and ACK flags are set [SYN/ACK]
  3. client sends a segment with the ACK flag set [ACK]
tcp three way handshake
Figure 1: TCP Three-way handshake -copyright www.usenix.org

So in simpler text:

  • A —– SYN ————- B
  • B —– SYN, ACK —– A
  • A —–ACK—————B
    • the SYN segment: seq_number= A_isn
    • the SYN/ACK segment (the connection-granted segment): seq_number= B_isn and ACK_number =A_isn+1
    • the ACK segment: seq_number=A_isn+1 and ACK_number = B_isn+1

These messages can be seen with Wireshark. For example, for the HTTP application, on the client side add the filter “tcp.port == 80 && ip.addr == {ip address of HTTP server}”.
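Here is a small numeric sketch of how the sequence and ACK numbers relate in those three segments. Real initial sequence numbers are random; the two ISNs below are fixed, made-up values so the arithmetic is easy to follow.

    A_isn = 1_000   # client's initial sequence number (example value)
    B_isn = 5_000   # server's initial sequence number (example value)

    syn     = {"flags": "SYN",     "seq": A_isn}
    syn_ack = {"flags": "SYN,ACK", "seq": B_isn,     "ack": A_isn + 1}
    ack     = {"flags": "ACK",     "seq": A_isn + 1, "ack": B_isn + 1}

    for segment in (syn, syn_ack, ack):
        print(segment)

Each side acknowledges the other side’s ISN plus one, because the SYN flag itself consumes one sequence number.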

The TCP protocol allows for Simultaneous Open where both sides become Active openers:

tcp-simultaneous-open scenario
figure: TCP Simultaneous Open scenario

TCP three-way handshake in Wireshark

In this example, I will show the steps of the TCP three-way handshake in Wireshark. The TCP connection that will be the subject of our Wireshark trace is a simple HTTP connection to “Google.tn”. The client IP address is 192.168.43.247 and the server IP address is 173.194.116.227.

Running Wireshark will display hundreds if not thousands of lines. Let’s concentrate on HTTP packets by putting the following filter: tcp.port == 80

tcp filter in wireshark

One note to mention: you see the sequence number 0? That is not the number actually sent on the wire. Wireshark displays relative sequence numbers: to simplify reading, it sets the first sequence number of each byte stream to 0, and the remaining sequence numbers of the same host are shown as increments of that relative starting point.

The first segment sent to the server is a TCP SYN segment:

tcp syn segment in wireshark

The server replies with a SYN ACK segment:

tcp-three-way-handshake-wireshark-3

The client sends an ACK segment:

tcp-three-way-handshake-wireshark-4

TCP connection termination

To tear down a TCP connection:

  • A —– FIN ————- B
  • B —– (sends remaining data), ACK —– A
  • B —–FIN —————A
  • A —–ACK————–B

Connection teardown can be initiated by either the client or the server.

Let’s suppose both client and server are in the Established state.

    • Although the TCP connection termination can be initiated by either the client or the server, let’s assume here that the application process on the client side informs TCP that it no longer needs the connection. Client’s TCP protocol sends a FIN segment and enters the FIN-WAIT-1 state. At this point the local application no longer sends data, but it still can receive data.
    • server’s TCP receives the FIN segment and informs the application process to terminate the connection. Server sends back an ACK and enters the CLOSE-WAIT state. When the client receives the ACK, it enters the FIN-WAIT-2 state.
    • the application process at the server side finishes what it was doing and signals TCP that it is ready to close the connection. TCP sends a FIN segment to the client and enters the LAST-ACK state
    • client receives the FIN segment from the server, sends an ACK, enters the TIME-WAIT state and starts a timer equal to twice the Maximum Segment Lifetime (MSL). This timer gives the final ACK time to reach the server, and lets the client re-send it if the server retransmits its FIN
    • server receives the ACK and enters the CLOSED state
    • the client’s timer expires and the client enters the CLOSED state (the whole sequence is sketched below)
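The sequence above can be written down as a small table of steps. The following Python sketch just replays the prose; it is not a full TCP state machine.

    # (event, client state afterwards, server state afterwards)
    steps = [
        ("client app closes, client sends FIN",     "FIN-WAIT-1", "ESTABLISHED"),
        ("server ACKs the FIN",                     "FIN-WAIT-2", "CLOSE-WAIT"),
        ("server app closes, server sends FIN",     "FIN-WAIT-2", "LAST-ACK"),
        ("client ACKs the FIN, starts 2*MSL timer", "TIME-WAIT",  "LAST-ACK"),
        ("server receives the ACK",                 "TIME-WAIT",  "CLOSED"),
        ("client 2*MSL timer expires",              "CLOSED",     "CLOSED"),
    ]

    for event, client, server in steps:
        print(f"{event:42s} client={client:11s} server={server}")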

The following diagram describes the TCP connection termination sequence:

tcp-connection-termination
TCP connection termination – copyright The TCP/IP Guide

The TCP connection termination process can also be described in a Finite State Machine.

TCP Finite State Machines (FSM)

Network protocol states, events and actions can be captured in FSM. A state describes a unique configuration of the protocol.

The FSM describes how a protocol behaves from both the client and the server points of view.

FSM can describe all or some of the states. If only some states are described, this leaves the door open to improvements.

The arrows between states describe the transition from one state to another.

The transition is described by two things:

– the event that led to the transition

– the action that the system takes when the transition happens. If there is no action, then we leave it blank.

finite-state-machine-example
A sample Finite State Machine in TCP, at the sender side. This is the Stop-and-Wait protocol.

A protocol can be in only one state at a time.

In its FSM, the TCP protocol starts in the CLOSED state, which is a fictional state (it represents the absence of a connection). The following diagram depicts the TCP Finite State Machine.

tcp-finite-state-machine
TCP Finite State Machine – copyright TCP/IP Guide

TCP also allows for a “Simultaneous Close”, where both parties of the communication initiate a TCP connection close request.

tcp-simultaneous-close
Figure: TCP Simultaneous Close – copyright ttcplinux.sourceforge.net

References

The TCP/IP Guide: A Comprehensive, Illustrated Internet Protocol Reference

TCP protocol header format

The header overhead in the TCP protocol is usually 20 bytes. You can read that off the TCP header figure below by multiplying the number of rows by the size of each row: the minimum number of rows is 5, because the Options and Data fields are not mandatory, and each row is 32 bits (bit 0 to bit 31), i.e. 4 bytes. 5 * 4 = 20 bytes.

TCP-header-format
TCP header format – copyright Stackoverflow.com
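Before going field by field, here is a minimal Python sketch that unpacks the 20-byte fixed header with the struct module. The sample segment at the bottom is hand-built with example values, not a captured packet.

    import struct

    def parse_tcp_header(segment: bytes):
        (src_port, dst_port, seq, ack,
         offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
        return dict(src_port=src_port, dst_port=dst_port, seq=seq, ack=ack,
                    header_len=(offset_flags >> 12) * 4,   # Data Offset is in 32-bit words
                    flags=offset_flags & 0x01FF,           # the flag bits
                    window=window, checksum=checksum, urgent=urgent)

    # A SYN from port 51000 to port 80, seq=1000, window=64240 (all example values).
    syn = struct.pack("!HHIIHHHH", 51000, 80, 1000, 0, (5 << 12) | 0x002, 64240, 0, 0)
    print(parse_tcp_header(syn))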
    • source port: the client picks an ephemeral source port for each outgoing connection. To keep the port unique among its active connections, many implementations simply increment the source port number with each new connection.
    • destination port: the port defined by the application that will receive the byte stream
    • checksum: calculated over both the TCP header and the data (plus a pseudo-header taken from the IP layer). The same Internet checksum algorithm is used for the UDP checksum and the IP header checksum (a small sketch of the algorithm follows this list)
    • TCP options: an optional, variable-length field; several options (such as window scaling and Selective ACKs) were added after the original TCP standard was released
    • Header Length: gives the size of the TCP header, in 32-bit words, and consequently determines where the data portion of the segment starts. It plays the same role as the IHL (header length) field of the IP header. This field is formally called the Data Offset.
    • (Flags)
      • SYN (SYNchronize): this flag tells the receiver “here is the sequence number of my first byte of data. Please synchronize with me.”
      • ACK: When set to 1, the ACK bit tells that the ACK Number field is valid. In the initial SYN segment, the ACK flag is off. Once a connection is made (in other words the TCP Three-way Handshake is done), this flag will always be set.
      • FIN: tells the other side “I have no more data to send”; used to close the connection gracefully
      • RST (ReSeT): aborts the connection immediately, for example when a segment arrives for a port with no listening process
      • PSH (PuSH): tells the Transport layer of the destination host to send data to the Application layer as soon as the segment is received
      • URG (URGent): tells the Transport layer of the destination host that the Application layer of the source host has marked this data as Urgent.
    • Sequence number: Let’s take a step back and understand that the transport layer views application data as a continuous byte stream. When data is put into segments, the transport layer assigns a number to each byte of the byte stream, and a segment’s Sequence Number equals the byte-stream number of the first byte it carries, in one direction. This means that the sequence numbers of the segments from A to B are independent of the sequence numbers of the segments from B to A. For example, a segment with sequence number 42 means “the first byte of this segment is byte number 42 of the byte stream”. For simplicity, we sometimes pretend that byte-stream numbers, and therefore initial sequence numbers, start at 0. In reality, initial sequence numbers are random values; the randomness prevents overlapping with segments from old TCP connections.
      • Initial sequence numbers are also randomised as a security measure: if they always started at 0, an attacker sniffing some TCP segments could guess how many bytes had already been sent and predict the next sequence numbers.
    • ACK Number: an ACK sent by host B acknowledges all bytes received from host A up to the current segment and requests the next byte. The ACK number equals the sequence number of the next data byte that the host expects to receive. For example, if host A sends a segment with sequence number S carrying N data bytes, host B replies with ACK_number = S + N; for a SYN or a FIN, which consume one sequence number, the ACK is S + 1.
    • Window Size: tells the sender how much data the RecvBuffer, at the receiver side, can accept. Remember that the RecvBuffer contains uncorrupted, sequenced and in-order bytes of data.
    • Urgent Pointer: valid only if the URG flag is set. It gives the position (offset) of the last byte of the urgent data within the data field. So when the Application layer of the destination host reads this field, it knows where the urgent data ends within the data it receives from the Transport layer. To find the last byte of urgent data, add the Urgent Pointer offset to the Sequence Number.
    • Options: may carry, among other things, the negotiated MSS value and the Window scaling factor (a parameter useful on high-speed, high-latency networks).
  • In the TCP protocol, suppose host A has three segments to send to host B, and host B receives only the first and third segments. What TCP does here depends on the window size. If the window is 1 segment, the third segment is dropped and retransmission of the second segment is requested. If the window were 3 segments, B could keep the third segment in its buffer and request retransmission of only the second segment.
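As promised above, here is a minimal sketch of the Internet checksum (the one’s-complement sum of 16-bit words shared by TCP, UDP and the IP header). To keep it short, the TCP pseudo-header is not included; the sample input is an IP header with its checksum field zeroed, so the function returns the value that would be written into that field.

    def internet_checksum(data: bytes) -> int:
        if len(data) % 2:                # pad odd-length data with a zero byte
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
        return ~total & 0xFFFF           # one's complement of the sum

    sample = (b"\x45\x00\x00\x3c\x1c\x46\x40\x00\x40\x06\x00\x00"
              b"\xac\x10\x0a\x63\xac\x10\x0a\x0c")
    print(hex(internet_checksum(sample)))   # 0xb1e6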

TCP Reno

TCP Reno is another variant of TCP that appeared after TCP Tahoe. We will observe the behaviour of TCP Reno on two major events: a timeout and duplicate ACKs.

  • On a timeout: enters Slow Start mode.

tcp-reno-behaviour1

  • on receiving dup ACKs:
    • the missing segment is sent (TCP Fast Retransmit)
    • ssthresh ← cwnd/2
    • TCP enters Fast Recovery mode: cwnd ← cwnd/2 + number of dup ACKs received. In other words, the window grows by one segment for each duplicate ACK received. This is called Window Inflation, a technique that is part of Fast Recovery, and it lasts only as long as Fast Recovery does. This is where the trick lies: since Fast Retransmit takes a whole RTT, TCP Reno says “why not take the opportunity to send new data?” Because the inflated window allows more outstanding segments than the old cwnd did, the sender transmits new data without waiting for the next RTT. On the figure below, the window inflation is not shown.

tcp-reno-behaviour-on-dup-Acks

  • When the ACK for the retransmitted segment is received (a good ACK), TCP exits Fast Recovery: the window is deflated and shrinks to cwnd ← old cwnd / 2 (half of the old window size). If this window size allows more outstanding segments to be sent, the sender sends them. The sender then waits for the segments sent during Fast Recovery to be ACKed; when a good ACK arrives, TCP enters the Congestion Avoidance state (the bookkeeping is sketched below).
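Here is a minimal Python sketch of the Reno bookkeeping described in the last few bullets (units are segments; timing, pacing and actual retransmission are left out). The numbers in the example run at the bottom are arbitrary.

    class RenoSender:
        def __init__(self):
            self.cwnd = 1.0
            self.ssthresh = 64.0
            self.in_fast_recovery = False

        def on_timeout(self):
            self.ssthresh = max(self.cwnd / 2, 2)
            self.cwnd = 1.0                      # back to Slow Start
            self.in_fast_recovery = False

        def on_dup_ack(self, dup_count):
            if dup_count == 3:                   # Fast Retransmit, enter Fast Recovery
                self.ssthresh = max(self.cwnd / 2, 2)
                self.cwnd = self.ssthresh + 3    # window inflation
                self.in_fast_recovery = True
            elif self.in_fast_recovery:
                self.cwnd += 1                   # inflate by one per extra dup ACK

        def on_new_ack(self):
            if self.in_fast_recovery:            # good ACK: deflate, leave Fast Recovery
                self.cwnd = self.ssthresh
                self.in_fast_recovery = False
            elif self.cwnd < self.ssthresh:
                self.cwnd += 1                   # Slow Start
            else:
                self.cwnd += 1 / self.cwnd       # Congestion Avoidance, about +1 per RTT

    s = RenoSender()
    for _ in range(8):
        s.on_new_ack()
    print("cwnd before loss:", s.cwnd)
    for dups in (1, 2, 3, 4):
        s.on_dup_ack(dups)
    print("cwnd during fast recovery:", s.cwnd)
    s.on_new_ack()
    print("cwnd after recovery:", s.cwnd, "ssthresh:", s.ssthresh)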

The TCP Reno FSM is given here:

tcp-reno-fsm
TCP Reno FSM © Stanford University

Note: Optimal window size = bandwidth * delay (the bandwidth-delay product)

TCP Tahoe

  • In the old TCP implementation (the pre-Tahoe):
    • the transmitter starts by sending a full window’s worth of segments at once. This tends to cause packet loss, and the network ends up being used at less than its capacity.
    • a timer is started for each packet sent
  • the TCP Tahoe FSM has two states: Slow Start and Congestion Avoidance.
tcp-tahoe-fsm
TCP Tahoe FSM – © Stanford University
  • In the TCP Tahoe implementation, after a Retransmission TimeOut (RTO) or upon receiving duplicate ACKs, the transmitter goes back to the “Slow Start” state.
TCP Tahoe behaviour1
TCP Tahoe Behaviour: getting back to “Slow Start” state © Stanford University

The Slow Start state is a way for TCP to “probe” for the capacity of the network. It continues probing until it discovers a point where the network gets congested. At that point, it switches to Congestion Avoidance state.

TCP-Tahoe-behaviour2

In the Slow Start state:

  • The window size starts small and grows exponentially. If TCP implements delayed ACKs the growth is a bit slower, but it is still exponential and more rapid than in the Congestion Avoidance state,
  • The congestion window is set to 1, so TCP sends one segment carrying one MSS of data. When it receives a “good ACK” for that segment (a “good ACK” is an ACK that requests the next byte beyond everything already acknowledged), it increases the window to 2 and sends 2 segments. When both of those segments are ACKed, it increases the window by 2 MSS. More generally, within each RTT, TCP increases the congestion window by 1 MSS for every ACK received, so the window size goes 1, 2, 4, 8…
tcp-slow-start-example1
Simple exchange of segments that demonstrates Slow Start state © Kamil Sarac
  • This process continues until cwnd reaches ssthresh (the Slow Start Threshold). At that point, TCP grows the window additively (in AIMD manner, adding one MSS per RTT). This is the Congestion Avoidance state
tcp-tahoe-ssthresh
TCP switches to AIMD at ssthresh © Kamil Sarac
  • If one of the following events happens…
    • a RTO, or
    • three (or more) duplicate ACKs received
  • … then a packet loss is assumed and:
    • ssthresh← cwnd/2
    • cwnd ← 1, and
    • the TCP sender retransmits the missing segment (when triggered by duplicate ACKs, this is TCP Fast Retransmit).
    • TCP gets back to Slow Start state.
  • The cycle repeats:
tcp-tahoe-slow-start-congestion-avoidance
TCP Tahoe cyle © www.soi.wide.ad.jp

 

  • In the congestion avoidance state:
    • for every ACK received, TCP increases the congestion window by roughly MSS * (MSS / cwnd), which adds up to about one MSS per RTT.
    • the growth of the window size is less rapid than in Slow Start
  • What happens when cwnd = ssthresh? TCP can behave either in Slow Start state or in Congestion Avoidance state.
  • To explain the initial exponential burst in the window-size graph of a TCP Tahoe implementation, consider a new TCP connection: ssthresh starts at a very high value, so cwnd grows exponentially until duplicate ACKs or an RTO occur. At that point ssthresh is set to cwnd/2 and TCP Tahoe enters Slow Start again; once cwnd reaches the (now lower) ssthresh, it switches to AIMD. So that large burst appears only during the initial probe.
  • When there are dup ACKs, the window size does not grow.
  • The flow control window is a parameter that sets an upper bound on the amount of data the transmitter can send, but it does not take the state of the network into account: going by the flow control window alone, the sender might transmit faster than the network can carry. The sender’s actual window can therefore be much smaller than the flow control window. Recall this formula from the article “Network-based And Host-based Congestion Control”:

Sender’s window size = min (advertised window, congestion window)

  • AIMD works well when network conditions are relatively stable.
  • The main difference between the Slow Start state and the Congestion Avoidance state is how TCP increases cwnd (the two growth rules are contrasted in the sketch below).
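The following Python sketch contrasts the two growth rules in one loop (units are MSS, and one loss-free RTT is assumed to return one ACK per segment sent, which is a simplification).

    cwnd, ssthresh = 1.0, 16.0
    history = []

    for rtt in range(10):
        history.append(round(cwnd, 2))
        for _ in range(int(cwnd)):         # roughly one ACK per segment this RTT
            if cwnd < ssthresh:
                cwnd += 1                  # Slow Start: +1 MSS per ACK (doubles per RTT)
            else:
                cwnd += 1 / cwnd           # Congestion Avoidance: about +1 MSS per RTT

    print(history)   # exponential up to ssthresh, then roughly linear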

TCP RTT estimation and self clocking

  • In the TCP protocol, each connection is characterized by its Round Trip Time (RTT), which is the time it takes for a segment to be sent and to get its corresponding ACK.
  • The following rule must be true:

Timeout >= RTT

  • The connection timeout must be slightly greater than the RTT
    • if it is smaller than RTT, this will result in unnecessary segment retransmissions
    • if it is much greater than the RTT, then whenever a segment is lost we wait a long time before resending it, which results in transfer delays for the application.
  • For the sake of calculations, RTT will be denoted as SampleRTT.
  • SampleRTT fluctuates from one segment to another, because of congestion.
  • Jacobson’s work gave us the standard way to estimate the RTT and to derive segment timeouts from it.
  • EstimatedRTT is the average of SampleRTT values, over time. There are two ways to calculate it depending on the TCP implementation.
  • Let’s call “r” the estimated RTT and “m” the last measured RTT value.

RTT estimation in the pre-Tahoe implementation

r = α*r + (1-α) * m

Timeout = β*r, where β = 2

Over time, EstimatedRTT follows an EWMA (Exponentially Weighted Moving Average) of the SampleRTT values: older samples are given exponentially decreasing weight.

RTT estimation in the Tahoe implementation

  • e = m – r
  • r = r + g*e
  • v = v + g*(|e| – v)
  • Timeout = r + β*v, where β = 4

where:

      • e is the error in the estimate
      • g is the EWMA gain, chosen after extensive experimentation
      • v is the measured variance (aka the deviation)

β is also chosen after extensive experimentation

The higher the deviation value is, the higher the Timeout value is.
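Putting the Tahoe-era formulas together, here is a minimal Python sketch of the timeout computation. The gain g = 0.125 and β = 4 are example values in the spirit of the text, and the sample RTTs are made up (in milliseconds).

    def make_rto_estimator(g=0.125, beta=4):
        r = None      # smoothed RTT estimate
        v = 0.0       # smoothed deviation

        def update(m):                     # m = latest measured SampleRTT
            nonlocal r, v
            if r is None:
                r, v = m, m / 2            # a common initialisation choice
            else:
                e = m - r                  # error in the estimate
                r = r + g * e
                v = v + g * (abs(e) - v)
            return r + beta * v            # the timeout

        return update

    rto = make_rto_estimator()
    for sample in (100, 120, 95, 300, 110):    # the 300 ms spike inflates the deviation
        print(f"SampleRTT = {sample:3d} ms  ->  Timeout = {rto(sample):6.1f} ms")

Notice that a single large deviation raises the timeout much more than the same change would raise a plain average, which is exactly the point of tracking v.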

ACK clocking and Self-clocking

  • Let’s suppose we have three hosts A, B and C, separated by a bottleneck link B-C. Even if host A initially sends a burst of segments at a high rate (they will be queued at B), C will respond with ACKs at the rate of the B-C link (the rate of the bottleneck link). For each ACK received, A sends a new segment, so the ACKs from C “clock” the transmission. Since all of this happens within TCP itself, we say that TCP is “self-clocking”. This is true as long as:
    • the bottleneck link is not congested in the direction from the receiver to the sender; otherwise (if the C-B direction is congested), the delayed ACKs end up slowing down the transmission.
    • no ACKs are lost
  • Self clocking allows the flow control window to “slide” and thus to introduce new segments into the network.
tcp-self-clocking
TCP Self clocking © Ratul Mahajan, Washington University

TCP Fast Retransmit

  • Fast Retransmit: the sender retransmits a lost segment before the RTO expires
  • Fast Retransmission is opposed to Timer-based Retransmission. Timer-based Retransmission is when the sender waits for RTO to expire before retransmitting the requested segment.
  • Fast Retransmit is part of both TCP Tahoe and TCP Reno
  • when a TCP receiver receives out-of-order segments, it immediately sends a duplicate ACK, to signal to the sender that segments are arriving out of order. When the TCP sender receives duplicate ACKs, it assumes there is packet loss. The sender’s job is to fill the holes in the receiver’s byte stream as soon as possible.
    tcp-fast-retransmit-simple-example
    TCP Fast Retransmit example © Bechir Hamdaoui
  • Once duplicate ACKs start arriving, the TCP sender waits until it has received dupthresh of them and then re-sends the missing segment without waiting for the RTO (Retransmission TimeOut) to expire. Note that I said the “missing segment” and not “missing segments”, because I am assuming plain cumulative ACKs -and not Selective ACKs- are used. This is the purpose of Fast Retransmit (the sender-side logic is sketched after this list).
  • Fast Retransmit occurs in one RTT, after receiving the three dup ACKs. In this same RTT, the ACK for the retransmitted segment is received.
  • Receiving one or more duplicate ACKs indicates one of two things: packet loss and/or packet reordering in the network. Since the TCP sender has no way to tell which one it is, it assumes packet loss.
  • dupthresh: Duplicate ACK Threshold: is the number of duplicate ACKs that the TCP sender waits for before firing TCP Fast Retransmit. Usually, dupthresh=3, but some Unix implementations have dupthresh >3.
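Here is the sender-side logic as a minimal Python sketch: count duplicates of the last cumulative ACK and fire a retransmission once dupthresh of them have been seen. The byte numbers in the example run are arbitrary.

    DUPTHRESH = 3

    def sender_on_ack(state, ack_num):
        """state holds 'last_ack' (highest cumulative ACK seen) and 'dup_count'."""
        if ack_num == state["last_ack"]:
            state["dup_count"] += 1
            if state["dup_count"] == DUPTHRESH:
                print(f"fast retransmit: resend the segment starting at byte {ack_num}")
        else:                                  # a new, higher ACK
            state["last_ack"] = ack_num
            state["dup_count"] = 0

    state = {"last_ack": 0, "dup_count": 0}
    # The receiver got bytes 0-999, then segments beyond a hole: it keeps asking for byte 1000.
    for ack in (1000, 1000, 1000, 1000, 4000):
        sender_on_ack(state, ack)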

Delayed ACK, Cumulative ACK and Fast Retransmit

  • A sends an in-order segment
  • B receives the in-order segment with the expected sequence number and waits up to 500 ms. If another in-order segment with the expected sequence number arrives within those 500 ms, B immediately sends a single Cumulative ACK covering both and requesting the next byte. Otherwise, when the 500 ms expire, it sends a Delayed ACK for the first segment.
  • Each time A sends a segment it starts a timer. If there is a timeout and A has not received a corresponding ACK (acknowledging the data it contains and requesting the next bytes), it retransmits the segment.
  • if B receives an out-of order segment then it immediately sends a duplicate ACK requesting the expected sequence number.
  • When A receives the duplicate ACK three times (or more), it assumes the segment is lost and enters TCP Fast Retransmit: A sends the missing segment immediately, without waiting for the timer to expire. Be careful though: receiving three or more duplicate ACKs does not always mean a packet was lost; it is just a strong indication of it.
  • Here is an interesting situation: A sends in-order segments 1 and 2. B replies with an ACK for each, but the ACK for segment 1 is lost. If the ACK for segment 2 arrives at A before segment 1 times out, A does not resend segment 1, because the ACK for segment 2 cumulatively acknowledges all bytes up to and including segment 2. But if the ACK for segment 2 arrives after segment 1’s timeout, A re-sends segment 1; B then detects that the segment contains bytes it already has and simply discards it.

TCP Sliding Window

  • the TCP sliding window is not only about flow control; it also keeps track of which bytes/segments have been sent, received, ACKed, written, read and expected, on both the sender and the receiver
  • uses the concept of pipelining: sending a bunch of segments without waiting for ACKs → improved performance
  • the TCP Sliding window protocol improves on the Stop-and-Wait protocol
  • Window size is the number of bytes (or packets) that can be sent or received
  • The window size is first advertised in the SYN segment (and updated in every segment after that)
  • a sender maintains three variables:
    • SWS (Send Window Size): the number of segments the sender can send without waiting for an ACK → the allowed number of unacknowledged segments in flight
    • LAR: the sequence number of the Last ACK Received
    • LSS: the sequence number of the Last Segment Sent
  • the receiver maintains three variables:
    • RWS (Receive Window Size): the number of out-of-order segments the receiver is willing to accept
    • LAS: the sequence number of the Last Acceptable Segment
    • LSR: the sequence number of the Last Segment Received
  • The sender makes sure this invariant is true: LSS – LAR <= SWS
  • the receiver makes sure this invariant is true: LAS – LSR <= RWS
  • Any time a host sends packets, it holds a SWS. Any time it has to receive data, it holds a RWS → since TCP is a full duplex protocol, each host maintains both SWS and RWS.
  • in terms of bytes:
    • the receiver cannot buffer data beyond ACK sequence number + RWS
    • the sender must not send data beyond ACK sequence number + RWS
  • principle:
    • Sender side:
      • when a higher ACK is received:
        • the LAR pointer advances one slot
        • the window advances one slot, which means we can send one more segment
        • when the sender keeps receiving the same ACK, it means that a segment was probably lost. The sender must retransmit it and get an ACK for it before the window can slide further.
    • Receiver side:
      • when a segment is received:
        • if seqnum <= LSR or seqnum > LAS, then the segment is discarded
        • otherwise it is accepted
        • if ACK n is lost but ACK n+1 reaches the sender, there is no need to resend ACK n, because ACK n+1 acknowledges everything ACK n covered → this is the concept of TCP Cumulative ACKs
  • if a RTT = x ms, then the number of possible RTTs per one second is (1000/x)
  • SWS <= SendBuffer
  • RWS <= RcvBuffer
  • Advertized Window = the number of bytes the receiver can currently accept, and therefore the maximum amount of unacknowledged data the sender may have in flight. This value is advertized by the receiver in the TCP header (Window field). It is also called RecvWindow.
tcp-sliding-window-1
Figure: The place of the Window field, in the TCP header
    • Advertized Window = maxRcvBuffer – (LastByteRcvd – LastByteRead)
  •  Effective Window: how many bytes the sender can still send to a particular receiver. This value is relative to the Advertized Window.
    • Effective Window = Advertized Window – (LastByteSent – LastByteAckd)
  •  LastByteWritten – LastByteAcked <= maxSendBuffer
  • Bytes to know:
    • at the sender side: LastByteAckd < LastByteSent < LastByteWritten
    • at the receiver side: LastByteRead < NextExpectedByte < LastByteRecvd

tcp-sliding-window-2

Figure: Important bytes to know, from the sender side and the receiver side, in a TCP connection

  • the number of segments a sender can send is limited by two factors:
    • SWS
    • MSS
  • The window size is variable in time
  • The Sequence Number Space is the set of all possible sequence numbers of the bytes sent to a receiver. It must contain at least SWS + RWS sequence numbers
  • if RWS == 1, then we talk about “Go-back-N” protocol. To illustrate the process of the Go back N protocol, let’s assume:
    • SWS = 3
    • Host A sends segments 1, 2, 3
    • segment 2 is lost
    • Since RWS == 1, only one segment is allowed in the RecvBuffer, so host B cannot buffer segment 3. Host B therefore sends ACK2, ACK2, ACK2. Eventually host A times out on segment 2 and resends it. Once it is received, host B sends ACK3 and host A resends segment 3 (the sender-side window bookkeeping is sketched below).
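Here is a minimal Python sketch of the sender-side bookkeeping, in segment units: the sender keeps transmitting while the invariant LSS – LAR <= SWS holds, and each new cumulative ACK slides the window forward. Retransmissions and timers are left out.

    SWS = 3
    LAR = 0          # Last ACK Received (highest cumulative ACK seen)
    LSS = 0          # Last Segment Sent

    def try_send(total_segments):
        global LSS
        while LSS - LAR < SWS and LSS < total_segments:
            LSS += 1
            print(f"send segment {LSS}")

    def on_ack(ack):
        global LAR
        LAR = max(LAR, ack)
        print(f"ACK {ack} received, window slides: LAR={LAR}, LSS={LSS}")

    try_send(6)      # sends segments 1, 2, 3, then stalls (window full)
    on_ack(2)        # cumulative ACK covering segments 1 and 2
    try_send(6)      # the window slid by two: sends segments 4 and 5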
