In this post, I define some principles of packet switching.
Definitions for Packet Switching
Packet = a self-contained unit of information
Switching = forwarding packets quickly and efficiently. When the link is free, the packet is forwarded immediately; when the link is busy, forwarding is delayed.
Flow = a collection of datagrams that belong to the same end-to-end communication
Principle of Packet Switching
- In the old switching model, the source (SRC) inserts the full path into each packet. Each switch on the path reads the next hop from the packet and forwards it accordingly. The process repeats until the packet reaches its final destination. This is called source routing. Nowadays it has been abandoned for security reasons.
- In the new switching model, each switch keeps a small amount of state: a table that associates addresses with next hops. However, packet switches do not maintain per-flow state. If a flow ends, packet switches are simply not aware of it.
- Packet Switches are routers and switches
- each Packet Switch maintains a Forwarding Table
- only one packet uses the link at a time
- Packet switches use buffers when more than one packet arrives. Buffers are important especially in times of network congestion
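The forwarding-table lookup and buffering behavior described above can be sketched in Python. This is a minimal, hypothetical model (the class and method names are illustrative, not from any real router software): a table maps destination addresses to next hops, a lookup is done per packet with no per-flow state, and packets are buffered when the link is busy.

```python
from collections import deque

class PacketSwitch:
    """Toy model of a packet switch: a forwarding table plus a buffer."""

    def __init__(self):
        self.forwarding_table = {}  # destination address -> next hop
        self.buffer = deque()       # packets waiting for the link to free up

    def add_route(self, destination, next_hop):
        self.forwarding_table[destination] = next_hop

    def lookup(self, destination):
        # Per-packet lookup: the switch keeps no per-flow state.
        return self.forwarding_table.get(destination)

    def receive(self, packet, link_busy):
        if link_busy:
            # Only one packet uses the link at a time; queue the rest.
            self.buffer.append(packet)
            return None
        return self.lookup(packet["dst"])

switch = PacketSwitch()
switch.add_route("10.0.0.1", "eth1")
print(switch.receive({"dst": "10.0.0.1"}, link_busy=False))  # forwarded via eth1
switch.receive({"dst": "10.0.0.1"}, link_busy=True)
print(len(switch.buffer))  # one packet waiting in the buffer
```

Note how a failed or finished flow leaves no trace in the switch: only the address-to-next-hop table and the transient buffer exist.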
Advantages of Packet Switching
Packet switches present three advantages:
- they allow each packet to use the full capacity of the link
- they allow users to share the link
- resiliency against node failures: if an intermediate device fails, packets are rerouted over another path
Let’s suppose we have a packet switch with a link to the internet. If the link is free, user A can use the full capacity of the link to download files. When user B wants to use the link, the link is shared between users A and B. We call this process statistical multiplexing.
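The sharing described above can be illustrated with a toy calculation. The 100 Mbps link capacity is an assumed number, and real statistical multiplexing depends on traffic patterns rather than a strict equal split; this sketch only shows the idea that active users divide the capacity among themselves.

```python
LINK_CAPACITY_MBPS = 100  # assumed link capacity for illustration

def share_per_user(active_users):
    """Return the approximate bandwidth each active user gets."""
    if active_users == 0:
        return 0.0
    return LINK_CAPACITY_MBPS / active_users

print(share_per_user(1))  # user A alone gets the full link
print(share_per_user(2))  # users A and B share the link
```

When user B goes idle again, user A is back to the full capacity; no capacity is reserved for idle users, which is what distinguishes this from circuit switching.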
The Internet uses packet switching both for these benefits and because the early networks it grew out of already used packet switching.