Federico Mengozzi

The components of a router


Routers consist of many hardware parts, each with its own function.

Input ports

Input ports perform many tasks. For starters, they need to perform all the link-layer functions needed to interoperate with the link layer at the other side of the incoming link.

Then there is the lookup function on the forwarding table. The router matches the prefix of the packet’s destination address against the entries in the table; in particular, longest-prefix matching is used for this task. If there is no match, a default output interface is usually configured. The lookup is a critical portion of the router’s job, so it’s important to optimize it as much as possible, for example with dedicated hardware such as Ternary Content Addressable Memories (TCAMs). After the lookup, the packet is sent to the switch fabric.
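As a sketch of the idea (not of real router hardware, which uses TCAMs or trie structures rather than a linear scan), longest-prefix matching over a small forwarding table can be illustrated in Python; the table entries and port numbers below are made-up examples:

```python
# Minimal sketch of longest-prefix matching; prefixes and port
# numbers are hypothetical examples.
import ipaddress

FORWARDING_TABLE = [
    (ipaddress.ip_network("11.0.0.0/8"), 1),
    (ipaddress.ip_network("11.1.0.0/16"), 2),
    (ipaddress.ip_network("11.1.2.0/24"), 3),
]
DEFAULT_PORT = 0  # used when no prefix matches

def lookup(dst: str) -> int:
    addr = ipaddress.ip_address(dst)
    matches = [(net.prefixlen, port)
               for net, port in FORWARDING_TABLE if addr in net]
    # The longest (most specific) matching prefix wins
    return max(matches)[1] if matches else DEFAULT_PORT

print(lookup("11.1.2.5"))   # matches all three entries; the /24 wins -> 3
print(lookup("12.0.0.1"))   # no match -> default port 0
```

A destination such as 11.1.2.5 matches all three prefixes, and the /24 entry is chosen precisely because it is the longest.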

In addition to matching, control packets arriving at the input ports are sent directly to the routing processor.

Other tasks performed at the input ports include inspecting the packet’s version number, checksum and time-to-live field (the latter two are rewritten) and updating counters used for network management.

Switch fabric

It connects the in-ports to the out-ports and is responsible for moving packets according to the forwarding information. There are several ways to move packets from in-ports to out-ports.

Switching via memory

The routing processor copies the packet from the input-port buffer to the correct output buffer. If the memory bandwidth is $B$ packets per second (written into or read from memory), then the overall forwarding throughput is at most $\dfrac{B}{2}$, since every packet crosses the memory twice: one write and one read. It’s also important to notice that forwarding can only be serial, since only one memory access can happen at a time and there is no room for parallelism.
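The $B/2$ bound follows from trivial arithmetic; the bandwidth figure below is an assumed example:

```python
# Memory-based switching: each forwarded packet costs two memory
# operations (one write into memory, one read out of it), so the
# forwarding rate cannot exceed B / 2. B is an assumed figure.
B = 2_000_000            # memory operations per second (assumed)
max_throughput = B / 2   # packets per second
print(max_throughput)    # 1000000.0
```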

Switching via bus

Packets move from the in-ports over a shared bus connected to all out-ports. Before a packet ends up on the bus, a switching label is prepended; the destination out-port uses the label to accept the packet (all other ports simply ignore it). The label is then removed at the out-port.

Switching via an interconnection network

The crossbar consists of $N$ horizontal links ($N$ in-ports connected to $N$ out-ports) and $N$ vertical links, forming a grid of $N^2$ crosspoints. In this way the switch fabric controller can close specific crosspoints and have packets move along the resulting circuit. It’s the only type of switching that offers parallelism, as long as the packets are not destined to the same output port.
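The parallelism condition can be stated compactly: a set of packets can cross the fabric in the same cycle only if no two of them share an input port or an output port. A toy check of this condition, under that simplified model:

```python
# Simplified crossbar model: packets are (in_port, out_port) pairs,
# and a batch can be switched in parallel only if all input ports
# and all output ports in the batch are distinct.
def can_switch_in_parallel(packets):
    ins = [p[0] for p in packets]
    outs = [p[1] for p in packets]
    return len(set(ins)) == len(ins) and len(set(outs)) == len(outs)

print(can_switch_in_parallel([(1, 1), (2, 2)]))  # True: disjoint ports
print(can_switch_in_parallel([(1, 1), (2, 1)]))  # False: both want out-port 1
```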

Output ports

This is where packets arriving from the switch fabric are stored and transmitted (selected and de-queued); output ports also perform the necessary link-layer and physical-layer functions.

Routing processor

It performs control-plane functions such as executing routing protocols, maintaining the routing tables and computing the forwarding tables. Forwarding tables are either computed and updated using a routing algorithm or received from an SDN (software-defined networking) controller.


Assume a transmission rate of $R_{line}$ (bytes/s), the same for the $N$ input and the $N$ output ports, and a switch fabric transfer rate of $R_{switch}$. If $R_{switch} = c \cdot N \cdot R_{line}$ for some speedup factor $c \ge 1$, then only negligible queuing will occur at the input ports.
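As a back-of-the-envelope check with assumed figures: $N$ ports at rate $R_{line}$ need a fabric rate of at least $N \cdot R_{line}$ for input queueing to stay negligible.

```python
# Required fabric rate for negligible input queueing (c = 1).
# Port count and line rate are assumed example figures.
N = 16
R_line = 1_250_000_000           # 10 Gb/s expressed in bytes/s
R_switch_needed = N * R_line
print(R_switch_needed)           # 20000000000 bytes/s
```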

Input queuing

If the switch fabric is not fast enough, significant queueing can build up at the input ports, even with a crossbar. Since it’s only possible to transfer two packets simultaneously if they have different input ports and different output ports, scenarios such as head-of-line (HOL) blocking can arise.

The following is an example of HOL blocking.

Given two input ports and two output ports, suppose the packets $P_1, P_2$ need to be sent from $in_1$ to $out_1$, $P_3$ from $in_2$ to $out_1$ and $P_4$ from $in_2$ to $out_2$. Let’s assume that $P_1$ is the first being processed.

In this case $P_3$ needs to wait for $P_1$ to be forwarded in order to use $out_1$. However, $P_4$ could move from $in_2$ to $out_2$ if it weren’t for $P_3$, which is in front of it in the queue.
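The example above can be sketched as a toy simulation with FIFO input queues, under the simplifying assumption that each output port can accept one packet per time slot and only the packet at the head of each input queue is eligible:

```python
# Toy head-of-line blocking simulation: FIFO input queues, one
# departure per output port per slot, heads of queues only.
from collections import deque

def simulate(queues):
    """queues: dict in_port -> deque of destination out_ports.
    Returns the number of slots needed to drain all queues."""
    slots = 0
    while any(queues.values()):
        used_outputs = set()
        for q in queues.values():
            if q and q[0] not in used_outputs:
                used_outputs.add(q.popleft())  # head packet departs
        slots += 1
    return slots

# P1, P2 from in_1 to out_1; P3 (to out_1) and P4 (to out_2) from in_2.
queues = {1: deque([1, 1]), 2: deque([1, 2])}
print(simulate(queues))  # 4 slots: P4 sits blocked behind P3
```

Without HOL blocking, $P_4$ could depart in the first slot alongside $P_1$ and the queues would drain in 3 slots instead of 4.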

Output queuing

Even if $R_{switch} = c \cdot N \cdot R_{line}$, it might happen that all $N$ arriving packets are destined to a single out-port. In this case it’s necessary to queue them. If there is not enough memory available, either the newly arriving packet is dropped (drop-tail) or one of the already queued packets is removed.

Usually packets are dropped before the buffer is full, in order to leave space for control packets that provide a congestion signal to the sender. This approach is called active queue management (AQM), and its most widely deployed version is random early detection (RED).
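A minimal sketch of the RED drop decision, assuming the classic linear drop profile between a minimum and a maximum threshold on the average queue size (the threshold values below are made-up parameters):

```python
# RED sketch: below MIN_TH always enqueue, above MAX_TH always drop,
# and ramp the drop probability linearly in between. Parameters are
# assumed example values, not recommended settings.
import random

MIN_TH, MAX_TH, MAX_P = 5, 15, 0.1

def red_drop_probability(avg_queue: float) -> float:
    if avg_queue < MIN_TH:
        return 0.0                  # always enqueue
    if avg_queue >= MAX_TH:
        return 1.0                  # always drop
    # Linear ramp between the two thresholds
    return MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)

def should_drop(avg_queue: float) -> bool:
    return random.random() < red_drop_probability(avg_queue)

print(red_drop_probability(10))  # halfway up the ramp -> 0.05
```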

The amount of memory reserved for the out-port buffer was traditionally set to $B = C \cdot RTT$, where $C$ is the link capacity. However, in networks with many TCP flows ($N$) the recommended buffer is $B = \frac{C \cdot RTT}{\sqrt{N}}$.
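A worked buffer-sizing example with assumed figures (a 10 Gb/s link, 250 ms round-trip time, 10,000 flows), using the widely cited refinement $B = C \cdot RTT / \sqrt{N}$ for many flows:

```python
# Buffer sizing: bandwidth-delay product vs. the sqrt(N) refinement
# for many TCP flows. All figures are assumed examples.
import math

C = 10e9 / 8          # link capacity in bytes/s (10 Gb/s)
RTT = 0.25            # round-trip time in seconds
N = 10_000            # concurrent TCP flows

classic_rule = C * RTT                 # bandwidth-delay product
many_flows = C * RTT / math.sqrt(N)    # divide by sqrt(N), not N
print(classic_rule, many_flows)        # 312500000.0 3125000.0
```

With many flows the required buffer shrinks by a factor of $\sqrt{N} = 100$ here, from ~312 MB to ~3 MB.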

Packet scheduling




Priority queuing

Priority classes are used to classify packets based on their priority $w_i$. For example, real-time voice-over-IP packets might receive higher priority than SMTP or IMAP email packets.

Round Robin and WFQ

In RR, packets are divided into classes as well, but instead of using a fixed priority rule, the classes are served in turn: each class is assigned an interval in which its packets are transmitted.

A generalized form of RR is weighted fair queueing (WFQ), which assigns each class a fraction of the service equal to $\dfrac{w_i}{\sum_j w_j}$; in this way each class is guaranteed a throughput of $R \cdot \dfrac{w_i}{\sum_j w_j}$, where $R$ is the link rate.
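A worked example of the WFQ guarantee, with an assumed link rate and hypothetical class weights:

```python
# WFQ service shares: class i is guaranteed R * w_i / sum_j(w_j).
# Link rate and weights are assumed example values.
R = 1_000_000                    # link rate in bytes/s (assumed)
weights = {"voip": 5, "video": 3, "email": 2}

total = sum(weights.values())
shares = {cls: R * w / total for cls, w in weights.items()}
print(shares)  # voip: 500000.0, video: 300000.0, email: 200000.0
```

Note that the guarantees always sum to $R$: WFQ divides the whole link, it does not reserve capacity beyond it.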
