Federico Mengozzi

Textbook: Computer Networking - A Top-Down Approach

Introduction

The Internet

The Internet is a computer network that interconnects hundreds of millions of computing devices, called hosts or end systems. End systems are connected by a network of communication links and packet switches (routers and link-layer switches) and access the Internet through Internet Service Providers (ISPs). End systems, packet switches, and every other piece of the Internet run protocols.

The Network Edge

End systems can be divided into clients and servers based on the operations they perform on the network. Every end system accesses the Internet through a router; the very first router on the path from a host into the network is called the edge router.

Information on the network is sent between transmitter-receiver pairs in the form of electromagnetic waves across a physical medium. Physical media can be guided (solid media such as copper wire or optical fiber) or unguided (the atmosphere, outer space).

The Network Core

Packet Switching

On the network, end systems exchange messages with each other. Messages are broken into smaller chunks of data called packets that travel through communication links and packet switches (the most common being routers and link-layer switches). A packet of $L$ $bits$ sent over a link with a transmission rate of $R$ $bits/sec$ requires $\dfrac{L}{R}$ $sec$ to be fully transmitted.

Store-and-Forward Transmission

Most packet switches use store-and-forward transmission, which requires a packet to be completely received before the switch starts transmitting it again. To reach the next router a packet therefore needs $\dfrac{L}{R}$ $sec$; if the packet’s path to its destination consists of $N$ links, the end-to-end delay (between source and destination) is $N\dfrac{L}{R}$ $sec$.
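The store-and-forward arithmetic above can be sketched as follows; the packet size, link rate, and hop count are example values, not from the text:

```python
# Store-and-forward: a packet of L bits crosses N links of rate R bits/s.

def transmission_delay(L, R):
    """Time (s) to push L bits onto a link of rate R bits/s."""
    return L / R

def end_to_end_delay(L, R, N):
    """Store-and-forward delay over N links of equal rate R."""
    return N * transmission_delay(L, R)

# Example: a 1500-byte packet over 10 Mbps links, 3 hops.
L = 1500 * 8   # bits
R = 10e6       # bits/s
print(transmission_delay(L, R))   # 0.0012 s per link
print(end_to_end_delay(L, R, 3))  # 0.0036 s over 3 links
```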

Queuing Delay and Packet Loss

Each link stores a limited number of packets that are about to be sent in what is called an output queue. When a packet arrives at the link and is not the first to arrive, it must wait for all packets ahead of it to be sent. For this reason, in addition to the store-and-forward delay, a packet also suffers a queuing delay. If a packet arrives when the output queue is full, there is simply no space to store it and the packet is dropped.

Forwarding Tables and Routing Protocols

Each packet carries a destination IP address. Each router has a forwarding table that maps destination addresses to the router’s outbound links; in this way a packet can be correctly forwarded to the next link. Forwarding tables are populated according to a set of rules known as routing protocols.
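A toy forwarding table can illustrate the lookup. Real routers match the *longest* prefix of the destination address; the prefixes and link numbers below are invented for illustration:

```python
# Toy forwarding table with longest-prefix matching.
import ipaddress

forwarding_table = {
    ipaddress.ip_network("11.0.0.0/8"): 0,
    ipaddress.ip_network("11.1.0.0/16"): 1,
    ipaddress.ip_network("0.0.0.0/0"): 3,   # default route
}

def lookup(dest):
    """Return the outbound link of the longest prefix matching dest."""
    addr = ipaddress.ip_address(dest)
    matches = [(net.prefixlen, link)
               for net, link in forwarding_table.items()
               if addr in net]
    return max(matches)[1]   # longest prefix wins

print(lookup("11.1.2.3"))   # 1 (the /16 is more specific than the /8)
print(lookup("11.9.9.9"))   # 0
print(lookup("8.8.8.8"))    # 3 (default route)
```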

Circuit Switching

In circuit-switched networks the resources needed along the path (buffers, link transmission rate) are reserved for the duration of the whole communication session between the source and the destination.

Multiplexing in Circuit-Switched Networks

There are two ways to implement a circuit in a network:

  • Frequency-division multiplexing (FDM)
  • Time-division multiplexing (TDM)

In frequency-division multiplexing the frequency spectrum of a link is divided up among the connections established across the link; the width of the frequency band assigned to each connection is called its bandwidth.

In time-division multiplexing, time is divided into frames and each frame is divided into slots shared among the connections. One time slot in each frame is dedicated to a single connection, which can use the link’s full bandwidth during its slot.
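A minimal sketch of the TDM arithmetic, under the assumption that each circuit gets exactly one slot per frame (the link rate, slot count, and setup time below are illustrative):

```python
# TDM: a circuit's rate is the link rate divided by the slots per frame.

def tdm_circuit_rate(link_rate, slots_per_frame):
    """Rate (bits/s) available to one circuit holding one slot per frame."""
    return link_rate / slots_per_frame

def file_transfer_time(file_bits, circuit_rate, setup_time=0.0):
    """Circuit switching pays a setup delay before transmission starts."""
    return setup_time + file_bits / circuit_rate

# A 1.536 Mbps link with 24 slots gives each circuit 64 kbps.
rate = tdm_circuit_rate(1.536e6, 24)
print(rate)                                    # 64000.0 bits/s
print(file_transfer_time(640_000, rate, 0.5))  # 10.5 s including setup
```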

Network Structure

End systems connect to the Internet via access ISPs. A naive network structure might be one where every ISP connects directly to every other ISP in order to create a global network; in such a scenario $N$ fully interconnected ISPs would require $\dfrac{N(N-1)}{2} = O(N^2)$ links, so this structure doesn’t scale. By adding intermediate ISPs and other actors, the network has evolved to its modern structure.

At the top of the structure there are tier-1 ISPs and content providers (content providers also have private networks used to carry traffic internally). On the second level there are various Internet Exchange Points (IXPs), meeting points where different ISPs can peer, and Points of Presence (PoPs), routers of a provider ISP to which customer ISPs can connect. One level below, several regional ISPs are responsible for providing network access to limited areas; these regional ISPs usually connect to the network directly through a tier-1 ISP or through an IXP. At the last level, access ISPs are responsible for connecting end systems to the whole Internet.

End-to-End Delay

When a packet is sent from one node to another, different types of delay may occur. Over the complete path from source to destination a packet may experience:

  • Processing Delay
  • Queuing Delay
  • Packet Loss
  • Transmission Delay
  • Propagation Delay

The overall delay for a packet sent over a path of $N$ nodes is then $D = N(d_{proc} + d_{trans} + d_{prop})$, in the case where there is no traffic congestion and the queuing delay is negligible.
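The formula can be checked numerically. The sketch below assumes $N$ identical nodes and negligible queuing; all the numbers are illustrative:

```python
# End-to-end delay over N identical nodes, queuing delay negligible.

def nodal_delay(d_proc, d_trans, d_prop, d_queue=0.0):
    """Total delay (s) at a single node."""
    return d_proc + d_queue + d_trans + d_prop

def path_delay(N, d_proc, d_trans, d_prop):
    """D = N * (d_proc + d_trans + d_prop)."""
    return N * nodal_delay(d_proc, d_trans, d_prop)

d_proc = 2e-6           # 2 microseconds of processing
d_trans = 12000 / 10e6  # 1500-byte packet on a 10 Mbps link
d_prop = 100e3 / 2e8    # 100 km at 2*10^8 m/s

print(path_delay(3, d_proc, d_trans, d_prop))  # ~0.0051 s over 3 hops
```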

Processing Delay

The processing delay is the time a router requires to examine the packet’s header and determine where to direct it (other operations, such as error checking, may also occur). It’s usually on the order of microseconds.

Queuing Delay

The queuing delay is the time a packet has to wait for the queue ahead of it to empty before it can be sent. It’s usually in the range of microseconds to milliseconds. It depends directly on the transmission delay, because for a packet to be pulled from the queue all of the packets that arrived before it must be transmitted. However, the queuing delay varies from packet to packet and from time to time, which is why statistical measures are required to characterize it.

Assume packets arrive at the queue at a rate of $r$ $packets/s$ and that each packet is $L$ $bits$ long. The traffic intensity, a measure of how busy the link is, is given by $I_{traff} = \dfrac{rL}{R}$. When $I_{traff} > 1$ the rate at which bits arrive at the queue exceeds the rate at which they are sent away; the queue then tends to fill up and some packets are eventually dropped. For this reason it’s important to design networks so that $I_{traff}$ is never greater than $1$. When the traffic intensity is close to $0$, the output queue is empty most of the time. In the real world, however, packets may arrive in periodic bursts; in that case the very first packet has no queuing delay since the queue is empty, while the $i$-th packet of the burst has a queuing delay of $(i-1)\dfrac{L}{R}$ $s$.
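Both formulas above can be sketched directly; the arrival rate, packet size, and link rate are example values:

```python
# Traffic intensity and the queuing delay of a burst hitting an empty queue.

def traffic_intensity(r, L, R):
    """r: packets/s arriving, L: bits per packet, R: link rate in bits/s."""
    return r * L / R

def burst_queuing_delay(i, L, R):
    """Queuing delay (s) of the i-th packet (1-indexed) of a burst
    arriving at an empty queue: (i-1) * L / R."""
    return (i - 1) * L / R

print(traffic_intensity(500, 12000, 10e6))  # 0.6 -> queue stays short
print(burst_queuing_delay(1, 12000, 10e6))  # 0.0, first packet never waits
print(burst_queuing_delay(5, 12000, 10e6))  # 0.0048 s for the 5th packet
```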

Packet Loss

Another measure of a node’s performance is the probability that a packet is dropped, which is strictly related to the traffic intensity.

Transmission Delay

The transmission delay is the time required to push all the bits of a packet onto the link. It depends on the length of the packet $L$ and on the transmission rate of the link $R$: $d_{trans} = \dfrac{L}{R}$.

Propagation Delay

The propagation delay is the time a bit physically requires to move from one node to the next, according to the propagation speed of the medium. It depends on the distance $D$ between the nodes and the propagation speed $s$ of the medium: $d_{prop} = \dfrac{D}{s}$. The propagation speed is usually near the speed of light, in the range between $2\cdot 10^8$ $m/s$ and $3\cdot 10^8$ $m/s$.

Throughput

The instantaneous throughput at any instant of time is the rate (in $bits/s$) at which a host is receiving a file. The average throughput, on the other hand, is given by the size of the file $F$ divided by the time $T$ the file took to be completely transferred: $\dfrac{F}{T}$ $bits/s$. When a packet travels over a path of more than one link, there is usually one link whose transmission rate is lower than the others. Such a link is called the bottleneck link, because even if the other links could potentially carry data at a higher rate, the final speed is limited by this link. In general, the throughput for a file transfer between two hosts is $min\{R_1, …, R_n\}$, where $R_i$ is the transmission rate of link $i$.
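The bottleneck rule can be sketched in a few lines; the link rates and file size below are invented for the example:

```python
# Throughput of a path is limited by its slowest (bottleneck) link.

def bottleneck_throughput(rates):
    """Throughput (bits/s) achievable over links with the given rates."""
    return min(rates)

def transfer_time(file_bits, rates):
    """Average time T = F / min{R_1, ..., R_n} to move a file of F bits."""
    return file_bits / bottleneck_throughput(rates)

rates = [10e6, 2e6, 5e6]             # bits/s of each link on the path
print(bottleneck_throughput(rates))  # 2000000.0 (the 2 Mbps link)
print(transfer_time(32e6, rates))    # 16.0 s for a 4 MB (32 Mbit) file
```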

Protocol Layers and Encapsulation

The Internet protocol stack is organized in layers in order to better structure the different operations in a computer network. Each layer has a different task and offers a service to the layer above it (its service model). The Internet protocol stack consists of five layers:

  • Application layer
  • Transport layer
  • Network layer
  • Link layer
  • Physical layer

A packet is built from the top layer down and examined from the bottom layer up. Each layer encapsulates the packet from the layer above by adding its own information: the resulting packet consists of a header (the information relative to the current layer) and a payload (the packet from the layer above).
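Encapsulation can be sketched with nested byte strings. The header tags below are invented placeholders; real headers are binary structures with many fields:

```python
# Encapsulation sketch: each layer prepends its own header to the
# payload handed down from the layer above.

def encapsulate(payload: bytes, header: bytes) -> bytes:
    """Wrap a payload with the current layer's header."""
    return header + payload

message = b"GET /index.html"              # application-layer message
segment = encapsulate(message, b"[TCP]")  # transport layer
datagram = encapsulate(segment, b"[IP]")  # network layer
frame = encapsulate(datagram, b"[ETH]")   # link layer

print(frame)  # b'[ETH][IP][TCP]GET /index.html'

# The receiver examines the packet bottom-up, stripping one header per layer:
assert frame.removeprefix(b"[ETH]") == datagram
assert datagram.removeprefix(b"[IP]") == segment
```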
