FIGURE 1.11
Transmitting segments with flow control
In a reliable, connection-oriented data transfer, data segments should arrive
at the receiving host in the same sequence in which they’re transmitted.
A failure will occur if any data segments are lost, duplicated, or damaged
along the way—a problem solved by having the receiving host
acknowledge that it has received each and every data segment.
A service is considered connection-oriented if it has the following
characteristics:
A virtual circuit, or “three-way handshake,” is set up.
It uses sequencing.
It uses acknowledgments.
It uses flow control.
The types of flow control are buffering, windowing, and
congestion avoidance.
Windowing
Ideally, data throughput happens quickly and efficiently. And as you can
imagine, it would be painfully slow if the transmitting machine had to
actually wait for an acknowledgment after sending each and every
segment! The quantity of data segments, measured in bytes, that the
transmitting machine is allowed to send without receiving an
acknowledgment is called a window.
Windows are used to control the amount of outstanding,
unacknowledged data segments.
The size of the window controls how much information is transferred
from one end to the other before an acknowledgment is required. While
some protocols quantify information by the number of packets, TCP/IP
measures it by counting the number of bytes.
As you can see in
Figure 1.12
, there are two window sizes—one set to 1
and one set to 3.
FIGURE 1.12
Windowing
If you’ve configured a window size of 1, the sending machine waits for
an acknowledgment for each data segment it transmits before
transmitting the next one. If the window size is set to 3, it’s allowed to
transmit three segments before an acknowledgment must be received.
In this simplified example, both the sending and receiving machines are
workstations. Remember that in reality, the transmission isn’t based on
a simple segment count but on the number of bytes that can be sent!
If a receiving host fails to receive all the bytes that it should
acknowledge, the host can improve the communication session by
decreasing the window size.
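To make the windowing idea concrete, here’s a minimal Python sketch. It isn’t real TCP: it counts whole segments rather than bytes, and the names send_with_window and window_size are made up purely for illustration. The sender keeps transmitting until window_size segments are outstanding, then stops and waits for an acknowledgment before sending more.

```python
from collections import deque

def send_with_window(segments, window_size):
    """Simulate a sender that allows at most window_size unacknowledged segments."""
    outstanding = deque()                    # sent but not yet acknowledged
    for seg in segments:
        if len(outstanding) == window_size:  # window is full: stop and wait
            acked = outstanding.popleft()    # pretend the oldest segment is acknowledged
            print(f"ACK received for segment {acked}")
        print(f"Transmitting segment {seg}")
        outstanding.append(seg)
    while outstanding:                       # wait for the remaining ACKs
        print(f"ACK received for segment {outstanding.popleft()}")

send_with_window(segments=[1, 2, 3, 4, 5, 6], window_size=3)
```

With window_size set to 1 this collapses into the stop-and-wait behavior described above; with window_size set to 3, three segments go out before the sender has to pause.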
Acknowledgments
Reliable data delivery ensures the integrity of a stream of data sent from
one machine to the other through a fully functional data link. It
guarantees that the data won’t be duplicated or lost. This is achieved
through something called positive acknowledgment with retransmission
—a technique that requires a receiving machine to communicate with the
transmitting source by sending an acknowledgment message back to the
sender when it receives data. The sender documents each segment
measured in bytes, then sends and waits for this acknowledgment before
sending the next segment. Also important is that when it sends a
segment, the transmitting machine starts a timer and will retransmit if it
expires before it gets an acknowledgment back from the receiving end.
Figure 1.13
shows the process I just described.
FIGURE 1.13
Transport layer reliable delivery
In the figure, the sending machine transmits segments 1, 2, and 3. The
receiving node acknowledges that it has received them by requesting
segment 4 (what it is expecting next). When it receives the
acknowledgment, the sender then transmits segments 4, 5, and 6. If
segment 5 doesn’t make it to the destination, the receiving node
acknowledges that event with a request for the segment to be re-sent. The
sending machine will then resend the lost segment and wait for an
acknowledgment, which it must receive in order to move on to the
transmission of segment 7.
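Here’s a small Python sketch of that acknowledgment scheme from the receiver’s side, assuming a simplified model that numbers whole segments and simply ignores anything that arrives out of order (real TCP acknowledges byte ranges and can buffer out-of-order data). The names make_receiver and on_segment are hypothetical.

```python
def make_receiver():
    """Return a handler that ACKs each arriving segment by requesting the next one expected."""
    expected = 1
    def on_segment(seq):
        nonlocal expected
        if seq == expected:        # in-order segment: accept it
            expected += 1
        # The acknowledgment always names the next segment the receiver wants.
        print(f"Received segment {seq}, ACK requests segment {expected}")
    return on_segment

ack = make_receiver()
for seq in [1, 2, 3, 4, 6, 5, 6]:  # segment 5 arrives late; in this simple model 6 must be resent too
    ack(seq)
```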
The Transport layer, working in tandem with the Session layer, also
separates the data from different applications, an activity known as
session multiplexing, and it happens when a client connects to a server
with multiple browser sessions open. This is exactly what’s taking place
when you go someplace online like Amazon and click multiple links,
opening them simultaneously to get information when comparison
shopping. The client data from each browser session must be separate
when the server application receives it, which is pretty slick
technologically speaking, and it’s the Transport layer to the rescue for
that juggling act!
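Here’s a rough Python illustration of that demultiplexing, using made-up addresses and port numbers: each browser connection has its own source port, so even though every session targets the same server address and port, incoming data can be handed to the correct session.

```python
# Each TCP session is identified by the 4-tuple (src IP, src port, dst IP, dst port).
# The values below are example/documentation addresses, not real hosts.
sessions = {
    ("203.0.113.10", 49152, "198.51.100.5", 443): "first browser tab",
    ("203.0.113.10", 49153, "198.51.100.5", 443): "second browser tab",
    ("203.0.113.10", 49154, "198.51.100.5", 443): "third browser tab",
}

def deliver(src_ip, src_port, dst_ip, dst_port, data):
    """Hand incoming data to the right session based on the 4-tuple."""
    session = sessions[(src_ip, src_port, dst_ip, dst_port)]
    print(f"{len(data)} bytes delivered to the {session}")

deliver("203.0.113.10", 49153, "198.51.100.5", 443, b"HTTP/1.1 200 OK")
```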
The Network Layer
The Network layer, or layer 3, manages device addressing, tracks the
location of devices on the network, and determines the best way to move
data. This means that it’s up to the Network layer to transport traffic
between devices that aren’t locally attached. Routers, which are layer 3
devices, are specified at this layer and provide the routing services within
an internetwork.
Here’s how that works: first, when a packet is received on a router
interface, the destination IP address is checked. If the packet isn’t
destined for that particular router, the router looks up the destination
network address in its routing table. Once the router chooses an exit
interface, the packet is sent to that interface to be framed and sent out
on the local network. If the router can’t find an entry for the packet’s
destination network in the routing table, it drops the packet.
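As a rough Python sketch of that decision process (using a hypothetical routing_table and the standard library’s ipaddress module, not anything a real router runs), the router matches the destination against the networks it knows, prefers the most specific match, and drops the packet when nothing matches:

```python
import ipaddress

# Hypothetical routing table: destination network -> exit interface.
routing_table = {
    ipaddress.ip_network("10.1.0.0/16"): "GigabitEthernet0/0",
    ipaddress.ip_network("10.2.0.0/16"): "GigabitEthernet0/1",
}

def route_packet(destination_ip):
    """Pick the longest-prefix match; drop the packet if no network matches."""
    dest = ipaddress.ip_address(destination_ip)
    matches = [net for net in routing_table if dest in net]
    if not matches:
        return "packet dropped: destination network not in the routing table"
    best = max(matches, key=lambda net: net.prefixlen)  # most specific route wins
    return f"forward out {routing_table[best]}"

print(route_packet("10.2.33.7"))   # forward out GigabitEthernet0/1
print(route_packet("172.16.1.1"))  # no entry, so the packet is dropped
```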
Data and route update packets are the two types of packets used at the
Network layer:
Data Packets These are used to transport user data through the
internetwork. Protocols used to support data traffic are called routed
protocols, and IP and IPv6 are key examples. I’ll cover IP addressing in
Chapter 3, “Introduction to TCP/IP,” and Chapter 4, “Easy Subnetting,”
and I’ll cover IPv6 in Chapter 14, “Internet Protocol Version 6 (IPv6).”
Route Update Packets These packets are used to update neighboring
routers about the networks connected to all routers within the
internetwork. Protocols that send route update packets are called routing
protocols; the most critical ones for CCNA are RIPv2, EIGRP, and OSPF.
Route update packets are used to help build and maintain routing tables.
Figure 1.14
shows an example of a routing table. The routing table each
router keeps and refers to includes the following information:
FIGURE 1.14
Routing table used in a router
Network Addresses Protocol-specific network addresses. A router
must maintain a separate routing table for each routed protocol because
each one tracks networks with its own addressing scheme. For example,
the routing tables for IP and IPv6 are completely different, so the router
keeps a table for each one. Think of it as a street sign written in each of
the languages spoken by the English-, Spanish-, and French-speaking
people living on a street; the street sign would read Cat/Gato/Chat.
Interface The exit interface a packet will take when destined for a
specific network.
Metric The distance to the remote network. Different routing protocols
use different ways of computing this distance. I’m going to cover routing
protocols thoroughly in Chapter 9, “IP Routing.” For now, know that
some routing protocols like the Routing Information Protocol, or RIP, use
hop count, which refers to the number of routers a packet passes through
en route to a remote network. Others use bandwidth, delay of the line, or
even tick count (1/18 of a second) to determine the best path for data to
get to a given destination.
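Putting those three fields together, here’s a hypothetical routing-table fragment in Python. The networks, interfaces, and metrics are invented, and the simple rule shown (lowest metric wins when there are multiple routes to the same network) mirrors the hop-count style comparison RIP uses.

```python
# Each entry holds the three fields just described: network, exit interface, metric.
routes = [
    {"network": "172.16.10.0/24", "interface": "Serial0/0",          "metric": 3},
    {"network": "172.16.10.0/24", "interface": "GigabitEthernet0/1", "metric": 1},
    {"network": "172.16.20.0/24", "interface": "Serial0/1",          "metric": 2},
]

def best_route(network):
    """When several routes reach the same network, the lowest metric wins."""
    candidates = [r for r in routes if r["network"] == network]
    return min(candidates, key=lambda r: r["metric"])

print(best_route("172.16.10.0/24"))  # the one-hop path out GigabitEthernet0/1
```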
And as I mentioned earlier, routers break up broadcast domains, which
means that by default, broadcasts aren’t forwarded through a router. Do
you remember why this is a good thing? Routers also break up collision
domains, but you can also do that using layer 2 (Data Link layer)
switches. Because each interface in a router represents a separate
network, each interface must be assigned a unique network identification
number, and each host on the network connected to that router must use
the same network number.
Figure 1.15
shows how a router works in an
internetwork.
FIGURE 1.15
A router in an internetwork. Each router LAN interface is
a broadcast domain. Routers break up broadcast domains by default and
provide WAN services.
Here are some router characteristics that you should never forget:
Routers, by default, will not forward any broadcast or multicast
packets.
Routers use the logical address in a Network layer header to
determine the next-hop router to forward the packet to.
Routers can use access lists, created by an administrator, to control
security based on the types of packets allowed to enter or exit an
interface.
Routers can provide layer 2 bridging functions if needed and can
simultaneously route through the same interface.
Layer 3 devices—in this case, routers—provide connections between
virtual LANs (VLANs).
Routers can provide quality of service (QoS) for specific types of
network traffic.
The Data Link Layer
The Data Link layer provides for the physical transmission of data and
handles error notification, network topology, and flow control. This
means that the Data Link layer will ensure that messages are delivered to
the proper device on a LAN using hardware addresses and will translate
messages from the Network layer into bits for the Physical layer to
transmit.
The Data Link layer formats the messages, each called a data frame, and
adds a customized header containing the hardware destination and
source address. This added information forms a sort of capsule that
surrounds the original message in much the same way that engines,
navigational devices, and other tools were attached to the lunar modules
of the Apollo project. These various pieces of equipment were useful only
during certain stages of space flight and were stripped off the module and
discarded when their designated stage was completed. The process of
data traveling through networks is similar.
Figure 1.16
shows the Data Link layer with the Ethernet and IEEE
specifications. When you check it out, notice that the IEEE 802.2
standard is used in conjunction with and adds functionality to the other
IEEE standards. (You’ll read more about the important IEEE 802
standards used with the Cisco objectives in Chapter 2, “Ethernet
Networking and Data Encapsulation.”)
FIGURE 1.16
Data Link layer
It’s important for you to understand that routers, which work at the
Network layer, don’t care at all about where a particular host is located.
They’re only concerned about where networks are located and the best
way to reach them—including remote ones. Routers are totally obsessive
when it comes to networks, which in this case is a good thing! It’s the
Data Link layer that’s responsible for the actual unique identification of
each device that resides on a local network.
For a host to send packets to individual hosts on a local network as well
as transmit packets between routers, the Data Link layer uses hardware
addressing. Each time a packet is sent between routers, it’s framed with
control information at the Data Link layer, but that information is
stripped off at the receiving router and only the original packet is left
completely intact. This framing of the packet continues for each hop until
the packet is finally delivered to the correct receiving host. It’s really
important to understand that the packet itself is never altered along the
route; it’s only encapsulated with the type of control information required
for it to be properly passed on to the different media types.
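Here’s a toy Python model of that re-encapsulation, with made-up MAC addresses: the Network layer packet is carried unchanged from end to end, while the frame around it is stripped and rebuilt at every hop.

```python
def frame(packet, src_mac, dst_mac):
    """Wrap a packet in Data Link control information for a single hop."""
    return {"dst": dst_mac, "src": src_mac, "payload": packet}

packet = {"src_ip": "10.1.1.10", "dst_ip": "10.3.3.30", "data": "hello"}

hop1 = frame(packet, src_mac="AA:AA", dst_mac="BB:BB")            # host to first router
hop2 = frame(hop1["payload"], src_mac="BB:BB", dst_mac="CC:CC")   # router to router
hop3 = frame(hop2["payload"], src_mac="CC:CC", dst_mac="DD:DD")   # last router to host

# Only the frame changes at each hop; the packet itself never does.
print(hop3["payload"] == packet)  # True
```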
The IEEE Ethernet Data Link layer has two sublayers:
Media Access Control (MAC) Defines how packets are placed on the
media. Contention for media access is “first come/first served” access
where everyone shares the same bandwidth—hence the name. Physical
addressing is defined here as well as logical topologies. What’s a logical
topology? It’s the signal path through a physical topology. Line discipline,
error notification (but not correction), the ordered delivery of frames, and
optional flow control can also be used at this sublayer.
Logical Link Control (LLC) Responsible for identifying Network layer
protocols and then encapsulating them. An LLC header tells the Data
Link layer what to do with a packet once a frame is received. It works like
this: a host receives a frame and looks in the LLC header to find out
where the packet is destined—for instance, the IP protocol at the Network
layer. The LLC can also provide flow control and sequencing of control
bits.
The switches and bridges I talked about near the beginning of the chapter
both work at the Data Link layer and filter the network using hardware
(MAC) addresses. I’ll talk about these next.
As data is encoded with control information at each layer of
the OSI model, the data is named with something called a protocol
data unit (PDU). At the Transport layer, the PDU is called a segment,
at the Network layer it’s a packet, at the Data Link a frame, and at the
Physical layer it’s called bits. This method of naming the data at each
layer is covered thoroughly in Chapter 2.
Switches and Bridges at the Data Link Layer
Layer 2 switching is considered hardware-based bridging because it uses
specialized hardware called an application-specific integrated circuit
(ASIC). ASICs can run up to high gigabit speeds with very low latency
rates.
Latency is the time measured from when a frame enters a
port to when it exits a port.
Bridges and switches read each frame as it passes through the network.
The layer 2 device then puts the source hardware address in a filter table
and keeps track of which port the frame was received on. This
information (logged in the bridge’s or switch’s filter table) is what helps
the machine determine the location of the specific sending device.
Figure 1.17 shows a switch in an internetwork. John is sending packets to
the Internet, and Sally doesn’t hear his frames because she is in a
different collision domain. The destination frame goes directly to the
default gateway router, so Sally doesn’t see John’s traffic, much to her
relief.
FIGURE 1.17
A switch in an internetwork
The real estate business is all about location, location, location, and it’s
the same way for both layer 2 and layer 3 devices. Though both need to be
able to negotiate the network, it’s crucial to remember that they’re
concerned with very different parts of it. Primarily, layer 3 machines
(such as routers) need to locate specific networks, whereas layer 2
machines (switches and bridges) need to eventually locate specific
devices. So, networks are to routers as individual devices are to switches
and bridges. And routing tables that “map” the internetwork are for
routers as filter tables that “map” individual devices are for switches and
bridges.
After a filter table is built on the layer 2 device, it will forward frames only
to the segment where the destination hardware address is located. If the
destination device is on the same segment as the frame, the layer 2 device
will block the frame from going to any other segments. If the destination
is on a different segment, the frame can be transmitted only to that
segment. This is called transparent bridging.
When a switch interface receives a frame with a destination hardware
address that isn’t found in the device’s filter table, it will forward the
frame to all connected segments. If the unknown device that was sent the
“mystery frame” replies to this forwarding action, the switch updates its
filter table regarding that device’s location. But in the event the
destination address of the transmitting frame is a broadcast address, the
switch will forward all broadcasts to every connected segment by default.
All devices that the broadcast is forwarded to are considered to be in the
same broadcast domain. This can be a problem because layer 2 devices
propagate layer 2 broadcast storms that can seriously choke performance,
and the only way to stop a broadcast storm from propagating through an
internetwork is with a layer 3 device—a router!
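The following Python sketch models that transparent bridging behavior with an invented MAC table and port numbers: the switch learns source addresses, filters frames whose destination is on the same segment, forwards known unicasts out a single port, and floods unknown destinations and broadcasts out every other port.

```python
mac_table = {}                     # hardware address -> port it was learned on
PORTS = [1, 2, 3, 4]
BROADCAST = "FF:FF:FF:FF:FF:FF"

def receive_frame(in_port, src_mac, dst_mac):
    mac_table[src_mac] = in_port   # learn (or refresh) the sender's location
    if dst_mac == BROADCAST or dst_mac not in mac_table:
        out_ports = [p for p in PORTS if p != in_port]  # flood everywhere else
    elif mac_table[dst_mac] == in_port:
        out_ports = []                                  # same segment: filter the frame
    else:
        out_ports = [mac_table[dst_mac]]                # forward only to the known port
    print(f"Frame for {dst_mac} in on port {in_port}, out on ports {out_ports}")

receive_frame(1, "00:0A:11:11:11:11", "00:0B:22:22:22:22")  # unknown: flooded
receive_frame(2, "00:0B:22:22:22:22", "00:0A:11:11:11:11")  # learned: sent to port 1 only
receive_frame(1, "00:0A:11:11:11:11", BROADCAST)            # broadcast: flooded
```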
The biggest benefit of using switches instead of hubs in your internetwork
is that each switch port is actually its own collision domain. Remember
that a hub creates one large collision domain, which is not a good thing!
But even armed with a switch, you still don’t get to just break up
broadcast domains by default because neither switches nor bridges will
do that. They’ll simply forward all broadcasts instead.
Another benefit of LAN switching over hub-centered implementations is
that each device on every segment plugged into a switch can transmit
simultaneously. Well, at least they can as long as there’s only one host on
each port and there isn’t a hub plugged into a switch port! As you might
have guessed, this is because hubs allow only one device per network
segment to communicate at a time.
The Physical Layer
Finally arriving at the bottom, we find that the Physical layer does two
things: it sends bits and receives bits. Bits come only in values of 1 or 0—a
Morse code with numerical values. The Physical layer communicates
directly with the various types of actual communication media. Different
kinds of media represent these bit values in different ways. Some use
audio tones, while others employ state transitions—changes in voltage
from high to low and low to high. Specific protocols are needed for each
type of media to describe the proper bit patterns to be used, how data is
encoded into media signals, and the various qualities of the physical
media’s attachment interface.
The Physical layer specifies the electrical, mechanical, procedural, and
functional requirements for activating, maintaining, and deactivating a
physical link between end systems. This layer is also where you identify
the interface between the data terminal equipment (DTE) and the data
communication equipment (DCE). (Some old phone-company employees
still call DCE “data circuit-terminating equipment.”) The DCE is usually
located at the service provider, while the DTE is the attached device. The
services available to the DTE are most often accessed via a modem or
channel service unit/data service unit (CSU/DSU).
The Physical layer’s connectors and different physical topologies are
defined by the OSI as standards, allowing disparate systems to
communicate. The Cisco exam objectives are interested only in the IEEE
Ethernet standards.
Hubs at the Physical Layer
A hub is really a multiple-port repeater. A repeater receives a digital
signal, reamplifies or regenerates that signal, then forwards the signal out
the other port without looking at any data. A hub does the same thing
across all active ports: any digital signal received from a segment on a
hub port is regenerated or reamplified and transmitted out all other ports
on the hub. This means all devices plugged into a hub are in the same
collision domain as well as in the same broadcast domain.
Figure 1.18
shows a hub in a network and how when one host transmits, all other
hosts must stop and listen.
FIGURE 1.18
A hub in a network
Hubs, like repeaters, don’t examine any of the traffic as it enters or before
it’s transmitted out to the other parts of the physical media. And every
device connected to the hub, or hubs, must listen if a device transmits. A
physical star network, where the hub is a central device and cables extend
in all directions out from it, is the type of topology a hub creates. Visually,
the design really does resemble a star, whereas Ethernet networks run a
logical bus topology, meaning that the signal has to run through the
network from end to end.
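For contrast with the switch sketch earlier, here’s the same kind of toy Python model for a hub (port numbers are made up): there’s no filter table at all, so whatever comes in on one port is simply repeated out every other port, which is exactly why all attached devices share one collision domain.

```python
PORTS = [1, 2, 3, 4]

def repeat_signal(in_port, bits):
    """A hub regenerates the incoming signal on every port except the one it arrived on."""
    for port in PORTS:
        if port != in_port:
            print(f"Port {port}: repeating {bits}")

repeat_signal(in_port=2, bits="10110010")
```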
Hubs and repeaters can be used to enlarge the area covered
by a single LAN segment, but I really do not recommend going with
this configuration! LAN switches are affordable for almost every
situation and will make you much happier.
Topologies at the Physical Layer
One last thing I want to discuss at the Physical layer is topologies, both
physical and logical. Understand that every type of network has both a
physical and a logical topology.
The physical topology of a network refers to the physical layout of the
devices, but mostly the cabling and cabling layout.
The logical topology defines the logical path on which the signal will
travel on the physical topology.
Figure 1.19
shows the four types of topologies.
FIGURE 1.19
Physical vs. Logical Topologies
Here are the topology types. The most common, and pretty much the
only, network we use today is a physical star running a logical bus, which
is considered a hybrid topology (think Ethernet):
Bus: In a bus topology, every workstation is connected to a single
cable, meaning every host shares that cable with every other
workstation in the network.
Ring: In a ring topology, computers and other network devices are
cabled together in a way that the last device is connected to the first to
form a circle or ring.
Star: The most common physical topology is a star topology, which is
your Ethernet switching physical layout. A central cabling device
(switch) connects the computers and other network devices together.
This category includes star and extended star topologies. Physical
connection is commonly made using twisted-pair wiring.
Mesh: In a mesh topology, every network device is cabled to every
other device. The redundant links increase reliability and allow the
network to self-heal, at the cost of far more cabling than a star (see
the sketch after this list). The physical connection is commonly made
using fiber or twisted-pair wiring.
Hybrid: Ethernet uses a physical star layout (cables come from all
directions), and the signal travels end-to-end, like a bus route.
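To put the star-versus-mesh cabling difference in concrete terms, here’s a small worked comparison in Python, assuming n end devices (these are the standard link-count formulas, not anything vendor specific):

```python
def star_links(n):
    return n                   # one link from each device to the central switch

def full_mesh_links(n):
    return n * (n - 1) // 2    # one link between every pair of devices

for n in (5, 10, 50):
    print(f"{n} devices: star = {star_links(n)} links, full mesh = {full_mesh_links(n)} links")
```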