FIGURE 2.3
A router creates broadcast domain boundaries.
A key part of carefully planning your network design is never allowing
broadcast domains to grow too large and get out of control. Both collision
and broadcast domains can easily be controlled with routers and VLANs,
so there's just no excuse to let user bandwidth slow to a painful
crawl when there are plenty of tools in your arsenal to prevent the
suffering!
An important reason for this book’s existence is to ensure that you really
get the foundational basics of Cisco networks nailed down so you can
effectively design, implement, configure, troubleshoot, and even dazzle
colleagues and superiors with elegant designs that lavish your users with
all the bandwidth their hearts could possibly desire.
To make it to the top of that mountain, you need more than just the basic
story, so let’s move on to explore the collision detection mechanism used
in half-duplex Ethernet.
CSMA/CD
Ethernet networking uses a protocol called Carrier Sense Multiple Access
with Collision Detection (CSMA/CD), which helps devices share the
bandwidth evenly while preventing two devices from transmitting
simultaneously on the same network medium. CSMA/CD was actually
created to overcome the problem of the collisions that occur when
packets are transmitted from different nodes at the same time. And trust
me—good collision management is crucial, because when a node
transmits in a CSMA/CD network, all the other nodes on the network
receive and examine that transmission. Only switches and routers can
effectively prevent a transmission from propagating throughout the
entire network!
So, how does the CSMA/CD protocol work? Let's start by taking a look at Figure 2.4.
FIGURE 2.4
CSMA/CD
When a host wants to transmit over the network, it first checks for the
presence of a digital signal on the wire. If all is clear and no other host is
transmitting, the host will then proceed with its transmission.
But it doesn’t stop there. The transmitting host constantly monitors the
wire to make sure no other hosts begin transmitting. If the host detects
another signal on the wire, it sends out an extended jam signal that
causes all nodes on the segment to stop sending data—think busy signal.
The nodes respond to that jam signal by waiting a bit before attempting
to transmit again. Backoff algorithms determine when the colliding
stations can retransmit. If collisions keep occurring after 15 retries (16
attempts in all), the nodes attempting to transmit will give up and drop the frame. Half-duplex can be
pretty messy!
When a collision occurs on an Ethernet LAN, the following happens:
1. A jam signal informs all devices that a collision occurred.
2. The collision invokes a random backoff algorithm.
3. Each device on the Ethernet segment stops transmitting for a short
time until its backoff timer expires.
4. All hosts have equal priority to transmit after the timers have expired.
The ugly effects of having a CSMA/CD network sustain heavy collisions
are delay, low throughput, and congestion.
Backoff on an Ethernet network is the retransmission delay
that’s enforced when a collision occurs. When that happens, a host
will resume transmission only after the forced time delay has expired.
Keep in mind that after the backoff has elapsed, all stations have
equal priority to transmit data.
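To make the backoff mechanics concrete, here's a minimal Python sketch of the truncated binary exponential backoff used by classic Ethernet. The slot time and attempt limit match the 10 Mbps standard; everything else is simplified for illustration:

import random

SLOT_TIME_US = 51.2   # one slot time (512 bit times) at 10 Mbps, in microseconds
ATTEMPT_LIMIT = 16    # a station gives up after 16 total attempts

def backoff_delay(collision_count: int) -> float:
    """Random delay before retransmission after the Nth consecutive collision."""
    if collision_count >= ATTEMPT_LIMIT:
        raise RuntimeError("Excessive collisions: frame dropped")
    k = min(collision_count, 10)              # the exponent is capped at 10
    slots = random.randint(0, 2 ** k - 1)     # wait a random 0 to 2^k - 1 slot times
    return slots * SLOT_TIME_US

print(backoff_delay(3))   # after the 3rd collision: 0 to 7 slot times

Notice how the possible waiting window doubles with every collision, which is exactly why a heavily loaded half-duplex segment feels so slow.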
At this point, let’s take a minute to talk about Ethernet in detail at both
the Data Link layer (layer 2) and the Physical layer (layer 1).
Half- and Full-Duplex Ethernet
Half-duplex Ethernet is defined in the original IEEE 802.3 Ethernet
specification, which differs a bit from how Cisco describes things. Cisco
says Ethernet uses only one wire pair with a digital signal running in both
directions on the wire. Even though the IEEE specifications discuss the
half-duplex process somewhat differently, it’s not actually a full-blown
technical disagreement. Cisco is really just talking about a general sense
of what’s happening with Ethernet.
Half-duplex also uses the CSMA/CD protocol I just discussed to help
prevent collisions and to permit retransmitting if one occurs. If a hub is
attached to a switch, it must operate in half-duplex mode because the end
stations must be able to detect collisions.
Figure 2.5
shows a network with
four hosts connected to a hub.
FIGURE 2.5
Half-duplex example
The problem here is that we can only run half-duplex, and if two hosts
communicate at the same time there will be a collision. Also, half-duplex
Ethernet is typically only about 30 to 40 percent efficient; a large 100Base-T
network will usually give you only 30 to 40 Mbps, at most, due to
collision overhead.
But full-duplex Ethernet uses two pairs of wires at the same time instead
of a single wire pair like half-duplex. And full-duplex uses a point-to-
point connection between the transmitter of the transmitting device and
the receiver of the receiving device. This means that full-duplex data
transfers happen a lot faster when compared to half-duplex transfers.
Also, because the transmitted data is sent on a different set of wires than
the received data, collisions won’t happen.
Figure 2.6
shows four hosts
connected to a switch, plus a hub. Definitely try not to use hubs if you can
help it!
FIGURE 2.6
Full-duplex example
Theoretically all hosts connected to the switch in
Figure 2.6
can
communicate at the same time because they can run full-duplex. Just
keep in mind that the switch port connecting to the hub as well as the
hosts connecting to that hub must run at half-duplex.
The reason you don't need to worry about collisions is that now it's
like a freeway with multiple lanes instead of the single-lane road provided
by half-duplex. Full-duplex Ethernet is supposed to offer 100 percent
efficiency in both directions—for example, you can get 20 Mbps with a 10
Mbps Ethernet running full-duplex, or 200 Mbps for Fast Ethernet. But
this rate is known as an aggregate rate, which translates as “you’re
supposed to get” 100 percent efficiency. No guarantees, in networking as
in life!
You can use full-duplex Ethernet in at least the following six situations:
With a connection from a switch to a host
With a connection from a switch to a switch
With a connection from a host to a host
With a connection from a switch to a router
With a connection from a router to a router
With a connection from a router to a host
Full-duplex Ethernet requires a point-to-point connection
when only two nodes are present. You can run full-duplex with just
about any device except a hub.
Now this may be a little confusing because it raises an obvious question: if
it’s capable of all that speed, why wouldn’t it actually deliver? Well, when
a full-duplex Ethernet port is powered on, it first connects to the remote
end and then negotiates with the other end of the Fast Ethernet link. This
is called an auto-detect mechanism. This mechanism first decides on the
exchange capability, which means it checks to see if it can run at 10, 100,
or even 1000 Mbps. It then checks to see if it can run full-duplex, and if it
can’t, it will run half-duplex.
Remember that half-duplex Ethernet shares a collision
domain and provides a lower effective throughput than full-duplex
Ethernet, which typically has a private per-port collision domain plus
a higher effective throughput.
Last, remember these important points:
There are no collisions in full-duplex mode.
A dedicated switch port is required for each full-duplex node.
The host network card and the switch port must be capable of
operating in full-duplex mode.
The default behavior of 10Base-T and 100Base-T hosts is 10 Mbps
half-duplex if the autodetect mechanism fails, so it is always good
practice to set the speed and duplex of each port on a switch if you
can (see the sample switch configuration after this list).
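For reference, here's what manually pinning those values looks like on a Cisco switch. The interface name is just an example, so substitute the port you're actually configuring:

Switch# configure terminal
Switch(config)# interface FastEthernet0/1
Switch(config-if)# speed 100
Switch(config-if)# duplex full
Switch(config-if)# end

With both ends hard-set the same way, autonegotiation can't fail you. Just remember to configure both ends, or you'll create a duplex mismatch.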
Now let’s take a look at how Ethernet works at the Data Link layer.
Ethernet at the Data Link Layer
Ethernet at the Data Link layer is responsible for Ethernet addressing,
commonly referred to as MAC or hardware addressing. Ethernet is also
responsible for framing packets received from the Network layer and
preparing them for transmission on the local network through the
Ethernet contention-based media access method.
Ethernet Addressing
Here’s where we get into how Ethernet addressing works. It uses the
Media Access Control (MAC) address burned into each and every
Ethernet network interface card (NIC). The MAC, or hardware, address is
a 48-bit (6-byte) address written in a hexadecimal format.
Figure 2.7
shows the 48-bit MAC addresses and how the bits are divided.
FIGURE 2.7
Ethernet addressing using MAC addresses
The organizationally unique identifier (OUI) is assigned by the IEEE to
an organization. It's composed of 24 bits, or 3 bytes, and the organization
in turn assigns a globally administered address, also made up of 24 bits, or
3 bytes, that's supposedly unique to each and every adapter it
manufactures. Surprisingly, there's no guarantee when it comes to that
unique claim! Okay, now look closely at the figure. The high-order bit is
the Individual/Group (I/G) bit. When it has a value of 0, we can assume
that the address is the MAC address of a device and that it may well
appear in the source portion of the MAC header. When it’s a 1, we can
assume that the address represents either a broadcast or multicast
address in Ethernet.
The next bit is the Global/Local bit, sometimes called the G/L bit or U/L
bit, where U means universal. When set to 0, this bit represents a
globally administered address, as assigned by the IEEE, but when it’s a 1,
it represents a locally governed and administered address. The low-order
24 bits of an Ethernet address represent a locally administered or
manufacturer-assigned code. This portion commonly starts with 24 0s for
the first card made and continues in order until there are 24 1s for the last
(16,777,216th) card made. You’ll find that many manufacturers use these
same six hex digits as the last six characters of their serial number on the
same card.
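If you'd like to see these bit fields in action, here's a short Python sketch that splits a MAC address into its OUI and device portions and tests the I/G and U/L bits. Note that in the usual written byte order, the I/G bit is the low-order bit of the first octet; it's the high-order bit in the figure because Ethernet transmits each byte least significant bit first:

def describe_mac(mac: str) -> None:
    octets = [int(x, 16) for x in mac.split(":")]
    oui = ":".join(f"{o:02x}" for o in octets[:3])      # high-order 24 bits (OUI)
    device = ":".join(f"{o:02x}" for o in octets[3:])   # low-order 24 bits
    ig = octets[0] & 0x01   # I/G: 0 = individual, 1 = group (broadcast/multicast)
    ul = octets[0] & 0x02   # U/L: 0 = globally administered, 1 = locally administered
    print(oui, device,
          "group" if ig else "individual",
          "local" if ul else "global")

describe_mac("00:60:f5:00:1f:27")   # individual, globally administered
describe_mac("ff:ff:ff:ff:ff:ff")   # broadcast: both bits set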
Let’s stop for a minute and go over some addressing schemes important
in the Ethernet world.
Binary to Decimal and Hexadecimal Conversion
Before we get into working with the TCP/IP protocol and IP addressing,
which we’ll do in Chapter 3, “Introduction to TCP/IP,” it’s really
important for you to truly grasp the differences between binary, decimal,
and hexadecimal numbers and how to convert one format into the other.
We’ll start with binary numbering, which is really pretty simple. The
digits used are limited to either a 1 or a 0, and each digit is called a bit,
which is short for binary digit. Typically, you group either 4 or 8 bits
together, with these being referred to as a nibble and a byte, respectively.
The interesting thing about binary numbering is how the value is
represented in a decimal format—the typical decimal format being the
base-10 number scheme that we’ve all used since kindergarten. The
binary numbers are placed in a value spot, starting at the right and
moving left, with each spot having double the value of the previous spot.
Table 2.1
shows the decimal values of each bit location in a nibble and a
byte. Remember, a nibble is 4 bits and a byte is 8 bits.
TABLE 2.1
Binary values
Nibble values:                       8    4    2    1
Byte values:    128   64   32   16   8    4    2    1
What all this means is that if a one digit (1) is placed in a value spot, then
the nibble or byte takes on that decimal value and adds it to any other
value spots that have a 1. If a zero (0) is placed in a bit spot, you don’t
count that value.
Let me clarify this a little. If we have a 1 placed in each spot of our nibble,
we would then add up 8 + 4 + 2 + 1 to give us a maximum value of 15.
Another example for our nibble values would be 1001, meaning that the 8
bit and the 1 bit are turned on, which equals a decimal value of 9. If we
have a nibble binary value of 0110, then our decimal value would be 6,
because the 4 and 2 bits are turned on.
But the byte decimal values can add up to a number that’s significantly
higher than 15. This is how: If we counted every bit as a one (1), then the
byte binary value would look like the following example because,
remember, 8 bits equal a byte:
11111111
We would then count up every bit spot because each is turned on. It
would look like this, which demonstrates the maximum value of a byte:
128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255
There are plenty of other decimal values that a binary number can equal.
Let’s work through a few examples:
10010110
Which bits are on? The 128, 16, 4, and 2 bits are on, so we’ll just add them
up: 128 + 16 + 4 + 2 = 150.
01101100
Which bits are on? The 64, 32, 8, and 4 bits are on, so we just need to add
them up: 64 + 32 + 8 + 4 = 108.
11101000
Which bits are on? The 128, 64, 32, and 8 bits are on, so just add the
values up: 128 + 64 + 32 + 8 = 232.
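These conversions are easy to check in Python. This sketch mirrors the add-up-the-place-values process we just walked through:

def binary_to_decimal(bits: str) -> int:
    """Add up the place value of every bit that's turned on."""
    total = 0
    for position, bit in enumerate(reversed(bits)):   # rightmost spot is worth 1
        if bit == "1":
            total += 2 ** position                    # 1, 2, 4, 8, 16, ...
    return total

print(binary_to_decimal("10010110"))   # 128 + 16 + 4 + 2 = 150
print(binary_to_decimal("01101100"))   # 64 + 32 + 8 + 4 = 108
print(binary_to_decimal("11101000"))   # 128 + 64 + 32 + 8 = 232

Python's built-in int("10010110", 2) does the same job in one call, but working through the loop is good practice for doing it in your head.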
I highly recommend that you memorize
Table 2.2
before braving the IP
sections in Chapter 3, “Introduction to TCP/IP,” and Chapter 4, “Easy
Subnetting”!
TABLE 2.2
Binary to decimal memorization chart
Binary Value    Decimal Value
10000000        128
11000000        192
11100000        224
11110000        240
11111000        248
11111100        252
11111110        254
11111111        255
Hexadecimal addressing is completely different from binary or decimal—
it’s converted by reading nibbles, not bytes. By using a nibble, we can
convert these bits to hex pretty simply. First, understand that the
hexadecimal addressing scheme uses only the characters 0 through 9.
Because the numbers 10, 11, 12, and so on can’t be used (because they are
two-digit numbers), the letters A, B, C, D, E, and F are used instead to
represent 10, 11, 12, 13, 14, and 15, respectively.
Hex is short for hexadecimal, which is a numbering system
that uses the first six letters of the alphabet, A through F, to extend
beyond the available 10 characters in the decimal system. These
values are not case sensitive.
Table 2.3
shows both the binary value and the decimal value for each
hexadecimal digit.
TABLE 2.3
Hex to binary to decimal chart
Hexadecimal Value   Binary Value   Decimal Value
0                   0000           0
1                   0001           1
2                   0010           2
3                   0011           3
4                   0100           4
5                   0101           5
6                   0110           6
7                   0111           7
8                   1000           8
9                   1001           9
A                   1010           10
B                   1011           11
C                   1100           12
D                   1101           13
E                   1110           14
F                   1111           15
Did you notice that the first 10 hexadecimal digits (0–9) are the same
value as the decimal values? If not, look again because this handy fact
makes those values super easy to convert!
Now suppose you have something like this: 0x6A. This is important
because sometimes Cisco likes to put 0x in front of characters so you
know that they are a hex value. It doesn’t have any other special meaning.
So what are the binary and decimal values? All you have to remember is
that each hex character is one nibble and that two hex characters joined
together make a byte. To figure out the binary value, put the hex
characters into two nibbles and then join them together into a byte. The 6
equals 0110, and A, which represents decimal 10, equals 1010, so the complete byte
would be 01101010.
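Python's format specifiers make that nibble-joining easy to verify:

print(f"{0x6A:08b}")   # 01101010: the nibbles 0110 and 1010 joined
print(0x6A)            # 106, the decimal equivalent (64 + 32 + 8 + 2)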
To convert from binary to hex, just take the byte and break it into nibbles.
Let me clarify this.
Say you have the binary number 01010101. First, break it into nibbles—
0101 and 0101—with the value of each nibble being 5 since the 1 and 4
bits are on. This makes the hex answer 0x55. And in decimal format, the
binary number is 01010101, which converts to 64 + 16 + 4 + 1 = 85.
Here’s another binary number:
11001100
Your answer would be 1100 = 12 and 1100 = 12, so it converts to CC in
hex. The decimal conversion answer would be 128 + 64 + 8 + 4 = 204.
One more example, then we need to get working on the Physical layer.
Suppose you had the following binary number:
10110101
The hex answer would be 0xB5, since 1011 converts to B and 0101
converts to 5 in hex value. The decimal equivalent is 128 + 32 + 16 + 4 + 1
= 181.
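Here's the whole nibble-at-a-time routine as a Python sketch, run against the examples we just converted by hand:

def binary_to_hex(bits: str) -> str:
    """Convert a binary string one nibble (4 bits) at a time."""
    nibbles = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "0x" + "".join(f"{int(n, 2):X}" for n in nibbles)

for b in ("01010101", "11001100", "10110101"):
    print(b, "=", binary_to_hex(b), "=", int(b, 2))
# 01010101 = 0x55 = 85
# 11001100 = 0xCC = 204
# 10110101 = 0xB5 = 181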
Make sure you check out Written Lab 2.1 for more practice
with binary/decimal/hex conversion!
Ethernet Frames
The Data Link layer is responsible for combining bits into bytes and bytes
into frames. Frames are used at the Data Link layer to encapsulate
packets handed down from the Network layer for transmission on the
local network medium.
The function of Ethernet stations is to pass data frames between each
other using a group of bits known as a MAC frame format. This provides
error detection from a cyclic redundancy check (CRC). But remember—
this is error detection, not error correction. An example of a typical
Ethernet frame used today is shown in Figure 2.8.
FIGURE 2.8
Typical Ethernet frame format
Encapsulating a frame within a different type of frame is
called tunneling.
Following are the details of the various fields in the typical Ethernet
frame type:
Preamble An alternating 1,0 pattern provides a 5 MHz clock at the start
of each packet, which allows the receiving devices to lock onto the
incoming bit stream.
Start Frame Delimiter (SFD)/Synch The preamble is seven octets
and the SFD is one octet (synch). The SFD is 10101011, where the last pair
of 1s allows the receiver to come into the alternating 1,0 pattern
somewhere in the middle and still sync up to detect the beginning of the
data.
Destination Address (DA) This transmits a 48-bit value using the
least significant bit (LSB) first. The DA is used by receiving stations to
determine whether an incoming packet is addressed to a particular node.
The destination address can be an individual address or a broadcast or
multicast MAC address. Remember that a broadcast is all 1s—all Fs in hex
—and is sent to all devices. A multicast is sent only to a subset of the
nodes on a network.
Source Address (SA) The SA is a 48-bit MAC address used to identify
the transmitting device, and it uses the least significant bit first.
Broadcast and multicast address formats are illegal within the SA field.
Length or Type 802.3 uses a Length field, but the Ethernet_II frame
uses a Type field to identify the Network layer protocol. The old, original
802.3 cannot identify the upper-layer protocol and must be used with a
proprietary LAN—IPX, for example.
Data This is a packet sent down to the Data Link layer from the Network
layer. The size can vary from 46 to 1,500 bytes.
Frame Check Sequence (FCS) FCS is a field at the end of the frame
that’s used to store the cyclic redundancy check (CRC) answer. The CRC
is a mathematical algorithm that’s run when each frame is built based on
the data in the frame. When a receiving host receives the frame and runs
the CRC, the answer should be the same. If not, the frame is discarded,
assuming errors have occurred.
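The FCS idea is easy to demonstrate in Python. This sketch (the addresses come from the captures below; the payload is invented) builds a bare-bones Ethernet_II frame, appends a CRC-32 the way the FCS does, and verifies it on receipt. Real NICs do this in hardware with a specific bit ordering, so treat it purely as an illustration of detection, not a wire-accurate implementation:

import zlib

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    frame = dst + src + ethertype.to_bytes(2, "big") + payload
    fcs = zlib.crc32(frame)                       # same CRC-32 polynomial Ethernet uses
    return frame + fcs.to_bytes(4, "little")

def frame_ok(frame: bytes) -> bool:
    body, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(body) == int.from_bytes(fcs, "little")

frame = build_frame(bytes.fromhex("0060f5001f27"),   # destination MAC
                    bytes.fromhex("0060f5001f2c"),   # source MAC
                    0x0800,                          # Type: IPv4
                    b"x" * 46)                       # minimum Data field is 46 bytes
print(frame_ok(frame))                               # True
damaged = frame[:20] + b"?" + frame[21:]             # corrupt one byte in transit
print(frame_ok(damaged))                             # False: the frame is discarded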
Let’s pause here for a minute and take a look at some frames caught on
my trusty network analyzer. You can see that the frame below has only
three fields: Destination, Source, and Type, which is shown as Protocol
Type on this particular analyzer:
Destination: 00:60:f5:00:1f:27
Source: 00:60:f5:00:1f:2c
Protocol Type: 08-00 IP
This is an Ethernet_II frame. Notice that the Type field is IP, or 08-00,
usually written as 0x0800 in hexadecimal.
The next frame has the same fields, so it must be an Ethernet_II frame as
well:
Destination: ff:ff:ff:ff:ff:ff Ethernet Broadcast
Source: 02:07:01:22:de:a4
Protocol Type: 08-00 IP
Did you notice that this frame was a broadcast? You can tell because the
destination hardware address is all 1s in binary, or all Fs in hexadecimal.
Let’s take a look at one more Ethernet_II frame. I’ll talk about this next
example again when we use IPv6 in Chapter 14, “Internet Protocol
Version 6 (IPv6),” but you can see that the Ethernet frame is the same
Ethernet_II frame used with the IPv4 routed protocol. The Type field has
0x86dd when the frame is carrying IPv6 data, and when we have IPv4
data, the frame uses 0x0800 in the protocol field:
Destination: IPv6-Neighbor-Discovery_00:01:00:03
(33:33:00:01:00:03)
Source: Aopen_3e:7f:dd (00:01:80:3e:7f:dd)
Type: IPv6 (0x86dd)
This is the beauty of the Ethernet_II frame. Because of the Type field, we
can run any Network layer routed protocol and the frame will carry the
data because it can identify the Network layer protocol!
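As a final sketch, here's how little work it takes for a receiver to pull those three header fields out of raw bytes. The frame below is hand-assembled to match the broadcast capture shown earlier:

def parse_header(frame: bytes) -> None:
    dst, src = frame[0:6], frame[6:12]
    ethertype = int.from_bytes(frame[12:14], "big")
    names = {0x0800: "IPv4", 0x86DD: "IPv6", 0x0806: "ARP"}
    if dst == b"\xff" * 6:
        kind = "broadcast"
    elif dst[0] & 0x01:
        kind = "multicast"          # I/G bit set in the first octet
    else:
        kind = "unicast"
    print("Destination:", dst.hex(":"), f"({kind})")
    print("Source:     ", src.hex(":"))
    print(f"Type:        0x{ethertype:04x} ({names.get(ethertype, 'unknown')})")

raw = (bytes.fromhex("ffffffffffff")     # destination: Ethernet broadcast
       + bytes.fromhex("02070122dea4")   # source MAC from the capture
       + bytes.fromhex("0800"))          # Type: IPv4
parse_header(raw)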