CCNA Semester 3: Module 5 — Answers and Questions

CCNA 3: Module 5

Options highlighted in colour are the correct answers.
1. Which two statements are true about the default operation of STP in a Layer 2 switched environment that has redundant connections between switches? (Choose two.)

The root switch is the switch with the highest speed ports.
Decisions on which port to block when two ports have equal cost depend on the port priority and identity.
All trunking ports are designated and not blocked.
Root switches have all ports set as root ports.
Non-root switches each have only one root port.

2. Which two statements describe the BIDs used in a spanning tree topology? (Choose two.)
They are sent out by the root bridge only after the inferior BPDUs are sent.
They consist of a bridge priority and MAC address.
Only the root bridge will send out a BID.
They are used by the switches in a spanning tree topology to elect the root bridge.
The switch with the fastest processor will have the lowest BID.

3. In which two ways is the information that is contained in BPDUs used by switches? (Choose two.)
to negotiate a trunk between switches
to set the duplex mode of a redundant link
to identify the shortest path to the root bridge
to prevent loops by sharing bridging tables between connected switches
to determine which ports will forward frames as part of the spanning tree

4. Which two actions does an RSTP edge port take if it receives a BPDU? (Choose two.)
immediately loses its edge status
inhibits the generation of a TCN
goes immediately to a learning state
disables itself
becomes a normal spanning-tree port

5. Refer to the exhibit. All switches in the network have empty MAC tables. STP has been disabled on the switches in the network. How will a broadcast frame that is sent by host PC1 be handled on the network?
Switch SW1 will block the broadcast and drop the frame.
Switch SW1 will forward the broadcast out all switch ports, except the originating port. This will generate an endless loop in the network.
Switch SW1 will forward the broadcast out all switch ports, except the originating port. All hosts in the network will reply with a unicast frame sent to host PC1.
Switch SW1 will forward the traffic out all switch ports except the originating port as a unicast frame. All hosts in the network will reply with a unicast frame sent to switch SW1.

6. Which two items are true regarding the spanning-tree portfast command? (Choose two.)
PortFast is Cisco proprietary.
PortFast can negatively affect DHCP services.
PortFast is used to more quickly prevent and eliminate bridging loops.
Enabling PortFast on trunks that connect to other switches improves convergence.
If an access port is configured with PortFast, it immediately transitions from a blocking to a forwarding state.

7. Refer to the exhibit. Server sends an ARP request for the MAC address of its default gateway. If STP is not enabled, what will be the result of this ARP request?

Router_1 will drop the broadcast and reply with the MAC address of the next hop router.
Switch_A will reply with the MAC address of the Router_1 E0 interface.
Switch_A and Switch_B will continuously flood the message onto the network.
The message will cycle around the network until its TTL is exceeded.

8. What is the first step in the process of convergence in a spanning tree topology?
election of the root bridge
blocking of the non-designated ports
selection of the designated trunk port
determination of the designated port for each segment

9. How can a network administrator influence which STP switch becomes the root bridge?
Configure all the interfaces on the switch as the static root ports.
Change the BPDU to a lower value than that of the other switches in the network.
Assign a lower IP address to the switch than that of the other switches in the network.
Set the switch priority to a smaller value than that of the other switches in the network.

10. Refer to the exhibit. The spanning-tree port priority of each interface is at the default setting. The network administrator enters the spanning-tree vlan 1 root primary command on S4. What is the effect of the command?
Spanning tree blocks Gi0/1 on S3.
Gi0/2 on S3 transitions to a root port.
Port priority makes Gi0/2 on S1 a root port.
S4 is already the root bridge, so there are no port changes.

11. What two features of the Spanning-Tree Protocol contribute to the time it takes for a switched network to converge after a topology change occurs? (Choose two.)
the max-age timer
the spanning-tree hold down timer
the forward delay
the spanning-tree path cost
the blocking delay

12. In which STP state does a port record MAC addresses but not forward user data?
blocking
learning
disabling
listening
forwarding

13. Which three statements are accurate regarding RSTP and STP? (Choose three.)

RSTP uses a faster algorithm to determine root ports.
RSTP introduced the extended system ID to allow for more than 4096 VLANs.
Both RSTP and STP use the portfast command to allow ports to immediately transition to forwarding state.
Like STP PortFast, an RSTP edge port that receives a BPDU loses its edge port status immediately and becomes a normal spanning-tree port.
Configuration commands to establish primary and secondary root bridges are identical for STP and RSTP.
Because of the format of the BPDU packet, RSTP is backward compatible with STP.

14. What two elements will exist in a converged network with one spanning tree? (Choose two.)
one root bridge per network
all non-designated ports forwarding
one root port per non-root bridge
multiple designated ports per segment
one designated port per network

15. Which statement or set of paired statements correctly compares STP with RSTP?
STP and RSTP use the same BPDU format.
STP specifies backup ports. RSTP has only root ports, alternate ports, and designated ports.
STP port states are independent of port roles. RSTP ties together the port state and port role.
STP waits for the network to converge before placing ports into forwarding state. RSTP places alternate ports into forwarding state immediately.
16. Refer to the exhibit. What can be determined from the output shown?
Two hosts communicating between ports Fa0/2 and Fa0/4 have a cost of 38.
The priority was statically configured to identify the root.
STP is disabled on this switch.
The timers have been altered to reduce convergence time.

17. Which two criteria does a switch use to select the root bridge? (Choose two.)
bridge priority
switching speed
number of ports
base MAC address
switch location
memory size

18. What three link types have been defined for Rapid Spanning-Tree Protocol? (Choose three.)
shared
end-to-end
edge-type
boundary-type
point-to-many
point-to-point

19. What Rapid Spanning Tree Protocol (RSTP) role is assigned to the forwarding port elected for every switched Ethernet LAN segment?
alternate
backup
designated
root

20. When PVST+ was developed, the Bridge ID was modified to include which information?
bridge priority
MAC address
protocol
VLAN ID

CCNA Semester 3 Module 3: Answers and Questions

Options highlighted in colour are the correct answers.

1. Refer to the exhibit. The switches in the exhibit are connected with trunks within the same VTP management domain. Each switch is labeled with its VTP mode. A new VLAN is added to Switch3. This VLAN does not show up on the other switches. What is the reason for this?
VLANs cannot be created on transparent mode switches.
Transparent mode switches do not forward VTP advertisements.
VLANs created on transparent mode switches are not included in VTP advertisements.
Server mode switches neither listen to nor forward VTP messages from transparent mode switches.

2. Which two statements are true about the implementation of VTP? (Choose two.)
Switches must be connected via trunks.
The VTP domain name is case sensitive.
Transparent mode switches cannot be configured with new VLANs.
The VTP password is mandatory and case sensitive.
Switches that use VTP must have the same switch name.
3. Which two statements describe VTP transparent mode operation? (Choose two.)
Transparent mode switches can create VLAN management information.
Transparent mode switches can add VLANs of local significance only.
Transparent mode switches pass any VLAN management information that they receive to other switches.
Transparent mode switches can adopt VLAN management changes that are received from other switches.
Transparent mode switches originate updates about the status of their VLANs and inform other switches about that status.
4. Which three VTP parameters must be identical on all switches to participate in the same VTP domain? (Choose three.)
revision number
domain name
pruning
mode
domain password
version number

5. What causes a VTP configured switch to issue a summary advertisement?
A five-minute update timer has elapsed.
A port on the switch has been shutdown.
The switch is changed to the transparent mode.
A new host has been attached to a switch in the management domain.

6. Refer to the exhibit. Switches SW1 and SW2 are interconnected via a trunk link but failed to exchange VLAN information. The network administrator issued the show vtp status command to troubleshoot the problem. On the basis of the provided command output, what could be done to correct the problem?
Switch SW2 must be configured as a VTP client.
The switches must be interconnected via an access link.
The switches must be configured with the same VTP domain name.
Both switches must be configured with the same VTP revision number.
7. Refer to the exhibit. Which two facts can be confirmed by this output? (Choose two.)
If this switch is added to an established network, the other VTP-enabled switches in the same VTP domain will consider their own VLAN information to be more recent than the VLAN information advertised by this switch.
This switch shows no configuration revision errors.
This switch has established two-way communication with the neighboring devices.
This switch is configured to advertise its VLAN configuration to other VTP-enabled switches in the same VTP domain.
This switch is configured to allow the network manager to maximize bandwidth by restricting traffic to specific network devices.
8. Refer to the exhibit. Switch S1 is in VTP server mode. Switches S2 and S3 are in client mode. An administrator accidentally disconnects the cable from F0/1 on S2. What will the effect be on S2?
S2 will automatically transition to VTP transparent mode.
S2 will remove all VLANs from the VLAN database until the cable is reconnected.
S2 will retain the VLANs as of the latest known revision, but will lose the VLANs if it is reloaded.
S2 will automatically send a VTP request advertisement to 172.17.99.11 when the cable is reconnected.
9. Refer to the exhibit. What information can be learned from the output provided?
It verifies the configured VTP password.
It verifies the VTP domain is configured to use VTP version 2.
It verifies VTP advertisements are being exchanged.
It verifies the VTP domain name is V1.

10. How are VTP messages sent between switches in a domain?
Layer 2 broadcast
Layer 2 multicast
Layer 2 unicast
Layer 3 broadcast
Layer 3 multicast
Layer 3 unicast
11. What statement describes the default propagation of VLANs on a trunked link?
only VLAN 1
all VLANs
no VLANs
the native VLAN

12. Which two statements are true about VTP pruning? (Choose two.)
Pruning is enabled by default.
Pruning can only be configured on VTP servers.
Pruning must be configured on all VTP servers in the domain.
VLANs on VTP client-mode switches will not be pruned.
Pruning will prevent unnecessary flooding of broadcasts across trunks.

13. What does a client mode switch in a VTP management domain do when it receives a summary advertisement with a revision number higher than its current revision number?
It suspends forwarding until a subset advertisement update arrives.
It issues an advertisement request for new VLAN information.
It increments the revision number and forwards it to other switches.
It deletes the VLANs not included in the summary advertisement.
It issues summary advertisements to advise other switches of status changes.
14. Refer to the exhibit. All switches in the network participate in the same VTP domain. What happens when the new switch SW2 with a default configuration and revision number of 0 is inserted in the existing VTP domain Lab_Network?
The switch operates as a VTP client.
The switch operates in VTP transparent mode.
The switch operates as a VTP server and deletes the existing VLAN configuration in the domain.
The switch operates as a VTP server, but does not impact the existing VLAN configuration in the domain.
The switch operates as a VTP server in the default VTP domain and does not affect the configuration in the existing VTP domain.
15. What are two features of VTP client mode operation? (Choose two.)
unable to add VLANs
can add VLANs of local significance
forward broadcasts out all ports with no respect to VLAN information
can only pass VLAN management information without adopting changes
can forward VLAN information to other switches in the same VTP domain

16. Refer to the exhibit. S2 was previously used in a lab environment and has been added to the production network in server mode. The lab and production networks use the same VTP domain name, so the network administrator made no configuration changes to S2 before adding it to the production network. The lab domain has a higher revision number. After S2 was added to the production network, many computers lost network connectivity. What will solve the problem?
Reset the revision number on S2 with either the delete VTP command or by changing the domain name and then changing it back.
Re-enter all appropriate VLANs, except VLAN 1, manually on Switch1 so that they propagate throughout the network.*
Change S1 to transparent VTP mode to reclaim all VLANs in vlan.dat and change back to server mode.
Change S2 to client mode so the VLANs will automatically propagate.

17. A network administrator is replacing a failed switch with a switch that was previously on the network. What precautionary step should the administrator take on the replacement switch to avoid incorrect VLAN information from propagating through the network?
Enable VTP pruning.
Change the VTP domain name.
Change the VTP mode to client.
Change all the interfaces on the switch to access ports.

18. Refer to the exhibit. Switch1 is not participating in the VTP management process with the other switches that are shown in the exhibit. What are two possible explanations for this? (Choose two.)

Switch1 is in client mode.
Switch2 is in server mode.
Switch2 is in transparent mode.
Switch1 is in a different management domain.
Switch1 has end devices that are connected to the ports.
Switch1 is using VTP version 1, and Switch2 is using VTP version 2.

19. Refer to the exhibit. All switches in the VTP domain are new. Switch SW1 is configured as a VTP server, switches SW2 and SW4 are configured as VTP clients, and switch SW3 is configured in VTP transparent mode. Which switch or switches receive VTP updates and synchronize their VLAN configuration based on those updates?
All switches receive updates and synchronize VLAN information.
Only switch SW2 receives updates and synchronizes VLAN information.
Only switches SW3 and SW4 receive updates and synchronize VLAN information.
SW3 and SW4 receive updates, but only switch SW4 synchronizes VLAN information.

20. Which statement is true when VTP is configured on a switched network that incorporates VLANs?
VTP is only compatible with the 802.1Q standard.
VTP adds to the complexity of managing a switched network.
VTP allows a switch to be configured to belong to more than one VTP domain.
VTP dynamically communicates VLAN changes to all switches in the same VTP domain.

CCNA 3 Module 3: Answers and Questions

Options highlighted in colour are the correct answers.
1. What statement about the 802.1q trunking protocol is true?
802.1q is Cisco proprietary.
802.1q frames are mapped to VLANs by MAC address.
802.1q does NOT require the FCS of the original frame to be recalculated.
802.1q will not perform operations on frames that are forwarded out access ports.
2. Which two statements describe the benefits of VLANs? (Choose two.)
VLANs improve network performance by regulating flow control and window size.
VLANs enable switches to route packets to remote networks via VLAN ID filtering.
VLANs reduce network cost by reducing the number of physical ports required on switches.
VLANs improve network security by isolating users that have access to sensitive data and applications.
VLANs divide a network into smaller logical networks, resulting in lower susceptibility to broadcast storms.

3. What are two characteristics of VLAN1 in a default switch configuration? (Choose two.)
VLAN 1 should be renamed.
VLAN 1 is the management VLAN.
All switch ports are members of VLAN1.
Only switch port 0/1 is assigned to VLAN1.
Links between switches must be members of VLAN1.

4. Refer to the exhibit. SW1 and SW2 are new switches being installed in the topology shown in the exhibit. Interface Fa0/1 on switch SW1 has been configured with trunk mode “on”. Which statement is true about forming a trunk link between the switches SW1 and SW2?

Interface Fa0/2 on switch SW2 will negotiate to become a trunk link if it supports DTP.
Interface Fa0/2 on switch SW2 can only become a trunk link if statically configured as a trunk.
Interface Fa0/1 converts the neighboring link on the adjacent switch into a trunk link if the neighboring interface is configured in nonegotiate mode.
Interface Fa0/1 converts the neighboring link on the adjacent switch into a trunk link automatically with no consideration of the configuration on the neighboring interface.

5. Refer to the exhibit. Computer 1 sends a frame to computer 4. On which links along the path between computer 1 and computer 4 will a VLAN ID tag be included with the frame?

A
A, B
A, B, D, G
A, D, F
C, E
C, E, F

6. The network administrator wants to separate hosts in Building A into two VLANs numbered 20 and 30. Which two statements are true concerning VLAN configuration? (Choose two.)
The VLANs may be named.
VLAN information is saved in the startup configuration.
Non-default VLANs created manually must use the extended range VLAN numbers.
The network administrator may create the VLANs in either global configuration mode or VLAN database mode.
Both VLANs may be named BUILDING_A to distinguish them from other VLANs in different geographical locations.

7. Refer to the exhibit. Which two conclusions can be drawn regarding the switch that produced the output shown? (Choose two.)
The network administrator configured VLANs 1002-1005.
The VLANs are in the active state and are in the process of negotiating configuration parameters.
A FDDI trunk has been configured on this switch.
The command switchport access vlan 20 was entered in interface configuration mode for Fast Ethernet interface 0/1.
Devices attached to ports fa0/5 through fa0/8 cannot communicate with devices attached to ports fa0/9 through fa0/12 without the use of a Layer 3 device.

8. What happens to the member ports of a VLAN when the VLAN is deleted?
The ports cannot communicate with other ports.
The ports default back to the management VLAN.
The ports automatically become a part of VLAN1.
The ports remain a part of that VLAN until the switch is rebooted. They then become members of the management VLAN.

9. A network administrator is removing several VLANs from a switch. When the administrator enters the no vlan 1 command, an error is received. Why did this command generate an error?

VLAN 1 can never be deleted.
VLAN 1 can only be deleted by deleting the vlan.dat file.
VLAN 1 can not be deleted until all ports have been removed from it.
VLAN 1 can not be deleted until another VLAN has been assigned its responsibilities.

10. What is the effect of the switchport mode dynamic desirable command?
DTP cannot negotiate the trunk since the native VLAN is not the default VLAN.
The remote connected interface cannot negotiate a trunk unless it is also configured as dynamic desirable.
The connected devices dynamically determine when data for multiple VLANs must be transmitted across the link and bring the trunk up as needed.
A trunk link is formed if the remote connected device is configured with the switchport mode dynamic auto or switchport mode trunk commands.

11. Refer to the exhibit. The exhibited configurations do not allow the switches to form a trunk. What is the most likely cause of this problem?

Cisco switches only support the ISL trunking protocol.
The trunk cannot be negotiated with both ends set to auto.
By default, Switch1 will only allow VLAN 5 across the link.
A common native VLAN should have been configured on the switches.

12. Switch port fa0/1 was manually configured as a trunk, but now it will be used to connect a host to the network. How should the network administrator reconfigure switch port Fa0/1?

Disable DTP.
Delete any VLANs currently being trunked through port Fa0/1.
Administratively shut down and re-enable the interface to return it to default.
Enter the switchport mode access command in interface configuration mode.

13. Refer to the exhibit. Computer B is unable to communicate with computer D. What is the most likely cause of this problem?

The link between the switches is up but not trunked.
VLAN 3 is not an allowed VLAN to enter the trunk between the switches.
The router is not properly configured to route traffic between the VLANs.
Computer D does not have a proper address for the VLAN 3 address space.

14. Refer to the exhibit. The network administrator has just added VLAN 50 to Switch1 and Switch2 and assigned hosts on the IP addresses of the VLAN in the 10.1.50.0/24 subnet range. Computer A can communicate with computer B, but not with computer C or computer D. What is the most likely cause of this problem?

There is a native VLAN mismatch.
The link between Switch1 and Switch2 is up but not trunked.
The router is not properly configured for inter-VLAN routing.
VLAN 50 is not allowed to enter the trunk between Switch1 and Switch2.

15. Refer to the exhibit. Which statement is true concerning interface Fa0/5?
The default native VLAN is being used.
The trunking mode is set to auto.
Trunking can occur with non-Cisco switches.
VLAN information about the interface encapsulates the Ethernet frames.

16. Which statement describes how hosts on VLANs communicate?
Hosts on different VLANs use VTP to negotiate a trunk.
Hosts on different VLANs communicate through routers.
Hosts on different VLANs should be in the same IP network.
Hosts on different VLANs examine the VLAN ID in the frame tag to determine if the frame is for their network.

17. Refer to the exhibit. How far is a broadcast frame that is sent by computer A propagated in the LAN domain?

none of the computers will receive the broadcast frame
computer A, computer B, computer C
computer A, computer D, computer G
computer B, computer C
computer D, computer G
computer A, computer B, computer C, computer D, computer E, computer F, computer G, computer H, computer I

18. What is a valid consideration for planning VLAN traffic across multiple switches?
Configuring interswitch connections as trunks will cause all hosts on any VLAN to receive broadcasts from the other VLANs.
A trunk connection is affected by broadcast storms on any particular VLAN that is carried by that trunk.
Restricting trunk connections between switches to a single VLAN will improve efficiency of port usage.
Carrying all required VLANs on a single access port will ensure proper traffic separation.

19. Which two statements about the 802.1q trunking protocol are true? (Choose two.)
802.1q is Cisco proprietary.
802.1q frames are mapped to VLANs by MAC address.
If 802.1q is used on a frame, the FCS must be recalculated.
802.1q will not perform operations on frames that are forwarded out access ports.
802.1q allows the encapsulation of the original frame to identify the VLAN from which a frame originated.

20. What switch port modes will allow a switch to successfully form a trunking link if the neighboring switch port is in “dynamic desirable” mode?
dynamic desirable mode
on or dynamic desirable mode
on, auto, or dynamic desirable mode
on, auto, dynamic desirable, or nonegotiate mode

21. Refer to the exhibit. Company HR is adding PC4, a specialized application workstation, to a new company office. The company will add a switch, S3, connected via a trunk link to S2, another switch. For security reasons the new PC will reside in the HR VLAN, VLAN 10. The new office will use the 172.17.11.0/24 subnet. After installation, the existing PCs are unable to access shares on PC4. What is the likely cause?
The switch to switch connection must be configured as an access port to permit access to VLAN 10 on S3.
The new PC is on a different subnet so Fa0/2 on S3 must be configured as a trunk port.
PC4 must use the same subnet as the other HR VLAN PCs.
A single VLAN cannot span multiple switches.

22. What must the network administrator do to remove Fast Ethernet port fa0/1 from VLAN 2 and assign it to VLAN 3?
Enter the no vlan 2 and the vlan 3 commands in global configuration mode.
Enter the switchport access vlan 3 command in interface configuration mode.
Enter the switchport trunk native vlan 3 command in interface configuration mode.
Enter the no shutdown in interface configuration mode to return it to the default configuration and then configure the port for VLAN 3.

Managing the MAC Address Table

Switches use MAC address tables to determine how to forward traffic between ports. These MAC tables include dynamic and static addresses. The figure shows a sample MAC address table from the output of the show mac-address-table command that includes static and dynamic MAC addresses.

Note: The MAC address table was previously referred to as content addressable memory (CAM) or as the CAM table.

Dynamic addresses are source MAC addresses that the switch learns and then ages when they are not in use. You can change the aging time setting for MAC addresses. The default time is 300 seconds. Setting too short an aging time can cause addresses to be prematurely removed from the table. Then, when the switch receives a packet for an unknown destination, it floods the packet to all ports in the same LAN (or VLAN) as the receiving port. This unnecessary flooding can impact performance. Setting too long an aging time can cause the address table to be filled with unused addresses, which prevents new addresses from being learned. This can also cause flooding.

The switch provides dynamic addressing by learning the source MAC address of each frame that it receives on each port, and then adding the source MAC address and its associated port number to the MAC address table. As computers are added or removed from the network, the switch updates the MAC address table, adding new entries and aging out those that are currently not in use.

A network administrator can specifically assign static MAC addresses to certain ports. Static addresses are not aged out, and the switch always knows which port to send out traffic destined for that specific MAC address. As a result, there is no need to relearn or refresh which port the MAC address is connected to. One reason to implement static MAC addresses is to provide the network administrator complete control over access to the network. Only those devices that are known to the network administrator can connect to the network.

To create a static mapping in the MAC address table, use the mac-address-table static <MAC address> vlan {1-4096, ALL} interface interface-id command.

To remove a static mapping in the MAC address table, use the no mac-address-table static <MAC address> vlan {1-4096, ALL} interface interface-id command.
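
For example, a minimal sketch of creating a static entry and adjusting the aging timer (the MAC address, VLAN, interface, and aging time are illustrative values, and newer IOS releases write the command as mac address-table without the hyphen):

! The MAC address, VLAN 10, Fa0/5, and 600 seconds below are illustrative values only.
Switch(config)# mac-address-table static 000A.1B2C.3D4E vlan 10 interface FastEthernet0/5
Switch(config)# mac-address-table aging-time 600
Switch(config)# end
Switch# show mac-address-table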

The maximum size of the MAC address table varies with different switches. For example, the Catalyst 2960 series switch can store up to 8,192 MAC addresses. There are other protocols that may limit the absolute number of MAC addresses available to a switch.

Sequence of Cisco IOS commands

Describe the Boot Sequence

In this topic, you will learn the sequence of Cisco IOS commands that a switch executes from the off state to displaying the login prompt. After a Cisco switch is turned on, it goes through the following boot sequence:

The switch loads the boot loader software. The boot loader is a small program stored in NVRAM and is run when the switch is first turned on.

The boot loader:

Performs low-level CPU initialization. It initializes the CPU registers, which control where physical memory is mapped, the quantity of memory, and its speed.
Performs power-on self-test (POST) for the CPU subsystem. It tests the CPU DRAM and the portion of the flash device that makes up the flash file system.
Initializes the flash file system on the system board.
Loads a default operating system software image into memory and boots the switch. The boot loader finds the Cisco IOS image on the switch by first looking in a directory that has the same name as the image file (excluding the .bin extension). If it does not find it there, the boot loader software searches each subdirectory before continuing the search in the original directory.

The operating system then initializes the interfaces using the Cisco IOS commands found in the operating system configuration file, config.text, stored in the switch flash memory.
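
As a sketch, the boot image and the contents of flash can be inspected, and the image to load can be set explicitly, with commands along these lines (the image filename shown is illustrative only):

Switch# show boot
Switch# dir flash:
Switch# configure terminal
! The image path below is an illustrative filename, not a specific recommendation.
Switch(config)# boot system flash:/c2960-lanbasek9-mz.150-2.SE/c2960-lanbasek9-mz.150-2.SE.bin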

Recovering from a System Crash

The boot loader also provides access into the switch if the operating system cannot be used. The boot loader has a command-line facility that provides access to the files stored on Flash memory before the operating system is loaded. From the boot loader command line you can enter commands to format the flash file system, reinstall the operating system software image, or recover from a lost or forgotten password.
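
For example, the commonly documented password-recovery procedure for a Catalyst 2960 uses boot loader commands similar to the following sketch (exact prompts and filenames depend on the platform):

switch: flash_init
switch: dir flash:
! Renaming config.text prevents the old configuration (and its passwords) from loading.
switch: rename flash:config.text flash:config.text.old
switch: boot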

Symmetric and Asymmetric Switching

In this topic, you will learn the differences between symmetric and asymmetric switching in a network. LAN switching may be classified as symmetric or asymmetric based on the way in which bandwidth is allocated to the switch ports.

Symmetric switching provides switched connections between ports with the same bandwidth, such as all 100 Mb/s ports or all 1000 Mb/s ports. An asymmetric LAN switch provides switched connections between ports of unlike bandwidth, such as a combination of 10 Mb/s, 100 Mb/s, and 1000 Mb/s ports. The figure shows the differences between symmetric and asymmetric switching.

Asymmetric

Asymmetric switching enables more bandwidth to be dedicated to a server switch port to prevent a bottleneck. This allows smoother traffic flows where multiple clients are communicating with a server at the same time. Memory buffering is required on an asymmetric switch. For the switch to match the different data rates on different ports, entire frames are kept in the memory buffer and are moved to the port one after the other as required.

Key components of the Ethernet standard

In this topic, you will learn about key components of the Ethernet standard that play a significant role in the design and implementation of switched networks. You will explore how Ethernet communications function and how switches play a role in the communication process.

CSMA/CD

Ethernet signals are transmitted to every host connected to the LAN using a special set of rules to determine which station can access the network. The set of rules that Ethernet uses is based on the IEEE carrier sense multiple access/collision detect (CSMA/CD) technology. You may recall from CCNA Exploration: Networking Fundamentals that CSMA/CD is only used with half-duplex communication typically found in hubs. Full-duplex switches do not use CSMA/CD.
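
Because CSMA/CD applies only to half-duplex links, one practical check is the duplex setting of a switch port. A minimal sketch, with illustrative interface and speed values:

Switch(config)# interface FastEthernet0/1
Switch(config-if)# duplex full
Switch(config-if)# speed 100
Switch(config-if)# end
Switch# show interfaces FastEthernet0/1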

Carrier Sense

In the CSMA/CD access method, all network devices that have messages to send must listen before transmitting.

If a device detects a signal from another device, it waits for a specified amount of time before attempting to transmit.

When there is no traffic detected, a device transmits its message. While this transmission is occurring, the device continues to listen for traffic or collisions on the LAN. After the message is sent, the device returns to its default listening mode.

Multi-access

If the distance between devices is such that the latency of the signals of one device means that signals are not detected by a second device, the second device may also start to transmit. The media now has two devices transmitting signals at the same time. The messages propagate across the media until they encounter each other. At that point, the signals mix and the messages are destroyed; this is a collision. Although the messages are corrupted, the jumble of remaining signals continues to propagate across the media.

Collision Detection

When a device is in listening mode, it can detect when a collision occurs on the shared media, because all devices can detect an increase in the amplitude of the signal above the normal level.

When a collision occurs, the other devices in listening mode, as well as all the transmitting devices, detect the increase in the signal amplitude. Every device that is transmitting continues to transmit to ensure that all devices on the network detect the collision.

Jam Signal and Random Backoff

When a collision is detected, the transmitting devices send out a jamming signal. The jamming signal notifies the other devices of a collision, so that they invoke a backoff algorithm. This backoff algorithm causes all devices to stop transmitting for a random amount of time, which allows the collision signals to subside.

After the delay has expired on a device, the device goes back into the “listening before transmit” mode. A random backoff period ensures that the devices that were involved in the collision do not try to send traffic again at the same time, which would cause the whole process to repeat. However, during the backoff period, a third device may transmit before either of the two involved in the collision have a chance to re-transmit.

Power over Ethernet

Two other characteristics you want to consider when selecting a switch are Power over Ethernet (PoE) and Layer 3 functionality.

Power over Ethernet

Power over Ethernet (PoE) allows the switch to deliver power to a device over the existing Ethernet cabling. As you can see in the figure, this feature can be used by IP phones and some wireless access points. PoE allows you more flexibility when installing wireless access points and IP phones because you can install them anywhere you can run an Ethernet cable. You do not need to consider how to run ordinary power to the device. You should only select a switch that supports PoE if you are actually going to take advantage of the feature, because it adds considerable cost to the switch.
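
On a PoE-capable Catalyst switch, power delivery can typically be verified and controlled per port with commands such as the following (the interface number is illustrative):

Switch# show power inline
Switch# configure terminal
Switch(config)# interface FastEthernet0/7
! "power inline never" would disable PoE on this port instead.
Switch(config-if)# power inline auto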

Layer 3 Functions

The figure shows some Layer 3 functions that can be provided by switches in a hierarchical network.

Typically, switches operate at Layer 2 of the OSI reference model where they deal primarily with the MAC addresses of devices connected to switch ports. Layer 3 switches offer advanced functionality that will be discussed in greater detail in the later chapters of this course. Layer 3 switches are also known as multilayer switches.
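
As an illustration of that Layer 3 functionality, a multilayer switch that supports IP routing can route between VLANs using switched virtual interfaces; a minimal sketch, with illustrative VLAN numbers and addresses:

Switch(config)# ip routing
Switch(config)# interface vlan 10
Switch(config-if)# ip address 192.168.10.1 255.255.255.0
Switch(config-if)# no shutdown
Switch(config-if)# exit
Switch(config)# interface vlan 20
Switch(config-if)# ip address 192.168.20.1 255.255.255.0
Switch(config-if)# no shutdown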

Access Layer Switch Features

Now that you know which factors to consider when choosing a switch, let us examine which features are required at each layer in a hierarchical network. You will then be able to match the switch specification with its ability to function as an access, distribution, or core layer switch.

Access layer switches facilitate the connection of end node devices to the network. For this reason, they need to support features such as port security, VLANs, Fast Ethernet/Gigabit Ethernet, PoE, and link aggregation.

Port security allows the switch to decide how many or what specific devices are allowed to connect to the switch. All Cisco switches support port layer security. Port security is applied at the access layer. Consequently, it is an important first line of defense for a network. You will learn about port security in Chapter 2.
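
A minimal port security sketch on an access port might look like the following (the maximum address count and violation action are illustrative choices):

Switch(config)# interface FastEthernet0/10
Switch(config-if)# switchport mode access
Switch(config-if)# switchport port-security
Switch(config-if)# switchport port-security maximum 2
Switch(config-if)# switchport port-security mac-address sticky
Switch(config-if)# switchport port-security violation shutdown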

VLANs are an important component of a converged network. Voice traffic is typically given a separate VLAN. In this way, voice traffic can be supported with more bandwidth, more redundant connections, and improved security. Access layer switches allow you to set the VLANs for the end node devices on your network.
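
For example, an access port that carries a data VLAN and a separate voice VLAN can be configured along these lines (VLAN numbers and names are illustrative):

Switch(config)# vlan 20
Switch(config-vlan)# name DATA
Switch(config-vlan)# exit
Switch(config)# vlan 150
Switch(config-vlan)# name VOICE
Switch(config-vlan)# exit
Switch(config)# interface FastEthernet0/3
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 20
Switch(config-if)# switchport voice vlan 150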

Port speed is also a characteristic you need to consider for your access layer switches. Depending on the performance requirements for your network, you must choose between Fast Ethernet and Gigabit Ethernet switch ports. Fast Ethernet allows up to 100 Mb/s of traffic per switch port. Fast Ethernet is adequate for IP telephony and data traffic on most business networks; however, performance is slower than with Gigabit Ethernet ports. Gigabit Ethernet allows up to 1000 Mb/s of traffic per switch port. Most modern devices, such as workstations, notebooks, and IP phones, support Gigabit Ethernet. This allows for much more efficient data transfers, enabling users to be more productive. Gigabit Ethernet does have a drawback: switches supporting Gigabit Ethernet are more expensive.

Another feature requirement for some access layer switches is PoE. PoE dramatically increases the overall price of the switch across all Cisco Catalyst switch product lines, so it should only be considered when voice convergence is required or wireless access points are being implemented, and power is difficult or expensive to run to the desired location.

Link aggregation is another feature that is common to most access layer switches. Link aggregation allows the switch to use multiple links simultaneously. Access layer switches take advantage of link aggregation when aggregating bandwidth up to distribution layer switches.

Because the uplink connection between the access layer switch and the distribution layer switch is typically the bottleneck in communication, the internal forwarding rate of access layer switches does not need to be as high as the link between the distribution and access layer switches. Characteristics such as the internal forwarding rate are less of a concern for access layer switches because they only handle traffic from the end devices and forward it to the distribution layer switches.

Hierarchical Network Design Principles

Just because a network seems to have a hierarchical design does not mean that the network is well designed. These simple guidelines will help you differentiate between well-designed and poorly designed hierarchical networks. This section is not intended to provide you with all the skills and knowledge you need to design a hierarchical network, but it offers you an opportunity to begin to practice your skills by transforming a flat network topology into a hierarchical network topology.

Network Diameter

When designing a hierarchical network topology, the first thing to consider is network diameter. Diameter is usually a measure of distance, but in this case, we are using the term to measure the number of devices. Network diameter is the number of devices that a packet has to cross before it reaches its destination. Keeping the network diameter low ensures low and predictable latency between devices.

In the figure, PC1 communicates with PC3. There could be up to six interconnected switches between PC1 and PC3. In this case, the network diameter is 6. Each switch in the path introduces some degree of latency. Network device latency is the time spent by a device as it processes a packet or frame. Each switch has to determine the destination MAC address of the frame, check its MAC address table, and forward the frame out the appropriate port. Even though that entire process happens in a fraction of a second, the time adds up when the frame has to cross many switches.

In the three-layer hierarchical model, Layer 2 segmentation at the distribution layer practically eliminates network diameter as an issue. In a hierarchical network, network diameter is always going to be a predictable number of hops between the source and destination devices.

Bandwidth Aggregation

Each layer in the hierarchical network model is a possible candidate for bandwidth aggregation. Bandwidth aggregation is the practice of considering the specific bandwidth requirements of each part of the hierarchy. After bandwidth requirements of the network are known, links between specific switches can be aggregated, which is called link aggregation. Link aggregation allows multiple switch port links to be combined so as to achieve higher throughput between switches. Cisco has a proprietary link aggregation technology called EtherChannel, which allows multiple Ethernet links to be consolidated. A discussion of EtherChannel is beyond the scope of this course. To learn more, visit:

http://www.cisco.com/en/US/tech/tk389/tk213/tsd_technology_support_protocol_home.html


In the figure, computers PC1 and PC3 require a significant amount of bandwidth because they are used for developing weather simulations. The network manager has determined that the access layer switches S1, S3, and S5 require increased bandwidth. Following up the hierarchy, these access layer switches connect to the distribution switches D1, D2, and D4. The distribution switches connect to core layer switches C1 and C2. Notice how specific links on specific ports in each switch are aggregated. In this way, increased bandwidth is provided in a targeted, specific part of the network. Note that in this figure, aggregated links are indicated by two dotted lines with an oval tying them together. In other figures, aggregated links are represented by a single, dotted line with an oval.
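
As a sketch of what such aggregation can look like with EtherChannel (the interfaces, channel-group number, and PAgP desirable mode are illustrative; LACP or static mode could be used instead, depending on the platform):

Switch(config)# interface range FastEthernet0/23 - 24
Switch(config-if-range)# channel-group 1 mode desirable
Switch(config-if-range)# end
Switch# show etherchannel summary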

Redundancy

Redundancy is one part of creating a highly available network. Redundancy can be provided in a number of ways. For example, you can double up the network connections between devices, or you can double the devices themselves. This chapter explores how to employ redundant network paths between switches. A discussion on doubling up network devices and employing special network protocols to ensure high availability is beyond the scope of this course. For an interesting discussion on high availability, visit:

http://www.cisco.com/en/US/products/ps6550/products_ios_technology_home.html.

Implementing redundant links can be expensive. Imagine if every switch in each layer of the network hierarchy had a connection to every switch at the next layer. It is unlikely that you will be able to implement redundancy at the access layer because of the cost and limited features in the end devices, but you can build redundancy into the distribution and core layers of the network.

In the figure, redundant links are shown at the distribution layer and core layer. At the distribution layer, there are two distribution layer switches, the minimum required to support redundancy at this layer. The access layer switches, S1, S3, S4, and S6, are cross-connected to the distribution layer switches. This protects your network if one of the distribution switches fails. In case of a failure, the access layer switch adjusts its transmission path and forwards the traffic through the other distribution switch.

Some network failure scenarios can never be prevented, for example, if the power goes out in the entire city, or the entire building is demolished because of an earthquake. Redundancy does not attempt to address these types of disasters. To learn more about how a business can continue to work and recover from a disaster, visit: http://www.cisco.com/en/US/netsol/ns516/networking_solutions_package.html

Start at the Access Layer

Imagine that a new network design is required. Design requirements, such as the level of performance or redundancy necessary, are determined by the business goals of the organization. Once the design requirements are documented, the designer can begin selecting the equipment and infrastructure to implement the design.

When you start the equipment selection at the access layer, you can ensure that you accommodate all network devices needing access to the network. After you have all end devices accounted for, you have a better idea of how many access layer switches you need. The number of access layer switches, and the estimated traffic that each generates, helps you to determine how many distribution layer switches are required to achieve the performance and redundancy needed for the network. After you have determined the number of distribution layer switches, you can identify how many core switches are required to maintain the performance of the network.

Benefits of a Hierarchical Network

There are many benefits associated with hierarchical network designs.

Scalability

Hierarchical networks scale very well. The modularity of the design allows you to replicate design elements as the network grows. Because each instance of the module is consistent, expansion is easy to plan and implement. For example, if your design model consists of two distribution layer switches for every 10 access layer switches, you can continue to add access layer switches until you have 10 access layer switches cross-connected to the two distribution layer switches before you need to add additional distribution layer switches to the network topology. Also, as you add more distribution layer switches to accommodate the load from the access layer switches, you can add additional core layer switches to handle the additional load on the core.

Redundancy

As a network grows, availability becomes more important. You can dramatically increase availability through easy redundant implementations with hierarchical networks. Access layer switches are connected to two different distribution layer switches to ensure path redundancy. If one of the distribution layer switches fails, the access layer switch can switch to the other distribution layer switch. Additionally, distribution layer switches are connected to two or more core layer switches to ensure path availability if a core switch fails. The only layer where redundancy is limited is at the access layer. Typically, end node devices, such as PCs, printers, and IP phones, do not have the ability to connect to multiple access layer switches for redundancy. If an access layer switch fails, just the devices connected to that one switch would be affected by the outage. The rest of the network would continue to function unaffected.

Performance

Communication performance is enhanced by avoiding the transmission of data through low-performing, intermediary switches. Data is sent through aggregated switch port links from the access layer to the distribution layer at near wire speed in most cases. The distribution layer then uses its high performance switching capabilities to forward the traffic up to the core, where it is routed to its final destination. Because the core and distribution layers perform their operations at very high speeds, there is no contention for network bandwidth. As a result, properly designed hierarchical networks can achieve near wire speed between all devices.

Security

Security is improved and easier to manage. Access layer switches can be configured with various port security options that provide control over which devices are allowed to connect to the network. You also have the flexibility to use more advanced security policies at the distribution layer. You may apply access control policies that define which communication protocols are deployed on your network and where they are permitted to go. For example, if you want to limit the use of HTTP to a specific user community connected at the access layer, you could apply a policy that blocks HTTP traffic at the distribution layer. Restricting traffic based on higher layer protocols, such as IP and HTTP, requires that your switches are able to process policies at that layer. Some access layer switches support Layer 3 functionality, but it is usually the job of the distribution layer switches to process Layer 3 data, because they can process it much more efficiently.
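
For example, a policy that blocks HTTP from one access layer subnet could be sketched as an extended ACL applied at a distribution layer Layer 3 interface (the addresses, ACL number, and VLAN interface are illustrative):

Switch(config)# access-list 101 deny tcp 192.168.10.0 0.0.0.255 any eq www
Switch(config)# access-list 101 permit ip any any
Switch(config)# interface vlan 10
Switch(config-if)# ip access-group 101 in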

Manageability

Manageability is relatively simple on a hierarchical network. Each layer of the hierarchical design performs specific functions that are consistent throughout that layer. Therefore, if you need to change the functionality of an access layer switch, you could repeat that change across all access layer switches in the network because they presumably perform the same functions at their layer. Deployment of new switches is also simplified because switch configurations can be copied between devices with very few modifications. Consistency between the switches at each layer allows for rapid recovery and simplified troubleshooting. In some special situations, there could be configuration inconsistencies between devices, so you should ensure that configurations are well documented so that you can compare them before deployment.