Monday, May 11, 2009

CCNP3. MODULE 3: IMPLEMENTING SPANNING TREE

Overview

This module introduces the fundamentals of Spanning Tree Protocol (STP) in a switched network. It explains how the root bridge and its backup are elected, and also covers features for enhancing STP performance, such as Rapid STP (RSTP) and Multiple STP (MSTP). In addition, you will learn how EtherChannel is configured and how it interoperates with STP. The module provides guidelines on improving STP resiliency when network faults occur.


3.1 Describing STP

3.1.1 Describing Transparent Bridges

Switches have replaced bridges as the network device for implementing transparent bridging in modern networks. The basic functionality of a switch is identical to that of a transparent bridge on a per-VLAN basis. To understand STP, it is helpful to look at the behavior of a transparent bridge without spanning tree.

A transparent bridge has these characteristics:

  • It must not modify the frames that are forwarded.

  • It learns addresses by “listening” on a port for the source address of a device. When a source MAC address is read in frames coming into a specific port, the bridge assumes that the frames destined for that MAC address can be sent out of that port. The bridge then builds a table that records which source addresses are seen on which port. A bridge is always listening and learning MAC addresses in this manner.

  • It must forward all broadcasts out of all ports, except for the port that initially received the broadcast.

  • If a destination address is unknown to the bridge, it forwards the frame out of all ports, except for the port that initially received the frame. This is called unicast flooding.
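The learning, flooding, and forwarding rules above can be sketched in a few lines of Python. This is an illustrative model only, not switch code; the port numbers and MAC strings are hypothetical.

```python
# A minimal model of transparent-bridge behavior: learn the source,
# flood unknown destinations and broadcasts, filter or forward the rest.
BROADCAST = "ff:ff:ff:ff:ff:ff"

def forward(table, ports, in_port, src, dst):
    table[src] = in_port                              # always learn the source
    if dst == BROADCAST or dst not in table:
        return [p for p in ports if p != in_port]     # flood out all other ports
    out = table[dst]
    return [] if out == in_port else [out]            # filter or forward

table = {}
ports = [1, 2, 3]
assert forward(table, ports, 1, "A", "B") == [2, 3]   # unknown destination: flood
assert forward(table, ports, 2, "B", "A") == [1]      # learned earlier: forward
```

Note that the bridge never modifies the frame itself; the model only decides which ports a frame exits.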

Transparent bridging must be transparent to the devices on the network: end stations require no configuration, and the operation of the bridging protocol is not directly visible to them.

As with traditional shared Ethernet, transparent bridges inherently lack the capability to provide redundancy. STP provides a mechanism in the Ethernet transparent bridge environment to discover the Layer 2 topology dynamically and to ensure that there is only one path through the network. Without STP, there is no way to make a transparent bridge environment redundant. STP also protects a network against accidental miscabling because it prevents unwanted bridging loops.

Note:
The spanning tree algorithm is implemented in other media types, such as Token Ring. STP has a different purpose and function in Token Ring than in Ethernet, because bridging loops can be desirable in Token Ring.

3.1.2 Identifying Traffic Loops

A bridge loop occurs when redundant paths exist and there is no Layer 2 mechanism, such as a time-to-live field, to stop a frame from circulating endlessly. In the figure, station A has two potential paths to station B via the two intermediate bridges.

The figure describes what happens when station A sends frames to station B and no provision has been made for handling the redundant paths.


3.1.3 Explaining a Loop Free Network

In a loop free network, Layer 2 broadcast storms and flooded unicast storms cannot occur. A loop free topology can be achieved manually by shutting down or disconnecting all redundant links between bridges. However, this leaves no redundancy in the network and requires manual intervention in the event of a link failure.

STP resolves this problem: If there are alternative links to a destination on a switch, only one link is used to forward data. The switch ports associated with the alternative paths remain aware of the network topology and forward frames over an alternative link if a failure occurs on a primary link.

The spanning tree algorithm (STA) runs on each switch to activate or block redundant links. To find the redundant links, the STA chooses a reference point in the network and determines if there are redundant paths to that reference point. If the STA finds a redundant path, it chooses which path forwards frames and which paths are blocked. This effectively severs the redundant links within the network until they are needed when the primary link toward the reference point fails.

Spanning tree standards often refer to a “bridge,” but it is likely that all the devices exchanging spanning tree information are Layer 2 switches.


3.1.4 Describing the 802.1D Spanning Tree Protocol


With 802.1D STP, switches reconfigure the paths over which they forward frames, thereby creating a loop free path when there are redundant switch paths through the network. This is accomplished by forwarding traffic over specific ports and by blocking traffic from being forwarded out of other ports. STP prevents loops by using the following mechanisms:

  • STP communicates Layer 2 information between adjacent switches by exchanging bridge protocol data unit (BPDU) messages.

  • A single root bridge is chosen to serve as the reference point from which a loop free topology is built for all switches exchanging BPDUs.

  • Each switch, except for the root bridge, selects a root port that provides the best path to the root bridge.

  • In a triangular design similar to the one in the figure, on the link between the two nonroot switch ports, a port on one switch becomes a designated port, and the port on the other switch is in a blocking state and does not forward frames. This effectively breaks any loop. Typically, the designated port is on the switch with the best path to the root bridge.

STP sends BPDUs out of every port of the bridge.

The information provided in a BPDU includes the following:

  • Root ID: The lowest bridge ID (BID) in the topology

  • Cost of path: Cost of all links from the transmitting switch to the root bridge

  • BID: BID of the transmitting switch

  • Port ID: Transmitting switch port ID

  • STP timer values: Maximum age, hello time, forward delay

BPDUs contain the required information for STP configuration. The Type field for the BPDU message is 0x00, and it uses the multicast MAC address 01-80-C2-00-00-00.


3.1.5 Describing the Root Bridge

STP uses a root bridge, root ports, and designated ports to establish a loop free path through the network. The first step in creating a loop free spanning tree is to select a root bridge to be the reference point that all switches use to establish forwarding paths. The STP topology is converged after a root bridge has been selected, and each bridge has selected its root port, designated bridge, and the participating ports. STP uses BPDUs as it transitions port states to achieve convergence.

Spanning tree elects a root bridge in each broadcast domain on the LAN. Path calculation through the network is based on the root bridge. The bridge is selected using the bridge ID (BID), which consists of a 2-byte Priority field plus a 6-byte MAC address. In spanning tree, lower BID values are preferred. The Priority field value helps determine which bridge is going to be the root and can be manually altered. In a default configuration, the Priority field is set at 32768. When the default Priority field is the same for all bridges, selecting the root bridge is based on the lowest MAC address.

The root bridge maintains the stability of the forwarding paths between all switches for a single STP instance. A spanning tree instance exists when all switches exchanging BPDUs and participating in spanning tree negotiation are associated with a single root. If this is done for all VLANs, it is called a Common Spanning Tree (CST) instance. There is also a Per VLAN Spanning Tree (PVST) implementation that provides one instance, and therefore one root bridge, for each VLAN.

The BID and root ID are each 8-byte fields carried in a BPDU, and these values are used to complete the root bridge election process. Each switch carries its unique BID in the BPDUs that it sends and identifies the root bridge by evaluating the Root ID field in the BPDUs that it receives.

When a switch first boots and begins sending BPDUs, it has no knowledge of a root ID, so it populates the Root ID field of outbound BPDUs with its own BID.

The switch with the lowest numerical BID assumes the role of root bridge for that spanning tree instance. If a switch receives BPDUs with a lower BID than its own, it places the lowest value into the Root ID field of its outbound BPDUs.

Spanning tree operation requires that each switch have a unique BID. In the original 802.1D standard, the BID was composed of the Priority Field and the MAC address of the switch, and all VLANs were represented by a CST. Because PVST requires that a separate instance of spanning tree run for each VLAN, the BID field is required to carry VLAN ID (VID) information, which is accomplished by reusing a portion of the Priority field as the extended system ID.

To accommodate the extended system ID, the original 802.1D 16-bit Bridge Priority field is split into two fields, resulting in these components in the BID:

  • Bridge Priority: A 4-bit field that carries the bridge priority. Because of the limited bit count, priority is conveyed in increments of 4096 rather than in increments of 1, as it would be in a full 16-bit field. The default priority, in accordance with IEEE 802.1D, is 32,768, which is the midrange value.

  • Extended System ID: A 12-bit field that carries the VID for PVST.

  • MAC address: A 6-byte field with the MAC address of a single switch.

By virtue of the MAC address, a BID is always unique. When the priority and extended system ID are appended to the switch MAC address, each VLAN on the switch can be represented by a unique BID.
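As a sketch of this comparison, the BID can be modeled as one large number whose high-order 16 bits are the priority plus extended system ID and whose low-order 48 bits are the MAC address. The MAC values below are hypothetical, and this is an illustrative model rather than actual switch logic.

```python
def bid(priority, vlan_id, mac):
    # 4-bit priority (a multiple of 4096) and the 12-bit extended system ID
    # share the upper 16 bits; the 48-bit MAC address fills the rest.
    assert priority % 4096 == 0 and 0 <= vlan_id < 4096
    return ((priority + vlan_id) << 48) | mac

# Two switches at the default priority (32768) on VLAN 1:
x = bid(32768, 1, 0x0C0011111111)
y = bid(32768, 1, 0x0C0022222222)

# Priorities tie, so the lower MAC address -- switch X -- wins the election.
assert min(x, y) == x

# Any configured priority below the default beats any MAC address:
assert bid(24576, 1, 0xFFFFFFFFFFFF) < bid(32768, 1, 0x000000000001)
```

Note that the default priority plus the VLAN 1 system ID yields the 0x8001 value seen in the BID example later in this section.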

If no priority has been configured, every switch has the same default priority and the election of the root for each VLAN is based on the MAC address. This is a fairly random means of selecting the ideal root bridge and, for this reason, it is advisable to assign a lower priority to the switch that should serve as root bridge.

Only four bits are used to set the bridge priority. Because of the limited bit count, priority is configurable only in increments of 4096.

A switch responds with the possible priority values if an incorrect value is entered:

Switch(config)#spanning-tree vlan 1 priority 1234
% Bridge Priority must be in increments of 4096.
% Allowed values are:
0 4096 8192 12288 16384 20480 24576 28672
32768 36864 40960 45056 49152 53248 57344 61440

If no priority has been configured, every switch will have the same default priority of 32768. Assuming all other switches are at default priority, the spanning-tree vlan vlan-id root primary command sets a value of 24576. Also, assuming all other switches are at default priority, the spanning-tree vlan vlan-id root secondary command sets a value of 28672.
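The sixteen values in the switch's error message are simply the multiples of 4096 that fit in 16 bits. A quick check (illustrative only) confirms the values that root primary and root secondary use under default conditions:

```python
# The 4-bit priority field yields exactly sixteen configurable values.
allowed = [n * 4096 for n in range(16)]

assert allowed[0] == 0 and allowed[-1] == 61440
assert 1234 not in allowed        # why the switch rejects "priority 1234"

# With every other switch at the default 32768, "root primary" sets 24576
# and "root secondary" sets 28672 -- both lower than the default:
assert 24576 in allowed and 28672 in allowed
assert 24576 < 28672 < 32768
```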

The switch with the lowest BID becomes the root bridge for a VLAN. Specific configuration commands are used to determine which switch will become the root bridge.

A Cisco Catalyst switch running PVST maintains an instance of spanning tree for each active VLAN that is configured on the switch. A unique BID is associated with each instance. For each VLAN, the switch with the lowest BID becomes the root bridge for that VLAN. Whenever the bridge priority changes, the BID also changes. This results in the recomputation of the root bridge for the VLAN.

To configure a switch to become the root bridge for a specified VLAN, use the spanning-tree vlan vlan-ID root primary command.


CAUTION:

Spanning tree commands take effect immediately, so network traffic is disrupted while the reconfiguration occurs.


A secondary root is a switch that may become the root bridge for a VLAN if the primary root bridge fails. To configure a switch as the secondary root bridge for the VLAN, use the command spanning-tree vlan vlan-ID root secondary. Assuming that the other bridges in the VLAN retain their default STP priority, this switch will become the root bridge in the event that the primary root bridge fails. This command can be executed on more than one switch to configure multiple backup root bridges.

BPDUs are exchanged between switches, and the analysis of the BID and root ID information from those BPDUs determines which bridge is selected as the root bridge.

In the example shown, both switches have the same priority for the same VLAN. The switch with the lowest MAC address is elected as the root bridge. In the example, switch X is the root bridge for VLAN 1, with a BID of 0x8001:0c0011111111.


3.1.6 Describing Port Roles

On a nonroot bridge, the spanning tree determines each port’s role in the topology and the most desirable forwarding path for data frames as the switch receives BPDUs on the ports. There are four 802.1D port roles.

Each Layer 2 port on a switch running STP exists in one of these five port states:

  • Blocking: The Layer 2 port is a nondesignated port and does not participate in frame forwarding. The port receives BPDUs to determine the location and root ID of the root switch and which port roles (root, designated, or nondesignated) each switch port should assume in the final active STP topology. By default, the port spends 20 seconds in this state (max age).

  • Listening: Spanning tree has determined that the port can participate in frame forwarding according to the BPDUs that the switch has received. At this point, the switch port is receiving BPDUs and also transmitting its own BPDUs and informing adjacent switches that the switch port is preparing to participate in the active topology. By default, the port spends 15 seconds in this state (forward delay).

  • Learning: The Layer 2 port prepares to participate in frame forwarding and begins to populate the CAM table. The port is still sending and receiving BPDUs. By default, the port spends 15 seconds in this state (forward delay).

  • Forwarding: The Layer 2 port is considered part of the active topology. It forwards frames and also sends and receives BPDUs.

  • Disabled: This is not really an STP state; rather it is the state resulting from administratively shutting down a switch port. In this state, the Layer 2 port does not participate in spanning tree and does not forward frames.

STP uses timers to determine how long to transition ports. STP also uses timers to determine the health of neighbor bridges and how long to cache MAC addresses in the bridge table.

The timers operate as follows:

  • Hello timer: Determines how often the root bridge sends configuration BPDUs. The default is 2 seconds.

  • Maximum Age (Max Age): Tells the bridge how long to keep ports in the blocking state before listening. The default is 20 seconds.

  • Forward Delay (Fwd Delay): Determines how long to stay in the listening state before going to the learning state, and how long to stay in the learning state before forwarding. The default is 15 seconds.
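These defaults determine STP's worst-case convergence time; the arithmetic behind the often-quoted 30-to-50-second range is simply:

```python
hello, max_age, forward_delay = 2, 20, 15   # default 802.1D timers (seconds)

# An indirectly detected failure: a blocked port must first age out its
# stored BPDU information (max age), then pass through listening and learning:
indirect = max_age + 2 * forward_delay
assert indirect == 50

# A directly detected link failure skips the max-age wait:
direct = 2 * forward_delay
assert direct == 30
```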

The root bridge informs the nonroot bridges of the time intervals to use and the STP timers can be tuned based on network size. The default parameters give STP ample opportunity to ensure a loop-free topology. Mistuning the parameters can cause serious network instability.

Nonroot bridges place various ports in their proper roles by listening to BPDUs as they come in on all ports. Receiving BPDUs on multiple ports indicates a redundant path to the root bridge.

The switch looks at the following components in the BPDU to determine which switch ports forward data and which block data:

  • Lowest path cost

  • Lowest sender BID

  • Lowest sender port ID

The switch looks at the path cost first, which is calculated from the cost of each link that the BPDU has traversed. Ports with the lowest cost are eligible to be placed in forwarding mode. All other ports that are receiving BPDUs remain in blocking mode.

If the path cost and sender BID are equal, as with parallel links between two switches, the switch uses the port ID. In this case, the port with the lowest port ID forwards data frames, and all other ports continue to block data frames.
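Because the comparison is strictly ordered (cost, then sender BID, then sender port ID), it can be modeled as a lexicographic tuple comparison. The values below are hypothetical:

```python
# (root path cost, sender bridge ID, sender port ID) -- the lowest tuple wins.
bpdu_a = (19, 0x80010C0011111111, 1)
bpdu_b = (19, 0x80010C0011111111, 2)   # parallel link from the same sender

# Cost and sender BID tie, so the lowest port ID breaks the tie:
best = min(bpdu_a, bpdu_b)
assert best == bpdu_a   # the port hearing bpdu_b stays blocking
```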

Each bridge advertises the spanning tree path cost in the BPDU. This spanning tree path cost is the cumulative cost of all the links from the root bridge to the switch sending the BPDU. The receiving switch uses this cost to determine the best path to the root bridge. The lowest cost is considered to be the best path.

Port cost values per link are shown in the table in the Revised IEEE Spec column. The lower values are associated with higher bandwidth and, therefore, are the more desirable paths. This revised specification uses a nonlinear scale with port cost values. In the previous IEEE specification, the cost value was calculated based on Gigabit Ethernet being the maximum Ethernet bandwidth, with an associated value of 1, from which all other values were derived in a linear manner.

In the figure, switch Y receives a BPDU from the root bridge (switch X) on its switch port on the Fast Ethernet segment, and another BPDU on its switch port on the Ethernet segment. The root path cost in both cases is zero. The local path cost on the Fast Ethernet switch port is 19, while the local path cost on the Ethernet switch port is 100. As a result, the switch port on the Fast Ethernet segment has the lowest path cost to the root bridge and is elected as the root port for switch Y.
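The root-port choice in this example can be reproduced from the revised IEEE cost values. Only the two relevant costs appear below, and the segment names are illustrative:

```python
port_cost = {"Ethernet": 100, "FastEthernet": 19}   # revised IEEE values

# Root path cost as seen by switch Y = cost advertised by the root (0)
# plus the cost of the ingress port:
candidates = {name: 0 + cost for name, cost in port_cost.items()}

root_port = min(candidates, key=candidates.get)
assert root_port == "FastEthernet"     # cost 19 beats cost 100
```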

STP selects one designated port per segment to forward traffic. Other switch ports on the segment typically become nondesignated ports and continue blocking, or they could be a root port and continue forwarding, as shown in the figures.

The nondesignated ports receive BPDUs but block data traffic and do not forward data traffic to prevent loops. The switch port on the segment with the lowest path cost to the root bridge is elected as the designated port. If multiple switch ports on a switch have the same path cost and are connecting to the same neighbor switch, the switch port with the lowest sender port ID becomes the designated port.

Because ports on the root bridge all have a root path cost of zero, all ports on the root bridge are designated ports.

The figure depicts a scenario with switches running STP and exchanging information. This exchange yields the following results:

  • Election of a root bridge as a Layer 2 topology point of reference

  • Determination of the best path to the root bridge from each switch

  • Election of a designated switch and corresponding designated port for every switched segment

  • Removal of loops in the switched network by transitioning some switch links to a blocked state

  • Determination of the “active topology” for each instance or VLAN running STP

The active topology is the final set of communication paths that are created by switch ports forwarding frames. After the active topology has been established, the switched network must reconfigure the active topology using Topology Change Notifications (TCNs) if a link failure occurs.

A TCN BPDU is generated when a bridge discovers a change in topology, usually because of a link failure, a bridge failure, or a port transitioning to the forwarding state. The TCN BPDU has its Type field set to 0x80 and is forwarded on the root port toward the root bridge. The upstream bridge acknowledges the BPDU with a Topology Change Acknowledgment (TCA). In the Flags field, the least significant bit is the TCN bit, and the most significant bit is the TCA bit.
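A small sketch of the Type values and flag bit positions described above (illustrative constants, not a full BPDU parser):

```python
CONFIG_BPDU_TYPE = 0x00  # Type field value of a configuration BPDU
TCN_BPDU_TYPE = 0x80     # Type field value of a TCN BPDU

TC_FLAG  = 0x01  # least significant bit of the Flags field: Topology Change
TCA_FLAG = 0x80  # most significant bit: Topology Change Acknowledgment

# A configuration BPDU acknowledging a TCN sets the TCA bit:
flags = TCA_FLAG | TC_FLAG
assert flags & TC_FLAG and flags & TCA_FLAG
```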

The bridge sends this message to its designated bridge, which is its closest neighbor toward the root (or the root itself, if the bridge is directly connected to it). The designated bridge acknowledges the topology change back to the sending neighbor and sends the message toward its own designated bridge. This process repeats until the root bridge gets the message. This is how the root learns about topology changes in the network.

When a topology change occurs, the root sends messages throughout the tree so that the content addressable memory (CAM) tables can adjust and provide a new path for the end host devices.


3.1.7 Explaining Enhancements to STP

The 802.1D STP standard was developed long before VLANs were introduced and has some limitations that the Cisco proprietary PVST addresses. PVST allows separate instances of spanning tree and includes Cisco proprietary features, such as PortFast and UplinkFast, which provide much faster convergence.

The 802.1Q standard has defined standards-based technologies for handling VLANs. To reduce the complexity of this standard, the 802.1 committee specified only a single instance of spanning tree for all VLANs. Not only does this provide a considerably less flexible approach than Cisco’s PVST, but it also creates an interoperability problem. To address both these issues, Cisco introduced PVST+ in version 4.1 on the Cisco Catalyst 5000 Series (all Cisco Catalyst 4000 and 6000 series switches support PVST+). PVST+ allows the two schemes to interoperate in a seamless and transparent manner in almost all topologies and configurations.

There are both advantages and disadvantages to using a single spanning tree. On the upside, it allows switches to be simpler in design and place a lighter load on the CPU. On the downside, a single spanning tree precludes load balancing and can lead to incomplete connectivity in certain VLANs (the single STP VLAN might select a link that is not included in other VLANs). Given these tradeoffs, most network designers have concluded that the downsides of having one spanning tree outweigh the benefits.

Two new IEEE standards, RSTP (802.1w) and MSTP (802.1s), improve on the original 802.1D STP standard and provide similar functionality to the Cisco proprietary features. Rapid Spanning Tree Protocol (RSTP) provides much faster convergence, while Multiple Spanning Tree Protocol (MSTP) allows for multiple instances of spanning tree.

Per VLAN Rapid Spanning Tree (PVRST) allows RSTP to be implemented on a per-VLAN basis, giving faster convergence while retaining the per-VLAN instances of the Cisco proprietary PVST.

Spanning tree PortFast causes an interface configured as a Layer 2 access port to transition from the blocking state to the forwarding state immediately, bypassing the listening and learning states. You can use PortFast on Layer 2 access ports that are connected to a single workstation or a server. If an interface configured with PortFast receives a BPDU, a feature called BPDU guard can protect the network by placing the port in the error-disabled state.

CAUTION:

Because the purpose of PortFast is to minimize the time that access ports must wait for spanning tree to converge, it should be used only on access ports. If you enable PortFast on a port connecting to another switch, you risk creating a spanning tree loop.


One figure lists the commands used to implement and verify PortFast on an interface; another figure describes the commands.

The documents listed in the figure are available on the IEEE Web site.


3.2 Implementing RSTP

3.2.1 Describing the Rapid Spanning Tree Protocol


The immediate consideration with STP is convergence time. Depending on the type of failure, it takes anywhere from 30 to 50 seconds to converge the network. RSTP helps with convergence issues that plague legacy STP. RSTP has additional features similar to UplinkFast and BackboneFast that offer better recovery at Layer 2.

RSTP is based on the IEEE 802.1w standard. Numerous differences exist between RSTP and STP:

  • RSTP requires a full-duplex point-to-point connection between adjacent switches to achieve fast convergence. Half duplex generally denotes a shared medium in which multiple hosts share the same wire; a point-to-point connection cannot reside in this environment. As a result, RSTP cannot achieve fast convergence in half-duplex mode.

  • STP and RSTP also have port designation differences. RSTP has alternate and backup port roles, which are absent from the STP environment.

  • Ports not participating in spanning tree are called edge ports. An edge port can be statically configured with the PortFast parameter. The edge port immediately becomes a non-edge port if a BPDU is heard on the port.

  • Non-edge ports participate in the spanning tree algorithm, and only non-edge ports generate topology changes (TCs) on the network when transitioning to the forwarding state. TCs are not generated for any other RSTP state. In legacy STP, TCNs were generated for any active port that was not configured for PortFast.

RSTP speeds the recalculation of the spanning tree when the Layer 2 network topology changes. It redefines STP port roles and states, and the BPDUs.

RSTP is proactive and therefore negates the need for the 802.1D delay timers. RSTP (802.1w) supersedes 802.1D while retaining backward compatibility. Much of the 802.1D terminology remains, and most parameters are unchanged. In addition, 802.1w is capable of reverting to 802.1D operation to interoperate with legacy switches on a per-port basis.

The RSTP BPDU format is the same as the IEEE 802.1D BPDU format, except that the Version field is set to 2 to indicate RSTP, and the Flags field makes use of all 8 bits.

In a switched domain, there can be only one forwarding path toward a single reference point; this is the root bridge. The RSTP spanning tree algorithm (STA) elects a root bridge in exactly the same way as 802.1D elects a root.

However, there are critical differences that make RSTP the preferred protocol for preventing Layer 2 loops in a switched network environment. Many of the differences stem from the Cisco-proprietary enhancements, which are transparent and integrated into the protocol at a low level. These enhancements, such as BPDUs carrying and sending information about port roles only to neighbor switches, require no additional configuration, and generally perform better than the Cisco-proprietary 802.1D enhancements.

Because the RSTP and Cisco-proprietary enhancements are functionally similar, features such as UplinkFast and BackboneFast are not compatible with RSTP.


3.2.2 Describing RSTP Port States


RSTP provides rapid convergence following the failure or reestablishment of a switch, switch port, or link. An RSTP topology change causes the appropriate switch ports to transition to the forwarding state through either explicit handshakes or a proposal and agreement process and synchronization.

With RSTP, the role of a port is separated from the state of a port. For example, a designated port could be in the discarding state temporarily, even though its final state is to be forwarding.

RSTP port states correspond to the three basic operations of a switch port: discarding, learning, and forwarding.

The figure describes the characteristics of the RSTP port states. In all port states, a port accepts and processes BPDU frames.

Another figure compares STP and RSTP port states.
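The state comparison can be summarized as a simple mapping from the five 802.1D states to the three RSTP operations (the 802.1D disabled state folds into discarding as well):

```python
# 802.1D port state -> corresponding 802.1w (RSTP) port state
rstp_state = {
    "disabled":   "discarding",
    "blocking":   "discarding",
    "listening":  "discarding",
    "learning":   "learning",
    "forwarding": "forwarding",
}

# Five legacy states collapse into the three RSTP operations:
assert set(rstp_state.values()) == {"discarding", "learning", "forwarding"}
```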


3.2.3 Describing RSTP Port Roles


The port role defines the ultimate purpose of a switch port and the way it handles data frames. Port roles and port states can transition independently of each other. The figure depicts the port roles used by RSTP.

Another figure defines the port roles.

Establishing additional port roles allows RSTP to define a standby switch port before a failure or topology change. The alternate port moves to the forwarding state if there is a failure on the designated port for the segment.



CCNP3. MODULE 1: NEW REQUIREMENTS

Overview

This module looks at the need for multilayer switches within Cisco’s overall network design. A review of Intelligent Information Networks (IIN) and Service-Oriented Network Architectures (SONA) lays the groundwork for the course. Additionally, a quick overview of the characteristics of Layer 2 and Layer 3 networks aids in identifying the reasons for using a multilayer switch.
This module begins by discussing operational problems found in non-hierarchical networks at Layers 2 and 3 of the Open Systems Interconnection (OSI) model. The Enterprise Composite Network Model (ECNM) is then introduced, and its features and benefits are explained. Issues that exist in traditionally designed networks can be resolved by applying this state-of-the-art design.

1.1 Introducing Campus Networks

1.1.1 Intelligent Information Network and Service-Oriented Network Architecture

Intelligent Information Network (IIN) encompasses these features:

  • Integration of networked resources and information assets that have been largely unlinked: The current converged networks that integrate voice, video, and data require Information Technology (IT) departments to link the IT infrastructure more closely with the network.

  • Intelligence across multiple products and infrastructure layers: The intelligence built into each component of the network is extended network-wide and applies end-to-end.

  • Active participation of the network in the delivery of services and applications: With added intelligence, IIN makes it possible for the network to actively manage, monitor, and optimize service and application delivery across the entire IT environment.

IIN offers much more than basic connectivity, bandwidth for users, and access to applications. It offers end-to-end functionality and centralized, unified control that promotes true business transparency and agility.

The IIN technology vision offers an evolutionary approach that consists of three phases in which functionality can be added to the infrastructure as required.

  • Integrated transport: All traffic—data, voice, and video—consolidates onto an IP network for secure network convergence. By integrating data, voice, and video transport into a single, standards-based, modular network, organizations can simplify network management and generate enterprise-wide efficiencies. Network convergence also lays the foundation for a new class of IP-enabled applications delivered through Cisco IP Communications solutions.

  • Integrated services: After the network infrastructure has been converged, IT resources can be pooled and shared or “virtualized” to flexibly address the changing needs of the organization. Integrated services help unify common elements, such as storage and data center server capacity. By extending virtualization capabilities to encompass server, storage, and network elements, an organization can transparently use all its resources more efficiently. Business continuity is also enhanced because shared resources across the IIN provide services in the event of a local system failure.

  • Integrated applications: With Application-Oriented Networking (AON) technology, Cisco has entered the third phase of building the IIN. This phase focuses on making the network “application-aware” so that it can optimize application performance and deliver networked applications to users more efficiently. In addition to capabilities such as content caching, load balancing, and application-level security, Cisco AON makes it possible for the network to simplify the application infrastructure by integrating intelligent application message handling, optimization, and security into the existing network.

Using IIN, Cisco is helping organizations address new IT challenges, such as the deployment of service-oriented architectures (SOA), Web services, and virtualization. Cisco Service-Oriented Network Architecture (SONA) is a framework that guides the evolution of enterprise networks to an IIN. SONA provides the following advantages to enterprises:

  • Outlines the path toward the IIN

  • Illustrates how to build integrated systems across a fully converged IIN

  • Improves flexibility and increases efficiency, which results in optimized applications, processes, and resources

Cisco SONA uses the extensive product line services, proven architectures, and experience of Cisco and its partners to help enterprises achieve their business goals.

The Cisco SONA framework shows how integrated systems can allow a dynamic, flexible architecture, and provide for operational efficiency through standardization and virtualization. It brings forth the notion that the network is the common element that connects and enables all components of the IT infrastructure.


Cisco SONA outlines these three layers of the IIN:

  • Network infrastructure layer: Interconnects all IT resources across a converged network foundation. The IT resources include servers, storage, and clients. The network infrastructure layer represents how these resources exist in different places in the network, including the campus, branch, data center, WAN and Metropolitan Area Network (MAN), and teleworker. The objective for customers in this layer is to have anywhere and anytime connectivity.

  • Interactive services layer: Enables efficient allocation of resources to applications and business processes that are delivered through the networked infrastructure. This layer comprises these services:

    • Voice and collaboration

    • Mobility

    • Security and identity

    • Storage

    • Compute

    • Application networking

    • Network infrastructure virtualization

    • Services management

    • Adaptive management

  • Application layer: Includes business applications and collaboration applications. The objective for customers in this layer is to meet business requirements and achieve efficiencies by leveraging the interactive services layer.

1.1.2 Cisco Network Models


Cisco provides an enterprise-wide systems architecture that helps companies protect, optimize, and grow the infrastructure that supports their business processes. The architecture integrates the entire network (campus, data center, WAN, branches, and teleworkers), offering staff secure access to tools, processes, and services.

Cisco provides the following network models with Cisco Enterprise Architecture:

  • Campus architecture: Combines a core infrastructure of intelligent switching and routing with tightly integrated productivity-enhancing technologies, including IP Communications, mobility, and advanced security. The architecture provides the enterprise with high availability through a resilient multilayer design, redundant hardware and software features, and automatic procedures for reconfiguring network paths when failures occur. Multicast provides optimized bandwidth consumption, and quality of service (QoS) prevents oversubscription to ensure that real-time traffic, such as voice and video or critical data, is not dropped or delayed. Integrated security protects against and mitigates the impact of worms, viruses, and other attacks on the network, even at the port level. Cisco enterprise-wide architecture extends support for standards, such as 802.1x and Extensible Authentication Protocol (EAP). It also provides the flexibility to add IP Security (IPSec) and Multiprotocol Label Switching (MPLS) Virtual Private Networks (VPNs), identity and access management, and VLANs to compartmentalize access. This helps improve performance and security and decreases costs. The enterprise campus architecture will be the focus of this course.

  • Data center architecture: Cohesive, adaptive network architecture that supports the requirements for consolidation, business continuance, and security while enabling emerging SOAs, virtualization, and on-demand computing. IT staff can easily provide departmental staff, suppliers, or customers with secure access to applications and resources. This approach simplifies and streamlines management, significantly reducing overhead. Redundant data centers provide backup using synchronous and asynchronous data and application replication. The network and devices offer server and application load balancing to maximize performance. This solution allows enterprises to scale without major changes to the infrastructure.

  • Branch architecture: Enables enterprises to extend head-office applications and services, such as security, IP Communications, and advanced application performance, to thousands of remote locations and users, or to a small group of branches. Cisco integrates security, switching, network analysis, caching, and converged voice and video services into a series of integrated services routers in the branch so that enterprises can deploy new services when they are ready without buying new equipment. This solution provides secure access to voice, mission-critical data, and video applications anywhere, anytime. Advanced network routing, VPNs, redundant WAN links, application content caching, and local IP telephony call processing provide a robust architecture with high levels of resilience for all the branch offices. An optimized network leverages the WAN and LAN to reduce traffic and save bandwidth and operational expenses. Enterprises can easily support branch offices with the ability to centrally configure, monitor, and manage devices located at remote sites, including tools, such as AutoQoS, that proactively resolve congestion and bandwidth issues before they affect network performance.

  • Teleworker architecture: Allows enterprises to securely deliver voice and data services to remote small or home offices over a standard broadband access service, providing a business resiliency solution for the enterprise and a flexible work environment for employees. Centralized management minimizes IT support costs, and robust integrated security mitigates the unique security challenges of this environment. Integrated security and identity-based networking services enable the enterprise to help extend campus security policies to the teleworker. Staff can securely log into the network over an “always-on” VPN and gain access to authorized applications and services from a single cost-effective platform. The productivity can further be enhanced by adding an IP phone, providing cost-effective access to a centralized IP communications system with voice and unified messaging services.

  • WAN architecture: Offers the convergence of voice, video, and data services over a single IP communications network. This approach enables enterprises to cost-effectively span large geographic areas. QoS, granular service levels, and comprehensive encryption options help ensure the secure delivery of high-quality corporate voice, video, and data resources to all corporate sites, enabling staff to work productively and efficiently from any location. Security is provided with multiservice VPNs (IPSec and MPLS) over Layer 2 and Layer 3 WANs, as well as hub-and-spoke and full mesh topologies.

1.1.3 Describing Non-Hierarchical Campus Network Issues



The simplest Ethernet network infrastructure is composed of a single collision and broadcast domain. This type of network is referred to as a “flat” network because any traffic that is transmitted within it is seen by all of the interconnected devices, even if they are not the intended destination of the transmission. The benefit of this type of network is that it is very simple to install and configure, so it is a good fit for home networking and small offices. The downside of a flat network infrastructure is that it does not scale well as demands on the network increase. Following are some of the issues with non-hierarchical networks:

  • Traffic collisions increase as devices are added, reducing network throughput.

  • Broadcast traffic increases as devices are added to the network, causing over-utilization of network resources.

  • Isolating problems on a large flat network can be difficult.


The figure shows the key network hardware devices in a non-hierarchical network and the function of each.


1.1.4 Describing Layer 2 Network Issues


Layer 2 switches can significantly improve performance in a carrier sense multiple access collision detect (CSMA/CD) network when used in place of hubs. This is because each switch port represents a single collision domain, and the device connected to that port does not have to compete with other devices to access the media. Ideally, every host on a given network segment is connected to its own switch port, thus eliminating all media contention as the switch manages network traffic at Layer 2. An additional benefit of Layer 2 switching is that large broadcast domains can be broken up into smaller segments by assigning switch ports to different VLAN segments.

For all their benefits, some drawbacks still exist in non-hierarchical switched networks:

  • If switches are not configured with VLANs, very large broadcast domains may be created.

  • If VLANs are created, traffic cannot move between VLANs using only Layer 2 devices.

  • As the Layer 2 network grows, the potential for bridge loops increases. Therefore, the use of a Spanning Tree Protocol (STP) becomes imperative.
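As a sketch of the VLAN segmentation described above, the following Cisco IOS commands create two VLANs and assign one access port to each. The VLAN IDs, names, and interface numbers are illustrative, not taken from a specific topology:

Switch# configure terminal
Switch(config)# vlan 10
Switch(config-vlan)# name Engineering
Switch(config-vlan)# vlan 20
Switch(config-vlan)# name Marketing
Switch(config-vlan)# exit
Switch(config)# interface fastethernet 0/1
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
Switch(config-if)# interface fastethernet 0/2
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 20
Switch(config-if)# end

Hosts on VLAN 10 and VLAN 20 are now in separate broadcast domains; as noted above, a Layer 3 device is required for traffic to move between them.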


1.1.5 Describing Routed Network Issues


A major limitation of Layer 2 switches is that they cannot switch traffic between Layer 3 network segments (IP subnets for example). Traditionally, this was done using a router. Unlike switches, a router acts as a broadcast boundary and does not forward broadcasts between its interfaces. Additionally, a router provides an optimal path determination function. The router examines each incoming packet to determine which route the packet should take through the network. Also, the router can act as a security device, manage QoS, and apply network policy. Although routers used in conjunction with Layer 2 switches resolve many issues, some concerns still remain:


  • When security or traffic management components, such as access control lists (ACLs), are configured on router interfaces, the network may experience delays as the router processes each packet in software.

  • When routers are introduced into a switched network, end-to-end VLANs are no longer supported because routers terminate the VLAN.

  • Routers are more expensive per interface than Layer 2 switches, so their placement in the network should be well planned. Non-hierarchical networks, by their very nature, require more interconnections and, hence, more routed interfaces.

  • In a non-hierarchical network, the number of router interconnections may result in peering problems between neighboring routers.

  • Because traffic flows are hard to determine, it becomes difficult to predict where hardware upgrades are needed to mitigate traffic bottlenecks.
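For example, a traffic management component of the kind mentioned above might be a simple extended ACL applied inbound on a router interface. The list number, addresses, and interface are illustrative; on a software-based router, every packet arriving on the interface must be checked against each entry, which is a source of the processing delay described above:

Router# configure terminal
Router(config)# access-list 101 permit tcp 10.1.10.0 0.0.0.255 any eq 80
Router(config)# access-list 101 deny ip any any
Router(config)# interface fastethernet 0/0
Router(config-if)# ip access-group 101 in
Router(config-if)# end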

1.1.6 Multilayer Switching


Multilayer switching is hardware-based switching and routing integrated into a single platform. In some cases, frame (Layer 2) and packet (Layer 3) forwarding operations are handled by the same specialized hardware ASIC and other specialized circuitry. A multilayer switch does everything to a frame and packet that a traditional switch and router do, including the following:

  • Provides multiple simultaneous switching paths

  • Segments broadcast and failure domains

  • Provides destination-specific frame forwarding based on Layer 2 information

  • Determines the forwarding path based on Layer 3 information

  • Validates the integrity of the Layer 2 frame and Layer 3 packet via checksums and other methods

  • Verifies packet expiration and updates accordingly

  • Processes and responds to any option information

  • Updates forwarding statistics in the MIB

  • Applies security and policy controls, if required

  • Provides optimal path determination

  • Can (if it is a sophisticated modular type) support a wide variety of media types and port densities

  • Has the ability to support QoS

  • Has the ability to support VoIP and inline power requirements

Because it is designed to handle high-performance LAN traffic, you can place a multilayer switch anywhere within the network, thereby replacing traditional switches and routers cost-effectively. In most cases, a lower cost access switch connects end users and multilayer switches are used in the distribution and core layers of the campus network model.
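As an illustration of combined Layer 2 and Layer 3 operation, a multilayer switch can route between VLANs using switched virtual interfaces (SVIs). The following minimal sketch assumes VLANs 10 and 20 already exist on the switch; the addresses are illustrative:

Switch# configure terminal
Switch(config)# ip routing
Switch(config)# interface vlan 10
Switch(config-if)# ip address 10.1.10.1 255.255.255.0
Switch(config-if)# interface vlan 20
Switch(config-if)# ip address 10.1.20.1 255.255.255.0
Switch(config-if)# end

With ip routing enabled, the switch forwards frames within each VLAN at Layer 2 and routes packets between the two subnets at Layer 3, all in hardware on the same platform.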


1.1.7 Issues with Multilayer Switches and VLANs in a Non-Hierarchical Network


Multilayer switches combine switching and routing on a single hardware platform and can enhance overall network performance when deployed properly. Multilayer switches provide very high-speed Layer 2 and Layer 3 functionality by caching much of the forwarding information between sources and destinations.

However, the following issues exist when a multilayer switch is deployed in an improperly designed network:

  • Because multilayer switches condense the functions of switching and routing in a single chassis, they can create single points of failure if redundancy for these devices is not carefully planned and implemented.

  • Switches in a flat network are interconnected, creating many paths between destinations. If left active, these redundant paths create bridging loops. To control this, the network must run STP. Networks that use the IEEE 802.1D protocol may experience periods of disconnection and frame flooding during a topology change.

  • Multilayer switch functionality may be underutilized if a multilayer switch is simply a replacement for the traditional role of a router in a non-hierarchical network.


1.1.8 The Enterprise Composite Network Model

The Enterprise Composite Network Model (ECNM) can be used to divide the enterprise network into physical, logical, and functional areas. These areas allow network designers and engineers to associate specific network functionality on equipment based upon its placement and function in the model.

The ECNM provides a modular framework for designing networks. This modularity allows flexibility in network design and facilitates ease of implementation and troubleshooting. The hierarchical model divides networks into the building access, building distribution, and building core layers, as follows:

  • Building access layer: Grants user access to network devices. In a network campus, the building access layer generally incorporates switched LAN devices with ports that provide connectivity to workstations and servers. In the WAN environment, the building access layer at remote sites may provide access to the corporate network across WAN technology.

  • Building distribution layer: Aggregates the wiring closets and uses switches to segment workgroups and isolate network problems.

  • Building core layer: Also known as the campus backbone submodule, this layer is a high-speed backbone and is designed to switch packets as fast as possible. Because the core is critical for connectivity, it must provide a high level of availability and adapt to changes very quickly.

An enterprise campus is defined as one or more buildings, with multiple virtual and physical networks, connected across a high-performance, multilayer-switched backbone. The ECNM contains these three major functional areas:

  • Enterprise campus: Contains the modules required to build a hierarchical, highly robust campus network that offers performance, scalability, and availability. This area contains the network elements required for independent operation within a single campus, such as access from all locations to central servers. The functional area does not offer remote connections or Internet access.

  • Enterprise edge: Aggregates connectivity from the various resources external to the enterprise network. As traffic comes into the campus, this area filters traffic from the external resources and routes it into the enterprise campus functional area. It contains all the network elements for efficient and secure communication between the enterprise campus and remote locations, remote users, and the Internet. The enterprise edge would replace the Demilitarized Zone (DMZ) of most networks.

  • Service provider edge: Represents connections to resources external to the campus. This area facilitates communication to WAN and Internet service provider (ISP) technologies.


1.1.9 Benefits of the Enterprise Composite Network Model



To scale the hierarchical model, Cisco introduced ECNM, which further divides the enterprise network into physical, logical, and functional areas. ECNM contains functional areas, each of which has its own building access, building distribution, and building core (or campus backbone) layers.

ECNM has these features:

  • It is a deterministic network with clearly defined boundaries between modules. The model also has clear demarcation points so that the designer knows exactly where traffic is located.

  • It increases network scalability and eases the design task by making each module discrete.

  • It provides scalability by allowing enterprises to add modules easily. As network complexity grows, designers can add new functional modules.

  • It offers more network integrity in network design, allowing the designer to add services and solutions without changing the underlying network design.

The figure shows the benefits that ECNM offers for each of the submodules where it is implemented.


1.1.10 Describing the Campus Infrastructure Module


The enterprise campus functional area includes the campus infrastructure, network management, server farm, and edge distribution modules. Each module has a specific function within the campus network:

  • Campus infrastructure module: Includes building access and building distribution submodules. It connects users within the campus to the server farm and edge distribution modules. The campus infrastructure module is composed of one or more floors or buildings connected to the campus backbone submodule.

  • Network management module: Performs system logging, authentication, network monitoring, and general configuration management functions.

  • Server farm module: Contains e-mail and corporate servers providing application, file, print, e-mail, and Domain Name System (DNS) services to internal users.

  • Edge distribution module: Aggregates the connectivity from the various elements at the enterprise edge functional area and routes the traffic into the campus backbone submodule.

The campus infrastructure module connects users within a campus to the server farm and edge distribution modules. The campus infrastructure module comprises building access and building distribution switches connected through the campus backbone to campus resources.

A campus infrastructure module includes these submodules:

  • Building access submodule (also known as building access layer): Contains end-user workstations, IP phones, and Layer 2 access switches that connect devices to the building distribution submodule. The building access submodule performs services such as support for multiple VLANs, private VLANs, and establishment of trunk links to the building distribution layer and IP phones. Each building access switch has connections to redundant switches in the building distribution submodule.

  • Building distribution submodule (also known as building distribution layer): Provides aggregation of building access devices, often using Layer 3 switching. The building distribution submodule performs routing, QoS, and access control. Traffic generally flows through the building distribution switches and onto the campus core or backbone. This submodule provides fast failure recovery because each building distribution switch maintains two equal-cost paths in the routing table for every Layer 3 network number. Each building distribution switch has connections to redundant switches in the core.

  • Campus backbone submodule (also known as building core layer): Provides redundant and fast-converging connectivity between buildings and the server farm and edge distribution modules. The purpose of the campus backbone submodule is to switch traffic as fast as possible between campus infrastructure submodules and destination resources. Forwarding decisions should be made at the ASIC level whenever possible. Routing, ACLs, and processor-based forwarding decisions should be avoided at the core and implemented at building distribution devices whenever possible. High-end Layer 2 or Layer 3 switches are used at the core for high throughput, with optimal routing, QoS, and security capabilities available when needed.
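As a sketch of the trunk links described for the building access submodule, the following commands configure an 802.1Q trunk on an illustrative uplink toward a building distribution switch. The interface number and allowed VLAN list are assumptions for the example; on platforms that also support ISL (such as the Catalyst 3550/3560), the encapsulation must be set explicitly before trunking is enabled:

Switch# configure terminal
Switch(config)# interface gigabitethernet 0/1
Switch(config-if)# switchport trunk encapsulation dot1q
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport trunk allowed vlan 10,20
Switch(config-if)# end

Restricting the allowed VLAN list to only the VLANs in use keeps unnecessary broadcast and STP traffic off the uplink.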

1.1.11 Reviewing Switch Configuration Interfaces


Early high-end Cisco Catalyst switches ran the Cisco Catalyst operating system (CatOS), whose command-line interface (CLI) differs significantly from the mode-based Cisco IOS interface available on all newer Cisco Catalyst platforms. The two interfaces have different features and a different prompt and CLI syntax.


Note:
Catalyst Express desktop switches use Cisco Network Assistant (a GUI interface), not a CLI.

The original Cisco Catalyst interface is sometimes referred to as the “set-based” or, more recently, “Catalyst software” CLI.

In the Cisco Catalyst software, commands are executed at the switch prompt, either in non-privileged mode (where a limited subset of user-level commands is available) or in a password-protected privileged mode (where all commands are available). Configuration commands are prefaced with the keyword set.

In the example below, the Cisco Catalyst software commands execute the following:

Step 1 Show the status of a port.

Step 2 Move to enable mode, which requires a password.

Step 3 Enable the port.

Console> show port 3/5
.
.
Console> enable
Enter password:
Console> (enable) set port enable 3/5

Cisco Catalyst switch platforms have had a number of different operating systems and user interfaces. Over the years, Cisco has made great strides in converting the interface on nearly every Cisco Catalyst platform to the Cisco IOS interface familiar to users of Cisco routing platforms. Unlike the Cisco Catalyst software, the Cisco IOS interface requires navigating among various configuration modes to execute specific commands.

Here is an example of how switch port 3 might be enabled on an access layer switch using the Cisco IOS interface and how its status is verified after configuration. Compare how the Cisco IOS interface is navigated here to the previous example using Cisco Catalyst software.

Switch# config terminal
Switch(config)# interface fastethernet 0/3
Switch(config-if)# no shut
Switch(config-if)# end
Switch# show interface fastethernet 0/3

Some widely used Cisco Catalyst switch platforms that support the Cisco IOS interface are 2950, 2960, 3550, 3560, 3750, 4500*, 6500*, and 8500.

* These platforms have an option to use either Cisco IOS or Cisco Catalyst software for Layer 2 configuration.

The Catalyst software interface exists on several modular Cisco Catalyst platforms, including the Cisco Catalyst 4500, 5500, 6000, and 6500 Series.

For example, on the Cisco Catalyst 6500, you have the option of using the Cisco Catalyst software, Cisco Catalyst software plus Cisco IOS software, or Cisco IOS software functionality.

The Cisco IOS interface is used across a wide variety of Cisco Catalyst switch platforms, particularly the fixed and stackable switches, and is therefore the interface of reference throughout the remainder of the course. Labs may provide direction on the use of specific Cisco Catalyst software commands, depending on the equipment provided.

Summary

The SONA framework guides the evolution of the enterprise network toward the IIN. The Cisco Enterprise Architecture, with a hierarchical network model, facilitates the deployment of converged networks. Non-hierarchical network designs do not scale and do not provide the security necessary in a modern topology. Layer 2 networks do not provide adequate security or hierarchical networking. Router-based networks provide greater security and hierarchical networking; however, they can introduce latency issues.

Multilayer switches combine both Layer 2 and Layer 3 functionality to support the modern campus network topology. Multilayer switches can be used in non-hierarchical networks; however, they do not perform at the optimal level in this context.

The enterprise composite model identifies the key components and logical design for a modern topology. Implementation of an ECNM provides a secure, robust network with high availability. The Campus Infrastructure, as part of an ECNM, provides additional security and high availability at all levels of the campus.