Tuesday, June 30, 2009

CCNP MODULE 2.1 DEFINING VLANS

MODULE 2. DEFINING VLANS

Overview



This module defines the purpose of VLANS and describes how VLAN implementation can simplify network management and troubleshooting and can improve network performance. When VLANs are created, their names and descriptions are stored in a VLAN database that can be shared between switches. You will see how design considerations determine which VLANs span all the switches in a network and which VLANs remain local to a switch block. The configuration components of this module describe how individual switch ports may carry traffic for one or more VLANs, depending on their configuration as access or trunk ports. This module explains both why and how VLAN implementation occurs in an enterprise network.



2.1 Implementing Best Practices for VLAN Topologies

2.1.1 Describing Issues in a Poorly Designed Network


A poorly designed network has increased support costs, reduced service availability, and limited support for new applications and solutions. Less than optimal performance affects end users and access to central resources. Here are some of the issues that stem from a poorly designed network.

Failure domains: One of the most important reasons to implement an effective design is to minimize the extent of a network problem when it occurs. When Layer 2 and Layer 3 boundaries are not clearly defined, failure in one network area can have a far-reaching effect.
Broadcast domains: Broadcasts exist in every network. Many applications and many network operations require broadcasts to function properly. To minimize the negative impact of broadcasts, broadcast domains should have clearly defined boundaries and include an optimal number of devices.

Large amount of unknown MAC unicast traffic: Cisco Catalyst switches limit unicast frame forwarding to ports associated with the specific unicast address. However, frames arriving for a destination MAC address not recorded in the MAC table are flooded out all switch ports, which is referred to as unknown unicast flooding. This causes excessive traffic on switch ports, forces network interface cards (NICs) to attend to a larger number of frames on the wire, and can compromise security, because data is propagated on a wire for which it was not intended.

Multicast traffic on ports where not intended: IP multicast is a technique that allows IP traffic to be propagated from one source to a multicast group identified by a single IP and MAC destination group address pair. Without proper controls, multicast frames are flooded out all switch ports, just as unknown unicast and broadcast frames are. A proper design confines multicast frames to the ports leading to interested receivers.

Difficulty in management and support: Because a poorly designed network may be disorganized, poorly documented, and lacking easily identifiable traffic flows, support, maintenance, and problem resolution become time-consuming and arduous tasks.

Possible security vulnerabilities: A poorly designed switched network with little attention to security requirements at the access layer can compromise the integrity of the entire network.
A poorly designed network always has a negative impact and becomes a burden for any organization in terms of support and related costs.



2.1.2 Grouping Business Functions into VLANs


Hierarchical network addressing means that IP network numbers are applied to the network segments or VLANs in an orderly fashion that takes the network as a whole into consideration. Blocks of contiguous network addresses are reserved for, and configured on, devices in a specific area of the network.


Here are some benefits of hierarchical addressing.

Ease of management and troubleshooting: Hierarchical addressing groups network addresses contiguously. Network management and troubleshooting are more efficient, because a hierarchical IP addressing scheme makes problem components easier to locate.

Minimized errors: Orderly network address assignment can minimize errors and duplicate address assignments.

Reduced number of routing table entries: In a hierarchical addressing plan, routing protocols are able to perform route summarization, which allows a single routing table entry to represent a collection of IP network numbers. Route summarization makes routing table entries more manageable and provides the following benefits:

  • Reduced number of CPU cycles when recalculating a routing table or sorting through the routing table entries to find a match
  • Reduced router memory requirements
  • Faster convergence after a change in the network
  • Easier troubleshooting
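As a sketch of how route summarization exploits this kind of addressing, suppose (hypothetically) that four contiguous subnets, 10.1.0.0/24 through 10.1.3.0/24, live in one switch block. A distribution switch running EIGRP could advertise them toward the core as a single /22; the interface and autonomous system number below are assumptions for illustration only:

```
! Hypothetical sketch: advertise 10.1.0.0/24-10.1.3.0/24 as one summary
! route (10.1.0.0/22) on the core-facing uplink. EIGRP AS 100 and the
! interface number are assumptions.
interface GigabitEthernet0/1
 ip summary-address eigrp 100 10.1.0.0 255.255.252.0
```

With this summary in place, routers beyond the uplink carry a single entry for the whole switch block instead of one entry per subnet.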

The Enterprise Composite Network Model (ECNM) provides a modular framework for designing and deploying networks. It also provides the ideal structure for overlaying a hierarchical IP addressing scheme. Here are some guidelines to follow.
  • Design the IP addressing scheme so that blocks of 4, 8, 16, 32, or 64 contiguous network numbers can be assigned to the subnets in a given building distribution and access switch block. This approach allows each switch block to be summarized into one large address block.
  • At the Building Distribution layer, continue to assign network numbers contiguously out toward the access layer devices.
  • Have a single IP subnet correspond with a single VLAN. Each VLAN is a separate broadcast domain.
  • Subnet at the same binary value on all network numbers, avoiding variable-length subnet masks when possible, to minimize errors and confusion when troubleshooting or configuring new devices and segments.
For example, a business with approximately 250 employees is looking to move to the enterprise composite network model. Figure shows the number of users in each department.


Six VLANs are required to accommodate one VLAN per user community. Therefore, in following the guidelines of the ECNM, six IP subnets are required.
The business has decided to use network 10.0.0.0 as its base address.
The Sales Department is the largest department, which requires a minimum of 102 addresses for its users. Therefore, a subnet mask of 255.255.255.0 (/24) is chosen, giving a maximum number of 254 hosts per subnet.

It has been decided, for future growth, to have one switch block per building as follows:
  • Building A is allocated 10.1.0.0/16.
  • Building B is allocated 10.2.0.0/16.
  • Building C is allocated 10.3.0.0/16.
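A minimal sketch of how Building A's /16 block might be carved into per-VLAN /24 subnets on its distribution multilayer switch is shown below. The VLAN numbers and the second department name are assumptions (the actual allocations are in the figures, which are not reproduced here):

```
! Hypothetical Building A allocation: one /24 subnet per VLAN out of
! 10.1.0.0/16. VLAN IDs and the Marketing VLAN are assumptions.
vlan 11
 name Sales
vlan 12
 name Marketing
!
interface Vlan11
 ip address 10.1.1.1 255.255.255.0
!
interface Vlan12
 ip address 10.1.2.1 255.255.255.0
```

Because every subnet in the building falls inside 10.1.0.0/16, the whole switch block can later be summarized with a single route, as the ECNM guidelines intend.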
Building A VLANs and IP Subnets
Figure shows the allocation of VLANs and IP subnets within building A.


Building B VLANs and IP Subnets
Figure shows the allocation of VLANs and IP subnets within building B.


Building C VLANs and IP Subnets
Figure shows the allocation of VLANs and IP subnets within building C.
Some of the currently unused VLANs and IP subnets would be used to manage the network devices. If the company decides to implement additional technologies, such as IP telephony, some of the unused VLANs and IP subnets would be allocated to the voice VLANs.

2.1.3 Describing Interconnection Technologies


A number of technologies are available to interconnect devices in the campus network. Some of the more common technologies are listed here. The interconnection technology selected depends on the amount of traffic the link must carry. A mixture of copper and fiber-optic cabling will likely be used, based on distances, noise immunity requirements, security, and other business requirements.


  • Fast Ethernet (100 Mbps Ethernet): This LAN specification (IEEE 802.3u) operates at 100 Mbps over twisted-pair cable. The Fast Ethernet standard raises the speed of Ethernet from 10 Mbps to 100 Mbps with only minimal changes to the existing cable structure. A switch with ports functioning at both 10 and 100 Mbps can move frames between ports without Layer 2 protocol translation.

  • Gigabit Ethernet: An extension of the IEEE 802.3 Ethernet standard, Gigabit Ethernet increases speed tenfold over that of Fast Ethernet, to 1000 Mbps, or 1 gigabit per second (Gbps). IEEE 802.3z specifies operations over fiber optics, and IEEE 802.3ab specifies operations over twisted-pair cable.

  • 10-Gigabit Ethernet: 10-Gigabit Ethernet was formally ratified as an IEEE 802.3 Ethernet standard in June 2002. This technology is the next step for scaling the performance and functionality of an enterprise. With the deployment of Gigabit Ethernet becoming more common, 10-Gigabit will become the norm for uplinks.

  • EtherChannel: This feature provides link aggregation of bandwidth over Layer 2 links between two switches. EtherChannel bundles individual Ethernet ports into a single logical port or link, providing aggregate bandwidth of up to 1600 Mbps (eight 100-Mbps links, full duplex) or up to 16 Gbps (eight 1-Gbps links, full duplex) between two Cisco Catalyst switches. All interfaces in each EtherChannel bundle must be configured with the same speed, duplex, and VLAN membership.
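A minimal EtherChannel sketch on Cisco IOS might look like the following; the interface numbers and the PAgP negotiation mode are assumptions, and the matching configuration must exist on the other switch:

```
! Sketch: bundle two Gigabit ports into one logical trunk link.
! Interface range and channel-group mode are assumptions; both ends
! of the bundle must match in speed, duplex, and VLAN configuration.
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode desirable
!
interface Port-channel1
 switchport mode trunk
```

Spanning tree then sees one logical link, so all bundled ports forward traffic instead of some being blocked.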

Figure discusses the use of each interconnection technology in the Campus Infrastructure module.


2.1.4 Determining Equipment and Cabling Needs



There are four objectives in the design of any high-performance network: security, availability, scalability, and manageability. The ECNM, when implemented properly, provides the framework to meet these objectives. In the migration from a current network infrastructure to the ECNM, a number of infrastructure changes may be needed, including the replacement of current equipment and existing cable plant.

This list describes the equipment and cabling decisions that should be considered when altering infrastructure.

  • Replace hubs and legacy switches with new switches at the Building Access layer. Select equipment with the appropriate port density at the access layer to support the current user base while preparing for growth. Some designers begin by planning for about 30 percent growth. If the budget allows, use modular access switches to accommodate future expansion. Consider planning for support of inline power and quality of service (QoS) if IP telephony may be implemented in the future.

  • When building the cable plant from the Building Access layer to the Building Distribution layer devices, remember that these links carry aggregate traffic from the end nodes at the access layer to the building distribution switches. Ensure that these links have adequate bandwidth capability. EtherChannel bundles can be used here to add bandwidth as necessary.

  • At the Building Distribution layer, select switches with adequate performance to handle the load of the current Building Access layer. Also plan some port density for adding trunks later to support new access layer devices. The devices at this layer should be multilayer (Layer 2/Layer 3) switches that support routing between the workgroup VLANs and network resources. Depending on the size of the network, the building distribution layer devices may be fixed chassis or modular. Plan for redundancy in the chassis and in the connections to the access and core layers, as the business objectives dictate.

  • The campus backbone equipment must support high-speed data communications between other submodules. Be sure to size the backbone for scalability and plan on redundancy.

Cisco has online tools to assist designers in making the proper selection of devices and uplink ports based on business and technology needs. Cisco suggests oversubscription ratios that can be used to plan bandwidth requirements between key devices on a network with average traffic flows.

  • Access to distribution layer links: The oversubscription ratio should be no higher than 20:1. That is, the uplink need provide only 1/20 of the total bandwidth available cumulatively to all end devices using that link.

  • Distribution to core links: The oversubscription ratio should be no higher than 4:1.

  • Between core devices: There should be little to no oversubscription planning. That is, the links between core devices should be able to carry traffic at the speed represented by the aggregate bandwidth of all the distribution uplinks into the core.
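As a worked example of the 20:1 access-to-distribution ratio, consider a hypothetical 48-port access switch with users connected at 100 Mbps:

```text
48 ports x 100 Mbps          = 4800 Mbps potential access-layer bandwidth
4800 Mbps / 20  (20:1 ratio) =  240 Mbps minimum uplink bandwidth
```

Under this assumption, a single Gigabit Ethernet uplink comfortably satisfies the 20:1 guideline for average traffic flows.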




CAUTION:

These ratios are appropriate for estimating average traffic from access layer, end-user devices. They are not accurate for planning oversubscription from the server farm or edge distribution modules, nor for planning bandwidth needed on access switches hosting typical user applications with high bandwidth consumption (for example, non-client/server databases or multimedia flows to unicast addresses). Using QoS end to end prioritizes which traffic should be dropped in the event of congestion.



2.1.5 Considering Traffic Source to Destination Paths



Figure lists different types of traffic that may exist on the network and require consideration before device placement and VLAN configuration.




Figure describes the specific traffic types.



Considering IP Telephony

The size of an enterprise network drives the design and placement of certain types of devices. If the network is designed according to the ECNM, there will be distinct devices separating the access, distribution, and backbone areas of the network. The network design and the types of applications supported determine where certain traffic sources are located. Multicast and IP telephony applications share some common traffic types. Specifically, if a Cisco CallManager is providing music on hold, it may need to multicast that traffic stream.

Consider the following points when determining where to place the servers:

  • Cisco CallManager servers must be accessible throughout the network at all times. Ensure that there are redundant NICs in the publisher and subscriber servers and redundant connections between those NICs and the upstream switch from the server. It is recommended that voice traffic be configured on its own VLAN. Cisco CallManagers are typically located within the Server Farm block in the ECNM design.

  • VLAN trunks must be configured appropriately to carry IP telephony traffic throughout the network or to specific destinations.

When you deploy voice, it is recommended that you enable two VLANs at the access layer: a native VLAN for data traffic and a voice VLAN.



Separate voice and data VLANs are recommended for the following reasons:

  • Address space conservation and voice device protection from external networks

  • QoS trust boundary extension to voice devices

  • Protection from malicious network attacks

  • Ease of management and configuration
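An access port carrying both a data VLAN and a voice VLAN can be sketched as follows on Cisco IOS; the interface and VLAN numbers are assumptions, and the QoS trust command varies by Catalyst platform:

```
! Sketch: one access port serving a PC daisy-chained through an IP phone.
! VLAN numbers and interface are assumptions.
interface FastEthernet0/5
 switchport mode access
 switchport access vlan 10      ! data VLAN for the attached PC
 switchport voice vlan 110      ! voice VLAN for the IP phone
 mls qos trust cos              ! extend the QoS trust boundary to the phone
```

The phone tags its own traffic into VLAN 110 while the PC's frames ride the data VLAN untagged, keeping the two traffic classes in separate broadcast domains as recommended above.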

Considering IP Multicast Traffic

The multilayer campus design is ideal for control and distribution of IP multicast traffic. The Layer 3 multicast control is provided by the Protocol Independent Multicast (PIM) routing protocol. Multicast control at the wiring closet is provided by Internet Group Management Protocol (IGMP) snooping or Cisco Group Management Protocol (CGMP). Multicast control is extremely important because of the large amount of traffic involved when several high-bandwidth multicast streams are provided. Consider the following when designing the network for multicast traffic:

  • IP multicast servers may exist within a server farm or be distributed throughout the network at appropriate locations.

  • Select distribution layer switches to act as PIM rendezvous points (RPs) and place them where they are central to the location of the largest distribution of receiving nodes. RPs are typically used to temporarily connect multicast sources and receivers.
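The multicast controls described above might be sketched as follows on a distribution multilayer switch; the RP address, VLAN interface, and static-RP choice (rather than Auto-RP or BSR) are assumptions:

```
! Sketch: Layer 3 and Layer 2 multicast controls (addresses assumed).
ip multicast-routing             ! enable multicast routing globally
ip pim rp-address 10.1.255.1     ! static RP; Auto-RP or BSR are alternatives
!
interface Vlan20
 ip pim sparse-mode              ! PIM sparse mode on the receiver VLAN
!
ip igmp snooping                 ! constrain Layer 2 flooding to group members
                                 ! (enabled by default on many Catalyst models)
```

With IGMP snooping, the access switches forward each multicast stream only to ports that have joined the group, rather than flooding it out every port.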



2.1.6 Describing End-to-End VLANs



The term end-to-end VLAN refers to a single VLAN associated with switch ports that are widely dispersed throughout an enterprise network. Traffic for the VLAN is carried throughout the switched network. If many VLANs in a network are end-to-end, special links (trunks) are required between switches to carry the traffic of all the different VLANs.

An end-to-end VLAN has these characteristics:

  • Geographically dispersed throughout the network.

  • Users are grouped into the VLAN regardless of physical location.

  • As a user moves throughout a campus, the VLAN membership of that user remains the same.

  • Users are typically associated with a given VLAN for network management reasons.

  • All devices on a given VLAN typically have addresses on the same IP subnet.

Because a VLAN represents a Layer 3 segment, end-to-end VLANs allow a single Layer 3 segment to be geographically dispersed throughout the network. Reasons for implementing this design might include the following:

  • Grouping users: Users can be grouped on a common IP segment even though they are geographically dispersed.

  • Security: A VLAN may contain resources that should not be accessible to all users on the network, or there may be a reason to confine certain traffic to a particular VLAN.

  • Applying QoS: Traffic from a given VLAN can be given higher or lower access priority to network resources.

  • Routing avoidance: If much of the VLAN user traffic is destined for devices on that same VLAN and routing to those devices is not desirable, users can access resources on their VLAN without their traffic being routed off the VLAN, even though the traffic may traverse multiple switches.

  • Special purpose VLAN: Sometimes a VLAN is provisioned to carry a single type of traffic that must be dispersed throughout the campus (for example, multicast, voice, or visitor VLANs).

  • Poor design: For no clear purpose, users are placed in VLANs that span the campus or even WANs.

Some items should be considered when implementing end-to-end VLANs. Switch ports are provisioned for each user and associated with a given VLAN. Because users on an end-to-end VLAN may be anywhere in the network, all switches must be aware of that VLAN. This means that all switches carrying traffic for end-to-end VLANs are required to have identical VLAN databases. Also, flooded traffic for the VLAN is, by default, passed to every switch even if it does not currently have any active ports in the particular end-to-end VLAN. Finally, troubleshooting devices on a campus with end-to-end VLANs can be challenging, because the traffic for a single VLAN can traverse multiple switches in a large area of the campus.
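One common way to keep the VLAN databases identical across the switches carrying an end-to-end VLAN is VTP; a minimal sketch follows, where the domain name and password are assumptions:

```
! Sketch: synchronizing VLAN databases with VTP (domain/password assumed).
vtp domain CAMPUS
vtp password s3cret
vtp mode server        ! on the one switch where VLANs are created
! vtp mode client      ! on the remaining switches in the domain
```

VLANs created on the server are then propagated over trunk links to every client in the domain, which is exactly the behavior end-to-end VLANs require and local VLAN designs deliberately avoid.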

For example, in a military setting, one VLAN is designated to carry top-secret data. Users with access to that data are widely dispersed throughout the network. Because all devices on that VLAN have similar security requirements, security is handled by access lists at the Layer 3 devices that route traffic onto the segment (VLAN). Security can be applied VLAN-wide without addressing security at each switch in the network, which might have only a single user on the top-secret VLAN.


2.1.7 Describing Local VLANs



In the past, network designers attempted to implement the 80/20 rule when designing networks. The rule was based on the observation that, in general, 80 percent of the traffic on a network segment passed between local devices, and only 20 percent was destined for remote network segments; end-to-end VLANs suited this pattern, so they were typically used. Designers now consolidate servers in central locations on the network and provide access to external resources, such as the Internet, through one or two paths on the network, so the bulk of traffic now traverses a number of segments. The paradigm is therefore closer to a 20/80 proportion, in which the greater flow of traffic leaves the local segment, and local VLANs have become more useful.


Additionally, the concept of end-to-end VLANs was very attractive when IP address configuration was a manually administered and burdensome process. Therefore, anything that reduced this burden as users moved between networks was an improvement. But, given the ubiquity of DHCP, the process of configuring IP at each desktop is no longer a significant issue. As a result, there are few benefits to extending a VLAN throughout an enterprise. It is often more efficient to group all users of a set of geographically common switches into a single VLAN, regardless of the organizational function of those users, especially from a troubleshooting perspective. VLANs that have boundaries based upon campus geography rather than organizational function are called “local VLANs.” Local VLANs are generally confined to a wiring closet.





Here are some local VLAN characteristics and user guidelines:

  • Local VLANs should be created with physical boundaries rather than the job functions of the users on the end devices.

  • Traffic from a local VLAN is routed to reach destinations on other networks.

  • A single VLAN does not extend beyond the Building Distribution submodule.

  • VLANs on a given access switch should not be advertised to all other switches in the network.
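The guidelines above can be sketched as a pair of configurations; VLAN numbers, interfaces, and addresses are assumptions:

```
! Sketch of a local VLAN. On the access switch, the VLAN is defined
! locally and not advertised (VTP transparent mode).
vtp mode transparent
vlan 10
 name Engineering-Local
!
interface FastEthernet0/1
 switchport mode access
 switchport access vlan 10

! On the distribution multilayer switch, an SVI routes traffic
! off the local VLAN toward other networks.
interface Vlan10
 ip address 10.1.10.1 255.255.255.0
```

Because VLAN 10 exists only within this switch block, any traffic leaving it is routed at the distribution layer rather than bridged across the campus.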



2.1.8 Benefits of Local VLANs in Enterprise Campus Network



Local VLANs are part of the ECNM design, where VLANs used at the access layer should extend no further than their associated distribution switch. Traffic is routed from the local VLAN as it is passed from the distribution layer into the core. This design can mitigate Layer 2 troubleshooting issues that occur when a single VLAN traverses the switches throughout an enterprise campus network. Implementing the ECNM using local VLANs provides the following benefits:

  • Deterministic traffic flow: The simple layout provides a predictable Layer 2 and Layer 3 traffic path. In the event of a failure that was not mitigated by the redundancy features, the simplicity of the model facilitates expedient problem isolation and resolution within the switch block.

  • Active redundant paths: When implementing Per VLAN Spanning Tree (PVST) or Multiple Spanning Tree Protocol (MSTP), all links can be used, with traffic distributed across the redundant paths on a per-VLAN basis.

  • High availability: Redundant paths exist at all infrastructure levels. Local VLAN traffic on access switches can be passed to the building distribution switches across an alternative Layer 2 path in the event of primary path failure. Router redundancy protocols can provide failover should the default gateway for the access VLAN fail. When both the Spanning Tree Protocol (STP) instance and VLAN are confined to a specific access and distribution block, Layer 2 and Layer 3 redundancy measures and protocols can be configured to failover in a coordinated manner.

  • Finite failure domain: If VLANs are local to a switch block and the number of devices on each VLAN is kept small, failures at Layer 2 are confined to a small subset of users.

  • Scalable design: Following the ECNM design, new access switches can be easily incorporated and new submodules can be added when necessary.

2.1.9 Mapping VLANs in a Hierarchical Network


When mapping VLANs onto the new hierarchical network design, keep these parameters in mind.

  • Examine the subnetting scheme that has been applied to the network and associate a VLAN to each subnet.

  • Configure routing between VLANs at the distribution layer using multilayer switches.

  • Make end-user VLANs and subnets local to a specific switch block.

  • Ideally, limit a VLAN to one access switch or switch stack. However, it may be necessary to extend a VLAN across multiple access switches within a switch block to support a capability such as wireless mobility.





