<CHAPTEROBJECTIVE>The difference between the 80/20 rule and the 20/80 rule</CHAPTEROBJECTIVE>
<CHAPTEROBJECTIVE>The new campus internetwork model</CHAPTEROBJECTIVE>
<CHAPTEROBJECTIVE>Understanding the details of switching technologies</CHAPTEROBJECTIVE>
<CHAPTEROBJECTIVE>The differences between layer 2 switching, layer 3 switching, routing, layer 4 switching, and multi-layer switching</CHAPTEROBJECTIVE>
<CHAPTEROBJECTIVE>The three layers in the Cisco hierarchical model</CHAPTEROBJECTIVE>
<CHAPTEROBJECTIVE>The different Cisco switch solutions available at the access layer</CHAPTEROBJECTIVE>
<CHAPTEROBJECTIVE>The different Cisco switch solutions available at the distribution layer</CHAPTEROBJECTIVE>
<CHAPTEROBJECTIVE>The different Cisco switch solutions available at the core layer</CHAPTEROBJECTIVE>
<CHAPTEROBJECTIVE>The differences between a switch block and core block</CHAPTEROBJECTIVE></CHAPTEROBJECTIVEBLOCK>
<PARA><DROPCAP>A</DROPCAP> campus network is a building or group of buildings that connects to one network, called an enterprise network. Typically, one company owns the entire network, including the wiring between buildings. This local area network (LAN) typically uses Ethernet, Token Ring, Fiber Distributed Data Interface (FDDI), or Asynchronous Transfer Mode (ATM) technologies.</PARA>
<PARA>The main challenge for network administrators is to make the campus network run efficiently and effectively. To do this, they must understand current campus networks as well as the new emerging campus networks. Therefore, in this chapter, you will learn about current and future requirements of campus internetworks. We'll explain the limitations of traditional campus networks as well as the benefits of the emerging campus designs. You will learn how to choose from among the new generation of Cisco switches to maximize the performance of your networks. Understanding how to design for the emerging campus networks is not only critical to your success on the Switching exam, it's also critical for implementing production networks.</PARA>
<PARA>As part of the instruction in network design, we'll discuss the specifics of technologies, including how to implement Ethernet and the differences between layer 2, layer 3, and layer 4 switching technologies. In particular, you will learn how to implement FastEthernet, Gigabit Ethernet, Fast EtherChannel, and Multi-Layer Switching (MLS) in the emerging campus designs. This will help you learn how to design, implement, and maintain an efficient and effective internetwork.</PARA>
<PARA>Finally, you will learn about the Cisco hierarchical model, which is covered in all the Cisco courses. In particular, you will learn which Catalyst switches can, and should, be implemented at each layer of the Cisco model. And you will learn how to design networks based on switch and core blocks.</PARA>
<PARA>This chapter, then, will provide you with a thorough overview of campus network design (past, present, and future) and teach you how, as a network administrator, to choose the most appropriate technology for a particular network's needs. This will allow you to configure and design your network now, with the future in mind.</PARA>
<PARA><DROPCAP>I</DROPCAP>t doesn't seem that terribly long ago that the mainframe ruled the world and the PC was just used to placate some users. However, in their arrogance, mainframe administrators never really took the PC seriously, and like rock 'n' roll naysayers, they said it would never last. Maybe they were right after all, at least in a way. In the last year or two, server farms have replaced distributed servers in the field. </PARA>
<PARA>In the last 15 years we have seen operators and managers of the mainframe either looking for other work or taking huge pay cuts. Their elitism exacerbated the slap in the face when people with no previous computer experience were suddenly making twice their salary after passing a few key certification exams.</PARA>
<PARA>Mainframes were not necessarily discarded; they just became huge storage areas for data and databases. The NetWare and NT servers took over as file/print servers and soon started running most other programs and applications as well. </PARA>
<PARA>The last 20 years have witnessed the birth of the LAN and the growth of WANs and the Internet. So where are networks headed in the twenty-first century? Are we still going to see file and print servers at all branch locations? Are all workstations just going to connect to the Internet with ISPs to separate the data, voice, and other multimedia applications? </PARA>
</SECTION>
<SECTION ID="1.2"><TITLE>Looking Backwards at Traditional Campus Networks</TITLE>
<PARA><DROPCAP>I</DROPCAP>n the 1990s, the traditional campus network started as one LAN and grew and grew until segmentation needed to take place just to keep the network up and running. In this era of rapid expansion, response time was secondary to just making sure the network was functioning.</PARA>
<PARA>And by looking at the technology, you can see why keeping the network running was such a challenge. Typical campus networks ran on 10BaseT or 10Base2 (thinnet). As a result, the network was one large collision domain, not to mention one large broadcast domain. Despite these limitations, Ethernet was used because it was scalable, effective, and somewhat inexpensive compared to other options. ARCnet was used in some networks, but Ethernet and ARCnet are not compatible, and the networks became two separate entities. ARCnet soon became history.</PARA>
<PARA>Because a campus network can easily span many buildings, bridges were used to connect the buildings together; this broke up the collision domains, but the network was still one large broadcast domain. More and more users were attached to the hubs used in the network, and soon the performance of the network was considered extremely slow.</PARA>
<SECTION ID="1.2.1" POS="1"><TITLE>Performance Problems and Solutions</TITLE>
<PARA>Availability and performance are the major problems with traditional campus networks, and limited bandwidth compounds both. The three performance problems in traditional campus networks are collisions, broadcasts and multicasts, and bandwidth.</PARA>
<SECTION ID="1.2.1.1"><TITLE>Collisions</TITLE>
<PARA>A campus network typically started as one large collision domain, so all devices could see and also collide with each other. If a host had to broadcast, then all other devices had to listen, even though they themselves were trying to transmit. And if a device were to jabber (malfunction), it could almost bring the entire network down. </PARA>
<PARA>Because routers didn't really become cost effective until the late 1980s, bridges were used to break up collision domains, but the network was still one large broadcast domain and the broadcast problems still existed. However, bridges did break up the collision domain, and that was an improvement. Bridges also solved distance-limitation problems because they usually had repeater functions built into the electronics and/or they could break up the physical segment.</PARA>
</SECTION>
<SECTION ID="1.2.1.2"><TITLE>Bandwidth</TITLE>
<PARA>The <KEYTERM>bandwidth</KEYTERM> of a segment is measured by the amount of data that can be transmitted in a given amount of time. Think of bandwidth as a water hose; the amount of water that can go through the hose depends on different elements:</PARA>
<LIST MARK="bullet">
<LISTITEM><PARA>Pressure</PARA></LISTITEM>
<LISTITEM><PARA>Distance</PARA></LISTITEM>
</LIST>
<PARA>Think of the bandwidth as the size of the hose and the pressure as the force pushing the water (the data) through it. If you have a hose that is only 1/4 inch in diameter, you won't get much water through it regardless of the pressure or the size of the pump on the transmitting end.</PARA>
<PARA>Another issue is distance. The longer the hose, the more the water pressure drops. You can put a repeater in the middle of the hose to reamplify the pressure of the line, which helps, but you need to understand that all lines (and hoses) degrade the signal, which means the pressure drops off the farther the signal travels down the line. For the remote end to understand digital signaling, the signal must stay at or above a minimum level. If it drops below this minimum, the remote end will not be able to receive the data. In other words, the far end of the hose would just drip water instead of flowing. You can't water your crops with drips of water; you need a constant water flow.</PARA>
<PARA>The solution to bandwidth issues is staying within your distance limitations and designing your network with proper segmentation using switches and routers. Congestion on a segment happens when too many devices are trying to use the same bandwidth. By properly segmenting the network, you can eliminate some of the bandwidth issues. You will never have enough bandwidth for your users; you'll just have to accept that fact. However, you can always make it better.</PARA>
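<PARA>To make the distance point concrete, here is a small Python sketch of the hose analogy; the attenuation rate and receiver threshold are invented numbers, used only to illustrate why a signal that is readable at one distance becomes unreadable at another and why repeaters and segment-length limits matter.</PARA>
<CODE>
# Illustrative sketch of the hose analogy: signal level falls off with
# distance, and the receiver needs a minimum level to recover the data.
# Both constants below are hypothetical, chosen only for illustration.

ATTENUATION_PER_100M = 0.90   # assume 10 percent of the signal is lost per 100 m
MINIMUM_LEVEL = 0.25          # assumed receiver threshold

def level_at(distance_m, start_level=1.0):
    """Signal level remaining after distance_m meters of cable."""
    return start_level * (ATTENUATION_PER_100M ** (distance_m / 100.0))

def receivable(distance_m):
    return level_at(distance_m) >= MINIMUM_LEVEL

for d in (100, 500, 1000, 2000):
    print(f"{d:>5} m: level={level_at(d):.2f} receivable={receivable(d)}")

# A repeater placed mid-span re-amplifies the signal back to full strength,
# which is why distance limits apply per cable segment, not end to end.
print("2000 m with a repeater at 1000 m:", receivable(1000) and receivable(1000))
</CODE>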
</SECTION>
<SECTION ID="1.2.1.3"><TITLE>Broadcasts and Multicasts</TITLE>
<PARA>Remember that all protocols have broadcasts built in as a feature, but some protocols can really cause problems if not configured correctly. Some protocols that, by default, can cause problems if not correctly implemented are Internet Protocol (IP), Address Resolution Protocol (ARP), Network Basic Input Output System (NetBIOS), Internetwork Packet Exchange (IPX), Service Advertising Protocol (SAP), and Routing Information Protocol (RIP). However, remember that there are features built into the Cisco router Internetwork Operating System (IOS) that, if correctly designed and implemented, can alleviate these problems. Packet filtering, queuing, and choosing the correct routing protocols are some examples of how Cisco routers can eliminate some broadcast problems.</PARA>
<PARA>Multicast traffic can also cause problems if not configured correctly. Multicasts are broadcasts that are destined for a specific or defined group of users. If you have large multicast groups or a bandwidth-intensive application like Cisco's IPTV application, multicast traffic can consume most of the network bandwidth and resources.</PARA>
<PARA>To solve broadcast issues, create network segmentation with bridges, routers, and switches. However, understand that you'll move the bottleneck to the routers, which break up the broadcast domains. Routers process each packet that is transmitted on the network, which can create a bottleneck if an enormous amount of traffic is generated.</PARA>
<PARA>Virtual LANs (VLANs) are a solution as well, but VLANs are just broadcast domains with boundaries created by routers. A VLAN is a group of devices on different network segments defined as a broadcast domain by the network administrator. The benefit of VLANs is that physical location is no longer a factor in determining the port into which you plug a device. You can plug a device into any switch port, and the network administrator gives that port a VLAN assignment. Remember that routers or layer 3 switches must be used for different VLANs to communicate.</PARA>
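<PARA>The following minimal Python sketch models the idea that a VLAN is an administrator-defined broadcast domain: a broadcast is delivered only to ports assigned to the sender's VLAN, regardless of where those ports physically sit. The port names and VLAN numbers are hypothetical.</PARA>
<CODE>
# Minimal sketch of VLANs as administrator-defined broadcast domains.
# Port names and VLAN numbers are hypothetical.

port_vlan = {
    "Fa0/1": 10,   # Sales
    "Fa0/2": 10,
    "Fa0/3": 20,   # Engineering
    "Fa0/4": 20,
}

def broadcast_from(src_port):
    """Return the ports that receive a broadcast sent in on src_port."""
    vlan = port_vlan[src_port]
    return [p for p, v in port_vlan.items() if v == vlan and p != src_port]

print(broadcast_from("Fa0/1"))   # ['Fa0/2']: the broadcast stays inside VLAN 10
# Traffic between VLAN 10 and VLAN 20 must go through a router or layer 3 switch.
</CODE>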
<PARA>The traditional campus network placed users and groups in the same physical location. If a new salesperson was hired, they had to sit in the same physical location as the other sales personnel and be connected to the same physical network segment in order to share network resources. Any deviation from this caused major headaches for the network administrators. Figure 1.1 shows the traditional 80/20 network.</PARA>
<SLUG NUM="1.1">Figure 1.1: A traditional 80/20 network [f0101.eps]</SLUG>
<PARA>The rule that needed to be followed in this type of network was called the <KEYTERM>80/20 rule</KEYTERM> because 80 percent of the users' traffic was supposed to remain on the local network segment and only 20 percent or less was supposed to cross the routers or bridges to the other network segments. If more than 20 percent of the traffic crossed the network segmentation devices, performance issues arose. </PARA>
<PARA>Because network administrators are responsible for the network design and implementation, network performance was improved in the 80/20 network by making sure all of the network resources for the users were contained within their own network segment. The resources include network servers, printers, shared directories, software programs, and applications. </PARA>
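<PARA>A quick way to picture the 80/20 rule is to tally how much of a segment's traffic stays local versus crossing a bridge or router. The Python sketch below does exactly that with invented flow records; it simply flags a segment whose remote share exceeds 20 percent.</PARA>
<CODE>
# Sketch: check a segment's traffic mix against the 80/20 guideline.
# The flow records below are invented for illustration.

flows = [
    {"bytes": 400_000, "crosses_segment_boundary": False},
    {"bytes": 350_000, "crosses_segment_boundary": False},
    {"bytes": 150_000, "crosses_segment_boundary": True},
    {"bytes": 100_000, "crosses_segment_boundary": True},
]

total = sum(f["bytes"] for f in flows)
remote = sum(f["bytes"] for f in flows if f["crosses_segment_boundary"])
remote_pct = 100.0 * remote / total

print(f"remote traffic: {remote_pct:.0f}%")
if remote_pct > 20:
    print("More than 20 percent crosses the segmentation device: expect performance issues")
else:
    print("Within the 80/20 rule")
</CODE>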
</SECTION>
<SECTION ID="1.2.1.5"><TITLE>The New 20/80 Rule</TITLE>
<PARA>With new Web-based applications and computing, any PC can be a subscriber or publisher at any time. Also, because businesses are pulling servers from remote locations and creating server farms (sounds like a mainframe, doesn't it?) to centralize network services for security, reduced cost, and administration, the old 80/20 rule is obsolete and could not possibly work in this environment. All traffic must now traverse the campus backbone, which means we now have a <KEYTERM>20/80 rule</KEYTERM> in effect. Twenty percent of what the user performs on the network is local, whereas up to 80 percent crosses the network segmentation points to get to network services. Figure 1.2 shows the new 20/80 rule network. </PARA>
<SLUG NUM="1.2">Figure 1.2: A 20/80 network [f0102.eps]</SLUG>
<PARA>The problem with the 20/80 rule is not the network wiring and topology as much as it is the routers themselves. They must be able to handle an enormous number of packets quickly and efficiently at wire speed. This is probably where we should be talking about how great Cisco routers are and how our networks would be nothing without them. We'll get to that later in this chapter, trust me. </PARA>
<PARA>With this new 20/80 rule, more and more users need to cross broadcast domains (VLANs), and this puts the burden on routing, or layer 3 switching. By using VLANs within the new campus model, you can control traffic patterns and control user access more easily than in the traditional campus network. Virtual LANs break up broadcast domains by using either a router or a switch that can perform layer 3 functions. Figure 1.3 shows how VLANs are created and might look in an internetwork.</PARA>
<SLUG NUM="1.3">Figure 1.3: VLANs break up broadcast domains in a switched internetwork. [f0103.eps]</SLUG>
<PARA><NOBR REF="3">Chapter 3</NOBR> includes detailed information about VLANs and how to configure them in an internetwork. It is imperative that you understand VLANs because the traditional way of building the campus network is being redesigned and VLANs are a large factor in building the new campus model.</PARA>
</SECTION>
</SECTION>
</SECTION>
</SECTION>
<SECTION ID="1.3"><TITLE>The New Campus Model</TITLE>
<PARA><DROPCAP>T</DROPCAP>he changes in customer network requirements, combined with the problems of collisions, bandwidth, and broadcasts, have necessitated a new campus network design. Higher user demands and complex applications force network designers to think more about traffic patterns instead of solving a typical isolated department issue. We can no longer just think about creating subnets and putting different departments into each subnet. We need to create a network that makes everyone capable of reaching all network services easily. Server farms, where all enterprise servers are located in one physical location, really take a toll on the existing network infrastructure and make the way we used to design networks obsolete. We must pay attention to traffic patterns and how to solve bandwidth issues. This can be accomplished with higher-end routing and switching techniques.</PARA>
<PARA>Because of new bandwidth-intensive applications such as video and audio to the desktop, as well as more and more work being performed on the Internet, the new campus model must be able to provide the following:</PARA>
<RUNINBLOCK><RUNINHEAD>Fast convergence</RUNINHEAD>
<RUNINPARA>When a network change takes place, the network must be able to adapt to it very quickly and keep data moving. </RUNINPARA></RUNINBLOCK>
<RUNINBLOCK><RUNINHEAD>Redundancy</RUNINHEAD>
<RUNINPARA>The network design must have provisions that keep the network up and running even if a link fails. </RUNINPARA></RUNINBLOCK>
<RUNINBLOCK><RUNINHEAD>Scalable size and throughput</RUNINHEAD>
<RUNINPARA>As users and new devices are added to the network, the network infrastructure must be able to handle the new increase in traffic. </RUNINPARA></RUNINBLOCK>
<RUNINBLOCK><RUNINHEAD>Availability of enterprise services</RUNINHEAD>
<RUNINPARA>Enterprise applications must be available to support all users on the internetwork. </RUNINPARA></RUNINBLOCK>
<RUNINBLOCK><RUNINHEAD>The new 20/80 rule</RUNINHEAD>
<RUNINPARA>Instead of 80 percent of the users' traffic staying on the local network, 80 percent of the traffic will now cross the backbone and only 20 percent will stay on the local network. </RUNINPARA></RUNINBLOCK>
<RUNINBLOCK><RUNINHEAD>Multiprotocol support</RUNINHEAD>
<RUNINPARA>Campus networks must support multiple protocols, both routed and routing protocols. Routed protocols, such as IP or IPX, are used to send user data through the internetwork. Routing protocols, such as RIP, Enhanced Interior Gateway Routing Protocol (EIGRP), and Open Shortest Path First (OSPF), are used to send network updates between routers, which will in turn update their routing tables.</RUNINPARA></RUNINBLOCK>
<RUNINBLOCK><RUNINHEAD>Multicasting</RUNINHEAD>
<RUNINPARA>Multicasting is sending a broadcast to a defined subnet or group of users. Users can be placed in multicast groups, for example, for videoconferencing. </RUNINPARA></RUNINBLOCK>
<PARA>The new campus model provides remote services quickly and easily to all users. The users have no idea where the resources are located in the internetwork, nor should they. There are three types of network services, which are created and defined by the administrator and should appear to the users as local services:</PARA>
<PARA><KEYTERM>Local services</KEYTERM> are network services that are located on the same subnet or network as the users accessing them. Users do not cross layer 3 devices and the network services are in the same broadcast domain as the users. This type of traffic never crosses the backbone.</PARA>
<PARA><KEYTERM>Remote services</KEYTERM> are close to users but not on the same network or subnet as the users. The users would have to cross a layer 3 device to communicate with the network services. However, they might not have to cross the backbone.</PARA>
<PARA><KEYTERM>Enterprise services</KEYTERM> are defined as services that are provided to all users on the internetwork. Layer 3 switches or routers are required in this scenario because an enterprise service must be close to the core and would probably be based in its own subnet. Examples of these services include Internet access, e-mail, and possibly videoconferencing. When servers that host enterprise services are placed close to the backbone, all users would be the same distance from the servers, but all user data would have to cross the backbone to get to the services.</PARA>
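<PARA>The three service types differ only in what a user's traffic has to cross to reach them. The short Python helper below restates the definitions above as a classification function; the boolean inputs are a deliberate simplification, not real topology data.</PARA>
<CODE>
# Sketch: classify a network service relative to a user, following the
# definitions above. The inputs are simplified booleans, not real topology data.

def classify_service(same_broadcast_domain, crosses_backbone):
    if same_broadcast_domain:
        return "local service"        # no layer 3 device is crossed
    if not crosses_backbone:
        return "remote service"       # crosses a layer 3 device but not the core
    return "enterprise service"       # reached across the campus backbone

print(classify_service(True, False))    # local service
print(classify_service(False, False))   # remote service
print(classify_service(False, True))    # enterprise service
</CODE>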
<PARA><DROPCAP>S</DROPCAP>witching technologies are crucial to the new network design. Because the prices on layer 2 switching have been dropping dramatically, it is easier to justify the cost of buying switches for your entire network. This doesn't mean that every business can afford switch ports for all users, but it does allow for a cost-effective upgrade solution when the time comes.</PARA>
<PARA>To understand switching technologies and how routers and switches work together, you must understand the Open Systems Interconnection (OSI) model. This section will give you a general overview of the OSI model and the devices that are specified at each layer.</PARA>
<NOTE>For more detailed information about the OSI model, please see <EMPHASIS FORMAT="italic">CCNA: Cisco Certified Network Associate Study Guide</EMPHASIS>, by Todd Lammle (Sybex, 2000). You'll need a basic understanding of the OSI model to fully understand discussions in which it is included throughout the rest of the book. </NOTE>
<SECTION ID="1.4.1" POS="1"><TITLE>Open Systems Interconnection (OSI) Model</TITLE>
<!-- <PARA>As you probably already know, the OSI model has seven layers, each of which specifies functions that allow data to be transmitted from host to host on an internetwork. Figure 1.4 shows the OSI model and the functions of each layer.</PARA> -->
<SLUG NUM="1.4">Figure 1.4: The OSI model and the layer functions [f0104.eps]</SLUG>
<!-- <PARA>The OSI model is the cornerstone for application developers to write and create networked applications that run on an internetwork. What is important to network engineers and technicians is the encapsulation of data as it is transmitted on a network.</PARA> -->
<!-- <PARA><KEYTERM>Data encapsulation</KEYTERM> is the process by which the information in a protocol is wrapped, or contained, in the data section of another protocol. In the OSI reference model, each layer encapsulates the layer immediately above it as the data flows down the protocol stack.</PARA>
<PARA>The logical communication that happens at each layer of the OSI reference model doesn't involve many physical connections because the information each protocol needs to send is encapsulated in the layer of protocol information beneath it. This encapsulation produces a set of data called a packet (see Figure 1.5).</PARA> -->
<!-- <SLUG NUM="1.5">Figure 1.5: Data encapsulation at each layer of the OSI reference model [f0105.eps]</SLUG> -->
<!-- <PARA>Looking at Figure 1.5, you can follow the data down through the model as it's encapsulated at each layer of the OSI reference model. Cisco courses typically focus only on layers 2-4. </PARA>
<PARA>Each layer communicates only with its peer layer on the receiving host, and they exchange Protocol Data Units (PDUs). The PDUs are attached to the data at each layer as it traverses down the model and is read only by its peer on the receiving side. Each layer has a specific name for the PDU, as shown in Table 1.1.</PARA>
<TABLE NUM="1.1" TABLEENTRYNUM="2">
<TABLETITLE>OSI Encapsulation</TABLETITLE>
<TABLEHEAD>
<TABLEROW>
<TABLEENTRY><PARA>OSI Layer</PARA></TABLEENTRY>
<TABLEENTRY><PARA>Name of Protocol Data Units (PDUs)</PARA></TABLEENTRY>
</TABLEROW>
</TABLEHEAD>
<TABLEBODY>
<TABLEROW>
<TABLEENTRY><PARA>Transport</PARA></TABLEENTRY>
<TABLEENTRY><PARA>Segment</PARA></TABLEENTRY>
</TABLEROW>
<TABLEROW>
<TABLEENTRY><PARA>Network</PARA></TABLEENTRY>
<TABLEENTRY><PARA>Packet</PARA></TABLEENTRY>
</TABLEROW>
<TABLEROW>
<TABLEENTRY><PARA>Data Link</PARA></TABLEENTRY>
<TABLEENTRY><PARA>Frames</PARA></TABLEENTRY>
</TABLEROW>
<TABLEROW>
<TABLEENTRY><PARA>Physical</PARA></TABLEENTRY>
<TABLEENTRY><PARA>Bits</PARA></TABLEENTRY>
</TABLEROW>
</TABLEBODY>
</TABLE>
<PARA>Starting at the Application layer, data is converted for transmission on the network, then encapsulated in Presentation layer information. When the Presentation layer receives this information, it looks like generic data. The Presentation layer hands the data to the Session layer, which is responsible for synchronizing the session with the destination host.</PARA>
<PARA>The Session layer then passes this data to the Transport layer, which transports the data from the source host to the destination host in a reliable fashion. But before this happens, the Network layer adds routing information to the packet. It then passes the packet on to the Data Link layer for framing and for connection to the Physical layer. The Physical layer sends the data as 1s and 0s to the destination host across fiber or copper wiring. Finally, when the destination host receives the 1s and 0s, the data passes back up through the model, one layer at a time. The data is de-encapsulated at each of the OSI model's peer layers.</PARA> -->
<PARA>At a transmitting device, the data encapsulation method is as follows:</PARA>
<LIST MARK="number">
<LISTITEM><PARA>User information is converted to data for transmission on the network.</PARA></LISTITEM>
<LISTITEM><PARA>Data is converted to segments at the Transport layer, and a reliable session is possibly set up.</PARA></LISTITEM>
<LISTITEM><PARA>Segments are converted to packets or datagrams at the Network layer, and routing information is added to the PDU.</PARA></LISTITEM>
<LISTITEM><PARA>Packets or datagrams are converted to frames at the Data Link layer, and hardware addresses are used to communicate with local hosts on the network medium.</PARA></LISTITEM>
<LISTITEM><PARA>Frames are converted to bits, and 1s and 0s are encoded within the digital signal.</PARA></LISTITEM>
</LIST>
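<PARA>The following Python sketch walks through the same five steps conceptually. The header fields are placeholders rather than real protocol formats; the point is only to show each layer wrapping the PDU handed down from the layer above.</PARA>
<CODE>
# Conceptual sketch of the five encapsulation steps above.
# Header fields are placeholders, not real protocol formats.

def encapsulate(user_data, src_port, dst_port, src_ip, dst_ip, src_mac, dst_mac):
    data = user_data.encode()                               # 1. user data
    segment = {"src_port": src_port, "dst_port": dst_port,
               "payload": data}                             # 2. Transport layer: segment
    packet = {"src_ip": src_ip, "dst_ip": dst_ip,
              "payload": segment}                           # 3. Network layer: packet
    frame = {"dst_mac": dst_mac, "src_mac": src_mac,
             "payload": packet, "fcs": "..."}               # 4. Data Link layer: frame
    bits = repr(frame).encode()                             # 5. Physical layer: bits on the wire
    return bits

wire = encapsulate("GET /index.html", 1026, 80,
                   "10.1.1.5", "10.2.2.9",
                   "0000.0c12.3456", "0000.0c65.4321")
print(len(wire), "bytes handed to the Physical layer")
</CODE>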
<PARA>Now that you have a sense of the OSI model and how routers and switches work together, it is time to turn our attention to the specifics of each layer of switching technology.</PARA>
<PARA>Layer 2 switching is hardware based, which means it uses the Media Access Control (MAC) address from the host's network interface cards (NICs) to filter the network. Switches use Application-Specific Integrated Circuits (ASICs) to build and maintain filter tables. It is OK to think of a layer 2 switch as a multiport bridge. </PARA>
<PARA>Layer 2 switching provides the following:</PARA>
<LIST MARK="bullet">
<LISTITEM><PARA>Hardware-based bridging (based on MAC addresses)</PARA></LISTITEM>
<LISTITEM><PARA>Wire-speed performance</PARA></LISTITEM>
<LISTITEM><PARA>Low latency</PARA></LISTITEM>
<LISTITEM><PARA>Low cost</PARA></LISTITEM>
</LIST>
<PARA>Layer 2 switching is so efficient because there is no modification to the data packet, only to the frame encapsulation of the packet, and only when the data packet is passing through dissimilar media (such as from Ethernet to FDDI).</PARA>
<PARA>Use layer 2 switching for workgroup connectivity and network segmentation (breaking up collision domains). This allows you to create a flatter network design and one with more network segments than traditional 10BaseT shared networks.</PARA>
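<PARA>The heart of a layer 2 switch is the filter table the ASICs build from source MAC addresses. The Python sketch below shows the basic learn-and-forward logic; the MAC addresses and port numbers are hypothetical, and a real switch does this in hardware at wire speed.</PARA>
<CODE>
# Minimal sketch of layer 2 switching: learn source MACs into a filter table,
# forward known unicast frames out a single port, and flood everything else.
# MAC addresses and port numbers are hypothetical.

mac_table = {}   # MAC address -> port

def switch_frame(in_port, src_mac, dst_mac, all_ports):
    mac_table[src_mac] = in_port                  # learn where the sender lives
    if dst_mac in mac_table:                      # known unicast: forward out one port
        return [mac_table[dst_mac]]
    # unknown unicast or broadcast: flood out every port except the ingress port
    return [p for p in all_ports if p != in_port]

ports = [1, 2, 3, 4]
print(switch_frame(1, "aaaa.aaaa.aaaa", "bbbb.bbbb.bbbb", ports))  # flooded: [2, 3, 4]
print(switch_frame(2, "bbbb.bbbb.bbbb", "aaaa.aaaa.aaaa", ports))  # forwarded: [1]
</CODE>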
<PARA>Layer 2 switching has helped develop new components in the network infrastructure:</PARA>
<RUNINBLOCK><RUNINHEAD>Server farms</RUNINHEAD>
<RUNINPARA>Servers no longer need to be distributed to physical locations, because virtual LANs can be used to create broadcast domains in a switched internetwork. This means that all servers can be placed in a central location, yet a certain server can still be part of a workgroup in a remote branch, for example. </RUNINPARA></RUNINBLOCK>
<RUNINBLOCK><RUNINHEAD>Intranets</RUNINHEAD>
<RUNINPARA>Allow organization-wide client/server communications based on Web technology.</RUNINPARA></RUNINBLOCK>
<PARA>These new technologies are allowing more data to flow off of local subnets and onto a routed network, where a router's performance can become the bottleneck.</PARA>
<SECTION ID="1.4.2.1"><TITLE>Limitations of Layer 2 Switching</TITLE>
<PARA>Layer 2 switches have the same limitations as bridge networks. Remember that bridges are good if you design the network by the 80/20 rule: users spend 80 percent of their time on their local segment.</PARA>
<PARA>Bridged networks break up collision domains, but the network is still one large broadcast domain. Similarly, layer 2 switches (bridges) cannot break up broadcast domains, which can cause performance issues and limit the size of your network. Broadcasts and multicasts, along with the slow convergence of spanning tree, can cause major problems as the network grows. Because of these problems, layer 2 switches cannot completely replace routers in the internetwork. </PARA>
</SECTION>
</SECTION>
<SECTION ID="1.4.3"><TITLE>Routing</TITLE>
<PARA>We want to explain how routing works and how routers work in an internetwork before discussing layer 3 switching in the next section. Routers and layer 3 switches are similar in concept but not design. In this section, we'll discuss routers and what they provide in an internetwork today.</PARA>
<PARA>Routers break up collision domains like bridges do. In addition, routers also break up broadcast/multicast domains.</PARA>
<PARA>The benefits of routing include:</PARA>
<LIST MARK="bullet">
<LISTITEM><PARA>Breakup of broadcast domains</PARA></LISTITEM>
<LISTITEM><PARA>Optimal path determination and traffic management</PARA></LISTITEM>
<LISTITEM><PARA>Logical (layer 3) addressing</PARA></LISTITEM>
<LISTITEM><PARA>Security through packet filtering</PARA></LISTITEM>
</LIST>
<PARA>Routers provide optimal path determination because the router examines each and every packet that enters an interface, and they improve network segmentation by forwarding data packets only to a known destination network. Routers are not interested in hosts, only networks. If a router does not know about a remote network to which a packet is destined, it simply drops the packet rather than forwarding it. This packet-by-packet examination is what provides traffic management.</PARA>
<PARA>The Network layer of the OSI model defines a virtual-or logical-network address. Hosts and routers use these addresses to send information from host to host within an internetwork. Every network interface must have a logical address, typically an IP address.</PARA>
<PARA>Security can be obtained by having a router read the packet header information and apply filters defined by the network administrator (access lists).</PARA>
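<PARA>The "networks, not hosts" behavior described above can be sketched as a longest-prefix lookup against a routing table. The example below uses Python's standard ipaddress module; the networks and next hops are made up, and a packet whose destination matches no known network is simply dropped.</PARA>
<CODE>
# Sketch of "routers care about networks, not hosts": find the longest
# matching destination network, and drop the packet if none is known.
import ipaddress

routing_table = {              # destination network -> next hop (illustrative)
    "10.1.0.0/16": "192.168.1.1",
    "10.1.5.0/24": "192.168.1.2",
}

def route(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    candidates = [ipaddress.ip_network(n) for n in routing_table
                  if dst in ipaddress.ip_network(n)]
    if not candidates:
        return "dropped: destination network unknown"
    best = max(candidates, key=lambda n: n.prefixlen)    # longest prefix wins
    return f"forward via {routing_table[str(best)]}"

print(route("10.1.5.20"))    # matches the /24, which is more specific than the /16
print(route("172.16.3.9"))   # no matching network: dropped, not forwarded
</CODE>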
<PARA>The main difference between a layer 3 switch and a router is the physical implementation. Traditional routers use microprocessors to make forwarding decisions, whereas a layer 3 switch performs packet switching entirely in hardware (although some higher-end traditional routers include hardware forwarding functions as well). Layer 3 switches can be placed anywhere in the network because they handle high-performance LAN traffic and can cost-effectively replace routers. </PARA>
<PARA>Layer 3 switching is hardware-based packet forwarding; all packet forwarding is handled by hardware ASICs. Layer 3 switches are really no different functionally from a traditional router and perform the same functions, which are listed here:</PARA>
<LIST MARK="bullet">
<LISTITEM><PARA>Determine paths based on logical addressing</PARA></LISTITEM>
<LISTITEM><PARA>Run layer 3 checksums (on header only)</PARA></LISTITEM>
<LISTITEM><PARA>Use Time to Live (TTL)</PARA></LISTITEM>
<LISTITEM><PARA>Process and respond to any option information</PARA></LISTITEM>
<LISTITEM><PARA>Update Simple Network Management Protocol (SNMP) managers with Management Information Base (MIB) information</PARA></LISTITEM>
<LISTITEM><PARA>Provide quality of service (QoS)</PARA></LISTITEM>
</LIST>
<NOTE>The Cisco 12000 Gigabit Switch Router (GSR) performs layer 3 switching by using a crossbar switch matrix, but all switches in the Catalyst family use ASIC-based switching.</NOTE>
<PARA>Layer 4 switching is considered a hardware-based layer 3 switching technology that can also consider the application used (for example, Telnet or FTP). Layer 4 switching provides additional routing above layer 3 by using the port numbers found in the Transport layer header to make routing decisions. These port numbers are found in Request for Comments (RFC) 1700 and reference the upper-layer protocol, program, or application.</PARA>
<PARA>Layer 4 information has been used to help make routing decisions for quite a while. For example, extended access lists can filter packets based on layer 4 port numbers. Another example is accounting information gathered by NetFlow switching in Cisco's higher-end routers. </PARA>
<PARA>The largest benefit of layer 4 switching is that the network administrator can configure a layer 4 switch to prioritize data traffic by application, which means a QoS can be defined for each user. For example, a number of users can be defined as a Video group and be assigned more priority, or bandwidth, based on the need for videoconferencing. </PARA>
<PARA>However, because users can be part of many groups and run many applications, the layer 4 switches must be able to provide a huge filter table or response time would suffer. This filter table must be much larger than any layer 2 or 3 switch. A layer 2 switch might have a filter table only as large as the number of users connected to the network, maybe even less if some hubs are used within the switched fabric. However, a layer 4 switch might have five or six entries for each and every device connected to the network! If the layer 4 switch does not have a filter table that includes all the information, the switch will not be able to produce wire-speed results. </PARA>
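<PARA>The Python sketch below illustrates both points: the forwarding decision keys on the full five-field flow (which is why the filter table grows to several entries per host), and the destination port can map to an administrator-defined priority. The priority policy is invented; the well-known port numbers themselves are standard assignments.</PARA>
<CODE>
# Sketch of layer 4 switching: the decision also looks at Transport layer
# port numbers, so traffic can be prioritized per application.
# The priority policy below is hypothetical; the ports are well-known values.

app_priority = {
    80:   "high",     # HTTP (hypothetical policy: Web traffic gets priority)
    1720: "high",     # H.323 call setup, e.g. a Video group
    23:   "low",      # Telnet
    21:   "low",      # FTP control
}

def classify(src_ip, dst_ip, protocol, src_port, dst_port):
    """One filter-table entry per conversation: five values per flow, which is
    why a layer 4 filter table is far larger than a layer 2 MAC table."""
    flow = (src_ip, dst_ip, protocol, src_port, dst_port)
    priority = app_priority.get(dst_port, "normal")
    return flow, priority

print(classify("10.1.1.5", "10.2.2.9", "tcp", 1026, 80))
print(classify("10.1.1.5", "10.2.2.9", "tcp", 1027, 23))
</CODE>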
<PARA><KEYTERM>Multi-layer switching</KEYTERM> combines layer 2, 3, and 4 switching technologies and provides high-speed scalability with low latency. It accomplishes this combination of high-speed scalability and low latency by using huge filter tables based on criteria designed by the network administrator.</PARA>
<PARA>Multi-layer switching can move traffic at wire speed and also provide layer 3 routing, which can remove the bottleneck from the network routers. This technology is based on the idea of route once, switch many.</PARA>
<PARA>Multi-layer switching can make routing/switching decisions based on the following: </PARA>
<LIST MARK="bullet">
<LISTITEM><PARA>MAC source/destination address in a Data Link frame</PARA></LISTITEM>
<LISTITEM><PARA>IP source/destination address in the Network layer header</PARA></LISTITEM>
<LISTITEM><PARA>Protocol field in the Network layer header</PARA></LISTITEM>
<LISTITEM><PARA>Port source/destination numbers in the Transport layer header</PARA></LISTITEM>
</LIST>
<PARA>There is no performance difference between a layer 3 and a layer 4 switch because the routing/switching is all hardware based. </PARA>
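<PARA>The "route once, switch many" idea can be sketched as a flow cache keyed on the fields listed above: the first packet of a flow is handed to the routing function, and the result is cached so later packets in the same flow skip the router entirely. The route logic and port numbers below are stand-ins, not a real MLS implementation.</PARA>
<CODE>
# Sketch of multi-layer switching ("route once, switch many"): the first
# packet of a flow is routed, the result is cached, and subsequent packets
# that match the cached flow are forwarded without touching the router.

flow_cache = {}   # (src_ip, dst_ip, protocol, src_port, dst_port) -> egress port

def slow_path_route(dst_ip):
    """Stand-in for the route processor; returns an egress port."""
    return 48 if dst_ip.startswith("10.2.") else 1

def forward(src_ip, dst_ip, protocol, src_port, dst_port):
    key = (src_ip, dst_ip, protocol, src_port, dst_port)
    if key in flow_cache:                      # switch many: take the cached shortcut
        return flow_cache[key], "cached"
    egress = slow_path_route(dst_ip)           # route once: ask the router
    flow_cache[key] = egress
    return egress, "routed"

print(forward("10.1.1.5", "10.2.2.9", "tcp", 1026, 80))   # (48, 'routed')
print(forward("10.1.1.5", "10.2.2.9", "tcp", 1026, 80))   # (48, 'cached')
</CODE>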
<NOTE>MLS will be discussed in more detail in <NOBR REF="8">Chapter 8</NOBR>.</NOTE>
<PARA>It is important that you have an understanding of the different OSI layers and what they provide before continuing on to the Cisco three-layer hierarchical model.</PARA>
<PARA><DROPCAP>M</DROPCAP>ost of us learned about hierarchy early in life. Anyone with older siblings learned what it was like to be at the bottom of the hierarchy! Regardless of where you were first exposed to hierarchy, most of us experience it in many aspects of our lives. <KEYTERM>Hierarchy</KEYTERM> helps us to understand where things belong, how things fit together, and what functions go where. It brings order and understandability to otherwise complex models. If you want a pay raise, hierarchy dictates that you ask your boss, not your subordinate. That is the person whose role it is to grant (or deny) your request.</PARA>
<PARA>Hierarchy has many of the same benefits in network design that it has in other areas. When used properly in network design, it makes networks more predictable. It helps us to define and expect at which levels of the hierarchy we should perform certain functions. You would ask your boss, not your subordinate, for a raise because of their positions in the business hierarchy. The hierarchy requires that you ask someone at a higher level than yours. Likewise, you can use tools like access lists at certain levels in hierarchical networks and you must avoid them at others.</PARA>
<PARA>Let's face it, large networks can be extremely complicated, with multiple protocols, detailed configurations, and diverse technologies. Hierarchy helps us to summarize a complex collection of details into an understandable model. Then, as specific configurations are needed, the model dictates the appropriate manner for them to be applied.</PARA>
<PARA>The Cisco hierarchical model is used to help you design a scalable, reliable, cost-effective hierarchical internetwork. Cisco defines three layers of hierarchy, as shown in Figure 1.6, each with specific functionality. </PARA>
<SLUG NUM="1.6">Figure 1.6: The Cisco hierarchical model [f0106.eps]</SLUG>
<PARA>The three layers are as follows:</PARA>
<LIST MARK="bullet">
<LISTITEM><PARA>Core</PARA></LISTITEM>
<LISTITEM><PARA>Distribution</PARA></LISTITEM>
<LISTITEM><PARA>Access</PARA></LISTITEM>
</LIST>
<PARA>Each layer has specific responsibilities. Remember, however, that the three layers are logical and not necessarily physical. Three layers do not necessarily mean three separate devices. Consider the OSI model, another logical hierarchy. The seven layers describe functions but not necessarily protocols, right? Sometimes a protocol maps to more than one layer of the OSI model, and sometimes multiple protocols communicate within a single layer. In the same way, when you build physical implementations of hierarchical networks, you may have many devices in a single layer, or you might have a single device performing functions at two layers. The definition of the layers is logical, not physical.</PARA>
<PARA>Before we examine these layers and their functions, consider a common hierarchical design as shown in Figure 1.7. The phrase "keep local traffic local" has almost become a cliché in the networking world. However, the underlying concept has merit. Hierarchical design lends itself perfectly to fulfilling this concept. Now, let's take a closer look at each of the layers. </PARA>
<SLUG NUM="1.7">Figure 1.7: A hierarchical network design [f0107.eps]</SLUG>
<SECTION ID="1.5.1.1"><TITLE>Core Layer</TITLE>
<PARA>The <KEYTERM>core layer</KEYTERM> is literally the core of the network. At the top of the hierarchy, the core layer is responsible for transporting large amounts of traffic both reliably and quickly. The only purpose of the core layer of the network is to switch traffic as quickly as possible. The traffic transported across the core is common to a majority of users. However, remember that user data is processed at the distribution layer, and the distribution layer forwards the requests to the core, if needed. </PARA>
<PARA>If there is a failure in the core, <EMPHASIS FORMAT="italic">every single</EMPHASIS> user can be affected. Therefore, fault tolerance at this layer is an issue. The core is likely to see large volumes of traffic, so speed and latency are driving concerns here. Given the function of the core, we can now look at some design specifics to consider. Let's start with some things you know you don't want to do:</PARA>
<LIST MARK="bullet">
<LISTITEM><PARA>Don't do anything to slow down traffic. This includes using access lists, routing between virtual local area networks (VLANs), and packet filtering.</PARA></LISTITEM>
<LISTITEM><PARA>Don't support workgroup access here.</PARA></LISTITEM>
<LISTITEM><PARA>Avoid expanding the core (i.e., adding routers) when the internetwork grows. If performance becomes an issue in the core, give preference to upgrades over expansion.</PARA></LISTITEM>
</LIST>
<PARA>Now, there are a few things that you want to make sure to get done as you design the core:</PARA>
<LIST MARK="bullet">
<LISTITEM><PARA>Design the core for high reliability. Consider data-link technologies that facilitate both speed and redundancy, such as FDDI, FastEthernet (with redundant links), or even ATM.</PARA></LISTITEM>
<LISTITEM><PARA>Design with speed in mind. The core should have very little latency.</PARA></LISTITEM>
<LISTITEM><PARA>Select routing protocols with lower convergence times. Fast and redundant data-link connectivity is no help if your routing tables are shot!</PARA></LISTITEM>
</LIST>
</SECTION>
<SECTION ID="1.5.1.2"><TITLE>Distribution Layer</TITLE>
<PARA>The <KEYTERM>distribution layer</KEYTERM> is sometimes referred to as the workgroup layer and is the communication point between the access layer and the core. The primary function of the distribution layer is to provide routing, filtering, and WAN access and to determine how packets can access the core, if needed. The distribution layer must determine the fastest way that user requests are serviced (for example, how a file request is forwarded to a server). After the distribution layer determines the best path, it forwards the request to the core layer. The core layer is then responsible for quickly transporting the request to the correct service. </PARA>
<PARA>The distribution layer is the place to implement policies for the network. Here, you can exercise considerable flexibility in defining network operation. There are several items that generally should be done at the distribution layer:</PARA>
<LIST MARK="bullet">
<LISTITEM><PARA>Implement tools such as access lists, packet filtering, and queuing.</PARA></LISTITEM>
<LISTITEM><PARA>Implement security and network policies, including address translation and firewalls.</PARA></LISTITEM>
<LISTITEM><PARA>Redistribute between routing protocols, including static routing.</PARA></LISTITEM>
<LISTITEM><PARA>Route between VLANs and other workgroup support functions.</PARA></LISTITEM>
<LISTITEM><PARA>Define broadcast and multicast domains.</PARA></LISTITEM>
</LIST>
<PARA>Things to avoid at the distribution layer are limited to those functions that exclusively belong to one of the other layers.</PARA>
</SECTION>
<SECTION ID="1.5.1.3"><TITLE>Access Layer</TITLE>
<PARA>The <KEYTERM>access layer</KEYTERM> controls user and workgroup access to internetwork resources. The access layer is sometimes referred to as the desktop layer. The network resources that most users need will be available locally. Any traffic for remote services is handled by the distribution layer. The following functions should be included at this layer:</PARA>
<LIST MARK="bullet">
<LISTITEM><PARA>Continued (from distribution layer) access control and policies.</PARA></LISTITEM>
<LISTITEM><PARA>Creation of separate collision domains (segmentation).</PARA></LISTITEM>
<LISTITEM><PARA>Workgroup connectivity to the distribution layer.</PARA></LISTITEM>
<LISTITEM><PARA>Technologies such as dial-on-demand routing (DDR) and Ethernet switching are frequently seen here in the access layer. Static routing (instead of dynamic routing protocols) is seen here as well. </PARA></LISTITEM>
</LIST>
<PARA>As already noted, three separate levels do not have to imply three separate routers. It could be fewer, or it could be more. Remember that this is a <EMPHASIS FORMAT="italic">layered </EMPHASIS>approach. </PARA>
<PARA><DROPCAP>U</DROPCAP>nderstanding the campus size and traffic is an important factor in network design. A large campus is defined as several or many colocated buildings, a medium campus as one or a few colocated buildings, and a small campus network as a single building.</PARA>
<!-- <PARA>By understanding your campus size, you can choose Cisco products that will fit your business needs and grow with your company. Cisco switches are produced to fit neatly within its three-layer model. This helps you decide which equipment to use for your network efficiently and quickly. </PARA> -->
<SECTION ID="1.6.1" POS="1"><TITLE>Access, Distribution, and Core Layer Switches</TITLE>
<!-- <PARA>The access layer, as you already know, is where users gain access to the internetwork. The switches deployed at this layer must be able to handle connecting individual desktop devices to the internetwork. </PARA>
<PARA>The Cisco solutions at the access layer include the following:</PARA>
<RUNINBLOCK><RUNINHEAD>1900/2800</RUNINHEAD>
<RUNINPARA>Provide switched 10Mbps to the desktop or to 10BaseT hubs in small to medium campus networks. </RUNINPARA></RUNINBLOCK>
<RUNINBLOCK><RUNINHEAD>2900</RUNINHEAD>
<RUNINPARA>Provides 10/100Mbps switched access for up to 50 users and gigabit speeds for servers and uplinks. </RUNINPARA></RUNINBLOCK>
<RUNINBLOCK><RUNINHEAD>4000</RUNINHEAD>
<RUNINPARA>Provides a 10/100/1000Mbps advanced high-performance enterprise solution for up to 96 users and up to 36 Gigabit Ethernet ports for servers. </RUNINPARA></RUNINBLOCK>
<RUNINBLOCK><RUNINHEAD>5000/5500</RUNINHEAD>
<RUNINPARA>Used in large campuses to provide access for more than 250 users. The Catalyst 5000 series supports 10/100/1000Mbps Ethernet switching. </RUNINPARA></RUNINBLOCK>
<PARA>As discussed earlier, the primary function of the distribution layer is to provide routing, filtering, and WAN access and to determine how packets can access the core, if needed.</PARA>
<PARA>Distribution layer switches are the aggregation point for multiple access switches and must be capable of handling large amounts of traffic from these access layer devices. The distribution layer switches must also be able to participate in multi-layer switching (MLS) and be able to handle a route processor. </PARA>
<PARA>The Cisco switches that provide these functions are as follows:</PARA>
<RUNINBLOCK><RUNINHEAD>2926G</RUNINHEAD>
<RUNINPARA>A robust switch that uses an external router processor like a 4000 or 7000 series router.</RUNINPARA></RUNINBLOCK>
<RUNINBLOCK><RUNINHEAD>5000/5500</RUNINHEAD>
<RUNINPARA>The most effective distribution layer switch, it can support a large amount of connections and also an internal route processor module called a Route Switch Module (RSM). It can switch process up to 176KBps. </RUNINPARA></RUNINBLOCK>
<RUNINBLOCK><RUNINHEAD>6000</RUNINHEAD>
<RUNINPARA>The Catalyst 6000 can provide up to 384 10/100 Ethernet connections, 192 100FX FastEthernet connections, and 130 Gigabit Ethernet ports. </RUNINPARA></RUNINBLOCK>
<PARA>The core layer must be efficient and do nothing to slow down packets as they traverse the backbone. The following switches are recommended for use in the core:</PARA>
<RUNINBLOCK><RUNINHEAD>5000/5500</RUNINHEAD>
<RUNINPARA>The 5000 is a great distribution layer switch, and the 5500 is a great core layer switch. The Catalyst 5000 series of switches includes the 5000, 5002, 5500, 5505, and 5509. All of the 5000 series switches use the same cards and modules, which makes them cost effective and provides protection for your investment.</RUNINPARA></RUNINBLOCK>
<RUNINBLOCK><RUNINHEAD>6500</RUNINHEAD>
<RUNINPARA>The Catalyst 6500 series switches are designed to address the need for gigabit port density, high availability, and multi-layer switching for the core layer backbone and server-aggregation environments. These switches use the Cisco IOS to utilize the high speeds of the ASICs, which allows the delivery of wire-speed traffic management services end to end.</RUNINPARA></RUNINBLOCK>
<RUNINBLOCK><RUNINHEAD>8500</RUNINHEAD>
<RUNINPARA>The Cisco Catalyst 8500 is a core layer switch that provides high-performance switching. The Catalyst 8500 uses Application-Specific Integrated Circuits (ASICs) to provide multiple-layer protocol support including Internet Protocol (IP), IP multicast, bridging, Asynchronous Transfer Mode (ATM) switching, and CiscoAssure policy-enabled Quality of Service (QoS).</RUNINPARA></RUNINBLOCK>
<PARA>All of these switches provide wire-speed multicast forwarding, routing, and Protocol Independent Multicast (PIM) for scalable multicast routing. These switches are perfect for providing the high bandwidth and performance needed for a core router. The 6500 and 8500 switches can aggregate multiprotocol traffic from multiple remote wiring closets and workgroup switches.</PARA>
</SECTION> -->
<SLUG NONUM="a1"/>
</SECTION>
</SECTION>
<SECTION ID="1.7"><TITLE>The Building Block</TITLE>
<PARA><DROPCAP>R</DROPCAP>emember the saying "Everything I need to know I learned in kindergarten"? Well, it appears to be true. Cisco has determined that if you follow the hierarchical model they have designed, it promotes a building block approach to network design. If you did well with building blocks in your younger years, you can just apply that same technique to building large, multimillion-dollar networks. Kind of makes you glad it's someone else's money you're playing with, doesn't it?</PARA>
<PARA>In all seriousness, Cisco has determined some fundamental campus elements that help you build network building blocks:</PARA>
<RUNINBLOCK><RUNINHEAD>Switch blocks</RUNINHEAD>
<RUNINPARA>Access layer switches connected to the distribution layer devices</RUNINPARA></RUNINBLOCK>
<RUNINBLOCK><RUNINHEAD>Core blocks</RUNINHEAD>
<RUNINPARA>Multiple switch blocks connected together, possibly with 5500, 6500, or 8500 switches</RUNINPARA></RUNINBLOCK>
<PARA>Within these fundamental elements, there are three contributing variables:</PARA>
<RUNINBLOCK><RUNINHEAD>Server blocks</RUNINHEAD>
<RUNINPARA>Groups of network servers on a single subnet</RUNINPARA></RUNINBLOCK>
<RUNINBLOCK><RUNINHEAD>WAN blocks</RUNINHEAD>
<RUNINPARA>Multiple connections to an ISP or multiple ISPs</RUNINPARA></RUNINBLOCK>
<RUNINBLOCK><RUNINHEAD>Mainframe blocks</RUNINHEAD>
<RUNINPARA>Groups of centralized mainframe services</RUNINPARA></RUNINBLOCK>
<PARA>The <KEYTERM>switch block</KEYTERM> is a combination of layer 2 switches and layer 3 routers. The layer 2 switches connect users in the wiring closet into the access layer and provide 10 or 100Mbps dedicated connections; 1900/2820 and 2900 Catalyst switches can be used in the switch block. </PARA>
<PARA>From here, the access layer switches will connect into one or more distribution layer switches, which will be the central connection point for all switches coming from the wiring closets. The distribution layer device is either a switch with an external router or a multi-layer switch. The distribution layer switch will then provide layer 3 routing functions, if needed. </PARA>
<PARA>The distribution layer router will prevent broadcast storms that could happen on an access layer switch from propagating throughout the entire internetwork. The broadcast storm would be isolated to only the access layer switch in which the problem exists.</PARA>
<PARA>To understand how large a switch block can be, you must understand the traffic types and the size and number of workgroups that will be using them. The number of switches that can collapse from the access layer to the distribution layer depends on the following:</PARA>
<LIST MARK="bullet">
<LISTITEM><PARA>Routers at the distribution layer</PARA></LISTITEM>
<LISTITEM><PARA>Number of users connected to the access layer switches</PARA></LISTITEM>
<LISTITEM><PARA>Distance VLANs must traverse the network</PARA></LISTITEM>
<LISTITEM><PARA>Spanning tree domain size</PARA></LISTITEM>
</LIST>
<PARA>If routers at the distribution layer become the bottleneck in the network (which means the CPU processing is too intensive), the switch block has grown too large. Also, if too much broadcast or multicast traffic slows down the switches and routers, your switch block has grown too large. </PARA>
<NOTE>A large number of users does not determine whether the switch block is too large; the amount of traffic going across the network does. </NOTE>
</SECTION>
</SECTION>
<SECTION ID="1.7.2"><TITLE>Core Block</TITLE>
<PARA>If you have two or more switch blocks, the Cisco rule of thumb states that you need a <KEYTERM>core block</KEYTERM>. No routing is performed at the core, only transferring of data. It is a pass-through for the switch block, the server block, and the Internet. Figure 1.8 shows a possible core block. </PARA>
<SLUG NUM="1.8">Figure 1.8: The core block [f0108.eps]</SLUG>
<PARA>The core is responsible for transferring data to and from the switch blocks as quickly as possible. You can build a fast core with a frame, packet, or cell (ATM) network technology. The Switching exam is based on an Ethernet core network. </PARA>
<PARA>Typically, you would only have one subnet configured on the core network. However, for redundancy and load balancing, you could have two or more subnets configured. </PARA>
<PARA>Switches can trunk on a certain port or ports. This means that a port on a switch can be a member of more than one VLAN at the same time. However, the distribution layer will handle the routing and trunking for VLANs, and the core is only a pass-through once the routing has been performed. Because of this, core links will not carry multiple subnets per link; the distribution layer links will. </PARA>
<PARA>A Cisco 6500 or 8500 switch is recommended at the core, and even though only one of those switches might be sufficient to handle the traffic, Cisco recommends two switches for redundancy and load balancing. You could consider a 5500 Catalyst switch if you don't need the power of the 6500 or the 8500. </PARA>
<PARA>A <KEYTERM>collapsed core</KEYTERM> is defined as one switch performing both core and distribution layer functions. The collapsed core is typically found in a small network; however, the functions of the core and distribution layer are still distinct. </PARA>
<PARA>Redundant links between the distribution layer and the access layer switches and between each access layer switch may support more than one VLAN. The distribution layer routing is the termination for all ports. </PARA>
<PARA>Figure 1.9 shows a collapsed core network design.</PARA>
<PARA>In a collapsed core network, Spanning Tree Protocol (STP) blocks the redundant links to prevent loops. Hot Standby Routing Protocol (HSRP) can provide redundancy in the distribution layer routing. It can keep core connectivity if the primary routing process fails. </PARA>
<NOTE>HSRP is covered in <NOBR REF="8">Chapter 8</NOBR>. </NOTE>
</SECTION>
<SECTION ID="1.7.2.2"><TITLE>Dual Core</TITLE>
<PARA>If you have more than two switch blocks and need redundant connections between the core and distribution layer, you need to create a dual core. Figure 1.10 shows a possible dual core configuration. Each connection would be a separate subnet. </PARA>
<PARA>In Figure 1.10, you can see that each switch block is redundantly connected to each of the two core blocks. The distribution layer routers already have links to each subnet in their routing tables, provided by the layer 3 routing protocols. If a failure on a core switch takes place, convergence time will not be an issue. HSRP can be used to provide quick cutover between the cores. Notice that there is no link between the two core switches, so STP is not needed in the core. </PARA>
</SECTION>
<SECTION ID="1.7.2.3"><TITLE>Core Size</TITLE>
<PARA>Routing protocols are the main factor in determining the size of your core. This is because routers, or any layer 3 devices, isolate the core. Routers send updates to other routers, and as the network grows, these updates grow as well, so it takes longer for all the routers to converge, or update. Because at least one of the routers will connect to the Internet, it's possible that there will be even more updates throughout the internetwork. </PARA>
<PARA>The routing protocol dictates the size of the distribution layer devices that can communicate to the core. Table 1.2 shows a few of the more popular routing protocols and the number of blocks each routing protocol supports. Remember that this includes all blocks, including server, mainframe, and WAN.</PARA>
<TABLE NUM="1.2" TABLEENTRYNUM="4">
<TABLETITLE>Blocks Supported by Routing Protocol</TABLETITLE>
<PARA>Typically, layer 2 switches are in the remote closets and represent the access layer, the layer where users gain access to the internetwork. Ethernet switched networks scale well in this environment, where the layer 2 switches then connect into a larger, more robust layer 3 switch representing the distribution layer. The layer 3 device is then connected into a layer 2 device representing the core. Because routing is not necessarily recommended in a classic design model at the core, the model then looks like Table 1.3.</PARA>
<SECTION ID="1.7.3.1"><TITLE>Spanning Tree Protocol (STP)</TITLE>
<PARA><NOBR REF="4">Chapters 4</NOBR> avd <NOBR REF="5">5</NOBR> details the Spanning Tree Protocol (STP), but some discussion is necessary here. STP is used by layer 2 bridges to stop network loops in networks that have more than one physical link to the same network. There is a limit to the number of links in a layer 2 switched backbone that needs to be taken into account. As you increase the number of core switches, the problem becomes that the number of links to distribution links must increase also, for redundancy reasons. If the core is running the Spanning Tree Protocol, then it can compromise the high-performance connectivity between switch blocks. The best design on the core is to have two switches without STP running. You can do this only by having a core without links between the core switches. This is demonstrated in Figure 1.11. </PARA>
<SLUG NUM="1.11">Figure 1.11: Layer 2 backbone scaling without STP [f0111.eps]</SLUG>
<PARA>Figure 1.11 shows redundancy between the core and distribution layer without spanning tree loops. This is accomplished by not having the two core switches linked together. However, each distribution layer 3 switch has a connection to each core switch. This means that each layer 3 switch has two equal-cost paths to every other router in the campus network.</PARA>
<PARA>As discussed in "Scaling Layer 2 Backbones," you'll typically find layer 2 switches connecting to layer 3 switches, which connect to the core with the layer 2 switches. However, it is possible that some networks might have layer 2/layer 3/layer 3 designs (layer 2 connecting to layer 3 connecting to layer 3). But this is not cheap, even if you're using someone else's money. There is always some type of network budget, and you need to have good reason to spend the type of money needed to build layer 3 switches into the core. </PARA>
<PARA>There are three reasons you would implement layer 3 switches in the core:</PARA>
<PARA>If you have only layer 2 devices at the core layer, the STP will be used to stop network loops if there is more than one connection between core devices. The STP has a convergence time of over 50 seconds, and if the network is large, this can cause an enormous amount of problems if it has just one link failure. </PARA>
<PARA>STP is not implemented in the core if you have layer 3 devices. Routing protocols, which have a much faster convergence time than STP, are used to maintain the network. </PARA>
<PARA>If you provide layer 3 devices in the core, the routing protocols can load balance across multiple equal-cost links. This is not possible with layer 3 devices only at the distribution layer because you would have to selectively choose the spanning tree root to utilize more than one path. </PARA>
<PARA>Because routing is typically performed in the distribution layer devices, each distribution layer device must have reachability information about each of the other distribution layer devices. These layer 3 devices use routing protocols to maintain the state and reachability information about neighbor routers. This means that each distribution device becomes a peer with every other distribution layer device, and scalability becomes an issue because every device has to keep information for every other device. </PARA>
<PARA>If your layer 3 devices are located in the core, you can create a hierarchy, and the distribution layer devices will no longer be peer to each other's distribution device. This is typical in an environment in which there are more than 100 switch blocks. </PARA>
</SECTION>
</SECTION>
</SECTION>
<SECTION ID="1.8"><TITLE>Summary</TITLE>
<PARA><DROPCAP>I</DROPCAP>n this chapter, you learned about switches and the different models available from Cisco. It is imperative that you understand the different models and what they are used for in the Cisco hierarchical design. </PARA>
<PARA>The past and future requirements of campus internetworks are an important part of your studies for your Cisco Switching exam. We discussed the current campus designs as well as how to implement FastEthernet, Gigabit Ethernet, Fast EtherChannel, and Multi-Layer Switching (MLS) in the emerging campus designs. </PARA>
<PARA>We also discussed the differences between layer 2, layer 3, and layer 4 switching technologies. You learned about the Cisco three-layer model and the different catalyst switches that can be implemented at each layer of the Cisco model. </PARA>
<PARA>The chapter ended with a discussion of the switch and core blocks, which are based on the Cisco three-layer model, and how to design networks based on this model.</PARA>
<!-- <PARA>In the following table, the first column contains definitions of different types of switching. Fill in the second column with the number or numbers of the correct switching technology.</PARA>
<TESTBLOCK><QUESTIONBLOCK><QUESTION>Which device is used to break up broadcast domains?</QUESTION></QUESTIONBLOCK></TESTBLOCK>
<TESTBLOCK><QUESTIONBLOCK><QUESTION>Which device is used to break up collision domains?</QUESTION></QUESTIONBLOCK></TESTBLOCK>
<TESTBLOCK><QUESTIONBLOCK><QUESTION>What are the four methods of encapsulating user data through the OSI model?</QUESTION></QUESTIONBLOCK></TESTBLOCK>
<TESTBLOCK><QUESTIONBLOCK><QUESTION>Which Cisco layer is used to pass traffic as quickly as possible?</QUESTION></QUESTIONBLOCK></TESTBLOCK>
<TESTBLOCK><QUESTIONBLOCK><QUESTION>What is the Protocol Data Unit (PDU) used at the Transport layer?</QUESTION></QUESTIONBLOCK></TESTBLOCK>
<TESTBLOCK><QUESTIONBLOCK><QUESTION>What is the PDU used at the Network layer?</QUESTION></QUESTIONBLOCK></TESTBLOCK>
<TESTBLOCK><QUESTIONBLOCK><QUESTION>Which Cisco layer is used to break up collision domains?</QUESTION></QUESTIONBLOCK></TESTBLOCK>
<TESTBLOCK><QUESTIONBLOCK><QUESTION>Which OSI layer creates frames by encapsulating packets with a header and trailer?</QUESTION></QUESTIONBLOCK></TESTBLOCK>
<TESTBLOCK><QUESTIONBLOCK><QUESTION>What devices can provide multicast control and security?</QUESTION></QUESTIONBLOCK></TESTBLOCK>
<TESTBLOCK><QUESTIONBLOCK><QUESTION>What breaks up broadcast domains in a layer 2 switched network?</QUESTION></QUESTIONBLOCK></TESTBLOCK>
</TESTDATA>-->
<SLUG NONUM="w3"/>
</TESTSECTION>
</SECTION>
<SECTION ID="1.10"><TITLE>Answers to Written Lab</TITLE>
<SECTION ID="1.10.1" POS="1"><TITLE>Answers to Lab 1.1</TITLE>
<TABULARDATA>
<TABULARBODY>
<TABULARROW>
<TABULARENTRY>Provides end users with access to the network</TABULARENTRY>
<TABULARENTRY>1</TABULARENTRY>
</TABULARROW>
<TABULARROW>
<TABULARENTRY>Communicates between the switch blocks and to the enterprise servers</TABULARENTRY>
<TABULARENTRY>3</TABULARENTRY>
</TABULARROW>
<TABULARROW>
<TABULARENTRY>Switches traffic as quickly as possible</TABULARENTRY>
<TABULARENTRY>3</TABULARENTRY>
</TABULARROW>
</TABULARBODY>
</TABULARDATA>
</SECTION>
<TESTSECTION ID="1.10.3"><TITLE>Answers to Lab 1.3</TITLE>
<TESTDATA>
<TESTBLOCK><ANSWERBLOCK><ANSWER>A layer 3 device, usually a router. Layer 2 devices do not break up broadcast domains.</ANSWER></ANSWERBLOCK></TESTBLOCK>
<TESTBLOCK><ANSWERBLOCK><ANSWER>A layer 2 device, typically a switch. Although routers break up both collision domains and broadcast domains, layer 2 switches are primarily used to break up collision domains. </ANSWER></ANSWERBLOCK></TESTBLOCK>
<TESTBLOCK><ANSWERBLOCK><ANSWER>Segment, packet, frame, bits. It is important to understand the question. This question asked for the encapsulation methods, which means how data is encapsulated as user data goes from the Application layer down to the Physical layer.</ANSWER></ANSWERBLOCK></TESTBLOCK>
<TESTBLOCK><ANSWERBLOCK><ANSWER>The core layer should have no packet manipulation, if possible.</ANSWER></ANSWERBLOCK></TESTBLOCK>
<TESTBLOCK><ANSWERBLOCK><ANSWER>Port or socket. TCP uses port numbers. IPX uses sockets. </ANSWER></ANSWERBLOCK></TESTBLOCK>
<TESTBLOCK><ANSWERBLOCK><ANSWER>A packet or datagram is the PDU used at the Network layer.</ANSWER></ANSWERBLOCK></TESTBLOCK>
<TESTBLOCK><ANSWERBLOCK><ANSWER>Access layer. Remember, the distribution layer is used to break up broadcast domains and the access layer is used to break up collision domains. </ANSWER></ANSWERBLOCK></TESTBLOCK>
<TESTBLOCK><ANSWERBLOCK><ANSWER>Data Link. Data is encapsulated with header and trailer information at the Data Link layer. </ANSWER></ANSWERBLOCK></TESTBLOCK>
<TESTBLOCK><ANSWERBLOCK><ANSWER>Routers or layer 3 devices are the only devices that control broadcasts and provide packet filtering.</ANSWER></ANSWERBLOCK></TESTBLOCK>
<TESTBLOCK><ANSWERBLOCK><ANSWER>Virtual LANs. These are configured on the layer 2 switches, and layer 3 devices provide a means for moving traffic between the VLANs. </ANSWER></ANSWERBLOCK></TESTBLOCK>