Internet Architecture and Backbone

The Internet is a packet-switching network with a distributed mesh topology. Information travels in packets across a network that consists of multiple paths to a destination. Networks are interconnected with routers, which forward packets along paths to their destinations. The mesh topology provides redundant links. If a link fails, packets are routed around the link along different paths.
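
The effect of this redundancy can be shown with a small routing sketch. The following Python fragment (router names and links are invented for illustration) finds the fewest-hop path across a tiny mesh, removes a link to simulate a failure, and then finds a new path around it.

    from collections import deque

    # A small mesh of routers; each link appears in both directions.
    mesh = {
        "A": ["B", "C"],
        "B": ["A", "C", "D"],
        "C": ["A", "B", "D"],
        "D": ["B", "C"],
    }

    def shortest_path(topology, src, dst):
        """Breadth-first search: fewest-hop path from src to dst, or None."""
        queue = deque([[src]])
        visited = {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in topology[path[-1]]:
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(shortest_path(mesh, "A", "D"))   # ['A', 'B', 'D']

    # Simulate a failure of the B-D link; traffic reroutes through C.
    mesh["B"].remove("D")
    mesh["D"].remove("B")
    print(shortest_path(mesh, "A", "D"))   # ['A', 'C', 'D']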

The Internet is sometimes called a backbone network, but this is misleading since the Internet is actually many backbones that are interconnected to form a mesh. The term "backbone" comes from an early research network called the NSFNET, which was funded by the U.S. National Science Foundation. This network created the hierarchical model that is still used today, in which local service providers connect to regional services, which, in turn, connect to national or global service providers. Today, many backbones exist and they are interconnected so that traffic can flow from any host to any other host. In addition, many regional networks directly connect with one another, bypassing the backbone networks.

The networks of the Internet are managed by large independent service providers such as MCI Worldcom, Sprint, Earthlink, Cable and Wireless, and others. There are NSPs (network service providers), ISPs (Internet service providers), and exchange points. NSPs build national or global networks and sell bandwidth to regional NSPs. Regional NSPs then resell bandwidth to local ISPs. Local ISPs sell and manage services to end users.

Internet Topologies

The history of the early Internet is outlined under the topic "Internet." This topic takes up the story in the mid-1980s with the creation of the NSFNET. As mentioned, the NSFNET redefined the Internet's early architecture and operation, and defined the hierarchy of networks and service providers that still applies today.

The NSFNET-connected sites included supercomputer centers, research centers, and universities, all of which connected on a no-fee basis. At the time, the network was considered a high-speed backbone. It was initially deployed as a series of 56-Kbit/sec links, but by 1991 it was running over T3 links with T1 on-ramps. Individual organizations typically connected at 28.8 or 56 Kbits/sec.
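
To put these link speeds in perspective, the following back-of-the-envelope calculation compares transfer times for a 1-megabyte file, assuming the standard line rates of 1.544 Mbits/sec for T1 and 44.736 Mbits/sec for T3 and ignoring protocol overhead.

    # Nominal line rates in bits per second (protocol overhead ignored).
    rates = {
        "56 Kbit/s link": 56_000,
        "T1 (1.544 Mbit/s)": 1_544_000,
        "T3 (44.736 Mbit/s)": 44_736_000,
    }

    payload_bits = 1_000_000 * 8   # a 1-megabyte file

    # Prints roughly 142.9, 5.2, and 0.2 seconds, respectively.
    for name, bps in rates.items():
        print(f"{name:20s} {payload_bits / bps:8.1f} seconds")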

As mentioned, the network was hierarchical. There was the top-level backbone to which regional networks connected. Local networks then connected to the regional networks via a relatively short link. The backbone network and the regional networks were managed by different authorities and provided bandwidth and transport services for local networks. Bandwidth was resold.

The ISP business model was developed by the early network providers and service providers. Entrepreneurs could set up facilities in their local area and purchase bandwidth, routing, and transport services from higher-level NSPs. The local ISP would then resell those services to end users. Many ISPs were started by one person who had extra bandwidth to sell. A typical ISP installed dial-up facilities (modems, modem banks, concentrators, access and authentication servers, and so on) and then metered and billed users for service.

Internet Exchanges

The NSFNET backbone concept worked well. Similar backbones had been created by other U.S. federal agencies, including MILNET (the military network), NSI (NASA Science Internet), and ESnet (Energy Sciences Network). Obviously, there was a need to exchange traffic among these networks, so two interconnection points called FIXes (Federal Internet Exchanges) were built. FIX-West was located in the Bay Area, and FIX-East was located near Washington, D.C.

The FIXes were Internet exchanges. The participating agencies used the exchanges to peer with one another. Peering is a relationship in which different network authorities agree to exchange route advertisements and traffic. Each agency had a router at the FIX locations that exchanged routing information and traffic with the other agencies' routers. The traffic flowing among these routers was constrained by the policies of the individual agencies, as well as a federal AUP (acceptable use policy) that limited non-federal-agency traffic.

By interconnecting different backbones, the Internet became a mesh network rather than a single backbone network. At this point, any reference to a backbone means just one of the major trunks that provide transit services between mid-level networks. The hierarchical structure of the NSFNET, with its top-level, mid-level, and feeder networks, still remains, but there are multiple overlapping backbones, as shown in Figure I-4. Note the following:

  • Major backbones interconnect and exchange traffic at Internet exchange sites.

  • Regional networks feed into the backbones via Internet exchange sites or direct connections.

  • Some networks exchange traffic directly via private peering links that bypass the backbone networks.

Internet Exchanges and NAPs (Network Access Points)

By 1993, the NSF had decided to defund the NSFNET and do away with the AUP in order to promote commercialization of the Internet. Many commercial Internet networks came online during this time. In fact, the regional networks that were originally supported by the NSF turned into commercial service providers, including UUNet, PSINet, BBN, Intermedia, Netcom, and others.

The NSF's plan for privatization included the creation of NAPs (network access points), which are Internet exchanges with open access policies that support commercial and international traffic. One can think of the NAPs as being like airports that serve many different airlines. The airlines lease airport space and use its facilities. Likewise, NSPs lease space at NAPs and use their switching facilities to exchange traffic with other parts of the Internet.

Part of the NSF's strategy was that all NSPs that received government funding had to connect to all the NAPs. In 1993, the NSF awarded NAP contracts to MFS (Metropolitan Fiber Systems) Communications for a NAP in Washington, D.C., Ameritech for a NAP in Chicago, Pacific Bell for a NAP in San Francisco, and Sprint for a NAP in New York. MFS already operated MAEs (metropolitan area exchanges) in Washington, D.C. (MAE East) and in California's Silicon Valley (MAE West). A MAE is a fiber-optic loop covering a metropolitan area that provides a connection point for local service providers and businesses.

A NAP is a physical facility with equipment racks, power supplies, cable trays, and facilities for connecting to outside communication systems. The NAP operator installs switching equipment. Originally, NAPs used FDDI and switched Ethernet, but ATM switches or Gigabit Ethernet switches are common today. NSPs install their own routers at the NAP and connect them to the switching facilities, as shown in Figure I-5. Thus, traffic originating from an ISP crosses the NSP's router into the NAP's switching facility to routers owned by other NSPs that are located at the NAP. Refer to Geoff Huston's paper called "Interconnection, Peering, and Settlements" at the Web address listed on the related entries page for a complete discussion of NAPs and peering.

Most NAPs now consist of core ATM switches surrounded by routers. Traffic is exchanged across ATM PVCs (permanent virtual circuits). Usually, a NAP provides a default, fully meshed set of PVCs that provides circuits to every other NSP router located at the NAP, but an NSP can remove a PVC to block traffic from a particular NSP. Larger NSPs, for example, may not want to peer with smaller NSPs because there is no equal exchange of traffic. A rule of thumb is that NSPs with a presence at every NAP peer with one another on an equal basis.
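
A rough way to picture the default configuration is as one PVC per pair of NSP routers at the NAP, with individual PVCs removed when an NSP declines to peer. The sketch below (NSP names are invented) models that arrangement.

    from itertools import combinations

    # Hypothetical NSP routers present at a NAP.
    nsps = ["NSP-A", "NSP-B", "NSP-C", "NSP-D"]

    # Default: a full mesh of ATM PVCs, one per pair of routers.
    pvcs = {frozenset(pair) for pair in combinations(nsps, 2)}
    print(len(pvcs))   # 6 PVCs for 4 routers (n*(n-1)/2)

    # NSP-A declines to peer with NSP-D, so that PVC is removed.
    pvcs.discard(frozenset({"NSP-A", "NSP-D"}))

    def can_exchange(a, b):
        """True if a PVC (a direct peering path at the NAP) still exists."""
        return frozenset({a, b}) in pvcs

    print(can_exchange("NSP-A", "NSP-D"))   # False
    print(can_exchange("NSP-B", "NSP-D"))   # True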

NAP operators do not establish peering agreements between NSPs; they only provide the facilities where peering can take place. Peering agreements are bilateral agreements negotiated between NSPs that define how they will exchange traffic at the NAPs. In addition, all IP datagram routing is handled by the NSPs' own equipment; the NAP only provides the switching equipment that packets traverse after being routed.

The NSF also funded the creation of the Routing Arbiter service, which provided routing coordination in the form of a route server and a routing arbiter database (RADB). Route servers would handle routing tasks at NAPs while the RADB generated the route server configuration files. RADB was part of a distributed set of databases known as the Internet Routing Registry, a public repository of announced routes and routing policy in a common format. NSPs use information in the registry to configure their backbone routers. See "Routing Registries."
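
The idea behind the registry and the route server configurations can be sketched as follows: registry entries record which prefixes each AS originates, and filters built from those entries determine which announcements a router (or route server) should accept. This is a simplified illustration with invented prefixes and AS numbers, not the actual RPSL registry format or route-server software.

    # Simplified registry entries: which prefixes each AS has registered.
    # (Prefixes and AS numbers are invented for illustration.)
    registry = [
        {"route": "198.51.100.0/24", "origin": "AS64500"},
        {"route": "203.0.113.0/24",  "origin": "AS64500"},
        {"route": "192.0.2.0/24",    "origin": "AS64501"},
    ]

    def prefix_filter(asn):
        """Prefixes that routers should accept from the given AS."""
        return [entry["route"] for entry in registry if entry["origin"] == asn]

    def accept(announced_prefix, asn):
        """Accept an announcement only if the registry lists it for that AS."""
        return announced_prefix in prefix_filter(asn)

    print(accept("198.51.100.0/24", "AS64500"))   # True  - registered route
    print(accept("192.0.2.0/24", "AS64500"))      # False - registered to AS64501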

You can learn more about the NAPs by referring to the following Web sites.

  • Worldcom MAE information site: http://www.mae.net/

  • MFS Communications MAE information: http://www.mfst.com/MAE/doc/mae-info.html

  • The Ameritech Chicago NAP home page: http://nap.aads.net/main.html

  • PAIX.net, a neutral Internet exchange: http://www.paix.net/

  • Equinix IBX (Internet business exchange): http://www.equinix.com/

  • Above.net ISX (Internet service exchange): http://www.above.net/

Today, Internet exchanges are only one part of the Internet architecture. Many NSPs establish private peering arrangements, as previously mentioned. A private connection is a direct physical link that avoids forwarding traffic through the NAP's switching facility, which is often overburdened. NSPs create private connections in two ways. One method is to run a cable between their respective routers at the NAP facility. Another, more costly, approach is to lay cable or lease lines between their own facilities.

Internap Network Services Corporation provides an Internet exchange service designed to maximize performance. Its proprietary Assimilator technology provides intelligent routing and route management to extend and enhance BGP4 routing. Assimilator allows the P-NAP (private network access point) to make intelligent routing decisions, such as choosing the fastest of multiple backbones when the destination is multihomed. Internap customer packets are sent immediately to the correct Internet backbone, rather than to a randomly chosen public or private peering point.
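
The general idea of performance-based backbone selection can be sketched as follows. This is only an illustration of the concept, with invented backbone names and latency figures; it is not Internap's actual algorithm.

    # Measured performance toward a destination over each backbone
    # (figures are invented for illustration).
    measured_latency_ms = {
        "backbone-1": 38.0,
        "backbone-2": 72.5,
        "backbone-3": 41.2,
    }

    # The multihomed destination is reachable via these backbones.
    reachable_via = ["backbone-1", "backbone-3"]

    # Forward traffic over the best-performing reachable backbone.
    best = min(reachable_via, key=measured_latency_ms.get)
    print(f"route via {best}")   # route via backbone-1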

Networks and Autonomous Systems

The many individually managed networks that make up the Internet are called autonomous systems or ASs. An AS is both a management domain and a routing domain. A typical AS is operated by an NSP or ISP. Each AS on the Internet is identified by a number assigned to it by the Internet authorities (now ICANN).

An AS may use one or more interior routing protocols to maintain its internal routing tables. The usual interior routing protocols are OSPF (Open Shortest Path First) and IS-IS (Intermediate System-to-Intermediate System).
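
Both OSPF and IS-IS are link-state protocols: routers flood link-state information and then run a shortest-path-first (Dijkstra) computation over the resulting topology database. Here is a minimal SPF sketch over an invented link-state database.

    import heapq

    # Invented link-state database: router -> {neighbor: link cost}.
    lsdb = {
        "R1": {"R2": 10, "R3": 5},
        "R2": {"R1": 10, "R4": 1},
        "R3": {"R1": 5, "R4": 20},
        "R4": {"R2": 1, "R3": 20},
    }

    def spf(source):
        """Dijkstra shortest-path-first: best-path cost to every router."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            cost, node = heapq.heappop(heap)
            if cost > dist.get(node, float("inf")):
                continue   # stale entry; a cheaper path was already found
            for neighbor, link_cost in lsdb[node].items():
                new_cost = cost + link_cost
                if new_cost < dist.get(neighbor, float("inf")):
                    dist[neighbor] = new_cost
                    heapq.heappush(heap, (new_cost, neighbor))
        return dist

    print(spf("R1"))   # {'R1': 0, 'R2': 10, 'R3': 5, 'R4': 11}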

An exterior routing protocol handles the exchange of routing information among ASs. An AS must present a coherent interior routing plan and a consistent picture of the destinations reachable through the AS. The exterior routing protocol for the Internet is BGP (Border Gateway Protocol). BGP runs in "border routers" that connect ASs with other ASs. A border router at the edge of one AS tells a border router at the edge of another AS about the routes on its internal networks. These routes are advertised as an aggregation of addresses. An analogy is the way that the ZIP code prefix 934xx represents a group of postal areas on the central coast of California. Route aggregation is a way of using the IP address space more efficiently. ISPs can aggregate blocks of addresses and advertise them on the Internet with a single network address. At the same time, ISPs can allocate those addresses any way they like: as single addresses, small groups of addresses, or large blocks distributed to lower-level ISPs.
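
Python's standard ipaddress module can illustrate route aggregation: several contiguous allocations can be advertised upstream as a single aggregate. The prefixes below are documentation ranges used purely as an example.

    import ipaddress

    # Four contiguous /26 customer allocations (example prefixes).
    customer_blocks = [
        ipaddress.ip_network("198.51.100.0/26"),
        ipaddress.ip_network("198.51.100.64/26"),
        ipaddress.ip_network("198.51.100.128/26"),
        ipaddress.ip_network("198.51.100.192/26"),
    ]

    # The ISP advertises a single aggregate route covering all of them.
    aggregate = list(ipaddress.collapse_addresses(customer_blocks))
    print(aggregate)   # [IPv4Network('198.51.100.0/24')]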

See "Autonomous System," "BGP (Border Gateway Protocol)," "CIDR (Classless Inter-Domain Routing)," "Route Aggregation," "Registries on the Internet," "Routing," and "Routing Registries" for more information.

PoPs and Internet Data Centers

A PoP (point of presence) is any facility where customers can connect into a service provider's facilities and gain access to much larger networks. Some PoPs are designed for end-user access, while others are designed to allow ISPs to connect into NSP networks. A PoP is not an Internet-specific entity; the ILECs and CLECs have their own PoPs that house voice and data equipment.

An ISP may be large enough to build its own PoP facility or lease space in an existing PoP where it collocates its equipment. Collocation makes sense, since PoP facilities provide security, backup power, disaster protection, fast Internet connections, Internet exchange switches, Internet Web services, and so on. In some cases, the ISP does not own any equipment, but leases everything from an NSP. Such an ISP is basically in the business of reselling services to end users and supporting those end users.

Over the years, end-user dial-up connection methods have changed, especially with the introduction of 56K modem technology. Up until the mid-1990s, ISPs would install a bank of modems and access servers at their own facilities, and end users would dial in and connect to the ISP's modems. When 56K modem technology came along, full modem speed could be achieved only by trunking calls from the carrier's PoP to the ISP's facility over a digital connection (T1 or T3 lines). See "Modems" for more information. In many cases, ISPs simply collocate their modem pools and access servers at the carrier's facilities in order to avoid expensive leased lines, or lease a bank of modems that is installed at a carrier or service provider facility.

Figure I-6 illustrates typical ISP and NSP facilities. The lower part shows subscribers dialing into the ISP facility across the PSTN. The local ISP trunks its traffic into the regional NSP, which in turn forwards traffic onto Internet backbones or other connections. Note that the lower part of the illustration assumes subscriber access is through the PSTN; ISPs may also support other access methods, such as metro Ethernet and wireless access services.

Internet data centers have become huge facilities that provide collocation and outsourcing services. They provide security, disaster protection, professional services, high-bandwidth connections, and so on. As mentioned, many ISPs are really virtual ISPs that resell services provided by larger carriers instead of investing in their own infrastructure. In this role, the virtual ISP becomes a pure Internet service retailer that acquires new Internet customers, provides help-desk services, and handles billing and customer management. See "ISPs (Internet Service Providers)."

Private companies also use the facilities to host their Web sites and provide Internet access for their remote users via VPNs. See "VPN (Virtual Private Network)."

For a continuation of the topics discussed here, see "Network Access Services," "Network Core Technologies," "Internet Connections," and "Optical Networks."

Here is a list of relevant RFCs:

  • RFC 1093 (The NSFNET Routing Architecture, February 1989)

  • RFC 1136 (Administrative Domains and Routing Domains, a Model for Routing in the Internet, December 1989)

  • RFC 1287 (Towards the Future Internet Architecture, December 1991)

  • RFC 1480 (The US Domain, June 1993)

  • RFC 1862 (Report of the IAB Workshop on Internet Information Infrastructure, November 1995)

  • RFC 1958 (Architectural Principles of the Internet, June 1996)

  • RFC 2901 (Guide to Administrative Procedures of the Internet Infrastructure, August 2000)

  • RFC 2990 (Next Steps for the IP QoS Architecture, November 2000)



Copyright (c) 2001 Tom Sheldon and Big Sur Multimedia.
All rights reserved under Pan American and International copyright conventions.