
Configuring Multichassis Multilink PPP

This chapter describes how to configure Multichassis Multilink PPP. Prior to Release 11.2, Cisco IOS supported Multilink PPP. Beginning with Release 11.2, Cisco IOS software also supports Multichassis Multilink PPP (MMP).

Multilink PPP provides the capability of splitting and recombining packets to a single end-system across a logical pipe (also called a bundle) formed by multiple links. Multilink PPP provides bandwidth on demand and reduces transmission latency across WAN links.

MMP, on the other hand, provides the additional capability for links to terminate at multiple routers with different remote addresses. MMP can also handle both analog and digital traffic.

This feature is intended for situations with large pools of dial-in users, in which a single chassis cannot provide enough dial-in ports. It allows companies to provide a single dialup number to their users and to apply the same solution to analog and digital calls. For example, it allows an Internet service provider to allocate a single ISDN rotary number to several ISDN PRIs across several routers.

For a complete description of the MMP commands in this chapter, refer to the "Multichassis Multilink PPP Commands" chapter of the Dial Solutions Command Reference. To locate documentation of other commands that appear in this chapter, use the command reference master index or search online.

MMP is supported on the Cisco 7500, 4500, and 2500 series platforms and on synchronous serial, asynchronous serial, ISDN BRI, ISDN PRI, and dialer interfaces.

MMP does not require reconfiguration of telephone company switches.

Understand Multichassis Multilink PPP

Routers or access servers are configured to belong to groups of peers, called stack groups. All members of the stack group are peers; stack groups do not need a permanent lead router. Any stack group member can answer calls coming from a single access number, which is usually an ISDN PRI hunt group. Calls can come in from remote user devices, such as routers, modems, ISDN terminal adapters, or PC cards.

Once a connection is established with one member of a stack group, that member owns the call. If a second call comes in from the same client and a different router answers the call, the router establishes a tunnel and forwards all packets belonging to the call to the router that owns the call. Establishing a tunnel and forwarding calls through it to the router that owns the call is sometimes called projecting the PPP link to the call master.

If a more powerful router is available, it can be configured as a member of the stack group and the other stack group members can establish tunnels and forward all calls to it. In such a case, the other stack group members are just answering calls and forwarding traffic to the more powerful offload router.
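In configuration terms, the offload arrangement comes down to the stack group bidding commands. The following minimal sketch assumes hypothetical names and addresses (group stackgrp, members answera and offload1); the sgbp seed-bid commands are shown in the "Multichassis Multilink PPP with Offload Server" example later in this chapter:

! On the powerful offload router:
sgbp group stackgrp
sgbp member answera 10.1.1.1
sgbp seed-bid offload
!
! On each ordinary answering router:
sgbp group stackgrp
sgbp member offload1 10.1.1.10
sgbp seed-bid default

Because the offload router always submits the highest bid, every bundle is projected to it, and the answering routers simply forward raw PPP data through their tunnels.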


Note High-latency WAN lines between stack group members can make stack group operation inefficient.

MMP call handling, bidding, and Layer 2 forwarding operations in the stack group proceed as follows, as shown in Figure 115:

    1. When the first call comes in to the stack group, Router A answers.

    2. In the bidding, Router A wins because it already has the call. Router A becomes the call-master for that session with the remote device. (Router A might also be called the host to the master bundle interface.)

    3. When the remote device that initiated the call needs more bandwidth, it makes a second Multilink PPP call to the group.

    4. When the second call comes in, Router D answers it and informs the stack group. Router A wins the bidding because it is already handling the session with that remote device.

    5. Router D establishes a tunnel to Router A, and forwards the raw PPP data to Router A.

    6. Router A reassembles and resequences the packets.

    7. If more calls come in to Router D and they too belong to Router A, the tunnel between A and D enlarges to handle the added traffic. Router D does not establish an additional tunnel to A.

    8. If more calls come in and are answered by any other router, that router also establishes a tunnel to A and forwards the raw PPP data.

    9. The reassembled data is passed on to the corporate network as if it had all come through one physical link.


Figure 115: Typical Multichassis Multilink PPP Scenario


In contrast to the previous figure, Figure 116 features an offload router. Access servers that belong to a stack group answer calls, establish tunnels, and forward calls to a Cisco 4700 router that wins the bidding and is the call-master for all the calls. The Cisco 4700 reassembles and resequences all the packets coming in through the stack group.


Figure 116: Multichassis Multilink PPP with an Offload Router as a Stack Group Member



Note You can build stack groups using different access server, switching, and router platforms. However, universal access servers such as the Cisco AS5200 should not be combined with ISDN-only access servers such as the 4x00 platform. Because calls from the central office are allocated in an arbitrary way, this combination could result in an analog call being delivered to a digital-only access server, which would not be able to handle the call.

Requirements

MMP support on a group of routers requires that each router in the group be configured as described in the following sections.

Configure Multichassis Multilink PPP

To configure MMP, perform the tasks in the following sections, in the order listed:

Configure the Stack Group and Identify Members

To configure the stack group on the router, use the following commands beginning in global configuration mode:
Step 1. sgbp group group-name

Create the stack group and assign this router to it.

Step 2. sgbp member peer-name [peer-ip-address]

Specify a peer member of the stack group. Repeat this step for each additional stack group peer.


Note Only one stack group can be configured per access server or router.
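For example, a three-member stack group might be configured on one member as follows (the group name stackq and peer names systemb and systemc match the examples later in this chapter; the addresses are illustrative):

sgbp group stackq
sgbp member systemb 1.1.1.2
sgbp member systemc 1.1.1.3

Each member names the same stack group and lists the other members as its peers.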

Configure a Virtual Template and Create a Virtual Template Interface

You need to configure a virtual template for MMP when asynchronous or synchronous serial interfaces are used, but dialers are not defined. When dialers are configured on the physical interfaces, do not specify a virtual template interface.

To configure a virtual template for any non-dialer interfaces, use the following commands beginning in global configuration mode:
Step 1. multilink virtual-template number

Define a virtual template for the stack group. This step is not required if ISDN interfaces or other dialers are configured and used by the physical interfaces.

Step 2. ip local pool default ip-address

Specify an IP address pool by using any pooling mechanism, for example, IP local pooling or DHCP pooling.

Step 3. interface virtual-template number

Create a virtual template interface, and enter interface configuration mode. This step is not required if ISDN interfaces or other dialers are configured and used by the physical interfaces.

Step 4. ip unnumbered ethernet 0

Specify unnumbered IP.

Step 5. encapsulation ppp

Enable PPP encapsulation on the virtual template interface.

Step 6. ppp multilink

Enable Multilink PPP on the virtual template interface.

Step 7. ppp authentication chap

Enable PPP authentication on the virtual template interface.

If dialers are or will be configured on the physical interfaces, the ip unnumbered command, mentioned in Step 4, will be used in configuring the dialer interface. For examples that show MMP configured with and without dialers, see the "MMP Configuration Examples" at the end of this chapter.
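For a non-dialer setup, the steps above combine into a configuration like the following sketch (the template number and pool address range are hypothetical):

multilink virtual-template 1
ip local pool default 10.1.1.100 10.1.1.150
!
interface virtual-template 1
 ip unnumbered ethernet 0
 encapsulation ppp
 ppp multilink
 ppp authentication chap

Because the template uses ip unnumbered, no specific IP address is placed on the virtual template interface itself.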


Note Never define a specific IP address on the virtual template, because projected virtual access interfaces are always cloned from the virtual template interface. If a subsequent PPP link is projected to a stack member that already has a cloned, active virtual access interface, the two virtual interfaces will have identical IP addresses, and IP will erroneously route between them.

For more information about address pooling, see the "Configuring Media-Independent PPP and Multilink PPP" chapter in this manual.

Monitor and Maintain MMP Virtual Interfaces

To monitor and maintain virtual interfaces, you can use any of the following commands in EXEC mode:
show ppp multilink

Display Multilink PPP and MMP bundle information.

show sgbp

Display the status of the stack group members.

show sgbp queries

Display the current seed bid value.
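For example, to verify stack group operation from EXEC mode on any member, you might enter:

show sgbp
show sgbp queries
show ppp multilink

These are display-only commands and can be run safely on a live stack member.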

MMP Configuration Examples

The examples in this section show MMP configuration without and with dialers.

Multichassis Multilink PPP Using PRI but no Dialers

The following example shows the configuration of MMP when no dialers are involved. Comments in the configuration discuss the commands. Variations are shown for a Cisco AS5200 access server or Cisco 4000 series router, and for an E1 controller.

sgbp group stackq
sgbp member systemb 1.1.1.2
sgbp member systemc 1.1.1.3
username stackq password therock
! First make sure the multilink virtual template number is defined globally on 
! each router that is a member of the stack group.
multilink virtual-template 1
! If you have not configured any dialer interfaces for the physical interfaces in
! question (PRI, BRI, async, sync serial), you can define a virtual template.
interface virtual-template 1
 ip unnumbered e0
 ppp authentication chap
 ppp multilink 
! Never define a specific IP address on the virtual template because projected
! virtual access interfaces are always cloned from the virtual template interface.
! If a subsequent PPP link also gets projected to a stack member with a virtual
! access interface already cloned and active, we will have identical IP addresses
! on the two virtual interfaces. IP will erroneously route between them.
!
! On a Cisco AS5200 or 4x00 platform:
!
! On a T1 controller:
controller T1 0
 framing esf
 linecode b8zs
 pri-group timeslots 1-24
!
interface Serial0:23
 no ip address
 encapsulation ppp
 no ip route-cache
 ppp authentication chap
 ppp multilink
!
! On an E1 controller:
controller E1 0
 framing crc4
 linecode hdb3
 pri-group timeslots 1-31
!
interface Serial0:15
 no ip address
 encapsulation ppp
 no ip route-cache
 ppp authentication chap
 ppp multilink

Multichassis Multilink PPP with Dialers

When dialers are configured on the physical interfaces, or when the interface itself is a dialer, do not specify a virtual template interface. For dialers, you need to define only the stack group name, the common password, and the stack group members on each stack member. No virtual template interface is defined at all.

Only the PPP commands in dialer interface configuration are applied to the bundle interface. Subsequent projected PPP links are also cloned with the PPP commands from the dialer interface.

This section includes the following examples:

MMP with Explicitly Defined Dialer

MMP with ISDN PRI but no Explicitly Defined Dialer

MMP with Explicitly Defined Dialer

The following example includes a dialer that is explicitly specified by the interface dialer command and configured by the commands that immediately follow:

sgbp group stackq
sgbp member systemb 1.1.1.2
sgbp member systemc 1.1.1.3
username stackq password therock
interface dialer 1
 ip unnumbered e0
 dialer map .....
 encapsulation ppp
 ppp authentication chap
 dialer-group 1
 ppp multilink
!
! on a T1 controller
!
controller T1 0
 framing esf
 linecode b8zs
 pri-group timeslots 1-24
interface Serial0:23
 no ip address
 encapsulation ppp
 dialer in-band
 dialer rotary 1
 dialer-group 1
!
! or on an E1 Controller
! 
controller E1 0
 framing crc4 
 linecode hdb3
 pri-group timeslots 1-31
interface Serial0:15
 no ip address
 encapsulation ppp
 no ip route-cache
 ppp authentication chap
 ppp multilink

MMP with ISDN PRI but no Explicitly Defined Dialer

ISDN PRIs and BRIs are dialer interfaces by default. That is, a PRI configured without an explicit interface dialer command is still a dialer interface. The following example configures an ISDN PRI. The D-channel configuration on serial interface 0:23 is applied to all the B channels. MMP is enabled, but no virtual template interface needs to be defined.

sgbp group stackq
sgbp member systemb 1.1.1.2
sgbp member systemc 1.1.1.3
username stackq password therock
isdn switch-type primary-4ess
controller T1 0
 framing esf
 linecode b8zs
 pri-group timeslots 1-23
! 
interface Serial0:23
 ip unnumbered e0
 dialer map .....
 encapsulation ppp
 ppp authentication chap
 dialer-group 1
 dialer rotary 1
 ppp multilink

Multichassis Multilink PPP with Offload Server

The following example shows a virtual template interface for a system being configured as an offload server (via the sgbp seed-bid offload command). All other stack group members must be configured with the sgbp seed-bid default command (if you do not enter any sgbp seed-bid command, this is the default).

multilink virtual-template 1
sgbp group stackq
sgbp member systemb 1.1.1.2
sgbp member systemc 1.1.1.3
sgbp seed-bid offload
username stackq password therock
!
interface virtual-template 1
 ip unnumbered e0
 ppp authentication chap
 ppp multilink

Copyright © 1989-1998 Cisco Systems, Inc.