
US4679186A - Alternate self-routing packet switching node having fault detection capabilities - Google Patents


Info

Publication number: US4679186A
Application number: US06/654,764
Authority: US (United States)
Prior art keywords: signal, stage, nodes, switch nodes, switching
Inventor: Chin-Tau A. Lea
Original assignee: American Telephone and Telegraph Company, AT&T Bell Laboratories
Current assignee: American Telephone and Telegraph Company, AT&T Bell Laboratories; AT&T Corp. (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Legal status: Expired - Lifetime (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Assignment: Bell Telephone Laboratories, Incorporated, a NY corp.; assignor: Lea, Chin-Tau A.
Priority to US06/654,764 (US4679186A)
Priority to GB858523355A (GB8523355D0)
Priority to DE19853533846 (DE3533846A1)
Priority to KR8506993A (KR920008431B1)
Priority to GB08523676A (GB2168221B)
Priority to IT22265/85A (IT1185379B)
Priority to BE0/215634A (BE903317A)
Priority to DE3534355A (DE3534355C2)
Priority to FR8514263A (FR2570914B1)
Priority to JP21120485A (JPH0659052B2)
Priority to CA000491633A (CA1234206A)
Priority to NL8502633A (NL8502633A)
Priority to CH4180/85A (CH669293A5)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/55 Prevention, detection or correction of errors
    • H04L 49/557 Error correction, e.g. fault recovery or fault tolerance
    • H04L 49/15 Interconnection of switching modules
    • H04L 49/25 Routing or path finding in a switch fabric



Abstract

A self-routing switching network interconnecting a plurality of interface controllers for communicating packets and circuit switched information via multiple paths, with provision for diagnostic testing by allowing only single paths between any given set of input and output ports. The network comprises stages of switching nodes that are grouped in pairs. The stages are interconnected by links, and each pair of switching nodes shares the same set of input links from the preceding stage. In normal operation, a pair of nodes determines one of multiple paths through the switching network by one of the pair responding to address information communicated via a link from an interface controller. After establishing a path through a particular node, that node transmits the address information to the next sequential stage. In order to perform maintenance operations, all pairs of nodes within a given stage are controlled by a single input signal which, when present, causes the nodes to cease operating in pairs and to commence singular operation. This results in only one path existing through the stage for a given set of input and output ports.

Description

CROSS-REFERENCE TO RELATED APPLICATION
Concurrently filed herewith and assigned to the same assignee as this application are:
C. A. Lea, "Multiple Paths In A Self-Routing Packet And Circuit Switching Network," Ser. No. 654,765; and
C. A. Lea and W. A. Montgomery, "A Self-Routing Packet Switching Network With Intrastage Packet Communication," Ser. No. 654,763.
TECHNICAL FIELD
This invention relates to a packet switching architecture for the packet switching of voice and data signals. The invention specifically pertains to a self-routing switching node responsive to the transmission of address information and the availability of routes within a packet switching network to select one of a multitude of paths for the communication of a packet through the packet switching network and further responsive to a decouple signal to select only a predetermined one of the paths in response to the address information.
BACKGROUND OF THE INVENTION
Self-routing packet switching networks such as those using unbuffered banyan switching nodes communicate packets through the switching network on the basis of address information transmitted through the switching network prior to the transmission of the packet. Such a switching network has only one unique path between each input and output pair of the network. A problem with the existence of only one unique route is the effect of unbalanced traffic conditions on the network and failures of switching nodes within the network. The result of unbalanced traffic conditions or a failing node is that it is not possible to route a packet between a given input and output pair of the network. From a maintenance point of view, however, the existence of only one unique path between each input and output pair results in ease of switch node failure detection, since it is possible to send a test packet along any selected path within the network.
One known method for alleviating the reliability and traffic problems in a self-routing network is discussed in the report entitled, "Development of a Voice Funnel System," Bolt, Beranek, and Newman, Inc., Report No. 4098, Aug. 1979, pages III-29 through III-76, which discloses the use of an extra stage of banyan switching nodes at the input of a self-routing network to resolve the previously mentioned problems. The report proposes that this extra stage be identical to other stages of the network and be utilized by adding an extra bit of addressing to the address field of each packet being routed through the switching network. This extra stage of switching would precede the first routing stage of the network. The extra address bit would be controlled by hardware/software external to the switching network and would determine the route through the switching network. The hardware/software would use this bit so as to avoid a node which was failing or experiencing heavy traffic.
Another method for alleviating these reliability and traffic problems is disclosed in the co-pending application of C. A. Lea, "Multiple Paths in a Self-Routing Packet Switching Network," Ser. No. 654,765. This system utilizes switching nodes which upon receipt of address information select between a multitude of paths through the switching system by the transmission of communication request signals and the receipt of communication available signals from downstream switching nodes. This system avoids the reliability problems by automatically bypassing switching nodes which are experiencing either unbalanced traffic conditions or failing. From a maintenance point of view, a problem does exist in this system since each individual switching node within the path automatically exercises control in order to bypass the failing or overburdened downstream nodes. The result of this decision being made at each individual switching node is that no single control circuit can set up a predetermined path through the network in order to test for failing switching nodes. Hence, it is possible for this system to slowly degrade due to failing nodes without maintenance features being able to detect this degradation except when extremely high traffic conditions are being experienced.
SUMMARY OF THE INVENTION
The above-mentioned problems are solved and a technical advance is achieved in accordance with the principles of this invention in an illustrative switching node that is responsive to an external signal to select one of a multitude of paths designated by a received address, so as to bypass switching nodes in subsequent stages which are experiencing unbalanced traffic or failure conditions, and further responsive to another external signal to select a predetermined path as designated by received address information, so as to allow maintenance procedures to be exercised on the switching network of which the switching node is a component.
Advantageously, the switching nodes are grouped into sets that are arranged into a plurality of stages to form a switching network. A given set of switching nodes of one stage is interconnected to a particular set of switching nodes in the preceding stage via links, and each switching node of this set is capable of responding to address information transmitted from a preceding switching node and a multipath signal to set up one of a plurality of paths to the next sequential stage and responsive to address information transmitted from the preceding switching node and a unipath signal to set up a predetermined path to the next sequential stage.
In addition, each of the switching nodes is responsive to the receipt of the address signals for transmitting to the other switching nodes within the set a link busy signal indicating that the link over which the address was received is now being serviced by that switching node and that the other nodes in the set should enter the inactive state with respect to the designated link.
BRIEF DESCRIPTION OF THE DRAWING
FIG. 1 illustrates, in block diagram form, a prior art self-routing packet switching network;
FIG. 2 illustrates, in block diagram form, a packet switching network which is the subject matter of this invention;
FIG. 3 illustrates how certain of the figures may be combined;
FIG. 4 illustrates information transmitted by a trunk controller during the setup and routing of a packet through the switching network illustrated in FIG. 2;
FIG. 5 is a detailed block diagram of switch node 102-0 of FIG. 2;
FIGS. 6 and 7 illustrate in greater detail link control 500 of switch node 102-0;
FIG. 8 is a timing diagram indicating the signals utilized during the setup of a path through the switching network illustrated in FIG. 2;
FIG. 9 illustrates the state diagram executed by controller 621 of FIG. 6 during the routing of information through switch node 102-0;
FIG. 10 illustrates, in block diagram form, trunk controller 207-0 of FIG. 2; and
FIG. 11 illustrates, in block diagram form, the reconfiguration of the network illustrated in FIG. 2.
DETAILED DESCRIPTION
FIG. 2 shows an illustrative packet switching network which is the focus of this invention. The interconnections within a section of a particular stage, such as section 3 of stage 3, are representative of all interconnections within all sections of that particular stage. The manner in which interconnection is made between two nodes of a pair is illustrated in greater detail for nodes 104-0 and 104-1. For ease of understanding, all numerical designations exclusive of those in the 300 series define interconnection and switching node positions which are identical to those of FIG. 1. Only the links which transfer packets from trunk controller 207-0 to trunk controller 208-1 are illustrated in FIG. 2. Within each stage of FIG. 2, with the exception of stage 6, each switching node is responsive to information received via a decouple conductor, e.g., DC2, which carries either the multipath or the unipath signal, to function as in either a multiple path network or a single path network. If the decouple, or unipath, signal is present on the decouple conductor, each node within a stage performs functions identical to those of a node in the network illustrated in FIG. 1. If the decouple signal is not present on the decouple conductor but rather the multipath signal is present, each switching node is paired with another switching node such that a packet received on one of the inputs of a pair of nodes can be transferred from any of the four outputs associated with that pair of switching nodes.
A path through the switching network of FIG. 2 is set up by trunk controller 207-0 transmitting address information through the network prior to the transmission of data information. If the decouple signal is not present, this address information is used by each pair of switching nodes within a given stage to determine which two of the four outputs associated with that pair of switching nodes can be used in the communication of the packet to the subsequent stage. If the decouple signal is present, this address information is used by the switching node receiving the address information to determine which of the two outputs of the receiving node is designated by the address information for communicating the packet to the subsequent stage. If the designated output is busy, then the packet cannot be communicated to the subsequent stage. The trunk controller will have to attempt, at a later point in time, to establish the path. If both of the outputs are idle, then a predetermined one of the outputs is utilized; however, if only one output is available then that output is used to communicate the packet to the subsequent stage. Once a path has been set up through the switching network, the path remains set up until an end flag is detected in the data information by each node in the path. Since the path remains set up until the end flag is received, the network can be used for both packet and circuit switched transmission.
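The per-stage decision just described can be summarized as a small sketch. The following is a hand-written illustration of the selection rule, under assumed conventions that are not taken from the patent: the function name, the 0-3 numbering of a node pair's outputs (0-1 on the addressed node, 2-3 on its partner), and the set used to track idle outputs are all mine.

    # Illustrative sketch of the output-selection rule described above.
    def select_output(address_bit, unipath, idle_outputs):
        if unipath:
            # Decouple signal present: only the single output designated
            # by the address bit may be used; if it is busy, setup fails
            # and the trunk controller must retry later.
            return address_bit if address_bit in idle_outputs else None
        # Multipath: the address bit designates two of the pair's four
        # outputs, one per node; a predetermined one is preferred when
        # both are idle, and the other is used when only it is idle.
        for candidate in (address_bit, address_bit + 2):
            if candidate in idle_outputs:
                return candidate
        return None  # both designated outputs busy: packet is blocked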
The method used for pairing the switching nodes together as illustrated in FIG. 2 is defined as follows. Let $[P_{m-1} \ldots P_2 P_1]_i^n$ (where m equals the number of stages in the network, n equals the node number, and i equals the stage number) be the binary representation of node n's position within stage i; each P represents one binary bit. Also, let $[P_{m-1} \ldots P_2 P_1 P_0]_i^l$ be the binary representation of link l to the node in stage i. The binary representation of the partner of a node $[P_{m-1} \ldots P_i \ldots P_1]_i^n$ is $[P_{m-1} \ldots \overline{P_i} \ldots P_1]_i^n$ where $i \leq m/2$, and is $[P_{m-1} \ldots P_i \ldots \overline{P_1}]_i^n$ where $i > m/2$.
For example, switching node 103-0 in stage 3 is represented by $[00000]_3^0$, and its partner is $[00\overline{0}00]_3^4$, which can also be written as $[00100]_3^4$.
Another method for pairing the switching nodes together is defined as follows. Let the node n's position and the link number be defined as previously described. The binary representation of the partner of a node $[P_{m-1} \ldots P_i \ldots P_1]_i^n$ is $[P_{m-1} \ldots \overline{P_{(m/2-i+1)}} \ldots P_1]_i^n$ where $i \leq m/2$, and is $[P_{m-1} \ldots P_i \ldots \overline{P_1}]_i^n$ where $i > m/2$.
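These pairing rules lend themselves to a quick computational check. The sketch below implements the first rule under conventions of my own choosing (an integer node number with P1 as the least significant bit); it reproduces the 103-0/103-4 example above and the adjacent pairing used in stages 4 through 6.

    # Illustrative check of the first pairing rule; not from the patent.
    def partner(node, stage, m):
        # Complement bit P_i for stages i <= m/2, bit P_1 for i > m/2.
        bit = stage if stage <= m // 2 else 1
        return node ^ (1 << (bit - 1))

    # Stage 3 of the six-stage network: node 103-0 pairs with 103-4
    # ([00000] -> [00100]), matching the example above.
    assert partner(0, 3, 6) == 4
    # Stage 4 (i > m/2): nodes pair with the adjacent node, e.g. 103-0
    # with 103-1, as stated later in the text.
    assert partner(0, 4, 6) == 1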
To illustrate the overall function of the packet switching network illustrated in FIG. 2, consider the routing of the packet illustrated in FIG. 4 from trunk controller 207-0 to trunk controller 208-1. The links illustrated in FIG. 2 are all of the available paths that can be utilized for switching the packet illustrated in FIG. 4 to trunk controller 208-1 if none of the stages is receiving the decouple signal. The path is set up through the switching network by trunk controller 207-0 first transmitting the address field of the packet illustrated in FIG. 4 and a setup signal during path setup time to the switching network via input link 200-0. If all of the stages are receiving decouple signals, the path from trunk controller 207-0 to trunk controller 208-1 can only be set up via nodes 100-0, 101-0, 102-0, 103-0, 104-0, and 105-0. The path setup time is determined by system timing generator 65. Each switching node, upon receipt of the address information and the setup signal, deletes the most significant bit of the address information that it receives. For example, a switching node in stage 1 receives all six address bits, A0-A5, deletes the A5 bit, and then retransmits bits A0 through A4 and the setup signal to the second stage. This process continues until the address field and setup signal reach stage 6, where a switching node receives only the A0 bit.
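The progressive deletion of address bits can be pictured with a short simulation; this is an illustrative sketch of the scheme just described, not the node hardware itself.

    # Each stage consumes the most significant remaining address bit.
    def route(address_bits):
        # address_bits: [A5, A4, A3, A2, A1, A0], most significant first.
        bits = list(address_bits)
        for stage in range(1, 7):
            routing_bit = bits.pop(0)  # the stage deletes the MSB it receives
            print(f"stage {stage}: routes on A{6 - stage} = {routing_bit}, "
                  f"forwards {bits}")

    # route([0, 0, 0, 0, 0, 1]) shows stage 1 consuming A5 and stage 6
    # receiving only A0.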
Consider now, in greater detail, the determination of the path from trunk controller 207-0 to trunk controller 208-1 when none of the stages is receiving the decouple signal. Switching node 100-0 is responsive to address information on input link 200-0 to transfer this information and the setup signal either to node 101-0 via link 201-0 or to node 101-1 via cable 300-0, node 100-1, and link 201-2. Node 100-0 performs this transfer operation in response to the address information by directly interrogating whether or not link 201-0 is idle and by interrogating, via node 100-1 and cable 300-0, whether or not link 201-2 is idle. If the address information and setup signal reach node 101-0 of section 2 of stage 2 via link 201-0, they can then be communicated to section 4 of stage 3 via either node 101-0 or node 101-2 via cable 301-0. Similarly, if the address information and setup signal are conveyed to section 2 via link 201-2, then switching node 101-1 or switching node 101-3 can be used to communicate the address information and setup signal to section 4 of stage 3.
Once the node in section 2 of stage 2 is designated for transferring the address information, that node enters a wait state awaiting confirmation that a path has been completed to trunk controller 208-1. Upon receipt of this confirmation, the node enters a busy state and transfers the subsequent data information of the packet to the selected output link until the end flag is detected in the packet. Once this occurs, the node enters the idle state.
The previous discussion illustrates that there are four links available for transferring the address information, the setup signal, and the remainder of the packet between stage 2 and stage 3 of FIG. 2, versus only one link available in FIG. 1, when a packet is being transferred through the switching network from trunk controller 207-0 to trunk controller 208-1. Note that the transfer of packets on cables such as cable 300-0 is bidirectional, so that packets can be transferred from switching node 100-1 to switching node 100-0 via cable 300-0 and vice versa.
From the previous discussion, it can be seen that the address information and setup signal can be transferred to section 4 of stage 3 via links 202-0, 202-2, 202-4, or 202-6. The switching nodes 102-0 through 102-7 that comprise section 4 are responsive to the address information on one of the previously mentioned links to transfer the address information and setup signal to stage 4 via links 203-0, 203-2, 203-4, 203-6, 203-8, 203-10, 203-12, or 203-14. Once again, in comparison to FIG. 1, FIG. 2 at stage 3 has eight possible output links for transferring the address information and setup signal and the subsequent packet from trunk controller 207-0 to trunk controller 208-1, whereas FIG. 1 has only one link.
The remaining stages of FIG. 2, namely stages 4, 5, and 6, are concerned with routing the packet to the proper output conductor and because of this are simply paired with the adjacent switching node. Within stage 4, nodes 103-0 and 103-1 are responsive to the receipt of address information on links 203-0, 203-2, 203-8, or 203-10 for communicating the address information and setup signal to nodes 104-0 or 104-1 of stage 5 via links 204-0 and 204-4 or links 204-2 and 204-6, respectively. Nodes 104-0 and 104-1 are responsive to receipt of the address information to communicate the address information to node 105-0 via links 205-1 or 205-3. Node 105-0 is responsive to the address information to send the setup signal to trunk controller 208-1.
Trunk controller 208-1 is responsive to receipt of the setup signal from node 105-0 to transmit back an acknowledge signal to node 105-0. The latter is responsive to the acknowledge signal to retransmit the acknowledge signal to stage 5 via the link that had been previously selected during the path setup operation. The acknowledge signal then is rippled back to trunk controller 207-0 via the previously setup path. When the acknowledge signal is received by trunk controller 207-0, it commences transmission of the packet illustrated in FIG. 4. As each node in the path that has been selected receives an acknowledge signal from the preceding node, that node receiving the acknowledge signal enters a busy state locking the path through the network until the end flag is received in the packet data. If the node, while in the wait state, does not receive the acknowledge signal, it is forced into the idle state during the next path setup time by the path signal.
In general, it can be shown that for an "n" stage banyan network, the architecture illustrated in FIG. 2 provides $2^{n/2}$ alternate output links at the middle stage if none of the stages is receiving the decouple signal. For the six-stage network of FIG. 2, this gives $2^{6/2} = 8$, in agreement with the eight output links available at stage 3 as noted above. This additional number of alternate output links greatly reduces the problem of traffic unbalance.
Nodes 102-0 and 102-4 are illustrated in greater detail in FIG. 5. Each node consists of two link control circuits such as link control circuit 500. Nodes 102-0 and 102-4 intercommunicate via cable 302-0, which comprises subcables 504 through 508. The link control circuits are responsive to the decouple signal, DC2, to ignore all information transmitted on cable 302-0. As described in greater detail in the discussion of FIG. 6 and FIG. 7, which illustrate link control 500, each link control circuit has four states: idle, setup, wait, and busy. When a link control circuit is in the idle state, it is responsive to the setup signal received via a link to enter the setup state. While in the setup state, link control circuits 500 through 503 monitor subcables 504 through 507 for the address information. If the most significant bit of the received address information is a "0", link control circuits 500 and 502 respond by establishing a path through links 203-0 or 203-8, respectively; if the most significant bit is a "1", link control circuits 501 and 503 attempt to establish a communication path through links 203-1 and 203-9, respectively. Conflicts are avoided since link control circuit 500 attempts to establish this communication path at a different point in time than link control 502, utilizing timing information that is received from system timing generator 65 via cable 66. Whether or not a link control circuit can establish a path depends on whether or not the link connected to it is idle. If a link control circuit establishes the path, it then enters a wait state during which it continues to communicate the remaining address bits of the address information received via the subcable to the subsequent stages.
When the link control circuit receives an acknowledge signal back from the succeeding stage, it enters the busy state and remains in the busy state until the end flag is received. If the link control circuit does not receive the acknowledge signal before transmission of the next setup pulse, indicating that no path was completed, it is placed in the idle state by that setup signal.
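The four states and the transitions described in the two preceding paragraphs can be collected into an abstract state table. This is a summary of the behavior, not controller 621's gate-level logic, and the event names are mine.

    # State table for a link control circuit, per the text above.
    TRANSITIONS = {
        ("idle",  "setup_signal"): "setup",  # setup signal on an idle link
        ("setup", "setup_ends"):   "wait",   # path chosen, address forwarded
        ("wait",  "acknowledge"):  "busy",   # ack from the succeeding stage
        ("wait",  "next_setup"):   "idle",   # no ack: no path was completed
        ("busy",  "end_flag"):     "idle",   # end flag releases the path
    }

    def step(state, event):
        # Unlisted (state, event) pairs leave the state unchanged.
        return TRANSITIONS.get((state, event), state)

    assert step("idle", "setup_signal") == "setup"
    assert step("wait", "acknowledge") == "busy"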
When receiving the DC2 signal, the link control circuits also communicate information over subcable 508 indicating whether or not they currently have a path established for a given incoming link such as 202-0. The purpose of this information communicated over subcable 508 is to keep the other link control circuits from erroneously responding to packet data bits by interpreting them as address information and a setup signal.
Link control circuit 500 is illustrated in greater detail in FIG. 6 and FIG. 7. Link control 502 is identical in design; however, link control circuits 501 and 503 are different in that the lower inputs of gates 616 through 619 do not have a negative true input, since these link control circuits respond to the address bit being a "1".
As illustrated in FIG. 6 and FIG. 7, each link comprises two conductors; for example, link 202-0 comprises conductors 600 and 601. As previously described, each of the four link control circuits within a pair of nodes can independently be in one of four states: idle, setup, wait, and busy. When the link control circuit is in the busy state, the two conductors in each link both transfer data information to the link control circuit. During the busy state, one conductor (the even conductor) communicates all even data bits, such as D0, and the other conductor (the odd conductor) communicates all odd data bits, such as D1. During the idle, setup, and wait states, however, these two conductors within a given link serve different purposes and are designated as the even and odd conductors. For example, within link 202-0, conductor 600 is designated as I00 (even conductor), and conductor 601 is designated as I01 (odd conductor).
Link 202-16 comprises conductors 602 (I10, even conductor) and 603 (I11, odd conductor); link 202-8 comprises conductors 604 (I20, even conductor) and 605 (I21, odd conductor); and link 202-18 comprises conductors 606 (I30, even conductor) and 607 (I31, odd conductor). During the setup state, the originating trunk controller transmits a setup signal for six system clock signals on the even conductor and transmits the address information on the odd conductor for the same period of time.
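The two conductor roles can be restated as a pair of encoding helpers; these are hypothetical illustrations of the bit conventions just described, not circuitry from the patent.

    # Setup state: the even conductor carries the setup signal ("1") for
    # one clock period per address bit, while the odd conductor carries
    # the address bits themselves.
    def setup_frame(address_bits):
        return [1] * len(address_bits), list(address_bits)

    # Busy state: even-numbered data bits (D0, D2, ...) travel on the
    # even conductor, odd-numbered bits (D1, D3, ...) on the odd one.
    def busy_frame(data_bits):
        return data_bits[0::2], data_bits[1::2]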
FIG. 8 shows the transmission which takes place between the six stages on the even and odd conductors during the path setup time. As illustrated on line 813, the system clock provides the fundamental timing for the switching nodes in the six stages, and the path signal on line 830 defines the start of path setup time. Initially, trunk controller 207-0 transmits the information shown on lines 800 and 801 to switching node 100-0 via the even and odd conductors of link 200-0, respectively. During time 822, the address detection circuit of node 100-0 is responsive to the setup signal on line 800 to interrogate the A5 address bit, a "0", that is being received on the odd conductor and is shown as the input to stage 1 on line 801. Switching node 100-0 is responsive to this information to transfer any subsequent information received on the odd and even conductors from trunk controller 207-0 to the selected node in stage 2 starting with time 823. The setup pulse and the address bits are transferred through the stages as illustrated by lines 802 through 812.
As will be described in greater detail later, at each stage the node removes the most significant address bit; for example, stage 1 removes the A5 address bit. Since the A5 address bit has been removed in stage 1, the receiving node in stage 2 receives the A4 address bit, as shown on line 803, simultaneously with the reception of the setup signal, as shown on line 802. As shown on line 812, the node in stage 6 which received the information on the odd and even conductors transfers this information to trunk controller 208-1 at time 824. Trunk controller 208-1 is responsive to the setup signal to transfer back an acknowledge signal on the odd conductor. The acknowledge signal is then transferred through all the switching stages back to trunk controller 207-0, as illustrated in lines 815 through 821. When trunk controller 207-0 receives the acknowledge signal via stage 1, it commences the transmission of the packet data.
Consider now the operation of link control 500 of FIG. 5 as illustrated in FIG. 6 and FIG. 7. Link control circuits 501 through 503 are similar in design, and the differences are pointed out in the following discussion. Controller 621 performs the control functions for link control 500. Address detection block 646 is operational during the setup state and in the absence of the DC2 signal to detect the occurrence of the address bit being received from one of the attached links and to ascertain that no other link control in the pair of nodes is currently switching data for that particular link. When address detection block 646 is receiving the DC2 signal, a "0" signal, it does not respond to information received on links 202-24 and 202-16, since the DC2 signal disables gates 613 and 615 from responding to address information on those links. In addition, address detection block 646 detects the end of the setup state in order to signal controller 621 to transfer to the wait state. Address detection block 646 determines that the setup state is over when it is no longer receiving the setup signal.
Data selection block 647 is utilized to transfer information from a selected link to outgoing link 203-0 of link control 500. Data selection block 647 becomes operational during the setup state after the first bit of the address information has been decoded and the determination is made that link control 500 is to communicate the remainder of the address information and the setup signal to link 203-0. Data selection block 647 is also operational during the busy state to transfer the information data from the selected input link to link 203-0. However, during the wait state, data selection block 647 is not active and does not transfer any bits on link 203-0, since during the wait state link control 500 is awaiting the acknowledge signal from stage 4 via conductor 652 of link 203-0.
Flag detector 636 is responsive to the receipt of the end flag in the information data to signal controller 621 to enter the idle state. Acknowledge transmission block 660 is utilized by controller 621 to retransmit the acknowledge signal received from stage 4 back to stage 2.
As stated in the previous example, assume that the information shown on line 804 of FIG. 8 is being received on conductor 600 (the even conductor) of link 202-0 and that the information shown on line 805 of FIG. 8 is being received on conductor 601 (the odd conductor). Further assume that link control circuit 500 is in the idle state. Link control 500 responds to this information on conductors 600 and 601 during time 825, and link control 502 responds during time 826. This difference in response time avoids contention problems between the link control circuits. In order to determine whether any other link control is responding to information data or path setup information, gate 608 of address detection block 646 monitors signals from the other three link control circuits to ascertain that these circuits are not currently receiving packet data or path setup information on link 202-0. The monitoring is performed by OR gate 608 responding to the G0 state bits of link controls 501, 502, and 503, which are transferred to gate 608 via cable 508 from latches similar to latch 622. If the output of OR gate 608 is a "0", the link is not active for the transmission of packet data or path setup information in another link control circuit of the node pair. Since the address bit on conductor 601 is a "0" (A3, as shown on line 805 of FIG. 8), gate 616 transmits a "1" to arbitration circuit 620. The gates similar to gate 616 in link controls 501 and 503 respond only to an address bit A3 that is a "1". The outputs of arbitration circuit 620, J0 through J3, respond to its inputs, K0 through K3, as defined by the following equations:
$J_0 = K_0$

$J_1 = \overline{K_0}\,K_1$

$J_2 = \overline{K_0}\,\overline{K_1}\,K_2$

$J_3 = \overline{K_0}\,\overline{K_1}\,\overline{K_2}\,K_3$
Arbitration circuit 620 is responsive to a "1" received on its K0 input from gate 616 to transmit a "1" to controller 621 via conductor 661. Controller 621 responds to a "1" on conductor 661 by leaving the idle state, entering the setup state, and setting the G0 bit position of latch 622 to a "1". When the G0 bit position is set, a "1" is transmitted via conductor 650 to gates 623 and 627, which are thereby enabled to transfer the subsequent information being received on conductors 600 and 601 to conductors 651 and 652, the conductors of output link 203-0, via gates 631 and 632, flip-flops 633 and 634, and gate 635. In addition, the fact that the G0 bit in latch 622 has been set is transmitted via subcable 508 to link control circuits 501, 502, and 503 to indicate that link 202-0 has been selected by link control circuit 500.
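The equations above describe a fixed-priority encoder in which the lowest-numbered active request wins, so at most one J output is ever a "1". The following minimal sketch, with invented function and variable names, illustrates that behavior; it is not a reproduction of circuit 620 itself.

```python
# Illustrative sketch of the arbitration equations: J0 = K0,
# J1 = ~K0·K1, J2 = ~K0·~K1·K2, J3 = ~K0·~K1·~K2·K3.

def arbitrate(k):
    """k is a list [K0, K1, K2, K3] of 0/1 request bits; returns [J0..J3]."""
    j = []
    blocked = False                  # True once a lower-numbered input has won
    for bit in k:
        j.append(0 if blocked else bit)
        blocked = blocked or bit == 1
    return j

assert arbitrate([1, 1, 0, 1]) == [1, 0, 0, 0]   # K0 wins
assert arbitrate([0, 0, 1, 1]) == [0, 0, 1, 0]   # K2 wins over K3
```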
The transfer of subsequent information by data selection block 647 continues until time 827. At this time, gate 645 of address detection block 646 senses that link control circuit 500 is no longer receiving the setup signal on conductor 600 and transmits a "0" signal indicating this fact to controller 621 via conductor 662. Controller 621 is responsive to receipt of this "0" via conductor 662 to enter the wait state. Upon entering the latter state, controller 621 prepares link control 500 to receive the acknowledge signal from stage 4. Controller 621 transmits the OPENd signal via conductor 653, which disables gate 635 from transmitting on conductor 652 and also resets flip-flop 637.
When the acknowledge signal is received from stage 4 at time 828, flip-flop 637 is set, and its Q output transmits a "1" via conductor 654 to controller 621. In response to a "1" on conductor 654, controller 621 retransmits the acknowledge signal to stage 2 and enters the busy state. Controller 621 retransmits the acknowledge signal by transmitting a "1" via conductor 655 to gates 641 through 644. Since the G0 output is a "1", and this fact is transmitted on conductor 650, gate 641 transmits an acknowledge pulse on conductor 601 back to stage 2. In addition, controller 621 removes the OPENd signal, which re-enables gate 635 so that data selection block 647 can transmit data on conductor 652.
If link control circuit 500 does not receive an acknowledge signal from stage 4 via conductor 652 before time 832, as shown on line 830 of FIG. 8, controller 621 is forced into the idle state by a signal from OR gate 640 and AND gate 639 upon receipt of the path signal by gate 639. One reason for an acknowledge signal not being received back from stage 4 is that it was impossible to set up a path to trunk controller 208-1. Forcing controller 621 to the idle state via OR gate 640 and AND gate 639 upon the path signal assures that controller 621 does not remain in the wait state indefinitely.
While in the busy state, controller 621 transfers any subsequent data received on conductors 600 and 601 to conductors 651 and 652, respectively, while monitoring the data being transferred to detect the end flag. When the end flag is detected by flag detector 636 (which is enabled by the busy signal), a signal indicating this fact is transmitted to controller 621 via OR gate 640. Controller 621 is responsive to the receipt of the end flag indication to enter the idle state.
FIG. 9 illustrates the state table for controller 621; this table defines the overall operation of controller 621 in detail.
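The states and transitions described in the preceding paragraphs can be summarized in a short sketch. This is illustrative only; the actual state table of FIG. 9 is more detailed, and the event names used here are invented for the example.

```python
# Simplified sketch of controller 621's states as described in the text.
# Event names are hypothetical labels for the signals discussed above.

IDLE, SETUP, WAIT, BUSY = "idle", "setup", "wait", "busy"

def next_state(state, event):
    transitions = {
        (IDLE,  "addr_match"):   SETUP,  # "1" on conductor 661 from circuit 620
        (SETUP, "setup_ends"):   WAIT,   # "0" from gate 645 via conductor 662
        (WAIT,  "ack"):          BUSY,   # acknowledge from stage 4 sets flip-flop 637
        (WAIT,  "path_timeout"): IDLE,   # path signal via gates 639 and 640
        (BUSY,  "end_flag"):     IDLE,   # end flag detected by flag detector 636
    }
    return transitions.get((state, event), state)

assert next_state(WAIT, "path_timeout") == IDLE
```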
Trunk controller 207-0 is illustrated in FIG. 10. Trunk controller 207-0 is responsive to data received from a data customer to transmit this data to the switching network illustrated in FIG. 2. When the data customer has data for transmission, it transmits the start signal to control 1004 via conductor 1013. Control 1004 is responsive to the start signal to transmit the hold signal via conductor 1014 to the data customer and to route the subsequent data transmitted by the data customer via cable 1012 through data synchronization circuit 1002, parallel-to-serial converter 1003, and data selector 1005 to shift register 1006. Control 1004 initializes this path by transmitting the appropriate signals on conductors 1015, 1016, and 1017. Also, control 1004 starts counter 1011 to count the six bit times during which the six address bits received from the data customer are transferred into shift register 1006. The transmission of a "1" on conductor 1019 to control 1004 by counter 1011 informs control 1004 that the six address bits have been stored in shift register 1006, whereupon control 1004 transmits the hold signal to the data customer via conductor 1014. At this point, control 1004 is in a wait state until system timing generator 65 generates the path signal. Upon receipt of the path signal, control 1004 conditions data selectors 1007 and 1008 to accept data from their "0" inputs and transfers the address bits stored in shift register 1006 through data selector 1008 to link 200-0 via gate 1009 and conductor 1001. At the same time, since data selector 1007 is selecting its "0" input, a "1" signal is transmitted out on conductor 1000; the latter is the previously described setup signal.
After the six address pulses have been transmitted out, as determined by counter 1011, control 1004 disables gate 1009, which enables flip-flop 1010, and awaits the acknowledge pulse returned on conductor 1001. Upon receipt of the acknowledge pulse, control 1004 transmits a signal via conductor 1014 informing the data customer that transmission of data can continue. After the data customer has finished transmitting either a packet or the circuit switched data, the data customer ceases to transmit the start signal via conductor 1013. In response to the absence of the start signal, control 1004 transmits a signal to flag generator 1023 via conductor 1024 that causes flag generator 1023 to transmit the end flag bits via data selectors 1007 and 1008 and conductors 1000 and 1001, respectively. In response to this end flag, the nodes in the path release the path.
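A simplified sketch of this trunk-controller sequence follows. The send() and wait_for_ack() callbacks and the end-flag bit pattern are assumptions standing in for the hardware of FIG. 10; only the ordering of the steps comes from the description above.

```python
# Illustrative sketch only: buffer six address bits, shift them out with the
# setup signal raised, wait for the acknowledge, then send data and end flag.

def transmit(customer_bits, send, wait_for_ack):
    address, data = customer_bits[:6], customer_bits[6:]
    for bit in address:             # shift register 1006 shifts the address out
        send(setup=1, data=bit)     # the "1" on conductor 1000 is the setup signal
    if not wait_for_ack():          # flip-flop 1010 watches conductor 1001
        raise TimeoutError("no acknowledge received; path setup failed")
    for bit in data:                # customer data follows the acknowledge
        send(setup=0, data=bit)
    for bit in (0, 1, 1, 1, 1, 1, 1, 0):  # end flag (pattern assumed here)
        send(setup=0, data=bit)     # flag generator 1023 releases the path
```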
In order to adapt the architecture of FIG. 3 for the utilization of very large scale integration (VLSI), it is necessary to perform a topological transformation of the architecture illustrated in FIG. 3 such that the two switching nodes of each pair physically occupy positions next to one another. The topological transformation of FIG. 3 is illustrated in FIG. 11, with each pair of nodes being shown as a single element. The numeric designation of each element corresponds to the lower numerical designation of its two switching nodes in FIG. 3. For example, in stage 3 of FIG. 3, the switching node pair 102-0 and 102-4 is designated on FIG. 11 as 1102-0, and in the second stage of FIG. 3, the switching pair 101-1 and 101-3 is designated 1101-1. The topological transformation is formally defined as follows. Since the network is transformed so that two switching nodes that share traffic are adjacent, the shuffling function $S_i$, which performs this operation on the $i$th-stage nodes of FIG. 3 and gives the new position of the link $[P_{n-1} \ldots P_i P_{i-1} \ldots P_2 P_1 P_0]$ after shuffling, is defined as
$S_i\big[[P_{n-1} \ldots P_{i+1} P_i P_{i-1} \ldots P_2 P_1 P_0]\big] = [P_{n-1} \ldots P_{i+1} P_{i-1} \ldots P_2 P_1 P_i P_0]$

where $i = 2, 3, \ldots, n/2$.
Let $S_i^{-1}$ be the inverse function of $S_i$; then

$S_i^{-1}\big[[P_{n-1} \ldots P_{i+1} P_i P_{i-1} \ldots P_2 P_1 P_0]\big] = [P_{n-1} \ldots P_{i+1} P_1 P_i P_{i-1} \ldots P_2 P_0]$
Define $T_i$ as follows:

$T_i\big[[P_{n-1} \ldots P_{i+1} P_i P_{i-1} \ldots P_2 P_1 P_0]\big] = [P_{n-1} \ldots P_{i+1} P_0 P_{i-1} \ldots P_2 P_1 P_i]$
$D_i$ is the topological describing rule that defines the way the links of the $i$th-stage nodes are connected to the links of the $(i+1)$th-stage nodes, and $D_i = T_{n-i}$. The topology of the transformed network of FIG. 11 is then given by $S_{i+1} D_i S_i^{-1}$.
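These permutations can be exercised with a short sketch. It is illustrative only: the MSB-first list representation of a link label and the function names are assumptions, but the rotations implement the $S_i$, $S_i^{-1}$, and $T_i$ definitions above, and the assertion checks that $S_i^{-1}$ undoes $S_i$.

```python
# Sketch of the shuffle functions S_i, S_i^-1, and T_i acting on a link
# label written MSB-first as [P_{n-1}, ..., P_1, P_0].

def S(bits, i):
    """Rotate bit positions i..1 up by one: P_i moves to position 1."""
    n, out = len(bits), list(bits)
    window = bits[n - 1 - i : n - 1]          # positions i .. 1
    out[n - 1 - i : n - 1] = window[1:] + window[:1]
    return out

def S_inv(bits, i):
    """Inverse rotation: the bit at position 1 moves back to position i."""
    n, out = len(bits), list(bits)
    window = bits[n - 1 - i : n - 1]
    out[n - 1 - i : n - 1] = window[-1:] + window[:-1]
    return out

def T(bits, i):
    """Exchange the bits at positions i and 0."""
    n, out = len(bits), list(bits)
    out[n - 1 - i], out[n - 1] = out[n - 1], out[n - 1 - i]
    return out

label = [1, 0, 1, 1, 0, 0]                    # [P5, P4, P3, P2, P1, P0]
assert S_inv(S(label, 3), 3) == label         # S_i^-1 undoes S_i
```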
It is to be understood that the above-described embodiment is merely illustrative of the principles of this invention; other arrangements may be devised by those skilled in the art without departing from the spirit and scope of the invention. In particular, one skilled in the art could readily observe that for the purposes of different maintenance strategies that the decoupling signals could be distributed to different groupings of nodes other than that shown in the illustrative embodiment.

Claims (12)

What is claimed is:
1. An alternate-path, self-routing, packet and circuit switching system for switching packets and circuit switched information from a plurality of digital data units and for detecting failures within said system, said system comprising:
a switching network having a plurality of stages each comprising interconnected switch nodes;
a plurality of distributed controllers each for interfacingly connecting an individual one of said digital data units to one of said switch nodes;
each of said controllers responsive to a receipt of one of said packets and start of circuit switched information for transmitting address signals to establish a path through said switching network;
each of said switch nodes of one stage being interconnected to a set of switch nodes in the next sequential stage;
each of said switch nodes responsive to receipt of said address signals and a multipath signal for setting up said path via any one of a subset of said set of switch nodes in said next sequential stage; and
each of said switch nodes responsive to receipt of said address signals and a unipath signal for setting up said path via a predetermined one of said set of switch nodes in said next sequential stage.
2. The system of claim 1 wherein each of said switch nodes further comprises means for designating one of said subset of switch nodes in said sequential stage in response to said address information and said multipath signal;
means for transmitting a communication request signal to the designated one of said subset of said nodes;
means responsive to a communication unavailable signal from said designated one of said subset for entering an idle state; and
each of said nodes further comprises means responsive to said communication unavailable signal for transmitting another communication unavailable signal to the last preceding stage.
3. The system of claim 2 wherein each of said switch nodes further comprises means responsive to receipt of said address information for transmitting an input link busy signal to the switching nodes of the stage containing said receiving switch node that are interconnected to said receiving switching node.
4. The system of claim 3 wherein said address information comprises address signals and said receiving switching node further comprises means responsive to receipt of said address information to eliminate the most significant address signal of said address signals and to retransmit the remaining address signals to the next sequential stage.
5. A switching network for switching packets and circuit switched information from a plurality of input ports to a plurality of output ports, said network comprising:
a plurality of stages;
each of said stages responsive to address information from the preceding stage and a multipath signal for selecting one of a plurality of paths through that stage to the next sequential stage; and
each of said stages responsive to address information from the preceding stage and a unipath signal for utilizing a predetermined path to said next sequential stage.
6. The system of claim 5 wherein said address information comprises address signals and each of said stages comprises interconnected switch nodes;
each of said switch nodes of one stage being interconnected to a set of switch nodes in the next sequential stage; and
each of said switch nodes responsive to receipt of said address signals and said multipath signal for setting up said path via one of said set of switch nodes in said next sequential stage.
7. The system of claim 6 wherein each of said switch nodes further comprises means for designating one of said set of switch nodes in said sequential stage in response to receipt of said address signals;
means for transmitting a communication request signal to the designated one of said set of said nodes;
means responsive to receipt of a communication unavailable signal from said designated one of said set for entering an idle state; and
each of said nodes further comprises means responsive to said communication unavailable signal for transmitting another communication unavailable signal to the last preceding stage.
8. The system of claim 7 wherein each of said switch nodes further comprises means responsive to receipt of said address signals for transmitting an input link busy signal to the switch nodes of the stage containing said receiving switch node that are intra-connected to said receiving switch node; and
said intra-connected switch nodes responsive to said input link busy signal for entering an inactive state with respect to said address signals.
9. The system of claim 8 wherein
said receiving switch node further comprises means responsive to receipt of said address signals for eliminating the most significant address signal; and
means for retransmitting the remaining address signals to the next sequential stage.
10. A switching network for switching packets and circuit switched information from a plurality of input ports to a plurality of output ports, said network comprising:
a plurality of stages;
each of said stages comprising a plurality of interconnected switch nodes;
each of said switch nodes of one stage being interconnected to a set of switch nodes in the next sequential stage;
one of said switch nodes in one of said stages comprises means responsive to the receipt of address information for generating designating signals;
means responsive to receipt of a multipath signal for generating an enable signal;
means responsive to a unipath signal for generating a disable signal;
means responsive to said designating signals and said enable signal for selecting any one of said set of switch nodes in said next sequential stage for establishing a path through said switching network and further responsive to said designating signals and said disabling signal for selecting a predetermined one of said set of switching nodes to establish said path through said switching network; and
means responsive to said subsequent packet and circuit switched information for communicating the latter information through said one of said switching nodes to said next sequential stage.
11. The switching network of claim 10 wherein said one of said switch nodes further comprises means responsive to a communication unavailability signal from the selected subsequent node for placing said one of said switch nodes in an idle state.
12. The switching network of claim 11 wherein said switching network further comprises sets of links interconnecting said stages with each of said links being multiply connected to a particular set of switch nodes in a given individual stage;
said one of said switch nodes further comprising means responsive to link busy signals from switch nodes in the same set of switch nodes as said one of said switch nodes for entering an inactive state with respect to one of said set of links under active use by another switch node of said set of switch nodes.
US06/654,764 1984-09-26 1984-09-26 Alternate self-routing packet switching node having fault detection capabilities Expired - Lifetime US4679186A (en)

Priority Applications (13)

Application Number Priority Date Filing Date Title
US06/654,764 US4679186A (en) 1984-09-26 1984-09-26 Alternate self-routing packet switching node having fault detection capabilities
GB858523355A GB8523355D0 (en) 1984-09-26 1985-09-20 Self-routing packet switching node
DE19853533846 DE3533846A1 (en) 1984-09-26 1985-09-23 DIVERSE PACKAGE AND PERMANENT CONNECTION SWITCHING SYSTEM WITH ERROR DISPLAY POSSIBILITY
KR8506993A KR920008431B1 (en) 1984-09-26 1985-09-24 Switching system
GB08523676A GB2168221B (en) 1984-09-26 1985-09-25 Self-routing packet switching node having fault detection capabilities
IT22265/85A IT1185379B (en) 1984-09-26 1985-09-25 SWITCHING NODE OF EXTERNAL SELF-ROADING PACKAGES WITH FAULT DETECTION CAPACITY
BE0/215634A BE903317A (en) 1984-09-26 1985-09-26 SELF-ROUTING PACKET SWITCHING NODE WITH FAULT DETECTION POSSIBILITIES
DE3534355A DE3534355C2 (en) 1984-09-26 1985-09-26 Route-routing packet and permanent connection switching system with error display option
FR8514263A FR2570914B1 (en) 1984-09-26 1985-09-26 SELF-ROUTING PACKET SWITCHING NODE WITH FAULT DETECTION POSSIBILITIES
JP21120485A JPH0659052B2 (en) 1984-09-26 1985-09-26 Self-routing exchange method
CA000491633A CA1234206A (en) 1984-09-26 1985-09-26 Alternate self-routing packet switching node having fault detection capabilities
NL8502633A NL8502633A (en) 1984-09-26 1985-09-26 SELF-ROTARY SWITCHING SYSTEM.
CH4180/85A CH669293A5 (en) 1984-09-26 1985-09-26 GUIDE-SEARCHING INTERFACE WITH ERROR DISPLAY POSSIBILITIES.

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US06/654,764 US4679186A (en) 1984-09-26 1984-09-26 Alternate self-routing packet switching node having fault detection capabilities

Publications (1)

Publication Number Publication Date
US4679186A true US4679186A (en) 1987-07-07

Family

ID=24626148

Family Applications (1)

Application Number Title Priority Date Filing Date
US06/654,764 Expired - Lifetime US4679186A (en) 1984-09-26 1984-09-26 Alternate self-routing packet switching node having fault detection capabilities

Country Status (11)

Country Link
US (1) US4679186A (en)
JP (1) JPH0659052B2 (en)
KR (1) KR920008431B1 (en)
BE (1) BE903317A (en)
CA (1) CA1234206A (en)
CH (1) CH669293A5 (en)
DE (2) DE3533846A1 (en)
FR (1) FR2570914B1 (en)
GB (2) GB8523355D0 (en)
IT (1) IT1185379B (en)
NL (1) NL8502633A (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19513564A1 (en) * 1995-04-18 1996-10-24 Sel Alcatel Ag Method for operating a telecommunications network and network access switching center and transit switching center
FR2736483B1 (en) * 1995-07-07 1997-08-14 Cit Alcatel ATM CELL CONNECTION NETWORK
JPWO2004106404A1 (en) 2003-05-27 2006-07-20 富士通株式会社 Organic conductive polymer composition, transparent conductive film and transparent conductor using the same, input device using the transparent conductor, and manufacturing method thereof

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS58150349A (en) * 1982-03-02 1983-09-07 Mitsubishi Electric Corp Packet communication network
EP0097351A3 (en) * 1982-06-21 1986-02-26 Nec Corporation Router unit and routing network for determining an output port by detecting a part of an input packet

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4347498A (en) * 1979-11-21 1982-08-31 International Business Machines Corporation Method and means for demand accessing and broadcast transmission among ports in a distributed star network
US4380063A (en) * 1980-06-19 1983-04-12 International Business Machines Corp. Flow control mechanism for block switching nodes
US4577311A (en) * 1980-12-24 1986-03-18 Duquesne Jean F Packet-based telecommunication system
US4491945A (en) * 1982-06-25 1985-01-01 At&T Bell Laboratories Fast packet switch
US4512011A (en) * 1982-11-01 1985-04-16 At&T Bell Laboratories Duplicated network arrays and control facilities for packet switching

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
IEEE Transactions on Computers, vol. C-31, No. 5, May, 1982, New York (US) G. B. Adams, III, et al., "The Extra Stage Cube: A Fault-Tolerant Interconnection Network for Supersystems", pp. 443-454.

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4809261A (en) * 1987-07-10 1989-02-28 Solid State Systems, Inc. Space and time switch for 22 PCM highways
US5138611A (en) * 1987-10-16 1992-08-11 Digital Equipment Corporation Blocking message transmission or signaling error in response to message addresses in a computer interconnect coupler for clusters of data processing devices
US4887076A (en) * 1987-10-16 1989-12-12 Digital Equipment Corporation Computer interconnect coupler for clusters of data processing devices
US5084871A (en) * 1987-10-16 1992-01-28 Digital Equipment Corporation Flow control of messages in a local area network
US4845722A (en) * 1987-10-16 1989-07-04 Digital Equipment Corporation Computer interconnect coupler employing crossbar switching
WO1991007039A1 (en) * 1989-11-06 1991-05-16 American Telephone & Telegraph Company Automatic fault recovery in a packet network
WO1991007032A1 (en) * 1989-11-06 1991-05-16 American Telephone & Telegraph Company Automatic fault recovery in a packet network
AU618206B2 (en) * 1989-11-06 1991-12-12 American Telephone And Telegraph Company Automatic fault recovery in a packet network
AU626746B2 (en) * 1989-11-06 1992-08-06 American Telephone And Telegraph Company Automatic fault recovery in a packet network
US5091796A (en) * 1990-01-10 1992-02-25 Fujitsu Limited Optical communication system having an improved protection line switching mechanism
US5680634A (en) * 1991-01-16 1997-10-21 Estes; Mark D. Fixed interconnection network method and apparatus for a modular mixed-resolution, N-dimensional configuration control mechanism
US5852740A (en) * 1991-01-16 1998-12-22 Estes; Mark D. Polymorphic network methods and apparatus
US5271000A (en) * 1991-03-22 1993-12-14 International Business Machines Corporation Method and apparatus for testing and evaluation of distributed networks
US5872904A (en) * 1991-05-01 1999-02-16 Ncr Corporation Computer system using a master processor to automatically reconfigure faulty switch node that is detected and reported by diagnostic processor without causing communications interruption
US5303383A (en) * 1991-05-01 1994-04-12 Ncr Corporation Multiprocessor computer system
US5522046A (en) * 1991-05-01 1996-05-28 Ncr Corporation Communication system uses diagnostic processors and master processor module to identify faults and generate mapping tables to reconfigure communication paths in a multistage interconnect network
US20060013207A1 (en) * 1991-05-01 2006-01-19 Mcmillen Robert J Reconfigurable, fault tolerant, multistage interconnect network and protocol
US7058084B2 (en) 1991-05-01 2006-06-06 Ncr Corporation Multistage interconnect network combines back channel replies received from destinations into a single result and transmits to the source
US7706361B2 (en) 1991-05-01 2010-04-27 Teradata Us, Inc. Reconfigurable, fault tolerant, multistage interconnect network and protocol
US6243361B1 (en) 1991-05-01 2001-06-05 Ncr Corporation Multistage interconnect network uses a master processor to perform dynamic configuration for all switch nodes based on a predetermined topology
US5377180A (en) * 1991-12-23 1994-12-27 U.S. Philips Corporation Data switching device
US5495589A (en) * 1993-12-23 1996-02-27 Unisys Corporation Architecture for smart control of bi-directional transfer of data
US5450578A (en) * 1993-12-23 1995-09-12 Unisys Corporation Method and apparatus for automatically routing around faults within an interconnect system
US5729754A (en) * 1994-03-28 1998-03-17 Estes; Mark D. Associative network method and apparatus
CN1082756C (en) * 1994-04-29 2002-04-10 阿尔卡塔尔有限公司 Switching network and operation method thereof
US5629925A (en) * 1994-04-29 1997-05-13 Alcatel Sel Aktiengesellschaft Method of operating a switching network, as well as switching network and exchange therefor
WO1996036920A1 (en) * 1995-05-17 1996-11-21 Estes Mark D Associative network method and apparatus
US6430150B1 (en) * 1996-02-14 2002-08-06 Fujitsu Limited Communication node, restoration method and communication network
US6138251A (en) * 1997-06-30 2000-10-24 Sun Microsystems, Inc. Method and system for reliable remote object reference management
US6639895B1 (en) 1998-10-05 2003-10-28 Performance Technologies, Incorporated Fault tolerant network switch
US6519697B1 (en) 1999-11-15 2003-02-11 Ncr Corporation Method and apparatus for coordinating the configuration of massively parallel systems
US6745240B1 (en) 1999-11-15 2004-06-01 Ncr Corporation Method and apparatus for configuring massively parallel systems
US6418526B1 (en) 1999-11-15 2002-07-09 Ncr Corporation Method and apparatus for synchronizing nodes in massively parallel systems
US6412002B1 (en) 1999-11-15 2002-06-25 Ncr Corporation Method and apparatus for selecting nodes in configuring massively parallel systems
US20020018480A1 (en) * 2000-08-11 2002-02-14 Peter Galicki Multiprocessor network node failure detection and recovery
US7002975B2 (en) * 2000-08-11 2006-02-21 Texas Instruments Incorporated Multiprocessor network node failure detection and recovery
US7848245B1 (en) 2000-12-26 2010-12-07 Cisco Technology, Inc Network-wide connection-based debug mechanism
US7039014B1 (en) * 2000-12-26 2006-05-02 Cisco Technology, Inc. Network-wide connection-based debug mechanism
US7720084B2 (en) 2001-04-06 2010-05-18 Level 3 Communications, Llc Alternate routing of voice communication in a packet-based network
US20080112327A1 (en) * 2001-04-06 2008-05-15 Level 3 Communications, Llc Alternate routing of voice communication in a packet-based network
US7339934B2 (en) * 2001-04-06 2008-03-04 Level 3 Communications, Llc Alternate routing of voice communication in a packet-based network
US20030012178A1 (en) * 2001-04-06 2003-01-16 Mussman Harry Edward Alternate routing of voice communication in a packet-based network
US20100157983A1 (en) * 2001-04-06 2010-06-24 Level 3 Communications, Inc. System and Method for Providing Alternate Routing in a Network
US20100226363A1 (en) * 2001-04-06 2010-09-09 Mcguigan David Frederick Alternate routing of voice communication in a packet-based network
US8948190B2 (en) 2001-04-06 2015-02-03 Level 3 Communications, Llc Alternate routing of voice communication in a packet-based network
US9185142B2 (en) 2001-04-06 2015-11-10 Level 3 Communications, Llc System and method for providing alternate routing in a network
US6766482B1 (en) 2001-10-31 2004-07-20 Extreme Networks Ethernet automatic protection switching

Also Published As

Publication number Publication date
NL8502633A (en) 1986-04-16
IT1185379B (en) 1987-11-12
IT8522265A0 (en) 1985-09-25
GB8523355D0 (en) 1985-10-23
FR2570914A1 (en) 1986-03-28
GB2168221A (en) 1986-06-11
JPS6184942A (en) 1986-04-30
GB8523676D0 (en) 1985-10-30
FR2570914B1 (en) 1989-01-06
DE3534355C2 (en) 1995-04-13
KR920008431B1 (en) 1992-09-28
CH669293A5 (en) 1989-02-28
GB2168221B (en) 1988-09-28
JPH0659052B2 (en) 1994-08-03
DE3533846A1 (en) 1986-04-03
CA1234206A (en) 1988-03-15
KR860002764A (en) 1986-04-28
BE903317A (en) 1986-01-16
DE3534355A1 (en) 1986-04-17

Similar Documents

Publication Publication Date Title
US4679186A (en) Alternate self-routing packet switching node having fault detection capabilities
US4656622A (en) Multiple paths in a self-routing packet and circuit switching network
US4696000A (en) Nonblocking self-routing packet and circuit switching network
US4550397A (en) Alternate paths in a self-routing packet switching network
US4731878A (en) Self-routing switch node combining electronic and photonic switching
JP2515313B2 (en) Packet switching network
US4661947A (en) Self-routing packet switching network with intrastage packet communication
CA2100235C (en) Switch-based microchannel planar apparatus
CA1227556A (en) Data signal switching systems
EP0588104A2 (en) Multipath torus switching apparatus
JPH0216628B2 (en)
US4439826A (en) Diagnostic system for a distributed control switching network
JPH03182138A (en) Multistage changing network and switch
AU687559B2 (en) Time switch system
JPH05153676A (en) Time division channel system
JPH0662034A (en) Link selection system
JPS6230444A (en) Communication network control system
JPH02190055A (en) Basic switch, and frame switch using it and information processing system including the same frame switch

Legal Events

Date Code Title Description
AS Assignment

Owner name: BELL TELEPHONE LABORATORIES, INCORPORATED, 600 MOU

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST.;ASSIGNOR:LEA, CHIN-TAU A.;REEL/FRAME:004320/0254

Effective date: 19840920

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 12