A method of data propagation in a telecommunications network and a network device for a telecommunications network
The present invention relates to a telecommunications network and a device for a telecommunications network, such as, for example, a switch. The invention particularly, but not exclusively, relates to a telecommunications network which reduces latency in data communications, such as Voice Over IP (VoIP).
Present telecommunications networks, such as the Internet and telephone communications systems, comprise both packet switched parts and circuit switched parts.
Packet switched networks are extremely flexible in that there are numerous routes that a packet may take to reach its destination. In fact, sequential packets of a particular piece of data may arrive out of sequence due to taking different routes to arrive
at the destination. Internet protocols, such as TCP/IP, reorder any out of sequence packets and send a request for any packets which are missing. A packet traversing a packet switched network makes a number of 'hops' as it passes between nodes of the network. At each node, the device, such as a router, which is propagating the packet decides, based on its current knowledge of the network, the next hop that the packet should take. As the device which is propagating the packet has to read the packet header and make a decision as to where the packet should be sent, there is an inherent latent period from when the packet arrives to when it is sent to the next node in the network. This is especially true when the device is particularly busy, as the time that the packet remains with the device depends on the time that the processor of the device takes to process the packet. As such, the latency associated with a device at a particular node in a packet switching network varies depending on the amount of traffic it is currently processing. Routing of packets is defined in the Open Systems Interconnection reference model as "layer 3" or the network layer.
Circuit switched networks are typically used for the likes of telephone calls. A circuit switched network will generate a dedicated path for a single connection between two end points. Data travelling in a circuit switched network may still pass through a switch device but, as the path has already been chosen, no decision is required at each network node
as to the next destination device for the data. As such, the data is simply received and forwarded in the hardware layer of the switch device. Data in a circuit switched network is forwarded at what the Open Systems Interconnection reference model defines as "layer 1" or the physical layer.
As the circuit switched network forwards data in the physical layer, the latent period is not dependent on the delays inherent in packet routing processes.
Known telephony circuit switches comprise serial line interfaces, register sets, custom logic and a control interface. A common manner of working involves reading in an 8 bit cell of information from a serial input line, storing this cell in a temporary register, and then switching the contents of the register to a particular output serial line at a particular timing position relative to the start of a fixed 125 microsecond frame period. This is the basis of Tier 1 telephony switched circuits as implemented globally, and is often referred to as Time Slot Interchange switching, since the input stream of 8 bit cells can be switched to a different time slot on any output serial data line. Tier 1 telephony relies on the propagation of both serial clock and frame signals from switch to switch.
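By way of background illustration only, the following sketch models such a Time Slot Interchange stage in C; the 32 slot frame size and the reversed connection map are hypothetical choices made for brevity and are not taken from any particular switch.

    #include <stdint.h>
    #include <stdio.h>

    #define SLOTS_PER_FRAME 32   /* illustrative; real T1/E1 frames differ */

    /* One 125 microsecond frame: one 8 bit cell per time slot. */
    typedef struct { uint8_t slot[SLOTS_PER_FRAME]; } tsi_frame_t;

    /* Connection map: output slot i is fed from input slot map[i]. */
    typedef struct { uint8_t map[SLOTS_PER_FRAME]; } tsi_map_t;

    /* Classic Time Slot Interchange: buffer the whole input frame, then emit
     * each 8 bit cell on the output line in the time slot chosen by the map. */
    static void tsi_switch(const tsi_frame_t *in, const tsi_map_t *cm,
                           tsi_frame_t *out)
    {
        for (int i = 0; i < SLOTS_PER_FRAME; i++)
            out->slot[i] = in->slot[cm->map[i]];
    }

    int main(void)
    {
        tsi_frame_t in, out;
        tsi_map_t cm;

        for (int i = 0; i < SLOTS_PER_FRAME; i++) {
            in.slot[i] = (uint8_t)i;                          /* dummy payload */
            cm.map[i]  = (uint8_t)(SLOTS_PER_FRAME - 1 - i);  /* reverse slots */
        }
        tsi_switch(&in, &cm, &out);
        printf("output slot 0 carries input slot %u\n", (unsigned)out.slot[0]);
        return 0;
    }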
In the situation where telephone or video calls are carried over packet switched networks, such as in the case of VoIP, latency issues may impact upon call quality and compromise quality of service.
According to a first aspect of the present invention there is provided a method of propagating data across a telecommunications network comprising the steps of: i) establishing a path from a source node to a destination node via zero or more intermediate nodes, each node deciding on the next node in the route to the destination node using routing table knowledge and media allocation knowledge; ii) transmitting the data to the destination node along the established path by means of the physical layer of the nodes.
Optionally, the method may comprise determining the quality of service required by the incoming data and, where a high quality of service is determined, completing steps (i) and (ii).
Preferably, step (i) comprises propagating a packet switched device management link between adjacent nodes using a small fraction of the communication means capacity available.
Preferably, step (i) further comprises reserving communication means and switching resources on each intermediate node on the established path for the transmission of the data.
Preferably, where identical data requires propagation from the source node to two different destination nodes, the data need only be propagated once along the common parts of the established path, being duplicated at the appropriate branch nodes.
The method according to the first aspect of the invention changes the nature of network level latency from probabilistic, as in packet routing networks, to deterministic, as in switched circuit networks. In addition, the method makes use of Layer 1 transmission and switching. In other words, data flows do not traverse shared routing resources, which places the onus of telecommunication security at the end points of the network and renders intermediate nodes impervious to attack by hackers. This leads to a significant increase in network security. Also, the method raises the so-called end-to-endedness property of the network to the level characteristic of a circuit switched network. What is more, the method relies upon hardware for providing guaranteed and consistent levels of quality of service and, as such, is able to provision service suited to, for example, VoIP at global scale without the variable quality of service common in today's Internet telephony services. The method delivers a ten-fold increase in efficiency in transporting data, which provides a foundation for disrupting global telecommunications services and displacing legacy operators with their sunk investment in telecommunications switches, links and routers.
According to a second aspect of the present invention there is provided a telecommunications network device comprising: communication means for connecting to one or more other network devices within the telecommunications network; management data propagation means for receiving and propagating management instructions across the communication means between network nodes; management data control means for interpreting management instructions and altering device settings; physical layer switching means for propagating payload data as a network switch; wherein the device is enabled to send management data in a packet switched mode and payload data in a circuit switched mode or a packet switched mode across a network of such nodes.
Preferably, the communication means receives and transmits both the management data and payload data with a flexible allocation of bandwidth between the two types of data.
Preferably, the device is enabled to automatically configure itself within a telecommunications network according to the IPv6 internet protocols for setting IPv6 addresses and then discovering IPv6 routing table knowledge.
Preferably, the device is configurable to interface with both packet switch and circuit switch network devices.
Preferably, the physical layer switching means deserialises incoming data.
Preferably, the physical layer switching means comprises an input serialiser/deserialiser buffer, a frame buffer pool, and an output serialiser/deserialiser buffer to serialise the outgoing data.
Preferably, management data control means can reserve device resources for a particular data stream.
Preferably, resources are reserved on the device by placing a setting into a media allocation table which is maintained at both ends of each communication means.
Preferably, the resources which may be reserved on the device are one or more of the following: media capacity; concurrent read processors; concurrent write processors; and frame buffer pool capacity.
Preferably, entries in the media allocation table are logged, enabling accurate recording of the resource usage of a particular user, including timestamps.
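Purely by way of illustration, a media allocation table entry of the kind referred to above might be represented as follows; the field names, units and logging routine are assumptions introduced here and are not features recited above.

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* Hypothetical media allocation table entry, mirrored at both ends of a
     * communication means.  Field names and units are illustrative only. */
    typedef struct {
        uint32_t stream_id;            /* distinct flow being provisioned      */
        uint32_t media_capacity_kbps;  /* reserved share of the media link     */
        uint16_t read_processors;      /* concurrent read processors reserved  */
        uint16_t write_processors;     /* concurrent write processors reserved */
        uint32_t frame_buffers;        /* frame buffer pool capacity reserved  */
        time_t   reserved_at;          /* timestamp logged for usage records   */
        time_t   released_at;          /* zero while the reservation is active */
    } media_alloc_entry_t;

    /* Log an entry so that resource usage of a particular user can be
     * accounted for, including timestamps. */
    static void log_entry(const media_alloc_entry_t *e)
    {
        printf("stream %u: %u kbit/s, %u frame buffers, reserved at %ld\n",
               (unsigned)e->stream_id, (unsigned)e->media_capacity_kbps,
               (unsigned)e->frame_buffers, (long)e->reserved_at);
    }

    int main(void)
    {
        media_alloc_entry_t e = { .stream_id = 7, .media_capacity_kbps = 2048,
                                  .read_processors = 1, .write_processors = 1,
                                  .frame_buffers = 4, .reserved_at = time(NULL) };
        log_entry(&e);
        return 0;
    }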
According to a third aspect of the present invention there is provided a telecommunications network
comprising a plurality of network devices, of which one or more of the network devices are a telecommunications network device according to the second aspect of the present invention.
According to a fourth aspect of the invention there is provided a data communication carried over a network according to the third aspect of the invention.
Embodiments of the present invention will now be described with reference to the accompanying drawings, in which:
Fig. 1 illustrates a flow diagram of a telecommunications network device according to the present invention;
Fig. 2 illustrates two network switches according to the present invention; and
Fig. 3 illustrates an embodiment of a telecommunications network according to the present invention.
Referring to Fig. 3, a telecommunications network C1 is shown having a plurality of nodes N1, N2, N3, N4 and N5. Furthermore, Voice Over IP (VoIP) clients V1, V2 are shown at each end of the plurality of nodes. For simplicity only the nodes required to link the two VoIP clients V1, V2 are shown. It
should be appreciated that there may be many other nodes within the telecommunications network.
In traditional packet switched networks, a VoIP call traverses network nodes as a series of packets and, as such, may take a variety of paths to reach the destination. This means that packets may arrive out of sequence. In particular, this can occur when a router has a large amount of traffic, forcing it to discard packets. The TCP/IP protocol handles discarded packets by requesting that they be sent again. However, in the case of time dependent data transmission, such as a VoIP call, packets arriving out of sequence are often of no value as their time slot has already passed. Even if a router in a packet switched network does not discard any packets, the latency associated with the routing processes is dependent on the amount of information the processor has to process. The processor must make a decision for each packet as to where the packet must be sent next: the packet is forwarded in the network layer (known as "layer 3" in the Open Systems Interconnection reference model).
In a circuit switched network the path is already established and, as such, the destination of the data packets does not have to be assessed by any intermediary nodes. The data is simply forwarded at the physical layer (known as "layer 1" in the Open Systems Interconnection reference model).
In a telecommunications network according to the present invention, and as shown in Fig. 3, packet based transmission is used only for management of the call, as shown by a management communication link L1. The actual data transmission takes place using circuit switched transmission over the same physical communication link L2.
For example, a VoIP call initiated by VoIP client V1 is intended for VoIP client V2. The VoIP client V1 initiates the call to node N1 of the network C1. Node N1 then uses the management communication link L1 to establish the next node towards the VoIP client V2, which is node N2. The management information sent to node N2 includes a request to reserve resources of that node for the data transmission. The management of the call continues to propagate through the network until it reaches the VoIP client V2, each time reserving appropriate resources for the data transmission. A return path can then be instigated in a similar manner. However, the return path does not have to take the same route as the outward path. In Fig. 3, L1 and L2 may occupy the same or separate communication links.
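A minimal sketch of this hop-by-hop reservation sequence is given below; the node structure, the free circuit counter and the reserve function are hypothetical stand-ins for the management exchange carried over link L1.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical per-node state: how many further circuits the node can
     * still switch at the physical layer. */
    typedef struct {
        const char *name;
        int free_circuits;
    } node_t;

    /* Reserve resources on one node, as requested over the management link. */
    static bool reserve(node_t *n)
    {
        if (n->free_circuits == 0)
            return false;
        n->free_circuits--;
        printf("reserved a circuit on %s\n", n->name);
        return true;
    }

    /* Walk the path from V1 towards V2, reserving resources on each node in
     * turn.  On failure the call setup would be rejected (tear-down of any
     * partial reservation is omitted here). */
    static bool setup_path(node_t *path, int len)
    {
        for (int i = 0; i < len; i++)
            if (!reserve(&path[i]))
                return false;
        return true;
    }

    int main(void)
    {
        node_t path[] = { { "N1", 4 }, { "N2", 2 } };
        if (setup_path(path, 2))
            printf("path established: payload is now switched at layer 1\n");
        return 0;
    }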
Data transmission can then be started in a circuit switched manner, as each node has been configured by the management information in the management communication link to act as a switch for the data and forward the information entirely in the physical layer. As such, data of the VoIP call can be
transmitted without the increased and variable latency associated with packet routing.
A telecommunications network such as this has the advantage of the flexible routing traditionally associated with a packet switched network along with the low network level latency of a circuit switched network.
In addition, the telecommunications network of the present invention can also be used in multicasting situations. In traditional packet based multicasting, the source server must send a data stream for each user request. However, the management communication network simply detects where requests are for the same information and duplicates the required data at the appropriate node. For example, a server distributing a 5MB/s IPTV multicast to 1 million users would traditionally be required to distribute at 5TB/s. However, using the telecommunication network of the present invention, the server could simply provide the IPTV multicast at 5MB/s and the telecommunication network duplicates the data transmissions at the appropriate nodes. The management communication network provides the appropriate nodes with management instructions where stream bifurcation is required.
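The bandwidth comparison above reduces to a few lines of arithmetic, shown below; the only assumption is the split between conventional unicast distribution and duplication at the branch nodes.

    #include <stdio.h>

    int main(void)
    {
        const double stream_mbytes_per_s = 5.0;   /* one IPTV stream */
        const double users = 1e6;                 /* subscribers     */

        /* Conventional unicast: the source repeats the stream once per user. */
        double unicast_load = stream_mbytes_per_s * users;        /* MB/s */

        /* Physical level multicast: the source sends once and branch nodes
         * duplicate the bit stream, so the source load stays constant.     */
        double multicast_load = stream_mbytes_per_s;              /* MB/s */

        printf("unicast source load:   %.0f MB/s (%.0f TB/s)\n",
               unicast_load, unicast_load / 1e6);
        printf("multicast source load: %.0f MB/s\n", multicast_load);
        return 0;
    }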
The present invention makes use of common computer memory and highly concurrent hardware processes, thus enabling far greater scalability than is possible with known Time Slot Interchange switches. The present invention eliminates the legacy of switching data in 8 bit cells by allowing any number of bits to be switched, depending on the switch configuration.
Referring to Fig. 1, a telecommunications network device, which in this instance is a network switch, is shown.
Detailed process description - data ingress
Data is written to the input serialiser/deserialiser (serdes) buffer M1 by process P1. In this example, the input serdes buffer M1 is implemented in dual port RAM and its size is configurable. Process P1 reads 32 bits of data at a time from the input serdes bus B1 and writes all 32 bits to the next available free memory location in the input serdes buffer M1. The read timing of process P1 must remain synchronised to the input serdes bus B1 clock rate. During the switch initialisation processes, an optimum physical frame length is negotiated for each media link by process P1. For example, this frame length could be 65536 bits long. Process P1 uses a semaphore to indicate that an entire frame of data has been written to the input serdes buffer M1, and subsequently continues with reading the next 32 bits from the input serdes bus B1 and writing them to the next unused location of the input buffer M1.
Process P2 waits on the above semaphore and, as soon as a frame of data is in the input serdes buffer M1, it reads the frame from the input serdes buffer and writes it in its entirety to the first unused location in the frame buffer pool M2. The frame buffer pool M2 is a shared pool of complete frames that is made accessible to a plurality of data read processes such as Process P3 and Process P5. Process P2 then resets the above said semaphore, indicating to P1 that the frame has been read by P2 and that the memory locations used by that frame can now be overwritten by new incoming data. Buffer read and write address pointers are initialised and managed as is common in the art. In the case where these pointers attempt to overlap each other, an error condition may be indicated.
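A software-level sketch of the P1/P2 handshake described above is given below; the single frame buffer, the stubbed bus read and the plain integer semaphore are simplifications standing in for the dual port RAM and hardware signalling.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define FRAME_BITS      65536                /* negotiated physical frame length */
    #define WORDS_PER_FRAME (FRAME_BITS / 32)    /* P1 moves 32 bits at a time       */
    #define POOL_FRAMES     8                    /* illustrative pool size           */

    /* Stub for the input serdes bus B1; in hardware this read is clocked at
     * the media line rate. */
    static uint32_t serdes_bus_read32(void) { static uint32_t c; return c++; }

    static uint32_t M1[WORDS_PER_FRAME];              /* input serdes buffer M1 */
    static uint32_t M2[POOL_FRAMES][WORDS_PER_FRAME]; /* frame buffer pool M2   */
    static int frame_ready;                      /* semaphore set by P1, reset by P2 */
    static int pool_next;                        /* first unused frame in M2         */

    /* Process P1: read 32 bits at a time from B1 and write them to the next
     * free location of M1; set the semaphore once a whole frame is in. */
    static void process_P1(void)
    {
        for (int w = 0; w < WORDS_PER_FRAME; w++)
            M1[w] = serdes_bus_read32();
        frame_ready = 1;                         /* entire frame written         */
    }

    /* Process P2: wait on the semaphore, copy the frame to the first unused
     * location of M2, then reset the semaphore so P1 may overwrite M1. */
    static void process_P2(void)
    {
        while (!frame_ready)
            ;                                    /* in hardware: a blocking wait */
        memcpy(M2[pool_next], M1, sizeof M1);
        pool_next = (pool_next + 1) % POOL_FRAMES;
        frame_ready = 0;                         /* M1 may now be overwritten    */
    }

    int main(void)
    {
        process_P1();                            /* single threaded demonstration */
        process_P2();
        printf("first word of the pooled frame: %u\n", (unsigned)M2[0][0]);
        return 0;
    }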
In this manner all incoming serial data is parallelised by the serdes and written to the frame buffer pool M2. Without any processes active in reading and switching the contents of the frame buffer pool M2, the switch would become blocked as soon as the frame buffer pool became full.
This network switch architecture preferably makes use of dual port RAM in order to isolate the tight timing requirements involved in reading and writing data to the serialiser/deserialisers (serdes) from the less time critical task of performing the actual switching.
Detailed process description - egress of switched data
In this case, data will be switched through the switch and will be turned from parallel data in the frame buffer pool M2 into serial data and placed onto the desired output media link. This can emulate known Tier 1 telephony switching and provides the means to interface to all legacy Tier 1 telephony switched circuit networks. However, it should be appreciated that alternative protocols to that of Tier 1 can also be implemented. Process P3 refers to a preset bitmap in order to determine which bits should be picked out of the frame buffer pool M2. There should be one bitmap per output serdes buffer M3. The content of the bitmap is set up by means of network switch management software. The data bits picked up by Process P3 are concatenated and written to the output serdes buffer M3. When an entire frame of data has been written, Process P3 will set a semaphore in order to indicate to the later process P4 that a frame has been completed. Process P3 should be implemented in hardware in order that many P3 type processes can work concurrently. The number of P3 processes will be determined by the number of output serdes buffers as well as the number of distinct flows being switched through the switch. A distinct flow can be defined as all data belonging to a single, distinct telecommunications requirement between two endpoints in a network in one direction. The amount of switch processing which can be done in
this time bounded window is determined by the amount of concurrent hardware resource available. Highly concurrent hardware processes can easily be leveraged in order to significantly increase the switch bandwidth at minimal cost and low complexity. Process P4 will read the contents of the output serdes buffer M3 and write them synchronously to the output serdes bus B2.
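The bitmap-driven extraction performed by Process P3 and the synchronous drain performed by Process P4 can be sketched as follows; the toy frame length, the one-bit-per-byte representation and the use of standard output in place of the bus B2 are simplifying assumptions.

    #include <stdint.h>
    #include <stdio.h>

    #define FRAME_BITS 64          /* toy frame; the real length is negotiated */

    static uint8_t M2_frame[FRAME_BITS]; /* one pooled frame, one bit per byte    */
    static uint8_t bitmap[FRAME_BITS];   /* per-output bitmap set by management
                                            software: 1 = bit belongs to the flow */
    static uint8_t M3[FRAME_BITS];       /* output serdes buffer (concatenated)   */
    static int M3_len;                   /* number of bits written to M3          */
    static int frame_done;               /* semaphore telling P4 a frame is ready */

    /* Process P3: pick the bits marked in the bitmap out of the frame,
     * concatenate them and write them to the output serdes buffer M3. */
    static void process_P3(void)
    {
        M3_len = 0;
        for (int b = 0; b < FRAME_BITS; b++)
            if (bitmap[b])
                M3[M3_len++] = M2_frame[b];
        frame_done = 1;                  /* indicate a completed frame to P4   */
    }

    /* Process P4: drain M3 synchronously onto the output serdes bus B2
     * (stubbed here as standard output). */
    static void process_P4(void)
    {
        if (!frame_done)
            return;
        for (int b = 0; b < M3_len; b++)
            putchar(M3[b] ? '1' : '0');
        putchar('\n');
        frame_done = 0;
    }

    int main(void)
    {
        for (int b = 0; b < FRAME_BITS; b++) {
            M2_frame[b] = (uint8_t)(b & 1);      /* dummy payload pattern       */
            bitmap[b]   = (uint8_t)(b % 4 == 0); /* every fourth bit is ours    */
        }
        process_P3();
        process_P4();
        return 0;
    }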
Detailed process description - egress of packetised data
In this case, data will be switched through the switch and will be extracted from the frame buffer pool M2 by a process P5, which refers to the same bitmaps described above and writes the data extracted from the frame buffer pool to Content Addressable Memory (CAM) M4. This is the only part of the switch which has any knowledge of higher level data structures, for example, packets, as commonly used in packet networks such as but not limited to Transmission Control Protocol / Internet Protocol. Process P5 does not have this knowledge and simply assembles the bits marked by the bitmap and writes them to the CAM M4. Process P7 has knowledge of packet boundaries and data integrity and will only forward intact packets to the packet interface. Spoiled packets are discarded by process P7.
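The division of labour between P5 and P7 might be illustrated as below; the packet framing and checksum used here are placeholders, chosen only to show that P5 assembles data without packet knowledge while P7 checks integrity before forwarding.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    #define CAM_MAX 256

    /* Bytes assembled by P5 from the frame buffer pool into the CAM M4.
     * P5 has no knowledge of packet structure; it simply deposits data. */
    static uint8_t M4[CAM_MAX];
    static size_t  M4_len;

    static void process_P5_deposit(const uint8_t *data, size_t n)
    {
        if (M4_len + n > CAM_MAX)
            return;                              /* toy overflow guard          */
        memcpy(M4 + M4_len, data, n);            /* assemble marked bits in CAM */
        M4_len += n;
    }

    /* Process P7 understands packet boundaries and integrity.  Here a packet
     * is [length][payload...][xor checksum]; this framing is illustrative. */
    static int process_P7_forward(void)
    {
        if (M4_len < 2)
            return 0;                            /* no complete packet yet      */
        size_t len = M4[0];
        if (M4_len < len + 2)
            return 0;
        uint8_t sum = 0;
        for (size_t i = 1; i <= len; i++)
            sum ^= M4[i];
        M4_len = 0;                              /* packet consumed either way  */
        if (sum != M4[len + 1])
            return -1;                           /* spoiled packet: discarded   */
        printf("intact %zu byte packet forwarded to the packet interface\n", len);
        return 1;
    }

    int main(void)
    {
        uint8_t pkt[] = { 3, 'a', 'b', 'c', (uint8_t)('a' ^ 'b' ^ 'c') };
        process_P5_deposit(pkt, sizeof pkt);
        process_P7_forward();
        return 0;
    }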
Detailed process description - ingress of packetised data
In the reverse sequence, packets coming into the packet interface will be written to the CAM M4 by process P7. Process P6 will read this data from the CAM M4, dividing the data into frame boundaries and writing the data to the frame buffer pool M2. A packet to frame bitmap is used by process P6 in order to make sure the bits belonging to the incoming packet are written to the desired frame locations in the frame buffer pool M2. This packet to frame bitmap (not shown in Fig. 1) is set up by switch management software.
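The reverse path can be sketched in the same style; reducing the packet-to-frame bitmap to a list of frame bit positions is a simplification made for brevity.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    #define FRAME_BITS 64

    static uint8_t M2_frame[FRAME_BITS]; /* destination frame in the pool,
                                            one bit per byte for clarity     */

    /* Packet-to-frame bitmap set up by switch management software: the i-th
     * payload bit of the packet goes to frame position slots[i].  Reducing
     * the bitmap to an index list is a simplification. */
    static const int slots[] = { 0, 4, 8, 12, 16, 20, 24, 28 };

    /* Process P6: read packet bits delivered by P7 via the CAM M4 (here just
     * an array) and write them to the desired frame locations in M2. */
    static void process_P6(const uint8_t *packet_bits, size_t nbits)
    {
        size_t limit = sizeof slots / sizeof slots[0];
        for (size_t i = 0; i < nbits && i < limit; i++)
            M2_frame[slots[i]] = packet_bits[i];
    }

    int main(void)
    {
        uint8_t packet_bits[8] = { 1, 0, 1, 1, 0, 0, 1, 0 };
        process_P6(packet_bits, 8);
        printf("frame bit 8 now carries packet bit 2: %u\n",
               (unsigned)M2_frame[8]);
        return 0;
    }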
Any number of bits can be used to transport packets, with the single constraint that where the packet flow demands any quality of service level above "best effort", sufficient bits should be assigned to the packet flow such that the egress data rate matches or exceeds the ingress data rate of the packets, measured in bits per second. This provides the means to connect the present invention to all packet based legacy network infrastructure. As described above, process P3 and process P4 will ensure that the frame buffer pool M2 is read and the data written to the appropriate output serdes bus B2.
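The rate matching constraint reduces to a short calculation; the frame length, line rate and ingress rate below are assumed figures chosen only to show the arithmetic.

    #include <stdio.h>

    int main(void)
    {
        /* Assumed figures: a 65536 bit physical frame on a 1 Gbit/s media
         * link carrying a packet flow arriving at 10 Mbit/s that needs
         * better than "best effort" service. */
        const double frame_bits    = 65536.0;
        const double line_rate_bps = 1e9;
        const double ingress_bps   = 10e6;

        double frame_period_s = frame_bits / line_rate_bps;    /* ~65.5 us  */
        double bits_needed    = ingress_bps * frame_period_s;  /* per frame */

        printf("assign at least %.1f bits of each %.0f bit frame "
               "so that the egress rate matches or exceeds the ingress rate\n",
               bits_needed, frame_bits);   /* ~656 bits once rounded up */
        return 0;
    }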
Idle and active link state is maintained by process P5 and process P3. This allows the switch to remain non-blocking in the case where a particular data source is not ready in time for switching.
Detailed process description - switch management
The state of the switch is controlled by management software. This software is capable of taking commands from an operator or a policy server and reserving switching resources such as media capacity, concurrent read processors and frame buffer pool M2 capacity. When an operator wishes to set up a path through a switch, commands are given by the operator to the switch, for example, through a console. The management software reserves the requested resources, resulting in a dedicated path through the switched network.
Alternatively, the management software is able to automatically reserve resources and set up the switch according to information contained in the headers of packetised data streams. For example, in common packet networking, there may be a provision for a Type of Service (TOS) setting. If the TOS value requests a high quality of service then the management software will request the building of links between switches using dedicated media. If the TOS value does not request a high quality of service, then the management software would place the traffic onto a "best effort" shared media link.
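A sketch of this decision is given below; reading the TOS byte as a Differentiated Services code point and the two placeholder actions are assumptions about how the management software might be realised.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Placeholder actions standing in for the management software. */
    static void build_dedicated_path(void) { puts("reserving dedicated media link"); }
    static void use_best_effort_link(void) { puts("placing traffic on shared best effort link"); }

    /* Decide from the TOS / traffic class byte whether the flow has asked
     * for a high quality of service.  Treating any non-zero DSCP as "high"
     * is an assumption made for illustration only. */
    static bool requests_high_qos(uint8_t tos)
    {
        uint8_t dscp = tos >> 2;   /* the upper six bits carry the DSCP value */
        return dscp != 0;
    }

    int main(void)
    {
        uint8_t tos = 0xB8;        /* DSCP 46 (Expedited Forwarding), typical for VoIP */
        if (requests_high_qos(tos))
            build_dedicated_path();
        else
            use_best_effort_link();
        return 0;
    }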
The management software interfaces to the switch memory and processes using a number of bitmaps. By setting the values of these bitmaps, the management software configures the switch.
The management software is capable of forwarding a resource reservation request to any number of
downstream switches. Typically these requests would propagate downstream until the last such switch in the operator's network is reached. Traffic would then leave the network en route to its destination. A short while later, the end user device would send reply traffic. When this reply traffic reaches the first switch in the operator's network, the management software would build an appropriate path all the way back to the switch that made the first request. Note that the outbound and return paths do not necessarily follow the same route through the network.
The management software relies on an integral Internet Protocol management router in order to communicate with adjacent nodes. This router does not route any user data; it only routes management packets over low bandwidth links that are configured during media initialisation. The routing table information is, however, used in order to decide how to request the building of the best path for data across the circuit switched media. In this manner the best features of packet switching are integrated with the best features of circuit switching.
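One way in which routing table knowledge and media allocation knowledge might be combined when requesting the next leg of a path is sketched below; the table layout and the first-fit selection rule are assumptions rather than details of the invention.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical combination of routing knowledge (which neighbour leads
     * towards the destination) and media allocation knowledge (which link
     * still has spare circuit switched capacity). */
    typedef struct {
        const char *destination_prefix;   /* from the IP management router      */
        const char *next_hop;             /* adjacent node over a media link    */
        int         spare_capacity_kbps;  /* from the media allocation table    */
    } candidate_t;

    static const char *choose_next_hop(const candidate_t *c, int n,
                                       const char *dest, int needed_kbps)
    {
        for (int i = 0; i < n; i++)
            if (strncmp(dest, c[i].destination_prefix,
                        strlen(c[i].destination_prefix)) == 0 &&
                c[i].spare_capacity_kbps >= needed_kbps)
                return c[i].next_hop;     /* first neighbour that both routes to
                                             the destination and can carry it   */
        return NULL;
    }

    int main(void)
    {
        candidate_t table[] = {
            { "2001:db8:1", "N2", 0    },   /* routes there but the link is full */
            { "2001:db8:1", "N3", 4096 },   /* routes there with spare capacity  */
        };
        const char *hop = choose_next_hop(table, 2, "2001:db8:1::42", 2048);
        printf("next hop for the circuit: %s\n", hop ? hop : "none (reject the call)");
        return 0;
    }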
Detailed process description - multicasting
The switch can perform physical level multicasting. In this case two or more destination paths are set up for a single incoming bit stream. This could be of use in distributing video far more efficiently
than is otherwise common practice. If the incoming bitstream is not packetised, then the switch is capable of packetising the data at the point at which it leaves the operator's network. If the incoming bitstream is packetised, then the switch is capable of rewriting the header information at the point at which it leaves the operator's network.
The relatively high efficiency of this multicasting scheme arises from the fact that a single bitstream, for example, a composite video feed, replaces many individual streams, each addressed to a particular network address.
The invention disclosed allows each single media link attached between switches to establish its own synchronous domain, the scope of which does not extend beyond the media link and the input and output serdes buffers at each respective end of the said link. This has the advantage of increasing the stability of a network constructed of such switch nodes, since the isolation provided by the loose frame level synchronisation between input media and output media minimises the amplification effect and eliminates the coupling effect common in complex, tightly synchronised networks such as SONET/SDH.
The invention disclosed allows a seamless gateway between packet switched networks and circuit switched networks to be implemented simply by connecting both methods of working.
Referring to Fig. 2, two switches SW1, SW2 are shown connected to each other by one or more media links ML1, ML2. Each media link is full duplex and may comprise one or more physical wires, fibre optic cables or wireless stages. In Fig. 2 each media link, using ML1 as an example, plus the input and output buffers B1, B2 at each end of the said link, comprises a tightly synchronised timing domain, which is shown inside the dotted line. The large number of processes P1 indicated inside switch SW2 are loosely synchronised and thus break the tight synchronisation between respectively connected switches such as SW1 and SW2. The information traversing the media is clocked into and out of buffers B1 and B2 in a manner synchronised to the media line speed measured in bits per second. These buffers are in turn processed at a rate that is not synchronised to the media line speed. This eliminates the onerous tight timing and synchronisation requirements common to Tier 1 telephony networks such as SONET/SDH. The nodes can connect packet switch and circuit switch networks because they manage their internal timing and synchronisation requirements in a manner compatible with both technologies.