
US20040156371A1 - High speed parser - Google Patents

High speed parser

Info

Publication number
US20040156371A1
Authority
US
United States
Prior art keywords: output, cam, parser, search, bit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US10/361,752
Other versions
US7233597B2 (en)
Inventor
Pamela Kumar
Cyril Chemparathy
Mohit Sharma
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Texas Instruments Inc
Original Assignee
Texas Instruments Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Texas Instruments Inc filed Critical Texas Instruments Inc
Priority to US10/361,752 (granted as US7233597B2)
Assigned to TEXAS INSTRUMENTS INCORPORATED reassignment TEXAS INSTRUMENTS INCORPORATED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHEMPARATHY, CYRIL JOHN, KUMAR, PAMELA, SHARMA, MOHIT, TEXAS INSTRUMENTS (INDIA) PRIVATE LIMITED
Publication of US20040156371A1
Application granted
Publication of US7233597B2
Legal status: Active
Adjusted expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00: Routing or path finding of packets in data switching networks
    • H04L45/74: Address processing for routing
    • H04L45/745: Address table lookup; Address filtering
    • H04L45/7453: Address table lookup; Address filtering using hashing
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00: Packet switching elements
    • H04L49/90: Buffering arrangements

Definitions

  • the present invention relates to integrated circuits, and more specifically to a method and apparatus for quickly parsing a packet.
  • Parsers are generally used to determine data portions of interest in a sequence of bits. Typically, the bits are examined according to a pre-specified convention (protocol) to determine the data portions of interest. For example, a parser may be used in a router (example of a device) to determine data portions such as destination address and type of service parameters in a received IP packet as is well known in the relevant arts.
  • a device may implement a parser substantially in the form of software instructions in which the bits are generally examined sequentially according to a protocol to determine the specific bits of interest. For example, when parsing an IP packet, a parser may first examine the IP version number (e.g., version 4 or 6) and a header length to determine the specific bit positions at which a destination port number of interest is present.
  • One problem with such software implementations is that it may take a long time (or a large number of clock cycles) to parse data bits. Such long duration for parsing may be unacceptable in many environments. For example, with high speed technologies such as those based on optical fibre, packets may be received at a high frequency and it may be necessary to parse packets quickly (e.g., at “wire-speed”). Otherwise, the long parsing times may impede the throughput performance of a device processing the packets and the resulting long parsing times may be undesirable.
  • a parser receives input data according to a packet format, and generates data units of interest on corresponding pre-specified paths.
  • the data units may be generated quickly even if the bit positions of the data units of interest can vary according to the protocol.
  • the parser contains a multiplexer and a content addressable memory (CAM) which controls the multiplexer as an N-to-1 selector, the details of which are described below.
  • the multiplexer receives multiple inputs (contained within input data) and selects one of the inputs to generate a multiplexer output according to a select value.
  • the content addressable memory (CAM) may be programmed with multiple search rules and associated output values to cause the appropriate select value to be provided to the multiplexer.
  • the search rule and the associated output value are based on the bit positions at which the desired data unit is present in the input data according to the packet format.
  • the parser may generate data units of interest quickly on the corresponding pre-specified output paths.
  • An implementation of the parser is based on a ternary CAM in which each search rule is implemented as a compare data and a mask.
  • a search rule is determined to be a matching search rule if the input data matches the compare data at bit positions specified by the mask.
  • Multiple output values may be associated with each search rule, and the output values may be provided as select values for a corresponding number of multiplexers. Hence, each output value of a search rule corresponds to a desired output field from the high speed parser.
  • An embodiment of a ternary CAM is configured to parse a packet according to Internet Protocol (IP).
  • the search rules are designed to search according to a version number.
  • some of the data units (e.g., the destination port number of the following TCP/UDP header) may be dependent on a header length field.
  • each search rule may be designed to search for a specific header length value and generate the appropriate select values.
  • each CAM may contain multiple CAM units, with different CAM units being implemented using different CAM technologies (corresponding to the type of search operation required for different fields of the incoming stream of data).
  • a multiplexer may contain multiple cells driving a common output line through a switch controlled by the select value output from the multiplexer.
  • Each cell receives (and stores) a bit (“input bit”) of the multiple bits to be selected.
  • the output line is first driven to a first logical value, and only one of the cells drives the output line to a second logical value (based on the output generated by the multiplexer) if the corresponding input bit does not equal the first logical value.
  • the output line may first be driven to a logical value of 1, and a cell receiving a specific input may cause the output line to be driven to a logical value of 0 if the specific input equals 0.
  • the specific one of the cells driving the output line may be determined by a select value received by the multiplexer. The remaining cells may not affect the output line irrespective of the corresponding input bits.
  • Each cell may contain a pair of switches (e.g., transistors) coupled in series between the output line and a pre-specified voltage level (e.g., ground).
  • a first switch may be switched on according to a control signal (e.g., asserted according to select value) and a second switch may be switched on according to a bit stored by the cell.
  • the pair of switches causes the output line to be charged to a second logical level when the first bit has one logical value, but does not change the first logical level on the output line if the first bit has another logical level.
  • the output line may be shared by all the cells processing a corresponding bit of the different inputs, and only a control signal corresponding to only one of the multiple cells is set to one state and the control signal of the remaining ones of the multiple cells are set to another state according to the select value.
  • search rules may be stored into a CAM.
  • An incoming data string is compared with all the rules and select values are output based on the matching rule.
  • the input data string is also stored in the multiplexer.
  • the select value from the CAM-based rule table controls the multiplexer such that the appropriate field (or fields) of the input data is finally output from the multiplexer.
  • FIG. (FIG.) 1 is a block diagram illustrating an example environment in which the present invention can be implemented
  • FIG. 2 is a block diagram illustrating the embodiment of a router provided in accordance with the present invention.
  • FIG. 3 is a block diagram illustrating the internals of an embodiment of a ternary content addressable memory (CAM) provided in accordance with the present invention
  • FIG. 4 is a block diagram illustrating the details of an embodiment of a parser
  • FIG. 5 is a block diagram illustrating the manner in which input data may be compared with the search rules
  • FIG. 6 depicts a logical model of an embodiment of multiplexer which selects one of two inputs for a 4-bit field
  • FIG. 7 is a circuit diagram illustrating the details of cell 700 in an embodiment of the present invention.
  • FIG. 8 is a circuit diagram illustrating the charge and sense circuit in an embodiment of the present invention.
  • FIG. 9 is a block diagram illustrating the details of implementing CAM using pipelining techniques in an embodiment of the present invention.
  • a parser in accordance with the present invention may contain a content addressable memory (CAM) and multiple multiplexers.
  • Each multiplexer may provide one of the desired data units (e.g., bytes) on a corresponding output path.
  • Each multiplexer may be provided as inputs all the possible values from which the one input is selected and provided on the corresponding output path.
  • the content of the CAM may be configured such that the appropriate selection values are provided to the multiplexers for different combinations of input data to be parsed.
  • At least a portion of an input data, which determines the packet format, is provided as an input to the CAM. At least the portions of input data which may be selected by each multiplexer are provided as inputs to the multiplexer. The output(s) of the CAM may then be provided as selection values to the multiplexers, which causes the multiplexers to generate the desired data units on the corresponding output paths.
  • each desired data unit may be caused to be provided on a pre-specified communication path by appropriate design of the CAM and the multiplexers. Accordingly, the combined circuit operates as a high speed parser.
  • FIG. 1 is a block diagram illustrating an example environment in which the present invention can be implemented.
  • the environment is shown containing networks 110 , 130 and 140 , and router 120 .
  • Router 120 represents an example system in which the present invention can be implemented.
  • the invention can be implemented in other devices such as web cache servers, traffic sniffers, load balancing devices. Each system of FIG. 1 is described in further detail below.
  • Each of the network 110 , 130 and 140 may contain multiple machines and routers.
  • the machines may communicate with each other, and router 120 may provide the connectivity between machines in different networks.
  • Router 120 represents an example device in which the present invention can be implemented. For illustration, it is assumed that router 120 receives packets from one network at high speed and needs to quickly extract different fields of interest to process the packets at high speed (e.g., wire-speed). For example, some of the fields may be needed to search a routing table. Parsing generally refers to determining the location of the specific fields according to the packet format. Special challenges are presented if the packet format permits the position of the fields to change. The manner in which the present invention enables packets to be parsed quickly is described below with reference to FIG. 2.
  • FIG. 2 is a block diagram illustrating the details of an embodiment of router 120 (an example of a system) according to an aspect of the present invention.
  • Router 120 is described as receiving and processing TCP/IP packets encapsulated in Ethernet format for illustration.
  • router 120 can be designed to operate in several other environments, as will be apparent to one skilled in the relevant arts based on the disclosure provided herein.
  • Router 120 is shown containing layer 2 (L 2 ) lookup 210 , layer 3 (L 3 ) router 220 , layer 4 (L 4 ) packet classifier 230 , parser 240 , and forwarding engine 270 . Each component is described below in greater detail.
  • Forwarding engine 270 receives packets on input path 201 , and processes the packets according to various decisions made by layer 2 lookup 210 , layer 3 router 220 , layer 4 packet classifier 230 .
  • forwarding engines may operate as a layer- 2 bridge.
  • IP routing may be performed.
  • the processing may depend on layer- 4 determinations performed by layer 4 packet classifier 230 .
  • forwarding engine 270 depends on the type of packet to be processed, the medium/protocols using which packets are sent and received, and the different services (e.g., higher priority to HTTP packets) that may need to be provided to different packets depending on various inputs. Forwarding engine 270 may be implemented in a known way.
  • forwarding engine 270 represents an example processing block, and parsers generally need to be implemented to provide the information needed by the processing blocks to provide a desired utility.
  • processing blocks need to be designed taking into consideration the specific output path on which a corresponding desired data unit of interest will be provided.
  • Layer 2 lookup 210 may receive layer- 2 information (e.g., An Ethernet protocol type and Ethernet destination address) on path 211 , and determine the manner in which the corresponding packet is to be forwarded.
  • layer- 2 lookup 210 may indicate on path 217 that a received packet is simply to be bridged if the packet is not an IP packet.
  • Layer 2 lookup 210 may further indicate the specific port (not shown) on which the packet is to be forwarded.
  • forwarding engine 270 causes router 120 to operate as a bridge in a known way.
  • Layer 3 routing 220 may receive layer- 3 information (e.g., a destination address) and determine the specific port on which a corresponding packet is to be forwarded at least for some protocols (e.g., IP having a value of 0x800 in the Ethernet protocol type field).
  • the specific port to be forwarded may be determined according to a routing table, and the corresponding information may be communicated to forwarding engine 270 .
  • Layer 4 packet classifier 230 may receive layer- 4 information (e.g., port numbers in TCP/UDP packets) and determine the manner in which a corresponding packet is to be processed.
  • the layer- 4 information is used to implement fire-walls (or perform filtering operation, in general).
  • the layer- 4 information is used to determine whether a HTTP packet is to be forwarded to a local web cache (not shown). In response, forwarding engine 270 may process the packet accordingly.
  • Blocks 210 , 220 and 230 are described as operating on data of only a corresponding one of the layers for simplicity. However, blocks may be implemented to use data in multiple layers as may be necessary consistent with the design requirements. In addition, implementation of blocks examining other layers will also be apparent to one skilled in the relevant arts based on the disclosure provided herein. Such embodiments are contemplated to be within the scope and spirit of various aspects of the present invention.
  • Parser 240 parses the input data received on path 201 to provide each desired data unit on a corresponding output path according to a pre-specified convention.
  • the layer- 2 , layer- 3 and layer- 4 portions of information are provided on (pre-specified bit positions) paths 211 , 212 and 213 respectively. Parser 240 may need to provide the corresponding data units on the corresponding paths (outputs) quickly.
  • FIG. 3 is a block diagram illustrating the details of an embodiment of a ternary content addressable memory (TCAM) 300 .
  • a CAM generally refers to a memory in which the data is retrieved from a memory location according to a comparison of input data with the content of the memory.
  • CAMs are generally distinguished from random access memories (RAM) in that RAMs receive an address as an input and generate an output from the memory location specified by that address. The manner in which a TCAM can be used is described below.
  • TCAM 300 is configured with data and masks forming ‘search rules’.
  • The manner in which search rules can be implemented in the context of ternary CAMs is described in sections below with examples.
  • When input data is provided on input path 301, TCAM 300 generates on path 399 the output data stored in association with a search rule matching the input data, as described below in further detail.
  • TCAM 300 is shown containing search field 320 and output field 340 organized in the form of multiple rows 330 - 1 through 330 -N.
  • the search field in each row may in turn contain compare data 360 and associated mask 370 .
  • the compare data and mask in each row form a search rule.
  • the input data is compared with the compare data (on a bit by bit basis) in each row at the positions specified by the mask in the same row. If a match is detected at all the compared positions of a row, the data in the corresponding output field 340 is provided on path 399 .
  • the comparison of each row may be viewed as a search, and the comparison data along with the corresponding mask forms a search rule.
  • a CAM is described as being implemented using a single unit (TCAM 300 ), alternative embodiments may be implemented which use multiple units, with different units generating the byte positions for different fields. Each of the units may be implemented with different technology (ternary CAM, range match CAM, etc.) as is suited to parse the specific packet formats for the desired fields.
  • In general, the implementation of search rules depends on the specific type of features provided by the corresponding type of CAM unit.
  • The manner in which TCAMs can be used to parse a received packet is described below in further detail with reference to FIG. 4.
  • FIG. 4 is a block diagram illustrating the details of an embodiment of parser 400 .
  • Parser 400 is shown containing CAM 410 , output register 450 , and multiplexers 460 -A through 460 -Q. Only the representative components and interconnections are shown for clarity. The components are described below.
  • Output register 450 stores the output data received on path 415 , and provides the control values to multiplexers 460 -A through 460 -Q using the received data.
  • Output register 450 may be logically viewed as containing Q partitions (portions) 451 -A through 451 -Q, with each partition providing a control value (on paths 456 -A through 456 -Q) to the corresponding multiplexer.
  • The width of each partition (and thus the width of paths 456 -A through 456 -Q) generally depends on the number of inputs provided to a corresponding multiplexer.
  • each multiplexer receives all the 128 bits of parsed data as inputs, and thus the width of each partition equals 7 bits.
  • optimizations may be performed to determine the specific possible inputs from which each output (of multiplexer) needs to be selected, and multiplexers may be designed only with the required inputs. In such a situation, the width of each partition may be minimized.
  • Each of the multiplexers 460 -A through 460 -Q selects one of the corresponding connected inputs according to the corresponding control value 456 -A through 456 -Q respectively.
  • Multiplexers 460 -C, 460 -E and 460 -Q are respectively shown providing the selected data on output paths 469 -C, 469 -E, and 469 -Q, which may respectively be contained in paths 211 , 212 , and 213 of FIG. 2.
  • Each output path generates one of the desired fields, and the specific field selected by each multiplexer is determined by the corresponding control value received from output register 450 .
  • CAM 410 receives input data to be parsed on path 201 , and generates the control values (used for selection by multiplexers 460 -A through 460 -Q) on path 415 according to the stored values (configured search rules and associated output data/control values).
  • CAM 410 may also contain compare data field 460 , mask data field 470 and output field 440 , which respectively operate similarly to the corresponding fields 360 , 370 and 340 of FIG. 3.
  • the input data is compared with the bits in each data field only at positions specified by corresponding mask, and the value in the corresponding (same row) output field is provided as output to be stored in output register 450 .
  • Each value (in portions 451 -A through 451 -Q respectively) in output register 450 in turn controls the corresponding specific bit positions (or field) of data selected from the input data received on path 201 . For example, if partition 451 -C contains a value of 10, byte 10 of the input data may be selected on path 469 -C. Similarly, if partition 451 -Q contains a value of 76, byte 76 of the input data may be selected on path 469 -Q.
  • each path 469 -A through 469 -Q can be made to generate a corresponding desired pre-specified field in input data on path 201 by appropriate programming of CAM 410 , even if the positions of the fields can change according to a pre-specified protocol.
  • CAM 410 needs to be programmed consistent with the protocol format of input data to be parsed and the design of the multiplexers selecting the desired data units. The description is continued with reference to TCP/IP protocol format illustrating the general considerations in programming the CAM.
  • each data unit of interest may be present in specific bit positions or in different bit positions depending on the manner in which protocol is defined.
  • the destination IP address would be available in bytes 30 - 33 at least for version 4 IP.
  • the destination port number may start at byte positions 36 , 40 , 44 , 48 , . . . , 76 (in IP version 4) depending on the header length.
  • CAM 410 may be programmed with search rules and the associated output values to determine the correct byte positions for data units of interest in input data.
  • search rules for each data unit of interest need to be implemented taking into consideration various combinations of possible byte positions consistent with the protocol format.
  • IP protocol Version 4 and the destination address and port numbers described above
  • FIG. 5 depicts the content of CAM 410 illustrating the manner in which search rules can be implemented and the matching output can be generated according to an aspect of the present invention.
  • parser 400 needs to generate data representing the Ethernet protocol type, IP destination address, TCP source port number and TCP destination port number only.
  • the IP version and header length fields determine the specific byte locations of the source port number.
  • the source port number may be present in byte location 34 for a header length of 5. Accordingly, the input data needs to be searched according to the header length and version number to determine the specific byte position for the source port number.
  • the byte positions are fixed for an IP protocol type and destination address.
  • CAM 410 is shown containing several rows ( 530 -A through 530 -K) and columns ( 560 -A through 560 -X). Each row implements a search rule as described below with several examples.
  • Row 530 -A searches for packets of IP version 4 and having a value of 5 in the header length field since columns 560 -C and 560 -D respectively contain compare data values 0x0800 and 0x45, with the corresponding mask bits set to 1.
  • Output field 540 in row 530 -A is shown containing values 12 , 30 , 34 and 36 which respectively equal the byte position of Ethernet protocol type, IP destination address, TCP source port and TCP destination port in case of IP version 4 and header length value of 5.
  • In each row, column 560 -C contains the same value since all the corresponding searches are designed for the IP protocol. However, as the searches are designed for header lengths 5 through F, column 560 -D contains values ranging from 0x45 to 0x4F (the 4 representing IP version 4) in different rows.
  • the search rule specified by row 530 -J is matched and corresponding output field values ( 12 , 30 , 70 and 72 ) are forwarded as select values to the corresponding multiplexers 460 on output path 415 .
  • a select value indicates the specific one of the inputs the corresponding multiplexers needs to select.
  • the data units starting at byte positions 12 , 30 , 70 and 72 are selected (and provided as outputs of parser 400 ) reflecting the Ethernet protocol type, IP destination address, TCP source port and TCP destination port.
  • CAM 410 is described as providing selection values for fields which do not change position in a packet (e.g., Ethernet Type starts at byte number 12 counted starting from 0), alternative embodiments can be implemented with improvements which do not compare data related to such fixed fields. In such a case, the data values in the fields can potentially bypass both the parser and the multiplexer, and be provided directly to the subsequent blocks (e.g., 210 , 220 ) with appropriate buffering.
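  • By way of illustration, the byte positions in the output fields above can be derived directly from the IPv4 header-length nibble: with a 14-byte Ethernet header, the Ethernet protocol type is always at byte 12, the IP destination address always starts at byte 30, and the TCP source and destination ports start at bytes 14 + 4*IHL and 16 + 4*IHL. The following Python sketch (names and table layout are illustrative, not part of the patent) reproduces the eleven rows of FIG. 5 for header lengths 5 through 0xF.

```python
def build_ipv4_rows() -> list[dict]:
    """Reproduce the row contents of FIG. 5: one search rule per IPv4 header
    length, each outputting the byte positions of the Ethernet protocol type,
    IP destination address, TCP source port and TCP destination port."""
    rows = []
    for ihl in range(5, 16):                     # header lengths 5 .. 0xF
        rows.append({
            "ethertype_compare": 0x0800,         # column 560-C: IPv4 over Ethernet
            "version_ihl_compare": 0x40 | ihl,   # column 560-D: 0x45 .. 0x4F
            "outputs": (12, 30, 14 + 4 * ihl, 16 + 4 * ihl),
        })
    return rows

rows = build_ipv4_rows()
assert rows[0]["outputs"] == (12, 30, 34, 36)    # row 530-A, header length 5
assert rows[9]["outputs"] == (12, 30, 70, 72)    # row 530-J, header length 0xE
```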
  • parser 400 contains several multiplexers 460 -A through 460 -Q. It may be desirable to implement multiplexers while satisfying several requirements.
  • the description is continued with reference to an example implementation of multiplexer 460 -C. The description may be applicable to other multiplexers as well.
  • An aspect of the present invention allows multiplexer 460 -C to be implemented while minimizing the number of transistors (and thus potentially circuit area and power consumption).
  • the details of an embodiment of multiplexer 460 -C are provided in FIG. 6, which is shown selecting one of two inputs, with each input containing four bits (nibble) each.
  • the first nibble is shown containing bit D 00 , D 10 , D 20 , D 30
  • the second nibble is shown containing bits D 01 , D 11 , D 21 , D 31
  • the manner in which multiplexer 460 can be implemented to select one of the two inputs is described below with reference to FIG. 6.
  • FIG. 6 depicts the details of multiplexer 460 -C as containing eight cells 680 - 1 through 680 - 8 , and four charge and sense circuits 690 - 1 through 690 - 4 .
  • a first group containing cells 680 - 1 through 680 - 4 which receive bits D 00 , D 10 , D 20 , and D 30 respectively.
  • a second group containing cells 680 - 5 through 680 - 8 which respectively receive bits D 01 , D 11 , D 21 , and D 31 .
  • The two groups of cells are controlled by control lines 613 -A and 613 -B respectively. Only one of the control lines (even assuming more than two control lines) is at one logical value (e.g., 1) and the remaining control lines are at the other logical value. Only one of the control lines may be asserted in a known way based on the control value generated on path 415 by CAM 410. The bits in the group receiving the asserted control line are generated on output lines 621 - 624 respectively.
  • Charge and sense circuits (CSC) 690 - 1 through 690 - 4 are respectively connected to output lines 621 - 624 .
  • Each charge and sense circuit performs any necessary actions prior to or after the bits are selected on output lines 621 - 624 , consistent with the design of the cells.
  • Example embodiments of cells and charge and sense circuits are described below in further detail.
  • FIG. 7 is a circuit diagram illustrating the details of cell 700 in an embodiment of the present invention.
  • Cell 700 may correspond to any of the cells 680 - 1 through 680 - 8 .
  • each cell may be used to process an input bit of a multiplexer.
  • Cell 700 is shown containing two blocks, Block-A and Block-B. The two blocks are described below in further detail.
  • block-A illustrates the manner in which an input bit received on path 730 (contained in 201 of FIG. 2) can be stored locally.
  • Block-B illustrates the manner in which the bit may be selected on output line 799 (contained in path 461 -C) according to select values generated by CAM 410 . The blocks are described below in further detail.
  • Block-A receives a data bit (e.g., D 00 of FIG. 6) to be stored on input bit line 730 and the inverted bit on line 780 .
  • When write enable line 740 is asserted, N-MOS transistors 750 -A and 750 -B are switched on (conduct), causing the data bit to be stored in the latch formed by inverter gates 760 -C and 760 -D.
  • the latch may be implemented in a known way.
  • Block-B is shown containing two N-MOS transistors 750 -C and 750 -D, control signal 613 -A and shared line 799 . As noted above, block-B operates to drive shared line 799 to a bit value stored by block-A (specifically, the latch therein). Each component of block-B is described below in further detail.
  • control signals may be generated from the select values specified for the corresponding multiplexer.
  • a select value of 101 in binary format may cause control signals of 0010 0000 to be generated, indicating that the 6 th input is to be selected (and not the others). Accordingly, the bit corresponding to the sixth input would be provided on shared line 799 as described above.
  • N-MOS transistor 750 -D is connected to the inverted value of bit (ND) stored in block-A.
  • N-MOS transistor 750 -D conducts when a value of 0 is received on input line 730 and stored in block-A.
  • the gate of N-MOS transistor 750 -C is shown connected to select signal 613 -A, and thus transistor 750 -C conducts when control signal 613 -A is asserted.
  • the transistors represent switches which conduct upon receiving one value of an input and do not conduct upon receiving another value on the input.
  • Shared line 799 is shared by several cells (similar to 700 ) and is first charged to a high logical value (1). If none of the cells discharge the line, shared line 799 remains at high logical value. Only the specific cell (in a row, e.g., one of D 00 and D 01 in FIG. 6) selected according to the control signal/select value can discharge line 799 .
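  • A small behavioral model may help clarify the selection mechanism: the select value is decoded into one-hot control signals, the shared line is precharged to 1, and only the cell whose control signal is asserted can discharge the line, and only when its stored bit is 0. This is a software sketch of the behavior described above, with illustrative names; it is not the transistor-level circuit.

```python
def one_hot(select: int, num_inputs: int) -> list[int]:
    """Decode a select value into one-hot control signals."""
    return [1 if i == select else 0 for i in range(num_inputs)]

def shared_line_value(stored_bits: list[int], controls: list[int]) -> int:
    """Precharge-and-discharge model of one output column: the line starts
    at 1 and is pulled to 0 only by the cell whose control signal is
    asserted and whose stored bit is 0 (both series switches conduct)."""
    line = 1                                  # precharged high
    for bit, ctrl in zip(stored_bits, controls):
        if ctrl == 1 and bit == 0:
            line = 0
    return line

controls = one_hot(0b101, 8)   # asserts only the control line of the sixth input (index 5)
print(shared_line_value([1, 0, 1, 1, 0, 0, 1, 1], controls))  # 0: the selected cell stores 0
```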
  • FIG. 8 is a circuit diagram illustrating the charge and sense circuit 800 in an embodiment of the present invention.
  • Charge and sense circuit 800 may correspond to any of the charge and sense circuits 690 - 1 through 690 - 4 , even though the description is provided in relation to circuit 690 - 1 .
  • Charge and sense circuit 800 is shown containing two PMOS transistors ( 810 -A and 810 -B), two NMOS transistors ( 810 -C and 810 -D), a latch represented by two inverter gates 820 -A and 820 -B, charge initiate signal 801 and evaluation signal 850 .
  • the manner in which shared line 799 is charged and output value is evaluated on path 621 is described below.
  • Transistor 810 -A conducts when gate input 801 is set to 0, causing shared line 799 to be charged.
  • Evaluation signal 850 may be asserted to propagate the value on shared line 799 to output line 621 as described below.
  • output line 621 remains at previous propagated value as also described below.
  • shared line 799 may be either at 0 or 1. If a 0 is present, P-MOS transistor 810 -B conducts, causing point 812 to be raised to 1. The latch containing back-to-back connected inverters 820 -A and 820 -B inverts the value at point 812 , and stores the resulting 0 as desired. On the other hand, if shared line 799 is at 1, transistor 810 -B is switched off and transistors 810 -C and 810 -D conduct, causing point 812 to be set to 0. The latch then provides a value of 1 on output line 621 , as desired.
  • When evaluation signal 850 is not asserted, transistor 810 -D does not conduct. As a result, the logical level of point 812 is not changed and the value on output 621 is not affected.
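  • Behaviorally, the charge and sense circuit precharges shared line 799 and then, only while evaluation signal 850 is asserted, transfers the line value to the output latch, which otherwise holds its previous value. A rough software analogue (the class and method names are assumptions for illustration) follows.

```python
class ChargeAndSense:
    """Sense model: the output follows the shared line only while the
    evaluation signal is asserted and holds its previous value otherwise."""

    def __init__(self) -> None:
        self.output = 0                      # value last latched onto the output line

    def sense(self, shared_line: int, evaluate: bool) -> int:
        if evaluate:
            self.output = shared_line        # propagate the sensed value
        return self.output                   # otherwise hold the previous value

csc = ChargeAndSense()
print(csc.sense(shared_line=0, evaluate=True))   # 0 propagated to the output
print(csc.sense(shared_line=1, evaluate=False))  # still 0: previous value held
```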
  • the input to the parser can be different data than the data which is processed according to the results of the parsing operation.
  • the parser may be found to be useful in IPV6 type environments in which the packet format varies depending on the content of various fields.
  • the present invention enables a high speed parser to be implemented efficiently for several types of environments.
  • NMOS transistors may be used instead of PMOS transistors.
  • CAM 410 can be implemented using several approaches as described below.
  • FIG. 9 is a block diagram illustrating the manner in which CAM 410 can be implemented using pipelining techniques in an embodiment of the present invention.
  • the block diagram is shown containing sub-CAMs 910 -A through 910 -D and logical AND gates 920 -B through 920 -D. The components are described below in further detail.
  • Each sub-CAM ( 910 -A through 910 -D) may contain multiple locations, with each location corresponding to a search rule implemented using CAM 410 . As may be appreciated, each sub-CAM searches a portion of the search rule. For example, the first location of all the sub-CAMs 910 -A through 910 -D together implement the search rule in the first location of search field 420 .
  • each location of the sub-CAMs may contain an output line, even though the output line for only the first location is shown for each sub-CAM.
  • Each AND gate ( 920 -B through 920 -D) performs a logical AND operation of an output line from a previous stage ( 910 -A through 910 -C respectively) and an output line of present stage ( 910 -B through 910 -D).
  • AND gates may be provided associated with all the locations of the sub-CAMs, even though AND gates are shown associated with only the first locations.
  • line 999 (output of AND gate 920 -D) may be asserted to a 1 only if a match is detected with the first location of all the four sub-CAMs.
  • When line 999 is asserted to 1, the data in the corresponding output field (not shown) may be provided as multiplexer select values.
  • each sub-CAM may be implemented using a different technology (e.g., ternary, binary or range search) as is suitable to parse packets of a specific packet format of interest.
  • the pipelining techniques may facilitate operation of parsers with shorter bus-widths.
  • input bus 901 may be 32 bytes wide, and carry four groups of 32-bytes of data in four successive clock cycles, and store strobes 902 -A through 902 -D may be asserted in the corresponding clock cycles.
  • the first 32 bytes of a packet may be provided as an input to sub-CAM 910 -A by asserting only store strobe 902 -A, and the next 32 bytes may be provided as an input to sub-CAM 910 -B by asserting only strobe pulse 902 -B, etc.
  • each bit of the sub-CAMs is implemented using SRAM bit cells well known in the relevant arts.
  • Multiplexers 460 -A through 460 -Q may also be implemented to operate with 32 byte bus.
  • each multiplexer may contain 1024 cells (similar to the one shown in FIG. 7).
  • the 1024 cells may be partitioned into four groups, with each group storing the bits received in the corresponding clock cycle.
  • write enable line 740 for each cell may be enabled in only one of the four clock cycles.
  • An appropriate AND logic may be used to enable each write enable signal in one of the four clock cycles depending on the specific partition in which the cell belongs. Such implementation will be apparent to one skilled in the relevant arts based on the disclosure provided herein.
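  • The pipelined arrangement can be modelled as four sub-tables that each compare one 32-byte slice of the packet, with a rule matching only when every stage agrees, mirroring AND gates 920 -B through 920 -D. The sketch below assumes byte-wise compare/mask pairs and purely illustrative names.

```python
def slice_matches(chunk: bytes, compare: bytes, mask: bytes) -> bool:
    """One sub-CAM location: match a 32-byte chunk at masked byte positions."""
    return all((c ^ r) & m == 0 for c, r, m in zip(chunk, compare, mask))

def pipelined_match(packet: bytes, stages: list[list[tuple[bytes, bytes]]]) -> list[bool]:
    """AND the per-location match results across all sub-CAM stages, as the
    cascaded AND gates do for each search rule; stage i sees bytes
    32*i .. 32*i + 31 of the packet."""
    results = [True] * len(stages[0])
    for i, stage in enumerate(stages):
        chunk = packet[32 * i: 32 * (i + 1)]
        for rule_index, (compare, mask) in enumerate(stage):
            results[rule_index] = results[rule_index] and slice_matches(chunk, compare, mask)
    return results
```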

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A high speed parser containing a content addressable memory (CAM) providing select values to multiplexers. The CAM is programmed to implement search rules which examine input data for specific semantics according to a protocol, and outputs the specific bit positions at which the corresponding desired data units are present. The outputs are provided to multiplexers to cause the desired data units to be selected on the corresponding output paths of the multiplexers.

Description

    RELATED APPLICATION
  • The present application is related to the co-pending U.S. application entitled “Multiplexer”, Ser. No. UNASSIGNED, filed on even date with the present application, naming as inventors: Kumar et al., and assigned to the common assignee. [0001]
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0002]
  • The present invention relates to integrated circuits, and more specifically to a method and apparatus for quickly parsing a packet. [0003]
  • 2. Related Art [0004]
  • Parsers are generally used to determine data portions of interest in a sequence of bits. Typically, the bits are examined according to a pre-specified convention (protocol) to determine the data portions of interest. For example, a parser may be used in a router (example of a device) to determine data portions such as destination address and type of service parameters in a received IP packet as is well known in the relevant arts. [0005]
  • In a prior approach, a device may implement a parser substantially in the form of software instructions in which the bits are generally examined sequentially according to a protocol to determine the specific bits of interest. For example, when parsing an IP packet, a parser may first examine the IP version number (e.g., version 4 or 6) and a header length to determine the specific bit positions at which a destination port number of interest is present. [0006]
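  • For concreteness, the sequential software approach described above might look like the following Python sketch, which walks an Ethernet/IPv4 frame field by field to locate the TCP destination port. The offsets follow the standard Ethernet and IPv4 layouts; the function itself is illustrative and not taken from the patent.

```python
def find_tcp_dest_port_offset(frame: bytes) -> int:
    """Sequentially examine an Ethernet/IPv4 frame to find the byte
    offset of the TCP destination port (software-style parsing)."""
    ether_type = int.from_bytes(frame[12:14], "big")   # Ethernet protocol type
    if ether_type != 0x0800:
        raise ValueError("not an IPv4 packet")
    version = frame[14] >> 4                           # IP version nibble
    ihl = frame[14] & 0x0F                             # header length in 32-bit words
    if version != 4 or ihl < 5:
        raise ValueError("unsupported or malformed IP header")
    # The TCP header follows the IP header; the destination port occupies
    # bytes 2-3 of the TCP header, giving offsets 36, 40, ..., 76.
    return 14 + ihl * 4 + 2
```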
  • One problem with such software implementations is that it may take a long time (or a large number of clock cycles) to parse data bits. Such long duration for parsing may be unacceptable in many environments. For example, with high speed technologies such as those based on optical fibre, packets may be received at a high frequency and it may be necessary to parse packets quickly (e.g., at “wire-speed”). Otherwise, the long parsing times may impede the throughput performance of a device processing the packets and the resulting long parsing times may be undesirable. [0007]
  • Therefore, what is needed is a method and apparatus for parsing packets quickly. [0008]
  • SUMMARY
  • A parser according to the present invention receives input data according to a packet format, and generates data units of interest on corresponding pre-specified paths. The data units may be generated quickly even if the bit positions of the data units of interest can vary according to the protocol. In an embodiment, the parser contains a multiplexer and a content addressable memory (CAM) which controls the multiplexer as an N-to-1 selector, the details of which are described below. [0009]
  • The multiplexer receives multiple inputs (contained within input data) and selects one of the inputs to generate a multiplexer output according to a select value. The content addressable memory (CAM) may be programmed with multiple search rules and associated output values to cause the appropriate select value to be provided to the multiplexer. The search rule and the associated output value are based on the bit positions at which the desired data unit is present in the input data according to the packet format. [0010]
  • Due to the use of the CAM for searching the input data according to the protocol, the parser may generate data units of interest quickly on the corresponding pre-specified output paths. [0011]
  • An implementation of the parser is based on a ternary CAM in which each search rule is implemented as a compare data and a mask. A search rule is determined to be a matching search rule if the input data matches the compare data at bit positions specified by the mask. Multiple output values may be associated with each search rule, and the output values may be provided as select values for a corresponding number of multiplexers. Hence, each output value of a search rule corresponds to a desired output field from the high speed parser. [0012]
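  • As an illustration of the match semantics, the following minimal software model treats each search rule as a compare value and a mask: a rule matches when the input agrees with the compare data at every bit position where the mask is 1, and the matching rule supplies the associated output (select) values. The first-match policy and all names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class TernaryRule:
    compare: int              # compare data (like field 360)
    mask: int                 # 1-bits mark positions that must match (like field 370)
    outputs: tuple[int, ...]  # associated output/select values (like field 340)

def tcam_lookup(rules: list[TernaryRule], data: int) -> tuple[int, ...] | None:
    """Behavioral model of the ternary search: return the output values
    of the first rule whose compare data matches `data` at every
    bit position where the mask is set."""
    for rule in rules:
        if (data ^ rule.compare) & rule.mask == 0:
            return rule.outputs
    return None
```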
  • An embodiment of a ternary CAM is configured to parse a packet according to Internet Protocol (IP). As the packet format varies depending on the version number (e.g., 4 or 6) of the IP, the search rules are designed to search according to a version number. In addition, some of the data units (e.g., destination port number of the following TCP/UDP header) may be dependent on a header length field. Accordingly, each search rule may be designed to search for a specific header length value and generate the appropriate select values. [0013]
  • Alternative embodiments may be implemented using other CAM technologies (e.g., binary CAM, range search CAM) depending on the available technologies and the specific parsing strategy that may need to be employed for a specific protocol/packet format or search rules. In addition, each CAM may contain multiple CAM units, with different CAM units being implemented using different CAM technologies (corresponding to the type of search operation required for different fields of the incoming stream of data). [0014]
  • According to another aspect of the present invention, a multiplexer may contain multiple cells driving a common output line through a switch controlled by the select value output from the multiplexer. Each cell receives (and stores) a bit (“input bit”) of the multiple bits to be selected. The output line is first driven to a first logical value, and only one of the cells drives the output line to a second logical value (based on the output generated by the multiplexer) if the corresponding input bit does not equal the first logical value. [0015]
  • For example, the output line may first be driven to a logical value of 1, and a cell receiving a specific input may cause the output line to be driven to a logical value of 0 if the specific input equals 0. The specific one of the cells driving the output line may be determined by a select value received by the multiplexer. The remaining cells may not affect the output line irrespective of the corresponding input bits. [0016]
  • Each cell may contain a pair of switches (e.g., transistors) coupled in series between the output line and a pre-specified voltage level (e.g., ground). A first switch may be switched on according to a control signal (e.g., asserted according to select value) and a second switch may be switched on according to a bit stored by the cell. [0017]
  • The pair of switches causes the output line to be charged to a second logical level when the first bit has one logical value, but does not change the first logical level on the output line if the first bit has another logical level. The output line may be shared by all the cells processing a corresponding bit of the different inputs, and only a control signal corresponding to only one of the multiple cells is set to one state and the control signal of the remaining ones of the multiple cells are set to another state according to the select value. [0018]
  • Hence, in operation, search rules may be stored into a CAM. An incoming data string is compared with all the rules and select values are output based on the matching rule. In parallel, the input data string is also stored in the multiplexer. The select value from the CAM-based rule table controls the multiplexer such that the appropriate field (or fields) of the input data is finally output from the multiplexer. [0019]
  • Further features and advantages of the invention, as well as the structure and operation of various embodiments of the invention, are described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.[0020]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention will be described with reference to the accompanying drawings, wherein: [0021]
  • FIG. (FIG.) [0022] 1 is a block diagram illustrating an example environment in which the present invention can be implemented;
  • FIG. 2 is a block diagram illustrating the embodiment of a router provided in accordance with the present invention; [0023]
  • FIG. 3 is a block diagram illustrating the internals of an embodiment of a ternary content addressable memory (CAM) provided in accordance with the present invention; [0024]
  • FIG. 4 is a block diagram illustrating the details of an embodiment of a parser; [0025]
  • FIG. 5 is a block diagram illustrating the manner in which input data may be compared with the search rules; [0026]
  • FIG. 6 depicts a logical model of an embodiment of multiplexer which selects one of two inputs for a 4-bit field; [0027]
  • FIG. 7 is a circuit diagram illustrating the details of [0028] cell 700 in an embodiment of the present invention;
  • FIG. 8 is a circuit diagram illustrating the charge and sense circuit in an embodiment of the present invention; and [0029]
  • FIG. 9 is a block diagram illustrating the details of implementing CAM using pipelining techniques in an embodiment of the present invention. [0030]
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • 1. OVERVIEW AND DISCUSSION OF THE INVENTION
  • A parser in accordance with the present invention may contain a content addressable memory (CAM) and multiple multiplexers. Each multiplexer may provide one of the desired data units (e.g., bytes) on a corresponding output path. Each multiplexer may be provided as inputs all the possible values from which the one input is selected and provided on the corresponding output path. The content of the CAM may be configured such that the appropriate selection values are provided to the multiplexers for different combinations of input data to be parsed. [0031]
  • Thus, in an example scenario, at least a portion of an input data, which determines the packet format, is provided as an input to the CAM. At least the portions of input data which may be selected by each multiplexer are provided as inputs to the multiplexer. The output(s) of the CAM may then be provided as selection values to the multiplexers, which causes the multiplexers to generate the desired data units on the corresponding output paths. [0032]
  • As a result, each desired data unit may be caused to be provided on a pre-specified communication path by appropriate design of the CAM and the multiplexers. Accordingly, the combined circuit operates as a high speed parser. [0033]
  • Several aspects of the invention are described below with reference to examples for illustration. It should be understood that numerous specific details, relationships, and methods are set forth to provide a full understanding of the invention. One skilled in the relevant art, however, will readily recognize that the invention can be practiced without one or more of the specific details, or with other methods, etc. In other instances, well-known structures or operations are not shown in detail to avoid obscuring the invention. [0034]
  • 2. EXAMPLE ENVIRONMENT
  • FIG. 1 is a block diagram illustrating an example environment in which the present invention can be implemented. The environment is shown containing [0035] networks 110, 130 and 140, and router 120. Router 120 represents an example system in which the present invention can be implemented. However, the invention can be implemented in other devices such as web cache servers, traffic sniffers, load balancing devices. Each system of FIG. 1 is described in further detail below.
  • Each of the [0036] network 110, 130 and 140 may contain multiple machines and routers. The machines may communicate with each other, and router 120 may provide the connectivity between machines in different networks.
  • [0037] Router 120 represents an example device in which the present invention can be implemented. For illustration, it is assumed that router 120 receives packets from one network at high speed and needs to quickly extract different fields of interest to process the packets at high speed (e.g., wire-speed). For example, some of the fields may be needed to search a routing table. Parsing generally refers to determining the location of the specific fields according to the packet format. Special challenges are presented if the packet format permits the position of the fields to change. The manner in which the present invention enables packets to be parsed quickly is described below with reference to FIG. 2.
  • 3. ROUTER
  • FIG. 2 is a block diagram illustrating the details of an embodiment of router [0038] 120 (an example of a system) according to an aspect of the present invention. Router 120 is described as receiving and processing TCP/IP packets encapsulated in Ethernet format for illustration. However, router 120 can be designed to operate in several other environments, as will be apparent to one skilled in the relevant arts based on the disclosure provided herein. Router 120 is shown containing layer2 (L2) lookup 210, layer3 (L3) router 220, layer4 (L4) packet classifier 230, parser 240, and forwarding engine 270. Each component is described below in greater detail.
  • Forwarding [0039] engine 270 receives packets on input path 201, and processes the packets according to various decisions made by layer2 lookup 210, layer3 router 220, layer4 packet classifier 230. For example, for non-IP packets, forwarding engines may operate as a layer-2 bridge. For IP packets, IP routing may be performed. For at least some of the IP packets, the processing (or forwarding) may depend on layer-4 determinations performed by layer4 packet classifier 230.
  • In general, the implementation of forwarding [0040] engine 270 depends on the type of packet to be processed, the medium/protocols using which packets are sent and received, and the different services (e.g., higher priority to HTTP packets) that may need to be provided to different packets depending on various inputs. Forwarding engine 270 may be implemented in a known way.
  • It should be appreciated that forwarding [0041] engine 270 represents an example processing block, and parsers generally need to be implemented to provide the information needed by the processing blocks to provide a desired utility. In addition, processing blocks need to be designed taking into consideration the specific output path on which a corresponding desired data unit of interest will be provided.
  • [0042] Layer2 lookup 210 may receive layer-2 information (e.g., An Ethernet protocol type and Ethernet destination address) on path 211, and determine the manner in which the corresponding packet is to be forwarded. In an embodiment, layer-2 lookup 210 may indicate on path 217 that a received packet is simply to be bridged if the packet is not an IP packet. Layer2 lookup 210 may further indicate the specific port (not shown) on which the packet is to be forwarded. In response, forwarding engine 270 causes router 120 to operate as a bridge in a known way.
  • Layer[0043] 3 routing 220 may receive layer-3 information (e.g., a destination address) and determine the specific port on which a corresponding packet is to be forwarded at least for some protocols (e.g., IP having a value of 0x800 in the Ethernet protocol type field). The specific port to be forwarded may be determined according to a routing table, and the corresponding information may be communicated to forwarding engine 270.
  • [0044] Layer4 packet classifier 230 may receive layer-4 information (e.g., port numbers in TCP/UDP packets) and determine the manner in which a corresponding packet is to be processed. In an embodiment, the layer-4 information is used to implement fire-walls (or perform filtering operation, in general). In another embodiment, the layer-4 information is used to determine whether a HTTP packet is to be forwarded to a local web cache (not shown). In response, forwarding engine 270 may process the packet accordingly.
  • [0045] Blocks 210, 220 and 230 are described as operating on data of only a corresponding one of the layers for simplicity. However, blocks may be implemented to use data in multiple layers as may be necessary consistent with the design requirements. In addition, implementation of blocks examining other layers will also be apparent to one skilled in the relevant arts based on the disclosure provided herein. Such embodiments are contemplated to be within the scope and spirit of various aspects of the present invention.
  • [0046] Parser 240 parses the input data received on path 201 to provide each desired data unit on a corresponding output path according to a pre-specified convention. Thus, the layer-2, layer-3 and layer-4 portions of information are provided on (pre-specified bit positions) paths 211, 212 and 213 respectively. Parser 240 may need to provide the corresponding data units on the corresponding paths (outputs) quickly.
  • The manner in which a parser may quickly parse packets is described below in further detail. However, it is helpful to understand the operation of an example CAM to appreciate the details of parsing implemented in accordance with the present invention. Accordingly, the operation of an example CAM is described first with reference to FIG. 3. [0047]
  • 4. CONTENT ADDRESSABLE MEMORY (CAM)
  • FIG. 3 is a block diagram illustrating the details of an embodiment of a ternary content addressable memory (TCAM) [0048] 300. In general, a CAM refers to a memory in which data is retrieved from a memory location according to a comparison of input data with the content of the memory. CAMs are generally distinguished from random access memories (RAM) in that RAMs receive an address as an input and generate an output from the memory location specified by that address. The manner in which a TCAM can be used is described below.
  • Broadly, [0049] TCAM 300 is configured with data and masks forming ‘search rules’. The manner in which search rules can be implemented in the context of ternary CAMs is described in sections below with examples. When input data is provided on input path 301, TCAM 300 generates on path 399 the output data stored in association with a search rule matching the input data, as described below in further detail.
  • [0050] TCAM 300 is shown containing search field 320 and output field 340 organized in the form of multiple rows 330-1 through 330-N. The search field in each row may in turn contain compare data 360 and associated mask 370. The compare data and mask in each row form a search rule.
  • In operation, when input data is provided on [0051] path 301, the input data is compared with the compare data (on a bit by bit basis) in each row at the positions specified by the mask in the same row. If a match is detected at all the compared positions of a row, the data in the corresponding output field 340 is provided on path 399. As may be appreciated, the comparison of each row may be viewed as a search, and the comparison data along with the corresponding mask forms a search rule.
  • While the embodiments are described with reference to TCAMs for illustration, it should be understood that other types of CAMs (depending on the needs of the specific environment and availability of the products) may also be used to parse data in accordance with various aspects of the present invention. Examples of such CAMs include range match CAM, prefix match CAM, etc. The implementation of such alternative embodiments will be apparent to one skilled in the relevant arts based on the disclosure provided herein, and are contemplated to be within the scope and spirit of various aspects of the present invention. [0052]
  • [0053] In addition, while a CAM is described as being implemented using a single unit (TCAM 300), alternative embodiments may be implemented which use multiple units, with different units generating the byte positions for different fields. Each of the units may be implemented with different technology (ternary CAM, range match CAM, etc.) as is suited to parse the specific packet formats for the desired fields.
  • [0054] In general, the implementation of search rules depends on the specific type of features provided by the corresponding type of CAM unit. The manner in which TCAMs can be used to parse a received packet is described below in further detail with reference to FIG. 4.
  • 5. PARSER
  • [0055] FIG. 4 is a block diagram illustrating the details of an embodiment of parser 400. Parser 400 is shown containing CAM 410, output register 450, and multiplexers 460-A through 460-Q. Only the representative components and interconnections are shown for clarity. The components are described below.
  • [0056] Output register 450 stores the output data received on path 415, and provides the control values to multiplexers 460-A through 460-Q using the received data. Output register 450 may be logically viewed as containing Q partitions (portions) 451-A through 451-Q, with each partition providing a control value (on paths 456-A through 456-Q) to the corresponding multiplexer.
  • [0057] The width of each partition (and thus the width of paths 456-A through 456-Q) generally depends on the number of inputs provided to the corresponding multiplexer. In an embodiment, each multiplexer receives all the 128 bits of parsed data as inputs, and thus the width of each partition equals 7 bits. However, optimizations may be performed to determine the specific possible inputs from which each output (of a multiplexer) needs to be selected, and multiplexers may be designed only with the required inputs. In such a situation, the width of each partition may be minimized.
  • [0058] Each of the multiplexers 460-A through 460-Q selects one of the corresponding connected inputs according to the corresponding control value 456-A through 456-Q respectively. Multiplexers 460-C, 460-E and 460-Q are respectively shown providing the selected data on output paths 469-C, 469-E, and 469-Q, which may respectively be contained in paths 211, 212, and 213 of FIG. 2. Each output path generates one of the desired fields, and the specific field selected by each multiplexer is determined by the corresponding control value received from output register 450.
  • [0059] CAM 410 receives input data to be parsed on path 201, and generates the control values (used for selection by multiplexers 460-A through 460-Q) on path 415 according to the stored values (configured search rules and associated output data/control values). CAM 410 may also contain compare data field 460, mask data field 470 and output field 440, which respectively operate similarly to the corresponding fields 360, 370 and 340 of FIG. 3.
  • [0060] In general, the input data is compared with the bits in each data field only at positions specified by the corresponding mask, and the value in the corresponding (same row) output field is provided as output to be stored in output register 450. Each value (in portions 451-A through 451-Q respectively) in output register 450 in turn controls the corresponding specific bit positions (or field) of data selected from the input data received on path 201. For example, if partition 451-C contains a value of 10, byte 10 of the input data may be selected on path 469-C. Similarly, if partition 451-Q contains a value of 76, byte 76 of the input data may be selected on path 469-Q.
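  • A rough software rendering of this selection path may help clarify the roles of output register 450 and the multiplexers. The sketch below assumes byte-granular select values (as in the byte 10 and byte 76 examples above); the function name and granularity are illustrative only.

```python
# Each partition of the output register holds one select value; the
# corresponding multiplexer forwards the input byte at that position.
def select_fields(input_data, select_values):
    return [input_data[sel] for sel in select_values]

packet = bytes(range(128))                 # stand-in for the parsed input data
print(select_fields(packet, [10, 76]))     # byte 10 and byte 76, as above
```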
  • [0061] In general, each path 469-A through 469-Q can be made to generate a corresponding desired pre-specified field in input data on path 201 by appropriate programming of CAM 410, even if the positions of the fields can change according to a pre-specified protocol. Thus, it may be appreciated that CAM 410 needs to be programmed consistent with the protocol format of input data to be parsed and the design of the multiplexers selecting the desired data units. The description is continued with reference to TCP/IP protocol format illustrating the general considerations in programming the CAM.
  • 6. TCP/IP FORMAT
  • [0062] Only the details of the TCP/IP protocol as relevant to an understanding of some of the described embodiments are provided herein. For further information on the TCP/IP protocol format, the reader is referred to the documents entitled “RFC 0791—Internet Protocol” and “RFC 0793—Transmission Control Protocol”, available at www.ietf.org, both of which are incorporated herein in their entirety.
  • [0063] In general, each data unit of interest may be present in specific bit positions or in different bit positions depending on the manner in which the protocol is defined. For example, the destination IP address would be available in bytes 30-33, at least for IP version 4. On the other hand, the destination port number may start at byte positions 36, 40, 44, 48, . . . , 76 (in IP version 4) depending on the header length.
  • [0064] CAM 410 may be programmed with search rules and the associated output values to determine the correct byte positions for data units of interest in input data. In general, the search rules for each data unit of interest need to be implemented taking into consideration various combinations of possible byte positions consistent with the protocol format. The manner in which CAM 410 may be programmed for the IP protocol Version 4 (and the destination address and port numbers described above) is described below.
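  • The dependence of byte positions on the header length can be made concrete with a small calculation. The sketch below assumes an untagged Ethernet frame (14-byte header) carrying IPv4, which is consistent with the byte positions quoted in this description; under that assumption the TCP source port starts at byte 14 + 4*IHL.

```python
# Byte offsets of the TCP source/destination ports for IPv4 over untagged
# Ethernet; IHL is the IP header length in 32-bit words.
def tcp_port_offsets(ihl):
    ip_start = 14                  # 12 bytes of MAC addresses + 2-byte type
    src = ip_start + 4 * ihl       # TCP header follows the IP header
    return src, src + 2

print(tcp_port_offsets(5))         # -> (34, 36)
print(tcp_port_offsets(0xF))       # -> (74, 76)
```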
  • 7. PROGRAMMING THE CAM
  • [0065] FIG. 5 depicts the content of CAM 410 illustrating the manner in which search rules can be implemented and the matching output can be generated according to an aspect of the present invention. For the purpose of illustration and conciseness, it is assumed that parser 400 needs to generate data representing the Ethernet protocol type, IP destination address, TCP source port number and TCP destination port number only.
  • [0066] By analyzing the TCP/IP protocols, it may be appreciated that the IP version and header length fields determine the specific byte locations of the source port number. In IP version 4, the source port number may be present in byte location 34 for a header length of 5. Accordingly, the input data needs to be searched according to the header length and version number to determine the specific byte position for the source port number. On the other hand, the byte positions are fixed for the IP protocol type and destination address.
  • [0067] The manner in which CAM 410 can be programmed to perform the corresponding searches on input data is described now. CAM 410 is shown containing several rows (530-A through 530-K) and columns (560-A through 560-X). Each row implements a search rule as described below with several examples.
  • [0068] Row 530-A searches for packets of IP version 4 having a value of 5 in the header length field, since columns 560-C and 560-D respectively contain compare data values 0x0800 and 0x45, with the corresponding mask bits set to 1. Output field 540 in row 530-A is shown containing values 12, 30, 34 and 36, which respectively equal the byte positions of the Ethernet protocol type, IP destination address, TCP source port and TCP destination port in case of IP version 4 and a header length value of 5.
  • [0069] The remaining search rules are described with reference to row 530-A for conciseness. In all the search rules, column 560-C contains the same value, as all the corresponding searches are designed for the IP protocol. However, as searches are designed for header lengths 5 through 0xF, column 560-D contains values ranging from 0x45 to 0x4F (4 representing IP version 4) in different rows.
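  • As a rough illustration of the table of FIG. 5 (the actual CAM row encoding is not reproduced here), rules of this form can be generated programmatically: one row per possible header length, each matching on the Ethernet type and the version/IHL byte, and each carrying the four byte positions to be selected. The dictionary-based encoding below is an assumption made only for readability.

```python
# Illustrative generation of rules analogous to rows 530-A through 530-K.
def build_rules():
    rules = []
    for ihl in range(5, 0x10):                  # header lengths 5 through 0xF
        src_port = 14 + 4 * ihl                 # see the offset sketch above
        rules.append({
            "ether_type": 0x0800,               # compare data, mask bits set
            "version_ihl": 0x40 | ihl,          # 0x45 .. 0x4F
            "outputs": (12, 30, src_port, src_port + 2),
        })
    return rules

def lookup(rules, ether_type, version_ihl):
    for rule in rules:
        if (rule["ether_type"], rule["version_ihl"]) == (ether_type, version_ihl):
            return rule["outputs"]
    return None
```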
  • [0070] The description is continued with a few examples illustrating the operation of parser 400 when CAM 410 is programmed as described above with reference to FIG. 5.
  • 8. EXAMPLES
  • [0071] Consider an example in which the input data specifies version 0x04 and header length 0x07 (i.e., the value of V/IHL is 0x47) and the type of protocol is IP (i.e., the value is 0x0800). The input data matches only the search rule specified by row 530-C. As a result, CAM 410 sends the corresponding values (i.e., 12, 30, 42 and 44) in the output field on path 415. The values cause the corresponding multiplexers to select data units starting at byte positions 12, 30, 42 and 44, accurately reflecting the Ethernet protocol type, IP destination address, TCP source port, and TCP destination port fields in the case of IP version 4 and a header length value of 7.
  • [0072] In case of input data containing 0x4E for the IP version/header length fields, the search rule specified by row 530-J is matched and the corresponding output field values (12, 30, 70 and 72) are forwarded as select values to the corresponding multiplexers 460 on output path 415. In general, a select value indicates the specific one of the inputs the corresponding multiplexer needs to select. The data units starting at byte positions 12, 30, 70 and 72 are selected (and provided as outputs of parser 400) reflecting the Ethernet protocol type, IP destination address, TCP source port and TCP destination port.
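  • Applying the illustrative lookup sketched above reproduces both of these examples; the expected values come directly from the description of rows 530-C and 530-J.

```python
# Usage of the build_rules/lookup sketch from the previous section.
rules = build_rules()
print(lookup(rules, 0x0800, 0x47))   # -> (12, 30, 42, 44), header length 7
print(lookup(rules, 0x0800, 0x4E))   # -> (12, 30, 70, 72), header length 0xE
```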
  • [0073] While CAM 410 is described as providing selection values for fields which do not change position in a packet (e.g., Ethernet Type starts at byte number 12 counted starting from 0), alternative embodiments can be implemented with improvements which do not compare data related to such fixed fields. In such a case, the data values in the fields can potentially bypass both the parser and the multiplexer, and be provided directly to the subsequent blocks (e.g., 210, 220) with appropriate buffering.
  • [0074] As described above, parser 400 contains several multiplexers 460-A through 460-Q. It may be desirable to implement multiplexers while satisfying several requirements. The description is continued with reference to an example implementation of multiplexer 460-C. The description may be applicable to other multiplexers as well.
  • 9. MULTIPLEXER
  • [0075] An aspect of the present invention allows multiplexer 460-C to be implemented while minimizing the number of transistors (and thus potentially circuit area and power consumption). The details of an embodiment of multiplexer 460-C are provided in FIG. 6, which is shown selecting one of two inputs, with each input containing four bits (a nibble). The first nibble is shown containing bits D00, D10, D20 and D30, and the second nibble is shown containing bits D01, D11, D21 and D31. The manner in which multiplexer 460 can be implemented to select one of the two inputs is described below with reference to FIG. 6.
  • [0076] FIG. 6 depicts the details of multiplexer 460-C as containing eight cells 680-1 through 680-8, and four charge and sense circuits 690-1 through 690-4. A first group contains cells 680-1 through 680-4, which receive bits D00, D10, D20, and D30 respectively. A second group contains cells 680-5 through 680-8, which respectively receive bits D01, D11, D21, and D31.
  • [0077] The (cells in the) two groups are respectively controlled by control lines 613-A and 613-B. Only one of the control lines (even assuming more than two control lines) is at one logical value (e.g., 1) and the remaining control lines are at the other logical value; the specific control line may be asserted in a known way based on the control value generated on path 415 by CAM 410. The bits in the group receiving the asserted control line are generated on output lines 621-624 respectively.
  • [0078] For example, if a multiplexer select value equals 0, only line 613-A may be asserted to 1, causing bits D00 through D30 to be provided on output lines 621-624 respectively. Each cell 680-1 through 680-4 processes an input bit and causes the corresponding bit to be provided on the connected output line if the input control bit is asserted.
  • [0079] Charge and sense circuits (CSC) 690-1 through 690-4 are respectively connected to output lines 621-624. Each charge and sense circuit performs any necessary actions prior to or after the bits are selected on output lines 621-624, consistent with the design of the cells. Example embodiments of cells and charge and sense circuits are described below in further detail.
  • 10. CELL
  • [0080] FIG. 7 is a circuit diagram illustrating the details of cell 700 in an embodiment of the present invention. Cell 700 may correspond to any of the cells 680-1 through 680-8. As noted above, each cell may be used to process an input bit of a multiplexer. Cell 700 is shown containing two blocks, Block-A and Block-B. The two blocks are described below in further detail.
  • [0081] Broadly, block-A illustrates the manner in which an input bit received on path 730 (contained in 201 of FIG. 2) can be stored locally. Block-B illustrates the manner in which the bit may be selected on output line 799 (contained in path 461-C) according to select values generated by CAM 410. The blocks are described below in further detail.
  • [0082] Block-A receives a data bit (e.g., D00 of FIG. 6) to be stored on input bit line 730 and the inverted bit on line 780. When write enable line 740 is asserted, N-MOS transistors 750-A and 750-B are switched on (conduct), causing the data bit to be stored in the latch formed by inverter gates 760-C and 760-D. The latch may be implemented in a known way.
  • [0083] Block-B is shown containing two N-MOS transistors 750-C and 750-D, control signal 613-A and shared line 799. As noted above, block-B operates to drive shared line 799 to a bit value stored by block-A (specifically, the latch therein). Each component of block-B is described below in further detail.
  • [0084] It may be appreciated that control signals (including 613-A) may be generated from the select values specified for the corresponding multiplexer. In an embodiment, a select value of 101 in binary format (equal to 5 in decimal) may cause control signals of 0010 0000 to be generated, indicating that the 6th input is to be selected (and not the others). Accordingly, the bit corresponding to the sixth input would be provided on shared line 799 as described above.
  • [0085] It may be appreciated that translating a select value to multiple control signals generally enables CAM 410 (and bus 415) to be implemented with minimal width (number of bits). However, in situations such as when performance is of primary concern, CAM 410 may be implemented to generate control signals directly (using a wider CAM), in which case the select value is represented in the form of control signals. Many such alternative embodiments may be implemented without departing from the scope and spirit of the present invention as will be apparent to one skilled in the relevant arts based on the disclosure provided herein.
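  • The select-value-to-control-signal translation described above is essentially a one-hot decode. A minimal sketch follows; the decode width of eight inputs is an assumption chosen to match the 0010 0000 example.

```python
# One-hot decode: select value 0b101 (5) asserts only the control line of the
# sixth input, giving the 0010 0000 pattern described above.
def one_hot(select, num_inputs=8):
    assert 0 <= select < num_inputs
    return 1 << select

print(format(one_hot(0b101), "08b"))   # -> '00100000'
```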
  • [0086] Continuing with the description of block-B of FIG. 7, the gate of N-MOS transistor 750-D is connected to the inverted value of bit (ND) stored in block-A. Thus, N-MOS transistor 750-D conducts when a value of 0 is received on input line 730 and stored in block-A. The gate of N-MOS transistor 750-C is shown connected to select signal 613-A, and thus transistor 750-C conducts when control signal 613-A is asserted. In general, the transistors represent switches which conduct upon receiving one value of an input and do not conduct upon receiving another value on the input.
  • [0087] Shared line 799 is shared by several cells (similar to 700) and is first charged to a high logical value (1). If none of the cells discharge the line, shared line 799 remains at high logical value. Only the specific cell (in a row, e.g., one of D00 and D01 in FIG. 6) selected according to the control signal/select value can discharge line 799.
  • [0088] In particular, with reference to block-B of FIG. 7, when both transistors 750-C and 750-D conduct, then shared line 799 is discharged to attain a low logical level. As described above, transistor 750-C conducts when control signal 613-A is asserted and transistor 750-D conducts when the bit stored in block-A equals 0.
  • [0089] Thus, when cell 700 is selected (as specified by control signal 613-A) and when D00 of FIG. 6 equals 0, shared line 799 is set to a low logical level (0). If D00 equals 1, transistor 750-D would not conduct, and thus shared line 799 would remain at 1. The other cells (e.g., that corresponding to D01) would not drive shared line 799 to 0 either, since the corresponding select values/control signals would be set to 0.
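  • The precharge-and-conditional-discharge behavior of shared line 799 can be summarized behaviorally: the line starts at 1, and only a cell that is both selected and stores a 0 pulls it low, so after evaluation the line carries the selected cell's bit. The Python model below is purely an illustration of that logic, not of the circuit or its timing.

```python
def evaluate_shared_line(stored_bits, control_lines):
    line = 1                                  # shared line precharged to 1
    for bit, selected in zip(stored_bits, control_lines):
        if selected and bit == 0:             # both series transistors conduct
            line = 0                          # line is discharged
    return line

print(evaluate_shared_line([0, 1], [1, 0]))   # selected cell stores 0 -> 0
print(evaluate_shared_line([1, 0], [1, 0]))   # selected cell stores 1 -> 1
```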
  • [0090] The description is continued with respect to the manner in which shared line 799 may be charged to 1 and the manner in which output value may be evaluated by the corresponding charge and sensing circuits.
  • 11. CHARGING AND SENSING
  • [0091] FIG. 8 is a circuit diagram illustrating the charge and sense circuit 800 in an embodiment of the present invention. Charge and sense circuit 800 may correspond to any of the charge and sense circuits 690-1 through 690-4, even though the description is provided in relation to circuit 690-1. Charge and sense circuit 800 is shown containing two PMOS transistors (810-A and 810-B), two NMOS transistors (810-C and 810-D), a latch represented by two inverter gates 820-A and 820-B, charge initiate signal 801 and evaluation signal 850. The manner in which shared line 799 is charged and the output value is evaluated on path 621 is described below.
  • [0092] Transistor 810-A conducts when gate input 801 is set to 0, causing shared line 799 to be charged. Evaluation signal 850 may be asserted to propagate the value on shared line 799 to output line 621 as described below. When evaluation signal 850 is not asserted, output line 621 remains at the previously propagated value, as also described below.
  • [0093] When evaluation signal 850 is asserted, shared line 799 may be either at 0 or 1. If a 0 is present, P-MOS transistor 810-B conducts, causing point 812 to be raised to 1. The latch containing back-to-back connected inverters 820-A and 820-B inverts the value at point 812, and stores the resulting 0 as desired. On the other hand, if shared line 799 is at 1, transistor 810-B is switched off and transistors 810-C and 810-D conduct, causing point 812 to be set to 0. The latch then provides a value of 1 on output line 621, as desired.
  • [0094] When evaluation signal 850 is not asserted, transistor 810-D does not conduct. As a result, the logical level of point 812 is not changed and the value on output 621 is not affected.
  • [0095] Thus, it may be appreciated that the approach described above can be used to implement a multiplexer while reducing the number of transistors compared to a conventional approach in which multiplexers are implemented using merely combinatorial logic gates (AND and OR gates). Many advantages such as reduced power consumption and die area (in implementations as integrated circuits) may be attained as a result.
  • [0096] In addition, the input to the parser can be different data than the data which is processed according to the results of the parsing operation. Also, the parser may be found useful in IPv6-type environments in which the packet format varies depending on the content of various fields. Thus, the present invention enables a high speed parser to be implemented efficiently for several types of environments.
  • [0097] It should be further understood that several modifications can be made to the described embodiments without departing from the scope and spirit of the present invention as will be apparent to one skilled in the relevant arts based on the disclosure provided herein. For example, NMOS transistors may be used instead of PMOS transistors. Similarly, CAM 410 can be implemented using several approaches as described below.
  • 12. PIPELINED CAM ARCHITECTURE
  • [0098] FIG. 9 is a block diagram illustrating the manner in which CAM 410 can be implemented using pipelining techniques in an embodiment of the present invention. The block diagram is shown containing sub-CAMs 910-A through 910-D and logical AND gates 920-B through 920-D. The components are described below in further detail.
  • [0099] Each sub-CAM (910-A through 910-D) may contain multiple locations, with each location corresponding to a search rule implemented using CAM 410. As may be appreciated, each sub-CAM searches a portion of the search rule. For example, the first location of all the sub-CAMs 910-A through 910-D together implement the search rule in the first location of search field 420.
  • [0100] If there is a match in the first location (at the top) of sub-CAM 910-A, the output line 912-A is asserted to a logical value of 1, or else a 0 is asserted. Each location of the sub-CAMs may contain an output line, even though the output line for only the first location is shown for each sub-CAM.
  • [0101] Each AND gate (920-B through 920-D) performs a logical AND operation of an output line from the previous stage (910-A through 910-C respectively) and an output line of the present stage (910-B through 910-D). AND gates may be provided associated with all the locations of the sub-CAMs, even though AND gates are shown associated with only the first locations.
  • [0102] Thus, line 999 (the output of AND gate 920-D) may be asserted to 1 only if a match is detected in the first location of all four sub-CAMs. When line 999 is asserted to 1, the data in the corresponding output field (not shown) may be provided as multiplexer select values.
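  • The match combination performed by the AND gates can be modeled simply: a location is reported on line 999 only when every sub-CAM matches its slice of that location's rule. The sketch below checks a single location; the rule encoding and the data slicing are assumptions made for illustration only.

```python
def location_matches(per_subcam_rules, data_slices):
    """AND of the per-sub-CAM match lines for one location."""
    def matches(rule, data):
        compare, mask = rule
        return all((d & m) == (c & m) for d, c, m in zip(data, compare, mask))
    return all(matches(r, d) for r, d in zip(per_subcam_rules, data_slices))
```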
  • [0103] The pipelining techniques thus implemented may provide several advantages. For example, each sub-CAM may be implemented using a different technology (e.g., ternary, binary or range search) as is suitable to parse packets of a specific packet format of interest. In addition, the pipelining techniques may facilitate operation of parsers with shorter bus widths.
  • [0104] For example, assuming the CAM width equals 128 bytes, a bus of only 32 bytes may be used, and the sub-CAMs may be implemented with a 32-byte bus. In such a scenario, input bus 901 may be 32 bytes wide and carry four groups of 32 bytes of data in four successive clock cycles, and store strobes 902-A through 902-D may be asserted in the corresponding clock cycles.
  • [0105] In other words, the first 32 bytes of a packet may be provided as an input to sub-CAM 910-A by asserting only store strobe 902-A, and the next 32 bytes may be provided as an input to sub-CAM 910-B by asserting only strobe pulse 902-B, etc. In an embodiment, each bit of the sub-CAMs is implemented using SRAM bit cells well known in the relevant arts.
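  • The staged loading over the narrower bus can be pictured as follows. The sketch assumes a 128-byte search word presented as four 32-byte groups, with a single store strobe asserted per clock cycle, as described above; it models only the data movement, not the timing.

```python
def staged_load(packet, bus_width=32, stages=4):
    sub_cam_inputs = [None] * stages
    for cycle in range(stages):               # four successive clock cycles
        # only store strobe 902-<cycle> is asserted in this cycle
        sub_cam_inputs[cycle] = packet[cycle * bus_width:(cycle + 1) * bus_width]
    return sub_cam_inputs

slices = staged_load(bytes(range(128)))
print([len(s) for s in slices])               # -> [32, 32, 32, 32]
```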
  • [0106] Multiplexers 460-A through 460-Q may also be implemented to operate with a 32-byte bus. As may be appreciated, each multiplexer may contain 1024 cells (similar to the one shown in FIG. 7). The 1024 cells may be partitioned into four groups, with each group storing the bits received in the corresponding clock cycle.
  • [0107] With respect to FIG. 7, write enable line 740 for each cell may be enabled in only one of the four clock cycles. Appropriate AND logic may be used to enable each write enable signal in one of the four clock cycles depending on the specific partition to which the cell belongs. Such an implementation will be apparent to one skilled in the relevant arts based on the disclosure provided herein.
  • 13. CONCLUSION
  • [0108] While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of the present invention should not be limited by any of the above described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims (30)

What is claimed is:
1. A parser receiving an input data according to a protocol, said parser generating a data unit of interest on a corresponding one of a plurality of paths, wherein the bit position in said input data of said data unit of interest can vary according to said protocol, said parser comprising:
a multiplexer receiving a plurality of inputs, said multiplexer selecting one of said plurality of inputs to generate a multiplexer output according to a select value, said plurality of inputs being comprised in said input data; and
a content addressable memory (CAM) programmed with a plurality of search rules, each of said plurality of search rules being stored associated with an output value, said CAM forwarding as said select value the output value stored associated with a matching search rule in response to receiving said input data, said matching search rule being contained in said plurality of search rules,
wherein said output value associated with each search rule indicates a specific one of said plurality of inputs is to be selected according to a bit position at which said data unit of interest is located.
2. The parser of claim 1, wherein said CAM comprises a ternary CAM and each of said plurality of search rules comprises a compare data and a mask, wherein said matching search is determined depending on whether said input data matches the compare data at bit positions specified by the mask for each of said plurality of search rules.
3. The parser of claim 2, further comprising a plurality of multiplexers including said multiplexer, wherein a plurality of output values are stored associated with each of said plurality of search rules, wherein each of said plurality of output values is provided as a select value to a corresponding one of said plurality of multiplexers.
4. The parser of claim 3, wherein said input data comprises a packet according to Internet Protocol, wherein said search rules are designed to search according to a version number of said Internet Protocol.
5. The parser of claim 4, wherein some of said plurality of search rules are designed to search according to a header length field in said packet.
6. The parser of claim 1, wherein said multiplexer comprises a plurality of cells, wherein each of said plurality of cells is designed to select a specified bit of a plurality of bits, said specified bit being indicated by said select value, wherein each of said plurality of bits is contained in a corresponding one of said plurality of inputs, wherein a first cell is contained in said plurality of cells, said first cell comprises:
an output initially charged to a first logical level; and
a pair of transistors coupled in series between said output and a pre-specified voltage level, a first transistor being switched on according to a control signal and a second transistor being switched on according to a first bit stored by said first cell,
wherein said pair of transistors cause said output to be charged to a second logical level when said first bit has one logical value but does not change said first logical level on said output if said first bit has another logical level,
wherein said output is shared by said plurality of cells and only a control signal corresponding to only one of said plurality of cells is set to one state and the control signal of the remaining ones of said plurality of cells are set to another state according to said select value.
7. The parser of claim 6, wherein said first logical level comprises a 1 and said another logical level comprises a 0.
8. The parser of claim 7, wherein said pre-specified voltage level equals ground.
9. The parser of claim 8, wherein each of said first transistor and said second transistor comprises an NMOS transistor, and wherein the gate terminal of said another transistor is connected to an inverted value of said first bit.
10. The parser of claim 9, further comprising a charging circuit to charge said output initially to said first logical level.
11. The parser of claim 10, wherein said output comprises an output line.
12. The parser of claim 6, further comprising a latch storing said first bit.
13. The parser of claim 1, wherein said CAM is implemented as a plurality of CAM units.
14. The parser of claim 13, wherein some of said plurality of CAM units are implemented using one type of CAM technology and some other of said plurality of CAM units are implemented using another type of CAM technology.
15. The parser of claim 14, wherein said one type of CAM technology comprises ternary CAM technology.
16. The parser of claim 13, wherein a first CAM unit and a second CAM unit respectively search a first portion and a second portion, wherein said first CAM unit and said second CAM unit are comprised in said CAM, and each of said first portion and said second portion comprise a field in said plurality of search rules, said parser further comprises an AND gate performing a logical AND operation of a first match signal and a second match signal.
17. The parser of claim 1, wherein each of said plurality of search rules is stored in a location comprised in said CAM, wherein said location comprises a first number of bits, wherein fewer than said first number of bits are stored in one clock cycle and some more bits are stored in a next clock cycle in said CAM.
18. A device processing an input data, said device comprising:
a parser examining said input data and generating a data unit of interest on a corresponding one of a plurality of paths, wherein the bit position in said input data of said data unit of interest can vary according to said protocol, said parser comprising:
a multiplexer receiving a plurality of inputs, said multiplexer selecting one of said plurality of inputs to generate a multiplexer output according to a select value, said plurality of inputs being comprised in said input data; and
a content addressable memory (CAM) programmed with a plurality of search rules, each of said plurality of search rules being stored associated with an output value, said CAM forwarding as said select value the output value stored associated with a matching search rule in response to receiving said input data, said matching search rule being contained in said plurality of search rules,
wherein said output value associated with each search rule indicates a specific one of said plurality of inputs is to be selected according to a bit position at which said data unit of interest is located; and
a processing block receiving said data unit of interest on said multiplexer output irrespective of the bit position of said data unit of interest in said input data.
19. The device of claim 18, wherein said CAM comprises a ternary CAM and each of said plurality of search rules comprises a compare data and a mask, wherein said matching search is determined depending on whether said input data matches the compare data at bit positions specified by the mask for each of said plurality of search rules.
20. The device of claim 19, further comprising a plurality of multiplexers including said multiplexer, wherein a plurality of output values are stored associated with each of said plurality of search rules, wherein each of said plurality of output values is provided as a select value to a corresponding one of said plurality of multiplexers.
21. The device of claim 20, wherein said input data comprises a packet according to Internet Protocol, wherein said search rules are designed to search according to a version number of said Internet Protocol.
22. The device of claim 21, wherein some of said plurality of search rules are designed to search according to a header length field in said packet.
23. The device of claim 18, wherein said multiplexer comprises a plurality of cells, wherein each of said plurality of cells is designed to select a specified bit of a plurality of bits, said specified bit being indicated by said select value, wherein each of said plurality of bits is contained in a corresponding one of said plurality of inputs, wherein a first cell is contained in said plurality of cells, said first cell comprises:
an output initially charged to a first logical level; and
a pair of transistors coupled in series between said output and a pre-specified voltage level, a first transistor being switched on according to a control signal and a second transistor being switched on according to a first bit stored by said first cell,
wherein said pair of transistors cause said output to be charged to a second logical level when said first bit has one logical value but does not change said first logical level on said output if said first bit has another logical level,
wherein said output is shared by said plurality of cells and only a control signal corresponding to only one of said plurality of cells is set to one state and the control signal of the remaining ones of said plurality of cells are set to another state according to said select value.
24. The device of claim 23, wherein said first logical level comprises a 1 and said another logical level comprises a 0.
25. The device of claim 24, wherein said pre-specified voltage level equals ground.
26. The device of claim 25, wherein each of said first transistor and said second transistor comprises an NMOS transistor, and wherein the gate terminal of said another transistor is connected to an inverted value of said first bit.
27. The device of claim 26, further comprising a charging circuit to charge said output initially to said first logical level.
28. The device of claim 27, wherein said output comprises an output line.
29. The device of claim 23, further comprising a latch storing said first bit.
30. The invention of claim 29, wherein said device comprises a router processing IP packets.
US10/361,752 2003-02-11 2003-02-11 High speed parser Active 2026-01-08 US7233597B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/361,752 US7233597B2 (en) 2003-02-11 2003-02-11 High speed parser

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/361,752 US7233597B2 (en) 2003-02-11 2003-02-11 High speed parser

Publications (2)

Publication Number Publication Date
US20040156371A1 true US20040156371A1 (en) 2004-08-12
US7233597B2 US7233597B2 (en) 2007-06-19

Family

ID=32824297

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/361,752 Active 2026-01-08 US7233597B2 (en) 2003-02-11 2003-02-11 High speed parser

Country Status (1)

Country Link
US (1) US7233597B2 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040177275A1 (en) * 2003-03-06 2004-09-09 Rose Kenneth M. Apparatus and method for filtering IP packets
US20050053074A1 (en) * 2003-09-04 2005-03-10 Samsung Electronics Co., Ltd. Apparatus and method for classifying traffic in a distributed architecture router
US20050111445A1 (en) * 2003-11-21 2005-05-26 Samsung Electronics Co., Ltd. Router using switching-before-routing packet processing and method of operation
US20060209895A1 (en) * 2005-03-15 2006-09-21 Fujitsu Limited Network system, layer 3 communication device, layer 2 communication device, and route selection method
US7492763B1 (en) * 2004-07-16 2009-02-17 Applied Micro Circuits Corporation User-specified key creation from attributes independent of encapsulation type
US7873992B1 (en) * 2003-12-04 2011-01-18 Avaya Inc. Dynamic system of autonomous parsers for interpreting arbitrary telecommunication equipment streams
US20110197149A1 (en) * 2010-02-11 2011-08-11 International Business Machines Coporation Xml post-processing hardware acceleration
US20110307655A1 (en) * 2010-06-11 2011-12-15 Beijing Star-Net Ruijie Networks Co., Ltd Method and apparatus for updating table entries of a ternary content addressable memory
US20140307736A1 (en) * 2013-04-16 2014-10-16 Suresh Krishnan Method for providing a parser to extract information from fields of a data packet header
US9055114B1 (en) * 2011-12-22 2015-06-09 Juniper Networks, Inc. Packet parsing and control packet classification
WO2016178172A1 (en) * 2015-05-05 2016-11-10 Gsi Technology Inc. Sram multi-cell operations
US11316889B2 (en) 2015-12-21 2022-04-26 Fortinet, Inc. Two-stage hash based logic for application layer distributed denial of service (DDoS) attack attribution

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7382777B2 (en) * 2003-06-17 2008-06-03 International Business Machines Corporation Method for implementing actions based on packet classification and lookup results
US7366167B2 (en) * 2003-09-19 2008-04-29 Samsung Electronics Co., Ltd. Apparatus and method for hairpinning data packets in an Ethernet MAC chip
US7787471B2 (en) * 2003-11-10 2010-08-31 Broadcom Corporation Field processor for a network device
US7626940B2 (en) * 2004-12-22 2009-12-01 Intruguard Devices, Inc. System and method for integrated header, state, rate and content anomaly prevention for domain name service
US7602731B2 (en) * 2004-12-22 2009-10-13 Intruguard Devices, Inc. System and method for integrated header, state, rate and content anomaly prevention with policy enforcement
US20060187919A1 (en) * 2005-02-18 2006-08-24 Broadcom Corporation Two stage parser for a network
US7869411B2 (en) * 2005-11-21 2011-01-11 Broadcom Corporation Compact packet operation device and method
US8027946B1 (en) 2006-12-22 2011-09-27 Avaya Inc. Higher order logic applied to expert systems for alarm analysis, filtering, correlation and root cause
US8040882B2 (en) * 2008-02-14 2011-10-18 Broadcom Corporation Efficient key sequencer
US8638793B1 (en) * 2009-04-06 2014-01-28 Marvell Israle (M.I.S.L) Ltd. Enhanced parsing and classification in a packet processor
US8681819B2 (en) 2011-01-31 2014-03-25 International Business Machines Corporation Programmable multifield parser packet
US9276851B1 (en) 2011-12-20 2016-03-01 Marvell Israel (M.I.S.L.) Ltd. Parser and modifier for processing network packets
US9111615B1 (en) * 2013-10-01 2015-08-18 Xilinx, Inc. RAM-based ternary content addressable memory
KR102134315B1 (en) * 2018-12-11 2020-07-16 조선대학교산학협력단 A Non-Blocking Updateable Ternary Content-Addressable Memory on FPGA

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5598114A (en) * 1995-09-27 1997-01-28 Intel Corporation High speed reduced area multiplexer
US6087855A (en) * 1998-06-15 2000-07-11 International Business Machines Corporation High performance dynamic multiplexers without clocked NFET
US6617912B1 (en) * 2002-06-13 2003-09-09 Xilinx, Inc. Pass gate multiplexer circuit with reduced susceptibility to single event upsets
US6940814B1 (en) * 1997-06-30 2005-09-06 Sun Microsystems, Inc. System and method for a quality of service in a multi-layer network element

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5598114A (en) * 1995-09-27 1997-01-28 Intel Corporation High speed reduced area multiplexer
US6940814B1 (en) * 1997-06-30 2005-09-06 Sun Microsystems, Inc. System and method for a quality of service in a multi-layer network element
US6087855A (en) * 1998-06-15 2000-07-11 International Business Machines Corporation High performance dynamic multiplexers without clocked NFET
US6617912B1 (en) * 2002-06-13 2003-09-09 Xilinx, Inc. Pass gate multiplexer circuit with reduced susceptibility to single event upsets

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040205228A1 (en) * 2003-03-06 2004-10-14 Rose Kenneth M. Apparatus and method for detecting tiny fragment attacks
US8904513B2 (en) * 2003-03-06 2014-12-02 Cisco Technology, Inc. Apparatus and method for filtering IP packets
US8296452B2 (en) 2003-03-06 2012-10-23 Cisco Technology, Inc. Apparatus and method for detecting tiny fragment attacks
US20040177275A1 (en) * 2003-03-06 2004-09-09 Rose Kenneth M. Apparatus and method for filtering IP packets
US20050053074A1 (en) * 2003-09-04 2005-03-10 Samsung Electronics Co., Ltd. Apparatus and method for classifying traffic in a distributed architecture router
US7362763B2 (en) * 2003-09-04 2008-04-22 Samsung Electronics Co., Ltd. Apparatus and method for classifying traffic in a distributed architecture router
US20050111445A1 (en) * 2003-11-21 2005-05-26 Samsung Electronics Co., Ltd. Router using switching-before-routing packet processing and method of operation
US7672302B2 (en) * 2003-11-21 2010-03-02 Samsung Electronics Co., Ltd. Router using switching-before-routing packet processing and method of operation
US7873992B1 (en) * 2003-12-04 2011-01-18 Avaya Inc. Dynamic system of autonomous parsers for interpreting arbitrary telecommunication equipment streams
US20090207857A1 (en) * 2004-07-16 2009-08-20 Alexander Jr Cedell A User-specified key creation from attributes independent of encapsulation type
US8031709B2 (en) 2004-07-16 2011-10-04 Applied Micro Circuits Corporation User-specified key creation from attributes independent of encapsulation type
US7492763B1 (en) * 2004-07-16 2009-02-17 Applied Micro Circuits Corporation User-specified key creation from attributes independent of encapsulation type
US20060209895A1 (en) * 2005-03-15 2006-09-21 Fujitsu Limited Network system, layer 3 communication device, layer 2 communication device, and route selection method
US7983190B2 (en) * 2005-03-15 2011-07-19 Fujitsu Limited Network system, layer 3 communication device, layer 2 communication device, and route selection method
US9110875B2 (en) * 2010-02-11 2015-08-18 International Business Machines Corporation XML post-processing hardware acceleration
US20110197149A1 (en) * 2010-02-11 2011-08-11 International Business Machines Coporation Xml post-processing hardware acceleration
KR101762953B1 (en) * 2010-02-11 2017-08-04 인터내셔널 비지네스 머신즈 코포레이션 Xml post-processing hardware acceleration
US20110307655A1 (en) * 2010-06-11 2011-12-15 Beijing Star-Net Ruijie Networks Co., Ltd Method and apparatus for updating table entries of a ternary content addressable memory
US8527694B2 (en) * 2010-06-11 2013-09-03 Beijing Star-Net Ruijie Networks Co., Ltd. Method and apparatus for updating table entries of a ternary content addressable memory
US9055114B1 (en) * 2011-12-22 2015-06-09 Juniper Networks, Inc. Packet parsing and control packet classification
US9571396B2 (en) 2011-12-22 2017-02-14 Juniper Networks, Inc. Packet parsing and control packet classification
US20140307736A1 (en) * 2013-04-16 2014-10-16 Suresh Krishnan Method for providing a parser to extract information from fields of a data packet header
US9848068B2 (en) * 2013-04-16 2017-12-19 Telefonaktiebolaget L M Ericsson (Publ) Method for providing a parser to extract information from fields of a data packet header
WO2016178172A1 (en) * 2015-05-05 2016-11-10 Gsi Technology Inc. Sram multi-cell operations
US9558812B2 (en) 2015-05-05 2017-01-31 Gsi Technology Inc. SRAM multi-cell operations
CN106796811A (en) * 2015-05-05 2017-05-31 Gsi科技公司 SRAM multi unit operations
CN110335633A (en) * 2015-05-05 2019-10-15 Gsi科技公司 SRAM multi unit operation
US11316889B2 (en) 2015-12-21 2022-04-26 Fortinet, Inc. Two-stage hash based logic for application layer distributed denial of service (DDoS) attack attribution

Also Published As

Publication number Publication date
US7233597B2 (en) 2007-06-19

Similar Documents

Publication Publication Date Title
US7233597B2 (en) High speed parser
US7237058B2 (en) Input data selection for content addressable memory
EP1425755B1 (en) Concurrent searching of different tables within a content addressable memory
US6721202B1 (en) Bit encoded ternary content addressable memory cell
US6266262B1 (en) Enhanced binary content addressable memory for longest prefix address matching
US7313667B1 (en) Methods and apparatus for mapping fields of entries into new values and combining these mapped values into mapped entries for use in lookup operations such as for packet processing
US6253280B1 (en) Programmable multiple word width CAM architecture
US6181698B1 (en) Network routing table using content addressable memory
US7210003B2 (en) Comparand generation in a content addressable memory
US8599859B2 (en) Iterative parsing and classification
US7783654B1 (en) Multiple string searching using content addressable memory
KR100933916B1 (en) Apparatus and Method for Classifier Identification
Jedhe et al. A scalable high throughput firewall in FPGA
CN112667526B (en) Method and circuit for realizing access control list circuit
US7219187B1 (en) Search parameter table in a content addressable memory
US7694068B1 (en) Re-entrant processing in a content addressable memory
US6972978B1 (en) Content addressable memory (CAM) devices with block select and pipelined virtual sector look-up control and methods of operating same
US6825706B2 (en) Multiplexer
US7114026B1 (en) CAM device having multiple index generators
US6272133B1 (en) Packet filtering method
JP2004046988A (en) Associative memory device and repeater using the same
US20060018142A1 (en) Concurrent searching of different tables within a content addressable memory
KR100459542B1 (en) Internet protocol address look-up device
JP2000151627A (en) Packet filter circuit
JP3700574B2 (en) Search success signal look-ahead output data selection device

Legal Events

Date Code Title Description
AS Assignment

Owner name: TEXAS INSTRUMENTS INCORPORATED, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KUMAR, PAMELA;CHEMPARATHY, CYRIL JOHN;SHARMA, MOHIT;AND OTHERS;REEL/FRAME:013763/0622

Effective date: 20030203

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12