US20010027496A1 - Passing a communication control block to a local device such that a message is processed on the device - Google Patents
Passing a communication control block to a local device such that a message is processed on the device
- Publication number
- US20010027496A1 (application US09/804,553)
- Authority
- US
- United States
- Prior art keywords
- inic
- data
- host
- tcp
- driver
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04L69/169—Special adaptations of TCP, UDP or IP for interworking of IP based networks with other networks
- G06F5/10—Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, e.g. delay lines, FIFO buffers, having a sequence of storage locations each being individually accessible for both enqueue and dequeue operations
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/245—Link aggregation, e.g. trunking
- H04L49/90—Buffering arrangements
- H04L49/901—Buffering arrangements using storage descriptor, e.g. read or write pointers
- H04L49/9063—Intermediate storage in different physical parts of a node or terminal
- H04L49/9094—Arrangements for simultaneous transmit and receive, e.g. simultaneous reading/writing from/to the storage element
- H04L61/00—Network arrangements, protocols or services for addressing or naming
- H04L61/10—Mapping addresses of different types
- H04L61/25—Mapping addresses of the same type
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/34—Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters
- H04L67/62—Establishing a time schedule for servicing the requests
- H04L67/63—Routing a service request depending on the request content or context
- H04L69/16—Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
- H04L69/161—Implementation details of TCP/IP or UDP/IP stack architecture; Specification of modified or new header fields
- H04L69/162—Implementation details of TCP/IP or UDP/IP stack architecture involving adaptations of sockets based mechanisms
- H04L69/163—In-band adaptation of TCP data exchange; In-band control procedures
- H04L69/165—Combined use of TCP and UDP protocols; selection criteria therefor
- H04L69/166—IP fragmentation; TCP segmentation
- H04L69/168—Implementation or adaptation of IP, TCP or UDP specially adapted for link layer protocols, e.g. asynchronous transfer mode [ATM], synchronous optical network [SONET] or point-to-point protocol [PPP]
- H04L69/18—Multiprotocol handlers, e.g. single devices capable of handling multiple protocols
- H04L69/22—Parsing or analysis of headers
- H04L9/40—Network security protocols
- H04Q3/0029—Provisions for intelligent networking
- H04L69/08—Protocols for interworking; Protocol conversion
- H04L69/12—Protocol engines
- H04L69/32—Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level
- H04Q2213/13093—Personal computer, PC
- H04Q2213/13103—Memory
- H04Q2213/13204—Protocols
- H04Q2213/13299—Bus
- H04Q2213/1332—Logic circuits
- H04Q2213/13345—Intelligent networks, SCP
Definitions
- The present invention relates generally to computer or other networks, and more particularly to protocol processing for information communicated between hosts such as computers connected to a network. Such processing is commonly organized according to layered models such as the Open Systems Interconnection (OSI) model and the transport control protocol (TCP)/internet protocol (IP) suite described below.
- Networks may include, for instance, a high-speed bus such as an Ethernet connection or an internet connection between disparate local area networks (LANs), each of which includes multiple hosts, or any of a variety of other known means for data transfer between hosts.
- physical layers are connected to the network at respective hosts, the physical layers providing transmission and receipt of raw data bits via the network.
- a data link layer is serviced by the physical layer of each host, the data link layers providing frame division and error correction to the data received from the physical layers, as well as processing acknowledgment frames sent by the receiving host.
- a network layer of each host is serviced by respective data link layers, the network layers primarily controlling size and coordination of subnets of packets of data.
- a transport layer is serviced by each network layer and a session layer is serviced by each transport layer within each host.
- Transport layers accept data from their respective session layers and split the data into smaller units for transmission to the other host's transport layer, which concatenates the data for presentation to respective presentation layers.
- Session layers allow for enhanced communication control between the hosts.
- Presentation layers are serviced by their respective session layers, the presentation layers translating between data semantics and syntax which may be peculiar to each host and standardized structures of data representation. Compression and/or encryption of data may also be accomplished at the presentation level.
- Application layers are serviced by respective presentation layers, the application layers translating between programs particular to individual hosts and standardized programs for presentation to either an application or an end user.
- the TCP/IP standard includes the lower four layers and application layers, but integrates the functions of session layers and presentation layers into adjacent layers. Generally speaking, application, presentation and session layers are defined as upper layers, while transport, network and data link layers are defined as lower layers.
- The rules and conventions for each layer are called the protocol of that layer, and since the protocols and general functions of each layer are roughly equivalent in various hosts, it is useful to think of communication occurring directly between identical layers of different hosts, even though these peer layers do not directly communicate without information transferring sequentially through each layer below.
- Each lower layer performs a service for the layer immediately above it to help with processing the communicated information.
- Each layer saves the information for processing and service to the next layer. Due to the multiplicity of hardware and software architectures, systems and programs commonly employed, each layer is necessary to insure that the data can make it to the intended destination in the appropriate form, regardless of variations in hardware and software that may intervene.
- the receiving host generally performs the converse of the above-described process, beginning with receiving the bits from the network, as headers are removed and data processed in order from the lowest (physical) layer to the highest (application) layer before transmission to a destination of the receiving host.
- Each layer of the receiving host recognizes and manipulates only the headers associated with that layer, since to that layer the higher layer control data is included with and indistinguishable from the payload data.
- Multiple interrupts, valuable central processing unit (CPU) processing time and repeated data copies may also be necessary for the receiving host to place the data in an appropriate form at its intended destination.
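- To make the layering above concrete, the following minimal C sketch shows the nested headers that conventional receive processing strips off one layer at a time; the structures, field subset and helper function are illustrative assumptions rather than code from this patent.
```c
/* Illustrative only: a simplified picture of the per-layer headers that
 * conventional receive processing strips off in sequence.  Field subset,
 * names and the helper below are assumptions, not the patent's code. */
#include <stdint.h>
#include <stddef.h>

struct eth_hdr  { uint8_t dst[6], src[6]; uint16_t type; };          /* data link */
struct ipv4_hdr { uint8_t ver_ihl, tos; uint16_t len, id, frag;
                  uint8_t ttl, proto; uint16_t csum;
                  uint32_t saddr, daddr; };                           /* network   */
struct tcp_hdr  { uint16_t sport, dport; uint32_t seq, ack;
                  uint8_t off, flags; uint16_t win, csum, urg; };     /* transport */

/* Conventional slow-path receive: each layer consumes its own header and hands
 * the remainder upward -- the per-layer work that the fast-path seeks to avoid. */
const uint8_t *strip_headers(const uint8_t *frame, size_t frame_len,
                             size_t *payload_len)
{
    const uint8_t *p = frame;
    p += sizeof(struct eth_hdr);                     /* data link layer  */
    const struct ipv4_hdr *ip = (const struct ipv4_hdr *)p;
    p += (size_t)(ip->ver_ihl & 0x0f) * 4;           /* network layer    */
    const struct tcp_hdr *tcp = (const struct tcp_hdr *)p;
    p += (size_t)(tcp->off >> 4) * 4;                /* transport layer  */
    *payload_len = frame_len - (size_t)(p - frame);
    return p;                                        /* application data */
}
```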
- a computer network is an interconnected collection of autonomous computers, such as internet and intranet systems, including local area networks (LANs), wide area networks (WANs), asynchronous transfer mode (ATM), ring or token ring, wired, wireless, satellite or other means for providing communication capability between separate processors.
- a computer is defined herein to include a device having both logic and memory functions for processing data, while computers or hosts connected to a network are said to be heterogeneous if they function according to different operating systems or communicate via different architectures.
- the current invention provides a system for processing network communication that greatly increases the speed of that processing and the efficiency of moving the data being communicated.
- the invention has been achieved by questioning the long-standing practice of performing multilayered protocol processing on a general-purpose processor.
- the protocol processing method and architecture that results effectively collapses the layers of a connection-based, layered architecture such as TCP/IP into a single wider layer which is able to send network data more directly to and from a desired location or buffer on a host.
- This accelerated processing is provided to a host for both transmitting and receiving data, and so improves performance whether one or both hosts involved in an exchange of information have such a feature.
- the accelerated processing includes employing representative control instructions for a given message that allow data from the message to be processed via a fast-path which accesses message data directly at its source or delivers it directly to its intended destination.
- This fast-path bypasses conventional protocol processing of headers that accompany the data.
- the fast-path employs a specialized microprocessor designed for processing network communication, avoiding the delays and pitfalls of conventional software layer processing, such as repeated copying and interrupts to the CPU.
- the fast-path replaces the states that are traditionally found in several layers of a conventional network stack with a single state machine encompassing all those layers, in contrast to conventional rules that require rigorous differentiation and separation of protocol layers.
- the host retains a sequential protocol processing stack which can be employed for setting up a fast-path connection or processing message exceptions.
- the specialized microprocessor and the host intelligently choose whether a given message or portion of a message is processed by the microprocessor or the host stack.
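- As a purely conceptual illustration of collapsing per-layer state into one machine, the connection states such an arrangement might track could be enumerated as follows (the names and granularity are assumptions, not taken from this disclosure):
```c
/* Conceptual sketch: one state machine per connection instead of separate
 * state in each protocol layer.  States and names are assumptions. */
enum fastpath_state {
    FP_SLOW_PATH,        /* connection handled by the conventional host stack  */
    FP_HANDOUT_PENDING,  /* host has built a CCB and is passing it to the CPD  */
    FP_FAST_PATH,        /* CPD holds the CCB; data moves directly to or from
                            its host source or destination                     */
    FP_FLUSH_PENDING     /* exception: CCB being returned to the host stack    */
};
```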
- FIG. 1 is a plan view diagram of a system of the present invention, including a host computer having a communication-processing device for accelerating network communication.
- FIG. 2 is a diagram of information flow for the host of FIG. 1 in processing network communication, including a fast-path, a slow-path and a transfer of connection context between the fast and slow-paths.
- FIG. 3 is a flow chart of message receiving according to the present invention.
- FIG. 4A is a diagram of information flow for the host of FIG. 1 receiving a message packet processed by the slow-path.
- FIG. 4B is a diagram of information flow for the host of FIG. 1 receiving an initial message packet processed by the fast-path.
- FIG. 4C is a diagram of information flow for the host of FIG. 4B receiving a subsequent message packet processed by the fast-path.
- FIG. 4D is a diagram of information flow for the host of FIG. 4C receiving a message packet having an error that causes processing to revert to the slow-path.
- FIG. 5 is a diagram of information flow for the host of FIG. 1 transmitting a message by either the fast or slow-paths.
- FIG. 6 is a diagram of information flow for a first embodiment of an intelligent network interface card (INIC) associated with a client having a TCP/IP processing stack.
- FIG. 7 is a diagram of hardware logic for the INIC embodiment shown in FIG. 6, including a packet control sequencer and a fly-by sequencer.
- FIG. 8 is a diagram of the fly-by sequencer of FIG. 7 for analyzing header bytes as they are received by the INIC.
- FIG. 9 is a diagram of information flow for a second embodiment of an INIC associated with a server having a TCP/IP processing stack.
- FIG. 10 is a diagram of a command driver installed in the host of FIG. 9 for creating and controlling a communication control block for the fast-path.
- FIG. 11 is a diagram of the TCP/IP stack and command driver of FIG. 10 configured for NetBios communications.
- FIG. 12 is a diagram of a communication exchange between the client of FIG. 6 and the server of FIG. 9.
- FIG. 13 is a diagram of hardware functions included in the INIC of FIG. 9.
- FIG. 14 is a diagram of a trio of pipelined microprocessors included in the INIC of FIG. 13, including three phases with a processor in each phase.
- FIG. 15A is a diagram of a first phase of the pipelined microprocessor of FIG. 14.
- FIG. 15B is a diagram of a second phase of the pipelined microprocessor of FIG. 14.
- FIG. 15C is a diagram of a third phase of the pipelined microprocessor of FIG. 14.
- FIGS. 16 - 99 are associated with the description below entitled “Disclosure From Provisional Application 60/061,809”.
- FIG. 1 shows a host 20 of the present invention connected by a network 25 to a remote host 22 .
- the increase in processing speed achieved by the present invention can be provided with an intelligent network interface card (INIC) that is easily and affordably added to an existing host, or with a communication processing device (CPD) that is integrated into a host, in either case freeing the host CPU from most protocol processing and allowing improvements in other tasks performed by that CPU.
- the host 20 in a first embodiment contains a CPU 28 and a CPD 30 connected by a PCI bus 33 .
- the CPD 30 includes a microprocessor designed for processing communication data and memory buffers controlled by a direct memory access (DMA) unit.
- Also connected to the PCI bus 33 is a storage device 35 , such as a semiconductor memory or disk drive, along with any related controls.
- the host CPU 28 controls a protocol processing stack 44 housed in storage 35 , the stack including a data link layer 36 , network layer 38 , transport layer 40 , upper layer 46 and an upper layer interface 42 .
- the upper layer 46 may represent a session, presentation and/or application layer, depending upon the particular protocol being employed and message communicated.
- the upper layer interface 42 along with the CPU 28 and any related controls can send or retrieve a file to or from the upper layer 46 or storage 35 , as shown by arrow 48 .
- a connection context 50 has been created, as will be explained below, the context summarizing various features of the connection, such as protocol type and source and destination addresses for each protocol layer.
- the context may be passed between an interface for the session layer 42 and the CPD 30 , as shown by arrows 52 and 54 , and stored as a communication control block (CCB) at either CPD 30 or storage 35 .
- the CPD 30 collapses multiple protocol stacks each having possible separate states into a single state machine for fast-path processing.
- exception conditions may occur that are not provided for in the single state machine, primarily because such conditions occur infrequently and to deal with them on the CPD would provide little or no performance benefit to the host.
- Such exceptions can be CPD 30 or CPU 28 initiated.
- An advantage of the invention includes the manner in which unexpected situations that occur on a fast-path CCB are handled.
- the CPD 30 deals with these rare situations by passing back or flushing to the host protocol stack 44 the CCB and any associated message frames involved, via a control negotiation.
- the exception condition is then processed in a conventional manner by the host protocol stack 44 . At some later time, usually directly after the handling of the exception condition has completed and fast-path processing can resume, the host stack 44 hands the CCB back to the CPD.
- the custom designed network microprocessor can have independent processors for transmitting and receiving network information, and further processors for assisting and queuing.
- a preferred microprocessor embodiment includes a pipelined trio of receive, transmit and utility processors. DMA controllers are integrated into the implementation and work in close concert with the network microprocessor to quickly move data between buffers adjacent the controllers and other locations such as long term storage. Providing buffers logically adjacent to the DMA controllers avoids unnecessary loads on the PCI bus.
- FIG. 3 diagrams the general flow of messages received according to the current invention.
- A large TCP/IP message such as a file transfer may be received by the host from the network in a number of separate transfers of approximately 64 KB each, each of which may be split into many frames or packets of approximately 1.5 KB for transmission over a network.
- Novell NetWare protocol suites running Sequenced Packet Exchange Protocol (SPX) or NetWare Core Protocol (NCP) over Internetwork Packet Exchange (IPX) work in a similar fashion.
- Messages using Transaction TCP (T/TCP or TTCP), a transaction-oriented variant of TCP referenced below, can be handled in a similar fashion.
- each packet conventionally includes a portion of the data being transferred, as well as headers for each of the protocol layers and markers for positioning the packet relative to the rest of the packets of this message.
- When a message packet or frame is received 47 from a network by the CPD, it is first validated by a hardware assist. This includes determining the protocol types of the various layers, verifying relevant checksums, and summarizing 57 these findings into a status word or words. Included in these words is an indication whether or not the frame is a candidate for fast-path data flow. Selection 59 of fast-path candidates is based on whether the host may benefit from this message connection being handled by the CPD, which includes determining whether the packet has header bytes denoting particular protocols, such as TCP/IP or SPX/IPX for example. The small percentage of frames that are not fast-path candidates are sent 61 to the host protocol stacks for slow-path protocol processing.
- Subsequent network microprocessor work with each fast-path candidate determines whether a fast-path connection such as a TCP or SPX CCB is already extant for that candidate, or whether that candidate may be used to set up a new fast-path connection, such as for a TTCP/IP transaction.
- the validation provided by the CPD provides acceleration whether a frame is processed by the fast-path or a slow-path, as only error free, validated frames are processed by the host CPU even for the slow-path processing.
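- The disclosure does not specify the status word bit-by-bit; the hypothetical C layout below merely illustrates how the hardware-assist findings and the fast-path candidacy test (selection 59) might be encoded, with all names and bit assignments assumed.
```c
/* Hypothetical per-frame status word; bit assignments are assumptions made
 * only to illustrate the candidate test described above (selection 59). */
#include <stdint.h>
#include <stdbool.h>

enum frame_status_bits {
    FS_MAC_OK       = 1u << 0,   /* addressed to this host, no link errors   */
    FS_NET_CSUM_OK  = 1u << 1,   /* network-layer header checksum verified   */
    FS_L4_CSUM_OK   = 1u << 2,   /* transport-layer checksum verified        */
    FS_PROTO_TCPIP  = 1u << 3,   /* headers denote TCP/IP                    */
    FS_PROTO_SPXIPX = 1u << 4,   /* headers denote SPX/IPX                   */
    FS_EXCEPTION    = 1u << 5    /* e.g. fragmented: leave to the host stack */
};

/* A frame is a fast-path candidate only if it validated cleanly and carries
 * one of the protocol suites the device knows how to handle. */
bool fast_path_candidate(uint32_t status)
{
    const uint32_t valid = FS_MAC_OK | FS_NET_CSUM_OK | FS_L4_CSUM_OK;
    if ((status & valid) != valid)
        return false;
    if (status & FS_EXCEPTION)
        return false;
    return (status & (FS_PROTO_TCPIP | FS_PROTO_SPXIPX)) != 0;
}
```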
- All received message frames which have been determined by the CPD hardware assist to be fast-path candidates are examined 53 by the network microprocessor or INIC comparator circuits to determine whether they match a CCB held by the CPD.
- the CPD removes lower layer headers and sends 69 the remaining application data from the frame directly into its final destination in the host using direct memory access (DMA) units of the CPD.
- This operation may occur immediately upon receipt of a message packet, for example when a TCP connection already exists and destination buffers have been negotiated, or it may first be necessary to process an initial header to acquire a new set of final destination addresses for this transfer. In this latter case, the CPD will queue subsequent message packets while waiting for the destination address, and then DMA the queued application data to that destination.
- a fast-path candidate that does not match a CCB may be used to set up a new fast-path connection, by sending 65 the frame to the host for sequential protocol processing.
- the host uses this frame to create 51 a CCB, which is then passed to the CPD to control subsequent frames on that connection.
- the CCB which is cached 67 in the CPD, includes control and state information pertinent to all protocols that would have been processed had conventional software layer processing been employed.
- the CCB also contains storage space for per-transfer information used to facilitate moving application-level data contained within subsequent related message packets directly to a host application in a form available for immediate usage.
- the CPD takes command of connection processing upon receiving a CCB for that connection from the host.
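- The receive branching of FIG. 3 can be summarized in the small runnable C sketch below; the structures, cache size and function names are stand-ins chosen for illustration, not the actual CPD implementation.
```c
/* Sketch of the receive dispatch in FIG. 3 -- names and structures are
 * hypothetical stand-ins, not the patent's implementation. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

struct ccb { uint32_t hash; bool valid; };          /* cached connection state */
struct frame { bool candidate; uint32_t hash; };    /* hardware-assist summary */

static struct ccb ccb_cache[256];                   /* CCBs held by the CPD    */

static struct ccb *ccb_lookup(uint32_t hash)
{
    struct ccb *c = &ccb_cache[hash & 255];
    return (c->valid && c->hash == hash) ? c : NULL;
}

static void cpd_receive(const struct frame *f)
{
    if (!f->candidate) {                            /* send 61: slow-path      */
        printf("slow-path: host stack processes frame\n");
    } else if (ccb_lookup(f->hash)) {               /* match: fast-path        */
        printf("fast-path: DMA data to final destination, no host interrupt\n");
    } else {                                        /* send 65 / create 51     */
        printf("hand frame to host to create a CCB for this connection\n");
    }
}

int main(void)
{
    ccb_cache[42] = (struct ccb){ .hash = 42, .valid = true };
    cpd_receive(&(struct frame){ .candidate = true,  .hash = 42 });  /* fast   */
    cpd_receive(&(struct frame){ .candidate = true,  .hash = 7  });  /* create */
    cpd_receive(&(struct frame){ .candidate = false, .hash = 0  });  /* slow   */
    return 0;
}
```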
- Referring to FIG. 4A, when a message packet is received from the remote host 22 via network 25, the packet enters hardware receive logic 32 of the CPD 30, which checksums headers and data, and parses the headers, creating a word or words which identify the message packet and status, storing the headers, data and word temporarily in memory 60. As well as validating the packet, the receive logic 32 indicates with the word whether this packet is a candidate for fast-path processing.
- FIG. 4A depicts the case in which the packet is not a fast-path candidate, in which case the CPD 30 sends the validated headers and data from memory 60 to data link layer 36 along an internal bus for processing by the host CPU, as shown by arrow 56 .
- the packet is processed by the host protocol stack 44 of data link 36 , network 38 , transport 40 and session 42 layers, and data (D) 63 from the packet may then be sent to storage 35 , as shown by arrow 65 .
- FIG. 4B depicts the case in which the receive logic 32 of the CPD determines that a message packet is a candidate for fast-path processing, for example by deriving from the packet's headers that the packet belongs to a TCP/IP, TTCP/IP or SPX/IPX message.
- a processor 55 in the CPD 30 then checks to see whether the word that summarizes the fast-path candidate matches a CCB held in a cache 62 . Upon finding no match for this packet, the CPD sends the validated packet from memory 60 to the host protocol stack 44 for processing.
- Host stack 44 may use this packet to create a connection context for the message, including finding and reserving a destination for data from the message associated with the packet, the context taking the form of a CCB.
- the present embodiment employs a single specialized host stack 44 for processing both fast-path and non-fast-path candidates, while in an embodiment described below fast-path candidates are processed by a different host stack than non-fast-path candidates.
- Some data (D1) 66 from that initial packet may optionally be sent to the destination in storage 35 , as shown by arrow 68 .
- the CCB is then sent to the CPD 30 to be saved in cache 62 , as shown by arrow 64 .
- the initial packet may be part of a connection initialization dialogue that transpires between hosts before the CCB is created and passed to the CPD 30 .
- When a subsequent packet of the message is received, as shown in FIG. 4C, the packet headers and data are validated by the receive logic 32, and the headers are parsed to create a summary of the message packet and a hash for finding a corresponding CCB, the summary and hash contained in a word or words.
- the word or words are temporarily stored in memory 60 along with the packet.
- the processor 55 checks for a match between the hash and each CCB that is stored in the cache 62 and, finding a match, sends the data (D2) 70 via a fast-path directly to the destination in storage 35 , as shown by arrow 72 , bypassing the session layer 42 , transport layer 40 , network layer 38 and data link layer 36 .
- the remaining data packets from the message can also be sent by DMA directly to storage, avoiding the relatively slow protocol layer processing and repeated copying by the CPU stack 44 .
- FIG. 4D shows the procedure for handling the rare instance when a message for which a fast-path connection has been established, such as shown in FIG. 4C, has a packet that is not easily handled by the CPD.
- the packet is sent to be processed by the protocol stack 44 , which is handed the CCB for that message from cache 62 via a control dialogue with the CPD, as shown by arrow 76 , signaling to the CPU to take over processing of that message.
- Slow-path processing by the protocol stack then results in data (D3) 80 from the packet being sent, as shown by arrow 82 , to storage 35 .
- Once the exception has been handled by the host stack 44, the CCB can be handed back via a control dialogue to the cache 62, so that payload data from subsequent packets of that message can again be sent via the fast-path of the CPD 30.
- the CPU and CPD together decide whether a given message is to be processed according to fast-path hardware processing or more conventional software processing by the CPU.
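- A minimal sketch of the CCB ownership hand-off used for such exceptions is shown below; the enum and function names are assumptions intended only to make the flush-and-handback negotiation concrete.
```c
/* Sketch of the CCB ownership hand-off used for exceptions (FIG. 4D).
 * The enum and function names are assumptions for illustration only. */
#include <stdio.h>

enum ccb_owner { OWNER_CPD, OWNER_HOST };

struct connection {
    enum ccb_owner owner;   /* who currently processes frames for this CCB */
};

/* On an exception frame, the CPD flushes the CCB (and any queued frames) to
 * the host stack for conventional processing. */
static void on_exception(struct connection *c)
{
    c->owner = OWNER_HOST;
    printf("CPD -> host: flush CCB and queued frames for slow-path handling\n");
}

/* After the host finishes, it hands the CCB back so the fast-path resumes. */
static void on_exception_resolved(struct connection *c)
{
    c->owner = OWNER_CPD;
    printf("host -> CPD: hand CCB back, fast-path resumes\n");
}

int main(void)
{
    struct connection c = { .owner = OWNER_CPD };
    on_exception(&c);            /* e.g. an unexpected TCP option or error */
    on_exception_resolved(&c);
    return (c.owner == OWNER_CPD) ? 0 : 1;
}
```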
- Transmission of a message from the host 20 to the network 25 for delivery to remote host 22 also can be processed by either sequential protocol software processing via the CPU or accelerated hardware processing via the CPD 30 , as shown in FIG. 5.
- a message (M) 90 that is selected by CPU 28 from storage 35 can be sent to session layer 42 for processing by stack 44 , as shown by arrows 92 and 96 .
- Should the message instead be handled by the fast-path, data packets can bypass host stack 44 and be sent by DMA directly to memory 60, with the processor 55 adding to each data packet a single header containing all the appropriate protocol layers, and sending the resulting packets to the network 25 for transmission to remote host 22.
- This fast-path transmission can greatly accelerate processing for even a single packet, with the acceleration multiplied for a larger message.
- a message for which a fast-path connection is not extant thus may benefit from creation of a CCB with appropriate control and state information for guiding fast-path transmission.
- For a traditional connection-based message, such as typified by TCP/IP or SPX/IPX, the CCB is created during connection initialization dialogue.
- For a fast-connection transaction such as a TTCP exchange, on the other hand, the CCB can be created with the same transaction that transmits payload data.
- the transmission of payload data may be a reply to a request that was used to set up the fast-path connection.
- the CCB provides protocol and status information regarding each of the protocol layers, including which user is involved and storage space for per-transfer information.
- the CCB is created by protocol stack 44 , which then passes the CCB to the CPD 30 by writing to a command register of the CPD, as shown by arrow 98 .
- the processor 55 moves network frame-sized portions of the data from the source in host memory 35 into its own memory 60 using DMA, as depicted by arrow 99 .
- the processor 55 then prepends appropriate headers and checksums to the data portions, and transmits the resulting frames to the network 25 , consistent with the restrictions of the associated protocols.
- After the CPD 30 has received an acknowledgement that all the data has reached its destination, the CPD will then notify the host by writing to a response buffer.
- fast-path transmission of data communications also relieves the host CPU of per-frame processing.
- a vast majority of data transmissions can be sent to the network by the fast-path.
- Both the input and output fast-paths attain a huge reduction in interrupts by functioning at an upper layer level, i.e., session level or higher, and interactions between the network microprocessor and the host occur using the full transfer sizes which that upper layer wishes to make.
- an interrupt only occurs (at the most) at the beginning and end of an entire upper-layer message transaction, and there are no interrupts for the sending or receiving of each lower layer portion or packet of that transaction.
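- The transmit side can be illustrated with the hypothetical C sketch below, in which the host issues one command for an entire upper-layer transfer and the device segments it into frames; the register and structure layout, the 1460-byte payload size and all names are assumptions.
```c
/* Sketch of the fast-path transmit command flow (FIG. 5).  Register layout,
 * structure fields and names are hypothetical, shown only to make the
 * "one command in, one completion out per upper-layer message" idea concrete. */
#include <stdio.h>
#include <stdint.h>

#define MSS 1460u                          /* payload bytes per network frame */

struct xmit_command {                      /* written to the CPD command register */
    uint32_t ccb_id;                       /* fast-path connection to use      */
    uint64_t host_addr;                    /* source of the message data       */
    uint32_t length;                       /* full upper-layer transfer size   */
};

/* The CPD (not the host CPU) walks the transfer in MSS-sized pieces,
 * prepending a complete header to each frame; the host sees one completion. */
static unsigned cpd_transmit(const struct xmit_command *cmd)
{
    unsigned frames = 0;
    for (uint32_t off = 0; off < cmd->length; off += MSS) {
        /* DMA cmd->host_addr + off into device memory, prepend MAC/IP/TCP
         * headers built from the CCB, and send the frame on the wire. */
        frames++;
    }
    printf("CCB %u: %u bytes sent as %u frames, one response to host\n",
           (unsigned)cmd->ccb_id, (unsigned)cmd->length, frames);
    return frames;
}

int main(void)
{
    struct xmit_command cmd = { .ccb_id = 1, .host_addr = 0, .length = 64 * 1024 };
    return cpd_transmit(&cmd) == 45 ? 0 : 1;   /* 64 KB / 1460 B -> 45 frames  */
}
```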
- a simplified intelligent network interface card (INIC) 150 is shown in FIG. 6 to provide a network interface for a host 152 .
- Hardware logic 171 of the INIC 150 is connected to a network 155, with a peripheral component interconnect (PCI) bus 157 connecting the INIC and the host.
- the host 152 in this embodiment has a TCP/IP protocol stack, which provides a slow-path 158 for sequential software processing of message frames received from the network 155 .
- the host 152 protocol stack includes a data link layer 160 , network layer 162 , a transport layer 164 and an application layer 166 , which provides a source or destination 168 for the communication data in the host 152 .
- Other layers which are not shown, such as session and presentation layers, may also be included in the host stack 152 , and the source or destination may vary depending upon the nature of the data and may actually be the application layer.
- the INIC 150 has a network processor 170 which chooses between processing messages along a slow-path 158 that includes the protocol stack of the host, or along a fast-path 159 that bypasses the protocol stack of the host.
- Each received packet is processed on the fly by hardware logic 171 contained in INIC 150 , so that all of the protocol headers for a packet can be processed without copying, moving or storing the data between protocol layers.
- the hardware logic 171 processes the headers of a given packet at one time as packet bytes pass through the hardware, by categorizing selected header bytes. Results of processing the selected bytes help to determine which other bytes of the packet are categorized, until a summary of the packet has been created, including checksum validations.
- the processed headers and data from the received packet are then stored in INIC storage 185 , as well as the word or words summarizing the headers and status of the packet.
- a received message packet first enters a media access controller 172 , which controls INIC access to the network and receipt of packets and can provide statistical information for network protocol management. From there, data flows one byte at a time into an assembly register 174 , which in this example is 128 bits wide.
- the data is categorized by a fly-by sequencer 178 , as will be explained in more detail with regard to FIG. 8, which examines the bytes of a packet as they fly by, and generates status from those bytes that will be used to summarize the packet. The status thus created is merged with the data by a multiplexor 180 and the resulting data stored in SRAM 182 .
- a packet control sequencer 176 oversees the fly-by sequencer 178 , examines information from the media access controller 172 , counts the bytes of data, generates addresses, moves status and manages the movement of data from the assembly register 174 to SRAM 182 and eventually DRAM 188 .
- the packet control sequencer 176 manages a buffer in SRAM 182 via SRAM controller 183 , and also indicates to a DRAM controller 186 when data needs to be moved from SRAM 182 to a buffer in DRAM 188 .
- the packet control sequencer 176 will move the status that has been generated in the fly-by sequencer 178 out to the SRAM 182 and to the beginning of the DRAM 188 buffer to be prepended to the packet data.
- the packet control sequencer 176 requests a queue manager 184 to enter a receive buffer descriptor into a receive queue, which in turn notifies the processor 170 that the packet has been processed by hardware logic 171 and its status summarized.
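- The hand-off from the packet control sequencer to the queue manager might resemble the following descriptor-ring sketch; the descriptor fields and ring layout are illustrative assumptions rather than the INIC's actual queues.
```c
/* Sketch of the receive-descriptor hand-off to the queue manager described
 * above; descriptor fields and ring layout are illustrative assumptions. */
#include <stdint.h>
#include <stdbool.h>

struct rx_descriptor {
    uint32_t dram_addr;        /* DRAM buffer holding status word + packet */
    uint16_t length;           /* bytes stored, including prepended status */
    uint16_t status;           /* summary built by the fly-by sequencer    */
};

#define RXQ_SIZE 64            /* power of two so the indices wrap cheaply */

struct rx_queue {
    struct rx_descriptor entries[RXQ_SIZE];
    uint32_t head, tail;       /* head: processor 170, tail: sequencer 176 */
};

static bool rxq_push(struct rx_queue *q, struct rx_descriptor d)
{
    if (q->tail - q->head == RXQ_SIZE)
        return false;                              /* queue full            */
    q->entries[q->tail++ % RXQ_SIZE] = d;          /* sequencer posts entry */
    return true;
}

static bool rxq_pop(struct rx_queue *q, struct rx_descriptor *out)
{
    if (q->head == q->tail)
        return false;                              /* nothing to process    */
    *out = q->entries[q->head++ % RXQ_SIZE];       /* processor drains it   */
    return true;
}

int main(void)
{
    struct rx_queue q = {0};
    rxq_push(&q, (struct rx_descriptor){ .dram_addr = 0x1000, .length = 60 });
    struct rx_descriptor d;
    return rxq_pop(&q, &d) ? 0 : 1;
}
```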
- FIG. 8 shows that the fly-by sequencer 178 has several tiers, with each tier generally focusing on a particular portion of the packet header and thus on a particular protocol layer, for generating status pertaining to that layer.
- the fly-by sequencer 178 in this embodiment includes a media access control sequencer 191 , a network sequencer 192 , a transport sequencer 194 and a session sequencer 195 . Sequencers pertaining to higher protocol layers can additionally be provided.
- the fly-by sequencer 178 is reset by the packet control sequencer 176 and given pointers by the packet control sequencer that tell the fly-by sequencer whether a given byte is available from the assembly register 174 .
- the media access control sequencer 191 determines, by looking at bytes 0 - 5 , that a packet is addressed to host 152 rather than or in addition to another host. Offsets 12 and 13 of the packet are also processed by the media access control sequencer 191 to determine the type field, for example whether the packet is Ethernet or 802.3. If the type field is Ethernet those bytes also tell the media access control sequencer 191 the packet's network protocol type. For the 802.3 case, those bytes instead indicate the length of the entire frame, and the media access control sequencer 191 will check eight bytes further into the packet to determine the network layer type.
- the network sequencer 192 validates that the header length received has the correct length, and checksums the network layer header.
- the network layer header is known to be IP or IPX from analysis done by the media access control sequencer 191 . Assuming for example that the type field is 802.3 and the network protocol is IP, the network sequencer 192 analyzes the first bytes of the network layer header, which will begin at byte 22 , in order to determine IP type. The first bytes of the IP header will be processed by the network sequencer 192 to determine what IP type the packet involves.
- Determining that the packet involves, for example, IP version 4, directs further processing by the network sequencer 192 , which also looks at the protocol type located ten bytes into the IP header for an indication of the transport header protocol of the packet. For example, for IP over Ethernet, the IP header begins at offset 14 , and the protocol type byte is offset 23 , which will be processed by network logic to determine whether the transport layer protocol is TCP, for example. From the length of the network layer header, which is typically 20-40 bytes, network sequencer 192 determines the beginning of the packet's transport layer header for validating the transport layer header. Transport sequencer 194 may generate checksums for the transport layer header and data, which may include information from the IP header in the case of TCP at least.
- transport sequencer 194 also analyzes the first few bytes in the transport layer portion of the header to determine, in part, the TCP source and destination ports for the message, such as whether the packet is NetBios or other protocols.
- Byte 12 of the TCP header is processed by the transport sequencer 194 to determine and validate the TCP header length.
- Byte 13 of the TCP header contains flags that may, aside from ack flags and push flags, indicate unexpected options, such as reset and fin, that may cause the processor to categorize this packet as an exception.
- TCP offset bytes 16 and 17 are the checksum, which is pulled out and stored by the hardware logic 171 while the rest of the frame is validated against the checksum.
- Session sequencer 195 determines the length of the session layer header, which in the case of NetBios is only four bytes, two of which tell the length of the NetBios payload data, but which can be much larger for other protocols.
- the session sequencer 195 can also be used to categorize the type of message as read or write, for example, for which the fast-path may be particularly beneficial.
- Further upper layer logic processing, depending upon the message type, can be performed by the hardware logic 171 of packet control sequencer 176 and fly-by sequencer 178 .
- hardware logic 171 intelligently directs hardware processing of the headers by categorization of selected bytes from a single stream of bytes, with the status of the packet being built from classifications determined on the fly.
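- A software analogue of this on-the-fly classification is sketched below; the byte offsets follow the text (type field at bytes 12 and 13, IP header at offset 14, protocol byte at offset 23), while the remaining structure and names are simplifying assumptions.
```c
/* Software analogue of the fly-by classification described above (FIG. 8).
 * Offsets follow the text; everything else is a simplified assumption. */
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

struct summary { bool is_ip; bool is_tcp; uint16_t src_port, dst_port; };

bool classify(const uint8_t *pkt, size_t len, struct summary *s)
{
    if (len < 54) return false;                      /* too short for Eth+IP+TCP */
    uint16_t type = (uint16_t)(pkt[12] << 8 | pkt[13]);
    s->is_ip = (type == 0x0800);                     /* Ethernet II carrying IP  */
    if (!s->is_ip) return false;                     /* (802.3 case not handled) */
    const uint8_t *ip = pkt + 14;                    /* IP header begins at 14   */
    size_t ihl = (size_t)(ip[0] & 0x0f) * 4;         /* header length in bytes   */
    s->is_tcp = (ip[9] == 6);                        /* protocol byte: 6 == TCP  */
    if (!s->is_tcp) return false;
    const uint8_t *tcp = ip + ihl;
    s->src_port = (uint16_t)(tcp[0] << 8 | tcp[1]);  /* e.g. NetBios session 139 */
    s->dst_port = (uint16_t)(tcp[2] << 8 | tcp[3]);
    return true;
}
```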
- the packet control sequencer 176 adds the status information generated by the fly-by sequencer 178 and any status information generated by the packet control sequencer 176 , and prepends (adds to the front) that status information to the packet, for convenience in handling the packet by the processor 170 .
- the additional status information generated by the packet control sequencer 176 includes media access controller 172 status information and any errors discovered, or data overflow in either the assembly register or DRAM buffer, or other miscellaneous information regarding the packet.
- the packet control sequencer 176 also stores entries into a receive buffer queue and a receive statistics queue via the queue manager 184 .
- An advantage of processing a packet by hardware logic 171 is that the packet does not, in contrast with conventional sequential software protocol processing, have to be stored, moved, copied or pulled from storage for processing each protocol layer header, offering dramatic increases in processing efficiency and savings in processing time for each packet.
- The packets can be processed at the rate bits are received from the network, for example 100 megabits/second for a 100BASE-T connection. The time for categorizing a packet received at this rate and having a length of sixty bytes is thus about 5 microseconds.
- The total time for processing this packet with the hardware logic 171 and sending packet data to its host destination via the fast-path may be about 16 microseconds or less, assuming a 66 MHz PCI bus, whereas conventional software protocol processing by a 300 MHz Pentium II® processor may take as much as 200 microseconds in a busy system. More than an order of magnitude decrease in processing time can thus be achieved with fast-path 159 in comparison with a high-speed CPU employing conventional sequential software protocol processing, demonstrating the dramatic acceleration provided by processing the protocol headers by the hardware logic 171 and processor 170, without even considering the additional time savings afforded by the reduction in CPU interrupts and host bus bandwidth savings.
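- For reference, the five-microsecond categorization figure above follows directly from the line rate: a sixty-byte packet is 60 × 8 = 480 bits, and 480 bits divided by 100 megabits/second is 4.8 microseconds, or roughly 5 microseconds.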
- the processor 170 chooses, for each received message packet held in storage 185 , whether that packet is a candidate for the fast-path 159 and, if so, checks to see whether a fast-path has already been set up for the connection that the packet belongs to. To do this, the processor 170 first checks the header status summary to determine whether the packet headers are of a protocol defined for fast-path candidates. If not, the processor 170 commands DMA controllers in the INIC 150 to send the packet to the host for slow-path 158 processing. Even for a slow-path 158 processing of a message, the INIC 150 thus performs initial procedures such as validation and determination of message type, and passes the validated message at least to the data link layer 160 of the host.
- the processor 170 checks to see whether the header status summary matches a CCB held by the INIC. If so, the data from the packet is sent along fast-path 159 to the destination 168 in the host. If the fast-path 159 candidate's packet summary does not match a CCB held by the INIC, the packet may be sent to the host 152 for slow-path processing to create a CCB for the message. Employment of the fast-path 159 may also not be needed or desirable for the case of fragmented messages or other complexities. For the vast majority of messages, however, the INIC fast-path 159 can greatly accelerate message processing.
- the INIC 150 thus provides a single state machine processor 170 that decides whether to send data directly to its destination, based upon information gleaned on the fly, as opposed to the conventional employment of a state machine in each of several protocol layers for determining the destiny of a given packet.
- a protocol driver of the host selects the processing route based upon whether the indication is fast-path or slow-path.
- a TCP/IP or SPX/IPX message has a connection that is set up from which a CCB is formed by the driver and passed to the INIC for matching with and guiding the fast-path packet to the connection destination 168 .
- the driver can create a connection context for the transaction from processing an initial request packet, including locating the message destination 168 , and then passing that context to the INIC in the form of a CCB for providing a fast-path for a reply from that destination.
- a CCB includes connection and state information regarding the protocol layers and packets of the message.
- a CCB can include source and destination media access control (MAC) addresses, source and destination IP or IPX addresses, source and destination TCP or SPX ports, TCP variables such as timers, receive and transmit windows for sliding window protocols, and information denoting the session layer protocol.
- Caching the CCBs in a hash table in the INIC provides quick comparisons with words summarizing incoming packets to determine whether the packets can be processed via the fast-path 159 , while the full CCBs are also held in the INIC for processing.
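- A CCB record holding the fields listed above, cached in a hash table keyed by the connection addresses and ports, might be sketched in C as follows; the field names, hash function and table size are assumptions for illustration only.
```c
/* Sketch of a CCB record holding the fields listed above, cached in a hash
 * table keyed by the connection addresses and ports.  Field names, hash and
 * table size are illustrative assumptions, not the patent's layout. */
#include <stdint.h>
#include <stddef.h>

struct ccb {
    uint8_t  src_mac[6], dst_mac[6];       /* media access control addresses  */
    uint32_t src_ip, dst_ip;               /* IP (or IPX) addresses           */
    uint16_t src_port, dst_port;           /* TCP (or SPX) ports              */
    uint32_t snd_nxt, rcv_nxt;             /* TCP sequence state              */
    uint16_t snd_wnd, rcv_wnd;             /* sliding-window sizes            */
    uint32_t rto_ms;                       /* retransmission timer            */
    uint8_t  session_proto;                /* e.g. NetBios over TCP           */
    uint8_t  in_use;
};

#define CCB_TABLE_SIZE 256                 /* INIC cache of offloaded CCBs    */
static struct ccb ccb_table[CCB_TABLE_SIZE];

static uint32_t tuple_hash(uint32_t sip, uint32_t dip, uint16_t sp, uint16_t dp)
{
    return (sip ^ dip ^ ((uint32_t)sp << 16) ^ dp) % CCB_TABLE_SIZE;
}

/* Compare an incoming packet summary with the cached CCBs; a hit means the
 * packet's data can move on the fast-path. */
struct ccb *ccb_match(uint32_t sip, uint32_t dip, uint16_t sp, uint16_t dp)
{
    struct ccb *c = &ccb_table[tuple_hash(sip, dip, sp, dp)];
    if (c->in_use && c->src_ip == sip && c->dst_ip == dip &&
        c->src_port == sp && c->dst_port == dp)
        return c;
    return NULL;                           /* no match: slow-path or new CCB  */
}
```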
- Other ways to accelerate this comparison include software processes such as a B-tree or hardware assists such as a content addressable memory (CAM).
- When INIC microcode or comparator circuits detect a match with the CCB, a DMA controller places the data from the packet in the destination 168, without any interrupt by the CPU, protocol processing or copying.
- the destination of the data may be the session, presentation or application layers, or a file buffer cache in the host 152 .
- FIG. 9 shows an INIC 200 connected to a host 202 that is employed as a file server.
- This INIC provides a network interface for several network connections employing the 802.3u standard, commonly known as Fast Ethernet.
- the INIC 200 is connected by a PCI bus 205 to the server 202 , which maintains a TCP/IP or SPX/IPX protocol stack including MAC layer 212 , network layer 215 , transport layer 217 and application layer 220 , with a source/destination 222 shown above the application layer, although as mentioned earlier the application layer can be the source or destination.
- the INIC is also connected to network lines 210 , 240 , 242 and 244 , which are preferably fast Ethernet, twisted pair, fiber optic, coaxial cable or other lines each allowing data transmission of 100 Mb/s, while faster and slower data rates are also possible.
- Network lines 210 , 240 , 242 and 244 are each connected to a dedicated row of hardware circuits which can each validate and summarize message packets received from their respective network line.
- line 210 is connected with a first horizontal row of sequencers 250
- line 240 is connected with a second horizontal row of sequencers 260
- line 242 is connected with a third horizontal row of sequencers 262
- line 244 is connected with a fourth horizontal row of sequencers 264 .
- a network processor 230 determines, based on that summary and a comparison with any CCBs stored in the INIC 200 , whether to send a packet along a slow-path 231 for processing by the host.
- a large majority of packets can avoid such sequential processing and have their data portions sent by DMA along a fast-path 237 directly to the data destination 222 in the server according to a matching CCB.
- the fast-path 237 provides an avenue to send data directly from the source 222 to any of the network lines by processor 230 division of the data into packets and addition of full headers for network transmission, again minimizing CPU processing and interrupts.
- For clarity, only horizontal sequencer row 250 is shown active; in actuality each of the sequencer rows 250, 260, 262 and 264 offers full duplex communication, concurrently with all other sequencer rows.
- The specialized INIC 200 is much faster at working with message packets than even advanced general-purpose host CPUs that process those headers sequentially according to the software protocol stack.
- SMB (server message block) is an upper layer protocol that operates over TCP/IP.
- SMB can operate in conjunction with redirector software that determines whether a required resource for a particular operation, such as a printer or a disk upon which a file is to be written, resides in or is associated with the host from which the operation was generated or is located at another host connected to the network, such as a file server.
- SMB and server/redirector are conventionally serviced by the transport layer; in the present invention SMB and redirector can instead be serviced by the INIC. In this case, sending data by the DMA controllers from the INIC buffers when receiving a large SMB transaction may greatly reduce interrupts that the host must handle.
- An SMB transmission of the present invention follows essentially the reverse of the above described SMB receive, with data transferred from the host to the INIC and stored in buffers, while the associated protocol headers are prepended to the data in the INIC, for transmission via a network line to a remote host.
- Processing by the INIC of the multiple packets and multiple TCP, IP, NetBios and SMB protocol layers via custom hardware and without repeated interrupts of the host can greatly increase the speed of transmitting an SMB message to a network line.
- a message command driver 300 may be installed in host 202 to work in concert with a host protocol stack 310 .
- the command driver 300 can intervene in message reception or transmittal, create CCBs and send or receive CCBs from the INIC 200 , so that functioning of the INIC, aside from improved performance, is transparent to a user.
- Also provided are an INIC memory 304 and an INIC miniport driver 306, which can direct message packets received from network 210 to either the conventional protocol stack 310 or the command protocol stack 300, depending upon whether a packet has been labeled as a fast-path candidate.
- the conventional protocol stack 310 has a data link layer 312 , a network layer 314 and a transport layer 316 for conventional, lower layer processing of messages that are not labeled as fast-path candidates and therefore not processed by the command stack 300 . Residing above the lower layer stack 310 is an upper layer 318 , which represents a session, presentation and/or application layer, depending upon the message communicated.
- the command driver 300 similarly has a data link layer 320 , a network layer 322 and a transport layer 325 .
- the driver 300 includes an upper layer interface 330 that determines, for transmission of messages to the network 210 , whether a message transmitted from the upper layer 318 is to be processed by the command stack 300 and subsequently the INIC fast-path, or by the conventional stack 310 .
- when the upper layer interface 330 receives an appropriate message from the upper layer 318 that would conventionally be intended for transmission to the network after protocol processing by the protocol stack of the host, the message is passed to driver 300 .
- the INIC acquires network-sized portions of the message data for that transmission via INIC DMA units, prepends headers to the data portions and sends the resulting message packets down the wire.
- miniport driver 306 diverts that message packet to command driver 300 for processing.
- the driver 300 processes the message packet to create a context for that message, with the driver 302 passing the context and command instructions back to the INIC 200 as a CCB for sending data of subsequent messages for the same connection along a fast-path.
- a least recently used (LRU) algorithm is employed for the case when the INIC cache is full.
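- A minimal sketch of such an LRU replacement policy is given below in C; the slot structure, the fixed cache size of sixteen entries and the use counter are illustrative assumptions rather than the actual INIC implementation, and load_ccb( ) merely stands in for the DMA of a CCB into the cache.

    #include <stdint.h>

    #define CCB_CACHE_SLOTS 16          /* assumed on-chip cache size         */

    struct ccb_slot {
        uint32_t ccb_id;                /* which CCB occupies this slot       */
        uint32_t last_used;             /* monotonically increasing use stamp */
        int      valid;
    };

    static struct ccb_slot cache[CCB_CACHE_SLOTS];
    static uint32_t        use_clock;

    /* Stand-in for the DMA of a CCB from off-chip memory into this slot. */
    static void load_ccb(uint32_t ccb_id, struct ccb_slot *slot)
    {
        (void)ccb_id;
        (void)slot;
    }

    /* Return the slot holding ccb_id, loading it (and evicting the least
     * recently used slot) if it is not already cached. */
    struct ccb_slot *ccb_cache_get(uint32_t ccb_id)
    {
        int i, victim = 0;
        uint32_t oldest = ~0u;

        for (i = 0; i < CCB_CACHE_SLOTS; i++) {
            if (cache[i].valid && cache[i].ccb_id == ccb_id) {
                cache[i].last_used = ++use_clock;   /* refresh on a hit   */
                return &cache[i];
            }
            if (!cache[i].valid) {
                victim = i;                         /* a free slot wins   */
                oldest = 0;
            } else if (cache[i].last_used < oldest) {
                victim = i;                         /* oldest slot so far */
                oldest = cache[i].last_used;
            }
        }
        load_ccb(ccb_id, &cache[victim]);           /* evict and reload   */
        cache[victim].ccb_id    = ccb_id;
        cache[victim].valid     = 1;
        cache[victim].last_used = ++use_clock;
        return &cache[victim];
    }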
- the driver 300 can also create a connection context for a TTCP request which is passed to the INIC 200 as a CCB, allowing fast-path transmission of a TTCP reply to the request.
- a message having a protocol that is not accelerated can be processed conventionally by protocol stack 310 .
- FIG. 11 shows a TCP/IP implementation of command driver software for Microsoft® protocol messages.
- a conventional host protocol stack 350 includes MAC layer 353 , IP layer 355 and TCP layer 358 .
- a command driver 360 works in concert with the host stack 350 to process network messages.
- the command driver 360 includes a MAC layer 363 , an IP layer 366 and an Alacritech TCP (ATCP) layer 373 .
- the conventional stack 350 and command driver 360 share a network driver interface specification (NDIS) layer 375 , which interacts with the INIC miniport driver 306 .
- the INIC miniport driver 306 sorts receive indications for processing by either the conventional host stack 350 or the ATCP driver 360 .
- a TDI filter driver and upper layer interface 380 similarly determines whether messages sent from a TDI user 382 to the network are diverted to the command driver and perhaps to the fast-path of the INIC, or processed by the host stack.
- FIG. 12 depicts a typical SMB exchange between a client 190 and server 290 , both of which have communication devices of the present invention, the communication devices each holding a CCB defining their connection for fast-path movement of data.
- the client 190 includes INIC 150 , 802.3 compliant data link layer 160 , IP layer 162 , TCP layer 164 , NetBios layer 166 , and SMB layer 168 .
- the client has a slow-path 157 and fast-path 159 for communication processing.
- the server 290 includes INIC 200 , 802.3 compliant data link layer 212 , IP layer 215 , TCP layer 217 , NetBios layer 220 , and SMB 222 .
- the server is connected to network lines 240 , 242 and 244 , as well as line 210 which is connected to client 190 .
- the server also has a slow-path 231 and fast-path 237 for communication processing.
- the client may begin by sending a Read Block Raw (RBR) SMB command across network 210 requesting the first 64 KB of that file on the server 290 .
- the RBR command may be only 76 bytes, for example, so the INIC 200 on the server will recognize the message type (SMB) and relatively small message size, and send the 76 bytes directly via the fast-path to NetBios of the server.
- NetBios will give the data to SMB, which processes the Read request and fetches the 64 KB of data into server data buffers. SMB then calls NetBios to send the data, and NetBios outputs the data for the client.
- NetBios would call TCP output and pass 64 KB to TCP, which would divide the data into 1460 byte segments and output each segment via IP and eventually MAC (slow-path 231 ).
- the 64 KB data goes to the ATCP driver along with an indication regarding the client-server SMB connection, which denotes a CCB held by the INIC.
- the INIC 200 then proceeds to DMA 1460 byte segments from the host buffers, add the appropriate headers for TCP, IP and MAC at one time, and send the completed packets on the network 210 (fast-path 237 ).
- the INIC 200 will repeat this until the whole 64 KB transfer has been sent. Usually after receiving acknowledgement from the client that the 64 KB has been received, the INIC will then send the remaining 36 KB also by the fast-path 237 .
- With INIC 150 operating on the client 190 when this reply arrives, the INIC 150 recognizes from the first frame received that this connection is receiving fast-path 159 processing (TCP/IP, NetBios, matching a CCB), and the ATCP may use this first frame to acquire buffer space for the message. This is done by passing the first 128 bytes of the NetBios portion of the frame via the ATCP fast-path directly to the host NetBios; that will give NetBios/SMB all of the frame's headers. NetBios/SMB will analyze these headers, realize by matching with a request ID that this is a reply to the original RawRead connection, and give the ATCP a 64K list of buffers into which to place the data.
- Once the client buffer list is given to the ATCP, it passes that transfer information to the INIC 150 , and the INIC 150 starts DMAing any frame data that has accumulated into those buffers.
- FIG. 13 provides a simplified diagram of the INIC 200, which combines the functions of a network interface controller and a protocol processor in a single ASIC chip 400 .
- the INIC 200 in this embodiment offers a full-duplex, four channel, 10/100-Megabit per second (Mbps) intelligent network interface controller that is designed for high speed protocol processing for server applications.
- the INIC 200 can be connected to personal computers, workstations, routers or other hosts anywhere that TCP/IP, TTCP/IP or SPX/IPX protocols are being utilized.
- the INIC 200 is connected with four network lines 210 , 240 , 242 and 244 , which may transport data along a number of different conduits, such as twisted pair, coaxial cable or optical fiber, each of the connections providing a media independent interface (MII).
- the lines preferably are 802.3 compliant and in connection with the INIC constitute four complete Ethernet nodes, the INIC supporting 10Base-T, 10Base-T2, 100Base-TX, 100Base-FX and 100Base-T4 as well as future interface standards. Physical layer identification and initialization is accomplished through host driver initialization routines.
- the connection between the network lines 210 , 240 , 242 and 244 and the INIC 200 is controlled by MAC units MAC-A 402 , MAC-B 404 , MAC-C 406 and MAC-D 408 which contain logic circuits for performing the basic functions of the MAC sublayer, essentially controlling when the INIC accesses the network lines 210 , 240 , 242 and 244 .
- the MAC units 402 - 408 may act in promiscuous, multicast or unicast modes, allowing the INIC to function as a network monitor, receive broadcast and multicast packets and implement multiple MAC addresses for each node.
- the MAC units 402 - 408 also provide statistical information that can be used for simple network management protocol (SNMP).
- the MAC units 402 , 404 , 406 and 408 are each connected to a transmit and receive sequencer, XMT & RCV-A 418 , XMT & RCV-B 420 , XMT & RCV-C 422 and XMT & RCV-D 424 , by wires 410 , 412 , 414 and 416 , respectively.
- Each of the transmit and receive sequencers can perform several protocol processing steps on the fly as message frames pass through that sequencer.
- the transmit and receive sequencers 418 - 424 can compile the packet status for the data link, network, transport, session and, if appropriate, presentation and application layer protocols in hardware, greatly reducing the time for such protocol processing compared to conventional sequential software engines.
- the transmit and receive sequencers 418 - 424 are connected, by lines 426 , 428 , 430 and 432 , to an SRAM and DMA controller 444 , which includes DMA controllers 438 and SRAM controller 442 .
- Static random access memory (SRAM) buffers 440 are coupled with SRAM controller 442 by line 441 .
- the SRAM and DMA controllers 444 interact across line 446 with external memory control 450 to send and receive frames via external memory bus 455 to and from dynamic random access memory (DRAM) buffers 460 , which are located adjacent to the IC chip 400 .
- the DRAM buffers 460 may be configured as 4 MB, 8 MB, 16 MB or 32 MB, and may optionally be disposed on the chip.
- the SRAM and DMA controllers 444 are connected via line 464 to a PCI Bus Interface Unit (BIU) 468 , which manages the interface between the INIC 200 and the PCI interface bus 257 .
- the 64-bit, multiplexed BIU 468 provides a direct interface to the PCI bus 257 for both slave and master functions.
- the INIC 200 is capable of operating in either a 64-bit or 32-bit PCI environment, while supporting 64-bit addressing in either configuration.
- a microprocessor 470 is connected by line 472 to the SRAM and DMA controllers 444 , and connected via line 475 to the PCI BIU 468 .
- Microprocessor 470 instructions and register files reside in an on chip control store 480 , which includes a writable on-chip control store (WCS) of SRAM and a read only memory (ROM), and is connected to the microprocessor by line 477 .
- the microprocessor 470 offers a programmable state machine which is capable of processing incoming frames, processing host commands, directing network traffic and directing PCI bus traffic.
- Three processors are implemented using shared hardware in a three level pipelined architecture that launches and completes a single instruction for every clock cycle.
- a receive processor 482 is dedicated to receiving communications while a transmit processor 484 is dedicated to transmitting communications in order to facilitate full duplex communication, while a utility processor 486 offers various functions including overseeing and controlling PCI register access.
- the instructions for the three processors 482 , 484 and 486 reside in the on-chip control-store 480 .
- the INIC 200 in this embodiment can support up to 256 CCBs, which are maintained in a table in the DRAM 460 . There is also, however, a CCB index in hash order in the SRAM 440 to save sequential searching. Once a hash has been generated, the CCB is cached in SRAM, with up to sixteen cached CCBs in SRAM in this example. These cache locations are shared between the transmit 484 and receive 482 processors so that the processor with the heavier load is able to use more cache buffers. There are also eight header buffers and eight command buffers to be shared between the sequencers. A given header or command buffer is not statically linked to a specific CCB buffer, as the link is dynamic on a per-frame basis.
- FIG. 14 shows an overview of the pipelined microprocessor 470 , in which instructions for the receive, transmit and utility processors are executed in three distinct phases according to Clock increments I, II and III, the phases corresponding to each of the pipeline stages. Each phase is responsible for different functions, and each of the three processors occupies a different phase during each Clock increment. Each processor usually operates upon a different instruction stream from the control store 480 , and each carries its own program counter and status through each of the phases.
- a first instruction phase 500 of the pipelined microprocessors completes an instruction and stores the result in a destination operand, fetches the next instruction, and stores that next instruction in an instruction register.
- a first register set 490 provides a number of registers including the instruction register, and a set of controls 492 for first register set provides the controls for storage to the first register set 490 .
- a second instruction phase 560 has an instruction decoder and operand multiplexer 498 that generally decodes the instruction that was stored in the instruction register of the first register set 490 and gathers any operands which have been generated, which are then stored in a decode register of a second register set 496 .
- the first register set 490 , second register set 496 and a third register set 501 , which is employed in a third instruction phase 600 , include many of the same registers, as will be seen in the more detailed views of FIGS. 15 A-C.
- the instruction decoder and operand multiplexer 498 can read from two address and data ports of the RAM file register 533 , which operates in both the first phase 500 and second phase 560 .
- a third phase 600 of the processor 470 has an arithmetic logic unit (ALU) 602 which generally performs any ALU operations on the operands from the second register set, storing the results in a results register included in the third register set 501 .
- a stack exchange 608 can reorder register stacks, and a queue manager 503 can arrange queues for the processor 470 , the results of which are stored in the third register set.
- each Clock increment takes 15 nanoseconds to complete, for a total of 45 nanoseconds to complete one instruction for each of the three processors.
- the instruction phases are depicted in more detail in FIGS. 15 A-C, in which each phase is shown in a different figure.
- FIG. 15A shows some specific hardware functions of the first phase 500 , which generally includes the first register set 490 and related controls 492 .
- the controls for the first register set 492 include an SRAM control 502 , which is a logical control for loading address and write data into SRAM address and data registers 520 .
- a load control 504 similarly provides controls for writing a context for a file to file context register 522 .
- another load control 506 provides controls for storing a variety of miscellaneous data to flip-flop registers 525 .
- ALU condition codes, such as whether a carry bit is set, get clocked into ALU condition codes register 528 without an operation being performed in the first phase 500 .
- Flag decodes 508 can perform various functions, such as setting locks, that get stored in flag registers 530 .
- the RAM file register 533 has a single write port for addresses and data and two read ports for addresses and data, so that more than one register can be read from at one time. As noted above, the RAM file register 533 essentially straddles the first and second phases, as it is written in the first phase 500 and read from in the second phase 560 .
- a control store instruction 510 allows the reprogramming of the processors with new data from the control store 480 (not shown in this figure), the instructions being stored in an instruction register 535 . The address for this instruction is generated in a fetch control register 511 , which determines which address to fetch, the address being stored in fetch address register 538 .
- Load control 515 provides instructions for a program counter 540 , which operates much like the fetch address for the control store. A last-in first-out stack 544 of three registers is copied to the first register set without undergoing other operations in this phase. Finally, a load control 517 for a debug address 548 is optionally included, which allows correction of errors that may occur.
- FIG. 15B depicts the second microprocessor phase 560 , which includes reading addresses and data out of the RAM file register 533 .
- a scratch SRAM 565 is written from SRAM address and data register 520 of the first register set, which includes a register that passes through the first two phases to be incremented in the third.
- the scratch SRAM 565 is read by the instruction decoder and operand multiplexer 498 , as are most of the registers from the first register set, with the exception of the stack 544 , debug address 548 and SRAM address and data register mentioned above.
- the instruction decoder and operand multiplexer 498 looks at the various registers of set 490 and SRAM 565 , decodes the instructions and gathers the operands for operation in the next phase, in particular determining the operands to provide to the ALU 602 below.
- the outcome of the instruction decoder and operand multiplexer 498 is stored to a number of registers in the second register set 496 , including ALU operands 579 and 582 , ALU condition code register 580 , and a queue channel and command register 587 , which in this embodiment can control thirty-two queues.
- registers in set 496 are loaded fairly directly from the instruction register 535 above without substantial decoding by the decoder 498 , including a program control 590 , a literal field 589 , a test select 584 and a flag select 585 .
- Other registers such as the file context 522 of the first phase 500 are always stored in a file context 577 of the second phase 560 , but may also be treated as an operand that is gathered by the multiplexer 572 .
- the stack registers 544 are simply copied in stack register 594 .
- the program counter 540 is incremented 568 in this phase and stored in register 592 .
- Also incremented 570 is the optional debug address 548 , and a load control 575 may be fed from the pipeline 505 at this point in order to allow error control in each phase, the result stored in debug address 598 .
- FIG. 15C depicts the third microprocessor phase 600 , which includes ALU and queue operations.
- the ALU 602 includes an adder, priority encoders and other standard logic functions. Results of the ALU are stored in registers ALU output 618 , ALU condition codes 620 and destination operand results 622 .
- a file context register 616 , flag select register 626 and literal field register 630 are simply copied from the previous phase 560 .
- a test multiplexer 604 is provided to determine whether a conditional jump results in a jump, with the results stored in a test results register 624 . This test may instead be performed in the first phase 500 along with similar decisions such as fetch control 511 .
- a stack exchange 608 shifts a stack up or down by fetching a program counter from stack 594 or putting a program counter onto that stack, results of which are stored in program control 634 , program counter 638 and stack 640 registers.
- the SRAM address may optionally be incremented in this phase 600 .
- Another load control 610 for another debug address 642 may be forced from the pipeline 505 at this point in order to allow error control in this phase also.
- a queue RAM and queue ALU 606 reads from the queue channel and command register 587 , stores in SRAM and rearranges queues, adding or removing data and pointers as needed to manage the queues of data, sending results to the test multiplexer 604 and a queue flags and queue address register 628 .
- the queue RAM and ALU 606 assumes the duties of managing queues for the three processors, a task conventionally performed sequentially by software on a CPU, the queue manager 606 instead providing accelerated and substantially parallel hardware queuing.
- Protocol processing speed is tremendously accelerated by specially designed protocol processing hardware as compared with a general purpose CPU running conventional protocol software, and interrupts to the host CPU are also substantially reduced.
- the protocol processing hardware and CPU intelligently decide which device processes a given message, and can change the allocation of that processing based upon conditions of the message.
- the NIC moves the data into pre-allocated network buffers in system main memory. From there the data is read into the CPU cache so that it can be checksummed (assuming of course that the protocol in use requires checksums. Some, like IPX, do not.).
- Once the data has been fully processed by the protocol stack, it can then be moved into its final destination in memory. Since the CPU is moving the data, and must read the destination cache line in before it can fill it and write it back out, this involves at a minimum 2 more trips across the system memory bus. In short, the best one can hope for is that the data will get moved across the system memory bus 4 times before it arrives in its final destination. It can, and does, get worse.
- the data gets copied yet another time while being moved up the protocol stack. In NT 4.0, this occurs between the miniport driver interface and the protocol driver interface. This can add up to a whopping 8 trips across the system memory bus (the 4 trips described above, plus the move to replenish the cache, plus 3 more to copy from the miniport to the protocol driver). That's enough to bring even today's advanced memory busses to their knees.
- a 64 k SMB request (write or read-reply) is typically made up of 44 TCP segments when running over Ethernet (1500 byte MTU). Each of these segments may result in an interrupt to the CPU. Furthermore, since TCP must acknowledge all of this incoming data, it's possible to get another 44 transmit-complete interrupts as a result of sending out the TCP acknowledgements. While this is possible, it is not notably likely. Delayed ACK timers allow us to acknowledge more than one segment at a time. And delays in interrupt processing may mean that we are able to process more than one incoming network frame per interrupt. Nevertheless, even if we assume 4 incoming frames per interrupt, and an acknowledgement for every 2 segments (as is typical per the ACK-every-other-segment property of TCP), we are still left with 33 interrupts per 64 k SMB request.
- Interrupts tend to be very costly to the system. Often when a system is interrupted, important information must be flushed or invalidated from the system cache so that the interrupt routine instructions, and needed data can be pulled into the cache. Since the CPU will return to its prior location after the interrupt, it is likely that the information flushed from the cache will immediately need to be pulled back into the cache.
- Typical NICs operate using descriptor rings. When a frame arrives, the NIC reads a receive descriptor from system memory to determine where to place the data. Once the data has been moved to main memory, the descriptor is then written back out to system memory with status about the received frame. Transmit operates in a similar fashion. The CPU must notify the NIC that it has a new transmit. The NIC will read the descriptor to locate the data, read the data itself, and then write the descriptor back with status about the send. Typically on transmits the NIC will then read the next expected descriptor to see if any more data needs to be sent. In short, each receive or transmit frame results in 3 or 4 separate PCI reads or writes (not counting the status register read).
- Alacritech was formed with the idea that the network processing described above could be offloaded onto a cost-effective Intelligent Network Interface Card (INIC). With the Alacritech INIC, we address each of the above problems, resulting in the following advancements:
- a context is required to keep track of information that spans many, possibly discontiguous, pieces of information.
- the first context is required to reassemble IP fragments. It holds information about the status of the IP reassembly as well as any checksum information being calculated across the IP datagram (UDP or TCP). This context is identified by the IP_ID of the datagram as well as the source and destination IP addresses.
- the second context is required to handle the sliding window protocol of TCP. It holds information about which segments have been sent or received, and which segments have been acknowledged, and is identified by the IP source and destination addresses and TCP source and destination ports.
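- A rough sketch, in C, of what these two contexts might hold is shown below; the structure and field names are illustrative assumptions only, chosen to match the identifying fields described above.

    #include <stdint.h>

    /* Hypothetical layouts for the two receive contexts described above. */

    struct ip_reasm_ctx {                 /* identified by IP_ID + addresses  */
        uint16_t ip_id;
        uint32_t src_ip, dst_ip;
        uint32_t bytes_received;          /* reassembly progress              */
        uint32_t csum_partial;            /* running TCP/UDP checksum         */
    };

    struct tcp_window_ctx {               /* identified by the full 4-tuple   */
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
        uint32_t snd_una;                 /* oldest unacknowledged sequence   */
        uint32_t snd_nxt;                 /* next sequence number to send     */
        uint32_t rcv_nxt;                 /* next sequence number expected    */
        uint16_t snd_wnd, rcv_wnd;        /* advertised windows               */
    };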
- TCP performs a Maximum Segment Size negotiation at connection establishment time, which should prevent IP fragmentation in nearly all TCP connections.
- the only time that we should end up with fragmented TCP connections is when there is a router in the middle of a connection which must fragment the segments to support a smaller MTU.
- the only networks that use a smaller MTU than Ethernet are serial line interfaces such as SLIP and PPP.
- the fastest of these connections only run at 128 k (ISDN) so even if we had 256 of these connections, we would still only need to support 34 Mb/sec, or a little over three 10 bT connections worth of data. This is not enough to justify any performance enhancements that the INIC offers. If this becomes an issue at some point, we may decide to implement the MTU discovery algorithm, which should prevent TCP fragmentation on all connections (unless an ICMP redirect changes the connection route while the connection is established).
- UDP is another matter. Since UDP does not support the notion of a Maximum Segment Size, it is the responsibility of IP to break down a UDP datagram into MTU sized packets. Thus, fragmented UDP datagrams are very common.
- the most common UDP application running today is NFSV2 over UDP. While this is also the most common version of NFS running today, the current version of Solaris being sold by Sun Microsystems runs NFSV3 over TCP by default. We can expect to see the NFSV2/UDP traffic start to decrease over the coming years. In summary, we will only offer assistance to non-fragmented TCP connections on the INIC.
- Retransmission Timeout: occurs when we do not get an acknowledgement for previously sent data within the expected time period.
- TCP operates without experiencing any exceptions between 97 and 100 percent of the time in local area networks. As network, router, and switch reliability improve this number is likely to only improve with time.
- the answer shown in FIG. 16 is to use two modes of operation: One in which the network frames are processed on the INIC through TCP and one in which the card operates like a typical dumb NIC.
- In the slow-path case, network frames are handed to the system at the MAC layer and passed up through the host protocol stack like any other network frame.
- In the fast-path case, network data is given to the host after the headers have been processed and stripped.
- the transmit case works in much the same fashion.
- In slow-path mode, the packets are given to the INIC with all of the headers attached.
- the INIC simply sends these packets out as if it were a dumb NIC.
- In fast-path mode, the host gives raw data to the INIC, which must carve it into MSS-sized segments, add headers to the data, perform checksums on each segment, and then send it out on the wire.
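- The following C sketch illustrates the fast-path transmit idea of carving raw data into MSS-sized segments; the frame layout, the 1460-byte MSS and the stub helpers are assumptions, since the real work is done by the INIC hardware rather than by host code.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define MSS      1460           /* assumed Ethernet-derived segment size */
    #define HDR_LEN  54             /* MAC (14) + IP (20) + TCP (20) headers */

    struct frame {
        uint8_t data[HDR_LEN + MSS];
        size_t  len;
    };

    /* Stand-ins for the hardware steps; the real device would DMA the
     * payload, build headers from the TCB and checksum on the fly. */
    static void build_headers(struct frame *f, uint32_t seq, size_t paylen)
    {
        memset(f->data, 0, HDR_LEN);    /* headers and checksums go here */
        (void)seq;
        (void)paylen;
    }

    static void mac_transmit(const struct frame *f) { (void)f; }

    void fastpath_send(const uint8_t *payload, size_t len, uint32_t start_seq)
    {
        uint32_t seq = start_seq;

        while (len > 0) {
            struct frame f;
            size_t chunk = len < MSS ? len : MSS;

            memcpy(f.data + HDR_LEN, payload, chunk);   /* payload from host */
            build_headers(&f, seq, chunk);
            f.len = HDR_LEN + chunk;
            mac_transmit(&f);

            payload += chunk;
            len     -= chunk;
            seq     += (uint32_t)chunk;
        }
    }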
- a TCB is a structure that contains the entire context associated with a connection. This includes the source and destination IP addresses and source and destination TCP ports that define the connection. It also contains information about the connection itself such as the current send and receive sequence numbers, and the first-hop MAC address, etc.
- the complete set of TCBs exists in host memory, but a subset of these may be “owned” by the card at any given time. This subset is the TCB cache.
- the INIC can own up to 256 TCBs at any given time.
- TCBs are initialized by the host during TCP connection setup. Once the connection has achieved a “steady-state” of operation, its associated TCB can then be turned over to the INIC, putting us into fast-path mode. From this point on, the INIC owns the connection until either a FIN arrives signaling that the connection is being closed, or until an exception occurs which the INIC is not designed to handle (such as an out of order segment). When any of these conditions occur, the NIC will then flush the TCB back to host memory, and issue a message to the host telling it that it has relinquished control of the connection, thus putting the connection back into slow-path mode. From this point on, the INIC simply hands incoming segments that are destined for this TCB off to the host with all of the headers intact.
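- The ownership hand-off just described can be pictured with the small host-side sketch below; the state names and helper calls are placeholders used only to show the transitions between slow-path and fast-path ownership.

    #include <stdint.h>

    enum tcb_owner { TCB_HOST_SLOW_PATH, TCB_INIC_FAST_PATH };

    struct tcb_state {
        uint32_t       conn_id;
        enum tcb_owner owner;
        int            steady_state;      /* set once TCP setup has completed */
    };

    /* Stand-ins for the command exchange with the card. */
    static void send_handout_command(uint32_t conn_id)   { (void)conn_id; }
    static void accept_flush_from_inic(uint32_t conn_id) { (void)conn_id; }

    void maybe_hand_out(struct tcb_state *t)
    {
        if (t->owner == TCB_HOST_SLOW_PATH && t->steady_state) {
            send_handout_command(t->conn_id);   /* the card now owns the TCB */
            t->owner = TCB_INIC_FAST_PATH;
        }
    }

    void on_flush(struct tcb_state *t)
    {
        /* A FIN or an exception the card cannot handle gives the TCB back;
         * later frames for the connection arrive with headers intact. */
        accept_flush_from_inic(t->conn_id);
        t->owner = TCB_HOST_SLOW_PATH;
    }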
- When a frame is received by the INIC, it must verify it completely before it even determines whether it belongs to one of its TCBs or not. This includes all header validation (is it IP, IPV4 or V6, is the IP header checksum correct, is the TCP checksum correct, etc.). Once this is done it must compare the source and destination IP address and the source and destination TCP port with those in each of its TCBs to determine if it is associated with one of its TCBs. This is an expensive process. To expedite this, we have added several features in hardware to assist us. The header is fully parsed by hardware and its type is summarized in a single status word.
- the checksum is also verified automatically in hardware, and a hash key is created out of the IP addresses and TCP ports to expedite TCB lookup.
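- One simple way such a hash key could be formed is sketched below; the XOR-and-fold scheme and the 8-bit key width are assumptions, not the hardware's actual hash function.

    #include <stdint.h>

    /* Fold the TCP 4-tuple into a small key used to index a TCB table. */
    static uint8_t tcb_hash(uint32_t src_ip, uint32_t dst_ip,
                            uint16_t src_port, uint16_t dst_port)
    {
        uint32_t h = src_ip ^ dst_ip ^ (((uint32_t)src_port << 16) | dst_port);

        h ^= h >> 16;                 /* fold the word down to 8 bits */
        h ^= h >> 8;
        return (uint8_t)h;            /* index into a 256-entry table */
    }

- The key would merely select a small set of candidate TCBs; the full source and destination addresses and ports are still compared against each candidate to confirm the match.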
- This section defines the INIC's relation to the host's transport layer interface (called TDI, or Transport Driver Interface, in Windows NT). For full details on this interface, refer to the Alacritech TCP (ATCP) driver specification (Heading 4).
- NT has provided a mechanism by which a transport driver can “indicate” a small amount of data to a client above it while telling it that it has more data to come. The client, having then received enough of the data to know what it is, is then responsible for allocating a block of memory and passing the memory address or addresses back down to the transport driver, which is in turn responsible for moving the data into the provided location.
- PCI reads are particularly inefficient in that they completely stall the reader until the transaction completes. As noted above, this could hold a CPU up for several microseconds, a thousand times the time typically required to execute a single instruction.
- PCI writes on the other hand, are usually buffered by the memory-bus PCI-bridge allowing the writer to continue on with other instructions. This technique is known as “posting”.
- the only PCI read that is required by most NICs is the read of the interrupt status register. This register gives the host CPU information about what event has caused an interrupt (if any). In the design of our INIC we have elected to place this necessary status register into host memory. Thus, when an event occurs on the INIC, it writes the status register to an agreed upon location in host memory. The corresponding driver on the host reads this local register to determine the cause of the interrupt. The interrupt lines are held high until the host clears the interrupt by writing to the INIC's Interrupt Clear Register. Shadow registers are maintained on the INIC to ensure that events are not lost.
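- A host-side sketch of this arrangement follows; the register names, bit assignments and helper routines are placeholders, the point being that the interrupt handler reads the status from host memory and only performs a posted PCI write to clear it.

    #include <stdint.h>

    #define ISR_HEADER_BUFFER   0x01u     /* assumed bit assignments          */
    #define ISR_RESPONSE_BUFFER 0x02u

    volatile uint32_t isr_shadow;             /* DMA'd here by the INIC       */
    volatile uint32_t *interrupt_clear_reg;   /* mapped INIC register (write) */

    static void process_header_buffers(void)   { /* walk queue, check valid fields */ }
    static void process_response_buffers(void) { /* pick up completed commands     */ }

    void inic_interrupt_service(void)
    {
        uint32_t events = isr_shadow;         /* no PCI read is needed        */

        *interrupt_clear_reg = events;        /* posted PCI write, cheap      */

        if (events & ISR_HEADER_BUFFER)
            process_header_buffers();
        if (events & ISR_RESPONSE_BUFFER)
            process_response_buffers();
    }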
- a small buffer contains roughly 200 bytes of data payload, as well as extra fields containing status about the received data bringing the total size to 256 bytes. We can therefore pass 16 of these small buffers at a time to the INIC. Large buffers are 2 k in size. They are used to contain any fast or slow-path data that does not fit in a small buffer. Note that when we have a large fast-path receive, a small buffer will be used to indicate a small piece of the data, while the remainder of the data will be DMA'd directly into memory.
- the first segment will contain the NetBIOS header, which contains the total NetBIOS length.
- a small chunk of this first segment is provided to the host by filling in a small receive buffer, modifying the interrupt status register on the host, and raising the appropriate interrupt line.
- Upon receiving the interrupt, the host will read the ISR, clear it by writing back to the INIC's Interrupt Clear Register, and will then process its small receive buffer queue looking for receive buffers to be processed. Upon finding the small buffer, it will indicate the small amount of data up to the client to be processed by NetBIOS. It will also, if necessary, replenish the receive buffer pool on the INIC by passing off a page's worth of small buffers.
- the NetBIOS client will allocate a memory pool large enough to hold the entire NetBIOS message, and will pass this address or set of addresses down to the transport driver.
- the transport driver will allocate an INIC command buffer, fill it in with the list of addresses, set the command type to tell the INIC that this is where to put the receive data, and then pass the command off to the INIC by writing to the command register.
- When the INIC receives the command buffer, it will DMA the remainder of the NetBIOS data, as it is received, into the memory address or addresses designated by the host.
- the INIC will complete the command by writing to the response buffer with the appropriate status and command buffer identifier.
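- The host side of this sequence is outlined in the C sketch below; the small-buffer layout, the 200-byte payload area and both helper routines are assumptions for illustration, not actual ATCP, TDI or INIC interfaces.

    #include <stddef.h>
    #include <stdint.h>

    struct small_buffer {
        uint32_t netbios_length;      /* total message length from the header */
        uint16_t data_length;         /* bytes of payload in this buffer      */
        uint8_t  data[200];           /* assumed small-buffer payload area    */
    };

    /* The TDI client consumes the indicated bytes, allocates a pool large
     * enough for 'available' bytes and returns it (in reality an MDL). */
    static void *client_indicate(const void *data, size_t indicated,
                                 size_t available)
    {
        (void)data; (void)indicated; (void)available;
        return NULL;
    }

    /* Fill an INIC command buffer with the page addresses and write the
     * command register so the card DMAs the rest of the message there. */
    static void post_receive_command(void *mdl, size_t total)
    {
        (void)mdl; (void)total;
    }

    void on_small_buffer(const struct small_buffer *sb)
    {
        /* Indicate the first piece along with the total NetBIOS length. */
        void *mdl = client_indicate(sb->data, sb->data_length,
                                    sb->netbios_length);

        /* Hand the client's buffer list back to the INIC for the remainder. */
        post_receive_command(mdl, sb->netbios_length);
    }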
- When the INIC receives a frame that does not contain a TCP segment for one of its TCBs, it simply passes it to the host as if it were a dumb NIC. If the frame fits into a small buffer (roughly 200 bytes or less), then it simply fills in the small buffer with the data and notifies the host. Otherwise it places the data in a large buffer, writes the address of the large buffer into a small buffer, and again notifies the host. The host, having received the interrupt and found the completed small buffer, checks to see if the data is contained in the small buffer, and if not, locates the large buffer. Having found the data, the host will then pass the frame upstream to be processed by the standard protocol stack. It must also replenish the INIC's small and large receive buffer pool if necessary.
- Suppose the client has a small amount of data, say 400 bytes, to send. It will issue the TDI Send to the transport driver, which will allocate a command buffer, fill it in with the address of the 400-byte send, and set the command to indicate that it is a transmit. It will then pass the command off to the INIC by writing to the command register. The INIC will then DMA the 400 bytes into its own memory, prepare a frame with the appropriate checksums and headers, and send the frame out on the wire. After it has received the acknowledgement it will then notify the host of the completion by writing to a response buffer.
- This section describes the host interface strategy for the Alacritech Intelligent Network Interface Card (INIC).
- the goal of the Alacritech INIC is to not only process network data through TCP, but also to provide zero-copy support for the SMB upper-layer protocol. It achieves this by supporting two paths for sending and receiving data, the fast-path and the slow-path.
- the fast path data flow corresponds to connections that are maintained on the NIC, while slow-path traffic corresponds to network data for which the NIC does not have a connection.
- the fast-path flow works by passing a header to the host and subsequently holding further data for that connection on the card until the host responds via an INIC command with a set of buffers into which to place the accumulated data.
- In the slow-path data flow, the INIC will be operating as a “dumb” NIC, so that these packets are simply dumped into frame buffers on the host as they arrive. To do either path requires a pool of smaller buffers to be used for headers and a pool of data buffers for frames/data that are too large for the header buffer, with both pools being managed by the INIC. This section discusses how these two pools of data are managed as well as how buffers are associated with a given context.
- the fast-path flow puts a header into a header buffer that is then forwarded to the host.
- the host uses the header to determine what further data is following, allocates the necessary host buffers, and these are passed back to the INIC via a command to the INIC.
- the INIC then fills these buffers from data it was accumulating on the card and notifies the host by sending a response to the command.
- the fast-path may receive a header and data that is a complete request, but that is also too large for a header buffer. This results in a header and data buffer being passed to the host.
- Header buffers in host memory are 256 bytes long, and are aligned on 256 byte boundaries. There will be a field in the header buffer indicating it has valid data. This field will initially be reset by the host before passing the buffer descriptor to the INIC. A set of header buffers are passed from the host to the INIC by the host writing to the Header Buffer Address Register on the INIC. This register is defined as follows:
- Bits 31-8: Physical address in host memory of the first of a set of contiguous header buffers.
- the host can, say, allocate 16 buffers in a 4K page, and pass all 16 buffers to the INIC with one register write.
- the INIC will maintain a queue of these header descriptors in the SmallHType queue in its own local memory, adding to the end of the queue every time the host writes to the Header Buffer Address Register. Note that the single entry is added to the queue; the eventual dequeuer will use the count after extracting that entry.
- the header buffers will be used and returned to the host in the same order that they were given to the INIC.
- the valid field will be set by the INIC before returning the buffer to the host.
- a PCI interrupt, with a single bit in the interrupt register, may be generated to indicate that there is a header buffer for the host to process.
- When servicing this interrupt, the host will look at its queue of header buffers, reading the valid field to determine how many header buffers are to be processed.
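- The following sketch shows how a host driver could hand a page of header buffers to the card in one register write; the valid-flag placement and the use of the register's low-order bits to carry a buffer count are assumptions consistent with the description above, and virt_to_phys( ) is a stand-in rather than a real API.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define HDR_BUF_SIZE       256
    #define HDR_BUFS_PER_PAGE  16

    struct header_buffer {
        uint8_t  payload[HDR_BUF_SIZE - 4];
        uint32_t valid;                     /* set by the INIC on return   */
    };

    volatile uint32_t *header_buffer_address_register;  /* mapped INIC register */

    /* Stand-in for however the driver obtains the physical page address. */
    static uint32_t virt_to_phys(void *p) { return (uint32_t)(uintptr_t)p; }

    void pass_header_buffers(void)
    {
        struct header_buffer *page = aligned_alloc(4096, 4096);

        if (page == NULL)
            return;
        memset(page, 0, 4096);              /* reset every valid field     */

        /* Bits 31-8: physical address of the first buffer; low-order bits
         * assumed to carry the number of contiguous buffers passed.       */
        *header_buffer_address_register =
            (virt_to_phys(page) & ~0xFFu) | HDR_BUFS_PER_PAGE;
    }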
- Receive data buffers in host memory are aligned to page boundaries, assumed here to be 2K bytes long and aligned on 4K page boundaries, 2 buffers per page.
- the host In order to pass receive data buffers to the INIC, the host must write to two registers on the INIC.
- the first register to be written is the Data Buffer Handle Register.
- the buffer handle is not significant to the INIC, but will be copied back to the host to return the buffer to the host.
- the second register written is the Data Buffer Address Register. This is the physical address of the data buffer.
- the INIC will add the contents of these two registers to the FreeType queue of data buffer descriptors. Note that the INIC host driver sets the handle register first, then the address register.
- the INIC can read the address register first and save its contents, then read the handle register. It can then lock the register pair in some manner such that another write to the handle register is not permitted until the current contents have been saved. Both addresses extracted from the registers are to be written to the FreeType queue. The INIC will extract 2 entries each time when dequeuing.
- Data buffers will be allocated and used by the INIC as needed. For each data buffer used by a slow-path transaction, the data buffer handle will be copied into a header buffer. Then the header buffer will be returned to the host.
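- A two-line sketch of the register pair protocol is shown below; the register names follow the description above, while the mapping of the registers into host address space is assumed.

    #include <stdint.h>

    volatile uint32_t *data_buffer_handle_register;   /* mapped INIC registers */
    volatile uint32_t *data_buffer_address_register;

    void pass_data_buffer(uint32_t host_handle, uint32_t physical_address)
    {
        /* The handle is opaque to the INIC and is simply returned later;
         * writing the address register second completes the pair. */
        *data_buffer_handle_register  = host_handle;
        *data_buffer_address_register = physical_address;
    }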
- the host will transfer a command buffer to the INIC.
- This command buffer will include a command buffer handle, a command field, possibly a TCP context identification, and a list of physical data pointers.
- the command buffer handle is defined to be the first word of the command buffer and is used by the host to identify the command. This word will be passed back to the host in a response buffer, since commands may complete out of order, and the host will need to know which command is complete. Commands will be used for many reasons, but primarily to cause the INIC to transmit data, or to pass a set of buffers to the INIC for input data on the fast-path as previously discussed.
- Response buffers are physical buffers in host memory. They are used by the INIC in the same order as they were given to it by the host. This enables the host to know which response buffer(s) to next look at when the INIC signals a command completion.
- Command buffers in host memory are a multiple of 32 bytes, up to a maximum of 1K bytes, and are aligned on 32 byte boundaries.
- a command buffer is passed to the INIC by writing to one of 5 Command Buffer Address Registers. These registers are defined as follows:
- the register to which the command is written predetermines the XMT interface number, or indicates that the command is for the RCV CPU; hence there will be 5 of them, 0-3 for XMT and 4 for RCV.
- the INIC will add the contents of the register to its own internal queue of command buffer descriptors.
- the first word of all command buffers is defined to be the command buffer handle. It is the job of the utility CPU to extract a command from its local queue, DMA the command into a small INIC buffer (from the FreeSType queue), and queue that buffer into the Xmit#Type queue, where # is 0-3 depending on the interface, or the appropriate RCV queue.
- the receiving CPU will service the queues to perform the commands. When that CPU has completed a command, it extracts the command buffer handle and passes it back to the host via a response buffer.
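- One possible layout for such a command buffer, respecting the 32-byte-multiple and 1K-maximum rules, is sketched below; the field widths, the 30-entry scatter list and virt_to_phys( ) are assumptions for illustration.

    #include <stdint.h>

    #define MAX_SEGMENTS 30

    struct inic_command {
        uint32_t handle;              /* first word, echoed in the response */
        uint32_t command;             /* e.g. transmit, or fast-path receive */
        uint32_t context_id;          /* TCP context, if applicable          */
        uint32_t segment_count;
        struct { uint32_t addr; uint32_t len; } seg[MAX_SEGMENTS];
    };                                /* 16 + 30*8 = 256 bytes               */

    /* One register per transmit interface (0-3) plus one for receive (4). */
    volatile uint32_t *command_buffer_address_register[5];

    static uint32_t virt_to_phys(const void *p) { return (uint32_t)(uintptr_t)p; }

    void submit_command(const struct inic_command *cmd, int interface)
    {
        /* Writing the physical address of the command buffer queues it to
         * the CPU selected by the register. */
        *command_buffer_address_register[interface] = virt_to_phys(cmd);
    }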
- Response buffers in host memory are 32 bytes long and aligned on 32 byte boundaries. They are handled in a very similar fashion to header buffers. There will be a field in the response buffer indicating it has valid data. This field will initially be reset by the host before passing the buffer descriptor to the INIC. A set of response buffers are passed from the host to the INIC by the host writing to the Response Buffer Address Register on the INIC. This register is defined as follows:
- Bits 31-8: Physical address in host memory of the first of a set of contiguous response buffers.
- the host can, say, allocate 128 buffers in a 4K page, and pass all 128 buffers to the INIC with one register write.
- the INIC will maintain a queue of these response buffer descriptors in its ResponseType queue, adding to the end of the queue every time the host writes to the Response Buffer Address Register.
- the INIC writes the extracted contents, including the count, to the queue in exactly the same manner as for the header buffers.
- the response buffers can be used and returned to the host in the same order that they were given to the INIC.
- the valid field will be set by the INIC before returning the buffer to the host.
- a PCI interrupt, with a single bit in the interrupt register, may be generated to indicate that there is a response buffer for the host to process.
- When servicing this interrupt, the host will look at its queue of response buffers, reading the valid field to determine how many response buffers are to be processed.
- FIG. 19 shows the general format of this register. The setting of any bits in the ISR will cause an interrupt, provided the corresponding bit in the Interrupt Mask Register is set. The default setting for the IMR is 0.
- the INIC is configured so that the host should never need to directly read the ISR from the INIC. To support this, it is important for the host/INIC to arrange a buffer area in host memory into which the ISR is dumped. The address and size of that area can be passed to the INIC via a command on the XMT interface. That command will also specify the setting for the IMR. Until the INIC receives this command, it will not DMA the ISR to host memory, and no events will cause an interrupt. The host could, if necessary, read the ISR directly from the INIC in this case.
- the INIC keeps a local copy of the register whenever it DMAs it to the host, i.e. after some event(s). Call this COPYA. Then the INIC starts accumulating, in a separate word, any new events not reflected in the host copy. Call this NEWA. As the host clears bits by writing the register back with those bits set to zero, the INIC clears these bits in COPYA (or the host write-back goes directly to COPYA). If there are new events in NEWA, it ORs them with COPYA and DMAs this new ISR to the host. This new ISR then replaces COPYA, NEWA is cleared, and the cycle then repeats.
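- The COPYA/NEWA cycle can be summarized with the small sketch below; dma_isr_to_host( ) stands in for the DMA of the register image to the agreed host address, and the code is only an illustration of the bookkeeping, not the INIC firmware.

    #include <stdint.h>

    static uint32_t copy_a;               /* last ISR image DMA'd to the host */
    static uint32_t new_a;                /* events accumulated since then    */

    static void dma_isr_to_host(uint32_t value) { (void)value; }

    void inic_event(uint32_t event_bit)
    {
        if (copy_a == 0) {                /* nothing outstanding at the host  */
            copy_a = event_bit;
            dma_isr_to_host(copy_a);
        } else {
            new_a |= event_bit;           /* hold until the host catches up   */
        }
    }

    void host_cleared_bits(uint32_t cleared)
    {
        copy_a &= ~cleared;               /* host wrote these bits back as 0  */
        if (new_a) {
            copy_a |= new_a;              /* merge unreported events          */
            new_a = 0;
            dma_isr_to_host(copy_a);      /* push the fresh ISR image         */
        }
    }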
- the registers are at 4-byte increments from whatever the base address is.
- the bulk of the protocol stack is based on the FreeBSD TCP/IP protocol stack. This code performs the Ethernet, ARP, IP, ICMP, and (slow path) TCP processing for the driver.
- Structure pointers: Microsoft typedefs all of their structures. The structure types are always capitals, and they typedef a pointer to the structure as “P”<name> as follows: typedef struct _FOO { INT bar; } FOO, *PFOO;
- Function prototypes: We will include function prototypes in the most logical header file corresponding to the .c file. For example, the prototype for function foo( ) found in foo.c will be placed in foo.h.
- Header file #ifndef: each header file should contain a #ifndef/#define/#endif construct, which is used to prevent recursive header file includes.
- foo.h would include the following; note the _NAME_H_ format:
    #ifndef _FOO_H_
    #define _FOO_H_
    <foo.h contents>
    #endif /* _FOO_H_ */
- CVS (RCS) will expand this keyword to denote RCS revision, timestamps, author, etc.
- each instance of a structure will include a spinlock, which must be acquired before members of that structure are accessed, and held while a function is accessing that instance of the structure.
- Structures which are logically grouped together may be protected by a single spinlock: for example, the ‘in_pcb’ structure, ‘tcpcb’ structure, and ‘socket’ structure which together constitute the administrative information for a TCP connection will probably be collectively managed by a single spinlock in the ‘socket’ structure.
- every global data structure such as a list or hash table must also have a protecting spinlock which must be held while the structure is being accessed or modified.
- the NT DDK in fact provides a number of convenient primitives for SMP-safe list manipulation, and it is recommended that these be used for any new lists.
- Existing list manipulations in the FreeBSD code can probably be left as-is to minimize code disturbance, except of course that the necessary spinlock acquisition and release must be added around them.
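- A minimal sketch of this locking convention using the standard NT DDK spinlock and interlocked-list primitives is shown below; the CONN structure and its fields are placeholders rather than actual ATCP driver structures.

    #include <ntddk.h>

    typedef struct _CONN {
        LIST_ENTRY  Link;          /* chains the structure on a global list   */
        KSPIN_LOCK  Lock;          /* per-instance lock, per the convention   */
        ULONG       State;
    } CONN, *PCONN;

    static LIST_ENTRY ConnList;
    static KSPIN_LOCK ConnListLock;        /* protects the global list        */

    VOID ConnInitGlobals(VOID)
    {
        InitializeListHead(&ConnList);
        KeInitializeSpinLock(&ConnListLock);
    }

    VOID ConnInsert(PCONN Conn)
    {
        KeInitializeSpinLock(&Conn->Lock);
        /* SMP-safe insertion: the DDK primitive acquires ConnListLock. */
        ExInterlockedInsertTailList(&ConnList, &Conn->Link, &ConnListLock);
    }

    VOID ConnSetState(PCONN Conn, ULONG State)
    {
        KIRQL OldIrql;

        /* Per-instance members are only touched with the instance lock held. */
        KeAcquireSpinLock(&Conn->Lock, &OldIrql);
        Conn->State = State;
        KeReleaseSpinLock(&Conn->Lock, OldIrql);
    }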
- IRPs are simply marked as ‘PENDING’ when an operation cannot be completed immediately.
- the calling thread does NOT sleep at that point: it returns, and may go on with other processing.
- Pending IRPs are later completed, not by waking up the thread which initiated them, but by an ‘IoCompleteRequest’ call which typically runs at DISPATCH level in an arbitrary context.
- the ATCP driver supports two paths for sending and receiving data, the fast-path and the slow-path.
- the fast-path data flow corresponds to connections that are maintained on the INIC, while slow-path traffic corresponds to network data for which the INIC does not have a connection. In order to set some groundwork for the rest of this section, these two data paths are summarized here.
- As soon as the INIC has received a segment containing a NETBIOS header, it will forward it up to the TCP driver, along with the NETBIOS length from the header. (In principle the host could get this from the header itself, but since the INIC has already done the decode, it seems reasonable to just pass it.)
- the amount of data in the buffer actually sent must be at least 128 bytes.
- all of the received SMB should be forwarded; it will be absorbed directly by the TDI client without any further MDL exchange.
- Experiments tracing the TDI data flow show that the NETBIOS client directly absorbs up to 1460 bytes: the amount of payload data in a single Ethernet frame.
- the INIC will indicate anything up to a complete segment to the ATCP driver. [See note (1)].
- Once the INIC has passed up an indication with a NETBIOS length greater than the amount of data in the packet it passed, it will continue to accumulate further incoming data in DRAM on the INIC. Overflow of INIC DRAM buffers will be avoided by using a receive window on the INIC at this point, which can be 8K.
- On receiving the indicated packet, the ATCP driver will call the receive handler registered by the TDI client for the connection, passing the actual size of the data in the packet from the INIC as “bytes indicated” and the NETBIOS length as “bytes available.” [See note (2)].
- the TDI client will then provide an MDL, associated with an IRP, which must be completed when this MDL is filled. (This IRP/MDL may come back either in the response to TCP's call of the receive handler, or as an explicit TDI_RECEIVE request.)
- the ATCP driver will build a “receive request” from the MDL information, and pass this to the INIC. This request will contain:
- the ATCP driver must copy any remaining data (which was not taken by the receive handler) from the segment indicated by the INIC to the start of the MDL, and must adjust the size & offset information in the request passed to the INIC to account for this.
- the INIC will fill the given page(s) with incoming data up to the requested amount, and respond to the ATCP driver when this is done [See note (3)]. If the MDL is large, the INIC may open up its advertised receive window for improved throughput while filling the MDL. On receiving the response from the INIC, the ATCP driver will complete the IRP associated with this MDL, to tell the TDI client that the data is available. At this point the cycle of events is complete, and the ATCP driver is now waiting for the next header indication.
- In the general case we do not have a higher-level protocol header to enable us to predict that more data is coming. So on non-NETBIOS connections, the INIC will just accumulate incoming data in INIC DRAM up to a quantity of 8K in this example. Again, a maximum advertised window size, which may be 16K, will be used to prevent overflow of INIC DRAM buffers.
- When the prescribed amount has been accumulated, or when a PSH flag is seen, the INIC will indicate a small packet, which may be 128 bytes of the data, to the ATCP driver, along with the total length of the data accumulated in INIC DRAM.
- On receiving the indicated packet, the ATCP driver will call the receive handler registered by the TDI client for the connection, passing the actual size of the data in the packet from the INIC as “bytes indicated” and the total INIC-buffer length as “bytes available.”
- the TDI client will provide an IRP with an MDL.
- the ATCP driver will pass the MDL to the INIC to be filled, as before.
- the INIC will reply to the ATCP driver, which in turn will complete the IRP to the TDI client.
- While the INIC “owns” an MDL provided by the TDI client (sent by ATCP as a receive request), it will treat this as a “promise” by the TDI client to accept the data placed in it, and may therefore ACK incoming data as it is filling the pages.
- the PSH flag can help to identify small SMB requests that fit into one segment.
- the fast-path output data flow is similar to the input data-flow, but simpler.
- the TDI client will provide a MDL to the ATCP driver along with an IRP to be completed when the data is sent.
- the ATCP driver will then give a request (corresponding to the MDL) to the INIC. This request will contain:
- the INIC will copy the data from the given physical location(s) as it sends the corresponding network frames onto the network. When all of the data is sent, the INIC will notify the host of the completion, and the ATCP driver will complete the IRP.
- the MBUFs in the incoming direction will in fact be managing NDIS-allocated packets.
- the MFREE macro must be cognizant of the various types of MBUFs, and “do the right thing” for each type.
- In place of sosend( ), there will be a function that copies data from the MDL provided in a TDI_SEND call into socket buffer MBUFs.
- In place of soreceive( ), there will be a handler that calls the TDI client receive callback function, and also copies data from socket buffer MBUFs into any MDL provided by the TDI client (either explicitly with the callback response or as a separate TDI_RECEIVE call).
- NT has a notion of “canceling” IRPs. It is possible for us to get a “cancel” on an IRP corresponding to an MDL which has been “handed” to the INIC by a send or receive request. We can handle this by being able to force the context back off the INIC, since IRPs will only get cancelled when the connection is being aborted.
- the ATCP driver will make a decision on a given connection that this connection should now be passed to the INIC. It builds and sends a command identifying this connection to the INIC.
- the initial command from ATCP to INIC expresses an “intention” to hand out the context. It will include the source and destination IP addresses and ports, which will allow the INIC to establish a “provisional” context. Once it has this “provisional” context in place, the INIC will not send any more slow-path input frames for that src/dest IP/port combination (it will queue them, if any are received.)
- When the ATCP driver receives the response to this initial “intent” command, it knows that the INIC will send no more slow-path input. The ATCP driver then waits for any remaining unconsumed slow-path input data for this connection to be consumed by the client. (Generally speaking there will be none, since the ATCP driver will not initiate a context pass while there is unconsumed slow-path input data; the handshake is simply to close the crossover window.)
- Note 1: it is conceivable that there might be situations in which the ATCP driver decides, after having sent the original “intention” command, that the context is not to be passed after all (e.g. the local client issues a close). So we must allow for the possibility that the second command may be an “abort”, which should cause the INIC to deallocate and clear up its “provisional” context.
- the ATCP driver will guarantee that only one context may be in process of being handed out at a time: in other words, it will never issue another initial “intention” command until it has completed the second half of the handshake for the first one.
- a context transfer may be initiated either by the ATCP driver or by the INIC. However the machinery will be very similar in the two cases. If the ATCP driver wishes to cause context to be flushed from NIC to host, it will send a “flush” message to the INIC specifying the context number to be flushed. Once the INIC receives this, it will proceed with the same steps as for the case where the flush is initiated by the INIC itself:
- the INIC will send an error response to any current outstanding receive request it is working on (corresponding to an MDL into which data is being placed.) Before sending the response, it updates the receive command “length” field to reflect the amount of data which has actually been placed in the MDL buffers at the time of the flush.
- the INIC will DMA the TCB for the context back to the host. (Note: part of the information provided with a context must be the address of the TCB in the host.)
- the INIC will send a “flush” indication to the host (very preferably via the regular input path as a special type of frame) identifying the context which is being flushed. Sending this indication via the regular input path ensures that it will arrive before any following slow-path frames.
- the INIC is no longer doing fast-path processing, and any further incoming frames for the connection will simply be sent to the host as raw frames for the slow input path.
- the ATCP driver may not be able to complete the cleanup operations needed to resume normal slow path processing immediately on receipt of the “flush frame”, since there may be outstanding send and receive requests to which it has not yet received a response. If this is the case, the ATCP driver must set a “pend incoming TCP frames” flag in its per-connection context. The effect of this is to change the behavior of tcp_input( ).
- the INIC maintains its context for the connection in a “zombie” state. As “send” requests for this connection come out of the INIC queue, it sends error responses for them back to the ATCP driver. (It is apparently difficult for the INIC to identify all command requests for a given context; simpler for it to just continue processing them in order, detecting ones that are for a “zombie” context as they appear.)
- the ATCP driver has a count of the number of outstanding requests it has sent to the INIC. As error responses for these are received, it decrements this count, and when it reaches zero, the ATCP driver sends a “flush complete” message to the INIC.
- the largest portion of the ATCP driver is either derived, or directly taken from the FreeBSD TCP/IP protocol stack. This section defines the issues associated with porting this code, the FreeBSD code itself, and the modifications required for it to suit our needs.
- FreeBSD TCP/IP (current version referred to as Net/3) is a general purpose TCP/IP driver. It contains code to handle a variety of interface types and many different kinds of protocols. To meet this requirement the code is sometimes written in a confusing, over-complex manner. General-purpose structures are overlaid with other interface-specific structures so that different interface types can coexist using the same general-purpose code. For our purposes much of this complexity is unnecessary since we are only supporting a single interface type and a few specific protocols. It is therefore plausible to modify the code and data structures in an effort to make it more readable, and perhaps a bit more efficient. There are, however, some problems with doing this. First, the more we modify the original FreeBSD code, the harder it becomes to keep our port consistent with it.
- the FreeBSD TCP/IP protocol stack makes use of many Unix system services. These include bcopy to copy memory, malloc to allocate memory, timestamp functions, etc. These will not be itemized in detail since the conversion to the corresponding NT calls is a fairly trivial and mechanical operation.
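- As an illustration of how mechanical this conversion is, a compatibility shim along the following lines could be used; the specific NDIS helpers chosen here (NdisMoveMemory, NdisZeroMemory, NdisEqualMemory, NdisAllocateMemoryWithTag) are an assumption, since the text does not name the NT equivalents:

    /* freebsd_compat.h -- illustrative shim mapping FreeBSD kernel services
     * onto NDIS equivalents. */
    #include <ndis.h>

    #define bcopy(src, dst, len)  NdisMoveMemory((dst), (src), (len))
    #define bzero(dst, len)       NdisZeroMemory((dst), (len))
    #define bcmp(a, b, len)       (NdisEqualMemory((a), (b), (len)) ? 0 : 1)

    /* FreeBSD kernel malloc(size, type, flags) -> tagged NDIS allocation. */
    static __inline void *fbsd_malloc(ULONG size)
    {
        PVOID p = NULL;
        if (NdisAllocateMemoryWithTag(&p, size, 'PCTA') != NDIS_STATUS_SUCCESS)
            return NULL;
        return p;
    }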
- the mbuf structure will provide the standard fields provided in the FreeBSD mbuf including the data pointer, which points to the current location of the data, data length fields and flags. In addition each mbuf will point to the packet descriptor which is associated with the data being mapped. Once an NT packet is mapped, our transport driver should never have to refer to the packet or buffer descriptors for any information except when we are finished and are preparing to return the packet.
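- The resulting NT-side mbuf might look roughly as follows; only the FreeBSD-standard field names (m_next, m_data, m_len, m_flags) come from the text, the rest is illustrative:

    #include <stdint.h>

    struct mbuf {
        struct mbuf *m_next;     /* next mbuf in the chain */
        uint8_t     *m_data;     /* current location of the data */
        uint32_t     m_len;      /* amount of data mapped by this mbuf */
        uint32_t     m_flags;    /* standard FreeBSD mbuf flags */
        void        *m_ndispkt;  /* NDIS packet descriptor backing the data;
                                  * NULL when the data is our own copy (ARP/ICMP) */
    };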
- FIG. 22 shows the relationship between all of these structures:
- FIG. 22 we show a single interface with a MAC address of 00:60:97:DB:9B:A6 configured with an IP address of 192.100.1.2.
- the in_ifaddr is actually an ifaddr structure with some extra fields tacked on to the end.
- the ifaddr structure is used to represent both a MAC address and an IP address.
- the sockaddr structure is recast as a sockaddr_dl or a sockaddr_in depending on its address type.
- An interface can be configured to multiple IP addresses by simply chaining in_ifaddr structures after the in_ifaddr structure shown in FIG. 22.
- iface This is a structure that we define. It contains the arpcom structure, which in turn contains the ifnet structure. It also contains fields that enable us to blend our FreeBSD implementation with NT NDIS requirements.
- NDIS binding handle used to call down to NDIS with requests (such as send).
- FreeBSD initializes the above structures in two phases. First, when a network interface is found, the ifnet, arpcom, and first ifaddr structures are initialized by the network layer driver and then via a call to the if_attach routine. The subsequent in_ifaddr structure(s) are initialized when a user dynamically configures the interface; this occurs in the in_ioctl and in_ifinit routines. Since NT allows dynamic configuration of a network interface, we will continue to perform the interface initialization in two phases, but we will consolidate these two phases as described below:
- the IfInit routine will be called from the ATKProtocolBindAdapter function.
- the IfInit function will initialize the Iface structure and associated arpcom and ifnet structures. It will then allocate and initialize an ifaddr structure to contain link-level information about the interface, and a sockaddr_dl structure to contain the interface name and MAC address. Finally, it will add a pointer to the ifaddr structure into the ifnet_addrs array (using the if_index field of the ifnet structure) contained in the extended device object. IfInit will then call IfConfig for each IP address that it finds in the registry entry for the interface.
- IfConfig is called to configure an IP address for a given interface. It is passed a pointer to the ifnet structure for that interface along with all the information required to configure an IP address for that interface (such as IP address, netmask and broadcast info, etc.). IfConfig will allocate an in_ifaddr structure to be used to configure the interface. It will chain it to the total chain of in_ifaddr structures contained in the extended device object, and will then configure the structure with the information given to it. After that it will add a static route for the newly configured network and then broadcast a gratuitous ARP request to notify others of our MAC/IP address and to detect duplicate IP addresses on the net.
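- A simplified sketch of this IfConfig flow is shown below; all atk_* structure and helper names are illustrative stand-ins for the ported FreeBSD code:

    #include <stdint.h>

    struct atk_ifnet;                        /* ported ifnet, details omitted */

    struct atk_in_ifaddr {
        struct atk_in_ifaddr *ia_next;       /* chain kept in the extended device object */
        uint32_t ia_addr;                    /* IP address (network byte order) */
        uint32_t ia_netmask;
        uint32_t ia_broadaddr;
    };

    struct atk_in_ifaddr *atk_alloc_in_ifaddr(void);
    void atk_add_static_route(struct atk_ifnet *ifp, uint32_t addr, uint32_t mask);
    void atk_send_gratuitous_arp(struct atk_ifnet *ifp, uint32_t addr);

    void IfConfig(struct atk_ifnet *ifp, struct atk_in_ifaddr **chain,
                  uint32_t addr, uint32_t mask, uint32_t bcast)
    {
        struct atk_in_ifaddr *ia = atk_alloc_in_ifaddr();

        ia->ia_addr      = addr;
        ia->ia_netmask   = mask;
        ia->ia_broadaddr = bcast;

        /* chain onto the list of addresses configured on this interface */
        ia->ia_next = *chain;
        *chain      = ia;

        atk_add_static_route(ifp, addr, mask);  /* static route for the new network */
        atk_send_gratuitous_arp(ifp, addr);     /* announce MAC/IP, detect duplicates */
    }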
- FreeBSD ARP code we will port the FreeBSD ARP code to NT mostly as-is. For some reason, the FreeBSD ARP code is located in a file called if_ether.c. While the functionality of this file will remain the same, we will rename it to a more logical arp.c.
- the main structures used by ARP are the llinfo_arp structure and the rtentry structure (actually part of route). These structures will not require major modifications. The functions that will require modification are defined here.
- An ARP frame can either be an ARP request or an ARP reply.
- ARP requests are broadcast, so we will see every ARP request on the network, while ARP replies are directed so we should only see ARP replies that are sent to us. This introduces the following possible cases for an incoming ARP frame:
- ARP reply In this case we add the new ARP entry to our ARP cache. Having resolved the address, we check to see if there are any transmit requests pending for the resolved IP address, and if so, transmit them.
- This code simply allocates an mbuf, fills it in with an ARP header, and then passes it down to the ethernet output routine to be transmitted. For us, the code remains essentially the same except for the obvious changes related to how we allocate a network buffer and how we send the filled-in request.
- the route table is maintained using an algorithm known as PATRICIA (Practical Algorithm To Retrieve Information Coded in Alphanumeric). This is a complicated algorithm that is a bit costly to set up, but is very efficient to reference. Since the routing table should contain the same information for both NT and FreeBSD, and since the key used to search for an entry in the routing table will be the same for each (the destination IP address), we should be able to port the routing table software to NT without any major changes.
- PATRICIA Practical Algorithm To Retrieve Information Coded in Alphanumeric
- the software which implements the route table (via the PATRICIA algorithm) is located in the FreeBSD file, radix.c. This file will be ported directly to the ATCP driver with no significant changes required.
- Routes can be added or deleted in a number of different ways.
- the kernel adds or deletes routes when the state of an interface changes or when an ICMP redirect is received.
- User space programs such as the RIP daemon, or the route command also modify the route table.
- the changes can be made by a direct call to the routing software.
- the FreeBSD software that is responsible for the modification of route table entries is found in route.c.
- the primary routine for all route table changes is called rtrequest( ). It takes as its arguments, the request type (ADD, RESOLVE, DELETE), the destination IP address for the route, the gateway for the route, the netmask for the route, the flags for the route, and a pointer to the route structure (struct rtentry) in which we will place the added or resolved route.
- Routines in the route.c file include rtinit( ), which is called during interface initialization time to add a static route to the network, rtredirect, which is called by ICMP when we receive an ICMP redirect, and an assortment of support routines used for the modification of route table entries. All of these routines found in route.c will be ported with no major modifications.
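- For reference, the 4.4BSD-derived prototype of rtrequest( ) matches the argument list described above (minor differences between FreeBSD releases are possible); the request-type values are those defined in net/route.h:

    #include <sys/socket.h>          /* struct sockaddr */

    struct rtentry;                  /* defined in the ported net/route.h */

    #define RTM_ADD      0x1         /* add a route */
    #define RTM_DELETE   0x2         /* delete a route */
    #define RTM_RESOLVE  0xb         /* resolve a cloning route */

    int rtrequest(int req,                   /* RTM_ADD, RTM_RESOLVE or RTM_DELETE */
                  struct sockaddr *dst,      /* destination IP address for the route */
                  struct sockaddr *gateway,  /* gateway for the route */
                  struct sockaddr *netmask,  /* netmask for the route */
                  int flags,                 /* route flags */
                  struct rtentry **ret_nrt); /* receives the added or resolved route */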
- the route table is consulted in ip_output when an IP datagram is being sent.
- the route is stored into the in_pcb for the connection.
- the route entry is then simply checked to ensure validity. While we will keep this basic operation as is, we will require a slight modification to allow us to coexist with the Microsoft TCP driver.
- our filter driver When an active connection is being set up, our filter driver will have to determine whether the connection is going to be handled by one of the INIC interfaces. To do this, we will have to consult the route table from the filter driver portion of our driver. This is done via a call to the rtalloc1 function (found in route.c). If a valid route table entry is found, then we will take control of the connection and set a pointer to the rtentry structure returned by rtalloc1 in our in_pcb structure.
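- A sketch of this filter-driver check, assuming the two-argument form of rtalloc1( ) (the exact signature varies slightly between FreeBSD releases) and a hypothetical is_inic_route( ) helper that inspects the interface behind the returned route:

    #include <sys/socket.h>
    #include <stddef.h>

    struct rtentry;                                       /* from the ported net/route.h */
    struct rtentry *rtalloc1(struct sockaddr *dst, int report);
    int is_inic_route(struct rtentry *rt);                /* hypothetical: route leaves via an INIC? */

    /* Returns nonzero (and hands back the route) when the connection should be
     * taken over by the ATCP driver; the caller stores *rt_out in its in_pcb. */
    int connection_is_ours(struct sockaddr *remote, struct rtentry **rt_out)
    {
        struct rtentry *rt = rtalloc1(remote, 1 /* report misses */);

        if (rt == NULL || !is_inic_route(rt))
            return 0;               /* let the Microsoft TCP driver handle it */

        *rt_out = rt;
        return 1;
    }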
- an ICMP_REDIRECT causes two things to occur. First, it causes the route table to be updated with the route given in the redirect. Second, it results in a call back to TCP to cause TCP to flush the route entry attached to its associated in_pcb structures. By doing this, it forces ip_output to search for a new route. As mentioned in the Route section above, we will also require a call to a routine which will review all of the TCP fast-path connections, and update the route entries as needed (in this case because the route entry has been zeroed). The INIC will then be notified of the route changes.
- a source quench is sent to cause a TCP sender to close its congestion window to a single segment, thereby putting the sender into slow-start mode.
- For fast path connections we will send a notification to the card that the congestion window for the given connection has been reduced.
- the INIC will then be responsible for the slow-start algorithm.
- ip_init is called to initialize the array of protosw structures. These structures contain all the information needed by IP to be able to pass incoming data to the correct protocol above it. For example, when a UDP datagram arrives, IP locates the protosw entry corresponding to the UDP protocol type value (0x11) and calls the input routine specified in that protosw entry. We will keep the array of protosw structures intact, but since we are only handling the TCP and ICMP protocols above IP, we will strip the protosw array down substantially.
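- The stripping amounts to keeping a dispatch table with only the TCP and ICMP entries; an illustrative (non-FreeBSD) rendering of the idea:

    #include <stdint.h>

    struct mbuf;

    /* Reduced stand-in for struct protosw: just the protocol number and the
     * input routine that ip_input dispatches to. */
    struct atk_proto_entry {
        uint8_t  ipproto;
        void   (*pr_input)(struct mbuf *m);
    };

    void atk_tcp_input(struct mbuf *m);    /* hypothetical ported entry points */
    void atk_icmp_input(struct mbuf *m);

    /* Only TCP (6) and ICMP (1) are handled above IP; UDP (0x11) and the rest
     * of the FreeBSD protosw array are stripped out. */
    static const struct atk_proto_entry inetsw_stripped[] = {
        { 6, atk_tcp_input  },
        { 1, atk_icmp_input },
    };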
- FreeBSD The only options supported by FreeBSD at this time include record route, strict and loose source and record route, and timestamp. For the timestamp option, FreeBSD only logs the current time into the IP header before it is forwarded. Since we will not be forwarding IP datagrams, this seems to be of little use to us. While FreeBSD supports the remaining options, NT essentially does nothing useful with them. For the moment, we will not bother dealing with IP options. They will be added later if needed.
- the reassembly code reuses the IP header portion of the IP datagram to contain IP reassembly queue information. It can do this because it no longer requires the original IP header. This is an absolute no-no with the NDIS 4.0 method of handling network packets.
- the NT DDK explicitly states that we must not modify packets given to us by NDIS. This is not the only place in which the FreeBSD code modifies the contents of a network buffer. It also does this when performing endian conversions. At the moment we will leave this code as is and violate the DDK rules. We believe we can do this because we are going to ensure that no other transport driver looks at these frames. If this becomes a problem we will have to modify this code substantially by moving the IP reassembly fields into the mbuf header.
- This section defines the protocol driver portion of the ATCP driver.
- the protocol driver portion of the ATCP driver is defined by the set of routines registered with NDIS via a call to NdisRegisterProtocol. These routines are limited to those that are called (indirectly) by the INIC miniport driver beneath us. For example, we register a ProtocolReceivePacket routine so that when the INIC driver calls NdisMIndicateReceivePacket it will result in a call from NDIS to our driver. Strictly speaking, the protocol driver portion of our driver does not include the method by which our driver calls down to the miniport (for example, the method by which we send network packets). Nevertheless, we will describe that method here for lack of a better place to put it. That said, we cover the following topics in this section of the document: 1) Initialization; 2) Receive; 3) Transmit; 4) Query/Set Information; 5) Status indications; 6) Reset; and 7) Halt.
- the protocol driver initialization occurs in two phases.
- the first phase occurs when the ATCP DriverEntry routine calls ATKProtoSetup.
- the ATKProtoSetup routine performs the following:
- NDIS will call our driver's ATKBindAdapter function for each underlying device. It will perform the following:
- Receive is handled by the protocol driver routine ATKReceivePacket. Before we describe this routine, it is important to consider each possible receive type and how it will be handled.
- Our INIC miniport driver will be bound to our transport driver as well as the generic Microsoft TCP driver (and possibly others).
- the ATCP driver will be bound exclusively to INIC devices, while the Microsoft TCP driver will be bound to INIC devices as well as other types of NICs. This is illustrated in FIG. 23.
- By binding the driver in this fashion we can choose to direct incoming network data to our own ATCP transport driver, the Microsoft TCP driver, or both. We do this by playing with the ethernet “type” field as follows.
- the INIC will need to indicate extra information about a receive packet to the ATCP driver.
- One such example is a fast path receive in which the ATCP driver will need to be notified of how much data the card has buffered.
- the first (and sometimes only) buffer in a received packet will actually be an INIC header buffer.
- the header buffer contains status information about the receive packet, and may or may not contain network data as well.
- the ATCP driver will recognize a header buffer by mapping it to an ethernet frame and inspecting the type field found in byte 12. We will indicate all TCP frames destined for us in this fashion, while frames that are destined for both our driver and the Microsoft TCP driver (ARP, ICMP) will be indicated without a header buffer.
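- A sketch of that recognition test is shown below; the 16-bit value used to mark an INIC header buffer is not given in the text, so the constant here is purely a placeholder:

    #include <stdint.h>
    #include <stddef.h>

    #define ATK_HEADER_BUFFER_TYPE 0x0666   /* placeholder, not the real value */

    /* Returns nonzero when the first buffer of an indicated packet is an INIC
     * header buffer rather than an ordinary ethernet frame. */
    int atk_is_header_buffer(const uint8_t *first_buf, size_t len)
    {
        uint16_t type;

        if (len < 14)               /* shorter than an ethernet header */
            return 0;

        /* the ethernet "type" field lives at bytes 12-13, big-endian on the wire */
        type = (uint16_t)((first_buf[12] << 8) | first_buf[13]);
        return type == ATK_HEADER_BUFFER_TYPE;
    }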
- FIG. 24 shows an example of an incoming TCP packet.
- FIG. 25 shows an example of an incoming ARP frame.
- NDIS has been designed such that all packets indicated via NdisMIndicateReceivePacket by an underlying miniport are delivered to the ProtocolReceivePacket routine for all protocol drivers bound to it. These protocol drivers can choose to accept or not accept the data. They can either accept the data by copying the data out of the packet indicated to it, or alternatively they can keep the packet and return it later via a call to NdisReturnPackets. By implementing it in this fashion, NDIS allows more than one protocol driver to accept a given packet. For this reason, when a packet is delivered to a protocol driver, the contents of the packet descriptor, buffer descriptors and data must all be treated as read-only. At the moment, we intend to violate this rule.
- the DDK specifies that when a protocol driver chooses to keep a packet, it should return a value of 1 (or more) to NDIS in its ProtocolReceivePacket routine. The packet is then later returned to NDIS via the call to NdisReturnPackets. This can only happen after the ProtocolReceivePacket has returned control to NDIS. This requires that the call to NdisReturnPackets must occur in a different execution context. We can accomplish this by scheduling a DPC, scheduling a system thread, or scheduling a kernel thread of our own. For brevity in this section, we will assume it is done through a DPC. In any case, we will require a queue of pending receive buffers on which to place and fetch receive packets.
- Once a receive packet is dequeued by the DPC, it is then either passed to TCP directly for fast-path processing, or it is sent through the FreeBSD path for slow-path processing.
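- A rough sketch of this dequeue-and-dispatch behavior (the queue primitives and the fast-path test are assumptions):

    #include <stddef.h>

    struct mbuf;                                     /* NT-side mbuf described earlier */

    struct mbuf *atk_recv_dequeue(void);             /* hypothetical pending-receive queue */
    int          atk_mbuf_is_fastpath(struct mbuf *m);
    void         atk_fastpath_input(struct mbuf *m); /* fast-path TCP processing */
    void         atk_slowpath_input(struct mbuf *m); /* ported FreeBSD input path */

    void ATKProtocolReceiveDPC(void)
    {
        struct mbuf *m;

        /* drain everything queued by ATKReceivePacket at indication time */
        while ((m = atk_recv_dequeue()) != NULL) {
            if (atk_mbuf_is_fastpath(m))
                atk_fastpath_input(m);   /* header buffer for an INIC-owned context */
            else
                atk_slowpath_input(m);   /* raw frame for slow-path processing */
        }
    }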
- ARP and ICMP For ARP and ICMP frames we may be working on our own copy of the data.
- FIG. 26A we show incoming data for a TCP fast-path connection.
- the TCP data is fully contained in the header buffer.
- the header buffer is mapped by the mbuf and sent upstream for fast-path TCP processing.
- the header buffer must be mapped and sent upstream because the fast-path TCP code will need information contained in the header buffer in order to perform the processing.
- the mfreem routine will determine that the mbuf maps a packet that is owned by NDIS and will then free the mbuf header only and call NdisReturnPackets to free the data.
- FIG. 26B we show incoming data for a TCP slow-path connection.
- the mbuf points to the start of the TCP data directly instead of the header buffer. Since this buffer will be sent up for slow-path FreeBSD processing, we cannot have the mbuf pointing to a header buffer (FreeBSD would easily get confused). Again, when mfreem is called to free the mbuf, it will discover the mapped packet, free the mbuf header, and call NDIS to free the packet and return the underlying buffers. Note that even though we do not directly map the header buffer with the mbuf, we do not lose it because of the link from the packet descriptor.
- We could have the INIC miniport driver pass us only the TCP data buffer when it receives a slow-path frame. This would work fine except that we have determined that even in the case of slow-path connections we are going to attempt to offer some assistance to the host TCP driver (most likely by checksum processing only). In this case there may be some special fields that we need to pass up to the ATCP driver from the INIC driver. Leaving the header buffer connected seems the most logical way to do this.
- FIG. 26C we show a received ARP frame. Recall that for incoming ARP and ICMP frames we are going to copy the incoming data out of the packet and return it directly to NDIS. In this case the mbuf simply points to our data, with no corresponding packet descriptor. When we free this mbuf, mfreem will discover this and free not only the mbuf header, but the data as well.
- We also use this receive mechanism for purposes other than the reception of network data; it serves as a method of communication between the ATCP driver and the INIC.
- One such example is a TCP context flush from the INIC.
- the INIC determines, for whatever reason, that it can no longer manage a TCP connection, it must flush that connection to the ATCP driver. It will do this by filling in a header buffer with appropriate status and delivering it to the INIC driver.
- the INIC driver will in turn deliver it to the protocol driver which will treat it essentially like a fast-path TCP connection by mapping the header buffer with an mbuf header and delivering it to TCP for fast-path processing.
- header buffer specifies a fast-path connection
- header buffer specifies a slow-path connection
- allocate a single mbuf header to map the network data
- set the mbuf fields to map the packet
- ATKProtocolReceiveDPC The receive processing will continue when the mbufs are dequeued. At the moment this is done by a routine called ATKProtocolReceiveDPC. It will do the following:
- the NDIS 4 send operation works as follows.
- a transport/protocol driver wishes to send one or more packets down to an NDIS 4 miniport driver, it calls NdisSendPackets with an array of packet descriptors to send.
- the transport/protocol driver relinquishes ownership of the packets until they are returned, one by one in any order, via a NDIS call to the ProtocolSendComplete routine. Since this routine is called asynchronously, our ATCP driver must save any required context into the packet descriptor header so that the appropriate resources can be freed. This is discussed further in the following sections.
- the transmit path is used not only to send network data, but is also used as a communication mechanism between the host and the INIC.
- The following are some examples of the types of sends performed by the ATCP driver.
- the ATCP driver When the ATCP driver receives a transmit request with an associated MDL, it will package up the MDL physical addresses into a command buffer, map the command buffer with a buffer and packet descriptor, and call NdisSendPackets with the corresponding packet.
- the underlying INIC driver will issue the command buffer to the INIC.
- the INIC miniport When the corresponding response buffer is given back to the host, the INIC miniport will call NdisMSendComplete which will result in a call to the ATCP ProtocolSendComplete (ATKSendComplete) routine, at which point the resources associated with the send can be freed.
- This context includes a pointer to the MDL and presumably some other connection context as well.
- the other advantage to using an mbuf to hold the command buffer is that it eliminates having another special set of code to allocate and return command buffers. We will store a pointer to the mbuf in the reserved section of the packet descriptor so we can locate it when the send is complete.
- FIG. 27 illustrates the relationship between the client's MDL, the command buffer, and the buffer and packet descriptors.
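- As a concrete illustration of the packaging just described, the following is a minimal sketch; the command-buffer layout (ATK_SEND_CMD) and the AtkMdlToSgList( ) helper are assumptions, while NdisSendPackets is the standard NDIS 4 call named in the text:

    #include <ndis.h>

    #define ATK_CMD_FAST_SEND 1      /* hypothetical command code */
    #define ATK_MAX_SG        16     /* hypothetical scatter/gather limit */

    typedef struct _ATK_SEND_CMD {
        ULONG                 Command;        /* ATK_CMD_FAST_SEND */
        ULONG                 ContextHandle;  /* INIC TCB handle for this connection */
        ULONG                 SgCount;        /* number of valid entries below */
        NDIS_PHYSICAL_ADDRESS SgAddr[ATK_MAX_SG];
        ULONG                 SgLen[ATK_MAX_SG];
    } ATK_SEND_CMD, *PATK_SEND_CMD;

    /* Hypothetical helper: walks the client's MDL and fills in physical
     * address/length pairs, returning the number of entries used. */
    ULONG AtkMdlToSgList(PVOID Mdl, NDIS_PHYSICAL_ADDRESS *Addr, ULONG *Len, ULONG Max);

    VOID AtkFastPathSend(NDIS_HANDLE Binding, PNDIS_PACKET Packet,
                         PATK_SEND_CMD Cmd, PVOID ClientMdl, ULONG Context)
    {
        Cmd->Command       = ATK_CMD_FAST_SEND;
        Cmd->ContextHandle = Context;
        Cmd->SgCount       = AtkMdlToSgList(ClientMdl, Cmd->SgAddr, Cmd->SgLen, ATK_MAX_SG);

        /* Packet already maps the command buffer via a buffer descriptor; the
         * INIC miniport issues it to the card, and completion arrives later in
         * the ProtocolSendComplete (ATKSendComplete) routine. */
        NdisSendPackets(Binding, &Packet, 1);
    }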
- the receive process typically occurs in two phases. First, the INIC fills in a host receive buffer with a relatively small amount of data, but notifies the host of a large amount of pending data (either through a large amount of buffered data on the card, or through a large amount of expected NetBios data). This small amount of data is delivered to the client through the TDI interface. The client will then respond with an MDL in which the data should be placed. Like the Fast-path TCP send process, the receive portion of the ATCP driver will then fill in a command buffer with the MDL information from the client, map the buffer with packet and buffer descriptors and send it to the INIC via a call to NdisSendPackets. Again, when the response buffer is returned to the INIC miniport, the ATKSendComplete routine will be called and the receive will complete. This relationship between the MDL, command buffer and buffer and packet descriptors is the same as shown in the Fast-path send section above.
- the transmit path is also used to send non-data commands to the card.
- the ATCP driver gives a context to the INIC by filling in a command buffer, mapping it with a packet and buffer descriptor, and calling NdisSendPackets.
- the ATKProtocolSendComplete routine will perform various types of actions when it is called from NDIS. First it must examine the reserved area of the packet descriptor to determine what type of request has completed. In the case of a slow-path completion, it can simply free the mbufs, command buffer, and descriptors and return. In the case of a fast-path completion, it will need to notify the TCP fast path routines of the completion so TCP can in turn complete the client's IRP. Similarly, when a non-data command buffer completes, TCP will again be notified that the command sent to the INIC has completed.
- the INIC handles only simple-case data transfer operations on a TCP connection. (These of course constitute the large majority of CPU cycles consumed by TCP processing in a conventional driver.)
- connection setup and breakdown There are many other complexities of the TCP protocol which must still be handled by host driver software: connection setup and breakdown, out-of-order data, nonstandard flags, etc.
- the NT OS contains a fully functional TCP/IP driver, and one solution would be to enhance this so that it is able to detect our INIC and take advantage of it by “handing off” data-path processing where appropriate.
- the NT network driver framework does make provision for multiple types of protocol driver: but it does not easily allow for multiple instances of drivers handling the SAME protocol.
- NT allows a special type of driver (“filter driver”) to attach itself “on top” of another driver in the system.
- filter driver a special type of driver
- the NT I/O manager then arranges that all requests directed to the attached driver are sent first to the filter driver; this arrangement is invisible to the rest of the system.
- the filter driver may then either handle these requests itself, or pass them down to the underlying driver it is attached to. Provided the filter driver completely replicates the (externally visible) behavior of the underlying driver when it handles requests itself, the existence of the filter driver is invisible to higher-level software.
- the filter driver attaches itself on top of the Microsoft TCP/IP driver; this gives us the basic mechanism whereby we can intercept requests for TCP operations and handle them in our driver instead of the Microsoft driver.
- TDI is the interface into the top end of NT network protocol drivers.
- Higher-level TDI client software which requires services from a protocol driver proceeds by creating various types of NT FILE_OBJECTs, and then making various DEVICE_IO_CONTROL requests on these FILE_OBJECTs.
- ADDRESS objects A second major difficulty arises with ADDRESS objects. These are often created with the TCP/IP “wildcard” address (all zeros); the actual local address is assigned only later during connection setup (by the protocol driver itself). Of course, a “wildcard” address does not allow us to determine whether connections that will be associated with this ADDRESS object should be handled by our driver or by the Microsoft one. Also, as with CONNECTION objects, there is “opaque” data associated with ADDRESS objects that cannot be derived just from examination of the object itself (in this case, addresses of callback functions set on the object by TDI_SET_EVENT io-controls).
- the TDI client makes a call to create the ADDRESS object. Assuming that this is a “wildcard” address, we create a “shadow” object before passing the call down to the Microsoft driver.
- the next step is normally that the client makes a number of TDI_SET_EVENT io-control calls to associate various callback functions with the ADDRESS object. These are functions that should be called to notify the TDI client when certain events (such as arrival of data or disconnection requests, etc.) occur. We store these callback function pointers in our “shadow” address object, before passing the call down to the Microsoft driver.
- the TDI client makes a call to create a CONNECTION object. Again, we create our “shadow” of this object.
- the client issues the TDI_ASSOCIATE_ADDRESS io-control to bind the CONNECTION object to the ADDRESS object.
- the TDI client issues a TDI_CONNECT io-control on the CONNECTION object, specifying the remote IP address (and port) for the desired connection.
- If the connection request is for one of our interfaces, we mark the CONNECTION object as “one of ours” for future reference (using an opaque field which NT FILE_OBJECTs provide for driver use).
- If the connection request is NOT for one of our interfaces, we pass it down to the Microsoft driver. Note carefully, however, that we cannot simply discard our “shadow” objects at this point.
- the TDI interface allows re-use of CONNECTION objects: on termination of a connection, it is legal for the TDI client to dissociate the CONNECTION object from its current ADDRESS object, re-associate it with another, and use it for another connection. Thus our “shadow” objects must be retained for the lifetime of the NT FILE_OBJECTs: the subsequent connection could turn out to be via one of our interfaces.
- This section provides a general description of the design of the microcode that will execute on two of the sequencers of the Protocol Processor on the INIC. The overall philosophy of the INIC is discussed in other sections. This section will discuss the INIC microcode in detail.
- the INIC supplies a set of 3 custom processors that will provide considerable hardware-assist to the microcode running thereon.
- Protocol Processor will provide 512 SRAM-based registers to be shared among the 3 sequencers;
- the INIC will support up to 256 TCP connections (TCB's).
- a TCB is associated with an input frame when the frame's source and destination IP addresses and source and destination ports match that of the TCB.
- the TCB's will be maintained in a hash table in NIC DRAM to avoid sequential searching. There will, however, be an index in hash order in SRAM.
- the TCB will be cached in SRAM.
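- An illustrative hash over the TCP 4-tuple that could be used to pick a bucket in the SRAM index is shown below; the actual hash function and index layout are not given in the text, and the DRAM TCB located through the index would still be compared on the full source/destination addresses and ports:

    #include <stdint.h>

    #define INIC_MAX_TCBS 256    /* the INIC supports up to 256 TCP connections */

    static uint8_t tcb_hash(uint32_t src_ip, uint32_t dst_ip,
                            uint16_t src_port, uint16_t dst_port)
    {
        uint32_t h = src_ip ^ dst_ip ^ (((uint32_t)src_port << 16) | dst_port);
        h ^= h >> 16;                /* fold the word down to one byte */
        h ^= h >> 8;
        return (uint8_t)h;           /* 0..255: one bucket per possible TCB */
    }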
- each header buffer is not statically linked to a specific TCB buffer.
- the link is dynamic on a per-frame basis. The need for this dynamic linking will be explained in later sections. Suffice to say here that if there is a free header buffer, then somewhere there is also a free TCB SRAM buffer;
- the code will lock an active TCB into an SRAM buffer while either sequencer is operating on it. This implies there will be no swapping to and from DRAM of a TCB once it is in SRAM and an operation is started on it. More specifically, the TCB will not be swapped after requesting that a DMA be performed for it. Instead, the system will switch to another active “process”. Then it will resume the former process at the point directly after where the DMA was requested. This constitutes a zero-cost switch as mentioned above;
- the INIC will have 16 MB of DRAM.
- the current specification calls for dividing a large portion of this into 2K buffers and controlling allocation/deallocation of these buffers through one of the DRAM fifos mentioned above. These fifos will also be used to control small host buffers, large host buffers, command buffers and command response buffers;
- Contexts will be passed to the INIC through the Transmit command and response buffers.
- INIC-initiated TCB releases will be handled through the Receive small buffers.
- Host-initiated releases will use the Command buffers.
- T/TCP Transaction TCP
- the basic plan is to have the host determine when a TCP connection is able to be handed to the INIC, setup the TCB and pass it to the card via a command in the Transmit queue.
- TCBs that the INIC owns can be handed back to the host via a request from the Receive or Transmit sequencers or from the host itself at any time.
- the INIC When the INIC receives a frame, one of its immediate tasks is to determine if the frame is for a TCB that it controls. If not, the frame is passed to the host on a generic interface TCB. On transmit, the transmit request will specify a TCB hash number if the request is on an INIC-controlled TCB. Thus the initial state for the INIC will be transparent mode in which all received frames are directly passed through and all transmit requests will be simply thrown on the appropriate wire. This state is maintained until the host passes TCBs to the INIC to control. Note that frames received for which the INIC has no TCB (or for which the TCB is with the host) will still have the TCP checksum verified if they are TCP/IP, and the TCP/IP header may be split off into a separate buffer.
- FIG. 31 is a summary of the main loop of Receive.
- the base for the receive processing done by the INIC on an existing context is the fast-path or “header prediction” code in the FreeBSD release.
- the processing is divided into three parts: header validation and checksumming, TCP processing and subsequent SMB processing.
- the first step in receive processing is to DMA the frame header into an SRAM header buffer. It is useful for header validation to be implemented in conjunction with this DMA by scanning the data as it flies by. The following tests need to be “passed”:
- the valid-context test is non-trivial in the amount of work involved to determine it. Also note that for pure ACKs, the window-size test will be relaxed. This is because initially the output PERSIST state is to be handled on the NIC.
- processing splits based on whether the frame is a pure ACK or a pure received segment.
- the design is to split off headers into a small header buffer and pass the aligned data in separate large buffers. Since a frame has been received, eventually some receiver process on the host will need to be informed. In the case of FTP, the frame is pure data and it is passed to the host immediately. This involves getting large buffers and DMAing the data into them, then setting the appropriate details in a small buffer that is used to notify the host. However for SMB, the INIC is performing reassembly of data when the frame consists of headers and data. So there may not yet be a complete SMB to pass to the host. In this case, a small buffer will be acquired and the header moved into it. If the received segment completes an SMB, then the procedures are pretty much as for FTP.
- the scheme is to at least move the received data (not the headers) to the host to free the INIC buffers and to save latency.
- the list of in-progress host buffers is maintained in the TCB and moved to the header buffer when the SMB is complete.
- the final part of pure-receive processing is to fire off the delayed ACK timer for this segment.
- FIG. 32 shows the format of the SMB header of an SMB frame.
- the LENGTH field of the NetBIOS header will be used to determine when a complete SMB has been received and the header buffer with appropriate details can be posted to the host.
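- The computation is the usual RFC 1001/1002 session framing (one type byte, one flags byte whose low-order bit extends the length, two length bytes); a small sketch, with the running-count bookkeeping on the INIC left as an assumption:

    #include <stdint.h>

    /* Extract the NetBIOS session-message length from the 4-byte header. */
    static uint32_t netbios_msg_length(const uint8_t hdr[4])
    {
        uint32_t len = ((uint32_t)hdr[2] << 8) | hdr[3];
        if (hdr[1] & 0x01)           /* length-extension bit */
            len += 0x10000;
        return len;
    }

    /* A complete SMB is available once the bytes received on the connection
     * cover the NetBIOS header plus the advertised length; only then is the
     * header buffer posted to the host. */
    static int smb_complete(uint32_t received, uint32_t nb_len)
    {
        return received >= nb_len + 4;
    }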
- the interesting commands are the write commands: SMBwrite (0xB), SMBwriteBraw (0x1D), SMBwriteBmpx (0x1E), SMBwriteBs (0x1F), SMBwriteclose (0x2C), SMBwriteX (0x2F), SMBwriteunlock (0x14). These are interesting because they will have data to be aligned in host memory.
- the point to note about these commands is that they each have a different WCT field, so that the start offset of the data depends on the command type. SMB processing will thus need to be cognizant of these types.
- PRU_RCVD or the equivalent in Microsoft language: the host application has to tell the INIC when it has accepted the received data that has been queued. This is so that the INIC can update the receive window. It is an advantage for this mechanism to be efficient. This may be accomplished by piggybacking these on transmit requests (not necessarily for the same TCB).
- Timestamp option it is useful to support this option in the fast path because the BSD implementation does. Also, it can be very helpful in getting a much better estimate of the round-trip time (RTT) which TCP needs to use.
- RTT round-trip time
- the INIC may split TCP/IP headers into a separate header buffer.
- FIG. 33 is a summary of the main loop of Transmit.
- the host posts a transmit request to the INIC by filling in a command buffer with appropriate data pointers etc and posting it to the INIC via the Command Buffer Address register. Note that there is one host command buffer queue, but there are four physical transmit lines. So each request needs to include an interface number as well as the context number.
- the INIC microcode will DMA the command in and place it in one of four internal command queues which the transmit sequencer will work on. This is so that transmit processing can round-robin service these four queues to keep all four interfaces busy, and not let a highly-active interface lock out the others (which would happen with a single queue).
- the transmit request may be a segment that is less than the MSS, or it may be as much as a full 64K SMB READ. Obviously the former request will go out as one segment, the latter as a number of MSS-sized segments.
- the transmitting TCB must hold on to the request until all data in it has been transmitted and acked. Appropriate pointers to do this will be kept in the TCB.
- a large buffer is acquired from the free buffer fifo, and the MAC and TCP/IP headers are created in it. It may be quicker/simpler to keep a basic frame header set up in the TCB and simply DMA this directly into the frame each time. Then data is DMA'd from host memory into the frame to create an MSS-sized segment.
- This DMA also checksums the data. Then the checksum is adjusted for the pseudo-header and placed into the TCP header, and the frame is queued to the MAC transmit interface which may be controlled by the third sequencer. The final step is to update various window fields etc in the TCB. Eventually either the entire request will have been sent and acked, or a retransmission timer will expire in which case the context is flushed to the host. In either case, the INIC will place a command response in the Response queue containing the command buffer handle from the original transmit command and appropriate status.
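- A minimal sketch of the segmentation loop implied above, with inic_send_segment( ) standing in for the acquire-buffer, build-headers and DMA-data steps (names are illustrative):

    #include <stdint.h>

    void inic_send_segment(uint32_t tcb, uint32_t offset, uint32_t len);

    void inic_transmit_request(uint32_t tcb, uint32_t total_len, uint32_t mss)
    {
        uint32_t offset = 0;

        while (offset < total_len) {
            uint32_t seg = total_len - offset;
            if (seg > mss)
                seg = mss;           /* a 64K SMB READ becomes many MSS-sized segments */
            inic_send_segment(tcb, offset, seg);
            offset += seg;
        }
        /* The request is held by the TCB until every segment has been acked. */
    }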
- Window Probe vs Window Update an explanation for posterity.
- a Window Probe is sent from the sending TCB to the receiving TCB, and it means the sender has the receiver in PERSIST state. Persist state is entered when the receiver advertises a zero window. It is thus the state of the transmitting TCB. In this state, he sends periodic window probes to the receiver in case an ACK from the receiver has been lost. The receiver will return his latest window size in the ACK.
- a Window Update is sent from the receiving TCB to the sending TCB, usually to tell him that the receiving window has altered. It is mostly triggered by the upper layer when it accepts some data. This probably means the sending TCB is viewing the receiving TCB as being in PERSIST state.
- Persist state it is designed to handle Persist state on the INIC. It seems unreasonable to throw a TCB back to the host just because its receiver advertised a zero window. This would normally be a transient situation, and would tend to happen mostly with clients that do not support slow-start. Alternatively, the code can easily be changed to throw the TCB back to the host as soon as a receiver advertises a zero window.
- MSS-sized frames the INIC code will expect all transmit requests for which it has no TCB to not be greater than the MSS. If any request is, it will be dropped and an appropriate response status posted.
- Event13Type & Event23Type (we assume there will be an event status bit for this—USE_EV13 and USE_EV23) in the events register; these are events from sequencers 1 and 2; they will mainly be XMIT requests from the XMT sequencer. Dequeue request and place the frame on the appropriate interface.
- RCV-frame support In the model, RCV is done through VinicReceive( ), which is registered by the lower-edge driver and is called at dispatch level. This routine calls VinicTransferDataComplete( ) to check if the transfer (possibly DMA) of the frame into host buffers is complete. The latter routine is also called at dispatch level on a DMA-completion interrupt. It queues complete buffers to the RCV sequencer via the normal queue mechanism.
- Buffers are 256 bytes long on 256-byte boundaries.
- Bits 31-8: physical address in host memory of a set of contiguous header buffers. Bits 7-0: number of header buffers passed.
- Buffers are 4K long aligned on 4K boundaries.
- Buffers are multiple of 32 bytes up to 1K long (2**5*32).
- Buffers are 32 bytes long on 32-byte boundaries.
- Low buffer threshold support Set appropriate bits in the ISR when the available-buffers count in the various queues filled by the host falls below a threshold.
- the utility processor of the microprocessor housed on the INIC is responsible for setting up and implementing all configuration space and memory mapped operations, and also as described below, for managing the debug interface.
- PCI_INT four bit register
- THE MICROPROCESSOR uses two registers, the PCI Data_Reg and the PCI_Address_Reg, to enable the Host to access Configuration Space and the memory space allocated to the INIC. These registers are not available to the Host, but are used by THE MICROPROCESSOR to enable Host reads and writes. The function of these two registers is as follows.
- PCI_Data_Reg This register can be both read and written by THE MICROPROCESSOR. On write operations from the host, this register contains the data being sent from the host. On read operations, this register contains the data to be sent to the host.
- PCI_Address_Reg This is the control register for memory reads and writes from the host. The structure of the register is shown in FIG. 34. During a write operation from the Host the PCI_Data_Reg contains valid data after Data Valid is set in the PCI_Address_Reg. Both registers are locked until THE MICROPROCESSOR writes the PCI_Data_Reg, which resets Data Valid. All read operations will be direct from SRAM. Memory space based reads will return 00. Configuration space reads will be mapped as shown in FIG. 35.
- the INIC is implemented as a multi-function device.
- the first device is the network controller, and the second device is the debug interface.
- An alternative production embodiment may implement only the network controller function. Both configuration space headers will be the same, except for the differences noted in the following description.
- Vendor ID This field will contain the Alacritech Vendor ID. One field will be used for both functions.
- the Alacritech Vendor ID is hex 139A.
- Device ID Assigned at Alacritech on a device-specific basis. One field will be used for both functions.
- Command Initialized to 00. All bits defined below as not enabled (0) will remain 0. Those that are enabled will be set to 0 or 1 depending on the state of the system. Each function (network and debug) will have its own command field, as shown in FIG. 36.
- revision ID The revision field will be shared by both functions.
- Class Code This is 02 00 00 for the network controller, and for the debug interface. The field will be shared.
- Cache Line Size This is initialized to zero. Supported sizes are 16, 32, 64 and 128 bytes. This hardware register is replicated in SRAM and supported separately for each function, but THE MICROPROCESSOR will implement the value set in Configuration Space 1 (the network processor).
- Latency Timer This is initialized to zero. The function is supported. This hardware register is replicated in SRAM. Each function is supported separately, but THE MICROPROCESSOR will implement the value set in Configuration Space 1 (the network processor).
- Header Type This is set to 80 for both functions, but will be supported separately.
- BIST Is implemented. In addition to responding to a request to run self test, if test after reset fails, a code will be set in the BIST register. This will be implemented separately for each function.
- Base Address Register A single base address register is implemented for each function. It is 64 bits in length, and the bottom four bits are configured as follows: Bit 0 = 0, indicates memory base address; Bits 1-2 = 00, locate base address anywhere in 32-bit memory space; and Bit 3 = 1, memory is prefetchable.
- CardBus CIS Pointer Not implemented—initialized to 0.
- Interrupt Line Implemented—initialized to 0. This is implemented separately for each function.
- Interrupt Pin This is set to 01, corresponding to INTA# for the network controller, and 02, corresponding to INTB# for the debug interface. This is implemented separately for each function.
- Min_Gnt This can be set to a value in the range of 10, to allow reasonably long bursts on the bus. This is implemented separately for each function.
- Max_Lat This can be set to 0 to indicate no particular requirement for frequency of access to PCI. This is implemented separately for each function.
- each of the following functions may or may not reside in a single location, and may or may not need to be in SRAM at all, the address for each is really only used as an identifier (label). There is, therefore, no control block anywhere in memory that represents this memory space. When the host writes one of these registers, the utility processor will construct the data required and transfer it. Reads to this memory will generate 00 for data.
- Interrupt Status Returned status from host. Sent after one or more status conditions have been reset. Also an interlock for storing any new status. Once status has been stored at the Interrupt Status Pointer location, no new status will be ORed until the host writes the Interrupt Status Register. New status will be ORed with any remaining uncleared status (as defined by the contents of the returned status) and stored again at the Interrupt Status Pointer location. Bits are as follows:
- Bit 31 Error bits are set
- Bit 25 RMISS—Receive drop occurred due to no buffers.
- Interrupt Mask Written by the host. Interrupts are masked for each of the bits in the interrupt status when the same bit in the mask register is set. When the Interrupt Mask register is written and, as a result, a status bit becomes unmasked, an interrupt is generated. Also, when the Interrupt Status Register is written (enabling new status to be stored), an interrupt is generated if the newly stored status contains a bit that is not masked by the Interrupt Mask.
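- Viewed as a simple model (illustrative only), the rule reduces to raising an interrupt whenever some stored status bit has a clear mask bit, whether that happens because new status was stored or because the mask register was rewritten:

    #include <stdint.h>

    static int should_interrupt(uint32_t status, uint32_t mask)
    {
        return (status & ~mask) != 0;    /* any unmasked status bit pending */
    }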
- 0C Header Buffer Address—Written by host to pass a set of header buffers to the INIC.
- 10 Data Buffer Handle—First register to be written by the Host to transfer a receive data buffer to the INIC. This data is Host reference data. It is not used by the INIC, it is returned with the data buffer. However, to insure integrity of the buffer, this register must be interlocked with the Data Buffer Address register. Once the Data Buffer Address register has been written, neither register can be written until after the Data Buffer Handle register has been read by THE MICROPROCESSOR.
- Ending status will be handled by the utility processor in the same fashion as it is handled by the network processor. At present two ending status conditions are defined: B31—command complete, and B30—error. When ending status is stored, an interrupt is generated.
- Command Pointer Two additional registers are defined, Command Pointer and Data Pointer.
- the Host is responsible for insuring that the Data Pointer is valid and points to sufficient memory before storing a command pointer. Storing a command pointer initiates command decode and execution by the debug processor. The Host must not modify either command or Data Pointer until ending status has been received, at which point a new command may be initiated. Memory space is write only by the Host, reads will receive 00.
- the format is as follows:
- Interrupt Status Returned status from host. Sent after one or more status conditions have been reset. Also an interlock for storing any new status. Once status has been stored at the Interrupt Status Pointer location, no new status will be stored until the host writes the Interrupt Status Register. New status will be ORed with any remaining uncleared status (as defined by the contents of the returned status) and stored again at the Interrupt Status Pointer location. Bits are as follows:
- Bit 29 Transmit Processor Halted
- Bit 28 Receive Processor Halted
- Bit 27 Utility Processor Halted.
- Interrupt Mask Written by the host. Interrupts are masked for each of the bits in the interrupt status when the same bit in the mask register is set. When the Interrupt Mask register is written and, as a result, a status bit becomes unmasked, an interrupt is generated. Also, when the Interrupt Status Register is written (enabling new status to be stored), an interrupt is generated if the newly stored status contains a bit that is not masked by the Interrupt Mask.
- 0C Command Pointer—Points to command to be executed. Storing this pointer initiates command decode and execution.
- the first byte of the command defines the structure of the remainder of the command.
- the first five bits of the command byte are the command itself. The next bit is used to specify an alternate processor, and the last two bits specify which processors are intended for the command.
- This bit defines which processor should handle debug processing if the utility processor is defined as the processor in debug.
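- An illustrative decoding of the command byte (assigning the command to the high-order bits is an assumption; the text only gives the field widths):

    #include <stdint.h>

    #define DBG_CMD(b)        (((b) >> 3) & 0x1F)   /* first five bits: the command         */
    #define DBG_ALT_PROC(b)   (((b) >> 2) & 0x01)   /* alternate-processor select bit       */
    #define DBG_PROC_MASK(b)  ((uint8_t)(b) & 0x03) /* which processors the command targets */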
- This command sets a stop at the specified address.
- a count of 1 causes the specified processor to halt the first time it executes the instruction.
- a count of 2 or more causes the processor to halt after that number of executions.
- the processor is halted just before executing the instruction.
- a count of 0 does not halt the processor, but causes a sync signal to be generated. If a second processor is set to the same break address, the count data from the first break request is used, and each time either processor executes the instruction the count is decremented.
- This command resets a previously set break point at the specified address. Reset break fully resets that address. If multiple processors were set to that break point, all will be reset.
- This command transfers to the host the contents of the descriptor. For descriptors larger than four bytes, a count (in four-byte increments) is specified. For descriptors utilizing an address, the address field is specified.
- Stand-alone descriptors The following descriptors do not use either the count or address fields. They transfer the contents of the referenced register.
- This register contains four bytes of data. If error status is posted for a command, and the next command that is issued reads this register, a code describing the error in more detail may be obtained. If any command other than a dump of this register is issued after error status, the sense information will be reset.
- This command transfers from the host the contents of the descriptor. For descriptors larger than four bytes, a count (in four-byte increments) is specified. For descriptors utilizing an address, the address field is specified.
- Stand-alone descriptors The following descriptors do not use either the count or address fields. They transfer the contents of the referenced register.
- This command allows an instruction in ROM to be replaced by an instruction in WCS.
- the new instruction will be located in the Host buffer. It will be stored in the first eight bytes of the buffer, with the high bits unused. To reset a mapped out instruction, map it to location 00.
- Writable control store allows field updates for feature enhancements.
- the microprocessor (see FIG. 38) is a 32-bit, full-duplex, four channel, 10/100-Megabit per second (Mbps), Intelligent Network Interface Controller (INIC), designed to provide high-speed protocol processing for server applications. It combines the functions of a standard network interface controller and a protocol processor within a single chip. Although designed specifically for server applications, the microprocessor can be used by PCs, workstations and routers or anywhere that TCP/IP protocols are being utilized.
- INIC Intelligent Network Interface Controller
- the INIC When combined with four 802.3/MII compliant Phys and Synchronous DRAM (SDRAM), the INIC comprises four complete ethernet nodes. It contains four 802.3/ethernet compliant Macs, a PCI Bus Interface Unit (BIU), a memory controller, transmit fifos, receive fifos and a custom TCP/IP/NETBIOS protocol processor.
- the INIC supports 10Base-T, 100Base-TX, 100Base-FX and 100Base-T4 via the MII interface attachment of appropriate Phys.
- the INIC Macs provide statistical information that may be used for SNMP.
- the Macs operate in promiscuous mode allowing the INIC to function as a network monitor, receive broadcast and multicast packets and implement multiple Mac addresses for each node.
- Any 802.3/MII compliant PHY can be utilized, allowing the INIC to support 10BASE-T, 10BASE-T2, 100BASE-TX, 100BASE-FX and 100BASE-T4 as well as future interface standards.
- PHY identification and initialization is accomplished through host driver initialization routines.
- PHY status registers can be polled continuously by the INIC and detected PHY status changes reported to the host driver.
- the Mac can be configured to support a maximum frame size of 1518 bytes or 32768 bytes.
- the 64-bit, multiplexed BIU provides a direct interface to the PCI bus for both slave and master functions.
- the INIC is capable of operating in either a 64-bit or 32-bit PCI environment, while supporting 64-bit addressing in either configuration.
- PCI bus frequencies up to 66 MHz are supported yielding instantaneous bus transfer rates of 533 MB/s.
- Both 5.0V and 3.3V signaling environments can be utilized by the INIC.
- Configurable cache-line size up to 256B will accommodate future architectures, and Expansion ROM/Flash support allows for diskless system booting.
- Non-PC applications are supported via programmable big and little endian modes. Host based communication has been utilized to provide the best system performance possible.
- the INIC supports Plug-N-Play auto-configuration through the PCI configuration space. External pull-up and pull-down resistors, on the memory I/O pins, allow selection of various features during chip reset. Support of an external eeprom allows for local storage of configuration information such as Mac addresses.
- External SDRAM provides frame buffering, which is configurable as 4 MB, 8 MB, 16 MB or 32 MB using the appropriate SIMMs. Use of -10 speed grades yields an external buffer bandwidth of 224 MB/s.
- the buffer provides temporary storage of both incoming and outgoing frames.
- the protocol processor accesses the frames within the buffer in order to implement TCP/IP and NETBIOS. Incoming frames are processed, assembled then transferred to host memory under the control of the protocol processor. For transmit, data is moved from host memory to buffers where various headers are created before being transmitted out via the Mac.
- FIG. 39 shows the area on the die of each module.
- the processor is a convenient means to provide a programmable state-machine which is capable of processing incoming frames, processing host commands, directing network traffic and directing PCI bus traffic.
- Three processors are implemented using shared hardware in a three-level pipelined architecture which launches and completes a single instruction for every clock cycle. The instructions are executed in three distinct phases corresponding to each of the pipeline stages where each phase is responsible for a different function.
- the first instruction phase writes the instruction results of the last instruction to the destination operand, modifies the program counter (Pc), selects the address source for the instruction to fetch, then fetches the instruction from the control store.
- Pc program counter
- the fetched instruction is then stored in the instruction register at the end of the clock cycle.
- the processor instructions reside in the on-chip control-store, which is implemented as a mixture of ROM and SRAM.
- the ROM contains 1K instructions starting at address 0x0000 and aliases every 0x0400 locations throughout the first 0x8000 of instruction space.
- the SRAM (WCS) will hold up to 0x2000 instructions starting at address 0x8000 and aliasing every 0x2000 locations throughout the last 0x8000 of instruction space.
- the ROM and SRAM are both 49-bits wide accounting for bits [48:0] of the instruction microword.
- a separate mapping ram provides bits [55:49] of the microword (MapAddr) to allow replacement of faulty ROM based instructions.
- the mapping ram has a configuration of 128 × 7 which is insufficient to allow a separate map address for each of the 1K ROM locations.
- the map ram address lines are connected to the address bits Fetch[9:3]. The result is that the ROM is re-mapped in blocks of 8 contiguous locations.
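- In other words, the map ram index is simply bits [9:3] of the fetch address; an illustrative one-liner:

    #include <stdint.h>

    static uint8_t map_ram_index(uint16_t fetch_addr)
    {
        return (uint8_t)((fetch_addr >> 3) & 0x7F);  /* Fetch[9:3] -> 0..127 */
    }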
- the second instruction phase decodes the instruction which was stored in the instruction register. It is at this point that the map address is checked for a non-zero value, which will cause the decoder to force a Jmp instruction to the map address. If a non-zero value is not detected, then the decoder selects the source operands for the Alu operation based on the values of the OpdASel, OpdBSel and AluOp fields. These operands are then stored in the decode register at the end of the clock cycle. Operands may originate from File, SRAM, or flip-flop based registers.
- the second instruction phase is also where the results of the previous instruction are written to the SRAM.
- the third instruction phase is when the actual Alu operation is performed, the test condition is selected and the Stack push and pop are implemented. Results of the Alu operation are stored in the results register at the end of the clock cycle.
- FIG. 42 is a block diagram of the CPU.
- FIG. 42 shows the hardware functions associated with each of the instruction phases. Note that various functions have been distributed across the three phases of the instruction execution in order to minimize the combinatorial delays within any given phase.
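- As a behavioral illustration only (not a reproduction of FIG. 42), the three-phase, shared-hardware pipeline can be sketched in C: three processor contexts rotate through the fetch, decode and execute phases, so one instruction launches and one completes every clock. Register writeback and Pc update, which the text places in the first phase, are only noted in comments here.

```c
#include <stdint.h>

/* Per-processor pipeline state; one of the three shared-hardware CPUs
 * occupies each phase on any given clock. */
struct cpu_ctx {
    uint16_t pc;          /* program counter (update not modeled here)   */
    uint64_t instr_reg;   /* instruction latched at the end of phase 1   */
    uint64_t decode_reg;  /* operands/controls latched at end of phase 2 */
    uint32_t result_reg;  /* ALU result latched at the end of phase 3    */
};

/* One clock tick: each context sits in a different phase, so a single
 * instruction is launched and a single instruction completed per cycle. */
static void clock_tick(struct cpu_ctx ctx[3],
                       uint64_t (*fetch)(uint16_t),
                       uint64_t (*decode)(uint64_t),
                       uint32_t (*execute)(uint64_t))
{
    ctx[2].result_reg = execute(ctx[2].decode_reg); /* phase 3: ALU, tests, stack */
    ctx[1].decode_reg = decode(ctx[1].instr_reg);   /* phase 2: decode, operand select */
    ctx[0].instr_reg  = fetch(ctx[0].pc);           /* phase 1: writeback + fetch */

    /* rotate the contexts so each advances to the next phase next clock */
    struct cpu_ctx t = ctx[2];
    ctx[2] = ctx[1];
    ctx[1] = ctx[0];
    ctx[0] = t;
}
```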
- micro-instructions are divided into six types according to the program control directive.
- the micro-instruction is further divided into sub-fields for which the definitions are dependent upon the instruction type.
- the six instruction types are listed in FIG. 43.
- All instructions include the Alu operation (AluOp), operand “A” select (OpdASel), operand “B” select (OpdBSel) and Literal fields. Other field usage depends upon the instruction type.
- the “jump condition code” (Jcc) instruction causes the program counter to be altered if the condition selected by the “test select” (TstSel) field is asserted.
- the new program counter (Pc) value is loaded from either the Literal field or the AluOut as described in the following section, and the Literal field may be used as a source for the Alu or the ram address if the new Pc value is sourced by the Alu.
- the “jump” (Jmp) instruction causes the program counter to be altered unconditionally.
- the new program counter (Pc) value is loaded from either the Literal field or the AluOut as described in the following section.
- the format allows instruction bits 23:16 to be used to perform a flag operation and the Literal field may be used as a source for the Alu or the ram address if the new Pc value is sourced by the Alu.
- the “jump subroutine” (Jsr) instruction causes the program counter to be altered unconditionally.
- the new program counter (Pc) value is loaded from either the Literal field or the AluOut as described in the following section.
- the old program counter value is stored on the top location of the Pc-Stack which is implemented as a LIFO memory.
- the format allows instruction bits 23:16 to be used to perform a flag operation and the Literal field may be used as a source for the Alu or the ram address if the new Pc value is sourced by the Alu.
- the “next” (Nxt) instruction causes the program counter to increment.
- the format allows instruction bits 23:16 to be used to perform a flag operation and the Literal field may be used as a source for the Alu or the ram address.
- MapEn denotes the “map enable” field.
- MapAddr denotes the “map address” field.
- the instruction decoder forces a jump instruction with the Alu operation and destination fields set to pass the MapAddr field to the program control block.
- the program control is determined by a combination of PgmCtrl, DstOpd, FlgSel and TstSel.
- the behavior of the program control is defined with the “C-like” description in FIG. 44.
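- The following C sketch is an illustration by the editor, not a reproduction of FIG. 44. It covers only the instruction types named above (Jcc, Jmp, Jsr, Nxt and the forced map jump); the stack depth and the use_alu/test parameters are assumptions standing in for the TstSel and source-select fields.

```c
#include <stdint.h>

enum pgm_ctrl { JCC, JMP, JSR, NXT, MAP };        /* instruction types named in the text */

struct pc_stack { uint16_t slot[8]; int top; };   /* LIFO for Jsr; depth assumed */

/* 'test' models the condition selected by TstSel; 'use_alu' models sourcing
 * the new Pc from AluOut instead of the Literal field. */
static uint16_t next_pc(enum pgm_ctrl type, uint16_t pc, int test,
                        int use_alu, uint16_t literal, uint16_t alu_out,
                        uint16_t map_addr, struct pc_stack *stk)
{
    uint16_t target = use_alu ? alu_out : literal;

    switch (type) {
    case JCC: return test ? target : (uint16_t)(pc + 1);
    case JMP: return target;
    case JSR: stk->slot[stk->top++ & 7] = pc;     /* old Pc saved on the Pc-Stack */
              return target;
    case MAP: return map_addr;                    /* forced jump replacing a faulty ROM word */
    case NXT:
    default:  return (uint16_t)(pc + 1);
    }
}
```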
- FIGS. 45 - 53 show ALU operations, selected operands, selected tests, and flag operations.
- SRAM is the nexus for data movement within the INIC.
- a hierarchy of sequencers, working in concert, accomplishes the movement of data between DRAM, SRAM, Cpu, ethernet and the Pci bus.
- slave sequencers, provided with stimulus from the master sequencers, carry out the requested data movement operations by way of the SRAM, Pci bus, DRAM and Flash.
- the slave sequencers prioritize, service and acknowledge the requests.
- the data flow block diagram of FIG. 54 shows all of the master and slave sequencers of the INIC product.
- Request information such as r/w, address, size, endian and alignment is represented by each request line.
- Acknowledge information to master sequencers includes only the size of the transfer being acknowledged.
- the block diagram of FIG. 55 illustrates how data movement is accomplished for a Pci slave write to DRAM.
- the Psi (Pci slave in) module functions as both a slave sequencer and a master sequencer.
- Psi sends a write request to the SramCtrl module.
- Psi requests Xwr to move data from SRAM to DRAM.
- Xwr subsequently sends a read request to the SramCtrl module then writes the data to the DRAM via the Xctrl module.
- Xwr sends an acknowledge to the Psi module.
- the SRAM control sequencer services requests to store to, or retrieve data from an SRAM organized as 1024 locations by 128 bits (16 KB).
- the sequencer operates at a frequency of 133 MHz, allowing both a Cpu access and a DMA access to occur during a standard 66 MHz Cpu cycle.
- One 133 MHz cycle is reserved for Cpu accesses during each 66 MHz cycle while the remaining 133 MHz cycle is reserved for DMA accesses on a prioritized basis.
- the block diagram of FIG. 56 shows the major functions of the SRAM control sequencer.
- a slave sequencer begins by asserting a request along with r/w, ram address, endian, data path size, data path alignment and request size.
- SramCtrl prioritizes the requests.
- the request parameters are then selected by a multiplexer which feeds the parameters to the SRAM via a register.
- the requestor provides the SRAM address which when coupled with the other parameters controls the input and output alignment.
- SRAM outputs are fed to the output aligner via a register. Requests are acknowledged in parallel with the returned data.
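- As an illustration only (not the patent's implementation), the request parameters and the 133 MHz slot arbitration described above can be sketched in C; the priority order among DMA requesters and all names are assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* Parameters a slave-sequencer request carries, per the text above. */
struct sram_req {
    int      valid;
    int      write;        /* r/w                 */
    uint16_t addr;         /* ram address         */
    int      big_endian;   /* endian              */
    uint8_t  size;         /* request size        */
    uint8_t  align;        /* data path alignment */
};

/* One 133 MHz slot: one slot per 66 MHz cycle is reserved for the Cpu, the
 * other is granted to the highest-priority pending DMA requester. */
static const struct sram_req *pick_request(int slot_is_cpu,
                                           const struct sram_req *cpu_req,
                                           const struct sram_req *dma_req,
                                           size_t n_dma)
{
    if (slot_is_cpu)
        return cpu_req->valid ? cpu_req : NULL;

    for (size_t i = 0; i < n_dma; i++)   /* lower index = higher priority (assumed) */
        if (dma_req[i].valid)
            return &dma_req[i];
    return NULL;
}
```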
- FIG. 57 is a timing diagram depicting two ram accesses during a single 66 MHz clock cycle.
- Xctrl (See FIG. 58) provides the facility whereby Xwr, Xrd, Dcfg and Eectrl access external Flash and DRAM.
- Xctrl includes an arbiter, i/o registers, data multiplexers, address multiplexers and control multiplexers. Ownership of the external memory interface is requested by each block and granted to each of the requesters by the arbiter function. Once ownership has been granted the multiplexers select the address, data and control signals from the owner, allowing access to external memory.
- the Xrd sequencer acts only as a slave sequencer. Servicing requests issued by master sequencers, the Xrd sequencer moves data from external SDRAM or flash to the SRAM, via the Xctrl module, in blocks of 32 bytes or less.
- the nature of the SDRAM requires fixed burst sizes for each of its internal banks with ras precharge intervals between each access. By selecting a burst size of 32 bytes for SDRAM reads and interleaving bank accesses on a 16 byte boundary, we can ensure that the ras precharge interval for the first bank is satisfied before burst completion for the second bank, allowing us to re-instruct the first bank and continue with uninterrupted DRAM access.
- SDRAMs require a consistent burst size be utilized each and every time the SDRAM is accessed. For this reason, if an SDRAM access does not begin or end on a 32 byte boundary, SDRAM bandwidth will be reduced due to less than 32 bytes of data being transferred during the burst cycle.
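- To make the alignment penalty concrete, here is a small C sketch (editorial illustration, not from the patent) that counts how many fixed 32-byte burst slots a transfer consumes and what fraction of those cycles carry useful data.

```c
#include <stdint.h>

#define BURST_BYTES 32u   /* fixed SDRAM burst used by the Xrd/Xwr sequencers */

/* Number of 32-byte burst slots consumed by 'len' bytes starting at 'addr'.
 * A transfer that does not begin and end on a 32-byte boundary still costs
 * whole bursts, which is the bandwidth loss described above. */
static uint32_t burst_slots(uint32_t addr, uint32_t len)
{
    uint32_t first = addr & ~(BURST_BYTES - 1);
    uint32_t last  = (addr + len + BURST_BYTES - 1) & ~(BURST_BYTES - 1);
    return (last - first) / BURST_BYTES;
}

/* Fraction of burst cycles carrying useful data (1.0 = fully aligned).
 * e.g. burst_efficiency(4, 32) ~= 0.5: two bursts move one burst of data. */
static double burst_efficiency(uint32_t addr, uint32_t len)
{
    return (double)len / (double)(burst_slots(addr, len) * BURST_BYTES);
}
```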
- FIG. 59 depicts the major functional blocks of the Xrd external memory read sequencer.
- the first step in servicing a request to move data from SDRAM to SRAM is the prioritization of the master sequencer requests.
- the Xrd sequencer takes a snapshot of the DRAM read address and applies configuration information to determine the correct bank, row and column address to apply.
- the Xrd sequencer issues a write request to the SramCtrl sequencer which in turn sends an acknowledge to the Xrd sequencer.
- the Xrd sequencer passes the acknowledge along to the level two master with a size code indicating how much data was written during the SRAM cycle allowing the update of pointers and counters.
- the DRAM read and SRAM write cycles repeat until the original burst request has been completed at which point the Xrd sequencer prioritizes any remaining requests in preparation for the next burst cycle.
- FIG. 60 is a timing diagram illustrating how data is read from SDRAM.
- the DRAM has been configured for a burst of four with a latency of two clock cycles.
- Bank A is first selected/activated followed by a read command two clock cycles later.
- the bank select/activate for bank B is next issued as read data begins returning two clocks after the read command was issued to bank A.
- Two clock cycles before we need to receive data from bank B we issue the read command. Once all 16 bytes have been received from bank A we begin receiving data from bank B.
- the Xwr sequencer is a slave sequencer. Servicing requests issued by master sequencers, the Xwr sequencer moves data from SRAM to the external SDRAM or flash, via the Xctrl module, in blocks of 32 bytes or less while accumulating a checksum of the data moved.
- the nature of the SDRAM requires fixed burst sizes for each of its internal banks with ras precharge intervals between each access.
- FIG. 61 depicts the major functional blocks of the Xwr sequencer.
- the first step in servicing a request to move data from SRAM to SDRAM is the prioritization of the level two master requests.
- the Xwr sequencer takes a Snapshot of the DRAM write address and applies configuration information to determine the correct DRAM, bank, row and column address to apply.
- the Xwr sequencer immediately issues a read command to the SRAM to which the SRAM responds with both data and an acknowledge.
- the Xwr sequencer passes the acknowledge to the level two master along with a size code indicating how much data was read during the SRAM cycle allowing the update of pointers and counters.
- the Xwr sequencer issues a write command to the DRAM starting the burst cycle and computing a checksum as the data flies by.
- the SRAM read cycle repeats until the original burst request has been completed at which point the Xwr sequencer prioritizes any remaining requests in preparation for the next burst cycle.
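- The patent does not spell out the checksum algorithm at this point; as an illustration under that assumption, a TCP/IP-style 16-bit ones-complement sum that could be accumulated as each block is moved, then folded once the whole transfer is done, looks like this in C.

```c
#include <stdint.h>
#include <stddef.h>

/* Accumulate a 16-bit ones-complement sum over a block as it is moved. */
static uint32_t csum_accumulate(uint32_t acc, const uint8_t *data, size_t len)
{
    for (size_t i = 0; i + 1 < len; i += 2)
        acc += ((uint32_t)data[i] << 8) | data[i + 1];
    if (len & 1)                          /* odd trailing byte */
        acc += (uint32_t)data[len - 1] << 8;
    return acc;
}

/* Fold the carries and invert to produce the final Internet-style checksum. */
static uint16_t csum_finish(uint32_t acc)
{
    while (acc >> 16)
        acc = (acc & 0xFFFF) + (acc >> 16);
    return (uint16_t)~acc;
}
```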
- FIG. 62 is a timing diagram illustrating how data is written to SDRAM.
- the DRAM has been configured for a burst of four with a latency of two clock cycles.
- Bank A is first selected/activated followed by a write command two clock cycles later.
- the bank select/activate for bank B is next issued in preparation for issuing the second write command. As soon as the first 16 byte burst to bank A completes we issue the write command for bank B and begin supplying data.
- the Pmo sequencer (See FIG. 63) acts only as a slave sequencer. Servicing requests issued by master sequencers, the Pmo sequencer moves data from an SRAM based fifo to a Pci target, via the PciMstrIO module, in bursts of up to 256 bytes.
- the nature of the PCI bus dictates the use of the write line command to ensure optimal system performance.
- the write line command requires that the Pmo sequencer be capable of transferring a whole multiple (1×, 2×, 3×, . . . ) of cache lines of which the size is set through the Pci configuration registers.
- Pmo will automatically perform partial bursts until it has aligned the transfers on a cache line boundary at which time it will begin usage of the write line command.
- the SRAM fifo depth of 256 bytes has been chosen in order to allow Pmo to accommodate cache line sizes up to 128 bytes. Provided the cache line size is less than 128 bytes, Pmo will perform multiple, contiguous cache line bursts until it has exhausted the supply of data.
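- The partial-then-aligned burst behaviour can be illustrated with a short C sketch (editorial example; struct and function names are not from the patent): a leading partial burst brings the PCI address to a cache-line boundary, after which whole-line bursts can use the write line command.

```c
#include <stdint.h>

/* Split a PCI transfer into a leading partial burst up to the next cache-line
 * boundary, whole-line bursts, and a trailing partial burst. */
struct pci_burst_plan {
    uint32_t head_bytes;   /* partial burst before alignment */
    uint32_t full_lines;   /* whole cache lines               */
    uint32_t tail_bytes;   /* trailing partial burst          */
};

/* line_size must be a power of two (e.g. 32, 64 or 128 bytes). */
static struct pci_burst_plan plan_bursts(uint32_t pci_addr, uint32_t len,
                                         uint32_t line_size)
{
    struct pci_burst_plan p;
    uint32_t head = (line_size - (pci_addr & (line_size - 1))) & (line_size - 1);

    if (head > len)
        head = len;
    p.head_bytes = head;
    p.full_lines = (len - head) / line_size;
    p.tail_bytes = (len - head) % line_size;
    return p;
}
```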
- Pmo receives requests from two separate sources; the DRAM to Pci (D2p) module and the SRAM to Pci (S2p) module.
- An operation first begins with prioritization of the requests where the S2p module is given highest priority.
- the Pmo module takes a Snapshot of the SRAM fifo address and uses this to generate read requests for the SramCtrl sequencer.
- the Pmo module then proceeds to arbitrate for ownership of the Pci bus via the PciMstrIO module. Once the Pmo holding registers have sufficient data and Pci bus mastership has been granted, the Pmo module begins transferring data to the Pci target.
- for each successful transfer, Pmo sends an acknowledge and encoded size to the master sequencer, allowing it to update its internal pointers, counters and status. Once the Pci burst transaction has terminated, Pmo parks on the Pci bus unless another initiator has requested ownership. Pmo again prioritizes the incoming requests and repeats the process.
- the Pmi sequencer (See FIG. 64) acts only as a slave sequencer. Servicing requests issued by master sequencers, the Pmi sequencer moves data from a Pci target to an SRAM based fifo, via the PciMstrIO module, in bursts of up to 256 bytes.
- the nature of the PCI bus dictates the use of the read multiple command to ensure optimal system performance.
- the read multiple command requires that the Pmi sequencer be capable of transferring a cache line or more of data. To accomplish this end, Pmi will automatically perform partial cache line bursts until it has aligned the transfers on a cache line boundary at which time it will begin usage of the read multiple command.
- the SRAM fifo depth of 256 bytes has been chosen in order to allow Pmi to accommodate cache line sizes up to 128 bytes. Provided the cache line size is less than 128 bytes, Pmi will perform multiple, contiguous cache line bursts until it has filled the fifo.
- Pmi receives requests from two separate sources: the Pci to DRAM (P2d) module and the Pci to SRAM (P2s) module.
- An operation first begins with prioritization of the requests where the P2s module is given highest priority.
- the Pmi module then proceeds to arbitrate for ownership of the Pci bus via the PciMstrIO module. Once the Pci bus mastership has been granted and the Pmi holding registers have sufficient data, the Pmi module begins transferring data to the SRAM fifo. For each successful transfer, Pmi sends an acknowledge and encoded size to the master sequencer, allowing it to update its internal pointers, counters and status. Once the Pci burst transaction has terminated, Pmi parks on the Pci bus unless another initiator has requested ownership. Pmi again prioritizes the incoming requests and repeats the process.
- the D2p sequencer acts as a master sequencer. Servicing channel requests issued by the Cpu, the D2p sequencer manages movement of data from DRAM to the Pci bus by issuing requests to both the Xrd sequencer and the Pmo sequencer. Data transfer is accomplished using an SRAM based fifo through which data is staged.
- D2p can receive requests from any of the processor's thirty-two DMA channels. Once a command request has been detected, D2p fetches a DMA descriptor from an SRAM location dedicated to the requesting channel which includes the DRAM address, Pci address, Pci endian and request size. D2p then issues a request to the Xrd sequencer causing the SRAM based fifo to fill with DRAM data. Once the fifo contains sufficient data for a Pci transaction, D2p issues a request to Pmo which in turn moves data from the fifo to a Pci target.
- FIG. 65 is an illustration showing the major blocks involved in the movement of data from DRAM to Pci target.
- the P2d sequencer (See FIG. 67) acts as both a slave sequencer and a master sequencer. Servicing channel requests issued by the Cpu, the P2d sequencer manages movement of data from Pci bus to DRAM by issuing requests to both the Xwr sequencer and the Pmi sequencer. Data transfer is accomplished using an SRAM based fifo through which data is staged.
- P2d can receive requests from any of the processor's thirty-two DMA channels. Once a command request has been detected, P2d, operating as a slave sequencer, fetches a DMA descriptor from an SRAM location dedicated to the requesting channel which includes the DRAM address, Pci address, Pci endian and request size. P2d then issues a request to Pmi which in turn moves data from the Pci target to the SRAM fifo. Next, P2d issues a request to the Xwr sequencer causing the SRAM based fifo contents to be written to the DRAM.
- FIG. 68 is an illustration showing the major blocks involved in the movement of data from a Pci target to DRAM.
- the S2p sequencer (See FIG. 69) acts as both a slave sequencer and a master sequencer. Servicing channel requests issued by the Cpu, the S2p sequencer manages movement of data from SRAM to the Pci bus by issuing requests to the Pmo sequencer.
- S2p can receive requests from any of the processor's thirty-two DMA channels. Once a command request has been detected, S2p, operating as a slave sequencer, fetches a DMA descriptor from an SRAM location dedicated to the requesting channel which includes the SRAM address, Pci address, Pci endian and request size. S2p then issues a request to Pmo which in turn moves data from the SRAM to a Pci target. The process repeats until the entire request has been satisfied at which time S2p writes ending status into the SRAM DMA descriptor area and sets the channel done bit associated with that channel. S2p then monitors the DMA channels for additional requests.
- FIG. 70 is an illustration showing the major blocks involved in the movement of data from SRAM to Pci target.
- the P2s sequencer (See FIG. 71) acts as both a slave sequencer and a master sequencer. Servicing channel requests issued by the Cpu, the P2s sequencer manages movement of data from Pci bus to SRAM by issuing requests to the Pmi sequencer.
- P2s can receive requests from any of the processor's thirty-two DMA channels. Once a command request has been detected, P2s, operating as a slave sequencer, fetches a DMA descriptor from an SRAM location dedicated to the requesting channel which includes the SRAM address, Pci address, Pci endian and request size. P2s then issues a request to Pmi which in turn moves data from the Pci target to the SRAM. The process repeats until the entire request has been satisfied at which time P2s writes ending status into the DMA descriptor area of SRAM and sets the channel done bit associated with that channel. P2s then monitors the DMA channels for additional requests.
- FIG. 72 is an illustration showing the major blocks involved in the movement of data from a Pci target to SRAM.
- D2s DRAM to SRAM Sequencer
- the D2s sequencer (See FIG. 73) acts as both a slave sequencer and a master sequencer. Servicing channel requests issued by the Cpu, the D2s sequencer manages movement of data from DRAM to SRAM by issuing requests to the Xrd sequencer.
- D2s can receive requests from any of the processor's thirty-two DMA channels.
- D2s, operating as a slave sequencer, fetches a DMA descriptor from an SRAM location dedicated to the requesting channel which includes the DRAM address, SRAM address and request size. D2s then issues a request to the Xrd sequencer causing the transfer of data to the SRAM. The process repeats until the entire request has been satisfied at which time D2s writes ending status into the SRAM DMA descriptor area and sets the channel done bit associated with that channel. D2s then monitors the DMA channels for additional requests.
- FIG. 74 is an illustration showing the major blocks involved in the movement of data from DRAM to SRAM.
- the S2d sequencer (See FIG. 75) acts as both a slave sequencer and a master sequencer. Servicing channel requests issued by the Cpu, the S2d sequencer manages movement of data from SRAM to DRAM by issuing requests to the Xwr sequencer.
- S2d can receive requests from any of the processor's thirty-two DMA channels. Once a command request has been detected, S2d, operating as a slave sequencer, fetches a DMA descriptor from an SRAM location dedicated to the requesting channel which includes the DRAM address, SRAM address, checksum reset and request size. S2d then issues a request to the Xwr sequencer causing the transfer of data to the DRAM. The process repeats until the entire request has been satisfied at which time S2d writes ending status into the SRAM DMA descriptor area and sets the channel done bit associated with that channel. S2d then monitors the DMA channels for additional requests.
- FIG. 76 is an illustration showing the major blocks involved in the movement of data from SRAM to DRAM.
- the Psi sequencer (See FIG. 77) acts as both a slave sequencer and a master sequencer. Servicing requests issued by a Pci master, the Psi sequencer manages movement of data from Pci bus to SRAM and Pci bus to DRAM via SRAM by issuing requests to the SramCtrl and Xwr sequencers.
- Psi manages write requests to configuration space, expansion rom, DRAM, SRAM and memory mapped registers. Psi separates these Pci bus operations into two categories with different action taken for each. DRAM accesses result in Psi generating a write request to an SRAM buffer followed by a write request to the Xwr sequencer. Subsequent write or read DRAM operations are retry terminated until the buffer has been emptied. An event notification is set for the processor allowing message passing to occur through DRAM space.
- All other Pci write transactions result in Psi posting the write information, including Pci address, Pci byte marks and Pci data, to a reserved location in SRAM, then setting an event flag which the event processor monitors. Subsequent writes or reads of configuration, expansion rom, SRAM or registers are terminated with retry until the processor clears the event flag. This keeps the INIC pipelining levels to a minimum for the posted write and gives the processor ample time to modify data for subsequent Pci read operations.
- FIG. 77 depicts the sequence of events when Psi is the target of a Pci write operation. Note that events 4 through 7 occur only when the write operation targets the DRAM.
- the Pso sequencer (See FIG. 78) acts as both a slave sequencer and a master sequencer. Servicing requests issued by a Pci master, the Pso sequencer manages movement of data to Pci bus from SRAM and to Pci bus from DRAM via SRAM by issuing requests to the SramCtrl and Xrd sequencers.
- Pso manages read requests to configuration space, expansion rom, DRAM, SRAM and memory mapped registers. Pso separates these Pci bus operations into two categories with different action taken for each. DRAM accesses result in Pso generating a read request to the Xrd sequencer followed by a read request to the SRAM buffer. Subsequent write or read DRAM operations are retry terminated until the buffer has been emptied.
- All other Pci read transactions result in Pso posting the read request information including Pci address and Pci byte marks to a reserved location in SRAM, then setting an event flag which the event processor monitors. Subsequent writes or reads of configuration, expansion rom, SRAM or registers are terminated with retry until the processor clears the event flag.
- This allows the INIC to use a microcoded response mechanism to return data for the request.
- the processor decodes the request information, formulates or fetches the requested data and stores it in SRAM then clears the event flag allowing Pso to fetch the data and return it on the Pci bus.
- FIG. 78 depicts the sequence of events when Pso is the target of a Pci read operation.
- the receive sequencer (See FIG. 79) (RcvSeq) analyzes and manages incoming packets, stores the result in DRAM buffers, then notifies the processor through the receive queue (RcvQ) mechanism.
- the process begins when a buffer descriptor is available at the output of the FreeQ.
- RcvSeq issues a request to the Qmg which responds by supplying the buffer descriptor to RcvSeq.
- RcvSeq then waits for a receive packet.
- the Mac, network, transport and session information is analyzed as each byte is received and stored in the assembly register (AssyReg). When four bytes of information are available, RcvSeq requests a write of the data to the SRAM.
- a DRAM write request is issued to Xwr.
- the process continues until the entire packet has been received at which point RcvSeq stores the results of the packet analysis in the beginning of the DRAM buffer.
- RcvSeq issues a write-queue request to Qmg.
- Qmg responds by storing a buffer descriptor and a status vector provided by RcvSeq. The process then repeats. If RcvSeq detects the arrival of a packet before a free buffer is available, it ignores the packet and sets the FrameLost status bit for the next received packet.
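- The receive flow above can be outlined in C purely for illustration; the queue, MAC and DRAM helpers, the status-bit position and the reserved header area are all assumptions standing in for the Qmg, SramCtrl and Xwr requests the hardware actually issues, and full header analysis and tail-byte handling are omitted.

```c
#include <stdint.h>
#include <stdbool.h>

struct buf_desc { uint32_t dram_addr; uint32_t size; };

extern bool freeq_pop(struct buf_desc *d);              /* FreeQ via Qmg      */
extern void rcvq_push(struct buf_desc d, uint32_t status);
extern int  mac_recv_byte(void);                        /* -1 = end of frame  */
extern void dram_write(uint32_t addr, const void *p, uint32_t n);

static bool frame_lost;                                 /* reported with the next frame */

static void receive_one_frame(void)
{
    struct buf_desc d;
    uint8_t  assy[4];                                   /* assembly register  */
    uint32_t fill = 0, off = 16;                        /* room for analysis results (size assumed) */
    uint32_t status = frame_lost ? 1u << 0 : 0;         /* FrameLost bit position assumed */

    if (!freeq_pop(&d)) { frame_lost = true; return; }  /* no free buffer: drop the frame */
    frame_lost = false;

    for (int b; (b = mac_recv_byte()) >= 0; ) {
        assy[fill++ & 3] = (uint8_t)b;                  /* stage and analyze 4 bytes at a time */
        if ((fill & 3) == 0) {
            dram_write(d.dram_addr + off, assy, 4);     /* via SRAM fifo + Xwr in hardware */
            off += 4;
        }
    }
    dram_write(d.dram_addr, &status, 4);                /* analysis results at the buffer start */
    rcvq_push(d, status);                               /* notify the processor through RcvQ */
}
```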
- FIG. 80 depicts the sequence of events for successful reception of a packet followed by a definition of the receive buffer and the buffer descriptor as stored on the RcvQ.
- FIG. 90 shows the Receive Buffer Descriptor.
- FIGS. 91 - 93 show the Receive Buffer Format.
- the transmit sequencer (See FIG. 85) (XmtSeq) analyzes and manages outgoing packets, using buffer descriptors retrieved from the transmit queue (XmtQ) then storing the descriptor for the freed buffer in the free buffer queue (FreeQ).
- the process begins when a buffer descriptor is available at the output of the XmtQ.
- XmtSeq issues a request to the Qmg which responds by supplying the buffer descriptor to XmtSeq
- XmtSeq then issues a read request to the Xrd sequencer.
- XmtSeq issues a read request to SramCtrl then instructs the Mac to begin frame transmission.
- XmtSeq stores the buffer descriptor on the FreeQ thereby recycling the buffer.
- FIG. 86 depicts the sequence of events for successful transmission of a packet followed by a definition of the transmit buffer and the buffer descriptor as stored on the XmtQ.
- FIG. 87 shows the Transmit Buffer Descriptor.
- FIG. 88 shows the Transmit Buffer Format.
- FIG. 89 shows the Transmit Status Vector.
- the INIC includes special hardware assist for the implementation of message and pointer queues.
- the hardware assist is called the queue manager (See FIG. 90) (Qmg) and manages the movement of queue entries between Cpu and SRAM, between DMA sequencers and SRAM as well as between SRAM and DRAM.
- Queues comprise three distinct entities: the queue head (QHd), the queue tail (QTl) and the queue body (QBdy).
- QHd resides in 64 bytes of scratch ram and provides the area to which entries will be written (pushed).
- QTl resides in 64 bytes of scratch ram and contains the queue locations from which entries will be read (popped).
- QBdy resides in DRAM and contains locations for expansion of the queue in order to minimize the SRAM space requirements.
- the QBdy size depends upon the queue being accessed and the initialization parameters presented during queue initialization.
- Qmg accepts operations from both Cpu and DMA sources (See FIG. 91). Executing these operations at a frequency of 133 MHz, Qmg reserves even cycles for DMA requests and reserves odd cycles for Cpu requests.
- Valid Cpu operations include initialize queue (InitQ), write queue (WrQ) and read queue (RdQ).
- Valid DMA requests include read body (RdBdy) and write body (WrBdy).
- Qmg, working in unison with Q2d and D2q, generates requests to the Xwr and Xrd sequencers to control the movement of data between the QHd, QTl and QBdy.
- FIG. 90 shows the major functions of Qmg.
- the arbiter selects the next operation to be performed.
- the dual-ported SRAM holds the queue variables HdWrAddr, HdRdAddr, TlWrAddr, TlRdAddr, BdyWrAddr, BdyRdAddr and QSz.
- Qmg accepts an operation request, fetches the queue variables from the queue ram (Qram), modifies the variables based on the current state and the requested operation then updates the variables and issues a read or write request to the SRAM controller.
- the SRAM controller services the requests by writing the tail or reading the head and returning an acknowledge.
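- For illustration only (not the Qmg microarchitecture), the fetch-modify-update handling of a WrQ or RdQ operation can be sketched in C; the queue-variable names follow the text, but the ring arithmetic, widths and the DRAM body spill (not shown) are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

/* Queue variables as held in the dual-ported queue ram. */
struct q_vars {
    uint32_t wr_addr;            /* where the next entry is pushed (head side) */
    uint32_t rd_addr;            /* where the next entry is popped (tail side) */
    uint32_t body_wr, body_rd;   /* DRAM QBdy pointers for spill/refill        */
    uint32_t qsz;                /* configured size                            */
};

extern void     sram_write(uint32_t addr, uint32_t val);
extern uint32_t sram_read(uint32_t addr);

/* WrQ: fetch the variables, store the entry, update the variables. */
static bool qmg_write(struct q_vars *q, uint32_t entry)
{
    uint32_t next = (q->wr_addr + 1) % q->qsz;
    if (next == q->rd_addr)
        return false;            /* full: spill to the DRAM body in hardware */
    sram_write(q->wr_addr, entry);
    q->wr_addr = next;
    return true;
}

/* RdQ: mirror image of WrQ. */
static bool qmg_read(struct q_vars *q, uint32_t *entry)
{
    if (q->rd_addr == q->wr_addr)
        return false;            /* empty */
    *entry = sram_read(q->rd_addr);
    q->rd_addr = (q->rd_addr + 1) % q->qsz;
    return true;
}
```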
- DMA operations are accomplished through a combination of thirty-two DMA channels (DmaCh) and seven DMA sequencers (DmaSeq). Each DMA channel provides a mechanism whereby a Cpu can issue a command to any of the seven DMA sequencers. Whereas the DMA channels are multi-purpose, the DMA sequencers they command are single purpose as shown in FIG. 92.
- the processors manage DMA in the following way.
- the processor writes a DMA descriptor to an SRAM location reserved for the DMA channel.
- the format of the DMA descriptor is dependent upon the targeted DMA sequencer.
- the processor then writes the DMA sequencer number to the channel command register.
- Each of the DMA sequencers polls all thirty-two DMA channels in search of commands to execute. Once a command request has been detected, the DMA sequencer fetches a DMA descriptor from a fixed location in SRAM. The SRAM location is fixed and is determined by the DMA channel number. The DMA sequencer loads the DMA descriptor into its own registers, executes the command, then overwrites the DMA descriptor with ending status. Once the command has halted, due to completion or error, and the ending status has been written, the DMA sequencer sets the done bit for the current DMA channel.
- the done bit appears in a DMA event register which the Cpu can examine.
- the Cpu fetches ending status from SRAM, then clears the done bit by writing zeroes to the channel command (ChCmd) register.
- the channel is now ready to accept another command.
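- A processor-side view of this command sequence is sketched below purely as an illustration; the register layout, descriptor fields, SRAM offsets and sequencer number are placeholders for the actual formats shown in FIGS. 93-98, not values taken from them.

```c
#include <stdint.h>

/* Hypothetical P2d-style descriptor; real field layout is given by FIG. 94. */
struct p2d_desc { uint32_t pci_addr; uint32_t dram_addr; uint16_t size; uint8_t endian; };

extern void sram_write_block(uint32_t addr, const void *p, uint32_t n);
extern void sram_read_block(uint32_t addr, void *p, uint32_t n);
extern volatile uint32_t ch_cmd[32];       /* channel command (ChCmd) registers */
extern volatile uint32_t ch_evnt;          /* per-channel done bits (ChEvnt)    */

#define DESC_BASE(ch) (0x400u + (ch) * 0x20u)   /* per-channel SRAM slot (assumed) */
#define SEQ_P2D       2u                        /* sequencer number (assumed)      */

static uint32_t dma_p2d(unsigned ch, const struct p2d_desc *d)
{
    uint32_t status;

    sram_write_block(DESC_BASE(ch), d, sizeof *d);  /* 1: descriptor into SRAM      */
    ch_cmd[ch] = SEQ_P2D;                           /* 2: kick the chosen sequencer */

    while (!(ch_evnt & (1u << ch)))                 /* 3: poll the done bit         */
        ;
    sram_read_block(DESC_BASE(ch), &status, 4);     /* 4: fetch ending status       */
    ch_cmd[ch] = 0;                                 /* 5: clear; channel ready again*/
    return status;
}
```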
- the format of the channel command register is as shown in FIG. 93.
- the format of the P2d or P2s descriptor is as shown in FIG. 94.
- the format of the S2p or D2p descriptor is as shown in FIG. 95.
- the format of the S2d, D2d or D2s descriptor is as shown in FIG. 96.
- the format of the ending status of all channels is as shown in FIG. 97.
- the format of the ChEvnt register is as shown in FIG. 98.
- FIG. 99 is a block diagram of MAC CONTROL (Macctrl).
- N = number of jobs in the system (either in progress or in a queue); R = response time (which includes time waiting in queues).
- a 256-byte frame at 100 Mb/sec takes 20 usec to transmit.
- the real point here is the effect of instructions/frame on the throughput that can be maintained. If the instructions/frame drops to 200, then the INIC is capable of handling the full theoretical load (102000 frames/second) with only 9 active TCBs. If it drops to 100 instructions per frame, then the INIC can handle full bandwidth at 256 byte frames (204000 frames/second) with 10 active CCBs.
- the bottom line is that ALL hardware-assist that reduces the instructions/frame is really worthwhile. If header-assist hardware can save us 50 instructions per frame then it goes straight to the throughput bottom line.
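- The relation being used here can be checked with a small C program (editorial illustration): the frame rate of a 100 Mb/s stream at 256-byte frames is about 48.8k frames/second (roughly 20 usec per frame), and the processor load scales directly with the instruction budget per frame.

```c
#include <stdio.h>

/* Back-of-envelope: instructions/frame fixes the MIPS needed to keep up
 * with a given frame rate. */
int main(void)
{
    const double frame_bits = 256.0 * 8.0;            /* 256-byte frame          */
    const double link_bps   = 100e6;                  /* one 100 Mb/s interface  */
    const double frame_rate = link_bps / frame_bits;  /* ~48.8k frames/s         */

    for (int ipf = 100; ipf <= 400; ipf += 100)       /* instructions per frame  */
        printf("%3d instr/frame -> %.1f MIPS per 100 Mb/s stream\n",
               ipf, frame_rate * ipf / 1e6);
    return 0;
}
```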
Abstract
Description
- This application claims the benefit under 35 U.S.C. § 120 of U.S. patent application Ser. No. 09/748,936, filed Dec. 26, 2000, which in turn claims the benefit under 35 U.S.C. § 120 of U.S. patent application Ser. No. 09/439,603, filed Nov. 12, 1999, which in turn claims the benefit under 35 U.S.C. § 120 of U.S. patent application Ser. No. 09/067,544, filed Apr. 27, 1998, which in turn claims the benefit under 35 U.S.C. § 119(e) of the Provisional Application Ser. No. 60/061,809, filed Oct. 14, 1997. The complete disclosures of: U.S. patent application Ser. No. 09/748,936; U.S. patent application Ser. No. 09/439,603; U.S. patent application Ser. No. 09/067,544; and Provisional Application Ser. No. 60/061,809 are incorporated herein by reference.
- The present invention relates generally to computer or other networks, and more particularly to protocol processing for information communicated between hosts such as computers connected to a network.
- The advantages of network computing are increasingly evident. The convenience and efficiency of providing information, communication or computational power to individuals at their personal computer or other end user devices has led to rapid growth of such network computing, including internet as well as intranet systems and applications.
- As is well known, most network computer communication is accomplished with the aid of a layered software architecture for moving information between host computers connected to the network. The layers help to segregate information into manageable segments, the general functions of each layer often based on an international standard called Open Systems Interconnection (OSI). OSI sets forth seven processing layers through which information may pass when received by a host in order to be presentable to an end user. Similarly, transmission of information from a host to the network may pass through those seven processing layers in reverse order. Each step of processing and service by a layer may include copying the processed information. Another reference model that is widely implemented, called TCP/IP (TCP stands for transmission control protocol, while IP denotes internet protocol) essentially employs five of the seven layers of OSI.
- Networks may include, for instance, a high-speed bus such as an Ethernet connection or an internet connection between disparate local area networks (LANs), each of which includes multiple hosts, or any of a variety of other known means for data transfer between hosts. According to the OSI standard, physical layers are connected to the network at respective hosts, the physical layers providing transmission and receipt of raw data bits via the network. A data link layer is serviced by the physical layer of each host, the data link layers providing frame division and error correction to the data received from the physical layers, as well as processing acknowledgment frames sent by the receiving host. A network layer of each host is serviced by respective data link layers, the network layers primarily controlling size and coordination of subnets of packets of data.
- A transport layer is serviced by each network layer and a session layer is serviced by each transport layer within each host. Transport layers accept data from their respective session layers and split the data into smaller units for transmission to the other host's transport layer, which concatenates the data for presentation to respective presentation layers. Session layers allow for enhanced communication control between the hosts. Presentation layers are serviced by their respective session layers, the presentation layers translating between data semantics and syntax which may be peculiar to each host and standardized structures of data representation. Compression and/or encryption of data may also be accomplished at the presentation level. Application layers are serviced by respective presentation layers, the application layers translating between programs particular to individual hosts and standardized programs for presentation to either an application or an end user. The TCP/IP standard includes the lower four layers and application layers, but integrates the functions of session layers and presentation layers into adjacent layers. Generally speaking, application, presentation and session layers are defined as upper layers, while transport, network and data link layers are defined as lower layers.
- The rules and conventions for each layer are called the protocol of that layer, and since the protocols and general functions of each layer are roughly equivalent in various hosts, it is useful to think of communication occurring directly between identical layers of different hosts, even though these peer layers do not directly communicate without information transferring sequentially through each layer below. Each lower layer performs a service for the layer immediately above it to help with processing the communicated information. Each layer saves the information for processing and service to the next layer. Due to the multiplicity of hardware and software architectures, systems and programs commonly employed, each layer is necessary to insure that the data can make it to the intended destination in the appropriate form, regardless of variations in hardware and software that may intervene.
- In preparing data for transmission from a first to a second host, some control data is added at each layer of the first host regarding the protocol of that layer, the control data being indistinguishable from the original (payload) data for all lower layers of that host. Thus an application layer attaches an application header to the payload data and sends the combined data to the presentation layer of the sending host, which receives the combined data, operates on it and adds a presentation header to the data, resulting in another combined data packet. The data resulting from combination of payload data, application header and presentation header is then passed to the session layer, which performs required operations including attaching a session header to the data and presenting the resulting combination of data to the transport layer. This process continues as the information moves to lower layers, with a transport header, network header and data link header and trailer attached to the data at each of those layers, with each step typically including data moving and copying, before sending the data as bit packets over the network to the second host.
- The receiving host generally performs the converse of the above-described process, beginning with receiving the bits from the network, as headers are removed and data processed in order from the lowest (physical) layer to the highest (application) layer before transmission to a destination of the receiving host. Each layer of the receiving host recognizes and manipulates only the headers associated with that layer, since to that layer the higher layer control data is included with and indistinguishable from the payload data. Multiple interrupts, valuable central processing unit (CPU) processing time and repeated data copies may also be necessary for the receiving host to place the data in an appropriate form at its intended destination.
- The above description of layered protocol processing is simplified, as college-level textbooks devoted primarily to this subject are available, such as Computer Networks, Third Edition (1996) by Andrew S. Tanenbaum, which is incorporated herein by reference. As defined in that book, a computer network is an interconnected collection of autonomous computers, such as internet and intranet systems, including local area networks (LANs), wide area networks (WANs), asynchronous transfer mode (ATM), ring or token ring, wired, wireless, satellite or other means for providing communication capability between separate processors. A computer is defined herein to include a device having both logic and memory functions for processing data, while computers or hosts connected to a network are said to be heterogeneous if they function according to different operating systems or communicate via different architectures.
- As networks grow increasingly popular and the information communicated thereby becomes increasingly complex and copious, the need for such protocol processing has increased. It is estimated that a large fraction of the processing power of a host CPU may be devoted to controlling protocol processes, diminishing the ability of that CPU to perform other tasks. Network interface cards have been developed to help with the lowest layers, such as the physical and data link layers. It is also possible to increase protocol processing speed by simply adding more processing power or CPUs according to conventional arrangements. This solution, however, is both awkward and expensive. But the complexities presented by various networks, protocols, architectures, operating systems and applications generally require extensive processing to afford communication capability between various network hosts.
- The current invention provides a system for processing network communication that greatly increases the speed of that processing and the efficiency of moving the data being communicated. The invention has been achieved by questioning the long-standing practice of performing multilayered protocol processing on a general-purpose processor. The protocol processing method and architecture that results effectively collapses the layers of a connection-based, layered architecture such as TCP/IP into a single wider layer which is able to send network data more directly to and from a desired location or buffer on a host. This accelerated processing is provided to a host for both transmitting and receiving data, and so improves performance whether one or both hosts involved in an exchange of information have such a feature.
- The accelerated processing includes employing representative control instructions for a given message that allow data from the message to be processed via a fast-path which accesses message data directly at its source or delivers it directly to its intended destination. This fast-path bypasses conventional protocol processing of headers that accompany the data. The fast-path employs a specialized microprocessor designed for processing network communication, avoiding the delays and pitfalls of conventional software layer processing, such as repeated copying and interrupts to the CPU. In effect, the fast-path replaces the states that are traditionally found in several layers of a conventional network stack with a single state machine encompassing all those layers, in contrast to conventional rules that require rigorous differentiation and separation of protocol layers. The host retains a sequential protocol processing stack which can be employed for setting up a fast-path connection or processing message exceptions. The specialized microprocessor and the host intelligently choose whether a given message or portion of a message is processed by the microprocessor or the host stack.
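- As an editorial illustration only (not the patented implementation or its interfaces), the receive-side decision described above amounts to the following dispatch: validated frames that match a cached CCB go straight to their destination by DMA, while everything else, including exceptions, falls back to the conventional host stack.

```c
#include <stdbool.h>
#include <stdint.h>

/* Conceptual fast-path dispatch; the lookup and DMA calls are placeholders. */
struct frame { const uint8_t *data; uint32_t len; bool validated; };
struct ccb;                                            /* communication control block */

extern struct ccb *ccb_match(const struct frame *f);   /* cached CCB lookup           */
extern void dma_to_destination(struct ccb *c, const struct frame *f);
extern void hand_to_host_stack(const struct frame *f); /* conventional layer processing */

static void dispatch_frame(const struct frame *f)
{
    struct ccb *c;

    if (!f->validated) {             /* checksum or header parse failed */
        hand_to_host_stack(f);       /* slow path handles exceptions    */
        return;
    }
    c = ccb_match(f);
    if (c)
        dma_to_destination(c, f);    /* fast path: data straight to its buffer */
    else
        hand_to_host_stack(f);       /* no connection context yet: slow path,
                                        which may create a CCB and pass it down */
}
```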
- FIG. 1 is a plan view diagram of a system of the present invention, including a host computer having a communication-processing device for accelerating network communication.
- FIG. 2 is a diagram of information flow for the host of FIG. 1 in processing network communication, including a fast-path, a slow-path and a transfer of connection context between the fast and slow-paths.
- FIG. 3 is a flow chart of message receiving according to the present invention.
- FIG. 4A is a diagram of information flow for the host of FIG. 1 receiving a message packet processed by the slow-path.
- FIG. 4B is a diagram of information flow for the host of FIG. 1 receiving an initial message packet processed by the fast-path.
- FIG. 4C is a diagram of information flow for the host of FIG. 4B receiving a subsequent message packet processed by the fast-path.
- FIG. 4D is a diagram of information flow for the host of FIG. 4C receiving a message packet having an error that causes processing to revert to the slow-path.
- FIG. 5 is a diagram of information flow for the host of FIG. 1 transmitting a message by either the fast or slow-paths.
- FIG. 6 is a diagram of information flow for a first embodiment of an intelligent network interface card (INIC) associated with a client having a TCP/IP processing stack.
- FIG. 7 is a diagram of hardware logic for the INIC embodiment shown in FIG. 6, including a packet control sequencer and a fly-by sequencer.
- FIG. 8 is a diagram of the fly-by sequencer of FIG. 7 for analyzing header bytes as they are received by the INIC.
- FIG. 9 is a diagram of information flow for a second embodiment of an INIC associated with a server having a TCP/IP processing stack.
- FIG. 10 is a diagram of a command driver installed in the host of FIG. 9 for creating and controlling a communication control block for the fast-path.
- FIG. 11 is a diagram of the TCP/IP stack and command driver of FIG. 10 configured for NetBios communications.
- FIG. 12 is a diagram of a communication exchange between the client of FIG. 6 and the server of FIG. 9.
- FIG. 13 is a diagram of hardware functions included in the INIC of FIG. 9.
- FIG. 14 is a diagram of a trio of pipelined microprocessors included in the INIC of FIG. 13, including three phases with a processor in each phase.
- FIG. 15A is a diagram of a first phase of the pipelined microprocessor of FIG. 14.
- FIG. 15B is a diagram of a second phase of the pipelined microprocessor of FIG. 14.
- FIG. 15C is a diagram of a third phase of the pipelined microprocessor of FIG. 14.
- FIGS. 16-99 are associated with the description below entitled “Disclosure From Provisional Application 60/061,809”.
- FIG. 1 shows a host 20 of the present invention connected by a network 25 to a remote host 22. The increase in processing speed achieved by the present invention can be provided with an intelligent network interface card (INIC) that is easily and affordably added to an existing host, or with a communication processing device (CPD) that is integrated into a host, in either case freeing the host CPU from most protocol processing and allowing improvements in other tasks performed by that CPU. The host 20 in a first embodiment contains a CPU 28 and a CPD 30 connected by a PCI bus 33. The CPD 30 includes a microprocessor designed for processing communication data and memory buffers controlled by a direct memory access (DMA) unit. Also connected to the PCI bus 33 is a storage device 35, such as a semiconductor memory or disk drive, along with any related controls.
- Referring additionally to FIG. 2, the host CPU 28 controls a protocol processing stack 44 housed in storage 35, the stack including a data link layer 36, network layer 38, transport layer 40, upper layer 46 and an upper layer interface 42. The upper layer 46 may represent a session, presentation and/or application layer, depending upon the particular protocol being employed and message communicated. The upper layer interface 42, along with the CPU 28 and any related controls, can send or retrieve a file to or from the upper layer 46 or storage 35, as shown by arrow 48. A connection context 50 has been created, as will be explained below, the context summarizing various features of the connection, such as protocol type and source and destination addresses for each protocol layer. The context may be passed between an interface for the session layer 42 and the CPD 30, as shown by arrows, and stored as a communication control block (CCB) at either the CPD 30 or storage 35.
- When the CPD 30 holds a CCB defining a particular connection, data received by the CPD from the network and pertaining to the connection is referenced to that CCB and can then be sent directly to storage 35 according to a fast-path 58, bypassing sequential protocol processing by the data link 36, network 38 and transport 40 layers. Transmitting a message, such as sending a file from storage 35 to remote host 22, can also occur via the fast-path 58, in which case the context for the file data is added by the CPD 30 referencing a CCB, rather than by sequentially adding headers during processing by the transport 40, network 38 and data link 36 layers. The DMA controllers of the CPD 30 perform these transfers between CPD and storage 35.
- The CPD 30 collapses multiple protocol stacks each having possible separate states into a single state machine for fast-path processing. As a result, exception conditions may occur that are not provided for in the single state machine, primarily because such conditions occur infrequently and to deal with them on the CPD would provide little or no performance benefit to the host. Such exceptions can be CPD 30 or CPU 28 initiated. An advantage of the invention includes the manner in which unexpected situations that occur on a fast-path CCB are handled. The CPD 30 deals with these rare situations by passing back or flushing to the host protocol stack 44 the CCB and any associated message frames involved, via a control negotiation. The exception condition is then processed in a conventional manner by the host protocol stack 44. At some later time, usually directly after the handling of the exception condition has completed and fast-path processing can resume, the host stack 44 hands the CCB back to the CPD.
- This fallback capability enables the performance-impacting functions of the host protocols to be handled by the CPD network microprocessor, while the exceptions are dealt with by the host stacks, the exceptions being so rare as to negligibly affect overall performance. The custom designed network microprocessor can have independent processors for transmitting and receiving network information, and further processors for assisting and queuing. A preferred microprocessor embodiment includes a pipelined trio of receive, transmit and utility processors. DMA controllers are integrated into the implementation and work in close concert with the network microprocessor to quickly move data between buffers adjacent the controllers and other locations such as long term storage. Providing buffers logically adjacent to the DMA controllers avoids unnecessary loads on the PCI bus.
- FIG. 3 diagrams the general flow of messages received according to the current invention. A large TCP/IP message such as a file transfer may be received by the host from the network in a number of separate, approximately 64 KB transfers, each of which may be split into many, approximately 1.5 KB frames or packets for transmission over a network. Novel NetWare protocol suites running Sequenced Packet Exchange Protocol (SPX) or NetWare Core Protocol (NCP) over Internetwork Packet Exchange (IPX) work in a similar fashion. Another form of data communication which can be handled by the fast-path is Transaction TCP (hereinafter T/TCP or TTCP), a version of TCP which initiates a connection with an initial transaction request after which a reply containing data may be sent according to the connection, rather than initiating a connection via a several-message initialization dialogue and then transferring data with later messages. In any of the transfers typified by these protocols, each packet conventionally includes a portion of the data being transferred, as well as headers for each of the protocol layers and markers for positioning the packet relative to the rest of the packets of this message.
- When a message packet or frame is received47 from a network by the CPD, it is first validated by a hardware assist. This includes determining the protocol types of the various layers, verifying relevant checksums, and summarizing 57 these findings into a status word or words. Included in these words is an indication whether or not the frame is a candidate for fast-path data flow.
Selection 59 of fast-path candidates is based on whether the host may benefit from this message connection being handled by the CPD, which includes determining whether the packet has header bytes denoting particular protocols, such as TCP/IP or SPX/IPX for example. The small percent of frames that are not fast-path candidates are sent 61 to the host protocol stacks for slow-path protocol processing. Subsequent network microprocessor work with each fast-path candidate determines whether a fast-path connection such as a TCP or SPX CCB is already extant for that candidate, or whether that candidate may be used to set up a new fast-path connection, such as for a TTCP/IP transaction. The validation provided by the CPD provides acceleration whether a frame is processed by the fast-path or a slow-path, as only error free, validated frames are processed by the host CPU even for the slow-path processing. - All received message frames which have been determined by the CPD hardware assist to be fast-path candidates are examined53 by the network microprocessor or INIC comparitor circuits to determine whether they match a CCB held by the CPD. Upon confirming such a match, the CPD removes lower layer headers and sends 69 the remaining application data from the frame directly into its final destination in the host using direct memory access (DMA) units of the CPD. This operation may occur immediately upon receipt of a message packet, for example when a TCP connection already exists and destination buffers have been negotiated, or it may first be necessary to process an initial header to acquire a new set of final destination addresses for this transfer. In this latter case, the CPD will queue subsequent message packets while waiting for the destination address, and then DMA the queued application data to that destination.
- A fast-path candidate that does not match a CCB may be used to set up a new fast-path connection, by sending65 the frame to the host for sequential protocol processing. In this case, the host uses this frame to create 51 a CCB, which is then passed to the CPD to control subsequent frames on that connection. The CCB, which is cached 67 in the CPD, includes control and state information pertinent to all protocols that would have been processed had conventional software layer processing been employed. The CCB also contains storage space for per-transfer information used to facilitate moving application-level data contained within subsequent related message packets directly to a host application in a form available for immediate usage. The CPD takes command of connection processing upon receiving a CCB for that connection from the host.
- As shown more specifically in FIG. 4A, when a message packet is received from the
remote host 22 vianetwork 25, the packet enters hardware receivelogic 32 of theCPD 30, which checksums headers and data, and parses the headers, creating a word or words which identify the message packet and status, storing the headers, data and word temporarily inmemory 60. As well as validating the packet, the receivelogic 32 indicates with the word whether this packet is a candidate for fast-path processing. FIG. 4A depicts the case in which the packet is not a fast-path candidate, in which case theCPD 30 sends the validated headers and data frommemory 60 todata link layer 36 along an internal bus for processing by the host CPU, as shown byarrow 56. The packet is processed by thehost protocol stack 44 ofdata link 36,network 38,transport 40 andsession 42 layers, and data (D) 63 from the packet may then be sent tostorage 35, as shown byarrow 65. - FIG. 4B, depicts the case in which the receive
logic 32 of the CPD determines that a message packet is a candidate for fast-path processing, for example by deriving from the packet's headers that the packet belongs to a TCP/IP, TTCP/IP or SPX/IPX message. Aprocessor 55 in theCPD 30 then checks to see whether the word that summarizes the fast-path candidate matches a CCB held in acache 62. Upon finding no match for this packet, the CPD sends the validated packet frommemory 60 to thehost protocol stack 44 for processing.Host stack 44 may use this packet to create a connection context for the message, including finding and reserving a destination for data from the message associated with the packet, the context taking the form of a CCB. The present embodiment employs a singlespecialized host stack 44 for processing both fast-path and non-fast-path candidates, while in an embodiment described below fast-path candidates are processed by a different host stack than non-fast-path candidates. Some data (D1) 66 from that initial packet may optionally be sent to the destination instorage 35, as shown byarrow 68. The CCB is then sent to theCPD 30 to be saved incache 62, as shown byarrow 64. For a traditional connection-based message such as typified by TCP/IP, the initial packet may be part of a connection initialization dialogue that transpires between hosts before the CCB is created and passed to theCPD 30. - Referring now to FIG. 4C, when a subsequent packet from the same connection as the initial packet is received from the
network 25 byCPD 30, the packet headers and data are validated by the receivelogic 32, and the headers are parsed to create a summary of the message packet and a hash for finding a corresponding CCB, the summary and hash contained in a word or words. The word or words are temporarily stored inmemory 60 along with the packet. Theprocessor 55 checks for a match between the hash and each CCB that is stored in thecache 62 and, finding a match, sends the data (D2) 70 via a fast-path directly to the destination instorage 35, as shown byarrow 72, bypassing thesession layer 42,transport layer 40,network layer 38 anddata link layer 36. The remaining data packets from the message can also be sent by DMA directly to storage, avoiding the relatively slow protocol layer processing and repeated copying by theCPU stack 44. - FIG. 4D shows the procedure for handling the rare instance when a message for which a fast-path connection has been established, such as shown in FIG. 4C, has a packet that is not easily handled by the CPD. In this case the packet is sent to be processed by the
protocol stack 44, which is handed the CCB for that message fromcache 62 via a control dialogue with the CPD, as shown byarrow 76, signaling to the CPU to take over processing of that message. Slow-path processing by the protocol stack then results in data (D3) 80 from the packet being sent, as shown byarrow 82, tostorage 35. Once the packet has been processed and the error situation corrected, the CCB can be handed back via a control dialogue to thecache 62, so that payload data from subsequent packets of that message can again be sent via the fast-path of theCPD 30. Thus the CPU and CPD together decide whether a given message is to be processed according to fast-path hardware processing or more conventional software processing by the CPU. - Transmission of a message from the
host 20 to the network 25 for delivery to remote host 22 also can be processed by either sequential protocol software processing via the CPU or accelerated hardware processing via the CPD 30, as shown in FIG. 5. A message (M) 90 that is selected by CPU 28 from storage 35 can be sent to session layer 42 for processing by stack 44, as shown by the arrows in FIG. 5. When the CPD 30 already has an appropriate CCB for the message, however, data packets can bypass host stack 44 and be sent by DMA directly to memory 60, with the processor 55 adding to each data packet a single header containing all the appropriate protocol layers, and sending the resulting packets to the network 25 for transmission to remote host 22. This fast-path transmission can greatly accelerate processing for even a single packet, with the acceleration multiplied for a larger message.
- A message for which a fast-path connection is not extant thus may benefit from creation of a CCB with appropriate control and state information for guiding fast-path transmission. For a traditional connection-based message, such as typified by TCP/IP or SPX/IPX, the CCB is created during the connection initialization dialogue. For a quick-connection message, such as typified by TTCP/IP, the CCB can be created with the same transaction that transmits payload data. In this case, the transmission of payload data may be a reply to a request that was used to set up the fast-path connection. In any case, the CCB provides protocol and status information regarding each of the protocol layers, including which user is involved and storage space for per-transfer information. The CCB is created by
protocol stack 44, which then passes the CCB to the CPD 30 by writing to a command register of the CPD, as shown by arrow 98. Guided by the CCB, the processor 55 moves network frame-sized portions of the data from the source in host memory 35 into its own memory 60 using DMA, as depicted by arrow 99. The processor 55 then prepends appropriate headers and checksums to the data portions, and transmits the resulting frames to the network 25, consistent with the restrictions of the associated protocols. After the CPD 30 has received an acknowledgement that all the data has reached its destination, the CPD will then notify the host 35 by writing to a response buffer.
- Thus, fast-path transmission of data communications also relieves the host CPU of per-frame processing. A vast majority of data transmissions can be sent to the network by the fast-path. Both the input and output fast-paths attain a huge reduction in interrupts by functioning at an upper layer level, i.e., session level or higher, and interactions between the network microprocessor and the host occur using the full transfer sizes which that upper layer wishes to make. For fast-path communications, an interrupt only occurs (at the most) at the beginning and end of an entire upper-layer message transaction, and there are no interrupts for the sending or receiving of each lower layer portion or packet of that transaction.
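- As a concrete illustration of the transmit handoff just described, the following C sketch shows how a host driver might pass a whole transfer to the device with a single command once the CCB has been cached there. The structure, register and field names are invented for this example; they are not the actual CPD or INIC interface.

/* Minimal sketch of a fast-path transmit handoff: the CCB already resides on
 * the device, so one command describing the entire message replaces per-frame
 * protocol work by the CPU. All names are hypothetical. */
#include <stdint.h>

struct cpd_command {
    uint32_t ccb_index;   /* which cached connection context to use */
    uint64_t data_addr;   /* host address of the message data */
    uint32_t data_len;    /* size of the entire upper-layer transfer */
};

extern volatile struct cpd_command *cpd_command_register;   /* assumed MMIO window */

void fast_path_transmit(uint32_t ccb_index, uint64_t data_addr, uint32_t data_len)
{
    cpd_command_register->ccb_index = ccb_index;
    cpd_command_register->data_addr = data_addr;
    cpd_command_register->data_len  = data_len;   /* final write starts the send; the device
                                                     segments, prepends headers and interrupts
                                                     only when the whole transfer completes */
}

Under this model the host sees at most one interrupt per upper-layer transaction rather than one per frame, which is the source of the interrupt reduction claimed above.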
- A simplified intelligent network interface card (INIC)150 is shown in FIG. 6 to provide a network interface for a
host 152.Hardware logic 171 of theINIC 150 is connected to anetwork 155, with a peripheral bus (PCI) 157 connecting the INIC and host. Thehost 152 in this embodiment has a TCP/IP protocol stack, which provides a slow-path 158 for sequential software processing of message frames received from thenetwork 155. Thehost 152 protocol stack includes adata link layer 160,network layer 162, atransport layer 164 and anapplication layer 166, which provides a source ordestination 168 for the communication data in thehost 152. Other layers which are not shown, such as session and presentation layers, may also be included in thehost stack 152, and the source or destination may vary depending upon the nature of the data and may actually be the application layer. - The
INIC 150 has anetwork processor 170 which chooses between processing messages along a slow-path 158 that includes the protocol stack of the host, or along a fast-path 159 that bypasses the protocol stack of the host. Each received packet is processed on the fly byhardware logic 171 contained inINIC 150, so that all of the protocol headers for a packet can be processed without copying, moving or storing the data between protocol layers. Thehardware logic 171 processes the headers of a given packet at one time as packet bytes pass through the hardware, by categorizing selected header bytes. Results of processing the selected bytes help to determine which other bytes of the packet are categorized, until a summary of the packet has been created, including checksum validations. The processed headers and data from the received packet are then stored inINIC storage 185, as well as the word or words summarizing the headers and status of the packet. - The hardware processing of message packets received by
INIC 150 fromnetwork 155 is shown in more detail in FIG. 7. A received message packet first enters amedia access controller 172, which controls INIC access to the network and receipt of packets and can provide statistical information for network protocol management. From there, data flows one byte at a time into anassembly register 174, which in this example is 128 bits wide. The data is categorized by a fly-bysequencer 178, as will be explained in more detail with regard to FIG. 8, which examines the bytes of a packet as they fly by, and generates status from those bytes that will be used to summarize the packet. The status thus created is merged with the data by amultiplexor 180 and the resulting data stored inSRAM 182. Apacket control sequencer 176 oversees the fly-bysequencer 178, examines information from themedia access controller 172, counts the bytes of data, generates addresses, moves status and manages the movement of data from theassembly register 174 toSRAM 182 and eventuallyDRAM 188. Thepacket control sequencer 176 manages a buffer inSRAM 182 viaSRAM controller 183, and also indicates to aDRAM controller 186 when data needs to be moved fromSRAM 182 to a buffer inDRAM 188. Once data movement for the packet has been completed and all the data has been moved to the buffer inDRAM 188, thepacket control sequencer 176 will move the status that has been generated in the fly-bysequencer 178 out to theSRAM 182 and to the beginning of theDRAM 188 buffer to be prepended to the packet data. Thepacket control sequencer 176 then requests aqueue manager 184 to enter a receive buffer descriptor into a receive queue, which in turn notifies theprocessor 170 that the packet has been processed byhardware logic 171 and its status summarized. - FIG. 8 shows that the fly-by
sequencer 178 has several tiers, with each tier generally focusing on a particular portion of the packet header and thus on a particular protocol layer, for generating status pertaining to that layer. The fly-by sequencer 178 in this embodiment includes a media access control sequencer 191, a network sequencer 192, a transport sequencer 194 and a session sequencer 195. Sequencers pertaining to higher protocol layers can additionally be provided. The fly-by sequencer 178 is reset by the packet control sequencer 176 and given pointers by the packet control sequencer that tell the fly-by sequencer whether a given byte is available from the assembly register 174. The media access control sequencer 191 determines, by looking at bytes 0-5, that a packet is addressed to host 152 rather than or in addition to another host. The bytes at offsets 12 and 13 of the packet are then processed by the media access control sequencer 191 to determine the type field, for example whether the packet is Ethernet or 802.3. If the type field is Ethernet those bytes also tell the media access control sequencer 191 the packet's network protocol type. For the 802.3 case, those bytes instead indicate the length of the entire frame, and the media access control sequencer 191 will check eight bytes further into the packet to determine the network layer type.
- For most packets the network sequencer 192 validates that the received network layer header has the correct length, and checksums the network layer header. For fast-path candidates the network layer header is known to be IP or IPX from analysis done by the media access control sequencer 191. Assuming for example that the type field is 802.3 and the network protocol is IP, the network sequencer 192 analyzes the first bytes of the network layer header, which will begin at byte 22, in order to determine IP type. The first bytes of the IP header will be processed by the network sequencer 192 to determine what IP type the packet involves. Determining that the packet involves, for example, IP version 4, directs further processing by the network sequencer 192, which also looks at the protocol type located ten bytes into the IP header for an indication of the transport header protocol of the packet. For example, for IP over Ethernet, the IP header begins at offset 14, and the protocol type byte is at offset 23, which will be processed by network logic to determine whether the transport layer protocol is TCP, for example. From the length of the network layer header, which is typically 20-40 bytes, the network sequencer 192 determines the beginning of the packet's transport layer header for validating the transport layer header. Transport sequencer 194 may generate checksums for the transport layer header and data, which may include information from the IP header in the case of TCP at least.
- Continuing with the example of a TCP packet, transport sequencer 194 also analyzes the first few bytes in the transport layer portion of the header to determine, in part, the TCP source and destination ports for the message, such as whether the packet is NetBios or other protocols. Byte 12 of the TCP header is processed by the transport sequencer 194 to determine and validate the TCP header length. Byte 13 of the TCP header contains flags that may, aside from ack flags and push flags, indicate unexpected options, such as reset and fin, that may cause the processor to categorize this packet as an exception. TCP offset bytes 16 and 17 contain the checksum, which is pulled out and stored by the hardware logic 171 while the rest of the frame is validated against the checksum.
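- The byte positions cited above can be restated in software form. The C fragment below is only a rough analogue of what the fly-by hardware classifies on the fly, written for an Ethernet II frame carrying IPv4 and TCP; the structure and function names are invented for illustration and are not part of the INIC design.

/* Software analogue of the header offsets described in the text:
 * Ethernet type at frame bytes 12-13, IP protocol ten bytes into the IP
 * header, TCP header length and flags at TCP bytes 12-13, checksum at 16-17. */
#include <stdint.h>
#include <stddef.h>

struct parse_summary {
    uint16_t eth_type;     /* frame bytes 12-13 */
    uint8_t  ip_proto;     /* IP header byte 9 (frame byte 23 over Ethernet) */
    uint16_t src_port, dst_port;
    uint8_t  tcp_hdr_len;  /* upper nibble of TCP byte 12, in 32-bit words */
    uint8_t  tcp_flags;    /* TCP byte 13 */
    uint16_t tcp_csum;     /* TCP bytes 16-17 */
};

int parse_frame(const uint8_t *f, size_t len, struct parse_summary *s)
{
    if (len < 14 + 20 + 20)
        return -1;                               /* too short for Ethernet/IPv4/TCP */
    s->eth_type = (uint16_t)((f[12] << 8) | f[13]);
    if (s->eth_type != 0x0800)
        return -1;                               /* not IP over Ethernet II */
    const uint8_t *ip = f + 14;                  /* IP header begins at offset 14 */
    size_t ip_hlen = (size_t)(ip[0] & 0x0f) * 4; /* typically 20-40 bytes */
    s->ip_proto = ip[9];
    if (s->ip_proto != 6)
        return -1;                               /* not TCP */
    const uint8_t *tcp = ip + ip_hlen;           /* start of the transport header */
    s->src_port    = (uint16_t)((tcp[0] << 8) | tcp[1]);
    s->dst_port    = (uint16_t)((tcp[2] << 8) | tcp[3]);
    s->tcp_hdr_len = tcp[12] >> 4;
    s->tcp_flags   = tcp[13];
    s->tcp_csum    = (uint16_t)((tcp[16] << 8) | tcp[17]);
    return 0;
}

-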
Session sequencer 195 determines the length of the session layer header, which in the case of NetBios is only four bytes, two of which tell the length of the NetBios payload data, but which can be much larger for other protocols. Thesession sequencer 195 can also be used to categorize the type of message as read or write, for example, for which the fast-path may be particularly beneficial. Further upper layer logic processing, depending upon the message type, can be performed by thehardware logic 171 ofpacket control sequencer 176 and fly-bysequencer 178. Thushardware logic 171 intelligently directs hardware processing of the headers by categorization of selected bytes from a single stream of bytes, with the status of the packet being built from classifications determined on the fly. Once thepacket control sequencer 176 detects that all of the packet has been processed by the fly-bysequencer 178, thepacket control sequencer 176 adds the status information generated by the fly-bysequencer 178 and any status information generated by thepacket control sequencer 176, and prepends (adds to the front) that status information to the packet, for convenience in handling the packet by theprocessor 170. The additional status information generated by thepacket control sequencer 176 includesmedia access controller 172 status information and any errors discovered, or data overflow in either the assembly register or DRAM buffer, or other miscellaneous information regarding the packet. Thepacket control sequencer 176 also stores entries into a receive buffer queue and a receive statistics queue via thequeue manager 184. - An advantage of processing a packet by
hardware logic 171 is that the packet does not, in contrast with conventional sequential software protocol processing, have to be stored, moved, copied or pulled from storage for processing each protocol layer header, offering dramatic increases in processing efficiency and savings in processing time for each packet. The packets can be processed at the rate bits are received from the network, for example 100 megabits/second for a 100BaseT connection. The time for categorizing a packet received at this rate and having a length of sixty bytes is thus about 5 microseconds. The total time for processing this packet with the hardware logic 171 and sending packet data to its host destination via the fast-path may be about 16 microseconds or less, assuming a 66 MHz PCI bus, whereas conventional software protocol processing by a 300 MHz Pentium II® processor may take as much as 200 microseconds in a busy system. More than an order of magnitude decrease in processing time can thus be achieved with fast-path 159 in comparison with a high-speed CPU employing conventional sequential software protocol processing, demonstrating the dramatic acceleration provided by processing the protocol headers by the hardware logic 171 and processor 170, without even considering the additional time savings afforded by the reduction in CPU interrupts and host bus bandwidth savings.
- The
processor 170 chooses, for each received message packet held instorage 185, whether that packet is a candidate for the fast-path 159 and, if so, checks to see whether a fast-path has already been set up for the connection that the packet belongs to. To do this, theprocessor 170 first checks the header status summary to determine whether the packet headers are of a protocol defined for fast-path candidates. If not, theprocessor 170 commands DMA controllers in theINIC 150 to send the packet to the host for slow-path 158 processing. Even for a slow-path 158 processing of a message, theINIC 150 thus performs initial procedures such as validation and determination of message type, and passes the validated message at least to thedata link layer 160 of the host. - For fast-
path 159 candidates, theprocessor 170 checks to see whether the header status summary matches a CCB held by the INIC. If so, the data from the packet is sent along fast-path 159 to thedestination 168 in the host. If the fast-path 159 candidate's packet summary does not match a CCB held by the INIC, the packet may be sent to thehost 152 for slow-path processing to create a CCB for the message. Employment of the fast-path 159 may also not be needed or desirable for the case of fragmented messages or other complexities. For the vast majority of messages, however, the INIC fast-path 159 can greatly accelerate message processing. TheINIC 150 thus provides a singlestate machine processor 170 that decides whether to send data directly to its destination, based upon information gleaned on the fly, as opposed to the conventional employment of a state machine in each of several protocol layers for determining the destiny of a given packet. - In processing an indication or packet received at the
host 152, a protocol driver of the host selects the processing route based upon whether the indication is fast-path or slow-path. A TCP/IP or SPX/IPX message has a connection that is set up from which a CCB is formed by the driver and passed to the INIC for matching with and guiding the fast-path packet to the connection destination 168. For a TTCP/IP message, the driver can create a connection context for the transaction from processing an initial request packet, including locating the message destination 168, and then passing that context to the INIC in the form of a CCB for providing a fast-path for a reply from that destination. A CCB includes connection and state information regarding the protocol layers and packets of the message. Thus a CCB can include source and destination media access control (MAC) addresses, source and destination IP or IPX addresses, source and destination TCP or SPX ports, TCP variables such as timers, receive and transmit windows for sliding window protocols, and information denoting the session layer protocol.
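- The list of CCB contents above can be visualized with a short C sketch. The field names and sizes below are invented for illustration only and do not represent the actual CCB layout used by the CPD or INIC.

/* Illustrative sketch of the kind of state a CCB carries, per the
 * description above; the layout is hypothetical. */
#include <stdint.h>

struct ccb {
    /* connection identity */
    uint8_t  src_mac[6], dst_mac[6];   /* media access control addresses */
    uint32_t src_ip, dst_ip;           /* IP (or IPX) addresses */
    uint16_t src_port, dst_port;       /* TCP (or SPX) ports */

    /* transport state for the sliding window protocol */
    uint32_t snd_nxt, snd_una;         /* next send / oldest unacknowledged */
    uint32_t rcv_nxt;                  /* next expected receive sequence */
    uint16_t snd_wnd, rcv_wnd;         /* transmit and receive windows */
    uint32_t rto_timer;                /* retransmission timer state */

    /* upper layer and per-transfer information */
    uint8_t  session_proto;            /* which session layer protocol and user */
    uint64_t dest_addr;                /* host destination for incoming data */
    uint32_t dest_remaining;           /* space left for the current transfer */
};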
- Caching the CCBs in a hash table in the INIC provides quick comparisons with words summarizing incoming packets to determine whether the packets can be processed via the fast-path 159, while the full CCBs are also held in the INIC for processing. Other ways to accelerate this comparison include software processes such as a B-tree or hardware assists such as a content addressable memory (CAM). When INIC microcode or comparator circuits detect a match with the CCB, a DMA controller places the data from the packet in the destination 168, without any interrupt by the CPU, protocol processing or copying. Depending upon the type of message received, the destination of the data may be the session, presentation or application layers, or a file buffer cache in the host 152.
- FIG. 9 shows an INIC 200 connected to a host 202 that is employed as a file server. This INIC provides a network interface for several network connections employing the 802.3u standard, commonly known as Fast Ethernet. The INIC 200 is connected by a PCI bus 205 to the server 202, which maintains a TCP/IP or SPX/IPX protocol stack including MAC layer 212, network layer 215, transport layer 217 and application layer 220, with a source/destination 222 shown above the application layer, although as mentioned earlier the application layer can be the source or destination. The INIC is also connected to network lines 210, 240, 242 and 244. Network line 210 is connected with a first horizontal row of sequencers 250, line 240 is connected with a second horizontal row of sequencers 260, line 242 is connected with a third horizontal row of sequencers 262 and line 244 is connected with a fourth horizontal row of sequencers 264. After a packet has been validated and summarized by one of the horizontal hardware rows it is stored along with its status summary in storage 270.
- A network processor 230 determines, based on that summary and a comparison with any CCBs stored in the INIC 200, whether to send a packet along a slow-path 231 for processing by the host. A large majority of packets can avoid such sequential processing and have their data portions sent by DMA along a fast-path 237 directly to the data destination 222 in the server according to a matching CCB. Similarly, the fast-path 237 provides an avenue to send data directly from the source 222 to any of the network lines by processor 230 division of the data into packets and addition of full headers for network transmission, again minimizing CPU processing and interrupts. For clarity only horizontal sequencer 250 is shown active; in actuality each of the sequencer rows 250, 260, 262 and 264 can be similarly active. The specialized INIC 200 is much faster at working with message packets than even an advanced general-purpose host CPU that processes those headers sequentially according to the software protocol stack.
- One of the most commonly used network protocols for large messages such as file transfers is server message block (SMB) over TCP/IP. SMB can operate in conjunction with redirector software that determines whether a required resource for a particular operation, such as a printer or a disk upon which a file is to be written, resides in or is associated with the host from which the operation was generated or is located at another host connected to the network, such as a file server. SMB and server/redirector are conventionally serviced by the transport layer; in the present invention SMB and redirector can instead be serviced by the INIC. In this case, sending data by the DMA controllers from the INIC buffers when receiving a large SMB transaction may greatly reduce interrupts that the host must handle. Moreover, this DMA generally moves the data to its final destination in the file system cache. An SMB transmission of the present invention follows essentially the reverse of the above described SMB receive, with data transferred from the host to the INIC and stored in buffers, while the associated protocol headers are prepended to the data in the INIC, for transmission via a network line to a remote host. Processing by the INIC of the multiple packets and multiple TCP, IP, NetBios and SMB protocol layers via custom hardware and without repeated interrupts of the host can greatly increase the speed of transmitting an SMB message to a network line.
- As shown in FIG. 10, for controlling whether a given message is processed by the
host 202 or by theINIC 200, amessage command driver 300 may be installed inhost 202 to work in concert with ahost protocol stack 310. Thecommand driver 300 can intervene in message reception or transmittal, create CCBs and send or receive CCBs from theINIC 200, so that functioning of the INIC, aside from improved performance, is transparent to a user. Also shown is an INIC memory 304 and anINIC miniport driver 306, which can direct message packets received fromnetwork 210 to either theconventional protocol stack 310 or thecommand protocol stack 300, depending upon whether a packet has been labeled as a fast-path candidate. Theconventional protocol stack 310 has adata link layer 312, anetwork layer 314 and atransport layer 316 for conventional, lower layer processing of messages that are not labeled as fast-path candidates and therefore not processed by thecommand stack 300. Residing above thelower layer stack 310 is anupper layer 318, which represents a session, presentation and/or application layer, depending upon the message communicated. Thecommand driver 300 similarly has adata link layer 320, anetwork layer 322 and atransport layer 325. - The
driver 300 includes an upper layer interface 330 that determines, for transmission of messages to the network 210, whether a message transmitted from the upper layer 318 is to be processed by the command stack 300 and subsequently the INIC fast-path, or by the conventional stack 310. When the upper layer interface 330 receives an appropriate message from the upper layer 318 that would conventionally be intended for transmission to the network after protocol processing by the protocol stack of the host, the message is passed to driver 300. The INIC then acquires network-sized portions of the message data for that transmission via INIC DMA units, prepends headers to the data portions and sends the resulting message packets down the wire. Conversely, in receiving a TCP, TTCP, SPX or similar message packet from the network 210 to be used in setting up a fast-path connection, miniport driver 306 diverts that message packet to command driver 300 for processing. The driver 300 processes the message packet to create a context for that message, with the driver 300 passing the context and command instructions back to the INIC 200 as a CCB for sending data of subsequent messages for the same connection along a fast-path. Hundreds of TCP, TTCP, SPX or similar CCB connections may be held indefinitely by the INIC, although a least recently used (LRU) algorithm is employed for the case when the INIC cache is full. The driver 300 can also create a connection context for a TTCP request which is passed to the INIC 200 as a CCB, allowing fast-path transmission of a TTCP reply to the request. A message having a protocol that is not accelerated can be processed conventionally by protocol stack 310.
- FIG. 11 shows a TCP/IP implementation of command driver software for Microsoft® protocol messages. A conventional
host protocol stack 350 includesMAC layer 353,IP layer 355 andTCP layer 358. Acommand driver 360 works in concert with thehost stack 350 to process network messages. Thecommand driver 360 includes aMAC layer 363, anIP layer 366 and an Alacritech TCP (ATCP) layer 373. Theconventional stack 350 andcommand driver 360 share a network driver interface specification (NDIS)layer 375, which interacts with theINIC miniport driver 306. TheINIC miniport driver 306 sorts receive indications for processing by either theconventional host stack 350 or theATCP driver 360. A TDI filter driver andupper layer interface 380 similarly determines whether messages sent from aTDI user 382 to the network are diverted to the command driver and perhaps to the fast-path of the INIC, or processed by the host stack. - FIG. 12 depicts a typical SMB exchange between a
client 190 andserver 290, both of which have communication devices of the present invention, the communication devices each holding a CCB defining their connection for fast-path movement of data. Theclient 190 includesINIC 150, 802.3 compliantdata link layer 160,IP layer 162,TCP layer 164,NetBios layer 166, andSMB layer 168. The client has a slow-path 157 and fast-path 159 for communication processing. Similarly, theserver 290 includesINIC 200, 802.3 compliantdata link layer 212,IP layer 215,TCP layer 217,NetBios layer 220, andSMB 222. The server is connected to networklines line 210 which is connected toclient 190. The server also has a slow-path 231 and fast-path 237 for communication processing. - Assuming that the
client 190 wishes to read a 100 KB file on the server 290, the client may begin by sending a Read Block Raw (RBR) SMB command across network 210 requesting the first 64 KB of that file on the server 290. The RBR command may be only 76 bytes, for example, so the INIC 200 on the server will recognize the message type (SMB) and relatively small message size, and send the 76 bytes directly via the fast-path to NetBios of the server. NetBios will give the data to SMB, which processes the Read request and fetches the 64 KB of data into server data buffers. SMB then calls NetBios to send the data, and NetBios outputs the data for the client. In a conventional host, NetBios would call TCP output and pass 64 KB to TCP, which would divide the data into 1460-byte segments and output each segment via IP and eventually MAC (slow-path 231). In the present case, the 64 KB data goes to the ATCP driver along with an indication regarding the client-server SMB connection, which denotes a CCB held by the INIC. The INIC 200 then proceeds to DMA 1460-byte segments from the host buffers, add the appropriate headers for TCP, IP and MAC at one time, and send the completed packets on the network 210 (fast-path 237). The INIC 200 will repeat this until the whole 64 KB transfer has been sent. Usually after receiving acknowledgement from the client that the 64 KB has been received, the INIC will then send the remaining 36 KB also by the fast-path 237.
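- A short worked example makes the fast-path arithmetic concrete: 64 KB handed to the INIC is carved into 1460-byte TCP payloads, so 44 full segments plus one 1296-byte tail, or 45 frames, are built and sent without any per-frame work by the host CPU. The loop below is an illustrative sketch only; the function names are invented and are not the INIC firmware interface.

/* Device-side sketch of the fast-path send: carve the host buffer into
 * MSS-sized payloads, DMA each one, and emit it with headers prepended. */
#include <stdint.h>
#include <stddef.h>

#define MSS 1460

extern void inic_dma_from_host(uint64_t host_addr, size_t len);     /* assumed */
extern void inic_emit_frame(const void *ccb, size_t payload_len);   /* assumed */

size_t fast_path_send(const void *ccb, uint64_t host_addr, size_t len)
{
    size_t frames = 0;
    while (len > 0) {
        size_t seg = len < MSS ? len : MSS;
        inic_dma_from_host(host_addr, seg);   /* payload pulled by DMA, no CPU copy */
        inic_emit_frame(ccb, seg);            /* MAC, IP and TCP headers added at once */
        host_addr += seg;
        len -= seg;
        frames++;
    }
    return frames;   /* 65536 bytes yields 45 frames: 44 of 1460 bytes plus 1296 bytes */
}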
- With INIC 150 operating on the client 190 when this reply arrives, the INIC 150 recognizes from the first frame received that this connection is receiving fast-path 159 processing (TCP/IP, NetBios, matching a CCB), and the ATCP may use this first frame to acquire buffer space for the message. This latter case is done by passing the first 128 bytes of the NetBios portion of the frame via the ATCP fast-path directly to the host NetBios; that will give NetBios/SMB all of the frame's headers. NetBios/SMB will analyze these headers, realize by matching with a request ID that this is a reply to the original RawRead connection, and give the ATCP a 64K list of buffers into which to place the data. At this stage only one frame has arrived, although more may arrive while this processing is occurring. As soon as the client buffer list is given to the ATCP, it passes that transfer information to the INIC 150, and the INIC 150 starts DMAing any frame data that has accumulated into those buffers.
- FIG. 13 provides a simplified diagram of the INIC 200, which combines the functions of a network interface controller and a protocol processor in a single ASIC chip 400. The INIC 200 in this embodiment offers a full-duplex, four channel, 10/100-Megabit per second (Mbps) intelligent network interface controller that is designed for high speed protocol processing for server applications. Although designed specifically for server applications, the INIC 200 can be connected to personal computers, workstations, routers or other hosts anywhere that TCP/IP, TTCP/IP or SPX/IPX protocols are being utilized.
- The INIC 200 is connected with four network lines 210, 240, 242 and 244. The connection between those network lines and the INIC 200 is controlled by MAC units MAC-A 402, MAC-B 404, MAC-C 406 and MAC-D 408, which contain logic circuits for performing the basic functions of the MAC sublayer, essentially controlling when the INIC accesses the network lines.
- The MAC units are each coupled to a corresponding transmit and receive sequencer, XMT & RCV-A 418, XMT & RCV-B 420, XMT & RCV-C 422 and XMT & RCV-D 424, and the sequencers are in turn connected to an SRAM and DMA controller 444, which includes DMA controllers 438 and SRAM controller 442. Static random access memory (SRAM) buffers 440 are coupled with SRAM controller 442 by line 441. The SRAM and DMA controllers 444 interact across line 446 with external memory control 450 to send and receive frames via external memory bus 455 to and from dynamic random access memory (DRAM) buffers 460, which is located adjacent to the IC chip 400. The DRAM buffers 460 may be configured as 4 MB, 8 MB, 16 MB or 32 MB, and may optionally be disposed on the chip. The SRAM and DMA controllers 444 are connected via line 464 to a PCI Bus Interface Unit (BIU) 468, which manages the interface between the INIC 200 and the PCI interface bus 257. The 64-bit, multiplexed BIU 380 provides a direct interface to the PCI bus 257 for both slave and master functions. The INIC 200 is capable of operating in either a 64-bit or 32-bit PCI environment, while supporting 64-bit addressing in either configuration.
- A microprocessor 470 is connected by line 472 to the SRAM and DMA controllers 444, and connected via line 475 to the PCI BIU 468. Microprocessor 470 instructions and register files reside in an on-chip control store 480, which includes a writable on-chip control store (WCS) of SRAM and a read only memory (ROM), and is connected to the microprocessor by line 477. The microprocessor 470 offers a programmable state machine which is capable of processing incoming frames, processing host commands, directing network traffic and directing PCI bus traffic. Three processors are implemented using shared hardware in a three level pipelined architecture that launches and completes a single instruction for every clock cycle. A receive processor 482 is dedicated to receiving communications while a transmit processor 484 is dedicated to transmitting communications in order to facilitate full duplex communication, while a utility processor 486 offers various functions including overseeing and controlling PCI register access. The instructions for the three processors 482, 484 and 486 reside in the on-chip control store 480.
- The INIC 200 in this embodiment can support up to 256 CCBs which are maintained in a table in the DRAM 460. There is also, however, a CCB index in hash order in the SRAM 440 to save sequential searching. Once a hash has been generated, the CCB is cached in SRAM, with up to sixteen cached CCBs in SRAM in this example. These cache locations are shared between the transmit 484 and receive 482 processors so that the processor with the heavier load is able to use more cache buffers. There are also eight header buffers and eight command buffers to be shared between the sequencers. A given header or command buffer is not statically linked to a specific CCB buffer, as the link is dynamic on a per-frame basis.
- FIG. 14 shows an overview of the pipelined
microprocessor 470, in which instructions for the receive, transmit and utility processors are executed in three distinct phases according to Clock increments I, II and III, the phases corresponding to each of the pipeline stages. Each phase is responsible for different functions, and each of the three processors occupies a different phase during each Clock increment. Each processor usually operates upon a different instruction stream from thecontrol store 480, and each carries its own program counter and status through each of the phases. - In general, a
first instruction phase 500 of the pipelined microprocessors completes an instruction and stores the result in a destination operand, fetches the next instruction, and stores that next instruction in an instruction register. A first register set 490 provides a number of registers including the instruction register, and a set ofcontrols 492 for first register set provides the controls for storage to thefirst register set 490. Some items pass through the first phase without modification by thecontrols 492, and instead are simply copied into the first register set 490 or aRAM file register 533. Asecond instruction phase 560 has an instruction decoder andoperand multiplexer 498 that generally decodes the instruction that was stored in the instruction register of the first register set 490 and gathers any operands which have been generated, which are then stored in a decode register of a second register set 496. The first register set 490, second register set 496 and a third register set 501, which is employed in athird instruction phase 600, include many of the same registers, as will be seen in the more detailed views of FIGS. 14A-C. The instruction decoder andoperand multiplexer 498 can read from two address and data ports of theRAM file register 533, which operates in both thefirst phase 500 andsecond phase 560. Athird phase 600 of theprocessor 470 has an arithmetic logic unit (ALU) 602 which generally performs any ALU operations on the operands from the second register set, storing the results in a results register included in the third register set 501. Astack exchange 608 can reorder register stacks, and aqueue manager 503 can arrange queues for theprocessor 470, the results of which are stored in the third register set. - The instructions continue with the first phase then following the third phase, as depicted by a
circular pipeline 505. Note that various functions have been distributed across the three phases of the instruction execution in order to minimize the combinatorial delays within any given phase. With a frequency in this embodiment of 66 Megahertz, each Clock increment takes 15 nanoseconds to complete, for a total of 45 nanoseconds to complete one instruction for each of the three processors. The instruction phases are depicted in more detail in FIGS. 15A-C, in which each phase is shown in a different figure. - More particularly, FIG. 15A shows some specific hardware functions of the
first phase 500, which generally includes the first register set 490 andrelated controls 492. The controls for the first register set 492 includes anSRAM control 502, which is a logical control for loading address and write data into SRAM address and data registers 520. Thus the output of theALU 602 from thethird phase 600 may be placed bySRAM control 502 into an address register or data register of SRAM address and data registers 520. Aload control 504 similarly provides controls for writing a context for a file to filecontext register 522, and anotherload control 506 provides controls for storing a variety of miscellaneous data to flip-flop registers 525. ALU condition codes, such as whether a carried bit is set, get clocked into ALU condition codes register 528 without an operation performed in thefirst phase 500. Flag decodes 508 can perform various functions, such as setting locks, that get stored in flag registers 530. - The
RAM file register 533 has a single write port for addresses and data and two read ports for addresses and data, so that more than one register can be read from at one time. As noted above, theRAM file register 533 essentially straddles the first and second phases, as it is written in thefirst phase 500 and read from in thesecond phase 560. Acontrol store instruction 510 allows the reprogramming of the processors due to new data in from thecontrol store 480, not shown in this figure, the instructions stored in aninstruction register 535. The address for this is generated in a fetch control register 511, which determines which address to fetch, the address stored in fetchaddress register 538.Load control 515 provides instructions for aprogram counter 540, which operates much like the fetch address for the control store. A last-in first-out stack 544 of three registers is copied to the first register set without undergoing other operations in this phase. Finally, aload control 517 for adebug address 548 is optionally included, which allows correction of errors that may occur. - FIG. 15B depicts the
second microprocessor phase 560, which includes reading addresses and data out of the RAM file register 533. A scratch SRAM 565 is written from SRAM address and data register 520 of the first register set, which includes a register that passes through the first two phases to be incremented in the third. The scratch SRAM 565 is read by the instruction decoder and operand multiplexer 498, as are most of the registers from the first register set, with the exception of the stack 544, debug address 548 and SRAM address and data register mentioned above. The instruction decoder and operand multiplexer 498 looks at the various registers of set 490 and SRAM 565, decodes the instructions and gathers the operands for operation in the next phase, in particular determining the operands to provide to the ALU 602 below. The outcome of the instruction decoder and operand multiplexer 498 is stored to a number of registers in the second register set 496, including ALU operand registers, an ALU condition code register 580, and a queue channel and command 587 register, which in this embodiment can control thirty-two queues. Several of the registers in set 496 are loaded fairly directly from the instruction register 535 above without substantial decoding by the decoder 498, including a program control 590, a literal field 589, a test select 584 and a flag select 585. Other registers such as the file context 522 of the first phase 500 are always stored in a file context 577 of the second phase 560, but may also be treated as an operand that is gathered by the multiplexer 572. The stack registers 544 are simply copied in stack register 594. The program counter 540 is incremented 568 in this phase and stored in register 592. Also incremented 570 is the optional debug address 548, and a load control 575 may be fed from the pipeline 505 at this point in order to allow error control in each phase, the result stored in debug address 598.
- FIG. 15C depicts the third microprocessor phase 600, which includes ALU and queue operations. The ALU 602 includes an adder, priority encoders and other standard logic functions. Results of the ALU are stored in the ALU output 618, ALU condition codes 620 and destination operand results 622 registers. A file context register 616, flag select register 626 and literal field register 630 are simply copied from the previous phase 560. A test multiplexer 604 is provided to determine whether a conditional jump results in a jump, with the results stored in a test results register 624. The function of the test multiplexer 604 may instead be performed in the first phase 500 along with similar decisions such as fetch control 511. A stack exchange 608 shifts a stack up or down by fetching a program counter from stack 594 or putting a program counter onto that stack, the results of which are stored in program control 634, program counter 638 and stack 640 registers. The SRAM address may optionally be incremented in this phase 600. Another load control 610 for another debug address 642 may be forced from the pipeline 505 at this point in order to allow error control in this phase also. A queue RAM and queue ALU 606 reads from the queue channel and command register 587, stores in SRAM and rearranges queues, adding or removing data and pointers as needed to manage the queues of data, sending results to the test multiplexer 604 and a queue flags and queue address register 628. Thus the queue RAM and ALU 606 assumes the duties of managing queues for the three processors, a task conventionally performed sequentially by software on a CPU, the queue manager 606 instead providing accelerated and substantially parallel hardware queuing.
- The above-described system for protocol processing of data communication results in dramatic reductions in the time required for processing large, connection-based messages. Protocol processing speed is tremendously accelerated by specially designed protocol processing hardware as compared with a general purpose CPU running conventional protocol software, and interrupts to the host CPU are also substantially reduced. These advantages can be provided to an existing host by addition of an intelligent network interface card (INIC), or the protocol processing hardware may be integrated with the CPU. In either case, the protocol processing hardware and CPU intelligently decide which device processes a given message, and can change the allocation of that processing based upon conditions of the message.
- Network processing as it exists today is a costly and inefficient use of system resources. A 200 MHz Pentium-Pro is typically consumed simply processing network data from a 100 Mb/second-network connection. The reasons that this processing is so costly are described here.
- 1.1 Too Many Data Moves.
- When a network packet arrives at a typical network interface card (NIC), the NIC moves the data into pre-allocated network buffers in system main memory. From there the data is read into the CPU cache so that it can be checksummed (assuming of course that the protocol in use requires checksums; some, like IPX, do not). Once the data has been fully processed by the protocol stack, it can then be moved into its final destination in memory. Since the CPU is moving the data, and must read the destination cache line in before it can fill it and write it back out, this involves at a minimum 2 more trips across the system memory bus. In short, the best one can hope for is that the data will get moved across the system memory bus 4 times before it arrives in its final destination. It can, and does, get worse. If the data happens to get invalidated from system cache after it has been checksummed, then it must get pulled back across the memory bus before it can be moved to its final destination. Finally, on some systems, including Windows NT 4.0, the data gets copied yet another time while being moved up the protocol stack. In NT 4.0, this occurs between the miniport driver interface and the protocol driver interface. This can add up to a whopping 8 trips across the system memory bus (the 4 trips described above, plus the move to replenish the cache, plus 3 more to copy from the miniport to the protocol driver). That's enough to bring even today's advanced memory busses to their knees.
- In all but the original move from the NIC to system memory, the system CPU is responsible for moving the data. This is particularly expensive because while the CPU is moving this data it can do nothing else. While moving the data the CPU is typically stalled waiting for the relatively slow memory to satisfy its read and write requests. A CPU, which can execute an instruction every 5 nanoseconds, must now wait as long as several hundred nanoseconds for the memory controller to respond before it can begin its next instruction. Even today's advanced pipelining technology doesn't help in these situations because that relies on the CPU being able to do useful work while it waits for the memory controller to respond. If the only thing the CPU has to look forward to for the next several hundred instructions is more data moves, then the CPU ultimately gets reduced to the speed of the memory controller.
- Moving all this data with the CPU slows the system down even after the data has been moved. Since both the source and destination cache lines must be pulled into the CPU cache when the data is moved, more than 3 k of instructions and or data resident in the CPU cache must be flushed or invalidated for every 1500 byte frame. This is of course assuming a combined instruction and data second level cache, as is the case with the Pentium processors. After the data has been moved, the former resident of the cache will likely need to be pulled back in, stalling the CPU even when we are not performing network processing. Ideally a system would never have to bring network frames into the CPU cache, instead reserving that precious commodity for instructions and data that are referenced repeatedly and frequently.
- But the data movement is not the only drain on the CPU. There is also a fair amount of processing that must be done by the protocol stack software. The most obvious expense is calculating the checksum for each TCP segment (or UDP datagram). Beyond this, however, there is other processing to be done as well. The TCP connection object must be located when a given TCP segment arrives, IP header checksums must be calculated, there are buffer and memory management issues, and finally there is also the significant expense of interrupt processing which we will discuss in the following section.
- 1.3 Too Many Interrupts.
- A 64 k SMB request (write or read-reply) is typically made up of 44 TCP segments when running over Ethernet (1500 byte MTU). Each of these segments may result in an interrupt to the CPU. Furthermore, since TCP must acknowledge all of this incoming data, it's possible to get another 44 transmit-complete interrupts as a result of sending out the TCP acknowledgements. While this is possible, it is not terribly likely. Delayed ACK timers allow us to acknowledge more than one segment at a time. And delays in interrupt processing may mean that we are able to process more than one incoming network frame per interrupt. Nevertheless, even if we assume 4 incoming frames per input, and an acknowledgement for every 2 segments (as is typical per the ACK-every-other-segment property of TCP), we are still left with 33 interrupts per 64 k SMB request.
- Interrupts tend to be very costly to the system. Often when a system is interrupted, important information must be flushed or invalidated from the system cache so that the interrupt routine instructions, and needed data can be pulled into the cache. Since the CPU will return to its prior location after the interrupt, it is likely that the information flushed from the cache will immediately need to be pulled back into the cache.
- What's more, interrupts force a pipeline flush in today's advanced processors. While the processor pipeline is an extremely efficient way of improving CPU performance, it can be expensive to get going after it has been flushed.
- Finally, each of these interrupts results in expensive register accesses across the peripheral bus (PCI). This is discussed more in the following section.
- 1.4 Inefficient Use of the Peripheral Bus (PCI).
- We noted earlier that when the CPU has to access system memory, it may be stalled for several hundred nanoseconds. When it has to read from PCI, it may be stalled for many microseconds. This happens every time the CPU takes an interrupt from a standard NIC. The first thing the CPU must do when it receives one of these interrupts is to read the NIC Interrupt Status Register (ISR) from PCI to determine the cause of the interrupt. The most troubling thing about this is that since interrupt lines are shared on PC-based systems, we may have to perform this expensive PCI read even when the interrupt is not meant for us.
- There are other peripheral bus inefficiencies as well. Typical NICs operate using descriptor rings. When a frame arrives, the NIC reads a receive descriptor from system memory to determine where to place the data. Once the data has been moved to main memory, the descriptor is then written back out to system memory with status about the received frame. Transmit operates in a similar fashion. The CPU must notify that NIC that it has a new transmit. The NIC will read the descriptor to locate the data, read the data itself, and then write the descriptor back with status about the send. Typically on transmits the NIC will then read the next expected descriptor to see if any more data needs to be sent. In short, each receive or transmit frame results in 3 or 4 separate PCI reads or writes (not counting the status register read).
- Alacritech was formed with the idea that the network processing described above could be offloaded onto a cost-effective Intelligent Network Interface Card (INIC). With the Alacritech INIC, we address each of the above problems, resulting in the following advancements:
- 1. The vast majority of the data is moved directly from the INIC into its final destination. A single trip across the system memory bus.
- 2. There is no header processing, little data copying, and no checksumming required by the CPU. Because of this, the data is never moved into the CPU cache, allowing the system to keep important instructions and data resident in the CPU cache.
- 3. Interrupts are reduced to as little as 4 interrupts per 64 k SMB read and 2 per 64 k SMB write.
- 4. There are no CPU reads over PCI and there are fewer PCI operations per receive or transmit transaction.
- In the remainder of this document we will describe how we accomplish the above.
- 2.1 Perform Transport Level Processing on the INIC.
- In order to keep the system CPU from having to process the packet headers or checksum the packet, we must perform this task on the INIC. This is a daunting task. There are more than 20,000 lines of C code that make up the FreeBSD TCP/IP protocol stack. Clearly this is more code than could be efficiently handled by a competitively priced network card. Furthermore, as noted above, the TCP/IP protocol stack is complicated enough to consume a 200 MHz Pentium-Pro. Clearly in order to perform this function on an inexpensive card, we need special network processing hardware as opposed to simply using a general purpose CPU.
- 2.1.1 Only Support TCP/IP.
- In this section we introduce the notion of a “context”. A context is required to keep track of information that spans many, possibly discontiguous, pieces of information. When processing TCP/IP data, there are actually two contexts that must be maintained. The first context is required to reassemble IP fragments. It holds information about the status of the IP reassembly as well as any checksum information being calculated across the IP datagram (UDP or TCP). This context is identified by the IP_ID of the datagram as well as the source and destination IP addresses. The second context is required to handle the sliding window protocol of TCP. It holds information about which segments have been sent or received, and which segments have been acknowledged, and is identified by the IP source and destination addresses and TCP source and destination ports.
- If we were to choose to handle both contexts in hardware, we would have to potentially keep track of many pieces of information. One such example is a case in which a single 64 k SMB write is broken down into 44 1500 byte TCP segments, which are in turn broken down into 131 576 byte IP fragments, all of which can come in any order (though the maximum window size is likely to restrict the number of outstanding segments considerably).
- Fortunately, TCP performs a Maximum Segment Size negotiation at connection establishment time, which should prevent IP fragmentation in nearly all TCP connections. The only time that we should end up with fragmented TCP connections is when there is a router in the middle of a connection which must fragment the segments to support a smaller MTU. The only networks that use a smaller MTU than Ethernet are serial line interfaces such as SLIP and PPP. At the moment, the fastest of these connections only run at 128 k (ISDN) so even if we had 256 of these connections, we would still only need to support 34 Mb/sec, or a little over three 10 bT connections worth of data. This is not enough to justify any performance enhancements that the INIC offers. If this becomes an issue at some point, we may decide to implement the MTU discovery algorithm, which should prevent TCP fragmentation on all connections (unless an ICMP redirect changes the connection route while the connection is established).
- With this in mind, it seems a worthy sacrifice to not attempt to handle fragmented TCP segments on the INIC. UDP is another matter. Since UDP does not support the notion of a Maximum Segment Size, it is the responsibility of IP to break down a UDP datagram into MTU sized packets. Thus, fragmented UDP datagrams are very common. The most common UDP application running today is NFSV2 over UDP. While this is also the most common version of NFS running today, the current version of Solaris being sold by Sun Microsystems runs NFSV3 over TCP by default. We can expect to see the NFSV2/UDP traffic start to decrease over the coming years. In summary, we will only offer assistance to non-fragmented TCP connections on the INIC.
- 2.1.2 Don't Handle TCP “Exceptions”.
- As noted above, we won't provide support for fragmented TCP segments on the INIC. We have also opted to not handle TCP connection and breakdown. Here is a list of other TCP “exceptions” which we have elected to not handle on the INIC:
- Fragmented Segments—Discussed above.
- Retransmission Timeout—Occurs when we do not get an acknowledgement for previously sent data within the expected time period.
- Out of order segments—Occurs when we receive a segment with a sequence number other than the next expected sequence number.
- FIN segment—Signals the close of the connection.
- Since we have now eliminated support for so many different code paths, it might seem hardly worth the trouble to provide any assistance by the card at all. This is not the case. According to W. Richard Stevens and Gary Write in their book “TCP/
IP Illustrated Volume 2”, TCP operates without experiencing any exceptions between 97 and 100 percent of the time in local area networks. As network, router, and switch reliability improve this number is likely to only improve with time. - 2.1.3 Two Modes of Operation.
- So the next question is what to do about the network packets that do not fit our criteria. The answer shown in FIG. 16 is to use two modes of operation: One in which the network frames are processed on the INIC through TCP and one in which the card operates like a typical dumb NIC. We call these two modes fast-path, and slow-path. In the slow-path case, network frames are handed to the system at the MAC layer and passed up through the host protocol stack like any other network frame. In the fast path case, network data is given to the host after the headers have been processed and stripped.
- The transmit case works in much the same fashion. In slow-path mode the packets are given to the INIC with all of the headers attached. The INIC simply sends these packets out as if it were a dumb NIC. In fast-path mode, the host gives raw data to the INIC which it must carve into MSS sized segments, add headers to the data, perform checksums on the segment, and then send it out on the wire.
- 2.1.4 The TCB Cache.
- Consider a situation in which a TCP connection is being handled by the card and a fragmented TCP segment for that connection arrives. In this situation, it will be necessary for the card to turn control of this connection over to the host.
- This introduces the notion of a Transmit Control Block (TCB) cache. A TCB is a structure that contains the entire context associated with a connection. This includes the source and destination IP addresses and source and destination TCP ports that define the connection. It also contains information about the connection itself such as the current send and receive sequence numbers, and the first-hop MAC address, etc. The complete set of TCBs exists in host memory, but a subset of these may be “owned” by the card at any given time. This subset is the TCB cache. The INIC can own up to 256 TCBs at any given time.
- TCBs are initialized by the host during TCP connection setup. Once the connection has achieved a “steady-state” of operation, its associated TCB can then be turned over to the INIC, putting us into fast-path mode. From this point on, the INIC owns the connection until either a FIN arrives signaling that the connection is being closed, or until an exception occurs which the INIC is not designed to handle (such as an out of order segment). When any of these conditions occur, the NIC will then flush the TCB back to host memory, and issue a message to the host telling it that it has relinquished control of the connection, thus putting the connection back into slow-path mode. From this point on, the INIC simply hands incoming segments that are destined for this TCB off to the host with all of the headers intact.
- Note that when a connection is owned by the INIC, the host is not allowed to reference the corresponding TCB in host memory as it will contain invalid information about the state of the connection.
- 2.1.5 TCP Hardware Assistance.
- When a frame is received by the INIC, it must verify it completely before it even determines whether it belongs to one of its TCBs or not. This includes all header validation (is it IP, IPV4 or V6, is the IP header checksum correct, is the TCP checksum correct, etc). Once this is done it must compare the source and destination IP address and the source and destination TCP port with those in each of its TCBs to determine if it is associated with one of its TCBs. This is an expensive process. To expedite this, we have added several features in hardware to assist us. The header is fully parsed by hardware and its type is summarized in a single status word. The checksum is also verified automatically in hardware, and a hash key is created out of the IP addresses and TCP ports to expedite TCB lookup. For full details on these and other hardware optimizations, refer to the INIC Hardware Specification sections (Heading 8).
- With the aid of these and other hardware features, much of the work associated with TCP is done essentially for free. Since the card will automatically calculate the checksum for TCP segments, we can pass this on to the host, even when the segment is for a TCB that the INIC does not own.
- 2.1.6 TCP Summary.
- By moving TCP processing down to the INIC we have offloaded a large amount of work from the host. The host no longer has to pull the data into its cache to calculate the TCP checksum. It does not have to process the packet headers, and it does not have to generate TCP ACKs. We have achieved most of the goals outlined above, but we are not done yet.
- 2.2 Transport Layer Interface.
- This section defines the INIC's relation to the host's transport layer interface (called TDI, or Transport Driver Interface, in Windows NT). For full details on this interface, refer to the Alacritech TCP (ATCP) driver specification (Heading 4).
- 2.2.1 Receive.
- Simply implementing TCP on the INIC does not allow us to achieve our goal of landing the data in its final destination. Somehow the host has to tell the INIC where to put the data. This is a problem in that the host can not do this without knowing what the data actually is. Fortunately, NT has provided a mechanism by which a transport driver can “indicate” a small amount of data to a client above it while telling it that it has more data to come. The client, having then received enough of the data to know what it is, is then responsible for allocating a block of memory and passing the memory address or addresses back down to the transport driver, which is in turn responsible for moving the data into the provided location.
- We will make use of this feature by providing a small amount of any received data to the host, with a notification that we have more data pending. When this small amount of data is passed up to the client, and it returns with the address in which to put the remainder of the data, our host transport driver will pass that address to the INIC which will DMA the remainder of the data into its final destination.
- Clearly there are circumstances in which this does not make sense. When a small amount of data arrives (500 bytes for example) with a push flag set indicating that the data must be delivered to the client immediately, it does not make sense to deliver some of the data directly while waiting for the list of addresses to DMA the rest. Under these circumstances, it makes more sense to deliver the 500 bytes directly to the host, and allow the host to copy it into its final destination. While various ranges are feasible, it is currently preferred that anything less than a segment's worth (1500 bytes) of data be delivered directly to the host, while anything more is delivered as a small piece, which may be 128 bytes, with the remainder held until the destination memory address has been received.
- The trick then is knowing when the data should be delivered to the client or not. As we've noted, a push flag indicates that the data should be delivered to the client immediately, but this alone is not sufficient. Fortunately, in the case of NetBIOS transactions (such as SMB), we are explicitly told the length of the session message in the NetBIOS header itself. With this we can simply indicate a small amount of data to the host immediately upon receiving the first segment. The client will then allocate enough memory for the entire NetBIOS transaction, which we can then use to DMA the remainder of the data into as it arrives. In the case of a large (56 k for example) NetBIOS session message, all but the first couple hundred bytes will be DMA'd to their final destination in memory.
- But what about applications that do not reside above NetBIOS? In this case we can not rely on a session level protocol to tell us the length of the transaction. Under these circumstances we will buffer the data as it arrives until A) we have received some predetermined number of bytes such as 8 k, B) some predetermined period of time passes between segments, or C) we get a push flag. When any of these conditions occurs, we will then indicate some or all of the data to the host depending on the amount of data buffered. If the data buffered is greater than about 1500 bytes we must then also wait for the memory address to be returned from the host so that we may then DMA the remainder of the data.
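- The decision just described might be sketched as follows; the 1500-byte and 128-byte figures follow the text above, while the timer and exact thresholds are assumptions:
#include <stddef.h>
#include <stdbool.h>

#define ACCUMULATE_LIMIT     8192   /* A) predetermined byte count      */
#define INDICATE_WHOLE_LIMIT 1500   /* below this, deliver all directly */
#define INDICATE_PIECE        128   /* small piece indicated otherwise  */

/* Returns how many of 'buffered' bytes to indicate to the host now.
 * Zero means keep accumulating; a value less than 'buffered' means the
 * remainder waits for the host to return destination addresses. */
size_t indicate_size(size_t buffered, bool push_seen, bool timer_expired)
{
    if (buffered < ACCUMULATE_LIMIT && !push_seen && !timer_expired)
        return 0;                       /* none of A), B), C) yet */

    if (buffered < INDICATE_WHOLE_LIMIT)
        return buffered;                /* small: deliver everything */

    return INDICATE_PIECE;              /* large: indicate a piece,
                                           DMA the rest later */
}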
- 2.2.2 Transmit.
- The transmit case is much simpler. In this case the client (NetBIOS for example) issues a TDI Send with a list of memory addresses which contain data that it wishes to send along with the length. The host can then pass this list of addresses and length off to the INIC. The INIC will then pull the data from its source location in host memory, as it needs it, until the complete TDI request is satisfied.
- 2.2.3 Effect on Interrupts.
- Note that when we receive a large SMB transaction, for example, there are two interactions between the INIC and the host: the first in which the INIC indicates a small amount of the transaction to the host, and the second in which the host provides the memory location(s) in which the INIC places the remainder of the data. This results in only two interrupts from the INIC: the first when it indicates the small amount of data, and the second after it has finished filling in the host memory given to it. This is a drastic reduction from the estimated 33 interrupts per 64 k SMB request given at the beginning of this section. On transmit, we actually only receive a single interrupt when the send command that has been given to the INIC completes.
- 2.2.4 Transport Layer Interface Summary.
- Having now established our interaction with Microsoft's TDI interface, we have achieved our goal of landing most of our data directly into its final destination in host memory. We have also managed to transmit all data from its original location in host memory. And finally, we have reduced our interrupts to 2 per 64 k SMB read and 1 per 64 k SMB write. The only thing that remains in our list of objectives is to design an efficient host (PCI) interface.
- 2.3 Host (PCI) Interface.
- In this section we define the host interface. For a more detailed description, refer to the “Host Interface Strategy for the Alacritech INIC” section (Heading 3).
- 2.3.1 Avoid PCI Reads.
- One of our primary objectives in designing the host interface of the INIC was to eliminate PCI reads in either direction. PCI reads are particularly inefficient in that they completely stall the reader until the transaction completes. As noted above, this could hold a CPU up for several microseconds, a thousand times the time typically required to execute a single instruction. PCI writes, on the other hand, are usually buffered by the memory-bus-to-PCI bridge, allowing the writer to continue on with other instructions. This technique is known as “posting”.
- 2.3.1.1 Memory-Based Status Register.
- The only PCI read that is required by most NICs is the read of the interrupt status register. This register gives the host CPU information about what event has caused an interrupt (if any). In the design of our INIC we have elected to place this necessary status register into host memory. Thus, when an event occurs on the INIC, it writes the status register to an agreed upon location in host memory. The corresponding driver on the host reads this local register to determine the cause of the interrupt. The interrupt lines are held high until the host clears the interrupt by writing to the INIC's Interrupt Clear Register. Shadow registers are maintained on the INIC to ensure that events are not lost.
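- A minimal sketch of the host-side interrupt path implied above; the register mapping, bit assignments, and helper functions are assumptions:
#include <stdint.h>

/* Status word the INIC writes into host memory when an event occurs. */
volatile uint32_t inic_isr_copy;

/* Memory-mapped Interrupt Clear Register on the INIC (hypothetical). */
volatile uint32_t *inic_int_clear_reg;

void process_small_buffer_queue(void);  /* hypothetical helpers */
void process_response_buffers(void);

/* Interrupt handler: the cause is read from host memory, so the only
 * PCI traffic is the posted write that clears the interrupt. */
void inic_interrupt(void)
{
    uint32_t events = inic_isr_copy;    /* local memory read, not PCI  */

    *inic_int_clear_reg = events;       /* posted PCI write            */

    if (events & 0x1)                   /* hypothetical "receive" bit  */
        process_small_buffer_queue();
    if (events & 0x2)                   /* hypothetical "command done" */
        process_response_buffers();
}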
- 2.3.1.2 Buffer Addresses are Pushed to the INIC.
- Since it is imperative that our INIC operate as efficiently as possible, we must also avoid PCI reads performed by the INIC. We do this by pushing our receive buffer addresses to the INIC. As mentioned at the beginning of this section, most NICs work on a descriptor queue algorithm in which the NIC reads a descriptor from main memory in order to determine where to place the next frame. We will instead write receive buffer addresses to the INIC as receive buffers are filled. In order to avoid having to write to the INIC for every receive frame, we instead allow the host to pass off a page's worth (4 k) of buffers in a single write.
- 2.3.2 Support Small and Large Buffers on Receive.
- In order to further reduce the number of writes to the INIC, and to reduce the amount of memory being used by the host, we support two different buffer sizes. A small buffer contains roughly 200 bytes of data payload, as well as extra fields containing status about the received data, bringing the total size to 256 bytes. We can therefore pass 16 of these small buffers at a time to the INIC. Large buffers are 2 k in size. They are used to contain any fast or slow-path data that does not fit in a small buffer. Note that when we have a large fast-path receive, a small buffer will be used to indicate a small piece of the data, while the remainder of the data will be DMA'd directly into memory. Large buffers are never passed to the host by themselves; instead, they are always accompanied by a small buffer which contains status about the receive along with the large buffer address. By operating in this manner, the driver must only maintain and process the small buffer queue. Large buffers are returned to the host by virtue of being attached to small buffers. Since large buffers are 2 k in size, they are passed to the INIC 2 buffers at a time.
- 2.3.3 Command and Response Buffers.
- In addition to needing a manner by which the INIC can pass incoming data to us, we also need a manner by which we can instruct the INIC to send data. Plus, when the INIC indicates a small amount of data in a large fast-path receive, we need a method of passing back the address or addresses in which to put the remainder of the data. We accomplish both of these with the use of a command buffer. Sadly, the command buffer is the only place in which we must violate our rule of only pushing data across PCI. For the command buffer, we write the address of the command buffer to the INIC. The INIC then reads the contents of the command buffer into its memory so that it can execute the desired command. Since a command may take a relatively long time to complete, it is unlikely that command buffers will complete in order. For this reason we also maintain a response buffer queue. Like the small and large receive buffers, a page's worth of response buffers is passed to the INIC at a time. Response buffers are only 32 bytes, so we have to replenish the INIC's supply of them relatively infrequently. The response buffer's only purpose is to indicate the completion of the designated command buffer, and to pass status about the completion.
- 2.4 Examples.
- In this section we will provide a couple of examples describing some of the differing data flows that we might see on the Alacritech INIC.
- 2.4.1 Fast-Path 56K NetBIOS Session Message.
- Let's say a 56 k NetBIOS session message is received on the INIC. The first segment will contain the NetBIOS header, which contains the total NetBIOS length. A small chunk of this first segment is provided to the host by filling in a small receive buffer, modifying the interrupt status register on the host, and raising the appropriate interrupt line. Upon receiving the interrupt, the host will read the ISR, clear it by writing back to the INIC's Interrupt Clear Register, and will then process its small receive buffer queue looking for receive buffers to be processed. Upon finding the small buffer, it will indicate the small amount of data up to the client to be processed by NetBIOS. It will also, if necessary, replenish the receive buffer pool on the INIC by passing off a page's worth of small buffers. Meanwhile, the NetBIOS client will allocate a memory pool large enough to hold the entire NetBIOS message, and will pass this address or set of addresses down to the transport driver. The transport driver will allocate an INIC command buffer, fill it in with the list of addresses, set the command type to tell the INIC that this is where to put the receive data, and then pass the command off to the INIC by writing to the command register. When the INIC receives the command buffer, it will DMA the remainder of the NetBIOS data, as it is received, into the memory address or addresses designated by the host. Once the entire NetBIOS transaction is complete, the INIC will complete the command by writing to the response buffer with the appropriate status and command buffer identifier.
- In this example, we have two interrupts, and all but a couple hundred bytes are DMA'd directly to their final destination. On PCI we have two interrupt status register writes, two interrupt clear register writes, a command register write, a command read, and a response buffer write.
- With a standard NIC this would result in an estimated 30 interrupts, 30 interrupt register reads, 30 interrupt clear writes, and 58 descriptor reads and writes. Plus the data will get moved anywhere from 4 to 8 times across the system memory bus.
- 2.4.2 Slow-Path Receive.
- If the INIC receives a frame that does not contain a TCP segment for one of its TCBs, it simply passes it to the host as if it were a dumb NIC. If the frame fits into a small buffer (˜200 bytes or less), then it simply fills in the small buffer with the data and notifies the host. Otherwise it places the data in a large buffer, writes the address of the large buffer into a small buffer, and again notifies the host. The host, having received the interrupt and found the completed small buffer, checks to see if the data is contained in the small buffer, and if not, locates the large buffer. Having found the data, the host will then pass the frame upstream to be processed by the standard protocol stack. It must also replenish the INIC's small and large receive buffer pool if necessary.
- With the INIC, this will result in one interrupt, one interrupt status register write, and one interrupt clear register write, as well as a possible small and/or large receive buffer register write. The data will go through the normal path, although if it is TCP data the host will not have to perform the checksum.
- With a standard NIC this will result in a single interrupt, an interrupt status register read, an interrupt clear register write, and a descriptor read and write. The data will get processed as it would by the INIC, except for a possible extra checksum.
- 2.4.3 Fast-Path 400 Byte Send.
- In this example, let's assume that the client has a small amount of data to send. It will issue the TDI Send to the transport driver, which will allocate a command buffer, fill it in with the address of the 400 byte send, and set the command to indicate that it is a transmit. It will then pass the command off to the INIC by writing to the command register. The INIC will then DMA the 400 bytes into its own memory, prepare a frame with the appropriate checksums and headers, and send the frame out on the wire. After it has received the acknowledgement, it will then notify the host of the completion by writing to a response buffer.
- With the INIC, this will result in one interrupt, one interrupt status register write, one interrupt clear register write, a command buffer register write, a command buffer read, and a response buffer write. The data is DMA'd directly from system memory.
- With a standard NIC this will result in a single interrupt, an interrupt status register read, an interrupt clear register write, and a descriptor read and write. The data would get moved across the system bus a minimum of 4 times. The resulting TCP ACK of the data, however, would add yet another interrupt, another interrupt status register read, interrupt clear register write, a descriptor read and write, and yet more processing by the host protocol stack.
- This section describes the host interface strategy for the Alacritech Intelligent Network Interface Card (INIC). The goal of the Alacritech INIC is to not only process network data through TCP, but also to provide zero-copy support for the SMB upper-layer protocol. It achieves this by supporting two paths for sending and receiving data, the fast-path and the slow-path. The fast-path data flow corresponds to connections that are maintained on the NIC, while slow-path traffic corresponds to network data for which the NIC does not have a connection. The fast-path flow works by passing a header to the host and subsequently holding further data for that connection on the card until the host responds via an INIC command with a set of buffers into which to place the accumulated data. In the slow-path data flow, the INIC will be operating as a “dumb” NIC, so that these packets are simply dumped into frame buffers on the host as they arrive. To do either path requires a pool of smaller buffers to be used for headers and a pool of data buffers for frames/data that are too large for the header buffer, with both pools being managed by the INIC. This section discusses how these two pools of data are managed as well as how buffers are associated with a given context.
- 3.1 Receive Interface.
- The varying requirements of the fast and slow paths and a desire to save PCI bandwidth are the driving forces behind the host interface that is described herein. As mentioned above, the fast-path flow puts a header into a header buffer that is then forwarded to the host. The host uses the header to determine what further data is following, allocates the necessary host buffers, and these are passed back to the INIC via a command to the INIC. The INIC then fills these buffers from data it was accumulating on the card and notifies the host by sending a response to the command. Alternatively, the fast-path may receive a header and data that is a complete request, but that is also too large for a header buffer. This results in a header and data buffer being passed to the host. This latter flow is identical to the slow-path flow, which also puts all the data into the header buffer or, if the header buffer is too small, uses a large (2K) host buffer for all the data. This means that on the unsolicited receive path, the host will only see either a header buffer or a header and, at most, one data buffer. Note that data is never split between a header and a data buffer. FIG. 17 illustrates both situations. Since we want to fill in the header buffer with a single DMA, the header must be the last piece of data to be written to the host for any received transaction.
- 3.1.1 Receive Interface Details.
- 3.1.2 Header Buffers.
- Header buffers in host memory are 256 bytes long, and are aligned on 256 byte boundaries. There will be a field in the header buffer indicating it has valid data. This field will initially be reset by the host before passing the buffer descriptor to the INIC. A set of header buffers are passed from the host to the INIC by the host writing to the Header Buffer Address Register on the INIC. This register is defined as follows:
- Bits 31-8 Physical address in host memory of the first of a set of contiguous header buffers.
- Bits 7-0 Number of header buffers passed.
- In this way the host can, say, allocate 16 buffers in a 4K page, and pass all 16 buffers to the INIC with one register write. The INIC will maintain a queue of these header descriptors in the SmallHType queue in its own local memory, adding to the end of the queue every time the host writes to the Header Buffer Address Register. Note that a single entry is added to the queue; the eventual dequeuer will use the count after extracting that entry.
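- As a rough host-side sketch of that single register write (the bit packing follows the definition above; the register pointer and names are assumptions):
#include <stdint.h>

volatile uint32_t *header_buffer_addr_reg;   /* mapped INIC register */

/* Pass 'count' contiguous 256-byte header buffers, starting at the
 * 256-byte-aligned physical address 'phys', with one posted write. */
void pass_header_buffers(uint32_t phys, unsigned count)
{
    /* Alignment guarantees the low 8 bits of 'phys' are zero, so the
     * buffer count (bits 7-0) can share the same 32-bit register. */
    *header_buffer_addr_reg = (phys & 0xFFFFFF00u) | (count & 0xFFu);
}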
- The header buffers will be used and returned to the host in the same order that they were given to the INIC. The valid field will be set by the INIC before returning the buffer to the host. In this way a PCI interrupt, with a single bit in the interrupt register, may be generated to indicate that there is a header buffer for the host to process. When servicing this interrupt, the host will look at its queue of header buffers, reading the valid field to determine how many header buffers are to be processed.
- 3.1.3 Receive Data Buffers.
- Receive data buffers in host memory are assumed here to be 2K bytes long and aligned on 2K boundaries, giving 2 buffers per 4K page. In order to pass receive data buffers to the INIC, the host must write to two registers on the INIC. The first register to be written is the Data Buffer Handle Register. The buffer handle is not significant to the INIC, but will be copied back to the host to return the buffer to the host. The second register written is the Data Buffer Address Register. This is the physical address of the data buffer. When both registers have been written, the INIC will add the contents of these two registers to its FreeType queue of data buffer descriptors. Note that the INIC host driver sets the handle register first, then the address register. There needs to be some mechanism put in place to ensure that the reading of these registers does not get out of sync with writing them. Effectively the INIC can read the address register first and save its contents, then read the handle register. It can then lock the register pair in some manner such that another write to the handle register is not permitted until the current contents have been saved. Both values extracted from the registers are to be written to the FreeType queue. The INIC will extract 2 entries each time when dequeuing.
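- A sketch of the ordered register-pair write described above; the register pointers and names are assumptions:
#include <stdint.h>

volatile uint32_t *data_buffer_handle_reg;   /* written first        */
volatile uint32_t *data_buffer_addr_reg;     /* triggers the enqueue */

/* Hand one 2K receive data buffer to the INIC.  The handle is opaque
 * to the INIC and is simply copied back when the buffer is returned. */
void pass_data_buffer(uint32_t handle, uint32_t phys_addr)
{
    *data_buffer_handle_reg = handle;
    *data_buffer_addr_reg   = phys_addr;     /* both values are then
                                                queued on FreeType   */
}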
- Data buffers will be allocated and used by the INIC as needed. For each data buffer used by a slow-path transaction, the data buffer handle will be copied into a header buffer. Then the header buffer will be returned to the host.
- 3.2 Transmit Interface.
- 3.2.1 Transmit Interface Overview.
- The transmit interface shown in FIG. 18, like the receive interface, has been designed to minimize the amount of PCI bandwidth and latencies. In order to transmit data, the host will transfer a command buffer to the INIC. This command buffer will include a command buffer handle, a command field, possibly a TCP context identification, and a list of physical data pointers. The command buffer handle is defined to be the first word of the command buffer and is used by the host to identify the command. This word will be passed back to the host in a response buffer, since commands may complete out of order, and the host will need to know which command is complete. Commands will be used for many reasons, but primarily to cause the INIC to transmit data, or to pass a set of buffers to the INIC for input data on the fast-path as previously discussed.
- Response buffers are physical buffers in host memory. They are used by the INIC in the same order as they were given to it by the host. This enables the host to know which response buffer(s) to next look at when the INIC signals a command completion.
- 3.2.2 Transmit Interface Details.
- 3.2.2.1 Command Buffers.
- Command buffers in host memory are a multiple of 32 bytes, up to a maximum of 1K bytes, and are aligned on 32 byte boundaries. A command buffer is passed to the INIC by writing to one of 5 Command Buffer Address Registers. These registers are defined as follows:
- Bits 31-5 Physical address in host memory of the command buffer.
- Bits 4-0 Length of command buffer in bytes/32 (i.e. number of multiples of 32 bytes).
- This is the physical address of the command buffer. The register to which the command is written predetermines the XMT interface number, or if the command is for the RCV CPU; hence there will be 5 of them, 0-3 for XMT and 4 for RCV. When one of these registers has been written, the INIC will add the contents of the register to its own internal queue of command buffer descriptors. The first word of all command buffers is defined to be the command buffer handle. It is the job of the utility CPU to extract a command from its local queue, DMA the command into a small INIC buffer (from the FreeSType queue), and queue that buffer into the Xmit#Type queue, where # is 0-3 depending on the interface, or the appropriate RCV queue. The receiving CPU will service the queues to perform the commands. When that CPU has completed a command, it extracts the command buffer handle and passes it back to the host via a response buffer.
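- A sketch of issuing a command buffer per the register definition above; the register array and variable names are assumptions:
#include <stdint.h>

/* One register per interface: 0-3 for XMT, 4 for RCV (hypothetical map). */
volatile uint32_t *command_buffer_addr_reg[5];

/* Issue a command buffer of 'len' bytes (a multiple of 32, 32-byte
 * aligned, at most 1K) on the given interface. */
void issue_command(unsigned iface, uint32_t phys, uint32_t len)
{
    *command_buffer_addr_reg[iface] =
        (phys & ~0x1Fu) | ((len / 32) & 0x1Fu);   /* address | length/32 */
}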
- 3.2.2.2 Response Buffers.
- Response buffers in host memory are 32 bytes long and aligned on 32 byte boundaries. They are handled in a very similar fashion to header buffers. There will be a field in the response buffer indicating it has valid data. This field will initially be reset by the host before passing the buffer descriptor to the INIC. A set of response buffers are passed from the host to the INIC by the host writing to the Response Buffer Address Register on the INIC. This register is defined as follows:
- Bits 31-8 Physical address in host memory of the first of a set of contiguous response buffers.
- Bits 7-0 Number of response buffers passed.
- In this way the host can, say, allocate 128 buffers in a 4K page, and pass all 128 buffers to the INIC with one register write. The INIC will maintain a queue of these response buffer descriptors in its ResponseType queue, adding to the end of the queue every time the host writes to the Response Buffer Address Register. The INIC writes the extracted contents, including the count, to the queue in exactly the same manner as for the header buffers.
- The response buffers can be used and returned to the host in the same order that they were given to the INIC. The valid field will be set by the INIC before returning the buffer to the host. In this way a PCI interrupt, with a single bit in the interrupt register, may be generated to indicate that there is a response buffer for the host to process. When servicing this interrupt, the host will look at its queue of response buffers, reading the valid field to determine how many response buffers are to be processed.
- 3.2.3 Interrupt Status Register/Interrupt Mask Register:
- FIG. 19 shows the general format of this register. The setting of any bits in the ISR will cause an interrupt, provided the corresponding bit in the Interrupt Mask Register is set. The default setting for the IMR is 0.
- The INIC is configured so that the host should never need to directly read the ISR from the INIC. To support this, it is important for the host/INIC to arrange a buffer area in host memory into which the ISR is dumped. The address and size of that area can be passed to the INIC via a command on the XMT interface. That command will also specify the setting for the IMR. Until the INIC receives this command, it will not DMA the ISR to host memory, and no events will cause an interrupt. The host could, if necessary, read the ISR directly from the INIC in this case.
- For the host to never have to actually read the register from the INIC itself, it is necessary for the INIC to update this host copy of the register whenever anything in it changes. The host will Ack (or deassert) events in the register by writing the register with 0's in appropriate bit fields. So that the host does not miss events, the following scheme has been developed:
- The INIC keeps a local copy of the register whenever it DMAs it to the host, i.e. after some event(s). Call this COPYA. Then the INIC starts accumulating any new events not reflected in the host copy in a separate word. Call this NEWA. As the host clears bits by writing the register back with those bits set to zero, the INIC clears these bits in COPYA (or the host write-back goes directly to COPYA). If there are new events in NEWA, it ORs them with COPYA, and DMAs this new ISR to the host. This new ISR then replaces COPYA, NEWA is cleared and the cycle then repeats.
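- The scheme above, seen from the INIC side, might be sketched as follows; this is illustrative C rather than actual firmware, and dma_isr_to_host() is a hypothetical helper:
#include <stdint.h>

static uint32_t copy_a;     /* ISR value last DMA'd to the host */
static uint32_t new_a;      /* events seen since that DMA       */

void dma_isr_to_host(uint32_t isr);   /* hypothetical DMA helper */

/* Called when the INIC detects one or more new events. */
void isr_event(uint32_t event_bits)
{
    if (copy_a == 0) {               /* host has consumed everything */
        copy_a = event_bits;
        dma_isr_to_host(copy_a);
    } else {
        new_a |= event_bits;         /* host copy still outstanding  */
    }
}

/* Called when the host writes the register back with bits cleared. */
void isr_host_writeback(uint32_t host_value)
{
    copy_a &= host_value;            /* clear the acknowledged bits  */
    if (new_a) {                     /* fold in accumulated events   */
        copy_a |= new_a;
        new_a = 0;
        dma_isr_to_host(copy_a);     /* new ISR replaces COPYA       */
    }
}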
- 3.2.4 Register Addresses.
- For the sake of simplicity, in this example of FIG. 20 the registers are at 4-byte increments from whatever the base address is.
- This section outlines the design specification for the Alacritech TCP (ATCP) transport driver. The ATCP driver consists of three components:
- 1. The bulk of the protocol stack is based on the FreeBSD TCP/IP protocol stack. This code performs the Ethernet, ARP, IP, ICMP, and (slow path) TCP processing for the driver.
- 2. At the top of the protocol stack we introduce an NT filter driver used to intercept TDI requests destined for the Microsoft TCP driver.
- 3. At the bottom of the protocol stack we include an NDIS protocol-driver interface which allows us to communicate with the INIC miniport NDIS driver beneath the ATCP driver.
- This section covers each of these topics, as well as issues common to the entire ATCP driver.
- 4.1 Coding Style.
- In order to ensure that our ATCP driver is written in a consistent manner, we have adopted a set of coding guidelines. These guidelines are introduced with the philosophy that we should write code in a Microsoft style since we are introducing an NT-based product. The guidelines below apply to all code that we introduce into our driver. Since a very large portion of our ATCP driver will be based on FreeBSD, and since we are somewhat time-constrained on our driver development, the ported FreeBSD code will be exempt from these guidelines.
- 1. Global symbols—All function names and global variables in the ATCP driver should begin with the “ATK” prefix (ATKSend( ) for instance).
- 2. Variable names—Microsoft seems to use capital letters to separate multi-word variable names instead of underscores (VariableName instead of variable_name). We should adhere to this style.
- 3. Structure pointers—Microsoft typedefs all of their structures. The structure types are always capitals, and they typedef a pointer to the structure as “P”<name> as follows:
typedef struct _FOO {
    INT bar;
} FOO, *PFOO;
- We will adhere to this style.
- 4. Function calls—Microsoft separates function call arguments on separate lines:
X = foobar(
        argument1,
        argument2
        );
- We will adhere to this style.
- 5. Comments—While Microsoft seems to alternatively use // and /* */ comment notation, we will exclusively use the /* */ notation.
- 6. Function comments—Microsoft includes comments with each function that describe the function, its arguments, and its return value. We will also include these comments, but will move them from within the function itself to just prior to the function for better readability.
- 7. Function arguments—Microsoft includes the keywords IN and OUT when defining function arguments. These keywords denote whether the function argument is used as an input parameter, or alternatively as a placeholder for an output parameter. We will include these keywords.
- 8. Function prototypes—We will include function prototypes in the most logical header file corresponding to the .c file. For example, the prototype for function foo( ) found in foo.c will be placed in foo.h.
- 9. Indentation—Microsoft code fairly consistently uses a tabstop of 4. We will do likewise.
- 10. Header file #ifndef—each header file should contain a #ifndef/#define/#endif which is used to prevent recursive header file includes. For example, foo.h would include:
#ifndef _FOO_H_
#define _FOO_H_
<foo.h contents>
#endif /* _FOO_H_ */
Note the _NAME_H_ format.
/*
 * $Id$
 */
- CVS (RCS) will expand this keyword to denote RCS revision, timestamps, author, etc.
- 4.2 SMP
- This section describes the process by which we will make the ATCP driver SMP safe. The basic rule for SMP kernel code is that any access to a memory variable must be protected by a lock that prevents a competing access by code running on another processor. Spinlocks are the normal locking method for code paths which do not take a long time to execute (and which do not sleep.)
- In general each instance of a structure will include a spinlock, which must be acquired before members of that structure are accessed, and held while a function is accessing that instance of the structure. Structures which are logically grouped together may be protected by a single spinlock: for example, the ‘in_pcb’ structure, ‘tcpcb’ structure, and ‘socket’ structure which together constitute the administrative information for a TCP connection will probably be collectively managed by a single spinlock in the ‘socket’ structure.
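- For illustration, the per-structure locking rule might look like the following sketch using the NT DDK spinlock primitives; the structure and function names are placeholders, not the actual ATCP definitions:
#include <ntddk.h>

typedef struct _ATKSOCKET {
    KSPIN_LOCK Lock;          /* guards this socket, tcpcb and in_pcb */
    ULONG      RcvBytes;      /* example field protected by Lock      */
} ATKSOCKET, *PATKSOCKET;

VOID ATKSocketAddReceived(IN PATKSOCKET Socket, IN ULONG Bytes)
{
    KIRQL OldIrql;

    KeAcquireSpinLock(&Socket->Lock, &OldIrql);
    Socket->RcvBytes += Bytes;        /* touch members only under the lock */
    KeReleaseSpinLock(&Socket->Lock, OldIrql);
}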
- In addition, every global data structure such as a list or hash table must also have a protecting spinlock which must be held while the structure is being accessed or modified. The NT DDK in fact provides a number of convenient primitives for SMP-safe list manipulation, and it is recommended that these be used for any new lists. Existing list manipulations in the FreeBSD code can probably be left as-is to minimize code disturbance, except of course that the necessary spinlock acquisition and release must be added around them.
- Spinlocks should not be held for long periods of time, and most especially, must not be held during a sleep, since this will lead to deadlocks. There is a significant deficiency in the NT kernel support for SMP systems: it does not provide an operation which allows a spinlock to be exchanged atomically for a sleep lock. This would be a serious problem in a UNIX environment where much of the processing occurs in the context of the user process which initiated the operation. (The spinlock would have to be explicitly released, followed by a separate acquisition of the sleep lock: creating an unsafe window.)
- The NT approach is more asynchronous, however: IRPs are simply marked as ‘PENDING’ when an operation cannot be completed immediately. The calling thread does NOT sleep at that point: it returns, and may go on with other processing. Pending IRPs are later completed, not by waking up the thread which initiated them, but by an ‘IoCompleteRequest’ call which typically runs at DISPATCH level in an arbitrary context.
- Thus we have not in fact used sleep locks anywhere in the design of the ATCP driver, hoping the above issue will not arise.
- 4.3 Data Flow Overview.
- The ATCP driver supports two paths for sending and receiving data, the fast-path and the slow-path. The fast-path data flow corresponds to connections that are maintained on the INIC, while slow-path traffic corresponds to network data for which the INIC does not have a connection. In order to set some groundwork for the rest of this section, these two data paths are summarized here.
- 4.3.1 Fast-Path Input Data Flow.
- There are 2 different cases to consider:
- 1. NETBIOS traffic (identifiable by port number.)
- 2. Everything else.
- 4.3.1.1 NETBIOS Input.
- As soon as the INIC has received a segment containing a NETBIOS header, it will forward it up to the TCP driver, along with the NETBIOS length from the header. (In principle the host could get this from the header itself, but since the INIC has already done the decode, it seems reasonable to just pass it.)
- From the TDI spec, the amount of data in the buffer actually sent must be at least 128 bytes. For small SMBs, all of the received SMB should be forwarded; it will be absorbed directly by the TDI client without any further MDL exchange. Experiments tracing the TDI data flow show that the NETBIOS client directly absorbs up to 1460 bytes: the amount of payload data in a single Ethernet frame. Thus the initial system specifies that the INIC will indicate anything up to a complete segment to the ATCP driver. [See note (1)].
- Once the INIC has passed up an indication with an NETBIOS length greater than the amount of data in the packet it passed, it will continue to accumulate further incoming data in DRAM on the INIC. Overflow of INIC DRAM buffers will be avoided by using a receive window on the INIC at this point, which can be 8K.
- On receiving the indicated packet, the ATCP driver will call the receive handler registered by the TDI client for the connection, passing the actual size of the data in the packet from the INIC as “bytes indicated” and the NETBIOS length as “bytes available.” [See note (2)].
- In the “large data input” case, where “bytes available” exceeds the packet length, the TDI client will then provide an MDL, associated with an IRP, which must be completed when this MDL is filled. (This IRP/MDL may come back either in the response to TCP's call of the receive handler, or as an explicit TDI_RECEIVE request.)
- The ATCP driver will build a “receive request” from the MDL information, and pass this to the INIC. This request will contain:
- 1) The TCP context identifier; 2) Size and offset information; 3) A list of physical addresses corresponding to the MDL pages; 4) A context field to allow the ATCP driver to identify the request on completion; and 5) “Piggybacked” window update information.
- Note: the ATCP driver must copy any remaining data (which was not taken by the receive handler) from the segment indicated by the INIC to the start of the MDL, and must adjust the size & offset information in the request passed to the INIC to account for this.
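- A hypothetical layout for such a receive request is sketched below; the field names and the per-request page limit are assumptions:
#include <stdint.h>

#define MAX_MDL_PAGES 16                /* assumed per-request limit */

struct inic_receive_request {
    uint32_t tcp_context_id;            /* fast-path connection handle   */
    uint32_t total_length;              /* bytes the INIC should place   */
    uint32_t first_page_offset;         /* adjusted for data the driver
                                           already copied into the MDL   */
    uint32_t host_context;              /* echoed back on completion     */
    uint32_t window_update;             /* piggybacked bytes-consumed    */
    uint32_t page_count;
    uint64_t page_phys[MAX_MDL_PAGES];  /* physical addresses of pages   */
};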
- The INIC will fill the given page(s) with incoming data up to the requested amount, and respond to the ATCP driver when this is done [See note (3)]. If the MDL is large, the INIC may open up its advertised receive window for improved throughput while filling the MDL. On receiving the response from the INIC, the ATCP driver will complete the IRP associated with this MDL, to tell the TDI client that the data is available. At this point the cycle of events is complete, and the ATCP driver is now waiting for the next header indication.
- 4.3.1.2 Other TCP Input.
- In the general case we do not have a higher-level protocol header to enable us to predict that more data is coming. So on non-NETBIOS connections, the INIC will just accumulate incoming data in INIC DRAM up to a quantity of 8K in this example. Again, a maximum advertised window size, which may be 16K, will be used to prevent overflow of INIC DRAM buffers.
- When the prescribed amount has been accumulated, or when a PSH flag is seen, the INIC will indicate a small packet which may be 128 bytes of the data to the ATCP driver, along with the total length of the data accumulated in NIC DRAM.
- On receiving the indicated packet, the ATCP driver will call the receive handler registered by the TDI client for the connection, passing the actual size of the data in the packet from the INIC as “bytes indicated” and the total INIC-buffer length as “bytes available.”
- As in the NETBIOS case, if “bytes available” exceeds “bytes indicated”, the TDI client will provide an IRP with an MDL. The ATCP driver will pass the MDL to the INIC to be filled, as before. The INIC will reply to the ATCP driver, which in turn will complete the IRP to the TDI client.
- Using an MDL from the client avoids a copy step. However, if we can only buffer 8K and delay indicating to the ATCP driver until we have done so, a question arises regarding further segments coming in, since INIC DRAM is a scarce resource. We do not want to ACK with a zero-size window advertisement: this would cause the transmitting end to go into persist state, which is bad for throughput. If the transmitting end is also our INIC, this results in having to implement the persist timer on the INIC, which we do not wish to do. Instead for large transfers (i.e. no PSH flag seen) we will not send an ACK until the host has provided the MDL, and also, to avoid stopping the transmitting end, we will use a receive window of twice the amount we will buffer before calling the host. Since the host comes back with the MDL quite quickly (measured at <100 microseconds), we do not expect to experience significant overruns.
- 4.3.1.3 INIC Receive Window Updates.
- If the INIC “owns” an MDL provided by the TDI client (sent by ATCP as a receive request), it will treat this as a “promise” by the TDI client to accept the data placed in it, and may therefore ACK incoming data as it is filling the pages.
- However, for small requests, there will be no MDL returned by the TDI client: it absorbs all of the data directly in the receive callback function. We need to update the INIC's view of data which has been accepted, so that it can update its receive window. In order to be able to do this, the ATCP driver will accumulate a count of data which has been accepted by the TDI client receive callback function for a connection.
- From the INIC's point of view, though, segments sent up to the ATCP driver are just “thrown over the wall”; there is no explicit reply path. We will therefore “piggyback” the update on requests sent out to the INIC. Whenever the ATCP driver has outgoing data for that connection, it will place this count in a field in the send request (and then clear the counter.) Any receive request (passing a receive MDL to the INIC) may also be used to transport window update info in the same way.
- Note: we will probably also need to design a message path whereby the ATCP driver can explicitly send an update of this “bytes consumed” information (either when it exceeds a preset threshold or if there are no requests going out to the INIC for more than a given time interval), to allow for possible scenarios in which the data stream is entirely one-way.
- 4.3.1.4 Notes:
- 1) The PSH flag can help to identify small SMB requests that fit into one segment.
- 2) Actually, the observed “bytes available” from the NT TCP driver to its client's callback in this case is always 1460. The NETBIOS-aware TDI client presumably calculates the size of the MDL it will return from the NETBIOS header. So strictly speaking we do not need the NETBIOS header length at this point: just an indication that this is a header for a “large” size. However, we *do* need an actual “bytes available” value for the non-NETBIOS case, so we may as well pass it.
- 3) We observe that the PSH flag is set in the segment completing each NETBIOS transfer. The INIC can use this to determine when the current transfer is complete and the MDL should be returned. It can, at least in a debug mode, sanity check the amount of received data against what is expected, though.
- 4.3.2 Fast-Path Output Data Flow.
- The fast-path output data flow is similar to the input data-flow, but simpler. In this case the TDI client will provide a MDL to the ATCP driver along with an IRP to be completed when the data is sent. The ATCP driver will then give a request (corresponding to the MDL) to the INIC. This request will contain:
- 1) The TCP context identifier; 2) Size and offset information; 3) A list of physical addresses corresponding to the MDL pages; 4) A context field to allow the ATCP driver to identify the request on completion; 5) “Piggybacked” window update information (as discussed in section 6.1.3.)
- The INIC will copy the data from the given physical location(s) as it sends the corresponding network frames onto the network. When all of the data is sent, the INIC will notify the host of the completion, and the ATCP driver will complete the IRP.
- Note that there may be multiple output requests pending at any given time, since SMB allows multiple SMB requests to be simultaneously outstanding.
- 4.3.3 Slow-Path Data Flow.
- For data for which there is no connection being maintained on the INIC, we will have to perform all of the TCP, IP, and Ethernet processing ourselves. To accomplish this we will port the FreeBSD protocol stack. In this mode, the INIC will be operating as a “dumb NIC”; the packets which pass over the NDIS interface will just contain MAC-layer frames.
- The MBUFs in the incoming direction will in fact be managing NDIS-allocated packets. In the outgoing direction, we need protocol-allocated MBUFs in which to assemble the data and headers. The MFREE macro must be cognizant of the various types of MBUFs, and “do the right thing” for each type.
- We will retain a (modified) socket structure for each connection, containing the socket buffer fields expected by the FreeBSD code. The TCP code that operates on socket buffers (adding/removing MBUFs to & from queues, indicating acknowledged & received data etc) will remain essentially unchanged from the FreeBSD base (though most of the socket functions & macros used to do this will need to be modified; these are the functions in kern/uipc_socket2.c.) The upper socket layer (kern/uipc_socket.c), where the overlying OS moves data in and out of socket buffers, must be entirely re-implemented to work in TDI terms. Thus, instead of sosend( ), there will be a function that copies data from the MDL provided in a TDI_SEND call into socket buffer MBUFs. Instead of soreceive( ), there will be a handler that calls the TDI client receive callback function, and also copies data from socket buffer MBUFs into any MDL provided by the TDI client (either explicitly with the callback response or as a separate TDI_RECEIVE call.)
- We must note that there is a semantic difference between TDI_SEND and a write( ) on a BSD socket. The latter may complete back to its caller as soon as the data has been copied into the socket buffer. The completion of a TDI_SEND, however, implies that the data has actually been sent on the connection. Thus we will need to keep the TDI_SEND IRPs (and associated MDLs) in a queue on the socket until the TCP code indicates that the data from them has been ACK'd.
- 4.3.4 Data Path Notes:
- 1. There might be input data on a connection object for which there is no receive handler function registered. This has not been observed, but we can probably just ASSERT for a missing handler for the moment. If it should happen, however, we must assume that the TDI client will be doing TDI_RECEIVE calls on the connection. If we can't make a callup at the time that the indication from the INIC appears, we can queue the data and handle it when a TDI_RECEIVE does appear.
- 2. NT has a notion of “canceling” IRPs. It is possible for us to get a “cancel” on an IRP corresponding to an MDL which has been “handed” to the INIC by a send or receive request. We can handle this by being able to force the context back off the INIC, since IRPs will only get cancelled when the connection is being aborted.
- 4.4 Context Passing Between ATCP and INIC.
- 4.4.1 From ATCP to INIC.
- There is a synchronization problem that must be addressed here. The ATCP driver will make a decision on a given connection that this connection should now be passed to the INIC. It builds and sends a command identifying this connection to the INIC.
- Before doing so, it must ensure that no slow-path outgoing data is outstanding. This is not difficult; it simply pends and queues any new TDI_SEND requests and waits for any unacknowledged slow path output data to be acknowledged before initiating the context pass operation.
- The problem arises with incoming slow-path data. If we attempt to do the context-pass in a single command handshake, there is a window during which the ATCP driver has sent the context command, but the INIC has not yet seen this (or has not yet completed setting up its context.) During this time, slow-path input data frames could arrive and be fed into the slow-path ATCP processing code. Should that happen, the context information which the ATCP driver passed to the INIC is no longer correct. We can simply abort the outward pass of the context in this event, but it seems better to have a reliable handshake.
- Therefore, the command to pass context from ATCP driver to INIC will be split into two halves, and there will be a two-exchange handshake.
- The initial command from ATCP to INIC expresses an “intention” to hand out the context. It will include the source and destination IP addresses and ports, which will allow the INIC to establish a “provisional” context. Once it has this “provisional” context in place, the INIC will not send any more slow-path input frames for that src/dest IP/port combination (it will queue them, if any are received.)
- When the ATCP driver receives the response to this initial “intent” command, it knows that the INIC will send no more slow-path input. The ATCP driver then waits for any remaining unconsumed slow-path input data for this connection to be consumed by the client. (Generally speaking there will be none, since the ATCP driver will not initiate a context pass while there is unconsumed slow-path input data; the handshake is simply to close the crossover window.)
- Once any such data has been consumed, we know things are in a quiescent state. The ATCP driver can then send the second, “commit” command to hand out the context, with confidence that the TCB values it is handing out (sequence numbers etc) are reliable.
- Note 1: it is conceivable that there might be situations in which the ATCP driver decides, after having sent the original “intention” command, that the context is not to be passed after all. (E.g. the local client issues a close.) So we must allow for the possibility that the second command may be an “abort”, which should cause the INIC to deallocate and clear up its “provisional” context.
- Note 2: to simplify the logic, the ATCP driver will guarantee that only one context may be in process of being handed out at a time: in other words, it will never issue another initial “intention” command until it has completed the second half of the handshake for the first one.
- 4.4.2 From INIC to ATCP.
- There are two possible cases for this: a context transfer may be initiated either by the ATCP driver or by the INIC. However the machinery will be very similar in the two cases. If the ATCP driver wishes to cause context to be flushed from NIC to host, it will send a “flush” message to the INIC specifying the context number to be flushed. Once the INIC receives this, it will proceed with the same steps as for the case where the flush is initiated by the INIC itself:
- 1) The INIC will send an error response to any current outstanding receive request it is working on (corresponding to an MDL into which data is being placed.) Before sending the response, it updates the receive command “length” field to reflect the amount of data which has actually been placed in the MDL buffers at the time of the flush.
- 2) Likewise it will send an error response for any current send request, again reporting the amount of data actually sent from the request.
- 3) The INIC will DMA the TCB for the context back to the host. (Note: part of the information provided with a context must be the address of the TCB in the host.)
- 4) The INIC will send a “flush” indication to the host (very preferably via the regular input path as a special type of frame) identifying the context which is being flushed. Sending this indication via the regular input path ensures that it will arrive before any following slow-path frames.
- At this point, the INIC is no longer doing fast-path processing, and any further incoming frames for the connection will simply be sent to the host as raw frames for the slow input path. The ATCP driver may not be able to complete the cleanup operations needed to resume normal slow path processing immediately on receipt of the “flush frame”, since there may be outstanding send and receive requests to which it has not yet received a response. If this is the case, the ATCP driver must set a “pend incoming TCP frames” flag in its per-connection context. The effect of this is to change the behavior of tcp_input( ). This runs as a function call in the context of ip_input( ), and normally returns only when incoming frames have been processed as far as possible (queued on the socket receive buffer or out-of-sequence reassembly queue.) However, if there is a flush pending and we have not yet completed resynchronization, we cannot do TCP processing and must instead queue input frames for TCP on a “holding queue” for the connection, to be picked up later when context flush is complete and normal slow path processing resumes. (This is why we want to send the “flush” indication via the normal input path: so that we can ensure it is seen before any following frames of slow-path input.)
- Next we need to wait for any outstanding “send” requests to be errored off:
- 1) The INIC maintains its context for the connection in a “zombie” state. As “send” requests for this connection come out of the INIC queue, it sends error responses for them back to the ATCP driver. (It is apparently difficult for the INIC to identify all command requests for a given context; simpler for it to just continue processing them in order, detecting ones that are for a “zombie” context as they appear.)
- 2) The ATCP driver has a count of the number of outstanding requests it has sent to the INIC. As error responses for these are received, it decrements this count, and when it reaches zero, the ATCP driver sends a “flush complete” message to the INIC.
- 3) When the INIC receives the “flush complete” message, it dismantles its “zombie” context. From the INIC perspective, the flush is now completed.
- 4) When the ATCP driver has received error responses for all outstanding requests, it has all the information needed to complete its cleanup. This involves completing any IRPs corresponding to requests which have entirely completed and adjusting fields in partially-completed requests so that send and receive of slow path data will resume at the right point in the byte streams.
- 5) Once all this cleanup is complete, the ATCP driver will loop pulling any “pended” TCP input frames off the “pending queue” mentioned above and feeding them into the normal TCP input processing. Once all input frames on this queue have been cleared off, the “pend incoming TCP frames” flag can be cleared for the connection, and we are back to normal slow-path processing.
- 4.5 Freebsd Porting Specification.
- The largest portion of the ATCP driver is either derived, or directly taken from the FreeBSD TCP/IP protocol stack. This section defines the issues associated with porting this code, the FreeBSD code itself, and the modifications required for it to suit our needs.
- 4.5.1 Porting Philosophy.
- FreeBSD TCP/IP (current version referred to as Net/3) is a general purpose TCP/IP driver. It contains code to handle a variety of interface types and many different kinds of protocols. To meet this requirement the code is often written in a sometimes confusing, over-complex manner. General-purpose structures are overlaid with other interface-specific structures so that different interface types can coexist using the same general-purpose code. For our purposes much of this complexity is unnecessary since we are only supporting a single interface type and a few specific protocols. It is therefore tempting to modify the code and data structures in an effort to make it more readable, and perhaps a bit more efficient. There are, however, some problems with doing this. First, the more we modify the original FreeBSD, the more changes we will have to make. This is especially true with regard to data structures. If we collapse two data structures into one we might improve the cleanliness of the code a bit, but we will then have to modify every reference to that data structure in the entire protocol stack. Another problem with attempting to “clean up” the code is that we might later discover that we need something that we had previously thrown away. Finally, while we might gain a small performance advantage in cleaning up the FreeBSD code, the FreeBSD TCP code will mostly only run in the slow-path connections, which are not our primary focus. Our priority is to get the slow-path code functional and reliable as quickly as possible.
- For the reasons above we have adopted the philosophy that we should initially keep the data structures and code as close to the original FreeBSD implementation as possible. The code will be modified for the following reasons:
- 1) As required for NT interaction—Obviously we can't expect to simply “drop-in” the FreeBSD code as is. The interface of this code to the NT system will require some significant code modifications. This will mostly occur at the topmost and bottommost portions of the protocol stack, as well as the “ioctl” sections of the code. Modifications for SMP issues are also needed.
- 2) Unnecessary code can be removed—While we will keep the code as close to the original FreeBSD as possible, we will nonetheless remove code that will never be used (UDP is a good example of this).
- 4.5.2 UNIX←→NT Conversion.
- The FreeBSD TCP/IP protocol stack makes use of many Unix system services. These include bcopy to copy memory, malloc to allocate memory, timestamp functions, etc. These will not be itemized in detail since the conversion to the corresponding NT calls is a fairly trivial and mechanical operation.
- An area which will need non-trivial support redesign is MBUFs.
- 4.5.2.1 Network Buffers.
- Under FreeBSD, network buffers are mapped using mbufs. Under NT network buffers are mapped using a combination of packet descriptors and buffer descriptors (the buffer descriptors are really MDLs). There are a couple of problems with the Microsoft method. First it does not provide the necessary fields which allow us to easily strip off protocol headers. Second, converting all of the FreeBSD protocol code to speak in terms of buffer descriptors is an unnecessary amount of overhead. Instead, in our port we will allocate our own mbuf structures and remap the NT packets as shown in FIG. 21.
- The mbuf structure will provide the standard fields provided in the FreeBSD mbuf including the data pointer, which points to the current location of the data, data length fields and flags. In addition each mbuf will point to the packet descriptor which is associated with the data being mapped. Once an NT packet is mapped, our transport driver should never have to refer to the packet or buffer descriptors for any information except when we are finished and are preparing to return the packet.
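- A minimal sketch of such an mbuf header is shown below. The m_next, m_data, m_len and m_flags fields follow the FreeBSD conventions; the back-pointer to the NDIS packet descriptor and the flag values used by mfreem are our own illustrative assumptions.

    /* Sketch of an ATCP mbuf header used to remap an NT packet (see FIG. 21). */
    #include <ndis.h>

    #define M_NDISPKT  0x8000   /* assumed flag: data belongs to an NDIS packet   */
    #define M_OWNDATA  0x4000   /* assumed flag: data buffer was allocated by us  */

    struct atk_mbuf {
        struct atk_mbuf *m_next;     /* next mbuf in a chain                      */
        UCHAR           *m_data;     /* current location of the data              */
        ULONG            m_len;      /* amount of data mapped by this mbuf        */
        ULONG            m_flags;    /* M_NDISPKT, M_OWNDATA, fast-path bits, ... */
        PNDIS_PACKET     m_ndispkt;  /* packet descriptor being mapped, if any    */
    };

    /* mfreem() would inspect m_flags: if M_NDISPKT is set, the packet descriptor
     * is handed back via NdisReturnPackets(); if M_OWNDATA is set, the private
     * data buffer is freed along with the mbuf header.                           */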
- There are a couple of things to note here. We have designed our INIC such that a packet header should never be split across multiple buffers. Thus, we should never require the equivalent of the “m_pullup” routine included in Unix. Also note that there are circumstances in which we will be accepting data that will also be accepted by the Microsoft TCP/IP. One such example of this is ARP frames. We will need to build our own ARP cache by looking at ARP replies as they come off the network. Under these circumstances, it is absolutely imperative that we do not modify the data, or the packet and buffer descriptors. We will discuss this further in the following sections.
- We will allocate a pool of mbuf headers at ATCP initialization time. It is important to remember that unlike other NIC drivers, we cannot simply drop data if we run out of the system resources required to manage/map the data. The reason for this is that we will be receiving data from the card that has already been acknowledged by TCP. Because of this it is essential that we never run out of mbuf headers. To solve this problem we will statically allocate mbuf headers for the maximum number of buffers that we will ever allow to be outstanding. By doing so, the card will run out of buffers in which to put the data before we will run out of mbufs, and as a result, the card will be forced to drop data at the link layer instead of us dropping it at the transport layer. DhXXX: as we've discussed, I don't think this is really true anymore. The INIC won't ACK data until either it's gotten a window update from ATCP to tell it the data's been accepted, or it's got an MDL. Thus it seems workable, though undesirable, to refuse a frame from the INIC and return an error to it saying it was not taken.
- We will also require a pool of actual mbufs (not just headers). These mbufs are required in order to build transmit protocol headers for the slow-path data path, as well as other miscellaneous purposes such as for building ARP requests. We will allocate a pool of these at initialization time and we will add to this pool dynamically as needed. Unlike the mbuf headers described above, which will be used to map acknowledged TCP data coming from the card, the full mbufs will contain data that can be dropped if we can not get an mbuf.
- 4.5.3 The Code.
- In this section we describe each section of the FreeBSD TCP/IP port. These sections include Interface Initialization, ARP, Route, IP, ICMP, and TCP.
- 4.5.3.1 Interface Initialization.
- 4.5.3.1.1 Structures.
- There are a variety of structures that represent a single interface in FreeBSD. These structures include: ifnet, arpcom, ifaddr, in_ifaddr, sockaddr, sockaddr_in, and sockaddr_dl. FIG. 22 shows the relationship between all of these structures:
- In the example of FIG. 22 we show a single interface with a MAC address of 00:60:97:DB:9B:A6 configured with an IP address of 192.100.1.2. As illustrated above, the in_ifaddr is actually an ifaddr structure with some extra fields tacked on to the end. Thus the ifaddr structure is used to represent both a MAC address and an IP address. Similarly the sockaddr structure is recast as a sockaddr_dl or a sockaddr_in depending on its address type. An interface can be configured to multiple IP addresses by simply chaining in_ifaddr structures after the in_ifaddr structure shown in FIG. 22.
- As mentioned in the Porting Philosophy section, many of the above structures could likely be collapsed into fewer structures. In order to avoid making unnecessary modifications to FreeBSD, for the time being we will leave these structures mostly as is. We will however eliminate the fields from the structure that will never be used. These structure modifications are discussed below.
- We also show above a structure called iface. This is a structure that we define. It contains the arpcom structure, which in turn contains the ifnet structure. It also contains fields that enable us to blend our FreeBSD implementation with NT NDIS requirements. One such example is the NDIS binding handle used to call down to NDIS with requests (such as send).
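- One plausible layout for the iface structure is sketched below; only the fields mentioned in this document are shown, and the field names and header path are assumptions.

    /* Sketch of the per-interface "iface" structure blending FreeBSD and NDIS state. */
    #include <ndis.h>
    #include "freebsd/if_ether.h"  /* ported header providing struct arpcom (path assumed) */

    struct iface {
        struct arpcom  if_ac;        /* contains the FreeBSD ifnet structure            */
        NDIS_HANDLE    if_bind;      /* NDIS binding handle used for send/request calls */
        NDIS_STRING    if_name;      /* adapter name from the Linkage registry key      */
        ULONG          if_linkspeed; /* link speed queried from the miniport            */
        UCHAR          if_mac[6];    /* MAC address queried from the miniport           */
    };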
- 4.5.3.1.2 The Functions.
- FreeBSD initializes the above structures in two phases. First, when a network interface is found, the ifnet, arpcom, and first ifaddr structures are initialized by the network layer driver and then via a call to the if_attach routine. The subsequent in_ifaddr structure(s) are initialized when a user dynamically configures the interface. This occurs in the in_ioctl and the in_ifinit routines. Since NT allows dynamic configuration of a network interface we will continue to perform the interface initialization in two phases, but we will consolidate these two phases as described below:
- 4.5.3.1.2.1 IfInit.
- The IfInit routine will be called from the ATKProtocolBindAdapter function. The IfInit function will initialize the Iface structure and associated arpcom and ifnet structures. It will then allocate and initialize an ifaddr structure to contain link-level information about the interface, and a sockaddr_dl structure to contain the interface name and MAC address. Finally it will add a pointer to the ifaddr structure into the ifnet_addrs array (using the if_index field of the ifnet structure) contained in the extended device object. IfInit will then call IfConfig for each IP address that it finds in the registry entry for the interface.
- 4.5.3.1.2.2 Ifconfig.
- IfConfig is called to configure an IP address for a given interface. It is passed a pointer to the ifnet structure for that interface along with all the information required to configure an IP address for that interface (such as the IP address, netmask, broadcast info, etc.). IfConfig will allocate an in_ifaddr structure to be used to configure the interface. It will chain it to the total chain of in_ifaddr structures contained in the extended device object, and will then configure the structure with the information given to it. After that it will add a static route for the newly configured network and then broadcast a gratuitous ARP request to notify others of our MAC/IP address pair and to detect duplicate IP addresses on the net.
- 4.5.3.2 ARP.
- We will port the FreeBSD ARP code to NT mostly as-is. For some reason, the FreeBSD ARP code is located in a file called if_ether.c. While the functionality of this file will remain the same, we will rename it to a more logical arp.c. The main structures used by ARP are the llinfo_arp structure and the rtentry structure (actually part of route). These structures will not require major modifications. The functions that will require modification are defined here.
- 4.5.3.2.1 In_arpinput.
- This function is called to process an incoming ARP frame. An ARP frame can either be an ARP request or an ARP reply. ARP requests are broadcast, so we will see every ARP request on the network, while ARP replies are directed so we should only see ARP replies that are sent to us. This introduces the following possible cases for an incoming ARP frame:
- 1. ARP request trying to resolve our IP address—Under normal circumstances, ARP would reply to this ARP request with an ARP reply containing our MAC address. Since ARP requests will also be passed up to the Microsoft TCP/IP driver, we need not reply. Note however, that FreeBSD also creates or updates an ARP cache entry with the information derived from the ARP request. It does this in anticipation of the fact that any host that wishes to know our MAC address is likely to wish to talk to us soon. Since we will need to know his MAC address in order to talk back, we might as well add the ARP information now rather than issuing our own ARP request later.
- 2. ARP request trying to resolve someone else's IP address—Since ARP requests are broadcast, we see every one on the network. When we receive an ARP request of this type, we simply check to see if we have an entry for the host that sent the request in our ARP cache. If we do, we check to see if we still have the correct MAC address associated with that host. If it is incorrect, we update our ARP cache entry. Note that we do not create a new ARP cache entry in this case.
- 3. ARP reply—In this case we add the new ARP entry to our ARP cache. Having resolved the address, we check to see if there are any transmit requests pending for the resolved IP address, and if so, transmit them.
- Given the above three possibilities, the only major change to the in_arpinput code is that we will remove the code which generates an ARP reply for ARP requests that are meant for our interface.
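- The resulting logic is sketched below. The structures and helper names (arp_lookup, arp_add, arp_update, arp_send_pending) are simplified stand-ins for the FreeBSD llinfo_arp handling, not the actual declarations.

    /* Self-contained sketch of the three in_arpinput() cases described above.      */
    #include <stdint.h>
    #include <stddef.h>

    #define ARPOP_REQUEST 1
    #define ARPOP_REPLY   2

    struct arp_pkt {                 /* simplified view of an ethernet ARP payload  */
        uint16_t op;                 /* ARPOP_REQUEST or ARPOP_REPLY (host order)   */
        uint32_t sender_ip;
        uint8_t  sender_mac[6];
        uint32_t target_ip;
    };

    struct arp_entry;                                   /* cache entry (placeholder) */
    struct arp_entry *arp_lookup(uint32_t ip);          /* placeholder helpers that  */
    void arp_add(uint32_t ip, const uint8_t *mac);      /* stand in for the FreeBSD  */
    void arp_update(struct arp_entry *e, const uint8_t *mac);  /* llinfo_arp code    */
    void arp_send_pending(uint32_t ip);                 /* drain queued transmits    */

    void in_arpinput_sketch(const struct arp_pkt *ap, uint32_t my_ip)
    {
        struct arp_entry *ae;

        if (ap->op == ARPOP_REQUEST && ap->target_ip == my_ip) {
            /* Case 1: request for us. No reply (Microsoft TCP replies); just
             * create or update a cache entry for the likely future peer.           */
            arp_add(ap->sender_ip, ap->sender_mac);
        } else if (ap->op == ARPOP_REQUEST) {
            /* Case 2: request for someone else. Update an existing entry only.     */
            if ((ae = arp_lookup(ap->sender_ip)) != NULL)
                arp_update(ae, ap->sender_mac);
        } else if (ap->op == ARPOP_REPLY) {
            /* Case 3: reply directed to us. Add the entry, then transmit anything
             * that was queued waiting on this resolution.                          */
            arp_add(ap->sender_ip, ap->sender_mac);
            arp_send_pending(ap->sender_ip);
        }
    }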
- 4.5.3.2.2 Arpintr.
- This is the FreeBSD code that delivers an incoming ARP frame to in_arpinput. We will be calling in_arpinput directly from our ProtocolReceiveDPC routine (discussed in the NDIS section below) so this function is not needed.
- 4.5.3.2.3 Arpwhohas.
- This is a single line function that serves only as a wrapper around arprequest. We will remove it and replace all calls to it with direct calls to arprequest.
- 4.5.3.2.4 Arprequest.
- This code simply allocates a mbuf, fills it in with an ARP header, and then passes it down to the ethernet output routine to be transmitted. For us, the code remains essentially the same except for the obvious changes related to how we allocate a network buffer, and how we send the filled in request.
- 4.5.3.2.5 Arp_ifinit.
- This is simply called when an interface is initialized to broadcast a gratuitous ARP request (described in the interface initialization section) and to set some ARP related fields in the ifaddr structure for the interface. We will simply move this functionality into the interface initialization code and remove this function.
- 4.5.3.2.6 Arptimer.
- This is a timer-based function that is called every 5 minutes to walk through the ARP table looking for entries that have timed out. Although the time-out period for FreeBSD is 20 minutes, RFC 826 does not specify any timer requirements with regard to ARP, so we can modify this value or delete the timer altogether to suit our needs. Either way the function won't require any major changes. All other functions in if_ether.c will not require any major changes.
- 4.5.3.3 Route.
- On first thought, it might seem that we have no need for routing support since our ATCP driver will only receive IP datagrams whose destination IP address matches that of one of our own interfaces. Therefore, we will not “route” from one interface to another. Instead, the Microsoft TCP/IP driver will provide that service. We will, however, need to maintain an up-to-date routing table so that we know a) whether an outgoing connection belongs to one of our interfaces, b) to which interface it belongs, and c) what the first-hop IP address (gateway) is if the destination is not on the local network.
- We discuss four aspects on the subject of routing in this section. They are as follows:
- 1. The mechanics of how routing information is stored.
- 2. The manner in which routes are added or deleted from the route table.
- 3. When and how route information is retrieved from the route table.
- 4. Notification of route table changes to interested parties.
- 4.5.3.3.1 The Route Table.
- In FreeBSD, the route table is maintained using an algorithm known as PATRICIA (Practical Algorithm To Retrieve Information Coded in Alphanumeric). This is a complicated algorithm that is a bit costly to set up, but is very efficient to reference. Since the routing table should contain the same information for both NT and FreeBSD, and since the key used to search for an entry in the routing table will be the same for each (the destination IP address), we should be able to port the routing table software to NT without any major changes.
- The software which implements the route table (via the PATRICIA algorithm) is located in the FreeBSD file, radix.c. This file will be ported directly to the ATCP driver with no significant changes required.
- 4.5.3.3.2 Adding and Deleting Routes.
- Routes can be added or deleted in a number of different ways. The kernel adds or deletes routes when the state of an interface changes or when an ICMP redirect is received. User space programs such as the RIP daemon, or the route command also modify the route table.
- For kernel-based route changes, the changes can be made by a direct call to the routing software. The FreeBSD software that is responsible for the modification of route table entries is found in route.c. The primary routine for all route table changes is called rtrequest( ). It takes as its arguments the request type (ADD, RESOLVE, DELETE), the destination IP address for the route, the gateway for the route, the netmask for the route, the flags for the route, and a pointer to the route structure (struct rtentry) in which we will place the added or resolved route. Other routines in the route.c file include rtinit( ), which is called during interface initialization time to add a static route to the network, rtredirect( ), which is called by ICMP when we receive an ICMP redirect, and an assortment of support routines used for the modification of route table entries. All of these routines found in route.c will be ported with no major modifications.
- For user-space-based changes, we will have to be a bit more clever. In FreeBSD, route changes are sent down to the kernel from user-space applications via a special route socket. This code is found in the FreeBSD file, rtsock.c. Obviously this will not work for our ATCP driver. Instead the filter driver portion of our driver will intercept route changes destined for the Microsoft TCP driver and will apply those modifications to our own route table via the rtrequest routine described above. In order to do this, it will have to do some format translation to put the data into the format (sockaddr_in) expected by the rtrequest routine. Obviously, none of the code from rtsock.c will be ported to the ATCP driver. This same procedure will be used to intercept and process explicit ARP cache modifications.
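- As an illustration, an intercepted “route add” might be translated and applied roughly as follows. The rtrequest( ) argument order is the Net/3 interface described above; the decoded-request structure and the helper are our own assumptions, and error handling is omitted.

    /* Sketch: applying an intercepted "route add" to the ported FreeBSD route table. */
    #include <sys/socket.h>     /* AF_INET, struct sockaddr (ported FreeBSD headers)  */
    #include <netinet/in.h>     /* struct sockaddr_in, struct in_addr                 */
    #include <net/route.h>      /* rtrequest(), RTM_ADD, RTF_UP, RTF_GATEWAY          */
    #include <string.h>

    /* Hypothetical decoded form of a route change intercepted by the filter driver.  */
    struct atk_route_change {
        struct in_addr dest, mask, gateway;
    };

    static void atk_fill_sin(struct sockaddr_in *sin, struct in_addr addr)
    {
        memset(sin, 0, sizeof(*sin));
        sin->sin_len    = sizeof(*sin);
        sin->sin_family = AF_INET;
        sin->sin_addr   = addr;
    }

    int atk_apply_route_add(const struct atk_route_change *rc)
    {
        struct sockaddr_in dst, mask, gw;
        struct rtentry *rt = NULL;

        atk_fill_sin(&dst,  rc->dest);
        atk_fill_sin(&mask, rc->mask);
        atk_fill_sin(&gw,   rc->gateway);

        /* Same entry point the kernel-resident callers (rtinit, rtredirect) use.     */
        return rtrequest(RTM_ADD,
                         (struct sockaddr *)&dst, (struct sockaddr *)&gw,
                         (struct sockaddr *)&mask, RTF_UP | RTF_GATEWAY, &rt);
    }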
- 4.5.3.3.3 Consulting the Route Table.
- In FreeBSD, the route table is consulted in ip_output when an IP datagram is being sent. In order to avoid a complete route table search for every outgoing datagram, the route is stored into the in_pcb for the connection. For subsequent calls to ip_output, the route entry is then simply checked to ensure validity. While we will keep this basic operation as is, we will require a slight modification to allow us to coexist with the Microsoft TCP driver. When an active connection is being set up, our filter driver will have to determine whether the connection is going to be handled by one of the INIC interfaces. To do this, we will have to consult the route table from the filter driver portion of our driver. This is done via a call to the rtalloc1 function (found in route.c). If a valid route table entry is found, then we will take control of the connection and set a pointer to the rtentry structure returned by rtalloc1 in our in_pcb structure.
- 4.5.3.3.4 What to do When a Route Changes.
- When a route table entry changes, there may be connections that have pointers to a stale route table entry. These connections will need to be notified of the new route. FreeBSD solves this by checking the validity of a route entry during every call to ip_output. If the entry is no longer valid, its reference to the stale route table entry is removed, and an attempt is made to allocate a new route to the destination. For our slow path, this will work fine. Unfortunately, since our IP processing is handled by the INIC for our fast path, this sanity check method will not be sufficient. Instead, we will need to perform a review of all of our fast path connections during every route table modification. If the route table change affects our connection, we will need to advise the INIC with a new first-hop address, or if the destination is no longer reachable, close the connection entirely.
- 4.5.3.4 ICMP.
- Like the ARP code above, we will need to process certain types of incoming ICMP frames. Of the 10 possible ICMP message types, there are only three that we need to support. These include ICMP_REDIRECT, ICMP_UNREACH, and ICMP_SOURCEQUENCH. Any FreeBSD code to deal with other types of ICMP traffic will be removed. Instead, we will simply return NDIS_STATUS_NOT_ACCEPTED for all but the above ICMP frame types. This section describes how we will handle these ICMP frames.
- 4.5.3.4.1 ICMP_Redirect.
- Under FreeBSD, an ICMP_REDIRECT causes two things to occur. First, it causes the route table to be updated with the route given in the redirect. Second, it results in a call back to TCP to cause TCP to flush the route entry attached to its associated in_pcb structures. By doing this, it forces ip_output to search for a new route. As mentioned in the Route section above, we will also require a call to a routine which will review all of the TCP fast-path connections, and update the route entries as needed (in this case because the route entry has been zeroed). The INIC will then be notified of the route changes.
- 4.5.3.4.2 ICMP_Unreach.
- In both FreeBSD and Microsoft TCP, the ICMP_UNREACH results in no more than a simple statistic update. We will do the same.
- 4.5.3.4.3 ICMP_Sourcequench.
- A source quench is sent to cause a TCP sender to close its congestion window to a single segment, thereby putting the sender into slow-start mode. We will keep the FreeBSD code as-is for slow-path connections. For fast path connections we will send a notification to the card that the congestion window for the given connection has been reduced. The INIC will then be responsible for the slow-start algorithm.
- 4.5.3.5 IP.
- The FreeBSD IP code should require few modifications when porting to the ATCP driver. The few modifications that are required are discussed in this section.
- 4.5.3.5.1 IP Initialization.
- During initialization time, ip_init is called to initialize the array of protosw structures. These structures contain all the information needed by IP to be able to pass incoming data to the correct protocol above it. For example, when a UDP datagram arrives, IP locates the protosw entry corresponding to the UDP protocol type value (0x11) and calls the input routine specified in that protosw entry. We will keep the array of protosw structures intact, but since we are only handling the TCP and ICMP protocols above IP, we will strip the protosw array down substantially.
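- Conceptually, the result is a table along the lines of the sketch below. The protosw structure shown here is heavily abbreviated (the real FreeBSD structure carries several more function pointers and fields), so it is only meant to convey the idea of the stripped-down array.

    /* Heavily abbreviated sketch of the reduced protosw array.                      */
    #define IPPROTO_ICMP 1
    #define IPPROTO_TCP  6

    struct mbuf;                                   /* from the ported FreeBSD code   */
    typedef void (*pr_input_t)(struct mbuf *m, int hlen);

    struct protosw_sketch {        /* abbreviated: the real protosw has more members */
        int        pr_protocol;    /* IP protocol number                             */
        pr_input_t pr_input;       /* input routine located and called by IP input   */
    };

    void tcp_input(struct mbuf *m, int hlen);      /* ported FreeBSD entry points    */
    void icmp_input(struct mbuf *m, int hlen);

    static const struct protosw_sketch inetsw_sketch[] = {
        { IPPROTO_TCP,  tcp_input  },
        { IPPROTO_ICMP, icmp_input },
        /* UDP, IGMP, raw IP and the other entries of the original table are gone.   */
    };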
- 4.5.3.5.2 Input.
- Following are the changes required for IP input (function ipintr( )).
- 4.5.3.5.2.1 No IP Forwarding.
- Since we will only be handling datagrams for which we are the final destination, we should never be required to forward an IP datagram. All references to IP forwarding, and the ip_forward function itself, can be removed.
- 4.5.3.5.2.2 IP Options.
- The only options supported by FreeBSD at this time include record route, strict and loose source and record route, and timestamp. For the timestamp option, FreeBSD simply logs the current time into the IP header before it is forwarded. Since we will not be forwarding IP datagrams, this seems to be of little use to us. While FreeBSD supports the remaining options, NT essentially does nothing useful with them. For the moment, we will not bother dealing with IP options. They will be added in later if needed.
- 4.5.3.5.2.3 IP Reassembly.
- There is a small problem with the FreeBSD IP reassembly code. The reassembly code reuses the IP header portion of the IP datagram to contain IP reassembly queue information. It can do this because it no longer requires the original IP header. This is an absolute no-no with the NDIS 4.0 method of handling network packets. The NT DDK explicitly states that we must not modify packets given to us by NDIS. This is not the only place in which the FreeBSD code modifies the contents of a network buffer. It also does this when performing endian conversions. At the moment we will leave this code as is and violate the DDK rules. We believe we can do this because we are going to ensure that no other transport driver looks at these frames. If this becomes a problem we will have to modify this code substantially by moving the IP reassembly fields into the mbuf header.
- 4.5.3.5.3 IP Output.
- There are only two modifications required for IP output. The first is that since, for the moment, we are not dealing with IP options, there is no need for the code that inserts the IP options into the IP header. Second, we may discover that it is impossible for us to ever receive an output request that requires fragmentation. Since TCP performs Maximum Segment Size negotiation, we should theoretically never attempt to send a TCP segment larger than the MTU.
- 4.6 NDIS Protocol Driver.
- This section defines the protocol driver portion of the ATCP driver. The protocol driver portion of the ATCP driver is defined by the set of routines registered with NDIS via a call to NdisRegisterProtocol. These routines are limited to those that are called (indirectly) by the INIC miniport driver beneath us. For example, we register a ProtocolReceivePacket routine so that when the INIC driver calls NdisMIndicateReceivePacket it will result in a call from NDIS to our driver. Strictly speaking, the protocol driver portion of our driver does not include the method by which our driver calls down to the miniport (for example, the method by which we send network packets). Nevertheless, we will describe that method here for lack of a better place to put it. That said, we cover the following topics in this section of the document: 1) Initialization; 2) Receive; 3) Transmit; 4) Query/Set Information; 5) Status indications; 6) Reset; and 7) Halt.
- 4.6.1 Initialization.
- The protocol driver initialization occurs in two phases. The first phase occurs when the ATCP DriverEntry routine calls ATKProtoSetup. The ATKProtoSetup routine performs the following:
- 1. Allocate resources—We attempt to allocate many of the required resources as soon as possible so that we are more likely to get the memory we want. This mostly applies to allocating and initializing our mbuf and mbuf header pools.
- 2. Register Protocol—We call NdisRegisterProtocol to register our set of protocol driver routines.
- 3. Locate and initialize bound NICs—We read the Linkage parameters of the registry to determine which NIC devices we are bound to. For each of these devices we allocate and initialize an IFACE structure (defined above). We then read the TCP parameters out of the registry for each bound device and set the corresponding fields in the IFACE structure.
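- A sketch of the registration performed in step 2 above is shown below. Only a few of the NDIS 4.0 protocol characteristics fields are filled in; the handler names correspond to the routines discussed elsewhere in this document, the protocol name string is an assumption, and error handling plus the remaining mandatory handlers are omitted.

    /* Sketch of the NdisRegisterProtocol call made from ATKProtoSetup (step 2).     */
    #include <ndis.h>

    /* Handlers discussed elsewhere in this document.                                */
    INT  ATKReceivePacket(NDIS_HANDLE ctx, PNDIS_PACKET pkt);
    VOID ATKSendComplete(NDIS_HANDLE ctx, PNDIS_PACKET pkt, NDIS_STATUS status);
    VOID ATKBindAdapter(PNDIS_STATUS status, NDIS_HANDLE bindCtx,
                        PNDIS_STRING devName, PVOID ss1, PVOID ss2);
    VOID ATKUnbindAdapter(PNDIS_STATUS status, NDIS_HANDLE ctx, NDIS_HANDLE unbindCtx);

    NDIS_HANDLE AtkProtocolHandle;

    VOID AtkRegisterProtocolSketch(VOID)
    {
        NDIS_PROTOCOL_CHARACTERISTICS pc;
        NDIS_STATUS status;
        NDIS_STRING name = NDIS_STRING_CONST("ATCP");   /* protocol name assumed     */

        NdisZeroMemory(&pc, sizeof(pc));
        pc.MajorNdisVersion     = 4;
        pc.MinorNdisVersion     = 0;
        pc.Name                 = name;
        pc.ReceivePacketHandler = ATKReceivePacket;     /* packet-based receive path */
        pc.SendCompleteHandler  = ATKSendComplete;      /* completes NdisSendPackets */
        pc.BindAdapterHandler   = ATKBindAdapter;       /* called once per bound NIC */
        pc.UnbindAdapterHandler = ATKUnbindAdapter;
        /* OpenAdapterComplete, Status, Reset and the other handlers are omitted.    */

        NdisRegisterProtocol(&status, &AtkProtocolHandle, &pc, sizeof(pc));
    }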
- After the underlying INIC devices have completed their initialization, NDIS will call our driver's ATKBindAdapter function for each underlying device. It will perform the following:
- 1. Open the device specified in the call to ATKBindAdapter.
- 2. Find the IFACE structure that was created in ATKProtoSetup for this device.
- 3. Query the miniport for adapter information. This includes such things as link speed and MAC address. Save relevant information in the IFACE structure.
- 4. Perform the interface initialization as specified in section 4.5.3.1 Interface initialization.
- 4.6.2 Receive.
- Receive is handled by the protocol driver routine ATKReceivePacket. Before we describe this routine, it is important to consider each possible receive type and how it will be handled.
- 4.6.2.1 Receive Overview.
- Our INIC miniport driver will be bound to our transport driver as well as the generic Microsoft TCP driver (and possibly others). The ATCP driver will be bound exclusively to INIC devices, while the Microsoft TCP driver will be bound to INIC devices as well as other types of NICs. This is illustrated in FIG. 23. By binding the driver in this fashion, we can choose to direct incoming network data to our own ATCP transport driver, the Microsoft TCP driver, or both. We do this by playing with the ethernet “type” field as follows.
- To NDIS and the transport drivers above it, our card is going to be registered as a normal ethernet card. When a transport driver receives a packet from our driver, it will expect the data to start with an ethernet header, and consequently expects the protocol type field to be at byte offset 12. If Microsoft TCP finds that the protocol type field is not equal to either IP or ARP, it will not accept the packet. So, to deliver an incoming packet to our driver only, we simply map the data such that byte 12 contains an unrecognized ethernet type field. Note that we must choose a value that is greater than 1500 so that the transport drivers do not confuse it with an 802.3 frame. We must also choose a value that will not be accepted by other transport drivers such as AppleTalk or IPX. Similarly, if we want to direct the data to Microsoft TCP, we can simply leave the ethernet type field set to IP (or ARP). Note that since we will also see these frames we can choose to accept or not accept them as necessary. Incoming packets are delivered as follows:
- A. Packets Delivered to ATCP only (Not Accepted by MSTCP):
- 1. All TCP packets destined for one of our IP addresses. This includes both slow-path frames and fast-path frames. In the slow-path case, the TCP frames are given in their entirety (headers included). In the fast-path case, the ATKReceivePacket is given a header buffer that contains status information and data with no headers (except those above TCP). More on this later.
- B. Packets Delivered to Microsoft TCP only (Not Accepted by ATCP):
- 1. All non-TCP packets.
- 2. All packets that are not destined for one of our interfaces (packets that will be routed). Note that this means packets whose destination IP address is not local to the system at all. Continuing the above example, if there is an IP address 144.48.252.4 associated with the 3Com interface, and we receive on the INIC a TCP connect with a destination IP address of 144.48.252.4, we will actually want to send that request up to the ATCP driver so that we can create a fast-path connection for it, since the traffic for that connection arrives through the INIC. This means that we will need to know every IP address in the system and filter frames based on the destination IP address in a given TCP datagram. This can be done in the INIC miniport driver. Since it will be the ATCP driver that learns of dynamic IP address changes in the system, we will need a method to notify the INIC miniport of all the IP addresses in the system. More on this later.
- C. Packets Delivered to Both:
- 1. All ARP frames.
- 2. All ICMP frames.
- 4.6.2.2 Two Types of Receive Packets.
- There are several circumstances in which the INIC will need to indicate extra information about a receive packet to the ATCP driver. One such example is a fast path receive in which the ATCP driver will need to be notified of how much data the card has buffered. To accomplish this, the first (and sometimes only) buffer in a received packet will actually be an INIC header buffer. The header buffer contains status information about the receive packet, and may or may not contain network data as well. The ATCP driver will recognize a header buffer by mapping it to an ethernet frame and inspecting the type field found in byte 12. We will indicate all TCP frames destined for us in this fashion, while frames that are destined for both our driver and the Microsoft TCP driver (ARP, ICMP) will be indicated without a header buffer. FIG. 24 shows an example of an incoming TCP packet. FIG. 25 shows an example of an incoming ARP frame.
- 4.6.2.3 NDIS 4 ProtocolReceivePacket Operation.
- The DDK specifies that when a protocol driver chooses to keep a packet, it should return a value of 1 (or more) to NDIS in its ProtocolReceivePacket routine. The packet is then later returned to NDIS via the call to NdisReturnPackets. This can only happen after the ProtocolReceivePacket has returned control to NDIS. This requires that the call to NdisReturnPackets must occur in a different execution context. We can accomplish this by scheduling a DPC, scheduling a system thread, or scheduling a kernel thread of our own. For brevity in this section, we will assume it is done through a DPC. In any case, we will require a queue of pending receive buffers on which to place and fetch receive packets.
- After a receive packet is dequeued by the DPC it is then either passed to TCP directly for fast-path processing, or it is sent through the FreeBSD path for slow-path processing. Note that in the case of slow-path processing, we may be working on data that needs to be returned to NDIS (TCP data) or we may be working on our own copy of the data (ARP and ICMP). When we finish with the data we will need to figure out whether or not to return the data to NDIS. This will be done via fields in the mbuf header used to map the data. When the mfreem routine is called to free a chain of mbufs, the fields in the mbuf will be checked and, if required, the packet descriptor pointed to by the mbuf will be returned to NDIS.
- 4.6.2.4 MBUF←→Packet Mapping.
- As noted in the section on mbufs above, we will map incoming data to mbufs so that our FreeBSD port requires fewer modifications. Depending on the type of data received, this mapping will appear differently. Here are some examples:
- In FIG. 26A, we show incoming data for a TCP fast-path connection. In this example, the TCP data is fully contained in the header buffer. The header buffer is mapped by the mbuf and sent upstream for fast-path TCP processing. In this case it is required that the header buffer be mapped and sent upstream because the fast-path TCP code will need information contained in the header buffer in order to perform the processing. When the mbuf in this example is freed, the mfreem routine will determine that the mbuf maps a packet that is owned by NDIS and will then free the mbuf header only and call NdisReturnPackets to free the data.
- In FIG. 26B, we show incoming data for a TCP slow-path connection. In this example the mbuf points to the start of the TCP data directly instead of the header buffer. Since this buffer will be sent up for slow-path FreeBSD processing, we can not have the mbuf pointing to a header buffer (FreeBSD would get awfully confused). Again, when mfreem is called to free the mbuf, it will discover the mapped packet, free the mbuf header, and call NDIS to free the packet and return the underlying buffers. Note that even though we do not directly map the header buffer with the mbuf we do not lose it because of the link from the packet descriptor. Note also that we could alternatively have the INIC miniport driver only pass us the TCP data buffer when it receives a slow-path receive. This would work fine except that we have determined that even in the case of slow-path connections we are going to attempt to offer some assistance to the host TCP driver (most likely by checksum processing only). In this case there may be some special fields that we need to pass up to the ATCP driver from the INIC driver. Leaving the header buffer connected seems the most logical way to do this.
- Finally, in FIG. 26C, we show a received ARP frame. Recall that for incoming ARP and ICMP frames we are going to copy the incoming data out of the packet and return it directly to NDIS. In this case the mbuf simply points to our data, with no corresponding packet descriptor. When we free this mbuf, mfreem will discover this and free not only the mbuf header, but the data as well.
- 4.6.2.5 Other Receive Packets.
- We use this receive mechanism for other purposes besides the reception of network data. It is also used as a method of communication between the ATCP driver and the INIC. One such example is a TCP context flush from the INIC. When the INIC determines, for whatever reason, that it can no longer manage a TCP connection, it must flush that connection to the ATCP driver. It will do this by filling in a header buffer with appropriate status and delivering it to the INIC driver. The INIC driver will in turn deliver it to the protocol driver, which will treat it essentially like a fast-path TCP connection by mapping the header buffer with an mbuf header and delivering it to TCP for fast-path processing. There are two advantages to communicating in this manner. First, it is already an established path, so no extra coding or testing is required. Second, since a context flush comes in through the same path as received frames, it will prevent us from getting a slow-path frame before the context has been flushed.
- 4.6.2.6 Summary
- Having covered all of the various types of receive data, following are the steps that are taken by the ATKProtocolReceivePacket routine.
- 1. Map incoming data to an ethernet frame and check the type field;
- 2. If the type field contains our custom INIC type then it should be TCP;
- 3. If the header buffer specifies a fast-path connection, allocate one or more mbuf headers to map the header and possibly data buffers. Set the packet descriptor field of the mbuf to point to the packet descriptor, set the mbuf flags appropriately, queue the mbuf, and return 1;
- 4. If the header buffer specifies a slow-path connection, allocate a single mbuf header to map the network data, set the mbuf fields to map the packet, queue the mbuf, and return 1. Note that we design the INIC such that we will never get a TCP segment split across more than one buffer;
- 5. If the type field of the frame indicates ARP or ICMP;
- 6. Allocate a mbuf with a data buffer. Copy the contents of the packet into the mbuf. Queue the mbuf, and return 0 (not accepted); and
- 7. If the type field is not either the INIC type, ARP, or ICMP, we don't want it. Return 0.
- The receive processing will continue when the mbufs are dequeued. At the moment this is done by a routine called ATKProtocolReceiveDPC. It will do the following:
- 1. Dequeue a mbuf from the queue; and
- 2. Inspect the mbuf flags. If the mbuf is meant for fast-path TCP, it will call the fast-path routine directly. Otherwise it will call the ethernet input routine for slow-path processing.
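- Taken together, the summary above reduces to roughly the following sketch. The custom ethernet type value, the queueing and mapping helpers, and the ICMP test are illustrative assumptions; the return values (1 to retain a packet, 0 otherwise) follow the NDIS rules described earlier.

    /* Sketch of the ATKProtocolReceivePacket decision logic and the receive DPC.    */
    #include <ndis.h>

    #define ATK_ETH_TYPE_INIC 0x8888  /* assumed custom type: > 1500, unused by others */
    #define ETH_TYPE_IP       0x0800
    #define ETH_TYPE_ARP      0x0806

    struct atk_mbuf;                                     /* see the mbuf sketch above */
    USHORT  AtkEtherType(PNDIS_PACKET pkt);              /* read bytes 12-13 (assumed)*/
    BOOLEAN AtkIsIcmp(PNDIS_PACKET pkt);                 /* IP protocol == ICMP?      */
    struct atk_mbuf *AtkMapPacket(PNDIS_PACKET pkt);     /* map the NDIS packet       */
    struct atk_mbuf *AtkCopyPacket(PNDIS_PACKET pkt);    /* copy into our own buffer  */
    VOID AtkQueueForDpc(struct atk_mbuf *m);
    struct atk_mbuf *AtkDequeue(VOID);
    BOOLEAN AtkMbufIsFastPath(struct atk_mbuf *m);
    VOID AtkTcpFastPathInput(struct atk_mbuf *m);
    VOID AtkEtherInput(struct atk_mbuf *m);              /* slow-path FreeBSD input   */

    INT ATKReceivePacket(NDIS_HANDLE bindingCtx, PNDIS_PACKET pkt)
    {
        USHORT type = AtkEtherType(pkt);

        if (type == ATK_ETH_TYPE_INIC) {
            /* TCP for us (fast- or slow-path, per the INIC header buffer). Keep the
             * packet; it goes back to NDIS when the mapping mbuf is freed.           */
            AtkQueueForDpc(AtkMapPacket(pkt));
            return 1;                                    /* packet retained           */
        }
        if (type == ETH_TYPE_ARP || (type == ETH_TYPE_IP && AtkIsIcmp(pkt))) {
            /* Shared with Microsoft TCP: copy the data, give the packet back now.    */
            AtkQueueForDpc(AtkCopyPacket(pkt));
            return 0;
        }
        return 0;                                        /* not ours at all           */
    }

    VOID ATKProtocolReceiveDPC(PKDPC dpc, PVOID ctx, PVOID arg1, PVOID arg2)
    {
        struct atk_mbuf *m;
        while ((m = AtkDequeue()) != NULL) {
            if (AtkMbufIsFastPath(m))
                AtkTcpFastPathInput(m);                  /* fast-path TCP processing  */
            else
                AtkEtherInput(m);                        /* slow-path FreeBSD path    */
        }
    }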
- 4.6.3 Transmit.
- In this section we discuss the ATCP transmit path.
- 4.6.3.1 NDIS 4 Send Operation.
- The NDIS 4 send operation works as follows. When a transport/protocol driver wishes to send one or more packets down to an NDIS 4 miniport driver, it calls NdisSendPackets with an array of packet descriptors to send. As soon as this routine is called, the transport/protocol driver relinquishes ownership of the packets until they are returned, one by one in any order, via an NDIS call to the ProtocolSendComplete routine. Since this routine is called asynchronously, our ATCP driver must save any required context into the packet descriptor header so that the appropriate resources can be freed. This is discussed further in the following sections.
- 4.6.3.2 Types of “Sends”.
- Like the Receive path described above, the transmit path is used not only to send network data, but is also used as a communication mechanism between the host and the INIC. Here are some examples of the types of sends performed by the ATCP driver.
- 4.6.3.2.1 Fast-Path TCP Send.
- When the ATCP driver receives a transmit request with an associated MDL, it will package up the MDL physical addresses into a command buffer, map the command buffer with a buffer and packet descriptor, and call NdisSendPackets with the corresponding packet. The underlying INIC driver will issue the command buffer to the INIC. When the corresponding response buffer is given back to the host, the INIC miniport will call NdisMSendComplete which will result in a call to the ATCP ProtocolSendComplete (ATKSendComplete) routine, at which point the resources associated with the send can be freed. We will allocate and use a mbuf to hold the command buffer. By doing this we can store the context necessary in order to clean up after the send completes. This context includes a pointer to the MDL and presumably some other connection context as well. The other advantage to using a mbuf to hold the command buffer is that it eliminates having another special set of code to allocate and return command buffer. We will store a pointer to the mbuf in the reserved section of the packet descriptor so we can locate it when the send is complete. FIG. 27 illustrates the relationship between the client's MDL, the command buffer, and the buffer and packet descriptors.
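- The sequence might look like the sketch below. The command-buffer construction helper, the pools, and the use of the packet descriptor's ProtocolReserved area to remember the mbuf are assumptions; NdisAllocatePacket, NdisAllocateBuffer, NdisChainBufferAtFront and NdisSendPackets are the standard NDIS 4.0 calls, and error handling is omitted.

    /* Sketch of issuing a fast-path transmit command buffer to the INIC.            */
    #include <ndis.h>

    struct atk_mbuf;                                     /* holds the command buffer  */
    extern NDIS_HANDLE AtkPacketPool, AtkBufferPool;     /* allocated at init time    */

    struct atk_mbuf *AtkBuildXmtCommand(PMDL mdl, PVOID connCtx); /* packs the MDL's
                                         * physical addresses and connection context
                                         * into a command buffer (assumed helper)     */
    PVOID AtkMbufData(struct atk_mbuf *m);
    ULONG AtkMbufLen(struct atk_mbuf *m);

    VOID AtkFastPathSendSketch(NDIS_HANDLE binding, PMDL mdl, PVOID connCtx)
    {
        NDIS_STATUS  status;
        PNDIS_PACKET pkt;
        PNDIS_BUFFER buf;
        struct atk_mbuf *m = AtkBuildXmtCommand(mdl, connCtx);

        NdisAllocatePacket(&status, &pkt, AtkPacketPool);
        NdisAllocateBuffer(&status, &buf, AtkBufferPool, AtkMbufData(m), AtkMbufLen(m));
        NdisChainBufferAtFront(pkt, buf);

        /* Remember the mbuf so ATKSendComplete can free everything later.           */
        *(struct atk_mbuf **)pkt->ProtocolReserved = m;

        NdisSendPackets(binding, &pkt, 1);
    }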
- 4.6.3.2.2 Fast-Path TCP Receive.
- As described in section 4.3.1 above, the receive process typically occurs in two phases. First the INIC fills in a host receive buffer with a relatively small amount of data, but notifies the host of a large amount of pending data (either through a large amount of buffered data on the card, or through a large amount of expected NetBios data). This small amount of data is delivered to the client through the TDI interface. The client will then respond with a MDL in which the data should be placed. Like the Fast-path TCP send process, the receive portion of the ATCP driver will then fill in a command buffer with the MDL information from the client, map the buffer with packet and buffer descriptors and send it to the INIC via a call to NdisSendPackets. Again, when the response buffer is returned to the INIC miniport, the ATKSendComplete routine will be called and the receive will complete. This relationship between the MDL, command buffer and buffer and packet descriptors are the same as shown in the Fast-path send section above.
- 4.6.3.2.3 Slow-Path (FREEBSD).
- Slow-path sends pass through the FreeBSD stack until the ethernet header is prepended in ether_output and the packet is ready to be sent. At this point a command buffer will be filled in with pointers to the ethernet frame, the command buffer will be mapped with a packet and buffer descriptor, and NdisSendPackets will be called to hand the packet off to the miniport. FIG. 28 shows the relationship between the mbufs, command buffer, and buffer and packet descriptors. Since we will use an mbuf to map the command buffer, we can simply link the data mbufs directly off of the command buffer mbuf. This will make the freeing of resources much simpler.
- 4.6.3.2.4 Non-Data Command Buffer.
- The transmit path is also used to send non-data commands to the card. As shown in FIG. 29, for example, the ATCP driver gives a context to the INIC by filling in a command buffer, mapping it with a packet and buffer descriptor, and calling NdisSendPackets.
- 4.6.3.3 ATKProtocolSendComplete.
- Given the above different types of sends, the ATKProtocolSendComplete routine will perform various types of actions when it is called from NDIS. First it must examine the reserved area of the packet descriptor to determine what type of request has completed. In the case of a slow-path completion, it can simply free the mbufs, command buffer, and descriptors and return. In the case of a fast-path completion, it will need to notify the TCP fast path routines of the completion so TCP can in turn complete the client's IRP. Similarly, when a non-data command buffer completes, TCP will again be notified that the command sent to the INIC has completed.
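- The dispatch might be shaped as follows; the request-type tag kept in the mbuf and the completion helpers are assumptions.

    /* Sketch of ATKProtocolSendComplete (ATKSendComplete) dispatching on request type. */
    #include <ndis.h>

    struct atk_mbuf;
    enum atk_send_type { ATK_SEND_SLOWPATH, ATK_SEND_FASTPATH, ATK_SEND_COMMAND };

    enum atk_send_type AtkMbufSendType(struct atk_mbuf *m);        /* assumed helpers */
    VOID AtkFreeSendResources(struct atk_mbuf *m, PNDIS_PACKET pkt);
    VOID AtkTcpFastPathSendDone(struct atk_mbuf *m, NDIS_STATUS status);
    VOID AtkTcpCommandDone(struct atk_mbuf *m, NDIS_STATUS status);

    VOID ATKSendComplete(NDIS_HANDLE binding, PNDIS_PACKET pkt, NDIS_STATUS status)
    {
        struct atk_mbuf *m = *(struct atk_mbuf **)pkt->ProtocolReserved;

        switch (AtkMbufSendType(m)) {
        case ATK_SEND_SLOWPATH:
            /* Nothing to notify: just free the mbufs, command buffer, descriptors.  */
            break;
        case ATK_SEND_FASTPATH:
            /* Tell fast-path TCP so it can complete the client's IRP.               */
            AtkTcpFastPathSendDone(m, status);
            break;
        case ATK_SEND_COMMAND:
            /* A non-data command sent to the INIC has completed.                    */
            AtkTcpCommandDone(m, status);
            break;
        }
        AtkFreeSendResources(m, pkt);
    }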
- 4.7 TDI Filter Driver.
- In a first embodiment of the product, the INIC handles only simple-case data transfer operations on a TCP connection. (These of course constitute the large majority of CPU cycles consumed by TCP processing in a conventional driver.)
- There are many other complexities of the TCP protocol which must still be handled by host driver software: connection setup and breakdown, out-of-order data, nonstandard flags, etc.
- The NT OS contains a fully functional TCP/IP driver, and one solution would be to enhance this so that it is able to detect our INIC and take advantage of it by “handing off” data-path processing where appropriate.
- Unfortunately, we do not have access to NT source, let alone permission to modify NT. Thus the solution above, while a goal, cannot be done immediately. We instead provide our own custom driver software on the host for those parts of TCP processing which are not handled by the INIC.
- This presents a challenge. The NT network driver framework does make provision for multiple types of protocol driver: but it does not easily allow for multiple instances of drivers handling the SAME protocol.
- For example, there are no “hooks” into the Microsoft TCP/IP driver which would allow for routing of IP packets between our driver (handling our INICs) and the Microsoft driver (handling other NICs).
- Our approach to this is to retain the Microsoft driver for all non-TCP network processing (even for traffic on our INICs), but to invisibly “steal” TCP traffic on our connections and handle it via our own (BSD-derived) driver. The Microsoft TCP/IP driver is unaware of TCP connections on interfaces we handle.
- The network “bottom end” of this artifice is described earlier in the document. In this section we will discuss the “top end”: the TDI interface to higher-level NT network client software.
- We make use of an NT facility called a filter driver. NT allows a special type of driver (“filter driver”) to attach itself “on top” of another driver in the system. The NT I/O manager then arranges that all requests directed to the attached driver are sent first to the filter driver; this arrangement is invisible to the rest of the system.
- The filter driver may then either handle these requests itself, or pass them down to the underlying driver it is attached to. Provided the filter driver completely replicates the (externally visible) behavior of the underlying driver when it handles requests itself, the existence of the filter driver is invisible to higher-level software.
- The filter driver attaches itself on top of the Microsoft TCP/IP driver; this gives us the basic mechanism whereby we can intercept requests for TCP operations and handle them in our driver instead of the Microsoft driver.
- However, while the filter driver concept gives us a framework for what we want to achieve, there are some significant technical problems to be solved. The basic issue is that setting up a TCP connection involves a sequence of several requests from higher-level software, and it is not always possible to tell, for requests early in this sequence, whether the connection should be handled by our driver or by the Microsoft driver.
- Thus for many requests, we store information about the request in case we need it later, but also allow the request to be passed down to the Microsoft TCP/IP driver in case the connection ultimately turns out to be one which that driver should handle.
- Let us look at this in more detail, which will involve some examination of the TDI interface: the NT interface into the top end of NT network protocol drivers. Higher-level TDI client software which requires services from a protocol driver proceeds by creating various types of NT FILE_OBJECTs, and then making various DEVICE_IO_CONTROL requests on these FILE_OBJECTs.
- There are two types of FILE_OBJECT of interest here: ADDRESS objects, which represent local IP addresses, and CONNECTION objects, which represent TCP connections. The steps involved in setting up a TCP connection (from the “active” client side, for a CONNECTION object) are:
- 1) Create an ADDRESS object; 2) Create a CONNECTION object; 3) Issue a TDI_ASSOCIATE_ADDRESS io-control to associate the CONNECTION object with the ADDRESS object; and 4) Issue a TDI_CONNECT io-control on the CONNECTION object, specifying the remote address and port for the connection.
- Initial thoughts were that handling this would be straightforward: we would tell, on the basis of the address given when creating the ADDRESS object, whether the connection is for one of our interfaces or not. After which, it would be easy to arrange for handling entirely by our code, or entirely by the Microsoft code: we would simply examine the ADDRESS object to see if it was “one of ours” or not.
- There are two main difficulties, however. First, when the CONNECTION object is created, no address is specified: it acquires a local address only later when the TDI_ASSOCIATE_ADDRESS is done. Also, when a CONNECTION object is created, the caller supplies an opaque “context cookie” which will be needed for later communications with that caller. Storage of this cookie is the responsibility of the protocol driver: it is not directly derivable just by examination of the CONNECTION object itself. If we simply passed the “create” call down to the Microsoft TCP/IP driver, we would have no way of obtaining this cookie later if it turns out that we need to handle the connection. Therefore, for every CONNECTION object which is created we allocate a structure to keep track of information about it, and store this structure in a hash table keyed by the address of the CONNECTION object itself, so that we can locate it if we later need to process requests on this object. We refer to this as a “shadow” object: it replicates information about the object stored in the Microsoft driver. (We must, of course, also pass the create request down to the Microsoft driver too, to allow it to set up its own administrative information about the object.)
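- The shadow bookkeeping can be as simple as the sketch below. The structure contents and the hash function are assumptions; the key, as described above, is the address of the CONNECTION FILE_OBJECT itself.

    /* Sketch of the "shadow" CONNECTION object table keyed by FILE_OBJECT address.  */
    #include <ntddk.h>

    #define ATK_SHADOW_HASH_SIZE 256

    struct _ATK_SHADOW_ADDR;                      /* shadow ADDRESS object, not shown */

    typedef struct _ATK_SHADOW_CONN {
        struct _ATK_SHADOW_CONN *Next;            /* hash chain                       */
        PFILE_OBJECT             FileObject;      /* the CONNECTION object (the key)  */
        PVOID                    ContextCookie;   /* caller's opaque cookie           */
        struct _ATK_SHADOW_ADDR *Address;         /* set by TDI_ASSOCIATE_ADDRESS     */
        BOOLEAN                  OursForFastPath; /* marked at TDI_CONNECT time       */
    } ATK_SHADOW_CONN, *PATK_SHADOW_CONN;

    static PATK_SHADOW_CONN AtkShadowTable[ATK_SHADOW_HASH_SIZE];

    static ULONG AtkShadowHash(PFILE_OBJECT fo)
    {
        /* The pointer value itself is a perfectly good key.                          */
        return (ULONG)(((ULONG_PTR)fo >> 4) % ATK_SHADOW_HASH_SIZE);
    }

    PATK_SHADOW_CONN AtkShadowLookup(PFILE_OBJECT fo)
    {
        PATK_SHADOW_CONN sc = AtkShadowTable[AtkShadowHash(fo)];
        while (sc != NULL && sc->FileObject != fo)
            sc = sc->Next;
        return sc;
    }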
- A second major difficulty arises with ADDRESS objects. These are often created with the TCP/IP “wildcard” address (all zeros); the actual local address is assigned only later during connection setup (by the protocol driver itself). Of course, a “wildcard” address does not allow us to determine whether connections that will be associated with this ADDRESS object should be handled by our driver or by the Microsoft one. Also, as with CONNECTION objects, there is “opaque” data associated with ADDRESS objects that cannot be derived just from examination of the object itself. (In this case, the addresses of callback functions set on the object by TDI_SET_EVENT io-controls.)
- Thus, as in the CONNECTION object case, we create a “shadow” object for each ADDRESS object which is created with a wildcard address. In this we store information (principally addresses of callback functions) which we will need if we are handling connections on CONNECTION objects associated with this ADDRESS object. We store similar information, of course, for any ADDRESS object which is explicitly for one of our interface addresses; in this case we don't need to also pass the create request down to the Microsoft driver.
- With this concept of “shadow” objects in place, let us revisit the steps involved in setting up a connection, and look at the processing required in our driver.
- First, the TDI client makes a call to create the ADDRESS object. Assuming that this is a “wildcard” address, we create a “shadow” object before passing the call down to the Microsoft driver.
- The next step (omitted in the earlier list for brevity) is normally that the client makes a number of TDI_SET_EVENT io-control calls to associate various callback functions with the ADDRESS object. These are functions that should be called to notify the TDI client when certain events (such as arrival of data or disconnection requests, etc.) occur. We store these callback function pointers in our “shadow” address object, before passing the call down to the Microsoft driver.
- Next, the TDI client makes a call to create a CONNECTION object. Again, we create our “shadow” of this object.
- Next, the client issues the TDI_ASSOCIATE_ADDRESS io-control to bind the CONNECTION object to the ADDRESS object. We note the association in our “shadow” objects, and also pass the call down to the Microsoft driver.
- Finally the TDI client issues a TDI_CONNECT io-control on the CONNECTION object, specifying the remote IP address (and port) for the desired connection. At this point, we examine our routing tables to determine if this connection should be handled by one of our interfaces, or by some other NIC. If it is ours, we mark the CONNECTION object as “one of ours” for future reference (using an opaque field which NT FILE_OBJECTS provide for driver use.) We then proceed with connection setup and handling in our driver, using information stored in our “shadow” objects. The Microsoft driver does not see the connection request or any subsequent traffic on the connection.
- If the connection request is NOT for one of our interfaces, we pass it down to the Microsoft driver. Note carefully, however, that we can not simply discard our “shadow” objects at this point. The TDI interface allows re-use of CONNECTION objects: on termination of a connection, it is legal for the TDI client to dissociate the CONNECTION object from its current ADDRESS object, re-associate it with another, and use it for another connection. Thus our “shadow” objects must be retained for the lifetime of the NT FILE_OBJECTs: the subsequent connection could turn out to be via one of our interfaces.
- 4.7.1 Timers.
- 4.7.1.1 Keepalive Timer.
- We don't want to implement keepalive timers on the INIC. It would in any case be a very poor use of resources to have an INIC context sitting idle for two hours.
- 4.7.1.2 Idle Timer.
- We will keep an idle timer in the ATCP driver for connections that are managed by the INIC (resetting it whenever we see activity on the connection), and cause a flush of context back to the host if this timer expires. We may want to make the threshold substantially lower than 2 hours, to reclaim INIC context slots for useful work sooner. We may also want to make that dependent on the number of contexts that have actually been handed out: we don't need to reclaim them if we haven't handed out the maximum.
- This section provides a general description of the design of the microcode that will execute on two of the sequencers of the Protocol Processor on the INIC. The overall philosophy of the INIC is discussed in other sections. This section will discuss the INIC microcode in detail.
- 5.1 Design Overview.
- As specified in other sections, the INIC supplies a set of 3 custom processors that will provide considerable hardware-assist to the microcode running thereon. The paragraphs immediately following list the main hardware-assist features:
- 1) Header processing with specialized DMA engines to validate an input header and generate a context hash, move the header into fast memory and do header comparisons on a DRAM-based TCP control block;
- 2) DRAM fifos for free buffer queues (large & small), receive-frame queues, event queues etc.;
- 3) Header compare logic;
- 4) Checksum generation;
- 5) Multiple register contexts with register access controlled by simply setting a context register. The Protocol Processor will provide 512 SRAM-based registers to be shared among the 3 sequencers;
- 6) Automatic movement of input frames into DRAM buffers from the MAC Fifos;
- 7) Run receive processing on one sequencer and transmit processing on the other. This was chosen as opposed to letting both sequencers run receive and transmit. One of the main reasons for this is that the header-processing hardware can not be shared and interlocks would be needed to do this. Another reason is that interlocks would be needed on the resources used exclusively by receive and by transmit;
- 8) The INIC will support up to 256 TCP connections (TCBs). A TCB is associated with an input frame when the frame's source and destination IP addresses and source and destination ports match those of the TCB. For speed of access, the TCBs will be maintained in a hash table in NIC DRAM to save sequential searching. There will, however, be an index in hash order in SRAM. Once a hash has been generated, the TCB will be cached in SRAM. There will be up to 8 cached TCBs in SRAM. These cache locations can be shared between both sequencers so that the sequencer with the heavier load will be able to use more cache buffers. There will also be 8 header buffers to be shared between the sequencers. Note that each header buffer is not statically linked to a specific TCB buffer. In fact the link is dynamic on a per-frame basis. The need for this dynamic linking will be explained in later sections. Suffice to say here that if there is a free header buffer, then somewhere there is also a free TCB SRAM buffer (a C model of this lookup follows this list);
- 9) There were 2 basic implementation options considered here. The first was single-stack and the second was a process model. The process model was chosen here because the custom processor design is providing zero-cost overhead for context switching through the use of a context base register, and because there will be more than enough process slots (or contexts) available for the peak load. It is also expected that all “local” variables will be held permanently in registers whilst an event is being processed;
- 10) The features that provide this are 256 of the 512 SRAM-based registers that will be used for the register contexts. This can be divided up into 16 contexts (or processes) of 16 registers each. Then 8 of these will be reserved for receive and 8 for transmit. A Little's Law analysis has shown that in order to support 512 byte frames at maximum arrival rate of 4*100 Mbits, requires more than 8 jobs to be in process in the NIC. However each job requires an SRAM buffer for a TCB context and at present, there are only 8 of these currently specified due to SRAM space limits. So more contexts (e.g. 32*8 regs each) do not seem worthwhile. Refer to the section entitled “LOAD CALCULATIONS” for more details of this analysis. A context switch simply involves reloading the context base register based on the context to be restarted, and jumping to the appropriate address for resumption;
- 11) To better support the process model chosen, the code will lock an active TCB into an SRAM buffer while either sequencer is operating on it. This implies there will be no swapping to and from DRAM of a TCB once it is in SRAM and an operation is started on it. More specifically, the TCB will not be swapped after requesting that a DMA be performed for it. Instead, the system will switch to another active “process”. Then it will resume the former process at the point directly after where the DMA was requested. This constitutes a zero-cost switch as mentioned above;
- 12) Individual TCB state machines will be run from within a “process”. There will be a state machine for the receive side and one for the transmit side. The current TCB states will be stored in the SRAM TCB index table entry;
- 13) The INIC will have 16 MB of DRAM. The current specification calls for dividing a large portion of this into 2K buffers and control allocation/deallocation of these buffers through one of the DRAM fifos mentioned above. These fifos will also be used to control small host buffers, large host buffers, command buffers and command response buffers;
- 14) For events from one sequencer to the other (i.e. RCV←→XMT), the current specification calls for using simple SRAM CIO buffers, one for each direction;
- 15) Each sequencer handles its own timers independently of the others;
- 16) Contexts will be passed to the INIC through the Transmit command and response buffers. INIC-initiated TCB releases will be handled through the Receive small buffers. Host-initiated releases will use the Command buffers. There needs to be strict handling of the acquisition and release of contexts to avoid windows where for example, a frame is received on a context just after the context was passed to the INIC, but before the INIC has “accepted” it; and
- 17) T/TCP (Transaction TCP): the initial INIC will not handle T/TCP connections. This is because they are typically used for the HTTP protocol and the client for that protocol typically connects, sends a request and disconnects in one segment. The server sends the connect confirm, reply and disconnect in his first segment. Then the client confirms the disconnect. This is a total of 3 segments for the life of a context. Typical data lengths are on the order of 300 bytes from the client and 3K from the server. The INIC will provide as good an assist as seems necessary here by checksumming the frame and splitting headers and data. The latter is only likely when data is forwarded with a request such as when a filled-in form is sent by the client.
- 5.1.1 Sram Requirements.
- SRAM requirements for the Receive and Transmit engines are shown in FIG. 30. Depending upon the available space, the number of TCB buffers may be increased to 16.
- 5.1.2 General Philosophy.
- The basic plan is to have the host determine when a TCP connection is able to be handed to the INIC, setup the TCB and pass it to the card via a command in the Transmit queue. TCBs that the INIC owns can be handed back to the host via a request from the Receive or Transmit sequencers or from the host itself at any time.
- When the INIC receives a frame, one of its immediate tasks is to determine if the frame is for a TCB that it controls. If not, the frame is passed to the host on a generic interface TCB. On transmit, the transmit request will specify a TCB hash number if the request is on an INIC-controlled TCB. Thus the initial state for the INIC will be transparent mode in which all received frames are directly passed through and all transmit requests will be simply thrown on the appropriate wire. This state is maintained until the host passes TCBs to the INIC to control. Note that frames received for which the INIC has no TCB (or it is with the host) will still have the TCP checksum verified if TCP/IP, and may have the TCP/IP header split off into a separate buffer.
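- The following C sketch is merely illustrative of the lookup just described and of design note 8) above; the hash function, collision handling and SRAM cache management are not specified by this text, so the structures and routine names here are assumptions:

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define TCB_COUNT 256   /* resident TCBs, kept in NIC DRAM */
#define TCB_CACHE 8     /* TCB buffers cached in SRAM      */

/* Four-tuple that associates a frame with a connection. */
struct tcb_key {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
};

struct tcb {
    struct tcb_key key;
    bool valid;
    /* ... TCP state, window fields, in-progress host buffer list ... */
};

static struct tcb  dram_tcb[TCB_COUNT];   /* hash-ordered table in DRAM      */
static struct tcb *sram_cache[TCB_CACHE]; /* TCBs currently locked into SRAM */

/* Hypothetical hash over the four-tuple; the real hash is not specified. */
static uint8_t tcb_hash(const struct tcb_key *k)
{
    uint32_t h = k->src_ip ^ k->dst_ip ^
                 ((uint32_t)k->src_port << 16) ^ k->dst_port;
    return (uint8_t)(h ^ (h >> 8) ^ (h >> 16) ^ (h >> 24));
}

/* Associate an input frame with a TCB: check the SRAM cache first, then
 * the DRAM table.  NULL means the INIC does not control this connection
 * and the frame goes to the host over the generic interface. */
struct tcb *tcb_lookup(const struct tcb_key *k)
{
    for (int i = 0; i < TCB_CACHE; i++)
        if (sram_cache[i] && memcmp(&sram_cache[i]->key, k, sizeof *k) == 0)
            return sram_cache[i];

    struct tcb *t = &dram_tcb[tcb_hash(k)];
    if (t->valid && memcmp(&t->key, k, sizeof *k) == 0)
        return t;                 /* would then be loaded into an SRAM buffer */
    return NULL;
}
```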
- 5.1.3 Register Usage.
- There will be 512 registers available. The first 256 will be used for process contexts. The remaining 256 will be split between the three sequencers as follows: 1) 257-320: 64 for RCV general processing/main loop; 2) 321-384: 64 for XMT general processing/main loop; and 3) 385-512: 128 for use by the third (utility) sequencer.
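- As a software model of the register-windowed process switch described in design notes 9) through 11) above, the following C sketch may be helpful. The structure and routine names are hypothetical; on the actual part the switch is only a context-base-register reload and a jump, not a data-structure walk:

```c
#include <stdint.h>

/* 16 contexts (processes) of 16 registers each, 8 for receive and 8 for
 * transmit, as described above; hypothetical structure names. */
#define NUM_CONTEXTS 8                 /* per sequencer */

struct process_ctx {
    uint32_t regs[16];                 /* the context's register window      */
    uint32_t resume_pc;                /* where to resume after the DMA      */
    int      runnable;                 /* set when the awaited DMA completes */
};

static struct process_ctx ctx[NUM_CONTEXTS];
static int current;

/* Called right after a DMA has been requested on behalf of the current
 * TCB: the TCB stays locked in its SRAM buffer, the context remembers
 * where to resume, and control moves to another runnable context. */
int yield_after_dma_request(uint32_t resume_pc)
{
    ctx[current].resume_pc = resume_pc;
    ctx[current].runnable  = 0;        /* waiting for DMA completion */

    for (int i = 1; i <= NUM_CONTEXTS; i++) {
        int next = (current + i) % NUM_CONTEXTS;
        if (ctx[next].runnable) {
            current = next;            /* "reload the context base register" */
            return next;               /* caller jumps to ctx[next].resume_pc */
        }
    }
    return -1;                         /* nothing runnable: idle */
}
```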
- 5.2 Receive Processing.
- 5.2.1 Mainloop.
- FIG. 31 is a summary of the main loop of Receive.
- 5.2.2 Receive Events.
- The events that will be processed on a given context are:
- 1) accept a context;
- 2) release a context command (from the host via Transmit);
- 3) release a context request (from Transmit);
- 4) receive a valid frame; this will actually become 2 events based on the received frame—receive an ACK, receive a segment;
- 5) receive an “invalid” frame i.e. one that causes the TCB to be flushed to the host;
- 6) a valid ACK needs to be sent (delayed ACK timer expiry); and
- 7) There are expected to be the following sources of events: a) Receive input queue: it is expected that hardware will automatically DMA arriving frames into frame buffers and queue an event into a RCV-event queue; b) Timer event queue: expiration of a timer will queue an event into this queue; and c) Transmit sequencer queue: for requests from the transmit processor.
- For the sake of brevity the following only discusses receive-frame processing.
- 5.2.3 Receive Details—Valid Context.
- The base for the receive processing done by the INIC on an existing context is the fast-path or “header prediction” code in the FreeBSD release. Thus the processing is divided into three parts: header validation and checksumming, TCP processing and subsequent SMB processing.
- 5.2.3.1 Header Validation.
- There is considerable hardware assist here. The first step in receive processing is to DMA the frame header into an SRAM header buffer. It is useful for header validation to be implemented in conjunction with this DMA by scanning the data as it flies by. The following tests need to be “passed”:
- 1) MAC header: destination address is our MAC address (not MC or BC too), the Ethertype is IP; 2) IP header: header checksum is valid, header length=5, IP length>header length, protocol=TCP, no fragmentation, destination IP is our IP address; and 3) TCP header: checksum is valid (incl. pseudo-header), header length=5 or 8 (timestamp option), length is valid, dest port=SMB or FTP data, no FIN/SYN/URG/PSH/RST bits set, timestamp option is valid if present, segment is in sequence, the window size did not change, this is not a retransmission, it is a pure ACK or a pure receive segment, and most important, a valid context exists. The valid-context test is non-trivial in the amount of work involved to determine it. Also note that for pure ACKs, the window-size test will be relaxed. This is because initially the output PERSIST state is to be handled on the NIC.
- Many but perhaps not all of these tests will be performed in hardware—depending upon the embodiment.
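- The following C sketch is a much-simplified software rendering of a subset of these tests. Checksum verification, sequence/window checks and the context lookup are omitted, struct packing attributes are left out for brevity, and our_mac and our_ip are hypothetical globals (our_ip held in network byte order):

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>
#include <arpa/inet.h>

struct eth_hdr { uint8_t dst[6], src[6]; uint16_t ethertype; };
struct ip_hdr  { uint8_t  ver_ihl, tos; uint16_t len, id, frag;
                 uint8_t  ttl, proto;   uint16_t csum; uint32_t src, dst; };
struct tcp_hdr { uint16_t sport, dport; uint32_t seq, ack;
                 uint8_t  off, flags;   uint16_t win, csum, urg; };

extern uint8_t  our_mac[6];
extern uint32_t our_ip;

bool fastpath_headers_ok(const struct eth_hdr *e,
                         const struct ip_hdr  *ip,
                         const struct tcp_hdr *tcp)
{
    if (memcmp(e->dst, our_mac, 6) != 0)   return false;  /* not our MAC address */
    if (e->ethertype != htons(0x0800))     return false;  /* Ethertype not IP    */

    if ((ip->ver_ihl & 0x0f) != 5)         return false;  /* IP header length != 5 */
    if (ip->proto != 6)                    return false;  /* protocol != TCP       */
    if (ntohs(ip->frag) & 0x3fff)          return false;  /* fragmented            */
    if (ip->dst != our_ip)                 return false;  /* not our IP address    */

    uint8_t doff = tcp->off >> 4;          /* TCP header length in 32-bit words */
    if (doff != 5 && doff != 8)            return false;  /* only timestamp option */
    if (tcp->flags & 0x2f)                 return false;  /* FIN/SYN/RST/PSH/URG set */

    return true;  /* candidate for the fast path; remaining tests not shown */
}
```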
- 5.2.3.2 TCP Processing.
- Once a frame has passed the header validation tests, processing splits based on whether the frame is a pure ACK or a pure received segment.
- 5.2.3.2.1 Pure RCV Packet.
- The design is to split off headers into a small header buffer and pass the aligned data in separate large buffers. Since a frame has been received, eventually some receiver process on the host will need to be informed. In the case of FTP, the frame is pure data and it is passed to the host immediately. This involves getting large buffers and DMAing the data into them, then setting the appropriate details in a small buffer that is used to notify the host. However for SMB, the INIC is performing reassembly of data when the frame consists of headers and data. So there may not yet be a complete SMB to pass to the host. In this case, a small buffer will be acquired and the header moved into it. If the received segment completes an SMB, then the procedures are pretty much as for FTP. If it does not, then the scheme is to at least move the received data (not the headers) to the host to free the INIC buffers and to save latency. The list of in-progress host buffers is maintained in the TCB and moved to the header buffer when the SMB is complete.
- The final part of pure-receive processing is to fire off the delayed ACK timer for this segment.
- 5.2.3.2.2 Pure ACK.
- Pure ACK processing implies this TCB is the sender, so there may be transmit buffers that can be returned to the host. If so, send an event to the Transmit processor (or do the processing here). If there is more output available, send an event to the transmit processor. Then appropriate actions need to be taken with the retransmission timer.
- 5.2.3.3 SMB Processing.
- FIG. 32 shows the format of the SMB header of an SMB frame. The LENGTH field of the NetBIOS header will be used to determine when a complete SMB has been received and the header buffer with appropriate details can be posted to the host. The interesting commands are the write commands: SMBwrite (0xB), SMBwriteBraw (0x1D), SMBwriteBmpx (0x1E), SMBwriteBs (0x1F), SMBwriteclose (0x2C), SMBwriteX (0x2F), SMBwriteunlock (0x14). These are interesting because they will have data to be aligned in host memory. The point to note about these commands is that they each have a different WCT field, so that the start offset of the data depends on the command type. SMB processing will thus need to be cognizant of these types.
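- The following C sketch shows the two calculations implied above, assuming the conventional SMB-over-NetBIOS layout (a 4-byte NetBIOS session header, a 32-byte SMB header, the WCT byte, WCT parameter words, then a 2-byte byte count and the byte block). The command-specific payload offsets inside the byte block are not modeled here:

```c
#include <stdint.h>
#include <stddef.h>

#define SMB_HDR_LEN 32

/* NetBIOS LENGTH field: number of bytes that follow the 4-byte session
 * header.  A complete SMB has arrived once this many bytes are in hand. */
static uint32_t netbios_length(const uint8_t nb_hdr[4])
{
    return ((uint32_t)(nb_hdr[1] & 0x01) << 16) |
           ((uint32_t)nb_hdr[2] << 8) |
            (uint32_t)nb_hdr[3];
}

/* Offset from the start of the SMB message to its byte block; it moves
 * with the command's WCT (word count) field, which is why the write
 * commands listed above must be recognized individually. */
static size_t smb_byte_block_offset(const uint8_t *smb)
{
    uint8_t wct = smb[SMB_HDR_LEN];                 /* parameter word count   */
    return SMB_HDR_LEN + 1 + 2 * (size_t)wct + 2;   /* WCT byte + words + BCC */
}
```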
- 5.2.4 Receive Details—No Valid Context.
- The design here is to provide as much assist as possible. Frames will be checksummed and the TCPIP headers may be split off.
- 5.2.5 Receive Notes.
- 1. PRU_RCVD or the equivalent in Microsoft language: the host application has to tell the INIC when it has accepted the received data that has been queued. This is so that the INIC can update the receive window. It is an advantage for this mechanism to be efficient. This may be accomplished by piggybacking these notifications on transmit requests (not necessarily for the same TCB).
- 2. Keepalive Timer: for an INIC-controlled TCB, the INIC will not maintain this timer. This leaves the host with the job of determining that the TCB is still active.
- 3. Timestamp option: it is useful to support this option in the fast path because the BSD implementation does. Also, it can be very helpful in getting a much better estimate of the round-trip time (RTT) which TCP needs to use.
- 4. Idle timer: the INIC will not maintain this timer (see Note 2 above).
- 5. Frame with no valid context: the INIC may split TCP/IP headers into a separate header buffer.
- 5.3 Transmit Processing.
- 5.3.1 Main Loop.
- FIG. 33 is a summary of the main loop of Transmit.
- 5.3.2 Transmit Events.
- The events that will be processed on a given context and their sources are: 1) accept a context (from the Host); 2) release a context command (from the Host); 3) release a context command (from Receive); 4) valid send request and window>0 (from host or RCV sequencer); 5) valid send request and window=0 (from host or RCV sequencer); 6) send a window update (host has accepted data); 7) persist timer expiration (persist timer); 8) context-release event, e.g. window shrank (XMT processing or retransmission timer); and 9) receive-release request ACK (from RCV sequencer).
- 5.3.3 Transmit Details—Valid Context.
- The following is an overview of the transmit flow: The host posts a transmit request to the INIC by filling in a command buffer with appropriate data pointers etc and posting it to the INIC via the Command Buffer Address register. Note that there is one host command buffer queue, but there are four physical transmit lines. So each request needs to include an interface number as well as the context number. The INIC microcode will DMA the command in and place it in one of four internal command queues which the transmit sequencer will work on. This is so that transmit processing can round-robin service these four queues to keep all four interfaces busy, and not let a highly-active interface lock out the others (which would happen with a single queue). The transmit request may be a segment that is less than the MSS, or it may be as much as a full 64K SMB READ. Obviously the former request will go out as one segment, the latter as a number of MSS-sized segments. The transmitting TCB must hold on to the request until all data in it has been transmitted and acked. Appropriate pointers to do this will be kept in the TCB. A large buffer is acquired from the free buffer fifo, and the MAC and TCP/IP headers are created in it. It may be quicker/simpler to keep a basic frame header set up in the TCB and DMA this directly into the frame each time. Then data is DMA'd from host memory into the frame to create an MSS-sized segment. This DMA also checksums the data. Then the checksum is adjusted for the pseudo-header and placed into the TCP header, and the frame is queued to the MAC transmit interface which may be controlled by the third sequencer. The final step is to update various window fields etc in the TCB. Eventually either the entire request will have been sent and acked, or a retransmission timer will expire in which case the context is flushed to the host. In either case, the INIC will place a command response in the Response queue containing the command buffer handle from the original transmit command and appropriate status.
- The above discussion has dealt with how an actual transmit occurs. However the real challenge in the transmit processor is to determine whether it is appropriate to transmit at the time a transmit request arrives. There are many reasons not to transmit: the receiver's window size is <=0, the Persist timer has expired, the amount to send is less than a full segment and an ACK is expected/outstanding, the receiver's window is not half-open etc. Much of the transmit processing will be in determining these conditions.
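- The following C sketch is a rough rendering of the per-segment loop and pseudo-header checksum adjustment in the flow above. build_and_send_segment( ) is a hypothetical stand-in for "acquire a large buffer, build MAC/IP/TCP headers, DMA the data in (checksumming as it flies by) and queue the frame to the MAC", and the host-order arithmetic is illustrative only:

```c
#include <stdint.h>
#include <stddef.h>

extern void build_and_send_segment(uint64_t host_addr, size_t len);

static uint16_t fold32(uint32_t sum)
{
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)sum;
}

/* Adjust the data checksum accumulated by the DMA for the TCP
 * pseudo-header (source/destination IP, protocol, TCP length). */
uint16_t tcp_csum_finish(uint32_t partial, uint32_t src_ip, uint32_t dst_ip,
                         uint16_t tcp_len)
{
    partial += (src_ip >> 16) + (src_ip & 0xffff);
    partial += (dst_ip >> 16) + (dst_ip & 0xffff);
    partial += 6;                        /* IPPROTO_TCP */
    partial += tcp_len;
    return (uint16_t)~fold32(partial);   /* value placed in the TCP header */
}

/* Carve a transmit request (up to a 64K SMB read) into MSS-sized frames.
 * The TCB keeps pointers into the request until everything is acked. */
void transmit_request(uint64_t host_addr, size_t total_len, size_t mss)
{
    while (total_len > 0) {
        size_t seg = total_len < mss ? total_len : mss;
        build_and_send_segment(host_addr, seg);
        host_addr += seg;
        total_len -= seg;
    }
}
```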
- 5.3.4 Transmit Details—No Valid Context.
- The main difference between this and a context-based transmit is that the queued request here will already have the appropriate MAC and TCP/IP (or whatever) headers in the frame to be output. Also the request is guaranteed not to be greater than MSS-sized in length. So the processing is fairly simple. A large buffer is acquired and the frame is DMAed into it, at which time the checksum is also calculated. If the frame is TCP/IP, the checksum will be appropriately adjusted if necessary (pseudo-header etc) and placed in the TCP header. The frame is then queued to the appropriate MAC transmit interface. Then the command is immediately responded to with appropriate status through the Response queue.
- 5.3.5 Transmit Notes.
- 1) Slow-start: the INIC will handle the slow-start algorithm that is now a part of the TCP standard. This obviates waiting until the connection is sending at full rate before passing it to the INIC.
- 2) Window Probe vs Window Update—an explanation for posterity. A Window Probe is sent from the sending TCB to the receiving TCB, and it means the sender has the receiver in PERSIST state. Persist state is entered when the receiver advertises a zero window. It is thus the state of the transmitting TCB. In this state, he sends periodic window probes to the receiver in case an ACK from the receiver has been lost. The receiver will return his latest window size in the ACK. A Window Update is sent from the receiving TCB to the sending TCB, usually to tell him that the receiving window has altered. It is mostly triggered by the upper layer when it accepts some data. This probably means the sending TCB is viewing the receiving TCB as being in PERSIST state.
- 3) Persist state: the design is to handle Persist state on the INIC. It seems unreasonable to throw a TCB back to the host just because its receiver advertised a zero window. This would normally be a transient situation, and would tend to happen mostly with clients that do not support slow-start. Alternatively, the code can easily be changed to throw the TCB back to the host as soon as a receiver advertises a zero window.
- 4) MSS-sized frames: the INIC code will expect all transmit requests for which it has no TCB to not be greater than the MSS. If any request is, it will be dropped and an appropriate response status posted.
- 5) Silly Window avoidance: as a receiver, the INIC will do the right thing here and not advertise small windows—this is easy. However it is necessary to also do things to avoid this as a sender, for the cases where a stupid client does advertise small windows. Without getting into too much detail here, the mechanism requires the INIC code to calculate the largest window advertisement ever advertised by the other end. It is an attempt to guess the size of the other end's receive buffer and assumes the other end never reduces the size of its receive buffer. See Stevens, “TCP/IP Illustrated”, Vol. 1, pp. 325-326 (1994).
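- The following C sketch illustrates the sender-side test referred to in note 5) above, simplified from the classic BSD send test; the structure and field names are illustrative:

```c
#include <stdint.h>
#include <stdbool.h>

/* The TCB records the largest window the peer has ever advertised and
 * uses it as a guess at the peer's receive-buffer size. */
struct snd_state {
    uint32_t snd_wnd;       /* currently usable send window   */
    uint32_t max_sndwnd;    /* largest window ever advertised */
    uint32_t mss;
    uint32_t pending;       /* bytes queued but not yet sent  */
    bool     unacked_data;  /* data outstanding on the wire   */
};

/* Record every incoming window advertisement. */
void note_window(struct snd_state *s, uint32_t wnd)
{
    s->snd_wnd = wnd;
    if (wnd > s->max_sndwnd)
        s->max_sndwnd = wnd;
}

/* Decide whether a transmission is worthwhile right now. */
bool ok_to_send(const struct snd_state *s)
{
    uint32_t len = s->pending < s->snd_wnd ? s->pending : s->snd_wnd;

    if (len == 0)                 return false;
    if (len >= s->mss)            return true;   /* a full segment          */
    if (!s->unacked_data)         return true;   /* nothing in flight       */
    if (len >= s->max_sndwnd / 2) return true;   /* half the peer's buffer  */
    return false;
}
```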
- 6.1 Summary.
- The following is a summary of the main functions of the utility sequencer of the microprocessor:
- 1) Look at the event queues: Event13Type & Event23Type (we assume there will be an event status bit for this—USE_EV13 and USE_EV23) in the events register; these are events from sequencers 1 and 2;
- 2) RCV-frame support: in the model, RCV is done through VinicReceive( ) which is registered by the lower-edge driver, and is called at dispatch-level. This routine calls VinicTransferDataComplete( ) to check if the transfer (possibly DMA) of the frame into host buffers is complete. The latter routine is also called at dispatch level on a DMA-completion interrupt. It queues complete buffers to the RCV sequencer via the normal queue mechanism.
- 3) Other processes may also be employed here for supporting the RCV sequencer.
- 4) Service the following registers (this will probably involve micro-interrupts; a decoding sketch follows this list):
- a) Header Buffer Address register:
- Buffers are 256 bytes long on 256-byte boundaries.
- 31-8—physical addr in host of a set of contiguous header buffers. 7-0—number of header buffers passed.
- Use contents to add to SmallHType queue.
- b) Data Buffer Handle & Data Buffer Address registers:
- Buffers are 4K long aligned on 4K boundaries.
- Use contents to add to the FreeType queue.
- c) Command Buffer Address register:
- Buffers are a multiple of 32 bytes, up to 1K long (2**5*32).
- 31-5—physical addr in host of cmd buffer.
- 4-0—length of cmd in bytes/32 (i.e. multiples of 32 bytes).
- Points to host cmd; get FreeSType buffer and move command into it; queue to Xmit0-Xmit3Type queues.
- d) Response Buffer Address register:
- Buffers are 32 bytes long on 32-byte boundaries.
- 31-8—physical addr in host of a set of contiguous resp buffers.
- 7-0—number of resp buffers passed.
- Use contents to add to the ResponseType queue.
- 5) Low buffer threshold support: set appropriate bits in the ISR when the available-buffers count in the various queues filled by the host falls below a threshold.
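- The following C sketch decodes the packed register formats listed in item 4) above; the helper names are illustrative and actual register access is not shown:

```c
#include <stdint.h>

/* Header Buffer Address / Response Buffer Address:
 * bits 31-8 = physical address of a set of contiguous buffers,
 * bits 7-0  = number of buffers passed. */
static inline uint32_t buf_set_addr(uint32_t reg)  { return reg & ~0xffu; }
static inline uint32_t buf_set_count(uint32_t reg) { return reg &  0xffu; }

/* Command Buffer Address:
 * bits 31-5 = physical address of the host command buffer,
 * bits 4-0  = command length in 32-byte units (up to 1K). */
static inline uint32_t cmd_addr(uint32_t reg)      { return reg & ~0x1fu; }
static inline uint32_t cmd_len_bytes(uint32_t reg) { return (reg & 0x1fu) * 32u; }
```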
- 6.2 Further Operations of the Utility Processor
- The utility processor of the microprocessor housed on the INIC is responsible for setting up and implementing all configuration space and memory mapped operations, and also as described below, for managing the debug interface.
- All data transfers, and other INIC initiated transfers will be done via DMA. Configuration space for both the network processor function and the utility processor function will define a single memory space for each. This memory space will define the basic communication structure for the host. In general, writing to one of these memory locations will perform a request for service from the INIC. This is detailed in the memory description for each function. This section defines much of the operation of the Host interface, but should be read in conjunction with the Host Interface Strategy for the Alacritech INIC to fully define the Host/INIC interface.
- Two registers, DMA hardware and an interrupt function comprise the INIC interface to the Host through PCI. The interrupt function is implemented via a four bit register (PCI_INT) tied to the PCI interrupt lines. This register is directly accessed by the microprocessor.
- THE MICROPROCESSOR uses two registers, the PCI Data_Reg and the PCI_Address_Reg, to enable the Host to access Configuration Space and the memory space allocated to the INIC. These registers are not available to the Host, but are used by THE MICROPROCESSOR to enable Host reads and writes. The function of these two registers is as follows.
- 1) PCI_Data_Reg: This register can be both read and written by THE MICROPROCESSOR. On write operations from the host, this register contains the data being sent from the host. On read operations, this register contains the data to be sent to the host.
- 2) PCI_Address_Reg: This is the control register for memory reads and writes from the host. The structure of the register is shown in FIG. 34. During a write operation from the Host the PCI_Data_Reg contains valid data after Data Valid is set in the PCI_Address_Reg. Both registers are locked until THE MICROPROCESSOR writes the PCI_Data_Reg, which resets Data Valid. All read operations will be direct from SRAM. Memory space based reads will return 00. Configuration space reads will be mapped as shown in FIG. 35.
- 6.2.1 Configuration Space.
- The INIC is implemented as a multi-function device. The first device is the network controller, and the second device is the debug interface. An alternative production embodiment may implement only the network controller function. Both configuration space headers will be the same, except for the differences noted in the following description.
- Vendor ID—This field will contain the Alacritech Vendor ID. One field will be used for both functions. The Alacritech Vendor ID is hex 139A.
- Device ID—Chosen at Alacritech on a device specific basis. One field will be used for both functions.
- Command—Initialized to 00. All bits defined below as not enabled (0) will remain 0. Those that are enabled will be set to 0 or 1 depending on the state of the system. Each function (network and debug) will have its own command field, as shown in FIG. 36.
- Status—This is not initialized to zero. Each function will have its own field. The configuration is as shown in FIG. 37.
- Revision ID—The revision field will be shared by both functions.
- Class Code—This is 02 00 00 for the network controller, and for the debug interface. The field will be shared.
- Cache Line Size—This is initialized to zero. Supported sizes are 16, 32, 64 and 128 bytes. This hardware register is replicated in SRAM and supported separately for each function, but THE MICROPROCESSOR will implement the value set in Configuration Space 1 (the network processor).
- Latency Timer—This is initialized to zero. The function is supported. This hardware register is replicated in SRAM. Each function is supported separately, but THE MICROPROCESSOR will implement the value set in Configuration Space 1 (the network processor).
- Header Type—This is set to 80 for both functions, but will be supported separately.
- BIST—Is implemented. In addition to responding to a request to run self test, if test after reset fails, a code will be set in the BIST register. This will be implemented separately for each function.
- Base Address Register—A single base address register is implemented for each function. It is 64 bits in length, and the bottom four bits are configured as follows: Bit 0-0, indicates memory base address; Bits 1,2-00, locate base address anywhere in 32-bit memory space; and Bit 3-1, memory is prefetchable.
- CardBus CIS Pointer—Not implemented—initialized to 0.
- Subsystem Vendor ID—Not implemented—initialized to 0.
- Subsystem ID—Not implemented—initialized to 0.
- Expansion ROM Base Address—Not implemented—initialized to 0.
- Interrupt Line—Implemented—initialized to 0. This is implemented separately for each function.
- Interrupt Pin—This is set to 01, corresponding to INTA# for the network controller, and 02, corresponding to INTB# for the debug interface. This is implemented separately for each function.
- Min_Gnt—This can be set to a value in the range of 10, to allow reasonably long bursts on the bus. This is implemented separately for each function.
- Max_Lat—This can be set to 0 to indicate no particular requirement for frequency of access to PCI. This is implemented separately for each function.
- 6.2.2 Memory Space.
- Because each of the following functions may or may not reside in a single location, and may or may not need to be in SRAM at all, the address for each is really only used as an identifier (label). There is, therefore, no control block anywhere in memory that represents this memory space. When the host writes one of these registers, the utility processor will construct the data required and transfer it. Reads to this memory will generate 00 for data.
- 6.2.2.1 Network Processor.
- The following four byte registers, beginning at location h00 of the network processor's allocated memory, are defined.
- 00—Interrupt Status Pointer—Initialized by the host to point to a four byte area where status is stored.
- 04—Interrupt Status—Returned status from host. Sent after one or more status conditions have been reset. Also an interlock for storing any new status. Once status has been stored at the Interrupt Status Pointer location, no new status will be ORed until the host writes the Interrupt Status Register. New status will be ored with any remaining uncleared status (as defined by the contents of the returned status) and stored again at the Interrupt Status Pointer location. Bits are as follows:
- Bit 31—ERR—Error bits are set;
- Bit 30—RCV—Receive has occurred;
- Bit 29—XMT—Transmit command complete; and
- Bit 25—RMISS—Receive drop occurred due to no buffers.
- 08—Interrupt Mask—Written by the host. Interrupts are masked for each of the bits in the interrupt status when the same bit in the mask register is set. When the Interrupt Mask register is written and as a result a status bit is unmasked, an interrupt is generated. Also, when the Interrupt Status Register is written, enabling new status to be stored, an interrupt is generated if a stored bit is not masked by the Interrupt Mask. (A software model of this status/mask handshake follows this register list.)
- 0C—Header Buffer Address—Written by host to pass a set of header buffers to the INIC.
- 10—Data Buffer Handle—First register to be written by the Host to transfer a receive data buffer to the INIC. This data is Host reference data. It is not used by the INIC, it is returned with the data buffer. However, to insure integrity of the buffer, this register must be interlocked with the Data Buffer Address register. Once the Data Buffer Address register has been written, neither register can be written until after the Data Buffer Handle register has been read by THE MICROPROCESSOR.
- 14—Data Buffer Address—Pointer to the data buffer being sent to the INIC by the Host. Must be interlocked with the Data Buffer Handle register.
- 18—Command Buffer Address XMT0—Pointer to a set of command buffers sent by the Host. THE MICROPROCESSOR will DMA the buffers to local DRAM found on the FreeSType queue and queue the Command Buffer Address XMT0 with the local address replacing the host Address.
- 1C—Command Buffer Address XMT1.
- 20—Command Buffer Address XMT2.
- 24—Command Buffer Address XMT3.
- 28—Response Buffer Address—Pointer to a set of response buffers sent by the Host. These will be treated in the same fashion as the Command Buffer Address registers.
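- The following C sketch is a software model of the Interrupt Status/Interrupt Mask handshake referred to above. The state variables are hypothetical; store_status( ) stands in for the DMA of the status word to the Interrupt Status Pointer location, and raise_pci_interrupt( ) for asserting the PCI interrupt line:

```c
#include <stdint.h>
#include <stdbool.h>

struct int_state {
    uint32_t pending;      /* status bits accumulated on the INIC          */
    uint32_t mask;         /* host-written Interrupt Mask                  */
    bool     outstanding;  /* status stored at the host, not yet returned  */
};

extern void store_status(uint32_t status);
extern void raise_pci_interrupt(void);

/* New status is ORed in; it is only stored at the host while no earlier
 * status is outstanding, and only unmasked bits generate an interrupt. */
void post_status(struct int_state *s, uint32_t bits)
{
    s->pending |= bits;
    if (!s->outstanding && s->pending) {
        store_status(s->pending);
        s->outstanding = true;
        if (s->pending & ~s->mask)
            raise_pci_interrupt();
    }
}

/* The host writes the Interrupt Status register with the bits it has
 * handled; anything still uncleared is stored again. */
void host_wrote_status(struct int_state *s, uint32_t returned)
{
    s->pending &= ~returned;
    s->outstanding = false;
    post_status(s, 0);
}
```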
- 6.2.2.2 Utility Processor.
- Ending status will be handled by the utility processor in the same fashion as it is handled by the network processor. At present two ending status conditions are defined: Bit 31—command complete, and Bit 30—error. When end status is stored an interrupt is generated.
- Two additional registers are defined, Command Pointer and Data Pointer. The Host is responsible for insuring that the Data Pointer is valid and points to sufficient memory before storing a command pointer. Storing a command pointer initiates command decode and execution by the debug processor. The Host must not modify either command or Data Pointer until ending status has been received, at which point a new command may be initiated. Memory space is write only by the Host, reads will receive 00. The format is as follows:
- 00—Interrupt Status Pointer—Initialized by the host to point to a four byte area where status is stored.
- 04—Interrupt Status—Returned status from host. Sent after one or more status conditions have been reset. Also an interlock for storing any new status. Once status has been stored at the Interrupt Status Pointer location, no new status will be stored until the host writes the Interrupt Status Register. New status will be ored with any remaining uncleared status (as defined by the contents of the returned status) and stored again at the Interrupt Status Pointer location. Bits are as follows:
- Bit 31—CC—Command Complete;
- Bit 30—ERR—Error;
- Bit 29—Transmit Processor Halted;
- Bit 28—Receive Processor Halted; and
- Bit 27—Utility Processor Halted.
- 08—Interrupt Mask—Written by the host. Interrupts are masked for each of the bits in the interrupt status when the same bit in the mask register is set. When the Interrupt Mask register is written and as a result a status bit is unmasked, an interrupt is generated. Also, when the Interrupt Status Register is written, enabling new status to be stored, an interrupt is generated if a stored bit is not masked by the Interrupt Mask.
- 0C—Command Pointer—Points to command to be executed. Storing this pointer initiates command decode and execution.
- 10—Data Pointer—Points to the data buffer. This is used for both read and write data, determined by the command function.
- In order to provide a mechanism to debug the microcode running on the microprocessor sequencers, a debug process has been defined which will run on the utility sequencer. This processor will interface with a control program on the host processor over PCI.
- 7.1 PCI Interface.
- This interface is defined in the combination of the Utility Processor and the Host Interface Strategy sections, above.
- 7.2 Command Format.
- The first byte of the command, the command byte, defines the structure of the remainder of the command.
- 7.2.1 COMMAND BYTE.
- The first five bits of the command byte are the command itself. The next bit is used to specify an alternate processor, and the last two bits specify which processors are intended for the command.
- 7.2.2 PROCESSOR BITS.
- 00—Any Processor;
- 01—Transmit Processor;
- 10—Receive Processor; and
- 11—Utility Processor.
- 7.2.3 Alternate Processor.
- This bit defines which processor should handle debug processing if the utility processor is defined as the processor in debug.
- 0—Transmit Processor; and
- 1—Receive Processor.
- 7.2.4 Single Byte Commands.
- 00—Halt—This command asynchronously halts the processor.
- 08—Run—This command starts the processor.
- 10—Step—This command steps the processor.
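- The following C sketch decodes the command byte of section 7.2.1. The bit positions are inferred from the single-byte opcodes just listed (Halt=0x00, Run=0x08, Step=0x10), which place the five command bits at the top of the byte; treat the exact packing as an assumption:

```c
#include <stdint.h>

enum dbg_proc { PROC_ANY = 0, PROC_XMT = 1, PROC_RCV = 2, PROC_UTIL = 3 };

struct dbg_cmd {
    uint8_t command;   /* bits 7-3: the command itself                   */
    uint8_t alt_proc;  /* bit 2: 0 = transmit, 1 = receive handles debug */
    uint8_t proc;      /* bits 1-0: target processor (enum dbg_proc)     */
};

static struct dbg_cmd decode_cmd_byte(uint8_t b)
{
    struct dbg_cmd c = {
        .command  = (uint8_t)(b >> 3),
        .alt_proc = (uint8_t)((b >> 2) & 1u),
        .proc     = (uint8_t)(b & 3u),
    };
    return c;
}
```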
-
- This command sets a stop at the specified address. A count of 1 causes the specified processor to halt the first time it executes the instruction. A count of 2 or more causes the processor to halt after that number of executions. The processor is halted just before executing the instruction. A count of 0 does not halt the processor, but causes a sync signal to be generated. If a second processor is set to the same break address, the count data from the first break request is used, and each time either processor executes the instruction the count is decremented.
-
- This command transfers to the host the contents of the descriptor. For descriptors larger than four bytes, a count, in four byte increments is specified. For descriptors utilizing an address the address field is specified.
- 7.2.6 Descriptor.
- 00—Register—This descriptor uses both count and address fields. Both fields are four byte based (a count of 1 transfers four bytes).
- 01—Sram—This descriptor uses both count and address fields. Count is in four byte blocks. Address is in bytes, but if it is not four byte aligned, it is forced to the lower four byte aligned address.
- 02—DRAM—This descriptor uses both count and address fields. Count is in four byte blocks. Address is in bytes, but if it is not four byte aligned, it is forced to the lower four byte aligned address
- 03—Cstore—This descriptor uses both count and address fields. Count is in four byte blocks. Address is in bytes, but if it is not four byte aligned, it is forced to the lower four byte aligned address
- Stand-alone descriptors: The following descriptors do not use either the count or address fields. They transfer the contents of the referenced register.
- 04—CPU_STATUS;
- 05—PC;
- 06—ADDR_REGA;
- 07—ADDR_REGB;
- 08—RAM_BASE;
- 09—FILE_BASE;
- 0A—INSTR_REG_L;
- 0B—INSTR_REG_H;
- 0C—MAC_DATA;
- 0D—DMA_EVENT;
- 0E—MISC_EVENT;
- 0F—QIN_RDY;
- 10—QOUT_RDY;
- 11—LOCK STATUS;
- 12—STACK—This returns 12 bytes; and
- 13—Sense_Reg.
- This register contains four bytes of data. If error status is posted for a command and the next command issued reads this register, a code describing the error in more detail may be obtained. If any command other than a dump of this register is issued after error status, the sense information will be reset.
- This command transfers from the host the contents of the descriptor. For descriptors larger than four bytes, a count, in four byte increments is specified. For descriptors utilizing an address the address field is specified.
- 7.2.7 Descriptor.
- 00—Register—This descriptor uses both count and address fields. Both fields are four byte based.
- 01—Sram—This descriptor uses both count and address fields. Count is in four byte blocks. Address is in bytes, but if it is not four byte aligned, it is forced to the lower four byte aligned address.
- 02—DRAM—This descriptor uses both count and address fields. Count is in four byte blocks. Address is in bytes, but if it is not four byte aligned, it is forced to the lower four byte aligned address.
- 03—Cstore—This descriptor uses both count and address fields. Count is in four byte blocks. Address is in bytes, but if it is not four byte aligned, it is forced to the lower four byte aligned address. This applies to WCS only.
- Stand-alone descriptors: The following descriptors do not use either the count or address fields. They transfer the contents of the referenced register.
- 04—ADDR_REGA;
- 05—ADDR_REGB;
- 06—RAM_BASE;
- 07—FILE_BASE;
- 08—MAC_DATA;
- 09—QIN_RDY;
- 0A—QOUT_RDY;
- 0B—DBG_ADDR; and
- 38—Map.
-
- Features:
- 1) PERIPHERAL COMPONENT INTERCONNECT (PCI) INTERFACE.
- a) Universal PCI interface supports both 5.0V and 3.3V signaling environments;
- b) Supports both 32-bit and 64 bit PCI interface;
- c) Supports PCI clock frequencies from 15 MHz to 66 MHz;
- d) High performance bus mastering architecture;
- e) Host memory based communications reduce register accesses;
- f) Host memory based interrupt status word reduces register reads;
- g) Plug and Play compatible;
- h) PCI specification revision 2.1 compliant;
- i) PCI bursts up to 512 bytes;
- j) Supports cache line operations up to 128 bytes;
- k) Both big-endian and little-endian byte alignments supported; and
- l) Supports Expansion ROM.
- 2) NETWORK INTERFACE.
- a) Four internal 802.3 and ethernet compliant Macs;
- b) Media Independent Interface (MII) supports external PHYs;
- c) 10BASE-T, 100BASE-TX/FX and 100BASE-T4 supported;
- d) Full and half-duplex modes supported;
- e) Automatic PHY status polling notifies system of status change;
- f) Provides SNMP statistics counters;
- g) Supports broadcast and multicast packets;
- h) Provides promiscuous mode for network monitoring or multiple unicast address detection;
- i) Supports “huge packets” up to 32 KB;
- j) Mac-layer loop-back test mode; and
- k) Supports auto-negotiating Phys.
- 3) MEMORY INTERFACE.
- a) External DRAM buffering of transmit and receive packets;
- b) Buffering configurable as 4 MB, 8 MB, 16 MB or 32 MB;
- c) 32-bit interface supports throughput of 224 MB/s;
- d) Supports external FLASH ROM up to 4 MB, for diskless boot applications; and
- e) Supports external serial EEPROM for custom configuration and Mac addresses.
- 4) PROTOCOL PROCESSOR.
- a) High speed, custom, 32-bit processor executes 66 million instructions per second;
- b) Processes IP, TCP and NETBIOS protocols;
- c) Supports up to 256 resident TCP/IP contexts; and
- d) Writable control store (WCS) allows field updates for feature enhancements.
- 5) POWER.
- a) 3.3V chip operation; and
- b) PCI controlled 5.0V/3.3V I/O cell operation.
- 6) PACKAGING.
- a) 272-pin plastic ball grid array;
- b) 91 PCI signals;
- c) 68 MII signals;
- d) 58 external memory signals;
- e) 1 clock signal;
- f) 54 signals split between power and ground; and
- g) 272 total pins.
- General Description.
- The microprocessor (see FIG. 38) is a 32-bit, full-duplex, four channel, 10/100-Megabit per second (Mbps), Intelligent Network Interface Controller (INIC), designed to provide high-speed protocol processing for server applications. It combines the functions of a standard network interface controller and a protocol processor within a single chip. Although designed specifically for server applications, the microprocessor can be used by PCs, workstations and routers or anywhere that TCP/IP protocols are being utilized.
- When combined with four 802.3/MII compliant Phys and Synchronous DRAM (SDRAM), the INIC comprises four complete ethernet nodes. It contains four 802.3/ethernet compliant Macs, a PCI Bus Interface Unit (BIU), a memory controller, transmit fifos, receive fifos and a custom TCP/IP/NETBIOS protocol processor. The INIC supports 10Base-T, 100Base-TX, 100Base-FX and 100Base-T4 via the MII interface attachment of appropriate Phys.
- The INIC Macs provide statistical information that may be used for SNMP. The Macs operate in promiscuous mode allowing the INIC to function as a network monitor, receive broadcast and multicast packets and implement multiple Mac addresses for each node.
- Any 802.3/MII compliant PHY can be utilized, allowing the INIC to support 10BASE-T, 10BASE-T2, 10BASE-TX, 100Base-FX and 100BASE-T4 as well as future interface standards. PHY identification and initialization is accomplished through host driver initialization routines. PHY status registers can be polled continuously by the INIC and detected PHY status changes reported to the host driver. The Mac can be configured to support a maximum frame size of 1518 bytes or 32768 bytes.
- The 64-bit, multiplexed BIU provides a direct interface to the PCI bus for both slave and master functions. The INIC is capable of operating in either a 64-bit or 32-bit PCI environment, while supporting 64-bit addressing in either configuration. PCI bus frequencies up to 66 MHz are supported yielding instantaneous bus transfer rates of 533 MB/s. Both 5.0V and 3.3V signaling environments can be utilized by the INIC. Configurable cache-line size up to 256B will accommodate future architectures, and Expansion ROM/Flash support allows for diskless system booting. Non-PC applications are supported via programmable big and little endian modes. Host based communication has been utilized to provide the best system performance possible.
- The NIC supports Plug-N-Play auto-configuration through the PCI configuration space. External pull-up and pull-down resistors, on the memory I/O pins, allow selection of various features during chip reset. Support of an external eeprom allows for local storage of configuration information such as Mac addresses.
- External SDRAM provides frame buffering, which is configurable as 4 MB, 8 MB, 16 MB or 32 MB using the appropriate SIMMs. Use of −10 speed grades yields an external buffer bandwidth of 224 MB/s. The buffer provides temporary storage of both incoming and outgoing frames. The protocol processor accesses the frames within the buffer in order to implement TCP/IP and NETBIOS. Incoming frames are processed, assembled then transferred to host memory under the control of the protocol processor. For transmit, data is moved from host memory to buffers where various headers are created before being transmitted out via the Mac.
- 1) Cores/Cells.
- a) LSI Logic Ethernet—110 Core, 100Base and 10Base Mac with MII interface;
- b) LSI Logic single port SRAM, triple port SRAM and ROM available;
- c) LSI Logic PCI 66 MHz, 5V compatible I/O cell; and
- d) LSI Logic PLL.
- 2) Die Size/Pin Count.
- LSI Logic G10 process. FIG. 39 shows the area on the die of each module.
- 3) Datapath Bandwidth (See FIG. 40).
- 4) CPU Bandwidth (See FIG. 41).
- 5) Performance Features.
- a) 512 registers improve performance through reduced scratch ram accesses and reduced instructions;
- b) Register windowing eliminates context-switching overhead;
- c) Separate instruction and data paths eliminate memory contention;
- d) Totally resident control store eliminates stalling during instruction fetch;
- e) Multiple logical processors eliminate context switching and improve real-time response;
- f) Pipelined architecture increases operating frequency;
- g) Shared register and scratch ram improve inter-processor communication;
- h) Fly-by state-Machine assists address compare and checksum calculation;
- i) TCP/IP-context caching reduces latency;
- j) Hardware implemented queues reduce Cpu overhead and latency;
- k) Horizontal microcode greatly improves instruction efficiency;
- l) Automatic frame DMA and status between Mac and DRAM buffer; and
- m) Deterministic architecture coupled with context switching eliminates processor stalls.
- Processor.
- The processor is a convenient means to provide a programmable state-machine which is capable of processing incoming frames, processing host commands, directing network traffic and directing PCI bus traffic. Three processors are implemented using shared hardware in a three-level pipelined architecture which launches and completes a single instruction for every clock cycle. The instructions are executed in three distinct phases corresponding to each of the pipeline stages where each phase is responsible for a different function.
- The first instruction phase writes the instruction results of the last instruction to the destination operand, modifies the program counter (Pc), selects the address source for the instruction to fetch, then fetches the instruction from the control store. The fetched instruction is then stored in the instruction register at the end of the clock cycle.
- The processor instructions reside in the on-chip control-store, which is implemented as a mixture of ROM and SRAM. The ROM contains 1K instructions starting at address 0x0000 and aliases each 0x0400 locations throughout the first 0x8000 of instruction space. The SRAM (WCS) will hold up to 0x2000 instructions starting at address 0x8000 and aliasing each 0x2000 locations throughout the last 0x8000 of instruction space. The ROM and SRAM are both 49-bits wide accounting for bits [48:0] of the instruction microword. A separate mapping ram provides bits [55:49] of the microword (MapAddr) to allow replacement of faulty ROM based instructions. The mapping ram has a configuration of 128×7 which is insufficient to allow a separate map address for each of the 1K ROM locations. To allow re-mapping of the entire 1K ROM space, the map ram address lines are connected to the address bits Fetch[9:3]. The result is that the ROM is re-mapped in blocks of 8 contiguous locations.
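- The following C sketch models the control-store aliasing and map-ram redirection just described. The array storage is illustrative only; the widths follow the text (49-bit microword plus a 7-bit MapAddr field from the 128×7 mapping ram):

```c
#include <stdint.h>

#define ROM_WORDS 0x0400               /* 1K ROM instructions             */
#define WCS_WORDS 0x2000               /* up to 8K writable instructions  */

extern uint64_t rom[ROM_WORDS];        /* bits 48:0 used                  */
extern uint64_t wcs[WCS_WORDS];
extern uint8_t  map_ram[128];          /* one 7-bit entry per 8 ROM words */

/* The 16-bit fetch address aliases the ROM every 0x0400 locations in the
 * lower half of the space and the WCS every 0x2000 locations in the
 * upper half. */
uint64_t fetch_microword(uint16_t addr)
{
    if (addr < 0x8000)
        return rom[addr & (ROM_WORDS - 1)];
    return wcs[addr & (WCS_WORDS - 1)];
}

/* MapAddr (microword bits 55:49) comes from the map ram indexed by
 * Fetch[9:3], so a non-zero entry redirects a block of 8 contiguous ROM
 * locations to a replacement routine. */
uint8_t map_addr(uint16_t addr)
{
    return (uint8_t)(map_ram[(addr >> 3) & 0x7f] & 0x7f);
}
```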
- The third instruction phase is when the actual Alu operation is performed, the test condition is selected and the Stack push and pop are implemented. Results of the Alu operation are stored in the results register at the end of the clock cycle.
- FIG. 42 is a block diagram of the CPU. FIG. 42 shows the hardware functions associated with each of the instruction phases. Note that various functions have been distributed across the three phases of the instruction execution in order to minimize the combinatorial delays within any given phase.
- Instruction Set.
- The micro-instructions are divided into six types according to the program control directive. The micro-instruction is further divided into sub-fields for which the definitions are dependent upon the instruction type. The six instruction types are listed in FIG. 43.
- All instructions (see FIG. 43) include the Alu operation (AluOp), operand “A” select (OpdASel), operand “B” select (OpdBSel) and Literal fields. Other field usage depends upon the instruction type.
- The “jump condition code” (Jcc) instruction causes the program counter to be altered if the condition selected by the “test select” (TstSel) field is asserted. The new program counter (Pc) value is loaded from either the Literal field or the AluOut as described in the following section and the Literal field may be used as a source for the Alu or the ram address if the new Pc value is sourced by the Alu.
- The “jump” (Jmp) instruction causes the program counter to be altered unconditionally. The new program counter (Pc) value is loaded from either the Literal field or the AluOut as described in the following section. The format allows instruction bits 23:16 to be used to perform a flag operation and the Literal field may be used as a source for the Alu or the ram address if the new Pc value is sourced by the Alu.
- The “jump subroutine” (Jsr) instruction causes the program counter to be altered unconditionally. The new program counter (Pc) value is loaded from either the Literal field or the AluOut as described in the following section. The old program counter value is stored on the top location of the Pc-Stack which is implemented as a LIFO memory. The format allows instruction bits 23:16 to be used to perform a flag operation and the Literal field may be used as a source for the Alu or the ram address if the new Pc value is sourced by the Alu.
- The “Nxt” (Nxt) instruction causes the program counter to increment. The format allows instruction bits 23:16 to be used to perform a flag operation and the Literal field may be used as a source for the Alu or the ram address.
- The “return from subroutine” (Rts) instruction is a special form of the Nxt instruction in which the “flag operation” (FlgSel) field is set to a value of 0hff. The current Pc value is replaced with the last value stored in the stack. The Literal field may be used as a source for the Alu or the ram address.
- The Map instruction is provided to allow replacement of instructions which have been stored in ROM and is implemented any time the “map enable” (MapEn) bit has been set and the content of the “map address” (MapAddr) field is non-zero. The instruction decoder forces a jump instruction with the Alu operation and destination fields set to pass the MapAddr field to the program control block.
- The program control is determined by a combination of PgmCtrl, DstOpd, FlgSel and TstSel. The behavior of the program control is defined with the “C-like” description in FIG. 44. FIGS. 45-53 show ALU operations, selected operands, selected tests, and flag operations.
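- The following C sketch approximates the program-counter behavior of the instruction types described above. The authoritative definition is the “C-like” description of FIG. 44, so the structure names and stack handling here (no overflow checking) are illustrative only:

```c
#include <stdint.h>
#include <stdbool.h>

enum itype { I_JCC, I_JMP, I_JSR, I_NXT, I_RTS, I_MAP };

struct seq_pc {
    uint16_t pc;
    uint16_t stack[8];     /* Pc-Stack, a small LIFO */
    int      sp;
};

static void push(struct seq_pc *c, uint16_t v) { c->stack[c->sp++] = v; }
static uint16_t pop(struct seq_pc *c)          { return c->stack[--c->sp]; }

/* new_pc is the Literal field or AluOut, as selected by the instruction;
 * cond is the result of the TstSel test for Jcc; map_addr is the forced
 * jump target for a Map replacement. */
void update_pc(struct seq_pc *c, enum itype t, uint16_t new_pc,
               uint16_t map_addr, bool cond)
{
    switch (t) {
    case I_JCC: c->pc = cond ? new_pc : (uint16_t)(c->pc + 1); break;
    case I_JMP: c->pc = new_pc;                                break;
    case I_JSR: push(c, c->pc); c->pc = new_pc;                break;
    case I_NXT: c->pc = (uint16_t)(c->pc + 1);                 break;
    case I_RTS: c->pc = pop(c);                                break;
    case I_MAP: c->pc = map_addr;                              break;
    }
}
```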
- Sram Control Sequencer (SramCtrl).
- SRAM is the nexus for data movement within the INIC. A hierarchy of sequencers, working in concert, accomplish the movement of data between DRAM, SRAM, Cpu, ethernet and the Pci bus. Master sequencers request data movement operations by way of the SRAM, Pci bus, DRAM and Flash. The slave sequencers, provided with stimulus from the master sequencers, prioritize, service and acknowledge the requests.
- The data flow block diagram of FIG. 54 shows all of the master and slave sequencers of the INIC product. Request information such as r/w, address, size, endian and alignment are represented by each request line. Acknowledge information to master sequencers include only the size of the transfer being acknowledged.
- The block diagram of FIG. 55 illustrates how data movement is accomplished for a Pci slave write to DRAM. Note that the Psi (Pci slave in) module functions as both a slave sequencer and a master sequencer. Psi sends a write request to the SramCtrl module. Psi requests Xwr to move data from SRAM to DRAM. Xwr subsequently sends a read request to the SramCtrl module then writes the data to the DRAM via the Xctrl module. As each piece of data is moved from the SRAM to Xwr, Xwr sends an acknowledge to the Psi module.
- The SRAM control sequencer services requests to store to, or retrieve data from an SRAM organized as 1024 locations by 128 bits (16 KB). The sequencer operates at a frequency of 133 MHz, allowing both a Cpu access and a DMA access to occur during a standard 66 MHz Cpu cycle. One 133 MHz cycle is reserved for Cpu accesses during each 66 MHz cycle while the remaining 133 MHz cycle is reserved for DMA accesses on a prioritized basis.
- The block diagram of FIG. 56 shows the major functions of the SRAM control sequencer. A slave sequencer begins by asserting a request along with r/w, ram address, endian, data path size, data path alignment and request size. SramCtrl prioritizes the requests. The request parameters are then selected by a multiplexer which feeds the parameters to the SRAM via a register. The requestor provides the SRAM address which when coupled with the other parameters controls the input and output alignment. SRAM outputs are fed to the output aligner via a register. Requests are acknowledged in parallel with the returned data.
- FIG. 57 is a timing diagram depicting two ram accesses during a single 66 MHz clock cycle.
- External Memory Control (Xctrl).
- Xctrl (See FIG. 58) provides the facility whereby Xwr, Xrd, Dcfg and Eectrl access external Flash and DRAM. Xctrl includes an arbiter, i/o registers, data multiplexers, address multiplexers and control multiplexers. Ownership of the external memory interface is requested by each block and granted to each of the requesters by the arbiter function. Once ownership has been granted the multiplexers select the address, data and control signals from the owner, allowing access to external memory.
- External Memory Read Sequencer (Xrd).
- The Xrd sequencer acts only as a slave sequencer. Servicing requests issued by master sequencers, the Xrd sequencer moves data from external SDRAM or flash to the SRAM, via the Xctrl module, in blocks of 32 bytes or less. The nature of the SDRAM requires fixed burst sizes for each of its internal banks with ras precharge intervals between each access. By selecting a burst size of 32 bytes for SDRAM reads and interleaving bank accesses on a 16 byte boundary, we can ensure that the ras precharge interval for the first bank is satisfied before burst completion for the second bank, allowing us to re-instruct the first bank and continue with uninterrupted DRAM access. SDRAMs require that a consistent burst size be utilized each and every time the SDRAM is accessed. For this reason, if an SDRAM access does not begin or end on a 32 byte boundary, SDRAM bandwidth will be reduced due to less than 32 bytes of data being transferred during the burst cycle.
- FIG. 59 depicts the major functional blocks of the Xrd external memory read sequencer. The first step in servicing a request to move data from SDRAM to SRAM is the prioritization of the master sequencer requests. Next the Xrd sequencer takes a snapshot of the DRAM read address and applies configuration information to determine the correct bank, row and column address to apply. Once sufficient data has been read, the Xrd sequencer issues a write request to the SramCtrl sequencer which in turn sends an acknowledge to the Xrd sequencer. The Xrd sequencer passes the acknowledge along to the level two master with a size code indicating how much data was written during the SRAM cycle allowing the update of pointers and counters. The DRAM read and SRAM write cycles repeat until the original burst request has been completed at which point the Xrd sequencer prioritizes any remaining requests in preparation for the next burst cycle.
- Contiguous DRAM burst cycles are not guaranteed to the Xrd sequencer as an algorithm is implemented which ensures highest priority to refresh cycles followed by flash accesses, DRAM writes then DRAM reads.
- FIG. 60 is a timing diagram illustrating how data is read from SDRAM. The DRAM has been configured for a burst of four with a latency of two clock cycles. Bank A is first selected/activated followed by a read command two clock cycles later. The bank select/activate for bank B is next issued as read data begins returning two clocks after the read command was issued to bank A. Two clock cycles before we need to receive data from bank B we issue the read command. Once all 16 bytes have been received from bank A we begin receiving data from bank B.
- External Memory Write Sequencer (Xwr).
- The Xwr sequencer is a slave sequencer. Servicing requests issued by master sequencers, the Xwr sequencer moves data from SRAM to the external SDRAM or flash, via the Xctrl module, in blocks of 32 bytes or less while accumulating a checksum of the data moved. The nature of the SDRAM requires fixed burst sizes for each of its internal banks with ras precharge intervals between each access. By selecting a burst size of 32 bytes for SDRAM writes and interleaving bank accesses on a 16 byte boundary, we can ensure that the ras precharge interval for the first bank is satisfied before burst completion for the second bank, allowing us to re-instruct the first bank and continue with uninterrupted DRAM access. SDRAMs require that a consistent burst size be utilized each and every time the SDRAM is accessed. For this reason, if an SDRAM access does not begin or end on a 32 byte boundary, SDRAM bandwidth will be reduced due to less than 32 bytes of data being transferred during the burst cycle.
- FIG. 61 depicts the major functional blocks of the Xwr sequencer. The first step in servicing a request to move data from SRAM to SDRAM is the prioritization of the level two master requests. Next the Xwr sequencer takes a snapshot of the DRAM write address and applies configuration information to determine the correct DRAM, bank, row and column address to apply. The Xwr sequencer immediately issues a read command to the SRAM to which the SRAM responds with both data and an acknowledge. The Xwr sequencer passes the acknowledge to the level two master along with a size code indicating how much data was read during the SRAM cycle allowing the update of pointers and counters. Once sufficient data has been read from SRAM, the Xwr sequencer issues a write command to the DRAM starting the burst cycle and computing a checksum as the data flies by. The SRAM read cycle repeats until the original burst request has been completed at which point the Xwr sequencer prioritizes any remaining requests in preparation for the next burst cycle.
- Contiguous DRAM burst cycles are not guaranteed to the Xwr sequencer as an algorithm is implemented which ensures highest priority to refresh cycles followed by flash accesses then DRAM writes.
- FIG. 62 is a timing diagram illustrating how data is written to SDRAM. The DRAM has been configured for a burst of four with a latency of two clock cycles. Bank A is first selected/activated followed by a write command two clock cycles later. The bank select/activate for bank B is next issued in preparation for issuing the second write command. As soon as the first 16 byte burst to bank A completes we issue the write command for bank B and begin supplying data.
- PCI Master-Out Sequencer (Pmo).
- The Pmo sequencer (See FIG. 63) acts only as a slave sequencer. Servicing requests issued by master sequencers, the Pmo sequencer moves data from an SRAM based fifo to a Pci target, via the PciMstrIO module, in bursts of up to 256 bytes. The nature of the PCI bus dictates the use of the write line command to ensure optimal system performance. The write line command requires that the Pmo sequencer be capable of transferring a whole multiple (1×, 2×, 3×, . . . ) of cache lines of which the size is set through the Pci configuration registers. To accomplish this end, Pmo will automatically perform partial bursts until it has aligned the transfers on a cache line boundary at which time it will begin usage of the write line command. The SRAM fifo depth, of 256 bytes, has been chosen in order to allow Pmo to accommodate cache line sizes up to 128 bytes. Provided the cache line size is less than 128 bytes, Pmo will perform multiple, contiguous cache line bursts until it has exhausted the supply of data.
- Pmo receives requests from two separate sources: the DRAM to Pci (D2p) module and the SRAM to Pci (S2p) module. An operation first begins with prioritization of the requests where the S2p module is given highest priority. Next, the Pmo module takes a snapshot of the SRAM fifo address and uses this to generate read requests for the SramCtrl sequencer. The Pmo module then proceeds to arbitrate for ownership of the Pci bus via the PciMstrIO module. Once the Pmo holding registers have sufficient data and Pci bus mastership has been granted, the Pmo module begins transferring data to the Pci target. For each successful transfer, Pmo sends an acknowledge and encoded size to the master sequencer, allowing it to update its internal pointers, counters and status. Once the Pci burst transaction has terminated, Pmo parks on the Pci bus unless another initiator has requested ownership. Pmo again prioritizes the incoming requests and repeats the process.
- PCI Master-In Sequencer (Pmi).
- The Pmi sequencer (See FIG. 64) acts only as a slave sequencer. Servicing requests issued by master sequencers, the Pmi sequencer moves data from a Pci target to an SRAM based fifo, via the PciMstrIO module, in bursts of up to 256 bytes. The nature of the PCI bus dictates the use of the read multiple command to ensure optimal system performance. The read multiple command requires that the Pmi sequencer be capable of transferring a cache line or more of data. To accomplish this end, Pmi will automatically perform partial cache line bursts until it has aligned the transfers on a cache line boundary at which time it will begin usage of the read multiple command. The SRAM fifo depth, of 256 bytes, has been chosen in order to allow Pmi to accommodate cache line sizes up to 128 bytes. Provided the cache line size is less than 128 bytes, Pmi will perform multiple, contiguous cache line bursts until it has filled the fifo.
- Pmi receives requests from two separate sources: the Pci to DRAM (P2d) module and the Pci to SRAM (P2s) module. An operation first begins with prioritization of the requests, where the P2s module is given highest priority. The Pmi module then proceeds to arbitrate for ownership of the Pci bus via the PciMstrIO module. Once the Pci bus mastership has been granted and the Pmi holding registers have sufficient data, the Pmi module begins transferring data to the SRAM fifo. For each successful transfer, Pmi sends an acknowledge and encoded size to the master sequencer, allowing it to update its internal pointers, counters and status. Once the Pci burst transaction has terminated, Pmi parks on the Pci bus unless another initiator has requested ownership. Pmi again prioritizes the incoming requests and repeats the process.
- Dram To Pci Sequencer (D2P).
- The D2p sequencer (See FIG. 65) acts as a master sequencer. Servicing channel requests issued by the Cpu, the D2p sequencer manages movement of data from DRAM to the Pci bus by issuing requests to both the Xrd sequencer and the Pmo sequencer. Data transfer is accomplished using an SRAM based fifo through which data is staged.
- D2p can receive requests from any of the processor's thirty-two DMA channels. Once a command request has been detected, D2p fetches a DMA descriptor from an SRAM location dedicated to the requesting channel which includes the DRAM address, Pci address, Pci endian and request size. D2p then issues a request to the Xrd sequencer causing the SRAM based fifo to fill with DRAM data. Once the fifo contains sufficient data for a Pci transaction, D2p issues a request to Pmo which in turn moves data from the fifo to a Pci target. The process repeats until the entire request has been satisfied, at which time D2p writes ending status into the SRAM DMA descriptor area and sets the channel done bit associated with that channel. D2p then monitors the DMA channels for additional requests. FIG. 65 is an illustration showing the major blocks involved in the movement of data from DRAM to a Pci target.
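- A minimal software model of this channel-servicing flow is sketched below; the descriptor layout, helper names and 256-byte staging size are assumptions chosen for illustration, with the Xrd and Pmo requests reduced to stub functions. The same pattern applies, with the direction reversed or the memories swapped, to the P2d, S2p, P2s, D2s and S2d sequencers described in the following paragraphs.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_DMA_CHANNELS 32

/* Hypothetical per-channel DMA descriptor for a DRAM-to-Pci move.  The
 * patent names the fields (DRAM address, Pci address, Pci endian,
 * request size); the layout and status word here are assumptions. */
struct d2p_descriptor {
    uint32_t dram_addr;
    uint64_t pci_addr;
    uint8_t  pci_endian;
    uint32_t size;      /* total bytes requested */
    uint32_t status;    /* ending status written back on completion */
};

static struct d2p_descriptor dma_descr[NUM_DMA_CHANNELS]; /* stands in for SRAM */
static uint32_t channel_request, channel_done;            /* one bit per channel */

/* Stand-ins for requests to the Xrd sequencer (fill the SRAM fifo from
 * DRAM) and to the Pmo sequencer (drain the fifo to the Pci target). */
static void fill_fifo_from_dram(uint32_t dram_addr, uint32_t bytes)
{ printf("  Xrd: read %u bytes at DRAM 0x%x\n", (unsigned)bytes, (unsigned)dram_addr); }

static void drain_fifo_to_pci(uint64_t pci_addr, uint32_t bytes)
{ printf("  Pmo: write %u bytes at PCI 0x%llx\n", (unsigned)bytes,
         (unsigned long long)pci_addr); }

static void d2p_service_channels(void)
{
    for (int ch = 0; ch < NUM_DMA_CHANNELS; ch++) {
        if (!(channel_request & (1u << ch)))
            continue;
        struct d2p_descriptor *d = &dma_descr[ch];
        for (uint32_t moved = 0; moved < d->size; ) {
            uint32_t chunk = d->size - moved > 256 ? 256 : d->size - moved;
            fill_fifo_from_dram(d->dram_addr + moved, chunk);
            drain_fifo_to_pci(d->pci_addr + moved, chunk);
            moved += chunk;
        }
        d->status = 0;                  /* write ending status (success)  */
        channel_done |= 1u << ch;       /* set the channel done bit       */
        channel_request &= ~(1u << ch);
    }
}

int main(void)
{
    dma_descr[3] = (struct d2p_descriptor){ .dram_addr = 0x100000,
                                            .pci_addr  = 0xF0000000u,
                                            .size      = 600 };
    channel_request |= 1u << 3;
    d2p_service_channels();
    printf("channel done mask: 0x%08x\n", (unsigned)channel_done);
    return 0;
}
```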
- Pci to Dram Sequencer (P2d).
- The P2d sequencer (See FIG. 67) acts as both a slave sequencer and a master sequencer. Servicing channel requests issued by the Cpu, the P2d sequencer manages movement of data from Pci bus to DRAM by issuing requests to both the Xwr sequencer and the Pmi sequencer. Data transfer is accomplished using an SRAM based fifo through which data is staged.
- P2d can receive requests from any of the processor's thirty-two DMA channels. Once a command request has been detected, P2d, operating as a slave sequencer, fetches a DMA descriptor from an SRAM location dedicated to the requesting channel which includes the DRAM address, Pci address, Pci endian and request size. P2d then issues a request to Pmi which in turn moves data from the Pci target to the SRAM fifo. Next, P2d issues a request to the Xwr sequencer causing the SRAM based fifo contents to be written to the DRAM. The process repeats until the entire request has been satisfied, at which time P2d writes ending status into the SRAM DMA descriptor area and sets the channel done bit associated with that channel. P2d then monitors the DMA channels for additional requests. FIG. 68 is an illustration showing the major blocks involved in the movement of data from a Pci target to DRAM.
- SRAM to Pci Sequencer (S2p).
- The S2p sequencer (See FIG. 69) acts as both a slave sequencer and a master sequencer. Servicing channel requests issued by the Cpu, the S2p sequencer manages movement of data from SRAM to the Pci bus by issuing requests to the Pmo sequencer.
- S2p can receive requests from any of the processor's thirty-two DMA channels. Once a command request has been detected, S2p, operating as a slave sequencer, fetches a DMA descriptor from an SRAM location dedicated to the requesting channel which includes the SRAM address, Pci address, Pci endian and request size. S2p then issues a request to Pmo which in turn moves data from the SRAM to a Pci target. The process repeats until the entire request has been satisfied, at which time S2p writes ending status into the SRAM DMA descriptor area and sets the channel done bit associated with that channel. S2p then monitors the DMA channels for additional requests. FIG. 70 is an illustration showing the major blocks involved in the movement of data from SRAM to a Pci target.
- PCI to SRAM Sequencer (P2s).
- The P2s sequencer (See FIG. 71) acts as both a slave sequencer and a master sequencer. Servicing channel requests issued by the Cpu, the P2s sequencer manages movement of data from Pci bus to SRAM by issuing requests to the Pmi sequencer.
- P2s can receive requests from any of the processor's thirty-two DMA channels. Once a command request has been detected, P2s, operating as a slave sequencer, fetches a DMA descriptor from an SRAM location dedicated to the requesting channel which includes the SRAM address, Pci address, Pci endian and request size. P2s then issues a request to Pmi which in turn moves data from the Pci target to the SRAM. The process repeats until the entire request has been satisfied, at which time P2s writes ending status into the DMA descriptor area of SRAM and sets the channel done bit associated with that channel. P2s then monitors the DMA channels for additional requests. FIG. 72 is an illustration showing the major blocks involved in the movement of data from a Pci target to SRAM.
- DRAM to SRAM Sequencer (D2s).
- The D2s sequencer (See FIG. 73) acts as both a slave sequencer and a master sequencer. Servicing channel requests issued by the Cpu, the D2s sequencer manages movement of data from DRAM to SRAM by issuing requests to the Xrd sequencer.
- D2s can receive requests from any of the processor's thirty-two DMA channels.
- Once a command request has been detected, D2s, operating as a slave sequencer, fetches a DMA descriptor from an SRAM location dedicated to the requesting channel which includes the DRAM address, SRAM address and request size. D2s then issues a request to the Xrd sequencer causing the transfer of data to the SRAM. The process repeats until the entire request has been satisfied, at which time D2s writes ending status into the SRAM DMA descriptor area and sets the channel done bit associated with that channel. D2s then monitors the DMA channels for additional requests. FIG. 74 is an illustration showing the major blocks involved in the movement of data from DRAM to SRAM.
- SRAM to DRAM Sequencer (S2d).
- The S2d sequencer (See FIG. 75) acts as both a slave sequencer and a master sequencer. Servicing channel requests issued by the Cpu, the S2d sequencer manages movement of data from SRAM to DRAM by issuing requests to the Xwr sequencer.
- S2d can receive requests from any of the processor's thirty-two DMA channels. Once a command request has been detected, S2d, operating as a slave sequencer, fetches a DMA descriptor from an SRAM location dedicated to the requesting channel which includes the DRAM address, SRAM address, checksum reset and request size. S2d then issues a request to the Xwr sequencer causing the transfer of data to the DRAM. The process repeats until the entire request has been satisfied, at which time S2d writes ending status into the SRAM DMA descriptor area and sets the channel done bit associated with that channel. S2d then monitors the DMA channels for additional requests. FIG. 76 is an illustration showing the major blocks involved in the movement of data from SRAM to DRAM.
- Pci slave input sequencer (Psi).
- The Psi sequencer (See FIG. 77) acts as both a slave sequencer and a master sequencer. Servicing requests issued by a Pci master, the Psi sequencer manages movement of data from Pci bus to SRAM and Pci bus to DRAM via SRAM by issuing requests to the SramCtrl and Xwr sequencers.
- Psi manages write requests to configuration space, expansion rom, DRAM, SRAM and memory mapped registers. Psi separates these Pci bus operations into two categories with different action taken for each. DRAM accesses result in Psi generating a write request to an SRAM buffer followed by a write request to the Xwr sequencer. Subsequent write or read DRAM operations are retry terminated until the buffer has been emptied. An event notification is set for the processor allowing message passing to occur through DRAM space.
- All other Pci write transactions result in Psi posting the write information, including Pci address, Pci byte marks and Pci data, to a reserved location in SRAM, then setting an event flag which the event processor monitors. Subsequent writes or reads of configuration, expansion rom, SRAM or registers are terminated with retry until the processor clears the event flag. This keeps INIC pipelining levels to a minimum for the posted write and gives the processor ample time to modify data for subsequent Pci read operations.
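- The posted-write handshake described above can be modeled roughly as follows; the record layout and function names are assumptions, and the accept/retry result stands in for the Pci target's retry termination.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical posted-write record and event flag used to illustrate how
 * Psi hands a non-DRAM Pci write to the on-chip processor. */
struct posted_write {
    uint32_t pci_addr;
    uint8_t  byte_marks;    /* Pci byte enables */
    uint32_t data;
};

static struct posted_write psi_post;  /* reserved SRAM location (modeled) */
static volatile bool psi_event_flag;  /* monitored by the event processor */

enum target_response { TARGET_ACCEPT, TARGET_RETRY };

/* Target-side handling of a configuration/rom/SRAM/register write. */
static enum target_response psi_slave_write(uint32_t addr, uint8_t be, uint32_t data)
{
    if (psi_event_flag)          /* previous posted write not yet consumed */
        return TARGET_RETRY;     /* terminate this transaction with retry  */

    psi_post.pci_addr   = addr;  /* post address, byte marks and data      */
    psi_post.byte_marks = be;
    psi_post.data       = data;
    psi_event_flag = true;       /* notify the processor                   */
    return TARGET_ACCEPT;
}

/* Processor side: consume the posted write, then clear the event flag so
 * that subsequent accesses are no longer retried. */
static void event_processor_poll(void)
{
    if (!psi_event_flag)
        return;
    /* ... apply psi_post to the addressed register or SRAM word ... */
    psi_event_flag = false;
}

int main(void)
{
    printf("%d\n", psi_slave_write(0x40, 0xF, 0x11111111)); /* 0: accepted */
    printf("%d\n", psi_slave_write(0x44, 0xF, 0x22222222)); /* 1: retried  */
    event_processor_poll();
    printf("%d\n", psi_slave_write(0x44, 0xF, 0x22222222)); /* 0: accepted */
    return 0;
}
```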
- FIG. 77 depicts the sequence of events when Psi is the target of a Pci write operation. Note that events 4 through 7 occur only when the write operation targets the DRAM.
- Pci slave output sequencer (Pso).
- The Pso sequencer (See FIG. 78) acts as both a slave sequencer and a master sequencer. Servicing requests issued by a Pci master, the Pso sequencer manages movement of data to Pci bus from SRAM and to Pci bus from DRAM via SRAM by issuing requests to the SramCtrl and Xrd sequencers.
- Pso manages read requests to configuration space, expansion rom, DRAM, SRAM and memory mapped registers. Pso separates these Pci bus operations into two categories with different action taken for each. DRAM accesses result in Pso generating a read request to the Xrd sequencer followed by a read request to the SRAM buffer. Subsequent write or read DRAM operations are retry terminated until the buffer has been emptied.
- All other Pci read transactions result in Pso posting the read request information, including Pci address and Pci byte marks, to a reserved location in SRAM, then setting an event flag which the event processor monitors. Subsequent writes or reads of configuration, expansion rom, SRAM or registers are terminated with retry until the processor clears the event flag. This allows the INIC to use a microcoded response mechanism to return data for the request. The processor decodes the request information, formulates or fetches the requested data and stores it in SRAM, then clears the event flag, allowing Pso to fetch the data and return it on the Pci bus.
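- A rough model of Pso's delayed-read handshake is sketched below; the state variables and function names are assumptions, and the microcoded processor response is reduced to a single stub.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical delayed-read state illustrating Pso's microcoded response
 * path; the variable names and two-flag encoding are assumptions. */
static struct { uint32_t pci_addr; uint8_t byte_marks; } pso_request; /* in SRAM */
static uint32_t pso_read_data;        /* staged in SRAM by the processor  */
static volatile bool pso_event_flag;  /* set by Pso, cleared by processor */
static bool pso_pending;              /* a read request has been posted   */

enum read_status { READ_RETRY, READ_DATA_READY };

/* Target-side handling of a configuration/rom/SRAM/register read. */
static enum read_status pso_slave_read(uint32_t addr, uint8_t be, uint32_t *out)
{
    if (!pso_pending) {              /* first attempt: post the request   */
        pso_request.pci_addr   = addr;
        pso_request.byte_marks = be;
        pso_event_flag = true;       /* notify the event processor        */
        pso_pending = true;
        return READ_RETRY;           /* terminate with retry              */
    }
    if (pso_event_flag)              /* processor has not responded yet   */
        return READ_RETRY;
    *out = pso_read_data;            /* fetch the staged data, return it  */
    pso_pending = false;
    return READ_DATA_READY;
}

/* Processor side: decode the posted request, formulate or fetch the
 * requested data, store it in SRAM, then clear the event flag. */
static void service_pso_read(uint32_t value)
{
    pso_read_data = value;
    pso_event_flag = false;
}

int main(void)
{
    uint32_t v = 0;
    printf("%d\n", pso_slave_read(0x10, 0xF, &v));  /* 0: retried, posted */
    service_pso_read(0xCAFE);
    printf("%d 0x%x\n", pso_slave_read(0x10, 0xF, &v), (unsigned)v); /* 1 0xcafe */
    return 0;
}
```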
- FIG. 78 depicts the sequence of events when Pso is the target of a Pci read operation.
- Frame Receive Sequencer (RcvX).
- The receive sequencer (See FIG. 79) (RcvSeq) analyzes and manages incoming packets, stores the result in DRAM buffers, then notifies the processor through the receive queue (RcvQ) mechanism. The process begins when a buffer descriptor is available at the output of the FreeQ. RcvSeq issues a request to the Qmg which responds by supplying the buffer descriptor to RcvSeq. RcvSeq then waits for a receive packet. The Mac, network, transport and session information is analyzed as each byte is received and stored in the assembly register (AssyReg). When four bytes of information are available, RcvSeq requests a write of the data to the SRAM. When sufficient data has been stored in the SRAM based receive fifo, a DRAM write request is issued to Xwr. The process continues until the entire packet has been received, at which point RcvSeq stores the results of the packet analysis at the beginning of the DRAM buffer. Once the buffer and status have both been stored, RcvSeq issues a write-queue request to Qmg. Qmg responds by storing a buffer descriptor and a status vector provided by RcvSeq. The process then repeats. If RcvSeq detects the arrival of a packet before a free buffer is available, it ignores the packet and sets the FrameLost status bit for the next received packet.
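- The receive flow can be summarized in the following C sketch; the buffer layout, queue helper and analysis word are assumptions, and only the order of operations (free buffer, analyze, store data then status, queue descriptor, FrameLost handling) follows the description above.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Analysis results stored at the start of the DRAM buffer (illustrative). */
struct frame_status { uint32_t analysis; bool frame_lost; };

static uint8_t dram_buffer[4096];     /* stands in for one DRAM buffer      */
static bool    free_buffer = true;    /* stands in for the FreeQ            */
static bool    frame_lost;            /* reported with the next good frame  */

static void rcvq_push(struct frame_status st)   /* stands in for Qmg/RcvQ   */
{ printf("RcvQ: analysis=0x%x lost=%d\n", (unsigned)st.analysis, st.frame_lost); }

static void rcv_one_frame(const uint8_t *frame, size_t len, uint32_t analysis)
{
    if (!free_buffer) {               /* no buffer: ignore frame, note the loss */
        frame_lost = true;
        return;
    }
    struct frame_status st = { .analysis = analysis, .frame_lost = frame_lost };
    frame_lost = false;
    memcpy(dram_buffer + sizeof st, frame, len);  /* packet data after status */
    memcpy(dram_buffer, &st, sizeof st);          /* analysis at buffer start */
    rcvq_push(st);                                /* notify the processor     */
}

int main(void)
{
    uint8_t pkt[64] = { 0 };
    rcv_one_frame(pkt, sizeof pkt, 0x1234);
    return 0;
}
```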
- FIG. 80 depicts the sequence of events for successful reception of a packet followed by a definition of the receive buffer and the buffer descriptor as stored on the RcvQ. FIG. 90 shows the Receive Buffer Descriptor. FIGS. 91-93 show the Receive Buffer Format.
- Frame Transmit Sequencer (XmtX).
- The transmit sequencer (See FIG. 85) (XmtSeq) analyzes and manages outgoing packets, using buffer descriptors retrieved from the transmit queue (XmtQ), then storing the descriptor for the freed buffer in the free buffer queue (FreeQ). The process begins when a buffer descriptor is available at the output of the XmtQ. XmtSeq issues a request to the Qmg which responds by supplying the buffer descriptor to XmtSeq. XmtSeq then issues a read request to the Xrd sequencer. Next, XmtSeq issues a read request to SramCtrl, then instructs the Mac to begin frame transmission. Once the frame transmission has completed, XmtSeq stores the buffer descriptor on the FreeQ, thereby recycling the buffer.
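- A corresponding sketch of the transmit flow is given below; the queue helpers and the stubs for the Xrd read and Mac transmission are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* Minimal sketch of the transmit flow: pop a buffer descriptor from the
 * transmit queue, fetch the frame from DRAM, hand it to the Mac, then
 * recycle the descriptor on the free queue.  The queue helpers and stubs
 * are illustrative stand-ins, not the patent's interfaces. */
static uint32_t xmtq[16], freeq[16];
static int xmtq_n, freeq_n;

static int  xmtq_pop(uint32_t *d)  { return xmtq_n ? (*d = xmtq[--xmtq_n], 1) : 0; }
static void freeq_push(uint32_t d) { freeq[freeq_n++] = d; }
static void xrd_fetch_frame(uint32_t d) { printf("Xrd: fetch frame for buffer %u\n", (unsigned)d); }
static void mac_transmit(uint32_t d)    { printf("Mac: transmit buffer %u\n", (unsigned)d); }

static void xmt_seq_once(void)
{
    uint32_t descr;
    if (!xmtq_pop(&descr))        /* wait for a descriptor on the XmtQ    */
        return;
    xrd_fetch_frame(descr);       /* read request to Xrd, staged via SRAM */
    mac_transmit(descr);          /* instruct the Mac to send the frame   */
    freeq_push(descr);            /* recycle the buffer on the FreeQ      */
}

int main(void)
{
    xmtq[xmtq_n++] = 7;
    xmt_seq_once();
    printf("free buffers queued: %d\n", freeq_n);
    return 0;
}
```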
- FIG. 86 depicts the sequence of events for successful transmission of a packet followed by a definition of the transmit buffer and the buffer descriptor as stored on the XmtQ. FIG. 87 shows the Transmit Buffer Descriptor. FIG. 88 shows the Transmit Buffer Format. FIG. 89 shows the Transmit Status Vector.
- Queue Manager (Qmg).
- The INIC includes special hardware assist for the implementation of message and pointer queues. The hardware assist is called the queue manager (See FIG. 90) (Qmg) and manages the movement of queue entries between Cpu and SRAM, between DMA sequencers and SRAM, as well as between SRAM and DRAM. Queues comprise three distinct entities: the queue head (QHd), the queue tail (QTl) and the queue body (QBdy). QHd resides in 64 bytes of scratch ram and provides the area to which entries will be written (pushed). QTl resides in 64 bytes of scratch ram and contains queue locations from which entries will be read (popped). QBdy resides in DRAM and contains locations for expansion of the queue in order to minimize the SRAM space requirements. The QBdy size depends upon the queue being accessed and the initialization parameters presented during queue initialization.
- Qmg accepts operations from both Cpu and DMA sources (See FIG. 91). Executing these operations at a frequency of 133 MHz, Qmg reserves even cycles for DMA requests and reserves odd cycles for Cpu requests. Valid Cpu operations include initialize queue (InitQ), write queue (WrQ) and read queue (RdQ). Valid DMA requests include read body (RdBdy) and write body (WrBdy). Qmg, working in unison with Q2d and D2q, generates requests to the Xwr and Xrd sequencers to control the movement of data between the QHd, QTl and QBdy.
- FIG. 90 shows the major functions of Qmg. The arbiter selects the next operation to be performed. The dual-ported SRAM holds the queue variables HdWrAddr, HdRdAddr, TlWrAddr, TlRdAddr, BdyWrAddr, BdyRdAddr and QSz. Qmg accepts an operation request, fetches the queue variables from the queue ram (Qram), modifies the variables based on the current state and the requested operation, then updates the variables and issues a read or write request to the SRAM controller. The SRAM controller services the requests by writing the tail or reading the head and returning an acknowledge.
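- The three-part queue can be modeled in software as shown below. The head and tail sizes follow the 64-byte scratch ram figure (sixteen 32-bit entries), while the body size and the spill/refill policy are assumptions standing in for the Q2d/D2q and Xwr/Xrd paths.

```c
#include <stdint.h>
#include <stdio.h>

#define HEAD_ENTRIES 16    /* 64 bytes of scratch ram / 4-byte entries   */
#define TAIL_ENTRIES 16
#define BODY_ENTRIES 1024  /* QBdy in DRAM; the real size is set at InitQ */

struct queue {
    uint32_t head[HEAD_ENTRIES]; int head_n;            /* QHd: push side */
    uint32_t tail[TAIL_ENTRIES]; int tail_n;            /* QTl: pop side  */
    uint32_t body[BODY_ENTRIES]; int body_rd, body_wr;  /* QBdy ring      */
};

/* WrQ: push an entry into the head; spill the head to the body (WrBdy)
 * when it fills, as the Qmg/Q2d path would via the Xwr sequencer. */
static void wrq(struct queue *q, uint32_t entry)
{
    if (q->head_n == HEAD_ENTRIES) {           /* spill head to DRAM body */
        for (int i = 0; i < q->head_n; i++)
            q->body[q->body_wr++ % BODY_ENTRIES] = q->head[i];
        q->head_n = 0;
    }
    q->head[q->head_n++] = entry;
}

/* RdQ: pop from the tail; refill the tail from the body (RdBdy), or
 * directly from the head when the body is empty. */
static int rdq(struct queue *q, uint32_t *entry)
{
    if (q->tail_n == 0) {
        while (q->tail_n < TAIL_ENTRIES && q->body_rd != q->body_wr)
            q->tail[q->tail_n++] = q->body[q->body_rd++ % BODY_ENTRIES];
        while (q->tail_n < TAIL_ENTRIES && q->head_n > 0) {   /* body empty */
            q->tail[q->tail_n++] = q->head[0];
            for (int i = 1; i < q->head_n; i++) q->head[i - 1] = q->head[i];
            q->head_n--;
        }
    }
    if (q->tail_n == 0) return -1;             /* queue empty */
    *entry = q->tail[0];
    for (int i = 1; i < q->tail_n; i++) q->tail[i - 1] = q->tail[i];
    q->tail_n--;
    return 0;
}

int main(void)
{
    static struct queue q;
    for (uint32_t i = 0; i < 40; i++) wrq(&q, i);   /* overflows into the body */
    uint32_t v;
    while (rdq(&q, &v) == 0) printf("%u ", (unsigned)v);  /* 0 1 2 ... 39 */
    printf("\n");
    return 0;
}
```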
- DMA Operations.
- DMA operations are accomplished through a combination of thirty-two DMA channels (DmaCh) and seven DMA sequencers (DmaSeq). Each DMA channel provides a mechanism whereby a Cpu can issue a command to any of the seven DMA sequencers. Whereas the DMA channels are multi-purpose, the DMA sequencers they command are single purpose as shown in FIG. 92.
- The processors manage DMA in the following way. The processor writes a DMA descriptor to an SRAM location reserved for the DMA channel. The format of the DMA descriptor is dependent upon the targeted DMA sequencer. The processor then writes the DMA sequencer number to the channel command register.
- Each of the DMA sequencers polls all thirty-two DMA channels in search of commands to execute. Once a command request has been detected, the DMA sequencer fetches a DMA descriptor from a fixed location in SRAM. The SRAM location is fixed and is determined by the DMA channel number. The DMA sequencer loads the DMA descriptor into its own registers, executes the command, then overwrites the DMA descriptor with ending status. Once the command has halted, due to completion or error, and the ending status has been written, the DMA sequencer sets the done bit for the current DMA channel.
- The done bit appears in a DMA event register which the Cpu can examine. The Cpu fetches ending status from SRAM, then clears the done bit by writing zeroes to the channel command (ChCmd) register. The channel is now ready to accept another command.
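- The command/done handshake described above is summarized in the following sketch; the register names, descriptor size and the software model of the hardware side are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_DMA_CHANNELS 32
#define DESCR_WORDS      4

/* Stand-ins for the SRAM descriptor area, the channel command registers
 * and the DMA event register; addresses and widths are assumptions. */
static uint32_t sram_descr[NUM_DMA_CHANNELS][DESCR_WORDS];
static uint32_t ch_cmd[NUM_DMA_CHANNELS];   /* ChCmd registers           */
static volatile uint32_t ch_event;          /* one done bit per channel  */

/* Hypothetical model of the hardware side: the addressed sequencer runs
 * the command, overwrites the descriptor with ending status and sets the
 * done bit. */
static void dma_hardware_run(int ch)
{
    sram_descr[ch][0] = 0;                  /* ending status: success    */
    ch_event |= 1u << ch;                   /* set the channel done bit  */
}

/* Processor-side sequence described above: write the descriptor, name
 * the target DMA sequencer, wait for done, fetch status, clear done. */
static uint32_t issue_dma(int ch, int seq_number, const uint32_t *descr)
{
    for (int i = 0; i < DESCR_WORDS; i++)   /* descriptor into SRAM slot  */
        sram_descr[ch][i] = descr[i];
    ch_cmd[ch] = (uint32_t)seq_number;      /* start the command          */
    dma_hardware_run(ch);                   /* (hardware acts here)       */
    while (!(ch_event & (1u << ch)))        /* poll the DMA event register */
        ;
    uint32_t status = sram_descr[ch][0];    /* fetch ending status        */
    ch_cmd[ch] = 0;                         /* write zeroes to ChCmd...   */
    ch_event &= ~(1u << ch);                /* ...which clears the done bit
                                               (modeled here in software) */
    return status;
}

int main(void)
{
    uint32_t d[DESCR_WORDS] = { 0x100000, 0x2000, 512, 0 };
    printf("ending status: %u\n", (unsigned)issue_dma(5, 3, d));
    return 0;
}
```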
- The format of the channel command register is as shown in FIG. 93. The format of the P2d or P2s descriptor is as shown in FIG. 94. The format of the S2p or D2p descriptor is as shown in FIG. 95. The format of the S2d, D2d or D2s descriptor is as shown in FIG. 96. The format of the ending status of all channels is as shown in FIG. 97. The format of the ChEvnt register is as shown in FIG. 98. FIG. 99 is a block diagram of MAC CONTROL (Macctrl).
- Load Calculations.
- The following load calculations are based on these basic formulae:
- N=X*R (Little's Law) where:
- N=number of jobs in the system (either in progress or in a queue),
- X=system throughput,
- R=response time (which includes time waiting in queues).
- U=X*S (from Little's Law) where:
- S=service time,
- U=utilization.
- R=S/(1−U) for exponential service times (which is the worst-case assumption).
- A 256-byte frame at 100 Mb/sec takes 20 usec per frame.
- 4*100 Mbit ethernets receiving at full frame rate give:
- 51200 (4*12800) frames/sec @ 1024 bytes/frame,
- 102000 frames/sec @ 512 bytes/frame,
- 204000 frames/sec @ 256 bytes/frame.
- The following calculations assume 250 instructions/frame and a 45 nsec clock. Thus S=250*45 nsecs ≈ 11.2 usecs.
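- As a cross-check of the first row of the table that follows, a small C routine applying the formulae above (with the document's rounding of S to 11.2 usecs) is shown below; the function and structure names are arbitrary.

```c
#include <stdio.h>

/* U = X*S, R = S/(1-U) under the worst-case exponential service
 * assumption, N = X*R, with S and R in microseconds and X in frames/sec. */
struct load { double U, R_usec, N; };

static struct load load_calc(double X_frames_per_sec, double S_usec)
{
    struct load l;
    l.U = X_frames_per_sec * S_usec * 1e-6;
    l.R_usec = (l.U < 1.0) ? S_usec / (1.0 - l.U) : -1.0;  /* -1 marks saturation */
    l.N = (l.U < 1.0) ? X_frames_per_sec * l.R_usec * 1e-6 : -1.0;
    return l;
}

int main(void)
{
    /* 250 instructions/frame at 45 nsec (rounded to 11.2 usecs as in the
     * text), with 1024-byte frames arriving at 51200 frames/sec. */
    struct load l = load_calc(51200.0, 11.2);
    printf("U=%.2f  R=%.0f usecs  N=%.1f\n", l.U, l.R_usec, l.N); /* 0.57, 26, 1.3 */
    return 0;
}
```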
Av Frame Size (bytes) | Thruput (X) frames/sec | Utilization (U) | Response (R) usecs | Nbr in system (N)
---|---|---|---|---
1024 | 51200 | 0.57 | 26 | 1.3
512 | 102000 | >1 | — | —
256 | 204000 | >1 | — | —

- Look at it for varying instructions per frame, assuming 512 bytes per frame average.
Instns Per Frame | Service Time (S) usecs | Thruput (X) frames/sec | Utilization (U) | Response (R) usecs | Nbr in system (N)
---|---|---|---|---|---
250 | 11.2 | 102000 | >1 | — | —
250 | 11.2 | 85000 (*) | 0.95 | 224 | 19
250 | 11.2 | 80000 (**) | 0.89 | 101 | 8
225 | 10 | 102000 | 1.0 | — | —
225 | 10 | 95000 (*) | 0.95 | 200 | 19
225 | 10 | 89000 (**) | 0.89 | 90 | 8
200 | 9 | 102000 | 0.9 | 90 | 9
150 | 6.7 | 102000 | 0.68 | 20 | 2

- If 100 instructions/frame is used, S=100*45 nsecs=4.5 usecs, and we can support 256 byte frames:

Instns Per Frame | Service Time (S) usecs | Thruput (X) frames/sec | Utilization (U) | Response (R) usecs | Nbr in system (N)
---|---|---|---|---|---
100 | 4.5 | 204000 | 0.91 | 50 | 10

- Note that these calculations assume that response times increase exponentially as utilization increases. This is the worst-case assumption, and probably may not be true for our system. The figures show that to support a theoretical full 4*100 Mbit receive load with an average frame size of 512 bytes, there will need to be 19 active "jobs" in the system, assuming 250 instructions per frame. Due to SRAM limitations, the current design specifies 8 SRAM buffers for active TCBs, and does not swap a TCB out of SRAM once it is active. So under these limitations, the INIC will not be able to keep up with the full frame rate. Note that the initial implementation is trying to use only 8 KB of SRAM, although 16 KB may be available, in which case 19 TCB SRAM buffers could be used. This is a cost trade-off. The real point here is the effect of instructions/frame on the throughput that can be maintained. If the instructions/frame drops to 200, then the INIC is capable of handling the full theoretical load (102000 frames/second) with only 9 active TCBs. If it drops to 100 instructions per frame, then the INIC can handle full bandwidth at 256 byte frames (204000 frames/second) with 10 active CCBs. The bottom line is that ALL hardware-assist that reduces the instructions/frame is really worthwhile. If header-assist hardware can save us 50 instructions per frame, then it goes straight to the throughput bottom line.
Claims (4)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/804,553 US6393487B2 (en) | 1997-10-14 | 2001-03-12 | Passing a communication control block to a local device such that a message is processed on the device |
US09/970,124 US7124205B2 (en) | 1997-10-14 | 2001-10-02 | Network interface device that fast-path processes solicited session layer read commands |
US11/582,199 US7664883B2 (en) | 1998-08-28 | 2006-10-16 | Network interface device that fast-path processes solicited session layer read commands |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US6180997P | 1997-10-14 | 1997-10-14 | |
US09/067,544 US6226680B1 (en) | 1997-10-14 | 1998-04-27 | Intelligent network interface system method for protocol processing |
US09/439,603 US6247060B1 (en) | 1997-10-14 | 1999-11-12 | Passing a communication control block from host to a local device such that a message is processed on the device |
US09/748,936 US6334153B2 (en) | 1997-10-14 | 2000-12-26 | Passing a communication control block from host to a local device such that a message is processed on the device |
US09/804,553 US6393487B2 (en) | 1997-10-14 | 2001-03-12 | Passing a communication control block to a local device such that a message is processed on the device |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/748,936 Continuation US6334153B2 (en) | 1997-10-14 | 2000-12-26 | Passing a communication control block from host to a local device such that a message is processed on the device |
US09/802,550 Continuation-In-Part US6658480B2 (en) | 1997-10-14 | 2001-03-09 | Intelligent network interface system and method for accelerated protocol processing |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/855,979 Continuation-In-Part US7133940B2 (en) | 1997-10-14 | 2001-05-14 | Network interface device employing a DMA command queue |
US09/970,124 Continuation US7124205B2 (en) | 1997-10-14 | 2001-10-02 | Network interface device that fast-path processes solicited session layer read commands |
Publications (2)
Publication Number | Publication Date |
---|---|
US20010027496A1 true US20010027496A1 (en) | 2001-10-04 |
US6393487B2 US6393487B2 (en) | 2002-05-21 |
Family
ID=26741512
Family Applications (10)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/067,544 Expired - Lifetime US6226680B1 (en) | 1997-10-14 | 1998-04-27 | Intelligent network interface system method for protocol processing |
US09/439,603 Expired - Lifetime US6247060B1 (en) | 1997-10-14 | 1999-11-12 | Passing a communication control block from host to a local device such that a message is processed on the device |
US09/692,561 Active 2025-03-21 US8631140B2 (en) | 1997-10-14 | 2000-10-18 | Intelligent network interface system and method for accelerated protocol processing |
US09/748,936 Expired - Lifetime US6334153B2 (en) | 1997-10-14 | 2000-12-26 | Passing a communication control block from host to a local device such that a message is processed on the device |
US09/804,553 Expired - Lifetime US6393487B2 (en) | 1997-10-14 | 2001-03-12 | Passing a communication control block to a local device such that a message is processed on the device |
US09/970,124 Expired - Lifetime US7124205B2 (en) | 1997-10-14 | 2001-10-02 | Network interface device that fast-path processes solicited session layer read commands |
US10/023,240 Expired - Lifetime US6965941B2 (en) | 1997-10-14 | 2001-12-17 | Transmit fast-path processing on TCP/IP offload network interface device |
US14/038,297 Expired - Lifetime US8805948B2 (en) | 1997-10-14 | 2013-09-26 | Intelligent network interface system and method for protocol processing |
US14/040,179 Expired - Fee Related US8856379B2 (en) | 1997-10-14 | 2013-09-27 | Intelligent network interface system and method for protocol processing |
US14/507,710 Expired - Fee Related US9307054B2 (en) | 1997-10-14 | 2014-10-06 | Intelligent network interface system and method for accelerated protocol processing |
Family Applications Before (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/067,544 Expired - Lifetime US6226680B1 (en) | 1997-10-14 | 1998-04-27 | Intelligent network interface system method for protocol processing |
US09/439,603 Expired - Lifetime US6247060B1 (en) | 1997-10-14 | 1999-11-12 | Passing a communication control block from host to a local device such that a message is processed on the device |
US09/692,561 Active 2025-03-21 US8631140B2 (en) | 1997-10-14 | 2000-10-18 | Intelligent network interface system and method for accelerated protocol processing |
US09/748,936 Expired - Lifetime US6334153B2 (en) | 1997-10-14 | 2000-12-26 | Passing a communication control block from host to a local device such that a message is processed on the device |
Family Applications After (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/970,124 Expired - Lifetime US7124205B2 (en) | 1997-10-14 | 2001-10-02 | Network interface device that fast-path processes solicited session layer read commands |
US10/023,240 Expired - Lifetime US6965941B2 (en) | 1997-10-14 | 2001-12-17 | Transmit fast-path processing on TCP/IP offload network interface device |
US14/038,297 Expired - Lifetime US8805948B2 (en) | 1997-10-14 | 2013-09-26 | Intelligent network interface system and method for protocol processing |
US14/040,179 Expired - Fee Related US8856379B2 (en) | 1997-10-14 | 2013-09-27 | Intelligent network interface system and method for protocol processing |
US14/507,710 Expired - Fee Related US9307054B2 (en) | 1997-10-14 | 2014-10-06 | Intelligent network interface system and method for accelerated protocol processing |
Country Status (1)
Country | Link |
---|---|
US (10) | US6226680B1 (en) |
Cited By (61)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6426944B1 (en) * | 1998-12-30 | 2002-07-30 | At&T Corp | Method and apparatus for controlling data messages across a fast packet network |
US20020112085A1 (en) * | 2000-12-21 | 2002-08-15 | Berg Mitchell T. | Method and system for communicating an information packet through multiple networks |
US20020112087A1 (en) * | 2000-12-21 | 2002-08-15 | Berg Mitchell T. | Method and system for establishing a data structure of a connection with a client |
US20020120761A1 (en) * | 2000-12-21 | 2002-08-29 | Berg Mitchell T. | Method and system for executing protocol stack instructions to form a packet for causing a computing device to perform an operation |
US6463035B1 (en) * | 1998-12-30 | 2002-10-08 | At&T Corp | Method and apparatus for initiating an upward signaling control channel in a fast packet network |
US6526446B1 (en) * | 1999-04-27 | 2003-02-25 | 3Com Corporation | Hardware only transmission control protocol segmentation for a high performance network interface card |
US20030046511A1 (en) * | 2001-08-30 | 2003-03-06 | Buch Deep K. | Multiprocessor-scalable streaming data server arrangement |
US20030084185A1 (en) * | 2001-10-30 | 2003-05-01 | Microsoft Corporation | Apparatus and method for scaling TCP off load buffer requirements by segment size |
US20030127185A1 (en) * | 2002-01-04 | 2003-07-10 | Bakly Walter N. | Method for applying retroreflective target to a surface |
US20030204631A1 (en) * | 2002-04-30 | 2003-10-30 | Microsoft Corporation | Method to synchronize and upload an offloaded network stack connection with a network stack |
US20030204634A1 (en) * | 2002-04-30 | 2003-10-30 | Microsoft Corporation | Method to offload a network stack |
US20030222216A1 (en) * | 2000-03-22 | 2003-12-04 | Walkenstein Jonathan A. | Low light imaging device |
US20040003007A1 (en) * | 2002-06-28 | 2004-01-01 | Prall John M. | Windows management instrument synchronized repository provider |
US6675218B1 (en) * | 1998-08-14 | 2004-01-06 | 3Com Corporation | System for user-space network packet modification |
US20040213290A1 (en) * | 1998-06-11 | 2004-10-28 | Johnson Michael Ward | TCP/IP/PPP modem |
US20040249881A1 (en) * | 2003-06-05 | 2004-12-09 | Jha Ashutosh K. | Transmitting commands and information between a TCP/IP stack and an offload unit |
US20050271042A1 (en) * | 2000-11-10 | 2005-12-08 | Michael Johnson | Internet modem streaming socket method |
US20060004904A1 (en) * | 2004-06-30 | 2006-01-05 | Intel Corporation | Method, system, and program for managing transmit throughput for a network controller |
US20060104308A1 (en) * | 2004-11-12 | 2006-05-18 | Microsoft Corporation | Method and apparatus for secure internet protocol (IPSEC) offloading with integrated host protocol stack management |
US20060221961A1 (en) * | 2005-04-01 | 2006-10-05 | International Business Machines Corporation | Network communications for operating system partitions |
US20060221977A1 (en) * | 2005-04-01 | 2006-10-05 | International Business Machines Corporation | Method and apparatus for providing a network connection table |
US20060221953A1 (en) * | 2005-04-01 | 2006-10-05 | Claude Basso | Method and apparatus for blind checksum and correction for network transmissions |
US20060281451A1 (en) * | 2005-06-14 | 2006-12-14 | Zur Uri E | Method and system for handling connection setup in a network |
US20070061470A1 (en) * | 2000-12-21 | 2007-03-15 | Berg Mitchell T | Method and system for selecting a computing device for maintaining a client session in response to a request packet |
US20070064725A1 (en) * | 2001-01-26 | 2007-03-22 | Minami John S | System, method, and computer program product for multi-mode network interface operation |
US20070086360A1 (en) * | 2000-12-21 | 2007-04-19 | Berg Mitchell T | Method and system for communicating an information packet through multiple router devices |
US20070253430A1 (en) * | 2002-04-23 | 2007-11-01 | Minami John S | Gigabit Ethernet Adapter |
US7324547B1 (en) | 2002-12-13 | 2008-01-29 | Nvidia Corporation | Internet protocol (IP) router residing in a processor chipset |
US20080056124A1 (en) * | 2003-06-05 | 2008-03-06 | Sameer Nanda | Using TCP/IP offload to accelerate packet filtering |
US20080089358A1 (en) * | 2005-04-01 | 2008-04-17 | International Business Machines Corporation | Configurable ports for a host ethernet adapter |
US7362772B1 (en) | 2002-12-13 | 2008-04-22 | Nvidia Corporation | Network processing pipeline chipset for routing and host packet processing |
US7469295B1 (en) * | 2001-06-25 | 2008-12-23 | Network Appliance, Inc. | Modified round robin load balancing technique based on IP identifier |
US20080317027A1 (en) * | 2005-04-01 | 2008-12-25 | International Business Machines Corporation | System for reducing latency in a host ethernet adapter (hea) |
US7492771B2 (en) | 2005-04-01 | 2009-02-17 | International Business Machines Corporation | Method for performing a packet header lookup |
US7586936B2 (en) | 2005-04-01 | 2009-09-08 | International Business Machines Corporation | Host Ethernet adapter for networking offload in server environment |
US20090241001A1 (en) * | 2004-12-08 | 2009-09-24 | Electronics And Telecommunications Research Institute | Retransmission and delayed ack timer management logic for tcp protocol |
US7606166B2 (en) | 2005-04-01 | 2009-10-20 | International Business Machines Corporation | System and method for computing a blind checksum in a host ethernet adapter (HEA) |
US7698413B1 (en) | 2004-04-12 | 2010-04-13 | Nvidia Corporation | Method and apparatus for accessing and maintaining socket control information for high speed network connections |
US7706409B2 (en) | 2005-04-01 | 2010-04-27 | International Business Machines Corporation | System and method for parsing, filtering, and computing the checksum in a host Ethernet adapter (HEA) |
US7899913B2 (en) | 2003-12-19 | 2011-03-01 | Nvidia Corporation | Connection management system and method for a transport offload engine |
US7903687B2 (en) | 2005-04-01 | 2011-03-08 | International Business Machines Corporation | Method for scheduling, writing, and reading data inside the partitioned buffer of a switch, router or packet processing device |
US7957379B2 (en) | 2004-10-19 | 2011-06-07 | Nvidia Corporation | System and method for processing RX packets in high speed network applications using an RX FIFO buffer |
US7962654B2 (en) | 2000-04-17 | 2011-06-14 | Circadence Corporation | System and method for implementing application functionality within a network infrastructure |
US8024481B2 (en) | 2000-04-17 | 2011-09-20 | Circadence Corporation | System and method for reducing traffic and congestion on distributed interactive simulation networks |
US8065439B1 (en) * | 2003-12-19 | 2011-11-22 | Nvidia Corporation | System and method for using metadata in the context of a transport offload engine |
US8065399B2 (en) | 2000-04-17 | 2011-11-22 | Circadence Corporation | Automated network infrastructure test and diagnostic system and method therefor |
US8135842B1 (en) | 1999-08-16 | 2012-03-13 | Nvidia Corporation | Internet jack |
US8176545B1 (en) | 2003-12-19 | 2012-05-08 | Nvidia Corporation | Integrated policy checking system and method |
US8195823B2 (en) | 2000-04-17 | 2012-06-05 | Circadence Corporation | Dynamic network link acceleration |
US8218555B2 (en) | 2001-04-24 | 2012-07-10 | Nvidia Corporation | Gigabit ethernet adapter |
US8234399B2 (en) | 2003-05-29 | 2012-07-31 | Seagate Technology Llc | Method and apparatus for automatic phy calibration based on negotiated link speed |
US8244916B1 (en) * | 2002-02-14 | 2012-08-14 | Marvell International Ltd. | Method and apparatus for enabling a network interface to support multiple networks |
US8510468B2 (en) | 2000-04-17 | 2013-08-13 | Ciradence Corporation | Route aware network link acceleration |
US8549170B2 (en) | 2003-12-19 | 2013-10-01 | Nvidia Corporation | Retransmission system and method for a transport offload engine |
US8898340B2 (en) | 2000-04-17 | 2014-11-25 | Circadence Corporation | Dynamic network link acceleration for network including wireless communication devices |
US8996705B2 (en) | 2000-04-17 | 2015-03-31 | Circadence Corporation | Optimization of enhanced network links |
US9098297B2 (en) | 1997-05-08 | 2015-08-04 | Nvidia Corporation | Hardware accelerator for an object-oriented programming language |
US9912493B2 (en) | 2013-09-24 | 2018-03-06 | Kt Corporation | Home network signal relay device in access network and home network signal relay method in access network using same |
US10033840B2 (en) | 2000-04-17 | 2018-07-24 | Circadence Corporation | System and devices facilitating dynamic network link acceleration |
US11803507B2 (en) | 2018-10-29 | 2023-10-31 | Secturion Systems, Inc. | Data stream protocol field decoding by a systolic array |
US12132699B2 (en) | 2018-07-26 | 2024-10-29 | Secturion Systems, Inc. | In-line transmission control protocol processing engine using a systolic array |
Families Citing this family (667)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5978379A (en) | 1997-01-23 | 1999-11-02 | Gadzoox Networks, Inc. | Fiber channel learning bridge, learning half bridge, and protocol |
US6434620B1 (en) * | 1998-08-27 | 2002-08-13 | Alacritech, Inc. | TCP/IP offload network interface device |
US6687758B2 (en) * | 2001-03-07 | 2004-02-03 | Alacritech, Inc. | Port aggregation for network connections that are offloaded to network interface devices |
US6697868B2 (en) * | 2000-02-28 | 2004-02-24 | Alacritech, Inc. | Protocol processing stack for use with intelligent network interface device |
US6427173B1 (en) * | 1997-10-14 | 2002-07-30 | Alacritech, Inc. | Intelligent network interfaced device and system for accelerated communication |
US7237036B2 (en) * | 1997-10-14 | 2007-06-26 | Alacritech, Inc. | Fast-path apparatus for receiving data corresponding a TCP connection |
US7167927B2 (en) * | 1997-10-14 | 2007-01-23 | Alacritech, Inc. | TCP/IP offload device with fast-path TCP ACK generating and transmitting mechanism |
US6658480B2 (en) | 1997-10-14 | 2003-12-02 | Alacritech, Inc. | Intelligent network interface system and method for accelerated protocol processing |
US7076568B2 (en) * | 1997-10-14 | 2006-07-11 | Alacritech, Inc. | Data communication apparatus for computer intelligent network interface card which transfers data between a network and a storage device according designated uniform datagram protocol socket |
US7042898B2 (en) | 1997-10-14 | 2006-05-09 | Alacritech, Inc. | Reducing delays associated with inserting a checksum into a network message |
US6591302B2 (en) | 1997-10-14 | 2003-07-08 | Alacritech, Inc. | Fast-path apparatus for receiving data corresponding to a TCP connection |
US8782199B2 (en) | 1997-10-14 | 2014-07-15 | A-Tech Llc | Parsing a packet header |
US6226680B1 (en) * | 1997-10-14 | 2001-05-01 | Alacritech, Inc. | Intelligent network interface system method for protocol processing |
US6389479B1 (en) | 1997-10-14 | 2002-05-14 | Alacritech, Inc. | Intelligent network interface device and system for accelerated communication |
US8621101B1 (en) | 2000-09-29 | 2013-12-31 | Alacritech, Inc. | Intelligent network storage interface device |
US7089326B2 (en) * | 1997-10-14 | 2006-08-08 | Alacritech, Inc. | Fast-path processing for receiving data on TCP connection offload devices |
US7185266B2 (en) | 2003-02-12 | 2007-02-27 | Alacritech, Inc. | Network interface device for error detection using partial CRCS of variable length message portions |
US7133940B2 (en) * | 1997-10-14 | 2006-11-07 | Alacritech, Inc. | Network interface device employing a DMA command queue |
US7174393B2 (en) | 2000-12-26 | 2007-02-06 | Alacritech, Inc. | TCP/IP offload network interface device |
US8539112B2 (en) | 1997-10-14 | 2013-09-17 | Alacritech, Inc. | TCP/IP offload device |
US6757746B2 (en) | 1997-10-14 | 2004-06-29 | Alacritech, Inc. | Obtaining a destination address so that a network interface device can write network data without headers directly into host memory |
US7284070B2 (en) * | 1997-10-14 | 2007-10-16 | Alacritech, Inc. | TCP offload network interface device |
KR100564665B1 (en) * | 1997-11-17 | 2006-03-29 | 시게이트 테크놀로지 엘엘씨 | Method and apparatus for using crc for data integrity in on-chip memory |
US6504851B1 (en) * | 1997-11-21 | 2003-01-07 | International Business Machines Corporation | Dynamic detection of LAN network protocol |
US6662234B2 (en) | 1998-03-26 | 2003-12-09 | National Semiconductor Corporation | Transmitting data from a host computer in a reduced power state by an isolation block that disconnects the media access control layer from the physical layer |
US6246683B1 (en) | 1998-05-01 | 2001-06-12 | 3Com Corporation | Receive processing with network protocol bypass |
US6904519B2 (en) * | 1998-06-12 | 2005-06-07 | Microsoft Corporation | Method and computer program product for offloading processing tasks from software to hardware |
GB2339368A (en) * | 1998-07-08 | 2000-01-19 | Ibm | Data communications protocol with efficient packing of datagrams |
US6457056B1 (en) * | 1998-08-17 | 2002-09-24 | Lg Electronics Inc. | Network interface card controller and method of controlling thereof |
US7664883B2 (en) * | 1998-08-28 | 2010-02-16 | Alacritech, Inc. | Network interface device that fast-path processes solicited session layer read commands |
US8332478B2 (en) * | 1998-10-01 | 2012-12-11 | Digimarc Corporation | Context sensitive connected content |
FI982224A (en) * | 1998-10-14 | 2000-04-15 | Nokia Networks Oy | Message monitoring in a network element of a telecommunications network |
US6697362B1 (en) * | 1998-11-06 | 2004-02-24 | Level One Communications, Inc. | Distributed switch memory architecture |
US6763370B1 (en) * | 1998-11-16 | 2004-07-13 | Softricity, Inc. | Method and apparatus for content protection in a secure content delivery system |
US7017188B1 (en) * | 1998-11-16 | 2006-03-21 | Softricity, Inc. | Method and apparatus for secure content delivery over broadband access networks |
US7430171B2 (en) | 1998-11-19 | 2008-09-30 | Broadcom Corporation | Fibre channel arbitrated loop bufferless switch circuitry to increase bandwidth without significant increase in cost |
US6560652B1 (en) * | 1998-11-20 | 2003-05-06 | Legerity, Inc. | Method and apparatus for accessing variable sized blocks of data |
US8225002B2 (en) * | 1999-01-22 | 2012-07-17 | Network Disk, Inc. | Data storage and data sharing in a network of heterogeneous computers |
US7031904B1 (en) | 1999-01-26 | 2006-04-18 | Adaptec, Inc. | Methods for implementing an ethernet storage protocol in computer networks |
US6738821B1 (en) * | 1999-01-26 | 2004-05-18 | Adaptec, Inc. | Ethernet storage protocol networks |
DE19906432C1 (en) * | 1999-02-16 | 2000-06-21 | Fraunhofer Ges Forschung | Second data stream generation method from first stream including start and functional audiovisual, data blocks, involves insertion of origination information |
SE516175C2 (en) * | 1999-02-17 | 2001-11-26 | Axis Ab | Device and method of communication over a network |
US6629151B1 (en) * | 1999-03-18 | 2003-09-30 | Microsoft Corporation | Method and system for querying the dynamic aspects of wireless connection |
US7174452B2 (en) * | 2001-01-24 | 2007-02-06 | Broadcom Corporation | Method for processing multiple security policies applied to a data packet structure |
US6345301B1 (en) * | 1999-03-30 | 2002-02-05 | Unisys Corporation | Split data path distributed network protocol |
US6442617B1 (en) * | 1999-03-31 | 2002-08-27 | 3Com Corporation | Method and system for filtering multicast packets in a peripheral component environment |
US7730169B1 (en) | 1999-04-12 | 2010-06-01 | Softricity, Inc. | Business method and system for serving third party software applications |
US7200632B1 (en) | 1999-04-12 | 2007-04-03 | Softricity, Inc. | Method and system for serving software applications to client computers |
US7370071B2 (en) | 2000-03-17 | 2008-05-06 | Microsoft Corporation | Method for serving third party software applications from servers to client computers |
US6938096B1 (en) * | 1999-04-12 | 2005-08-30 | Softricity, Inc. | Method and system for remote networking using port proxying by detecting if the designated port on a client computer is blocked, then encapsulating the communications in a different format and redirecting to an open port |
US7188168B1 (en) * | 1999-04-30 | 2007-03-06 | Pmc-Sierra, Inc. | Method and apparatus for grammatical packet classifier |
US7185081B1 (en) * | 1999-04-30 | 2007-02-27 | Pmc-Sierra, Inc. | Method and apparatus for programmable lexical packet classifier |
US7016951B1 (en) * | 1999-04-30 | 2006-03-21 | Mantech Ctx Corporation | System and method for network security |
US7042905B1 (en) * | 1999-05-04 | 2006-05-09 | Sprint Communications Company L.P. | Broadband wireless communication system |
US8099758B2 (en) | 1999-05-12 | 2012-01-17 | Microsoft Corporation | Policy based composite file system and method |
JP3403971B2 (en) * | 1999-06-02 | 2003-05-06 | 富士通株式会社 | Packet transfer device |
US6957346B1 (en) * | 1999-06-15 | 2005-10-18 | Ssh Communications Security Ltd. | Method and arrangement for providing security through network address translations using tunneling and compensations |
US7062574B1 (en) * | 1999-07-01 | 2006-06-13 | Agere Systems Inc. | System and method for selectively detaching point-to-point protocol header information |
US7249149B1 (en) | 1999-08-10 | 2007-07-24 | Washington University | Tree bitmap data structures and their use in performing lookup operations |
US6560610B1 (en) | 1999-08-10 | 2003-05-06 | Washington University | Data structure using a tree bitmap and method for rapid classification of data in a database |
US6983350B1 (en) * | 1999-08-31 | 2006-01-03 | Intel Corporation | SDRAM controller for parallel processor architecture |
US6829652B1 (en) * | 1999-09-07 | 2004-12-07 | Intel Corporation | I2O ISM implementation for a san based storage subsystem |
US7281030B1 (en) * | 1999-09-17 | 2007-10-09 | Intel Corporation | Method of reading a remote memory |
US6651107B1 (en) * | 1999-09-21 | 2003-11-18 | Intel Corporation | Reduced hardware network adapter and communication |
EP1188294B1 (en) * | 1999-10-14 | 2008-03-26 | Bluearc UK Limited | Apparatus and method for hardware implementation or acceleration of operating system functions |
US6813663B1 (en) | 1999-11-02 | 2004-11-02 | Apple Computer, Inc. | Method and apparatus for supporting and presenting multiple serial bus nodes using distinct configuration ROM images |
US6587904B1 (en) * | 1999-11-05 | 2003-07-01 | Apple Computer, Inc. | Method and apparatus for preventing loops in a full-duplex bus |
US6678734B1 (en) * | 1999-11-13 | 2004-01-13 | Ssh Communications Security Ltd. | Method for intercepting network packets in a computing device |
US6327625B1 (en) | 1999-11-30 | 2001-12-04 | 3Com Corporation | FIFO-based network interface supporting out-of-order processing |
US6928483B1 (en) * | 1999-12-10 | 2005-08-09 | Nortel Networks Limited | Fast path forwarding of link state advertisements |
US6862285B1 (en) * | 1999-12-13 | 2005-03-01 | Microsoft Corp. | Method and system for communicating with a virtual circuit network |
US7257079B1 (en) | 1999-12-23 | 2007-08-14 | Intel Corporation | Physical layer and data link interface with adaptive speed |
US6782001B1 (en) | 1999-12-23 | 2004-08-24 | Intel Corporation | Physical layer and data link interface with reset/sync sharing |
US6553415B1 (en) * | 1999-12-23 | 2003-04-22 | Intel Corporation | System for rescheduling cascaded callback functions to complete an asynchronous physical layer initialization process |
US6718417B1 (en) | 1999-12-23 | 2004-04-06 | Intel Corporation | Physical layer and data link interface with flexible bus width |
US6795881B1 (en) | 1999-12-23 | 2004-09-21 | Intel Corporation | Physical layer and data link interface with ethernet pre-negotiation |
US6493647B1 (en) * | 1999-12-29 | 2002-12-10 | Advanced Micro Devices, Inc. | Method and apparatus for exercising external memory with a memory built-in self-test |
US6661794B1 (en) * | 1999-12-29 | 2003-12-09 | Intel Corporation | Method and apparatus for gigabit packet assignment for multithreaded packet processing |
US6775284B1 (en) | 2000-01-07 | 2004-08-10 | International Business Machines Corporation | Method and system for frame and protocol classification |
US7058064B2 (en) * | 2000-02-08 | 2006-06-06 | Mips Technologies, Inc. | Queueing system for processors in packet routing operations |
US7042887B2 (en) | 2000-02-08 | 2006-05-09 | Mips Technologies, Inc. | Method and apparatus for non-speculative pre-fetch operation in data packet processing |
US7502876B1 (en) | 2000-06-23 | 2009-03-10 | Mips Technologies, Inc. | Background memory manager that determines if data structures fits in memory with memory state transactions map |
US7076630B2 (en) * | 2000-02-08 | 2006-07-11 | Mips Tech Inc | Method and apparatus for allocating and de-allocating consecutive blocks of memory in background memo management |
US7139901B2 (en) * | 2000-02-08 | 2006-11-21 | Mips Technologies, Inc. | Extended instruction set for packet processing applications |
US20010052053A1 (en) * | 2000-02-08 | 2001-12-13 | Mario Nemirovsky | Stream processing unit for a multi-streaming processor |
US7065096B2 (en) | 2000-06-23 | 2006-06-20 | Mips Technologies, Inc. | Method for allocating memory space for limited packet head and/or tail growth |
US7032226B1 (en) * | 2000-06-30 | 2006-04-18 | Mips Technologies, Inc. | Methods and apparatus for managing a buffer of events in the background |
US7649901B2 (en) * | 2000-02-08 | 2010-01-19 | Mips Technologies, Inc. | Method and apparatus for optimizing selection of available contexts for packet processing in multi-stream packet processing |
US7165257B2 (en) * | 2000-02-08 | 2007-01-16 | Mips Technologies, Inc. | Context selection and activation mechanism for activating one of a group of inactive contexts in a processor core for servicing interrupts |
US7082552B2 (en) * | 2000-02-08 | 2006-07-25 | Mips Tech Inc | Functional validation of a packet management unit |
US7058065B2 (en) | 2000-02-08 | 2006-06-06 | Mips Tech Inc | Method and apparatus for preventing undesirable packet download with pending read/write operations in data packet processing |
US7155516B2 (en) * | 2000-02-08 | 2006-12-26 | Mips Technologies, Inc. | Method and apparatus for overflowing data packets to a software-controlled memory when they do not fit into a hardware-controlled memory |
US6757291B1 (en) | 2000-02-10 | 2004-06-29 | Simpletech, Inc. | System for bypassing a server to achieve higher throughput between data network and data storage system |
US7191240B1 (en) * | 2000-02-14 | 2007-03-13 | International Business Machines Corporation | Generic network protocol layer with supporting data structure |
US7421507B2 (en) * | 2000-02-16 | 2008-09-02 | Apple Inc. | Transmission of AV/C transactions over multiple transports method and apparatus |
US7050453B1 (en) * | 2000-02-17 | 2006-05-23 | Apple Computer, Inc. | Method and apparatus for ensuring compatibility on a high performance serial bus |
US6757732B1 (en) * | 2000-03-16 | 2004-06-29 | Nortel Networks Limited | Text-based communications over a data network |
US7139743B2 (en) | 2000-04-07 | 2006-11-21 | Washington University | Associative database scanning and information retrieval using FPGA devices |
US6898179B1 (en) * | 2000-04-07 | 2005-05-24 | International Business Machines Corporation | Network processor/software control architecture |
US6618785B1 (en) * | 2000-04-21 | 2003-09-09 | Apple Computer, Inc. | Method and apparatus for automatic detection and healing of signal pair crossover on a high performance serial bus |
US6718497B1 (en) | 2000-04-21 | 2004-04-06 | Apple Computer, Inc. | Method and apparatus for generating jitter test patterns on a high performance serial bus |
US20020003776A1 (en) * | 2000-04-28 | 2002-01-10 | Gokhale Dilip S. | Interworking unit for integrating terrestrial ATM switches with broadband satellite networks |
US6571291B1 (en) * | 2000-05-01 | 2003-05-27 | Advanced Micro Devices, Inc. | Apparatus and method for validating and updating an IP checksum in a network switching system |
US6675200B1 (en) * | 2000-05-10 | 2004-01-06 | Cisco Technology, Inc. | Protocol-independent support of remote DMA |
GB2362548B (en) * | 2000-05-15 | 2004-03-24 | Vodafone Ltd | A method and apparatus for asynchronous information transactions |
US8256430B2 (en) | 2001-06-15 | 2012-09-04 | Monteris Medical, Inc. | Hyperthermia treatment and probe therefor |
FI20001509A (en) * | 2000-06-26 | 2001-12-27 | Nokia Networks Oy | Packet data transmission system and network element |
US7418470B2 (en) * | 2000-06-26 | 2008-08-26 | Massively Parallel Technologies, Inc. | Parallel processing systems and method |
DE60114012D1 (en) * | 2000-06-29 | 2005-11-17 | Object Reservoir Inc | METHOD AND DEVICE FOR MODELING GEOLOGICAL STRUCTURES WITH A FOUR-DIMENSIONAL UNSTRUCTURED GRILLE |
US6678746B1 (en) * | 2000-08-01 | 2004-01-13 | Hewlett-Packard Development Company, L.P. | Processing network packets |
US7126916B1 (en) * | 2000-08-24 | 2006-10-24 | Efficient Networks, Inc. | System and method for packet bypass in a communication system |
US8019901B2 (en) * | 2000-09-29 | 2011-09-13 | Alacritech, Inc. | Intelligent network storage interface system |
US6720074B2 (en) * | 2000-10-26 | 2004-04-13 | Inframat Corporation | Insulator coated magnetic nanoparticulate composites with reduced core loss and method of manufacture thereof |
US9639553B2 (en) * | 2000-11-02 | 2017-05-02 | Oracle International Corporation | TCP/UDP acceleration |
US7865596B2 (en) * | 2000-11-02 | 2011-01-04 | Oracle America, Inc. | Switching system for managing storage in digital networks |
WO2002061525A2 (en) * | 2000-11-02 | 2002-08-08 | Pirus Networks | Tcp/udp acceleration |
US6985956B2 (en) * | 2000-11-02 | 2006-01-10 | Sun Microsystems, Inc. | Switching system |
US7313614B2 (en) * | 2000-11-02 | 2007-12-25 | Sun Microsystems, Inc. | Switching system |
US7236490B2 (en) | 2000-11-17 | 2007-06-26 | Foundry Networks, Inc. | Backplane interface adapter |
US7596139B2 (en) | 2000-11-17 | 2009-09-29 | Foundry Networks, Inc. | Backplane interface adapter with error control and redundant fabric |
US20020078246A1 (en) * | 2000-12-19 | 2002-06-20 | Ing-Simmons Nicholas K. | Method and system for network protocol processing |
US7546369B2 (en) * | 2000-12-21 | 2009-06-09 | Berg Mitchell T | Method and system for communicating a request packet in response to a state |
US20020116605A1 (en) * | 2000-12-21 | 2002-08-22 | Berg Mitchell T. | Method and system for initiating execution of software in response to a state |
US20020116532A1 (en) * | 2000-12-21 | 2002-08-22 | Berg Mitchell T. | Method and system for communicating an information packet and identifying a data structure |
US20030217184A1 (en) * | 2000-12-30 | 2003-11-20 | Govindan Nair | Method and apparatus for allocating buffers shared among protocol layers in a protocol stack |
US7009933B2 (en) * | 2001-01-30 | 2006-03-07 | Broadcom Corporation | Traffic policing of packet transfer in a dual speed hub |
US6832261B1 (en) | 2001-02-04 | 2004-12-14 | Cisco Technology, Inc. | Method and apparatus for distributed resequencing and reassembly of subdivided packets |
US6934760B1 (en) * | 2001-02-04 | 2005-08-23 | Cisco Technology, Inc. | Method and apparatus for resequencing of packets into an original ordering using multiple resequencing components |
US7092393B1 (en) * | 2001-02-04 | 2006-08-15 | Cisco Technology, Inc. | Method and apparatus for distributed reassembly of subdivided packets using multiple reassembly components |
US20020124095A1 (en) * | 2001-03-02 | 2002-09-05 | Sultan Israel Daniel | Apparatus and method for sending point-to-point protocol over ethernet |
US6601070B2 (en) * | 2001-04-05 | 2003-07-29 | Hewlett-Packard Development Company, L.P. | Distribution of physical file systems |
US7447795B2 (en) * | 2001-04-11 | 2008-11-04 | Chelsio Communications, Inc. | Multi-purpose switching network interface controller |
US7274706B1 (en) | 2001-04-24 | 2007-09-25 | Syrus Ziai | Methods and systems for processing network data |
US7203756B2 (en) * | 2001-04-27 | 2007-04-10 | International Business Machines Corporation | Mechanism to cache references to Java RMI remote objects implementing the unreferenced interface |
US20020161913A1 (en) * | 2001-04-30 | 2002-10-31 | Manuel Gonzalez | System and method for performing a download |
US7023869B2 (en) * | 2001-05-10 | 2006-04-04 | Emc Corporation | Data storage system with one or more integrated server-like behaviors |
US7027439B1 (en) | 2001-05-10 | 2006-04-11 | Emc Corporation | Data storage system with improved network interface |
US7002967B2 (en) | 2001-05-18 | 2006-02-21 | Denton I Claude | Multi-protocol networking processor with data traffic support spanning local, regional and wide area networks |
US7287649B2 (en) * | 2001-05-18 | 2007-10-30 | Broadcom Corporation | System on a chip for packet processing |
US6766389B2 (en) * | 2001-05-18 | 2004-07-20 | Broadcom Corporation | System on a chip for networking |
GB2394095A (en) * | 2001-05-31 | 2004-04-14 | Espeed Inc | Securities trading system with multiple levels-of-interest |
US20030061385A1 (en) * | 2001-05-31 | 2003-03-27 | Lucas Gonze | Computer network interpretation and translation format for simple and complex machines |
US7315900B1 (en) | 2001-06-20 | 2008-01-01 | Juniper Networks, Inc. | Multi-link routing |
US20030050990A1 (en) * | 2001-06-21 | 2003-03-13 | International Business Machines Corporation | PCI migration semantic storage I/O |
US7155542B2 (en) * | 2001-06-27 | 2006-12-26 | Intel Corporation | Dynamic network interface with zero-copy frames |
US7143155B1 (en) * | 2001-06-29 | 2006-11-28 | Cisco Technology, Inc. | Standardized method and apparatus for gathering device identification and/or configuration information via a physical interface |
US20030018828A1 (en) * | 2001-06-29 | 2003-01-23 | International Business Machines Corporation | Infiniband mixed semantic ethernet I/O path |
US20030195989A1 (en) * | 2001-07-02 | 2003-10-16 | Globespan Virata Incorporated | Communications system using rings architecture |
US6964035B2 (en) * | 2001-07-03 | 2005-11-08 | Hewlett-Packard Development Company, L.P. | Debugging an operating system kernel with debugger support in a network interface card |
US7165110B2 (en) * | 2001-07-12 | 2007-01-16 | International Business Machines Corporation | System and method for simultaneously establishing multiple connections |
US20030018754A1 (en) * | 2001-07-17 | 2003-01-23 | Antonio Mugica | Paradigm for hybrid network communications protocol morphing |
US7212534B2 (en) | 2001-07-23 | 2007-05-01 | Broadcom Corporation | Flow based congestion control |
US6970921B1 (en) | 2001-07-27 | 2005-11-29 | 3Com Corporation | Network interface supporting virtual paths for quality of service |
US7860120B1 (en) | 2001-07-27 | 2010-12-28 | Hewlett-Packard Company | Network interface supporting virtual paths for quality of service with dynamic buffer allocation
WO2003012671A1 (en) * | 2001-07-31 | 2003-02-13 | Mobile-Mind, Inc. | Communications network with smart card |
AUPR695601A0 (en) * | 2001-08-10 | 2001-09-06 | Telefonaktiebolaget Lm Ericsson (Publ) | Message transmission between telecommunications network entities |
US7065086B2 (en) * | 2001-08-16 | 2006-06-20 | International Business Machines Corporation | Method and system for efficient layer 3-layer 7 routing of internet protocol (“IP”) fragments |
US6892224B2 (en) * | 2001-08-31 | 2005-05-10 | Intel Corporation | Network interface device capable of independent provision of web content |
US20030046330A1 (en) * | 2001-09-04 | 2003-03-06 | Hayes John W. | Selective offloading of protocol processing |
US7142509B1 (en) * | 2001-09-12 | 2006-11-28 | Extreme Networks | Method and apparatus providing for delivery of streaming media |
US20030110208A1 (en) * | 2001-09-12 | 2003-06-12 | Raqia Networks, Inc. | Processing data across packet boundaries |
KR100424178B1 (en) | 2001-09-20 | 2004-03-24 | Hynix Semiconductor Inc. | Circuit for internal address generation in semiconductor memory device
US6976205B1 (en) | 2001-09-21 | 2005-12-13 | Syrus Ziai | Method and apparatus for calculating TCP and UDP checksums while preserving CPU resources |
US20030065735A1 (en) * | 2001-10-02 | 2003-04-03 | Connor Patrick L. | Method and apparatus for transferring packets via a network |
US20090006659A1 (en) * | 2001-10-19 | 2009-01-01 | Collins Jack M | Advanced mezzanine card for digital network data inspection |
US7457862B2 (en) * | 2001-10-22 | 2008-11-25 | Avaya, Inc. | Real time control protocol session matching |
US6981110B1 (en) * | 2001-10-23 | 2005-12-27 | Stephen Waller Melvin | Hardware enforced virtual sequentiality |
US7958199B2 (en) * | 2001-11-02 | 2011-06-07 | Oracle America, Inc. | Switching systems and methods for storage management in digital networks |
US20030093555A1 (en) * | 2001-11-09 | 2003-05-15 | Harding-Jones William Paul | Method, apparatus and system for routing messages within a packet operating system |
US7234003B2 (en) * | 2001-12-10 | 2007-06-19 | Sun Microsystems, Inc. | Method and apparatus to facilitate direct transfer of data between a data device and a network connection
US7173929B1 (en) * | 2001-12-10 | 2007-02-06 | Incipient, Inc. | Fast path for performing data operations |
US7240123B2 (en) * | 2001-12-10 | 2007-07-03 | Nortel Networks Limited | Distributed routing core |
DE10161509A1 (en) * | 2001-12-14 | 2003-07-03 | Siemens Ag | Method and arrangement for transporting data packets of a data stream |
US20030115350A1 (en) * | 2001-12-14 | 2003-06-19 | Silverback Systems, Inc. | System and method for efficient handling of network data |
US7171493B2 (en) * | 2001-12-19 | 2007-01-30 | The Charles Stark Draper Laboratory | Camouflage of network traffic to resist attack |
US20030121835A1 (en) * | 2001-12-31 | 2003-07-03 | Peter Quartararo | Apparatus for and method of sieving biocompatible adsorbent beaded polymers |
JP3988475B2 (en) * | 2002-02-05 | 2007-10-10 | Sony Corporation | Transmitting apparatus, receiving apparatus and methods thereof
US7814204B1 (en) | 2002-02-11 | 2010-10-12 | Extreme Networks, Inc. | Method of and system for analyzing the content of resource requests |
US7584262B1 (en) | 2002-02-11 | 2009-09-01 | Extreme Networks | Method of and system for allocating resources to resource requests based on application of persistence policies |
US7298746B1 (en) | 2002-02-11 | 2007-11-20 | Extreme Networks | Method and system for reassembling and parsing packets in a network environment |
US7447777B1 (en) | 2002-02-11 | 2008-11-04 | Extreme Networks | Switching system |
US6781990B1 (en) | 2002-02-11 | 2004-08-24 | Extreme Networks | Method and system for managing traffic in a packet network environment |
US7152124B1 (en) | 2002-02-11 | 2006-12-19 | Extreme Networks | Method and system for maintaining temporal consistency of resources and data in a multiple-processor packet switch |
US7134139B2 (en) * | 2002-02-12 | 2006-11-07 | International Business Machines Corporation | System and method for authenticating block level cache access on network |
US6973496B2 (en) * | 2002-03-05 | 2005-12-06 | Archduke Holdings, Inc. | Concealing a network connected device |
US7535913B2 (en) * | 2002-03-06 | 2009-05-19 | Nvidia Corporation | Gigabit ethernet adapter supporting the iSCSI and IPSEC protocols |
US7305007B2 (en) * | 2002-03-07 | 2007-12-04 | Broadcom Corporation | Receiver-aided set-up request routing |
US7295555B2 (en) | 2002-03-08 | 2007-11-13 | Broadcom Corporation | System and method for identifying upper layer protocol message boundaries |
US7707287B2 (en) * | 2002-03-22 | 2010-04-27 | F5 Networks, Inc. | Virtual host acceleration system |
US7292593B1 (en) * | 2002-03-28 | 2007-11-06 | Advanced Micro Devices, Inc. | Arrangement in a channel adapter for segregating transmit packet data in transmit buffers based on respective virtual lanes |
KR20030080443A (en) * | 2002-04-08 | 2003-10-17 | (주) 위즈네트 | Internet protocol system using hardware protocol processing logic and the parallel data processing method using the same |
US6993733B2 (en) * | 2002-04-09 | 2006-01-31 | Atrenta, Inc. | Apparatus and method for handling of multi-level circuit design data |
US6850735B2 (en) * | 2002-04-22 | 2005-02-01 | Cognio, Inc. | System and method for signal classification of signals in a frequency band
US7489687B2 (en) * | 2002-04-11 | 2009-02-10 | Avaya, Inc. | Emergency bandwidth allocation with an RSVP-like protocol
US7543087B2 (en) * | 2002-04-22 | 2009-06-02 | Alacritech, Inc. | Freeing transmit memory on a network interface device prior to receiving an acknowledgement that transmit data has been received by a remote device |
US7496689B2 (en) * | 2002-04-22 | 2009-02-24 | Alacritech, Inc. | TCP/IP offload device |
US7116943B2 (en) * | 2002-04-22 | 2006-10-03 | Cognio, Inc. | System and method for classifying signals occurring in a frequency band
US7424268B2 (en) * | 2002-04-22 | 2008-09-09 | Cisco Technology, Inc. | System and method for management of a shared frequency band |
US7254191B2 (en) * | 2002-04-22 | 2007-08-07 | Cognio, Inc. | System and method for real-time spectrum analysis in a radio device |
US7468975B1 (en) | 2002-05-06 | 2008-12-23 | Foundry Networks, Inc. | Flexible method for processing data packets in a network routing system for enhanced efficiency and monitoring capability |
US20120155466A1 (en) | 2002-05-06 | 2012-06-21 | Ian Edward Davis | Method and apparatus for efficiently processing data packets in a computer network |
US7187687B1 (en) | 2002-05-06 | 2007-03-06 | Foundry Networks, Inc. | Pipeline method and system for switching packets |
US7266117B1 (en) | 2002-05-06 | 2007-09-04 | Foundry Networks, Inc. | System architecture for very fast ethernet blade |
US7649885B1 (en) | 2002-05-06 | 2010-01-19 | Foundry Networks, Inc. | Network routing system for enhanced efficiency and monitoring capability |
US7093038B2 (en) | 2002-05-06 | 2006-08-15 | Ivivity, Inc. | Application program interface-access to hardware services for storage management applications |
US7299264B2 (en) * | 2002-05-07 | 2007-11-20 | Hewlett-Packard Development Company, L.P. | System and method for monitoring a connection between a server and a passive client device |
US20030212735A1 (en) | 2002-05-13 | 2003-11-13 | Nvidia Corporation | Method and apparatus for providing an integrated network of processors |
US6826634B2 (en) * | 2002-06-10 | 2004-11-30 | Sun Microsystems, Inc. | Extended message block for network device drivers |
US7444432B2 (en) * | 2002-06-11 | 2008-10-28 | Sun Microsystems, Inc. | System and method for an efficient transport layer transmit interface |
US7512128B2 (en) * | 2002-06-12 | 2009-03-31 | Sun Microsystems, Inc. | System and method for a multi-packet data link layer data transmission |
US6693910B2 (en) * | 2002-06-28 | 2004-02-17 | Interdigital Technology Corporation | System and method for avoiding stall of an H-ARQ reordering buffer in a receiver |
US8200871B2 (en) * | 2002-06-28 | 2012-06-12 | Brocade Communications Systems, Inc. | Systems and methods for scalable distributed storage processing |
US20040006587A1 (en) * | 2002-07-02 | 2004-01-08 | Dell Products L.P. | Information handling system and method for clustering with internal cross coupled storage |
FR2842378B1 (en) * | 2002-07-15 | 2005-02-04 | Canon Kk | METHOD AND DEVICE FOR PROCESSING A QUERY OR COMPRESSED DIGITAL DATA |
US7515612B1 (en) | 2002-07-19 | 2009-04-07 | Qlogic, Corporation | Method and system for processing network data packets |
US7154886B2 (en) * | 2002-07-22 | 2006-12-26 | Qlogic Corporation | Method and system for primary blade selection in a multi-module fiber channel switch |
US7522601B1 (en) * | 2002-07-24 | 2009-04-21 | Nortel Networks Limited | Filtered router alert hop-by-hop option |
US7161947B1 (en) | 2002-07-30 | 2007-01-09 | Cisco Technology, Inc. | Methods and apparatus for intercepting control and data connections |
US7171161B2 (en) * | 2002-07-30 | 2007-01-30 | Cognio, Inc. | System and method for classifying signals using timing templates, power templates and other techniques |
US8015303B2 (en) | 2002-08-02 | 2011-09-06 | Astute Networks Inc. | High data rate stateful protocol processing |
US7263108B2 (en) * | 2002-08-06 | 2007-08-28 | Netxen, Inc. | Dual-mode network storage systems and methods |
US7065332B2 (en) * | 2002-08-09 | 2006-06-20 | Matsushita Electric Industrial Co., Ltd. | Remote control receiving system |
US7213045B2 (en) * | 2002-08-16 | 2007-05-01 | Silverback Systems Inc. | Apparatus and method for transmit transport protocol termination |
US7415652B1 (en) * | 2002-08-19 | 2008-08-19 | Marvell International Ltd. | Out of order checksum calculation for fragmented packets |
US7724740B1 (en) | 2002-08-27 | 2010-05-25 | 3Com Corporation | Computer system and network interface supporting class of service queues |
US7894480B1 (en) | 2002-08-27 | 2011-02-22 | Hewlett-Packard Company | Computer system and network interface with hardware based rule checking for embedded firewall |
US7426579B2 (en) * | 2002-09-17 | 2008-09-16 | Broadcom Corporation | System and method for handling frames in multiple stack environments |
WO2004021628A2 (en) * | 2002-08-29 | 2004-03-11 | Broadcom Corporation | System and method for network interfacing |
US8230090B2 (en) * | 2002-09-11 | 2012-07-24 | Broadcom Corporation | System and method for TCP offloading and uploading |
US7346701B2 (en) * | 2002-08-30 | 2008-03-18 | Broadcom Corporation | System and method for TCP offload |
US7934021B2 (en) | 2002-08-29 | 2011-04-26 | Broadcom Corporation | System and method for network interfacing |
US7411959B2 (en) | 2002-08-30 | 2008-08-12 | Broadcom Corporation | System and method for handling out-of-order frames |
US8631162B2 (en) * | 2002-08-30 | 2014-01-14 | Broadcom Corporation | System and method for network interfacing in a multiple network environment |
US8180928B2 (en) | 2002-08-30 | 2012-05-15 | Broadcom Corporation | Method and system for supporting read operations with CRC for iSCSI and iSCSI chimney |
US7313623B2 (en) | 2002-08-30 | 2007-12-25 | Broadcom Corporation | System and method for TCP/IP offload independent of bandwidth delay product |
US20040044796A1 (en) * | 2002-09-03 | 2004-03-04 | Vangal Sriram R. | Tracking out-of-order packets |
US7016354B2 (en) * | 2002-09-03 | 2006-03-21 | Intel Corporation | Packet-based clock signal |
US7181544B2 (en) * | 2002-09-03 | 2007-02-20 | Intel Corporation | Network protocol engine |
US7299266B2 (en) * | 2002-09-05 | 2007-11-20 | International Business Machines Corporation | Memory management offload for RDMA enabled network adapters |
US20040049603A1 (en) * | 2002-09-05 | 2004-03-11 | International Business Machines Corporation | iSCSI driver to adapter interface protocol |
US7397768B1 (en) | 2002-09-11 | 2008-07-08 | Qlogic, Corporation | Zone management in a multi-module fibre channel switch |
US7337241B2 (en) * | 2002-09-27 | 2008-02-26 | Alacritech, Inc. | Fast-path apparatus for receiving data corresponding to a TCP connection |
US7191241B2 (en) * | 2002-09-27 | 2007-03-13 | Alacritech, Inc. | Fast-path apparatus for receiving data corresponding to a TCP connection |
US7359979B2 (en) * | 2002-09-30 | 2008-04-15 | Avaya Technology Corp. | Packet prioritization and associated bandwidth and buffer management techniques for audio over IP |
US20040073690A1 (en) | 2002-09-30 | 2004-04-15 | Neil Hepworth | Voice over IP endpoint call admission |
US8176154B2 (en) * | 2002-09-30 | 2012-05-08 | Avaya Inc. | Instantaneous user initiation voice quality feedback |
JP3834280B2 (en) * | 2002-10-01 | 2006-10-18 | NEC Infrontia Corporation | Terminal device, priority processing method in terminal device, and program
US8151278B1 (en) | 2002-10-17 | 2012-04-03 | Astute Networks, Inc. | System and method for timer management in a stateful protocol processing system |
US7814218B1 (en) | 2002-10-17 | 2010-10-12 | Astute Networks, Inc. | Multi-protocol and multi-format stateful processing |
US7596621B1 (en) | 2002-10-17 | 2009-09-29 | Astute Networks, Inc. | System and method for managing shared state using multiple programmed processors |
US7802001B1 (en) | 2002-10-18 | 2010-09-21 | Astute Networks, Inc. | System and method for flow control within a stateful protocol processing system |
US7457822B1 (en) | 2002-11-01 | 2008-11-25 | Bluearc Uk Limited | Apparatus and method for hardware-based file system |
US8041735B1 (en) | 2002-11-01 | 2011-10-18 | Bluearc Uk Limited | Distributed file system and method |
US20040088262A1 (en) * | 2002-11-06 | 2004-05-06 | Alacritech, Inc. | Enabling an enhanced function of an electronic device |
CN1711737A (en) * | 2002-11-08 | 2005-12-21 | Koninklijke Philips Electronics N.V. | Receiver, transmitter, method and systems for processing a network data unit in the network stack
US20040098510A1 (en) * | 2002-11-15 | 2004-05-20 | Ewert Peter M. | Communicating between network processors |
GB2395308B (en) * | 2002-11-18 | 2005-10-19 | Quadrics Ltd | Command scheduling in computer networks |
US7319669B1 (en) | 2002-11-22 | 2008-01-15 | Qlogic, Corporation | Method and system for controlling packet flow in networks |
US7184777B2 (en) * | 2002-11-27 | 2007-02-27 | Cognio, Inc. | Server and multiple sensor system for monitoring activity in a shared radio frequency band |
US7596634B2 (en) * | 2002-12-12 | 2009-09-29 | Millind Mittal | Networked application request servicing offloaded from host |
US7397797B2 (en) * | 2002-12-13 | 2008-07-08 | Nvidia Corporation | Method and apparatus for performing network processing functions |
US7406481B2 (en) * | 2002-12-17 | 2008-07-29 | Oracle International Corporation | Using direct memory access for performing database operations between two or more machines |
US7324540B2 (en) * | 2002-12-31 | 2008-01-29 | Intel Corporation | Network protocol off-load engines |
EP1584164A2 (en) * | 2002-12-31 | 2005-10-12 | Conexant, Inc. | System and method for providing quality of service in asynchronous transfer mode cell transmission |
US7290093B2 (en) * | 2003-01-07 | 2007-10-30 | Intel Corporation | Cache memory to support a processor's power mode of operation |
JP2004220216A (en) * | 2003-01-14 | 2004-08-05 | Hitachi Ltd | SAN/NAS integrated storage device
US7594002B1 (en) * | 2003-02-14 | 2009-09-22 | Istor Networks, Inc. | Hardware-accelerated high availability integrated networked storage system |
US7389462B1 (en) | 2003-02-14 | 2008-06-17 | Istor Networks, Inc. | System and methods for high rate hardware-accelerated network protocol processing |
US7512663B1 (en) | 2003-02-18 | 2009-03-31 | Istor Networks, Inc. | Systems and methods of directly placing data in an iSCSI storage device |
US20040167985A1 (en) * | 2003-02-21 | 2004-08-26 | Adescom, Inc. | Internet protocol access controller |
US7911994B2 (en) * | 2003-02-28 | 2011-03-22 | Openwave Systems Inc. | Confirmation of delivery of content to an HTTP/TCP device |
US8296452B2 (en) * | 2003-03-06 | 2012-10-23 | Cisco Technology, Inc. | Apparatus and method for detecting tiny fragment attacks |
US7668841B2 (en) * | 2003-03-10 | 2010-02-23 | Brocade Communication Systems, Inc. | Virtual write buffers for accelerated memory and storage access |
EP1460806A3 (en) * | 2003-03-20 | 2006-03-22 | Broadcom Corporation | System and method for network interfacing in a multiple network environment |
CN100481811C (en) * | 2003-03-20 | 2009-04-22 | Nokia Siemens Networks GmbH & Co. KG | Method and transmitter for transmitting data packets
US7012918B2 (en) * | 2003-03-24 | 2006-03-14 | Emulex Design & Manufacturing Corporation | Direct data placement |
US20040210736A1 (en) * | 2003-04-18 | 2004-10-21 | Linden Minnick | Method and apparatus for the allocation of identifiers |
WO2004095758A2 (en) * | 2003-04-22 | 2004-11-04 | Cognio, Inc. | Signal classification methods for scanning receiver and other applications |
US20040249957A1 (en) * | 2003-05-12 | 2004-12-09 | Pete Ekis | Method for interface of TCP offload engines to operating systems |
US7257718B2 (en) * | 2003-05-12 | 2007-08-14 | International Business Machines Corporation | Cipher message assist instructions |
US7415472B2 (en) * | 2003-05-13 | 2008-08-19 | Cisco Technology, Inc. | Comparison tree data structures of particular use in performing lookup operations |
US7415463B2 (en) * | 2003-05-13 | 2008-08-19 | Cisco Technology, Inc. | Programming tree data structures and handling collisions while performing lookup operations |
US6901072B1 (en) | 2003-05-15 | 2005-05-31 | Foundry Networks, Inc. | System and method for high speed packet transmission implementing dual transmit and receive pipelines |
US20050002402A1 (en) * | 2003-05-19 | 2005-01-06 | Sony Corporation And Sony Electronics Inc. | Real-time transport protocol |
US7353284B2 (en) * | 2003-06-13 | 2008-04-01 | Apple Inc. | Synchronized transmission of audio and video data from a computer to a client via an interface |
US7668099B2 (en) * | 2003-06-13 | 2010-02-23 | Apple Inc. | Synthesis of vertical blanking signal |
FR2856263B1 (en) * | 2003-06-19 | 2007-03-09 | Seb Sa | DEVICE FOR FILTERING A COOKING BATH FOR AN ELECTRIC FRYER WITH PLASTER RESISTANCE |
US7913294B1 (en) | 2003-06-24 | 2011-03-22 | Nvidia Corporation | Network protocol processing for filtering packets |
US8275910B1 (en) | 2003-07-02 | 2012-09-25 | Apple Inc. | Source packet bridge |
FR2857539B1 (en) * | 2003-07-11 | 2005-09-30 | Cit Alcatel | PACKET CONTENT DESCRIPTION IN A PACKET COMMUNICATION NETWORK |
US7420991B2 (en) * | 2003-07-15 | 2008-09-02 | Qlogic, Corporation | TCP time stamp processing in hardware based TCP offload |
US7406628B2 (en) * | 2003-07-15 | 2008-07-29 | Seagate Technology Llc | Simulated error injection system in target device for testing host system |
US7453802B2 (en) | 2003-07-16 | 2008-11-18 | Qlogic, Corporation | Method and apparatus for detecting and removing orphaned primitives in a fibre channel network |
US7894348B2 (en) | 2003-07-21 | 2011-02-22 | Qlogic, Corporation | Method and system for congestion control in a fibre channel switch |
US7646767B2 (en) | 2003-07-21 | 2010-01-12 | Qlogic, Corporation | Method and system for programmable data dependant network routing |
US7430175B2 (en) * | 2003-07-21 | 2008-09-30 | Qlogic, Corporation | Method and system for managing traffic in fibre channel systems |
US7433342B2 (en) * | 2003-08-07 | 2008-10-07 | Cisco Technology, Inc. | Wireless-aware network switch and switch ASIC |
US20050066045A1 (en) * | 2003-09-03 | 2005-03-24 | Johnson Neil James | Integrated network interface supporting multiple data transfer protocols |
US7526577B2 (en) * | 2003-09-19 | 2009-04-28 | Microsoft Corporation | Multiple offload of network state objects with support for failover events |
US8285881B2 (en) * | 2003-09-10 | 2012-10-09 | Broadcom Corporation | System and method for load balancing and fail over |
US7539760B1 (en) | 2003-09-12 | 2009-05-26 | Astute Networks, Inc. | System and method for facilitating failover of stateful connections |
US20050065915A1 (en) * | 2003-09-23 | 2005-03-24 | Allen Wayne J. | Method and system to add protocol support for network traffic tools |
US7110756B2 (en) * | 2003-10-03 | 2006-09-19 | Cognio, Inc. | Automated real-time site survey in a shared frequency band environment |
US7263071B2 (en) * | 2003-10-08 | 2007-08-28 | Seiko Epson Corporation | Connectionless TCP/IP data exchange |
US7406533B2 (en) * | 2003-10-08 | 2008-07-29 | Seiko Epson Corporation | Method and apparatus for tunneling data through a single port |
US20050086349A1 (en) * | 2003-10-16 | 2005-04-21 | Nagarajan Subramaniyan | Methods and apparatus for offloading TCP/IP processing using a protocol driver interface filter driver |
US7689702B1 (en) * | 2003-10-31 | 2010-03-30 | Sun Microsystems, Inc. | Methods and apparatus for coordinating processing of network connections between two network protocol stacks |
US8549345B1 (en) | 2003-10-31 | 2013-10-01 | Oracle America, Inc. | Methods and apparatus for recovering from a failed network interface card |
US20050120134A1 (en) * | 2003-11-14 | 2005-06-02 | Walter Hubis | Methods and structures for a caching to router in iSCSI storage systems |
JP2005157826A (en) * | 2003-11-27 | 2005-06-16 | Hitachi Ltd | Access controller and access control method |
US6996070B2 (en) * | 2003-12-05 | 2006-02-07 | Alacritech, Inc. | TCP/IP offload device with reduced sequential processing |
CN100395985C (en) * | 2003-12-09 | 2008-06-18 | Trend Micro Incorporated | Method of forced setup of anti-virus software, its network system and storage medium
EP1692707B1 (en) * | 2003-12-11 | 2013-02-27 | International Business Machines Corporation | Data transfer error checking |
US20050129039A1 (en) * | 2003-12-11 | 2005-06-16 | International Business Machines Corporation | RDMA network interface controller with cut-through implementation for aligned DDP segments |
US7426574B2 (en) * | 2003-12-16 | 2008-09-16 | Trend Micro Incorporated | Technique for intercepting data in a peer-to-peer network |
KR100557468B1 (en) * | 2003-12-17 | 2006-03-07 | Electronics and Telecommunications Research Institute | Socket Compatibility Layer for TOE
US7684440B1 (en) * | 2003-12-18 | 2010-03-23 | Nvidia Corporation | Method and apparatus for maximizing peer-to-peer frame sizes within a network supporting a plurality of frame sizes |
US8161197B2 (en) * | 2003-12-19 | 2012-04-17 | Broadcom Corporation | Method and system for efficient buffer management for layer 2 (L2) through layer 5 (L5) network interface controller applications |
US8572289B1 (en) | 2003-12-19 | 2013-10-29 | Nvidia Corporation | System, method and computer program product for stateless offloading of upper level network protocol operations |
US7590743B2 (en) * | 2003-12-23 | 2009-09-15 | Softricity, Inc. | Method and system for associating a process on a multi-user device with a host address unique to a user session associated with the process |
US20050141434A1 (en) * | 2003-12-24 | 2005-06-30 | Linden Cornett | Method, system, and program for managing buffers |
US7237135B1 (en) | 2003-12-29 | 2007-06-26 | Apple Inc. | Cyclemaster synchronization in a distributed bridge |
US7308517B1 (en) | 2003-12-29 | 2007-12-11 | Apple Inc. | Gap count analysis for a high speed serialized bus |
US20050165985A1 (en) * | 2003-12-29 | 2005-07-28 | Vangal Sriram R. | Network protocol processor |
US7621162B2 (en) * | 2003-12-30 | 2009-11-24 | Alcatel Lucent | Hierarchical flow-characterizing multiplexor |
US7698361B2 (en) * | 2003-12-31 | 2010-04-13 | Microsoft Corporation | Lightweight input/output protocol |
US7487136B2 (en) * | 2004-01-06 | 2009-02-03 | Sharp Laboratories Of America | Intelligent discovery of shares |
US7366720B2 (en) * | 2004-01-06 | 2008-04-29 | Sharp Laboratories Of America | System for remote share access |
US7664938B1 (en) * | 2004-01-07 | 2010-02-16 | Xambala Corporation | Semantic processor systems and methods |
US20050198107A1 (en) * | 2004-01-16 | 2005-09-08 | International Business Machines Corporation | Systems and methods for queuing order notification |
US7336676B2 (en) * | 2004-01-20 | 2008-02-26 | Mediatek Inc. | Multi-queue single-FIFO architecture for quality of service oriented systems |
US7631071B2 (en) * | 2004-01-23 | 2009-12-08 | Microsoft Corporation | Mechanism for ensuring processing of messages received while in recovery mode |
US8737219B2 (en) * | 2004-01-30 | 2014-05-27 | Hewlett-Packard Development Company, L.P. | Methods and systems that use information about data packets to determine an order for sending the data packets |
US7966488B2 (en) * | 2004-01-30 | 2011-06-21 | Hewlett-Packard Development Company, L. P. | Methods and systems that use information about encrypted data packets to determine an order for sending the data packets |
US20050251684A1 (en) * | 2004-02-02 | 2005-11-10 | Hitachi, Ltd. | Storage control system and storage control method |
US7457241B2 (en) * | 2004-02-05 | 2008-11-25 | International Business Machines Corporation | Structure for scheduler pipeline design for hierarchical link sharing |
DE102004006767B4 (en) * | 2004-02-11 | 2011-06-30 | Infineon Technologies AG | Method and device for transporting data sections by means of a DMA controller
US7249306B2 (en) * | 2004-02-20 | 2007-07-24 | Nvidia Corporation | System and method for generating 128-bit cyclic redundancy check values with 32-bit granularity |
US7206872B2 (en) * | 2004-02-20 | 2007-04-17 | Nvidia Corporation | System and method for insertion of markers into a data stream |
TWI239734B (en) | 2004-03-02 | 2005-09-11 | Ind Tech Res Inst | Full hardware based TCP/IP traffic offload engine (TOE) device and method thereof |
US7779021B1 (en) * | 2004-03-09 | 2010-08-17 | Versata Development Group, Inc. | Session-based processing method and system |
US7478109B1 (en) | 2004-03-15 | 2009-01-13 | Cisco Technology, Inc. | Identification of a longest matching prefix based on a search of intervals corresponding to the prefixes |
US7460837B2 (en) * | 2004-03-25 | 2008-12-02 | Cisco Technology, Inc. | User interface and time-shifted presentation of data in a system that monitors activity in a shared radio frequency band |
DE602005025270D1 (en) * | 2004-03-26 | 2011-01-27 | Canon Kk | Internet Protocol tunneling using templates |
US7817659B2 (en) | 2004-03-26 | 2010-10-19 | Foundry Networks, Llc | Method and apparatus for aggregating input data streams |
US7480308B1 (en) | 2004-03-29 | 2009-01-20 | Cisco Technology, Inc. | Distributing packets and packets fragments possibly received out of sequence into an expandable set of queues of particular use in packet resequencing and reassembly |
US20050246443A1 (en) * | 2004-03-31 | 2005-11-03 | Intel Corporation | Management of offload operations in a network storage driver |
JP2005295693A (en) * | 2004-03-31 | 2005-10-20 | Toshiba Corp | Waveform output unit and driver |
US20050223118A1 (en) * | 2004-04-05 | 2005-10-06 | Ammasso, Inc. | System and method for placement of sharing physical buffer lists in RDMA communication |
US20050220128A1 (en) * | 2004-04-05 | 2005-10-06 | Ammasso, Inc. | System and method for work request queuing for intelligent adapter |
US20060067346A1 (en) * | 2004-04-05 | 2006-03-30 | Ammasso, Inc. | System and method for placement of RDMA payload into application memory of a processor system |
US7519719B2 (en) * | 2004-04-15 | 2009-04-14 | Agilent Technologies, Inc. | Automatic creation of protocol dependent control path for instrument application |
US7533415B2 (en) * | 2004-04-21 | 2009-05-12 | Trend Micro Incorporated | Method and apparatus for controlling traffic in a computer network |
US8730961B1 (en) | 2004-04-26 | 2014-05-20 | Foundry Networks, Llc | System and method for optimizing router lookup |
JP4343760B2 (en) * | 2004-04-28 | 2009-10-14 | Hitachi, Ltd. | Network protocol processor
US7669190B2 (en) | 2004-05-18 | 2010-02-23 | Qlogic, Corporation | Method and system for efficiently recording processor events in host bus adapters |
US7945705B1 (en) | 2004-05-25 | 2011-05-17 | Chelsio Communications, Inc. | Method for using a protocol language to avoid separate channels for control messages involving encapsulated payload data messages |
KR100553348B1 (en) * | 2004-05-31 | 2006-02-20 | Electronics and Telecommunications Research Institute | Data transmission apparatus and method for high speed streaming using PMEM controller
US20050283545A1 (en) * | 2004-06-17 | 2005-12-22 | Zur Uri E | Method and system for supporting write operations with CRC for iSCSI and iSCSI chimney |
US20060010273A1 (en) * | 2004-06-25 | 2006-01-12 | Sridharan Sakthivelu | CAM-less command context implementation |
US7457255B2 (en) * | 2004-06-25 | 2008-11-25 | Apple Inc. | Method and apparatus for providing link-local IPv4 addressing across multiple interfaces of a network node |
US20060004935A1 (en) * | 2004-06-30 | 2006-01-05 | Pak-Lung Seto | Multi-protocol bridge |
US7978827B1 (en) | 2004-06-30 | 2011-07-12 | Avaya Inc. | Automatic configuration of call handling based on end-user needs and characteristics |
US7573831B1 (en) * | 2004-07-01 | 2009-08-11 | Sprint Communications Company L.P. | System and method for analyzing transmission of billing data |
US7787481B1 (en) * | 2004-07-19 | 2010-08-31 | Advanced Micro Devices, Inc. | Prefetch scheme to minimize interpacket gap |
US7502870B1 (en) * | 2004-08-13 | 2009-03-10 | Sun Microsystems, Inc. | Method for receiving network communication and apparatus for performing the same |
US7382779B1 (en) | 2004-08-20 | 2008-06-03 | Trend Micro Incorporated | Method and apparatus for configuring a network component |
US7761608B2 (en) * | 2004-09-01 | 2010-07-20 | Qlogic, Corporation | Method and system for processing markers, data integrity fields and digests |
US7522623B2 (en) * | 2004-09-01 | 2009-04-21 | Qlogic, Corporation | Method and system for efficiently using buffer space |
WO2006027874A1 (en) * | 2004-09-08 | 2006-03-16 | Nec Corporation | Radio communication system, mobile station, and handover control method |
US7676611B2 (en) * | 2004-10-01 | 2010-03-09 | Qlogic, Corporation | Method and system for processing out of order frames
US7593997B2 (en) * | 2004-10-01 | 2009-09-22 | Qlogic, Corporation | Method and system for LUN remapping in fibre channel networks |
US20060072563A1 (en) * | 2004-10-05 | 2006-04-06 | Regnier Greg J | Packet processing |
US8248939B1 (en) | 2004-10-08 | 2012-08-21 | Alacritech, Inc. | Transferring control of TCP connections between hierarchy of processing mechanisms |
US7835380B1 (en) * | 2004-10-19 | 2010-11-16 | Broadcom Corporation | Multi-port network interface device with shared processing resources |
US8478907B1 (en) * | 2004-10-19 | 2013-07-02 | Broadcom Corporation | Network interface device serving multiple host operating systems |
US7657703B1 (en) | 2004-10-29 | 2010-02-02 | Foundry Networks, Inc. | Double density content addressable memory (CAM) lookup scheme |
US20060098818A1 (en) * | 2004-11-10 | 2006-05-11 | International Business Machines (IBM) Corporation | Encryption technique for asynchronous control commands and data
US7392323B2 (en) * | 2004-11-16 | 2008-06-24 | Seiko Epson Corporation | Method and apparatus for tunneling data using a single simulated stateful TCP connection |
US7716380B1 (en) * | 2004-11-17 | 2010-05-11 | Juniper Networks, Inc. | Recycling items in a network device |
KR100646858B1 (en) * | 2004-12-08 | 2006-11-23 | Electronics and Telecommunications Research Institute | Hardware device and operating method for creation and management of socket information based on TOE
US7730257B2 (en) * | 2004-12-16 | 2010-06-01 | Broadcom Corporation | Method and computer program product to increase I/O write performance in a redundant array |
US20060136475A1 (en) * | 2004-12-21 | 2006-06-22 | Soumen Karmakar | Secure data transfer apparatus, systems, and methods |
US7404040B2 (en) * | 2004-12-30 | 2008-07-22 | Intel Corporation | Packet data placement in a processor cache |
US7966643B2 (en) * | 2005-01-19 | 2011-06-21 | Microsoft Corporation | Method and system for securing a remote file system |
US7536542B2 (en) * | 2005-01-19 | 2009-05-19 | Microsoft Corporation | Method and system for intercepting, analyzing, and modifying interactions between a transport client and a transport provider |
KR100810222B1 (en) * | 2005-02-01 | 2008-03-07 | Samsung Electronics Co., Ltd. | Method and system for servicing a full duplex direct call in PoC (PTT over Cellular)
US20060200517A1 (en) * | 2005-03-03 | 2006-09-07 | Steve Nelson | Method and apparatus for real time multi-party conference document copier |
JP2008532177A (en) | 2005-03-03 | 2008-08-14 | Washington University | Method and apparatus for performing biological sequence similarity searches
US7643420B2 (en) * | 2005-03-11 | 2010-01-05 | Broadcom Corporation | Method and system for transmission control protocol (TCP) traffic smoothing |
JP2006252733A (en) * | 2005-03-14 | 2006-09-21 | Fujitsu Ltd | Medium storage device and write path diagnosing method for the same |
US7414975B2 (en) * | 2005-03-24 | 2008-08-19 | Ixia | Protocol stack |
US8024541B2 (en) | 2005-03-25 | 2011-09-20 | Elliptic Technologies Inc. | Packet memory processing system having memory buffers with different architectures and method therefor |
US7568056B2 (en) * | 2005-03-28 | 2009-07-28 | Nvidia Corporation | Host bus adapter that interfaces with host computer bus to multiple types of storage devices |
US7657537B1 (en) | 2005-04-29 | 2010-02-02 | Netapp, Inc. | System and method for specifying batch execution ordering of requests in a storage system cluster |
US7389382B2 (en) * | 2005-06-08 | 2008-06-17 | Cisco Technology, Inc. | ISCSI block cache and synchronization technique for WAN edge device |
US7480747B2 (en) * | 2005-06-08 | 2009-01-20 | Intel Corporation | Method and apparatus to reduce latency and improve throughput of input/output data in a processor |
EP1891787B1 (en) | 2005-06-15 | 2010-03-24 | Solarflare Communications Incorporated | Data processing system |
US7574698B2 (en) * | 2005-06-16 | 2009-08-11 | International Business Machines Corporation | Method and apparatus for protecting HTTP session data from data crossover using aspect-oriented programming |
US7694287B2 (en) | 2005-06-29 | 2010-04-06 | Visa U.S.A. | Schema-based dynamic parse/build engine for parsing multi-format messages |
US7774402B2 (en) * | 2005-06-29 | 2010-08-10 | Visa U.S.A. | Adaptive gateway for switching transactions and data on unreliable networks using context-based rules |
US7839875B1 (en) * | 2005-07-12 | 2010-11-23 | Oracle America Inc. | Method and system for an efficient transport loopback mechanism for TCP/IP sockets |
US7715436B1 (en) | 2005-11-18 | 2010-05-11 | Chelsio Communications, Inc. | Method for UDP transmit protocol offload processing with traffic management |
US7616563B1 (en) | 2005-08-31 | 2009-11-10 | Chelsio Communications, Inc. | Method to implement an L4-L7 switch using split connections and an offloading NIC |
US7660264B1 (en) | 2005-12-19 | 2010-02-09 | Chelsio Communications, Inc. | Method for traffic scheduling in intelligent network interface circuitry
US7660306B1 (en) | 2006-01-12 | 2010-02-09 | Chelsio Communications, Inc. | Virtualizing the operation of intelligent network interface circuitry |
US7724658B1 (en) | 2005-08-31 | 2010-05-25 | Chelsio Communications, Inc. | Protocol offload transmit traffic management |
US7639715B1 (en) | 2005-09-09 | 2009-12-29 | Qlogic, Corporation | Dedicated application interface for network systems |
US8135741B2 (en) | 2005-09-20 | 2012-03-13 | Microsoft Corporation | Modifying service provider context information to facilitate locating interceptor context information |
US20070073966A1 (en) * | 2005-09-23 | 2007-03-29 | Corbin John R | Network processor-based storage controller, compute element and method of using same |
US8660137B2 (en) * | 2005-09-29 | 2014-02-25 | Broadcom Israel Research, Ltd. | Method and system for quality of service and congestion management for converged network interface devices |
US7760733B1 (en) | 2005-10-13 | 2010-07-20 | Chelsio Communications, Inc. | Filtering ingress packets in network interface circuitry |
US8862783B2 (en) * | 2005-10-25 | 2014-10-14 | Broadbus Technologies, Inc. | Methods and system to offload data processing tasks |
US7656894B2 (en) * | 2005-10-28 | 2010-02-02 | Microsoft Corporation | Offloading processing tasks to a peripheral device |
US8447898B2 (en) * | 2005-10-28 | 2013-05-21 | Microsoft Corporation | Task offload to a peripheral device |
KR100653178B1 (en) * | 2005-11-03 | 2006-12-05 | Electronics and Telecommunications Research Institute | Apparatus and method for creation and management of TCP transmission information based on TOE
US8284783B1 (en) * | 2005-11-15 | 2012-10-09 | Nvidia Corporation | System and method for avoiding neighbor cache pollution |
US8284782B1 (en) * | 2005-11-15 | 2012-10-09 | Nvidia Corporation | System and method for avoiding ARP cache pollution |
US7738500B1 (en) | 2005-12-14 | 2010-06-15 | Alacritech, Inc. | TCP timestamp synchronization for network connections that are offloaded to network interface devices |
US8448162B2 (en) | 2005-12-28 | 2013-05-21 | Foundry Networks, Llc | Hitless software upgrades |
US8116312B2 (en) | 2006-02-08 | 2012-02-14 | Solarflare Communications, Inc. | Method and apparatus for multicast packet reception |
US7984438B2 (en) * | 2006-02-08 | 2011-07-19 | Microsoft Corporation | Virtual machine transitioning from emulating mode to enlightened mode |
US8145733B1 (en) | 2006-02-15 | 2012-03-27 | Trend Micro Incorporated | Identification of computers located behind an address translation server |
US7675854B2 (en) | 2006-02-21 | 2010-03-09 | A10 Networks, Inc. | System and method for an adaptive TCP SYN cookie with time validation |
FR2898455A1 (en) * | 2006-03-13 | 2007-09-14 | Thomson Licensing Sas | METHOD AND DEVICE FOR TRANSMITTING DATA PACKETS |
US7733907B2 (en) | 2006-04-07 | 2010-06-08 | Microsoft Corporation | Combined header processing for network packets |
US20080040519A1 (en) * | 2006-05-02 | 2008-02-14 | Alacritech, Inc. | Network interface device with 10 Gb/s full-duplex transfer rate |
EP2016694B1 (en) * | 2006-05-09 | 2019-03-20 | Cognio, Inc. | System and method for identifying wireless devices using pulse fingerprinting and sequence analysis |
US7656849B1 (en) | 2006-05-31 | 2010-02-02 | Qurio Holdings, Inc. | System and method for bypassing an access point in a local area network for P2P data transfers |
US20070285501A1 (en) * | 2006-06-09 | 2007-12-13 | Wai Yim | Videoconference System Clustering |
US7921046B2 (en) | 2006-06-19 | 2011-04-05 | Exegy Incorporated | High speed processing of financial information using FPGA devices |
US8102863B1 (en) * | 2006-06-27 | 2012-01-24 | Qurio Holdings, Inc. | High-speed WAN to wireless LAN gateway |
US9781071B2 (en) * | 2006-06-28 | 2017-10-03 | Nokia Technologies Oy | Method, apparatus and computer program product for providing automatic delivery of information to a terminal |
US8224466B2 (en) | 2006-11-29 | 2012-07-17 | Honeywell International Inc. | Low-cost controller having a dynamically changeable interface |
US8190698B2 (en) * | 2006-06-30 | 2012-05-29 | Microsoft Corporation | Efficiently polling to determine completion of a DMA copy operation |
US9948533B2 (en) | 2006-07-10 | 2018-04-17 | Solarflare Communications, Inc. | Interrupt management
EP2552081B1 (en) * | 2006-07-10 | 2015-06-17 | Solarflare Communications Inc | Interrupt management |
US9686117B2 (en) | 2006-07-10 | 2017-06-20 | Solarflare Communications, Inc. | Chimney onload implementation of network protocol stack |
US7903654B2 (en) | 2006-08-22 | 2011-03-08 | Foundry Networks, Llc | System and method for ECMP load sharing |
US8584199B1 (en) | 2006-10-17 | 2013-11-12 | A10 Networks, Inc. | System and method to apply a packet routing policy to an application session |
US8312507B2 (en) | 2006-10-17 | 2012-11-13 | A10 Networks, Inc. | System and method to apply network traffic policy to an application session |
US9794378B2 (en) | 2006-11-08 | 2017-10-17 | Standard Microsystems Corporation | Network traffic controller (NTC) |
US7583674B2 (en) * | 2006-11-20 | 2009-09-01 | Alcatel Lucent | Switch and method for supporting internet protocol (IP) network tunnels |
US7773546B2 (en) * | 2006-11-21 | 2010-08-10 | Broadcom Corporation | System and method for a software-based TCP/IP offload engine for digital media renderers |
US8238255B2 (en) | 2006-11-22 | 2012-08-07 | Foundry Networks, Llc | Recovering from failures without impact on data traffic in a shared bus architecture |
JP4949816B2 (en) * | 2006-12-01 | 2012-06-13 | Renesas Electronics Corporation | Bidirectional communication circuit, bidirectional communication system, and bidirectional communication circuit communication method
US9137212B2 (en) * | 2006-12-04 | 2015-09-15 | Oracle America, Inc. | Communication method and apparatus using changing destination and return destination ID's |
DE102006057686A1 (en) * | 2006-12-07 | 2008-06-12 | Robert Bosch Gmbh | Procedure for exchanging information |
JP5022691B2 (en) * | 2006-12-12 | 2012-09-12 | Canon Inc. | COMMUNICATION DEVICE, ITS CONTROL METHOD, AND PROGRAM
US20080155571A1 (en) * | 2006-12-21 | 2008-06-26 | Yuval Kenan | Method and System for Host Software Concurrent Processing of a Network Connection Using Multiple Central Processing Units |
US7912047B2 (en) * | 2006-12-22 | 2011-03-22 | International Business Machines Corporation | Method and program for classifying fragmented messages |
US20090279441A1 (en) | 2007-01-11 | 2009-11-12 | Foundry Networks, Inc. | Techniques for transmitting failure detection protocol packets |
US20080177881A1 (en) * | 2007-01-19 | 2008-07-24 | Dell Products, Lp | System and Method for Applying Quality of Service (QoS) in iSCSI Through ISNS |
US7617337B1 (en) | 2007-02-06 | 2009-11-10 | Avaya Inc. | VoIP quality tradeoff system |
US8170023B2 (en) * | 2007-02-20 | 2012-05-01 | Broadcom Corporation | System and method for a software-based TCP/IP offload engine for implementing efficient digital media streaming over internet protocol networks |
EP1976226A1 (en) * | 2007-03-30 | 2008-10-01 | STMicroelectronics Pvt. Ltd. | A method and system for optimizing power consumption and reducing MIPS requirements for wireless communication |
US8935406B1 (en) | 2007-04-16 | 2015-01-13 | Chelsio Communications, Inc. | Network adaptor configured for connection establishment offload |
US20080263171A1 (en) * | 2007-04-19 | 2008-10-23 | Alacritech, Inc. | Peripheral device that DMAS the same data to different locations in a computer |
JP5121291B2 (en) * | 2007-04-20 | 2013-01-16 | NuFlare Technology, Inc. | Data transfer system
US8060644B1 (en) | 2007-05-11 | 2011-11-15 | Chelsio Communications, Inc. | Intelligent network adaptor with end-to-end flow control |
US7826350B1 (en) | 2007-05-11 | 2010-11-02 | Chelsio Communications, Inc. | Intelligent network adaptor with adaptive direct data placement scheme |
US8589587B1 (en) | 2007-05-11 | 2013-11-19 | Chelsio Communications, Inc. | Protocol offload in intelligent network adaptor, including application level signalling |
US7831720B1 (en) | 2007-05-17 | 2010-11-09 | Chelsio Communications, Inc. | Full offload of stateful connections, with partial connection offload |
US7908624B2 (en) * | 2007-06-18 | 2011-03-15 | Broadcom Corporation | System and method for just in time streaming of digital programs for network recording and relaying over internet protocol network |
US8037399B2 (en) | 2007-07-18 | 2011-10-11 | Foundry Networks, Llc | Techniques for segmented CRC design in high speed networks |
US8271859B2 (en) | 2007-07-18 | 2012-09-18 | Foundry Networks Llc | Segmented CRC design in high speed networks |
US9455896B2 (en) | 2007-07-23 | 2016-09-27 | Verint Americas Inc. | Dedicated network interface |
US7720099B2 (en) * | 2007-08-13 | 2010-05-18 | Honeywell International Inc. | Common protocol and routing scheme for space data processing networks |
US8031633B2 (en) * | 2007-08-13 | 2011-10-04 | Honeywell International Inc. | Virtual network architecture for space data processing |
US8149839B1 (en) | 2007-09-26 | 2012-04-03 | Foundry Networks, Llc | Selection of trunk ports and paths using rotation |
US8041854B2 (en) * | 2007-09-28 | 2011-10-18 | Intel Corporation | Steering data units to a consumer |
JP5373620B2 (en) * | 2007-11-09 | 2013-12-18 | Panasonic Corporation | Data transfer control device, data transfer device, data transfer control method, and semiconductor integrated circuit using reconfiguration circuit
US8527663B2 (en) * | 2007-12-21 | 2013-09-03 | At&T Intellectual Property I, L.P. | Methods and apparatus for performing non-intrusive network layer performance measurement in communication networks |
US8706862B2 (en) * | 2007-12-21 | 2014-04-22 | At&T Intellectual Property I, L.P. | Methods and apparatus for performing non-intrusive data link layer performance measurement in communication networks |
US8127307B1 (en) * | 2007-12-31 | 2012-02-28 | Emc Corporation | Methods and apparatus for storage virtualization system having switch level event processing |
KR101500970B1 (en) * | 2008-01-15 | 2015-03-11 | Samsung Electronics Co., Ltd. | Method and apparatus for reducing power consumption of wireless network device, and computer readable medium thereof
US9584446B2 (en) * | 2008-03-18 | 2017-02-28 | Vmware, Inc. | Memory buffer management method and system having multiple receive ring buffers |
US8539513B1 (en) | 2008-04-01 | 2013-09-17 | Alacritech, Inc. | Accelerating data transfer in a virtual computer system with tightly coupled TCP connections |
US8443440B2 (en) * | 2008-04-05 | 2013-05-14 | Trend Micro Incorporated | System and method for intelligent coordination of host and guest intrusion prevention in virtualized environment |
US8132089B1 (en) * | 2008-05-02 | 2012-03-06 | Verint Americas, Inc. | Error checking at the re-sequencing stage |
US7782903B2 (en) * | 2008-05-14 | 2010-08-24 | Newport Media, Inc. | Hardware accelerated protocol stack |
US8898448B2 (en) * | 2008-06-19 | 2014-11-25 | Qualcomm Incorporated | Hardware acceleration for WWAN technologies |
US8364863B2 (en) * | 2008-07-11 | 2013-01-29 | Intel Corporation | Method and apparatus for universal serial bus (USB) command queuing |
US8341286B1 (en) | 2008-07-31 | 2012-12-25 | Alacritech, Inc. | TCP offload send optimization |
US8218751B2 (en) | 2008-09-29 | 2012-07-10 | Avaya Inc. | Method and apparatus for identifying and eliminating the source of background noise in multi-party teleconferences |
TWI392983B (en) * | 2008-10-06 | 2013-04-11 | Sonix Technology Co Ltd | Robot apparatus control system using a tone and robot apparatus |
US9488992B2 (en) * | 2008-10-16 | 2016-11-08 | Honeywell International Inc. | Wall module configuration tool |
US20100097931A1 (en) * | 2008-10-21 | 2010-04-22 | Shakeel Mustafa | Management of packet flow in a network |
US9306793B1 (en) | 2008-10-22 | 2016-04-05 | Alacritech, Inc. | TCP offload device that batches session layer headers to reduce interrupts as well as CPU copies |
WO2010077829A1 (en) | 2008-12-15 | 2010-07-08 | Exegy Incorporated | Method and apparatus for high-speed processing of financial market depth data |
US8325601B2 (en) * | 2009-05-08 | 2012-12-04 | Canon Kabushiki Kaisha | Reliable network streaming of a single data stream over multiple physical interfaces |
US8396960B2 (en) * | 2009-05-08 | 2013-03-12 | Canon Kabushiki Kaisha | Efficient network utilization using multiple physical interfaces |
US8880716B2 (en) * | 2009-05-08 | 2014-11-04 | Canon Kabushiki Kaisha | Network streaming of a single data stream simultaneously over multiple physical interfaces |
US8090901B2 (en) | 2009-05-14 | 2012-01-03 | Brocade Communications Systems, Inc. | TCAM management approach that minimizes movements
US8054848B2 (en) * | 2009-05-19 | 2011-11-08 | International Business Machines Corporation | Single DMA transfers from device drivers to network adapters |
US8355345B2 (en) * | 2009-08-04 | 2013-01-15 | International Business Machines Corporation | Apparatus, system, and method for establishing point to point connections in FCOE |
US8599850B2 (en) | 2009-09-21 | 2013-12-03 | Brocade Communications Systems, Inc. | Provisioning single or multistage networks using ethernet service instances (ESIs) |
US9960967B2 (en) | 2009-10-21 | 2018-05-01 | A10 Networks, Inc. | Determining an application delivery server based on geo-location information |
US8526363B2 (en) * | 2010-01-13 | 2013-09-03 | Sony Corporation | Method and system for transferring data between wireless devices |
US8356109B2 (en) | 2010-05-13 | 2013-01-15 | Canon Kabushiki Kaisha | Network streaming of a video stream over multiple communication channels |
AU2011274418B2 (en) | 2010-07-09 | 2015-01-15 | Visa International Service Association | Gateway abstraction layer |
US8879387B2 (en) * | 2010-09-08 | 2014-11-04 | Verizon Patent And Licensing Inc. | Transmission control protocol (TCP) throughput optimization in point-to-multipoint and heterogeneous wireless access networks |
US8402454B2 (en) * | 2010-09-22 | 2013-03-19 | Telefonaktiebolaget L M Ericsson (Publ) | In-service software upgrade on cards of virtual partition of network element that includes directing traffic away from cards of virtual partition |
US9215275B2 (en) | 2010-09-30 | 2015-12-15 | A10 Networks, Inc. | System and method to balance servers based on server load status |
US20120102136A1 (en) * | 2010-10-21 | 2012-04-26 | Lancaster University | Data caching system |
US20120137102A1 (en) * | 2010-11-30 | 2012-05-31 | Ramkumar Perumanam | Consumer approach based memory buffer optimization for multimedia applications |
US9609052B2 (en) | 2010-12-02 | 2017-03-28 | A10 Networks, Inc. | Distributing application traffic to servers based on dynamic service response time |
US9674318B2 (en) | 2010-12-09 | 2017-06-06 | Solarflare Communications, Inc. | TCP processing for devices |
US9600429B2 (en) | 2010-12-09 | 2017-03-21 | Solarflare Communications, Inc. | Encapsulated accelerator |
US9258390B2 (en) | 2011-07-29 | 2016-02-09 | Solarflare Communications, Inc. | Reducing network latency |
US8996644B2 (en) | 2010-12-09 | 2015-03-31 | Solarflare Communications, Inc. | Encapsulated accelerator |
WO2012079041A1 (en) | 2010-12-09 | 2012-06-14 | Exegy Incorporated | Method and apparatus for managing orders in financial markets |
US10873613B2 (en) * | 2010-12-09 | 2020-12-22 | Xilinx, Inc. | TCP processing for devices |
KR20120071244A (en) * | 2010-12-22 | 2012-07-02 | Electronics and Telecommunications Research Institute | Apparatus for transferring packet data of mobile communication system and method thereof
US8538588B2 (en) | 2011-02-28 | 2013-09-17 | Honeywell International Inc. | Method and apparatus for configuring scheduling on a wall module |
US8837346B2 (en) * | 2011-06-01 | 2014-09-16 | General Electric Company | Repeater pass-through messaging |
US20120327952A1 (en) * | 2011-06-23 | 2012-12-27 | Exar Corporation | Ethernet tag approach to support networking task offload |
US9442881B1 (en) * | 2011-08-31 | 2016-09-13 | Yahoo! Inc. | Anti-spam transient entity classification |
US8769138B2 (en) * | 2011-09-02 | 2014-07-01 | Compuverde Ab | Method for data retrieval from a distributed data storage system |
US8897154B2 (en) | 2011-10-24 | 2014-11-25 | A10 Networks, Inc. | Combining stateless and stateful server load balancing |
WO2013072773A2 (en) * | 2011-11-18 | 2013-05-23 | Marvell World Trade Ltd. | Data path acceleration using hw virtualization |
JP5768683B2 (en) * | 2011-11-28 | 2015-08-26 | Fujitsu Limited | Reception data processing method, communication apparatus, and program
US9386088B2 (en) | 2011-11-29 | 2016-07-05 | A10 Networks, Inc. | Accelerating service processing using fast path TCP |
US9094364B2 (en) | 2011-12-23 | 2015-07-28 | A10 Networks, Inc. | Methods to manage services over a service gateway |
US10044582B2 (en) | 2012-01-28 | 2018-08-07 | A10 Networks, Inc. | Generating secure name records |
US9565120B2 (en) | 2012-01-30 | 2017-02-07 | Broadcom Corporation | Method and system for performing distributed deep-packet inspection |
US9031094B2 (en) * | 2012-02-03 | 2015-05-12 | Apple Inc. | System and method for local flow control and advisory using a fairness-based queue management algorithm |
US10650452B2 (en) | 2012-03-27 | 2020-05-12 | Ip Reservoir, Llc | Offload processing of data packets |
US11436672B2 (en) | 2012-03-27 | 2022-09-06 | Exegy Incorporated | Intelligent switch for processing financial market data |
US20140180904A1 (en) * | 2012-03-27 | 2014-06-26 | Ip Reservoir, Llc | Offload Processing of Data Packets Containing Financial Market Data |
US10121196B2 (en) | 2012-03-27 | 2018-11-06 | Ip Reservoir, Llc | Offload processing of data packets containing financial market data |
US9990393B2 (en) | 2012-03-27 | 2018-06-05 | Ip Reservoir, Llc | Intelligent feed switch |
US8885562B2 (en) | 2012-03-28 | 2014-11-11 | Telefonaktiebolaget L M Ericsson (Publ) | Inter-chassis redundancy with coordinated traffic direction |
WO2014003855A1 (en) | 2012-06-27 | 2014-01-03 | Monteris Medical Corporation | Image-guided therapy of a tissue |
US8782221B2 (en) | 2012-07-05 | 2014-07-15 | A10 Networks, Inc. | Method to allocate buffer for TCP proxy session based on dynamic network conditions |
US10002141B2 (en) | 2012-09-25 | 2018-06-19 | A10 Networks, Inc. | Distributed database in software driven networks |
US10021174B2 (en) | 2012-09-25 | 2018-07-10 | A10 Networks, Inc. | Distributing service sessions |
US9106561B2 (en) | 2012-12-06 | 2015-08-11 | A10 Networks, Inc. | Configuration of a virtual service network |
US9843484B2 (en) | 2012-09-25 | 2017-12-12 | A10 Networks, Inc. | Graceful scaling in software driven networks |
JP2015534769A (en) | 2012-09-25 | 2015-12-03 | A10 Networks, Inc. | Load balancing in data networks
US10505747B2 (en) | 2012-10-16 | 2019-12-10 | Solarflare Communications, Inc. | Feed processing |
US10095433B1 (en) | 2012-10-24 | 2018-10-09 | Western Digital Technologies, Inc. | Out-of-order data transfer mechanisms for data storage systems |
US9047417B2 (en) | 2012-10-29 | 2015-06-02 | Intel Corporation | NUMA aware network interface |
US9338225B2 (en) | 2012-12-06 | 2016-05-10 | A10 Networks, Inc. | Forwarding policies on a virtual service network |
US9413651B2 (en) * | 2012-12-14 | 2016-08-09 | Broadcom Corporation | Selective deep packet inspection |
US9531846B2 (en) | 2013-01-23 | 2016-12-27 | A10 Networks, Inc. | Reducing buffer usage for TCP proxy session based on delayed acknowledgement |
US9900252B2 (en) | 2013-03-08 | 2018-02-20 | A10 Networks, Inc. | Application delivery controller and global server load balancer |
US9992107B2 (en) | 2013-03-15 | 2018-06-05 | A10 Networks, Inc. | Processing data packets using a policy based network path |
US8626912B1 (en) | 2013-03-15 | 2014-01-07 | Extrahop Networks, Inc. | Automated passive discovery of applications |
US8867343B2 (en) | 2013-03-15 | 2014-10-21 | Extrahop Networks, Inc. | Trigger based recording of flows with play back |
US8619579B1 (en) * | 2013-03-15 | 2013-12-31 | Extrahop Networks, Inc. | De-duplicating of packets in flows at layer 3 |
US10027761B2 (en) | 2013-05-03 | 2018-07-17 | A10 Networks, Inc. | Facilitating a secure 3 party network session by a network device |
WO2014179753A2 (en) | 2013-05-03 | 2014-11-06 | A10 Networks, Inc. | Facilitating secure network traffic by an application delivery controller |
US10684973B2 (en) | 2013-08-30 | 2020-06-16 | Intel Corporation | NUMA node peripheral switch |
WO2015033418A1 (en) * | 2013-09-05 | 2015-03-12 | 株式会社日立製作所 | Storage system and storage control method |
WO2015041653A1 (en) * | 2013-09-19 | 2015-03-26 | Intel Corporation | Methods and apparatus to manage cache memory in multi-cache environments |
US10230770B2 (en) | 2013-12-02 | 2019-03-12 | A10 Networks, Inc. | Network proxy layer for policy-based application proxies |
US9584637B2 (en) * | 2014-02-19 | 2017-02-28 | Netronome Systems, Inc. | Guaranteed in-order packet delivery |
US10675113B2 (en) | 2014-03-18 | 2020-06-09 | Monteris Medical Corporation | Automated therapy of a three-dimensional tissue region |
US9433383B2 (en) | 2014-03-18 | 2016-09-06 | Monteris Medical Corporation | Image-guided therapy of a tissue |
WO2015143026A1 (en) | 2014-03-18 | 2015-09-24 | Monteris Medical Corporation | Image-guided therapy of a tissue |
US10020979B1 (en) | 2014-03-25 | 2018-07-10 | A10 Networks, Inc. | Allocating resources in multi-core computing environments |
US9942152B2 (en) | 2014-03-25 | 2018-04-10 | A10 Networks, Inc. | Forwarding data packets using a service-based forwarding policy |
US9942162B2 (en) | 2014-03-31 | 2018-04-10 | A10 Networks, Inc. | Active application response delay time |
EP2928123B1 (en) * | 2014-04-02 | 2019-11-06 | 6Wind | Method for processing VXLAN data units |
WO2015156776A1 (en) * | 2014-04-08 | 2015-10-15 | Empire Technology Development Llc | Full duplex radio communication |
WO2015160333A1 (en) | 2014-04-15 | 2015-10-22 | Empire Technology Development Llc | Self interference cancellation |
US9806943B2 (en) | 2014-04-24 | 2017-10-31 | A10 Networks, Inc. | Enabling planned upgrade/downgrade of network devices without impacting network sessions |
US9906422B2 (en) | 2014-05-16 | 2018-02-27 | A10 Networks, Inc. | Distributed system to determine a server's health |
US9992229B2 (en) | 2014-06-03 | 2018-06-05 | A10 Networks, Inc. | Programming a data network device using user defined scripts with licenses |
US9986061B2 (en) | 2014-06-03 | 2018-05-29 | A10 Networks, Inc. | Programming a data network device using user defined scripts |
US10129122B2 (en) | 2014-06-03 | 2018-11-13 | A10 Networks, Inc. | User defined objects for network devices |
CN104063344B (en) | 2014-06-20 | 2018-06-26 | Huawei Technologies Co., Ltd. | Method and network interface card for storing data |
WO2016023187A1 (en) * | 2014-08-13 | 2016-02-18 | Huawei Technologies Co., Ltd. | Storage system, method and apparatus for processing operation request |
US9536590B1 (en) * | 2014-09-03 | 2017-01-03 | Marvell International Ltd. | System and method of memory electrical repair |
US9524169B2 (en) * | 2014-09-24 | 2016-12-20 | Intel Corporation | Technologies for efficient LZ77-based data decompression |
US9934177B2 (en) * | 2014-11-04 | 2018-04-03 | Cavium, Inc. | Methods and systems for accessing storage using a network interface card |
GB2532732B (en) | 2014-11-25 | 2019-06-26 | Ibm | Integrating a communication bridge into a data processing system |
US9582463B2 (en) | 2014-12-09 | 2017-02-28 | Intel Corporation | Heterogeneous input/output (I/O) using remote direct memory access (RDMA) and active message |
US10320918B1 (en) * | 2014-12-17 | 2019-06-11 | Xilinx, Inc. | Data-flow architecture for a TCP offload engine |
US10681145B1 (en) * | 2014-12-22 | 2020-06-09 | Chelsio Communications, Inc. | Replication in a protocol offload network interface controller |
US10469581B2 (en) | 2015-01-05 | 2019-11-05 | International Business Machines Corporation | File storage protocols header transformation in RDMA operations |
US9755731B2 (en) * | 2015-01-10 | 2017-09-05 | Hughes Network Systems, Llc | Hardware TCP accelerator |
US20160239441A1 (en) * | 2015-02-13 | 2016-08-18 | Qualcomm Incorporated | Systems and methods for providing kernel scheduling of volatile memory maintenance events |
US9912454B2 (en) * | 2015-02-16 | 2018-03-06 | Dell Products L.P. | Systems and methods for efficient file transfer in a boot mode of a basic input/output system |
US20160261457A1 (en) * | 2015-03-03 | 2016-09-08 | Qualcomm Incorporated | Recovering from connectivity failure at a bridging device |
US9920944B2 (en) | 2015-03-19 | 2018-03-20 | Honeywell International Inc. | Wall module display modification and sharing |
US10327830B2 (en) | 2015-04-01 | 2019-06-25 | Monteris Medical Corporation | Cryotherapy, thermal therapy, temperature modulation therapy, and probe apparatus therefor |
US9338147B1 (en) | 2015-04-24 | 2016-05-10 | Extrahop Networks, Inc. | Secure communication secret sharing |
US9641628B2 (en) * | 2015-04-24 | 2017-05-02 | Avaya Inc. | Host discovery and attach |
US11032397B2 (en) | 2015-06-17 | 2021-06-08 | Hewlett Packard Enterprise Development Lp | Method and system for high speed data links |
US9888095B2 (en) * | 2015-06-26 | 2018-02-06 | Microsoft Technology Licensing, Llc | Lightweight transport protocol |
US9674090B2 (en) | 2015-06-26 | 2017-06-06 | Microsoft Technology Licensing, Llc | In-line network accelerator |
US10581976B2 (en) | 2015-08-12 | 2020-03-03 | A10 Networks, Inc. | Transmission control of protocol state exchange for dynamic stateful service insertion |
US10243791B2 (en) | 2015-08-13 | 2019-03-26 | A10 Networks, Inc. | Automated adjustment of subscriber policies |
US10019304B2 (en) * | 2016-01-06 | 2018-07-10 | Nicira, Inc. | Providing an application interface programming exception in an upper management layer |
US10318288B2 (en) | 2016-01-13 | 2019-06-11 | A10 Networks, Inc. | System and method to process a chain of network applications |
US10204211B2 (en) | 2016-02-03 | 2019-02-12 | Extrahop Networks, Inc. | Healthcare operations with passive network monitoring |
US10157162B2 (en) * | 2016-06-27 | 2018-12-18 | Intel Corporation | External universal boosting agent device that improves computing performance by managing the offloading of application tasks for connected electronic devices |
US9729416B1 (en) | 2016-07-11 | 2017-08-08 | Extrahop Networks, Inc. | Anomaly detection using device relationship graphs |
US9660879B1 (en) | 2016-07-25 | 2017-05-23 | Extrahop Networks, Inc. | Flow deduplication across a cluster of network monitoring devices |
US9998573B2 (en) | 2016-08-02 | 2018-06-12 | Qualcomm Incorporated | Hardware-based packet processing circuitry |
CN112887312B (en) * | 2016-12-29 | 2022-07-22 | Huawei Technologies Co., Ltd. | Slow protocol message processing method and related device |
US10389835B2 (en) | 2017-01-10 | 2019-08-20 | A10 Networks, Inc. | Application aware systems and methods to process user loadable network applications |
US10671571B2 (en) * | 2017-01-31 | 2020-06-02 | Cisco Technology, Inc. | Fast network performance in containerized environments for network function virtualization |
CN108418776B (en) * | 2017-02-09 | 2021-08-20 | Nokia Shanghai Bell Co., Ltd. | Method and apparatus for providing secure services |
US10476673B2 (en) | 2017-03-22 | 2019-11-12 | Extrahop Networks, Inc. | Managing session secrets for continuous packet capture systems |
US10956245B1 (en) * | 2017-07-28 | 2021-03-23 | EMC IP Holding Company LLC | Storage system with host-directed error scanning of solid-state storage devices |
US10263863B2 (en) | 2017-08-11 | 2019-04-16 | Extrahop Networks, Inc. | Real-time configuration discovery and management |
US10063434B1 (en) | 2017-08-29 | 2018-08-28 | Extrahop Networks, Inc. | Classifying applications or activities based on network behavior |
CN109714302B (en) | 2017-10-25 | 2022-06-14 | Alibaba Group Holding Limited | Method, device and system for algorithm offloading |
US9967292B1 (en) | 2017-10-25 | 2018-05-08 | Extrahop Networks, Inc. | Inline secret sharing |
US10805420B2 (en) * | 2017-11-29 | 2020-10-13 | Forcepoint Llc | Proxy-less wide area network acceleration |
US10686872B2 (en) | 2017-12-19 | 2020-06-16 | Xilinx, Inc. | Network interface device |
US10686731B2 (en) | 2017-12-19 | 2020-06-16 | Xilinx, Inc. | Network interface device |
US11165720B2 (en) | 2017-12-19 | 2021-11-02 | Xilinx, Inc. | Network interface device |
US10264003B1 (en) | 2018-02-07 | 2019-04-16 | Extrahop Networks, Inc. | Adaptive network monitoring with tuneable elastic granularity |
US10389574B1 (en) | 2018-02-07 | 2019-08-20 | Extrahop Networks, Inc. | Ranking alerts based on network monitoring |
US10038611B1 (en) | 2018-02-08 | 2018-07-31 | Extrahop Networks, Inc. | Personalization of alerts based on network monitoring |
US10270794B1 (en) | 2018-02-09 | 2019-04-23 | Extrahop Networks, Inc. | Detection of denial of service attacks |
DE102018001574B4 (en) * | 2018-02-28 | 2019-09-05 | WAGO Verwaltungsgesellschaft mit beschränkter Haftung | Master-slave bus system and method for operating a bus system |
US10613992B2 (en) * | 2018-03-13 | 2020-04-07 | Tsinghua University | Systems and methods for remote procedure call |
US10116679B1 (en) | 2018-05-18 | 2018-10-30 | Extrahop Networks, Inc. | Privilege inference and monitoring based on network behavior |
US11394776B2 (en) | 2018-06-07 | 2022-07-19 | Tuxera, Inc. | Systems and methods for transport layer processing of server message block protocol messages |
US10990447B1 (en) * | 2018-07-12 | 2021-04-27 | Lightbits Labs Ltd. | System and method for controlling a flow of storage access requests |
US10838763B2 (en) | 2018-07-17 | 2020-11-17 | Xilinx, Inc. | Network interface device and host processing device |
US10659555B2 (en) | 2018-07-17 | 2020-05-19 | Xilinx, Inc. | Network interface device and host processing device |
US10992637B2 (en) * | 2018-07-31 | 2021-04-27 | Juniper Networks, Inc. | Detecting hardware address conflicts in computer networks |
US10411978B1 (en) | 2018-08-09 | 2019-09-10 | Extrahop Networks, Inc. | Correlating causes and effects associated with network activity |
US10594718B1 (en) | 2018-08-21 | 2020-03-17 | Extrahop Networks, Inc. | Managing incident response operations based on monitored network activity |
FR3087979B1 (en) * | 2018-10-31 | 2021-08-06 | Silkan Rt | DATA TRANSMISSION SYSTEM |
KR20210086691A (en) * | 2018-11-02 | 2021-07-08 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Wireless communication method, network device and terminal device |
US11165744B2 (en) | 2018-12-27 | 2021-11-02 | Juniper Networks, Inc. | Faster duplicate address detection for ranges of link local addresses |
US10965637B1 (en) | 2019-04-03 | 2021-03-30 | Juniper Networks, Inc. | Duplicate address detection for ranges of global IP addresses |
GB2583736B (en) * | 2019-05-07 | 2021-12-22 | F Secure Corp | Method for inspection and filtering of TCP streams in gateway router |
WO2020236297A1 (en) | 2019-05-23 | 2020-11-26 | Cray Inc. | Method and system for facilitating lossy dropping and ecn marking |
US10965702B2 (en) * | 2019-05-28 | 2021-03-30 | Extrahop Networks, Inc. | Detecting injection attacks using passive network monitoring |
US11165814B2 (en) | 2019-07-29 | 2021-11-02 | Extrahop Networks, Inc. | Modifying triage information based on network monitoring |
US11388072B2 (en) | 2019-08-05 | 2022-07-12 | Extrahop Networks, Inc. | Correlating network traffic that crosses opaque endpoints |
US10742530B1 (en) | 2019-08-05 | 2020-08-11 | Extrahop Networks, Inc. | Correlating network traffic that crosses opaque endpoints |
US10742677B1 (en) | 2019-09-04 | 2020-08-11 | Extrahop Networks, Inc. | Automatic determination of user roles and asset types based on network monitoring |
US11570282B2 (en) * | 2019-11-18 | 2023-01-31 | EMC IP Holding Company LLC | Using high speed data transfer protocol |
US11165823B2 (en) | 2019-12-17 | 2021-11-02 | Extrahop Networks, Inc. | Automated preemptive polymorphic deception |
US11012511B1 (en) * | 2020-01-14 | 2021-05-18 | Facebook, Inc. | Smart network interface controller for caching distributed data |
KR20210093531A (en) * | 2020-01-20 | 2021-07-28 | SK Hynix Inc. | System including a storage device for providing data to an application processor |
US11581938B2 (en) * | 2020-03-25 | 2023-02-14 | Qualcomm Incorporated | Radio link monitoring across multiple frequencies in wireless communications |
US11444996B2 (en) | 2020-04-20 | 2022-09-13 | Cisco Technology, Inc. | Two-level cache architecture for live video streaming through hybrid ICN |
US12046578B2 (en) | 2020-06-26 | 2024-07-23 | Intel Corporation | Stacked die network interface controller circuitry |
US11956311B2 (en) * | 2020-06-29 | 2024-04-09 | Marvell Asia Pte Ltd | Method and apparatus for direct memory access of network device |
US11463466B2 (en) | 2020-09-23 | 2022-10-04 | Extrahop Networks, Inc. | Monitoring encrypted network traffic |
US11310256B2 (en) | 2020-09-23 | 2022-04-19 | Extrahop Networks, Inc. | Monitoring encrypted network traffic |
US11088784B1 (en) | 2020-12-24 | 2021-08-10 | Aira Technologies, Inc. | Systems and methods for utilizing dynamic codes with neural networks |
US11368251B1 (en) | 2020-12-28 | 2022-06-21 | Aira Technologies, Inc. | Convergent multi-bit feedback system |
US11483109B2 (en) | 2020-12-28 | 2022-10-25 | Aira Technologies, Inc. | Systems and methods for multi-device communication |
US11575469B2 (en) | 2020-12-28 | 2023-02-07 | Aira Technologies, Inc. | Multi-bit feedback protocol systems and methods |
US11477308B2 (en) | 2020-12-28 | 2022-10-18 | Aira Technologies, Inc. | Adaptive payload extraction in wireless communications involving multi-access address packets |
US20220291955A1 (en) | 2021-03-09 | 2022-09-15 | Intel Corporation | Asynchronous input dependency resolution mechanism |
US11496242B2 (en) | 2021-03-15 | 2022-11-08 | Aira Technologies, Inc. | Fast cyclic redundancy check: utilizing linearity of cyclic redundancy check for accelerating correction of corrupted network packets |
US11489623B2 (en) | 2021-03-15 | 2022-11-01 | Aira Technologies, Inc. | Error correction in network packets |
US11934658B2 (en) * | 2021-03-25 | 2024-03-19 | Mellanox Technologies, Ltd. | Enhanced storage protocol emulation in a peripheral device |
US11349861B1 (en) | 2021-06-18 | 2022-05-31 | Extrahop Networks, Inc. | Identifying network entities based on beaconing activity |
US11665087B2 (en) | 2021-09-15 | 2023-05-30 | International Business Machines Corporation | Transparent service-aware multi-path networking with a feature of multiplexing |
US11296967B1 (en) | 2021-09-23 | 2022-04-05 | Extrahop Networks, Inc. | Combining passive network analysis and active probing |
US11892937B2 (en) * | 2022-02-28 | 2024-02-06 | Bank Of America Corporation | Developer test environment with containerization of tightly coupled systems |
US11843606B2 (en) | 2022-03-30 | 2023-12-12 | Extrahop Networks, Inc. | Detecting abnormal data access based on data similarity |
US12117948B2 (en) | 2022-10-31 | 2024-10-15 | Mellanox Technologies, Ltd. | Data processing unit with transparent root complex |
US12007921B2 (en) | 2022-11-02 | 2024-06-11 | Mellanox Technologies, Ltd. | Programmable user-defined peripheral-bus device implementation using data-plane accelerator (DPA) |
Family Cites Families (289)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4326265A (en) | 1971-07-19 | 1982-04-20 | Texas Instruments Incorporated | Variable function programmed calculator |
US4336538A (en) | 1975-07-26 | 1982-06-22 | The Marconi Company Limited | Radar systems |
US4366538A (en) | 1980-10-31 | 1982-12-28 | Honeywell Information Systems Inc. | Memory controller with queue control apparatus |
US4485460A (en) | 1982-05-10 | 1984-11-27 | Texas Instruments Incorporated | ROM coupling reduction circuitry |
US4589063A (en) | 1983-08-04 | 1986-05-13 | Fortune Systems Corporation | Data processing system having automatic configuration |
US4700185A (en) | 1984-12-26 | 1987-10-13 | Motorola Inc. | Request with response mechanism and method for a local area network controller |
US5097442A (en) | 1985-06-20 | 1992-03-17 | Texas Instruments Incorporated | Programmable depth first-in, first-out memory |
JP2644780B2 (en) | 1987-11-18 | 1997-08-25 | Hitachi, Ltd. | Parallel computer with processing request function |
US5212778A (en) | 1988-05-27 | 1993-05-18 | Massachusetts Institute Of Technology | Message-driven processor in a concurrent computer |
US4991133A (en) | 1988-10-07 | 1991-02-05 | International Business Machines Corp. | Specialized communications processor for layered protocols |
US5303344A (en) | 1989-03-13 | 1994-04-12 | Hitachi, Ltd. | Protocol processing apparatus for use in interfacing network connected computer systems utilizing separate paths for control information and data transfer |
JP2986802B2 (en) | 1989-03-13 | 1999-12-06 | Hitachi, Ltd. | Protocol high-speed processing method |
US5058110A (en) | 1989-05-03 | 1991-10-15 | Ultra Network Technologies | Protocol processor |
US5163131A (en) | 1989-09-08 | 1992-11-10 | Auspex Systems, Inc. | Parallel i/o network file server architecture |
JP2945757B2 (en) | 1989-09-08 | 1999-09-06 | Auspex Systems, Inc. | Multi-device operating system architecture |
US5115432A (en) * | 1989-12-12 | 1992-05-19 | At&T Bell Laboratories | Communication architecture for high speed networking |
JPH077975B2 (en) | 1990-08-20 | 1995-01-30 | International Business Machines Corporation | System and method for controlling data transmission |
JP2863295B2 (en) | 1990-10-04 | 1999-03-03 | Oki Electric Industry Co., Ltd. | Information processing device with communication function |
US6453406B1 (en) | 1990-10-17 | 2002-09-17 | Compaq Computer Corporation | Multiprocessor system with fiber optic bus interconnect for interprocessor communications |
US5289580A (en) | 1991-05-10 | 1994-02-22 | Unisys Corporation | Programmable multiple I/O interface controller |
US5274768A (en) | 1991-05-28 | 1993-12-28 | The Trustees Of The University Of Pennsylvania | High-performance host interface for ATM networks |
JP2791236B2 (en) | 1991-07-25 | 1998-08-27 | Mitsubishi Electric Corporation | Protocol parallel processing unit |
US5524250A (en) | 1991-08-23 | 1996-06-04 | Silicon Graphics, Inc. | Central processing unit for processing a plurality of threads using dedicated general purpose registers and masque register for providing access to the registers |
US5574919A (en) | 1991-08-29 | 1996-11-12 | Lucent Technologies Inc. | Method for thinning a protocol |
US5222061A (en) * | 1991-10-31 | 1993-06-22 | At&T Bell Laboratories | Data services retransmission procedure |
JP3130609B2 (en) | 1991-12-17 | 2001-01-31 | NEC Corporation | Online information processing equipment |
JPH05260045A (en) | 1992-01-14 | 1993-10-08 | Ricoh Co Ltd | Communication method for data terminal equipment |
EP0895162A3 (en) | 1992-01-22 | 1999-11-10 | Enhanced Memory Systems, Inc. | Enhanced dram with embedded registers |
JPH05252228A (en) | 1992-03-02 | 1993-09-28 | Mitsubishi Electric Corp | Data transmitter and its communication line management method |
CA2131627A1 (en) | 1992-03-09 | 1993-09-16 | Yu-Ping Cheng | High-performance non-volatile ram protected write cache accelerator system |
JPH0619771A (en) | 1992-04-20 | 1994-01-28 | Internatl Business Mach Corp (IBM) | File management system for a file shared by different kinds of clients |
US5742760A (en) | 1992-05-12 | 1998-04-21 | Compaq Computer Corporation | Network packet switch using shared memory for repeating and bridging packets at media rate |
EP0574140A1 (en) | 1992-05-29 | 1993-12-15 | Hewlett-Packard Company | Network adapter which places a network header and data in separate memory buffers |
US6026452A (en) | 1997-02-26 | 2000-02-15 | Pitts; William Michael | Network distributed site cache RAM claimed as up/down stream request/reply channel for storing anticipated data and meta data |
US5671355A (en) | 1992-06-26 | 1997-09-23 | Predacomm, Inc. | Reconfigurable network interface apparatus and method |
US5412782A (en) | 1992-07-02 | 1995-05-02 | 3Com Corporation | Programmed I/O ethernet adapter with early interrupts for accelerating data transfer |
US5280477A (en) | 1992-08-17 | 1994-01-18 | E-Systems, Inc. | Network synchronous data distribution system |
JP3392436B2 (en) | 1992-08-28 | 2003-03-31 | Toshiba Corporation | Communication system and communication method |
FR2699706B1 (en) | 1992-12-22 | 1995-02-24 | Bull Sa | Data transmission system between a computer bus and a network. |
US5619650A (en) | 1992-12-31 | 1997-04-08 | International Business Machines Corporation | Network processor for transforming a message transported from an I/O channel to a network by adding a message identifier and then converting the message |
GB9300942D0 (en) | 1993-01-19 | 1993-03-10 | Int Computers Ltd | Parallel computer system |
DE69320321T2 (en) | 1993-02-05 | 1998-12-24 | Hewlett-Packard Co., Palo Alto, Calif. | Method and device for checking CRC codes, combining CRC partial codes |
EP0689748B1 (en) | 1993-03-20 | 1998-09-16 | International Business Machines Corporation | Method and apparatus for extracting connection information from protocol headers |
US5815646A (en) | 1993-04-13 | 1998-09-29 | C-Cube Microsystems | Decompression processor for video applications |
JP3358254B2 (en) | 1993-10-28 | 2002-12-16 | Hitachi, Ltd. | Communication control device and communication control circuit device |
US5448566A (en) | 1993-11-15 | 1995-09-05 | International Business Machines Corporation | Method and apparatus for facilitating communication in a multilayer communication architecture via a dynamic communication channel |
US5758194A (en) | 1993-11-30 | 1998-05-26 | Intel Corporation | Communication apparatus for handling networks with different transmission protocols by stripping or adding data to the data stream in the application layer |
US5444718A (en) * | 1993-11-30 | 1995-08-22 | At&T Corp. | Retransmission protocol for wireless communications |
JP2596718B2 (en) | 1993-12-21 | 1997-04-02 | International Business Machines Corporation | Method for managing network communication buffers |
US5809527A (en) | 1993-12-23 | 1998-09-15 | Unisys Corporation | Outboard file cache system |
US5517668A (en) | 1994-01-10 | 1996-05-14 | Amdahl Corporation | Distributed protocol framework |
US5485455A (en) | 1994-01-28 | 1996-01-16 | Cabletron Systems, Inc. | Network having secure fast packet switching and guaranteed quality of service |
EP0674414B1 (en) | 1994-03-21 | 2002-02-27 | Avid Technology, Inc. | Apparatus and computer-implemented process for providing real-time multimedia data transport in a distributed computing system |
JPH08180001A (en) | 1994-04-12 | 1996-07-12 | Mitsubishi Electric Corp | Communication system, communication method and network interface |
US6047356A (en) | 1994-04-18 | 2000-04-04 | Sonic Solutions | Method of dynamically allocating network node memory's partitions for caching distributed files |
US5485460A (en) | 1994-08-19 | 1996-01-16 | Microsoft Corporation | System and method for running multiple incompatible network protocol stacks |
US6333932B1 (en) * | 1994-08-22 | 2001-12-25 | Fujitsu Limited | Connectionless communications system, its test method, and intra-station control system |
WO1996007139A1 (en) | 1994-09-01 | 1996-03-07 | Mcalpine Gary L | A multi-port memory system including read and write buffer interfaces |
US5548730A (en) | 1994-09-20 | 1996-08-20 | Intel Corporation | Intelligent bus bridge for input/output subsystems in a computer system |
US5634127A (en) | 1994-11-30 | 1997-05-27 | International Business Machines Corporation | Methods and apparatus for implementing a message driven processor in a client-server environment |
EP0716370A3 (en) | 1994-12-06 | 2005-02-16 | International Business Machines Corporation | A disk access method for delivering multimedia and video information on demand over wide area networks |
US5634099A (en) | 1994-12-09 | 1997-05-27 | International Business Machines Corporation | Direct memory access unit for transferring data between processor memories in multiprocessing systems |
US5583733A (en) | 1994-12-21 | 1996-12-10 | Polaroid Corporation | Electrostatic discharge protection device |
US5598410A (en) | 1994-12-29 | 1997-01-28 | Storage Technology Corporation | Method and apparatus for accelerated packet processing |
US5566170A (en) | 1994-12-29 | 1996-10-15 | Storage Technology Corporation | Method and apparatus for accelerated packet forwarding |
US5758084A (en) | 1995-02-27 | 1998-05-26 | Hewlett-Packard Company | Apparatus for parallel client/server communication having data structures which stored values indicative of connection state and advancing the connection state of established connections |
US5701434A (en) | 1995-03-16 | 1997-12-23 | Hitachi, Ltd. | Interleave memory controller with a common access queue |
US5586121A (en) | 1995-04-21 | 1996-12-17 | Hybrid Networks, Inc. | Asymmetric hybrid access system and method |
US5592622A (en) | 1995-05-10 | 1997-01-07 | 3Com Corporation | Network intermediate system with message passing architecture |
US5802278A (en) | 1995-05-10 | 1998-09-01 | 3Com Corporation | Bridge/router architecture for high performance scalable networking |
US5664114A (en) | 1995-05-16 | 1997-09-02 | Hewlett-Packard Company | Asynchronous FIFO queuing system operating with minimal queue status |
US5629933A (en) | 1995-06-07 | 1997-05-13 | International Business Machines Corporation | Method and system for enhanced communication in a multisession packet based communication system |
JPH096706A (en) | 1995-06-22 | 1997-01-10 | Hitachi Ltd | Loosely coupled computer system |
US5596574A (en) | 1995-07-06 | 1997-01-21 | Novell, Inc. | Method and apparatus for synchronizing data transmission with on-demand links of a network |
US5752078A (en) | 1995-07-10 | 1998-05-12 | International Business Machines Corporation | System for minimizing latency data reception and handling data packet error if detected while transferring data packet from adapter memory to host memory |
US5812775A (en) | 1995-07-12 | 1998-09-22 | 3Com Corporation | Method and apparatus for internetworking buffer management |
US5742840A (en) | 1995-08-16 | 1998-04-21 | Microunity Systems Engineering, Inc. | General purpose, multiple precision parallel operation, programmable media processor |
US5978844A (en) | 1995-09-08 | 1999-11-02 | Hitachi, Ltd. | Internetworking apparatus for load balancing plural networks |
US5682534A (en) | 1995-09-12 | 1997-10-28 | International Business Machines Corporation | Transparent local RPC optimization |
US5913028A (en) | 1995-10-06 | 1999-06-15 | Xpoint Technologies, Inc. | Client/server data traffic delivery system and method |
US5926642A (en) | 1995-10-06 | 1999-07-20 | Advanced Micro Devices, Inc. | RISC86 instruction set |
US5699350A (en) | 1995-10-06 | 1997-12-16 | Canon Kabushiki Kaisha | Reconfiguration of protocol stacks and/or frame type assignments in a network interface device |
US5758186A (en) | 1995-10-06 | 1998-05-26 | Sun Microsystems, Inc. | Method and apparatus for generically handling diverse protocol method calls in a client/server computer system |
US6047323A (en) | 1995-10-19 | 2000-04-04 | Hewlett-Packard Company | Creation and migration of distributed streams in clusters of networked computers |
US5758089A (en) | 1995-11-02 | 1998-05-26 | Sun Microsystems, Inc. | Method and apparatus for burst transferring ATM packet header and data to a host computer system |
US5848293A (en) | 1995-11-03 | 1998-12-08 | Sun Microsystems, Inc. | Method and apparatus for transmission and processing of virtual commands |
US5768618A (en) | 1995-12-21 | 1998-06-16 | Ncr Corporation | Method for performing sequence of actions in device connected to computer in response to specified values being written into snooped sub portions of address space |
US5809328A (en) | 1995-12-21 | 1998-09-15 | Unisys Corp. | Apparatus for fibre channel transmission having interface logic, buffer memory, multiplexor/control device, fibre channel controller, gigabit link module, microprocessor, and bus control device |
JP3832006B2 (en) | 1996-02-26 | 2006-10-11 | Fuji Xerox Co., Ltd. | Cellular communication network and communication method thereof |
US5706514A (en) | 1996-03-04 | 1998-01-06 | Compaq Computer Corporation | Distributed execution of mode mismatched commands in multiprocessor computer systems |
US6078733A (en) | 1996-03-08 | 2000-06-20 | Mitsubishi Electric Information Technolgy Center America, Inc. (Ita) | Network interface having support for message processing and an interface to a message coprocessor |
US6014557A (en) | 1996-03-14 | 2000-01-11 | Bellsouth Intellectual Property Corporation | Apparatus and methods for providing wireless system fraud and visibility data |
US5819111A (en) | 1996-03-15 | 1998-10-06 | Adobe Systems, Inc. | System for managing transfer of data by delaying flow controlling of data through the interface controller until the run length encoded data transfer is complete |
US5668373A (en) | 1996-04-26 | 1997-09-16 | Trustees Of Tufts College | Methods and apparatus for analysis of complex mixtures |
US5727142A (en) | 1996-05-03 | 1998-03-10 | International Business Machines Corporation | Method for a non-disruptive host connection switch after detection of an error condition or during a host outage or failure |
US5802258A (en) | 1996-05-03 | 1998-09-01 | International Business Machines Corporation | Loosely coupled system environment designed to handle a non-disruptive host connection switch after detection of an error condition or during a host outage or failure |
US6308148B1 (en) | 1996-05-28 | 2001-10-23 | Cisco Technology, Inc. | Network flow data export |
US6243667B1 (en) | 1996-05-28 | 2001-06-05 | Cisco Systems, Inc. | Network flow switching and flow data export |
US5878225A (en) * | 1996-06-03 | 1999-03-02 | International Business Machines Corporation | Dual communication services interface for distributed transaction processing |
US5742765A (en) | 1996-06-19 | 1998-04-21 | Pmc-Sierra, Inc. | Combination local ATM segmentation and reassembly and physical layer device |
US5878227A (en) | 1996-07-01 | 1999-03-02 | Sun Microsystems, Inc. | System for performing deadlock free message transfer in cyclic multi-hop digital computer network using a number of buffers based on predetermined diameter |
US5749095A (en) | 1996-07-01 | 1998-05-05 | Sun Microsystems, Inc. | Multiprocessing system configured to perform efficient write operations |
US5751723A (en) | 1996-07-01 | 1998-05-12 | Motorola, Inc. | Method and system for overhead bandwidth recovery in a packetized network |
US5870394A (en) | 1996-07-23 | 1999-02-09 | Northern Telecom Limited | Method and apparatus for reassembly of data packets into messages in an asynchronous transfer mode communications system |
US5774660A (en) | 1996-08-05 | 1998-06-30 | Resonate, Inc. | World-wide-web server with delayed resource-binding for resource-based load balancing on a distributed resource multi-node network |
US5751715A (en) | 1996-08-08 | 1998-05-12 | Gadzoox Microsystems, Inc. | Accelerator fiber channel hub and protocol |
US6078564A (en) | 1996-08-30 | 2000-06-20 | Lucent Technologies, Inc. | System for improving data throughput of a TCP/IP network connection with slow return channel |
US6038562A (en) | 1996-09-05 | 2000-03-14 | International Business Machines Corporation | Interface to support state-dependent web applications accessing a relational database |
US5892903A (en) | 1996-09-12 | 1999-04-06 | Internet Security Systems, Inc. | Method and apparatus for detecting and identifying security vulnerabilities in an open network computer communication system |
US20030110344A1 (en) | 1996-09-18 | 2003-06-12 | Andre Szczepanek | Communications systems, apparatus and methods |
CA2216980C (en) | 1996-10-04 | 2001-09-25 | Hitachi, Ltd. | Communication method |
US6034963A (en) | 1996-10-31 | 2000-03-07 | Iready Corporation | Multiple network protocol encoder/decoder and data processor |
US6473788B1 (en) * | 1996-11-15 | 2002-10-29 | Canon Kabushiki Kaisha | Remote maintenance and servicing of a network peripheral device over the world wide web |
US5963876A (en) | 1996-11-22 | 1999-10-05 | Motorola, Inc. | Method for editing a received phone number prior to placing a call to the received phone number |
US6523119B2 (en) | 1996-12-04 | 2003-02-18 | Rainbow Technologies, Inc. | Software protection device and method |
US5987022A (en) | 1996-12-27 | 1999-11-16 | Motorola, Inc. | Method for transmitting multiple-protocol packetized data |
US6233242B1 (en) | 1996-12-30 | 2001-05-15 | Compaq Computer Corporation | Network switch with shared memory system |
US5917828A (en) | 1997-01-03 | 1999-06-29 | Ncr Corporation | ATM reassembly controller and method |
US5930830A (en) * | 1997-01-13 | 1999-07-27 | International Business Machines Corporation | System and method for concatenating discontiguous memory pages |
US5935249A (en) | 1997-02-26 | 1999-08-10 | Sun Microsystems, Inc. | Mechanism for embedding network based control systems in a local network interface device |
US6490631B1 (en) | 1997-03-07 | 2002-12-03 | Advanced Micro Devices Inc. | Multiple processors in a row for protocol acceleration |
JP2001514778A (en) | 1997-03-13 | 2001-09-11 | Mark M. Whitney | System and method for offloading a network transaction including a message queuing facility from a mainframe to an intelligent input/output device |
US6097734A (en) | 1997-04-30 | 2000-08-01 | Adaptec, Inc. | Programmable reassembly of data received in an ATM network |
US5996013A (en) | 1997-04-30 | 1999-11-30 | International Business Machines Corporation | Method and apparatus for resource allocation with guarantees |
US6094708A (en) | 1997-05-06 | 2000-07-25 | Cisco Technology, Inc. | Secondary cache write-through blocking mechanism |
US5872919A (en) | 1997-05-07 | 1999-02-16 | Advanced Micro Devices, Inc. | Computer communication network having a packet processor with an execution unit which is variably configured from a programmable state machine and logic |
EP0980544A1 (en) | 1997-05-08 | 2000-02-23 | iReady Corporation | Hardware accelerator for an object-oriented programming language |
US6157944A (en) | 1997-05-14 | 2000-12-05 | Citrix Systems, Inc. | System and method for replicating a client/server data exchange to additional client notes connecting to the server |
US6487202B1 (en) | 1997-06-30 | 2002-11-26 | Cisco Technology, Inc. | Method and apparatus for maximizing memory throughput |
US6049528A (en) | 1997-06-30 | 2000-04-11 | Sun Microsystems, Inc. | Trunking ethernet-compatible networks |
US6014380A (en) | 1997-06-30 | 2000-01-11 | Sun Microsystems, Inc. | Mechanism for packet field replacement in a multi-layer distributed network element |
US5920566A (en) | 1997-06-30 | 1999-07-06 | Sun Microsystems, Inc. | Routing in a multi-layer distributed network element |
US6044438A (en) | 1997-07-10 | 2000-03-28 | International Business Machines Corporation | Memory controller for controlling memory accesses across networks in distributed shared memory processing systems |
US6067569A (en) | 1997-07-10 | 2000-05-23 | Microsoft Corporation | Fast-forwarding and filtering of network packets in a computer system |
US6021446A (en) | 1997-07-11 | 2000-02-01 | Sun Microsystems, Inc. | Network device driver performing initial packet processing within high priority hardware interrupt service routine and then finishing processing within low priority software interrupt service routine |
US6173333B1 (en) | 1997-07-18 | 2001-01-09 | Interprophet Corporation | TCP/IP network accelerator system and method which identifies classes of packet traffic for predictable protocols |
US6128728A (en) | 1997-08-01 | 2000-10-03 | Micron Technology, Inc. | Virtual shadow registers and virtual register windows |
US6145017A (en) | 1997-08-05 | 2000-11-07 | Adaptec, Inc. | Data alignment system for a hardware accelerated command interpreter engine |
US6385647B1 (en) | 1997-08-18 | 2002-05-07 | MCI Communications Corporation | System for selectively routing data via either a network that supports Internet protocol or via satellite transmission network based on size of the data |
US6370145B1 (en) | 1997-08-22 | 2002-04-09 | Avici Systems | Internet switch router |
US6285679B1 (en) | 1997-08-22 | 2001-09-04 | Avici Systems, Inc. | Methods and apparatus for event-driven routing |
US5898713A (en) | 1997-08-29 | 1999-04-27 | Cisco Technology, Inc. | IP checksum offload |
US6041058A (en) | 1997-09-11 | 2000-03-21 | 3Com Corporation | Hardware filtering method and apparatus |
US5991299A (en) * | 1997-09-11 | 1999-11-23 | 3Com Corporation | High speed header translation processing |
US6172980B1 (en) | 1997-09-11 | 2001-01-09 | 3Com Corporation | Multiple protocol support |
US6005849A (en) | 1997-09-24 | 1999-12-21 | Emulex Corporation | Full-duplex communication processor which can be used for fibre channel frames |
US6289023B1 (en) | 1997-09-25 | 2001-09-11 | Hewlett-Packard Company | Hardware checksum assist for network protocol stacks |
US6065096A (en) | 1997-09-30 | 2000-05-16 | Lsi Logic Corporation | Integrated single chip dual mode raid controller |
US6473425B1 (en) | 1997-10-02 | 2002-10-29 | Sun Microsystems, Inc. | Mechanism for dispatching packets via a telecommunications network |
GB9721947D0 (en) | 1997-10-16 | 1997-12-17 | Thomson Consumer Electronics | Intelligent IP packet scheduler algorithm |
US6427173B1 (en) | 1997-10-14 | 2002-07-30 | Alacritech, Inc. | Intelligent network interfaced device and system for accelerated communication |
US6389479B1 (en) | 1997-10-14 | 2002-05-14 | Alacritech, Inc. | Intelligent network interface device and system for accelerated communication |
US7076568B2 (en) | 1997-10-14 | 2006-07-11 | Alacritech, Inc. | Data communication apparatus for computer intelligent network interface card which transfers data between a network and a storage device according designated uniform datagram protocol socket |
US7089326B2 (en) | 1997-10-14 | 2006-08-08 | Alacritech, Inc. | Fast-path processing for receiving data on TCP connection offload devices |
US6807581B1 (en) | 2000-09-29 | 2004-10-19 | Alacritech, Inc. | Intelligent network storage interface system |
US6226680B1 (en) | 1997-10-14 | 2001-05-01 | Alacritech, Inc. | Intelligent network interface system method for protocol processing |
US7174393B2 (en) | 2000-12-26 | 2007-02-06 | Alacritech, Inc. | TCP/IP offload network interface device |
US7284070B2 (en) | 1997-10-14 | 2007-10-16 | Alacritech, Inc. | TCP offload network interface device |
US7133940B2 (en) | 1997-10-14 | 2006-11-07 | Alacritech, Inc. | Network interface device employing a DMA command queue |
US6427171B1 (en) | 1997-10-14 | 2002-07-30 | Alacritech, Inc. | Protocol processing stack for use with intelligent network interface device |
US7185266B2 (en) | 2003-02-12 | 2007-02-27 | Alacritech, Inc. | Network interface device for error detection using partial CRCS of variable length message portions |
US6658480B2 (en) | 1997-10-14 | 2003-12-02 | Alacritech, Inc. | Intelligent network interface system and method for accelerated protocol processing |
US6687758B2 (en) | 2001-03-07 | 2004-02-03 | Alacritech, Inc. | Port aggregation for network connections that are offloaded to network interface devices |
US6697868B2 (en) | 2000-02-28 | 2004-02-24 | Alacritech, Inc. | Protocol processing stack for use with intelligent network interface device |
US7042898B2 (en) | 1997-10-14 | 2006-05-09 | Alacritech, Inc. | Reducing delays associated with inserting a checksum into a network message |
US6434620B1 (en) | 1998-08-27 | 2002-08-13 | Alacritech, Inc. | TCP/IP offload network interface device |
US6470415B1 (en) | 1999-10-13 | 2002-10-22 | Alacritech, Inc. | Queue system involving SRAM head, SRAM tail and DRAM body |
US6757746B2 (en) | 1997-10-14 | 2004-06-29 | Alacritech, Inc. | Obtaining a destination address so that a network interface device can write network data without headers directly into host memory |
US8782199B2 (en) | 1997-10-14 | 2014-07-15 | A-Tech Llc | Parsing a packet header |
US7237036B2 (en) | 1997-10-14 | 2007-06-26 | Alacritech, Inc. | Fast-path apparatus for receiving data corresponding a TCP connection |
US7167927B2 (en) | 1997-10-14 | 2007-01-23 | Alacritech, Inc. | TCP/IP offload device with fast-path TCP ACK generating and transmitting mechanism |
US6591302B2 (en) | 1997-10-14 | 2003-07-08 | Alacritech, Inc. | Fast-path apparatus for receiving data corresponding to a TCP connection |
US5941969A (en) | 1997-10-22 | 1999-08-24 | Auspex Systems, Inc. | Bridge for direct data storage device access |
US5937169A (en) | 1997-10-29 | 1999-08-10 | 3Com Corporation | Offload of TCP segmentation to a smart adapter |
US6122670A (en) | 1997-10-30 | 2000-09-19 | Tsi Telsys, Inc. | Apparatus and method for constructing data for transmission within a reliable communication protocol by performing portions of the protocol suite concurrently |
US6057863A (en) | 1997-10-31 | 2000-05-02 | Compaq Computer Corporation | Dual purpose apparatus, method and system for accelerated graphics port and fibre channel arbitrated loop interfaces |
US6009478A (en) | 1997-11-04 | 1999-12-28 | Adaptec, Inc. | File array communications interface for communicating between a host computer and an adapter |
US6219693B1 (en) | 1997-11-04 | 2001-04-17 | Adaptec, Inc. | File array storage architecture having file system distributed across a data processing platform |
US6061368A (en) * | 1997-11-05 | 2000-05-09 | Xylan Corporation | Custom circuitry for adaptive hardware routing engine |
US5941972A (en) | 1997-12-31 | 1999-08-24 | Crossroads Systems, Inc. | Storage router and method for providing virtual local storage |
US5950203A (en) | 1997-12-31 | 1999-09-07 | Mercury Computer Systems, Inc. | Method and apparatus for high-speed access to and sharing of storage devices on a networked digital data processing system |
US6101555A (en) | 1998-01-12 | 2000-08-08 | Adaptec, Inc. | Methods and apparatus for communicating between networked peripheral devices |
US5996024A (en) | 1998-01-14 | 1999-11-30 | Emc Corporation | Method and apparatus for a SCSI applications server which extracts SCSI commands and data from message and encapsulates SCSI responses to provide transparent operation |
FR2773935A1 (en) | 1998-01-19 | 1999-07-23 | Canon Kk | COMMUNICATION METHODS BETWEEN COMPUTER SYSTEMS AND DEVICES USING THE SAME |
EP1055177A1 (en) | 1998-01-22 | 2000-11-29 | Intelogis, Inc. | Method and apparatus for universal data exchange gateway |
US6041381A (en) | 1998-02-05 | 2000-03-21 | Crossroads Systems, Inc. | Fibre channel to SCSI addressing method and system |
US6016513A (en) | 1998-02-19 | 2000-01-18 | 3Com Corporation | Method of preventing packet loss during transfers of data packets between a network interface card and an operating system of a computer |
US6324649B1 (en) | 1998-03-02 | 2001-11-27 | Compaq Computer Corporation | Modified license key entry for pre-installation of software |
US6631484B1 (en) | 1998-03-31 | 2003-10-07 | Lsi Logic Corporation | System for packet communication where received packet is stored either in a FIFO or in buffer storage based on size of received packet |
US6570876B1 (en) * | 1998-04-01 | 2003-05-27 | Hitachi, Ltd. | Packet switch and switching method for switching variable length packets |
US6246683B1 (en) | 1998-05-01 | 2001-06-12 | 3Com Corporation | Receive processing with network protocol bypass |
US6260158B1 (en) | 1998-05-11 | 2001-07-10 | Compaq Computer Corporation | System and method for fail-over data transport |
US6202105B1 (en) | 1998-06-02 | 2001-03-13 | Adaptec, Inc. | Host adapter capable of simultaneously transmitting and receiving data of multiple contexts between a computer bus and peripheral bus |
US6070200A (en) | 1998-06-02 | 2000-05-30 | Adaptec, Inc. | Host adapter having paged data buffers for continuously transferring data between a system bus and a peripheral bus |
US6298403B1 (en) | 1998-06-02 | 2001-10-02 | Adaptec, Inc. | Host adapter having a snapshot mechanism |
US6765901B1 (en) | 1998-06-11 | 2004-07-20 | Nvidia Corporation | TCP/IP/PPP modem |
US6141705A (en) | 1998-06-12 | 2000-10-31 | Microsoft Corporation | System for querying a peripheral device to determine its processing capabilities and then offloading specific processing tasks from a host to the peripheral device when needed |
US6157955A (en) | 1998-06-15 | 2000-12-05 | Intel Corporation | Packet processing system including a policy engine having a classification unit |
US6452915B1 (en) | 1998-07-10 | 2002-09-17 | Malibu Networks, Inc. | IP-flow classification in a wireless point to multi-point (PTMP) transmission system |
US6111673A (en) | 1998-07-17 | 2000-08-29 | Telcordia Technologies, Inc. | High-throughput, low-latency next generation internet networks using optical tag switching |
US7073196B1 (en) | 1998-08-07 | 2006-07-04 | The United States Of America As Represented By The National Security Agency | Firewall for processing a connectionless network packet |
US7664883B2 (en) | 1998-08-28 | 2010-02-16 | Alacritech, Inc. | Network interface device that fast-path processes solicited session layer read commands |
US6223242B1 (en) | 1998-09-28 | 2001-04-24 | Sifera, Inc. | Linearly expandable self-routing crossbar switch |
US6381647B1 (en) | 1998-09-28 | 2002-04-30 | Raytheon Company | Method and system for scheduling network communication |
US6311213B2 (en) | 1998-10-27 | 2001-10-30 | International Business Machines Corporation | System and method for server-to-server data storage in a network environment |
KR100280642B1 (en) | 1998-11-18 | 2001-05-02 | Yun Jong-yong | Memory management device of Ethernet controller and its control method |
US6480489B1 (en) | 1999-03-01 | 2002-11-12 | Sun Microsystems, Inc. | Method and apparatus for data re-assembly with a high performance network interface |
US6356951B1 (en) | 1999-03-01 | 2002-03-12 | Sun Microsystems, Inc. | System for parsing a packet for conformity with a predetermined protocol using mask and comparison values included in a parsing instruction |
US6389468B1 (en) | 1999-03-01 | 2002-05-14 | Sun Microsystems, Inc. | Method and apparatus for distributing network traffic processing on a multiprocessor computer |
US6453360B1 (en) | 1999-03-01 | 2002-09-17 | Sun Microsystems, Inc. | High performance network interface |
US6434651B1 (en) | 1999-03-01 | 2002-08-13 | Sun Microsystems, Inc. | Method and apparatus for suppressing interrupts in a high-speed network environment |
US6483804B1 (en) | 1999-03-01 | 2002-11-19 | Sun Microsystems, Inc. | Method and apparatus for dynamic packet batching with a high performance network interface |
US6650640B1 (en) | 1999-03-01 | 2003-11-18 | Sun Microsystems, Inc. | Method and apparatus for managing a network flow in a high performance network interface |
US6678283B1 (en) | 1999-03-10 | 2004-01-13 | Lucent Technologies Inc. | System and method for distributing packet processing in an internetworking device |
US6345301B1 (en) | 1999-03-30 | 2002-02-05 | Unisys Corporation | Split data path distributed network protocol |
US6526446B1 (en) | 1999-04-27 | 2003-02-25 | 3Com Corporation | Hardware only transmission control protocol segmentation for a high performance network interface card |
US6343360B1 (en) | 1999-05-13 | 2002-01-29 | Microsoft Corporation | Automated configuration of computing system using zip code data |
US6952409B2 (en) | 1999-05-17 | 2005-10-04 | Jolitz Lynne G | Accelerator system and method |
US6768992B1 (en) | 1999-05-17 | 2004-07-27 | Lynne G. Jolitz | Term addressable memory of an accelerator system and method |
US6542504B1 (en) | 1999-05-28 | 2003-04-01 | 3Com Corporation | Profile based method for packet header compression in a point to point link |
WO2001005107A1 (en) | 1999-07-13 | 2001-01-18 | Alteon Web Systems, Inc. | Apparatus and method to minimize congestion in an output queuing switch |
WO2001005116A2 (en) | 1999-07-13 | 2001-01-18 | Alteon Web Systems, Inc. | Routing method and apparatus |
AU6089700A (en) | 1999-07-13 | 2001-01-30 | Alteon Web Systems, Inc. | Apparatus and method to minimize incoming data loss |
AU5929700A (en) | 1999-07-13 | 2001-01-30 | Alteon Web Systems, Inc. | Method and architecture for optimizing data throughput in a multi-processor environment using a ram-based shared index fifo linked list |
US6449656B1 (en) | 1999-07-30 | 2002-09-10 | Intel Corporation | Storing a frame header |
US6427169B1 (en) | 1999-07-30 | 2002-07-30 | Intel Corporation | Parsing a packet header |
JP2001090749A (en) | 1999-07-30 | 2001-04-03 | Dana Corp | Fluid pressure type limited slip differential, and gerotor pump for differential |
US6842896B1 (en) | 1999-09-03 | 2005-01-11 | Rainbow Technologies, Inc. | System and method for selecting a server in a multiple server license management system |
US6681364B1 (en) | 1999-09-24 | 2004-01-20 | International Business Machines Corporation | Cyclic redundancy check for partitioned frames |
US6421742B1 (en) | 1999-10-29 | 2002-07-16 | Intel Corporation | Method and apparatus for emulating an input/output unit when transferring data over a network |
US6570884B1 (en) | 1999-11-05 | 2003-05-27 | 3Com Corporation | Receive filtering for communication interface |
US6327625B1 (en) | 1999-11-30 | 2001-12-04 | 3Com Corporation | FIFO-based network interface supporting out-of-order processing |
US6594261B1 (en) | 1999-12-22 | 2003-07-15 | Aztech Partners, Inc. | Adaptive fault-tolerant switching network with random initial routing and random routing around faults |
US6683851B1 (en) | 2000-01-05 | 2004-01-27 | Qualcomm, Incorporated | Flow control of multiple entities sharing a common data link |
US6195650B1 (en) | 2000-02-02 | 2001-02-27 | Hewlett-Packard Company | Method and apparatus for virtualizing file access operations and other I/O operations |
US7050437B2 (en) | 2000-03-24 | 2006-05-23 | International Business Machines Corporation | Wire speed reassembly of data frames |
US6947430B2 (en) | 2000-03-24 | 2005-09-20 | International Business Machines Corporation | Network adapter with embedded deep packet processing |
US6591310B1 (en) | 2000-05-11 | 2003-07-08 | Lsi Logic Corporation | Method of responding to I/O request and associated reply descriptor |
AU2001293269A1 (en) | 2000-09-11 | 2002-03-26 | David Edgar | System, method, and computer program product for optimization and acceleration of data transport and processing |
US7218632B1 (en) * | 2000-12-06 | 2007-05-15 | Cisco Technology, Inc. | Packet processing engine architecture |
US20020112175A1 (en) | 2000-12-13 | 2002-08-15 | Makofka Douglas S. | Conditional access for functional units |
US20040213206A1 (en) | 2001-02-06 | 2004-10-28 | Mccormack John | Multiprotocol convergence switch (MPCS) and method for use thereof |
US7149817B2 (en) | 2001-02-15 | 2006-12-12 | Neteffect, Inc. | Infiniband TM work queue to TCP/IP translation |
US7065702B2 (en) | 2001-04-12 | 2006-06-20 | Siliquent Technologies Ltd. | Out-of-order calculation of error detection codes |
US8218555B2 (en) | 2001-04-24 | 2012-07-10 | Nvidia Corporation | Gigabit ethernet adapter |
US20030046330A1 (en) | 2001-09-04 | 2003-03-06 | Hayes John W. | Selective offloading of protocol processing |
US7016361B2 (en) | 2002-03-02 | 2006-03-21 | Toshiba America Information Systems, Inc. | Virtual switch in a wide area network |
US8244890B2 (en) | 2002-03-08 | 2012-08-14 | Broadcom Corporation | System and method for handling transport protocol segments |
US7496689B2 (en) | 2002-04-22 | 2009-02-24 | Alacritech, Inc. | TCP/IP offload device |
US7543087B2 (en) | 2002-04-22 | 2009-06-02 | Alacritech, Inc. | Freeing transmit memory on a network interface device prior to receiving an acknowledgement that transmit data has been received by a remote device |
US7181531B2 (en) | 2002-04-30 | 2007-02-20 | Microsoft Corporation | Method to synchronize and upload an offloaded network stack connection with a network stack |
US7441262B2 (en) | 2002-07-11 | 2008-10-21 | Seaway Networks Inc. | Integrated VPN/firewall system |
US7283549B2 (en) * | 2002-07-17 | 2007-10-16 | D-Link Corporation | Method for increasing the transmit and receive efficiency of an embedded ethernet controller |
US7313100B1 (en) * | 2002-08-26 | 2007-12-25 | Juniper Networks, Inc. | Network device having accounting service card |
US7426579B2 (en) | 2002-09-17 | 2008-09-16 | Broadcom Corporation | System and method for handling frames in multiple stack environments |
US7411959B2 (en) | 2002-08-30 | 2008-08-12 | Broadcom Corporation | System and method for handling out-of-order frames |
US7313623B2 (en) | 2002-08-30 | 2007-12-25 | Broadcom Corporation | System and method for TCP/IP offload independent of bandwidth delay product |
US7519650B2 (en) | 2002-09-05 | 2009-04-14 | International Business Machines Corporation | Split socket send queue apparatus and method with efficient queue flow control, retransmission and sack support mechanisms |
US20040049580A1 (en) | 2002-09-05 | 2004-03-11 | International Business Machines Corporation | Receive queue device with efficient queue flow control, segment placement and virtualization mechanisms |
US20040059926A1 (en) | 2002-09-20 | 2004-03-25 | Compaq Information Technology Group, L.P. | Network interface controller with firmware enabled licensing features |
US7283522B2 (en) | 2002-09-27 | 2007-10-16 | Sun Microsystems, Inc. | Method and apparatus for offloading message segmentation to a network interface card |
US7191241B2 (en) | 2002-09-27 | 2007-03-13 | Alacritech, Inc. | Fast-path apparatus for receiving data corresponding to a TCP connection |
US7337241B2 (en) | 2002-09-27 | 2008-02-26 | Alacritech, Inc. | Fast-path apparatus for receiving data corresponding to a TCP connection |
US20040088262A1 (en) | 2002-11-06 | 2004-05-06 | Alacritech, Inc. | Enabling an enhanced function of an electronic device |
US7191318B2 (en) | 2002-12-12 | 2007-03-13 | Alacritech, Inc. | Native copy instruction for file-access processor with copy-rule-based validation |
US7254696B2 (en) | 2002-12-12 | 2007-08-07 | Alacritech, Inc. | Functional-level instruction-set computer architecture for processing application-layer content-service requests such as file-access requests |
US7093099B2 (en) | 2002-12-12 | 2006-08-15 | Alacritech, Inc. | Native lookup instruction for file-access processor searching a three-level lookup cache for variable-length keys |
US7047320B2 (en) | 2003-01-09 | 2006-05-16 | International Business Machines Corporation | Data processing system providing hardware acceleration of input/output (I/O) communication |
US6976148B2 (en) | 2003-01-09 | 2005-12-13 | International Business Machines Corporation | Acceleration of input/output (I/O) communication through improved address translation |
US7389462B1 (en) | 2003-02-14 | 2008-06-17 | Istor Networks, Inc. | System and methods for high rate hardware-accelerated network protocol processing |
US7210061B2 (en) | 2003-04-17 | 2007-04-24 | Hewlett-Packard Development, L.P. | Data redundancy for writes using remote storage system cache memory |
US20040249957A1 (en) | 2003-05-12 | 2004-12-09 | Pete Ekis | Method for interface of TCP offload engines to operating systems |
US7609696B2 (en) | 2003-06-05 | 2009-10-27 | Nvidia Corporation | Storing and accessing TCP connection information |
US7287092B2 (en) | 2003-08-11 | 2007-10-23 | Sharp Colin C | Generating a hash for a TCP/IP offload device |
US20050060538A1 (en) | 2003-09-15 | 2005-03-17 | Intel Corporation | Method, system, and program for processing of fragmented datagrams |
US6996070B2 (en) | 2003-12-05 | 2006-02-07 | Alacritech, Inc. | TCP/IP offload device with reduced sequential processing |
US7519699B2 (en) | 2004-08-05 | 2009-04-14 | International Business Machines Corporation | Method, system, and computer program product for delivering data to a storage buffer assigned to an application |
US7533198B2 (en) | 2005-10-07 | 2009-05-12 | International Business Machines Corporation | Memory controller and method for handling DMA operations during a page copy |
US7738500B1 (en) | 2005-12-14 | 2010-06-15 | Alacritech, Inc. | TCP timestamp synchronization for network connections that are offloaded to network interface devices |
JP4634320B2 (en) * | 2006-02-28 | 2011-02-16 | Hitachi, Ltd. | Device and network system for anti-abnormal communication protection |
JP2007266850A (en) * | 2006-03-28 | 2007-10-11 | Fujitsu Ltd | Transmission apparatus |
US7738200B2 (en) | 2006-05-01 | 2010-06-15 | Agere Systems Inc. | Systems and methods for estimating time corresponding to peak signal amplitude |
WO2007130476A2 (en) | 2006-05-02 | 2007-11-15 | Alacritech, Inc. | Network interface device with 10 gb/s full-duplex transfer rate |
US7836220B2 (en) | 2006-08-17 | 2010-11-16 | Apple Inc. | Network direct memory access |
US7593331B2 (en) | 2007-01-17 | 2009-09-22 | Cisco Technology, Inc. | Enhancing transmission reliability of monitored data |
IL189530A0 (en) * | 2007-02-15 | 2009-02-11 | Marvell Software Solutions Isr | Method and apparatus for deep packet inspection for network intrusion detection |
US8516163B2 (en) | 2007-02-27 | 2013-08-20 | Integrated Device Technology, Inc. | Hardware-based concurrent direct memory access (DMA) engines on serial rapid input/output SRIO interface |
US7813342B2 (en) | 2007-03-26 | 2010-10-12 | Gadelrab Serag | Method and apparatus for writing network packets into computer memory |
US8144588B1 (en) * | 2007-09-11 | 2012-03-27 | Juniper Networks, Inc. | Scalable resource management in distributed environment |
- 1998
  - 1998-04-27 US US09/067,544 patent/US6226680B1/en not_active Expired - Lifetime
- 1999
  - 1999-11-12 US US09/439,603 patent/US6247060B1/en not_active Expired - Lifetime
- 2000
  - 2000-10-18 US US09/692,561 patent/US8631140B2/en active Active
  - 2000-12-26 US US09/748,936 patent/US6334153B2/en not_active Expired - Lifetime
- 2001
  - 2001-03-12 US US09/804,553 patent/US6393487B2/en not_active Expired - Lifetime
  - 2001-10-02 US US09/970,124 patent/US7124205B2/en not_active Expired - Lifetime
  - 2001-12-17 US US10/023,240 patent/US6965941B2/en not_active Expired - Lifetime
- 2013
  - 2013-09-26 US US14/038,297 patent/US8805948B2/en not_active Expired - Lifetime
  - 2013-09-27 US US14/040,179 patent/US8856379B2/en not_active Expired - Fee Related
- 2014
  - 2014-10-06 US US14/507,710 patent/US9307054B2/en not_active Expired - Fee Related
Cited By (131)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9098297B2 (en) | 1997-05-08 | 2015-08-04 | Nvidia Corporation | Hardware accelerator for an object-oriented programming language |
US7996568B2 (en) | 1998-06-11 | 2011-08-09 | Nvidia Corporation | System, method, and computer program product for an offload engine with DMA capabilities |
US7483375B2 (en) | 1998-06-11 | 2009-01-27 | Nvidia Corporation | TCP/IP/PPP modem |
US20040213290A1 (en) * | 1998-06-11 | 2004-10-28 | Johnson Michael Ward | TCP/IP/PPP modem |
US6675218B1 (en) * | 1998-08-14 | 2004-01-06 | 3Com Corporation | System for user-space network packet modification |
US6463035B1 (en) * | 1998-12-30 | 2002-10-08 | At&T Corp | Method and apparatus for initiating an upward signaling control channel in a fast packet network |
US6426944B1 (en) * | 1998-12-30 | 2002-07-30 | At&T Corp | Method and apparatus for controlling data messages across a fast packet network |
US6526446B1 (en) * | 1999-04-27 | 2003-02-25 | 3Com Corporation | Hardware only transmission control protocol segmentation for a high performance network interface card |
US8135842B1 (en) | 1999-08-16 | 2012-03-13 | Nvidia Corporation | Internet jack |
US20030222216A1 (en) * | 2000-03-22 | 2003-12-04 | Walkenstein Jonathan A. | Low light imaging device |
US8977712B2 (en) | 2000-04-17 | 2015-03-10 | Circadence Corporation | System and method for implementing application functionality within a network infrastructure including a wireless communication link |
US8386641B2 (en) | 2000-04-17 | 2013-02-26 | Circadence Corporation | System and method for implementing application functionality within a network infrastructure |
US10516751B2 (en) | 2000-04-17 | 2019-12-24 | Circadence Corporation | Optimization of enhanced network links |
US10858503B2 (en) | 2000-04-17 | 2020-12-08 | Circadence Corporation | System and devices facilitating dynamic network link acceleration |
US10329410B2 (en) | 2000-04-17 | 2019-06-25 | Circadence Corporation | System and devices facilitating dynamic network link acceleration |
US10931775B2 (en) | 2000-04-17 | 2021-02-23 | Circadence Corporation | Optimization of enhanced network links |
US10205795B2 (en) | 2000-04-17 | 2019-02-12 | Circadence Corporation | Optimization of enhanced network links |
US7975066B2 (en) | 2000-04-17 | 2011-07-05 | Circadence Corporation | System and method for implementing application functionality within a network infrastructure |
US10154115B2 (en) | 2000-04-17 | 2018-12-11 | Circadence Corporation | System and method for implementing application functionality within a network infrastructure |
US10033840B2 (en) | 2000-04-17 | 2018-07-24 | Circadence Corporation | System and devices facilitating dynamic network link acceleration |
US9923987B2 (en) | 2000-04-17 | 2018-03-20 | Circadence Corporation | Optimization of enhanced network links |
US9723105B2 (en) | 2000-04-17 | 2017-08-01 | Circadence Corporation | System and method for implementing application functionality within a network infrastructure |
US8024481B2 (en) | 2000-04-17 | 2011-09-20 | Circadence Corporation | System and method for reducing traffic and congestion on distributed interactive simulation networks |
US9578124B2 (en) | 2000-04-17 | 2017-02-21 | Circadence Corporation | Optimization of enhanced network links |
US9436542B2 (en) | 2000-04-17 | 2016-09-06 | Circadence Corporation | Automated network infrastructure test and diagnostic system and method therefor |
US9380129B2 (en) | 2000-04-17 | 2016-06-28 | Circadence Corporation | Data redirection system and method therefor |
US9185185B2 (en) | 2000-04-17 | 2015-11-10 | Circadence Corporation | System and method for implementing application functionality within a network infrastructure |
US8065399B2 (en) | 2000-04-17 | 2011-11-22 | Circadence Corporation | Automated network infrastructure test and diagnostic system and method therefor |
US9148293B2 (en) | 2000-04-17 | 2015-09-29 | Circadence Corporation | Automated network infrastructure test and diagnostic system and method therefor |
US8195823B2 (en) | 2000-04-17 | 2012-06-05 | Circadence Corporation | Dynamic network link acceleration |
US10819826B2 (en) | 2000-04-17 | 2020-10-27 | Circadence Corporation | System and method for implementing application functionality within a network infrastructure |
US8417770B2 (en) | 2000-04-17 | 2013-04-09 | Circadence Corporation | Data redirection system and method therefor |
US8996705B2 (en) | 2000-04-17 | 2015-03-31 | Circadence Corporation | Optimization of enhanced network links |
US8463935B2 (en) | 2000-04-17 | 2013-06-11 | Circadence Corporation | Data prioritization system and method therefor |
US7962654B2 (en) | 2000-04-17 | 2011-06-14 | Circadence Corporation | System and method for implementing application functionality within a network infrastructure |
US8510468B2 (en) | 2000-04-17 | 2013-08-13 | Circadence Corporation | Route aware network link acceleration |
USRE45009E1 (en) | 2000-04-17 | 2014-07-08 | Circadence Corporation | Dynamic network link acceleration |
US8898340B2 (en) | 2000-04-17 | 2014-11-25 | Circadence Corporation | Dynamic network link acceleration for network including wireless communication devices |
US8977711B2 (en) | 2000-04-17 | 2015-03-10 | Circadence Corporation | System and method for implementing application functionality within a network infrastructure including wirelessly coupled devices |
US7302499B2 (en) | 2000-11-10 | 2007-11-27 | Nvidia Corporation | Internet modem streaming socket method |
US20050271042A1 (en) * | 2000-11-10 | 2005-12-08 | Michael Johnson | Internet modem streaming socket method |
US9100409B2 (en) | 2000-12-21 | 2015-08-04 | Noatak Software Llc | Method and system for selecting a computing device for maintaining a client session in response to a request packet |
US7418522B2 (en) | 2000-12-21 | 2008-08-26 | Noatak Software Llc | Method and system for communicating an information packet through multiple networks |
US7512686B2 (en) * | 2000-12-21 | 2009-03-31 | Berg Mitchell T | Method and system for establishing a data structure of a connection with a client |
US20020112085A1 (en) * | 2000-12-21 | 2002-08-15 | Berg Mitchell T. | Method and system for communicating an information packet through multiple networks |
US20020112087A1 (en) * | 2000-12-21 | 2002-08-15 | Berg Mitchell T. | Method and system for establishing a data structure of a connection with a client |
US20070061470A1 (en) * | 2000-12-21 | 2007-03-15 | Berg Mitchell T | Method and system for selecting a computing device for maintaining a client session in response to a request packet |
US20070067046A1 (en) * | 2000-12-21 | 2007-03-22 | Berg Mitchell T | Method and system for communicating an information packet through multiple networks |
US7406538B2 (en) | 2000-12-21 | 2008-07-29 | Noatak Software Llc | Method and system for identifying a computing device in response to an information packet |
US8341290B2 (en) | 2000-12-21 | 2012-12-25 | Noatak Software Llc | Method and system for selecting a computing device for maintaining a client session in response to a request packet |
US7649876B2 (en) | 2000-12-21 | 2010-01-19 | Berg Mitchell T | Method and system for communicating an information packet through multiple router devices |
US7421505B2 (en) | 2000-12-21 | 2008-09-02 | Noatak Software Llc | Method and system for executing protocol stack instructions to form a packet for causing a computing device to perform an operation |
US20020120761A1 (en) * | 2000-12-21 | 2002-08-29 | Berg Mitchell T. | Method and system for executing protocol stack instructions to form a packet for causing a computing device to perform an operation |
US20070086360A1 (en) * | 2000-12-21 | 2007-04-19 | Berg Mitchell T | Method and system for communicating an information packet through multiple router devices |
US8073002B2 (en) | 2001-01-26 | 2011-12-06 | Nvidia Corporation | System, method, and computer program product for multi-mode network interface operation |
US8059680B2 (en) | 2001-01-26 | 2011-11-15 | Nvidia Corporation | Offload system, method, and computer program product for processing network communications associated with a plurality of ports |
US20070064725A1 (en) * | 2001-01-26 | 2007-03-22 | Minami John S | System, method, and computer program product for multi-mode network interface operation |
US20070064724A1 (en) * | 2001-01-26 | 2007-03-22 | Minami John S | Offload system, method, and computer program product for processing network communications associated with a plurality of ports |
US8218555B2 (en) | 2001-04-24 | 2012-07-10 | Nvidia Corporation | Gigabit ethernet adapter |
US7469295B1 (en) * | 2001-06-25 | 2008-12-23 | Network Appliance, Inc. | Modified round robin load balancing technique based on IP identifier |
US7114011B2 (en) * | 2001-08-30 | 2006-09-26 | Intel Corporation | Multiprocessor-scalable streaming data server arrangement |
US20030046511A1 (en) * | 2001-08-30 | 2003-03-06 | Buch Deep K. | Multiprocessor-scalable streaming data server arrangement |
US20030084185A1 (en) * | 2001-10-30 | 2003-05-01 | Microsoft Corporation | Apparatus and method for scaling TCP off load buffer requirements by segment size |
US7124198B2 (en) | 2001-10-30 | 2006-10-17 | Microsoft Corporation | Apparatus and method for scaling TCP off load buffer requirements by segment size |
US20030127185A1 (en) * | 2002-01-04 | 2003-07-10 | Bakly Walter N. | Method for applying retroreflective target to a surface |
US8244916B1 (en) * | 2002-02-14 | 2012-08-14 | Marvell International Ltd. | Method and apparatus for enabling a network interface to support multiple networks |
US8447887B1 (en) | 2002-02-14 | 2013-05-21 | Marvell International Ltd. | Method and apparatus for enabling a network interface to support multiple networks |
US8886839B1 (en) | 2002-02-14 | 2014-11-11 | Marvell International Ltd. | Method and apparatus for enabling a network interface to support multiple networks |
US20070253430A1 (en) * | 2002-04-23 | 2007-11-01 | Minami John S | Gigabit Ethernet Adapter |
US7171489B2 (en) | 2002-04-30 | 2007-01-30 | Microsoft Corporation | Method to synchronize and upload an offloaded network stack connection with a network stack |
US7007103B2 (en) * | 2002-04-30 | 2006-02-28 | Microsoft Corporation | Method to offload a network stack |
US20030204631A1 (en) * | 2002-04-30 | 2003-10-30 | Microsoft Corporation | Method to synchronize and upload an offloaded network stack connection with a network stack |
US20030204634A1 (en) * | 2002-04-30 | 2003-10-30 | Microsoft Corporation | Method to offload a network stack |
EP1359724A1 (en) * | 2002-04-30 | 2003-11-05 | Microsoft Corporation | Method to offload a network stack |
EP1361512A2 (en) * | 2002-04-30 | 2003-11-12 | Microsoft Corporation | Method to synchronize and upload an offloaded network stack connection with a network stack |
JP4638658B2 (en) * | 2002-04-30 | 2011-02-23 | マイクロソフト コーポレーション | Method for uploading state object of offloaded network stack and method for synchronizing it |
US7254637B2 (en) | 2002-04-30 | 2007-08-07 | Microsoft Corporation | Method to offload a network stack |
JP2004030612A (en) * | 2002-04-30 | 2004-01-29 | Microsoft Corp | Method for uploading a state object of offloaded network stack and method for synchronizing the same |
US7181531B2 (en) | 2002-04-30 | 2007-02-20 | Microsoft Corporation | Method to synchronize and upload an offloaded network stack connection with a network stack |
US20050182854A1 (en) * | 2002-04-30 | 2005-08-18 | Microsoft Corporation | Method to synchronize and upload an offloaded network stack connection with a network stack |
EP1361512A3 (en) * | 2002-04-30 | 2005-11-30 | Microsoft Corporation | Method to synchronize and upload an offloaded network stack connection with a network stack |
KR100938519B1 (en) | 2002-04-30 | 2010-01-25 | 마이크로소프트 코포레이션 | Method to synchronize and upload an offloaded network stack connection with a network stack |
AU2003203727B2 (en) * | 2002-04-30 | 2009-04-23 | Microsoft Corporation | Method to synchronize and upload an offloaded network stack connection with a network stack |
US20060069792A1 (en) * | 2002-04-30 | 2006-03-30 | Microsoft Corporation | Method to offload a network stack |
US20040003007A1 (en) * | 2002-06-28 | 2004-01-01 | Prall John M. | Windows management instrument synchronized repository provider |
US7924868B1 (en) | 2002-12-13 | 2011-04-12 | Nvidia Corporation | Internet protocol (IP) router residing in a processor chipset |
US7324547B1 (en) | 2002-12-13 | 2008-01-29 | Nvidia Corporation | Internet protocol (IP) router residing in a processor chipset |
US7362772B1 (en) | 2002-12-13 | 2008-04-22 | Nvidia Corporation | Network processing pipeline chipset for routing and host packet processing |
US8234399B2 (en) | 2003-05-29 | 2012-07-31 | Seagate Technology Llc | Method and apparatus for automatic phy calibration based on negotiated link speed |
US7609696B2 (en) | 2003-06-05 | 2009-10-27 | Nvidia Corporation | Storing and accessing TCP connection information |
US7991918B2 (en) | 2003-06-05 | 2011-08-02 | Nvidia Corporation | Transmitting commands and information between a TCP/IP stack and an offload unit |
US7363572B2 (en) | 2003-06-05 | 2008-04-22 | Nvidia Corporation | Editing outbound TCP frames and generating acknowledgements |
US7412488B2 (en) | 2003-06-05 | 2008-08-12 | Nvidia Corporation | Setting up a delegated TCP connection for hardware-optimized processing |
US20040249881A1 (en) * | 2003-06-05 | 2004-12-09 | Jha Ashutosh K. | Transmitting commands and information between a TCP/IP stack and an offload unit |
US20040246974A1 (en) * | 2003-06-05 | 2004-12-09 | Gyugyi Paul J. | Storing and accessing TCP connection information |
US8417852B2 (en) | 2003-06-05 | 2013-04-09 | Nvidia Corporation | Uploading TCP frame data to user buffers and buffers in system memory |
US20080056124A1 (en) * | 2003-06-05 | 2008-03-06 | Sameer Nanda | Using TCP/IP offload to accelerate packet filtering |
US7613109B2 (en) | 2003-06-05 | 2009-11-03 | Nvidia Corporation | Processing data for a TCP connection using an offload unit |
US7420931B2 (en) | 2003-06-05 | 2008-09-02 | Nvidia Corporation | Using TCP/IP offload to accelerate packet filtering |
US7899913B2 (en) | 2003-12-19 | 2011-03-01 | Nvidia Corporation | Connection management system and method for a transport offload engine |
US8176545B1 (en) | 2003-12-19 | 2012-05-08 | Nvidia Corporation | Integrated policy checking system and method |
US8065439B1 (en) * | 2003-12-19 | 2011-11-22 | Nvidia Corporation | System and method for using metadata in the context of a transport offload engine |
US8549170B2 (en) | 2003-12-19 | 2013-10-01 | Nvidia Corporation | Retransmission system and method for a transport offload engine |
US7698413B1 (en) | 2004-04-12 | 2010-04-13 | Nvidia Corporation | Method and apparatus for accessing and maintaining socket control information for high speed network connections |
US20060004904A1 (en) * | 2004-06-30 | 2006-01-05 | Intel Corporation | Method, system, and program for managing transmit throughput for a network controller |
US7957379B2 (en) | 2004-10-19 | 2011-06-07 | Nvidia Corporation | System and method for processing RX packets in high speed network applications using an RX FIFO buffer |
US20060104308A1 (en) * | 2004-11-12 | 2006-05-18 | Microsoft Corporation | Method and apparatus for secure internet protocol (IPSEC) offloading with integrated host protocol stack management |
US7783880B2 (en) | 2004-11-12 | 2010-08-24 | Microsoft Corporation | Method and apparatus for secure internet protocol (IPSEC) offloading with integrated host protocol stack management |
US8032809B2 (en) * | 2004-12-08 | 2011-10-04 | Electronics And Telecommunications Research Institute | Retransmission and delayed ACK timer management logic for TCP protocol |
US20090241001A1 (en) * | 2004-12-08 | 2009-09-24 | Electronics And Telecommunications Research Institute | Retransmission and delayed ack timer management logic for tcp protocol |
US7697536B2 (en) | 2005-04-01 | 2010-04-13 | International Business Machines Corporation | Network communications for operating system partitions |
US8225188B2 (en) | 2005-04-01 | 2012-07-17 | International Business Machines Corporation | Apparatus for blind checksum and correction for network transmissions |
US20060221977A1 (en) * | 2005-04-01 | 2006-10-05 | International Business Machines Corporation | Method and apparatus for providing a network connection table |
US20060221961A1 (en) * | 2005-04-01 | 2006-10-05 | International Business Machines Corporation | Network communications for operating system partitions |
US7706409B2 (en) | 2005-04-01 | 2010-04-27 | International Business Machines Corporation | System and method for parsing, filtering, and computing the checksum in a host Ethernet adapter (HEA) |
US20080317027A1 (en) * | 2005-04-01 | 2008-12-25 | International Business Machines Corporation | System for reducing latency in a host ethernet adapter (hea) |
US7508771B2 (en) | 2005-04-01 | 2009-03-24 | International Business Machines Corporation | Method for reducing latency in a host ethernet adapter (HEA) |
US7606166B2 (en) | 2005-04-01 | 2009-10-20 | International Business Machines Corporation | System and method for computing a blind checksum in a host ethernet adapter (HEA) |
US20080089358A1 (en) * | 2005-04-01 | 2008-04-17 | International Business Machines Corporation | Configurable ports for a host ethernet adapter |
US7782888B2 (en) | 2005-04-01 | 2010-08-24 | International Business Machines Corporation | Configurable ports for a host ethernet adapter |
US7586936B2 (en) | 2005-04-01 | 2009-09-08 | International Business Machines Corporation | Host Ethernet adapter for networking offload in server environment |
US20060221953A1 (en) * | 2005-04-01 | 2006-10-05 | Claude Basso | Method and apparatus for blind checksum and correction for network transmissions |
US7492771B2 (en) | 2005-04-01 | 2009-02-17 | International Business Machines Corporation | Method for performing a packet header lookup |
US7903687B2 (en) | 2005-04-01 | 2011-03-08 | International Business Machines Corporation | Method for scheduling, writing, and reading data inside the partitioned buffer of a switch, router or packet processing device |
US7577151B2 (en) | 2005-04-01 | 2009-08-18 | International Business Machines Corporation | Method and apparatus for providing a network connection table |
US7881332B2 (en) | 2005-04-01 | 2011-02-01 | International Business Machines Corporation | Configurable ports for a host ethernet adapter |
US7636323B2 (en) * | 2005-06-14 | 2009-12-22 | Broadcom Corporation | Method and system for handling connection setup in a network |
US20060281451A1 (en) * | 2005-06-14 | 2006-12-14 | Zur Uri E | Method and system for handling connection setup in a network |
US9912493B2 (en) | 2013-09-24 | 2018-03-06 | Kt Corporation | Home network signal relay device in access network and home network signal relay method in access network using same |
US12132699B2 (en) | 2018-07-26 | 2024-10-29 | Secturion Systems, Inc. | In-line transmission control protocol processing engine using a systolic array |
US11803507B2 (en) | 2018-10-29 | 2023-10-31 | Secturion Systems, Inc. | Data stream protocol field decoding by a systolic array |
Also Published As
Publication number | Publication date |
---|---|
US7124205B2 (en) | 2006-10-17 |
US20140059155A1 (en) | 2014-02-27 |
US8631140B2 (en) | 2014-01-14 |
US20120202529A1 (en) | 2012-08-09 |
US20020087732A1 (en) | 2002-07-04 |
US20140032779A1 (en) | 2014-01-30 |
US6226680B1 (en) | 2001-05-01 |
US8856379B2 (en) | 2014-10-07 |
US20020091844A1 (en) | 2002-07-11 |
US6334153B2 (en) | 2001-12-25 |
US8805948B2 (en) | 2014-08-12 |
US6393487B2 (en) | 2002-05-21 |
US6965941B2 (en) | 2005-11-15 |
US20150055661A1 (en) | 2015-02-26 |
US6247060B1 (en) | 2001-06-12 |
US20010023460A1 (en) | 2001-09-20 |
US9307054B2 (en) | 2016-04-05 |
Similar Documents
Publication | Title |
---|---|
US6393487B2 (en) | Passing a communication control block to a local device such that a message is processed on the device |
US7174393B2 (en) | TCP/IP offload network interface device |
US7191241B2 (en) | Fast-path apparatus for receiving data corresponding to a TCP connection |
US7284070B2 (en) | TCP offload network interface device |
US7673072B2 (en) | Fast-path apparatus for transmitting data corresponding to a TCP connection |
US7337241B2 (en) | Fast-path apparatus for receiving data corresponding to a TCP connection |
US9264366B2 (en) | Method and apparatus for processing received network packets on a network interface for a computer |
US7472156B2 (en) | Transferring control of a TCP connection between devices |
US8782199B2 (en) | Parsing a packet header |
US6591302B2 (en) | Fast-path apparatus for receiving data corresponding to a TCP connection |
US20020161919A1 (en) | Fast-path processing for receiving data on TCP connection offload devices |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| FPAY | Fee payment | Year of fee payment: 4 |
| AS | Assignment | Owner name: ALACRITECH, INC., CALIFORNIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BOUCHER, LAURENCE B.;BLIGHTMAN, STEPHEN E.J.;CRAFT, PETER K.;AND OTHERS;REEL/FRAME:021018/0061;SIGNING DATES FROM 20080401 TO 20080502 |
| FPAY | Fee payment | Year of fee payment: 8 |
| FPAY | Fee payment | Year of fee payment: 12 |
| AS | Assignment | Owner name: A-TECH LLC, DELAWARE; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ALACRITECH INC.;REEL/FRAME:031644/0783; Effective date: 20131017 |
| AS | Assignment | Owner name: ALACRITECH, INC., CALIFORNIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:A-TECH LLC;REEL/FRAME:039068/0884; Effective date: 20160617 |