
US9285827B2 - Systems and methods for optimizing data storage among a plurality of solid state memory subsystems - Google Patents

Systems and methods for optimizing data storage among a plurality of solid state memory subsystems Download PDF

Info

Publication number
US9285827B2
US9285827B2 (application US14/204,423 / US201414204423A; also published as US 9,285,827 B2)
Authority
US
United States
Prior art keywords
data
storage
interface
processing system
solid state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US14/204,423
Other versions
US20140201562A1 (en)
Inventor
Jason Breakstone
Alok Gupta
Himanshu Desai
Angelo Campos
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liqid Inc
Original Assignee
Liqid Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US14/204,423 (US9285827B2)
Application filed by Liqid Inc
Publication of US20140201562A1
Assigned to LIQID INC. (assignment of assignors' interest; assignor: PURESILICON, INC.)
Priority to US15/017,071 (US10191667B2)
Publication of US9285827B2
Application granted
Priority to US16/254,721 (US10795584B2)
Assigned to CANADIAN IMPERIAL BANK OF COMMERCE (security interest; assignor: LIQID INC.)
Priority to US17/019,601 (US11366591B2)
Assigned to HORIZON TECHNOLOGY FINANCE CORPORATION (security interest; assignor: LIQID INC.)
Assigned to LIQID INC. (release by secured party; assignor: CANADIAN IMPERIAL BANK OF COMMERCE)
Status: Expired - Fee Related
Adjusted expiration

Links

Images

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06F: ELECTRIC DIGITAL DATA PROCESSING
          • G06F 1/00: Details not covered by groups G06F 3/00-G06F 13/00 and G06F 21/00
            • G06F 1/04: Generating or distributing clock signals or signals derived directly therefrom
              • G06F 1/08: Clock generators with changeable or programmable clock frequency
            • G06F 1/26: Power supply means, e.g. regulation thereof
              • G06F 1/30: Means for acting in the event of power-supply failure or interruption, e.g. power-supply fluctuations
          • G06F 13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
            • G06F 13/38: Information transfer, e.g. on bus
              • G06F 13/40: Bus structure
                • G06F 13/4004: Coupling between buses
                  • G06F 13/4022: Coupling between buses using switching circuits, e.g. switching matrix, connection or expansion network
              • G06F 13/42: Bus transfer protocol, e.g. handshake; synchronisation
                • G06F 13/4282: Bus transfer protocol on a serial bus, e.g. I2C bus, SPI bus
          • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
            • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
              • G06F 3/0601: Interfaces specially adapted for storage systems
                • G06F 3/0602: Specifically adapted to achieve a particular effect
                  • G06F 3/0604: Improving or facilitating administration, e.g. storage management
                  • G06F 3/061: Improving I/O performance
                    • G06F 3/0613: Improving I/O performance in relation to throughput
                  • G06F 3/0625: Power saving in storage systems
                • G06F 3/0628: Making use of a particular technique
                  • G06F 3/0653: Monitoring storage devices or systems
                  • G06F 3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
                    • G06F 3/0659: Command handling arrangements, e.g. command buffers, queues, command scheduling
                  • G06F 3/0662: Virtualisation aspects
                    • G06F 3/0665: Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
                • G06F 3/0668: Adopting a particular infrastructure
                  • G06F 3/0671: In-line storage system
                    • G06F 3/0683: Plurality of storage devices
                      • G06F 3/0685: Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays
                      • G06F 3/0688: Non-volatile semiconductor memory arrays
          • G06F 2213/00: Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
            • G06F 2213/0026: PCI express
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
      • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
        • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
          • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • Aspects of the disclosure are related to the field of computer data storage, and in particular, to data storage systems employing solid state storage elements.
  • Computer systems typically include bulk storage systems, such as magnetic disc drives, optical storage devices, tape drives, or solid state storage drives, among other storage systems.
  • A host system, such as a network device, server, or end-user computing device, communicates with external bulk storage systems to store data or to access previously stored data.
  • These bulk storage systems are traditionally limited in the number of devices that can be addressed in total, which can be problematic in environments where higher capacity or higher performance is desired.
  • Solid state media typically rely upon non-moving underlying storage medium elements, such as flash memory, phase change memory, magnetoresistive random access memory (MRAM), or other media.
  • Although solid state memory types can see increased throughput relative to moving disc and tape media, these memory types still have throughput limitations.
  • Data access in some solid state media, such as NAND flash memory, is typically performed in large blocks, and the desired data portions must be accessed and parsed by the underlying storage media control elements before subsequent reads or writes can occur.
  • Typical solid state memory drives also exchange data over a single physical link, which further limits data access flexibility and throughput.
  • Environments with increasing data storage and retrieval demands, such as networked, cloud, and enterprise environments, find these limitations of solid state memory and associated drive electronics increasingly troublesome.
  • In the systems described herein, a plurality of solid state memory subsystems is included in a single device and managed by a storage processing system separate from a host system.
  • Each of the plurality of memory subsystems is internally addressed and mapped by the storage processing system in a parallel manner over multiple channels, allowing increased capacity, reduced latency, increased throughput, and more robust feature sets than traditional bulk storage systems.
  • A solid state storage device includes an interface system configured to communicate with an external host system over an aggregated multi-channel interface to receive data for storage by the solid state storage device.
  • The solid state storage device also includes a storage processing system configured to communicate with the interface system to receive the data, process the data against storage allocation information to parallelize the data among a plurality of solid state memory subsystems, and transfer the parallelized data.
  • The interface system is configured to receive the parallelized data, apportion the parallelized data among the plurality of solid state memory subsystems, and transfer the parallelized data for storage in the plurality of solid state memory subsystems, where each of the plurality of solid state memory subsystems is configured to receive the associated portion of the parallelized data and store the associated portion on a solid state storage medium.
  • A method of operating a solid state storage device includes, in an interface system, communicating with an external host system over an aggregated multi-channel interface to receive data for storage by the solid state storage device.
  • The method also includes, in a storage processing system, communicating with the interface system to receive the data, processing the data against storage allocation information to parallelize the data among a plurality of solid state memory subsystems, and transferring the parallelized data.
  • The method also includes, in the interface system, receiving the parallelized data, apportioning the parallelized data among the plurality of solid state memory subsystems, and transferring the parallelized data for storage in the plurality of solid state memory subsystems, where each of the plurality of solid state memory subsystems is configured to receive the associated portion of the parallelized data and store the associated portion on a solid state storage medium.
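The parallelization summarized above can be sketched as a round-robin split of host data into per-subsystem portions, with the inverse interleaving performed on a read. This is only an illustrative model under assumed fixed-size portions; the function names `parallelize` and `reconstruct` are hypothetical, and the patent does not prescribe this particular scheme.

```python
# Illustrative sketch: split host data round-robin into one portion per
# memory subsystem (the "parallelize" step), then reassemble on read.
# Portion size and function names are assumptions, not from the patent.

def parallelize(data: bytes, num_subsystems: int, portion_size: int = 4):
    """Break data into fixed-size pieces and deal them out round-robin."""
    portions = [bytearray() for _ in range(num_subsystems)]
    for index, start in enumerate(range(0, len(data), portion_size)):
        portions[index % num_subsystems] += data[start:start + portion_size]
    return [bytes(p) for p in portions]

def reconstruct(portions, total_len: int, portion_size: int = 4) -> bytes:
    """Inverse of parallelize: interleave the portions back together."""
    out = bytearray()
    offsets = [0] * len(portions)
    index = 0
    while len(out) < total_len:
        sub = index % len(portions)
        out += portions[sub][offsets[sub]:offsets[sub] + portion_size]
        offsets[sub] += portion_size
        index += 1
    return bytes(out)
```

In such a scheme, consecutive pieces of the host data land on different memory subsystems, so they can be written and read concurrently rather than serially through a single link.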
  • FIG. 1 is a system diagram illustrating a storage system.
  • FIG. 2 is a block diagram illustrating a solid state storage device.
  • FIG. 3 is a flow diagram illustrating a method of operation of a solid state storage device.
  • FIG. 4 is a system diagram illustrating a storage system.
  • FIG. 5 is a sequence diagram illustrating a method of operation of a solid state storage device.
  • FIG. 6 is a sequence diagram illustrating a method of operation of a solid state storage device.
  • FIG. 7 is a system diagram illustrating a storage system.
  • FIG. 8 is a system diagram illustrating a storage system.
  • FIG. 9 is a sequence diagram illustrating a method of operation of a solid state storage device.
  • FIG. 10 includes graphs illustrating example power down curves.
  • FIG. 11 is a system diagram illustrating a storage system.
  • FIG. 12 is a system diagram illustrating a storage system.
  • FIG. 13 includes side view diagrams illustrating a storage system.
  • FIG. 1 is a system diagram illustrating storage system 100.
  • Storage system 100 includes solid state storage device 101 and host system 150.
  • Solid state storage device 101 and host system 150 communicate over link 140.
  • Host system 150 can transfer data to be stored by solid state storage device 101, such as in a ‘write’ transaction, where solid state storage device 101 stores associated data on a computer-readable storage medium, namely ones of solid state storage media 132.
  • Host system 150 can also request data previously stored by solid state storage device 101, such as during a ‘read’ transaction, and solid state storage device 101 retrieves associated data from ones of solid state storage media 132 and transfers the requested data to host system 150.
  • Transactions other than writes and reads could be handled by the elements of solid state storage device 101, such as metadata requests and manipulation, file or folder deletion or moving, volume information requests, or file information requests, among other transactions.
  • Solid state storage device 101 includes interface system 110, storage processing system 120, storage allocation information 125, two memory subsystems 130, and two solid state storage media 132.
  • Links 140-146 each comprise physical, logical, or virtual communication links capable of communicating data, control signals, instructions, or commands, along with other information.
  • Links 141-146 are configured to communicatively couple the associated elements of solid state storage device 101, whereas link 140 is configured to communicatively couple solid state storage device 101 to external systems, such as host system 150.
  • Links 141-146 are encapsulated within the elements of solid state storage device 101 and may be software or logical links.
  • Communications exchanged with a host system are typically referred to as ‘front-end’ communications, while communications exchanged with memory subsystems are typically referred to as ‘back-end’ communications.
  • Interface system 110 includes interface circuitry and processing systems to exchange data for storage and retrieval with host system 150 over link 140, as well as to exchange data for processing by storage processing system 120 over link 141.
  • Interface system 110 receives instructions and data from host system 150 over an aggregated link, where multiple physical interfaces, each comprising a physical communication layer, are bonded to form a combined-bandwidth link.
  • Interface system 110 formats the received instructions and associated data for transfer to a central processing system, such as storage processing system 120, and also formats data and information for transfer to host system 150.
  • Storage processing system 120 includes a processor and non-transitory computer-readable memory which includes computer-readable instructions, such as firmware. These instructions, when executed by storage processing system 120, instruct storage processing system 120 to operate as described herein.
  • Storage processing system 120 could be configured to receive data and instructions transferred by interface system 110 and process the data and instructions to optimize storage and retrieval operations.
  • Write and read instructions and data are processed against storage allocation information 125 to optimize data transfer, such as through parallelization, interleaving, portion sizing, portion addressing, or other data transfer optimizations for data storage and retrieval with memory subsystems.
  • Storage processing system 120 and storage allocation information 125 are shown communicatively coupled over link 144, although in other examples storage allocation information 125 could be included in storage processing system 120 or other circuitry.
  • Memory subsystems 130 each include circuitry to store and retrieve optimized data portions with solid state storage media 132 over associated links 145-146 and to exchange the data with interface system 110 over associated links 142-143.
  • Solid state storage media 132 each include a solid state storage array, such as flash memory, static random-access memory (SRAM), magnetic memory, phase change memory, or other non-transitory, non-volatile storage medium. Although two of memory subsystems 130 and solid state storage media 132 are shown in FIG. 1, it should be understood that a different number could be included.
  • Links 140-146 each use various communication media, such as air, space, metal, optical fiber, or some other signal propagation path, including combinations thereof.
  • Links 140-146 could each be a direct link or might include various equipment, intermediate components, systems, and networks.
  • Links 140-146 could each be a common link, shared link, or aggregated link, or may be comprised of discrete, separate links.
  • Example types of each of links 140-146 include serial attached SCSI (SAS), aggregated SAS, Ethernet, small-computer system interface (SCSI), integrated drive electronics (IDE), serial AT attachment interface (ATA), parallel ATA, FibreChannel, InfiniBand, Thunderbolt, universal serial bus (USB), FireWire, peripheral component interconnect (PCI), PCI Express (PCIe), communication signaling, or other communication interface types, including combinations or improvements thereof.
  • The elements of FIG. 1 could be included in a single enclosure, such as a case, with an external connector associated with host interface 140 to communicatively couple the associated elements of solid state storage device 101 to external systems, connectors, cabling, and/or transceiver elements.
  • The enclosure could include various printed circuit boards with the components and elements of solid state storage device 101 disposed thereon. Printed circuit traces, flexible printed circuits, or discrete wires could be employed to interconnect the various elements of solid state storage device 101. If multiple printed circuit boards are employed, inter-board connectors or cabling are employed to communicatively couple each printed circuit board.
  • FIG. 2 includes example embodiments of several elements of solid state storage device 101.
  • FIG. 2 includes interface system 110, storage processing system 120, memory subsystem 130, and solid state storage medium 132.
  • The elements of FIG. 2 are merely exemplary and could include other configurations.
  • Elements of FIG. 2 could also be exemplary embodiments of the elements described in FIGS. 3-13.
  • Further memory subsystems and solid state storage media could be included, along with associated interfaces.
  • Interface system 110 includes host interface 212, input/output (I/O) processing system 214, and high-speed interface 216.
  • Host interface 212, I/O processing system 214, and high-speed interface 216 each communicate over bus 219, although discrete links could be employed.
  • Host interface 212 includes connectors, buffers, transceivers, and other input/output circuitry to communicate with a host system over external device interface 140.
  • External device interface 140 could include multiple physical links aggregated into a single interface.
  • I/O processing system 214 includes a processor and memory for exchanging data between host interface 212 and high-speed interface 216, as well as for controlling the various features of host interface 212 and high-speed interface 216.
  • Host interface 212 also communicates with memory subsystem 130 over internal device interface 142 in this example. As shown in FIG. 2, host interface 212 communicates with both an external host system and memory subsystem 130 using a similar communication interface and protocol type. Although external device interface 140 may include an external connector, internal device interface 142 may instead employ circuit traces or internal connectors. External device interface 140 and internal device interface 142 could each comprise an aggregated or non-aggregated serial interface, such as serial attached SCSI (SAS), although other interfaces could be employed.
  • High-speed interface 216 includes buffers, transceivers, and other input/output circuitry to communicate over internal storage interface 141.
  • Internal storage interface 141 could comprise a multi-lane high-speed serial interface, such as PCI Express (PCIe), although other interfaces could be employed.
  • Interface system 110 may be distributed or concentrated among multiple elements that together form the elements of interface system 110.
  • FIG. 2 also includes storage processing system 120, which includes high-speed interface 222, processing system 224, memory 226, and other I/O 228.
  • High-speed interface 222, processing system 224, memory 226, and other I/O 228 each communicate over bus 229, although discrete links could be employed.
  • High-speed interface 222 includes buffers, transceivers, and other input/output circuitry to communicate over internal storage interface 141.
  • Processing system 224 includes a processor and memory for processing data against storage allocation information 125 to determine which of memory subsystems 130 to store the data on and to parallelize the data for interleaved storage across ones of the lanes of high-speed interface 222 or across multiple memory subsystems, among other operations, including reconstructing parallelized data during read operations.
  • Processing system 224 may store data and executable computer-readable processing instructions in memory 226, which could include random-access memory (RAM), or within optional non-volatile memory 250.
  • Other I/O 228 includes other various interfaces for communicating with processing system 224, such as power control interfaces, SMBus interfaces, system configuration and control interfaces, or an interface to communicate with non-volatile memory 250 over link 144.
  • Optional non-volatile memory 250 comprises a non-transitory computer-readable medium, such as static RAM, flash memory, electronically erasable and reprogrammable memory, magnetic memory, phase change memory, optical memory, or other non-volatile memory.
  • Non-volatile memory 250 is shown to include storage allocation information 125 and could include further information.
  • Storage allocation information 125 includes tables, databases, linked lists, trees, or other data structures for indicating where data is stored within and among a plurality of memory subsystems 130.
  • In other examples, storage allocation information 125 is stored within storage processing system 120, such as within memory 226.
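Storage allocation information of this kind, a mapping from host-visible addresses to subsystem-local locations, could be modeled minimally as follows. The class, its fields, and the round-robin assignment policy are all illustrative assumptions, not details from the patent.

```python
# Illustrative model of storage allocation information: a table mapping
# host logical block addresses (LBAs) to (subsystem index, local address)
# pairs, assigned round-robin so consecutive LBAs land on different
# memory subsystems and can be accessed in parallel.

class StorageAllocationTable:
    def __init__(self, num_subsystems: int):
        self.num_subsystems = num_subsystems
        self.table = {}                      # LBA -> (subsystem, local address)
        self.next_local = [0] * num_subsystems

    def allocate(self, lba: int):
        """Assign an LBA to the next free slot of a subsystem chosen round-robin."""
        sub = lba % self.num_subsystems
        entry = (sub, self.next_local[sub])
        self.next_local[sub] += 1
        self.table[lba] = entry
        return entry

    def lookup(self, lba: int):
        """Return the (subsystem, local address) pair previously assigned to an LBA."""
        return self.table[lba]
```

A real device would hold such a structure in fast memory and persist it to non-volatile storage, as the surrounding text describes for memory 226 and non-volatile memory 250.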
  • Storage processing system 120 may be distributed or concentrated among multiple elements that together form the elements of storage processing system 120 .
  • FIG. 2 also includes memory subsystem 130, which includes target interface 232, processing system 234, and memory interface 236.
  • Target interface 232, processing system 234, and memory interface 236 communicate over bus 239, although discrete links could be employed.
  • Target interface 232 includes buffers, transceivers, and other input/output circuitry to communicate over internal device interface 142.
  • Processing system 234 includes a processor and memory for exchanging data between target interface 232 and memory interface 236, as well as for controlling the various features of target interface 232 and memory interface 236.
  • Memory interface 236 includes buffers, transceivers, and other input/output circuitry to communicate with and control solid state storage medium 132.
  • Memory interface 236 could also include memory technology-specific circuitry, such as flash memory electronic erasing and reprogramming circuitry, phase change memory write circuitry, or other circuitry to store data within and read data from solid state storage medium 132 over interface 145.
  • Solid state storage medium 132 comprises memory elements and interface circuitry, where the memory elements could comprise a non-transitory computer-readable medium, such as static RAM, flash memory, electronically erasable and reprogrammable memory, magnetic memory, phase change memory, optical memory, or other non-volatile memory.
  • Processing system 234 of memory subsystem 130 comprises an application-specific processor used to provide performance off-loading from storage processing system 120.
  • Performance off-loading includes memory wear-leveling, bad block management, error detection and correction, and parallel addressing and data channeling to the individual solid state media included therein.
  • FIG. 3 is a flow diagram illustrating a method of operation of solid state storage device 101.
  • Interface system 110 communicates (301) with external host system 150 over aggregated multi-channel interface 140 to receive instructions and data for storage.
  • The data for storage could be included with a ‘write’ instruction or series of commands which indicates the data to be written and possibly further information, such as a storage location, storage address, write address, volume information, metadata, or other information associated with the data.
  • Host system 150 transfers the data over link 140, and the data is received by interface system 110 of solid state storage device 101.
  • Interface system 110 could provide an acknowledgment message to host system 150 in response to successfully receiving the write instruction and associated data.
  • Host interface 212 of FIG. 2 receives the data over link 140 and then transfers the data over bus 219 for processing and formatting by I/O processing system 214, and subsequent transfer by high-speed interface 216 over link 141.
  • Storage processing system 120 communicates ( 302 ) with interface system 110 to receive the data and associated write instruction information, processes the data against storage allocation information 125 to parallelize the data among a plurality of solid state memory subsystems 130 , and transfers the parallelized data.
  • Storage processing system 120 receives the data over link 141 .
  • high-speed interface 222 receives the data and associated information over link 141 and transfers the data and associated information over bus 229 .
  • Storage processing system 120 processes the data against storage allocation information 125 to determine which memory subsystems will store the data, and parallelizes the data among several memory subsystems. In this example, two memory subsystems 130 are included, and the data is parallelized among each.
  • The data parallelization could include breaking the data into individual portions for storage on an associated memory subsystem, where the individual portions are then transferred over link 141 by storage processing system 120.
  • The individual portions could be transferred by high-speed interface 222 over link 141.
  • The data could also be interleaved among multiple memory subsystems, such as by striping or mirroring.
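The parallelization and interleaving described above can be sketched as simple round-robin striping. The chunk size, subsystem count, and function name below are illustrative assumptions, not values specified in this description.

```python
def stripe(data: bytes, num_subsystems: int, chunk: int = 4096) -> list[list[bytes]]:
    """Break data into chunk-sized individual portions and deal them
    round-robin across the memory subsystems (simple striping)."""
    portions: list[list[bytes]] = [[] for _ in range(num_subsystems)]
    for index, offset in enumerate(range(0, len(data), chunk)):
        portions[index % num_subsystems].append(data[offset:offset + chunk])
    return portions
```

Each inner list would then be transferred for storage on its associated memory subsystem.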
  • Storage allocation information 125 typically includes a table, database, tree, or other data structure for indicating where data is stored among multiple memory subsystems as well as other information, such as metadata, file system structure information, volume information, logical drive information, virtual drive information, among other information for storing, retrieving, and handling data stored within solid state storage device 101 .
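As a sketch of how such a data structure might behave, the following minimal table maps a logical block to the memory subsystem and physical block holding it; the class name, method names, and block-level granularity are hypothetical, chosen only to illustrate the lookup role described above.

```python
class StorageAllocationInfo:
    """Minimal sketch of storage allocation information: a mapping from
    logical block address to (subsystem id, physical block address)."""

    def __init__(self):
        self._table: dict[int, tuple[int, int]] = {}

    def record_write(self, logical: int, subsystem: int, physical: int) -> None:
        # Remember where a logical block was placed during parallelization.
        self._table[logical] = (subsystem, physical)

    def locate(self, logical: int):
        # Returns (subsystem, physical) or None if never written.
        return self._table.get(logical)
```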
  • Storage processing system 120 could perform other operations on the data, such as read-modify-writes, read-modify-write caching, encryption, encoding, implementing a redundancy scheme, calculating redundancy information, compression, or de-duplication of data during storage and subsequent retrieval, among other operations.
  • Interface system 110 receives ( 303 ) the parallelized data, apportions the parallelized data among the plurality of solid state memory subsystems 130 , and transfers the parallelized data for storage by the plurality of solid state memory subsystems 130 .
  • The parallelized data is received over link 141 in this example, and subsequently transferred by interface system 110 over ones of links 142-143.
  • The data portion is received by target interface 232 and transferred over bus 239. Transferring the data portions over links 142-143 could include initiating a ‘write’ command with each associated memory subsystem 130 for the individual portion of data, and transferring the individual portion of data along with the associated write command to the appropriate memory subsystem 130. Additional data could accompany the parallelized data, such as addressing information, identifiers for the associated memory subsystem, metadata, or other information.
  • Each of solid state memory subsystems 130 is configured to receive the associated portion of the parallelized data and store the associated portion on associated solid state storage medium 132 .
  • Memory interface 236 could transfer the associated portion for storage over links 145-146.
  • Links 145-146 could each include multiple links or busses, such as row/column lines, control, address, and data lines, or other configurations.
  • Processing system 234 could instruct memory interface 236 to perform wear-level optimization, bad block handling, write scheduling, write optimization, garbage collection, or other data storage operations.
  • Although FIG. 3 discusses a write operation for storage of data by solid state storage device 101, a retrieval or read operation could proceed similarly.
  • Host system 150 informs solid state storage device 101 of data desired to be retrieved over link 140, such as via a read instruction.
  • Interface system 110 receives this retrieve instruction, and transfers the read instruction to storage processing system 120 .
  • Interface system 110 could provide an acknowledge message to host system 150 in response to successfully receiving the read instruction.
  • Storage processing system 120 would process the retrieve command against storage allocation information 125 to determine which of memory subsystems 130 have access to the desired data.
  • Storage processing system 120 then issues individual parallel read commands to interface system 110 which subsequently informs associated memory subsystems 130 to retrieve the data portions from associated solid state storage media 132 .
  • Interface system 110 may then receive the data portions and transfer to storage processing system 120 for de-parallelization, merging, or for performing other operations, such as decrypting or de-duplication.
  • The storage allocation information could be processed against the data portions during a read to de-parallelize the data into merged data.
  • Storage processing system 120 then transfers the assembled data for delivery to host system 150 through interface system 110 .
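The de-parallelization step can be sketched as reassembling the retrieved portions in the logical order recovered from the storage allocation information; the index-based ordering scheme below is an assumption for illustration.

```python
def merge_portions(portions: dict[int, bytes], order: list[int]) -> bytes:
    """De-parallelize read data: 'portions' maps a portion index to the
    bytes returned by one memory subsystem; 'order' lists indices in
    logical order, as recovered from storage allocation information."""
    return b"".join(portions[i] for i in order)
```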
  • FIG. 4 is a system diagram illustrating storage system 400 , as an example of elements of storage system 100 found in FIG. 1 , although storage system 100 could use other configurations.
  • Storage system 400 includes solid state storage device 401 and host system 450 .
  • Solid state storage device 401 and host system 450 communicate over link 460 , which is an aggregated serial attached SCSI (SAS) interface in this example.
  • Host system 450 can transfer data to be stored by solid state storage device 401, such as a ‘write’ operation where solid state storage device 401 stores associated data on a computer-readable storage medium, namely ones of solid state storage arrays 432.
  • Host system 450 can also request data previously stored by solid state storage device 401 , such as during a ‘read’ operation, and solid state storage device 401 retrieves associated data and transfers the requested data to host system 450 .
  • Solid state storage device 401 includes host interface system 410 , storage processing system 420 , memory subsystems 430 , solid state storage arrays 432 , and storage interface system 440 .
  • The SAS interface is employed in this example as a native drive interface, where a native drive interface is typically used by a computer system, such as host 450, for direct access to bulk storage drives.
  • The SAS interface is bootable and does not typically require custom drivers for an operating system to utilize it.
  • Link aggregation for host interface 460 can be performed during a configuration process between host 450 and configuration elements of solid state storage device 401 , such as firmware elements.
  • PCIe interfaces employed internally to solid state storage device 401 are typically non-native drive interfaces, where PCIe is typically not used by a computer system for direct access to bulk storage drives.
  • The PCIe interface does not typically support bootable devices attached thereto, and requires custom device-specific drivers for operating systems to optimally access the associated devices.
  • A PCIe or SATA-based front-end host interface could be employed instead of a SAS-based front-end host interface.
  • Host interface system 410 includes interface circuitry to exchange data for storage and retrieval with host system 450 over an aggregated SAS interface, namely link 460 .
  • Host interface system 410 includes an SAS target portion to communicate with an SAS initiator portion of host system 450 .
  • Link 460 includes an aggregated SAS interface, which could include eight individual SAS links merged into a single logical SAS link, or could include a subset of the eight individual links merged into a logical link.
  • Connector 412 serves as a user-pluggable physical connection point between host system 450 and solid state storage device 401 .
  • Link 460 could include cables, wires, or optical links, including combinations thereof.
  • Host interface system 410 also includes a PCI Express (PCIe) interface and associated circuitry.
  • The PCIe interface of host interface 410 communicates over multi-lane PCIe interface 462 with storage processing system 420.
  • Eight lanes are shown, which could comprise an ‘x8’ PCIe interface, although other configurations and numbers of lanes could be used.
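For a rough sense of why lane count matters, the arithmetic below estimates usable link bandwidth. The PCIe generation and the 8b/10b line encoding are assumptions (they apply to PCIe Gen1/Gen2); the description above does not specify a generation.

```python
GT_PER_LANE = {1: 2.5, 2: 5.0}  # gigatransfers/s per lane, PCIe Gen1/Gen2
ENCODING = 8 / 10               # 8b/10b line-encoding efficiency (Gen1/Gen2)

def pcie_bandwidth_gb(lanes: int, gen: int) -> float:
    """Approximate usable one-direction bandwidth in gigabytes/s
    for an aggregated multi-lane PCIe link."""
    return lanes * GT_PER_LANE[gen] * ENCODING / 8  # 8 bits per byte
```

Under these assumptions an x8 Gen2 link carries roughly 4 GB/s in each direction.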
  • In addition to the SAS target portion, host interface system 410 could include an SAS initiator portion.
  • The SAS initiator portion could be employed to manage, control, or issue commands to other solid state storage devices.
  • Link 460 could include wireless portions, such as a wireless SAS interface, or other wireless communication and networking communication links.
  • Storage processing system 420 includes a microprocessor and memory with executable computer-readable instructions. Storage processing system 420 processes the data for storage and retrieval against storage allocation information as well as exchanges the data to be stored, or instructions to retrieve data, with both host interface system 410 and storage interface system 440 . Storage processing system 420 executes computer-readable instructions to operate as described herein. As with host interface system 410 , storage processing system 420 includes a PCIe interface for communicating over link 462 with host interface system 410 . Storage processing system 420 also includes a further PCIe interface for communicating with storage interface system 440 over x8 PCIe link 463 . In this example, storage processing system 420 includes two PCIe interfaces with eight PCIe lanes each, although other configurations and numbers of lanes could be used.
  • Storage interface system 440 includes interface circuitry to exchange data and storage instructions between storage processing system 420 and a plurality of memory subsystems, namely memory subsystems 430 .
  • Storage interface system 440 includes a PCIe interface for communicating with storage processing system 420 over link 463 , and a SAS interface for communicating with each of memory subsystems 430 over associated ones of links 464 .
  • Storage interface system 440 includes one PCIe interface 463 with eight PCIe lanes, although other configurations and numbers of lanes could be used.
  • Storage interface system 440 communicates over a single SAS link with each of memory subsystems 430, and includes an SAS initiator portion for communicating with SAS target portions of each memory subsystem 430 over SAS links 464.
  • While host interface system 410 and storage interface system 440 are shown as separate elements in FIG. 4, in other examples these elements could be included in a single system, such as shown in interface system 110 or interface system 810, although other configurations could be employed. Also, in other examples, instead of a PCIe interface for link 463, a SAS, SATA, or other aggregated or multi-lane serial link could be employed. Likewise, instead of an SAS interface for link 464, PCIe, SATA, or other links could be employed.
  • Memory subsystems 430 each include circuitry to store and retrieve data from associated ones of solid state storage arrays 432 over associated links 465 and exchange the data with storage interface system 440 over associated SAS links 464 .
  • Memory subsystems 430 also each include an SAS target portion for communicating with the SAS initiator portion of storage interface system 440 .
  • Solid state storage arrays 432 each include a solid state storage medium, such as flash memory, static random-access memory, magnetic memory, or other non-volatile memory. Although four of memory subsystems 430 and solid state storage arrays 432 are shown in FIG. 4 , it should be understood that a different number could be included with an associated additional link 464 .
  • FIG. 5 is a sequence diagram illustrating a method of operation of solid state storage device 401 .
  • Host system 450 transfers a write command and associated data to be written to host interface system 410.
  • The write command typically includes information on the write command as well as a storage location indicator, such as a storage address at which to store the data.
  • The write command and data are transferred over aggregated SAS link 460 in this example, and received by the SAS target portion of host interface system 410.
  • Host interface system 410 could optionally provide an acknowledge message to host system 450 in response to successfully receiving the write command and associated data.
  • Host interface system 410 then transfers the write command and data to storage processing system 420 over PCIe interface 462 .
  • Host interface system 410 modifies the write command or data into a different communication format and protocol for transfer over interface 462, which could include generating a new write command and associating the data with the new write command for transfer over interface 462 for receipt by storage processing system 420.
  • Storage processing system 420 receives the write command and data and issues a ‘write complete’ message back to host interface system 410.
  • Host interface system 410 then transfers the write complete message or associated information for receipt by host system 450 .
  • The write complete message indicates that host system 450 is free to initiate further commands or end the write process associated with the write command described above.
  • In some examples, the write complete message is associated with command queuing.
  • A ‘write-through’ operation could be performed. In a ‘write-through’ operation, a write complete message is not generated until the associated data has been committed to associated ones of the memory subsystems or to associated ones of the solid state storage media.
  • A ‘write-back’ operation could instead be performed, where host interface system 410 initiates and transfers a write complete message to host system 450 in response to receiving the write data, which could further reduce latency for host system 450.
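The two completion policies can be contrasted with a small sketch; the function and policy-string names are hypothetical labels for the behaviors described above.

```python
def may_send_write_complete(policy: str, data_received: bool,
                            data_committed: bool) -> bool:
    """Write-back acknowledges as soon as the data is received by the
    device; write-through waits until the data has been committed to
    the memory subsystems or storage media."""
    if policy == "write-back":
        return data_received
    if policy == "write-through":
        return data_committed
    raise ValueError(f"unknown policy: {policy}")
```

Write-back trades durability guarantees at the moment of acknowledgment for lower host-visible latency.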
  • Storage processing system 420 then parallelizes the data for storage across multiple memory subsystems.
  • Storage processing system 420 processes storage location information associated with the received data against storage allocation information to determine a parallelization.
  • Parallelizing data includes breaking the data into smaller portions, where each portion is intended for transfer across a different storage interface and subsequent storage by a different storage medium.
  • Parallelizing also includes generating an associated write command for each data portion.
  • The data is parallelized into at least four portions.
  • A redundancy scheme is applied to the data, and the portions of data could include redundant data portions, parity data, checksum information, or other redundancy information.
  • Parallelizing the data could also include interleaving the data across several storage interfaces and associated storage media.
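One common redundancy scheme of the kind mentioned above is XOR parity computed across the parallelized portions, sketched below. RAID-5-style parity is an illustrative choice here, not one mandated by the description.

```python
def xor_parity(portions: list[bytes]) -> bytes:
    """Compute an XOR parity block over equal-length data portions.
    Any single lost portion can be rebuilt by XOR-ing the parity
    block with all surviving portions."""
    parity = bytearray(len(portions[0]))
    for portion in portions:
        for i, byte in enumerate(portion):
            parity[i] ^= byte
    return bytes(parity)
```

Rebuilding a lost portion applies the same function to the surviving portions plus the parity block.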
  • Storage processing system 420 transfers parallelized write commands and parallelized data portions to storage interface system 440.
  • PCIe interface 463 between storage processing system 420 and storage interface system 440 includes eight lanes, and the data could be transferred in parallel across all eight lanes, or a subset thereof.
  • Storage interface system 440 receives the parallelized write commands and parallelized data portions over PCIe interface 463 and in response initiates writes over each of SAS interfaces 464 for each of the parallelized data portions.
  • The SAS target portion of each of memory subsystems 430 receives the associated writes and parallelized data portion, and in response, issues associated writes to the associated solid state storage media.
  • The write operation originally transferred by host system 450 for data storage by solid state storage device 401 completes when the data is written to the associated solid state storage arrays.
  • Host system 450 issues a read request.
  • The read request is transferred as a read command over SAS interface 460 for receipt by host interface system 410.
  • The read command could include read command information such as storage location information, and a destination address for the read data once retrieved.
  • Host interface 410 receives the read command and in response issues a read command over PCIe interface 462 for receipt by storage processing system 420 .
  • Storage processing system 420 processes the read command against storage allocation information to determine where the data requested in the read command is located or stored. Since data in previous write operations was parallelized and stored on different solid state storage arrays, the data must then be retrieved from these arrays.
  • Storage processing system 420 determines individual locations for which to issue read commands, and transfers these individual read commands over PCIe interface 463 for receipt by storage interface system 440.
  • Storage interface system 440 issues parallel read commands over individual ones of SAS interfaces 464 for receipt by ones of memory subsystems 430 .
  • Ones of memory subsystems 430 issue reads to retrieve the data from solid state memory arrays 432 .
  • The read data is transferred by memory subsystems 430 and storage interface system 440 over the associated SAS and PCIe interfaces for receipt by storage processing system 420.
  • Storage processing system 420 receives the read data, and processes the individual read data portions against the storage allocation information and the read command information to reassemble or merge the individual read data portions into de-parallelized data.
  • The de-parallelized data is then transferred over PCIe interface 462 for subsequent transfer by host interface system 410 over SAS interface 460 for receipt by host system 450.
  • The read operation originally transferred by host system 450 for data retrieval by solid state storage device 401 completes when the data is successfully transferred to host system 450.
  • FIG. 6 is a sequence diagram illustrating a method of operation of solid state storage device 401 for optimization of read-modify-write operations.
  • Host system 450 transfers a write command and associated data to be written to host interface system 410.
  • The write command typically includes information on the write command as well as a storage location indicator, such as a storage address at which to store the data.
  • The write command and data are transferred over aggregated SAS link 460 in this example, and received by the SAS target portion of host interface system 410.
  • Host interface system 410 could optionally provide an acknowledgment message to host system 450 in response to successfully receiving the write command and associated data.
  • Host interface system 410 then transfers a related write command and data to storage processing system 420 over PCIe interface 462.
  • Host interface system 410 modifies the write command or data into a different communication format and protocol for transfer over interface 462, which could include generating a new write command and associating the data with the new write command for transfer over interface 462 for receipt by storage processing system 420.
  • Storage processing system 420 receives the write command and data and issues a ‘write complete’ message back to host interface system 410 .
  • Host interface system 410 then transfers the write complete message or associated information for receipt by host system 450 .
  • the write complete message indicates that host system 450 is free to initiate further commands or end the write process associated with the write command described above. In some examples, the write complete message is associated with command queuing.
  • Storage processing system 420 determines that a read-modify-write operation would need to be performed to write the data. The determination is based on storage allocation information and information received with the write command. For example, in some types of storage media, such as flash memory, data manipulation occurs in large blocks due to limitations of the underlying media technology. If the amount of data to be written is less than a desired block size, then a read-modify-write would need to be performed. Additionally, the data to be written could be parallelized as discussed in FIG. 5. In this example, storage processing system 420 processes a storage location associated with the received data against storage allocation information to determine a portion of stored data to read before the data received from host system 450 is written.
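The block-size check described above can be sketched as follows; the 4096-byte block size is an illustrative assumption, and the check is slightly generalized to also treat misaligned starting or ending offsets as partial-block writes.

```python
BLOCK_SIZE = 4096  # illustrative flash block size, not specified above

def needs_read_modify_write(offset: int, length: int,
                            block: int = BLOCK_SIZE) -> bool:
    """A write needs a read-modify-write when it does not cover whole
    blocks: it starts mid-block, ends mid-block, or is shorter than
    one full block."""
    return offset % block != 0 or (offset + length) % block != 0
```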
  • Storage processing system 420 determines individual locations for which to issue read commands, and transfers these individual read commands over PCIe interface 463 for receipt by storage interface system 440.
  • Storage interface system 440 issues parallel read commands over individual ones of SAS interfaces 464 for receipt by ones of memory subsystems 430.
  • Ones of memory subsystems 430 issue read operations to retrieve the data from solid state memory arrays 432.
  • The read data is transferred by memory subsystems 430 and storage interface system 440 over the associated SAS and PCIe interfaces for receipt by storage processing system 420.
  • Storage processing system 420 modifies the read data with the write data received from host system 450 to create read-modified-write data.
  • This read-modified-write data comprises the read data as modified by the write data.
  • Storage processing system 420 caches the read-modified-write data in anticipation of further writes to the same portion or block of data. If such a subsequent write is received, then storage processing system 420 further modifies the read-modified-write data.
  • Storage processing system 420 could wait for further writes until a threshold amount of data has been modified due to subsequent writes before committing the data to ones of the memory subsystems.
  • Storage processing system 420 could also cache the write data for a plurality of write instructions until a threshold amount of data has been modified for a particular memory block or plurality of blocks, at which point the data is committed to ones of the memory subsystems.
  • A read may not be required in such cases, as the initial write data and subsequent writes modify an entire block or blocks of data and the full block or blocks can be committed from the cached location to ones of the memory subsystems.
  • Various combinations of the above reads and writes could be performed.
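The caching behavior above can be sketched with a per-block dirty-byte tracker; the block size, threshold fraction, and class name are assumptions chosen for illustration.

```python
class WriteCache:
    """Track dirty bytes per block; report a block as ready to commit to
    the memory subsystems once a threshold fraction has been modified."""

    def __init__(self, block: int = 4096, threshold: float = 0.5):
        self.block = block
        self.min_dirty = int(threshold * block)
        self.dirty: dict[int, set[int]] = {}  # block number -> dirty offsets

    def write(self, offset: int, length: int) -> list[int]:
        """Record a write; return block numbers now ready to commit."""
        for byte in range(offset, offset + length):
            self.dirty.setdefault(byte // self.block, set()).add(byte % self.block)
        ready = [blk for blk, d in self.dirty.items() if len(d) >= self.min_dirty]
        for blk in ready:
            del self.dirty[blk]  # committed; stop tracking its dirty bytes
        return ready
```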
  • Storage processing system 420 parallelizes the read-modified-write data for storage across multiple computer-readable storage media.
  • Storage processing system 420 processes storage location information associated with the received write data against storage allocation information and location information for the read data to determine a parallelization.
  • Parallelizing data includes breaking the data into smaller portions, where each portion is intended for transfer across a different storage interface and subsequent storage by a different storage medium.
  • The read-modified-write data is parallelized into at least four portions.
  • A redundancy scheme is applied to the read-modified-write data, and the portions of data could include redundant data portions, parity data, checksum information, or other redundancy information.
  • Parallelizing the read-modified-write data could also include interleaving the data across several storage interfaces and associated storage media.
  • Storage processing system 420 transfers parallelized write commands and parallelized read-modified-write data portions to storage interface system 440.
  • PCIe interface 463 between storage processing system 420 and storage interface system 440 includes eight lanes, and the data could be transferred in parallel across all eight lanes, or a subset thereof.
  • Storage interface system 440 receives the parallelized write commands and parallelized read-modified-write data portions over PCIe interface 463 and in response initiates write commands over each of SAS interfaces 464 for each of the parallelized data portions.
  • The SAS target portion of each of memory subsystems 430 receives the associated write commands and parallelized data portion, and in response, issues associated write operations to the associated solid state storage media.
  • The write operation transferred by host system 450 for data storage by solid state storage device 401 completes when the read-modified-write data is written to the associated solid state storage arrays.
  • Data could also be written to ones of memory subsystems 430 or solid state storage arrays 432 in ways alternate or complementary to a read-modify-write.
  • An overprovisioning process could be employed. In overprovisioning, the total addressable storage space of solid state storage device 401, or a virtual subdivision thereof, is reported to be less than an actual addressable storage space.
  • Solid state storage device 401 could report 100 gigabytes (100 GB) of addressable space to host system 450, but actually contain 128 GB of addressable space.
  • Read-modify-write procedures could be enhanced by employing overprovisioning. For example, write data could be immediately written to a block of the unreported addressable space.
  • Background processing by solid state storage device 401 will compare the newly written data against corresponding existing data written previously to the storage array for a given block of storage space.
  • A subsequent background read-modify-write process can then be performed by memory subsystems 430 or storage processing system 420 on the existing data against the new data written to the unreported addressable space, and the new data can then modify the existing data via a read-modify-write to create updated data to replace the existing data.
  • The updated data could then be committed to the storage block previously occupied by the existing data, located within the reported addressable space.
  • Garbage collection can then be performed on old data portions, such as to mark that portion of the unreported addressable space as free to be used for further write transactions with background read-modified-writes.
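The overprovisioning flow above can be sketched as follows. The block counts mirror the 100 GB/128 GB example (scaled to block numbers), and block-level bookkeeping with a dict is a simplifying assumption, not the device's actual mechanism.

```python
class OverprovisionedStore:
    """Writes land immediately in unreported spare blocks; a background
    pass later merges them into the reported address space and frees
    the spare blocks for reuse (garbage collection)."""

    def __init__(self, reported_blocks: int, actual_blocks: int):
        self.reported = reported_blocks
        self.spare = list(range(reported_blocks, actual_blocks))  # free spares
        self.pending: dict[int, int] = {}  # logical block -> spare block
        self.blocks: dict[int, bytes] = {}

    def write(self, logical: int, payload: bytes) -> None:
        spare = self.spare.pop()          # commit at once to unreported space
        self.blocks[spare] = payload
        self.pending[logical] = spare

    def background_merge(self) -> None:
        for logical, spare in list(self.pending.items()):
            self.blocks[logical] = self.blocks.pop(spare)  # replace old data
            self.spare.append(spare)      # spare block freed for further writes
            del self.pending[logical]
```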
  • FIG. 7 is a system diagram illustrating storage system 700 , as an example of elements of storage system 100 found in FIG. 1 , although storage system 100 could use other configurations.
  • Solid state storage device 701 can perform operations as discussed herein for solid state storage device 101 , 401 , 801 , or 1210 , although other configurations could be employed.
  • Storage system 700 includes host system 740 and solid state storage device 701 .
  • Host system 740 and solid state storage device 701 communicate over host interface 750 .
  • Host system 740 comprises a computer system, such as a server, personal computer, laptop, tablet, gaming system, entertainment system, embedded computer system, industrial computer system, network system, or other computer system.
  • Host interface 750 could comprise serial attached SCSI (SAS), aggregated SAS, Ethernet, small-computer system interface (SCSI), integrated drive electronics (IDE), Serial AT attachment interface (ATA), parallel ATA, FibreChannel, InfiniBand, Thunderbolt, universal serial bus (USB), FireWire, PCI Express, communication signaling, or other communication interface type, and could comprise optical, wired, wireless, or other interface media.
  • Solid state storage device 701 includes chip-scale device 710 , connector 711 , and non-volatile memories (MEM) 730 .
  • Connector 711 includes physical structure and connection components to attach a transmission medium to solid state storage device 701 .
  • Connector 711 could include a connector, antenna, port, or other interconnection components.
  • MEM 730 each include non-transitory non-volatile computer-readable media, such as flash memory, electrically erasable and programmable memory, magnetic memory, phase change memory, optical memory, or other non-volatile memory.
  • MEM 730 could each comprise a microchip or collection of microchips to each form a storage array.
  • Chip-scale device 710 includes host interface 712 , primary processor 713 , dynamic random access memory (DRAM) 714 , firmware 715 , memory processor 716 , and peripheral input/output (I/O) 717 .
  • Chip-scale device 710 could comprise a field-programmable gate array (FPGA), application-specific integrated circuit (ASIC), or other integrated microchip circuit and logic elements, including combinations thereof.
  • Each element of chip-scale device 710 can communicate over associated logic and signaling elements, not shown for clarity in FIG. 7 .
  • The signaling elements could include busses, discrete links, point-to-point links, or other links.
  • Host interface 712 includes circuitry and logic to communicate over host interface 750 to exchange read and write commands with host system 740 along with associated data.
  • Primary processor 713 includes logic and processing circuitry to process read and write commands to determine data storage operations, such as data parallelization, data interleaving, read-modify-write optimization, redundancy operations, or other operations for storing and retrieving data with MEM 730 through memory processor 716 .
  • Dynamic random access memory (DRAM) 714 includes random-access memory elements and access logic for primary processor 713 to retrieve executable instructions to perform as indicated herein. DRAM 714 could also include storage allocation information or cached data associated with reads/writes.
  • Firmware 715 includes non-volatile memory elements, such as static RAM (SRAM), flash memory, or other non-volatile memory elements which store computer-readable instructions for operating chip-scale device 710 as discussed herein when executed by primary processor 713 or memory processor 716.
  • Firmware 715 could include operating systems, applications, storage allocation information, configuration information, or other computer-readable instructions stored on a non-transitory computer-readable medium.
  • Memory processor 716 includes logic and circuitry for reading from and writing to a plurality of memory arrays, such as MEM 730 .
  • Memory processor 716 could also include interfacing logic for communicating over memory interfaces 752 or write circuitry for writing to flash memory or other memory technologies.
  • Peripheral I/O 717 includes circuitry and logic for communicating with further external systems, such as computer-readable storage media, programming elements for chip-scale device 710, debugging interfaces, power control interfaces, clock control interfaces, or other external interfaces, including combinations thereof.
  • FIG. 8 is a system diagram illustrating storage system 800 , as an example of elements of storage system 100 found in FIG. 1 , although storage system 100 could use other configurations.
  • Storage system 800 includes flash storage device 801 and host system 850 .
  • Flash storage device 801 and host system 850 communicate over link 860 , which is an aggregated serial attached SCSI (SAS) interface in this example.
  • Host system 850 can transfer data to be stored by flash storage device 801, such as a ‘write’ instruction where flash storage device 801 stores associated data on a computer-readable storage medium, namely ones of flash memory arrays 832.
  • Host system 850 can also request data previously stored by flash storage device 801 , such as during a ‘read’ instruction, and flash storage device 801 retrieves associated data and transfers the requested data to host system 850 . Additionally, further transactions than writes and reads could be handled by the elements of flash storage device 801 , such as metadata requests and manipulation, file or folder deletion or moving, volume information requests, file information requests, or other transactions.
  • Although flash storage is used in this example, it should be understood that other non-transitory computer-readable storage media and technologies could be employed.
  • Flash storage device 801 includes interface system 810 , storage processing system 820 , memory subsystems 830 , power control system 870 , and backup power source 880 .
  • The elements of flash storage device 801 are included within a single enclosure, such as a casing.
  • The enclosure includes connector 812 attached thereon to communicatively couple the associated elements of flash storage device 801 to external systems, connectors, and/or cabling.
  • The enclosure includes various printed circuit boards with the components and elements of flash storage device 801 disposed thereon. Printed circuit traces or discrete wires are employed to interconnect the various elements of flash storage device 801. If multiple printed circuit boards are employed, inter-board connectors are employed to communicatively couple each printed circuit board.
  • In some examples, backup power source 880 is external to the enclosure of flash storage device 801.
  • Interface system 810 includes interface circuitry to exchange data for storage and retrieval with host system 850 over an aggregated SAS interface, namely link 860 .
  • Interface system 810 includes an SAS target portion to communicate with an SAS initiator portion of host system 850 .
  • Link 860 includes an aggregated SAS interface, which includes four individual SAS links merged into a single logical SAS link.
  • Connector 812 serves as a user-pluggable physical connection port between host system 850 and flash storage device 801 .
  • Link 860 could include cables, wires, or optical links, including combinations thereof.
  • Interface system 810 also includes an SAS initiator portion, a PCI Express (PCIe) interface, and associated circuitry.
  • The SAS initiator portion of interface system 810 includes circuitry and logic for initiating instructions and commands over links 864 for exchanging data with SAS target portions of each of memory subsystems 830.
  • The PCIe interface of interface system 810 communicates over multi-lane PCIe interface 862 with storage processing system 820. In this example, eight lanes are shown, which could comprise a ‘x8’ PCIe interface, although other configurations and numbers of lanes could be used.
  • Interface system 810 also communicates with power control system 870 to receive power status information.
  • Storage processing system 820 includes a microprocessor and memory, and processes data for storage and retrieval against storage allocation information as well as exchanges the data to be stored or instructions to retrieve data with interface system 810 .
  • Storage processing system 820 executes computer-readable instructions to operate as described herein.
  • Storage processing system 820 includes a PCIe interface for communicating over link 862 with interface system 810.
  • Storage processing system 820 also communicates with power control system 870 to receive power status information, clock speed configuration information, or other power information.
  • Storage processing system 820 includes one PCIe interface with eight PCIe lanes, although other configurations and numbers of lanes could be used.
  • Each of memory subsystems 830 is a physical drive that is merged with the others into one physical enclosure with the other elements of flash storage device 801 to create a virtual drive configured and accessed by storage processing system 820.
  • Each of memory subsystems 830 is a physical drive encased in a common enclosure with interface system 810 and storage processing system 820.
  • Each of memory subsystems 830 could also be mounted on common printed circuit boards as elements of interface system 810 and storage processing system 820 .
  • Memory subsystems 830 each include a flash memory controller 831 and a flash memory array 832 .
  • Each of memory subsystems 830 comprises an independent flash memory storage drive, where each includes the electronics, circuitry, and microchips typically included in a flash memory drive, such as a USB flash drive, thumb drive, solid state hard drive, or other discrete flash memory device.
  • Flash storage device 801 includes a plurality of these memory subsystems 830 , as shown in FIG. 8 , and the associated electronics, circuitry, and microchips could be included on printed circuit boards with the other elements of flash storage device 801 .
  • A different number of memory subsystems 830 could be included, as indicated by the ellipses.
  • Each flash memory controller 831 includes circuitry to store and retrieve data from associated ones of flash memory arrays 832 and exchange the data with interface system 810 over associated ones of SAS links 864 .
  • Memory subsystems 830 also each include an SAS target portion for communicating with the SAS portion of interface system 810 .
  • The SAS target portion could be included in each of flash memory controllers 831, or could be included in separate interface elements of each of memory subsystems 830.
  • Flash memory arrays 832 each include a flash memory storage medium. It should be understood that other non-volatile memory technologies could be employed in each of memory subsystems 830 , as discussed herein.
  • Power control system 870 comprises circuitry and logic to monitor power for elements of flash storage device 801 .
  • Power control system 870 could include circuitry and logic to provide backup power to the elements of flash storage system 801 when a primary power source (not shown in FIG. 8 for clarity) is interrupted.
  • Power control system 870 monitors and conditions power received from at least backup power source 880 over link 868 , and provides power status information to ones of storage processing system 820 , interface system 810 , or memory subsystems 830 over links such as link 866 and link 867 .
  • These links could include power links, discrete communication lines, or other communication interfaces, such as serial, system management bus (SMBus), inter-integrated circuit (I2C), or other communication links.
  • Backup power source 880 includes a power source for providing backup power to elements of flash storage device 801 .
  • Backup power source 880 could include circuitry to store power, condition power, regulate, step up or step down power sources to various voltages, monitor remaining power for power sources, or other circuitry for power supply and conditioning.
  • The power source included in backup power source 880 could be of a variety of backup power source technology types, and could comprise batteries, capacitors, capacitor arrays, flywheels, dynamos, piezoelectric generators, solar cells, thermoelectric generators, or other power sources, including combinations thereof.
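The power-status reporting path described above, with power control system 870 providing status information to the interface system, storage processing system, and memory subsystems, can be sketched as a simple publish/subscribe model. All class names, fields, and the joule-based capacity figure below are illustrative assumptions, not details from the patent.

```python
# Hypothetical model: power control system 870 monitors primary and backup
# power and reports status to registered elements over links 866 and 867.

class PowerControlSystem:
    def __init__(self, backup_capacity_joules):
        self.backup_remaining = backup_capacity_joules
        self.primary_ok = True
        self.subscribers = []          # elements that receive power status

    def register(self, element):
        self.subscribers.append(element)

    def report_status(self):
        # Push the current power status to every registered element.
        status = {
            "primary_ok": self.primary_ok,
            "backup_remaining_j": self.backup_remaining,
        }
        for element in self.subscribers:
            element.on_power_status(status)
        return status

class Element:
    def __init__(self, name):
        self.name = name
        self.last_status = None

    def on_power_status(self, status):
        self.last_status = status      # e.g. begin soft power down if needed

pcs = PowerControlSystem(backup_capacity_joules=50.0)
iface = Element("interface_system_810")
pcs.register(iface)
pcs.primary_ok = False                 # primary power interrupted
status = pcs.report_status()
```

In a real device this reporting would run over power links or a management bus such as SMBus or I2C, as the text notes.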
  • FIG. 9 is a sequence diagram illustrating a method of operation of storage system 800 .
  • Host system 850 transfers a write instruction and associated data to be written by flash storage device 801 to interface system 810.
  • The write instruction typically includes command information associated with the write instruction as well as a storage location indicator, such as a storage address at which to store the data.
  • The write instruction and data are transferred over aggregated SAS link 860 in this example, and received by the SAS target portion of interface system 810.
  • Interface system 810 could optionally provide an acknowledge message to host system 850 in response to successfully receiving the write instruction and associated data.
  • Interface system 810 then transfers the write instruction and data to storage processing system 820 over PCIe interface 862 .
  • Interface system 810 modifies the write instruction or data into a different communication format and protocol for transfer over interface 862, which could include generating a new write command and associating the data with the new write command for transfer over interface 862 for receipt by storage processing system 820.
  • Storage processing system 820 receives the write command and data and issues a ‘write complete’ message back to interface system 810 .
  • Interface system 810 then transfers the write complete message or associated information for receipt by host system 850 .
  • The write complete message indicates that host system 850 is free to initiate further commands or end the write process associated with the write command described above. In some examples, the write complete message is associated with command queuing.
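The write sequence above, a host write with an address, an optional acknowledge, forwarding to the storage processing system, and a 'write complete' message back, can be sketched as a minimal model. Class names and return strings are illustrative assumptions.

```python
# Illustrative sketch of the FIG. 9 write path; not an actual implementation.

class StorageProcessingSystem:
    def __init__(self):
        self.pending = []              # writes received but not yet committed

    def handle_write(self, address, data):
        self.pending.append((address, data))
        return "write complete"        # issued back to the interface system

class InterfaceSystem:
    def __init__(self, sps):
        self.sps = sps

    def receive_write(self, address, data):
        ack = "ack"                    # optional acknowledge to the host
        # Reformat and forward the write over the PCIe-side interface.
        complete = self.sps.handle_write(address, data)
        return ack, complete

sps = StorageProcessingSystem()
iface = InterfaceSystem(sps)
ack, complete = iface.receive_write(0x1000, b"payload")
```

Note that, as in the text, 'write complete' here means the device has accepted the write, not that the data has yet reached the flash memory arrays.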
  • Next, primary power to flash storage device 801 is interrupted.
  • The point at which power is interrupted in this example is after interface system 810 receives the write instruction from host system 850 and transfers any optional associated acknowledge message to host system 850 for the write instruction, but before the data is written to ones of flash memory arrays 832.
  • Power control system 870 detects the power interruption and provides backup power from backup power source 880 to elements of device 801.
  • In some examples, backup power source 880 provides the backup power in a redundant manner with any primary power. If primary power is interrupted, backup power source 880 could apply backup power immediately or simultaneously so flash storage device 801 experiences no interruption in power supply.
  • Power control system 870 also transfers a power loss indicator to ones of storage processing system 820, interface system 810, and memory subsystems 830. Further power information could be transferred, such as power source profiles, power down instructions, backup power technology type identifiers, remaining backup power levels, or other information.
  • In response, interface system 810, storage processing system 820, and memory subsystems 830 enter into a soft power down mode.
  • In the soft power down mode, interface system 810 caches pending write instructions along with associated data as power queued data 845.
  • The cached write instructions and data are ones that have not been committed to ones of memory subsystems 830.
  • Power queued data 845 could be stored in a non-transitory computer-readable medium, such as a flash memory, SRAM, or other non-volatile memory. This non-volatile memory could be included in interface system 810 , or external to interface system 810 .
  • Also in the soft power down mode, storage processing system 820 could commit pending writes within storage processing system 820 to ones of memory subsystems 830.
  • Storage processing system 820 could transfer any pending read data to interface system 810, and interface system 810 would cache any read instructions and associated data not yet provided to host system 850 in power queued data 845. Also, in the soft power down mode, storage processing system 820 commits any storage allocation information to non-volatile memory.
  • The non-volatile memory could be the same memory that includes power queued data 845.
  • Interface system 810 and storage processing system 820 communicate to coordinate which instructions and data will be completed or committed, and which will be cached before power loss.
  • A predetermined power down sequence could be employed for the soft power down operations, or the soft power down process could be dependent upon the quantity of pending transactions and available backup power. For example, the amount of time within which the soft power down activities must occur depends upon many factors, such as remaining backup power, a quantity of pending transactions, or other factors.
  • Storage processing system 820 could determine a threshold quantity of instructions to complete based on remaining backup power indicators as provided by power control system 870 , and any instructions exceeding the threshold number would be cached. Furthermore, during a power loss, pending read instructions may be inhibited from transfer over host interface 860 , as host system 850 may also be without power. In some examples, incoming write data can be marked or flagged as critical data by a host system, and such data could be committed ahead of other non-critical data to ones of memory subsystem 830 , and the non-critical data would be cached as power queued data 845 .
  • Power control system 870 could receive status indicators from interface system 810 , storage processing system 820 , or memory subsystems 830 which indicate a state of soft power down sequencing, such as if all pending transactions and storage allocation information have been committed or cached. Power control system 870 powers down elements of flash storage device 801 in response to these status indicators, such as powering down ones of memory subsystems 830 when all write data or storage allocation information has been committed, powering down interface system 810 when all remaining pending transactions have been cached, and powering down storage processing system 820 when storage allocation information has been committed. It should be understood that other variations on power down sequencing could occur.
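The soft power-down policy described above, committing a threshold quantity of writes based on remaining backup power, with writes flagged as critical committed ahead of non-critical data and the remainder cached as power queued data, might look like the following sketch. The per-operation energy costs are invented numbers for illustration only.

```python
# Illustrative soft power-down policy; energy figures are assumptions.
ENERGY_PER_COMMIT = 2.0   # assumed energy to commit one write to flash
ENERGY_PER_CACHE = 0.5    # assumed energy to cache one write in low-power memory

def soft_power_down(pending_writes, backup_energy):
    # Commit writes flagged critical ahead of non-critical data.
    ordered = sorted(pending_writes, key=lambda w: not w["critical"])
    committed, cached = [], []
    for write in ordered:
        # Only commit if enough energy would remain to cache the rest.
        remaining_after = backup_energy - ENERGY_PER_COMMIT
        still_to_handle = len(pending_writes) - len(committed) - len(cached) - 1
        if remaining_after >= still_to_handle * ENERGY_PER_CACHE:
            committed.append(write)
            backup_energy -= ENERGY_PER_COMMIT
        else:
            cached.append(write)       # becomes power queued data 845
            backup_energy -= ENERGY_PER_CACHE
    return committed, cached

writes = [{"id": 1, "critical": False}, {"id": 2, "critical": True},
          {"id": 3, "critical": False}]
committed, cached = soft_power_down(writes, backup_energy=4.0)
```

With only enough backup energy for one commit, the critical write (id 2) is committed and the two non-critical writes are cached, mirroring the prioritization the text describes.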
  • Next, primary power resumes.
  • Primary power could resume while flash storage device 801 is still receiving backup power, and no interruption in the operation of flash storage device 801 may occur.
  • Power control system 870 applies power to the various elements of flash storage device 801 in response to primary power resuming.
  • Interface system 810 then retrieves cached transactions and data from power queued data 845 and executes these transactions. For example, pending and cached writes could be committed to ones of memory subsystems 830, and pending reads could be performed and associated data returned to host system 850.
  • Likewise, storage processing system 820 could read the committed storage allocation information from the associated ones of memory subsystems 830 and transfer this information to a volatile memory location, such as DRAM or a buffer.
  • FIG. 10 includes graphs 1010 - 1031 illustrating example power down curves.
  • Backup power source 880 of FIG. 8 could include a variety of backup power source technology types, as discussed above.
  • Each power source type or technology typically has an associated power output and power profile, which depends highly on the technology and elements employed in the type of backup power source or technology type.
  • Each graph in the top portion of FIG. 10, namely graphs 1010, 1020, and 1030, includes a horizontal time axis and a vertical power output axis.
  • The power output axis relates to the power output of a type of power source, and is related to the passage of time along the horizontal time axis.
  • In each graph in the bottom portion of FIG. 10, the power draw axis relates to a forced power draw of flash storage device 801, and is related to the passage of time along the horizontal time axis.
  • The power draw profile could be pre-programmed into power control system 870 according to the power source type. In other examples, the power draw profile could be programmable over an external configuration interface.
  • Graph 1010 indicates the typical power output of a battery-based power source, graph 1020 indicates the typical power output of a capacitor-based power source, and graph 1030 indicates the typical power output of a flywheel-based power source.
  • Graph 1011 indicates the forced power draw of flash storage device 801 when using a backup power source employing a battery-based power source.
  • Graph 1021 indicates the forced power draw of flash storage device 801 when using a backup power source employing a capacitor or array of capacitors.
  • Graph 1031 indicates the forced power draw of flash storage device 801 when using a backup power source employing a flywheel.
  • The forced power draw includes an artificially induced power draw, or associated current draw, for flash storage device 801 when powered by backup power source 880.
  • Such a power draw could be forced by power control system 870.
  • Power control system 870 could control various parameters of operation of flash storage system 801 to match the power draw of flash storage system 801 to the associated source power output curves.
  • This matching could include powering down various elements of flash storage device 801 in a sequence which reduces power draw according to the typical power output indicated by any of graphs 1010 , 1020 , or 1030 .
  • This matching could include ramping down clock speeds or clock frequencies of various elements of flash storage device 801 to induce a power draw matching that of any of graphs 1010 , 1020 , or 1030 .
  • In some examples, powering down ones of interface system 810, storage processing system 820, or memory subsystems 830 is performed in a predetermined sequence in accordance with the power output curve associated with the backup power source type.
  • In other examples, power control system 870 instructs elements of device 801 to throttle the various interfaces and elements, such as memory elements.
  • Interface speeds, or a speed of interface transactions, can be correlated to a power source type, based on the power consumption of the associated circuit and logic elements. For example, with battery power sources, lower power draw correlates to more available energy, so elements of flash storage device 801 can be throttled down in response to a primary power interruption. With flywheel power sources, power down completion time needs to be minimized to ensure maximum energy can be drawn from the flywheel, so elements of flash storage device 801 are throttled up to induce a high power draw for a shorter amount of time.
  • With capacitor-based power sources, the throttling should be proportional to the voltage of the capacitor or capacitor array, so that when the capacitor has a high voltage, the elements of flash storage device 801 are throttled up to induce a high power draw, and as the voltage of the capacitor drops, the throttling down increases proportionally.
  • Data integrity can be better maintained when a power down sequence as described herein allows maximum use of backup power when maximal power is available from a backup power source, and minimal use of backup power when minimal backup power output is available.
  • For example, critical operations could be committed during the times when maximum backup power is available, and less critical operations could be performed during times when minimal backup power is available.
  • As another example, memory devices involved in storing power queued data 845 could include lower power draw elements as compared to memory subsystems 830, and thus pending transactions could preferably be cached by interface system 810 rather than committed to the relatively high-power-draw memory subsystems 830.
  • Furthermore, by intelligently ramping down power draw according to the specific backup power source technology or type, smaller backup power sources could be employed, as the power draw is more closely tailored to such sources.
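The source-specific throttling above, throttling down for batteries, throttling up for flywheels, and tracking capacitor voltage proportionally, can be sketched as a small policy function. The specific scale factors are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of matching device power draw to the backup source type.

def throttle_level(source_type, capacitor_voltage=None, max_voltage=None):
    """Return a 0.0-1.0 performance level for the device on backup power."""
    if source_type == "battery":
        return 0.25    # throttle down: lower draw leaves more usable energy
    if source_type == "flywheel":
        return 1.0     # throttle up: finish work before the flywheel spins down
    if source_type == "capacitor":
        # Draw proportional to remaining capacitor voltage, clamped to [0, 1].
        return max(0.0, min(1.0, capacitor_voltage / max_voltage))
    raise ValueError("unknown backup power source type")

throttle_level("flywheel")              # high draw for a short time
throttle_level("capacitor", 6.0, 12.0)  # draw tracks the capacitor voltage
```

Power control system 870 could apply such a level by sequencing element power-downs or ramping clock frequencies, as the text describes.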
  • In further examples, the cost of a primary source of energy, rather than a backup power source, is considered when throttling the various elements of device 801.
  • During peak energy hours, energy costs may be higher, and during non-peak energy hours, energy costs may be lower.
  • When energy costs are high, the elements of device 801 could be throttled to a lower performance operation, such as by slowing memory interfaces or slowing a processor clock speed, among other performance throttling modifications.
  • When energy costs are low, the elements of device 801 could be allowed to operate at a maximum or higher level of performance with no throttling applied.
  • FIG. 11 is a system diagram illustrating storage system 1100 .
  • Storage system 1100 includes storage processing system 820 of flash storage system 801 , discrete logic 1120 , selection circuit 1122 , resistors 1124 - 1128 , voltage regulator 1130 , and capacitor 1140 .
  • The elements of FIG. 11 are employed as clock rate optimization circuitry in a clock frequency controlling scheme, where the clock rate or clock frequency for a clock system or clock generator circuit associated with storage processing system 820 of flash storage device 801 is varied based on a utilization of processing portions of storage processing system 820.
  • The elements of FIG. 11 could be included in flash storage device 801, such as in a common enclosure or on common printed circuit boards.
  • Discrete logic 1120 includes communication logic to interpret indicators transferred by utilization monitor 1112 , such as logic elements, communication interfaces, processing systems, or other circuit elements.
  • Selection circuit 1122 includes solid-state switching elements, such as transistors, transmission gates, or other selection logic to select one of resistors 1124 - 1128 and connect the selected resistor to V out pin 1132 of voltage regulator 1130 .
  • Resistors 1124 - 1128 include resistors, or could include active resistor elements, such as temperature-dependent resistors, voltage or current controlled transistors, or other resistor-like elements.
  • Voltage regulator 1130 includes voltage regulation circuitry to provide power at a predetermined voltage at V out pin 1132 based on varying voltages applied to voltage regulator 1130 .
  • Capacitor 1140 includes capacitor circuit elements or arrays of capacitors.
  • Links 1150 - 1155 include circuit traces, discrete wires, optical links, or other media to communicate indicators, voltages, currents, clock speeds, or power between the various elements of FIG. 11 .
  • Storage processing system 820 includes utilization monitor 1112.
  • Utilization monitor 1112 could include a software process executed by storage processing system 820 which monitors various parameters of utilization to determine a utilization indicator.
  • The various parameters of utilization could include data throughput, processor utilization, memory usage, instruction load, power draw, active processes, active transactions, or other parameters.
  • Utilization monitor 1112 provides an indicator of utilization over link 1150 to discrete logic 1120 .
  • The indicator could be a voltage level proportional to utilization, a multi-level digitally encoded indicator, or a binary indicator, among other indicators.
  • When the utilization of storage processing system 820 is low, such as during idle states or low throughput states, the indicator could remain in an inactive condition. However, in response to a higher utilization, such as a non-idle state or high throughput state, the indicator could transition to an active condition. Other indicator states could be employed, such as a proportional indicator that varies according to utilization levels.
  • Discrete logic 1120 then communicates with selection circuit 1122 to select one of resistors 1124 - 1128 .
  • Selection circuit 1122 could include a transistor switch or other switching elements. The particular one of resistors 1124 - 1128 which is selected by selection circuit 1122 controls the resistance applied to V out pin 1132 of voltage regulator 1130 , allowing for adjustment in the output supply of voltage regulator 1130 .
  • In this example, resistor 1128 is the active resistor 1160, and could correspond to a low utilization, and thus a low associated output voltage of voltage regulator 1130.
  • For higher utilization levels, resistor 1124 or 1126 could be selected, resulting in a correspondingly higher output voltage of voltage regulator 1130.
  • Capacitor 1140 conditions the output voltage of voltage regulator 1130 to reduce ripple, noise, transition glitches, and provide a smooth voltage to Vin pin 1144 of flash storage device 801 .
  • Vin pin 1144 controls a clock frequency applied to storage processing system 820 , where a clock generation portion of flash storage device 801 or storage processing system 820 determines a clock rate or clock frequency proportionally to the voltage applied to Vin pin 1144 .
  • As the voltage applied to Vin pin 1144 increases, the clock frequency is increased in speed, and as the applied voltage decreases, the clock frequency is decreased in speed.
  • Although the applied voltage corresponds to a clock frequency in this example, in other examples the applied voltage could correspond to a core voltage of semiconductor portions of flash storage device 801, where reduced core voltages correspond to reduced utilization levels, and vice versa.
  • In further examples, external clock generation circuits have an output clock rate or frequency modified based on the utilization level discussed herein.
  • Thresholds could be employed for the various utilization levels. For example, when utilization is below a first threshold, the clock speed is adjusted to a first speed via a first voltage level applied to Vin pin 1144. When utilization is between the first threshold and a second, higher threshold, the clock speed is adjusted to a second speed, higher than the first, via a second voltage level applied to Vin pin 1144. When utilization is higher than the second threshold, the clock speed is adjusted to a third speed, higher than the second, via a third voltage level applied to Vin pin 1144.
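The two-threshold scheme above can be expressed as a simple mapping from utilization to clock speed. The threshold values and speeds below are invented for illustration; the patent specifies only their relative ordering.

```python
# Sketch of the three-tier utilization-to-clock-speed mapping.

def clock_speed_mhz(utilization, t1=0.3, t2=0.7):
    """Map a 0.0-1.0 utilization level to one of three clock speeds."""
    if utilization < t1:
        return 200       # first (lowest) speed via the first Vin voltage
    elif utilization < t2:
        return 400       # second speed via the second Vin voltage
    else:
        return 800       # third (highest) speed via the third Vin voltage

clock_speed_mhz(0.1)     # low utilization selects the lowest clock speed
```

In the FIG. 11 circuit this mapping is realized in hardware: the utilization indicator selects a resistor, which sets the regulator output voltage, which in turn sets the clock frequency.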
  • In some examples, the output of utilization monitor 1112 is used to control link aggregation of a front-end interface, such as a host interface.
  • The amount of aggregation can be proportional to the utilization of processing system 820.
  • When utilization is low, the number of links aggregated into the host interface can be reduced, possibly to one physical link.
  • As utilization rises, an additional physical link can be aggregated into the host interface. Further utilization thresholds could increase further amounts of aggregated physical links.
  • The utilization level information detected by utilization monitor 1112 could be provided to a front-end interface system for responsively controlling the amount of link aggregation.
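The utilization-driven link aggregation above can be sketched as a function from utilization to a link count. The four-link total matches the aggregated SAS interface of FIG. 8; the linear mapping itself is an illustrative assumption.

```python
# Sketch: number of physical SAS links aggregated into the host interface,
# proportional to utilization, with at least one link always active.

def aggregated_links(utilization, total_links=4):
    links = 1 + int(utilization * (total_links - 1))
    return min(links, total_links)

aggregated_links(0.0)   # idle: a single physical link
aggregated_links(1.0)   # full utilization: all four links aggregated
```

A threshold-based step function, as in the clock-speed example, would serve equally well; the key property is that aggregation grows with utilization.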
  • In further examples, other elements of flash storage device 801 could have a clock speed or operation rate modified as done for storage processing system 820 above.
  • For example, memory subsystems 830 could each be throttled or have a modified clock speed according to utilization monitor 1112.
  • In this manner, performance, such as transaction speed or clock speed, of all elements of flash storage device 801 could be actively and dynamically scaled according to the read/write demand of the host system.
  • FIG. 12 is a system diagram illustrating storage system 1200 .
  • Storage system 1200 includes solid state memory device 1210 and configuration system 1240 .
  • Solid state memory device 1210 could be an example of devices 101 , 401 , 701 , and 801 , although devices 101 , 401 , 701 , and 801 could use other configurations.
  • Solid state memory device 1210 includes interface 1212 and optional external configuration pins P1-P4.
  • Interface 1212 comprises a configuration interface for communicating configuration information to configuration system 1240 over configuration link 1230 and receiving configuration instructions from configuration system 1240 over configuration link 1230 .
  • Configuration link 1230 could include further systems, networks, links, routers, switches, or other communication equipment.
  • In some examples, configuration link 1230 is provided over a front-end interface, such as the various host interfaces described herein.
  • Configuration system 1240 comprises a computer system, processing system, network system, user terminal, remote terminal, web interface, or other configuration system.
  • Configuration system 1240 includes configuration user interface 1250 , which allows a user of configuration system 1240 to create and transfer configuration instructions to solid state memory device 1210 .
  • Configuration user interface 1250 also can present a graphical or text-based user interface to a user for displaying a present configuration or configuration options to the user.
  • The configuration of solid state memory device 1210 can be modified using configuration user interface 1250 or the optional configuration pins P1-P4.
  • Pins P1-P4 can be spanned by removable jumper 1220 or multiple jumpers.
  • A user can alter a configuration of solid state memory device 1210 by bridging various ones of pins P1-P4.
  • Pins P 1 -P 4 interface with logic or circuitry internal to solid state memory device 1210 , such as programmable logic which triggers a script or software routine to responsively configure firmware or software managing the various elements of solid state memory system 1210 .
  • A user can alter and view configurations of solid state memory device 1210 through configuration user interface 1250.
  • The function of pins P1-P4 can also be altered by configuration user interface 1250, so that commonly used functions could be easily selected by a jumper or jumpers.
  • Likewise, a factory-set configuration of pins P1-P4 could be altered by configuration user interface 1250.
  • Although four pins are shown in FIG. 12, it should be understood that a different number of configuration pins could be employed. Instead of configuration pins, micro-switches could be employed.
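Jumper-based configuration as described above can be sketched as reading a bitmask from the bridged pins and selecting a predetermined configuration. The particular pin-to-configuration mapping below is hypothetical; the patent does not specify one.

```python
# Illustrative sketch: bridged pins P1-P4 form a bitmask that selects among
# predetermined configurations. Mapping values are assumptions.

CONFIGS = {
    0b0000: {"read_only": False, "raid": None},      # default configuration
    0b0001: {"read_only": True,  "raid": None},      # read-only mode
    0b0010: {"read_only": False, "raid": "RAID1"},   # mirrored volumes
}

def read_config(bridged_pins):
    """bridged_pins: set of pin names, e.g. {'P1', 'P3'}."""
    mask = 0
    for bit, pin in enumerate(["P1", "P2", "P3", "P4"]):
        if pin in bridged_pins:
            mask |= 1 << bit
    # Unrecognized combinations fall back to the default configuration.
    return CONFIGS.get(mask, CONFIGS[0b0000])

read_config({"P1"})   # selects the read-only configuration
```

Internal programmable logic would then trigger a script or firmware routine to apply the selected configuration, as the text describes.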
  • A user can also physically change a configuration by altering the physically installed size or number of memory devices.
  • The front-end or host interface could be altered, such as by changing a link aggregation configuration, an interface speed, or other parameters of a host interface.
  • A capacity of solid state memory device 1210 could be altered, so as to limit a capacity or select from among various potential capacities.
  • Various performance parameters could be altered. For example, a thermal shut off feature could be altered or enabled/disabled to disable device 1210 or portions thereof according to temperature thresholds.
  • A read-only status could be enabled/disabled, or selectively applied to subdivisions of the total storage capacity, such as different volumes.
  • A redundancy scheme could also be selected, such as a redundant array of independent disks (RAID) configuration.
  • Various solid state media of device 1210 could be subdivided to create separate RAID volumes or redundant volumes. Striping among various memory subsystems could also be employed.
  • Encryption configurations could also be applied, such as encryption schemes, passwords, encryption keys, or other encryption configurations for data stored within device 1210 . Encryption keys could be transferred to device 1210 over interface 1212 .
  • Compression schemes could also be applied to data read from and written to the various memory subsystems, and the compression schemes could be selected among over the various configuration or jumper interfaces, or a compression scheme could be uploaded via the configuration interfaces.
  • Link aggregation could also be altered by the configuration elements described in FIG. 12 .
  • For example, a number of SAS links could be configured to be aggregated into a single logical link, or separated into separate links, and associated with various volumes.
  • An associated storage processing system could be configured to selectively merge or separate ones of the physical links into the aggregated multi-channel interface based on instructions received over the configuration interfaces.
  • A physical drive is an actual tangible unit of hardware of a disk, solid state, tape, or other storage drive.
  • A logical drive typically describes a part of a physical disk or physical storage device that has been partitioned and allocated as an independent unit, and functions as a separate drive to the host system.
  • For example, one physical drive could be partitioned into logical drives F:, G:, and H:, each letter representing a separate logical drive but all logical drives still part of the one physical drive.
  • Using logical drives is one method of organizing large units of memory capacity into smaller units.
  • A virtual drive is typically an abstraction, such as by spanning, of multiple physical drives or logical drives to represent a single larger drive to a host system.
  • In the examples herein, the various solid state memory subsystems are physical drives that are merged together into one enclosure to create a virtual drive configured and accessed by the associated storage processing system, such as storage processing system 820, among others.
  • The physical drives can have logical volumes associated therewith, and the virtual drives can also have logical volumes associated therewith.
  • the associated storage processing system binds the virtual drive(s) and associated memory subsystems to target ports on the associated interface system, such as interface system 810 . Target and initiator ports on the interface system are configured and controlled by the storage processing system.
  • the virtual drives that have been bound to the target ports are then presented to external systems, such as a host system, as a physical drive.
  • configuration elements of solid state memory device 1210 could also alter how solid state memory device 1210 appears to a host system.
  • Virtual subdivisions of the available storage space of device 1210 could be configured, where the configuration indicates a quantity and arrangement of virtual subdivisions.
  • These virtual subdivisions could present a plurality of virtual drive volumes to a host system.
  • These virtual volumes could be provided over various ones of front-end or host interface links, such as ones of SAS links providing an associated volume.
  • A single device 1210 could appear to a host system as several separate ‘physical’ drives over a single host interface or over a plurality of host links comprising a host interface.
  • Each of the SAS links in host interface 460 could be configured to correspond to a separate virtual drive, and each virtual drive could then be presented to host system 450 or to multiple host systems as separate volumes or drives over separate links.
  • The various virtual drives could each have different configurations, such as size, capacity, performance, redundancy, or other parameters.
  • Configuration pins P 1 -P 4 could be employed to select among predetermined volume or drive configurations. For example, first ones of pins P 1 -P 4 could select a first virtual volume and host interface configuration, and second ones of pins P 1 -P 4 could select a second virtual volume and host interface configuration. If multiple virtual drives are employed, then individual ones of the virtual drives could be associated with individual ones of host interface links. The configuration pins P 1 -P 4 could select among these interface and volume configurations. For example, in FIG. 8 , a first plurality of links 860 could correspond to a first virtual drive, and a second plurality of links 860 could correspond to a second virtual drive.
  • Host system 850 would thus see flash storage device 801 as two separate drives, with each drive using ones of links 860.
  • The virtual drives could each span all of memory subsystems 830, or could be apportioned across ones of memory subsystems 830, and consequently managed by storage processing system 820.
  • The pins or user configuration interface could also alter identification parameters of a solid state device, such as addressing parameters, SCSI addresses or identifiers, among other identification parameters.
  • Data parallelization could be performed for each volume, where data received for a first volume is parallelized among memory subsystems associated with the first volume, and data received for a second volume is parallelized among memory subsystems associated with the second volume.
  • The various volumes need not comprise exclusive memory subsystems, as the associated storage processor could determine and maintain storage allocation information to mix parallelization of data among all memory subsystems while maintaining separate volume allocations across the various memory subsystems.
  • All memory subsystems could be employed for storage among multiple logical volumes, and the amount of storage allocated to each volume could be dynamically adjusted by modifying the storage allocation information accordingly.
  • Configuration user interface 1250 or pins P 1 -P 4 could be used to adjust these volume allocations and configurations.
  • A processing system, such as a storage processing system, could apply the configurations to device 1210.
  • Associated firmware could be modified, updated, or executed using configurations of jumpers P 1 -P 4 or configuration user interface 1250 .
  • The processing system, such as processing system 120, 420, 713, or 820, can perform other functions, such as orchestrating communication between front-end interfaces and back-end interfaces, managing link aggregation, parallelizing data, determining addressing associated with parallelized data portions or segments, performing data integrity checks such as error checking and correction, buffering data, and optimizing parallelized data portion size to maximize performance of the systems discussed herein. Other operations as discussed herein could be performed by associated processing systems.
  • Front-end interface systems, such as interface systems 110, 410, 712, or 810, can provide performance functions such as link aggregation, error detection and correction, I/O processing, buffering, or other performance off-loading for features of host interfaces.
  • FIG. 13 includes side view diagrams illustrating storage system 1301 .
  • the diagrams illustrated in FIG. 13 are intended to illustrate the mechanical design and structure of storage system 1301 .
  • The upper diagram illustrates a side-view of an assembled storage system, whereas the lower diagram illustrates a cutaway/simplified view including a reduced number of elements of storage system 1301 to emphasize thermal design elements.
  • Storage system 1301 is an example of devices 101 , 401 , 701 , 801 , or 1210 , although devices 101 , 401 , 701 , 801 , or 1210 could use other configurations.
  • Storage system 1301 includes chassis 1310 - 1312 which provides structural support and mountings for mating the various printed circuit boards (PCB) of storage system 1301 .
  • Storage system 1301 includes four PCBs, namely PCBs 1320 - 1326.
  • Each PCB has a plurality of integrated circuit chips (ICs) disposed thereon.
  • The ICs could be attached by solder and/or adhesive to the associated PCB.
  • The ICs are arranged on both sides of many of the PCBs, but are not disposed on an outside surface of PCBs 1320 and 1326.
  • Some outer surfaces of storage system 1301 are formed from surfaces of PCBs 1320 and 1326.
  • Chassis elements 1310 - 1312 form the left/right outer surfaces.
  • PCBs 1320 and 1326 form the top/bottom outer surfaces. Since this view is a side view, the ends projecting into the diagram and out of the diagram could be formed with further structural elements, such as chassis elements, end caps, connectors, or other elements.
  • The outer surfaces of PCBs 1320 and 1326 could be coated with a non-conductive or protective coating, such as paint, solder mask, decals, stickers, or other coatings or layers.
  • Chassis 1310 - 1312 are structural elements configured to mate with and hold the plurality of PCBs, where the chassis structural elements and the PCBs are assembled to comprise an enclosure for storage system 1301.
  • A tongue-and-groove style of configuration could be employed, such as slots or grooves to hold the edges of the plurality of PCBs.
  • An outer surface of a PCB comprises a first outer surface of the enclosure and an outer surface of a second PCB comprises a second outer surface of the enclosure.
  • FIG. 13 shows a simplified and cut-away side view of some elements of storage system 1301 .
  • Chassis 1310 and PCB 1322 are included to emphasize thermal management features.
  • Internal to storage system 1301 are high-power components, namely components which use a relatively large amount of power and thus become hot during operation. Due to the high-density mechanical design of storage system 1301, it is desirable to channel heat from various hot ICs to outside enclosure surfaces for radiation and subsequent cooling. These hot ICs may not have immediate access to outside surfaces, and may be disposed in centralized locations.
  • A high-power IC is disposed on one surface of PCB 1322, namely IC 1350.
  • This IC could include a processor or other high-density and high-power utilization integrated circuit, such as processing system 120 , 420 , 713 , or 820 , or chip-scale device 710 .
  • Other ICs could be configured in this manner as well.
  • Heat spreader 1360 is thermally bonded to IC 1350 , possibly with heat sink compound, thermally conductive adhesive, or with fasteners connected to PCB 1322 , among other thermal bonding techniques to maximize heat transfer from IC 1350 to heat spreader 1360 .
  • Heat spreader 1360 also overhangs IC 1350 and is further thermally bonded to a low thermal resistance interface 1362 .
  • Heat spreader 1360 and interface 1362 could be thermally bonded similarly to heat spreader 1360 and IC 1350.
  • Low thermal resistance interface 1362 is then thermally bonded to chassis 1310 , possibly through a groove or slot in chassis 1310 .
  • Chassis 1310 and 1312 comprise thermally conductive materials, such as metal, ceramic, plastic, or other material, and are able to sink heat away from high-power ICs, such as high-power IC 1350.
  • Heat spreader 1360 comprises any material that efficiently transports heat from a hot location to a cooler location, such as by heat dissipation, conduction, or other heat transfer techniques.
  • Heat spreader 1360 could comprise a metal composition, such as copper.
  • In other examples, graphite or planar heat pipes are employed.
  • Interface 1362 comprises any material with a high thermal conductivity and transports heat to chassis 1310 by any physical means, such as conduction, convection, advection, radiation, or a combination thereof.
  • Interface 1362 could comprise metal compositions, heat pipes, graphite, or other materials, including combinations thereof.
  • Thermal conductivity anisotropy is aligned for heat spreader 1360 and interface 1362 such that the thermal resistance minimum is aligned with the direction of optimal heat flow.
  • In other examples, heat spreader 1360 is elongated and allowed to thermally contact chassis 1310 directly, and interface 1362 could be omitted.
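The pin-selected volume configurations described in the bullets above can be sketched as a lookup from pin states to predetermined layouts. The specific pin encodings, parameter names, and layout values below are illustrative assumptions; the description does not specify concrete configurations.

```python
# Sketch: selecting a predetermined virtual volume / host interface
# configuration from the states of configuration pins P1-P4.
# Pin encodings and layout parameters are illustrative assumptions.
PRESET_CONFIGS = {
    (0, 0, 0, 0): {"virtual_drives": 1, "links_per_drive": 4},
    (0, 0, 0, 1): {"virtual_drives": 2, "links_per_drive": 2},
    (0, 0, 1, 0): {"virtual_drives": 4, "links_per_drive": 1},
}

def select_configuration(p1, p2, p3, p4):
    """Return the volume/interface layout selected by pins P1-P4."""
    try:
        return PRESET_CONFIGS[(p1, p2, p3, p4)]
    except KeyError:
        # Unrecognized pin states fall back to a single-volume default.
        return PRESET_CONFIGS[(0, 0, 0, 0)]
```

In this sketch, a pin pattern of `(0, 0, 0, 1)` would present two virtual drives, each over two host links, mirroring the example where first and second pluralities of links 860 correspond to first and second virtual drives.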

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Computer Hardware Design (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Systems (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A solid state storage device includes an interface system configured to communicate with an external host system over an aggregated multi-channel interface to receive data for storage by the solid state storage device. The solid state storage device also includes a storage processing system configured to communicate with the interface system to receive the data, process the data against storage allocation information to parallelize the data among a plurality of solid state memory subsystems, and transfer the parallelized data. The interface system is configured to receive the parallelized data, apportion the parallelized data among the plurality of solid state memory subsystems, and transfer the parallelized data for storage in the plurality of solid state memory subsystems, where each of the plurality of solid state memory subsystems is configured to receive the associated portion of the parallelized data and store the associated portion on a solid state storage medium.

Description

RELATED APPLICATIONS
This patent application is a continuation of U.S. patent application Ser. No. 13/270,084 that was filed on Oct. 10, 2011, and is entitled “SYSTEMS AND METHODS FOR OPTIMIZING DATA STORAGE AMONG A PLURALITY OF SOLID STATE MEMORY SUBSYSTEMS,” which is also related to and claims priority to U.S. Provisional Patent Application No. 61/391,651, entitled “Apparatus and System for Modular Scalable Composite Memory Device,” filed on Oct. 10, 2010. U.S. patent application Ser. No. 13/270,084 and U.S. Provisional Patent Application No. 61/391,651 are hereby incorporated by reference into this patent application.
TECHNICAL FIELD
Aspects of the disclosure are related to the field of computer data storage, and in particular, data storage systems employing solid state storage elements.
TECHNICAL BACKGROUND
Computer systems typically include bulk storage systems, such as magnetic disc drives, optical storage devices, tape drives, or solid state storage drives, among other storage systems. In these computer systems, a host system, such as a network device, server, or end-user computing device, communicates with external bulk storage systems to store data or to access previously stored data. These bulk storage systems are traditionally limited in the number of devices that can be addressed in total, which can be problematic in environments where higher capacity or higher performance is desired.
One such storage technology variety, namely solid state media, typically relies upon non-moving underlying storage medium elements, such as flash memory, phase change memory, magnetoresistive random access memory (MRAM), or other media. Although the solid state memory types can see increased throughput relative to moving disc and tape media, these solid state memory types still have throughput limitations. Also, data access in some solid state media is typically performed in large blocks, such as in NAND flash memory, and the desired data portions must be accessed and parsed by the underlying storage media control elements before subsequent reads or writes can occur. Also, typical solid state memory drives exchange data over a single physical link, which further limits data access flexibility and throughput. However, networked, cloud, and enterprise environments with increasing data storage and retrieval demands find these limitations of solid state memory and associated drive electronics increasingly troublesome.
Overview
In the examples discussed herein, a plurality of solid state memory subsystems are included in a single device and managed by a storage processing system separate from a host system. Each of the plurality of memory subsystems are internally addressed and mapped by the storage processing system in a parallel manner over multiple channels to allow increased capacity, reduced latency, increased throughput, and more robust feature sets than traditional bulk storage systems.
Examples disclosed herein include systems, methods, and software of solid state storage devices. In a first example, a solid state storage device is disclosed. The solid state storage device includes an interface system configured to communicate with an external host system over an aggregated multi-channel interface to receive data for storage by the solid state storage device. The solid state storage device also includes a storage processing system configured to communicate with the interface system to receive the data, process the data against storage allocation information to parallelize the data among a plurality of solid state memory subsystems, and transfer the parallelized data. The interface system is configured to receive the parallelized data, apportion the parallelized data among the plurality of solid state memory subsystems, and transfer the parallelized data for storage in the plurality of solid state memory subsystems, where each of the plurality of solid state memory subsystems is configured to receive the associated portion of the parallelized data and store the associated portion on a solid state storage medium.
In a second example, a method of operating a solid state storage device is disclosed. The method includes, in an interface system, communicating with an external host system over an aggregated multi-channel interface to receive data for storage by the solid state storage device. The method also includes, in a storage processing system, communicating with the interface system to receive the data, processing the data against storage allocation information to parallelize the data among a plurality of solid state memory subsystems, and transferring the parallelized data. The method also includes, in the interface system, receiving the parallelized data, apportioning the parallelized data among the plurality of solid state memory subsystems, and transferring the parallelized data for storage in the plurality of solid state memory subsystems, where each of the plurality of solid state memory subsystems is configured to receive the associated portion of the parallelized data and store the associated portion on a solid state storage medium.
BRIEF DESCRIPTION OF THE DRAWINGS
Many aspects of the disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views. While several embodiments are described in connection with these drawings, the disclosure is not limited to the embodiments disclosed herein. On the contrary, the intent is to cover all alternatives, modifications, and equivalents.
FIG. 1 is a system diagram illustrating a storage system.
FIG. 2 is a block diagram illustrating a solid state storage device.
FIG. 3 is a flow diagram illustrating a method of operation of a solid state storage device.
FIG. 4 is a system diagram illustrating a storage system.
FIG. 5 is a sequence diagram illustrating a method of operation of a solid state storage device.
FIG. 6 is a sequence diagram illustrating a method of operation of a solid state storage device.
FIG. 7 is a system diagram illustrating a storage system.
FIG. 8 is a system diagram illustrating a storage system.
FIG. 9 is a sequence diagram illustrating a method of operation of a solid state storage device.
FIG. 10 includes graphs illustrating example power down curves.
FIG. 11 is a system diagram illustrating a storage system.
FIG. 12 is a system diagram illustrating a storage system.
FIG. 13 includes side view diagrams illustrating a storage system.
DETAILED DESCRIPTION
FIG. 1 is a system diagram illustrating storage system 100. Storage system 100 includes solid state storage device 101 and host system 150. Solid state storage device 101 and host system 150 communicate over link 140. In FIG. 1, host system 150 can transfer data to be stored by solid state storage device 101, such as a ‘write’ transaction where solid state storage device 101 stores associated data on a computer-readable storage medium, namely ones of solid state storage media 132. Host system 150 can also request data previously stored by solid state storage device 101, such as during a ‘read’ transaction, and solid state storage device 101 retrieves associated data from ones of solid state storage media 132 and transfers the requested data to host system 150. Additionally, further transactions than writes and reads could be handled by the elements of solid state storage device 101, such as metadata requests and manipulation, file or folder deletion or moving, volume information requests, file information requests, or other transactions.
Solid state storage device 101 includes interface system 110, storage processing system 120, storage allocation information 125, two memory subsystems 130, and two solid state storage media 132. Links 140-146 each comprise physical, logical, or virtual communication links, capable of communicating data, control signals, instructions, or commands, along with other information. Links 141-146 are configured to communicatively couple the associated elements of solid state storage device 101, whereas link 140 is configured to communicatively couple solid state storage device 101 to external systems, such as host system 150. In some examples, links 141-146 are encapsulated within the elements of solid state storage device 101, and may be software or logical links. Also, in the examples herein, communications exchanged with a host system are typically referred to as ‘front-end’ communications, and communications exchanged with memory subsystems are typically referred to as ‘back-end’ communications.
Interface system 110 includes interface circuitry and processing systems to exchange data for storage and retrieval with host system 150 over link 140, as well as to exchange data for processing by storage processing system 120 over link 141. In typical examples, interface system 110 receives instructions and data from host system 150 over an aggregated link, where multiple physical interfaces each comprising a physical communication layer are bonded to form a combined-bandwidth link. Interface system 110 formats the received instructions and associated data for transfer to a central processing system, such as storage processing system 120. Interface system 110 also formats data and information for transfer to host system 150.
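The aggregated, combined-bandwidth link described above can be sketched as a simple round-robin scheduler that spreads traffic over the bonded physical links. The function name and frame representation are illustrative assumptions, and real SAS wide-port arbitration is more involved than this sketch.

```python
from itertools import cycle

def distribute_frames(frames, num_links):
    """Round-robin frames across the physical links of a bonded interface.

    A minimal sketch of link aggregation: each physical link carries a
    share of the traffic, so the logical link's bandwidth approaches the
    sum of its member links' bandwidths.
    """
    lanes = [[] for _ in range(num_links)]
    link_ids = cycle(range(num_links))
    for frame in frames:
        lanes[next(link_ids)].append(frame)
    return lanes
```

For example, six frames spread over a two-link bonded interface would place frames 0, 2, 4 on the first link and frames 1, 3, 5 on the second.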
Storage processing system 120 includes a processor and non-transitory computer readable memory which includes computer-readable instructions such as firmware. These instructions, when executed by storage processing system 120, instruct storage processing system 120 to operate as described herein. For example, storage processing system 120 could be configured to receive data and instructions transferred by interface system 110, and process the data and instructions to optimize storage and retrieval operations. Write and read instructions and data are processed against storage allocation information 125 to optimize data transfer, such as parallelization, interleaving, portion sizing, portion addressing, or other data transfer optimizations for data storage and retrieval with memory subsystems. Storage processing system 120 and storage allocation information 125 are shown communicatively coupled over link 144, although in other examples, storage allocation information 125 could be included in storage processing system 120 or other circuitry.
Memory subsystems 130 each include circuitry to store and retrieve optimized data portions with solid state storage media 132 over associated links 145-146 and exchange the data with interface system 110 over associated links 142-143. Solid state storage media 132 each include a solid state storage array, such as flash memory, static random-access memory (SRAM), magnetic memory, phase change memory, or other non-transitory, non-volatile storage medium. Although two of memory subsystems 130 and solid state storage media 132 are shown in FIG. 1, it should be understood that a different number could be included.
Links 140-146 each use various communication media, such as air, space, metal, optical fiber, or some other signal propagation path, including combinations thereof. Links 140-146 could each be a direct link or might include various equipment, intermediate components, systems, and networks. Links 140-146 could each be a common link, shared link, aggregated link, or may be comprised of discrete, separate links. Example types of each of links 140-146 could comprise serial attached SCSI (SAS), aggregated SAS, Ethernet, small-computer system interface (SCSI), integrated drive electronics (IDE), Serial AT attachment interface (ATA), parallel ATA, FibreChannel, InfiniBand, Thunderbolt, universal serial bus (USB), FireWire, peripheral component interconnect (PCI), PCI Express (PCIe), communication signaling, or other communication interface types, including combinations or improvements thereof.
The elements of FIG. 1 could be included in a single enclosure, such as a case, with an external connector associated with host interface 140 to communicatively couple the associated elements of solid state storage device 101 to external systems, connectors, cabling and/or transceiver elements. The enclosure could include various printed circuit boards with the components and elements of solid state storage device 101 disposed thereon. Printed circuit traces, flexible printed circuits, or discrete wires could be employed to interconnect the various elements of solid state storage device 101. If multiple printed circuit boards are employed, inter-board connectors or cabling are employed to communicatively couple each printed circuit board.
FIG. 2 includes example embodiments of several elements of solid state storage device 101. Specifically, FIG. 2 includes interface system 110, storage processing system 120, memory subsystem 130, and solid state storage medium 132. It should be understood that the elements of FIG. 2 are merely exemplary, and could include other configurations. Furthermore, elements of FIG. 2 could also be exemplary embodiments of the elements described in FIGS. 3-13. As indicated by the ellipses in FIG. 2, further memory subsystems and solid state storage media could be included, along with associated interfaces.
In FIG. 2, interface system 110 includes host interface 212, input/output (I/O) processing system 214, and high-speed interface 216. Host interface 212, I/O processing system 214, and high-speed interface 216 each communicate over bus 219, although discrete links could be employed. Host interface 212 includes connectors, buffers, transceivers, and other input/output circuitry to communicate with a host system over external device interface 140. External device interface 140 could include multiple physical links aggregated into a single interface. I/O processing system 214 includes a processor and memory for exchanging data between host interface 212 and high-speed interface 216, as well as controlling the various features of host interface 212 and high-speed interface 216. Host interface 212 also communicates with memory subsystem 130 over internal device interface 142 in this example. As shown in FIG. 2, host interface 212 communicates with both an external host system and memory subsystem 130 using a similar communication interface and protocol type. Although external device interface 140 may include an external connector, internal device interface 142 may instead employ circuit traces or internal connectors. External device interface 140 and internal device interface 142 could each comprise an aggregated or non-aggregated serial interface, such as serial-attached SCSI (SAS), although other interfaces could be employed. High-speed interface 216 includes buffers, transceivers, and other input/output circuitry to communicate over internal storage interface 141. Internal storage interface 141 could comprise a multi-lane high-speed serial interface, such as PCI-Express (PCIe), although other interfaces could be employed. Interface system 110 may be distributed or concentrated among multiple elements that together form the elements of interface system 110.
FIG. 2 also includes storage processing system 120, which includes high-speed interface 222, processing system 224, memory 226, and other I/O 228. High-speed interface 222, processing system 224, memory 226, and other I/O 228 each communicate over bus 229, although discrete links could be employed. High-speed interface 222 includes buffers, transceivers, and other input/output circuitry to communicate over internal storage interface 141. Processing system 224 includes a processor and memory for processing data against storage allocation information 125 to determine ones of memory subsystems 130 on which to store the data, to parallelize the data for interleaved storage across ones of the lanes of high-speed interface 222 or multiple memory subsystems, among other operations including reconstructing parallelized data during read operations. Processing system 224 may store data and executable computer-readable processing instructions on memory 226 or within optional non-volatile memory 250, which could include random-access memory (RAM). Other I/O 228 includes other various interfaces for communicating with processing system 224, such as power control interfaces, SMBus interfaces, system configuration and control interfaces, or an interface to communicate with non-volatile memory 250 over link 144. Optional non-volatile memory 250 comprises a non-transitory computer-readable medium, such as static RAM, flash memory, electronically erasable and reprogrammable memory, magnetic memory, phase change memory, optical memory, or other non-volatile memory. Non-volatile memory 250 is shown to include storage allocation information 125, and could include further information. Storage allocation information 125 includes tables, databases, linked lists, trees, or other data structures for indicating where data is stored within and among a plurality of memory subsystems 130.
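A storage allocation table of the kind described above can be sketched as a simple mapping from logical blocks to subsystem locations. The class and method names are illustrative assumptions rather than structures named in the specification, which permits tables, databases, linked lists, trees, or other data structures.

```python
class StorageAllocationTable:
    """Minimal sketch of storage allocation information: a table recording,
    for each logical block, which memory subsystem holds it and at what
    physical address. Names and structure are illustrative assumptions."""

    def __init__(self):
        # logical block number -> (subsystem id, physical block number)
        self._table = {}

    def record(self, logical_block, subsystem, physical_block):
        """Note where a logical block was written during a write operation."""
        self._table[logical_block] = (subsystem, physical_block)

    def locate(self, logical_block):
        """Return (subsystem, physical block) for a previously stored block,
        as consulted during a read operation."""
        return self._table[logical_block]
```

A real implementation would also track metadata, volume information, and logical or virtual drive structure, as the surrounding text notes.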
In some examples, storage allocation information 125 is stored within storage processing system 120, such as within memory 226. Storage processing system 120 may be distributed or concentrated among multiple elements that together form the elements of storage processing system 120.
FIG. 2 also includes memory subsystem 130, which includes target interface 232, processing system 234, and memory interface 236. Target interface 232, processing system 234, and memory interface 236 communicate over bus 239, although discrete links could be employed. Target interface 232 includes buffers, transceivers, and other input/output circuitry to communicate over internal device interface 142. Processing system 234 includes a processor and memory for exchanging data between target interface 232 and memory interface 236, as well as controlling the various features of target interface 232 and memory interface 236. Memory interface 236 includes buffers, transceivers, and other input/output circuitry to communicate with and control solid state storage medium 132. Memory interface 236 could also include memory technology-specific circuitry, such as flash memory electronic erasing and reprogramming circuitry, phase change memory write circuitry, or other circuitry to store data within and read data from solid state storage medium 132 over interface 145. Solid state storage medium 132 comprises memory elements and interface circuitry, where the memory elements could comprise a non-transitory computer-readable medium, such as static RAM, flash memory, electronically erasable and reprogrammable memory, magnetic memory, phase change memory, optical memory, or other non-volatile memory.
In further examples, processing system 234 of memory subsystem 130 comprises an application specific processor used to provide performance off-loading from processing system 120. Some examples of performance off-loading include memory wear-leveling, bad block management, error-detection and correction, and parallel addressing and data channeling to individual solid state media included therein.
FIG. 3 is a flow diagram illustrating a method of operation of solid state storage device 101. The operations of FIG. 3 are referenced herein parenthetically. In FIG. 3, interface system 110 communicates (301) with external host system 150 over aggregated multi-channel interface 140 to receive instructions and data for storage. The data for storage could be included with a ‘write’ instruction or series of commands which indicates data to be written and possibly further information, such as a storage location, storage address, write address, volume information, metadata, or other information associated with the data. Host system 150 transfers the data over link 140, and the data is received by interface system 110 of solid state storage device 101. Interface system 110 could provide an acknowledgment message to host system 150 in response to successfully receiving the write instruction and associated data. In some examples, host interface 212 of FIG. 2 receives the data over link 140 and then transfers the data over bus 219 for processing and formatting by I/O processing system 214, and subsequent transfer by high-speed interface 216 over link 141.
Storage processing system 120 communicates (302) with interface system 110 to receive the data and associated write instruction information, processes the data against storage allocation information 125 to parallelize the data among a plurality of solid state memory subsystems 130, and transfers the parallelized data. Storage processing system 120 receives the data over link 141. In some examples, high-speed interface 222 receives the data and associated information over link 141 and transfers the data and associated information over bus 229. Storage processing system 120 processes the data against storage allocation information 125 to determine which memory subsystems will store the data, and parallelizes the data among several memory subsystems. In this example, two memory subsystems 130 are included, and the data is parallelized among each. The data parallelization could include breaking the data into individual portions for storage on an associated memory subsystem, where the individual portions are then transferred over link 141 by storage processing system 120. The individual portions could be transferred by high-speed interface 222 over link 141. In other examples, the data is interleaved among multiple memory subsystems, such as by striping or mirroring. Storage allocation information 125 typically includes a table, database, tree, or other data structure for indicating where data is stored among multiple memory subsystems as well as other information, such as metadata, file system structure information, volume information, logical drive information, virtual drive information, among other information for storing, retrieving, and handling data stored within solid state storage device 101. 
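The parallelization against a storage allocation structure described above can be sketched as follows. This is a minimal illustration only; the round-robin striping policy, the fixed stripe size, and names such as `stripe_data` are assumptions for clarity, not the claimed implementation:

```python
def stripe_data(data: bytes, num_subsystems: int, stripe_size: int = 4):
    """Break data into fixed-size stripes, assign them round-robin across
    memory subsystems, and record each placement in a simple allocation
    table (stripe order -> (subsystem, slot))."""
    portions = {i: [] for i in range(num_subsystems)}
    allocation = []  # storage allocation information for later retrieval
    for index in range(0, len(data), stripe_size):
        stripe = data[index:index + stripe_size]
        subsystem = (index // stripe_size) % num_subsystems
        slot = len(portions[subsystem])
        portions[subsystem].append(stripe)
        allocation.append((subsystem, slot))
    return portions, allocation
```

With two memory subsystems, as in the example above, eight bytes of write data would be split into two stripes, one destined for each subsystem, with the allocation table recording where each stripe landed.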
Storage processing system 120 could perform other operations on the data, such as read-modify-writes, read-modify-write caching, encryption, encoding, implementing a redundancy scheme, calculating redundancy information, compression, or de-duplication of data during storage and subsequent retrieval, among other operations.
Interface system 110 receives (303) the parallelized data, apportions the parallelized data among the plurality of solid state memory subsystems 130, and transfers the parallelized data for storage by the plurality of solid state memory subsystems 130. The parallelized data is received over link 141 in this example, and subsequently transferred by interface system 110 over ones of links 142-143. In some examples, the data portion is received by target interface 232 and transferred over bus 239. Transferring the data portions over links 142-143 could include initiating a ‘write’ command with each associated memory subsystem 130 for the individual portion of data, and transferring the individual portion of data along with the associated write command to the appropriate memory subsystem 130. Additional data could accompany the parallelized data, such as addressing information, identifiers for the associated memory subsystem, metadata, or other information.
Each of solid state memory subsystems 130 is configured to receive the associated portion of the parallelized data and store the associated portion on associated solid state storage medium 132. Memory interface 236 could transfer the associated portion for storage over links 145-146. Links 145-146 could include multiple links or busses, such as row/column lines, control, address, and data lines, or other configurations. Processing system 234 could instruct memory interface 236 to perform wear-level optimization, bad block handling, write scheduling, write optimization, garbage collection, or other data storage operations.
Although the operations of FIG. 3 discuss a write operation for storage of data by solid state storage device 101, a retrieval or read operation proceeds similarly. In a retrieval operation, host system 150 informs solid state storage device 101 of data desired to be retrieved over link 140, such as via a read instruction. Interface system 110 receives this retrieve instruction, and transfers the read instruction to storage processing system 120. Interface system 110 could provide an acknowledgment message to host system 150 in response to successfully receiving the read instruction. Storage processing system 120 would process the retrieve command against storage allocation information 125 to determine which of memory subsystems 130 have access to the desired data. Storage processing system 120 then issues individual parallel read commands to interface system 110 which subsequently informs associated memory subsystems 130 to retrieve the data portions from associated solid state storage media 132. Interface system 110 may then receive the data portions and transfer them to storage processing system 120 for de-parallelization, merging, or for performing other operations, such as decrypting or de-duplication. The storage allocation information could be processed against the data portions during a read to de-parallelize the data into merged data. Storage processing system 120 then transfers the assembled data for delivery to host system 150 through interface system 110.
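The de-parallelization step on the read path can be sketched in the same spirit. The allocation-table format used here, an ordered list of (subsystem, slot) placements built at write time, is an illustrative assumption:

```python
def merge_portions(allocation, portions):
    """Reassemble parallelized stripes into merged data by walking the
    allocation table in original write order; each entry names the
    memory subsystem and the slot within it holding the next stripe."""
    return b"".join(portions[subsystem][slot] for subsystem, slot in allocation)
```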
FIG. 4 is a system diagram illustrating storage system 400, as an example of elements of storage system 100 found in FIG. 1, although storage system 100 could use other configurations. Storage system 400 includes solid state storage device 401 and host system 450. Solid state storage device 401 and host system 450 communicate over link 460, which is an aggregated serial attached SCSI (SAS) interface in this example. In FIG. 4, host system 450 can transfer data to be stored by solid state storage device 401, such as a ‘write’ operation where solid state storage device 401 stores associated data on a computer-readable storage medium, namely ones of solid state storage arrays 432. Host system 450 can also request data previously stored by solid state storage device 401, such as during a ‘read’ operation, and solid state storage device 401 retrieves associated data and transfers the requested data to host system 450. Solid state storage device 401 includes host interface system 410, storage processing system 420, memory subsystems 430, solid state storage arrays 432, and storage interface system 440.
The SAS interface is employed in this example as a native drive interface, where a native drive interface is typically used by a computer system, such as host 450, for direct access to bulk storage drives. For example, the SAS interface is bootable and does not typically require custom drivers for an operating system to utilize the SAS interface. Link aggregation for host interface 460 can be performed during a configuration process between host 450 and configuration elements of solid state storage device 401, such as firmware elements. In contrast, PCIe interfaces employed internally to solid state storage device 401 are typically non-native drive interfaces, where PCIe is typically not used by a computer system for direct access to bulk storage drives. For example, the PCIe interface does not typically support bootable devices attached thereto, and requires custom device-specific drivers for operating systems to optimally access the associated devices. However, in some examples, instead of a SAS-based front-end host interface, a PCIe or SATA-based front end host interface could be employed.
Host interface system 410 includes interface circuitry to exchange data for storage and retrieval with host system 450 over an aggregated SAS interface, namely link 460. Host interface system 410 includes an SAS target portion to communicate with an SAS initiator portion of host system 450. Link 460 includes an aggregated SAS interface, which could include eight individual SAS links merged into a single logical SAS link, or could include a subset of the eight individual links merged into a logical link. Connector 412 serves as a user-pluggable physical connection point between host system 450 and solid state storage device 401. Link 460 could include cables, wires, or optical links, including combinations thereof. Host interface system 410 also includes a PCI Express (PCIe) interface and associated circuitry. The PCIe interface of host interface 410 communicates over a multi-lane PCIe interface 462 with storage processing system 420. In this example, eight lanes are shown, which could comprise a ‘x8’ PCIe interface, although other configurations and numbers of lanes could be used.
Although in this example, host interface system 410 includes an SAS target portion, in further examples, host interface system 410 could include an SAS initiator portion. The SAS initiator portion could be employed to manage, control, or issue commands to other solid state storage devices. In yet further examples, link 460 could include wireless portions, such as a wireless SAS interface, or other wireless communication and networking communication links.
Storage processing system 420 includes a microprocessor and memory with executable computer-readable instructions. Storage processing system 420 processes the data for storage and retrieval against storage allocation information as well as exchanges the data to be stored, or instructions to retrieve data, with both host interface system 410 and storage interface system 440. Storage processing system 420 executes computer-readable instructions to operate as described herein. As with host interface system 410, storage processing system 420 includes a PCIe interface for communicating over link 462 with host interface system 410. Storage processing system 420 also includes a further PCIe interface for communicating with storage interface system 440 over x8 PCIe link 463. In this example, storage processing system 420 includes two PCIe interfaces with eight PCIe lanes each, although other configurations and numbers of lanes could be used.
Storage interface system 440 includes interface circuitry to exchange data and storage instructions between storage processing system 420 and a plurality of memory subsystems, namely memory subsystems 430. Storage interface system 440 includes a PCIe interface for communicating with storage processing system 420 over link 463, and a SAS interface for communicating with each of memory subsystems 430 over associated ones of links 464. In this example, storage interface system 440 includes one PCIe interface 463 with eight PCIe lanes, although other configurations and numbers of lanes could be used. Also in this example, storage interface system 440 communicates over a single SAS link with each of memory subsystems 430, and includes an SAS initiator portion for communicating with SAS target portions of each memory subsystem 430 over SAS links 464. Although host interface system 410 and storage interface system 440 are shown as separate elements in FIG. 4, in other examples, these elements could be included in a single system, such as shown in interface system 110 or interface system 810, although other configurations could be employed. Also, in other examples, instead of a PCIe interface for link 463, a SAS, SATA, or other aggregated or multi-lane serial link could be employed. Likewise, instead of an SAS interface for link 464, PCIe, SATA, or other links could be employed.
Memory subsystems 430 each include circuitry to store and retrieve data from associated ones of solid state storage arrays 432 over associated links 465 and exchange the data with storage interface system 440 over associated SAS links 464. Memory subsystems 430 also each include an SAS target portion for communicating with the SAS initiator portion of storage interface system 440. Solid state storage arrays 432 each include a solid state storage medium, such as flash memory, static random-access memory, magnetic memory, or other non-volatile memory. Although four of memory subsystems 430 and solid state storage arrays 432 are shown in FIG. 4, it should be understood that a different number could be included, each with an associated link 464.
FIG. 5 is a sequence diagram illustrating a method of operation of solid state storage device 401. In FIG. 5, host system 450 transfers a write command and associated data to be written to host interface system 410. The write command typically includes information on the write command as well as a storage location indicator, such as a storage address at which to store the data. The write command and data are transferred over aggregated SAS link 460 in this example, and received by the SAS target portion of host interface system 410. Host interface system 410 could optionally provide an acknowledge message to host system 450 in response to successfully receiving the write command and associated data. Host interface system 410 then transfers the write command and data to storage processing system 420 over PCIe interface 462. In typical examples, host interface system 410 modifies the write command or data into a different communication format and protocol for transfer over interface 462, which could include generating a new write command and associating the data with the new write command for transfer over interface 462 for receipt by storage processing system 420.
As shown in FIG. 5, storage processing system 420 receives the write command and data and issues a ‘write complete’ message back to host interface system 410. Host interface system 410 then transfers the write complete message or associated information for receipt by host system 450. The write complete message indicates that host system 450 is free to initiate further commands or end the write process associated with the write command described above. In some examples, the write complete message is associated with command queuing. In yet other examples, a ‘write-through’ operation could be performed. In a ‘write-through’ operation, a write complete message is not generated until the associated data has been committed to associated ones of the memory subsystems or to associated ones of the solid state storage media. A ‘write-back’ operation could also instead be performed, where host interface system 410 initiates and transfers a write complete message to host system 450 in response to receiving the write data, which could further reduce latency for host system 450.
Storage processing system 420 then parallelizes the data for storage across multiple memory subsystems. In this example, storage processing system 420 processes storage location information associated with the received data against storage allocation information to determine a parallelization. Parallelizing data includes breaking the data into smaller portions, where each portion is intended for transfer across a different storage interface and subsequent storage by a different storage medium. Parallelizing also includes generating multiple write commands, one for each data portion. In this example, the data is parallelized into at least four portions. In other examples, a redundancy scheme is applied to the data, and the portions of data could include redundant data portions, parity data, checksum information, or other redundancy information. Parallelizing the data could also include interleaving the data across several storage interfaces and associated storage media. Once the data is parallelized, storage processing system 420 transfers parallelized write commands and parallelized data portions to storage interface system 440. In this example, PCIe interface 463 between storage processing system 420 and storage interface system 440 includes eight lanes, and the data could be transferred in parallel across all eight lanes, or a subset thereof.
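One common form of the redundancy scheme mentioned above is byte-wise XOR parity across the parallelized portions, which allows any single lost portion to be reconstructed from the survivors. This sketch is illustrative only; function names are hypothetical and the specification does not mandate XOR parity:

```python
def xor_parity(portions):
    """Compute a parity portion as the byte-wise XOR of equal-length
    data portions destined for different memory subsystems."""
    parity = bytearray(len(portions[0]))
    for portion in portions:
        for i, byte in enumerate(portion):
            parity[i] ^= byte
    return bytes(parity)

def reconstruct(surviving_portions, parity):
    """Rebuild a single missing portion: XORing the surviving portions
    with the parity cancels them out, leaving the lost portion."""
    return xor_parity(surviving_portions + [parity])
```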
Storage interface system 440 receives the parallelized write commands and parallelized data portions over PCIe interface 463 and in response initiates writes over each of SAS interfaces 464 for each of the parallelized data portions. The SAS target portion of each of memory subsystems 430 receives the associated writes and parallelized data portion, and in response, issues associated writes to the associated solid state storage media. The write operation originally transferred by host system 450 for data storage by solid state storage device 401 completes when the data is written to the associated solid state storage arrays.
At a later time, host system 450 issues a read request. The read request is transferred as a read command over SAS interface 460 for receipt by host interface system 410. The read command could include read command information such as storage location information, and a destination address for the read data once retrieved. Host interface system 410 receives the read command and in response issues a read command over PCIe interface 462 for receipt by storage processing system 420. Storage processing system 420 processes the read command against storage allocation information to determine where the data requested in the read command is located or stored. Since data in previous write operations was parallelized and stored on different solid state storage arrays, the data must then be retrieved from these arrays. Thus, storage processing system 420 determines individual locations to issue read commands for, and transfers these individual read commands over PCIe interface 463 for receipt by storage interface system 440. Storage interface system 440 issues parallel read commands over individual ones of SAS interfaces 464 for receipt by ones of memory subsystems 430. Ones of memory subsystems 430 issue reads to retrieve the data from solid state memory arrays 432. The read data is transferred by memory subsystems 430 and storage interface system 440 over the associated SAS and PCIe interfaces for receipt by storage processing system 420.
Storage processing system 420 receives the read data, and processes the individual read data portions against the storage allocation information and the read command information to reassemble or merge the individual read data portions into de-parallelized data. The de-parallelized data is then transferred over PCIe interface 462 for subsequent transfer by host interface system 410 over SAS interface 460 for receipt by host system 450. The read operation originally transferred by host system 450 for data retrieval by solid state storage device 401 completes when the data is successfully transferred to host system 450.
FIG. 6 is a sequence diagram illustrating a method of operation of solid state storage device 401 for optimization of read-modify-write operations. In FIG. 6, host system 450 transfers a write command and associated data to be written to host interface system 410. The write command typically includes information on the write command as well as a storage location indicator, such as a storage address at which to store the data. The write command and data are transferred over aggregated SAS link 460 in this example, and received by the SAS target portion of host interface system 410. Host interface system 410 could optionally provide an acknowledgment message to host system 450 in response to successfully receiving the write command and associated data. Host interface system 410 then transfers a related write command and data to storage processing system 420 over PCIe interface 462. In typical examples, host interface system 410 modifies the write command or data into a different communication format and protocol for transfer over interface 462, which could include generating a new write command and associating the data with the new write command for transfer over interface 462 for receipt by storage processing system 420. Storage processing system 420 receives the write command and data and issues a ‘write complete’ message back to host interface system 410. Host interface system 410 then transfers the write complete message or associated information for receipt by host system 450. The write complete message indicates that host system 450 is free to initiate further commands or end the write process associated with the write command described above. In some examples, the write complete message is associated with command queuing.
Storage processing system 420 then determines that a read-modify-write operation would need to be performed to write the data. The determination is based on storage allocation information and information received with the write command. For example, in some types of storage media, such as flash memory, data manipulation occurs in large blocks due to limitations of the underlying media technology. If the amount of data to be written is less than a desired block size, then a read-modify-write would need to be performed. Additionally, the data to be written could be parallelized as discussed in FIG. 5. In this example, storage processing system 420 processes a storage location associated with the received data against storage allocation information to determine a portion of stored data to read before the data received from host system 450 is written. Since data in previous write operations may have been parallelized and stored on different solid state storage arrays, the data must then be retrieved from these arrays. Thus, storage processing system 420 determines individual locations to issue read commands for, and transfers these individual read commands over PCIe interface 463 for receipt by storage interface system 440. Storage interface system 440 issues parallel read commands over individual ones of SAS interfaces 464 for receipt by ones of memory subsystems 430. Ones of memory subsystems 430 issue read operations to retrieve the data from solid state memory arrays 432. The read data is transferred by memory subsystems 430 and storage interface system 440 over the associated SAS and PCIe interfaces for receipt by storage processing system 420. Although multiple parallel arrowheads are not used in FIG. 6 for clarity, it should be understood that these could be used as shown in FIG. 5 for parallel operations.
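The determination described above reduces to a block-alignment check: a write that does not cover whole media blocks requires reading the surrounding block first. A minimal sketch, in which the 4 KB block size is an illustrative assumption rather than a value from the specification:

```python
BLOCK_SIZE = 4096  # assumed flash block size, for illustration only

def needs_read_modify_write(write_address: int, write_length: int) -> bool:
    """A write requires read-modify-write when it does not start and end
    on block boundaries, i.e. it modifies only part of some block."""
    return write_address % BLOCK_SIZE != 0 or write_length % BLOCK_SIZE != 0
```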
Once storage processing system 420 receives the read data, storage processing system 420 modifies the read data with the write data received from host system 450 to create read-modified-write data. This read-modified-write data comprises the read data as modified by the write data. However, instead of immediately writing the read-modified-write data to ones of the memory systems, storage processing system 420 caches the read-modified-write data in anticipation of further writes to the same portion or block of data. If such a subsequent write is received, then storage processing system 420 further modifies the read-modified-write data. Storage processing system 420 could wait for further writes until a threshold amount of data has been modified due to subsequent writes before committing the data to ones of the memory subsystems. Although the data to be modified is read before subsequent writes are received in this example, in other examples, storage processing system 420 caches the write data for a plurality of write instructions until a threshold amount of data has been modified for a particular memory block or plurality of blocks, then the data is committed to ones of memory subsystems. In some examples, a read may not be required, as the initial write data and subsequent writes modify an entire block or blocks of data and the full block or blocks can be committed from the cached location to ones of the memory subsystems. Various combinations of the above reads and writes could be performed.
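The caching-until-threshold behavior described above can be sketched as follows. The block size, the 50% commit threshold, and the `WriteCache` structure are all illustrative assumptions; the specification leaves these policy details open:

```python
class WriteCache:
    """Cache read-modified-write blocks and commit a block to the memory
    subsystems only after a threshold fraction of it has been modified
    by subsequent writes."""

    def __init__(self, block_size=4096, threshold=0.5):
        self.block_size = block_size
        self.threshold = threshold
        self.blocks = {}    # block number -> bytearray (read data + modifications)
        self.dirty = {}     # block number -> set of modified byte offsets
        self.committed = [] # (block number, data) pairs flushed to memory subsystems

    def write(self, address, data, read_block):
        """Apply a sub-block write; read_block supplies the existing block
        contents (the 'read' of read-modify-write) on first touch only."""
        block_no, offset = divmod(address, self.block_size)
        if block_no not in self.blocks:
            self.blocks[block_no] = bytearray(read_block(block_no))
            self.dirty[block_no] = set()
        self.blocks[block_no][offset:offset + len(data)] = data
        self.dirty[block_no].update(range(offset, offset + len(data)))
        # Commit once enough of the block has been modified.
        if len(self.dirty[block_no]) >= self.threshold * self.block_size:
            self.committed.append((block_no, bytes(self.blocks.pop(block_no))))
            del self.dirty[block_no]
```

In this sketch, a first small write is held in the cache; a second write to the same block pushes the modified fraction past the threshold and triggers the commit, mirroring the deferred-commit behavior described above.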
Once the data is ready for storage to ones of the memory subsystems, storage processing system 420 parallelizes the read-modified-write data for storage across multiple computer-readable storage media. In this example, storage processing system 420 processes storage location information associated with the received write data against storage allocation information and location information for the read data to determine a parallelization. Parallelizing data includes breaking the data into smaller portions, where each portion is intended for transfer across a different storage interface and subsequent storage by a different storage medium. In this example, the read-modified-write data is parallelized into at least four portions. In other examples, a redundancy scheme is applied to the read-modified-write data, and the portions of data could include redundant data portions, parity data, checksum information, or other redundancy information. Parallelizing the read-modified-write data could also include interleaving the data across several storage interfaces and associated storage media. Once the read-modified-write data is parallelized, storage processing system 420 transfers parallelized write commands and parallelized read-modified-write data portions to storage interface system 440. In this example, PCIe interface 463 between storage processing system 420 and storage interface system 440 includes eight lanes, and the data could be transferred in parallel across all eight lanes, or a subset thereof.
Storage interface system 440 receives the parallelized write commands and parallelized read-modified-write data portions over PCIe interface 463 and in response initiates write commands over each of SAS interfaces 464 for each of the parallelized data portions. The SAS target portion of each of memory subsystems 430 receives the associated write commands and parallelized data portion, and in response, issues associated write operations to the associated solid state storage media. The write command operation transferred by host system 450 for data storage by solid state storage device 401 completes when the read-modified-write data is written to the associated solid state storage arrays.
In further examples, data could be written in alternate or complementary ways than a read-modify-write to ones of memory subsystems 430 or solid state storage arrays 432. For example, an overprovisioning process could be employed. In overprovisioning, the total addressable storage space of solid state storage device 401, or a virtual subdivision thereof, is reported to be less than an actual addressable storage space. For example, solid state storage device 401 could report 100 gigabytes (100 GB) of addressable space to host system 450, but actually contain 128 GB of addressable space. Read-modify-write procedures could be enhanced by employing overprovisioning. For example, write data could be immediately written to a block of the unreported addressable space. Then background processing by solid state storage device 401 will compare the newly written data against corresponding existing data written previously to the storage array for a given block of storage space. A subsequent background read-modify-write process can then be performed by memory subsystems 430 or storage processing system 420 on the existing data against the new data written to the unreported addressable space, and the new data can then modify the existing data via a read-modify-write to create updated data to replace the existing data. The updated data could then be committed to the storage block previously occupied by the existing data, located within the reported addressable space. Garbage collection can then be performed on old data portions, such as to mark that portion of the unreported addressable space as free to be used for further write transactions with background read-modify-writes.
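The capacity accounting behind overprovisioning can be sketched as follows. The 100 GB / 128 GB figures come from the example above; the class and method names are hypothetical:

```python
GB = 10**9

class OverprovisionedDevice:
    """Report less addressable space to the host than is physically
    present; the hidden region absorbs immediate writes that background
    read-modify-write processing later folds into the reported region."""

    def __init__(self, actual_bytes=128 * GB, reported_bytes=100 * GB):
        assert reported_bytes < actual_bytes
        self.actual_bytes = actual_bytes
        self.reported_bytes = reported_bytes

    def capacity_reported_to_host(self):
        """Addressable space visible to the host system."""
        return self.reported_bytes

    def spare_bytes(self):
        """Unreported space available for staging writes and garbage collection."""
        return self.actual_bytes - self.reported_bytes
```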
FIG. 7 is a system diagram illustrating storage system 700, as an example of elements of storage system 100 found in FIG. 1, although storage system 100 could use other configurations. Solid state storage device 701 can perform operations as discussed herein for solid state storage device 101, 401, 801, or 1210, although other configurations could be employed. Storage system 700 includes host system 740 and solid state storage device 701. Host system 740 and solid state storage device 701 communicate over host interface 750.
Host system 740 comprises a computer system, such as a server, personal computer, laptop, tablet, gaming system, entertainment system, embedded computer system, industrial computer system, network system, or other computer system. Host interface 750 could comprise serial attached SCSI (SAS), aggregated SAS, Ethernet, small-computer system interface (SCSI), integrated drive electronics (IDE), serial AT attachment (SATA), parallel ATA, FibreChannel, InfiniBand, Thunderbolt, universal serial bus (USB), FireWire, PCI Express, communication signaling, or other communication interface type, and could comprise optical, wired, wireless, or other interface media.
Solid state storage device 701 includes chip-scale device 710, connector 711, and non-volatile memories (MEM) 730. Connector 711 includes physical structure and connection components to attach a transmission medium to solid state storage device 701. Connector 711 could include a connector, antenna, port, or other interconnection components. MEM 730 each include non-transitory non-volatile computer-readable media, such as flash memory, electrically erasable and programmable memory, magnetic memory, phase change memory, optical memory, or other non-volatile memory. MEM 730 could each comprise a microchip or collection of microchips to each form a storage array.
Chip-scale device 710 includes host interface 712, primary processor 713, dynamic random access memory (DRAM) 714, firmware 715, memory processor 716, and peripheral input/output (I/O) 717. Chip-scale device 710 could comprise a field-programmable gate array (FPGA), application specific integrated circuit (ASIC), or other integrated microchip circuit and logic elements, including combinations thereof. Each element of chip-scale device 710 can communicate over associated logic and signaling elements, not shown for clarity in FIG. 7. The signaling elements could include busses, discrete links, point-to-point links, or other links.
Host interface 712 includes circuitry and logic to communicate over host interface 750 to exchange read and write commands with host system 740 along with associated data. Primary processor 713 includes logic and processing circuitry to process read and write commands to determine data storage operations, such as data parallelization, data interleaving, read-modify-write optimization, redundancy operations, or other operations for storing and retrieving data with MEM 730 through memory processor 716. Dynamic random access memory (DRAM) 714 includes random-access memory elements and access logic for primary processor 713 to retrieve executable instructions to perform as indicated herein. DRAM 714 could also include storage allocation information or cached data associated with reads/writes. Firmware 715 includes non-volatile memory elements, such as static RAM (SRAM), flash memory, or other non-volatile memory elements which store computer-readable instructions for operating chip-scale device 710 as discussed herein when executed by primary processor 713 or memory processor 716. Firmware 715 could include operating systems, applications, storage allocation information, configuration information, or other computer-readable instructions stored on a non-transitory computer-readable medium. Memory processor 716 includes logic and circuitry for reading from and writing to a plurality of memory arrays, such as MEM 730. Memory processor 716 could also include interfacing logic for communicating over memory interfaces 752 or write circuitry for writing to flash memory or other memory technologies. Peripheral I/O 717 includes circuitry and logic for communicating with further external systems, such as computer-readable storage media, programming elements for chip-scale device 710, debugging interfaces, power control interfaces, clock control interfaces, or other external interfaces, including combinations thereof.
FIG. 8 is a system diagram illustrating storage system 800, as an example of elements of storage system 100 found in FIG. 1, although storage system 100 could use other configurations. Storage system 800 includes flash storage device 801 and host system 850. Flash storage device 801 and host system 850 communicate over link 860, which is an aggregated serial attached SCSI (SAS) interface in this example. In FIG. 8, host system 850 can transfer data to be stored by flash storage device 801, such as a ‘write’ instruction where flash storage device 801 stores associated data on a computer-readable storage medium, namely ones of flash memory arrays 832. Host system 850 can also request data previously stored by flash storage device 801, such as during a ‘read’ instruction, and flash storage device 801 retrieves associated data and transfers the requested data to host system 850. Additionally, further transactions than writes and reads could be handled by the elements of flash storage device 801, such as metadata requests and manipulation, file or folder deletion or moving, volume information requests, file information requests, or other transactions. Although the term ‘flash storage’ is used in this example, it should be understood that other non-transitory computer-readable storage media and technologies could be employed.
Flash storage device 801 includes interface system 810, storage processing system 820, memory subsystems 830, power control system 870, and backup power source 880. In this example, the elements of flash storage device 801 are included within a single enclosure, such as a casing. The enclosure includes connector 812 attached thereon to communicatively couple the associated elements of flash storage device 801 to external systems, connectors, and/or cabling. The enclosure includes various printed circuit boards with the components and elements of flash storage device 801 disposed thereon. Printed circuit traces or discrete wires are employed to interconnect the various elements of flash storage device 801. If multiple printed circuit boards are employed, inter-board connectors are employed to communicatively couple each printed circuit board. In some examples, backup power source 880 is included in elements external to the enclosure of flash storage device 801.
Interface system 810 includes interface circuitry to exchange data for storage and retrieval with host system 850 over an aggregated SAS interface, namely link 860. Interface system 810 includes an SAS target portion to communicate with an SAS initiator portion of host system 850. Link 860 includes an aggregated SAS interface, which includes four individual SAS links merged into a single logical SAS link. Connector 812 serves as a user-pluggable physical connection port between host system 850 and flash storage device 801. Link 860 could include cables, wires, or optical links, including combinations thereof. Interface system 810 also includes an SAS initiator portion, a PCI Express (PCIe) interface, and associated circuitry. The SAS initiator portion of interface system 810 includes circuitry and logic for initiating instructions and commands over links 864 for exchanging data with SAS target portions of each of memory subsystems 830. The PCIe interface of interface system 810 communicates over a multi-lane PCIe interface 862 with storage processing system 820. In this example, eight lanes are shown, which could comprise a ‘x8’ PCIe interface, although other configurations and numbers of lanes could be used. Interface system 810 also communicates with power control system 870 to receive power status information.
Storage processing system 820 includes a microprocessor and memory, and processes data for storage and retrieval against storage allocation information as well as exchanges the data to be stored or instructions to retrieve data with interface system 810. Storage processing system 820 executes computer-readable instructions to operate as described herein. As with interface system 810, storage processing system 820 includes a PCIe interface for communicating over link 862 with interface system 810. Storage processing system 820 also communicates with power control system 870 to receive power status information, clock speed configuration information, or other power information. In this example, storage processing system 820 includes one PCIe interface with eight PCIe lanes, although other configurations and numbers of lanes could be used.
In the examples discussed herein, each of memory subsystems 830 is a physical drive that is merged together into one physical enclosure with the other elements of flash storage device 801 to create a virtual drive configured and accessed by storage processing system 820. Thus, each of memory subsystems 830 is a physical drive encased in a common enclosure with interface system 810 and storage processing system 820. Each of memory subsystems 830 could also be mounted on common printed circuit boards with elements of interface system 810 and storage processing system 820. Memory subsystems 830 each include a flash memory controller 831 and a flash memory array 832. In this example, each of memory subsystems 830 comprises an independent flash memory storage drive, where each includes the electronics, circuitry, and microchips typically included in a flash memory drive, such as a USB flash drive, thumb drive, solid state hard drive, or other discrete flash memory device. Flash storage device 801 includes a plurality of these memory subsystems 830, as shown in FIG. 8, and the associated electronics, circuitry, and microchips could be included on printed circuit boards with the other elements of flash storage device 801. A different number of memory subsystems 830 could be included, as indicated by the ellipses. Each flash memory controller 831 includes circuitry to store and retrieve data from associated ones of flash memory arrays 832 and exchange the data with interface system 810 over associated ones of SAS links 864. Memory subsystems 830 also each include an SAS target portion for communicating with the SAS portion of interface system 810. The SAS target portion could be included in each of flash memory controllers 831, or could be included in separate interface elements of each of memory subsystems 830. Flash memory arrays 832 each include a flash memory storage medium.
It should be understood that other non-volatile memory technologies could be employed in each of memory subsystems 830, as discussed herein.
Power control system 870 comprises circuitry and logic to monitor power for elements of flash storage device 801. Power control system 870 could include circuitry and logic to provide backup power to the elements of flash storage system 801 when a primary power source (not shown in FIG. 8 for clarity) is interrupted. Power control system 870 monitors and conditions power received from at least backup power source 880 over link 868, and provides power status information to ones of storage processing system 820, interface system 810, or memory subsystems 830 over links such as link 866 and link 867. The links could include power links, discrete communication lines, or other communication interfaces, such as serial, system management bus (SMBus), inter-integrated circuit (I2C), or other communication links.
Backup power source 880 includes a power source for providing backup power to elements of flash storage device 801. Backup power source 880 could include circuitry to store power, condition power, regulate power, step up or step down power sources to various voltages, monitor remaining power for power sources, or other circuitry for power supply and conditioning. The power source included in backup power source 880 could be of a variety of backup power source technology types, and could comprise batteries, capacitors, capacitor arrays, flywheels, dynamos, piezoelectric generators, solar cells, thermoelectric generators, or other power sources, including combinations thereof.
FIG. 9 is a sequence diagram illustrating a method of operation of storage system 800. In FIG. 9, host system 850 transfers a write instruction and associated data to be written by flash storage system 801 to interface system 810. The write instruction typically includes command information associated with the write instruction as well as a storage location indicator, such as a storage address at which to store the data. The write instruction and data are transferred over aggregated SAS link 860 in this example, and received by the SAS target portion of interface system 810. Interface system 810 could optionally provide an acknowledge message to host system 850 in response to successfully receiving the write instruction and associated data. Interface system 810 then transfers the write instruction and data to storage processing system 820 over PCIe interface 862. In typical examples, interface system 810 modifies the write instruction or data into a different communication format and protocol for transfer over interface 862, which could include generating a new write command and associating the data with the new write command for transfer over interface 862 for receipt by storage processing system 820. Storage processing system 820 receives the write command and data and issues a ‘write complete’ message back to interface system 810. Interface system 810 then transfers the write complete message or associated information for receipt by host system 850. The write complete message indicates that host system 850 is free to initiate further commands or end the write process associated with the write command described above. In some examples, the write complete message is associated with command queuing.
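The command flow above can be sketched as a minimal simulation; the class and method names below are illustrative assumptions, not elements of the patent:

```python
# Minimal sketch of the FIG. 9 write sequence: the interface system
# receives a host write, reissues it internally, and relays the
# 'write complete' status back toward the host. All names are illustrative.

class StorageProcessor:
    def __init__(self):
        self.pending = []  # writes not yet committed to memory subsystems

    def handle_write(self, address, data):
        # Receive the reformatted write command and queue it for commit.
        self.pending.append((address, data))
        return "write complete"

class InterfaceSystem:
    def __init__(self, processor):
        self.processor = processor

    def host_write(self, address, data):
        # Optionally acknowledge receipt, then generate a new write
        # command in the internal format for the storage processor.
        ack = "ack"
        status = self.processor.handle_write(address, data)
        return ack, status  # both relayed back to the host

proc = StorageProcessor()
iface = InterfaceSystem(proc)
print(iface.host_write(0x1000, b"payload"))  # ('ack', 'write complete')
```

The key point modeled here is that the interface system does not forward the SAS frames verbatim; it generates a new command in the internal format before handing the data to the storage processor.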
However, at some point during the write process, primary power is interrupted to flash storage device 801. The point at which power is interrupted in this example is after interface system 810 receives the write instruction from host system 850 and transfers any optional associated acknowledge message to host system 850 for the write instruction, but before the data is written to ones of flash memory arrays 832. In response to the interruption, power control system 870 detects the power interruption and provides backup power from backup power source 880 to elements of device 801. In some examples, backup power source 880 provides the backup power in a redundant manner with any primary power. If primary power is interrupted, backup power source 880 could apply backup power immediately or simultaneously so flash storage device 801 experiences no interruption in power supply.
In response to detecting the power loss, power control system 870 transfers a power loss indicator to ones of storage processing system 820, interface system 810, and memory subsystems 830. Further power information could be transferred, such as power source profiles, power down instructions, backup power technology type identifiers, remaining backup power levels, or other information. In response to receiving the power loss indicator or other information, interface system 810, storage processing system 820, and memory subsystems 830 enter into a soft power down mode.
In one example of the soft power down mode, further write commands and data are not accepted over link 860, and interface system 810 caches pending write instructions along with associated data as power queued data 845. The cached write instructions and data are ones that have not been committed to ones of memory subsystems 830. Power queued data 845 could be stored in a non-transitory computer-readable medium, such as a flash memory, SRAM, or other non-volatile memory. This non-volatile memory could be included in interface system 810, or external to interface system 810. Also in the soft power down mode, storage processing system 820 could commit pending writes within storage processing system 820 to ones of memory subsystems 830. For read instructions, storage processing system 820 could transfer any pending read data to interface system 810, and interface system 810 would cache any read instructions and associated data not yet provided to host system 850 in power queued data 845. Also, in the soft power down mode, storage processing system 820 commits any storage allocation information to non-volatile memory. The non-volatile memory could be the same memory which includes power queued data 845.
In another example of the soft power down mode, further write commands are not accepted over link 860 in response to entering into the soft power down mode, and pending write commands proceed. In-flight data associated with the pending write commands would be committed to ones of memory subsystems 830 as the associated write commands complete. Storage processing system 820 then commits any storage allocation information kept in volatile memory elements into non-volatile memory, such as a flash memory, SRAM, or other non-volatile memory. Storage processing system 820 could commit the storage allocation information into ones of memory subsystems 830, to be stored with host data, or within specially partitioned storage areas of memory subsystems 830. The specially partitioned storage areas for committing storage allocation information during soft power down operations could be unreported addressable space of memory subsystems 830, such as that used in overprovisioning.
Although in the examples above interface system 810 caches pending instructions and data, it should be understood that the processing and transfer of various instructions and data could be at any stage of processing within flash storage device 801. Interface system 810 and storage processing system 820 communicate to coordinate which instructions and data will be completed or committed, and which will be cached before power loss. A predetermined power down sequence could be employed for the soft power down operations, or the soft power down process could be dependent upon the quantity of pending transactions and available backup power. For example, the amount of time within which the soft power down activities must occur depends upon many factors, such as remaining backup power, a quantity of pending transactions, or other factors. Storage processing system 820 could determine a threshold quantity of instructions to complete based on remaining backup power indicators as provided by power control system 870, and any instructions exceeding the threshold quantity would be cached. Furthermore, during a power loss, pending read instructions may be inhibited from transfer over host interface 860, as host system 850 may also be without power. In some examples, incoming write data can be marked or flagged as critical data by a host system, and such data could be committed ahead of other non-critical data to ones of memory subsystems 830, and the non-critical data would be cached as power queued data 845.
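One way to realize the threshold computation described above is sketched below; the per-commit energy cost and the flagged-data format are hypothetical illustrations, not values specified by the patent:

```python
# Hedged sketch of splitting pending instructions during soft power down:
# commit as many as the remaining backup energy supports (critical data
# first), and cache the rest as power queued data. The 5 mJ-per-commit
# figure is an assumed illustration.

def split_pending(pending, remaining_energy_mj, energy_per_commit_mj=5.0):
    # Order host-flagged critical operations ahead of non-critical ones.
    ordered = sorted(pending, key=lambda op: not op.get("critical", False))
    # Threshold quantity of instructions the backup energy can support.
    budget = int(remaining_energy_mj // energy_per_commit_mj)
    return ordered[:budget], ordered[budget:]

ops = [{"id": "w1"}, {"id": "w2", "critical": True}, {"id": "w3"}]
commit, cache = split_pending(ops, remaining_energy_mj=10.0)
print([op["id"] for op in commit])  # ['w2', 'w1']
```

The stable sort keeps arrival order among operations of equal criticality, so non-critical writes are still committed oldest-first once all critical writes fit within the energy budget.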
Power control system 870 could receive status indicators from interface system 810, storage processing system 820, or memory subsystems 830 which indicate a state of soft power down sequencing, such as if all pending transactions and storage allocation information have been committed or cached. Power control system 870 powers down elements of flash storage device 801 in response to these status indicators, such as powering down ones of memory subsystems 830 when all write data or storage allocation information has been committed, powering down interface system 810 when all remaining pending transactions have been cached, and powering down storage processing system 820 when storage allocation information has been committed. It should be understood that other variations on power down sequencing could occur.
At some later point in time, primary power resumes. Primary power could resume while flash storage device 801 is still receiving backup power, and no interruption in the operation of flash storage device 801 may occur. However, when the various soft power down operations have been performed in response to a loss of primary power, then power control system 870 applies power to the various elements of flash storage device 801 in response to primary power resuming. Also in response to power resuming, interface system 810 retrieves cached transactions and data from power queued data 845 and executes these transactions. For example, pending and cached writes could be committed to ones of memory subsystems 830, and pending reads could be performed and associated data returned to host system 850. In examples where the storage allocation information is committed or cached into ones of memory subsystems 830, storage processing system 820 could read this storage allocation information from the associated ones of memory subsystems 830 and transfer this information to a volatile memory location, such as DRAM or a buffer.
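The replay of cached transactions on power resumption might be sketched as follows; the operation encoding is an assumption made for illustration:

```python
# Sketch of draining power queued data after primary power resumes:
# cached writes are committed to the memory subsystems, then cached
# reads are serviced and their data returned toward the host.

def resume_from_power_loss(power_queued, memory, host_out):
    for op in power_queued:
        if op["kind"] == "write":
            memory[op["addr"]] = op["data"]
        else:  # a cached read not yet answered before the power loss
            host_out.append(memory.get(op["addr"]))
    power_queued.clear()  # queue is empty once all transactions replay

mem, host_out = {}, []
queued = [{"kind": "write", "addr": 0x10, "data": b"a"},
          {"kind": "read", "addr": 0x10}]
resume_from_power_loss(queued, mem, host_out)
print(host_out)  # [b'a']
```

Replaying writes before reads means a cached read of a just-cached write observes the committed data, matching the ordering the host expected before the interruption.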
FIG. 10 includes graphs 1010-1031 illustrating example power down curves. Backup power source 880 of FIG. 8 could include a variety of backup power source technology types, as discussed above. Each power source type or technology typically has an associated power output and power profile, which depends highly on the technology and elements employed in the type of backup power source or technology type. Each graph in the top portion of FIG. 10, namely graphs 1010, 1020, and 1030, includes a horizontal time axis and a vertical power output axis. The power output axis relates to the power output of a type of power source, and is related to the passage of time along the horizontal time axis. Each graph in the bottom portion of FIG. 10, namely graphs 1011, 1021, and 1031, includes a horizontal time axis and a vertical power draw axis. The power draw axis relates to a forced power draw of flash storage device 801, and is related to the passage of time along the horizontal time axis. In examples where backup power source 880 is included in flash storage device 801, the power draw profile could be pre-programmed into power control system 870 according to the power source type. In other examples, the power draw profile could be programmable over an external configuration interface.
Graph 1010 indicates the typical power output of a battery-based power source, graph 1020 indicates the typical power output of a capacitor-based power source, and graph 1030 indicates the typical power output of a flywheel-based power source. Although three different power source types or technologies are discussed in these graphs, other power types could be employed with associated power output profiles.
Graph 1011 indicates the forced power draw of flash storage device 801 when using a backup power source employing a battery-based power source. Graph 1021 indicates the forced power draw of flash storage device 801 when using a backup power source employing a capacitor or array of capacitors. Graph 1031 indicates the forced power draw of flash storage device 801 when using a backup power source employing a flywheel.
The forced power draw includes an artificially induced power draw, or associated current draw, for flash storage system 801 when powered by backup power source 880. A power draw could be forced by power control system 870. Power control system 870 could control various parameters of operation of flash storage system 801 to match the power draw of flash storage system 801 to the associated source power output curves. This matching could include powering down various elements of flash storage device 801 in a sequence which reduces power draw according to the typical power output indicated by any of graphs 1010, 1020, or 1030. This matching could include ramping down clock speeds or clock frequencies of various elements of flash storage device 801 to induce a power draw matching that of any of graphs 1010, 1020, or 1030. In other examples, powering down ones of interface system 810, storage processing system 820, or memory subsystems 830 is performed in a predetermined sequence in accordance with the power output curve associated with the backup power source type.
In yet further examples, power control system 870 instructs elements of device 801 to throttle the various interfaces and elements, such as memory elements. In throttling examples, interface speeds or a speed of interface transactions is correlated to a power source type, based on the power consumption of the associated circuit and logic elements. For example, with battery power sources, a lower power draw correlates to more total energy being available, so elements of flash storage device 801 can be throttled down in response to a primary power interruption. With flywheel power sources, power down completion time needs to be minimized to ensure maximum energy can be drawn from the flywheel, and elements of flash storage device 801 are throttled up to induce a high power draw for a shorter amount of time. With capacitor power sources, the throttling should be proportional to the voltage of the capacitor or capacitor array, so that when the capacitor has a high voltage, the elements of flash storage device 801 are throttled up to induce a high power draw, and as the voltage of the capacitor drops, the elements are throttled down proportionally.
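The per-technology throttling policies above can be condensed into a single selection function; the battery factor and the linear capacitor rule below are illustrative assumptions:

```python
# Sketch of a throttle factor (0.0 = slowest, 1.0 = fastest) chosen by
# backup power source type, following the policies described above.

def throttle_factor(source_type, cap_voltage=None, cap_v_max=None):
    if source_type == "battery":
        return 0.25  # throttle down: a lower draw yields more usable energy
    if source_type == "flywheel":
        return 1.0   # throttle up: drain quickly before the flywheel slows
    if source_type == "capacitor":
        # Proportional to stored voltage: high voltage, high draw.
        return cap_voltage / cap_v_max
    raise ValueError("unknown backup source: " + source_type)

print(throttle_factor("capacitor", cap_voltage=2.5, cap_v_max=5.0))  # 0.5
```

In a real controller the factor would scale interface clock rates or transaction rates; here it simply captures how the draw profile tracks the source technology.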
Advantageously, data integrity can be better maintained when a power down sequence as described herein allows maximum use of backup power when maximal power is available from a backup power source, and minimized use of backup power when minimal backup power output is available. For example, critical operations could be committed during the times when maximum backup power is available, and less critical operations could be performed during times when minimal backup power is available. Additionally, memory devices involved in storing power queued data 845 could include lower power draw elements as compared to memory subsystems 830, and thus pending transactions could preferably be cached by interface system 810 rather than committed to relatively high-power draw memory subsystems 830. Also, by intelligently ramping down power draw according to the specific backup power source technology or type, smaller backup power sources could be employed as power draw is more tailored to such sources. Although the power down profiles and associated throttling or induced power draw discussed in FIG. 10 are applied to elements of FIG. 8, it should be understood that these techniques and systems could be applied to any of the storage systems discussed herein.
In a further example, a cost of a primary source of energy instead of a backup power source is considered when throttling the various elements of device 801. For instance, during peak energy usage hours, energy costs may be higher, and during non-peak energy hours, energy costs may be lower. During high energy cost times, the elements of device 801 could be throttled to a lower performance operation, such as by slowing memory interfaces, or slowing a processor clock speed, among other performance throttling modifications. Likewise, during low energy cost times, the elements of device 801 could be allowed to operate at a maximum or higher level of performance and no throttling applied.
FIG. 11 is a system diagram illustrating storage system 1100. In FIG. 11, portions of flash storage system 801 are included for exemplary purposes, and it should be understood that other storage system elements discussed herein could instead be employed. Storage system 1100 includes storage processing system 820 of flash storage system 801, discrete logic 1120, selection circuit 1122, resistors 1124-1128, voltage regulator 1130, and capacitor 1140. The elements of FIG. 11 are employed as clock rate optimization circuitry in a clock frequency controlling scheme, where the clock rate or clock frequency for a clock system or clock generator circuit associated with storage processing system 820 of flash storage device 801 is varied based on a utilization of processing portions of storage processing system 820. The elements of FIG. 11 could be included in flash storage system 801, such as in a common enclosure or on common printed circuit boards.
Discrete logic 1120 includes communication logic to interpret indicators transferred by utilization monitor 1112, such as logic elements, communication interfaces, processing systems, or other circuit elements. Selection circuit 1122 includes solid-state switching elements, such as transistors, transmission gates, or other selection logic to select one of resistors 1124-1128 and connect the selected resistor to Vout pin 1132 of voltage regulator 1130. Resistors 1124-1128 include resistors, or could include active resistor elements, such as temperature-dependent resistors, voltage or current controlled transistors, or other resistor-like elements. Voltage regulator 1130 includes voltage regulation circuitry to provide power at a predetermined voltage at Vout pin 1132 based on varying voltages applied to voltage regulator 1130. Capacitor 1140 includes capacitor circuit elements or arrays of capacitors. Links 1150-1155 include circuit traces, discrete wires, optical links, or other media to communicate indicators, voltages, currents, clock speeds, or power between the various elements of FIG. 11.
In FIG. 11, storage processing system 820 includes utilization monitor 1112. Utilization monitor 1112 could include a software process executed by storage processing system 820 which monitors various parameters of utilization to determine a utilization indicator. The various parameters of utilization could include data throughput, processor utilization, memory usage, instruction load, power draw, active processes, active transactions, or other parameters. Utilization monitor 1112 provides an indicator of utilization over link 1150 to discrete logic 1120. The indicator could be a voltage level proportional to utilization, a multi-level digitally encoded indicator, or a binary indicator, among other indicators.
When the utilization of storage processing system 820 is low, such as during idle states or low throughput states, the indicator could remain in an inactive condition. However, in response to a higher utilization, such as a non-idle state or high throughput state, the indicator could transition to an active condition. Other indicator states could be employed, such as a proportional indicator that varies according to utilization levels. Discrete logic 1120 then communicates with selection circuit 1122 to select one of resistors 1124-1128. Selection circuit 1122 could include a transistor switch or other switching elements. The particular one of resistors 1124-1128 which is selected by selection circuit 1122 controls the resistance applied to Vout pin 1132 of voltage regulator 1130, allowing for adjustment in the output supply of voltage regulator 1130. As the utilization increases, the resistance is adjusted to increase the output voltage of voltage regulator 1130. In this example, resistor 1128 is active resistor 1160, and could correspond to a low utilization, and thus a low associated output voltage of voltage regulator 1130. In other examples, such as during high utilization, one of resistor 1124 or 1126 could be selected resulting in a correspondingly higher output voltage of voltage regulator 1130.
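The resistor-selection scheme can be illustrated numerically, assuming a standard adjustable regulator whose output follows Vout = VREF × (1 + R_TOP / R_selected); the reference voltage and resistor values below are hypothetical, not taken from FIG. 11:

```python
# Sketch of the FIG. 11 feedback scheme: selecting a smaller feedback
# resistor raises the regulator output voltage, and thereby the clock
# frequency derived from that voltage. Component values are assumptions.

VREF = 0.8      # volts, assumed feedback reference of voltage regulator 1130
R_TOP = 10_000  # ohms, assumed fixed upper feedback resistor

def regulator_vout(selected_r_ohms):
    return VREF * (1 + R_TOP / selected_r_ohms)

# Higher utilization selects a smaller resistor, so Vout rises with load.
for level, r in {"low": 20_000, "mid": 10_000, "high": 5_000}.items():
    print(level, round(regulator_vout(r), 2))
```

Whether a smaller or larger selected resistance raises Vout depends on the regulator topology; the divider orientation here is one common arrangement, chosen so that higher utilization maps to a higher output voltage as the text describes.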
This adjusted output voltage of voltage regulator 1130 is then applied to capacitor 1140. Capacitor 1140 conditions the output voltage of voltage regulator 1130 to reduce ripple, noise, and transition glitches, and provide a smooth voltage to Vin pin 1144 of flash storage device 801. In this example, Vin pin 1144 controls a clock frequency applied to storage processing system 820, where a clock generation portion of flash storage device 801 or storage processing system 820 determines a clock rate or clock frequency proportionally to the voltage applied to Vin pin 1144. Thus, as the utilization level of elements of flash storage device 801, such as storage processing system 820, increases from a low utilization level, the clock frequency is increased in speed. Likewise, as the utilization level of elements of flash storage device 801, such as storage processing system 820, decreases from a high utilization level, the clock frequency is decreased in speed. Although the applied voltage corresponds to a clock frequency in this example, in other examples the applied voltage could correspond to a core voltage of semiconductor portions of flash storage device 801, where reduced core voltages correspond to reduced utilization levels, and vice versa. In other examples, external clock generation circuits have an output clock rate or frequency modified based on the utilization level discussed herein.
Thresholds could be employed for the various utilization levels. For example, when utilization is below a first threshold, the clock speed is adjusted to a first speed via a first voltage level applied to Vin pin 1144; when utilization is between the first threshold and a second, higher threshold, the clock speed is adjusted to a second speed higher than the first speed via a second voltage level applied to Vin pin 1144; and when utilization is higher than the second threshold, the clock speed is adjusted to a third speed higher than the second speed via a third voltage level applied to Vin pin 1144.
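The two-threshold selection above reduces to a small lookup; the threshold and frequency values here are illustrative assumptions:

```python
# Sketch of the two-threshold clock selection described above; the
# threshold positions and clock frequencies are illustrative only.

def clock_for_utilization(util, t1=0.3, t2=0.7):
    if util < t1:
        return 100e6  # first (lowest) clock speed, below the first threshold
    if util < t2:
        return 200e6  # second speed, between the two thresholds
    return 400e6      # third (highest) speed, above the second threshold

print(clock_for_utilization(0.1), clock_for_utilization(0.9))
```

More threshold/speed pairs could be added the same way, approximating the proportional voltage-to-frequency behavior with as many discrete steps as the selection circuit supports.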
In further examples, the output of utilization monitor 1112 is used to control link aggregation of a front-end interface, such as a host interface. For example, when an aggregated front-end SAS interface is used to communicate with a host system, the amount of aggregation can be proportional to utilization of processing system 820. During low utilization times, such as during idle, the number of links aggregated into the host interface can be reduced, possibly to one physical link. Likewise, during times of high utilization, such as when utilization exceeds a first utilization threshold, an additional physical link can be aggregated into the host interface. Further utilization thresholds could increase further amounts of aggregated physical links. The utilization level information detected by utilization monitor 1112 could be provided to a front-end interface system for responsively controlling the amount of link aggregation.
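The utilization-driven aggregation might be sketched as follows; the threshold values are assumptions, while the four-link maximum follows the aggregated SAS interface of this example:

```python
# Sketch of utilization-driven link aggregation for the four-link SAS
# host interface: start at one physical link and add one for each
# utilization threshold exceeded. Threshold values are illustrative.

def links_to_aggregate(util, thresholds=(0.25, 0.5, 0.75), max_links=4):
    return min(max_links, 1 + sum(util >= t for t in thresholds))

print(links_to_aggregate(0.1), links_to_aggregate(0.6), links_to_aggregate(0.9))
```

During idle the interface collapses to a single physical link, and each threshold crossing aggregates one more link until the full width is reached.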
In yet further examples, other elements of flash storage system 801 have a clock speed or operation rate modified as done for storage processing system 820 above. For example, memory subsystems 830 could each be throttled or have a modified clock speed according to utilization monitor 1112. Thus, performance such as transaction speed or clock speed of all elements of flash storage system 801 could be actively and dynamically scaled according to the read/write demand of the host system.
FIG. 12 is a system diagram illustrating storage system 1200. Storage system 1200 includes solid state memory device 1210 and configuration system 1240. Solid state memory device 1210 could be an example of devices 101, 401, 701, and 801, although devices 101, 401, 701, and 801 could use other configurations. In this example, solid state memory device 1210 includes interface 1212 and optional external configuration pins P1-P4. Interface 1212 comprises a configuration interface for communicating configuration information to configuration system 1240 over configuration link 1230 and receiving configuration instructions from configuration system 1240 over configuration link 1230. Configuration link 1230 could include further systems, networks, links, routers, switches, or other communication equipment. In some examples, configuration link 1230 is provided over a front-end interface, such as the various host interfaces described herein.
Configuration system 1240 comprises a computer system, processing system, network system, user terminal, remote terminal, web interface, or other configuration system. Configuration system 1240 includes configuration user interface 1250, which allows a user of configuration system 1240 to create and transfer configuration instructions to solid state memory device 1210. Configuration user interface 1250 also can present a graphical or text-based user interface to a user for displaying a present configuration or configuration options to the user.
The configuration of solid state memory device 1210 can be modified using configuration user interface 1250 or optional configuration pins P1-P4. In some examples, pins P1-P4 can be spanned by a removable jumper 1220 or multiple jumpers. A user can alter a configuration of solid state memory device 1210 by bridging various ones of pins P1-P4. Pins P1-P4 interface with logic or circuitry internal to solid state memory device 1210, such as programmable logic which triggers a script or software routine to responsively configure firmware or software managing the various elements of solid state memory system 1210. Additionally, a user can alter and view configurations of solid state memory device 1210 through configuration user interface 1250. In some examples, the function of pins P1-P4 can be altered by configuration user interface 1250, so that commonly used functions could be easily selected by a jumper or jumpers. A factory-set configuration of pins P1-P4 could be altered by configuration user interface 1250. Although four pins are shown in FIG. 12, it should be understood that a different number of configuration pins could be employed. Instead of configuration pins, micro-switches could be employed.
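The jumper scheme can be modeled as a mapping from bridged pin pairs to configuration settings; the pin assignments and settings below are entirely hypothetical, since the patent leaves the pin functions programmable:

```python
# Hypothetical mapping from bridged jumper pins P1-P4 to configuration
# settings of solid state memory device 1210; the assignments and the
# default values are illustrative assumptions only.

JUMPER_FUNCTIONS = {
    frozenset({"P1", "P2"}): {"read_only": True},
    frozenset({"P3", "P4"}): {"raid": "RAID1"},
}

def apply_jumpers(bridged_pairs):
    config = {"read_only": False, "raid": None}  # assumed factory defaults
    for pair in bridged_pairs:
        # Unrecognized bridges are ignored; known ones update the config.
        config.update(JUMPER_FUNCTIONS.get(frozenset(pair), {}))
    return config

print(apply_jumpers([("P1", "P2")]))  # {'read_only': True, 'raid': None}
```

Because the mapping is a data table rather than fixed logic, reprogramming it over configuration user interface 1250 would change what each jumper does, as the text describes for factory-set pin functions.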
In typical examples of memory systems, only limited options can be configured. These options can include timing, speed, or other settings. Also, a user can physically change a configuration by altering a physically installed size or number of memory devices. However, in this example, further configurations are provided. The front-end or host interface could be altered, such as changing a link aggregation configuration, an interface speed, or other parameters of a host interface. A capacity of solid state memory device 1210 could be altered, so as to limit a capacity or select from among various potential capacities. Various performance parameters could be altered. For example, a thermal shut-off feature could be altered or enabled/disabled to disable device 1210 or portions thereof according to temperature thresholds. A read-only status could be enabled/disabled, or selectively applied to subdivisions of the total storage capacity, such as different volumes. A redundancy scheme could also be selected, such as a redundant array of independent disks (RAID) configuration. Various solid state media of device 1210 could be subdivided to create separate RAID volumes or redundant volumes. Striping among various memory subsystems could also be employed. Encryption configurations could also be applied, such as encryption schemes, passwords, encryption keys, or other encryption configurations for data stored within device 1210. Encryption keys could be transferred to device 1210 over interface 1212. Compression schemes could also be applied to data read from and written to the various memory subsystems; a compression scheme could be selected via the various configuration or jumper interfaces, or uploaded via the configuration interfaces. Link aggregation could also be altered by the configuration elements described in FIG. 12.
For example, a number of SAS links could be configured to be aggregated into a single logical link, or separated into separate links, and associated with various volumes. An associated storage processing system could be configured to selectively merge or separate ones of the physical links into the aggregated multi-channel interface based on instructions received over the configuration interfaces.
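The merge/separate behavior of physical links just described can be illustrated with a small sketch. The data model here (link IDs as integers, groups as lists) is invented for illustration; the patent does not define such a structure:

```python
# Hypothetical sketch of selectively merging physical SAS links into
# aggregated logical links per a configuration instruction, or keeping
# them separate; the data model is illustrative only.

def apply_link_config(physical_links, groups):
    """Partition physical link IDs into logical links.

    physical_links: list of physical link IDs, e.g. [0, 1, 2, 3]
    groups: list of lists; each inner list names the physical links
            aggregated into one logical (multi-channel) link.
    Returns {logical_link_index: [physical link IDs]} and validates
    that every physical link is assigned exactly once.
    """
    assigned = [link for group in groups for link in group]
    if sorted(assigned) != sorted(physical_links):
        raise ValueError("each physical link must appear in exactly one group")
    return {i: list(group) for i, group in enumerate(groups)}
```

For example, `apply_link_config([0, 1, 2, 3], [[0, 1], [2, 3]])` yields two two-wide logical links, each of which could then be associated with a separate volume, while `[[0, 1, 2, 3]]` merges all four into a single aggregated interface.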
In typical examples, a physical drive is an actual tangible unit of hardware of a disk, solid state, tape, or other storage drive. A logical drive typically describes a part of a physical disk or physical storage device that has been partitioned and allocated as an independent unit, and functions as a separate drive to the host system. For example, one physical drive could be partitioned into logical drives F:, G:, and H:, each letter representing a separate logical drive but all logical drives still part of the one physical drive. Using logical drives is one method of organizing large units of memory capacity into smaller units. A virtual drive is typically an abstraction, such as by spanning, of multiple physical drives or logical drives to represent a single larger drive to a host system. In the examples discussed herein, the various solid state memory subsystems are physical drives that are merged together into one enclosure to create a virtual drive configured and accessed by the associated storage processing system, such as storage processing system 820, among others. The physical drive can have logical volumes associated therewith, and the virtual drives can also have logical volumes associated therewith. The associated storage processing system binds the virtual drive(s) and associated memory subsystems to target ports on the associated interface system, such as interface system 810. Target and initiator ports on the interface system are configured and controlled by the storage processing system. The virtual drives that have been bound to the target ports are then presented to external systems, such as a host system, as a physical drive.
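The physical/logical/virtual drive hierarchy above can be modeled compactly. The class and field names below are invented for this sketch; the patent describes the relationships, not this representation:

```python
# Illustrative model (names invented) of the drive hierarchy described
# above: physical memory subsystems are merged into a virtual drive,
# which is bound to a target port on the interface system and presented
# to the host as if it were a single physical drive.
from dataclasses import dataclass
from typing import Optional

@dataclass
class VirtualDrive:
    name: str
    subsystems: list              # physical memory subsystems spanned
    target_port: Optional[int] = None

    @property
    def capacity(self) -> int:
        # the virtual drive's capacity spans its physical members
        return sum(s["capacity"] for s in self.subsystems)

def bind_to_target(drive: VirtualDrive, port: int) -> VirtualDrive:
    """Bind the virtual drive to a target port on the interface system;
    the host then addresses it on that port as one drive."""
    drive.target_port = port
    return drive
```

In this sketch the binding step corresponds to the storage processing system configuring target ports on the interface system, after which external host systems see only the bound virtual drive.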
In addition, the configuration elements of solid state memory device 1210 could also alter how solid state memory device 1210 appears to a host system. Virtual subdivisions of the available storage space of device 1210 could be configured, where the configuration indicates a quantity and arrangement of virtual subdivisions. For example, these virtual subdivisions could present a plurality of virtual drive volumes to a host system. These virtual volumes could be provided over various ones of front-end or host interface links, such as ones of SAS links providing an associated volume. Thus, a single device 1210 could appear to a host system as several separate ‘physical’ drives over a single host interface or a plurality of host links comprising a host interface. As a further example, each of SAS links in host interface 460 could be configured to correspond to a separate virtual drive and each virtual drive could then be presented to host system 450 or to multiple host systems as separate volumes or drives over separate links. The various virtual drives could each comprise different configurations such as sizes, capacities, performances, redundancy, or other parameters.
Configuration pins P1-P4 could be employed to select among predetermined volume or drive configurations. For example, first ones of pins P1-P4 could select a first virtual volume and host interface configuration, and second ones of pins P1-P4 could select a second virtual volume and host interface configuration. If multiple virtual drives are employed, then individual ones of the virtual drives could be associated with individual ones of host interface links. The configuration pins P1-P4 could select among these interface and volume configurations. For example, in FIG. 8, a first plurality of links 860 could correspond to a first virtual drive, and a second plurality of links 860 could correspond to a second virtual drive. Thus, host system 850 would see flash storage device 801 as two separate drives, with each drive using ones of links 860. The virtual drives could each span all of memory subsystems 830, or could be apportioned across ones of memory subsystems 830, and consequently managed by storage processing system 820. The pins or user configuration interface could also alter identification parameters of a solid state device, such as addressing parameters, SCSI addresses or identifiers, among other identification parameters.
Data parallelization could be performed for each volume, where data received for a first volume is parallelized among memory subsystems associated with the first volume, and data received for a second volume is parallelized among memory subsystems associated with the second volume. The various volumes need not comprise exclusive memory subsystems, as the associated storage processor could determine and maintain storage allocation information to mix parallelization of data among all memory subsystems while maintaining separate volume allocations across the various memory subsystems. Thus, all memory subsystems could be employed for storage among multiple logical volumes, and the amount of storage allocated to each volume could be dynamically adjusted by modifying the storage allocation information accordingly. Configuration user interface 1250 or pins P1-P4 could be used to adjust these volume allocations and configurations.
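The per-volume parallelization over shared memory subsystems described above can be sketched as a striping routine that records ownership in storage allocation information. All names and the round-robin placement policy are invented for illustration:

```python
# Hedged sketch (names and policy invented): stripe incoming data for a
# volume across memory subsystems while tracking per-volume allocation,
# so multiple volumes can share all subsystems yet keep separate
# volume allocations, as described above.

def stripe(data: bytes, volume: str, subsystems: list, allocation: dict,
           portion_size: int = 4):
    """Split `data` into portions and place them round-robin across
    `subsystems`, recording (volume, portion index) ownership in
    `allocation` so separate volume allocations are maintained on
    shared media. Returns the list of (subsystem, portion) placements."""
    portions = [data[i:i + portion_size]
                for i in range(0, len(data), portion_size)]
    placements = []
    for n, portion in enumerate(portions):
        target = subsystems[n % len(subsystems)]
        allocation.setdefault(target, []).append((volume, n))
        placements.append((target, portion))
    return placements
```

Two volumes can both call `stripe` against the same subsystem list; the `allocation` map keeps their portions distinguishable, and dynamically resizing a volume amounts to updating that allocation information, consistent with the description above.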
For each of the configurations discussed above, a processing system, such as a storage processing system, could apply the configurations to device 1210. Associated firmware could be modified, updated, or executed using configurations of jumpers P1-P4 or configuration user interface 1250. Additionally, the processing system, such as processing system 120, 420, 713, or 820, can perform other functions, such as orchestrating communication between front-end interfaces and back-end interfaces, managing link aggregation, parallelizing data, determining addressing associated with parallelized data portions or segments, performing data integrity checks, such as error checking and correction, buffering data, and optimizing parallelized data portion size to maximize performance of the systems discussed herein. Other operations as discussed herein could be performed by associated processing systems. Additionally, front-end interface systems, such as interface systems 110, 410, 712, or 810, can provide performance functions such as link aggregation, error detection and correction, I/O processing, buffering, or other performance off-loading for features of host interfaces.
FIG. 13 includes side view diagrams illustrating storage system 1301. The diagrams illustrated in FIG. 13 are intended to illustrate the mechanical design and structure of storage system 1301. The upper diagram illustrates a side-view of an assembled storage system, whereas the lower diagram illustrates a cutaway/simplified view including a reduced number of elements of storage system 1301 to emphasize thermal design elements. Storage system 1301 is an example of devices 101, 401, 701, 801, or 1210, although devices 101, 401, 701, 801, or 1210 could use other configurations.
Storage system 1301 includes chassis 1310-1312 which provide structural support and mountings for mating the various printed circuit boards (PCBs) of storage system 1301. In this example, storage system 1301 includes four PCBs, namely PCBs 1320-1326. Each PCB has a plurality of integrated circuit chips (ICs) disposed thereon. The ICs could be attached by solder and/or adhesive to the associated PCB. As shown in FIG. 13, the ICs are arranged on both sides of many of the PCBs, but are not disposed on an outside surface of PCBs 1320 and 1326.
In order to maximize the number of ICs in storage system 1301, which also maximizes the memory density and reduces the size of storage system 1301, some outer surfaces of storage system 1301 are formed from surfaces of PCBs 1320 and 1326. In this manner, no additional casing or enclosure elements are employed on the outer surfaces defined by PCBs 1320 and 1326, and a usable volume for storage system 1301 is maximized for the external dimensions. In this example, chassis elements 1310-1312 form the left/right outer surfaces, and PCBs 1320 and 1326 form the top/bottom outer surfaces. Since this view is a side view, the ends projecting into the diagram and out of the diagram could be formed with further structural elements, such as chassis elements, end caps, connectors, or other elements. The outer surfaces of PCBs 1320 and 1326 could be coated with a non-conductive or protective coating, such as paint, solder mask, decals, stickers, or other coatings or layers.
In this example, chassis 1310-1312 are structural elements configured to mate with and hold the plurality of PCBs, where the chassis structural elements and the PCBs are assembled to comprise an enclosure for storage system 1301. In some examples, a tongue-and-groove style of configuration could be employed, such as slots or grooves to hold the edges of the plurality of PCBs. An outer surface of a PCB comprises a first outer surface of the enclosure and an outer surface of a second PCB comprises a second outer surface of the enclosure.
The lower diagram in FIG. 13 shows a simplified and cut-away side view of some elements of storage system 1301. Chassis 1310 and PCB 1322 are included to emphasize thermal management features. Internal to storage system 1301 are high power components, namely components which use a relatively large amount of power and thus become hot during operation. Due to the high-density mechanical design of storage system 1301, heat from various hot ICs is desired to be channeled to outside enclosure surfaces for radiation and subsequent cooling. These hot ICs may not have immediate access to outside surfaces, and may be disposed in centralized locations.
In the lower diagram, a high-power IC is disposed on one surface of PCB 1322, namely IC 1350. This IC could include a processor or other high-density and high-power utilization integrated circuit, such as processing system 120, 420, 713, or 820, or chip-scale device 710. Other ICs could be configured in this manner as well. Heat spreader 1360 is thermally bonded to IC 1350, possibly with heat sink compound, thermally conductive adhesive, or with fasteners connected to PCB 1322, among other thermal bonding techniques to maximize heat transfer from IC 1350 to heat spreader 1360. Heat spreader 1360 also overhangs IC 1350 and is further thermally bonded to a low thermal resistance interface 1362. Heat spreader 1360 and interface 1362 could be thermally bonded similarly to heat spreader 1360 and IC 1350. Low thermal resistance interface 1362 is then thermally bonded to chassis 1310, possibly through a groove or slot in chassis 1310. In this example, chassis 1310 and 1312 comprise thermally conductive materials, such as metal, ceramic, plastic, or other material, and are able to sink heat away from high-power ICs, such as high-power IC 1350.
Heat spreader 1360 comprises any material that efficiently transports heat from a hot location to a cooler location, such as by heat dissipation, conduction, or other heat transfer techniques. For example, heat spreader 1360 could comprise a metal composition, such as copper. In other examples, graphite or planar heat pipes are employed. Interface 1362 comprises any material with a high thermal conductivity and transports heat to chassis 1310 by any physical means, such as conduction, convection, advection, radiation, or a combination thereof. For example, interface 1362 could comprise metal compositions, heat pipes, graphite, or other materials, including combinations thereof. Thus, thermal conductivity anisotropy is aligned for heat spreader 1360 and interface 1362 such that the thermal resistance minimum is aligned with the direction of optimal heat flow. In a further example, heat spreader 1360 is elongated and allowed to thermally contact chassis 1310, and interface 1362 could be omitted.
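The heat path described above (IC 1350 to heat spreader 1360 to interface 1362 to chassis 1310) can be treated, to first order, as thermal resistances in series. The following back-of-the-envelope sketch uses invented power and resistance values; none of these figures come from the patent:

```python
# Illustrative first-order model (all values invented) for the series
# heat path described above: junction temperature is ambient plus
# dissipated power times the sum of the series thermal resistances.

def junction_temperature(power_w, ambient_c, resistances_c_per_w):
    """Steady-state junction temperature for a series thermal path:
    T_j = T_ambient + P * sum(R_i), with R_i in deg C per watt."""
    return ambient_c + power_w * sum(resistances_c_per_w)
```

With illustrative values of a 5 W IC, 35 deg C ambient, and series resistances of 0.5, 0.3, and 1.2 deg C/W for the spreader bond, spreader-to-interface bond, and interface-to-chassis path respectively, the junction would sit about 10 deg C above ambient; such a model shows why minimizing each bond's thermal resistance, as described above, matters.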
The included descriptions and figures depict specific embodiments to teach those skilled in the art how to make and use the best mode. For the purpose of teaching inventive principles, some conventional aspects have been simplified or omitted. Those skilled in the art will appreciate variations from these embodiments that fall within the scope of the invention. Those skilled in the art will also appreciate that the features described above can be combined in various ways to form multiple embodiments. As a result, the invention is not limited to the specific embodiments described above, but only by the claims and their equivalents.

Claims (10)

What is claimed is:
1. A method of operating a data storage device, the method comprising:
monitoring a utilization level of at least a processing system of the data storage device;
determining a quantity of links to aggregate for an external communication interface of the data storage device based at least on the utilization level of at least the processing system;
providing an indication of the quantity of links to aggregate for the external communication interface to an interface system of the data storage device;
providing an indication of the utilization level to a control circuit of the data storage device; and
throttling a clock frequency associated with at least the processing system of the data storage device based at least on the indication of the utilization level.
2. The method of claim 1, wherein throttling the clock frequency associated with at least the processing system of the data storage device based at least on the indication of the utilization level comprises changing the clock frequency proportional to the utilization level.
3. The method of claim 1, further comprising:
determining a voltage level based at least on the indication of the utilization level;
providing a voltage signal at the voltage level to the processing system; and
wherein throttling the clock frequency comprises throttling the clock frequency based at least on the voltage signal.
4. The method of claim 1, wherein monitoring the utilization level of at least the processing system of the data storage device comprises, in the processing system, executing a software process configured to monitor the utilization level of at least the processing system.
5. The method of claim 4, further comprising:
in the processing system, providing the indication of the utilization level to an external pin of the processing system.
6. A data storage device, comprising:
a processing system configured to monitor a utilization level of at least the processing system;
the processing system configured to determine a quantity of links to aggregate for an external communication interface of the data storage device based at least on the utilization level of at least the processing system;
the processing system configured to provide an indication of the quantity of links to aggregate for the external communication interface to an interface system of the data storage device;
the processing system configured to provide an indication of the utilization level to a control circuit of the data storage device; and
the control circuit configured to throttle a clock frequency associated with at least the processing system based at least on the indication of the utilization level.
7. The data storage device of claim 6, comprising:
the control circuit configured to change the clock frequency proportional to the utilization level to throttle the clock frequency.
8. The data storage device of claim 6, comprising:
the control circuit configured to determine a voltage level based at least on the indication of the utilization level;
the control circuit configured to provide a voltage signal at the voltage level to the processing system; and
the processing system configured to throttle the clock frequency based at least on the voltage signal.
9. The data storage device of claim 6, comprising:
the processing system configured to execute a software process configured to monitor the utilization level of at least the processing system.
10. The data storage device of claim 6, comprising:
the processing system configured to provide the indication of the utilization level to an external pin of the processing system.
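As a non-normative illustration of the throttling behavior recited in claims 1-3 (not the claims themselves, and with all numeric ranges invented), utilization-proportional clock throttling with a derived voltage level could be sketched as:

```python
# Non-normative sketch of the behavior in claims 1-3: throttle the clock
# frequency in proportion to a monitored utilization level, deriving a
# voltage level from the same indication (dynamic voltage and frequency
# scaling). All numeric ranges here are illustrative only.

def throttle(utilization, f_min_mhz=200.0, f_max_mhz=1000.0,
             v_min=0.8, v_max=1.2):
    """Map a utilization level in [0.0, 1.0] to (clock MHz, core volts),
    changing the clock proportional to the utilization level and
    determining a voltage level from the same indication."""
    u = min(max(utilization, 0.0), 1.0)   # clamp the indication
    freq = f_min_mhz + u * (f_max_mhz - f_min_mhz)
    volts = v_min + u * (v_max - v_min)
    return freq, volts
```

In this sketch the clamped utilization stands in for the indication provided to the control circuit, and the returned pair stands in for the clock and voltage signals the control circuit would apply to the processing system.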
US14/204,423 2010-10-10 2014-03-11 Systems and methods for optimizing data storage among a plurality of solid state memory subsystems Expired - Fee Related US9285827B2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US14/204,423 US9285827B2 (en) 2010-10-10 2014-03-11 Systems and methods for optimizing data storage among a plurality of solid state memory subsystems
US15/017,071 US10191667B2 (en) 2010-10-10 2016-02-05 Systems and methods for optimizing data storage among a plurality of storage drives
US16/254,721 US10795584B2 (en) 2010-10-10 2019-01-23 Data storage among a plurality of storage drives
US17/019,601 US11366591B2 (en) 2010-10-10 2020-09-14 Data storage among a plurality of storage drives

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US39165110P 2010-10-10 2010-10-10
US13/270,084 US8688926B2 (en) 2010-10-10 2011-10-10 Systems and methods for optimizing data storage among a plurality of solid state memory subsystems
US14/204,423 US9285827B2 (en) 2010-10-10 2014-03-11 Systems and methods for optimizing data storage among a plurality of solid state memory subsystems

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/270,084 Continuation US8688926B2 (en) 2010-10-10 2011-10-10 Systems and methods for optimizing data storage among a plurality of solid state memory subsystems

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/017,071 Continuation US10191667B2 (en) 2010-10-10 2016-02-05 Systems and methods for optimizing data storage among a plurality of storage drives

Publications (2)

Publication Number Publication Date
US20140201562A1 US20140201562A1 (en) 2014-07-17
US9285827B2 true US9285827B2 (en) 2016-03-15

Family

ID=45926053

Family Applications (5)

Application Number Title Priority Date Filing Date
US13/270,084 Active 2032-09-13 US8688926B2 (en) 2010-10-10 2011-10-10 Systems and methods for optimizing data storage among a plurality of solid state memory subsystems
US14/204,423 Expired - Fee Related US9285827B2 (en) 2010-10-10 2014-03-11 Systems and methods for optimizing data storage among a plurality of solid state memory subsystems
US15/017,071 Active 2032-01-16 US10191667B2 (en) 2010-10-10 2016-02-05 Systems and methods for optimizing data storage among a plurality of storage drives
US16/254,721 Active US10795584B2 (en) 2010-10-10 2019-01-23 Data storage among a plurality of storage drives
US17/019,601 Active US11366591B2 (en) 2010-10-10 2020-09-14 Data storage among a plurality of storage drives

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/270,084 Active 2032-09-13 US8688926B2 (en) 2010-10-10 2011-10-10 Systems and methods for optimizing data storage among a plurality of solid state memory subsystems

Family Applications After (3)

Application Number Title Priority Date Filing Date
US15/017,071 Active 2032-01-16 US10191667B2 (en) 2010-10-10 2016-02-05 Systems and methods for optimizing data storage among a plurality of storage drives
US16/254,721 Active US10795584B2 (en) 2010-10-10 2019-01-23 Data storage among a plurality of storage drives
US17/019,601 Active US11366591B2 (en) 2010-10-10 2020-09-14 Data storage among a plurality of storage drives

Country Status (1)

Country Link
US (5) US8688926B2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10365981B2 (en) 2016-08-19 2019-07-30 Samsung Electronics Co., Ltd. Adaptive multipath fabric for balanced performance and high availability
US11061574B2 (en) 2018-12-05 2021-07-13 Samsung Electronics Co., Ltd. Accelerated data processing in SSDs comprises SPAs an APM and host processor whereby the SPAs has multiple of SPEs
US11200194B1 (en) 2018-02-23 2021-12-14 MagStor Inc. Magnetic tape drive
US11892961B1 (en) 2018-02-23 2024-02-06 MagStor Inc. Magnetic tape drive and assembly for a tape drive

Families Citing this family (112)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8688926B2 (en) * 2010-10-10 2014-04-01 Liqid Inc. Systems and methods for optimizing data storage among a plurality of solid state memory subsystems
TWI417727B (en) * 2010-11-22 2013-12-01 Phison Electronics Corp Memory storage device, memory controller thereof, and method for responding instruction sent from host thereof
JP6049716B2 (en) * 2011-07-27 2016-12-21 シーゲイト テクノロジー エルエルシーSeagate Technology LLC Technology for secure storage hijacking protection
JP2013061799A (en) 2011-09-13 2013-04-04 Toshiba Corp Memory device, control method for memory device and controller
TWI430094B (en) * 2011-09-22 2014-03-11 Phison Electronics Corp Memory storage device, memory controller, and temperature management method
US9152568B1 (en) * 2011-12-05 2015-10-06 Seagate Technology Llc Environmental-based device operation
US9158726B2 (en) 2011-12-16 2015-10-13 Inphi Corporation Self terminated dynamic random access memory
US8949473B1 (en) * 2012-02-16 2015-02-03 Inphi Corporation Hybrid memory blade
US9069717B1 (en) 2012-03-06 2015-06-30 Inphi Corporation Memory parametric improvements
WO2014058854A1 (en) * 2012-10-09 2014-04-17 Securboration, Inc. Systems and methods for automatically parallelizing sequential code
US9082472B2 (en) * 2012-11-15 2015-07-14 Taejin Info Tech Co., Ltd. Back-up power management for efficient battery usage
CN104346232A (en) * 2013-08-06 2015-02-11 慧荣科技股份有限公司 Data storage device and access limiting method thereof
US9231701B2 (en) 2013-08-29 2016-01-05 Corning Optical Communications Wireless Ltd Attenuation systems with cooling functions and related components and methods
US10193377B2 (en) * 2013-10-30 2019-01-29 Samsung Electronics Co., Ltd. Semiconductor energy harvest and storage system for charging an energy storage device and powering a controller and multi-sensor memory module
EP3063641A4 (en) * 2013-10-31 2017-07-05 Hewlett-Packard Enterprise Development LP Target port processing of a data transfer
CN103809920B (en) * 2014-02-13 2017-05-17 杭州电子科技大学 Realizing method of ultra-large capacity solid state disk
KR102128472B1 (en) 2014-02-17 2020-06-30 삼성전자주식회사 Storage device for performing in-storage computing operations, method thereof, and system including same
WO2015126429A1 (en) 2014-02-24 2015-08-27 Hewlett-Packard Development Company, L.P. Repurposable buffers for target port processing of a data transfer
US10467166B2 (en) 2014-04-25 2019-11-05 Liqid Inc. Stacked-device peripheral storage card
US10114784B2 (en) 2014-04-25 2018-10-30 Liqid Inc. Statistical power handling in a scalable storage system
US9798636B2 (en) 2014-06-23 2017-10-24 Liqid Inc. Front end traffic handling in modular switched fabric based data storage systems
CN106462498B (en) * 2014-06-23 2019-08-02 利奇德股份有限公司 Modularization architecture for exchanging for data-storage system
US9904651B2 (en) * 2014-07-31 2018-02-27 Samsung Electronics Co., Ltd. Operating method of controller for setting link between interfaces of electronic devices, and storage device including controller
JP2016053757A (en) * 2014-09-02 2016-04-14 株式会社東芝 Memory system
US9653124B2 (en) 2014-09-04 2017-05-16 Liqid Inc. Dual-sided rackmount storage assembly
US10362107B2 (en) 2014-09-04 2019-07-23 Liqid Inc. Synchronization of storage transactions in clustered storage systems
US9760295B2 (en) * 2014-09-05 2017-09-12 Toshiba Memory Corporation Atomic rights in a distributed memory system
US10031691B2 (en) 2014-09-25 2018-07-24 International Business Machines Corporation Data integrity in deduplicated block storage environments
US9712619B2 (en) 2014-11-04 2017-07-18 Pavilion Data Systems, Inc. Virtual non-volatile memory express drive
US9565269B2 (en) * 2014-11-04 2017-02-07 Pavilion Data Systems, Inc. Non-volatile memory express over ethernet
US10198183B2 (en) 2015-02-06 2019-02-05 Liqid Inc. Tunneling of storage operations between storage nodes
EP3062142B1 (en) 2015-02-26 2018-10-03 Nokia Technologies OY Apparatus for a near-eye display
US9710170B2 (en) * 2015-03-05 2017-07-18 Western Digital Technologies, Inc. Processing data storage commands for enclosure services
US10019388B2 (en) 2015-04-28 2018-07-10 Liqid Inc. Enhanced initialization for data storage assemblies
US10191691B2 (en) 2015-04-28 2019-01-29 Liqid Inc. Front-end quality of service differentiation in storage system operations
US10108422B2 (en) 2015-04-28 2018-10-23 Liqid Inc. Multi-thread network stack buffering of data frames
US10067905B2 (en) * 2015-05-26 2018-09-04 Plasmability, Llc Digital interface for manufacturing equipment
CN106557143B (en) * 2015-09-28 2020-02-28 伊姆西Ip控股有限责任公司 Apparatus and method for data storage device
US9933954B2 (en) * 2015-10-19 2018-04-03 Nxp Usa, Inc. Partitioned memory having pipeline writes
WO2017078698A1 (en) * 2015-11-04 2017-05-11 Hewlett-Packard Development Company, L.P. Throttling components of a storage device
US10019403B2 (en) 2015-11-04 2018-07-10 International Business Machines Corporation Mapping data locations using data transmissions
US10275160B2 (en) 2015-12-21 2019-04-30 Intel Corporation Method and apparatus to enable individual non volatile memory express (NVME) input/output (IO) Queues on differing network addresses of an NVME controller
US10013168B2 (en) * 2015-12-24 2018-07-03 Intel Corporation Disaggregating block storage controller stacks
US10381055B2 (en) 2015-12-26 2019-08-13 Intel Corporation Flexible DLL (delay locked loop) calibration
US10255215B2 (en) 2016-01-29 2019-04-09 Liqid Inc. Enhanced PCIe storage device form factors
US10031845B2 (en) 2016-04-01 2018-07-24 Intel Corporation Method and apparatus for processing sequential writes to a block group of physical blocks in a memory device
US10019198B2 (en) * 2016-04-01 2018-07-10 Intel Corporation Method and apparatus for processing sequential writes to portions of an addressable unit
TWI721319B (en) 2016-06-10 2021-03-11 美商利魁得股份有限公司 Multi-port interposer architectures in data storage systems
KR102683728B1 (en) * 2016-07-22 2024-07-09 삼성전자주식회사 Method of achieving low write latency in a data starage system
US11294839B2 (en) 2016-08-12 2022-04-05 Liqid Inc. Emulated telemetry interfaces for fabric-coupled computing units
US11880326B2 (en) 2016-08-12 2024-01-23 Liqid Inc. Emulated telemetry interfaces for computing units
WO2018031937A1 (en) 2016-08-12 2018-02-15 Liqid Inc. Disaggregated fabric-switched computing platform
US10200376B2 (en) 2016-08-24 2019-02-05 Intel Corporation Computer product, method, and system to dynamically provide discovery services for host nodes of target systems and storage resources in a network
US10176116B2 (en) 2016-09-28 2019-01-08 Intel Corporation Computer product, method, and system to provide discovery services to discover target storage resources and register a configuration of virtual target storage resources mapping to the target storage resources and an access control list of host nodes allowed to access the virtual target storage resources
US10348605B2 (en) * 2016-10-28 2019-07-09 Western Digital Technologies, Inc. Embedding analyzer functionality in storage devices
US10650552B2 (en) 2016-12-29 2020-05-12 Magic Leap, Inc. Systems and methods for augmented reality
EP3343267B1 (en) 2016-12-30 2024-01-24 Magic Leap, Inc. Polychromatic light out-coupling apparatus, near-eye displays comprising the same, and method of out-coupling polychromatic light
WO2018200761A1 (en) 2017-04-27 2018-11-01 Liqid Inc. Pcie fabric connectivity expansion card
US10795842B2 (en) 2017-05-08 2020-10-06 Liqid Inc. Fabric switched graphics modules within storage enclosures
US11436087B2 (en) * 2017-05-31 2022-09-06 Everspin Technologies, Inc. Systems and methods for implementing and managing persistent memory
US10168905B1 (en) * 2017-06-07 2019-01-01 International Business Machines Corporation Multi-channel nonvolatile memory power loss management
US10578870B2 (en) 2017-07-26 2020-03-03 Magic Leap, Inc. Exit pupil expander
KR102352156B1 (en) * 2017-10-26 2022-01-17 삼성전자주식회사 Slave device for performing address resolution protocol and operation method thereof
US11544168B2 (en) * 2017-10-30 2023-01-03 SK Hynix Inc. Memory system
KR102414047B1 (en) 2017-10-30 2022-06-29 에스케이하이닉스 주식회사 Convergence memory device and method thereof
KR102596429B1 (en) 2017-12-10 2023-10-30 매직 립, 인코포레이티드 Anti-reflection coatings on optical waveguides
WO2019126331A1 (en) 2017-12-20 2019-06-27 Magic Leap, Inc. Insert for augmented reality viewing device
US10866798B2 (en) * 2017-12-28 2020-12-15 Intel Corporation Firmware upgrade method and apparatus
US10755676B2 (en) 2018-03-15 2020-08-25 Magic Leap, Inc. Image correction due to deformation of components of a viewing device
US10739844B2 (en) 2018-05-02 2020-08-11 Intel Corporation System, apparatus and method for optimized throttling of a processor
CN112601975B (en) 2018-05-31 2024-09-06 奇跃公司 Radar head pose positioning
WO2020010097A1 (en) 2018-07-02 2020-01-09 Magic Leap, Inc. Pixel intensity modulation using modifying gain values
US11856479B2 (en) 2018-07-03 2023-12-26 Magic Leap, Inc. Systems and methods for virtual and augmented reality along a route with markers
WO2020010226A1 (en) 2018-07-03 2020-01-09 Magic Leap, Inc. Systems and methods for virtual and augmented reality
US11624929B2 (en) 2018-07-24 2023-04-11 Magic Leap, Inc. Viewing device with dust seal integration
EP3827224B1 (en) 2018-07-24 2023-09-06 Magic Leap, Inc. Temperature dependent calibration of movement detection devices
JP7401519B2 (en) 2018-08-02 2023-12-19 マジック リープ, インコーポレイテッド Visual recognition system with interpupillary distance compensation based on head motion
EP3830631A4 (en) 2018-08-03 2021-10-27 Magic Leap, Inc. Unfused pose-based drift correction of a fused pose of a totem in a user interaction system
US10660228B2 (en) 2018-08-03 2020-05-19 Liqid Inc. Peripheral storage card with offset slot alignment
US11068440B2 (en) * 2018-08-17 2021-07-20 Jimmy C Lin Application-specific computing system and method
EP3840645A4 (en) 2018-08-22 2021-10-20 Magic Leap, Inc. Patient viewing system
EP3881279A4 (en) 2018-11-16 2022-08-17 Magic Leap, Inc. Image size triggered clarification to maintain image sharpness
CN113454507B (en) 2018-12-21 2024-05-07 Magic Leap, Inc. Cavitation structure for promoting total internal reflection within a waveguide
US10585827B1 (en) 2019-02-05 2020-03-10 Liqid Inc. PCIe fabric enabled peer-to-peer communications
CN113518961B (en) * 2019-02-06 2024-09-24 Magic Leap, Inc. Clock speed determination and adjustment based on target intent
EP3939030A4 (en) 2019-03-12 2022-11-30 Magic Leap, Inc. Registration of local content between first and second augmented reality viewers
WO2020195233A1 (en) * 2019-03-28 2020-10-01 NEC Corporation Wireless packet transmitting device, wireless packet transmitting method, and non-transitory computer-readable medium
US11256649B2 (en) 2019-04-25 2022-02-22 Liqid Inc. Machine templates for predetermined compute units
WO2020219801A1 (en) 2019-04-25 2020-10-29 Liqid Inc. Multi-protocol communication fabric control
CN114127837A (en) 2019-05-01 2022-03-01 Magic Leap, Inc. Content providing system and method
US11573708B2 (en) 2019-06-25 2023-02-07 Micron Technology, Inc. Fail-safe redundancy in aggregated and virtualized solid state drives
US11055249B2 (en) 2019-06-25 2021-07-06 Micron Technology, Inc. Access optimization in aggregated and virtualized solid state drives
US11768613B2 (en) 2019-06-25 2023-09-26 Micron Technology, Inc. Aggregation and virtualization of solid state drives
US10942881B2 (en) 2019-06-25 2021-03-09 Micron Technology, Inc. Parallel operations in aggregated and virtualized solid state drives
US11513923B2 (en) 2019-06-25 2022-11-29 Micron Technology, Inc. Dynamic fail-safe redundancy in aggregated and virtualized solid state drives
US11762798B2 (en) 2019-06-25 2023-09-19 Micron Technology, Inc. Aggregated and virtualized solid state drives with multiple host interfaces
US10942846B2 (en) * 2019-06-25 2021-03-09 Micron Technology, Inc. Aggregated and virtualized solid state drives accessed via multiple logical address spaces
WO2021021670A1 (en) 2019-07-26 2021-02-04 Magic Leap, Inc. Systems and methods for augmented reality
US11163497B2 (en) * 2019-07-31 2021-11-02 EMC IP Holding Company LLC Leveraging multi-channel SSD for application-optimized workload and raid optimization
US11829250B2 (en) * 2019-09-25 2023-11-28 Veritas Technologies Llc Systems and methods for efficiently backing up large datasets
US11281275B2 (en) * 2019-10-10 2022-03-22 Dell Products L.P. System and method for using input power line telemetry in an information handling system
EP4058936A4 (en) 2019-11-14 2023-05-03 Magic Leap, Inc. Systems and methods for virtual and augmented reality
JP2023502927A (en) 2019-11-15 2023-01-26 Magic Leap, Inc. Visualization system for use in a surgical environment
US11593240B2 (en) 2020-02-12 2023-02-28 Samsung Electronics Co., Ltd. Device and method for verifying a component of a storage device
US11822826B2 (en) * 2020-02-20 2023-11-21 Raytheon Company Sensor storage system
US12056394B2 (en) * 2020-08-13 2024-08-06 Cadence Design Systems, Inc. Memory interface training
TWI755068B (en) * 2020-09-21 2022-02-11 宜鼎國際股份有限公司 Data storage device with system operation capability
CN114895847A (en) * 2020-10-12 2022-08-12 Yangtze Memory Technologies Co., Ltd. Nonvolatile memory, storage device, and method of operating nonvolatile memory
US11442776B2 (en) 2020-12-11 2022-09-13 Liqid Inc. Execution job compute unit composition in computing clusters
US12081526B2 (en) * 2021-05-19 2024-09-03 Western Digital Technologies, Inc. Data storage device data recovery using remote network storage
TWI789020B (en) * 2021-09-23 2023-01-01 宇瞻科技股份有限公司 Control system and control method of storage device
CN114625679A (en) * 2021-10-09 2022-06-14 Shenzhen Hosin Global Electronics Co., Ltd. Interface switching device and method

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030110423A1 (en) * 2001-12-11 2003-06-12 Advanced Micro Devices, Inc. Variable maximum die temperature based on performance state
US20030126478A1 (en) * 2001-12-28 2003-07-03 Burns James S. Multiple mode power throttle mechanism
US20060277206A1 (en) * 2005-06-02 2006-12-07 Bailey Philip G Automated reporting of computer system metrics
US7243145B1 (en) * 2002-09-30 2007-07-10 Electronic Data Systems Corporation Generation of computer resource utilization data per computer application
US7260487B2 (en) * 2005-11-29 2007-08-21 International Business Machines Corporation Histogram difference method and system for power/performance measurement and management
US20080198744A1 (en) * 2002-08-14 2008-08-21 Siemens Aktiengesellschaft Access control for packet-oriented networks
US20090006837A1 (en) * 2007-06-29 2009-01-01 Rothman Michael A Method and apparatus for improved memory reliability, availability and serviceability
US20090193203A1 (en) * 2008-01-24 2009-07-30 Brittain Mark A System to Reduce Latency by Running a Memory Channel Frequency Fully Asynchronous from a Memory Device Frequency
US20090193201A1 (en) * 2008-01-24 2009-07-30 Brittain Mark A System to Increase the Overall Bandwidth of a Memory Channel By Allowing the Memory Channel to Operate at a Frequency Independent from a Memory Device Frequency
US20090190427A1 (en) * 2008-01-24 2009-07-30 Brittain Mark A System to Enable a Memory Hub Device to Manage Thermal Conditions at a Memory Device Level Transparent to a Memory Controller
US7606960B2 (en) * 2004-03-26 2009-10-20 Intel Corporation Apparatus for adjusting a clock frequency of a variable speed bus
US7725757B2 (en) * 2004-03-03 2010-05-25 Intel Corporation Method and system for fast frequency switch for a power throttle in an integrated device
US8125919B1 (en) * 2009-03-24 2012-02-28 Sprint Spectrum L.P. Method and system for selectively releasing network resources

Family Cites Families (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5828207A (en) 1993-04-20 1998-10-27 The United States Of America As Represented By The Secretary Of The Navy Hold-up circuit with safety discharge for preventing shut-down by momentary power interruption
US6061750A (en) 1998-02-20 2000-05-09 International Business Machines Corporation Failover system for a DASD storage controller reconfiguring a first processor, a bridge, a second host adaptor, and a second device adaptor upon a second processor failure
US6411986B1 (en) 1998-11-10 2002-06-25 Netscaler, Inc. Internet client-server multiplexer
US7934074B2 (en) * 1999-08-04 2011-04-26 Super Talent Electronics Flash module with plane-interleaved sequential writes to restricted-write flash chips
US7877542B2 (en) * 2000-01-06 2011-01-25 Super Talent Electronics, Inc. High integration of intelligent non-volatile memory device
US6325636B1 (en) 2000-07-20 2001-12-04 Rlx Technologies, Inc. Passive midplane for coupling web server processing cards with a network interface(s)
US7505889B2 (en) * 2002-02-25 2009-03-17 Zoran Corporation Transcoding media system
US7315954B2 (en) * 2002-08-02 2008-01-01 Seiko Epson Corporation Hardware switching apparatus for soft power-down and remote power-up
US7365454B2 (en) 2003-02-11 2008-04-29 O2Micro International Limited Enhanced power switch device enabling modular USB PC cards
JPWO2004079583A1 (en) * 2003-03-05 2006-06-08 Fujitsu Limited Data transfer control device and DMA data transfer control method
JP4710368B2 (en) 2005-03-18 2011-06-29 Fujifilm Corporation Coating film curing method and apparatus
JP4394624B2 (en) 2005-09-21 2010-01-06 Hitachi, Ltd. Computer system and I/O bridge
US8344475B2 (en) * 2006-11-29 2013-01-01 Rambus Inc. Integrated circuit heating to effect in-situ annealing
US8150800B2 (en) 2007-03-28 2012-04-03 Netapp, Inc. Advanced clock synchronization technique
US8706914B2 (en) * 2007-04-23 2014-04-22 David D. Duchesneau Computing infrastructure
US20080281938A1 (en) 2007-05-09 2008-11-13 Oracle International Corporation Selecting a master node in a multi-node computer system
US7653773B2 (en) * 2007-10-03 2010-01-26 International Business Machines Corporation Dynamically balancing bus bandwidth
US9146892B2 (en) 2007-10-11 2015-09-29 Broadcom Corporation Method and system for improving PCI-E L1 ASPM exit latency
US8582448B2 (en) * 2007-10-22 2013-11-12 Dell Products L.P. Method and apparatus for power throttling of highspeed multi-lane serial links
US8103810B2 (en) 2008-05-05 2012-01-24 International Business Machines Corporation Native and non-native I/O virtualization in a single adapter
KR101515525B1 (en) * 2008-10-02 2015-04-28 Samsung Electronics Co., Ltd. Memory device and operating method of memory device
US8656117B1 (en) * 2008-10-30 2014-02-18 Nvidia Corporation Read completion data management
US8037247B2 (en) 2008-12-12 2011-10-11 At&T Intellectual Property I, L.P. Methods, computer program products, and systems for providing an upgradeable hard disk
WO2010096263A2 (en) 2009-02-17 2010-08-26 Rambus Inc. Atomic-operation coalescing technique in multi-chip systems
US8725946B2 (en) * 2009-03-23 2014-05-13 Ocz Storage Solutions, Inc. Mass storage system and method of using hard disk, solid-state media, PCIe edge connector, and raid controller
US8677180B2 (en) 2010-06-23 2014-03-18 International Business Machines Corporation Switch failover control in a multiprocessor computer system
US20120030544A1 (en) * 2010-07-27 2012-02-02 Fisher-Jeffes Timothy Perrin Accessing Memory for Data Decoding
US8688926B2 (en) * 2010-10-10 2014-04-01 Liqid Inc. Systems and methods for optimizing data storage among a plurality of solid state memory subsystems
EP2652623B1 (en) 2010-12-13 2018-08-01 SanDisk Technologies LLC Apparatus, system, and method for auto-commit memory
US8589723B2 (en) 2010-12-22 2013-11-19 Intel Corporation Method and apparatus to provide a high availability solid state drive
US8954798B2 (en) 2011-02-11 2015-02-10 Taejin Info Tech Co., Ltd. Alarm-based backup and restoration for a semiconductor storage device
US8712975B2 (en) 2011-03-08 2014-04-29 Rackspace Us, Inc. Modification of an object replica
US8792273B2 (en) 2011-06-13 2014-07-29 SMART Storage Systems, Inc. Data storage system with power cycle management and method of operation thereof
GB2493132B (en) 2011-07-11 2018-02-28 Metaswitch Networks Ltd Controlling an apparatus in a LAN by selecting between first and second hardware interfaces for performing data communication

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030110423A1 (en) * 2001-12-11 2003-06-12 Advanced Micro Devices, Inc. Variable maximum die temperature based on performance state
US20030126478A1 (en) * 2001-12-28 2003-07-03 Burns James S. Multiple mode power throttle mechanism
US20080198744A1 (en) * 2002-08-14 2008-08-21 Siemens Aktiengesellschaft Access control for packet-oriented networks
US7243145B1 (en) * 2002-09-30 2007-07-10 Electronic Data Systems Corporation Generation of computer resource utilization data per computer application
US7725757B2 (en) * 2004-03-03 2010-05-25 Intel Corporation Method and system for fast frequency switch for a power throttle in an integrated device
US7606960B2 (en) * 2004-03-26 2009-10-20 Intel Corporation Apparatus for adjusting a clock frequency of a variable speed bus
US20060277206A1 (en) * 2005-06-02 2006-12-07 Bailey Philip G Automated reporting of computer system metrics
US7260487B2 (en) * 2005-11-29 2007-08-21 International Business Machines Corporation Histogram difference method and system for power/performance measurement and management
US20090006837A1 (en) * 2007-06-29 2009-01-01 Rothman Michael A Method and apparatus for improved memory reliability, availability and serviceability
US20090190427A1 (en) * 2008-01-24 2009-07-30 Brittain Mark A System to Enable a Memory Hub Device to Manage Thermal Conditions at a Memory Device Level Transparent to a Memory Controller
US20090193201A1 (en) * 2008-01-24 2009-07-30 Brittain Mark A System to Increase the Overall Bandwidth of a Memory Channel By Allowing the Memory Channel to Operate at a Frequency Independent from a Memory Device Frequency
US20090193203A1 (en) * 2008-01-24 2009-07-30 Brittain Mark A System to Reduce Latency by Running a Memory Channel Frequency Fully Asynchronous from a Memory Device Frequency
US8125919B1 (en) * 2009-03-24 2012-02-28 Sprint Spectrum L.P. Method and system for selectively releasing network resources

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Aragon et al., "Control Speculation for Energy-Efficient Next-Generation Superscalar Processors", IEEE Transactions on Computers, vol. 55, no. 3, Mar. 2006, pp. 281-291. *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10365981B2 (en) 2016-08-19 2019-07-30 Samsung Electronics Co., Ltd. Adaptive multipath fabric for balanced performance and high availability
US11693747B2 (en) 2016-08-19 2023-07-04 Samsung Electronics Co., Ltd. Adaptive multipath fabric for balanced performance and high availability
US11200194B1 (en) 2018-02-23 2021-12-14 MagStor Inc. Magnetic tape drive
US11892961B1 (en) 2018-02-23 2024-02-06 MagStor Inc. Magnetic tape drive and assembly for a tape drive
US11061574B2 (en) 2018-12-05 2021-07-13 Samsung Electronics Co., Ltd. Accelerated data processing in SSDs comprises SPAs an APM and host processor whereby the SPAs has multiple of SPEs
US11112972B2 (en) 2018-12-05 2021-09-07 Samsung Electronics Co., Ltd. System and method for accelerated data processing in SSDs
US11768601B2 (en) 2018-12-05 2023-09-26 Samsung Electronics Co., Ltd. System and method for accelerated data processing in SSDS

Also Published As

Publication number Publication date
US11366591B2 (en) 2022-06-21
US20200409564A1 (en) 2020-12-31
US20160154591A1 (en) 2016-06-02
US20120089854A1 (en) 2012-04-12
US10191667B2 (en) 2019-01-29
US8688926B2 (en) 2014-04-01
US20140201562A1 (en) 2014-07-17
US10795584B2 (en) 2020-10-06
US20190155519A1 (en) 2019-05-23

Similar Documents

Publication Publication Date Title
US11366591B2 (en) Data storage among a plurality of storage drives
US10318164B2 (en) Programmable input/output (PIO) engine interface architecture with direct memory access (DMA) for multi-tagging scheme for storage devices
CN106909314B (en) Storage system and control method
US8719495B2 (en) Concatenating a first raid with a second raid
US9959058B1 (en) Utilizing flash optimized layouts which minimize wear of internal flash memory of solid state drives
KR102663302B1 (en) Data aggregation in zns drive
US9727267B1 (en) Power management and monitoring for storage devices
US10235069B2 (en) Load balancing by dynamically transferring memory range assignments
US20110035540A1 (en) Flash blade system architecture and method
TW201826909A (en) Modular carrier form factors for computing platforms
EP3926451B1 (en) Communication of data relocation information by storage device to host to improve system performance
US10042585B2 (en) Pervasive drive operating statistics on SAS drives
US10095432B2 (en) Power management and monitoring for storage devices
JP2017049965A (en) Storage and storage system
CN108205478B (en) Intelligent sequential SCSI physical layer power management
US11194489B2 (en) Zone-based device with control level selected by the host
WO2024063821A1 (en) Dynamic and shared cmb and hmb allocation
US11861224B2 (en) Data transfer management from host buffers
Micheloni et al. Solid state drives (ssds)
US20180113611A1 (en) Thunderbolt Flash Drive
KR20190102998A (en) Data storage device and operating method thereof
JP2018195185A (en) Storage device and control method
KR20210006163A (en) Controller, memory system and operating method thereof

Legal Events

Date Code Title Description
AS Assignment

Owner name: LIQID INC., COLORADO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PURESILICON, INC.;REEL/FRAME:034212/0148

Effective date: 20140117

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 4

AS Assignment

Owner name: CANADIAN IMPERIAL BANK OF COMMERCE, CANADA

Free format text: SECURITY INTEREST;ASSIGNOR:LIQID INC.;REEL/FRAME:050630/0636

Effective date: 20191003

AS Assignment

Owner name: HORIZON TECHNOLOGY FINANCE CORPORATION, CONNECTICUT

Free format text: SECURITY INTEREST;ASSIGNOR:LIQID INC.;REEL/FRAME:054900/0539

Effective date: 20201231

AS Assignment

Owner name: LIQID INC., COLORADO

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CANADIAN IMPERIAL BANK OF COMMERCE;REEL/FRAME:055953/0860

Effective date: 20210406

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20240315