US20210141540A1 - Computing device and method for inferring a predicted number of physical blocks erased from a flash memory - Google Patents
- Publication number
- US20210141540A1 (application Ser. No. 17/156,740)
- Authority
- US
- United States
- Prior art keywords
- flash memory
- physical blocks
- erased
- computing device
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0238—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
- G06F12/0246—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0616—Improving the reliability of storage systems in relation to life time, e.g. increasing Mean Time Between Failures [MTBF]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/064—Management of blocks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/065—Replication mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/0652—Erasing, e.g. deleting, data cleaning, moving of data to a wastebasket
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0653—Monitoring storage devices or systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
- G06F3/0679—Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1032—Reliability improvement, data loss prevention, degraded operation etc
- G06F2212/1036—Life time enhancement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7201—Logical to physical mapping or translation of blocks or pages
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7204—Capacity control, e.g. partitioning, end-of-life degradation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7205—Cleaning, compaction, garbage collection, erase control
Definitions
- the present disclosure relates to the field of memory devices comprising flash memory. More specifically, the present disclosure relates to a computing device and method for inferring a predicted number of physical blocks erased from a flash memory through the usage of a neural network.
- Flash memory is a form of electrically-erasable programmable read-only memory (EEPROM) with the following characteristic: a portion of the memory is erased before data are written in the erased portion of the memory.
- While a conventional EEPROM erases data on a bit-by-bit level, a flash memory erases data on a block-by-block level.
- These blocks are usually referred to as physical blocks of memory, by contrast to logical blocks of memory. The size of a physical block may vary from one byte to a plurality of bytes.
- An advantage of flash memory (and more generally of EEPROM) is that it is a nonvolatile form of memory, which does not require power to preserve stored data with integrity, so that a device embedding a flash memory can be turned off without losing data.
- However, the flash memory is worn out by erase operations performed on the physical blocks of the flash memory.
- The manufacturer of the flash memory generally provides a life expectancy of the flash memory, expressed as a limit on the number of erase operations which can be performed.
- For example, the flash memory can support 10,000 erase operations on physical blocks, or the flash memory can support an average of 350 physical blocks being erased per hour for a duration of 10 years.
- a model for predicting the number of physical blocks erased from the flash memory during a write operation may be useful for taking actions to preserve the lifespan of the flash memory.
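The second lifespan expression above can be converted into a total erase budget with simple arithmetic. The sketch below uses the example figures from the text; the language (Python) and variable names are illustrative choices, not part of the disclosure.

```python
# Example figures taken from the text above; names are illustrative.
ERASES_PER_BLOCK = 10_000        # endurance rating per physical block
AVG_BLOCK_ERASES_PER_HOUR = 350  # device-wide average erase rate
LIFETIME_YEARS = 10

lifetime_hours = LIFETIME_YEARS * 365 * 24
total_erase_budget = AVG_BLOCK_ERASES_PER_HOUR * lifetime_hours
print(lifetime_hours)      # 87600
print(total_erase_budget)  # 30660000 block erases over the device lifetime
```

A model predicting the number of blocks erased per write operation lets a host compare its projected erase rate against such a budget before committing to a write pattern.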
- In one aspect, the present disclosure provides a computing device.
- the computing device comprises memory for storing a predictive model generated by a neural network training engine.
- the computing device also comprises a processing unit for executing a neural network inference engine.
- the neural network inference engine uses the predictive model for inferring a predicted number of physical blocks erased from a flash memory based on inputs.
- the inputs comprise a total number of physical blocks previously erased from the flash memory, and an amount of data to be written on the flash memory.
- the inputs may further include a temperature at which the flash memory is operating.
- In another aspect, the present disclosure provides a computing device.
- the computing device comprises a memory device comprising flash memory.
- the flash memory comprises a plurality of physical blocks for writing data.
- the computing device also comprises memory for storing a predictive model generated by a neural network training engine, and a total number of physical blocks previously erased from the flash memory.
- the computing device further comprises a processing unit for executing a neural network inference engine.
- the neural network inference engine uses the predictive model for inferring a predicted number of physical blocks erased from the flash memory based on inputs.
- the inputs comprise the total number of physical blocks previously erased from the flash memory, and an amount of data to be written on the flash memory.
- the inputs may further include a temperature at which the flash memory is operating.
- the memory for storing the predictive model and the total number of physical blocks previously erased from the flash memory may consist of the flash memory.
- In yet another aspect, the present disclosure provides a method for inferring a predicted number of physical blocks erased from a flash memory.
- the method comprises storing, by a computing device, a predictive model generated by a neural network training engine.
- the method comprises executing, by a processing unit of the computing device, a neural network inference engine.
- the neural network inference engine uses the predictive model for inferring the predicted number of physical blocks erased from the flash memory based on inputs.
- the inputs comprise a total number of physical blocks previously erased from the flash memory, and an amount of data to be written on the flash memory.
- the inputs may further include a temperature at which the flash memory is operating.
- the flash memory may be comprised in the computing device, and the method may further comprise storing by the computing device the total number of physical blocks previously erased from the flash memory.
- FIG. 1 is a schematic representation of a memory device comprising flash memory
- FIG. 2 represents a method for performing a write operation on the flash memory of FIG. 1 ;
- FIG. 3 represents a computing device executing a neural network inference engine
- FIG. 4 represents a method executed by the computing device of FIG. 3 for inferring a predicted number of physical blocks erased from a flash memory
- FIG. 5 represents a computing device comprising a flash memory and executing a neural network inference engine
- FIG. 6 represents a method executed by the computing device of FIG. 5 for inferring a predicted number of physical blocks erased from the flash memory
- FIG. 7 is a schematic representation of the neural network inference engine executed by the computing devices of FIGS. 3 and 5 .
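Since FIG. 7 depicts the neural network inference engine, a minimal feed-forward pass may help fix ideas. The three inputs are those named in the disclosure (total blocks previously erased, amount of data to be written, operating temperature); everything else — the single hidden layer, its size, the ReLU activation, and the toy weights — is a hypothetical sketch, not the patent's architecture.

```python
def infer_erased_blocks(inputs, weights, biases):
    """Minimal feed-forward pass of a trained predictive model.
    inputs: [total blocks previously erased, bytes to write, temperature].
    One hidden layer with ReLU activation; layer sizes and activation
    are illustrative choices, not taken from the disclosure."""
    hidden = [max(0.0, sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(weights[0], biases[0])]
    output = sum(w * h for w, h in zip(weights[1], hidden)) + biases[1]
    return output

# Toy weights, purely to make the sketch runnable end to end.
weights = ([[0.0, 0.125, 0.0],   # hidden neuron 1: driven by bytes to write
            [0.1, 0.0, 0.0]],    # hidden neuron 2: driven by total erased
           [1.0, 0.0])           # single output neuron
biases = ([0.0, 0.0], 0.0)
predicted = infer_erased_blocks([5000, 1024, 45.0], weights, biases)
print(predicted)  # 128.0 with these toy weights
```

In practice the weights would come from the predictive model generated by the neural network training engine, not be hand-written as here.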
- Various aspects of the present disclosure generally address one or more of the problems related to the wear out of flash memory embedded in a memory device.
- the flash memory is worn out by erase operations performed on physical blocks of the flash memory.
- the present disclosure aims at providing a mechanism for inferring a predicted number of physical blocks erased from a flash memory through the usage of a neural network.
- a memory device 10 comprising flash memory 110 (represented in FIG. 1 ), and a method 200 (represented in FIG. 2 ) for writing data on the flash memory 110 of the memory device 10 , are represented.
- the flash memory 110 comprises a plurality of physical blocks of memory 112 . Only three physical blocks 112 have been represented in FIG. 1 for simplification purposes.
- the number of physical blocks 112 of the flash memory 110 depends on the capacity of the flash memory 110 , which is usually expressed in gigabytes (e.g. 16, 32, 64, 128, etc.).
- the number of physical blocks 112 of the flash memory 110 also depends on the size of the physical block 112 , which varies from one to several bytes.
- the present disclosure is not limited to flash memory, but can be extended to any form of memory operating as follows: a physical block of the memory is erased before new data is written to this physical block of the memory.
- the memory device 10 also comprises a flash memory controller 120 for controlling the operations of the flash memory 110 , and a host interface 100 connected to a bus 30 .
- the memory device 10 further comprises a memory device controller 130 for controlling the operations of the memory device 10 .
- An internal bus 140 interconnects several components of the memory device 10 .
- the internal bus 140 represented in FIG. 1 interconnects the host interface 100 , the memory device controller 130 and the flash memory controller 120 .
- An example of memory device 10 is an embedded multimedia card (eMMC), which has an architecture similar to the one represented in FIG. 1 in terms of electronic components.
- However, the present disclosure also applies to other types of memory devices 10 embedding the flash memory 110 and the memory device controller 130 having the capability to control a write speed on the bus 30.
- the architecture of the memory device 10 may vary.
- the memory device controller 130 is integrated with the host interface 100
- the memory device controller 130 is integrated with the flash memory controller 120
- the flash memory controller 120 is integrated with the host interface 100 , etc.
- At least one host device 20 uses the bus 30 for writing data to (and/or reading data from) the memory device 10 .
- Examples of host devices include: a processor, a central processing unit (CPU), a graphics processing unit (GPU), a microcontroller unit (MCU), a field-programmable gate array (FPGA), etc.
- a single host device 20 is represented in FIG. 1 for simplification purposes. However, a plurality of host devices (e.g. a plurality of processors) may be using the bus 30 for transmitting data to and/or receiving data from the memory device 10 .
- Examples of computing devices comprising the memory device 10 and at least one host device 20 include personal computers, laptops, tablets, smartphones, digital cameras, scientific instruments, medical devices, industrial appliances (e.g. environment controllers, sensors, controlled appliances), etc.
- a write operation on the flash memory 110 is illustrated in FIG. 2 , and comprises the following steps of the method 200 .
- the host device 20 transmits data to be written on the flash memory 110 of the memory device 10 over the bus 30 .
- the host interface 100 receives the data transmitted by the host device 20 via the bus 30 .
- the host interface 100 transmits the data received from the host device 20 via the bus 30 to the flash memory controller 120 over the internal bus 140 .
- the host interface 100 transmits the data received from the host device 20 via the bus 30 to the memory device controller 130 over the internal bus 140 .
- the memory device controller 130 performs some processing prior to effectively allowing the write operation. For instance, the memory device controller 130 determines if a write operation can be effectively performed, checks the integrity of the data, etc. Then, the memory device controller 130 transmits the data to the flash memory controller 120 over the internal bus 140 .
- the flash memory controller 120 receives the data transmitted by the host interface via the internal bus 140 .
- the flash memory controller 120 erases at least one physical block 112 of the flash memory 110 .
- the number of physical blocks erased depends on the size of the physical blocks 112 , and the size of the data received by the flash memory controller 120 . For example, if each physical block 112 has a size of 8 bytes, and the received data have a size of 1024 bytes, then 128 physical blocks 112 are erased.
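The relation in the example above (one erase per physical block written) can be sketched as follows; real controllers may erase more or fewer blocks due to mapping and aggregation, which is precisely why a predictive model is useful.

```python
import math

def blocks_to_erase(data_bytes: int, block_bytes: int) -> int:
    """Physical blocks erased to accommodate the received data, assuming
    one erase per block written (the simple relation from the example)."""
    return math.ceil(data_bytes / block_bytes)

print(blocks_to_erase(1024, 8))  # 128, matching the example in the text
```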
- the flash memory controller 120 maintains a mapping of the physical blocks 112 currently in use and available for reading by host device(s) 20 .
- the physical blocks 112 erased at step 250 are not currently in use.
- the flash memory controller 120 writes the data received at step 240 in the at least one physical block 112 erased at step 250 .
- the flash memory controller 120 comprises one or more logical blocks 122 .
- Only one logical block 122 is represented in FIG. 1 for simplification purposes.
- the one or more logical blocks 122 correspond to a dedicated memory (not represented in FIG. 1 ) of the flash memory controller 120 , where data received at step 240 are temporarily stored, before steps 250 (erase) and 260 (write) are executed.
- the usage of one or more logical blocks 122 is well known in the art of flash memory management, and allows an optimization of the operations of the flash memory controller 120 .
- step 240 is repeated several times, and the data received at each occurrence of step 240 are aggregated and stored in the one or more logical blocks 122 .
- steps 250 (erase) and 260 (write) are executed only once for the aggregated data stored in the one or more logical blocks 122 .
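The aggregation behaviour of the logical block(s) 122 described above can be sketched as a buffer that absorbs repeated occurrences of step 240 and triggers a single erase/write pair. The class, method names, and flush threshold are hypothetical simplifications of the controller's dedicated memory.

```python
class LogicalBlockBuffer:
    """Illustrative sketch of write aggregation in a logical block 122:
    incoming data is buffered, and the (erase, write) pair of steps 250
    and 260 is executed once for the aggregated data."""
    def __init__(self, flush_threshold: int):
        self.flush_threshold = flush_threshold
        self.buffer = bytearray()
        self.erase_write_cycles = 0

    def receive(self, data: bytes):
        self.buffer.extend(data)          # step 240, possibly repeated
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        if self.buffer:
            self.erase_write_cycles += 1  # one erase (250) + write (260)
            self.buffer.clear()

buf = LogicalBlockBuffer(flush_threshold=1024)
for _ in range(4):
    buf.receive(b"x" * 256)    # four receptions of 256 bytes, aggregated
print(buf.erase_write_cycles)  # 1: a single erase/write pair for all four
```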
- the flash memory controller 120 reports the number of physical blocks 112 erased at step 250 to the memory device controller 130 .
- the flash memory controller 120 does not perform step 270 each time step 250 is performed, but reports an aggregated number of physical blocks 112 erased corresponding to several executions of step 250 .
- Each time step 250 is performed, the erase operation is reported to the memory device controller 130 at step 270.
- At a first execution of step 250, physical blocks B1, B2 and B3 among the plurality of physical blocks 112 of the flash memory 110 are erased. A number of 3 blocks erased is reported at step 270.
- At a second execution of step 250, physical blocks B2 and B5 are erased. A number of 2 blocks erased is reported at step 270.
- At a third execution of step 250, physical blocks B1, B3, B4 and B6 are erased. A number of 4 blocks erased is reported at step 270.
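The reports of step 270 naturally accumulate into a running total of blocks erased — the same total the disclosure later uses as an input of the predictive model. A hypothetical helper, not the patent's implementation:

```python
class EraseCounter:
    """Running total of physical blocks erased, fed by the per-execution
    reports of step 270. Names are illustrative."""
    def __init__(self):
        self.total_erased = 0

    def report(self, blocks_erased: int):
        self.total_erased += blocks_erased

counter = EraseCounter()
for n in (3, 2, 4):   # the three reports from the example above
    counter.report(n)
print(counter.total_erased)  # 9
```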
- the memory device controller 130 processes the reported (by the flash memory controller 120 at step 270 ) number of physical blocks 112 erased. For example, the memory device controller 130 reports the number of physical blocks 112 erased to the host device via the host interface 100 and the bus 30 , as will be illustrated later in the description.
- the memory device controller 130 , the flash memory controller 120 and optionally the host interface 100 are electronic devices comprising a processing unit capable of executing instructions of a software program.
- the memory device controller 130, the flash memory controller 120 (and optionally the host interface 100) also include internal memory for storing instructions of the software programs executed by these electronic devices, data received from other entities of the memory device 10 via the internal bus 140, data generated by the software programs, etc.
- Alternatively, at least some of these data may be stored in a standalone memory of the memory device 10 (e.g. the flash memory 110, or another dedicated memory not represented in FIG. 1).
- the memory device controller 130 and the flash memory controller 120 consist of microcontroller units (MCU), which are well known in the art of electronics.
- the memory device controller 130 executes instructions of a software program implementing the steps of the method 200 executed by the memory device controller 130 .
- the flash memory controller 120 executes instructions of a software program implementing the steps of the method 200 executed by the flash memory controller 120 .
- An internal memory of the memory device controller 130 or the flash memory controller 120, or a standalone memory of the memory device 10, are examples of a non-transitory computer program product adapted for storing instructions of the software programs executed by the memory device controller 130 or the flash memory controller 120.
- the memory device controller 130 , the flash memory controller 120 and the host interface 100 are pure hardware components, such as a field-programmable gate array (FPGA).
- the host interface 100 performs simple operations and can be more cost-effectively implemented by an FPGA.
- Referring now concurrently to FIGS. 3 and 4, a computing device 300 (represented in FIG. 3) and a method 400 (represented in FIG. 4) for inferring a predicted number of physical blocks erased from a flash memory through the usage of a neural network are illustrated.
- the flash memory in question corresponds to the flash memory ( 110 ) of the memory device ( 10 ) represented in FIG. 1.
- the computing device 300 comprises a processing unit 310 , memory 320 , and a communication interface 330 .
- the computing device 300 may comprise additional components (not represented in FIG. 3 for simplification purposes), such as a user interface, a display, another communication interface, etc.
- the processing unit 310 comprises one or more processors (not represented in FIG. 3 ) capable of executing instructions of a computer program. Each processor may further comprise one or several cores.
- the memory 320 stores instructions of computer program(s) executed by the processing unit 310 , data generated by the execution of the computer program(s), data received via the communication interface 330 , etc. Only a single memory 320 is represented in FIG. 3 , but the computing device 300 may comprise several types of memories, including volatile memory (such as a volatile Random Access Memory (RAM)) and non-volatile memory (such as a hard drive).
- the steps of the method 400 are implemented by the computing device 300 , to infer a predicted number of physical blocks erased from a flash memory through the usage of a neural network.
- a dedicated computer program has instructions for implementing the steps of the method 400 .
- the instructions are comprised in a non-transitory computer program product (e.g. the memory 320 ) of the computing device 300 .
- the instructions provide for inferring a predicted number of physical blocks erased from a flash memory through the usage of a neural network, when executed by the processing unit 310 of the computing device 300 .
- the instructions are deliverable to the computing device 300 via an electronically-readable media such as a storage media (e.g. CD-ROM, USB key, etc.), or via communication links (e.g. via a communication network (not represented in FIG. 3 for simplification purposes) through the communication interface 330 ).
- the execution of the neural network training engine 311 generates a predictive model, which is stored in the memory 320 and used by the neural network inference engine 312 .
- the control module 314 controls the operations of the neural network training engine 311 and the neural network inference engine 312 .
- the method 400 comprises the step 405 of executing the neural network training engine 311 (by the processing unit 310 of the computing device 300 ) to generate the predictive model.
- This step is performed under the control of the control module 314 , which feeds a plurality of inputs and a corresponding plurality of outputs to the neural network training engine 311 .
- This training process is well known in the art, and will be detailed later in the description.
- the control module 314 receives the plurality of inputs and the corresponding plurality of outputs via the communication interface 330 from one or more remote computing devices (not represented in FIG. 3 ) in charge of collecting the plurality of inputs and the corresponding plurality of outputs.
- Alternatively, the control module 314 receives the plurality of inputs and the corresponding plurality of outputs via a user interface of the computing device 300 (not represented in FIG. 3) from a user in charge of collecting the plurality of inputs and the corresponding plurality of outputs.
- the method 400 comprises the step 410 of storing the predictive model in the memory 320 of the computing device 300.
- the method 400 comprises the step 415 of determining operational parameters of a write operation on a flash memory. This step is performed by the control module 314 .
- the flash memory is not part of the computing device 300 .
- Another configuration where the flash memory is part of the computing device 300 will be illustrated later in the description.
- the control module 314 receives the operational parameters of the flash memory via the communication interface 330 from one or more remote computing devices (not represented in FIG. 3 ) in charge of collecting the operational parameters of the flash memory.
- the control module 314 receives the operational parameters of the flash memory via a user interface of the computing device 300 (not represented in FIG. 3 ) from a user in charge of collecting the operational parameters of the flash memory.
- the method 400 comprises the step 420 of executing the neural network inference engine 312 (by the processing unit 310 ).
- the neural network inference engine 312 uses the predictive model (stored in memory 320 at step 410 ) for inferring a predicted number of physical blocks erased from the flash memory based on the operational parameters of the write operation on the flash memory (determined at step 415 ). This step is performed under the control of the control module 314 , which feeds the operational parameters of the flash memory to the neural network inference engine 312 , and receives the inferred predicted number of physical blocks erased from the flash memory from the neural network inference engine 312 .
- the method 400 comprises the step 425 of processing the predicted number (inferred at step 420 ) of physical blocks erased from the flash memory.
- This step is performed by the control module 314 , which receives the predicted number of physical blocks erased from the flash memory from the neural network inference engine 312 .
- the control module 314 displays the predicted number of physical blocks erased from the flash memory to a user on a display (not represented in FIG. 3 ) of the computing device 300 .
- the control module 314 transmits the predicted number of physical blocks erased from the flash memory via the communication interface 330 to one or more remote computing devices (not represented in FIG. 3 ) in charge of processing the predicted number of physical blocks erased from the flash memory.
- steps 415, 420 and 425 can be repeated a plurality of times, with different operational parameters of the write operation on the flash memory being determined at step 415 at each repetition.
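The repetition of steps 415, 420 and 425 can be sketched as a loop. The `predict` callable stands in for the neural network inference engine 312; its linear toy form is purely illustrative and not the disclosure's model.

```python
def run_inference_loop(predict, scenarios):
    """Repeat steps 415 (determine parameters), 420 (infer) and 425
    (process) for several write scenarios."""
    results = []
    for total_erased, data_bytes, temperature in scenarios:         # step 415
        predicted = predict(total_erased, data_bytes, temperature)  # step 420
        results.append(predicted)                                   # step 425
    return results

# Toy stand-in for the predictive model: proportional to the data written.
toy_model = lambda total_erased, data_bytes, temperature: data_bytes / 8
print(run_inference_loop(toy_model, [(5000, 1024, 40.0),
                                     (5128, 2048, 41.0)]))  # [128.0, 256.0]
```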
- Although the neural network training engine 311 and the neural network inference engine 312 are represented as separate entities in FIG. 3, they can be implemented by a single module (e.g. a single software module) capable of performing both the training phase and the inference phase of a neural network.
- The training phase performed at step 405 requires more processing power than the inference phase performed at step 420.
- Consequently, step 405 may be performed by a training server not represented in FIG. 3.
- the training server comprises a processing unit for executing the neural network training engine 311 which performs step 405 to generate the predictive model.
- the training server also comprises memory for storing the predictive model, and a communication interface for transferring the predictive model to the computing device 300 .
- the processing unit 310 of the computing device 300 does not execute the neural network training engine 311 .
- the training server needs more processing power for executing the neural network training engine 311 (which is computationally intensive) than the computing device 300 needs for executing the neural network inference engine 312 (which is less computationally intensive).
- Referring now concurrently to FIGS. 1, 5 and 6, a computing device 500 (represented in FIG. 5) and a method 700 (represented in FIG. 6) for inferring a predicted number of physical blocks erased from the flash memory 110 of the computing device 500 through the usage of a neural network are illustrated.
- the computing device 500 comprises a processing unit 510 , an optional memory 520 , the memory device 10 comprising the flash memory 110 , and a communication interface 550 .
- the computing device 500 may comprise additional components (not represented in FIG. 5 for simplification purposes), such as a user interface, a display, another communication interface, etc.
- the memory device 10 comprising the flash memory 110 of FIG. 5 corresponds to the memory device 10 comprising the flash memory 110 of FIG. 1 .
- the processing unit 510 of FIG. 5 corresponds to the host device 20 of FIG. 1 .
- the physical blocks 112 of the flash memory 110 represented in FIG. 1 are not represented in FIG. 5 for simplification purposes.
- the processing unit 510 comprises one or more processors (not represented in FIG. 5 ) capable of executing instructions of a computer program. Each processor may further comprise one or several cores.
- the memory 520 stores instructions of computer program(s) executed by the processing unit 510 , data generated by the execution of the computer program(s), data received via the communication interface 550 , etc. Only a single memory 520 is represented in FIG. 5 , but the computing device 500 may comprise several types of memories, including volatile memory (such as a volatile Random Access Memory (RAM)) and non-volatile memory (such as a hard drive). Alternatively, at least some of the aforementioned data are not stored in the memory 520 , but are stored in the flash memory 110 of the memory device 10 . In still another alternative, there is no memory 520 , and all of the aforementioned data are stored in the flash memory 110 of the computing device 500 .
- the steps of the method 700 are implemented by the computing device 500 , to infer a predicted number of physical blocks erased from the flash memory 110 through the usage of a neural network.
- a dedicated computer program has instructions for implementing the steps of the method 700 .
- the instructions are comprised in a non-transitory computer program product (e.g. the memory 520 or the flash memory 110 ) of the computing device 500 .
- the instructions provide for inferring a predicted number of physical blocks erased from a flash memory 110 through the usage of a neural network, when executed by the processing unit 510 of the computing device 500 .
- the instructions are deliverable to the computing device 500 via an electronically-readable media such as a storage media (e.g. CD-ROM, USB key, etc.), or via communication links (e.g. via a communication network (not represented in FIG. 5 for simplification purposes) through the communication interface 550 ).
- the driver 516 controls the exchange of data over the bus 30 between the control module 514 executed by the processing unit 510 and the host interface 100 (represented in FIG. 1 ) of the memory device 10 .
- the functionalities of the driver 516 are well known in the art.
- Also represented in FIG. 5 is a training server 600 .
- the training server 600 comprises a processing unit, memory and a communication interface.
- the processing unit of the training server 600 executes a neural network training engine 611 .
- the execution of the neural network training engine 611 generates a predictive model, which is transmitted to the computing device 500 via the communication interface of the training server 600 .
- the predictive model is received via the communication interface 550 of the computing device 500 , stored in the memory 520 (or flash memory 110 ), and used by the neural network inference engine 512 .
- the control module 514 controls the operations of the neural network inference engine 512 .
- the method 700 comprises the step 705 of executing the neural network training engine 611 (by the processing unit of the training server 600 ) to generate the predictive model. This step is similar to step 405 of the method 400 represented in FIG. 4 .
- the method 700 comprises the step 710 of transmitting the predictive model to the computing device 500 , via the communication interface of the training server 600 .
- the method 700 comprises the step 715 of storing the predictive model in the memory 520 (or the flash memory 110 ) of the computing device 500 .
- the predictive model is received via the communication interface 550 (or another communication interface not represented in FIG. 5 ) of the computing device 500 , and stored in the memory 520 by the processing unit 510 .
- the method 700 comprises the step 720 of determining operational parameters of a write operation on the flash memory 110 . This step is performed by the control module 514 , and will be detailed later in the description, when the operational parameters of the flash memory 110 are disclosed.
- the method 700 comprises the step 725 of executing the neural network inference engine 512 (by the processing unit 510 ).
- the neural network inference engine 512 uses the predictive model (stored in memory 520 or flash memory 110 at step 715 ) for inferring a predicted number of physical blocks erased from the flash memory 110 based on the operational parameters of the write operation on the flash memory 110 (determined at step 720 ). This step is performed under the control of the control module 514 , which feeds the operational parameters of the flash memory 110 to the neural network inference engine 512 , and receives the inferred predicted number of physical blocks erased from the flash memory 110 from the neural network inference engine 512 .
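- As an illustration of step 725, the inference performed by the neural network inference engine can be sketched as a feed-forward pass through the predictive model (number of layers, number of nodes per layer, and weights). The following Python sketch is illustrative only: the layer layout, the `relu` activation and the function names are assumptions, not the actual engine of the present disclosure.

```python
def relu(values):
    # Rectified linear activation applied element-wise
    return [max(0.0, v) for v in values]

def dense(inputs, weights, biases):
    # Fully-connected layer: output_j = sum_i(inputs_i * weights[i][j]) + biases[j]
    return [sum(i * w for i, w in zip(inputs, column)) + b
            for column, b in zip(zip(*weights), biases)]

def infer_erased_blocks(predictive_model, operational_parameters):
    """Feed the operational parameters through each layer of the
    predictive model; the single output node is the predicted number
    of physical blocks erased by the upcoming write operation."""
    activations = operational_parameters
    for layer in predictive_model["layers"][:-1]:
        activations = relu(dense(activations, layer["weights"], layer["biases"]))
    output_layer = predictive_model["layers"][-1]
    return dense(activations, output_layer["weights"], output_layer["biases"])[0]
```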
- the method 700 comprises the step 730 of processing the predicted number (inferred at step 725 ) of physical blocks erased from the flash memory 110 .
- This step is performed by the control module 514 , which receives the predicted number of physical blocks erased from the flash memory 110 from the neural network inference engine 512 . This step will be detailed later in the description, when the operational parameters of the flash memory 110 are disclosed.
- the method 700 comprises the step 735 of performing the write operation on the flash memory 110 .
- the execution of this step has been previously described in relation to FIGS. 1 and 2 .
- steps 720 , 725 , 730 and 735 can be repeated a plurality of times, with different operational parameters of the write operation on the flash memory 110 being determined at step 720 at each repetition of steps 720 , 725 , 730 and 735 .
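- The repetition of steps 720 to 735 can be sketched as a control loop. The sketch below assumes hypothetical `infer` and `perform_write` callables standing in for the neural network inference engine 512 and the memory device 10; the erase-budget check at step 730 is purely illustrative.

```python
def run_write_cycle(write_requests, infer, perform_write, erase_budget=10000):
    """For each requested write: determine the parameters (step 720),
    infer the predicted erase count (step 725), process the prediction
    (step 730), then perform the write and update the running total
    (step 735)."""
    total_erased = 0
    log = []
    for amount in write_requests:
        predicted = infer([total_erased, amount])          # step 725
        action = ("defer" if total_erased + predicted > erase_budget
                  else "write")                            # step 730
        if action == "write":
            total_erased += perform_write(amount)          # step 735
        log.append((amount, predicted, action))
    return total_erased, log
```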
- an inference server executes the neural network inference engine 512 .
- the inference server receives the predictive model from the training server 600 and performs step 715 of the method 700 consisting in storing the received predictive model in a memory of the inference server.
- the computing device 500 transmits the operational parameters of the write operation on the flash memory 110 to the inference server.
- the inference server performs step 725 of the method 700 consisting in executing the neural network inference engine 512 (by a processing unit of the inference server), which uses the predictive model for inferring a predicted number of physical blocks erased from the flash memory 110 based on the operational parameters of the write operation on the flash memory 110 .
- the inference server transmits the predicted number of physical blocks erased from the flash memory 110 to the computing device 500 , which then performs steps 730 and 735 of the method 700 .
- the computing device 500 does not execute the neural network inference engine 512 , and does not perform steps 715 and 725 , which are performed instead by the inference server.
- Reference is now made concurrently to FIGS. 1, 3 and 5 , and more particularly to the neural network inference engine ( 312 or 512 ) and the neural network training engine ( 311 or 611 ).
- Various types of operational parameters of the flash memory 110 may affect the number of physical blocks 112 erased from the flash memory 110 when performing a current write operation on the flash memory 110 .
- the present disclosure aims at providing a mechanism for inferring a number of physical blocks 112 erased from the flash memory 110 when performing a current write operation on the flash memory 110 .
- the inferred number of physical blocks 112 erased shall be as close as possible to the actual number of physical blocks 112 erased when performing the current write operation on the flash memory 110 .
- the mechanism disclosed in the present disclosure takes advantage of the neural network technology to “guess” the number of physical blocks 112 erased when performing the current write operation on the flash memory 110 .
- One operational parameter is a total number of physical blocks 112 previously erased from the flash memory 110 .
- This total number of previously erased physical blocks 112 is the addition of a plurality of numbers of physical blocks 112 erased from the flash memory 110 when performing a corresponding plurality of previous write operations on the flash memory 110 .
- the total number of physical blocks 112 previously erased from the flash memory 110 shall take into consideration all the erase operations previously performed on the flash memory 110 since the beginning of the usage of the flash memory 110 .
- Another operational parameter is an amount of data to be written on the flash memory 110 for performing the current write operation on the flash memory 110 .
- this amount of data is expressed as a number of bytes, a number of kilobytes, a number of megabytes, etc.
- Still another operational parameter is a temperature at which the flash memory 110 is operating.
- the evaluation of the temperature at which the flash memory 110 is operating is more or less precise, based on how it is measured.
- the temperature is a temperature of a room where the computing device (e.g. 500 ) hosting the flash memory 110 is located.
- the temperature is a temperature measured by a temperature sensor comprised in the computing device (e.g. 500 ) hosting the flash memory 110 .
- the position of the temperature sensor with respect to the memory device 10 embedding the flash memory 110 may vary.
- the temperature sensor may be positioned within the memory device 10 , to be closer to the flash memory 110 .
- Yet another operational parameter consists of one or more characteristics of the flash memory 110 .
- characteristics of the flash memory 110 include a manufacturer of the flash memory 110 , a model of the flash memory 110 , a capacity of the flash memory 110 , a number of physical blocks 112 of the flash memory 110 , a capacity of the physical blocks 112 of the flash memory 110 , etc.
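- Taken together, the operational parameters above form the input vector of the neural network. A minimal sketch of how such a vector could be assembled, assuming unavailable optional parameters default to zero (the parameter names are illustrative):

```python
def build_input_vector(total_erased, amount_bytes,
                       temperature_c=None, capacity_gb=None):
    """Assemble the neural network inputs from the operational parameters:
    total number of physical blocks previously erased, amount of data to
    write, and optionally the operating temperature and a characteristic
    of the flash memory (here its capacity)."""
    return [
        float(total_erased),
        float(amount_bytes),
        float(temperature_c) if temperature_c is not None else 0.0,
        float(capacity_gb) if capacity_gb is not None else 0.0,
    ]
```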
- a combination of the aforementioned operational parameters is taken into consideration by the neural network inference engine ( 312 or 512 ) and the neural network training engine ( 311 or 611 ).
- the best combination can be determined during the learning phase with the neural network training engine ( 311 or 611 ). The best combination may depend on one or more characteristics of the flash memory 110 .
- the training phase can be used to identify the best combination of operational parameters, and only those operational parameters will be used by the neural network training engine ( 311 or 611 ) to generate the predictive model used by the neural network inference engine ( 312 or 512 ). Alternatively, all the available operational parameters can be used by the neural network training engine ( 311 or 611 ) to generate the predictive model.
- the neural network training engine ( 311 or 611 ) will simply learn to ignore the operational parameters which do not have a significant influence on the number of physical blocks 112 erased from the flash memory 110 when performing a current write operation on the flash memory 110 .
- the temperature may not have an impact (at least for a type of flash memory 110 having one or more specific characteristics), in which case it will be ignored by the predictive model.
- the neural network training engine ( 311 or 611 ) is trained with a plurality of inputs corresponding to the operational parameters of the flash memory 110 , and a corresponding plurality of outputs corresponding to the measured (or at least evaluated as precisely as possible) number of physical blocks 112 erased from the flash memory 110 when performing a current write operation on the flash memory 110 .
- the neural network implemented by the neural network training engine ( 311 or 611 ) adjusts its weights. Furthermore, during the learning phase, the number of layers of the neural network and the number of nodes per layer can be adjusted to improve the accuracy of the model.
- the predictive model generated by the neural network training engine ( 311 or 611 ) includes the number of layers, the number of nodes per layer, and the weights.
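- The weight adjustment of the learning phase can be illustrated with a deliberately simplified model. The sketch below fits a single linear unit with stochastic gradient descent; a real training engine would train a multi-layer network and also tune the number of layers and nodes, as stated above.

```python
def train_linear_model(samples, learning_rate=0.01, epochs=2000):
    """Each sample pairs an input vector (operational parameters) with the
    measured number of erased physical blocks; the weights are repeatedly
    nudged so the prediction approaches the measurement."""
    n_inputs = len(samples[0][0])
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, measured_erased in samples:
            predicted = sum(w * x for w, x in zip(weights, inputs)) + bias
            error = predicted - measured_erased
            for i, x in enumerate(inputs):
                weights[i] -= learning_rate * error * x  # gradient step
            bias -= learning_rate * error
    return {"weights": weights, "bias": bias}
```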
- the inputs and outputs for the learning phase of the neural network can be collected through an experimental process.
- a test computing device 500 is placed in various operating conditions corresponding to various values of the operational parameters of the flash memory 110 .
- the number of physical blocks 112 erased from the flash memory 110 is determined and used as the output for the neural network.
- the set of values comprises the current total number of physical blocks previously erased from the flash memory 110 , and an amount of data to write on the flash memory 110 .
- the control module 514 of the computing device 500 orders the driver 516 to transfer data corresponding to the amount of data to the memory device 10 via the bus 30 , and the data corresponding to the amount of data are written on the flash memory 110 .
- the driver 516 receives from the memory device 10 the number of physical blocks 112 erased from the flash memory 110 for performing the write operation of the data corresponding to the amount of data (based on FIGS. 1, 2 and 5 , the number of physical blocks 112 erased from the flash memory 110 is reported by the flash memory controller 120 to the host interface 100 , and from the host interface 100 to the driver 516 ).
- the neural network training engine 611 of the training server 600 is trained with this set of data for the current iteration: the current total number of physical blocks previously erased from the flash memory 110 , the amount of data to write on the flash memory 110 , optionally a temperature at which the flash memory 110 is operating measured by a temperature sensor, optionally one or more characteristics of the flash memory 110 , and the reported number of physical blocks 112 erased from the flash memory 110 .
- the current total number of physical blocks previously erased from the flash memory 110 is then updated with the reported number of physical blocks 112 erased from the flash memory 110 , and the next iteration of the learning phase is performed.
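- The iterative collection of training data described above can be sketched as follows; `perform_write` is a hypothetical callable standing in for the actual write operation and the erase count reported back by the memory device 10, used here only to make the loop self-contained.

```python
def collect_training_samples(write_amounts, perform_write):
    """For each write: record the current running total and the amount to
    write as inputs, the reported erase count as the output, then update
    the running total before the next iteration."""
    samples = []
    total_erased = 0
    for amount in write_amounts:
        reported = perform_write(amount)   # erased blocks reported by the device
        samples.append(([total_erased, amount], reported))
        total_erased += reported
    return samples
```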
- the training phase is performed under the control of a user.
- the user specifies via a user interface of the computing device 500 the amount of data to write on the flash memory 110 at each iteration. This amount of data is varied at will by the user.
- the temperature at which the flash memory 110 is operating is varied by the user.
- various types of flash memory 110 are used, in order to vary one or more characteristics of the flash memory 110 .
- a plurality of variations and combinations of the operational parameters is performed under the direction of the user, until a robust predictive model is generated by the neural network training engine 611 .
- the inputs and outputs for the learning phase of the neural network can be collected through a mechanism for automatically collecting data while the computing device 500 is operating in real conditions.
- a collecting software is executed by the processing unit 510 of the computing device 500 .
- the collection of data is not directed by a user for the sole purpose of feeding inputs and outputs to the neural network training engine 611 .
- the collecting software records various operating conditions when write operations are performed on the flash memory 110 . More specifically, the collecting software records and updates the total number of physical blocks previously erased from the flash memory 110 .
- the collecting software also records the amount of data to write on the flash memory 110 for a write operation, optionally a temperature at which the flash memory 110 is operating measured by a temperature sensor embedded in the computing device 500 (or a temperature sensor located outside the computing device 500 , but reachable through the communication interface 550 ), and the reported number of physical blocks 112 erased from the flash memory 110 for each write operation.
- various techniques well known in the art of neural networks can be used during the learning phase, such as forward and backward propagation, the usage of bias in addition to weights (bias and weights are generally collectively referred to as weights in the neural network terminology), reinforcement learning, etc.
- the neural network inference engine uses the predictive model (e.g. the values of the weights) determined during the learning phase to infer an output (the predicted number of physical blocks 112 erased from the flash memory 110 when performing a current write operation on the flash memory 110 ) based on inputs (the operational parameters of the flash memory 110 ), as is well known in the art.
- the computing device 300 does not include a flash memory.
- the operational parameters of the write operation on the flash memory are not operational parameters of a flash memory embedded in the computing device 300 .
- the operational parameters are either provided by a user via a user interface of the computing device 300 , or received from a remote computing entity via the communication interface 330 .
- the processing of the predicted number of physical blocks erased from the flash memory does not involve taking actions for preserving the lifespan of a flash memory embedded in the computing device 300 .
- the predicted number of physical blocks erased from the flash memory is either displayed on a display of the computing device 300 , or transmitted to one or more remote computing entities via the communication interface 330 .
- the neural network inference engine 312 can be used for simulation purposes. For example, different scenarios are tested for evaluating the lifespan of a flash memory, taking into consideration various operating conditions of the flash memory.
- the computing device 500 includes the flash memory 110 .
- the operational parameters of the write operation on the flash memory are operational parameters of the flash memory 110 embedded in the computing device 500 .
- the processing of the predicted number of physical blocks erased from the flash memory 110 involves taking actions for preserving the lifespan of the flash memory 110 embedded in the computing device 500 .
- the predicted number of physical blocks erased from the flash memory 110 is taken into consideration optionally in combination with other information related to previous write operations on the flash memory 110 , for determining if an action for preserving the lifespan of the flash memory 110 shall be taken for the current write operation.
- the resulting actions for preserving the lifespan of the flash memory 110 may include preventing some of the write operations on the flash memory 110 , reducing the write speed of the bus 30 for limiting the amount of data written on the flash memory 110 through the bus 30 , aggregating several write operations on the flash memory 110 (writing a single larger amount of data instead of several smaller amounts of data may reduce the overall number of physical blocks erased from the flash memory 110 ), etc.
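- A decision of the kind described above can be sketched as follows. The 90% threshold and the action names are illustrative assumptions only; the present disclosure does not prescribe a specific policy.

```python
def choose_preserving_action(predicted_erased, total_erased, erase_budget,
                             threshold=0.9):
    """If the predicted erase count would push the running total past a
    fraction of the device's rated erase budget, defer the write so it
    can be aggregated with later writes; otherwise let it proceed."""
    if total_erased + predicted_erased > threshold * erase_budget:
        return "defer_and_aggregate"
    return "write_now"
```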
- the actual number of physical blocks 112 erased from the flash memory 110 by the write operation is reported by the flash memory controller 120 to the host interface 100 , from the host interface 100 to the driver 516 , and from the driver 516 to the control module 514 .
- the actual number of physical blocks 112 erased from the flash memory 110 is used at step 720 for updating the total number of physical blocks 112 previously erased from the flash memory 110 .
- the write operation performed at step 735 consists in writing on the flash memory 110 the amount of data determined at step 720 , except if it is determined at step 730 that an action for preserving the lifespan of the flash memory 110 shall be taken for the current write operation, which may affect the actual amount of data written on the flash memory 110 for the current write operation.
- FIG. 7 illustrates the aforementioned neural network inference engine with its inputs and its output.
- FIG. 7 corresponds to the neural network inference engine 312 executed at step 420 of the method 400 , as illustrated in FIGS. 3 and 4 .
- FIG. 7 also corresponds to the neural network inference engine 512 executed at step 725 of the method 700 , as illustrated in FIGS. 5 and 6 .
Description
- This is a Continuation Application of U.S. patent application Ser. No. 15/819,606, filed Nov. 21, 2017, now allowed, the disclosure of which is incorporated herein by reference in its entirety for all purposes.
- The present disclosure relates to the field of memory devices comprising flash memory. More specifically, the present disclosure relates to a computing device and method for inferring a predicted number of physical blocks erased from a flash memory through the usage of a neural network.
- Flash memory is a form of electrically-erasable programmable read-only memory (EEPROM) with the following characteristic: a portion of the memory is erased before data are written in the erased portion of the memory. However, a conventional EEPROM erases data on a bit-by-bit level, while a flash memory erases data on a block-by-block level. These blocks are usually referred to as physical blocks of memory, by contrast to logical blocks of memory. The size of the physical block may vary from one byte to a plurality of bytes. Thus, a physical block on a flash memory is erased before new data is written to this physical block of the flash memory. One advantage of flash memory (and more generally of EEPROM) is that it is a nonvolatile form of memory, which does not require power to preserve stored data with integrity, so that a device embedding a flash memory can be turned off without losing data.
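- Since erasure happens block-by-block, the number of physical blocks erased by a write grows with the size of the data relative to the size of a physical block. A minimal illustration (rounding up, since a partially-filled block must still be erased whole):

```python
import math

def blocks_to_erase(data_size_bytes, block_size_bytes):
    # Number of physical blocks that must be erased to hold the data
    return math.ceil(data_size_bytes / block_size_bytes)
```

For example, with 8-byte physical blocks, writing 1024 bytes requires erasing 128 blocks.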
- The flash memory is worn out by erase operations performed on the physical blocks of the flash memory. The manufacturer of the flash memory generally provides a life expectancy of the flash memory expressed as a limitation on the number of erase operations which can be performed. For example, the flash memory can support 10,000 erase operations on physical blocks, or the flash memory can support an average of 350 physical blocks being erased per hour for a duration of 10 years.
- It is very difficult to consistently predict the number of physical blocks erased from the flash memory by a write operation on the flash memory. However, a model for predicting the number of physical blocks erased from the flash memory during a write operation may be useful for taking actions to preserve the lifespan of the flash memory.
- Current advances in artificial intelligence, and more specifically in neural networks, can be taken advantage of to define a model taking into consideration operating conditions of the flash memory to predict the number of physical blocks erased from the flash memory during a write operation.
- Therefore, there is a need for a new computing device and method for inferring a predicted number of physical blocks erased from a flash memory through the usage of a neural network.
- According to a first aspect, the present disclosure provides a computing device. The computing device comprises memory for storing a predictive model generated by a neural network training engine. The computing device also comprises a processing unit for executing a neural network inference engine. The neural network inference engine uses the predictive model for inferring a predicted number of physical blocks erased from a flash memory based on inputs. The inputs comprise a total number of physical blocks previously erased from the flash memory, and an amount of data to be written on the flash memory. The inputs may further include a temperature at which the flash memory is operating.
- According to a second aspect, the present disclosure provides a computing device. The computing device comprises a memory device comprising flash memory. The flash memory comprises a plurality of physical blocks for writing data. The computing device also comprises memory for storing a predictive model generated by a neural network training engine, and a total number of physical blocks previously erased from the flash memory. The computing device further comprises a processing unit for executing a neural network inference engine. The neural network inference engine uses the predictive model for inferring a predicted number of physical blocks erased from the flash memory based on inputs. The inputs comprise the total number of physical blocks previously erased from the flash memory, and an amount of data to be written on the flash memory. The inputs may further include a temperature at which the flash memory is operating. The memory for storing the predictive model and the total number of physical blocks previously erased from the flash memory may consist of the flash memory.
- According to a third aspect, the present disclosure provides a method for inferring a predicted number of physical blocks erased from a flash memory. The method comprises storing, by a computing device, a predictive model generated by a neural network training engine. The method comprises executing, by a processing unit of the computing device, a neural network inference engine. The neural network inference engine uses the predictive model for inferring the predicted number of physical blocks erased from the flash memory based on inputs. The inputs comprise a total number of physical blocks previously erased from the flash memory, and an amount of data to be written on the flash memory. The inputs may further include a temperature at which the flash memory is operating. The flash memory may be comprised in the computing device, and the method may further comprise storing by the computing device the total number of physical blocks previously erased from the flash memory.
- Embodiments of the disclosure will be described by way of example only with reference to the accompanying drawings, in which:
-
FIG. 1 is a schematic representation of a memory device comprising flash memory; -
FIG. 2 represents a method for performing a write operation on the flash memory of FIG. 1 ; -
FIG. 3 represents a computing device executing a neural network inference engine; -
FIG. 4 represents a method executed by the computing device of FIG. 3 for inferring a predicted number of physical blocks erased from a flash memory; -
FIG. 5 represents a computing device comprising a flash memory and executing a neural network inference engine; -
FIG. 6 represents a method executed by the computing device of FIG. 5 for inferring a predicted number of physical blocks erased from the flash memory; and -
FIG. 7 is a schematic representation of the neural network inference engine executed by the computing devices of FIGS. 3 and 5 . - The foregoing and other features will become more apparent upon reading of the following non-restrictive description of illustrative embodiments thereof, given by way of example only with reference to the accompanying drawings. Like numerals represent like features on the various drawings.
- Various aspects of the present disclosure generally address one or more of the problems related to the wear out of flash memory embedded in a memory device. The flash memory is worn out by erase operations performed on physical blocks of the flash memory. The present disclosure aims at providing a mechanism for inferring a predicted number of physical blocks erased from a flash memory through the usage of a neural network.
- Referring now concurrently to
FIGS. 1 and 2 , a memory device 10 comprising flash memory 110 (represented in FIG. 1 ), and a method 200 (represented in FIG. 2 ) for writing data on the flash memory 110 of the memory device 10 , are represented. - The
flash memory 110 comprises a plurality of physical blocks of memory 112 . Only three physical blocks 112 have been represented in FIG. 1 for simplification purposes. The number of physical blocks 112 of the flash memory 110 depends on the capacity of the flash memory 110 , which is usually expressed in gigabytes (e.g. 16, 32, 64, 128, etc.). The number of physical blocks 112 of the flash memory 110 also depends on the size of the physical block 112 , which varies from one to several bytes.
- The present disclosure is not limited to flash memory, but can be extended to any form of memory operating as follows: a physical block of the memory is erased before new data is written to this physical block of the memory. - The
memory device 10 also comprises a flash memory controller 120 for controlling the operations of the flash memory 110 , and a host interface 100 connected to a bus 30 . The memory device 10 further comprises a memory device controller 130 for controlling the operations of the memory device 10 . An internal bus 140 interconnects several components of the memory device 10 . For example, the internal bus 140 represented in FIG. 1 interconnects the host interface 100 , the memory device controller 130 and the flash memory controller 120 . - An example of
memory device 10 is an embedded multimedia card (eMMC), which has an architecture similar to the one represented in FIG. 1 in terms of electronic components. However, other types of memory devices 10 (embedding the flash memory 110 and the memory device controller 130 having the capability to control a write speed on the bus 30 ) are also supported by the present disclosure. Furthermore, the architecture of the memory device 10 may vary. For example, the memory device controller 130 is integrated with the host interface 100 , the memory device controller 130 is integrated with the flash memory controller 120 , the flash memory controller 120 is integrated with the host interface 100 , etc. - At least one
host device 20 uses the bus 30 for writing data to (and/or reading data from) the memory device 10 . Examples of host devices include: a processor, a central processing unit (CPU), a graphics processing unit (GPU), a microcontroller unit (MCU), a field-programmable gate array (FPGA), etc. A single host device 20 is represented in FIG. 1 for simplification purposes. However, a plurality of host devices (e.g. a plurality of processors) may be using the bus 30 for transmitting data to and/or receiving data from the memory device 10 . - Examples of computing devices (not represented in the Figures) comprising the
memory device 10 and at least one host device 20 include personal computers, laptops, tablets, smartphones, digital cameras, scientific instruments, medical devices, industrial appliances (e.g. environment controllers, sensors, controlled appliances), etc. - A write operation on the
flash memory 110 is illustrated in FIG. 2 , and comprises the following steps of the method 200 . - At
step 210, thehost device 20 transmits data to be written on theflash memory 110 of thememory device 10 over thebus 30. - At
step 220, thehost interface 100 receives the data transmitted by thehost device 20 via thebus 30. - At
step 230, thehost interface 100 transmits the data received from thehost device 20 via thebus 30 to theflash memory controller 120 over theinternal bus 140. - In an alternative configuration not represented in
FIG. 2 , the host interface 100 transmits the data received from the host device 20 via the bus 30 to the memory device controller 130 over the internal bus 140 . The memory device controller 130 performs some processing prior to effectively allowing the write operation. For instance, the memory device controller 130 determines if a write operation can be effectively performed, checks the integrity of the data, etc. Then, the memory device controller 130 transmits the data to the flash memory controller 120 over the internal bus 140 . - At
step 240, theflash memory controller 120 receives the data transmitted by the host interface via theinternal bus 140. - At
step 250, theflash memory controller 120 erases at least onephysical block 112 of theflash memory 110. The number of physical blocks erased depends on the size of thephysical blocks 112, and the size of the data received by theflash memory controller 120. For example, if eachphysical block 112 has a size of 8 bytes, and the received data have a size of 1024 bytes, then 128physical blocks 112 are erased. Theflash memory controller 120 maintains a mapping of thephysical blocks 112 currently in use and available for reading by host device(s) 20. Thephysical blocks 112 erased atstep 250 are not currently in use. - At
step 260, theflash memory controller 120 writes the data received atstep 240 in the at least onephysical block 112 erased atstep 250. - Optionally, the
flash memory controller 120 comprises one or morelogical blocks 122. Only onelogical block 122 is represented inFIG. 1 for simplification purposes. The one or morelogical blocks 122 correspond to a dedicated memory (not represented inFIG. 1 ) of theflash memory controller 120, where data received atstep 240 are temporarily stored, before steps 250 (erase) and 260 (write) are executed. The usage of one or morelogical blocks 122 is well known in the art of flash memory management, and allows an optimization of the operations of theflash memory controller 120. For example,step 240 is repeated several times, and the data received at each occurrence ofstep 240 are aggregated and stored in the one or morelogical blocks 122. Then, steps 250 (erase) and 260 (write) are executed only once for the aggregated data stored in the one or morelogical blocks 122. - At
step 270, theflash memory controller 120 reports the number ofphysical blocks 112 erased atstep 250 to thememory device controller 130. Alternatively, theflash memory controller 120 does not performstep 270 eachtime step 250 is performed, but reports an aggregated number ofphysical blocks 112 erased corresponding to several executions ofstep 250. Each time a physical block among the plurality ofphysical blocks 112 of theflash memory 110 is erased, the erase operation is reported to thememory device controller 130 atstep 270. - For example, during a first instance of
step 250, physical blocks B1, B2 and B3 among the plurality ofphysical blocks 112 of theflash memory 110 are erased. A number of 3 blocks erased is reported atstep 270. During a second instance ofstep 250, physical blocks B2 and B5 among the plurality ofphysical blocks 112 of theflash memory 110 are erased. A number of 2 blocks erased is reported atstep 270. During a third instance ofstep 250, physical blocks B1, B3, B4 and B6 among the plurality ofphysical blocks 112 of theflash memory 110 are erased. A number of 4 blocks erased is reported atstep 270. - At
step 280, thememory device controller 130 processes the reported (by theflash memory controller 120 at step 270) number ofphysical blocks 112 erased. For example, thememory device controller 130 reports the number ofphysical blocks 112 erased to the host device via thehost interface 100 and thebus 30, as will be illustrated later in the description. - The
memory device controller 130, the flash memory controller 120 and optionally the host interface 100 are electronic devices comprising a processing unit capable of executing instructions of a software program. The memory device controller 130 and the flash memory controller 120 (and optionally the host interface 100) also include internal memory for storing instructions of the software programs executed by these electronic devices, data received from other entities of the memory device 10 via the internal bus 140, data generated by the software programs, etc. Alternatively, a standalone memory (e.g. the flash memory 110, or another dedicated memory not represented in FIG. 1) is included in the memory device 10 for storing the software programs executed by at least one of the memory device controller 130 and the flash memory controller 120, as well as data received and generated by at least one of the memory device controller 130 and the flash memory controller 120. For instance, the memory device controller 130 and the flash memory controller 120 consist of microcontroller units (MCUs), which are well known in the art of electronics. - The
memory device controller 130 executes instructions of a software program implementing the steps of the method 200 executed by the memory device controller 130. The flash memory controller 120 executes instructions of a software program implementing the steps of the method 200 executed by the flash memory controller 120. - An internal memory of the
memory device controller 130 or the flash memory controller 120, as well as a standalone memory of the memory device 10, are examples of a non-transitory computer program product adapted for storing instructions of the software programs executed by the memory device controller 130 or the flash memory controller 120. - Alternatively, at least some of the
memory device controller 130, the flash memory controller 120 and the host interface 100 are pure hardware components, such as a field-programmable gate array (FPGA). For example, the host interface 100 performs simple operations and can be more cost-effectively implemented by an FPGA. - Referring now concurrently to
FIGS. 3 and 4, a computing device 300 (represented in FIG. 3) and a method 400 (represented in FIG. 4) for inferring a predicted number of physical blocks erased from a flash memory through the usage of a neural network are illustrated. The flash memory in question corresponds to the flash memory 110 of the memory device 10 represented in FIG. 1. - The
computing device 300 comprises a processing unit 310, memory 320, and a communication interface 330. The computing device 300 may comprise additional components (not represented in FIG. 3 for simplification purposes), such as a user interface, a display, another communication interface, etc. - The
processing unit 310 comprises one or more processors (not represented in FIG. 3) capable of executing instructions of a computer program. Each processor may further comprise one or several cores. - The
memory 320 stores instructions of computer program(s) executed by the processing unit 310, data generated by the execution of the computer program(s), data received via the communication interface 330, etc. Only a single memory 320 is represented in FIG. 3, but the computing device 300 may comprise several types of memories, including volatile memory (such as volatile Random Access Memory (RAM)) and non-volatile memory (such as a hard drive). - The steps of the
method 400 are implemented by the computing device 300, to infer a predicted number of physical blocks erased from a flash memory through the usage of a neural network. - A dedicated computer program has instructions for implementing the steps of the
method 400. The instructions are comprised in a non-transitory computer program product (e.g. the memory 320) of the computing device 300. The instructions provide for inferring a predicted number of physical blocks erased from a flash memory through the usage of a neural network, when executed by the processing unit 310 of the computing device 300. The instructions are deliverable to the computing device 300 via an electronically-readable media such as a storage media (e.g. CD-ROM, USB key, etc.), or via communication links (e.g. via a communication network (not represented in FIG. 3 for simplification purposes) through the communication interface 330). - The instructions comprised in the dedicated computer program product, and executed by the
processing unit 310, implement a neural network training engine 311, a neural network inference engine 312, and a control module 314. - The execution of the neural
network training engine 311 generates a predictive model, which is stored in the memory 320 and used by the neural network inference engine 312. The control module 314 controls the operations of the neural network training engine 311 and the neural network inference engine 312. - The
method 400 comprises the step 405 of executing the neural network training engine 311 (by the processing unit 310 of the computing device 300) to generate the predictive model. This step is performed under the control of the control module 314, which feeds a plurality of inputs and a corresponding plurality of outputs to the neural network training engine 311. This training process is well known in the art, and will be detailed later in the description. For example, the control module 314 receives the plurality of inputs and the corresponding plurality of outputs via the communication interface 330 from one or more remote computing devices (not represented in FIG. 3) in charge of collecting the plurality of inputs and the corresponding plurality of outputs. Alternatively, the control module 314 receives the plurality of inputs and the corresponding plurality of outputs via a user interface of the computing device 300 (not represented in FIG. 3) from a user in charge of collecting the plurality of inputs and the corresponding plurality of outputs. - The
method 400 comprises the step 410 of storing the predictive model in the memory 320 of the computing device 300. - The
method 400 comprises the step 415 of determining operational parameters of a write operation on a flash memory. This step is performed by the control module 314. In the present configuration, the flash memory is not part of the computing device 300. Another configuration, where the flash memory is part of the computing device 300, will be illustrated later in the description. For example, the control module 314 receives the operational parameters of the flash memory via the communication interface 330 from one or more remote computing devices (not represented in FIG. 3) in charge of collecting the operational parameters of the flash memory. Alternatively, the control module 314 receives the operational parameters of the flash memory via a user interface of the computing device 300 (not represented in FIG. 3) from a user in charge of collecting the operational parameters of the flash memory. - The
method 400 comprises the step 420 of executing the neural network inference engine 312 (by the processing unit 310). The neural network inference engine 312 uses the predictive model (stored in the memory 320 at step 410) for inferring a predicted number of physical blocks erased from the flash memory based on the operational parameters of the write operation on the flash memory (determined at step 415). This step is performed under the control of the control module 314, which feeds the operational parameters of the flash memory to the neural network inference engine 312, and receives the inferred predicted number of physical blocks erased from the flash memory from the neural network inference engine 312. - The
method 400 comprises the step 425 of processing the predicted number (inferred at step 420) of physical blocks erased from the flash memory. This step is performed by the control module 314, which receives the predicted number of physical blocks erased from the flash memory from the neural network inference engine 312. For example, the control module 314 displays the predicted number of physical blocks erased from the flash memory to a user on a display (not represented in FIG. 3) of the computing device 300. Alternatively, the control module 314 transmits the predicted number of physical blocks erased from the flash memory via the communication interface 330 to one or more remote computing devices (not represented in FIG. 3) in charge of processing the predicted number of physical blocks erased from the flash memory. - Once
steps 405 and 410 of the method 400 have been performed, steps 415, 420 and 425 can be repeated, with new operational parameters of the write operation being determined at step 415 at each repetition of steps 415, 420 and 425. - Although the neural
network training engine 311 and the neural network inference engine 312 are represented as separate entities in FIG. 3, they can be implemented by a single module (e.g. a single software module) capable of performing both the training phase and the inference phase of a neural network. - Furthermore, the training phase performed at
step 405 requires more processing power than the inference phase performed at step 420. Thus, step 405 may be performed by a training server not represented in FIG. 3. The training server comprises a processing unit for executing the neural network training engine 311, which performs step 405 to generate the predictive model. The training server also comprises memory for storing the predictive model, and a communication interface for transferring the predictive model to the computing device 300. In this configuration, the processing unit 310 of the computing device 300 does not execute the neural network training engine 311. The training server needs more processing power for executing the neural network training engine 311 (which is more computationally intensive) than the computing device 300 needs for executing the neural network inference engine 312 (which is less computationally intensive). - Referring now concurrently to
FIGS. 1, 5 and 6, a computing device 500 (represented in FIG. 5) and a method 700 (represented in FIG. 6) for inferring a predicted number of physical blocks erased from the flash memory 110 of the computing device 500 through the usage of a neural network are illustrated. - The
computing device 500 comprises a processing unit 510, an optional memory 520, the memory device 10 comprising the flash memory 110, and a communication interface 550. The computing device 500 may comprise additional components (not represented in FIG. 5 for simplification purposes), such as a user interface, a display, another communication interface, etc. - The
memory device 10 comprising the flash memory 110 of FIG. 5 corresponds to the memory device 10 comprising the flash memory 110 of FIG. 1. The processing unit 510 of FIG. 5 corresponds to the host device 20 of FIG. 1. The physical blocks 112 of the flash memory 110 represented in FIG. 1 are not represented in FIG. 5 for simplification purposes. - The
processing unit 510 comprises one or more processors (not represented in FIG. 5) capable of executing instructions of a computer program. Each processor may further comprise one or several cores. - The
memory 520 stores instructions of computer program(s) executed by the processing unit 510, data generated by the execution of the computer program(s), data received via the communication interface 550, etc. Only a single memory 520 is represented in FIG. 5, but the computing device 500 may comprise several types of memories, including volatile memory (such as volatile Random Access Memory (RAM)) and non-volatile memory (such as a hard drive). Alternatively, at least some of the aforementioned data are not stored in the memory 520, but are stored in the flash memory 110 of the memory device 10. In still another alternative, there is no memory 520, and all of the aforementioned data are stored in the flash memory 110 of the memory device 10. - The steps of the
method 700 are implemented by the computing device 500, to infer a predicted number of physical blocks erased from the flash memory 110 through the usage of a neural network. - A dedicated computer program has instructions for implementing the steps of the
method 700. The instructions are comprised in a non-transitory computer program product (e.g. the memory 520 or the flash memory 110) of the computing device 500. The instructions provide for inferring a predicted number of physical blocks erased from the flash memory 110 through the usage of a neural network, when executed by the processing unit 510 of the computing device 500. The instructions are deliverable to the computing device 500 via an electronically-readable media such as a storage media (e.g. CD-ROM, USB key, etc.), or via communication links (e.g. via a communication network (not represented in FIG. 5 for simplification purposes) through the communication interface 550). - The instructions comprised in the dedicated computer program product, and executed by the
processing unit 510, implement a neural network inference engine 512, a control module 514, and a driver 516. - The
driver 516 controls the exchange of data over the bus 30 between the control module 514 executed by the processing unit 510 and the host interface 100 (represented in FIG. 1) of the memory device 10. The functionalities of the driver 516 are well known in the art. - Also represented in
FIG. 5 is a training server 600. Although not represented in FIG. 5 for simplification purposes, the training server 600 comprises a processing unit, memory and a communication interface. The processing unit of the training server 600 executes a neural network training engine 611. - The execution of the neural
network training engine 611 generates a predictive model, which is transmitted to the computing device 500 via the communication interface of the training server 600. The predictive model is received via the communication interface 550 of the computing device 500, stored in the memory 520 (or the flash memory 110), and used by the neural network inference engine 512. The control module 514 controls the operations of the neural network inference engine 512. - The
method 700 comprises the step 705 of executing the neural network training engine 611 (by the processing unit of the training server 600) to generate the predictive model. This step is similar to step 405 of the method 400 represented in FIG. 4. - The
method 700 comprises the step 710 of transmitting the predictive model to the computing device 500, via the communication interface of the training server 600. - The
method 700 comprises the step 715 of storing the predictive model in the memory 520 (or the flash memory 110) of the computing device 500. The predictive model is received via the communication interface 550 (or another communication interface not represented in FIG. 5) of the computing device 500, and stored in the memory 520 by the processing unit 510. - The
method 700 comprises the step 720 of determining operational parameters of a write operation on the flash memory 110. This step is performed by the control module 514, and will be detailed later in the description, when the operational parameters of the flash memory 110 are disclosed. - The
method 700 comprises the step 725 of executing the neural network inference engine 512 (by the processing unit 510). The neural network inference engine 512 uses the predictive model (stored in the memory 520 or the flash memory 110 at step 715) for inferring a predicted number of physical blocks erased from the flash memory 110 based on the operational parameters of the write operation on the flash memory 110 (determined at step 720). This step is performed under the control of the control module 514, which feeds the operational parameters of the flash memory 110 to the neural network inference engine 512, and receives the inferred predicted number of physical blocks erased from the flash memory 110 from the neural network inference engine 512. - The
method 700 comprises the step 730 of processing the predicted number (inferred at step 725) of physical blocks erased from the flash memory 110. This step is performed by the control module 514, which receives the predicted number of physical blocks erased from the flash memory 110 from the neural network inference engine 512. This step will be detailed later in the description, when the operational parameters of the flash memory 110 are disclosed. - The
method 700 comprises the step 735 of performing the write operation on the flash memory 110. The execution of this step has been previously described in relation to FIGS. 1 and 2. - Once
steps 705, 710 and 715 of the method 700 have been performed, steps 720, 725, 730 and 735 can be repeated, with new operational parameters of the write operation on the flash memory 110 being determined at step 720 at each repetition of steps 720, 725, 730 and 735. - In an alternative configuration, an inference server (not represented in
FIG. 5) executes the neural network inference engine 512. The inference server receives the predictive model from the training server 600 and performs step 715 of the method 700, consisting in storing the received predictive model in a memory of the inference server. After performing step 720 of the method 700, the computing device 500 transmits the operational parameters of the write operation on the flash memory 110 to the inference server. The inference server performs step 725 of the method 700, consisting in executing the neural network inference engine 512 (by a processing unit of the inference server), which uses the predictive model for inferring a predicted number of physical blocks erased from the flash memory 110 based on the operational parameters of the write operation on the flash memory 110. The inference server transmits the predicted number of physical blocks erased from the flash memory 110 to the computing device 500, which then performs steps 730 and 735 of the method 700. Thus, in this alternative configuration, the computing device 500 does not execute the neural network inference engine 512, and does not perform steps 715 and 725. - Reference is now made concurrently to
FIGS. 1, 3 and 5, and more particularly to the neural network inference engine (312 or 512) and the neural network training engine (311 or 611). - Various types of operational parameters of the
flash memory 110 may affect the number of physical blocks 112 erased from the flash memory 110 when performing a current write operation on the flash memory 110. The present disclosure aims at providing a mechanism for inferring a number of physical blocks 112 erased from the flash memory 110 when performing a current write operation on the flash memory 110. The inferred number of physical blocks 112 erased shall be as close as possible to the actual number of physical blocks 112 erased when performing the current write operation on the flash memory 110. The mechanism disclosed in the present disclosure takes advantage of the neural network technology to “guess” the number of physical blocks 112 erased when performing the current write operation on the flash memory 110. - Following are examples of operational parameters of the
flash memory 110, which are used as inputs of the neural network training engine 311 or 611 (during a training phase) and the neural network inference engine 312 or 512 (during an operational phase). - One operational parameter is a total number of
physical blocks 112 previously erased from the flash memory 110. This total number of previously erased physical blocks 112 is the addition of a plurality of numbers of physical blocks 112 erased from the flash memory 110 when performing a corresponding plurality of previous write operations on the flash memory 110. For more accuracy, the total number of physical blocks 112 previously erased from the flash memory 110 shall take into consideration all the erase operations previously performed on the flash memory 110 since the beginning of the usage of the flash memory 110. - Another operational parameter is an amount of data to be written on the
flash memory 110 for performing the current write operation on the flash memory 110. For example, this amount of data is expressed as a number of bytes, a number of kilobytes, a number of megabytes, etc. - Still another operational parameter is a temperature at which the
flash memory 110 is operating. The evaluation of the temperature at which the flash memory 110 is operating is more or less precise, based on how it is measured. For example, the temperature is a temperature of a room where the computing device (e.g. 500) hosting the flash memory 110 is located. Alternatively, the temperature is a temperature measured by a temperature sensor comprised in the computing device (e.g. 500) hosting the flash memory 110. The position of the temperature sensor with respect to the memory device 10 embedding the flash memory 110 may vary. The temperature sensor may be positioned within the memory device 10, to be closer to the flash memory 110. - Yet another operational parameter consists of one or more characteristics of the
flash memory 110. Examples of characteristics of the flash memory 110 include a manufacturer of the flash memory 110, a model of the flash memory 110, a capacity of the flash memory 110, a number of physical blocks 112 of the flash memory 110, a capacity of the physical blocks 112 of the flash memory 110, etc. - A person skilled in the art would readily understand that additional operational parameters may have an impact on the number of
physical blocks 112 erased from the flash memory 110 when performing a current write operation on the flash memory 110, and can also be taken into consideration by the neural network inference engine (312 or 512) and the neural network training engine (311 or 611). - Furthermore, a combination of the aforementioned operational parameters is taken into consideration by the neural network inference engine (312 or 512) and the neural network training engine (311 or 611). The best combination can be determined during the learning phase with the neural network training engine (311 or 611). The best combination may depend on one or more characteristics of the
flash memory 110. The training phase can be used to identify the best combination of operational parameters, and only those operational parameters will be used by the neural network training engine (311 or 611) to generate the predictive model used by the neural network inference engine (312 or 512). Alternatively, all the available operational parameters can be used by the neural network training engine (311 or 611) to generate the predictive model. In this case, the neural network training engine (311 or 611) will simply learn to ignore the operational parameters which do not have a significant influence on the number of physical blocks 112 erased from the flash memory 110 when performing a current write operation on the flash memory 110. For example, the temperature may not have an impact (at least for a type of flash memory 110 having one or more specific characteristics), in which case it will be ignored by the predictive model. - During the learning phase, the neural network training engine (311 or 611) is trained with a plurality of inputs corresponding to the operational parameters of the
flash memory 110, and a corresponding plurality of outputs corresponding to the measured (or at least evaluated as precisely as possible) number of physical blocks 112 erased from the flash memory 110 when performing a current write operation on the flash memory 110. - As is well known in the art of neural networks, during the training phase, the neural network implemented by the neural network training engine (311 or 611) adjusts its weights. Furthermore, during the learning phase, the number of layers of the neural network and the number of nodes per layer can be adjusted to improve the accuracy of the model. At the end of the training phase, the predictive model generated by the neural network training engine (311 or 611) includes the number of layers, the number of nodes per layer, and the weights.
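The training phase described above can be sketched in a few lines of code. The following is an illustrative sketch only, not the implementation of the disclosure: the 3-8-1 topology, the learning rate, the plain full-batch gradient-descent loop and the synthetic training set (which assumes 8-byte physical blocks, as in the example of step 250, plus a small wear-dependent term) are all assumptions chosen for illustration.

```python
import numpy as np

# Illustrative training sketch: a small fully-connected network maps three
# operational parameters of a write operation -- total number of physical
# blocks previously erased, amount of data to write (bytes), operating
# temperature (Celsius) -- to a number of physical blocks erased.

rng = np.random.default_rng(42)

# Synthetic training set (an assumption for illustration): with 8-byte
# physical blocks, the number of erased blocks is driven by the amount of
# data to write, plus a small wear-dependent term.
X = rng.uniform([0.0, 8.0, 20.0], [1e6, 4096.0, 70.0], size=(256, 3))
y = X[:, 1] / 8.0 + 1e-5 * X[:, 0]

# Normalize inputs and outputs so plain gradient descent behaves well.
x_mu, x_sd = X.mean(axis=0), X.std(axis=0)
y_mu, y_sd = y.mean(), y.std()
Xn, yn = (X - x_mu) / x_sd, (y - y_mu) / y_sd

# One hidden layer of 8 ReLU nodes, one linear output node.
W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(Z):
    h = np.maximum(Z @ W1 + b1, 0.0)      # forward propagation
    return h, (h @ W2 + b2).ravel()

_, pred = forward(Xn)
initial_loss = float(np.mean((pred - yn) ** 2))

lr = 0.05
for _ in range(2000):
    h, pred = forward(Xn)
    g_out = 2.0 * (pred - yn)[:, None] / len(yn)   # dLoss/dOutput
    gW2 = h.T @ g_out; gb2 = g_out.sum(axis=0)     # backward propagation
    g_h = (g_out @ W2.T) * (h > 0)
    gW1 = Xn.T @ g_h; gb1 = g_h.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(Xn)
final_loss = float(np.mean((pred - yn) ** 2))

# The resulting predictive model is the topology (3-8-1) together with the
# trained weights W1, b1, W2, b2 (and the normalization constants), which
# can be handed to a neural network inference engine.
```

The weight updates correspond to the forward and backward propagation mentioned below; the bias vectors b1 and b2 are part of the weights in the sense of the neural network terminology.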
- The inputs and outputs for the learning phase of the neural network can be collected through an experimental process. For example, a
test computing device 500 is placed in various operating conditions corresponding to various values of the operational parameters of the flash memory 110. For each set of values of the operational parameters, the number of physical blocks 112 erased from the flash memory 110 is determined and used as the output for the neural network. - At a current iteration of the learning phase, the set of values comprises the current total number of physical blocks previously erased from the
flash memory 110, and an amount of data to write on the flash memory 110. The control module 514 of the computing device 500 orders the driver 516 to transfer data corresponding to the amount of data to the memory device 10 via the bus 30, and the data corresponding to the amount of data are written on the flash memory 110. In return, the driver 516 receives from the memory device 10 the number of physical blocks 112 erased from the flash memory 110 for performing the write operation of the data corresponding to the amount of data (based on FIGS. 1, 2 and 5, the number of physical blocks 112 erased from the flash memory 110 is reported by the flash memory controller 120 to the host interface 100, and from the host interface 100 to the driver 516). - The neural
network training engine 611 of the training server 600 is trained with this set of data for the current iteration: the current total number of physical blocks previously erased from the flash memory 110, the amount of data to write on the flash memory 110, optionally a temperature at which the flash memory 110 is operating measured by a temperature sensor, optionally one or more characteristics of the flash memory 110, and the reported number of physical blocks 112 erased from the flash memory 110. - The current total number of physical blocks previously erased from the
flash memory 110 is then updated with the reported number of physical blocks 112 erased from the flash memory 110, and the next iteration of the learning phase is performed. - The training phase is performed under the control of a user. For instance, the user specifies via a user interface of the
computing device 500 the amount of data to write on the flash memory 110 at each iteration. This amount of data is varied at will by the user. Similarly, the temperature at which the flash memory 110 is operating is varied by the user. Furthermore, various types of flash memory 110 are used, in order to vary one or more characteristics of the flash memory 110. A plurality of variations and combinations of the operational parameters is performed under the direction of the user, until a robust predictive model is generated by the neural network training engine 611. - Alternatively, the inputs and outputs for the learning phase of the neural network can be collected through a mechanism for automatically collecting data while the
computing device 500 is operating in real conditions. For example, a collecting software is executed by the processing unit 510 of the computing device 500. In this case, the collection of data is not directed by a user for the sole purpose of feeding inputs and outputs to the neural network training engine 611. The collecting software records various operating conditions when write operations are performed on the flash memory 110. More specifically, the collecting software records and updates the total number of physical blocks previously erased from the flash memory 110. The collecting software also records the amount of data to write on the flash memory 110 for a write operation, optionally a temperature at which the flash memory 110 is operating measured by a temperature sensor embedded in the computing device 500 (or a temperature sensor located outside the computing device 500, but reachable through the communication interface 550), and the reported number of physical blocks 112 erased from the flash memory 110 for each write operation. - Various techniques well known in the art of neural networks are used for performing (and improving) the generation of the predictive model, such as forward and backward propagation, usage of bias in addition to the weights (bias and weights are generally collectively referred to as weights in the neural network terminology), reinforcement learning, etc.
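Once a predictive model has been generated, the operational phase reduces to a single forward pass over the operational parameters. The following minimal sketch is an illustration only: the 3-1-1 topology and the hand-picked weights (encoding the toy relationship "one erased 8-byte block per 8 bytes written, plus a small wear term") are assumptions, not trained values from the disclosure.

```python
import numpy as np

# Illustrative inference sketch (operational phase): the predictive model is
# just the stored topology and weights; the neural network inference engine
# evaluates it on the operational parameters of the current write operation.

# Hand-picked weights, an assumption for illustration: the input vector is
# [total blocks previously erased, bytes to write, temperature], and the
# model ignores temperature.
W1 = np.array([[1e-5], [0.125], [0.0]])
b1 = np.zeros(1)
W2 = np.array([[1.0]])
b2 = np.zeros(1)

def infer_blocks_erased(total_erased, data_bytes, temp_c):
    x = np.array([total_erased, data_bytes, temp_c])
    h = np.maximum(x @ W1 + b1, 0.0)      # hidden layer (ReLU)
    return float(h @ W2 + b2)             # linear output node

# 1024 bytes written after 500,000 previously erased blocks:
print(infer_blocks_erased(500_000, 1024, 25.0))  # -> 133.0
```

With these weights the prediction is 1024 / 8 = 128 blocks for the data itself plus 5 blocks of wear-dependent overhead, mirroring the 8-byte-block example of step 250.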
- During the operational phase, the neural network inference engine (312 or 512) uses the predictive model (e.g. the values of the weights) determined during the learning phase to infer an output (the predicted number of
physical blocks 112 erased from the flash memory 110 when performing a current write operation on the flash memory 110) based on inputs (the operational parameters of the flash memory 110), as is well known in the art. - Reference is now made concurrently to
FIGS. 3 and 4 , where thecomputing device 300 does not include a flash memory. - As mentioned previously, at
step 415 of themethod 400, the operational parameters of the write operation on the flash memory are not operational parameters of a flash memory embedded in thecomputing device 300. The operational parameters are either provided by a user via a user interface of thecomputing device 300, or received from a remote computing entity via thecommunication interface 330. - Similarly, with respect to step 425 of the
method 400, the processing of the predicted number of physical blocks erased from the flash memory does not involve taking actions for preserving the lifespan of a flash memory embedded in the computing device 300. As mentioned previously, the predicted number of physical blocks erased from the flash memory is either displayed on a display of the computing device 300, or transmitted to one or more remote computing entities via the communication interface 330. - Thus, the neural
network inference engine 312 can be used for simulation purposes. For example, different scenarios are tested for evaluating the lifespan of a flash memory, taking into consideration various operating conditions of the flash memory. - Reference is now made concurrently to
FIGS. 1, 5 and 6, where the computing device 500 includes the flash memory 110. - By contrast to
the corresponding steps of the method 400 illustrated in FIG. 4, at the steps of the method 700 the operational parameters of the write operation on the flash memory are operational parameters of the flash memory 110 embedded in the computing device 500. - Thus, with respect to step 730 of the
method 700, the processing of the predicted number of physical blocks erased from the flash memory 110 involves taking actions for preserving the lifespan of the flash memory 110 embedded in the computing device 500. For example, the predicted number of physical blocks erased from the flash memory 110 is taken into consideration, optionally in combination with other information related to previous write operations on the flash memory 110, for determining if an action for preserving the lifespan of the flash memory 110 shall be taken for the current write operation. The resulting actions for preserving the lifespan of the flash memory 110 may include preventing some of the write operations on the flash memory 110, reducing the write speed of the bus 30 for limiting the amount of data written on the flash memory 110 through the bus 30, aggregating several write operations on the flash memory 110 (writing a single larger amount of data instead of several smaller amounts of data may reduce the overall number of physical blocks erased from the flash memory 110), etc. - At
step 735, the actual number of physical blocks 112 erased from the flash memory 110 by the write operation is reported by the flash memory controller 120 to the host interface 100, from the host interface 100 to the driver 516, and from the driver 516 to the control module 514. The actual number of physical blocks 112 erased from the flash memory 110 is used at step 720 for updating the total number of physical blocks 112 previously erased from the flash memory 110. - Furthermore, the write operation performed at step 735 consists of writing on the
flash memory 110 the amount of data determined at step 720, except if it is determined at step 730 that an action for preserving the lifespan of the flash memory 110 shall be taken for the current write operation, which may affect the actual amount of data written on the flash memory 110 for the current write operation. - Reference is now made to
FIG. 7, which illustrates the aforementioned neural network inference engine with its inputs and its output. FIG. 7 corresponds to the neural network inference engine 312 executed at step 420 of the method 400, as illustrated in FIGS. 3 and 4. FIG. 7 also corresponds to the neural network inference engine 512 executed at step 725 of the method 700, as illustrated in FIGS. 5 and 6. - Although the present disclosure has been described hereinabove by way of non-restrictive, illustrative embodiments thereof, these embodiments may be modified at will within the scope of the appended claims without departing from the spirit and nature of the present disclosure.
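The operational-phase inference described above, in which the neural network inference engine (312 or 512) applies the weights of the predictive model to the operational parameters, can be sketched as a single feed-forward pass. The two-layer topology, the ReLU activation, and all values below are illustrative assumptions; the patent does not specify a network architecture:

```python
# Sketch of a feed-forward pass (assumed two-layer topology). `inputs` holds
# the operational parameters, e.g. [total blocks previously erased, amount
# of data to write, temperature]; the output is the predicted number of
# physical blocks erased by the current write operation.
def relu(x: float) -> float:
    return max(0.0, x)

def predict_blocks_erased(w1, b1, w2, b2, inputs):
    # Hidden layer: weighted sums of the inputs passed through ReLU.
    hidden = [relu(sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(w1, b1)]
    # Output neuron: weighted sum of hidden activations; clamp at zero,
    # since a negative count of erased blocks is meaningless.
    return max(0.0, sum(w * h for w, h in zip(w2, hidden)) + b2)
```

The weight matrices `w1`/`w2` and biases `b1`/`b2` stand for the values determined by the neural network training engine during the learning phase.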
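The lifespan-preserving processing described for step 730 of the method 700 could be sketched as a simple decision policy. The thresholds, the erase-budget notion, and the action names below are hypothetical; the patent lists the candidate actions (preventing writes, reducing the write speed of the bus 30, aggregating writes) without prescribing selection criteria:

```python
# Hypothetical policy for step 730 (thresholds and action names are
# assumptions). `erase_budget` stands for the total number of block erasures
# the flash memory is expected to endure over its lifespan.
def choose_action(predicted_blocks: float, total_erased: int,
                  erase_budget: int) -> str:
    wear_ratio = total_erased / erase_budget
    if wear_ratio > 0.95 and predicted_blocks > 0:
        return "prevent_write"           # skip non-essential write operations
    if wear_ratio > 0.80:
        return "reduce_bus_write_speed"  # limit data written through the bus
    if predicted_blocks > 1:
        return "aggregate_writes"        # batch small writes into a larger one
    return "proceed"                     # no preserving action needed
```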
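The simulation use case described for the computing device 300 (testing different scenarios to evaluate the lifespan of a flash memory under various operating conditions) could be sketched as the loop below. The erase-budget parameter and the predictor passed in are assumptions made for illustration; in practice the predictor would be the trained neural network inference engine 312:

```python
# Estimate how many write operations a scenario sustains before the total
# number of erased blocks reaches the erase budget. `predict` stands in for
# the neural network inference engine; its arguments mirror the operational
# parameters (total erased so far, amount of data, temperature).
def writes_until_wear_out(predict, erase_budget: int,
                          data_amount: int, temperature: float) -> int:
    total_erased = 0
    writes = 0
    while total_erased < erase_budget:
        predicted = predict(total_erased, data_amount, temperature)
        # Count at least one erased block per write to guarantee progress.
        total_erased += max(1, round(predicted))
        writes += 1
    return writes
```

Running this loop for several (data amount, temperature) scenarios gives a comparative lifespan estimate for each operating condition.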
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/156,740 US20210141540A1 (en) | 2017-11-21 | 2021-01-25 | Computing device and method for inferring a predicted number of physical blocks erased from a flash memory |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/819,606 US10956048B2 (en) | 2017-11-21 | 2017-11-21 | Computing device and method for inferring a predicted number of physical blocks erased from a flash memory |
US17/156,740 US20210141540A1 (en) | 2017-11-21 | 2021-01-25 | Computing device and method for inferring a predicted number of physical blocks erased from a flash memory |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/819,606 Continuation US10956048B2 (en) | 2017-11-21 | 2017-11-21 | Computing device and method for inferring a predicted number of physical blocks erased from a flash memory |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210141540A1 true US20210141540A1 (en) | 2021-05-13 |
Family
ID=66533936
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/819,606 Active 2038-03-23 US10956048B2 (en) | 2017-11-21 | 2017-11-21 | Computing device and method for inferring a predicted number of physical blocks erased from a flash memory |
US17/156,740 Abandoned US20210141540A1 (en) | 2017-11-21 | 2021-01-25 | Computing device and method for inferring a predicted number of physical blocks erased from a flash memory |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/819,606 Active 2038-03-23 US10956048B2 (en) | 2017-11-21 | 2017-11-21 | Computing device and method for inferring a predicted number of physical blocks erased from a flash memory |
Country Status (1)
Country | Link |
---|---|
US (2) | US10956048B2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI851120B (en) * | 2023-03-29 | 2024-08-01 | 旺宏電子股份有限公司 | Memory device and intelligent operation method thereof |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12061971B2 (en) | 2019-08-12 | 2024-08-13 | Micron Technology, Inc. | Predictive maintenance of automotive engines |
US20210073066A1 (en) * | 2019-09-05 | 2021-03-11 | Micron Technology, Inc. | Temperature based Optimization of Data Storage Operations |
KR20210101982A (en) * | 2020-02-11 | 2021-08-19 | 삼성전자주식회사 | Storage device, and operating method of memory controller |
CN112085107A (en) * | 2020-09-10 | 2020-12-15 | 苏州大学 | Method and system for predicting service life of flash memory block based on three-dimensional flash memory storage structure |
CN112908399B (en) * | 2021-02-05 | 2022-01-18 | 置富科技(深圳)股份有限公司 | Flash memory abnormality detection method and device, computer equipment and storage medium |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8112586B1 (en) * | 2008-06-13 | 2012-02-07 | Emc Corporation | Predicting and optimizing I/O performance characteristics in a multi-level caching system |
US9170897B2 (en) | 2012-05-29 | 2015-10-27 | SanDisk Technologies, Inc. | Apparatus, system, and method for managing solid-state storage reliability |
CN102449607B (en) * | 2009-07-22 | 2015-05-27 | 株式会社日立制作所 | Storage system provided with a plurality of flash packages |
US9158674B2 (en) * | 2012-12-07 | 2015-10-13 | Sandisk Technologies Inc. | Storage device with health status check feature |
JP6007329B2 (en) * | 2013-07-17 | 2016-10-12 | 株式会社日立製作所 | Storage controller, storage device, storage system |
JP2015064860A (en) * | 2013-08-27 | 2015-04-09 | キヤノン株式会社 | Image forming apparatus and control method of the same, and program |
US9569120B2 (en) * | 2014-08-04 | 2017-02-14 | Nvmdurance Limited | Adaptive flash tuning |
US10261897B2 (en) * | 2017-01-20 | 2019-04-16 | Samsung Electronics Co., Ltd. | Tail latency aware foreground garbage collection algorithm |
CN109378027A (en) * | 2017-08-09 | 2019-02-22 | 光宝科技股份有限公司 | The control method of solid state storage device |
- 2017-11-21: US US15/819,606 patent/US10956048B2/en, status: Active
- 2021-01-25: US US17/156,740 patent/US20210141540A1/en, status: Abandoned
Also Published As
Publication number | Publication date |
---|---|
US10956048B2 (en) | 2021-03-23 |
US20190155520A1 (en) | 2019-05-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210141540A1 (en) | Computing device and method for inferring a predicted number of physical blocks erased from a flash memory | |
US20210264276A1 (en) | Computing device and method for inferring a predicted number of data chunks writable on a flash memory before wear out | |
US9645924B2 (en) | Garbage collection scaling | |
US9996460B2 (en) | Storage device, system including storage device and method of operating the same | |
CN111176564B (en) | Method and device for determining data placement strategy in SSD | |
CN110389909A (en) | Use the system and method for the performance of deep neural network optimization solid state drive | |
CN105242871B (en) | A kind of method for writing data and device | |
TWI711926B (en) | Memory system and method of operating the same | |
BR112020020110A2 (en) | multitasking recurrent neural networks | |
EP2495728A1 (en) | Using temperature sensors with a memory device | |
US10963332B2 (en) | Data storage systems and methods for autonomously adapting data storage system performance, capacity and/or operational requirements | |
US20230153003A1 (en) | Open block family duration limited by time and temperature | |
TW201526019A (en) | Data storage device and data maintenance method thereof | |
US20220137869A1 (en) | System and memory for artificial neural network | |
US11922051B2 (en) | Memory controller, processor and system for artificial neural network | |
US10650879B2 (en) | Device and method for controlling refresh cycles of non-volatile memories | |
US11074173B1 (en) | Method and system to determine an optimal over-provisioning ratio | |
US12147702B2 (en) | Host training indication for memory artificial intelligence | |
CN113625935A (en) | Method, device, equipment and storage medium for reducing read interference influence | |
Baek et al. | Don’t make cache too complex: A simple probability-based cache management scheme for SSDs | |
US20230229352A1 (en) | Host training indication for memory artificial intelligence | |
US20230147773A1 (en) | Storage device and operating method | |
Chakraborttii | Improving Performance of Solid State Drives Using Machine Learning | |
US20240193088A1 (en) | Memory prefetch based on machine learning | |
US20240160260A1 (en) | Electronic device for predicting chip temperature and performing pre-operation, and operation method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | Owner name: DISTECH CONTROLS INC., CANADA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GERVAIS, FRANCOIS;REEL/FRAME:055018/0719; Effective date: 20180222
STPP | Information on status: patent application and granting procedure in general | Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION