Test Principle and Main Points of SSD Power – Down Protection


Preface

A solid-state drive (SSD) must use a Flash Translation Layer (FTL) to convert between logical addresses and physical addresses. If power is lost abnormally while the SSD is performing normal read, write, or erase operations, the mapping table may be lost because it could not be updated in time, leaving the SSD unrecognizable to the host system.

At the same time, to improve read and write performance, SSDs usually use SDRAM as a cache. If an abnormal power loss occurs during reads or writes, data held in SDRAM may be lost before it can be written to NAND flash, or mapping-table updates may be lost before they can be committed to NAND flash.

Failure phenomena caused by abnormal power loss

Abnormal power loss on an SSD usually produces three kinds of failure:

1. The SSD can no longer be recognized by the system, and the mapping table must be rebuilt, or the drive must be crudely re-run through the mass-production (factory re-flashing) tool before it can be used again;

2. After repeated power losses, the SSD shows many "new bad blocks";

The mechanism behind these new bad blocks is that when a read, write, or erase on the SSD fails, the affected block is marked as bad. These blocks are not genuinely defective; they are misjudged as bad because the operation was interrupted by the abnormal power loss.

3. Data held in SDRAM is lost.

Common power-loss protection mechanisms

Different vendors understand and implement power-loss protection differently, so the protection actually offered to the user varies widely. In general, there are two approaches:

1. Save all the data held in SDRAM

On abnormal power loss, all data in SDRAM must be fully written to NAND flash. SDRAM capacity is typically set to about 1/1000 of the SSD's raw capacity, so for a small-capacity SSD the amount of data to flush to NAND is relatively small, and supercapacitors or tantalum capacitors can power the drive long enough to finish writing it. If the SSD capacity is large, however, say 8 TB, the amount of SDRAM data to flush becomes very large, and relying on supercapacitors or tantalum capacitors for hold-up power inevitably raises three thorny problems:
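The scale of the problem can be seen with a back-of-the-envelope estimate. All the numbers below (cache ratio, flush bandwidth, controller power draw, capacitor voltages) are illustrative assumptions, not vendor figures:

```python
# Rough estimate of the hold-up energy a capacitor bank must supply
# to flush the SDRAM cache to NAND after power loss.
# Every number here is an illustrative assumption.

def flush_time_s(dram_bytes: float, flush_bw_bytes_s: float) -> float:
    """Seconds needed to write the whole SDRAM cache to NAND."""
    return dram_bytes / flush_bw_bytes_s

def required_capacitance_f(energy_j: float, v_start: float, v_end: float) -> float:
    """Capacitance needed to deliver energy_j while the bank discharges
    from v_start down to the minimum usable rail v_end:
    E = 0.5 * C * (v_start**2 - v_end**2)."""
    return 2.0 * energy_j / (v_start**2 - v_end**2)

GB = 1024**3

# Assumed: 8 TB drive with DRAM sized at ~1/1000 of raw capacity -> 8 GiB cache.
dram = 8 * GB
bw = 2 * GB          # assumed sustained NAND flush bandwidth: 2 GiB/s
power_w = 6.0        # assumed controller + NAND power draw while flushing

t = flush_time_s(dram, bw)
energy = power_w * t
# Assumed: bank discharges from 12 V down to a 6 V cutoff.
cap = required_capacitance_f(energy, 12.0, 6.0)

print(f"flush time: {t:.1f} s, energy: {energy:.0f} J, capacitance: {cap*1000:.0f} mF")
# -> flush time: 4.0 s, energy: 24 J, capacitance: 444 mF
```

Hundreds of millifarads of hold-up capacitance is far beyond ordinary decoupling parts, which is exactly why the board-area and component-height problems below appear.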

a. More tantalum capacitor parts are needed for protection. In actual engineering practice this is a severe test: engineers face limits on component height, standard form-factor dimensions, and PCB area that is simply not sufficient;

b. Even with enough capacitance for protection, the SSD will not start properly on an immediate "restart": it must remain powered off for some time before restarting, because the SSD needs all the tantalum capacitors to discharge after power-off before it can be recognized again;

c. After several years of use, once the tantalum capacitors or supercapacitors have aged, they can no longer deliver the hold-up power the design originally targeted, so the user again risks data loss after power failure or an unrecognizable SSD. If the initial design adds redundant capacitance to compensate, it falls back into the vicious cycle of problem "b".

Fortunately, problems "b" and "c" do have workable solutions; resolving these thorny issues only requires sufficient care and experience from the engineers.

2. Save only the user data in SDRAM, without saving the mapping table

This reduces both SDRAM usage and the number of tantalum capacitors required. "Not saving the mapping table" does not mean the mapping table is lost; it only means that the mapping updates for the last data written are not persisted. When the SSD powers up again, it locates the last saved mapping table and the new data written after it, and rebuilds the mapping table from them. The drawback is that if this mechanism is not tuned well, the rebuild takes longer, and the SSD needs some time after power-up before it can be accessed normally.
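The rebuild step can be sketched in a few lines. The page-metadata layout assumed here (each NAND page's spare area carrying its logical address and a monotonically increasing sequence number) is an illustration only; real controllers differ in the details:

```python
# Minimal sketch of rebuilding the logical-to-physical (L2P) mapping
# after power loss, under the "don't persist the last mapping updates"
# scheme.  Assumed metadata: each page written after the checkpoint
# records its LBA and a monotonically increasing sequence number.

def rebuild_l2p(checkpoint: dict, scanned_pages: list) -> dict:
    """checkpoint: last L2P table saved to NAND, {lba: phys_page}.
    scanned_pages: (phys_page, lba, seq) tuples read from pages
    written after the checkpoint.  The newest write (highest seq)
    for each LBA wins."""
    l2p = dict(checkpoint)
    newest_seq = {}
    for phys, lba, seq in scanned_pages:
        if seq >= newest_seq.get(lba, -1):
            newest_seq[lba] = seq
            l2p[lba] = phys
    return l2p

# Checkpoint says LBA 7 lives at physical page 100, but two newer
# writes (seq 5, then seq 9) relocated it after the checkpoint.
table = rebuild_l2p({7: 100}, [(210, 7, 5), (340, 7, 9), (211, 8, 6)])
print(table)   # {7: 340, 8: 211}
```

The cost of this scheme is visible in the scan: the more pages written since the last checkpoint, the longer the power-up rebuild takes, which is exactly the drawback noted above.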

For controllers designed without SDRAM, all data is written directly to NAND flash, and completion is returned to the host only after the data has actually reached NAND, so there is no cached data to lose on power failure. Since no additional data needs to be saved, for applications with high reliability requirements the DRAM-less design is king; its representative is a German industrial-grade controller vendor. Its only drawback is that performance is not as good, but in fact many applications do not need the highest performance, only "sufficient" performance.

Test methods and principles

For the actual test, the SSD must be tested both as the system (boot) disk and as a secondary disk. The only difference between the two cases is that when the SSD is the system disk, the test computer itself must be powered off, whereas when it is a secondary disk, only the SSD needs to be power-cycled.

a. With the SSD as a blank disk, and then with 25%, 50%, 85%, and 100% of its capacity written, perform abnormal power-down while data is being written, 3000 times for each fill level, with a 3-second interval between power-off and power-on;

The rationale for writing different amounts of data to the disk is this: once the SSD has written a certain amount of data, background garbage collection begins. Garbage collection means relocating data, and relocating data means updating the mapping table; abnormal power loss at exactly this point is the usual source of problems.
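Why power loss during garbage collection is dangerous can be sketched directly. The GC flow below (copy valid pages out of a victim block, update the table, then persist it before erasing the victim) is a simplified illustration, not any specific controller's algorithm:

```python
# Sketch of the power-loss hazard during garbage collection: GC copies
# still-valid pages out of a victim block and updates the L2P table for
# each; if power is cut after the copies but before the updated table
# is persisted, the table on NAND still points into a block that is
# about to be erased.  Illustrative only.

class PowerLoss(Exception):
    """Simulated abnormal power cut."""

def garbage_collect(l2p, victim_valid, free_pages, persist):
    """victim_valid: {old_phys_page: lba} still-valid pages in the
    victim block.  free_pages: iterator of destination pages.
    persist(l2p): persists the table to NAND; GC must complete this
    before the victim block may be erased."""
    for old_phys, lba in victim_valid.items():
        l2p[lba] = next(free_pages)   # copy page, update DRAM-side table
    persist(l2p)                      # power loss here -> stale table on NAND
    return l2p

def persist_fails(table):
    raise PowerLoss()

table = {5: 40, 6: 41}               # LBAs 5 and 6 live in the victim block
try:
    garbage_collect(table, {40: 5, 41: 6}, iter([90, 91]), persist_fails)
except PowerLoss:
    # DRAM already points at the new pages, but NAND never saw the
    # update: after reboot the drive would map 5 and 6 into the
    # about-to-be-erased block.
    print(table)   # {5: 90, 6: 91}
```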

b. Abnormal power-down while data is being written normally;

c. Abnormal power-down while data is being deleted;

In Windows, deleting data also performs write operations, just as creating a file does, so the mapping table must likewise be updated.

d. Abnormal power-down while the SSD is reading files, 3000 times, with a 3-second power-off interval;

e. Abnormal power-down during a normal shutdown, 3000 times;

f. Abnormal power-down during normal operating-system startup, 3000 times;
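The test cases above share the same power-cycling loop. The sketch below keeps the loop hardware-agnostic by injecting the power-off, power-on, and health-check actions as callables; real setups would drive a programmable power switch, and the function and parameter names here are assumptions, not a standard test API:

```python
# Hardware-agnostic sketch of the abnormal power-cycling loop used by
# the test cases above.  power_off / power_on / drive_healthy are
# injected callables (assumed names, not a standard API).

import time

def power_cycle_test(cycles, off_interval_s, power_off, power_on,
                     drive_healthy, sleep=time.sleep):
    """Run `cycles` abnormal power-downs with `off_interval_s` seconds
    between power-off and power-on (3 s in the procedure above).
    Returns how many cycles left the drive failing its health check
    (e.g. not recognized by the system, or new bad blocks found)."""
    failures = 0
    for _ in range(cycles):
        power_off()                 # cut power mid-workload
        sleep(off_interval_s)
        power_on()
        if not drive_healthy():
            failures += 1
    return failures

# Dry run with stubs and no real sleeping:
n = power_cycle_test(3000, 3, power_off=lambda: None,
                     power_on=lambda: None,
                     drive_healthy=lambda: True,
                     sleep=lambda s: None)
print(n)   # 0
```

Parameterizing `sleep` also lets the loop be dry-run instantly, as above, before wiring it to real power-control hardware.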
