SSD Cache (QCache 2.0)
 
SSD cache is a large-capacity secondary cache built from enterprise SSDs, positioned between the RAID controller's primary DRAM cache and the hard disk drives (HDDs). It boosts the system's random IOPS by copying frequently accessed random data onto SSDs, which are far faster than HDDs, so a few SSDs can raise overall random performance at modest cost. With this technology, QCache 2.0 can improve random read performance by up to 92 times and random write performance by up to 171 times. SSDs also provide a much larger, more scalable cache than controller memory. SSD cache is a licensed feature available on the XCubeSAN series.
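To illustrate the tiering idea, here is a minimal Python sketch of the read path: DRAM cache first, then SSD cache, then HDD. The names, latencies, and dict-based caches are illustrative assumptions, not QCache internals.

DRAM_LATENCY_US, SSD_LATENCY_US, HDD_LATENCY_US = 1, 100, 10_000  # illustrative

dram_cache: dict[int, bytes] = {}  # small, fastest tier (controller memory)
ssd_cache: dict[int, bytes] = {}   # large secondary tier (SSD cache pool)

def read_from_hdd(lba: int) -> bytes:
    return b"\x00" * 4096  # stands in for a slow spinning-disk read

def read_block(lba: int) -> tuple[bytes, int]:
    """Return (data, simulated latency in us), walking DRAM -> SSD -> HDD."""
    if lba in dram_cache:                 # primary cache hit
        return dram_cache[lba], DRAM_LATENCY_US
    if lba in ssd_cache:                  # SSD cache hit: much cheaper than HDD
        dram_cache[lba] = ssd_cache[lba]
        return ssd_cache[lba], SSD_LATENCY_US
    data = read_from_hdd(lba)             # miss: pay the full HDD cost
    ssd_cache[lba] = data                 # keep hot data on SSD for next time
    return data, HDD_LATENCY_US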
 
When to Use SSD Cache
 
Generally, SSD read cache is particularly effective when:
  • Reads are far more common than writes in the production environment, as is typical of live database or web service applications.
  • Slow HDD read speeds cause performance bottlenecks.
  • The amount of repeatedly accessed data is smaller than the capacity of the SSD cache.

SSD read-write cache is particularly effective when:
  • Reads and writes are mixed in the production environment, as is common in file service applications.
  • Slow HDD read and write speeds cause performance bottlenecks.
  • As with SSD read cache, the amount of repeatedly accessed data is smaller than the capacity of the SSD cache.
  • You are willing to accept a small risk in exchange for better write performance, because writes are buffered in the SSD cache pool; data written to the SSD cache can then also serve subsequent reads.
System Memory & SSD Cache Capacity
 
The SSD cache function needs system memory to store metadata, so the usable SSD cache capacity is proportional to the size of the controller's system memory. The accompanying table shows the relationship between the system memory per controller and the maximum SSD cache capacity.
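As a purely hypothetical illustration of that proportionality (the actual memory-to-capacity table is model specific and not reproduced here), a linear rule might look like the sketch below; the ratio constant is an assumption, not a published figure.

RATIO_GB_CACHE_PER_GB_MEM = 256  # hypothetical; see the product table for real values
MIN_MEMORY_GB = 8                # from the footnote below: 8GB is required for QCache

def max_ssd_cache_gb(system_memory_gb: int) -> int:
    """Usable SSD cache scales linearly with controller memory (sketch)."""
    if system_memory_gb < MIN_MEMORY_GB:
        return 0  # e.g. the 4GB default on XS3200 / XS1200 cannot enable QCache
    return system_memory_gb * RATIO_GB_CACHE_PER_GB_MEM

print(max_ssd_cache_gb(16))  # -> 4096 under this assumed ratio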
 
¹ Note that the default memory of the XS3200 / XS1200 controller is 4GB; 8GB is required to enable QCache.
 
SSD Cache Pool Architecture
 
QCache 2.0 supports read and write cache, with up to four SSD cache pools per system. Each SSD cache pool serves one dedicated storage pool and is shared by that pool's volumes for effective use of resources. A minimal sketch of these rules follows the list below.
 
  • An SSD cache pool is a group of SSDs that provides cache capacity for a dedicated storage pool.
  • Both read and read-write cache are supported, with up to four SSD cache pools per system.
  • A read cache pool supports adding and removing SSDs to increase or decrease the SSD cache capacity.
  • A read-write cache pool supports adding two SSDs at a time to increase the SSD cache capacity.
  • SSD cache can be enabled or disabled per volume in the pool; volumes with SSD cache enabled consume SSD cache pool capacity as their hot data is populated.
  • Up to 32 volumes in a pool can have SSD cache enabled.
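The sketch below models the limits above as plain Python classes; the class and method names are illustrative, not QSAN API names.

MAX_CACHE_POOLS_PER_SYSTEM = 4    # up to four SSD cache pools per system
MAX_CACHED_VOLUMES_PER_POOL = 32  # up to 32 volumes per pool may enable SSD cache

class SSDCachePool:
    def __init__(self, storage_pool: str, cache_type: str = "read"):
        self.storage_pool = storage_pool       # the one dedicated storage pool
        self.cache_type = cache_type           # "read" or "read-write"
        self.cached_volumes: set[str] = set()  # volumes with SSD cache enabled

    def enable_volume(self, volume: str) -> None:
        if len(self.cached_volumes) >= MAX_CACHED_VOLUMES_PER_POOL:
            raise ValueError("at most 32 volumes per pool can enable SSD cache")
        self.cached_volumes.add(volume)

class System:
    def __init__(self):
        self.cache_pools: list[SSDCachePool] = []

    def add_cache_pool(self, pool: SSDCachePool) -> None:
        if len(self.cache_pools) >= MAX_CACHE_POOLS_PER_SYSTEM:
            raise ValueError("at most four SSD cache pools per system")
        self.cache_pools.append(pool)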
SSD Read Cache with NRAID+
 
  • SSD read cache technology uses NRAID+, which is parallel NRAID without striping.
  • Compared to NRAID or RAID 0, NRAID+ distributes cache data over all SSDs.
  • NRAID+ keeps the advantages of NRAID while delivering better random I/O, and it makes it easy to add or remove SSDs from the SSD cache pool to increase or decrease its capacity. A placement sketch follows this list.
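A sketch of the NRAID+ placement idea under these assumptions: each cache block lands whole on a single SSD (no striping), and blocks fan out across all members, so growing or shrinking the pool only changes where future blocks go. The placement policy shown is illustrative.

class NRAIDPlusPool:
    """Read cache SSDs without striping: whole blocks spread over all members."""
    def __init__(self, num_ssds: int):
        self.blocks_on_ssd = [0] * num_ssds  # per-SSD cache block counts

    def place_block(self, block_id: int) -> int:
        """Place one whole cache block on the least-filled SSD (no striping)."""
        ssd = self.blocks_on_ssd.index(min(self.blocks_on_ssd))
        self.blocks_on_ssd[ssd] += 1
        return ssd

pool = NRAIDPlusPool(num_ssds=4)
print([pool.place_block(b) for b in range(8)])  # -> [0, 1, 2, 3, 0, 1, 2, 3]
# Because no block spans SSDs, adding or removing a member is straightforward.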
SSD Read-write Cache with NRAID 1+
 
  • SSD read-write cache technology uses NRAID 1+, which is parallel NRAID 1 without striping.
  • NRAID 1+ mirrors cache data across pairs of SSDs, which is why a read-write cache pool grows two SSDs at a time.
  • The mirror protects the write data buffered in the SSD cache pool against a single SSD failure. A mirrored-write sketch follows this list.
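A sketch of the NRAID 1+ idea, assuming mirrored SSD pairs (consistent with adding two SSDs at a time); the pairing and rotation policy here are illustrative.

class NRAID1PlusPool:
    """Read-write cache SSDs managed as mirrored pairs (illustrative structure)."""
    def __init__(self, num_ssds: int):
        if num_ssds % 2 != 0:
            raise ValueError("read-write cache pools grow two SSDs at a time")
        self.pairs = [(i, i + 1) for i in range(0, num_ssds, 2)]
        self.next_pair = 0

    def write_block(self, block_id: int, data: bytes) -> tuple[int, int]:
        """Buffer one dirty cache block on both SSDs of a pair (mirrored)."""
        pair = self.pairs[self.next_pair]
        self.next_pair = (self.next_pair + 1) % len(self.pairs)  # spread over pairs
        for ssd in pair:
            pass  # a real system would write `data` to this SSD here
        return pair

pool = NRAID1PlusPool(num_ssds=4)
print(pool.write_block(0, b"dirty"))  # -> (0, 1): both mirrors hold the block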
Cache I/O Types
 
There are three predefined cache I/O types, Database, File System, and Web Service, plus a Customization option; the chosen type is applied to an SSD cache pool. Selecting the cache I/O type that matches your application lets the SSD cache work to its best advantage.
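The parameters behind an I/O type are the cache block size and the two populate thresholds. The sketch below collects the Database preset values that appear in the Test Case 3 configuration later in this paper; the File System and Web Service presets are predefined on the system but their values are not listed here, so they stay as placeholders.

CACHE_IO_TYPES = {
    "Database": {"cache_block_size_mb": 1,
                 "populate_on_read_threshold": 2,
                 "populate_on_write_threshold": 1},
    "File System": None,  # predefined; values not given in this paper
    "Web Service": None,  # predefined; values not given in this paper
    # Customization lets the administrator choose all three parameters, e.g.
    # the 4MB / 1 / 0 combination used in Test Case 1:
    "Customization": {"cache_block_size_mb": 4,
                      "populate_on_read_threshold": 1,
                      "populate_on_write_threshold": 0},
}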
Test Results
Test Case 1: SSD Read Cache with 1 / 2 / 4 / 8 SSDs
 
This test verifies the dramatic performance gains offered by SSD read cache. We test the SSD read cache with 1 / 2 / 4 / 8 SSDs. Given the RAID-level design of the SSD read cache pool, in theory the more SSDs that are used, the better the SSD read cache performs. We also set the populate-on-read threshold to 1, which means that data hit once is populated to the SSD (sketched below).
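A sketch of the threshold logic as described, assuming a simple per-block hit counter: with a threshold of 1 a block is populated on its first hit, and 0 disables population for that I/O direction (the same logic covers the populate-on-write threshold used in Test Case 2).

from collections import Counter

read_hits, write_hits = Counter(), Counter()

def should_populate(block: int, is_write: bool,
                    populate_on_read: int = 1,
                    populate_on_write: int = 0) -> bool:
    """Populate a block into SSD cache once its hit count reaches the threshold."""
    hits, threshold = (write_hits, populate_on_write) if is_write \
                      else (read_hits, populate_on_read)
    if threshold == 0:
        return False  # 0 disables population for this I/O direction
    hits[block] += 1
    return hits[block] >= threshold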
Summary
 
  • Without SSD cache, the average IOPS is 4,512. With SSD cache enabled on 8 SSDs, IOPS increases to 216,434, an improvement of (216,434 – 4,512) / 4,512 = 46.97, or about 4,697% (the formula is sketched after this summary). The warm-up time is about 7 minutes.
  • SSD Cache 2.0 can improve random read performance by up to 47 times.
  • The more SSDs are used, the better the SSD read cache performs.
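The improvement percentages quoted in the summaries throughout this paper all follow the same formula, shown here as a small check; the function name is illustrative.

def improvement_pct(base_iops: float, cached_iops: float) -> float:
    return (cached_iops - base_iops) / base_iops * 100

print(round(improvement_pct(4_512, 216_434)))  # -> 4697, i.e. about 47 times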
 
Test Equipment & Configurations²
SSD Cache
  • I/O Type: Customization
  • Cache Block Size: 4MB
  • Populate-on-read Threshold: 1
  • Populate-on-write Threshold: 0
I/O Pattern
  • Workers: 1
  • Outstanding (Queue Depth): 128
  • Access Specifications: 4KB, 100% Read, 100% Random
Test Case 2: SSD Write Cache with 2 / 4 / 8 SSDs
 
In this test we exercise the SSD read-write cache with 2 / 4 / 8 SSDs. As before, we set the populate-on-write threshold to 1, which means that data hit once is populated to the SSD; the same threshold logic sketched under Test Case 1 applies to writes.
Summary
 
  • Without SSD cache, the average IOPS is 1,660. With SSD cache enabled on 8 SSDs, IOPS increases to 143,898, an improvement of (143,898 – 1,660) / 1,660 = 85.69, or about 8,569%. The warm-up time is about 6.5 minutes.
  • SSD Cache 2.0 can improve random write performance by up to 86 times.
  • The more SSDs are used, the better the SSD write cache performs.
 
Test Equipment & Configurations²
SSD Cache
  • I/O Type: Customization
  • Cache Block Size: 4MB
  • Populate-on-read Threshold: 1
  • Populate-on-write Threshold: 1
I/O Pattern
  • Workers: 1
  • Outstanding (Queue Depth): 128
  • Access Specifications: 4KB, 100% Write, 100% Random
Test Case 3: Simulating a Database Application
 
This test simulates a database application. We use the Database I/O type in the SSD cache pool configuration and test with a database access pattern (8KB, 67% read, 100% random).
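For reference, a sketch of generating such an access pattern (8KB transfers, 67% reads, uniformly random addresses), similar to what an Iometer access specification produces; the function name and LBA range are illustrative.

import random

def database_pattern(num_ops: int, lba_count: int, seed: int = 0):
    """Yield (op, lba, size) tuples: 8KB transfers, 67% reads, 100% random."""
    rng = random.Random(seed)
    for _ in range(num_ops):
        op = "read" if rng.random() < 0.67 else "write"
        yield op, rng.randrange(lba_count), 8 * 1024

for op, lba, size in database_pattern(5, lba_count=1_000_000):
    print(op, lba, size)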
Summary
 
  • The results are very good when the amount of hot data is smaller than the capacity of the SSD cache pool.
  • Users must estimate the amount of hot data accurately to achieve the best results.
 
Test Equipment & Configurations²
SSD Cache
  • I/O Type: Database
  • Cache Block Size: 1MB
  • Populate-on-read Threshold: 2
  • Populate-on-write Threshold: 1
I/O Pattern
  • Workers: 1
  • Outstanding (Queue Depth): 128
  • Access Specifications: 8KB, 67% Read, 100% Random
Test Case 4: Best Practice of SSD Read Cache on Dual Controller
 
The cases above were tested on a single controller. This test provides the best practice for SSD read cache on a dual controller system. We expect the performance to be roughly double that of the single-controller test.
Summary
  • Without SSD cache, the average IOPS is 4,986. After enabling SSD cache with 8 SSDs, IOPS increases to 461,037, an improvement of (461,037 – 4,986) / 4,986 = 91.47, or about 9,147%. The warm-up time is about 13.5 minutes.
  • SSD Cache 2.0 can improve random read performance by up to 92 times.
  • This represents the highest SSD cache performance of the system.
 
Test Equipment & Configurations³
SSD Cache
  • I/O Type: Customization
  • Cache Block Size: 4MB
  • Populate-on-read Threshold: 1
  • Populate-on-write Threshold: 0
I/O Pattern
  • Workers: 1
  • Outstanding (Queue Depth): 128
  • Access Specifications: 4KB, 100% Read, 100% Random
Test Case 5: Best Practice of SSD Write Cache on Dual Controller
 
This test provides the best practice for SSD write cache on a dual controller system. Again, we expect the performance to be roughly double that of the single-controller test.
Summary
 
  • Without SSD cache, the average IOPS is 1,268. After enabling SSD cache with 8 SSDs, IOPS increases to 217,495, an improvement of (217,495 – 1,268) / 1,268 = 170.53, or about 17,053%. The warm-up time is about 18 minutes.
  • SSD Cache 2.0 can improve random write performance by up to 171 times.
  • This represents the highest SSD cache improvement ratio of the system.
 
Test Equipment & Configurations³
SSD Cache
  • I/O Type: Customization
  • Cache Block Size: 4MB
  • Populate-on-read Threshold: 1
  • Populate-on-write Threshold: 1
I/O Pattern
  • Workers: 1
  • Outstanding (Queue Depth): 128
  • Access Specifications: 4KB, 100% Write, 100% Random
Conclusion
 
The hybrid storage concept of storage acceleration uses the idea of hot data to accelerate the I/O performance of an entire storage system. When hardware and IT administration costs are taken into account, SSD cache as available in modern SAN systems is generally the best way for most businesses to gain the performance benefits of flash-based storage without sacrificing the reliability of their data.
 
Test Equipment & Configurations
² Test Equipment & Configurations - Single Controller
 
Server
  • Model: HP Z840 (CPU: 2 x Xeon E5-2620v3 2.4GHz / RAM: 32GB)
    • FC HBA: QLogic QLE2694-SR
    • OS: Windows Server 2012 R2
  • Model: Dell E25S (CPU: 2 x Xeon E5-2620v3 2.4GHz / RAM: 32GB)
    • FC HBA: QLogic QLE2694-SR
    • OS: Windows Server 2012 R2
Storage
  • Model: XCubeSAN XS5224D
    • Memory: 16GB (2 x 8GB in bank 1 & 3) per controller
    • Firmware 1.1.2
    • HDD: 16 x Seagate Constellation ES, ST500NM0001, 500GB, SAS 6Gb/s
    • SSD: 8 x HGST Ultrastar SSD800MH.B, HUSMH8010BSS200, 100GB, SAS 12Gb/s
  • HDD Pool: 1 x RAID 5 Pool with 16 x NL-SAS HDDs in Controller 1
  • HDD Volume: 2 x 45GB in Pool
  • FC Session: 2 per Volume
³ Test Equipment & Configurations - Dual Controller
 
Server
  • Model: HP Z840 (CPU: 2 x Xeon E5-2620v3 2.4GHz / RAM: 32GB)
    • FC HBA: QLogic QLE2694-SR
    • OS: Windows Server 2012 R2
  • Model: Dell E25S (CPU: 2 x Xeon E5-2620v3 2.4GHz / RAM: 32GB)
    • FC HBA: QLogic QLE2694-SR
    • OS: Windows Server 2012 R2
Storage
  • Model: XCubeSAN XS5224D
    • Memory: 16GB (2 x 8GB in bank 1 & 3) per controller
    • Firmware 1.1.2
    • HDD: 16 x Seagate Constellation ES, ST500NM0001, 500GB, SAS 6Gb/s
    • SSD: 8 x HGST Ultrastar SSD800MH.B, HUSMH8010BSS200, 100GB, SAS 12Gb/s
  • HDD Pool: 1 x RAID 5 Pool with 16 x NL-SAS HDDs in Controller 1, plus 1 x RAID 5 Pool 2 with 16 x NL-SAS HDDs in Controller 2
  • HDD Volume: 2 x 45GB in Pool, plus 2 x 45GB in Pool 2
  • FC Session: 1 per Volume