
MegaRAID® SAS Software User Guide
51530-00, Rev. B
July 2011
Revision History

51530-00, Rev. B, July 2011
• Updated the guide with VMware 5.0 information.

51530-00, Rev. A, May 2011
• Updated screen shots wherever required in the document.
• Updated the guide with CacheCade Pro 2.0 SSD Read/Write Caching software content.
• Added approximately 53 events, 0x0189 through 0x01bd, in Appendix A.
• Updated the content in Section 5.15.15, Download Firmware to the Physical Devices.
• Modified content in Chapter 4, WebBIOS Configuration Utility.
• Updated content in Chapter 5, MegaRAID Command Tool.
• Updated content in Chapter 6, MegaRAID Storage Manager Overview and Installation.
• Updated content in Chapter 7, MegaRAID Storage Manager Window and Menus.
• Updated content in Chapter 8, Configuration.
• Updated content in Chapter 9, Monitoring Controllers and Its Attached Devices.
• Updated content in Chapter 10, Maintaining and Managing Storage Configurations.
• Updated content in Chapter 11, Using MegaRAID Advanced Software.
• Updated content in Appendix A, Events and Messages.
• Created a new appendix, Appendix D, History of Technical Changes.
• Updated the name of the CacheCade SSD Caching software.

NOTE: For a history of all technical changes made to this guide in previous releases, refer to Appendix D, History of Technical Changes.
LSI and the LSI logo are trademarks or registered trademarks of LSI Corporation or its subsidiaries. All other brand and product names may be trademarks of their respective companies. LSI Corporation reserves the right to make changes to the product(s) or information disclosed herein at any time without notice. LSI Corporation does not assume any responsibility or liability arising out of the application or use of any product or service described herein.
This document contains proprietary information of LSI Corporation. The information contained herein is not to be used by or disclosed to third parties without the express written permission of LSI Corporation.
Corporate Headquarters: Milpitas, CA, 800-372-2447
Email: globalsupport@lsi.com
Website: www.lsi.com
Document Number: 51530-00, Rev. B
Copyright © 2011 LSI Corporation. All Rights Reserved.
Table of Contents
Chapter 1: Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .13
1.1 SAS Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .13
1.2 Serial-Attached SCSI Device Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .14
1.3 Serial ATA III Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
1.4 Solid State Drive Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
1.4.1 SSD Guard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
1.5 Dimmer Switch Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
1.6 UEFI 2.0 Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
1.7 Configuration Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
1.7.1 Valid Drive Mix Configurations with HDDs and SSDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18
1.8 Technical Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .19
Chapter 2: Introduction to RAID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .21
2.1 RAID Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .21
2.2 RAID Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .21
2.3 RAID Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .21
2.4 Components and Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .22
2.4.1 Drive Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .22
2.4.2 Virtual Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .22
2.4.3 Fault Tolerance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .22
2.4.4 Consistency Check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .23
2.4.5 Copyback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .23
2.4.6 Background Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .24
2.4.7 Patrol Read . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .25
2.4.8 Disk Striping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .25
2.4.9 Disk Mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .26
2.4.10 Parity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .26
2.4.11 Disk Spanning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27
2.4.12 Hot Spares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .28
2.4.13 Disk Rebuilds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .29
2.4.14 Rebuild Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .30
2.4.15 Hot Swap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .30
2.4.16 Drive States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .30
2.4.17 Virtual Drive States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .31
2.4.18 Beep Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .31
2.4.19 Enclosure Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .32
2.5 RAID Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .32
2.5.1 Summary of RAID Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .32
2.5.2 Selecting a RAID Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .33
2.5.3 RAID 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .33
2.5.4 RAID 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .34
2.5.5 RAID 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .34
2.5.6 RAID 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .35
2.5.7 RAID 00 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .36
2.5.8 RAID 10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .37
2.5.9 RAID 50 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .38
2.5.10 RAID 60 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .39
2.6 RAID Configuration Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .40
2.6.1 Maximizing Fault Tolerance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .41
2.6.2 Maximizing Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .42
2.6.3 Maximizing Storage Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .43
2.7 RAID Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .44
2.7.1 RAID Availability Concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .44
2.8 Configuration Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .45
2.9 Number of Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .45
2.9.1 Drive Group Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .45
Chapter 3: SafeStore Disk Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .47
3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .47
3.2 Purpose and Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .47
3.3 Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .48
3.4 Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .48
3.4.1 Enable Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .48
3.4.2 Change Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .49
3.4.3 Create Secure Virtual Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .50
3.4.4 Import a Foreign Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .50
3.5 Instant Secure Erase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .50
Chapter 4: WebBIOS Configuration Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .53
4.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .53
4.2 Starting the WebBIOS Configuration Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .53
4.3 WebBIOS Configuration Utility Main Dialog Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .54
4.4 Managing Software Licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .56
4.4.1 Managing MegaRAID Advanced Software Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .56
4.4.2 Reusing the Activation Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .60
4.4.3 Managing Advanced Software Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .60
4.4.4 Activating an Unlimited Key Over a Trial Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .61
4.4.5 Activating Trial Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .61
4.4.6 Activating an Unlimited Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .62
4.4.7 Securing MegaRAID Advanced Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .63
4.4.8 Confirm Re-hosting Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .63
4.4.9 Re-hosting Process Complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .65
4.5 Creating a Storage Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .66
4.5.1 Using Automatic Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .69
4.5.2 Using Manual Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .69
4.6 CacheCade Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
4.6.1 Creating a CacheCade 2.0 SSD Read Caching Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
4.6.2 Creating a CacheCade Pro 2.0 SSD Read/Write Caching Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
4.7 Selecting SafeStore Encryption Services Security Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
4.7.1 Enabling the Security Key Identifier, Security Key, and Password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
4.7.2 Enabling Drive Security using EKM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
4.7.3 Changing the Security Key Identifier, Security Key, and Pass Phrase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
4.7.4 Change Security from EKM to LKM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
4.7.5 Changing Security from LKM to EKM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
4.7.6 Disabling the Drive Security Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
4.8 Viewing and Changing Device Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4.8.1 Viewing Controller Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4.8.2 Viewing Virtual Drive Properties, Policies, and Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
4.8.3 Viewing Drive Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
4.8.4 Shield State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
4.8.5 Viewing and Changing Battery Backup Unit Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
4.8.6 Managing Link Speed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
4.8.7 Viewing Enclosure Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
4.8.8 SSD Disk Cache Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
4.8.9 Emergency Hotspare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
4.8.10 Emergency Hotspare for Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
4.9 Viewing and Expanding a Virtual Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
4.10 Suspending and Resuming Virtual Drive Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
4.11 Using MegaRAID Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
4.11.1 Recovery Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
4.11.2 Enabling the Recovery Advanced Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
4.11.3 Creating Snapshots and Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
4.11.4 Creating Concurrent Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
4.11.5 Selecting the Snapshot Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
4.11.6 Viewing Snapshot Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
4.11.7 Restoring a Virtual Drive by Rolling Back to a Snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
4.11.8 Cleaning Up a Snapshot Repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
4.12 Non-SED Secure Erase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
4.12.1 Erasing a Non-SED Physical Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
4.12.2 Virtual Drive Erase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
4.13 Viewing System Event Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
4.14 Managing Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
4.14.1 Running a Consistency Check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
4.14.2 Deleting a Virtual Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
4.14.3 Importing or Clearing a Foreign Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
4.14.4 Importing Foreign Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
4.14.5 Importing Foreign Drives in EKM Mode for EKM-Secured Locked Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
4.14.6 Importing Foreign Drives for LKM-Secured Locked Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
4.14.7 Importing Foreign Drives in LKM Mode EKM-Secured Locked Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
4.14.8 Migrating the RAID Level of a Virtual Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
4.14.9 New Drives Attached to a MegaRAID Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
4.15 WebBIOS Dimmer Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
4.15.1 Power-Save Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
4.15.2 Power Save Settings-Advanced . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
4.15.3 Power-Save While Creating Virtual Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Chapter 5: MegaRAID Command Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
5.1 Product Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
5.2 Novell NetWare, SCO, Solaris, FreeBSD, and MS-DOS Operating System Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
5.3 Command Line Abbreviations and Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
5.3.1 Abbreviations Used in the Command Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
5.3.2 Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
5.4 Pre-boot MegaCLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
5.5 CacheCade Related Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
5.5.1 Create a Solid State Drive Cache Drive to Use as Secondary Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
5.5.2 Delete a Solid State Drive Cache Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
5.5.3 Associate/Disassociate Virtual Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
5.5.4 Display CacheCade Pro 2.0 Configurations on a Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
5.5.5 Create a RAID Drive Group for CacheCade Pro 2.0 from All Unconfigured Good Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
5.5.6 Remove Blocked Access on a Virtual Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
5.5.7 Create RAID 0 Configuration with SSD Caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
5.5.8 Create a RAID Level 10, 50, 60 (spanned) Configuration with SSD Caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
5.5.9 Delete Virtual Drives with SSD Caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
5.5.10 Clear Configurations on CacheCade Pro 2.0 Virtual Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
5.5.11 Create a CacheCade Pro 2.0 Virtual Drive with RAID level and Write Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
5.6 Software License Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
5.7 SafeStore Security Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
5.7.1 Use Instant Secure Erase on a Physical Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
5.7.2 Secure Data on a Virtual Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
5.7.3 Destroy the Security Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
5.7.4 Create a Security Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
5.7.5 Create a Drive Security Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
5.7.6 Change the Security Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
5.7.7 Get the Security Key ID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
5.7.8 Set the Security Key ID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
5.7.9 Verify the Security Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
5.8 Controller Property-Related Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
5.8.1 Display Controller Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
5.8.2 Display Number of Controllers Supported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
5.8.3 Enable or Disable Automatic Rebuild . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
5.8.4 Flush Controller Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
5.8.5 Set Controller Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
5.8.6 Display Specified Controller Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
5.8.7 Set Factory Defaults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
5.8.8 Set SAS Address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
5.8.9 Set Time and Date on Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
5.8.10 Display Time and Date on Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
5.8.11 Get Connector Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
5.8.12 Set Connector Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
5.9 Patrol Read-Related Controller Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
5.9.1 Set Patrol Read Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
5.9.2 Set Patrol Read Delay Interval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
5.10 BIOS-Related Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
5.10.1 Set or Display Bootable Virtual Drive ID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
5.10.2 Select BIOS Status Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
5.11 Battery Backup Unit-Related Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
5.11.1 Display BBU Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
5.11.2 Display BBU Status Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
5.11.3 Display BBU Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
5.11.4 Display BBU Design Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
5.11.5 Display Current BBU Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
5.11.6 Start BBU Learning Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
5.11.7 Place Battery in Low-Power Storage Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
5.11.8 Set BBU Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
5.12 Options for Displaying Logs Kept at the Firmware Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
5.12.1 Event Log Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
5.12.2 Set BBU Terminal Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
5.13 Configuration-Related Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
5.13.1 Create a RAID Drive Group from All Unconfigured-Good Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
5.13.2 Add RAID 0, 1, 5, or 6 Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
5.13.3 Add RAID 10, 50, or 60 Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
5.13.4 Clear the Existing Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
5.13.5 Save the Configuration on the Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
5.13.6 Restore the Configuration Data from File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
5.13.7 Manage Foreign Configuration Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
5.13.8 Delete Specified Virtual Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
5.13.9 Display the Free Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
5.14 Virtual Drive-Related Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
5.14.1 Display Virtual Drive Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
5.14.2 Change the Virtual Drive Cache and Access Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
5.14.3 Display the Virtual Drive Cache and Access Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
5.14.4 Manage Virtual Drives Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
5.14.5 Manage a Consistency Check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
5.14.6 Schedule a Consistency Check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
5.14.7 Manage a Background Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
5.14.8 Perform a Virtual Drive Reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
5.14.9 Display Information about Virtual Drives and Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
5.14.10 Display the Bad Block Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
5.14.11 Display the Number of Virtual Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
5.14.12 Clear the LDBBM Table Entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
5.14.13 Display the List of Virtual Drives with Preserved Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
5.14.14 Discard the Preserved Cache of a Virtual Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
5.14.15 Expand a Virtual Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
5.15 Drive-Related Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
5.15.1 Display Drive Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
5.15.2 Set the Drive State to Online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
5.15.3 Set the Drive State to Offline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
5.15.4 Change the Drive State to Unconfigured-Good . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
5.15.5 Change the Drive State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
5.15.6 Manage a Drive Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
5.15.7 Rebuild a Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
5.15.8 Locate the Drives and Activate LED . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
5.15.9 Mark the Configured Drive as Missing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
5.15.10 Display the Drives in Missing Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
5.15.11 Replace the Configured Drives and Start an Automatic Rebuild . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
5.15.12 Prepare the Unconfigured Drive for Removal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
5.15.13 Display Total Number of Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
5.15.14 Display List of Physical Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
5.15.15 Download Firmware to the Physical Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
5.15.16 Configure All Free Drives into a RAID 0, 1, 5, or 6 Configuration for a Specific Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
5.15.17 Set the Mapping Mode of the Drives to the Selected Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
5.15.18 Secure Erase for Virtual Drives and Physical Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
5.15.19 Perform the Copyback Operation on the Selected Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
5.16 Enclosure-Related Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
5.16.1 Display Enclosure Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
5.16.2 Display Enclosure Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
5.16.3 Upgrading the Firmware without Restarting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
5.17 Flashing the Firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
5.17.1 Flash the Firmware with the ROM File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
5.17.2 Flash the Firmware in Mode 0 with the ROM File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
5.18 SAS Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
5.19 Diagnostic-Related Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
5.19.1 Start Controller Diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
5.19.2 Perform a Full Stroke Seek Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
5.19.3 Start Battery Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
5.20 Recovery (Snapshot)-Related Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
5.20.1 Enable the Snapshot Feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
5.20.2 Disable the Snapshot Feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
5.20.3 Take a Snapshot of a Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
5.20.4 Set the Snapshot Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
5.20.5 Delete a Snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
5.20.6 Create a View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
5.20.7 Delete a View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
5.20.8 Roll back to an Older Snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
5.20.9 Display Snapshot and View Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
5.20.10 Clean the Recoverable Free Space on the Drives in a Virtual Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
5.20.11 Display the Information for a Specific View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
5.20.12 Enable the Snapshot Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
5.21 FastPath-Related Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
5.22 Dimmer Switch-Related Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
5.22.1 Display Selected Adapter Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
5.22.2 Sets the Properties on the Selected Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
5.22.3 Displays the Power-Saving Level on the Virtual Disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
5.22.4 Displays about Adding a RAID Level to a Specified Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
5.22.5 Create a RAID Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
5.22.6 Add the Unconfigured Drive to a Specified Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
5.22.7 Displays the Cache and Access Policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
5.23 Performance Monitoring Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
5.23.1 Starting Performance Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
5.23.2 Stopping Performance Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
5.23.3 Saving Performance Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
5.24 Miscellaneous Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
5.24.1 Display the Version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
5.24.2 Display the MegaCLI Version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
5.24.3 Display Help for MegaCLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
5.24.4 Display Summary Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Chapter 6: MegaRAID Storage Manager Overview and Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
6.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
6.1.1 Creating Storage Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
6.1.2 Monitoring Storage Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
6.1.3 Maintaining Storage Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
6.2 Hardware and Software Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
6.3 Prerequisites to Running MegaRAID Storage Manager Remote Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
6.4 Installing MegaRAID Storage Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
6.4.1 Prerequisite for MegaRAID Storage Manager Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
6.4.2 Installing MegaRAID Storage Manager Software on Microsoft Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
6.4.3 Installing MegaRAID Storage Manager for the Solaris SPARC Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
6.4.4 Uninstalling MegaRAID Storage Manager Software for Solaris SPARC Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
6.4.5 Installing MegaRAID Storage Manager for Linux Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
6.4.6 Prerequisites for Installing MegaRAID Storage Manager on the RHEL6.0 x64 Operating System . . . . . . . . . . . . . . 288
6.4.7 Linux Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
6.4.8 Kernel Upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
6.4.9 Uninstalling MegaRAID Storage Manager Software on Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
6.4.10 MegaRAID Storage Manager Customization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
6.5 MegaRAID Storage Manager Support and Installation on VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
6.5.1 Pre-requisites for Installing MegaRAID Storage Manager for VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
6.5.2 Installing MegaRAID Storage Manager on VMware ESX (VMware Classic) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
6.5.3 Uninstalling MegaRAID Storage Manager for VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
6.5.4 MegaRAID Storage Manager Support on the VMware ESXi Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
6.5.5 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
6.6 Installing and Configuring a CIM Provider . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
6.6.1 Installing a CIM SAS Storage Provider on the Linux Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
6.6.2 Installing a CIM SAS Storage Provider on Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
6.7 Installing and Configuring an SNMP Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
6.7.1 Prerequisite for LSI SNMP Agent RPM Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
6.7.2 Prerequisite for Installing SNMP Agent on Linux Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
6.7.3 Installing and Configuring an SNMP Agent on a Linux Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
6.7.4 Installing and Configuring an SNMP Agent on the Solaris Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
6.7.5 Installing an SNMP Agent on the Windows Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
6.8 MegaRAID Storage Manager Support and Installation on the Solaris 10 Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
6.8.1 Installing MegaRAID Storage Manager Software for the Solaris 10 Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
6.8.2 Uninstalling MegaRAID Storage Manager Software for the Solaris 10 Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
6.9 Installing MegaCLI for VMware 5.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
6.10 MegaRAID Storage Manager Remotely Connecting to VMware ESX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
6.11 Prerequisites to Running MegaRAID Storage Manager Remote Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Chapter 7: MegaRAID Storage Manager Window and Menus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
7.1 Starting the MegaRAID Storage Manager Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
7.2 MegaRAID Storage Manager Main Menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
7.2.1 Dashboard, Physical View, Logical View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
7.2.2 Shield State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
7.2.3 Shield State Physical View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
7.2.4 Logical View Shield State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
7.2.5 Viewing the Physical Drive Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
7.2.6 Viewing Server Profile of a Drive in Shield State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
7.2.7 Displaying the Virtual Drive Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
7.2.8 Emergency HotSpare Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
7.2.9 SSD Disk Cache Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
7.2.10 Non-SED Secure Erase Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
7.2.11 Rebuild Write Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
7.2.12 Background Suspend or Resume Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
7.2.13 Enclosure Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
7.3 Monitoring Battery Backup Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
7.3.1 Properties and Graphical View Tabs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
7.3.2 Event Log Panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
7.3.3 Menu Bar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Chapter 8: Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
8.1 Creating a New Storage Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
8.1.1 Selecting Virtual Drive Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
8.1.2 Optimum Controller Settings for CacheCade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
8.1.3 Optimum Controller Settings for FastPath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
8.1.4 Creating a Virtual Drive Using Simple Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
8.1.5 Creating a Virtual Drive using Advanced Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
8.2 Converting JBOD Drives to Unconfigured Good . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
8.2.1 Converting JBOD to Unconfigured Good from the MegaRAID Storage Manager Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
8.3 Adding Hot Spare Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
8.4 Changing Adjustable Task Rates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
8.5 Changing Power Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
8.5.1 Enhanced Dimmer Switch Power Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
8.5.2 Power Save Settings - Advanced . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
8.5.3 Automatically Spin Up Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
8.5.4 Power-Save Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
8.5.5 Power Save Mode - SSD Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
8.6 Changing Virtual Drive Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
8.7 Changing a Virtual Drive Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
8.7.1 Accessing the Modify Drive Group Wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
8.7.2 Adding a Drive or Drives to a Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
8.7.3 Removing a Drive from a Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
8.7.4 Replacing a Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
8.7.5 Migrating the RAID Level of a Virtual Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
8.7.6 New Drives Attached to a MegaRAID Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
8.8 Deleting a Virtual Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
Chapter 9: Monitoring Controllers and Its Attached Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
9.1 Alert Delivery Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
9.1.1 Vivaldi Log / MegaRAID Storage Manager Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
9.1.2 System Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
9.1.3 Pop-up Notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
9.1.4 E-mail Notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
9.2 Configuring Alert Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
9.3 Editing Alert Delivery Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
9.4 Changing Alert Delivery Methods for Individual Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
9.5 Changing the Severity Level for Individual Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
9.6 Rollback to Default Individual Event Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
9.7 Entering or Editing the Sender Email Address and SMTP Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
9.8 Authenticating the SMTP Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
9.9 Adding Email Addresses of Recipients of Alert Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
9.10 Testing Email Addresses of Recipients of Alert Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
9.11 Removing Email Addresses of Recipients of Alert Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
9.12 Saving Backup Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
9.13 Loading Backup Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
9.14 Monitoring Server Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
9.15 Monitoring Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
9.16 Monitoring Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
9.17 Running a Patrol Read . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
9.18 Monitoring Virtual Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
9.19 Monitoring Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
9.19.1 Monitoring Battery Backup Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
9.20 Battery Learn Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
9.20.1 Setting Learn Cycle Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
9.20.2 Starting a Learn Cycle Manually . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
9.21 Monitoring Rebuilds and Other Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
Chapter 10: Maintaining and Managing Storage Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
10.1 Initializing a Virtual Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
10.1.1 Running a Group Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
10.2 Running a Consistency Check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
10.2.1 Setting the Consistency Check Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
10.2.2 Scheduling a Consistency Check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
10.2.3 Running a Group Consistency Check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
10.3 Scanning for New Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
10.4 Rebuilding a Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
10.4.1 New Drives Attached to a MegaRAID Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
10.5 Making a Drive Offline or Missing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
10.6 Removing a Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
10.7 Upgrading the Firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Chapter 11: Using MegaRAID Advanced Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
11.1 MegaRAID Advanced Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
11.2 Recovery Advanced Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
11.2.1 MegaRAID Software Licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
11.2.2 Managing MegaRAID Advanced Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
11.2.3 Activation Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
11.2.4 Advanced MegaRAID Software Status Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
11.2.5 Application Scenarios and Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
11.2.6 Activating an Unlimited Key Over a Trial Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
11.2.7 Configuring Key Vault (Re-hosting process) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
11.2.8 Re-hosting Complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
11.2.9 Deactivate Trial Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
11.2.10 MegaRAID Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
11.2.11 Recovery Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
11.2.12 Enabling the Recovery Advanced Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
11.2.13 Snapshot Repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
11.2.14 Selecting the Virtual Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
11.2.15 Scheduling Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
11.2.16 Editing Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
11.2.17 Snapshot Base Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
11.2.18 Manage Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
11.2.19 Editing Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
11.2.20 Advanced Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
11.2.21 Create View Using Manage Snapshots Wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
11.2.22 Viewing Snapshot Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
11.2.23 No View Details for Snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
11.2.24 No Snapshot Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
11.2.25 Graphical Representation of Repository Virtual Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
11.2.26 Deleting a Snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
11.3 Disabling MegaRAID Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
11.4 CacheCade Advanced Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
11.4.1 Using the CacheCade 2.0 SSD Read Caching Advanced Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
11.4.2 Using the CacheCade Pro 2.0 SSD Read/Write Caching Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
11.5 FastPath Advanced Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
11.5.1 Setting FastPath Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
11.6 LSI SafeStore Encryption Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
11.6.1 Enabling Drive Security using EKM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
11.6.2 Supporting EKM Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
11.6.3 Change Security Settings- LKM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
11.6.4 Change Security Settings - EKM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
11.6.5 Importing Foreign Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
11.6.6 Importing Foreign Drives to LKM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
11.6.7 Importing Foreign Drives to EKM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
11.6.8 Importing Foreign Drives to EKM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
11.6.9 Enabling Drive Security using LKM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
11.6.10 Changing the Drive Security Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
11.6.11 Disabling Drive Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
11.6.12 Importing or Clearing a Foreign Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
11.7 Managing Link Speed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
Appendix A: Events and Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
A.1 Error Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
A.2 Event Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
Appendix B: MegaCLI Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
B.1 Error Messages and Descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
Appendix C: Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
Appendix D: History of Technical Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
Chapter 1: Overview

This chapter provides an overview of this guide, which documents the utilities used to configure, monitor, and maintain MegaRAID® Serial-attached SCSI (SAS) RAID controllers with RAID control capabilities and the storage-related devices connected to them.
This guide describes how to use the MegaRAID Storage Manager™ software, the WebBIOS™ configuration utility, and the MegaRAID command line interface (CLI).
This chapter documents the SAS technology, Serial ATA (SATA) technology, MegaRAID CacheCade™ 2.0 SSD Read Caching software, SSD Guard™, Dimmer Switch™, UEFI 2.0, configuration scenarios, and drive types. Other features such as FastPath and SafeStore are described in other chapters of this guide.


NOTE: This guide does not include the latest CacheCade and Enterprise Key Management System (EKMS) features.
1.1 SAS Technology
The MegaRAID 6Gb/s SAS RAID controllers are high-performance intelligent PCI Express-to-SAS/Serial ATA II controllers with RAID control capabilities. MegaRAID 6Gb/s SAS RAID controllers provide reliability, high performance, and fault-tolerant disk subsystem management. They are an ideal RAID solution for the internal storage of workgroup, departmental, and enterprise systems. MegaRAID 6Gb/s SAS RAID controllers offer a cost-effective way to implement RAID in a server.
SAS technology brings a wealth of options and flexibility with the use of SAS devices, Serial ATA (SATA) II devices, and CacheCade 2.0 SSD Read Caching software devices within the same storage infrastructure. Each of these device types has individual characteristics that make it the more suitable choice depending on your storage needs. MegaRAID gives you the flexibility to combine these technologies on the same controller, within the same enclosure, and in the same virtual drive.
NOTE: LSI® recommends that you carefully assess any decision to combine SAS drives and SATA drives within the same virtual drives. Although you can mix drives, LSI strongly discourages this practice; this applies to both HDDs and CacheCade 2.0 SSD Read Caching software.
MegaRAID 6Gb/s SAS RAID controllers are based on the LSI first-to-market SAS IC technology and proven MegaRAID technology. As second-generation PCI Express RAID controllers, the MegaRAID SAS RAID controllers address the growing demand for increased data throughput and scalability requirements across midrange and enterprise-class server platforms. LSI offers a family of MegaRAID SAS RAID controllers addressing the needs for both internal and external solutions.
The SAS controllers support the ANSI Serial Attached SCSI standard, version 2.1. In addition, the controller supports the SATA II protocol defined by the Serial ATA specification, version 3.0. Supporting both the SAS and SATA II interfaces, the SAS controller is a versatile controller that provides the backbone of both server environments and high-end workstation environments.
Each port on the SAS RAID controller supports SAS devices or SATA III devices using the following protocols:
• SAS Serial SCSI Protocol (SSP), which enables communication with other SAS devices
• SATA III, which enables communication with other SATA III devices
• Serial Management Protocol (SMP), which communicates topology management information directly with an attached SAS expander device
• Serial Tunneling Protocol (STP), which enables communication with a SATA III device through an attached expander

1.2 Serial-Attached SCSI Device Interface

SAS is a serial, point-to-point, enterprise-level device interface that leverages the proven SCSI protocol set. SAS is a convergence of the advantages of SATA II, SCSI, and Fibre Channel, and is the future mainstay of the enterprise and high-end workstation storage markets. SAS offers a higher bandwidth per pin than parallel SCSI, and it improves the signal and data integrity.
The SAS interface uses the proven SCSI command set to ensure reliable data transfers, while providing the connectivity and flexibility of point-to-point serial data transfers. The serial transmission of SCSI commands eliminates clock-skew challenges. The SAS interface provides improved performance, simplified cabling, smaller connectors, lower pin count, and lower power requirements when compared to parallel SCSI.
SAS controllers leverage a common electrical and physical connection interface that is compatible with Serial ATA technology. The SAS and SATA II protocols use a thin, 7-wire connector instead of the 68-wire SCSI cable or 26-wire ATA cable. The SAS/SATA II connector and cable are easier to manipulate, allow connections to smaller devices, and do not inhibit airflow. The point-to-point SATA II architecture eliminates inherent difficulties created by the legacy ATA master-slave architecture, while maintaining compatibility with existing ATA firmware.

1.3 Serial ATA III Features
The SATA bus is a high-speed, internal bus that provides a low pin count (LPC), low voltage level bus for device connections between a host controller and a SATA device.
The following list describes the SATA III features of the RAID controllers:
• Supports SATA III data transfers of 6Gb/s
• Supports STP data transfers of 6Gb/s
• Provides a serial, point-to-point storage interface
• Simplifies cabling between devices
• Eliminates the master-slave construction used in parallel ATA
• Allows addressing of multiple SATA II targets through an expander
• Allows multiple initiators to address a single target (in a fail-over configuration) through an expander

1.4 Solid State Drive Features
The MegaRAID firmware supports the use of SSDs as standard drives and/or additional controller cache, referred to as CacheCade 2.0 SSD Read Caching software. SSD drives are expected to behave like SATA or SAS HDDs except for the following:
• High random read speed (because there is no read-write head to move)
• High performance-to-power ratio, as these drives have very low power consumption compared to HDDs
• Low latency
• High mechanical reliability
• Lower weight and size
NOTE: Support for SATA SSD drives applies only to those drives that support ATA-8 ACS compliance.
You can choose whether to allow a virtual drive to consist of both CacheCade 2.0 SSD Read Caching software devices and HDDs. For a virtual drive that consists of CacheCade 2.0 SSD Read Caching software only, you can choose whether to allow SAS CacheCade 2.0 SSD Read Caching software drives and SATA CacheCade 2.0 SSD Read Caching software drives in that virtual drive. For virtual drives that have both CacheCade 2.0 SSD Read Caching software and HDDs, you can choose whether to mix SAS and SATA HDD drives with SAS and SATA CacheCade 2.0 SSD Read Caching software devices in various combinations.

1.4.1 SSD Guard
SSD Guard, a feature that is unique to MegaRAID, increases the reliability of SSDs by automatically copying data from a drive with potential to fail to a designated hot spare or newly inserted drive. Because SSDs are more reliable than hard disk drives (HDDs), non-redundant RAID 0 configurations are much more common than in the past. SSD Guard offers added data protection for RAID 0 configurations.
SSD Guard works by looking for a predictive failure while monitoring the SSD Self-Monitoring, Analysis, and Reporting Technology (S.M.A.R.T.) error log. If errors indicate that an SSD failure is imminent, the MegaRAID software starts a rebuild to preserve the data on the SSD and sends appropriate warning event notifications.
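The following Python sketch is illustrative only; it is not LSI firmware code or a documented API. It simply models the SSD Guard behavior described above under the assumption of a hypothetical per-drive predictive-failure flag read from the S.M.A.R.T. log: when a failure looks imminent, trigger a copy to a hot spare and raise a warning.

```python
# Illustrative model of the SSD Guard behavior described in the text.
# All names, fields, and callbacks are hypothetical stand-ins.

from dataclasses import dataclass


@dataclass
class SsdStatus:
    device_id: int
    smart_predictive_failure: bool  # hypothetical flag taken from the S.M.A.R.T. error log


def ssd_guard_check(ssds, start_copyback, send_warning):
    """For each SSD, trigger a preventive copy if S.M.A.R.T. predicts failure."""
    for ssd in ssds:
        if ssd.smart_predictive_failure:
            send_warning(f"SSD {ssd.device_id}: predictive failure reported")
            start_copyback(ssd.device_id)  # copy data to a hot spare or newly inserted drive


if __name__ == "__main__":
    drives = [SsdStatus(0, False), SsdStatus(1, True)]
    ssd_guard_check(
        drives,
        start_copyback=lambda d: print(f"copyback started for drive {d}"),
        send_warning=print,
    )
```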
1.5 Dimmer Switch Features
Powering and cooling drives represent a major cost for data centers. The MegaRAID Dimmer Switch feature set reduces the power consumption of the devices connected to a MegaRAID controller, which helps to share resources more efficiently and lowers the cost.
• Dimmer Switch I - Spins down unconfigured disks. This feature is configurable and can be disabled.
• Dimmer Switch II - Spins down hot spares. This feature is configurable and can be disabled.
• Dimmer Switch III - Spins down any logical disk after 30 minutes of inactivity, by default, if the array can be spun up within 60 seconds. This feature is configurable and can be disabled.
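As a small illustration of the Dimmer Switch III rule stated above, the following Python sketch encodes the default thresholds. It is a hypothetical helper, not part of any MegaRAID tool, and the parameter names are assumptions.

```python
# Illustrative model of the Dimmer Switch III decision: spin down after 30 minutes
# of inactivity (default) only if the array can spin back up within 60 seconds.

def should_spin_down(idle_minutes: float,
                     spin_up_seconds: float,
                     idle_threshold_min: float = 30.0,
                     max_spin_up_s: float = 60.0) -> bool:
    """Return True if the logical disk qualifies for Dimmer Switch III spin-down."""
    return idle_minutes >= idle_threshold_min and spin_up_seconds <= max_spin_up_s


print(should_spin_down(idle_minutes=45, spin_up_seconds=20))  # True
print(should_spin_down(idle_minutes=45, spin_up_seconds=90))  # False: spin-up would take too long
```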

1.6 UEFI 2.0 Support
UEFI 2.0 provides MegaRAID customers with expanded platform support. The MegaRAID UEFI 2.0 driver, a boot service device driver, handles block IO requests and SCSI pass-through (SPT) commands, and offers the ability to launch pre-boot MegaRAID management applications through a driver configuration protocol (DCP). The UEFI driver also supports driver diagnostic protocol, which allows administrators to access pre-boot diagnostics.

1.7 Configuration Scenarios
You can use the SAS RAID controllers in three scenarios:
• Low-end, Internal SATA II Configurations: In these configurations, use the RAID controller as a high-end SATA II-compatible controller that connects up to 8 disks either directly or through a port expander. These configurations are mostly for low-end or entry servers. Enclosure management is provided through an out-of-band Inter-IC (I2C) bus. Side bands of both types of internal SAS connectors support the SFF-8485 (SGPIO) interface.
• Midrange Internal SAS Configurations: These configurations are like the internal SATA II configurations, but with high-end disks. They are more suitable for low-range to midrange servers.
• High-end External SAS/SATA II Configurations: These configurations are for both internal and external connectivity, using SATA II drives, SAS drives, or both. External enclosure management is supported through in-band, SCSI-enclosed storage. The configuration must support STP and SMP.
Figure1 shows a direct-connect configuration. The Inter-IC (I2C) interface
communicates with peripherals. The external memory bus provides a 32-bit memory bus, parity checking, and chip select signals for pipelined synchronous burst static random access memory (PSBRAM), nonvolatile static random access memory (NVSRAM), and Flash ROM.
NOTE: The external memory bus is 32-bit for the SAS 8704ELP and the SAS 8708ELP, and 64-bit for the SAS 8708EM2, the SAS 8880EM2, and the SAS 8888ELP.
Figure 1: Example of an LSI SAS Direct-Connect Application
Figure2 shows an example of a SAS RAID controller configured with an LSISASx12
expander that is connected to SAS disks, SATA II disks, or both.
Figure 2: Example of an LSI SAS RAID Controller Configured with an LSISASx12 Expander

1.7.1 Valid Drive Mix Configurations with HDDs and SSDs

You can allow a virtual drive to consist of both SSDs and HDDs. For virtual drives that have both SSDs and HDDs, you can choose whether to mix SAS drives and SATA drives on the CacheCade 2.0 SSD Read Caching software devices.
You can choose whether to allow a virtual drive to consist of both CacheCade 2.0 SSD Read Caching software devices and HDDs. For a virtual drive that consists of CacheCade 2.0 SSD Read Caching software only, you can choose whether to allow SAS CacheCade 2.0 SSD Read Caching software drives and SATA CacheCade 2.0 SSD Read Caching software drives in that virtual drive. For virtual drives that have both CacheCade 2.0 SSD Read Caching software and HDDs, you can choose whether to mix SAS and SATA HDD drives with SAS and SATA CacheCade 2.0 SSD Read Caching software devices in various combinations.
Tab le 1 lists the valid drive mix configurations you can use when you create virtual
drives and allow HDD and CacheCade 2.0 SSD Read Caching software mixing. The valid drive mix configurations are based on manufacturer settings.
Table 1: Valid Drive Mix Configurations
# Valid Drive Mix Configurations
1. SAS HDD with SAS SSD (SAS-only configuration)
2. SATA HDD with SATA CacheCade 2.0 SSD Read Caching software (SATA-only configuration)
3. SAS HDD with a mix of SAS and SATA CacheCade 2.0 SSD Read Caching software (a SATA HDD cannot be added)
4. SATA HDD with a mix of SAS and SATA CacheCade 2.0 SSD Read Caching software (a SAS HDD cannot be added)
5. SAS CacheCade 2.0 SSD Read Caching software with a mix of SAS and SATA HDD (a SATA CacheCade 2.0 SSD Read Caching software cannot be added)
6. SATA CacheCade 2.0 SSD Read Caching software with a mix of SAS and SATA HDD (a SAS CacheCade 2.0 SSD Read Caching software cannot be added)
7. A mix of SAS and SATA HDD with a mix of SAS and SATA CacheCade 2.0 SSD Read Caching software
8. A CacheCade 2.0 SSD Read Caching software device cannot be added to an HDD virtual drive, but a SAS/SATA HDD mix is allowed.
NOTE: Only one of the valid configurations listed in Table 1 is allowed based on your controller card manufacturing settings.
NOTE: The valid drive mix also applies to hot spares. For hot spare information, see Section 2.4.12, Hot Spares, on page 28.
1.8 Technical Support
For assistance with installing, configuring, or running your MegaRAID 6Gb/s SAS RAID controllers, contact an LSI Technical Support representative.
Click the following link to access the LSI Technical Support page for storage and board support:
http://www.lsi.com/support/storage/tech_support/index.html
From this page, you can send an e-mail or call a Technical Support representative, or submit a new service request and view its status.
E-mail:
http://www.lsi.com/support/support_form.html
Phone Support:
http://www.lsi.com/support/storage/phone_tech_support/index.html
1-800-633-4545 (North America)
00-800-5745-6442 (International)
Chapter 2: Introduction to RAID
This chapter describes Redundant Array of Independent Disks (RAID), RAID functions and benefits, RAID components, RAID levels, and configuration strategies. In addition, it defines the RAID availability concept, and offers tips for configuration planning.
2.1 RAID Description
RAID is an array, or group, of multiple independent physical drives that provide high performance and fault tolerance. A RAID drive group improves I/O (input/output) performance and reliability. The RAID drive group appears to the host computer as a single storage unit or as multiple virtual units. I/O is expedited because several drives can be accessed simultaneously.

2.2 RAID Benefits
RAID drive groups improve data storage reliability and fault tolerance compared to single-drive storage systems. Data loss resulting from a drive failure can be prevented by reconstructing missing data from the remaining drives. RAID has gained popularity because it improves I/O performance and increases storage subsystem reliability.

2.3 RAID Functions
Virtual drives are drive groups or spanned drive groups that are available to the operating system. The storage space in a virtual drive is spread across all of the drives in the drive group.
Your drives must be organized into virtual drives in a drive group, and they must be able to support the RAID level that you select. Some common RAID functions follow:
Creating hot spare drives
Configuring drive groups and virtual drives
Initializing one or more virtual drives
Accessing controllers, virtual drives, and drives individually
Rebuilding failed drives
Verifying that the redundancy data in virtual drives using RAID level 1, 5, 6, 10, 50, or 60 is correct
Reconstructing virtual drives after changing RAID levels or adding a drive to a drive group
Selecting a host controller on which to work

2.4 Components and Features

RAID levels describe a system for ensuring the availability and redundancy of data stored on large disk subsystems. See Section 2.5, RAID Levels, for detailed information about RAID levels. The following subsections describe the components of RAID drive groups and RAID levels.

2.4.1 Drive Group
A drive group is a group of physical drives. These drives are managed in partitions known as virtual drives.

2.4.2 Virtual Drive
A virtual drive is a partition in a drive group that is made up of contiguous data segments on the drives. A virtual drive can consist of an entire drive group, more than one entire drive group, a part of a drive group, parts of more than one drive group, or a combination of any two of these conditions.

2.4.3 Fault Tolerance
Fault tolerance is the capability of the subsystem to undergo a drive failure or failures without compromising data integrity and processing capability. The RAID controller provides this support through redundant drive groups in RAID levels 1, 5, 6, 10, 50, and 60. The system can still work properly even with a drive failure in a drive group, though performance can be degraded to some extent.
In a span of RAID 1 drive groups, each RAID 1 drive group has two drives and can tolerate one drive failure. The span of RAID 1 drive groups can contain up to 32 drives and tolerate up to 16 drive failures, one in each drive group. A RAID 5 drive group can tolerate one drive failure in each RAID 5 drive group. A RAID 6 drive group can tolerate up to two drive failures.
Each spanned RAID 10 virtual drive can tolerate multiple drive failures, as long as each failure is in a separate drive group. A RAID 50 virtual drive can tolerate two drive failures, as long as each failure is in a separate drive group. RAID 60 drive groups can tolerate up to two drive failures in each drive group.
NOTE: RAID level 0 is not fault tolerant. If a drive in a RAID 0 drive group fails, the entire virtual drive (all drives associated with the virtual drive) fails.
Fault tolerance is often associated with system availability because it allows the system to remain available during drive failures. However, it is also important for the system to remain available while the problem is being repaired.
A hot spare is an unused drive that, in case of a disk failure in a redundant RAID drive group, can be used to rebuild the data and re-establish redundancy. After the hot spare is automatically moved into the RAID drive group, the data is automatically rebuilt on the hot spare drive. The RAID drive group continues to handle requests while the rebuild occurs.
Auto-rebuild allows a failed drive to be replaced and the data automatically rebuilt by “hot-swapping” the drive in the same drive bay. The RAID drive group continues to handle requests while the rebuild occurs.
2.4.3.1 Multipathing
The firmware provides support for detecting and using multiple paths from the RAID controllers to the SAS devices that are in enclosures. Devices connected to enclosures have multiple paths to them. With redundant paths to the same port of a device, if one path fails, another path can be used to communicate between the controller and the device. Using multiple paths with load balancing, instead of a single path, can increase reliability through redundancy.
Applications show the enclosures and the drives connected to the enclosures. The firmware dynamically recognizes new enclosures added to a configuration along with their contents (new drives). In addition, the firmware dynamically adds the enclosure and its contents to the management entity currently in use.
Multipathing provides the following features:
Support for failover, in the event of path failure
Auto-discovery of new or restored paths while the system is online, and reversion to the system load-balancing policy
Measurable bandwidth improvement to the multi-path device
Support for changing the load-balancing path while the system is online
The firmware determines whether enclosure modules (ESMs) are part of the same enclosure. When a new enclosure module is added (allowing multi-path) or removed (going single path), an Asynchronous Event Notification (AEN) is generated. AENs about drives contain correct information about the enclosure, when the drives are connected by multiple paths. The enclosure module detects partner ESMs and issues events appropriately.
In a system with two ESMs, you can replace one of the ESMs without affecting the virtual drive availability. For example, the controller can run heavy I/Os, and when you replace one of the ESMs, I/Os should not stop. The controller uses different paths to balance the load on the entire system.
In the MegaRAID Storage Manager utility, when multiple paths are available to a drive, the drive information shows only one enclosure. The utility shows that a redundant path is available to a drive. All drives with a redundant path display this information. The firmware supports online replacement of enclosure modules.

2.4.4 Consistency Check
The consistency check operation verifies correctness of the data in virtual drives that use RAID levels 1, 5, 6, 10, 50, and 60. (RAID 0 does not provide data redundancy.) For example, in a system with parity, checking consistency means computing the data on one drive and comparing the results to the contents of the parity drive.
NOTE: It is recommended that you perform a consistency check at least once a month.
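As a rough illustration of the comparison described above, the Python sketch below checks one stripe of a parity-protected drive group by XOR-ing the data strips and comparing the result with the stored parity strip. It is a conceptual model only (real controllers perform this on on-disk blocks in firmware), and the function and variable names are hypothetical.

```python
def xor_strips(strips):
    """XOR byte strings of equal length together."""
    result = bytearray(len(strips[0]))
    for strip in strips:
        for i, b in enumerate(strip):
            result[i] ^= b
    return bytes(result)

def stripe_is_consistent(data_strips, parity_strip):
    """A stripe is consistent when the XOR of its data strips equals the parity strip."""
    return xor_strips(data_strips) == parity_strip

# Example: three data strips and their parity.
data = [b"\x11\x22", b"\x0f\x0f", b"\xa0\x00"]
parity = xor_strips(data)
print(stripe_is_consistent(data, parity))         # True
print(stripe_is_consistent(data, b"\x00\x00"))    # False: inconsistency found
```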

2.4.5 Copyback
The copyback feature allows you to copy data from a source drive of a virtual drive to a destination drive that is not a part of the virtual drive. Copyback is often used to create or restore a specific physical configuration for a drive group (for example, a specific arrangement of drive group members on the device I/O buses). Copyback can be run automatically or manually.
Typically, when a drive fails or is expected to fail, the data is rebuilt on a hot spare. The failed drive is replaced with a new disk. Then the data is copied from the hot spare to the new drive, and the hot spare reverts from a rebuild drive to its original hot spare status. The copyback operation runs as a background activity, and the virtual drive is still available online to the host.
Copyback is also initiated when the first Self-Monitoring Analysis and Reporting Technology (SMART) error occurs on a drive that is part of a virtual drive. The destination drive is a hot spare that qualifies as a rebuild drive. The drive with the SMART error is marked as “failed” only after the successful completion of the copyback. This situation avoids putting the drive group in Degraded status.
NOTE: During a copyback operation, if the drive group involved in the copyback is deleted because of a virtual drive deletion, the destination drive reverts to an Unconfigured Good state or hot spare state.
Order of Precedence.
In the following scenarios, rebuild takes precedence over the copyback operation:
If a copyback operation is already taking place to a hot spare drive, and any virtual drive on the controller degrades, the copyback operation aborts, and a rebuild starts. The rebuild changes the virtual drive to the Optimal state.
The rebuild operation takes precedence over the copyback operation when the conditions exist to start both operations. For example:
— The hot spare is not configured (or is unavailable) in the system.
— Two drives (both members of virtual drives) exist, with one drive exceeding the SMART error threshold, and the other failed.
— If you add a hot spare (assume a global hot spare) during a copyback operation, the copyback is aborted, and the rebuild operation starts on the hot spare.
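The precedence rules above can be paraphrased as a small decision helper. The Python sketch below is illustrative only; the argument names and the behavior it models are simplifications of the scenarios listed in this section, not firmware logic.

```python
def next_background_operation(copyback_in_progress,
                              any_virtual_drive_degraded,
                              hot_spare_available):
    """Pick rebuild or copyback according to the order of precedence above.

    A rebuild is preferred whenever a virtual drive is degraded and a hot spare
    is available; an in-progress copyback is aborted in that case.
    """
    if any_virtual_drive_degraded and hot_spare_available:
        if copyback_in_progress:
            print("Aborting copyback; starting rebuild to the hot spare.")
        return "rebuild"
    if copyback_in_progress:
        return "copyback"
    return "idle"

# Example: a virtual drive degrades while a copyback is running, so rebuild wins.
print(next_background_operation(True, True, True))    # rebuild
print(next_background_operation(True, False, True))   # copyback
```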

2.4.6 Background Initialization
Background initialization is a check for media errors on the drives when you create a virtual drive. It is an automatic operation that starts five minutes after you create the virtual drive. This check ensures that striped data segments are the same on all of the drives in the drive group.
Background initialization is similar to a consistency check. The difference between the two is that a background initialization is forced on new virtual drives and a consistency check is not.
New RAID 5 virtual drives and new RAID 6 virtual drives require a minimum number of drives for a background initialization to start. If there are fewer drives, the background initialization does not start and must be started manually. The following numbers of drives are required:
— New RAID 5 virtual drives must have at least five drives for background initialization to start.
— New RAID 6 virtual drives must have at least seven drives for background initialization to start.
The default and recommended background initialization rate is 30 percent. Before you change the rebuild rate, you must stop the background initialization or the rate change will not affect the background initialization rate. After you stop background initialization and change the rebuild rate, the rate change takes effect when you restart background initialization.

2.4.7 Patrol Read
Patrol read involves the review of your system for possible drive errors that could lead to drive failure, followed by action to correct the errors. The goal is to protect data integrity by detecting drive failure before the failure can damage data. The corrective actions depend on the drive group configuration and the type of errors.
Patrol read starts only when the controller is idle for a defined period of time and no other background tasks are active, though it can continue to run during heavy I/O processes.
You can use the MegaRAID Command Tool or the MegaRAID Storage Manager software to select the patrol read options, which you can use to set automatic or manual operation, or disable patrol read. See Section 5.8, Controller Property-Related Options, and Section 9.17, Running a Patrol Read.

2.4.8 Disk Striping
Disk striping allows you to write data across multiple drives instead of just one drive. Disk striping involves partitioning each drive's storage space into stripes that can vary in size from 8 KB to 1024 KB. These stripes are interleaved in a repeated sequential manner. The combined storage space is composed of stripes from each drive. It is recommended that you keep stripe sizes the same across RAID drive groups.
For example, in a four-disk system using only disk striping (used in RAID level 0), segment 1 is written to disk 1, segment 2 is written to disk 2, and so on. Disk striping enhances performance because multiple drives are accessed simultaneously, but disk striping does not provide data redundancy.
Figure 3: Example of Disk Striping (RAID 0)
2.4.8.1 Stripe Width
Stripe width is the number of drives involved in a drive group where striping is implemented. For example, a four-disk drive group with disk striping has a stripe width of four.
2.4.8.2 Stripe Size
The stripe size is the length of the interleaved data segments that the RAID controller writes across multiple drives, not including parity drives. For example, consider a stripe that contains 64 KB of disk space and has 16 KB of data residing on each disk in the stripe. In this case, the stripe size is 64 KB, and the strip size is 16 KB.
2.4.8.3 Strip Size
The strip size is the portion of a stripe that resides on a single drive.
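The relationship between strip size, stripe width, and stripe size (stripe size equals strip size multiplied by the number of data drives) can be shown with a small address-mapping sketch. The Python below is a conceptual illustration only; real controllers perform this mapping in firmware, and the function name is hypothetical.

```python
def map_lba(lba, block_size, strip_size, stripe_width):
    """Map a logical block address to (drive index, byte offset on that drive) for RAID 0.

    strip_size and block_size are in bytes; stripe_width is the number of drives.
    """
    blocks_per_strip = strip_size // block_size
    strip_number = lba // blocks_per_strip          # which strip, counting across the group
    drive = strip_number % stripe_width             # strips rotate across the drives
    strip_on_drive = strip_number // stripe_width   # how far down that drive
    offset = strip_on_drive * strip_size + (lba % blocks_per_strip) * block_size
    return drive, offset

# Example from the text: 16-KB strips on a 4-drive group give a 64-KB stripe.
strip_size, stripe_width, block = 16 * 1024, 4, 512
print("stripe size:", strip_size * stripe_width, "bytes")    # 65536
for lba in (0, 32, 64, 96, 128):                             # 32 blocks = one strip
    print(lba, "->", map_lba(lba, block, strip_size, stripe_width))
```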

2.4.9 Disk Mirroring
With mirroring (used in RAID 1 and RAID 10), data written to one drive is simultaneously written to another drive. The primary advantage of disk mirroring is that it provides 100 percent data redundancy. Because the contents of the disk are completely written to a second disk, data is not lost if one disk fails. In addition, both drives contain the same data at all times, so either disk can act as the operational disk. If one disk fails, the contents of the other disk can be used to run the system and reconstruct the failed disk.
Disk mirroring provides 100 percent redundancy, but it is expensive because each drive in the system must be duplicated. Figure 4 shows an example of disk mirroring.
Figure 4: Example of Disk Mirroring (RAID 1)

2.4.10 Parity
Parity generates a set of redundancy data from two or more parent data sets. The redundancy data can be used to reconstruct one of the parent data sets in the event of a drive failure. Parity data does not fully duplicate the parent data sets, but parity generation can slow the write process. In RAID, this method is applied to entire drives or stripes across all of the drives in a drive group. The types of parity are described in Table 2.
Table 2: Types of Parity
Parity Type Description
Dedicated The parity data on two or more drives is stored on an additional disk.
Distributed The parity data is distributed across more than one drive in the system.
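Either parity type allows one missing data set to be rebuilt from the survivors. The Python sketch below demonstrates the principle with simple XOR parity, the scheme used by the single-parity RAID levels in this guide; it is a conceptual illustration, not controller code, and the names are hypothetical.

```python
def make_parity(data_strips):
    """Generate an XOR parity strip from equal-length data strips."""
    parity = bytearray(len(data_strips[0]))
    for strip in data_strips:
        for i, b in enumerate(strip):
            parity[i] ^= b
    return bytes(parity)

def rebuild_missing(surviving_strips, parity):
    """Reconstruct the one missing data strip by XOR-ing parity with the survivors."""
    return make_parity(surviving_strips + [parity])

data = [b"RAID", b"PAR1", b"DEMO"]
parity = make_parity(data)

# Pretend the second drive failed; rebuild its strip from the others plus parity.
rebuilt = rebuild_missing([data[0], data[2]], parity)
print(rebuilt == data[1])   # True
```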
RAID 5 combines distributed parity with disk striping. If a single drive fails, it can be rebuilt from the parity and the data on the remaining drives. An example of a RAID 5 drive group is shown in Figure 5. RAID 5 uses parity to provide redundancy for one drive failure without duplicating the contents of entire drives. RAID 6 also uses distributed parity and disk striping, but adds a second set of parity data so that it can survive up to two drive failures.
Figure 5: Example of Distributed Parity (RAID 5)

2.4.11 Disk Spanning
Disk spanning allows multiple drives to function like one big drive. Spanning overcomes lack of disk space and simplifies storage management by combining existing resources or adding relatively inexpensive resources. For example, four 20-GB drives can be combined to appear to the operating system as a single 80-GB drive.
Spanning alone does not provide reliability or performance enhancements. Spanned virtual drives must have the same stripe size and must be contiguous. In Figure 6, RAID 1 drive groups are turned into a RAID 10 drive group.
NOTE: Make sure that the spans are in different backplanes, so that if one span fails, you do not lose the whole drive group.
Figure 6: Example of Disk Spanning
Spanning two contiguous RAID 0 virtual drives does not produce a new RAID level or add fault tolerance. It does increase the capacity of the virtual drive and improves performance by doubling the number of spindles.
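Plain spanning simply concatenates the member virtual drives, so a logical block falls into whichever member holds that address range. The Python sketch below illustrates that split for equal-size members (function name hypothetical); it models the 80-GB concatenation example above, not the striped spanning that RAID 10, RAID 50, and RAID 60 use across drive groups.

```python
def span_lookup(lba, blocks_per_member, member_count):
    """Return (member index, block within that member) for a spanned virtual drive."""
    if lba >= blocks_per_member * member_count:
        raise ValueError("address beyond the spanned capacity")
    return lba // blocks_per_member, lba % blocks_per_member

# Example from the text: four 20-GB members appear as one 80-GB drive.
gb20_blocks = 20 * 1024 * 1024 * 2                 # 20 GB in 512-byte blocks
print(span_lookup(0, gb20_blocks, 4))              # (0, 0) -> first member
print(span_lookup(3 * gb20_blocks + 5, gb20_blocks, 4))   # (3, 5) -> last member
```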
2.4.11.1 Spanning for RAID 00, RAID 10, RAID 50, and RAID 60
Tab le 3 describes how to configure RAID 00, RAID 10, RAID 50, and RAID 60 by
spanning. The virtual drives must have the same stripe size and the maximum number of spans is 8. The full drive capacity is used when you span virtual drives; you cannot specify a smaller drive capacity.
See Chapter8, Configuration for detailed procedures for configuring drive groups and virtual drives, and spanning the drives.
Table 3: Spanning for RAID 00, RAID 10, RAID 50, and RAID 60
Level Description
00 Configure RAID 00 by spanning two contiguous RAID 0 virtual drives, up to the maximum number of supported devices for the controller.
10 Configure RAID 10 by spanning two contiguous RAID 1 virtual drives, up to the maximum number of supported devices for the controller. RAID 10 supports a maximum of 8 spans. You must use an even number of drives in each RAID virtual drive in the span. The RAID 1 virtual drives must have the same stripe size.
50 Configure RAID 50 by spanning two contiguous RAID 5 virtual drives. The RAID 5 virtual drives must have the same stripe size.
60 Configure RAID 60 by spanning two contiguous RAID 6 virtual drives. The RAID 6 virtual drives must have the same stripe size.
NOTE: In a spanned virtual drive (R10, R50, R60) the span numbering starts from Span 0, Span 1, Span 2, and so on.

2.4.12 Hot Spares
A hot spare is an extra, unused drive that is part of the disk subsystem. It is usually in Standby mode, ready for service if a drive fails. Hot spares permit you to replace failed drives without system shutdown or user intervention. MegaRAID SAS RAID controllers can implement automatic and transparent rebuilds of failed drives using hot spare drives, providing a high degree of fault tolerance and zero downtime.
The RAID management software allows you to specify drives as hot spares. When a hot spare is needed, the RAID controller assigns the hot spare that has a capacity closest to and at least as great as that of the failed drive to take the place of the failed drive. The failed drive is removed from the virtual drive and marked ready awaiting removal after the rebuild to a hot spare begins. You can make hot spares of the drives that are not in a RAID virtual drive.
You can use the RAID management software to designate a hot spare as having enclosure affinity, meaning that if drive failures are present on a split backplane configuration, the hot spare is used first on the backplane side in which it resides. A hot spare with enclosure affinity attempts to rebuild any failed drives on the backplane in which it resides before rebuilding any other drives on other backplanes.
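The selection rule described above, the smallest hot spare that is still at least as large as the failed drive, with dedicated spares and enclosure affinity considered first, can be sketched as follows. This is a simplified Python model; the Spare fields and the exact ordering are illustrative assumptions, not the firmware's actual algorithm.

```python
from collections import namedtuple

# Hypothetical description of a candidate spare drive.
Spare = namedtuple("Spare", ["name", "capacity_gb", "dedicated", "backplane"])

def pick_hot_spare(spares, failed_capacity_gb, failed_backplane):
    """Pick a spare that is large enough, preferring dedicated spares,
    then enclosure affinity, then the closest (smallest sufficient) capacity."""
    eligible = [s for s in spares if s.capacity_gb >= failed_capacity_gb]
    if not eligible:
        return None
    return min(
        eligible,
        key=lambda s: (not s.dedicated,                    # dedicated spares first
                       s.backplane != failed_backplane,    # same backplane next
                       s.capacity_gb),                     # then closest capacity
    )

spares = [
    Spare("slot4", 600, dedicated=False, backplane=0),
    Spare("slot9", 500, dedicated=False, backplane=1),
    Spare("slot2", 1000, dedicated=True, backplane=1),
]
print(pick_hot_spare(spares, 500, failed_backplane=0).name)   # slot2 (dedicated wins)
```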
NOTE: If a rebuild to a hot spare fails for any reason, the hot spare drive is marked as failed. If the source drive fails, both the source drive and the hot spare drive are marked as failed.
The hot spare can be of two types:
Global hot spare
Dedicated hot spare
2.4.12.1 Global Hot Spare
Use a global hot spare drive to replace any failed drive in a redundant drive group as long as its capacity is equal to or larger than the coerced capacity of the failed drive. A global hot spare defined on any channel should be available to replace a failed drive on both channels.
2.4.12.2 Dedicated Hot Spare
Use a dedicated hot spare to replace a failed drive only in a selected drive group. One or more drives can be designated as members of a spare drive pool. The most suitable drive from the pool is selected for failover. A dedicated hot spare is used before one from the global hot spare pool.
Hot spare drives can be located on any RAID channel. Standby hot spares (not being used in a RAID drive group) are polled every 60 seconds at a minimum, and their status is made available in the drive group management software. RAID controllers offer the ability to rebuild with a disk that is in the system but is not initially set to be a hot spare.
Observe the following parameters when using hot spares:
Hot spares are used only in drive groups with redundancy: RAID levels 1, 5, 6, 10, 50, and 60.
A hot spare connected to a specific RAID controller can be used to rebuild a drive that is connected only to the same controller.
You must assign the hot spare to one or more drives through the controller BIOS or use drive group management software to place it in the hot spare pool.
A hot spare must have free space equal to or greater than the drive it replaces. For example, to replace a 500-GB drive, the hot spare must be 500 GB or larger.

2.4.13 Disk Rebuilds
When a drive in a RAID drive group fails, you can rebuild the drive by re-creating the data that was stored on the drive before it failed. The RAID controller re-creates the data using the data stored on the other drives in the drive group. Rebuilding can be done only in drive groups with data redundancy, which includes RAID 1, 5, 6, 10, 50, and 60 drive groups.
The RAID controller uses hot spares to rebuild failed drives automatically and transparently, at user-defined rebuild rates. If a hot spare is available, the rebuild can start automatically when a drive fails. If a hot spare is not available, the failed drive must be replaced with a new drive so that the data on the failed drive can be rebuilt.
The failed drive is removed from the virtual drive and marked ready awaiting removal when the rebuild to a hot spare begins. If the system goes down during a rebuild, the RAID controller automatically resumes the rebuild after the system reboots.
NOTE: When the rebuild to a hot spare begins, the failed drive is often removed from the virtual drive before management applications detect the failed drive. When this occurs, the events logs show the drive rebuilding to the hot spare without showing the failed drive. The formerly failed drive will be marked as “ready” after a rebuild begins to a hot spare.
NOTE: If a source drive fails during a rebuild to a hot spare, the rebuild fails, and the failed source drive is marked as offline. In addition, the rebuilding hot spare drive is changed back to a hot spare. After a rebuild fails because of a source drive failure, the dedicated hot spare is still dedicated and assigned to the correct drive group, and the global hot spare is still global.
An automatic drive rebuild will not start if you replace a drive during a RAID-level migration. The rebuild must be started manually after the expansion or migration procedure is complete. (RAID-level migration changes a virtual drive from one RAID level to another.)

2.4.14 Rebuild Rate
The rebuild rate is the percentage of the compute cycles dedicated to rebuilding failed drives. A rebuild rate of 100 percent means that the system gives priority to rebuilding the failed drives.
The rebuild rate can be configured between 0 percent and 100 percent. At 0 percent, the rebuild is done only if the system is not doing anything else. At 100 percent, the rebuild has a higher priority than any other system activity. Using 0 percent or 100 percent is not recommended. The default rebuild rate is 30 percent.

2.4.15 Hot Swap
A hot swap is the manual replacement of a defective drive unit while the computer is still running. When a new drive has been installed, a rebuild occurs automatically if both of the following situations occur:
The newly inserted drive is the same capacity as or larger than the failed drive.
The newly inserted drive is placed in the same drive bay as the failed drive it is replacing.
The RAID controller can be configured to detect the new drives and rebuild the contents of the drive automatically.
2.4.16 Drive States
A drive state is a property indicating the status of the drive. The drive states are described in Table 4.

Table 4: Drive States

State Description
Online A drive that can be accessed by the RAID controller and is part of the virtual drive.
Unconfigured Good A drive that is functioning normally but is not configured as a part of a virtual drive or as a hot spare.
Hot Spare A drive that is powered up and ready for use as a spare in case an online drive fails.
Failed A drive that was originally configured as Online or Hot Spare, but on which the firmware detects an unrecoverable error.
Rebuild A drive to which data is being written to restore full redundancy for a virtual drive.
Unconfigured Bad A drive on which the firmware detects an unrecoverable error; the drive was Unconfigured Good, or the drive could not be initialized.
Missing A drive that was Online but which has been removed from its location.
Offline A drive that is part of a virtual drive but which has invalid data as far as the RAID configuration is concerned.
2.4.17 Virtual Drive States
The virtual drive states are described in Table 5.

Table 5: Virtual Drive States

State Description
Optimal The virtual drive operating condition is good. All configured drives are online.
Degraded The virtual drive operating condition is not optimal. One of the configured drives has failed or is offline.
Partial Degraded The operating condition in a RAID 6 virtual drive is not optimal. One of the configured drives has failed or is offline. RAID 6 can tolerate up to two drive failures.
Failed The virtual drive has failed.
Offline The virtual drive is not available to the RAID controller.

2.4.18 Beep Codes
An alarm sounds on the MegaRAID controller when a virtual drive changes from an optimal state to another state, when a hot spare rebuilds, and for test purposes.
Table 6: Beep Codes, Events, and Virtual Drive States
Event | Virtual Drive State | Beep Code
RAID 0 virtual drive loses one or more drives | Offline | 3 seconds on and 1 second off
RAID 1 loses a mirror drive | Degraded | 1 second on and 1 second off
RAID 1 loses both drives | Offline | 3 seconds on and 1 second off
RAID 5 loses one drive | Degraded | 1 second on and 1 second off
RAID 5 loses two or more drives | Offline | 3 seconds on and 1 second off
RAID 6 loses one drive | Partially Degraded | 1 second on and 1 second off
RAID 6 loses two drives | Degraded | 1 second on and 1 second off
RAID 6 loses more than two drives | Offline | 3 seconds on and 1 second off
A hot spare completes the rebuild process and is brought into a drive group | N/A | 1 second on and 3 seconds off

2.4.19 Enclosure Management
Enclosure management is the intelligent monitoring of the disk subsystem by software, hardware, or both. The disk subsystem can be part of the host computer or can reside in an external disk enclosure. Enclosure management helps you stay informed of events in the disk subsystem, such as a drive or power supply failure. Enclosure management increases the fault tolerance of the disk subsystem.
2.5 RAID Levels
The RAID controller supports RAID levels 0, 00, 1, 5, 6, 10, 50, and 60. The supported RAID levels are summarized in the following section. In addition, the RAID controller supports independent drives (configured as RAID 0 and RAID 00). The following sections describe the RAID levels in detail.

2.5.1 Summary of RAID Levels
RAID 0 uses striping to provide high data throughput, especially for large files in an environment that does not require fault tolerance.
RAID 1 uses mirroring so that data written to one drive is simultaneously written to another drive. RAID 1 is good for small databases or other applications that require small capacity but complete data redundancy.
RAID 5 uses disk striping and parity data across all drives (distributed parity) to provide high data throughput, especially for small random access.
RAID 6 uses distributed parity, with two independent parity blocks per stripe, and disk striping. A RAID 6 virtual drive can survive the loss of any two drives without losing data. A RAID 6 drive group, which requires a minimum of three drives, is similar to a RAID 5 drive group. Blocks of data and parity information are written across all drives. The parity information is used to recover the data if one or two drives fail in the drive group.
A RAID 00 drive group is a spanned drive group that creates a striped set from a series of RAID 0 drive groups.
RAID 10, a combination of RAID 0 and RAID 1, consists of striped data across mirrored spans. A RAID 10 drive group is a spanned drive group that creates a striped set from a series of mirrored drives. RAID 10 allows a maximum of 8 spans. You must use an even number of drives in each RAID virtual drive in the span. The RAID 1 virtual drives must have the same stripe size. RAID 10 provides high data throughput and complete data redundancy but uses a larger number of spans.
RAID 50, a combination of RAID 0 and RAID 5, uses distributed parity and disk striping. A RAID 50 drive group is a spanned drive group in which data is striped across multiple RAID 5 drive groups. RAID 50 works best with data that requires high reliability, high request rates, high data transfers, and medium-to-large capacity.
NOTE: Having virtual drives of different RAID levels, such as RAID 0 and RAID 5, in the same drive group is not allowed. For example, if an existing RAID 5 virtual drive is created out of partial space in an array, the next virtual drive in the array has to be RAID 5 only.
RAID 60, a combination of RAID 0 and RAID 6, uses distributed parity, with two independent parity blocks per stripe in each RAID set, and disk striping. A RAID 60 virtual drive can survive the loss of two drives in each of the RAID 6 sets without losing data. RAID 60 works best with data that requires high reliability, high request rates, high data transfers, and medium-to-large capacity.

2.5.2 Selecting a RAID Level
To ensure the best performance, you should select the optimal RAID level when you create a system drive. The optimal RAID level for your drive group depends on a number of factors:
The number of drives in the drive group
The capacity of the drives in the drive group
The need for data redundancy
The disk performance requirements
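As a rough aid for weighing those factors, the Python sketch below encodes a few of the trade-offs discussed in this chapter (redundancy need, drive count, and write-heavy versus read-heavy workloads). It is only an illustrative heuristic under those assumptions, not a recommendation engine shipped with the MegaRAID software, and the function name is hypothetical.

```python
def suggest_raid_level(drive_count, need_redundancy, write_heavy):
    """Very rough RAID-level heuristic based on the guidelines in this chapter."""
    if not need_redundancy:
        return "RAID 0"        # best performance, no fault tolerance
    if drive_count < 2:
        raise ValueError("redundant RAID levels need at least two drives")
    if drive_count == 2:
        return "RAID 1"        # mirroring: complete redundancy, small capacity
    if drive_count >= 4 and write_heavy:
        return "RAID 10"       # redundancy without the parity write penalty
    return "RAID 5"            # distributed parity, lowest loss of capacity

# Purely illustrative calls:
print(suggest_raid_level(6, need_redundancy=True, write_heavy=False))   # RAID 5
print(suggest_raid_level(8, need_redundancy=True, write_heavy=True))    # RAID 10
```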
2.5.3 RAID 0
RAID 0 provides disk striping across all drives in the RAID drive group. RAID 0 does not provide any data redundancy, but RAID 0 offers the best performance of any RAID level. RAID 0 breaks up data into smaller segments, and then stripes the data segments across each drive in the drive group. The size of each data segment is determined by the stripe size. RAID 0 offers high bandwidth.
NOTE: RAID level 0 is not fault tolerant. If a drive in a RAID 0 drive group fails, the entire virtual drive (all drives associated with the virtual drive) fails.
By breaking up a large file into smaller segments, the RAID controller can use both SAS drives and SATA drives to read or write the file faster. RAID 0 involves no parity calculations to complicate the write operation. This situation makes RAID 0 ideal for applications that require high bandwidth but do not require fault tolerance. Table 7 provides an overview of RAID 0. Figure 7 provides a graphic example of a RAID 0 drive group.
Table 7: RAID 0 Overview
Uses Provides high data throughput, especially for large files. Any environment that does not require fault tolerance.
Strong points Provides increased data throughput for large files. No capacity loss penalty for parity.
Weak points Does not provide fault tolerance or high bandwidth. All data is lost if any drive fails.
Drives 1 to 32
Figure 7: RAID 0 Drive Group Example with Two Drives
2.5.4 RAID 1
In RAID 1, the RAID controller duplicates all data from one drive to a second drive in the drive group. RAID 1 supports an even number of drives from 2 through 32 in a single span. RAID 1 provides complete data redundancy, but at the cost of doubling the required data storage capacity. Table 8 provides an overview of RAID 1. Figure 8 provides a graphic example of a RAID 1 drive group.
Table 8: RAID 1 Overview
Uses Use RAID 1 for small databases or any other environment that requires fault tolerance but small capacity.
Strong points Provides complete data redundancy. RAID 1 is ideal for any application that requires fault tolerance and minimal capacity.
Weak points Requires twice as many drives. Performance is impaired during drive rebuilds.
Drives 2 through 32 (must be an even number of drives)
Figure 8: RAID 1 Drive Group
2.5.5 RAID 5
RAID 5 includes disk striping at the block level and parity. Parity is the data's property of being odd or even, and parity checking is used to detect errors in the data. In RAID 5, the parity information is written to all drives. RAID 5 is best suited for networks that perform a lot of small input/output (I/O) transactions simultaneously.
RAID 5 addresses the bottleneck issue for random I/O operations. Because each drive contains both data and parity, numerous writes can take place concurrently.
Tab le 9 provides an overview of RAID 5. Figure9 provides a graphic example of a RAID 5
drive group.
Table 9: RAID 5 Overview
Uses Provides high data throughput, especially for large files. Use RAID 5 for transaction processing applications because each drive can read and write independently. If a drive fails, the RAID controller uses the parity drive to re-create all missing information. Use also for office automation and online customer service that requires fault tolerance. Use for any application that has high read request rates but low write request rates.
Strong points Provides data redundancy, high read rates, and good performance in most environments. Provides redundancy with the lowest loss of capacity.
Weak points Not well suited to tasks requiring a lot of writes. Suffers more impact if no cache is used (clustering). Drive performance is reduced if a drive is being rebuilt. Environments with few processes do not perform as well because the RAID overhead is not offset by the performance gains in handling simultaneous processes.
Number of Drives in this RAID Level 3 through 32
Figure 9: RAID 5 Drive Group with Six Drives
2.5.6 RAID 6
RAID 6 is similar to RAID 5 (disk striping and parity), except that instead of one parity block per stripe, there are two. With two independent parity blocks, RAID 6 can survive the loss of any two drives in a virtual drive without losing data. RAID 6 provides a high level of data protection through the use of a second parity block in each stripe. Use RAID 6 for data that requires a very high level of protection from loss.
In the case of a failure of one drive or two drives in a virtual drive, the RAID controller uses the parity blocks to re-create all of the missing information. If two drives in a RAID 6 virtual drive fail, two drive rebuilds are required, one for each drive. These rebuilds do not occur at the same time. The controller rebuilds one failed drive, and then the other failed drive.
Tab le 1 0 provides an overview of a RAID 6 drive group.
Table 10: RAID 6 Overview
Uses Use for office automation and online customer service that requires fault tolerance. Use for any application that has high read request rates but low write request rates.
Strong points Provides data redundancy, high read rates, and good performance in most environments. Can survive the loss of two drives or the loss of a drive while another drive is being rebuilt. Provides the highest level of protection against drive failures of all of the RAID levels. Read performance is similar to that of RAID 5.
Weak points Not well suited to tasks requiring a lot of writes. A RAID 6 virtual drive has to generate two sets of parity data for each write operation, which results in a significant decrease in performance during writes. Drive performance is reduced during a drive rebuild. Environments with few processes do not perform as well because the RAID overhead is not offset by the performance gains in handling simultaneous processes. RAID 6 costs more because of the extra capacity required by using two parity blocks per stripe.
Drives 3 through 32
Figure10 shows a RAID 6 data layout. The second set of parity drives is denoted by Q.
The P drives follow the RAID 5 parity scheme.
Figure 10: Example of Distributed Parity across Two Blocks in a Stripe (RAID 6)
2.5.7 RAID 00
A RAID 00 drive group is a spanned drive group that creates a striped set from a series of RAID 0 drive groups. RAID 00 does not provide any data redundancy, but, along with RAID 0, does offer the best performance of any RAID level. RAID 00 breaks up data into smaller segments and then stripes the data segments across each drive in the drive groups. The size of each data segment is determined by the stripe size. RAID 00 offers high bandwidth.
NOTE: RAID level 00 is not fault tolerant. If a drive in a RAID 0 drive group fails, the entire virtual drive (all drives associated with the virtual drive) fails.
By breaking up a large file into smaller segments, the RAID controller can use both SAS drives and SATA drives to read or write the file faster. RAID 00 involves no parity calculations to complicate the write operation. This situation makes RAID 00 ideal for applications that require high bandwidth but do not require fault tolerance. Table 11 provides an overview of RAID 00. Figure 11 provides a graphic example of a RAID 00 drive group.
Table 11: RAID 00 Overview
Uses Provides high data throughput, especially for large files. Any environment that does not require fault tolerance.
Strong points Provides increased data throughput for large files. No capacity loss penalty for parity.
Weak points Does not provide fault tolerance or high bandwidth. All data is lost if any drive fails.
Drives 2 through 256
Figure 11: RAID 00 Drive Group Example with Two Drives
2.5.8 RAID 10
RAID 10 is a combination of RAID 0 and RAID 1, and it consists of stripes across mirrored drives. RAID 10 breaks up data into smaller blocks and then mirrors the blocks of data to each RAID 1 drive group. The first RAID 1 drive in each drive group then duplicates its data to the second drive. The size of each block is determined by the stripe size parameter, which is set during the creation of the RAID set. The RAID 1 virtual drives must have the same stripe size.
Spanning is used because one virtual drive is defined across more than one drive group. Virtual drives defined across multiple RAID 1 level drive groups are referred to as RAID level 10, (1+0). Data is striped across drive groups to increase performance by enabling access to multiple drive groups simultaneously.
Each spanned RAID 10 virtual drive can tolerate multiple drive failures, as long as each failure is in a separate drive group. If drive failures occur, less than total drive capacity is available.
Configure RAID 10 by spanning two contiguous RAID 1 virtual drives, up to the maximum number of supported devices for the controller. RAID 10 supports a maximum of 8 spans, with a maximum of 32 drives per span. You must use an even number of drives in each RAID 10 virtual drive in the span.
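Because each RAID 1 span protects its own mirrored pair, a RAID 10 virtual drive survives any combination of failures that leaves at least one working drive in every span. The Python sketch below illustrates that rule; the drive identifiers and the two-drive-per-span layout are hypothetical example data, not output from any MegaRAID utility.

```python
def raid10_survives(failed_drives, spans):
    """Return True if every RAID 1 span still has at least one healthy drive.

    spans is a list of drive-ID pairs; failed_drives is a set of drive IDs.
    """
    return all(any(d not in failed_drives for d in span) for span in spans)

# Four two-drive spans (eight drives total).
spans = [("d0", "d1"), ("d2", "d3"), ("d4", "d5"), ("d6", "d7")]
print(raid10_survives({"d0", "d3", "d5"}, spans))   # True: one failure per span
print(raid10_survives({"d2", "d3"}, spans))         # False: a whole span is lost
```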
NOTE: Other factors, such as the type of controller, can restrict the number of drives supported by RAID 10 virtual drives.
Tab le 1 2 provides an overview of RAID 10.
Table 12: RAID 10 Overview
Uses Appropriate when used with data storage that needs 100 percent redundancy of mirrored drive groups and that also needs the enhanced I/O performance of RAID 0 (striped drive groups). RAID 10 works well for medium-sized databases or any environment that requires a higher degree of fault tolerance and moderate-to-medium capacity.
Strong points Provides both high data transfer rates and complete data redundancy.
Weak points Requires twice as many drives as all other RAID levels except RAID 1.
Drives 4 to 32 in multiples of 4, up to the maximum number of drives supported by the controller (using an even number of drives in each RAID 10 virtual drive in the span).
In Figure12, virtual drive 0 is created by distributing data across four drive groups (drive groups 0 through 3).
Figure 12: RAID 10 Level Virtual Drive
2.5.9 RAID 50
RAID 50 provides the features of both RAID 0 and RAID 5. RAID 50 includes both parity and disk striping across multiple drive groups. RAID 50 is best implemented on two RAID 5 drive groups with data striped across both drive groups.
RAID 50 breaks up data into smaller blocks and then stripes the blocks of data to each RAID 5 disk set. RAID 5 breaks up data into smaller blocks, calculates parity by performing an exclusive-or on the blocks and then writes the blocks of data and parity to each drive in the drive group. The size of each block is determined by the stripe size parameter, which is set during the creation of the RAID set.
RAID level 50 can support up to 8 spans and tolerate up to 8 drive failures, though less than total drive capacity is available. Though multiple drive failures can be tolerated, only one drive failure can be tolerated in each RAID 5 level drive group.
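The capacity and fault-tolerance arithmetic for a spanned parity level follows directly from the description above: each RAID 5 span gives up one drive's worth of capacity to parity and tolerates one failure. The short Python sketch below works that out for a RAID 50 layout; it is an illustrative calculation, not output from any LSI utility, and the function name is hypothetical.

```python
def raid50_summary(spans, drives_per_span, drive_capacity_gb):
    """Usable capacity and failure tolerance for RAID 50.

    Each RAID 5 span loses one drive of capacity to parity and can lose
    at most one drive without data loss.
    """
    usable = spans * (drives_per_span - 1) * drive_capacity_gb
    return {
        "usable_gb": usable,
        "max_tolerated_failures": spans,   # at most one per span
        "failures_per_span": 1,
    }

# Example: two spans of four 300-GB drives each.
print(raid50_summary(spans=2, drives_per_span=4, drive_capacity_gb=300))
# {'usable_gb': 1800, 'max_tolerated_failures': 2, 'failures_per_span': 1}
```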
Tab le 1 3 provides an overview of RAID 50.
Table 13: RAID 50 Overview
Uses Appropriate when used with data that requires high reliability, high request rates, high data transfer, and medium-to-large capacity.
Strong points Provides high data throughput, data redundancy, and very good performance.
Weak points Requires 2 times to 8 times as many parity drives as RAID 5.
Drives 8 spans of RAID 5 drive groups containing 3 to 32 drives each (limited by the maximum number of devices supported by the controller)
Figure 13: RAID 50 Level Virtual Drive

2.5.10 RAID 60
RAID 60 provides the features of both RAID 0 and RAID 6, and includes both parity and disk striping across multiple drive groups. RAID 6 supports two independent parity blocks per stripe. A RAID 60 virtual drive can survive the loss of two drives in each of the RAID 6 sets without losing data. RAID 60 is best implemented on two RAID 6 drive groups with data striped across both drive groups.
RAID 60 breaks up data into smaller blocks and then stripes the blocks of data to each RAID 6 disk set. RAID 6 breaks up data into smaller blocks, calculates parity by performing an exclusive-or on the blocks, and then writes the blocks of data and parity to each drive in the drive group. The size of each block is determined by the stripe size parameter, which is set during the creation of the RAID set.
RAID 60 can support up to 8 spans and tolerate up to 16 drive failures, though less than total drive capacity is available. Two drive failures can be tolerated in each RAID 6 level drive group.
Table 14: RAID 60 Overview
Uses Provides a high level of data protection through the use of a second parity block in each stripe. Use RAID 60 for data that requires a very high level of protection from loss. In the case of a failure of one drive or two drives in a RAID set in a virtual drive, the RAID controller uses the parity blocks to re-create all of the missing information. If two drives in a RAID 6 set in a RAID 60 virtual drive fail, two drive rebuilds are required, one for each drive. These rebuilds can occur at the same time. Use for office automation and online customer service that requires fault tolerance. Use for any application that has high read request rates but low write request rates.
Strong points Provides data redundancy, high read rates, and good performance in most environments. Each RAID 6 set can survive the loss of two drives or the loss of a drive while another drive is being rebuilt. Provides the highest level of protection against drive failures of all of the RAID levels. Read performance is similar to that of RAID 50, though random reads in RAID 60 might be slightly faster because data is spread across at least one more disk in each RAID 6 set.
Weak points Not well suited to tasks requiring a lot of writes. A RAID 60 virtual drive has to generate two sets of parity data for each write operation, which results in a significant decrease in performance during writes. Drive performance is reduced during a drive rebuild. Environments with few processes do not perform as well because the RAID overhead is not offset by the performance gains in handling simultaneous processes. RAID 60 costs more because of the extra capacity required by using two parity blocks per stripe.
Drives A minimum of 8.
Figure14 shows a RAID 60 data layout. The second set of parity drives is denoted by Q.
The P drives follow the RAID 5 parity scheme.
Figure 14: RAID 60 Level Virtual Drive

2.6 RAID Configuration Strategies

The following factors in RAID drive group configuration are most important:
Virtual drive availability (fault tolerance)
Virtual drive performance
Virtual drive capacity
You cannot configure a virtual drive that optimizes all three factors, but it is easy to choose a virtual drive configuration that maximizes one factor at the expense of another factor. For example, RAID 1 (mirroring) provides excellent fault tolerance, but requires a redundant drive.
The following subsections describe how to use the RAID levels to maximize virtual drive availability (fault tolerance), virtual drive performance, and virtual drive capacity.

2.6.1 Maximizing Fault Tolerance
Fault tolerance is achieved through the ability to perform automatic and transparent rebuilds using hot spare drives and hot swaps. A hot spare drive is an unused online available drive that the RAID controller instantly plugs into the system when an active drive fails. After the hot spare is automatically moved into the RAID drive group, the failed drive is automatically rebuilt on the spare drive. The RAID drive group continues to handle requests while the rebuild occurs.
A hot swap is the manual substitution of a replacement unit in a disk subsystem for a defective one, where the substitution can be performed while the subsystem is running (hot-swap drives). Auto-Rebuild in the WebBIOS Configuration Utility allows a failed drive to be replaced and automatically rebuilt by “hot-swapping” the drive in the same drive bay. The RAID drive group continues to handle requests while the rebuild occurs, providing a high degree of fault tolerance and zero downtime.
Table 15: RAID Levels and Fault Tolerance
RAID Level Fault Tolerance
0 Does not provide fault tolerance. All data is lost if any drive fails. Disk striping writes data across multiple drives instead of just one drive. It involves partitioning each drive storage space into stripes that can vary in size. RAID 0 is ideal for applications that require high performance but do not require fault tolerance.
1 Provides complete data redundancy. If one drive fails, the contents of the other drive in the drive group can be used to run the system and reconstruct the failed drive. The primary advantage of disk mirroring is that it provides 100 percent data redundancy. Because the contents of the drive are completely written to a second drive, no data is lost if one of the drives fails. Both drives contain the same data at all times. RAID 1 is ideal for any application that requires fault tolerance and minimal capacity.
5 Combines distributed parity with disk striping. Parity provides redundancy for one drive failure without duplicating the contents of entire drives. If a drive fails, the RAID controller uses the parity data to reconstruct all missing information. In RAID 5, this method is applied to entire drives or stripes across all drives in a drive group. Using distributed parity, RAID 5 offers fault tolerance with limited overhead.
6 Combines distributed parity with disk striping. RAID 6 can sustain two drive failures and still maintain data integrity. Parity provides redundancy for two drive failures without duplicating the contents of entire drives. If a drive fails, the RAID controller uses the parity data to reconstruct all missing information. In RAID 6, this method is applied to entire drives or stripes across all of the drives in a drive group. Using distributed parity, RAID 6 offers fault tolerance with limited overhead.
00 Does not provide fault tolerance. All data in a virtual drive is lost if any drive in that virtual drive fails. Disk striping writes data across multiple drives instead of just one drive. It involves partitioning each drive storage space into stripes that can vary in size. RAID 00 is ideal for applications that require high bandwidth but do not require fault tolerance.
RAID 10: Provides complete data redundancy using striping across spanned RAID 1 drive groups. RAID 10 works well for any environment that requires the 100 percent redundancy offered by mirrored drive groups. RAID 10 can sustain a drive failure in each mirrored drive group and maintain data integrity.

RAID 50: Provides data redundancy using distributed parity across spanned RAID 5 drive groups. RAID 50 includes both parity and disk striping across multiple drives. If a drive fails, the RAID controller uses the parity data to re-create all missing information. RAID 50 can sustain one drive failure per RAID 5 drive group and still maintain data integrity.

RAID 60: Provides data redundancy using distributed parity across spanned RAID 6 drive groups. RAID 60 can sustain two drive failures per RAID 6 drive group and still maintain data integrity. It provides the highest level of protection against drive failures of all of the RAID levels. RAID 60 includes both parity and disk striping across multiple drives. If a drive fails, the RAID controller uses the parity data to re-create all missing information.
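The parity behavior described for RAID 5 and RAID 6 above relies on the exclusive-or operation. The short Python sketch below is purely illustrative (it is not the controller's firmware algorithm); it shows how XOR parity allows a lost block in a stripe to be rebuilt from the surviving blocks.

    def xor_blocks(blocks):
        """XOR equal-length byte blocks together."""
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                result[i] ^= byte
        return bytes(result)

    # One stripe: a data block per data drive, plus the XOR parity block.
    data_blocks = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_blocks(data_blocks)

    # Simulate a failure of the second drive; its block is missing.
    surviving = [data_blocks[0], data_blocks[2], parity]

    # XOR of the surviving data blocks and the parity recovers the lost block.
    assert xor_blocks(surviving) == data_blocks[1]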

2.6.2 Maximizing Performance

A RAID disk subsystem improves I/O performance. The RAID drive group appears to the host computer as a single storage unit or as multiple virtual units. I/O is faster because drives can be accessed simultaneously. Table 16 describes the performance for each RAID level.
Table 16: RAID Levels and Performance

RAID 0: RAID 0 (striping) offers excellent performance. RAID 0 breaks up data into smaller blocks and then writes a block to each drive in the drive group. Disk striping writes data across multiple drives instead of just one drive. It involves partitioning each drive storage space into stripes that can vary in size from 8 KB to 1024 KB. These stripes are interleaved in a repeated sequential manner. Disk striping enhances performance because multiple drives are accessed simultaneously.

RAID 1: With RAID 1 (mirroring), each drive in the system must be duplicated, which requires more time and resources than striping. Performance is impaired during drive rebuilds.

RAID 5: RAID 5 provides high data throughput, especially for large files. Use this RAID level for any application that requires high read request rates, but low write request rates, such as transaction processing applications, because each drive can read and write independently. Because each drive contains both data and parity, numerous writes can take place concurrently. In addition, robust caching algorithms and hardware-based exclusive-or assist make RAID 5 performance exceptional in many different environments. Parity generation can slow the write process, making write performance significantly lower for RAID 5 than for RAID 0 or RAID 1. Drive performance is reduced when a drive is being rebuilt. Clustering can also reduce drive performance. Environments with few processes do not perform as well because the RAID overhead is not offset by the performance gains in handling simultaneous processes.

RAID 6: RAID 6 works best when used with data that requires high reliability, high request rates, and high data transfer. It provides high data throughput, data redundancy, and very good performance. However, RAID 6 is not well suited to tasks requiring a lot of writes. A RAID 6 virtual drive has to generate two sets of parity data for each write operation, which results in a significant decrease in performance during writes. Drive performance is reduced during a drive rebuild. Environments with few processes do not perform as well because the RAID overhead is not offset by the performance gains in handling simultaneous processes.

RAID 00: RAID 00 (striping in a spanned drive group) offers excellent performance. RAID 00 breaks up data into smaller blocks and then writes a block to each drive in the drive groups. Disk striping writes data across multiple drives instead of just one drive. Striping involves partitioning each drive storage space into stripes that can vary in size from 8 KB to 1024 KB. These stripes are interleaved in a repeated sequential manner. Disk striping enhances performance because multiple drives are accessed simultaneously.
RAID 10: RAID 10 works best for data storage that needs the enhanced I/O performance of RAID 0 (striped drive groups), which provides high data transfer rates. Spanning increases the capacity of the virtual drive and improves performance by doubling the number of spindles. The system performance improves as the number of spans increases. (The maximum number of spans is 8.) As the storage space in the spans is filled, the system stripes data over fewer and fewer spans, and RAID performance degrades to that of a RAID 1 or RAID 5 drive group.

RAID 50: RAID 50 works best when used with data that requires high reliability, high request rates, and high data transfer. It provides high data throughput, data redundancy, and very good performance. Spanning increases the capacity of the virtual drive and improves performance by doubling the number of spindles. The system performance improves as the number of spans increases. (The maximum number of spans is 8.) As the storage space in the spans is filled, the system stripes data over fewer and fewer spans, and RAID performance degrades to that of a RAID 1 or RAID 5 drive group.

RAID 60: RAID 60 works best when used with data that requires high reliability, high request rates, and high data transfer. It provides high data throughput, data redundancy, and very good performance. Spanning increases the capacity of the virtual drive and improves performance by doubling the number of spindles. The system performance improves as the number of spans increases. (The maximum number of spans is 8.) As the storage space in the spans is filled, the system stripes data over fewer and fewer spans, and RAID performance degrades to that of a RAID 1 or RAID 6 drive group. RAID 60 is not well suited to tasks requiring a lot of writes. A RAID 60 virtual drive has to generate two sets of parity data for each write operation, which results in a significant decrease in performance during writes. Drive performance is reduced during a drive rebuild. Environments with few processes do not perform as well because the RAID overhead is not offset by the performance gains in handling simultaneous processes.

2.6.3 Maximizing Storage Capacity

Storage capacity is an important factor when selecting a RAID level. There are several variables to consider. Striping alone (RAID 0) requires less storage space than mirrored data (RAID 1) or distributed parity (RAID 5 or RAID 6). RAID 5, which provides redundancy for one drive failure without duplicating the contents of entire drives, requires less space than RAID 1. Table 17 explains the effects of the RAID levels on storage capacity.
Table 17: RAID Levels and Capacity

RAID 0: RAID 0 (striping) involves partitioning each drive storage space into stripes that can vary in size. The combined storage space is composed of stripes from each drive. RAID 0 provides maximum storage capacity for a given set of drives. The usable capacity of a RAID 0 array is equal to the number of drives in the array times the capacity of the smallest drive in the array.

RAID 1: With RAID 1 (mirroring), data written to one drive is simultaneously written to another drive, which doubles the required data storage capacity. This situation is expensive because each drive in the system must be duplicated. The usable capacity of a RAID 1 array is equal to the capacity of the smaller of the two drives in the array.

RAID 5: RAID 5 provides redundancy for one drive failure without duplicating the contents of entire drives. RAID 5 breaks up data into smaller blocks, calculates parity by performing an exclusive-or on the blocks, and then writes the blocks of data and parity to each drive in the drive group. The size of each block is determined by the stripe size parameter, which is set during the creation of the RAID set. The usable capacity of a RAID 5 array is equal to the number of drives in the array, minus one, times the capacity of the smallest drive in the array.

RAID 6: RAID 6 provides redundancy for two drive failures without duplicating the contents of entire drives. However, it requires extra capacity because it uses two parity blocks per stripe. This situation makes RAID 6 more expensive to implement. The usable capacity of a RAID 6 array is equal to the number of drives in the array, minus two, times the capacity of the smallest drive in the array.

RAID 00: RAID 00 (striping in a spanned drive group) involves partitioning each drive storage space into stripes that can vary in size. The combined storage space is composed of stripes from each drive. RAID 00 provides maximum storage capacity for a given set of drives.
RAID 10: RAID 10 requires twice as many drives as all other RAID levels except RAID 1. RAID 10 works well for medium-sized databases or any environment that requires a higher degree of fault tolerance and moderate-to-medium capacity. Disk spanning allows multiple drives to function like one large drive. Spanning overcomes lack of disk space and simplifies storage management by combining existing resources or adding relatively inexpensive resources.

RAID 50: RAID 50 requires two to four times as many parity drives as RAID 5. This RAID level works best when used with data that requires medium to large capacity.

RAID 60: RAID 60 provides redundancy for two drive failures in each RAID set without duplicating the contents of entire drives. However, it requires extra capacity because a RAID 60 virtual drive has to generate two sets of parity data for each write operation. This situation makes RAID 60 more expensive to implement.
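The usable-capacity rules in Table 17 can be summarized in a few lines. The Python sketch below is an illustration of those formulas only (it is not a MegaRAID utility); capacity is always computed from the smallest drive in the group.

    def usable_capacity_gb(raid_level, drive_sizes_gb):
        """Usable capacity for the basic RAID levels, per Table 17."""
        n = len(drive_sizes_gb)
        smallest = min(drive_sizes_gb)
        if raid_level == 0:
            return n * smallest           # striping only, no redundancy
        if raid_level == 1:
            return smallest               # mirrored pair
        if raid_level == 5:
            return (n - 1) * smallest     # one drive's worth of parity
        if raid_level == 6:
            return (n - 2) * smallest     # two drives' worth of parity
        raise ValueError("Spanned levels (10, 50, 60) are built from the per-span results.")

    print(usable_capacity_gb(5, [900, 900, 1000, 1000]))   # 2700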

2.7 RAID Availability

2.7.1 RAID Availability Concept

Data availability without downtime is essential for many types of data processing and storage systems. Businesses want to avoid the financial costs and customer frustration associated with failed servers. RAID helps you maintain data availability and avoid downtime for the servers that provide that data. RAID offers several features, such as spare drives and rebuilds, that you can use to fix any drive problems, while keeping the servers running and data available. The following subsections describe these features.
2.7.1.1 Spare Drives

You can use spare drives to replace failed or defective drives in a drive group. A replacement drive must be at least as large as the drive it replaces. Spare drives include hot swaps, hot spares, and cold swaps.
A hot swap is the manual substitution of a replacement unit in a disk subsystem for a defective one, where the substitution can be performed while the subsystem is running (performing its normal functions). The backplane and enclosure must support hot swap in order for the functionality to work.
Hot spare drives are drives that power up along with the RAID drives and operate in a Standby state. If a drive used in a RAID virtual drive fails, a hot spare automatically takes its place, and the data on the failed drive is rebuilt on the hot spare. Hot spares can be used for RAID levels 1, 5, 6, 10, 50, and 60.
NOTE: If a rebuild to a hot spare fails for any reason, the hot spare drive will be marked as “failed.” If the source drive fails, both the source drive and the hot spare drive will be marked as “failed.”
A cold swap requires that you power down the system before replacing a defective drive in a disk subsystem.
2.7.1.2 Rebuilding

If a drive fails in a drive group that is configured as a RAID 1, 5, 6, 10, 50, or 60 virtual drive, you can recover the lost data by rebuilding the drive. If you have configured hot spares, the RAID controller automatically tries to use them to rebuild failed drives. Manual rebuild is necessary if hot spares with enough capacity to rebuild the failed drives are not available. You must insert a drive with enough storage into the subsystem before rebuilding the failed drive.
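Because a replacement drive must be at least as large as the drive it replaces, a rebuild can start automatically only if such a hot spare exists. The following Python sketch is a hypothetical helper, not controller firmware; preferring the smallest spare that still fits is only an illustrative policy.

    def pick_hot_spare(failed_drive_gb, hot_spare_sizes_gb):
        """Return the index of a usable hot spare, or None if a manual
        rebuild is required because no spare is large enough."""
        candidates = [i for i, size in enumerate(hot_spare_sizes_gb)
                      if size >= failed_drive_gb]
        if not candidates:
            return None
        # Illustrative policy: use the smallest spare that still fits.
        return min(candidates, key=lambda i: hot_spare_sizes_gb[i])

    print(pick_hot_spare(600, [450, 600, 900]))   # 1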

2.8 Configuration Planning

Factors to consider when planning a configuration are the number of drives the RAID controller can support, the purpose of the drive group, and the availability of spare drives.
Each type of data stored in the disk subsystem has a different frequency of read and write activity. If you know the data access requirements, you can more successfully determine a strategy for optimizing the disk subsystem capacity, availability, and performance.
Servers that support video-on-demand typically read the data often, but write data infrequently. Both the read and write operations tend to be long. Data stored on a general-purpose file server involves relatively short read and write operations with relatively small files.

2.9 Number of Drives

Your configuration planning for the SAS RAID controller depends in part on the number of drives that you want to use in a RAID drive group.
The number of drives in a drive group determines the RAID levels that can be supported. Only one RAID level can be assigned to each virtual drive.

2.9.1 Drive Group Purpose

Important factors to consider when creating RAID drive groups include availability, performance, and capacity. Define the major purpose of the drive group by answering questions related to these factors, such as the following, which are followed by suggested RAID levels for each situation:
Will this drive group increase the system storage capacity for general-purpose file and print servers? Use RAID 5, 6, 10, 50, or 60.
Does this drive group support any software system that must be available 24 hours per day? Use RAID 1, 5, 6, 10, 50, or 60.
Will the information stored in this drive group contain large audio or video files that must be available on demand? Use RAID 0 or 00.
Will this drive group contain data from an imaging system? Use RAID 0, 00, or 10.
Fill out Table 18 to help you plan the drive group configuration. Rank the requirements for your drive group, such as storage space and data redundancy, in order of importance, and then review the suggested RAID levels.
Table 18: Factors to Consider for Drive Group Configuration
(Columns: Requirement, Rank, Suggested RAID Levels; the Rank column is left blank for you to fill in.)

Storage space: RAID 0, RAID 5, RAID 00
Data redundancy: RAID 5, RAID 6, RAID 10, RAID 50, RAID 60
Drive performance and throughput: RAID 0, RAID 00, RAID 10
Hot spares (extra drives required): RAID 1, RAID 5, RAID 6, RAID 10, RAID 50, RAID 60
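For planning notes or scripts, the suggestions in Table 18 can be captured as a simple lookup. The snippet below is hypothetical and simply restates the table.

    # Suggested RAID levels per requirement, taken from Table 18.
    SUGGESTED_RAID_LEVELS = {
        "Storage space": ["RAID 0", "RAID 5", "RAID 00"],
        "Data redundancy": ["RAID 5", "RAID 6", "RAID 10", "RAID 50", "RAID 60"],
        "Drive performance and throughput": ["RAID 0", "RAID 00", "RAID 10"],
        "Hot spares (extra drives required)": ["RAID 1", "RAID 5", "RAID 6", "RAID 10", "RAID 50", "RAID 60"],
    }

    print(SUGGESTED_RAID_LEVELS["Data redundancy"])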


Chapter 3: SafeStore Disk Encryption

This chapter describes the LSI SafeStore™ Disk Encryption service. The SafeStore Disk Encryption service is a collection of features within LSI storage products that supports self-encrypting disks. The SafeStore encryption service supports local key management.
3.1 Overview

The SafeStore Disk Encryption service offers the ability to encrypt data on drives and use disk-based key management to provide data security. This solution provides data protection in the event of theft or loss of physical drives. With self-encrypting drives, if you remove a drive from its storage system or from the server in which it is housed, the data on that drive is encrypted and is useless to anyone who attempts to access it without the appropriate security authorization.
With the SafeStore encryption service, data is encrypted by the drives. You can designate which data to encrypt at the individual virtual disk (VD) level.
Any encryption solution requires management of the encryption keys. The security service provides a way to manage these keys. Both the WebBIOS Configuration Utility (Chapter 4) and the MegaRAID Storage Manager software (Chapter 11) offer procedures that you can use to manage the security settings for the drives.

3.2 Purpose and Benefits

Security is a growing market concern and requirement. MegaRAID customers are looking for a comprehensive storage encryption solution to protect data. You can use the SafeStore encryption service to help protect your data.
In addition, SafeStore local key management removes the administrator from most of the daily tasks of securing data, thereby reducing user error and decreasing the risk of data loss. Also, SafeStore local key management supports instant secure erase of drives that permanently removes data when repurposing or decommissioning drives. These services provide a much more secure level of data erasure than other common erasure methods, such as overwriting or degaussing.

3.3 Terminology

Table 19 describes the terminology related to the SafeStore encryption feature.
Table 19: Terminology Used in FDE

Authenticated Mode: The RAID configuration is keyed to a user password. The password must be provided on system boot to authenticate the user and facilitate unlocking the configuration for user access to the encrypted data.

Blob: A blob is created by encrypting a key using another key. There are two types of blob in the system: the encryption key blob and the security key blob.

Key backup: You need to provide the controller with a lock key if the controller is replaced or if you choose to migrate secure virtual disks. To do this task, you must back up the security key.

Password: An optional authenticated mode is supported in which you must provide a password on each boot to make sure the system boots only if the user is authenticated. Firmware uses the user password to encrypt the security key in the security key blob stored on the controller.

Re-provisioning: Re-provisioning disables the security system of a device. For a controller, it involves destroying the security key. For SafeStore encrypted drives, when the drive lock key is deleted, the drive is unlocked and any user data on the drive is securely deleted. This situation does not apply to controller-encrypted drives, because deleting the virtual disk destroys the encryption keys and causes a secure erase. See Section 3.5, Instant Secure Erase, for information about the instant secure erase feature.

Security Key: A key based on a user-provided string. The controller uses the security key to lock and unlock access to the secure user data. This key is encrypted into the security key blob and stored on the controller. If the security key is unavailable, user data is irretrievably lost. You must take all precautions to never lose the security key.

Un-Authenticated Mode: This mode allows the controller to boot and unlock access to user configuration without user intervention. In this mode, the security key is encrypted into a security key blob and stored on the controller, but instead of a user password, an internal key specific to the controller is used to create the security key blob.

Volume Encryption Keys (VEK): The controller uses the volume encryption keys to encrypt data when a controller-encrypted virtual disk is created. These keys are not available to the user. The firmware uses a unique 512-bit key for each virtual disk. The VEKs for the virtual disks are stored on the physical disks in a VEK blob.
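Several of the terms above (security key blob, password, key backup) come down to one idea: a key is protected by encrypting it with another key. The Python sketch below is conceptual only; the controller's actual ciphers, key formats, and storage are not documented here. It assumes the third-party cryptography package and uses PBKDF2 and AES key wrap purely as stand-ins for whatever the firmware does.

    import os
    import hashlib
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

    # A 256-bit security key (hypothetical value for illustration).
    security_key = os.urandom(32)

    # Authenticated mode: derive a key-encryption key from the user password.
    salt = os.urandom(16)
    kek = hashlib.pbkdf2_hmac("sha256", b"user password", salt, 100_000, dklen=32)

    # "Blob" the security key by encrypting (wrapping) it with the derived key.
    security_key_blob = aes_key_wrap(kek, security_key)

    # On boot, the password-derived key unlocks the blob to recover the security key.
    assert aes_key_unwrap(kek, security_key_blob) == security_key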

3.4 Workflow

3.4.1 Enable Security

You can enable security on the controller. After you enable security, you have the option to create secure virtual drives using a security key.
There are three procedures you can perform to create secure virtual drives using a security key:
Create the security key identifier
Create the security key
Create a password (optional)
3.4.1.1 Create the Security Key Identifier
The security key identifier appears whenever you enter the security key. If you have multiple security keys, the identifier helps you determine which security key to enter. The controller provides a default identifier for you. You can use the default setting or enter your own identifier.
3.4.1.2 Create the Security Key

You need to enter the security key to perform certain operations. You can choose a strong security key that the controller suggests.
CAUTION: If you forget the security key, you will lose access to your data.
3.4.1.3 Create a Password

The password provides additional security. The password should be different from the security key. You can select a setting in the utilities so that you must enter the password whenever you boot your server.
CAUTION: If you forget the password, you will lose access to your data.
When you use the specified security key identifier, security key, and password, security is enabled on the controller.

3.4.2 Change Security

You can change the security settings on the controller, and you have the option to change the security key identifier, security key, and password. If you have previously removed any secured drives, you still need to supply the old security key to import them.
You can perform three procedures to change the security settings on the controller:
Change the security key identifier
Change the security key
Change a password
See Section4.7, Selecting SafeStore Encryption Services Security Options for the procedures used to change security options in WebBIOS or Section11.6, LSI SafeStore
Encryption Services for the procedures used to change security options in the
MegaRAID Storage Manager software.
3.4.2.1 Change the Security Key Identifier
You have the option to edit the security key identifier. If you plan to change the security key, it is highly recommended that you change the security key identifier. Otherwise, you will not be able to differentiate between the security keys.
You can select whether you want to keep the current security key identifier or enter a new one. To change the security key identifier, enter a new security key identifier.
3.4.2.2 Change the Security Key

You can choose to keep the current security key or enter a new one. To change the security key, you can either enter the new security key or accept the security key that the controller suggests.
3.4.2.3 Add or Change the Password
You have the option to add a password or change the existing one. To change the password, enter the new password. To keep the existing password, enter the current password. If you choose this option, you must enter the password whenever you boot your server.
This procedure updates the existing configuration on the controller to use the new security settings.

3.4.3 Create Secure Virtual Drives

You can create a secure virtual drive and set its parameters as desired. To create a secure virtual drive, select a configuration method. You can select either simple configuration or advanced configuration.
3.4.3.1 Simple Configuration

If you select simple configuration, select the redundancy type and drive security method to use for the drive group.

See Section 8.1.4, Creating a Virtual Drive Using Simple Configuration, for the procedures used to select the redundancy type and drive security method for a configuration.
3.4.3.2 Advanced Configuration

If you select advanced configuration, select the drive security method, and add the drives to the drive group.

See Section 8.1.5, Creating a Virtual Drive Using Advanced Configuration, for the procedures used to import a foreign configuration.
After the drive group is secured, you cannot remove the security without deleting the virtual drives.

3.4.4 Import a Foreign Configuration

After you create a security key, you can run a scan for a foreign configuration and import a locked configuration. (You can import unsecured or unlocked configurations when security is disabled.) A foreign configuration is a RAID configuration that already exists on a replacement set of drives that you install in a computer system. The WebBIOS Configuration Utility and the MegaRAID Storage Manager software allow you to import the existing configuration to the RAID controller or to clear the configuration so you can create a new one.
See Section4.8, Viewing and Changing Device Properties for the procedure used to import a foreign configuration in WebBIOS or Section11.6.12, Importing or Clearing a
Foreign Configuration for the procedure in the MegaRAID Storage Manager software.
To import a foreign configuration, you must first enable security to allow importation of locked foreign drives. If the drives are locked and the controller security is disabled, you cannot import the foreign drives. Only unlocked drives can be imported when security is disabled.
After you enable the security, you can import the locked drives. To import the locked drives, you must provide the security key used to secure them. Verify whether any drives are left to import, because the locked drives can use different security keys. If any drives are left, repeat the import process for the remaining drives. After all of the drives are imported, there is no configuration left to import.
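The import procedure above is essentially a loop: supply the security key that secured a set of locked drives, import them, and repeat until no foreign drives remain. The sketch below uses hypothetical helper names (it is not a MegaRAID API) and only illustrates the flow.

    def import_foreign_drives(controller, prompt_for_security_key):
        """Repeat the import until no locked foreign drives are left.
        Both arguments are hypothetical stand-ins for this illustration."""
        if not controller.security_enabled:
            raise RuntimeError("Enable security first; locked drives cannot be imported.")
        while controller.locked_foreign_drives():
            key = prompt_for_security_key()        # key used when the drives were secured
            controller.import_foreign_config(key)  # imports the drives this key unlocks
        # After all drives are imported, there is no configuration left to import.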
3.5 Instant Secure Erase

Instant Secure Erase is a feature used to erase data from encrypted drives. After the initial investment for an encrypted disk, there is no additional cost in dollars or time to erase data using the Instant Secure Erase feature.
You can change the encryption key for all MegaRAID RAID controllers that are connected to encrypted drives. All encrypted drives, whether locked or unlocked, always have an encryption key. This key is set by the drive and is always active. When the drive is unlocked, the data passed from the drive to the host (on reads) and from the host to the drive cache (on writes) is provided in unencrypted form. However, when it is at rest on the drive platters, the data is always encrypted by the drive.
You might not want to lock your drives because you have to manage a password if they are locked. Even if you do not lock the drives, there is still a benefit to using encrypted disks.
If you are concerned about data theft or other security issues, you might already invest in drive disposal costs, and there are benefits to using SafeStore encryption over other technologies that exist today, both in terms of the security provided and time saved.
If the encryption key on the drive changes, the drive cannot decrypt the data on the platters, effectively erasing the data on the disks. The National Institute of Standards and Technology (http://www.nist.gov) values this type of data erasure above secure erase and below physical destruction of the device.
Consider the following reasons for using instant secure erase.
If you need to repurpose the hard drive for a different application.
You might need to move the drive to another server to expand storage elsewhere, but the drive is in use. The data on the drive might contain sensitive data including customer information that, if lost or divulged, could cause an embarrassing disclosure of a security hole. You can use the instant secure erase feature to effectively erase the data so that the drive can be moved to another server or area without concern that old data could be found.
If you need to replace drives.
If the amount of data has outgrown the storage system, and there is no room to expand capacity by adding drives, you might choose to purchase upgrade drives. If the older drives support encryption, you can erase the data instantly so the new drives can be used.
If you need to return a disk for warranty activity.
If the drive is beginning to show SMART predictive failure alerts, you might want to return the drive for replacement. If so, the drive must be effectively erased if there is sensitive data. Occasionally a drive is in such bad condition that standard erasure applications do not work. If the drive still allows any access, it might be possible to destroy the encryption key.


Chapter 4: WebBIOS Configuration Utility

This chapter describes the WebBIOS Configuration Utility (CU), which enables you to create and manage RAID configurations on LSI SAS controllers.
4.1 Overview

The WebBIOS configuration utility, unlike the MegaRAID Storage Manager software, resides in the SAS controller BIOS and operates independently of the operating system.
You can use the WebBIOS configuration utility to perform the following tasks:
Create drive groups and virtual drives for storage configurations.
Display controller, drive, virtual drive, and battery backup unit (BBU) properties, and change parameters.
Delete virtual drives.
Migrate a storage configuration to a different RAID level.
Detect configuration mismatches.
Import a foreign configuration.
Scan devices connected to the controller.
Initialize virtual drives.
Check configurations for data consistency.
Create a CacheCade 2.0 SSD Read Caching configuration.
The WebBIOS configuration utility provides a configuration wizard to guide you through the configuration of virtual drives and drive groups.

4.2 Starting the WebBIOS configuration utility

To start the WebBIOS configuration utility, perform the following steps:
1. When the host computer is booting, hold down the Ctrl key and press the H key when the following text appears on the dialog:
Copyright© LSI Corporation Press <Ctrl><H> for WebBIOS
The Controller Selection dialog appears.
2. If the system has multiple SAS controllers, select a controller.
3. Click Start to continue.
The main WebBIOS configuration utility dialog appears.
NOTE: On systems that do not have the PS2 port, you must enable 'port 60/64 emulation' in the System BIOS to emulate USB as PS2. When this option is disabled on this system, WebBIOS does not work.

4.3 WebBIOS configuration utility Main Dialog Options

Figure 15: WebBIOS Configuration Utility Main Dialog
In the right frame, the dialog shows the virtual drives configured on the controller, and the drives that are connected to the controller. In addition, the dialog identifies drives that are foreign or missing.
NOTE: In the list of virtual drives, the drive nodes are sorted based on the order in which you added the drives to the drive group, rather than the physical slot order that displays in the physical trees.
NOTE: The minimum dialog resolution for WebBIOS is 640 x 480.
To toggle between the Physical view and the Logical view of the storage devices connected to the controller, click Physical View or Logical View in the menu in the left frame. When the Logical View dialog appears, it shows the drive groups that are configured on this controller.
NOTE: Unconfigured Bad drives are only displayed in the Physical View.
For drives in an enclosure, the dialog shows the following drive information:
Enclosure
Slot
Interface type (such as SAS or SATA)
Drive type (HDD or SSD)
Drive size
Drive status (such as Online or Unconfigured Good)
The toolbar at the top of the WebBIOS configuration utility has the following buttons, as listed in Table 20.
Table 20: WebBIOS configuration utility Toolbar Icons
Icon Description
Click this icon to return to the main dialog from any other WebBIOS configuration utility dialog.
Click this icon to return to the previous dialog that you were viewing.
Click this icon to exit the WebBIOS configuration utility wizard.
Click this icon to turn off the sound on the onboard controller alarm.
Click this icon to display information about the WebBIOS configuration utility version, bus number, and device number.
The following is a description of the options listed on the left frame of the WebBIOS configuration utility main dialog (the hotkey shortcut for each option is shown in parentheses next to the option name):
Advanced Software Options: (Alt+a) Select this option to enable the advanced features in the controller. For more information, see Section 4.4.1, Managing MegaRAID Advanced Software Options.
Controller Selection: (Alt+c) Select this option to view the Controller Selection dialog, where you can select a different SAS controller. You can also view information about the controller and the devices connected to it, or create a new configuration on the controller.
Controller Properties: (Alt+p) Select this option to view the properties of the currently selected SAS controller. For more information, see Section 4.8.1, Viewing Controller Properties.
Drive Security: (Alt+r) Select this option to encrypt data on the drives and use disk-based key management for the data security solution. This solution protects your data in case of theft or loss of physical drives. For more information, see Section 4.7, Selecting SafeStore Encryption Services Security Options.
Scan Devices: (Alt+s) Select this option to have the WebBIOS configuration utility re-scan the physical and virtual drives for any changes in the drive status or the physical configuration. The WebBIOS configuration utility displays the results of the scan in the physical and virtual drive descriptions.
Virtual Drives: (Alt+v) Select this option to view the Virtual Drives dialog, where you can change and view virtual drive properties, delete virtual drives, initialize drives, and perform other tasks. For more information, see Section 4.8.2, Viewing Virtual Drive Properties, Policies, and Operations.
Drives: (Alt+d) Select this option to view the Drives dialog, where you can view drive properties, create hot spares, and perform other tasks. For more information, see Section 4.8.3, Viewing Drive Properties.
Configuration Wizard: (Alt+o) Select this option to start the Configuration Wizard and create a new storage configuration, clear a configuration, or add a configuration. For more information, see Section 4.5, Creating a Storage Configuration.
Logical View/Physical View: (Alt+l for the Logical view; Alt+h for the Physical view) Select this option to toggle between the Physical View dialog and the Logical View dialog.
Events: (Alt+e) Select this option to view system events in the Event Information dialog. For more information, see Section 4.13, Viewing System Event Information.
Exit: (Alt+x) Select this option to exit the WebBIOS configuration utility and continue with system boot.

4.4 Managing Software Licensing

The MegaRAID advanced software offers the software license key feature to enable the advanced options in WebBIOS. The license key, also known as the Activation key, is used to transfer the advanced features from one controller to another by configuring the Key Vault.
You need to configure the Advanced Software Options menu present in the WebBIOS main dialog to use the advanced features present in the controller.

4.4.1 Managing MegaRAID Advanced Software Options

Perform the following steps to configure the Advanced Software Options wizard to enable the advanced options using the activation key.
1. Click the Advanced Software Options menu on the WebBIOS main dialog.
The Advanced Software Options wizard appears, as shown in the following figure.
Figure 16: Manage MegaRAID Advanced Software Options Wizard
NOTE: When you click the Advanced Software Options menu in the main WebBIOS dialog, if re-hosting is not required, the Manage MegaRAID Advanced Software Options dialog appears; otherwise, if the user decides to opt for the re-hosting process, the Confirm Re-hosting Process dialog (Section 4.4.8) appears.
The Activated Advanced Software Options field consists of the Advanced Software Options, License, and Mode columns.
The Advanced Software Options column displays the list of advanced software features available in the controller.
The License column displays the license details for the advanced software listed in the Advanced Software Options column. The license details indicate whether the software is in a trial period or can be used without any trial period (Unlimited).
The Mode column displays the current status of the advanced software. The current status can be Secured, Not secured, or Factory installed.
Both the Safe ID and the Serial Number fields consist of a pre-defined value internally generated by the controller.
2. Click Activate.
The Advanced Software Options Summary wizard appears, as shown in Figure 22.
3. Click Configure Key Vault.
The Confirm Rehosting Process wizard appears, as shown in Figure 27.
The Configure Key Vault button is conditional, and appears in two scenarios.
Scenario 1: When features have been transferred from NVRAM to the key vault and no re-hosting is required, the Configure Key Vault button is not displayed.
Scenario 2: When the re-hosting process needs to be completed, the Configure Key Vault button appears.
4. Click Deactivate All Trial Software.
The WebBIOS Deactivate All Trial Advanced Software Options dialog appears, as shown in the following figure.
Figure 17: Deactivate All Trial Advanced Software Options Dialog
To deactivate the software that is being used with a trial key, click Yes; otherwise, click No.
If the activation key entered in the Activation field of the Advanced Software Options wizard is improper, the following messages appear, based on the scenario.
Scenario 1: If you enter an invalid activation key, the following message appears.
Figure 18: Invalid Activation Key Message
Scenario 2: If you leave the activation key field blank or enter space characters, the following message appears.
Figure 19: Config Utility Cannot Activate Advanced Software Options Message
Scenario 3: If you enter an incorrect activation key and there is a mismatch between the activation key and the controller, the following message appears.
Figure 20: Activation Key Mismatch Message
4.4.2 Reusing the Activation Key

If you are using an existing activated key, the features are transferred to the key vault, and the message appears as shown in the following figure.

Figure 21: Reusing the Activation Key

4.4.3 Managing Advanced Software Summary

When you click Activate in the Manage MegaRAID Advanced Software Options dialog, the Advanced Software Options Summary wizard appears, as shown in the following figure.
Figure 22: Advanced Software Options Summary Wizard
The Summary field displays the list of the advanced software options along with their former status and new status in the controller.
The Advanced Software Options column displays the currently available software in the controller.
The Former Status column displays the status of the available advanced software prior to entering the activation key.
The New Status column displays the status of the available advanced software after entering the activation key.

4.4.4 Activating an Unlimited Key Over a Trial Key

When you activate an unlimited key over a trial key, the "Review the summary and go back if you need to make corrections" message appears, as shown in the following figure.
Figure 23: Activating an Unlimited Key over a Trial Key

4.4.5 Activating a Trial Software

When you activate a trial software, the "This trial software expires in 30 days" message appears, as shown in the following figure.
Figure 24: Activating a Trial Software Application
4.4.6 Activating an Unlimited Key

When you activate an unlimited key, the "Review the summary and go back if you need to make corrections" message appears, as shown in the following figure.

Figure 25: Activating an Unlimited Key


4.4.7 Securing MegaRAID Advanced Software

If the advanced software is not secured, when you click the Configure Key Vault button in the Advanced Software Options wizard, the WebBIOS Secure MegaRAID Advanced Software Options dialog box appears, as shown in the following figure.
Figure 26: Secure Advanced Software Options

4.4.8 Confirm Re-hosting Process

The confirm re-hosting process involves transferring or re-hosting the advanced software features from one controller to another.

When you need to transfer the features from one controller (for example, controller 1) to another controller (for example, controller 2), and the controller 2 NVRAM contains features that need to be transferred to the key vault, the Confirm Re-hosting Process dialog appears, as shown in Figure 27.
Perform the following steps to confirm the rehosting process.
1. Click the Configure Key Vault button in the Advanced Software Options wizard.
The Confirm Rehosting Process wizard appears as shown in the following figure.
Figure 27: Confirm Re-hosting Process Dialog
2. Select the I acknowledge that I have completed the re-hosting process in the LSI Advanced Software License Management Portal check box.
3. Click Next.
The Manage Advanced Software Options Summary dialog appears, as shown in Figure 23.

4.4.9 Re-hosting Process Complete

In a scenario where only the key vault feature needs to be transferred from controller 1 to controller 2, the Re-hosting Process Complete dialog appears, as shown in the following figure.
Figure 28: Re-hosting Process Complete Dialog
1. Select the I acknowledge that I have completed the re-hosting process in the LSI Advanced Software License Management Portal check box.
2. Click Next.
The Manage MegaRAID Advanced Software Options Wizard appears.
The rehosting process is completed.
NOTE: If you click Next in the Re-hosting Process Complete dialog before the re-hosting process is complete, the features are not copied into the key vault; they remain where they currently reside, but you can still use the advanced features.

4.5 Creating a Storage Configuration

This section explains how to use the WebBIOS configuration utility Configuration Wizard to configure RAID drive groups and virtual drives to create storage configurations.
Follow these steps to start the Configuration wizard, and select a configuration option and mode:
1. Click Configuration Wizard on the WebBIOS main dialog.
The first Configuration Wizard dialog appears, as shown in the following figure.
Figure 29: WebBIOS Configuration Wizard Dialog
2. Select a configuration option.
CAUTION: If you choose the first or second option, all existing data in the configuration will be deleted. Make a backup of any data that you want to keep before you choose an option.
Clear Configuration: Clears the existing configuration.
New Configuration: Clears the existing configuration and lets you create a new configuration.
Add Configuration: Retains the existing storage configuration and adds new drives to it (this option does not cause any data loss).
3. Click Next.
A dialog box warns that you will lose data if you select Clear Configuration or New Configuration.
4. The Convert JBOD Drives to Unconfigured Drives dialog appears, as shown in the following figure.
NOTE: The JBOD Drives to Unconfigured Drives dialog appears only if the system detects JBOD drives.
Figure 30: JBOD Drives to Unconfigured Good Dialog
5. Click Next.
The WebBIOS Configuration Method dialog appears, as shown in the following figure.
Figure 31: WebBIOS Configuration Method Wizard
6. Select a configuration mode:
Manual Configuration: Allows you to control all attributes of the new storage configuration as you create drive groups and virtual drives, and set their parameters.
Automatic Configuration: Automatically creates an optimal RAID configuration.
If you select Automatic Configuration, you can choose whether to create a redundant RAID drive group or a non-redundant RAID 0 drive group. Select one of the following options in the Redundancy drop-down list:
Redundancy when possible
No redundancy
If you select Automatic Configuration, you can choose whether to use a drive security method. Select one of the following options in the Drive Security Method drop-down list:
No Encryption
Drive Encryption
7. Click Next to continue.
If you select the Automatic Configuration radio button, continue with Section 4.5.1, Using Automatic Configuration. If you select Manual Configuration, continue with Section 4.5.2, Using Manual Configuration.

4.5.1 Using Automatic Configuration

Follow these instructions to create a configuration with automatic configuration, either with or without redundancy:
1. When WebBIOS displays the proposed new configuration, review the information on the dialog, and click Accept to accept it. (Or click Back to go back and change the configuration.)
RAID 0: If you select Automatic Configuration and No Redundancy, WebBIOS creates a RAID 0 configuration.
RAID 1: If you select Automatic Configuration and Redundancy when possible, and only two drives are available, WebBIOS creates a RAID 1 configuration.
RAID 5: If you select Automatic Configuration and Redundancy when possible, and three or more drives are available, WebBIOS creates a RAID 5 configuration.
RAID 6: If you select Automatic Configuration and Redundancy when possible, and the RAID 6 option is enabled, and three or more drives are available, WebBIOS creates a RAID 6 configuration.
2. Click Yes when you are prompted to save the configuration.
3. Click Yes when you are prompted to initialize the new virtual drives.
WebBIOS configuration utility begins a background initialization of the virtual drives.
New RAID 5 virtual drives and new RAID 6 virtual drives require a minimum number of drives for a background initialization to start. If there are fewer drives, the background initialization will not start. The following number of drives is required:
New RAID 5 virtual drives must have at least five drives for a background initialization to start.
New RAID 6 virtual drives must have at least seven drives for a background initialization to start.
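The rules in step 1 above amount to a small decision function. The Python sketch below is illustrative only (it is not WebBIOS code), and it assumes RAID 6 takes precedence over RAID 5 when the RAID 6 option is enabled and that a single drive falls back to RAID 0.

    def auto_raid_level(redundancy, drive_count, raid6_enabled=False):
        """RAID level chosen by Automatic Configuration, per the rules above."""
        if redundancy == "No redundancy":
            return 0
        # Redundancy when possible:
        if drive_count == 2:
            return 1
        if drive_count >= 3:
            return 6 if raid6_enabled else 5   # assumption: RAID 6 wins when enabled
        return 0                               # assumption: single drive falls back to RAID 0

    print(auto_raid_level("Redundancy when possible", 4))   # 5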

4.5.2 Using Manual Configuration

This section contains the procedures for creating RAID drive groups for RAID levels 0, 1, 5, 6, 00, 10, 50, and 60.
4.5.2.1 Using Manual Configuration: RAID 0
RAID 0 provides drive striping across all drives in the RAID drive group. RAID 0 does not provide any data redundancy but does offer excellent performance. RAID 0 is ideal for applications that require high bandwidth but do not require fault tolerance. RAID 0 also denotes an independent or single drive.
NOTE: RAID level 0 is not fault-tolerant. If a drive in a RAID 0 drive group fails, the whole virtual drive (all drives associated with the virtual drive) fails.
When you select Manual Configuration and click Next, the Drive Group Definition dialog appears. Use this dialog to select drives to create drive groups.
1. Hold Ctrl while selecting two or more ready drives in the Drives panel on the left until you have selected all desired drives for the drive group.
2. Click Add To Array to move the drives to a proposed drive group configuration in the Drive Groups panel on the right, as shown in Figure32.
If you need to undo the changes, click Reclaim.
3. Choose whether to use power save mode.
4. Choose whether to use drive encryption.
Figure 32: Drive Group Definition Dialog
5. After you finish selecting drives for the drive group, click Accept DG.
6. Click Next.
The Virtual Drive Definition dialog appears, as shown in Figure 33. This dialog lists the possible RAID levels for the drive group.
Use this dialog to select the RAID level, strip size, read policy, and other attributes for the new virtual drives.
Figure 33: WebBIOS Virtual Drive Definition Dialog
7. Change the virtual drive options from the defaults listed on the dialog as needed.
Here are brief explanations of the virtual drive options:
RAID Level: The drop-down list shows the possible RAID levels for the virtual drive. Select RAID 0.
Strip Size: The strip size is the portion of a stripe that resides on a single drive in the drive group. The stripe consists of the data segments that the RAID controller writes across multiple drives, not including parity drives. For example, consider a stripe that contains 64 KB of drive space and has 16 KB of data residing on each drive in the stripe. In this case, the stripe size is 64 KB, and the strip size is 16 KB. You can set the strip size to 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, or 1024 KB. A larger strip size produces higher read performance. If your computer regularly performs random read requests, choose a smaller strip size. The default is 64 KB.
Access Policy: Select the type of data access that is allowed for this virtual drive.
RW: Allow read/write access. This is the default.
Read Only: Allow read-only access.
Blocked: Do not allow access.
Read Policy: Specify the read policy for this virtual drive.
Normal: This option disables the read ahead capability. This option is the default.
Ahead: This option enables read ahead capability, which allows the controller to read sequentially ahead of requested data and to store the additional data in cache memory, anticipating that the data will be needed soon. This option speeds up reads for sequential data, but there is little improvement when accessing random data.
Write Policy: Specify the write policy for this virtual drive.
WBack: In Write back mode, the controller sends a data transfer completion signal to the host when the controller cache has received all of the data in a transaction. This setting is recommended in Standard mode.
WThru: In Write through mode, the controller sends a data transfer completion signal to the host when the drive subsystem has received all of the data in a transaction. This option is the default setting.
Write Back with BBU: Select this mode if you want the controller to use Write back mode but the controller has no BBU or the BBU is bad. If you do not choose this option, the controller firmware automatically switches to Write through mode if it detects a bad or missing BBU.
CAUTION: LSI allows Write back mode to be used with or without a BBU. LSI recommends that you use either a battery to protect the controller cache, or an uninterruptible power supply (UPS) to protect the entire system. If you do not use a battery or a UPS, and a power failure occurs, you risk losing the data in the controller cache.
IO Policy: The IO policy applies to reads on a specific virtual drive. It does not affect the read ahead cache.
Direct: In Direct I/O mode, reads are not buffered in cache memory. Data is transferred to the cache and the host concurrently. If the same data block is read again, it comes from cache memory. This option is the default setting.
Cached: In Cached I/O mode, all reads are buffered in cache memory.
Drive Cache: Specify the drive cache policy.
Enable: Enable the drive cache.
Disable: Disable the drive cache. This option is the default setting.
NoChange: Leave the current drive cache policy as is.
Disable BGI: Specify the Background Initialization (BGI) status.
No: Leave background initialization enabled, which means that a new configuration can be initialized in the background while you use WebBIOS to perform other configuration tasks. This option is the default setting.
Yes: Select Yes if you do not want to allow background initializations for configurations on this controller.
Select Size: Specify the size of the virtual drive in MB, GB, or TB. Usually, this is the full size for RAID 0 shown in the Configuration panel on the right. You can specify a smaller size if you want to create other virtual drives on the same drive group.
Update Size: Click Update Size to update the Select Size value for the selected RAID levels.
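The Strip Size option above distinguishes the strip (the portion on a single drive) from the stripe (the full row written across the data drives). A quick Python sketch of that arithmetic, using the 64 KB stripe / 16 KB strip example from the description (which implies four data drives):

    def stripe_size_kb(strip_size_kb, data_drive_count):
        """Stripe size = strip size x number of data drives (parity drives excluded)."""
        return strip_size_kb * data_drive_count

    print(stripe_size_kb(16, 4))   # 64, matching the example above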
8. Click Accept to accept the changes to the virtual drive definition.
If you need to undo the changes, click Reclaim.
9. Click Next after you finish defining the virtual drives.
The Configuration Preview dialog appears, as shown in Figure 34.
Figure 34: RAID 0 Configuration Preview Dialog
10. Check the information in the Configuration Preview Dialog.
11. If the virtual drive configuration is acceptable, click Accept to save the configuration. Otherwise, click Back to return to the previous dialogs and change the configuration.
12. If you accept the configuration, click Yes at the prompt to save the configuration.
The WebBIOS main menu appears.
4.5.2.2 Using Manual Configuration: RAID 1
In RAID 1, the RAID controller duplicates all data from one drive to a second drive. RAID 1 provides complete data redundancy, but at the cost of doubling the required data storage capacity. It is appropriate for small databases or any other environment that requires fault tolerance but small capacity.
When you select Manual Configuration and click Next, the Drive Group Definition dialog appears. Use this dialog to select drives to create drive groups.
1. Hold Ctrl while you select two ready drives in the Drives panel on the left. You must select an even number of drives.
2. Click Add To Array to move the drives to a proposed drive group configuration in the Drive Groups panel on the right, as shown in Figure 35.
If you need to undo the changes, click Reclaim.
3. Choose whether to use power save mode.
4. Choose whether to use drive encryption.
NOTE: A RAID 1 virtual drive can contain up to 16 drive groups and 32 drives in a single span. (Other factors, such as the type of controller, can limit the number of drives.) You must use two drives in each RAID 1 drive group in the span.
Figure 35: Drive Group Definition Dialog
5. After you finish selecting drives for the drive group, click Accept DG.
6. Click Next.
The Virtual Drive Definition dialog appears, as shown in Figure 36. You use this dialog to select the RAID level, strip size, read policy, and other attributes for the new virtual drives.
Figure 36: Virtual Drive Definition Dialog
7. Change the virtual drive options from the defaults listed on the dialog as needed.
Here are brief explanations of the virtual drive options:
RAID Level: The drop-down list shows the possible RAID levels for the virtual
drive. Select RAID 1.
Strip Size: The strip size is the portion of a stripe that resides on a single drive in the drive group. The stripe consists of the data segments that the RAID controller writes across multiple drives, not including parity drives. For example, consider a stripe that contains 64 KB of drive space and has 16 KB of data residing on each drive in the stripe. In this case, the stripe size is 64 KB, and the strip size is 16 KB. You can set the strip size to 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, or 1024 KB. A larger strip size produces higher read performance. If your computer regularly performs random read requests, choose a smaller strip size. The default is 64 KB. A short worked example of the strip-to-stripe relationship appears after this list of options.
Access Policy: Select the type of data access that is allowed for this virtual drive.
RW: Allow read/write access. This option is the default setting.
Read Only: Allow read-only access.
Blocked: Do not allow access.
Read Policy: Specify the read policy for this virtual drive.
Normal: This option disables the read ahead capability. This option is the default
setting.
Ahead: This option enables read ahead capability, which allows the controller to read sequentially ahead of requested data and to store the additional data in cache memory, anticipating that the data will be needed soon. This option speeds up reads for sequential data, but there is little improvement when accessing random data.
Write Policy: Specify the write policy for this virtual drive.
WBack: In Write back mode, the controller sends a data transfer completion signal to the host when the controller cache has received all of the data in a transaction. This setting is recommended in Standard mode.
WThru: In Write through mode, the controller sends a data transfer completion signal to the host when the drive subsystem has received all of the data in a transaction. This is the default setting.
Write Back with BBU: Select this mode if you want the controller to use Write back mode, but the controller has no BBU or the BBU is bad. If you do not choose this option, the controller firmware automatically switches to Write through mode if it detects a bad or missing BBU.
CAUTION: LSI allows Write back mode to be used with or without a BBU. LSI recommends that you use either a battery to protect the controller cache, or a UPS to protect the entire system. If you do not use a battery or a UPS, and a power failure occurs, you risk losing the data in the controller cache.
IO Policy: The IO policy applies to reads on a specific virtual drive. It does not
affect the read ahead cache.
Direct: In Direct I/O mode, reads are not buffered in cache memory. Data is transferred to the cache and the host concurrently. If the same data block is read again, it comes from cache memory. This option is the default setting.
Cached: In Cached I/O mode, all reads are buffered in cache memory.
Drive Policy: Specify the drive cache policy.
Enable: Enable the drive cache.
Disable: Disable the drive cache. This option is the default setting.
NoChange: Leave the current drive cache policy as is.
Disable BGI: Specify the Background Initialization (BGI) status:
No: Leave background initialization enabled, which means that a new
configuration can be initialized in the background while you use WebBIOS to do other configuration tasks. This is the default.
Yes: Select Yes if you do not want to allow background initializations for configurations on this controller.
Select Size: Specify the size of the virtual drives in MB, GB, or TB. Usually, this
would be the full size for RAID 1 shown in the Configuration panel on the right. You can specify a smaller size if you want to create other virtual drives on the same drive group.
Update Size: Click Update Size to update the Select size field value for the selected RAID levels.
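Before you accept the settings, it may help to see how the strip size and stripe size described above relate numerically. The following Python sketch is illustrative only and is not part of the WebBIOS utility; the function name and values are hypothetical and simply restate the example given in the Strip Size description.

    # Illustrative only: the stripe spans every data drive in the drive
    # group, and each drive holds one strip of it (parity drives, if any,
    # are not counted).
    def stripe_size_kb(strip_size_kb, data_drives):
        return strip_size_kb * data_drives

    # Hypothetical example matching the text: 16-KB strips on four data
    # drives form a 64-KB stripe.
    print(stripe_size_kb(16, 4))   # 64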
8. Click Accept to accept the changes to the virtual drive definition.
If you need to undo the changes, click Reclaim.
9. Click Next after you finish defining the virtual drives.
The Configuration Preview dialog appears, as shown in Figure 37.
Figure 37: RAID 1 Configuration Preview Dialog
10. Check the information in the Configuration Preview dialog.
11. If the virtual drive configuration is acceptable, click Accept to save the configuration. Otherwise, click Back to return to the previous dialogs and change the configuration.
12. If you accept the configuration, click Yes at the prompt to save the configuration.
The WebBIOS main menu appears.
4.5.2.3 Using Manual Configuration: RAID 5
RAID 5 uses drive striping at the block level and parity. In RAID 5, the parity information is written to all drives. It is best suited for networks that perform a lot of small input/output (I/O) transactions simultaneously. RAID 5 provides data redundancy, high read rates, and good performance in most environments. It also provides redundancy with lowest loss of capacity.
RAID 5 provides high data throughput. RAID 5 is useful for transaction processing applications because each drive can read and write independently. If a drive fails, the RAID controller uses the parity drive to re-create all missing information. You can use RAID 5 for office automation and online customer service that require fault tolerance.
In addition, RAID 5 is good for any application that has high read request rates but low write request rates.
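The capacity trade-off described above can be restated as a simple formula. The Python sketch below is illustrative only and is not part of any MegaRAID tool; the function name and drive sizes are hypothetical, and the calculation assumes drives of equal size.

    # Illustrative only: usable capacity of a RAID 5 drive group.
    # One drive's worth of space is consumed by the distributed parity.
    def raid5_usable_gb(drive_count, drive_size_gb):
        assert drive_count >= 3, "RAID 5 needs at least three drives"
        return (drive_count - 1) * drive_size_gb

    # Hypothetical example: five 300-GB drives yield 1200 GB usable.
    print(raid5_usable_gb(5, 300))   # 1200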
When you select Manual Configuration and click Next, the Drive Group Definition dialog appears. You use this dialog to select drives to create drive groups.
1. Hold Ctrl while you select at least three ready drives in the Physical Drives panel on the left.
2. Click Add To Array to move the drives to a proposed drive group configuration in the Drive Groups panel on the right, as shown in Figure 38.
3. If you need to undo the changes, click Reclaim.
4. Choose whether to use power save mode.
5. Choose whether to use drive encryption.
Figure 38: Drive Group Definition Dialog
6. After you finish selecting drives for the drive group, click Accept DG.
7. Click Next.
The Virtual Drive Definition dialog appears, as shown in Figure 39. You use this dialog to select the RAID level, strip size, read policy, and other attributes for the new virtual drives.
Figure 39: Virtual Drive Definition Dialog
8. Change the virtual drive options from the defaults listed on the dialog as needed.
Here are brief explanations of the virtual drive options.
RAID Level: The drop-down list provides the possible RAID levels for the virtual
drive. Select RAID 5.
Strip Size: The strip size is the portion of a stripe that resides on a single drive in
the drive group. The stripe consists of the data segments that the RAID controller writes across multiple drives, not including parity drives. For example, consider a stripe that contains 64 KB of drive space and has 16 KB of data residing on each drive in the stripe. In this case, the stripe size is 64 KB, and the strip size is 16 KB. You can set the strip size to 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, or 1024 KB. A larger strip size produces higher read performance. If your computer regularly performs random read requests, choose a smaller strip size. The default is 64 KB.
Access Policy: Select the type of data access that is allowed for this virtual drive.
RW: Allow read/write access. This option is the default setting.
Read Only: Allow read-only access.
Blocked: Do not allow access.
Read Policy: Specify the read policy for this virtual drive.
Normal: This option disables the read ahead capability. This option is the default
setting.
Ahead: This option enables read ahead capability, which allows the controller to read sequentially ahead of requested data and to store the additional data in cache memory, anticipating that the data will be needed soon. This option speeds up reads for sequential data, but there is little improvement when accessing random data.
Write Policy: Specify the write policy for this virtual drive.
WBack: In Write back mode, the controller sends a data transfer completion signal to the host when the controller cache has received all of the data in a transaction. This setting is recommended in Standard mode.
WThru: In Write through mode, the controller sends a data transfer completion signal to the host when the drive subsystem has received all of the data in a transaction. This option is the default setting.
Write Back with BBU: Select this mode if you want the controller to use Write back mode but the controller has no BBU or the BBU is bad. If you do not choose this option, the controller firmware automatically switches to Write through mode if it detects a bad or missing BBU.
CAUTION: LSI allows Write back mode to be used with or without a BBU. LSI recommends that you use either a battery to protect the controller cache, or a UPS to protect the entire system. If you do not use a battery or a UPS, and a power failure occurs, you risk losing the data in the controller cache.
IO Policy: The IO policy applies to reads on a specific virtual drive. It does not
affect the read ahead cache.
Direct: In Direct I/O mode, reads are not buffered in cache memory. Data is transferred to the cache and the host concurrently. If the same data block is read again, it comes from cache memory. This option is the default setting.
Cached: In Cached I/O mode, all reads are buffered in cache memory.
Drive Policy: Specify the drive cache policy.
Enable: Enable the drive cache.
Disable: Disable the drive cache. This option is the default setting.
NoChange: Leave the current drive cache policy as is.
Disable BGI: Specify the Background Initialization (BGI) status.
No: Leave background initialization enabled, which means that a new
configuration can be initialized in the background while you use WebBIOS to perform other configuration tasks. This option is the default setting.
Yes: Select Yes if you do not want to allow background initializations for configurations on this controller.
NOTE: New RAID 5 virtual drives require at least five drives for a background initialization to start.
Select Size: Specify the size of the virtual drive in MB, GB, or TB. Usually, this
setting would be the full size for RAID 5 shown in the Configuration panel on the right. You can specify a smaller size if you want to create other virtual drives on the same drive group.
Update Size: Click Update Size to update the Select size field value for the selected RAID levels.
9. Click Accept to accept the changes to the virtual drive definition.
If you need to undo the changes, click Reclaim.
10. Click Next after you finish defining the virtual drives.
The Configuration Preview dialog appears, as shown in Figure 40.
Figure 40: RAID 5 Configuration Preview Dialog
11. Check the information in the configuration preview.
12. If the virtual drive configuration is acceptable, click Accept to save the configuration. Otherwise, click Cancel to end the operation and return to the WebBIOS main menu, or click Back to return to the previous dialogs and change the configuration.
13. If you accept the configuration, click Yes at the prompt to save the configuration.
The WebBIOS main menu appears.
4.5.2.4 Using Manual Configuration: RAID 6
RAID 6 is similar to RAID 5 (drive striping and distributed parity), except that instead of one parity block per stripe, there are two. With two independent parity blocks, RAID 6 can survive the loss of any two drives in a virtual drive without losing data. Use RAID 6 for data that requires a very high level of protection from loss.
RAID 6 is best suited for networks that perform a lot of small input/output (I/O) transactions simultaneously. It provides data redundancy, high read rates, and good performance in most environments.
In the case of a failure of one drive or two drives in a virtual drive, the RAID controller uses the parity blocks to recreate all of the missing information. If two drives in a RAID 6 virtual drive fail, two drive rebuilds are required, one for each drive. These rebuilds do not occur at the same time. The controller rebuilds one failed drive, and then the other failed drive.
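The double-parity arrangement described above can be illustrated with a short capacity calculation. The sketch below is illustrative only and is not part of any MegaRAID tool; the function name and drive sizes are hypothetical, and the calculation assumes drives of equal size.

    # Illustrative only: usable capacity of a RAID 6 drive group.
    # Two drives' worth of space hold the two independent parity blocks,
    # which is why any two drives can fail without data loss.
    def raid6_usable_gb(drive_count, drive_size_gb):
        assert drive_count >= 3, "RAID 6 needs at least three drives"
        return (drive_count - 2) * drive_size_gb

    # Hypothetical example: six 300-GB drives yield 1200 GB usable.
    print(raid6_usable_gb(6, 300))   # 1200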
NOTE: Integrated MegaRAID displays new drives as Just a Bunch of Disks (JBOD). For MegaRAID, unless the inserted drive contains valid DDF metadata, new drives display as JBOD. Rebuilds start only on Unconfigured Good drives, so you have to change the new drive state from JBOD to Unconfigured Good to start a rebuild.
When you select Manual Configuration, and click Next, the WebBIOS Drive Group Definition dialog appears. You use this dialog to select drives to create drive groups.
1. Hold Ctrl while selecting at least three ready drives in the Drives panel on the left.
2. Click Add To Array to move the drives to a proposed drive group configuration in the Drive Groups panel on the right, as shown in Figure 41.
3. If you need to undo the changes, click Reclaim.
4. Choose whether to use power save mode.
5. Choose whether to use drive encryption.
The drop-down list in the Encryption field lists the options.
Figure 41: Drive Group Definition Dialog
6. After you finish selecting drives for the drive group, click Accept DG.
7. Click Next.
The Virtual Drive Definition dialog appears, as shown in Figure 42. Use this dialog to select the RAID level, strip size, read policy, and other attributes for the new virtual drives.
Figure 42: WebBIOS Virtual Drive Definition Dialog
8. Change the virtual drive options from the defaults listed on the dialog as needed.
Here are brief explanations of the virtual drive options:
RAID Level: The drop-down menu lists the possible RAID levels for the virtual
drive. Select RAID 6.
Strip Size: The strip size is the portion of a stripe that resides on a single drive in
the drive group. The stripe consists of the data segments that the RAID controller writes across multiple drives, not including parity drives. For example, consider a stripe that contains 64 KB of drive space and has 16 KB of data residing on each drive in the stripe. In this case, the stripe size is 64 KB, and the strip size is 16 KB. You can set the strip size to 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, or 1024 KB. A larger strip size produces higher read performance. If your computer regularly performs random read requests, choose a smaller strip size. The default setting is 64 KB.
NOTE: WebBIOS does not allow you to select 8 KB as the stripe size when you create a RAID 6 drive group with three drives.
Access Policy: Select the type of data access that is allowed for this virtual drive.
RW: Allow read/write access. This option is the default setting.
Read Only: Allow read-only access.
Blocked: Do not allow access.
Read Policy: Specify the read policy for this virtual drive.
Normal: This option disables the read ahead capability. This option is the default setting.
Ahead: This option enables read ahead capability, which allows the controller to read sequentially ahead of requested data and to store the additional data in cache memory, anticipating that the data will be needed soon. This option speeds up reads for sequential data, but there is little improvement when accessing random data.
Write Policy: Specify the write policy for this virtual drive.
WBack: In Write back mode, the controller sends a data transfer completion signal to the host when the controller cache has received all of the data in a transaction. This setting is recommended in Standard mode.
WThru: In Write through mode, the controller sends a data transfer completion signal to the host when the drive subsystem has received all of the data in a transaction. This is the default.
Write Back with BBU: Select this mode if you want the controller to use Write back mode, but the controller has no BBU or the BBU is bad. If you do not choose this option, the controller firmware automatically switches to Write through mode if it detects a bad or missing BBU.
CAUTION: LSI allows Write back mode to be used with or without a BBU. LSI
recommends that you use either a battery to protect the controller cache, or a UPS to protect the entire system. If you do not use a battery or a UPS, and a power failure occurs, you risk losing the data in the controller cache.
IO Policy: The IO policy applies to reads on a specific virtual drive. It does not
affect the read ahead cache.
Direct: In Direct I/O mode, reads are not buffered in cache memory. Data is transferred to the cache and the host concurrently. If the same data block is read again, it comes from cache memory. This option is the default setting.
Cached: In Cached I/O mode, all reads are buffered in cache memory.
Drive Policy: Specify the drive cache policy.
Enable: Enable the drive cache.
Disable: Disable the drive cache. This option is the default setting.
NoChange: Leave the current drive cache policy as is.
Disable BGI: Specify the Background Initialization (BGI) status:
No: Leave background initialization enabled, which means that a new
configuration can be initialized in the background while you use WebBIOS to do other configuration tasks. This option is the default setting.
Yes: Select Yes if you do not want to allow background initializations for configurations on this controller.
NOTE: New RAID 6 virtual drives require at least seven drives for a background initialization to start.
Select Size: Specify the size of the virtual drive in MB, GB, or TB. Usually, this
would be the full size for RAID 6 shown in the Configuration panel on the right. You can specify a smaller size if you want to create other virtual drives on the same drive group.
Update Size: Click Update Size to update the Select size field value for the selected RAID levels.
9. Click Accept to accept the changes to the virtual drive definition.
If you need to undo the changes, click Reclaim.
10. Click Next after you finish defining the virtual drives.
The Configuration Preview dialog appears, as shown in Figure 43.
Figure 43: RAID 6 Configuration Preview Dialog
11. Check the information in the configuration preview dialog.
12. If the virtual drive configuration is acceptable, click Accept to save the configuration. Otherwise, click Back to return to the previous dialogs and change the configuration.
13. If you accept the configuration, click Yes at the prompt to save the configuration.
The WebBIOS main menu appears.
4.5.2.5 Using Manual Configuration: RAID 00
A RAID 00 drive group is a spanned drive group that creates a striped set from a series of RAID 0 drive groups. It breaks up data into smaller blocks and then stripes the blocks of data to RAID 00 drive groups. The size of each block is determined by the stripe size parameter, which is 64 KB.
RAID 00 does not provide any data redundancy but does offer excellent performance. RAID 00 is ideal for applications that require high bandwidth but do not require fault tolerance.
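To make the striping behavior concrete, the following Python sketch shows one plausible round-robin layout for consecutive blocks across spans and drives. It is illustrative only and does not represent the controller firmware's actual mapping; the function, the span counts, and the block numbering are hypothetical.

    # Illustrative only: one plausible round-robin layout for RAID 00.
    # Consecutive blocks rotate across all drives in all spans before the
    # next strip on each drive is used.
    def locate_block(block_index, spans, drives_per_span):
        total_drives = spans * drives_per_span
        drive = block_index % total_drives        # which physical drive
        strip = block_index // total_drives       # which strip on that drive
        return drive // drives_per_span, drive % drives_per_span, strip

    # Hypothetical example: with 2 spans of 2 drives, block 5 lands on
    # span 0, drive 1, strip 1 under this layout.
    print(locate_block(5, 2, 2))   # (0, 1, 1)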
When you select Manual Configuration and click Next, the WebBIOS Drive Group Definition dialog appears.
You use the Drive Group Definition dialog to select drives to create drive groups.
1. Hold the Ctrl key while you select ready drives in the Drives panel on the left.
2. Click Add To Array to move the drives to a proposed drive group configuration in the Drive Groups panel on the right.
3. If you need to undo the changes, click Reclaim.
4. Click Accept DG to create a RAID 0 drive group.
An icon for the next drive group appears in the right panel.
5. Hold the Ctrl key while you select more ready drives in the Drives panel to create a second RAID 0 drive group.
6. Click Add To Array to move the drives to a second drive group configuration in the Drive Groups panel, as shown in Figure 44.
If you need to undo the changes, click Reclaim.
NOTE: RAID 00 supports a maximum of eight spans, with a maximum of 32 drives per span. (Other factors, such as the type of controller, can limit the number of drives.)
7. Choose whether to use drive encryption.
8. Click Accept DG to create a RAID 0 drive group.
Figure 44: Drive Group Definition Dialog
9. Repeat step 4 through step 6 until you have selected all the drives you want for the drive groups.
10. After you finish selecting drives for the drive groups, select each drive group, and click Accept DG for each selection.
11. Click Next.
The Span Definition dialog appears, as shown in Figure 45. This dialog shows the drive group holes that you can select to add to a span.
Figure 45: Span Definition Dialog
12. Under the Array With Free Space frame, select a drive group, and then click Add to SPAN.
The drive group you select appears in the right frame under Span.
13. Click Add to SPAN.
14. Repeat the previous two steps until you have selected all of the drive groups that you want.
15. Click Next.
The Virtual Drive Group Definition dialog appears, as shown in Figure 46. You use this dialog to select the RAID level, strip size, read policy, and other attributes for the new virtual drives.
Figure 46: Virtual Drive Group Definition dialog
16. Change the virtual drive options from the defaults listed on the dialog as needed.
Here are brief explanations of the virtual drive options:
RAID Level: The drop-down list shows the possible RAID levels for the virtual
drive. Select RAID 00.
Strip Size: The strip size is the portion of a stripe that resides on a single drive in
the drive group. The stripe consists of the data segments that the RAID controller writes across multiple drives, not including parity drives. For example, consider a stripe that contains 64 KB of drive space and has 16 KB of data residing on each drive in the stripe. In this case, the stripe size is 64 KB and the strip size is 16 KB. You can set the strip size to 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, or 1024 KB. A larger strip size produces higher read performance. If your computer regularly performs random read requests, choose a smaller strip size. The default value is 64 KB.
Access Policy: Select the type of data access that is allowed for this virtual drive.
RW: Allow read/write access.
Read Only: Allow read-only access. This option is the default.
Blocked: Do not allow access.
Read Policy: Specify the read policy for this virtual drive.
Normal: This option disables the read ahead capability. This option is the default.
Ahead: This option enables read ahead capability, which allows the controller to read sequentially ahead of requested data and to store the additional data in cache memory, anticipating that the data will be needed soon. This option speeds up reads for sequential data, but there is little improvement when accessing random data.
Write Policy: Specify the write policy for this virtual drive.
WBack: In Write back mode, the controller sends a data transfer completion signal to the host when the controller cache has received all of the data in a transaction. This setting is recommended in Standard mode.
WThru: In Write through mode, the controller sends a data transfer completion signal to the host when the drive subsystem has received all of the data in a transaction. This option is the default setting.
Write Back with BBU: Select this mode if you want the controller to use Write back mode but the controller has no BBU or the BBU is bad. If you do not choose this option, the controller firmware automatically switches to Write through mode if it detects a bad or missing BBU.
CAUTION: LSI allows Writeback mode to be used with or without a BBU. To protect the
entire system, LSI recommends that you use either a battery to protect the controller cache or a UPS. If you do not use a battery or a UPS, and there is a power failure, you risk losing the data in the controller cache.
IO Policy: The IO policy applies to reads on a specific virtual drive. The policy
does not affect the read ahead cache.
Direct: In Direct I/O mode, reads are not buffered in cache memory. Data is transferred to the cache and the host concurrently. If the same data block is read again, the block comes from cache memory. This option is the default setting.
Cached: In Cached I/O mode, all reads are buffered in cache memory.
Drive Policy: Specify the drive cache policy.
Enable: Enable the drive cache.
Disable: Disable the drive cache. This option is the default setting.
NoChange: Leave the current drive cache policy as is.
Disable BGI: Specify the Background Initialization (BGI) status.
No: Leave background initialization enabled. This means that a new
configuration can be initialized in the background while you use WebBIOS to do other configuration tasks. This setting is the default.
Yes: Select Yes if you do not want to allow background initializations for configurations on this controller.
Select Size: Specify the size of the virtual drive in MB, GB, or TB. Usually, this
would be the full size for RAID 00 shown in the Configuration Panel on the right. You can specify a smaller size if you want to create other virtual drives on the same drive group.
Update Size: Click Update Size to update the Select size field value for the selected RAID levels.
17. Click Accept to accept the changes to the virtual drive definition.
18. If you need to undo the changes, click Reclaim.
19. After you finish defining the virtual drives, click Next.
The Configuration Preview dialog appears, as shown in Figure 47.
Figure 47: RAID 00 Configuration Preview Dialog
20. Check the information in the Configuration Preview Dialog.
21. If the virtual drive configuration is acceptable, click Accept to save the configuration. Otherwise, click Cancel to end the operation and return to the WebBIOS main menu, or click Back to return to the previous dialogs and change the configuration.
22. If you accept the configuration, click Yes at the prompt to save the configuration.
The WebBIOS main menu appears.
4.5.2.6 Using Manual Configuration: RAID 10
RAID 10, a combination of RAID 1 and RAID 0, has mirrored drives. It breaks up data into smaller blocks, then stripes the blocks of data to each RAID 1 drive group. Each RAID 1 drive group then duplicates its data to its other drive. The size of each block is determined by the stripe size parameter, which is 64 KB. RAID 10 can sustain one drive failure in each drive group while maintaining data integrity.
RAID 10 provides both high data transfer rates and complete data redundancy. It works best for data storage that must have 100 percent redundancy of RAID 1 (mirrored drive groups) and that also needs the enhanced I/O performance of RAID 0 (striped drive groups); it works well for medium-sized databases or any environment that requires a higher degree of fault tolerance and moderate to medium capacity.
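As an illustrative aside (not part of any MegaRAID tool), the capacity cost of mirroring across spans can be expressed as follows; the function name and drive sizes are hypothetical and assume two drives of equal size per RAID 1 span.

    # Illustrative only: usable capacity of a RAID 10 configuration.
    # Every RAID 1 span mirrors its data, so half of the raw space is
    # usable, and each span can lose one drive without data loss.
    def raid10_usable_gb(spans, drive_size_gb):
        drives = spans * 2                  # two drives per RAID 1 span
        return (drives // 2) * drive_size_gb

    # Hypothetical example: four 300-GB drives (two mirrored spans)
    # give 600 GB usable out of 1200 GB raw.
    print(raid10_usable_gb(2, 300))   # 600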
When you select Manual Configuration and click Next, the Drive Group Definition dialog appears.
You use the Drive Group Definition dialog to select drives to create drive groups.
1. Hold the Ctrl key while selecting two ready drives in the Drives panel on the left.
2. Click Add To Array to move the drives to a proposed two-drive group configuration in the Drive Groups panel on the right.
3. If you need to undo the changes, click Reclaim.
4. Click Accept DG to create a RAID 1 drive group.
An icon for the next drive group appears in the right panel.
5. Click the icon for the next drive group to select it.
6. Hold the Ctrl key while selecting two more ready drives in the Drives panel to create a second RAID 1 drive group with two drives.
7. Click Add To Array to move the drives to a second two-drive group configuration in the Drive Groups panel, as shown in Figure 48.
If you need to undo the changes, click Reclaim.
8. Choose whether to use power saving.
9. Choose whether to use drive encryption.
NOTE: RAID 10 supports a maximum of eight spans, with a maximum of 32 drives per span. (Other factors, such as the type of controller, can limit the number of drives.) You must use an even number of drives in each RAID 10 drive group in the span.
Figure 48: Drive Group Definition dialog
10. Repeat steps 7, 8, and 9 until you have selected all the drives you want for the drive groups.
11. After you finish selecting drives for the drive groups, select each drive group, and click Accept DG for each drive group.
12. Click Next.
The Span Definition dialog appears, as shown in Figure 49. This dialog displays the drive group holes you can select to add to a span.
Figure 49: Span Definition Dialog
13. Under the Array With Free Space column, select a drive group, and click Add to SPAN.
The drive group you select displays in the right frame under the heading Span.
14. Click Add to SPAN.
Both drive groups display in the right frame under Span.
15. If there are additional drive groups with two drives each, you can add them to the virtual drive.
16. Click Next.
The Virtual Drive Definition dialog appears, as shown in Figure 50. You use this dialog to select the RAID level, strip size, read policy, and other attributes for the new virtual drives.
Figure 50: WebBIOS Virtual Drive Definition Dialog
NOTE: The WebBIOS Configuration Utility shows the maximum available capacity while
creating the RAID 10 drive group. In version 1.03 of the utility, the maximum size of the RAID 10 drive group is the sum total of the two RAID 1 drive groups. In version 1.1, the maximum size is the size of the smaller drive group multiplied by 2.
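The difference between the two utility versions can be shown with a quick calculation. The snippet below is illustrative only; the drive group sizes are hypothetical.

    # Illustrative only: maximum RAID 10 virtual drive size reported by
    # the two WebBIOS versions mentioned in the note above, for two
    # RAID 1 drive groups of unequal size.
    group_sizes_gb = [300, 450]             # hypothetical RAID 1 drive groups
    v1_03_max = sum(group_sizes_gb)         # 750 GB: sum of the two groups
    v1_1_max = min(group_sizes_gb) * 2      # 600 GB: smaller group times two
    print(v1_03_max, v1_1_max)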
17. Change the virtual drive options from the defaults listed on the dialog as needed.
Here are brief explanations of the virtual drive options:
RAID Level: The drop-down menu lists the possible RAID levels for the virtual
drive. Select RAID 10.
Strip Size: The strip size is the portion of a stripe that resides on a single drive in
the drive group. The stripe consists of the data segments that the RAID controller writes across multiple drives, not including parity drives. For example, consider a stripe that contains 64 KB of drive space and has 16 KB of data residing on each drive in the stripe. In this case, the stripe size is 64 KB and the strip size is 16 KB. You can set the strip size to 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, and 1024 KB. A larger strip size produces higher read performance. If your computer regularly performs random read requests, choose a smaller strip size. The default is 64 KB.
Access Policy: Select the type of data access that is allowed for this virtual drive.
RW: Allow read/write access.
Read Only: Allow read-only access. This option is the default setting.
Blocked: Do not allow access.
Read Policy: Specify the read policy for this virtual drive.
Normal: This option disables the read ahead capability. This option is the default
setting.
Ahead: This option enables read ahead capability, which allows the controller to read sequentially ahead of requested data and to store the additional data in cache memory, anticipating that the data will be needed soon. This option speeds up reads for sequential data, but there is little improvement when accessing random data.
Write Policy: Specify the write policy for this virtual drive.
WBack: In Write back mode, the controller sends a data transfer completion signal to the host when the controller cache has received all of the data in a transaction. This setting is recommended in Standard mode.
WThru: In Write through mode, the controller sends a data transfer completion signal to the host when the drive subsystem has received all of the data in a transaction. This is the default.
Write Back with BBU: Select this mode if you want the controller to use Write back mode but the controller has no BBU or the BBU is bad. If you do not choose this option, the controller firmware automatically switches to Write through mode if it detects a bad or missing BBU.
CAUTION: LSI allows Write back mode to be used with or without a BBU. LSI
recommends that you use either a battery to protect the controller cache, or a UPS to protect the entire system. If you do not use a battery or a UPS, and a power failure occurs, you risk losing the data in the controller cache.
IO Policy: The IO policy applies to reads on a specific virtual drive. It does not
affect the read ahead cache.
Direct: In Direct I/O mode, reads are not buffered in cache memory. Data is transferred to the cache and the host concurrently. If the same data block is read again, it comes from cache memory. This option is the default setting.
Cached: In Cached I/O mode, all reads are buffered in cache memory.
Drive Policy: Specify the drive cache policy.
Enable: Enable the drive cache.
Disable: Disable the drive cache. This option is the default setting.
NoChange: Leave the current drive cache policy as is.
Disable BGI: Specify the Background Initialization (BGI) status.
No: Leave background initialization enabled. This option means that a new
configuration can be initialized in the background while you use WebBIOS to do other configuration tasks. This option is the default setting.
Yes: Select Yes if you do not want to allow background initializations for configurations on this controller.
Select Size: Specify the size of the virtual drive in MB, GB, or TB. Usually, this
would be the full size for RAID 10 shown in the Configuration panel on the right. You can specify a smaller size if you want to create other virtual drives on the same drive group.
Update Size: Click Update Size to update the Select size field value for the
selected RAID levels.
18. Click Accept to accept the changes to the virtual drive definition.
If you need to undo the changes, click Reclaim.
19. After you finish defining the virtual drives, click Next.
The Configuration Preview dialog appears, as shown in Figure 51.
Figure 51: RAID 10 Configuration Preview Dialog
20. Check the information in the Configuration Preview.
21. If the virtual drive configuration is acceptable, click Accept to save the configuration. Otherwise, click Cancel to end the operation and return to the WebBIOS main menu, or click Back to return to the previous dialogs and change the configuration.
22. If you accept the configuration, click Yes at the prompt to save the configuration.
The WebBIOS main menu appears.
4.5.2.7 Using Manual Configuration: RAID 50
RAID 50 provides the features of both RAID 0 and RAID 5. RAID 50 uses both distributed parity and drive striping across multiple drive groups. It provides high data throughput, data redundancy, and very good performance. It is best implemented on two RAID 5 drive groups with data striped across both drive groups. Though multiple drive failures can be tolerated, only one drive failure can be tolerated in each RAID 5 level drive group.
RAID 50 is appropriate when used with data that requires high reliability, high request rates, high data transfer, and medium-to-large capacity.
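The capacity and fault-tolerance behavior described above can be restated as a simple formula. The Python sketch below is illustrative only and is not part of any MegaRAID tool; the function name and drive sizes are hypothetical, and the calculation assumes spans of equal size.

    # Illustrative only: usable capacity of a RAID 50 configuration.
    # Each RAID 5 span gives up one drive's worth of space to parity,
    # and each span can tolerate one drive failure.
    def raid50_usable_gb(spans, drives_per_span, drive_size_gb):
        assert drives_per_span >= 3, "each RAID 5 span needs at least three drives"
        return spans * (drives_per_span - 1) * drive_size_gb

    # Hypothetical example: two spans of three 300-GB drives yield
    # 1200 GB usable.
    print(raid50_usable_gb(2, 3, 300))   # 1200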
When you select Manual Configuration and click Next, the Drive Group Definition dialog appears. You use this dialog to select drives to create drive groups.
1. Hold the Ctrl key while selecting at least three ready drives in the Drives panel on the left.
2. Click Add To Array to move the drives to a proposed drive group configuration in the Drive Groups panel on the right.
If you need to undo the changes, click Reclaim.
3. Click Accept DG to create a RAID 5 drive group.
An icon for a second drive group appears in the right panel.
4. Click the icon for the second drive group to select it.
5. Hold the Ctrl key while selecting at least three more ready drives in the Drives panel to create a second drive group.
6. Click Add To Array to move the drives to a proposed drive group configuration in the Drive Groups panel on the right, as shown in Figure 52.
If you need to undo the changes, click Reclaim.
7. Choose whether to use drive encryption.
Figure 52: Drive Group Definition Dialog
8. After you finish selecting drives for the drive groups, select each drive group and click Accept DG for each drive group.
9. Click Next.
The Span Definition dialog appears, as shown in Figure 53. This dialog displays the drive group holes you can select to add to a span.