Channel: MSA Storage topics

MSA 2040: The disk group is quarantined.


Hi,

We have an MSA 2040 with one disk group quarantined and one degraded. When we check the disks, all 24 disks are online and "OK".

We have already restarted the controllers, but the degraded/quarantined status does not clear, even though all disks are online (and we did not replace any of them).

Does anyone have an idea?

# show disks
Location Serial Number Vendor Rev Description Usage Jobs Speed (kr/min) Size Sec Fmt Disk Group Pool Tier Health
---------------------------------------------------------------------------------------------------------------------------------------------------
1.1 0DGHLSHF HP HPD5 SAS LINEAR POOL 10 900.1GB 512n dg01 dg01 N/A OK
1.2 05V2SKVZ HP HPD5 SAS LINEAR POOL 10 900.1GB 512n dg01 dg01 N/A OK
1.3 05V2SPTZ HP HPD5 SAS LINEAR POOL 10 900.1GB 512n dg01 dg01 N/A OK
1.4 05V2VU3Z HP HPD5 SAS LINEAR POOL 10 900.1GB 512n dg01 dg01 N/A OK
1.5 05V2WTJZ HP HPD5 SAS LINEAR POOL 10 900.1GB 512n dg01 dg01 N/A OK
1.6 05V33T7Z HP HPD5 SAS LINEAR POOL 10 900.1GB 512n dg01 dg01 N/A OK
1.7 0DGLYEKH HP HPD5 SAS LINEAR POOL 10 900.1GB 512n dg02 dg02 N/A OK
1.8 05V30NVZ HP HPD5 SAS LINEAR POOL 10 900.1GB 512n dg02 dg02 N/A OK
1.9 W8GDJTJX HP HPDC SAS LINEAR POOL 10 900.1GB 512n dg01 dg01 N/A OK
1.10 05V2SUMZ HP HPD5 SAS LINEAR POOL 10 900.1GB 512n dg02 dg02 N/A OK
1.11 05V30N4Z HP HPD5 SAS LINEAR POOL 10 900.1GB 512n dg02 dg02 N/A OK
1.12 05V30LLZ HP HPD5 SAS LINEAR POOL 10 900.1GB 512n dg02 dg02 N/A OK
1.13 05G1V3TZ HP HPD5 SAS LINEAR POOL 10 900.1GB 512n dg01 dg01 N/A OK
1.14 05G1WK7Z HP HPD5 SAS LINEAR POOL 10 900.1GB 512n dg01 dg01 N/A OK
1.15 05G22YRZ HP HPD5 SAS LINEAR POOL 10 900.1GB 512n dg02 dg02 N/A OK
1.16 05G230BZ HP HPD5 SAS LINEAR POOL 10 900.1GB 512n dg02 dg02 N/A OK
1.17 WBN03YG3 HP HPD1 SAS LINEAR POOL 10 1800.3GB 512e dg04 dg04 N/A OK
1.18 WBN02BDH HP HPD1 SAS LINEAR POOL 10 1800.3GB 512e dg04 dg04 N/A OK
1.19 WBN03CRV HP HPD1 SAS LINEAR POOL 10 1800.3GB 512e dg04 dg04 N/A OK
1.20 WBN0338N HP HPD1 SAS LINEAR POOL 10 1800.3GB 512e dg04 dg04 N/A OK
1.21 W4615LM3 HP HPD8 SAS MDL LINEAR POOL 7 2000.3GB 512e dg03 dg03 N/A OK
1.22 W4615LKC HP HPD8 SAS MDL LINEAR POOL 7 2000.3GB 512e dg03 dg03 N/A OK
1.23 W4616HDV HP HPD8 SAS MDL LINEAR POOL 7 2000.3GB 512e dg03 dg03 N/A OK
1.24 W4616GXT HP HPD8 SAS MDL LINEAR POOL 7 2000.3GB 512e dg03 dg03 N/A OK
---------------------------------------------------------------------------------------------------------------------------------------------------
Info: * Rates may vary. This is normal behavior. (2019-05-21 17:34:42)

Success: Command completed successfully. (2019-05-21 17:34:42)

# show disk-groups
Name Size Free Class Pool Tier % of Pool Own Pref RAID Disks Spr Chk Status Jobs Job% Serial Number Spin Down SD Delay Sec Fmt
Health Reason
Action
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
dg01 5395.6GB 0B Linear dg01 N/A 100 A A RAID6 8 0 512k FTOL 00c0ff26810700007769605600000000 Disabled 0 512n
OK

dg01 1798.5GB 37.7MB Linear dg01 N/A 100 A A RAID10 4 0 512k QTOF 00c0ff25c6e90000b86d8b5a00000000 Disabled 0 512n
Fault The disk group is quarantined.
- Look for events in the event log related to quarantine (172, 485, or 590) and follow the recommended actions for those events.
dg02 5395.6GB 0B Linear dg02 N/A 100 B B RAID6 8 0 512k FTDN 00c0ff2680c00000b769605600000000 Disabled 0 512n
Degraded One disk in the RAID-6 disk group failed. Reconstruction cannot start because there is no spare disk available of the proper type and size.
- Replace the disk with one of the same type (SAS SSD, enterprise SAS, or midline SAS) and the same or greater capacity. For continued optimum I/O performance, the replacement disk should have performance that is the same as or better than the one it is replacing.
- Configure the new disk as a spare so the system can start reconstructing the vdisk.
- To prevent this problem in the future, configure one or more additional disks as spare disks.
dg03 3996.7GB 62.9MB Linear dg03 N/A 100 A A RAID6 4 0 512k FTOL 00c0ff268107000018dde15a00000000 Disabled 0 512e
OK

dg04 3597.0GB 88.0MB Linear dg04 N/A 100 B B RAID10 4 0 512k FTOL 00c0ff2680c00000b6a2755a00000000 Disabled 0 512e
OK

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Success: Command completed successfully. (2019-05-21 17:35:16)
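
For reference, a hedged sketch of the CLI checks that those health reasons point at (verify the exact syntax in your firmware's CLI Reference Guide before acting; forcing a dequarantine while a member disk is genuinely failed risks data loss, and the disk ID below is only a placeholder):

# show events last 50
(look for events 172, 485 or 590 against the quarantined group and follow their recommended actions)

# add spares 1.24
(placeholder disk ID: once a free disk of the same type and equal or greater capacity is available, designate it as a global spare so reconstruction of the degraded RAID-6 group can start)

# dequarantine vdisk dg01
(only if the event log confirms that no member disk of the quarantined group is actually missing or failed; on some firmware levels the keyword is disk-group rather than vdisk)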

 


MSA 2040 SNMP for PRTG

 
Hello,

Can you please tell me how to configure SNMP on the MSA 2040 so that I can poll information from it with PRTG?
If I go into the notification setup, SNMP is turned off. If I turn it on, I have to select a notification level and specify the address of the server the array will send its log (traps) to. But I need something different: I want PRTG to be able to read data from the array.
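
Not an official answer, but a hedged sketch of the CLI side (check the exact keywords in the CLI Reference Guide for your firmware): the trap level in the notification setup only controls outbound traps, while read access for a poller like PRTG is governed by the SNMP protocol switch and the read community.

# show protocols
(check whether SNMP is currently enabled)

# set protocols snmp enabled
(enable the SNMP agent)

# set snmp-parameters read-community public
("public" is a placeholder; this is the community string PRTG uses for GET requests)

In PRTG, an SNMP sensor would then point at the controller management IP with that community string.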

Is it possible to create multiple iSCSI targets on MSA1040 or MSA2050


Hi,

Is it possible to create multiple iSCSI targets on MSA1040 or MSA2050?

We have a solution that recommends having separate targets for our iSCSI volumes.

Regards.

Add new physical disks to existing Disk Group on MSA1050


Hi,

 

I intend to add 2 new physical disks to an MSA1050 and extend my existing Disk Group.

I found some posts on how to do it for the MSA1040, but it seems I can't do it on the MSA1050, as the only option under Modify Disk Group is to rename it.

Is it possible to add the new disks to the existing Disk Group?

 

Regards,

Afonso
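
In case it helps, a hedged sketch of the CLI options as I understand them (verify the exact syntax in the MSA 1050 CLI Reference Guide; disk IDs and names below are placeholders): a linear disk group can be grown in place, whereas a virtual disk group cannot be expanded by adding disks, and the usual approach is to add a second disk group to the same pool instead.

# expand disk-group disks 1.23-1.24 dg01
(linear disk groups only: adds the two new disks to the existing group)

# add disk-group type virtual level raid1 disks 1.23-1.24 pool a
(virtual disk groups: create a new group from the two disks and attach it to the pool that holds the existing group)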

vSphere 6.5 - Round Robin LB Settings (changing iops=1000 to iops=1)


Hi all!

I recently bought an HP MSA 2052 and am using it with an ESXi 6.5 host. I read the "HPE MSA Storage Configuration and Best Practices for VMware vSphere" documentation, and there is one section:

"Multipath
With server virtualization environments using vSphere 5.x, HPE recommends using the Round Robin (VMW_PSP_RR) failover policy with all HPE MSA Storage arrays. For optimal default system performance with HPE MSA Storage, configure the round robin load balancing selection to IOPS with a value of 1 for every LUN"

My question is: is this setting still valid for 6.5?
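
For what it's worth, a hedged sketch of how the setting is usually applied per device from the ESXi shell (the naa ID is a placeholder for one of the MSA LUNs); whether iops=1 is still the recommendation on 6.5 is exactly the question, so this only shows the mechanics:

esxcli storage nmp device set --device=naa.600c0ff000000000000000000000000000 --psp=VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.600c0ff000000000000000000000000000 --type=iops --iops=1
esxcli storage nmp device list --device=naa.600c0ff000000000000000000000000000
(the last command should show VMW_PSP_RR as the Path Selection Policy and iops=1 in the policy settings)

A SATP claim rule can make the same policy the default for all MSA LUNs instead of setting it per device.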

Replication between 2 x MSA 2052


Hi all,

We are studying the implementation of a DR site with a twin MSA 2052, since the Remote Snap licenses are included. I know that the MSA 2040 series supports both types of replication, Linear and Virtual, while the MSA 2050 series supports only Virtual, which has its caveats. The replication white paper states that with virtualized replication ALL connected ports must be accessible on the same network (while with linear this is suggested but not obligatory).

Now to my question: to avoid adding 2 x 10Gbit switches to the local site, we would connect the local MSA 2052 to 3 hosts directly with SFP+ cables, using 2 connections per host (one per MSA controller), so 6 in total for full redundancy. The last (fourth) port of each controller would be connected to a Gbit switch and router going to the remote site (so it is segmented from the other 6 ports) to "talk" to the other MSA, which has all of its ports connected to Gbit switches. Is there any possible way this configuration would work, for example by enforcing or tying the 2 ports of the local MSA to a specific replication set?

Thanks!
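
Not authoritative, but in the virtual replication model the peer connection and replication set are created against a specific remote port address, which is where the question about tying ports comes in; whether the array then also insists on reaching the other ports is the part to verify against the white paper. A hedged sketch of the CLI (IPs and names are placeholders):

# create peer-connection port-address 192.168.50.20 name to-dr-site
(192.168.50.20 stands for a host port IP on the remote MSA that is reachable through the Gbit link)

# create replication-set primary-volume Vol01 peer-connection to-dr-site name rs-vol01
(Vol01 and rs-vol01 are placeholders for the volume to replicate and the replication set name)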

HP MSA2312i (2000i G2 MSA) controller revival?


Hello,

 

I have an old MSA2312i which I want to use for my home lab, but only one controller is working.

We had a controller failure a few years ago, so we got a refurbished controller; however, this controller had a newer firmware version and Partner Firmware Update was enabled, which I did not know.

When I installed the refurbished controller, it started a firmware update of the controller that was already in the MSA. Not knowing it was doing this, I turned off the MSA, which resulted in a somewhat bricked controller.

Does anyone know if it is possible to revive this controller?

Someone on Reddit suggested using XMODEM to send the firmware file to the faulty controller. When the transfer is complete, I get the following message:

MANAGEMENT CONTROLLER SECONDARY LOADER MENU                                           

Loader primary code failure!                            

Code load:          
 First select destination:                          
  7. Primary   8. Secondary   9. Both Primary & Secondary (Default)                                                                   

 Then select protocol:                      
  1. FAST BINARY (for enginee                            
  2. KERMIT           
  3. XMODEM           

Other options:              
  5. Run controller (i.e., continue bootup from this point)                                                           
  6. MC Loader Utility Menu                           
  X. Reboot (i.e., run bootup from the beginning)                                                 

To run diagnostics, select 5 and then hold space bar.                                                     
You have chosen primary & secondary flash for code load, now select protocol.                                                                             

Ready to receive xmodem download                                
CC  
File received OK                

SC code on MC processor.                        

FILE CORRUPT!             

Hit <ENTER><ENTER> to continue.

Controller product number: AJ803A

CLI bootup from failed controller: https://pastebin.com/sG74P64Y

CLI management bootloading menu https://pastebin.com/WY7KB4TR
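
In case anyone tries the same thing: a hedged sketch of driving the XMODEM transfer from a Linux box over the serial/USB console using the lrzsz package (device path, baud rate and file name are placeholders/assumptions; I can't say whether this gets past the FILE CORRUPT check, which may mean the loader expects a different image than the full firmware bundle):

stty -F /dev/ttyUSB0 115200 raw -echo
(configure the serial port; 115200 8N1 is the usual MSA console setting)
sx firmware.bin < /dev/ttyUSB0 > /dev/ttyUSB0
(sx from lrzsz sends the file via XMODEM after option 3 is selected in the loader menu)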

Memory used just 16% from 64 GB


Dear All

Please help: we have an RX6600 server with 64 GB of memory, and that server is used for an Oracle database.
I believe the server uses more than 20 GB of memory for Oracle. The question is: why does swapinfo show memory usage of just 16%?

#swapinfo
             Kb        Kb        Kb   PCT  START/       Kb
TYPE      AVAIL      USED      FREE  USED   LIMIT  RESERVE  PRI  NAME
dev    16777216         0  16777216    0%       0        -    1  /dev/vg00/lvol2
dev    50331648         0  50331648    0%       0        -    2  /dev/vg00/lvol9
reserve       -  26058748 -26058748
memory 67075016  10902896  56172120   16%


Regards,
Dharma



MSA 1050 SAS connected to VMware 6.7 U1


Firmware is up to date on both the MSA and the hosts (2 x HP Gen10).

Here is what I get on my MSA with # show ports detail:

# show ports detail
Ports Media Target ID Status Speed(A) Ports/Conn Health
Reason Action
------------------------------------------------------------------------------
A1 SAS 500c0ff43b31c000 Up 12Gb 1 OK


Topo(C) Lanes Expected Active Lanes Disabled Lanes
-----------------------------------------------------
Direct 4 4 0

A2 SAS 500c0ff43b31c100 Up 12Gb 1 OK


Topo(C) Lanes Expected Active Lanes Disabled Lanes
-----------------------------------------------------
Direct 4 4 0

------------------------------------------------------------------------------
Ports Media Target ID Status Speed(A) Ports/Conn Health
Reason Action
------------------------------------------------------------------------------
B1 SAS 500c0ff43b31c400 Up 12Gb 1 OK

Topo(C) Lanes Expected Active Lanes Disabled Lanes
-----------------------------------------------------
Direct 4 4 0

B2 SAS 500c0ff43b31c500 Up 12Gb 1 OK


Topo(C) Lanes Expected Active Lanes Disabled Lanes
-----------------------------------------------------
Direct 4 4 0

------------------------------------------------------------------------------
Success: Command completed successfully. (2019-05-22 10:02:59)

It seems to be a common problem with MSA 1050 SAS and VMware 6.7 (U1); has anyone found a solution?

Thanks in advance.
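
To compare notes, a hedged sketch of the checks on the ESXi side (standard esxcli commands; the adapter and device names will differ on your hosts):

esxcli storage core adapter list
(confirm the SAS HBAs are present and linked)
esxcli storage core path list
(each MSA LUN should show one active path per cabled controller port; dead paths usually point at cabling or the HBA driver rather than the array)
esxcli storage nmp device list
(shows which SATP/PSP claimed the MSA LUNs)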

 

MSA 2050 Moving disks from one pool to another


I have inherited an MSA 2050 set up with two pools. One pool has significant space left (20%), while the other has run out. We are in the process of ordering a new tray of disks, but it will be a couple of months to get through the process. Is there a way to transfer disks between pools, or to just merge the two pools together?
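
Not an official answer, but as far as I know the two virtual pools (A and B, one per controller) cannot be merged. The only CLI mechanism I'm aware of is to remove a disk group from the pool that still has space, let its data drain onto the remaining groups in that pool, and then re-add the freed disks as a new disk group in the pool that has run out. A hedged sketch (group names, disk IDs and RAID level are placeholders, and the drain only works if the rest of the pool can absorb the data):

# remove disk-groups dgA02
(drains dgA02's data onto the other disk groups in its pool, then releases the disks; this can take a long time)

# add disk-group type virtual level raid6 disks 1.15-1.20 pool b
(re-uses the freed disks as a new disk group in the exhausted pool)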


MSA 2050-RAID 6, Block-level data striping with double distributed parity, over multiple enclosures


Good afternoon HPE Community,

Regarding the MSA 2050 Best Practices doc, it states the following: 

"Create disk groups across disk enclosures... HPE recommends striping disk groups across shelf enclosures to enable data integrity in the event of an enclosure failure. A disk group created with RAID 1, 10, 5, or 6 can sustain one or more disk enclosure failures without loss of data depending on RAID type. Disk group configuration should consider MSA drive sparing methods such as global and dynamic sparing."

What I can't find anywhere (CLI, SMU guides, Seismic, etc.) are specifics. The client needs to know how many enclosures they can span a RAID 6 disk group across. I can't find any limits/maximums regarding enclosures. Could one put the maximum RAID 6 disk group of 10 drives across 5 enclosures (2 drives in each)?

Thanks in advance for any guidance, Tim.

P.S. This is an opportunity, not a currently installed system.
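
For what it's worth, the CLI only takes a flat disk list, so spanning enclosures is expressed simply by picking slots from each shelf; I haven't found a documented enclosure maximum either, only the per-RAID-level disk-count limits. A hedged sketch of the 2-drives-per-enclosure layout described above (enclosure.slot IDs are placeholders, all drives assumed to be the same type):

# add disk-group type virtual level raid6 disks 1.1,1.2,2.1,2.2,3.1,3.2,4.1,4.2,5.1,5.2 pool a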

New MSA Firmware Available


HPE MSA 1050, 2050, and 2052 Storage Systems - Storage Controller Firmware VE270P002-02 and VL270P002-02 are Available

HPE MSA 1040, 2040, and 2042 Storage Systems - Storage Controller Firmware GL225P002-02 is Available

You can find the release notes and download the firmware from:
https://www.hpe.com/storage/MSAFirmware

Cheers!
/Kipp

Steps for connecting a newly purchased MSA 2050 disk enclosure to an existing MSA 2040 controller enclosure


Dear all,

We purchased a new MSA 2050 disk enclosure to connect to an existing MSA 2040 controller enclosure. We wish to connect it online without any downtime, but I want to confirm that the steps I have written down are correct. The following is my draft:

1. Rack the disk enclosure.

2. Make sure all disks for the new enclosure are inserted.

3. Power on the disk enclosure.

4. Wait until the power-on cycle for the enclosure is completed (about 10 mins).

5. Plug the SAS cables into the existing MSA 2040 controllers one by one.

6. Use the GUI to confirm all disks in the new enclosure are recognized correctly.

Please help to comment, thanks in advance.

Thomas
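
As a check for step 6 (or from the CLI instead of the GUI), a hedged sketch of the commands to confirm the new shelf is seen:

# show enclosures
(the new enclosure should be listed with a healthy status)

# show disks
(all drives in the new enclosure should appear, typically with usage AVAIL until they are added to disk groups)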


Info for MSA 1050 monitoring


Hello

I've uninstalled IRS for remote monitoring, and I don't know how I can manage and connect an MSA 1050 iSCSI for remote support. (OneView is installed.)

Thanks for your support.

Cristian Boschiroli

HPE P2000 G3 firmware problem


Hi

We have a P2000 G3 that has a 'broken' controller, and I need some help and advice. The firmware version is not listed for this product. Here is the healthy controller and, below it, the 'broken' one:

(Attached screenshots: healthy.jpg, broken.jpg)

The storage controller code version is T252R10- and -02 for the healthy one, which I cannot find listed anywhere. Can I upgrade both to the current firmware, or should I take a stepped approach?

I appreciate any help and advice you can give, I have inherited this setup.

Thanks

Pete
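
Before deciding on a stepped upgrade, it may help to capture the full version set from each controller; a hedged sketch of the CLI command on the P2000 G3 (field names vary slightly by firmware):

# show versions
(lists the bundle and component code versions, which can then be compared against the release notes of the target firmware)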

P2000 Failed Disks


Hi guys, I have a P2000 that had two failed disks and one leftover. I replaced the failed disk, but then the vdisk went offline and was quarantined. The reconstruction is ongoing, but then another disk was flagged as failed, and another as leftover. That now brings me to two leftovers and one failed, with reconstruction ongoing. Attached are the screenshots.

I can't afford to lose the data. Has anyone experienced this? (Attached screenshots: 1.PNG, 2.PNG)

Call Home functionality HPE MSA 1050/205x


Hi all,

Is HPE considering adding a "Call Home/auto-support" feature in a new firmware release of the MSA 1050/205x (as in Nimble)?

Also, regarding what is valid for OneView and the MSA: does it work with MSA iSCSI models (or is there a workaround)?

Thanks,

MSA 2040 with D2700 SAS Expansion Storage Provision


Hello,

We recently installed a D2700 Expansion shelf (25 drives @ 600GB) which is attached to our MSA 2040.

When provisioning the storage for the D2700, is it best to split the disks up between Pool A and Pool B? I find that when Pool A is running low on space I am unable to borrow space from Pool B; I am assuming this is by design.

The MSA 2040 head unit was configured using the HP best practice guide for VMware environments, with Disk Group 1A and Disk Group 1B.

Now, with the expansion attached, am I basically going to be creating new disk groups (Disk Group 2A and Disk Group 2B) and associating them with their respective pools, A and B?

Any help in provisioning the expansion would be greatly appreciated.
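
If it helps, a hedged sketch of what creating a Disk Group 2A and 2B from the CLI might look like, mirroring the layout described above (disk IDs, RAID level and the split of the 25 drives are placeholders/assumptions; the D2700 is assumed to enumerate as enclosure 2):

# add disk-group type virtual level raid6 disks 2.1-2.12 pool a
# add disk-group type virtual level raid6 disks 2.13-2.24 pool b
# add spares 2.25
(splits the shelf roughly in half between the pools and keeps the 25th drive as a global spare)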

 
