Channel: MSA Storage topics

P2000 G3 Increase Cooling


Hi,

Does anyone know how to increase the cooling, or override the default fan speeds, via the GUI or CLI?

The memory controller Temp A / B sensors are around 70-74 degrees and producing warnings.

The fans are consistently spinning at 380-ish RPM and are capable of running much higher.

All my fans are working and no faults are present; it's just an overworked unit.

Left/right PSU temps are 39 and the onboard temps are around 53-55, so it seems like the controller cards are cooking.

I'm happy to increase the cooling to max as it's sitting in the corner of a datacentre in the dark. I have no warranty left, so I'm happy to apply any hacks until the unit dies.
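For anyone who wants to sanity-check the same readings over SSH first, these are standard P2000/MSA CLI commands (as far as I know there is no supported command to override fan speed, so anything beyond monitoring is unsupported territory):

# show sensor-status     (lists every temperature/voltage sensor with its status)
# show fans              (fan health and current RPM; present on newer firmware)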

Any help would be amazing.

 


SAN MSA2040


Hi,

How do I configure the MSA 2040 SMU to track unsuccessful login attempts and send them as email alerts? Please let me know. Thank you.
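For reference, the email side can be set from the CLI as well as the SMU. A minimal sketch, assuming the v3 CLI and illustrative server/addresses; failed logins are raised as events, so a notification level that includes warnings should pick them up (verify the keywords with 'help set email-parameters'):

# set email-parameters server 192.0.2.25 domain example.com sender msa2040 email-list storage-admin@example.com notification-level warn
# show email-parameters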


PROVISIONING OF LUN FROM HP STORAGE TO ORACLE SERVERS


Hi ,

I have two environments in my DC: one is HP and the other is Oracle. Each has its own separate storage setup, and both are functional. My question is: is it possible for me to provision a LUN for my Oracle servers in my Oracle environment from the HP storage in my HP environment? If yes, what do I need to achieve that, and how best can I achieve it?
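For context, this is generally possible if the Oracle servers can reach the HP array over a shared FC or iSCSI fabric: zone the Oracle servers' HBAs to the HP array, then map a volume to their initiators on the array. A rough sketch of the host-side check, assuming the Oracle servers run Linux with lsscsi and multipath-tools installed:

# After zoning and mapping, rescan the FC HBAs on each Oracle server:
for h in /sys/class/scsi_host/host*; do echo "- - -" > $h/scan; done
# Confirm the new LUN and its paths are visible:
lsscsi
multipath -ll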

Thanks

Taiwo

MSA 2040 - VMWARE 6.7 - ReadCache


Hi There

Just in case anyone else has this trouble: I added a read cache (SSD disk) to Pool A on my MSA 2040. Pool A held the datastore for my VMware 6.7 environment and had been up and running for months. I decided to add the read cache to improve read performance (as best practice suggests).

As a test, I rebooted one of the ESXi hosts after doing this. The datastore would no longer mount. The vmkernel log showed the error "Invalid physDiskBlockSize 512".

After much faffing about, I decided to remove the read cache, and the datastore mounted correctly.
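If anyone wants to confirm they are hitting the same thing, the error shows up at mount time in the vmkernel log, and ESXi can list the block sizes each device reports (standard ESXi shell commands; the capacity namespace may vary slightly by ESXi build):

# On the ESXi host:
grep physDiskBlockSize /var/log/vmkernel.log
# Show the logical/physical block sizes reported per device:
esxcli storage core device capacity list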

FYI.

MSA 1050 SAS connected to VMware 6.7 U1


VMware was installed using the latest HPE customized ISO.

Having an issue with the MSA 1050 and vSphere.

After configuring the MSA I was able to create datastores in vCenter.

I created a new virtual machine and patched it with no issues.

When deploying a template, however, it hangs and never completes.

Storage vMotion between datastores hosted on the MSA has the same issue.

I am able to deploy the template to an iSCSI datastore on the old SAN and then Storage vMotion it to a datastore on the MSA.

Once on a datastore hosted by the MSA, no clone or Storage vMotion operations succeed.

There are no errors; they just hang. A small VM with only a 100GB drive sat for 14 hours until I canceled it.

Going to the iSCSI store took 4 minutes.
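One guess worth checking while the case is open, purely from the symptom pattern (copies within the array hang, copies to or from another array work): clones and Storage vMotions inside a single array are exactly the operations ESXi offloads to the array via VAAI XCOPY, so the offload path may be what is stuck. Standard ESXi commands:

# Show which VAAI primitives are active on the MSA devices:
esxcli storage core device vaai status get
# As a test only: disable the XCOPY primitive host-wide, then retry a clone
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0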

I have a case open with VMware, but thought I'd ping here as well.

Been at a standstill for several days.

MSA 2052 advice for Hyper-V 2016 cluster


Hi All,

We have the following hardware for our virtualization:

2x HPE ProLiant DL380 Gen10
2x Cisco MDS 9148S
1x MSA 2052 (8x 800GB SSD and 16x 600GB 15k SAS), two controllers

The current config of the MSA is:
The MSA is connected via FC to the Cisco fabric and then to the HBAs of the ProLiants.
There are 4 disk pools:

- Controller A: 4x SSD RAID 10 and 8x 15k SAS RAID 10
- Controller B: 4x SSD RAID 10 and 8x 15k SAS RAID 10
- There are 10 CSVs configured:
2x SSD and 3x SAS per storage pool

QUESTION:
- How do we get the most performance out of the MSA for our VMs and file servers?

We have tested and we saw that the SAS pool was faster than the SSD pool.
Is this normal? (See the DiskSpd note after the questions below.)

- How about failover?
- Should I balance the volumes across both controllers?
- Is there any best practice for the CSV (volume) size? We use 500 GB for our SSD volumes and 750 GB for the SAS volumes.
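On the SSD-vs-SAS result: it helps to pin down the exact workload, because a largely sequential or low-queue-depth test can easily favour the 15k SAS RAID 10 set over a 4-disk SSD group. A repeatable baseline with Microsoft's DiskSpd, run against a file on each CSV in turn (all parameters are illustrative, not a recommendation):

DiskSpd.exe -c10G -d120 -r -w30 -t4 -o16 -b8K -Sh -L C:\ClusterStorage\Volume1\testfile.dat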

MSA 2050 Change Pool Settings - Overcommit Flag cannot be disabled


Hi,

If I try Change Pool Settings and clear the flag "Enable overcommitment of pool", it is grayed out and I cannot remove it.

Running VL100P001 with 2 pools, A and B, one disk group assigned to each pool and one LUN in each pool/group.

Is there any bug or limitation that would explain why overcommit cannot be disabled?

Our current usage is well below the allocated storage.
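For reference, the same setting is exposed in the CLI, which sometimes returns a clearer message than a grayed-out checkbox. A sketch assuming the MSA 2050 v3 CLI; confirm the exact keywords with 'help set pool' before running:

# show pools
# set pool overcommit disabled A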

Thank you


MSA 2040 - extending Volume used as datastore in ESXi 6.7


Hi All,

Recently I extended the size of a volume on the MSA 2040.  The volume is used as a DataStore in my ESXi 6.7 environment.  Using vCenter 6.7.

I assumed that this increase would automatically be seen in VMware. However, it was not.

On closer inspection, using the vSphere web client, I could see that the "device" displayed the correct new increased size, but the datastore on this device was still the old size.

Eventually I got this solved using the steps in this post: https://kb.vmware.com/s/article/2046610 . Very tricky steps, and if you get them wrong they could have disastrous consequences.
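For anyone following the same KB, the gist of the host-side steps was as follows. The identifiers are placeholders; take every value from 'partedUtil getptbl' on your own device and read the KB first, because a wrong sector number here can destroy the datastore:

# Rescan so ESXi sees the larger device:
esxcli storage core adapter rescan --all
# Resize the VMFS partition to the new end sector (numbers from partedUtil getptbl):
partedUtil resize "/vmfs/devices/disks/naa.xxxx" 1 <startSector> <newEndSector>
# Grow the VMFS filesystem into the resized partition (same device:partition twice):
vmkfstools --growfs "/vmfs/devices/disks/naa.xxxx:1" "/vmfs/devices/disks/naa.xxxx:1"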

However, I come from a NetApp environment (an enterprise NetApp environment), and when a LUN was extended on the storage side the new size automatically appeared in VMware (and that was vSphere 5.5!).

Is this (having to manually extend the datastore using the ESXi command line) a limitation of the MSA 2040, or did I get unlucky?

Ta.


Disk Performance - VMWARE 6.7, MSA2040 SAS and software iSCSI


Hi There,

Is anyone else out there using an HPE c7000 and MSA 2040 in a software iSCSI implementation? I have two HPE 5700 switches providing the storage network, with one 10Gb connection (plus a standby) to the storage using SFP DAC cables. I'm using VMware 6.7. The VMware datastore is on SAS 10k disks in a RAID 6 config. The blades are BL460c Gen9 (power is set to MAX in the BIOS).

I've built a test VM: Windows 2012 server, 1 CPU, 4GB RAM, one 40GB disk.

I then run this test.

DiskSpd.exe -c15G -d300 -r -w40 -t8 -o32 -b64K -Sh -L c:\temp\testfile.dat

What performance are you getting? I'm not sure what is good for an MSA 2040.

While running the test above I also run Performance Monitor and specifically look at the counter "Average Disk sec/Transfer". The average is always around 0.262, when Microsoft says it should be around 0.005. That's what's really bugging me.
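One note on interpreting that counter: Microsoft's ~0.005 figure assumes a lightly queued disk, but this test keeps 8 threads x 32 outstanding = 256 I/Os in flight, and by Little's Law average latency is roughly outstanding I/Os divided by IOPS. Working backwards from the measurement:

outstanding I/Os:   8 threads x 32 per thread = 256
measured latency:   0.262 s
implied throughput: 256 / 0.262 = ~977 IOPS

Roughly 1,000 IOPS of 64K mixed read/write from 10k SAS in RAID 6 is not obviously broken; a fairer comparison against the 5 ms guidance would be a low-queue-depth run (e.g. -t1 -o1).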

I'd appreciate it if you could run the test (as close to the config above as possible would be great).

MSA 2040 - Understanding Volume Tiering


Hi There

I've been reading the best practice guide for the 2040 and I'm stumped by volume tiering.

The document seems to be suggesting that "busy" data will AUTOMATICALLY move to volumes with a higher tier, which as far as I can see is physically impossible.

Quote from the best practice guide: "As data is later accessed by the host application, data will be moved to the most appropriate tier".

Imagine there are two volumes: one is "No Affinity" and one is "Performance". These two volumes are presented to a Windows 2016 server and mounted as the D: and E: drives. Imagine that the data on the D: drive started to get very busy. Is the best practice document trying to tell me that the data will be moved from the D: drive to the E: drive? That seems impossible to me, and it's actually something you'd never want to happen.

ta


HP MSA 2040 - Separating VMDK storage from NAS (iSCSI) storage


Hi,

I present storage from my 2040 to my ESXi hosts, which mount it as a datastore.

I then build VMs on that datastore. The VMs' local disks are VMDKs on that datastore (for example, the C: drive is a VMDK).

However, some of my VMs require their own iSCSI storage (for example, the S: drive is an iSCSI drive).

I was thinking of doing the following: Pool A is reserved for VMDK storage (SSD disks).

Pool B is reserved for the NAS (iSCSI) storage for the VMs (SSD disks).

Is that a good way to go about it?

ta.


HPE MSA 2052 remote replication


Hello,

I am a newbie to replication technologies with HPE MSA 2052s. We are replacing 2x P2000 arrays in remote geographical locations linked over a WAN connection. I am testing this setup at present with 2 arrays in the same cabinet and on the same Ethernet switch.

Migrating the existing workloads from the P2000 is easy enough. The primary site will be running 2 ESXi hosts with the HP MSA 2052 direct-attached through FC. The VMs on the hosts will be approx 11TB in size in total, generating approx 10GB of change daily. We have approx 21TB of usable space, with 2x 800GB SSDs used as a performance tier.

The intention is to use 2x iSCSI ports for remote replication between the primary and secondary sites. We have been advised to use virtual replication, and since we do not want to affect the online day we will use scheduled replications 3 times a day.

A number of parameters are confusing me in setting up the replication set.

Do I want to choose Discard or Queue Latest? (Queue latest seems to be the default).

Do I want a secondary snapshot history? If so, do I need a snapshot retention count greater than 1? Do I want to keep a primary snapshot history? And what should the retention priority be? I am guessing Never Delete would not be a good option if I want to preserve disk space!

Primarily the remote replication will be used for DR site recovery, not point-in-time recovery of VMs.

Second-to-last question: I am guessing that the "Enable overcommitment of pool" setting should be set to ON, otherwise internal snapshots may grow and we run out of space.

Last question: in testing scheduled replication, I continually get these errors.

Scheduler: The scheduler was unable to complete the replication. - Command is taking longer to complete than anticipated. (task: du-msa-01, task type: Replicate)

EVENT ID:#A2238

EVENT CODE:362

EVENT SEVERITY:Error

This normally happens about 20 seconds after the replication has completed successfully.

Firmware version is VL270R001-01
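For digging into event 362, the detailed event text often says more than the scheduler summary. Standard MSA CLI commands:

# show events detail
# show replication-sets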

Many Thanks

Allan Clark

Creating 2 volumes on an MSA 2050


Hi guys,

I have an MSA 2050 and I connected 2 ports to my server for backup and redundancy. When I create a volume and map it to the initiator on my Windows server, I see 2 volumes created: one online and one offline. So where is the problem?

As you can see, I created 2 volumes, one for SQL and another for the DB, but 4 volumes show up. (Screenshot: Untitled.jpg)
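For what it's worth, two cables from the same host to the array without multipathing would look exactly like this: Windows enumerates the same LUN once per path, so 2 volumes show up as 4, with the duplicate paths offline. A guess at the Windows-side fix, assuming the built-in MPIO feature is enabled (mpclaim requires a reboot):

rem Claim all compatible storage devices for MPIO:
mpclaim -r -i -a ""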

P2000 MSA G3 Controller B network degraded (but still seems to be working)


Hi all,

I've got an odd issue on an old P2000 G3 MSA storage system. It has 2 controllers, both of which are working fine, and a number of expansion units (again, all working as expected). I noted that it was reporting controller B's network management port as 'degraded'. Sure enough, 'show system' reports the system as degraded and I can see a red X against the controller B management port. 'show frus', however, says all subcomponents are in an OK state; 'show controllers' is also fine; 'show cache-parameters' is OK. I can also connect to the system over SSH on both controllers. What I can't seem to do is connect to the web UI on controller B. I've tried restarting the MC to see if that helps. No luck. I've also run ping tests from both controllers.

Controller A can ping its gateway with no issues. It can also ping controller B. Controller B, however, reports that it cannot, or can only intermittently, ping the default gateway and its partner controller A.
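For completeness, the checks and the reset were done from the SSH session with standard P2000 CLI commands:

# show network-parameters     (IP config and health of each controller's management port)
# restart mc b                (restarts only controller B's management controller)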

 

So it's all a bit odd really. If the controller had failed I'd expect no access at all, yet I can SSH onto it.

 

Both controllers are running TS230P006, which I am guessing is pretty old; however, we had not seen any issues until recently.

My next plan is probably to fail everything over to controller A and then re-seat controller B to see if the issue clears. If not, I guess the next step is a new controller, but given the issues I've had in the past with being sent controllers with much newer firmware, and the controllers then refusing to talk to each other, that will be a last resort.

If anyone has seen this type of issue before or has any suggestions it would be most welcome. 

thanks

Ad.

 

New Failover/DR Setup with MSA


Hi guys, I have a client that has bought 4 DL380 servers (2 PR and 2 DR) as well as 2 MSA 2050s (1 PR and 1 DR).

The environment is purely Windows, hence Hyper-V will be used for virtualization. Any advice on how to achieve HA/failover across the two sites?

Is MSA replication and clustering able to achieve this?


HPE Policy on End of Life gear


Hi There

If HPE releases a new product, do they ensure that the new product is compatible with products that have recently gone End of Life?

For example, from what I can read, the 900GB SAS 10k 12Gb disk, model number EG0900JFCKB, is compatible with G1-G7 servers. Does that mean it's not compatible with Gen9 servers? Does HPE release a new disk for the Gen8-Gen9 servers? What happens if someone (a friend of mine!) has bought a disk without checking the model number and starts to notice performance issues that he cannot explain?

What about old storage arrays' compatibility with disks? The disk fits in the slot, but does that mean the disk and the storage are compatible?

SPOCK does not provide this level of detail about compatibility, and the release notes do not seem to provide it either.

HPE MSA 1040 SAN Storage Mapping to Oracle Linux


Hello,

I have an MSA 1040 SAN here and I need to know how to map its LUNs or volumes to Oracle Linux 6.9 servers.

Is it done only from the management console, in the Mapping section, as for Windows servers, or does Linux require another method to map the volume?

I need to mount these LUNs or volumes on the system permanently, so that they remain available to the applications and the system even after a server restart.

The SAN is connected directly to the servers through QLogic HBA cards, without any SAN switches.
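For the persistence part, a minimal sketch of the host side on Oracle Linux 6.9, assuming the volume has already been mapped to the server's HBA initiators in the SMU (device name and mount point are illustrative):

# Rescan the QLogic HBAs so the newly mapped LUN shows up:
for h in /sys/class/scsi_host/host*; do echo "- - -" > $h/scan; done
# Identify the new device, create a filesystem, and mount it by UUID so it survives reboots:
lsscsi
mkfs.ext4 /dev/sdb
blkid /dev/sdb                      # note the UUID
mkdir -p /u01/oradata
echo "UUID=<uuid-from-blkid> /u01/oradata ext4 defaults 0 0" >> /etc/fstab
mount -a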

Any help, please?

How do I get my serial # - MSA 2050 SAS


Without physically being onsite, obviously.

I have connected over SSH using PuTTY and tried show system:

# show system
System Information
------------------
System Name: Uninitialized Name
System Contact: Uninitialized Contact
System Location: Uninitialized Location
System Information: Uninitialized Info
Midplane Serial Number: Confidential Info Erased
Vendor Name: HPE
Product ID: MSA 2050 SAS
Product Brand: MSA Storage
SCSI Vendor ID: HPE
SCSI Product ID: MSA 2050 SAS
Enclosure Count: 1
Health: OK
Health Reason:
Other MC Status: Operational
PFU Status: Idle
Supported Locales: English (English), Spanish (español), French (français), German (Deutsch), Italian (italiano), Japanese (日本語), Korean (한국어), Dutch (Nederlands), Chinese-Simplified (简体中文), Chinese-Traditional (繁體中文)


Success: Command completed successfully. (2019-03-17 17:05:25)

Apparently this is the wrong serial #.
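If the midplane serial isn't the one support is asking for, the per-component serials (including the chassis FRU) can be listed with these standard commands:

# show frus
# show enclosures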

 

Please help!

MSA 2050 Ethernet SFP module not supported


Hi

I have a big problem. I have an MSA 2050 with firmware VL270R001-01. I have just installed two Ethernet SFP modules, ref: C8S75B-C (1Gb RJ45 iSCSI SFP 4-pack). Unfortunately the MSA does not recognize the module ("SFP not supported").

I have tried both FC and iSCSI modes.

How can I fix this problem, please?
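For diagnosis, the controller's view of each installed SFP is visible from the CLI (standard commands; 'show ports detail' reports whether each module is present and supported):

# show ports detail
# show events detail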

 

Thanks in advance! urielexpedit@gmail.com

Release of controller firmware GL225P001 for HPE MSA 1040, HPE MSA 2040, and HPE MSA 2042


Howdy All!

Just wanted to announce the release of controller firmware GL225P001 for HPE MSA 1040, HPE MSA 2040, and HPE MSA 2042.  This firmware was released last week and is the most current version. 

You can find the download link and release notes link on www.hpe.com/storage/MSAFirmware

Cheers!
/Kipp
