Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media used to retain digital data. In addition to local storage devices like CD and DVD drives, hard drives, and flash drives, solid-state drives can hold enormous amounts of data in a very small device. Cloud services and other forms of remote storage also extend a device's capacity and its ability to access more data without building additional storage into the device itself.


I have an HP DL320e that won't get past POST. After a recent planned shutdown, it just won't restart. If you push the power button, the system attempts to start, then stops right away, and the red health light blinks quite fast.

We read some support articles on HPE's website and attempted some fixes, such as swapping out the power supply, swapping the power cables, trying a different power outlet, etc., so the power supply doesn't seem to be the issue.

We have another server that is exactly the same, with the same RAID setup, but it is performing a non-critical task. The faulty server has a RAID 1 setup with hard drives in Bays 2 and 3; the working identical server has a RAID 1 setup with hard drives in Bays 1 and 2.

My question is: can I just swap the hard drives between the two servers, since they have the exact same RAID 1 setup and are the exact same server? I don't think it's that easy, but I wanted to check before attempting my next step of swapping out the motherboard.

We have to serve 50,000,000,000 requests per day. Our app can handle that, but we're not sure our database skills are there to pick this up. We have a database server with 256 GB of RAM and a 512 GB SSD RAID, running Percona. Some of the data needs to stay in the database for 30 days: the transaction table, where all requests are stored. What approach do we need to scale this horizontally? Each endpoint request runs about 4 queries...
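As a quick sanity check on those numbers (simple daily averages only; real traffic peaks will be well above this):

```python
# Rough sizing from the figures in the question above.
requests_per_day = 50_000_000_000
queries_per_request = 4          # "each endpoint request runs about 4 queries"

seconds_per_day = 24 * 60 * 60   # 86,400

avg_rps = requests_per_day / seconds_per_day      # average requests/second
avg_qps = avg_rps * queries_per_request           # average queries/second

print(f"average requests/sec: {avg_rps:,.0f}")    # ~578,704
print(f"average queries/sec:  {avg_qps:,.0f}")    # ~2,314,815
```

Sustaining millions of queries per second on average is well beyond a single 256 GB / 512 GB node, which is why the question about horizontal scaling (sharding, read replicas, time-partitioning the 30-day transaction table) is the right one to ask.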
Can anyone recommend a good place to get some cloud storage? Maybe a TB or two for server backups? FTP or SFTP would be nice. Otherwise I need some way to map a drive letter.
Hello Experts,

I need to manage a partition in Proxmox.

I have Proxmox working on a 500 GB SD drive.

I added a 12 TB RAID 1 disk volume for storage. I need to manage this RAID 1 volume, partition the disk, and make it available to the VMs.

I am reading about a BIOS change for the VMs and also LVM management.

Not able to make sense of my Google findings:

In order to properly emulate a computer, QEMU needs to use a firmware. By default QEMU uses SeaBIOS for this, which is an open-source, x86 BIOS implementation. SeaBIOS is a good choice for most standard setups.

There are, however, some scenarios in which a BIOS is not a good firmware to boot from, e.g. if you want to do VGA passthrough. [6] In such cases, you should rather use OVMF, which is an open-source UEFI implementation. [7]

If you want to use OVMF, there are several things to consider:

In order to save things like the boot order, there needs to be an EFI Disk. This disk will be included in backups and snapshots, and there can only be one.

You can create such a disk with the following command:

qm set <vmid> -efidisk0 <storage>:1,format=<format>
Where <storage> is the storage where you want to have the disk, and <format> is a format which the storage supports. Alternatively, you can create such a disk through the web interface with Add → EFI Disk in the hardware section of a VM.
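For instance, filling in the placeholders (the VM ID, storage name, and format below are illustrative, not from the quoted documentation), the command might look like:

```shell
# Create an EFI disk for VM 100 on a storage named 'local-lvm',
# in raw format. Substitute your own VM ID, storage name, and a
# format your storage actually supports.
qm set 100 -efidisk0 local-lvm:1,format=raw
```

This only records where the EFI vars disk lives; the VM's firmware must also be switched from SeaBIOS to OVMF for the disk to be used.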

When using OVMF with a …
Hi, I'm trying to replace a failed disk on a Dell PERC S110. When I press Insert to assign the global hot spare, as I have done in the past, I get the following message/error.

Not selectable: Disk is in use by a virtual disk

The two hard drives that are still OK are 500 GB Dell drives; I tried a 1 TB and a 2 TB Toshiba, same error.

Thanks all
Hi, I'm looking for a decent RAID card for around 200 or so. I wanted a PERC controller, but I'm not stuck on it. I want to run RAID 5 with 3 drives and a 4th as a spare. I just want it to take SATA drives; I don't need the SAS connections, and I really don't want a battery, just something better than the onboard connections and software RAID.

Thanks a ton all.
Hi, we have an HP ML350 G9 server with a RAID 5 array. I am now adding 2 new SSD drives and want to build an additional RAID 1 array. Does anyone have steps for this so I don't screw it up?

How do I figure out my primary fibre port in NetApp?

I am using ONTAP to do it.
Crypto protection: best solutions? Tape, disk-based backup, on-site, off-site, vendor-specific, and the reasoning behind your answer.
Hi All,

I am facing an issue while configuring soft zoning for a host and a 3PAR.
When I activate the zoneset on Switch1, the Switch2 zones deactivate and show 0 active paths on the storage, and vice versa.
This happens only for the 3PAR; zones for other storage arrays remain intact on those switches.

Hi. I have a question regarding a predictive failure on a RAID 1 drive in a Dell PowerEdge T620. The specifics: the T620 hosts 2 server VMs and has a RAID 1 and a RAID 5 array. A drive in the RAID 1 array indicated a predictive failure, and I'm wondering about the viability of adding a new drive (NOT the same model, as this server is out of warranty): a 600GB drive replacing this failing 300GB drive. Is it possible to plug in the new drive, initialize it via the BIOS, then set it up as a hot spare so that the array rebuilds itself using the new drive? If this is possible, do I have to manually use Dell OpenManage to tell the old drive to turn off (it seems to have options for that), or will it automatically rebuild the array upon detecting the failing drive? Thanks.

The main concern is if using a different drive is possible. Is using a larger one okay? I want to hopefully minimize downtime.
Nimble vs. Tegile storage: where would you lean now? I want to hear opinions.
Smaller scale for now; less than 100 TB of space for the data center.

Can I increase the disk space while the VM is powered up? If so, why am I encountering this error? See attachment. Thanks.
As an IT manager, I am trying to establish a mechanism/system to monitor backups, servers, network, storage, etc.

This does not have to be a complex or automated SNMP-based system; a simple manual data-entry process by which the administrative staff or sysadmins update it on a daily/weekly basis would do.

The need is to produce a dashboard for quick identification of ongoing issues, errors, thresholds, etc.

Is there a low-cost or open-source system or template, cloud-based or on-prem, which I could use?


We have a customer in distress with a cluster-in-a-box from Fujitsu. He has 2 nodes and shared storage inside the same box; the disks are SAS-connected to the 2 nodes.

The system was online for a few months without any problems.

Now, since today, the nodes cannot connect to the shared storage. We have a disk pool of 20 disks. All disks are online. We have 2 volumes on the disk pool.

Volume 1 is online, volume 2 is not.

When we try to bring volume 2 online in the Disk Manager of Server 2016, we get an access-denied error, but after googling, this seems to be normal: the disk needs to be set online in the cluster manager.

When we try to set the disk online in the cluster manager, we also receive an error, and 2 critical events are created.

Cluster resource 'Clusterschijf 1' of type 'Physical Disk' in clustered role 'db001812-8ac1-43c7-87c5-783f1fb36254' failed. The error code was '0xf' ('The system cannot find the drive specified.').

Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it. Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet.

Cluster Shared Volume 'Volume1' ('Clusterschijf 1') is no longer accessible from this cluster node because of error '(1460)'. Please troubleshoot this node's connectivity to the storage device and network…
Steps in the ONTAP 9 simulator to set up NDMP:
how to start NDMP on the storage and add it as an NDMP server in Backup Exec 12.5.
Dell PowerVault MD1200 with 8 drives in RAID 5 (2 TB each; two 4-drive arrays).
There were 3 arrays, but two drives crashed simultaneously, so I pulled all 4 out. About 2 weeks later the whole system died. When you unplug it (and reboot the server too) and plug it back in, it starts to boot and recognizes all the drives, then the fans in both power supplies go FULL blast and get REALLY loud.
The virtual drives are recognized by the OS for about 10 or 20 minutes until the system dies again.
The only thing I can find is that on the back, the EMM light is blinking amber twice at a time, but I'm not sure that's it.
I've tried booting up with either one power supply or the other plugged in, same result.

I have a Windows 2012 Datacenter server.

It has an iSCSI-connected SAN for disks (not the C: drive, which is DAS).

One of the volumes on the SAN is configured for 6TB; however, the drive on the server is only capable of seeing 2TB and, of course, is running low on space.

The drive that is only seeing 2TB is a Basic, not Dynamic, disk in Disk Management.

This drive on the server has a good amount of data on it, so I'm apprehensive about doing anything crazy.

I need this disk to be the 6TB the volume is configured for.

How do I do this without any fear of losing data?

I do see the ability to switch from Basic to Dynamic in Disk Management; however, I'm concerned about doing so without data loss.

Once I get past the Basic-to-Dynamic switch, how do we extend to increase the size?
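One likely explanation for the 2TB ceiling, assuming the volume was initialized as an MBR disk (common for disks set up as Basic): MBR records partition start and length as 32-bit sector counts, so with 512-byte sectors the addressable maximum works out to exactly 2 TiB. The arithmetic:

```python
# Why an MBR-partitioned disk tops out near 2 TB:
# MBR stores the partition's start and length as 32-bit sector counts.
sector_size = 512            # bytes, the classic sector size
max_sectors = 2**32          # largest count a 32-bit field can hold

max_bytes = sector_size * max_sectors
print(max_bytes)                   # 2199023255552 bytes
print(max_bytes // 1024**4, "TiB") # 2 TiB
```

If that is the situation here, the limit is the MBR partition style rather than Basic vs. Dynamic (GPT disks do not have this 2 TiB cap), so it is worth checking the disk's partition style in Disk Management before converting anything.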
In my organization there are 24,000 mailboxes on an Exchange 2010 platform under a private-cloud hosted solution. For the last 5 years we have been archiving all inbound and outbound mail in separate storage called compliance archival, but now we have decided to move to Exchange Online (O365). I would like to know the most cost-effective way, with zero probability of data loss, to securely preserve said data (compliance-archived data) for the next 5 or 10 years. I also would like to make provision to export data from the compliance archival, whenever it is required or demanded by an end user, back to their mailbox hosted in the public cloud (O365) with minimum time lapse. Please suggest the best ways I could achieve manageability of the compliance-archival data without compromising security or risking data loss during import/export.

How can I configure a Cisco MDS 9148S switch with Dell EMC Compellent SC2020 SAN storage?

It's a new integration for the SC2020 storage.

I connected the storage properly, following the manual, with two Cisco MDS 9148S 16G FC SAN switches.

When I initialized the storage, I noticed that the virtual ports did not exist.

After the initialization finished, I noticed that the virtual ports are disabled.

When I run this command on the Cisco switch:

show flogi database

I can see only the physical ports,

so there are no virtual ports against which to configure zoning.

I've fixed the cluster as follows:

Failover Cluster Manager
Nodes show:
MBX1, MBX2, MBX3, MBX4, all in healthy states.

Storage for Exchange is none; it was not configured.

Production vNIC: MBX1, MBX2, MBX3, MBX4 are up at subnet of
Backup vNIC: MBX1, MBX2, MBX3, MBX4 are up at subnet of

Ran cluster network validation; see attachment.

Can any experts explain to me what exactly the events below mean and how to troubleshoot them step by step, as they keep appearing in the cluster event log?

Event ID: 1282
Security Handshake between Joiner and Sponsor did not complete in '30' Seconds, node terminating the connection

Event ID: 1135
Cluster node 'MBX1' was removed from the active failover cluster membership. The Cluster service on this node may have stopped. This could also be due to the node having lost communication with other active nodes in the failover cluster. Run the Validate a Configuration wizard to check your network configuration. If the condition persists, check for hardware or software errors related to the network adapters on this node. Also check for failures in any other network components to which the node is connected such as hubs, switches, or bridges.

Event ID: 1177 => I did not see any quorum disk configured in Failover Cluster Manager. Why does this error keep appearing?
The Cluster service is shutting down because quorum was lost. This could be due to the loss of network …
Many hosting packages offer either an HDD or an SSD option. Which one is best when it comes to performance as well as budget?
This is a design and planning process to set up and configure a dual-node SuSE HA Extension cluster. The 2 nodes have already been set up with SLES 11 SP4. These are physical servers. There are 4 NICs and a dual-port HBA card on each node: eth0/1 form bond0 and eth2/3 form bond1. Bond0 connects to the public network, while bond1 connects to the backup segment. Both fibre channel ports of the HBA are already connected to their respective SAN switches. We are using Fujitsu DX100 shared storage.

I want to know how to set up this HA Extension. What about the shared storage? We have to have an NFS share allocated to the active cluster node.

Thanks in advance.

I use backup exec 16 to backup my local network.

I have a site-to-site VPN to MS Azure

in Azure I have a VM running the Veritas Backup Exec CAS Server

in local network I have a VM running the Veritas Backup Exec MMS Server

both the CAS and MMS have a local deduplication disk attached to them.

the MMS runs all backup jobs

each backup job runs to the Dedup disk on the MMS (local network) then DUPLICATES to the Dedup disk on the CAS (off site network in Azure).

What I did not understand until now were the performance issues this setup would cause.

the MMS uses the SQL Database on the CAS as its management database.

all queries it makes therefore go over the WAN

After days, if not weeks, of calls to Veritas offshore support (which is painful beyond words), they confirmed this is by design.

this article describes the issue:

A potential solution I have thought of is to replicate the SQL database from the CAS to a SQL database in my local network, and have my MMS connect to this local database.

I am sure this will not be a supported solution

has anyone had any similar experience?

many thanks

Hi there,

We have several websites that use an Access database as their main storage. For some reason we can't figure out, one site's DB keeps having data removed from it. The user who manages the data on it is not removing anything, but at random times the data is gone and we have to restore from a backup. The site is on a Windows 2008 R2 server on IIS; is there anything we can check/change to stop this from happening? It's getting very frustrating for us and the client.

- Christian
