Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media used to retain digital data. In addition to local storage devices like CD and DVD readers, hard drives, and flash drives, solid-state drives can hold enormous amounts of data in a very small device. Cloud services and other new forms of remote storage also add to the capacity of devices and their ability to access more data without building additional data storage into a device.



I am looking into deploying a VxRail into our storage environment.
If we use all-flash SSDs inside the VxRail, would it be best to have multiple pools, or one single pool with all LUNs inside it?

How is performance affected if we deploy one pool?

What types of daily/weekly/monthly activities are required to effectively manage a SAN device? I appreciate this is very generic, but organisations often use a second SAN as a repository for disk-based backups, so having some assurance from a management perspective that this DR SAN is well managed/maintained/monitored is quite important.

Also, from a contingency/support angle, what sort of arrangements/contingencies should you look for in terms of support if there were a failure of the entire device or a core component?
This is really starting to do my head in.
My DL380 G6 has 3x RAID controllers.
  • 1x P410i with 256MB cache (no battery).
  • 1x P212 with 256MB cache (no battery).
  • 1x P800 with 512MB BBWC.

The P410i came with firmware version 5.70
P212 with v 3.xx
P800 is still v7.xx

The P410i links to the main cage in the server, the P212 to a generic SAS/SATA drive unit and the P800 to a MSA60.

The main reason for the firmware updates on the P410i and P212 was the P212 couldn't see 3TB+ disks.
Since the upgrade, the P410i has 0x14 lockups (1719-Slot 0 Drive Array - A controller failure event occurred prior to this power-up. (Previous lock up code = 0x14)) in ESXi; however, it seems fine in SmartStart and a Linux live boot (which I used to update the firmware from).
ESXi starts up and loads fine when the P410i has its drives out, obviously not loading the VMs on it (1x 4x146GB SAS as RAID 10, 1x 4x500GB SATA as RAID 10).

In desperation I took the server home to try to at least recover the data, and on my workbench (with the P212 and P800 connections off) it booted fine. I began a very cumbersome backup overnight with SCP, and at 500 KBps it didn't get very far.
As it ran well, I took it back and plugged it in at the datacentre, and I'm back to square one.

Does anyone have any ideas what I can do or what to try? Since ESXi tries to access the datastores off P410i it jams a SSH session when …
I have a request to share a LUN for HA.
I mapped the LUN to 2 servers, but I'm not sure how to do that.
Is it possible to share a LUN via CSV (Cluster Shared Volume)?

I have posted the question in the above link.

Now I need to give a picture of the data in the storage.

Full, Incremental, Incremental, Incremental.

Today, if I need to run a Synthetic Full: if I understand it correctly, it will create another incremental from production (not from the storage), then create a copy of the existing full backup on the storage, and then merge the recent incrementals into that copy of the full backup...

Is this the process, or am I misunderstanding how a Synthetic Full backup works?

Thank you
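The merge step being asked about can be sketched as a toy model. This is only an illustration of the general idea of a synthetic full, not any vendor's actual engine: the new full is synthesized on the backup storage by copying the last full and applying the stored incrementals in order, so production is only touched for the newest incremental. All names here are made up for the sketch.

```java
import java.util.*;

public class SyntheticFullDemo {
    // A "backup" here is just a map of file name -> content version.
    // Synthesize a new full on the backup storage: start from a copy of the
    // last full and apply each incremental in order. No data is read from
    // production in this step.
    static Map<String, String> synthesize(Map<String, String> lastFull,
                                          List<Map<String, String>> incrementals) {
        Map<String, String> synthetic = new HashMap<>(lastFull); // copy of the full
        for (Map<String, String> inc : incrementals) {
            synthetic.putAll(inc); // newer file versions overwrite older ones
        }
        return synthetic;
    }

    public static void main(String[] args) {
        Map<String, String> full = new HashMap<>();
        full.put("a.txt", "v1");
        full.put("b.txt", "v1");

        List<Map<String, String>> incs = new ArrayList<>();
        incs.add(Map.of("a.txt", "v2"));                 // Monday's incremental
        incs.add(Map.of("b.txt", "v2", "c.txt", "v1"));  // Tuesday's incremental

        Map<String, String> synthetic = synthesize(full, incs);
        System.out.println(synthetic); // a.txt=v2, b.txt=v2, c.txt=v1
    }
}
```

The point of the model: the existing full plus the chain of incrementals already on storage contain everything needed, which is why products can build the synthetic full without re-reading the whole production dataset.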
I need a solution for an issue I am facing in my organization, and I am hopeful of getting a good solution for it. Below are the details of my query.

1- Currently we are running 4 sites, and all 4 sites are connected to each other through site-to-site VPN tunnels.
2- We keep all the data on SAN storage. This data is backed up regularly through Backup Exec to a tape drive, and these tapes are moved to another location after backup.

We want all the data available at our branch offices to be replicated to the SAN storage installed at our head office. In the coming days we will put one more server in our branch office and copy approx. 2 TB of data onto that server. In case any disaster happens at any of our sites, we can easily recover the data from backup.

1- All three branch offices have 10 Mbps links and approx. 2 TB of data. If we replicate the data over the network, it will be a big load on the network and will slow down network performance. It will affect regular operations until all the data has replicated successfully into the SAN. This situation will affect our business.
2- There is a high chance of duplicate data, because the offices move the same files around and use the same data as case references.

Solution required:
1- How can we replicate all locations' data into the SAN storage without affecting our day-to-day operations?
2- How can we avoid data duplication?
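On the duplication question, the usual technique is content-addressed deduplication: hash each file or chunk and transfer/store it only if that hash has not been seen before. A minimal sketch of the idea follows (toy code, not a replacement for a SAN's or backup product's built-in dedup; all names are made up):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.*;

public class DedupSketch {
    // Store of unique chunks, keyed by the SHA-256 hash of their content.
    private final Map<String, byte[]> chunkStore = new HashMap<>();

    static String sha256Hex(byte[] data) {
        try {
            byte[] h = MessageDigest.getInstance("SHA-256").digest(data);
            StringBuilder sb = new StringBuilder();
            for (byte b : h) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always present in the JDK
        }
    }

    // Returns true if the chunk was new and actually had to be transferred/stored.
    boolean store(byte[] chunk) {
        String key = sha256Hex(chunk);
        if (chunkStore.containsKey(key)) return false; // duplicate: keep only a reference
        chunkStore.put(key, chunk.clone());
        return true;
    }

    int uniqueChunks() { return chunkStore.size(); }

    public static void main(String[] args) {
        DedupSketch store = new DedupSketch();
        byte[] caseFile = "shared case reference".getBytes(StandardCharsets.UTF_8);
        System.out.println(store.store(caseFile));  // true: first copy is transferred
        System.out.println(store.store(caseFile));  // false: second branch's identical copy is skipped
        System.out.println(store.uniqueChunks());   // 1
    }
}
```

The same principle is what makes WAN-optimized replication feasible on slow links: identical case files held at several branches cross the 10 Mbps link only once.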
Dear All,

I have a vCenter with VNX SAN storage attached to our ESXi hosts. Just wondering how I can view the "performance"? Do I need to install some kind of driver to be able to view this? Any help would be appreciated, thanks a lot!


Does anyone have a copy of the DS Storage Manager software? I lost it :(
I have a Dell T605 with a SAS 6/iR controller with 2 drives (RAID 1) that failed a drive. I shut the server down and replaced the drive with the same model of disk. The server booted back up; I opened Dell OpenManage and saw the drive was rebuilding. It has been stuck at 1% for 24 hours. Any ideas?

OpenManage screen
I'm IT for a small/mid-size business that currently has 3 HP DL380 servers and a VMWare Essentials license for those servers. We have only a handful of VMs on each machine (2 - 5 each) and data is currently stored on local SAS HDs in the server. However, the business consumes a fair amount of data and I'm about to run out of space on two of these servers (One in the next few months, and another one in about a year.)

I've stayed away from SANs so far due to the high cost, but I realize I may need to go that route and push for approval. I've also considered just getting another server and adding SAS HDs again, although I would probably need to revisit my VMware licensing, as well as Veeam. (Does it even make sense to buy a SAN without the accompanying vMotion licensing?)

I'm looking for advice, factoring in cost, as to what you feel I should focus on as a good viable solution. Thanks.


I am working on a storage solution, and I want to make sure I crank the best performance out of the hardware I have available. My test will be running backups using Backup Exec. I have two HP J9280 switches that don't have any stacking modules or SFPs in them; I have a Drobo 800i, which has 2 Ethernet ports; each of my servers has at least one free Ethernet port. All servers are HP servers and are using either their onboard NIC ports or, on the ones I added network cards to, 331T adaptors. AFAIK, it looks like all the hardware supports jumbo frames. How do I optimize the network? I'd assume making sure jumbo frames are either consistently enabled or consistently disabled, STP disabled... Any other suggestions or advice? Probably should have led with this: this is a dedicated storage network; no VLANs, and no traffic other than iSCSI will flow through the switches.
I have an odd issue: my file server's performance seems slow for transfers between other servers. All the servers are connected to a 10 Gb switch and all VMs are on all-flash storage. If I run a DiskMark test on the file server, the OS drive only registers 75 MB/s read and 60 MB/s write; any other server on the same hardware and storage reports 1500 MB/s read and about 1000 MB/s write. The actual files are stored on a separate VHDX attached to the VM; if I run a test on that drive from inside the file server, I get the 1500/1000 MB/s read/write. The OS is Windows Server 2012 R2 and the user load is very light, 25 people. Any ideas?
I am currently backing up about 7TB of data, file, SQL and Exchange, with the bulk of the storage coming from files.
I am backing up about 5 different Windows servers.  We are currently using veritas backup exec and are backing up to 2 superloader 3 devices, one with LTO7 and one with LTO6.
The backup jobs are taking longer than the entire night, and I'm thinking it's time to upgrade to backing up directly to disk.
So I'm thinking of going disk-to-disk-to-disk, and I would like the disks to be hot-swappable, so I can remove them every day and store them in our fireproof safe until I have to reuse them, which would be once a month, as I'll have about 20 disks, one for each day the backup runs.

Any ideas on the best way to proceed? I've been doing some research, and there are so many companies out there that I wanted to know if anyone has had good success with any particular solution.
I would want something that supports at least 8TB drives, so I can do a full backup every night to a single drive.
I'm pretty new to RAID and adding additional hard drives.

As it stands, I have a Dell PowerEdge R720 with 5 x 900GB hard drives installed on a PERC H710 RAID controller,
with RAID 1 + RAID 5, according to the initial notes I have.

Right now I'm only showing, in VMware ESXi 5.1.0, a capacity of 1.64 TB with 135.07 GB free.

I want to add 2 more 900gb hard drives.  How can I go about this?
Hello- at our organization we are using Windows 10 with Office365.  Users have Office365 apps installed locally and typically use those over the web versions due to the increased functions available in the local versions as opposed to the web versions.  We are also encouraging the use of OneDrive for storing files.   When users boot up, they typically connect automatically to OneDrive for Business.  We do the initial configuration for them on their machine which consists of logging into the Office 365 web portal, clicking the OneDrive tile, allowing it to set up...then clicking 'Sync' which then prompts for their credentials to open OneDrive for Business which we enter for them.  One Drive for Business than syncs and creates the local folder and they are off and running.

This seems to work well, but we do have a user (power user) facing challenges as we migrate in this direction and we need to understand why before it happens to more and more users as they move toward using One Drive for Business to save files.

Occasionally the user will be working in a OneDrive-saved file and making changes. He will then suddenly be prompted with an Office 365 login screen along with the message 'Upload Failed'. This is odd, because he is already logged in to OneDrive; he is working in a file from OneDrive. He also will sometimes get the message 'Upload Blocked' and is told he will 'need internet for this'. See Figure 1.

Similar to this... see Figure 2... he is …

I am going through the below link


What is the importance of the below entries and their sub-entries?

1. proxies
2. servers
3. mirrors
4. profiles, which has repositories, with internal and snapshot entries in there

If I have, say, 4 projects in 4 workspaces with 4 settings.xml files, how do I refer to all of them in Eclipse?
workspace_1 should point to project_1 with settings1.xml
workspace_2 should point to project_2 with settings2.xml
workspace_3 should point to project_3 with settings3.xml
workspace_4 should point to project_4 with settings4.xml

Please advise.
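For reference, here is a skeleton settings.xml showing where those four entries live. The element names are Maven's own; the ids, hosts and URLs are placeholders, not real servers:

```xml
<settings>
  <!-- proxies: how Maven reaches the network through a corporate proxy -->
  <proxies>
    <proxy>
      <id>corp-proxy</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>proxy.example.com</host>
      <port>8080</port>
    </proxy>
  </proxies>

  <!-- servers: credentials for repositories, matched by id -->
  <servers>
    <server>
      <id>internal</id>
      <username>deployer</username>
      <password>secret</password>
    </server>
  </servers>

  <!-- mirrors: redirect requests for one repository to another URL -->
  <mirrors>
    <mirror>
      <id>internal-mirror</id>
      <mirrorOf>central</mirrorOf>
      <url>https://repo.example.com/maven2</url>
    </mirror>
  </mirrors>

  <!-- profiles: repository definitions, e.g. internal releases vs. snapshots -->
  <profiles>
    <profile>
      <id>internal-repos</id>
      <repositories>
        <repository>
          <id>internal</id>
          <url>https://repo.example.com/releases</url>
        </repository>
        <repository>
          <id>snapshots</id>
          <url>https://repo.example.com/snapshots</url>
          <snapshots><enabled>true</enabled></snapshots>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>internal-repos</activeProfile>
  </activeProfiles>
</settings>
```

On the Eclipse side, m2e's "User Settings" preference (Window > Preferences > Maven > User Settings) is stored per workspace, so each of the 4 workspaces can point at its own settings file.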
I have an HP server, a DL360e Gen8, with a Smart Array B320i RAID controller card that has two drives saying "ready for recovery". There are two logical drives on this system. I am not sure how to start on this with the server up, and I really can't have this server down at all.
Hi All,

Are there any suggestions for creating a CentOS image template?

Below is some information on what I am trying to achieve.

- Installed CentOS and virtualized it. Used Clonezilla for V2V and V2P; on the hardware it was prepared on, it works.

Problem: Tested using Clonezilla on other hardware, e.g. a Dell R610 with a RAID controller, and it doesn't work. It goes to dracut recovery mode. In dracut recovery, blkid does not show any HDD.

Reading suggestions online, some suggested trying bare-metal backup and recovery tools such as ShadowProtect/Acronis, but these still have issues with being unable to boot.

Any suggestions would be appreciated.
I have a 2012 R2 ADFS 3.0 server on my internal network and a 2012 R2 Windows Application Proxy in my DMZ. I have published two web apps. My issue is a double login prompt at the ADFS login page. I enter my email and password and login. It immediately reloads the same login page. I enter my creds again, and then it takes me to the web app. Both logins appear successful. There is no indication of incorrect creds.

This issue only happens for one of my web apps. The other web app only prompts once, like it's supposed to. Both web apps are hosted on the same internal server.

I have a thumb drive that I encrypted using bitlocker to keep sensitive materials on (I do freelance work for a small adult film studio). The drive has served me really well over the years, but recently it wouldn't allow me access, even if I input the correct password or key, simply telling me to format the disk before using it. I was able to manage the bitlocker settings, change or remove password, etc, but not access the files themselves. Using testdisk, I was able to pull a number of files from the flash drive including one I believe is the container for the files that were on the drive, but it is simply an extensionless file called "COV". I do have the recovery key and the password for the drive, and it appears in disk management, but shows there as RAW, and I am unable to do anything else. Attempts to disable bitlocker entirely are met with a simple "The device is not ready".

Is there any way to take this "COV" file and extract things from it, or maybe place it into another bitlocker protected drive to unlock the files from within? I've already written off the contents, but if I can retrieve anything at all, it would be amazing.
Hi, I am inquiring about what others are using for their storage/SAN solutions in a virtual environment (enterprise level, i.e. VMware), as well as their experience, and whether, if they had to do it all over again, they would choose the same solution.

Recently, I purchased dual HPE StoreVirtual 3200 SAN devices and have had nothing but issue after issue with them, so I am trying to find out what others are using in a similar environment. In addition, I am looking for a great track record of reliability, granular configuration, on-the-fly expandability, etc.

Someone has recommended Nimble Storage and Insight Manager, but I am not familiar with them at this time, or with whether they are a viable alternative to the StoreVirtual SV3200 setup.

Thanks in advance.
We have a 2 host cluster with hyper-V based on Windows 2016 patched

We have a Fujitsu DX SAN 10 Gb iSCSI SAN as the primary storage, connected using 10 Gb SFP+ DAC connections to the servers

Both iSCSI networks are isolated from the rest of the environment.

We are using MPIO and 2 connections for the iSCSI configuration on each host

When the iSCSI volumes are configured and mapped as a network drive on a single host, we get good performance moving things in and out of the mapped drive, anywhere between 5-8 Gbps.

The moment we add the same volume as a CSV to the cluster, performance drops drastically to 1 Gbps

We have been testing different configurations, disabling unnecessary protocols on the NICs and others.

Had a remote session with Fujitsu and they are pointing the finger at Microsoft

Is there any particular necessary configuration to get the full performance out of the CSV?
I have been using Hyper-V for some years and have set up many a 2012 R2 server for customers.
In the last 4-5 weeks I have set up 4: small companies with one Hyper-V host and 2-3 VMs (5-15 users).
Installed Win2016 with GUI + the Hyper-V role on the server (HP DL380 Gen9 or ML350 Gen9), all with 8 x 600 GB 10k SAS drives in RAID 10, 64 GB RAM, and 1 x 8-core CPU with Hyper-Threading.

I've been surprised at how slow the VMs are. So much so that working over RDP, or via the Hyper-V console locally on the Hyper-V server, the mouse in the VMs is lagging, and anything like opening a program or Explorer is very slow. This is before any users have started logging on.

VMQ is disabled by default on the physical NICs; I tried with it both enabled and disabled on the VMs' NICs (no difference).

Have run Service Pack for ProLiant on the host with no change.

Any tips?
I'm using a HashMap for storing a String and a Date (in String format). I've initialised it as

    List<Map<String, String>> dataa_OLD = new ArrayList<Map<String, String>>();

I'm using TinyDB for data storage. Here is the code for it:

    List<Map<String, String>> dataa = new ArrayList<Map<String, String>>();
    List<Map<String, String>> dataa_OLD = new ArrayList<Map<String, String>>();
    Map<String, String> mapp = new HashMap<String, String>();

    protected void onCreate(Bundle savedInstanceState) {
        Gson gson = new Gson();
        String mapListString = tinyDB.getString("DATA");
        String mapListStringOLD = tinyDB.getString("DATA_OLD");
        Type type = new TypeToken<ArrayList<Map<String, String>>>() {}.getType();
        dataa = gson.fromJson(mapListString, type);
        dataa_OLD = gson.fromJson(mapListStringOLD, type);
        if (dataa.size() != 0) {
            for (Map<String, String> map : dataa) {
                tinyDB.putListString("paddrslist", parked_addrs);

                if (dataa_OLD != null) {
                    // copy each old record's fields into mapp
                    for (Map<String, String> old : dataa_OLD) {
                        mapp.put("ADDRS", old.get("ADDRS"));
                        mapp.put("DATE", old.get("DATE"));
                    }
                }
            }
        }
    }

    public void methodOne() {
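Stripped of the Android/TinyDB/Gson pieces, the data structure in the snippet above can be exercised with plain JDK collections. A minimal sketch (all names here are hypothetical, chosen to mirror the ADDRS/DATE records) of building such records and merging an old list into a current one:

```java
import java.util.*;

public class RecordStore {
    // Build one record (ADDRS + DATE), mirroring the Map<String, String> entries above.
    static Map<String, String> record(String addrs, String date) {
        Map<String, String> m = new HashMap<>();
        m.put("ADDRS", addrs);
        m.put("DATE", date);
        return m;
    }

    // Keep all current records, plus every old record whose address
    // is not already present in the current list.
    static List<Map<String, String>> mergeOldIntoCurrent(
            List<Map<String, String>> current, List<Map<String, String>> old) {
        List<Map<String, String>> out = new ArrayList<>(current);
        Set<String> seen = new HashSet<>();
        for (Map<String, String> m : current) seen.add(m.get("ADDRS"));
        if (old != null) {
            for (Map<String, String> m : old) {
                if (!seen.contains(m.get("ADDRS"))) out.add(m);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Map<String, String>> dataa = new ArrayList<>();
        dataa.add(record("12 Main St", "2018-01-05"));
        List<Map<String, String>> dataaOld = new ArrayList<>();
        dataaOld.add(record("12 Main St", "2018-01-01")); // duplicate address, skipped
        dataaOld.add(record("9 Oak Ave", "2018-01-02"));  // new address, kept

        List<Map<String, String>> merged = mergeOldIntoCurrent(dataa, dataaOld);
        System.out.println(merged.size()); // 2
    }
}
```

Testing the list logic this way, outside the Activity, makes it easier to see issues like the original inner loop re-iterating the current list instead of the old one.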

I have Android 5.1.

I attached two screenshots.
-the internal storage has 285 MB free space
-the usb storage has 6.98 GB free space
-the Sd card has 21.59 GB free space

However, I cannot install even an app as small as 6.05 MB, because it says that I need to free 20.38 MB of space!

Am I missing something or is the system buggy?
What should I do?

Thank you very much!





