Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media used to retain digital data. Local storage devices include CD and DVD drives, hard drives, flash drives, and solid-state drives, which can hold enormous amounts of data in a very small device. Cloud services and other forms of remote storage further extend a device's capacity and its ability to access more data without building additional storage into the device.


Hi Experts,

I'm trying to build a RAID 5 array for our file server using the following:
1.  Dell PowerEdge R530 - server
2.  Windows Server 2016 - OS
3.  3x 4TB SAS - hard drives
4.  PERC H330 mini - the Dell server's embedded RAID controller

But in my inquiries on different forums, I got these ideas:
1.  There's a risk in RAID 5 with large disks: rebuilding the array takes a long time, and another disk failing during the rebuild is more likely to happen.
2.  A RAID controller without cache is not wise to use for a parity RAID.

Since the PERC H330 is an entry-level RAID card which does not have cache: is it wiser to use software RAID in this situation, or to stick with the built-in RAID controller?
Should I go ahead and build the RAID 5 array with 4TB disks?
What would be the best option here?


Diagram of the Setup

Hi Experts;

I need clear advice on MS Access table storage size:
Our client runs a supermarket with many products on the shelves. Our current point of sale works as follows:
(1)      For every product line, five line entries are generated (Revenue line, VAT line, Cost of Sales line, Stock line and the Cash/Receipt line)
(2)      All five lines are stored in the POS details table
My question, or worry, is: can this table handle, say, 850,000 products sold per year, which equals 850,000 x 5 lines = 4,250,000 lines? And if an Access table can handle that, can we continue to use the same POS for the next 5 years, or will it burst?
The above lines represent data that will be required for accounting purposes, for example:
(1)      Revenue Account
(2)      Cost of sales Account
(3)      Vat output Account
(4)      Stock Account
(5)      Cash/Receipts Account
Performance as of now is very good - no issues at all.
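As a rough sanity check against Access's 2 GB database file limit, the arithmetic can be sketched like this (the ~200 bytes per row is a hypothetical average - measure your own table to get a real figure):

```shell
# Rows generated per year and over five years
rows_year=$((850000 * 5))          # 5 lines per product sold
rows_5yr=$((rows_year * 5))        # 5 years of history in one table

# Assume ~200 bytes per row (hypothetical; depends on your field sizes and indexes)
bytes=$((rows_5yr * 200))
echo "$rows_5yr rows ~= $((bytes / 1024 / 1024)) MiB against Access's 2048 MiB file limit"
```

Even at that modest per-row estimate, five years of history lands well past the 2 GB ceiling, so archiving older years into separate files (or moving the back end to SQL Server) is worth planning for.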

I'm using a SATA hard drive adapter cable to use a known good hard drive for external storage. My PCs (Windows 7 and 10) see the adapter as "USB Drive" but show the drive capacity as 0 bytes.

When I view my drives in Disk Management, it shows Disk 1 as Removable with "No Media."
"No Media" image from Disk Management
I've gotten the same results using another known good drive. I've also had similar problems with other SATA hard drive adapters - including ones whose IDE adapters worked flawlessly. So far I've always given up and gotten an external case to make it work. But I liked the convenience so much in the past that I'd really like to know why this isn't working for me now.

Any help you can offer would be greatly appreciated.
I'm currently attempting to resize a 3PAR iSCSI virtual volume partition within RHEL 7. I think I'm most of the way there, but lsblk still shows my old partition size of 39TB instead of the new 64TB the volume has been extended to. I've never used iSCSI or multipath before, so it's all new to me. I've cribbed from the RHEL documentation, but can't get the actual partition to grow.
I've used the following:
but they don't go into the final step of resizing the actual partition, so I get this (I've unmounted the drive from the mount point for now):
sdc                          8:32   0    64T  0 disk
├─sdc1                       8:33   0    39T  0 part
└─mpathb                   253:4    0    64T  0 mpath
  └─mpathb1                253:5    0    39T  0 part
sdd                          8:48   0    64T  0 disk
├─sdd1                       8:49   0    39T  0 part
└─mpathb                   253:4    0    64T  0 mpath
  └─mpathb1                253:5    0    39T  0 part


The most obvious solution would be resize2fs, but that gives me "device or resource busy".

I'm not great with Linux, so please excuse the newbie question.

So where do I go from here?
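Assuming the names from the lsblk output above (mpathb/mpathb1), a GPT partition table, and ext4 on the partition, the remaining steps are usually along these lines - a sketch only, so back up or snapshot first:

```shell
# 1. Rescan the iSCSI sessions so sdc/sdd report the new 64T size
#    (the lsblk output shows this part already happened)
iscsiadm -m session --rescan

# 2. Tell multipathd that the paths underneath the map grew
multipathd -k"resize map mpathb"

# 3. Grow partition 1 to fill the device (GPT assumed)
parted /dev/mapper/mpathb resizepart 1 100%

# 4. Refresh the partition mapping on the device-mapper device
kpartx -u /dev/mapper/mpathb

# 5. Grow the filesystem on the multipath partition device - not on
#    /dev/sdc1, which device-mapper holds open (that is what produces
#    "device or resource busy"). If the filesystem is XFS, mount it
#    and use xfs_growfs instead.
resize2fs /dev/mapper/mpathb1
```

The key point is to operate on the /dev/mapper/mpathb* nodes throughout; the underlying sd* paths belong to multipath once the map exists.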
Hey guys

I set up 2x Samsung 1TB SSDs in RAID 1 that are going to be used for a few VMs for the next 2 months.

I set up RAID 1 with Btrfs, set up a single LUN with advanced features, and connected to it with the MS iSCSI initiator.

Everything works. I see the drive, i initialize it and format it with NTFS.

The Rackstation is connected with 1 gb cable to a gigabit switch. Jumbo frames 9000 enabled.

Big files transfer really, really fast, back and forth (maxed out at 125 MB/s).

What I have an issue with is: when I run the ATTO benchmark on it, all the READS are capped at a laughable 5 to 11 MB/s?

The write speeds do better, but not by much until I reach bigger files.

I am monitoring cpu and network usage on the GUI but there is absolutely no load / stress on the machine.

Supposedly this RackStation can do better than that?

Here is what i have tried:

- Enable / disable jumbo frames = no change

- Trying out EXT4 instead of BTRFS = Gave around 20% boost but reads still stuck at 11 mb/s

- Trying different allocation sizes with NTFS (4k to 64k all the way) = small difference

- Trying ReFS = No difference

- Tried SMB3 mapped drive = no difference.

- Turning off all unnecessary services = no difference.

- Tried 3 different machines with different OS (server 2012R2, server 2016 and server 2019) = No difference

- Tried the RAID "Sync faster" options = gave better results in normal operation, but the benchmark still shows bad numbers.

- Tried directly …
I am running Windows 10 on a Lenovo laptop.  My new laptop is in for repair, so this is my backup unit - which is only four years old, but hasn't been used for about a year.  I plugged it in, and I'm running all available updates, but the computer doesn't read my main desktop USB 3.0 hard drive (which is where most of my data lives).  I have a backup drive that it reads fine, and the desktop drive shows up in Disk Management (which shows it's "healthy").  But Windows Explorer doesn't see it, Devices and Printers doesn't see it, and my MS Office apps don't see it.  I've tried plugging it into both hubs and directly into the computer, to no avail.  Can you help?


I have a Buffalo Terastation TS1400. A drive failed and I replaced it. I formatted the drive but cannot get the RAID 5 to rebuild using that drive. The instructions say to press the bottom button on the NAS for 3 seconds but that does not do anything.

Any idea how I can get this to rebuild?
I am trying to install PHPki on an SME Server.   The initial setup screen asks the following:

"Storage Directory *
Enter the location where PHPki will store its files. This should be a directory where the web server has full read/write access (chown phpki ; chmod 700), and is preferably outside of DOCUMENT_ROOT (/opt/phpki/html). You may have to manually create the directory before completing this form. "

It gives the example of:


The server's primary directory has three folders:

Primary - cgi-bin
        - html
        - folder - phpki-store

I was thinking about putting phpki-store under "folder", which is at the same level as the html folder, but I'm not sure what they are asking for.
We have four Dell servers of various models and years.  Two of the Dells have LSI2008 HBA cards in them and the other two have Dell PERC H810 cards.

For storage, we have a Dell MD3200 and an IBM V3700 (don't laugh...I didn't pick the IBM).

Interesting thing is, when I connect the two LSI2008 HBA cards to the IBM and the Dell storage, those two cards CAN see the storage on both devices and access it.

The two Dell PERC cards can of course see the Dell storage, but CANNOT see anything in the IBM except the controller.

So, I've confirmed that the LSI2008 HBA cards CAN see both storage arrays but the PERCs cannot see beyond the IBM controller.

Can anyone give me some feedback on why this is?  

I need all four servers to see both the Dell and the IBM storage and I'm about to buy two LSI cards but want a second opinion before I do.

Quite likely the disk array controller has failed in the HP ML350pT08 E5-2620v2 server, P/N: 736978-425 (7/2014). We are replacing the server with a new one, but wonder: can we access the data stored on the disks? There are two arrays, 3x300GB RAID5 and 3x900GB RAID5. This failed server doesn't boot.

We have a spare HP ProLiant ML350p Gen8 E5-2609 server, P/N: 669045-425 (11/2012) where we can try to house the disks, but what do we need to do to have this working? I'm worried that when we try to create an array on the spare server, it will wipe all the data away.

The OS (Win Server Essentials 2012R2) is at the 3x300GB RAID5 and data is at the 3x900GB RAID5.

We have a computer that has limited space - a 120GB SSD - and it cannot be changed out, as it is currently over 1,500 miles away from us.

Doing a WinDirStat scan we found that the installer directory is over 10GB.

In doing some research online, it seems like half say you can delete it and half say you will crash your system.

What do you all know about this?  I know it may not be ideal and if we have to download more content/files later from Windows Update that is fine.  Free space is the most important issue now.
Hello Experts,

We are having an issue with our storage enclosures.  Attached are the errors we are getting.
I have a server that has 32GB of RAM and 8TB of HDD with RAID 1 for 4TB of total HDD space.

I would like to split that with 2 TB being NFS and 2TB being Samba.

This is sort of a discussion oriented question.

I was wondering about running NFS and Samba under LXD or Docker containers. Would there be advantages to doing this (at least for learning)?
We have two existing Exchange 2016 servers in a DAG. Unfortunately there are 2TB of mailboxes on them and only around 250GB of free space left on the volume that houses the mailboxes. To further complicate matters, these volumes were created as MBR instead of GPT volumes, so they cannot be expanded any further. We have two new Exchange 2016 servers in a new DAG in the same organization. These have volumes that have been configured appropriately.

The concern we have, of course, is the amount of log files that will be generated using batch moves and the possibility of filling up the volume. There are 10 databases, so as we empty them and delete them, space will become less and less of an issue, but we have to get through the initial migrations.

At first we thought about slowly moving a few mailboxes and then running backup jobs to clear the logs, but then it occurred to us: since the generation of log files related to mailbox migrations is tied to the Migration.8f3e7716-2011-43e4-96b1-aba62d229136 mailbox, why not migrate that mailbox to one of the new servers first? Then all subsequent migrations will generate log files on the new server and its volume, where storage is not an issue. We cannot see any reason why this would not be a viable solution, especially since this is not a coexistence situation between different versions of Exchange. However, out of an abundance of caution, we thought we would float this out here in the forum and see if anyone knows of a reason why …
I have a problem with an HP DL380 G4 server. The server does not show anything on the display/monitor.
I removed the memory/RAM and started the server, and I am not getting the beep sound. So I arranged another motherboard and I still didn't see anything on the screen/monitor.
What could the issue be? I changed the power supply as well. I just want to start the server, run P2V and convert it to a virtual machine.
Appreciate your guidance.
I have a Dell PC running Windows 10. I use the Windows Backup and Restore software for backing up to an external drive, but this has not been upgraded since Windows 7.

Now that I have updated Windows 10, I can no longer get the backup to work. I need to find a different way, now that Microsoft does not provide technical support on this. Any ideas, please, for how to continue backing up to the external drive?

I am using Angular 7 to make an API call to a database, and it returns some user data (id, nameline, email).

In the .ts file I am saving this user data to local storage in this way:
localStorage.setItem('user', JSON.stringify(data));


Then on another page (Page1Component) I retrieve the data in this way. The "alert" at the end properly displays my user ID, which is what I want.
export class Page1Component {
  myuser = JSON.parse(localStorage.getItem('user'));

  ngOnInit() {
    alert("MyPOPID is [" + this.myuser.id + "]");
  }
}


But my question is: instead of putting the myuser assignment in the "export class" statement of every page, is there a way to have the "myuser" variable placed somewhere like the app.module.ts file and then have it available to all child components under it? If so, how would that be done? What would I need in my app.module.ts file, and what would I then need in my Page1Component.ts file?
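One common Angular pattern for this is a shared, injectable service rather than anything in app.module.ts. The sketch below is hypothetical (UserStore and all names are made up for illustration): it keeps the JSON round-trip in one place behind a tiny storage interface. In Angular 7 you would decorate the class with @Injectable({ providedIn: 'root' }) and inject it into each component's constructor instead of repeating JSON.parse(localStorage.getItem('user')) everywhere.

```typescript
// Hypothetical shared store for the logged-in user.
// In Angular: add @Injectable({ providedIn: 'root' }) and inject it anywhere.
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

interface User {
  id: number;
  nameline: string;
  email: string;
}

class UserStore {
  // In the browser, window.localStorage would be the real backing store
  constructor(private storage: KeyValueStore) {}

  saveUser(user: User): void {
    this.storage.setItem('user', JSON.stringify(user));
  }

  getUser(): User | null {
    const raw = this.storage.getItem('user');
    return raw === null ? null : (JSON.parse(raw) as User);
  }
}

// A Map-backed stub stands in for localStorage so the sketch runs anywhere
const backing = new Map<string, string>();
const store = new UserStore({
  getItem: (k) => backing.get(k) ?? null,
  setItem: (k, v) => { backing.set(k, v); },
});

store.saveUser({ id: 42, nameline: 'Jane Doe', email: 'jane@example.com' });
console.log(store.getUser()?.id); // 42
```

With the service injected, Page1Component would just call this.userStore.getUser() in ngOnInit, and no component touches localStorage directly, which also makes the parsing logic testable in isolation.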
hi all,
I created a new SVM and added the domain, CIFS server, almost everything, but I am still not able to see the CIFS share I created on that SVM.
I'm looking into RAID solutions for a very large file server (70+TB, ONLY serving NFS and CIFS). I know using ZFS raid on top of hardware raid is generally contraindicated, however I find myself in an unusual situation.

My personal preference would be to set up large `RAID-51` virtual disks, i.e. two mirrored RAID5s, with each RAID5 having 9 data disks + 1 hot spare (so we don't lose TOO much storage space). This eases my administrative paranoia by having the data mirrored on two different drive chassis, while allowing for 1 disk failure in each mirror set before a crisis hits.

HOWEVER this question stems from the fact, that we have existing hardware RAID controllers (LSI Megaraid integrated disk chassis + server), licensed ONLY for RAID5 and 6. We also have an existing ZFS file system, which is intended (but not yet configured) to provide HA using RFS-1.

The suggestion is to use the hardware raid to create two, equally sized RAID5 virtual disks on each chassis. These two RAID5 virtual disks are then presented to their respective servers as /dev/sdx.

Then use ZFS + RFS-1 to mirror those two virtual disks as an HA mirror set (see image)

Is this a good idea, a bad idea, or just an ugly (but usable) configuration?

Are there better solutions?
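If the hardware-RAID5-plus-ZFS route is taken, the ZFS side reduces to a single mirrored top-level vdev over the two virtual disks. A sketch with hypothetical pool and device names (check lsblk for the real ones):

```shell
# sdx / sdy: the two hardware RAID5 virtual disks, one per chassis
zpool create tank mirror /dev/sdx /dev/sdy

# Verify the layout: one top-level mirror vdev, so either chassis can
# drop out entirely and the pool stays online
zpool status tank
```

The trade-off to note is that ZFS sees only two opaque devices, so its self-healing is limited to whichever mirror side still returns good data; disk-level repair stays the hardware controllers' job.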


I would like to learn about virtualization. I'm thinking about purchasing a Type 1 virtualization software, for home use. Any good recommendations?
Also, can anyone point me to some good learning resources (books, websites, etc.)?

I followed the tutorial "HOW TO: Shrink a VMware Virtual Machine Disk (VMDK) in 15 minutes" to shrink my VM in ESXi 6.7 from 900GB to 300GB. Everything seems fine; however, after moving the VM to a second datastore and re-registering it, vSphere still shows 900GB for "Not shared" and "Used" storage. I selected "I moved it" during startup. The provisioned size seems fine (300GB). Why are the "Not shared" and "Used" storage figures still wrong, and how can I fix them?
Our current RAID devices are reaching end of life, and I’m looking into the possibility of upgrading to a more robust system. The important goal is high availability.

The image shows one possible route. I'm investigating:
1. Is there a better way to do this? (Or is this a bad way to do it).
2. The best tools for accomplishing this.

IF POSSIBLE, when all four enclosures are working – I’d like all 4 ethernet cables to be serving data – to overcome bandwidth problems.
suggested setup
On the left (current) is our current setup.
Top: RAID controller with 24 slots (+ 2 SSD slots in the back for the OS).
Middle: The 12 slot chassis (which may be failing)
Bottom: chassis (without motherboard) with 24 slots.

Can you advise a setup for something like this?
I've inherited a MegaRAID SAS 2208 24-drive-bay RAID, that is also attached to two additional enclosures (one 24 bay, and one 12 bay).

I attempted to load 6 new drives into 6 consecutive empty drive bays on the 12 bay enclosure (bays 6 - 11) and all 6 showed a solid red error light, and the admin mailing list got 6 messages...

    Controller ID:  0   Phy is bad on enclosure:   2  PHY
    Event ID:185
    Generated On: Tue Feb 12 17:02:27 CET 2019

    System Details---
    IP Address:
    OS Name: Linux
    OS Version:3.13
    Driver Name: megaraid_sas
    Driver Version: 06.700.06.00-rc1

    Image Details---
    BIOS Version : 5.37.00_4.12.05.00_0x05180000 Firmware Package Version: 23.9.0-0015 Firmware Version : 3.220.05-1881

Looking up the event ID in the documentation gives me:

    Enclosure %s phy %d bad
    Logged when the status indicates a device presence, but there is no corresponding SAS address is associated with the device.

- I put the 6 new drives in one of the other enclosures, and they worked fine (so not bad drives).
- There are 6 drives operating fine in bays 0-5.

It seems odd that all 6 sequential drive bays, 6-11, should be bad - yet apparently the entire backplane is not bad, since bays 0-5 work.

Is it possible for only half of the enclosure's backplane to have gone bad? Or is this a firmware (or some other configuration) problem?
I have inherited a Megaraid SAS 2208 RAID (24 drive slots) with two additional external enclosures (one 12 slot, and one 24 slot)

The problem is: the 12-slot enclosure is showing hardware problems. It is well out of warranty and has no support contract.

I'm considering buying a new enclosure and moving the disks. However, I don't know how to recreate the RAID6 virtual drive WITHOUT losing the data on the disks.

Is this even possible?
I have inherited a Megaraid 2208 RAID device with 7 virtual disks ranging in size from 7 to 14TB.

The file server hosting this RAID sees these 7 virtual disks as /dev/sda, /dev/sdb, etc.

The previous admin then used ZFS to strap all these drives into one giant pool, and then parse them out into datasets.

I'm new to ZFS. The question is: is it possible to determine, via ZFS, what data is stored on which disks?

For example, if we lost 6 consecutive disks in the array (a real possibility for one of the enclosures) - after repairing and replacing the disks, how could I know what data to replace specifically? Or would I have to do a restore on ALL the data held within the pool, and simply tell the restore software not to overwrite existing files?
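On the "which data is on which disk" question: ZFS stripes every dataset's blocks across all top-level vdevs, so with seven bare vdevs and no redundancy between them, losing any one vdev faults the entire pool, and restoring everything (telling the restore software not to overwrite existing files, as described above) is the realistic recovery path. The layout can be confirmed with (pool name hypothetical):

```shell
# Each /dev/sdX shows up as its own top-level vdev; there is no
# per-file-to-disk mapping to query, because blocks of every dataset
# are striped across all of them
zpool status tank
zpool list -v tank
```

If `zpool status` lists the seven disks at the top level with no mirror/raidz grouping, the pool has no redundancy of its own beyond what the hardware RAID provides underneath.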






