






Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media used to retain digital data. In addition to local storage devices like CD and DVD readers, hard drives and flash drives, solid state drives can hold enormous amounts of data in a very small device. Cloud services and other new forms of remote storage also add to the capacity of devices and their ability to access more data without building additional data storage into a device.


I have Database.mdb, a 570 MB file, on a Windows Server 2016 shared directory.

When I drag it to the \\CPUDesk C: drive, it takes 10 seconds. CPUDesk is a $400 Windows 7 Pro Dell Inspiron 620 desktop that is 6 years old.

When I drag it to the \\CPULap C: drive, it takes 50 seconds. CPULap is a brand new $700 Windows 10 Pro Dell Inspiron 15 laptop with a fast i7 chip, 8 GB of memory, and a fast SanDisk X400 SSD.

I am shocked that an old cheap desktop is so much faster than the latest technology laptop.

At first I thought it might be a bad Cat 5 cable or switch port, so I swapped the laptop and the desktop and reran the test with the same result.

CPULap has a gigabit NIC, which appears confirmed because the network switch shows two green LEDs.

Does anybody have any ideas on how I can troubleshoot this?


P.S. At the risk of giving "too much information", here are some additional facts to muddle the picture.

I tested the copy on other machines:
* 1-year-old Windows 10 Pro laptop ($1300 quad-core Lenovo T460p): also took 50 seconds
* 3-year-old Windows 10 Pro desktop: took 10 seconds
* 3-year-old Windows 10 Pro desktop: took 10 seconds
* 5-year-old Windows 7 Pro desktop: took 10 seconds
* 4-year-old Windows 7 Pro desktop: took 50 seconds

* 2-month-old Windows 2016 server: took 1 second (no surprise, because that copy didn't even go through the switch)
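For what it's worth, the timings above can be turned into effective throughput figures with some quick arithmetic (assuming the 570 MB file size is accurate):

```python
# Back-of-the-envelope throughput implied by the copy times above.
size_mb = 570  # Database.mdb, in megabytes

for label, seconds in [("fast machines (10 s)", 10), ("slow machines (50 s)", 50)]:
    mbits_per_sec = size_mb * 8 / seconds  # megabytes -> megabits
    print(f"{label}: ~{mbits_per_sec:.0f} Mbit/s")

# The 50-second copies work out to roughly 91 Mbit/s, suspiciously close to
# a saturated 100 Mbit/s link, so checking the negotiated link speed on the
# slow machines (adapter properties or the switch port) would be a first step.
```

This is only a rough check, but ~91 Mbit/s on the slow machines versus ~456 Mbit/s on the fast ones is the kind of gap a 100 Mbit/s link negotiation would produce.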

Hello EE,

I am working on a network drive migration task, and all has been going well with this command, which copies the source to the destination, keeps all the permissions/ACLs intact, and gives me a nice log file to review the results. It doesn't change anything on the source, but mirrors it to the destination, which is what I want for the time being:

robocopy "\\source-server\share\folder" "\\destination-server\share\folder" /zb /MT:32 /mir /copyall /dcopy:T /V /tee /LOG+:C:\temp\robocopylog.txt /r:0 /w:0

All is well :-) Until I realize that I have a tiny window for the final copy before the full cutover to the new storage.

Problem: the command above takes close to 31 hours to run for 9 TB of data, and my cutover window is only about 12 hours. I re-run the command every 48 hours to check for changes to the source data and update the destination, and it appears there are millions of small files which are slowing down the copy.

All the data is there; I just want to keep the destination up to date with a new robocopy command and trim the run time, without jeopardizing the data or permissions/ACLs.

By reviewing the data, I have narrowed down a couple of folders I could exclude from these subsequent copies to shave the 31 hours down to roughly 9.

I have been trying out the /XD switch and cannot get it to work.

Here is an example of the /XD switch added to the command:

robocopy "\\source-server\share\folder" "\\destination-server\share\folder" /XD …
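For illustration, one way to pin down the /XD syntax is to build the command as an explicit argument list; this is just a sketch, and the excluded folder names below are hypothetical placeholders:

```python
# Sketch: the robocopy command above with /XD exclusions, built as an
# argument list so quoting is unambiguous. Excluded paths are hypothetical.
excluded = [
    r"\\source-server\share\folder\Archive",   # hypothetical folder
    r"\\source-server\share\folder\Scratch",   # hypothetical folder
]

cmd = [
    "robocopy",
    r"\\source-server\share\folder",
    r"\\destination-server\share\folder",
    "/zb", "/MT:32", "/mir", "/copyall", "/dcopy:T",
    "/V", "/tee", r"/LOG+:C:\temp\robocopylog.txt",
    "/r:0", "/w:0",
    "/XD", *excluded,  # each excluded directory is its own argument,
]                      # fully qualified, with no trailing backslash

# On Windows this could then be run with subprocess.run(cmd).
```

The usual gotchas with /XD are quoting and trailing backslashes (a trailing `\"` in cmd.exe swallows the closing quote); passing each fully qualified directory as its own argument after /XD sidesteps both.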
We have an old Dell PowerEdge 2850 server that we use in our DMZ for an FTP site and a customer portal. We had a 136 GB RAID 1 partition and a 409 GB RAID 5 partition. The server's OS was VMware ESXi 3.5, and it was hosting Server 2003 virtual machines: one for the FTP site and one for our customer portal. From what I gathered, one of the RAID 1 drives and one of the RAID 5 drives went down; I don't know the cause.

After tinkering with it for a while, I blew away the RAID 1 partition and replaced the failed drive. The only thing on it was ESXi; the virtual machines sat on the RAID 5 partition. I replaced the drive and rebuilt it within the server's BIOS. When I reinstalled ESXi, it detected that the RAID 5 partition was a VMFS partition. I was able to log into VMware to try to add the partition to the storage array, and ESXi saw that it was a vmhba1:1:0 drive with 409 GB capacity. But when I went to add it, I got the message "Unable to read partition information from this disk."

Is there any way or any tool I can use to get the data off of this partition?  It is mission critical.  I don't know if it's a hardware issue or if VMware simply can't read it so I'm tagging both to this question.
I use the native backup system on my SBS 2011 Std server. I currently back up to a 2 TB drive. I would like to use a higher capacity drive so that I can keep more backup sets than I can currently store on the 2 TB drive. I have a 6 TB drive available (not currently being used for anything). Can the backup system properly use a drive this size, or does the Windows 2 TB limit come into play (i.e., will it fill the entire 6 TB)?

I just upgraded from Office 2007 to 2016. I have lots of tasks, over 1,000. I need to start fresh, but I don't want to delete or lose the Tasks themselves; I just need to move them out of the Tasks system into some sort of data file for reference.

I followed this answer. Yet each time I select a task to move to the data file, it pops up as an attachment to an email. If I select 100 to move, an email with 100 attachments pops up; selecting one to move yields an email with that one attached.

What do you think is happening?


I have a PowerEdge T620 with a PERC H710 with 4 physical disks (and 1 hot spare) in a RAID 5 configuration.

See attached screen-shots for disk layout.

I'm running out of space on the C: drive; however, (thankfully) the data on E: is not needed. I'm guessing that since E: is adjacent to C:, this makes it easier to resize the C: drive. Note that the data on D: is important and can't be lost. I do have backups, but would prefer not to "need" them.

What's the proper way to take the space on the E: drive and incorporate it into the C: drive?


As a very senior dinosaur, I have been using Lotus 1-2-3 for a gazillion years. It's now starting to malfunction by not working the way I want. Consequently, and because I've previously lost some data, I think I had better replace this program with something new but similar. Can you suggest a free program to replace Lotus 1-2-3 (I'm only interested in the SmartSuite portion) with similar functions?
I have a 2008 Windows domain and am looking to add a storage server to house graphic files that need to be shared between Macs and PCs (Windows 7 Pro). I'm not sure what OS to install on the new server: Windows, or Ubuntu with Samba? (The new server is a Dell box, so macOS is not an option.) Is there a better option?

Thank you
I'm sure this may sound simple, but over the last few days, in my efforts to save data, files, etc. (using Windows Easy Transfer between two computers running Windows 7) before doing a system recovery, it seems that I either simply did not fully understand the expression "back up", or it has never been precisely defined for me.

My backup saved a LOT of data and folders, but it seems that many (if not all) .exe files, along with their configurations, were not included in the Windows Easy Transfer. I somehow thought that if you transferred the program files (in various folders) AND the settings/data/created info associated with the programs, all you had to do was copy them all to another computer (in this case, the one I did the system restore on) and you were good to go: open Excel, for example, and just pick up where you left off with any given associated file. I did not expect to have to totally reinstall Excel.

This also raises a question in my mind about "image backups." Nothing I encountered while searching for info answered the question the term raises: specifically, does "image backup" mean that once you restore it onto a computer, you can then simply open any program and have it run, with all the settings and data, saved under the same file names/folders as before the backup?

And finally, it seems to me that maybe the smartest thing I could do would be to have two computers, both of which have the same operating system, and in setting them up, make them identical twins…

I need to install Windows Server 2016 on a Dell PowerEdge T310 machine.

I need the drivers for: RAID controller, storage controller, network, and video/graphics.

Could somebody show me the link, please?


Hi guys

I'm writing a document to prove that we need a cluster environment for our virtual machines, as we have only one IBM x3650 M3 hosting a load of virtual machines at our datacentre. There is no SAN either; all of the disks in use are internal to the x3650, and I believe we have around 14 disks. I'm about to find out what sort of RAID has been set up.

I am keeping the documentation simple for the board/management, but wanted to include a "what's wrong with this scenario" section to show the sort of problems that could occur.

The current x3650 M3 has dual power supplies, so that can't be a reason. Storage, though, may be an issue: if more than two disks die, we would lose all of the virtual machines, which are business critical.

What other possibilities are there? Faulty RAM? The motherboard dying?

Thanks for helping
Well, I cannot export the following big table; the expdp log is as follows:

Export: Release - Production on Tue Jan 16 10:46:02 2018

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_04":  "/******** AS SYSDBA" directory=EXPRMAN dumpfile=dmpSGCIPROD160118.dmp logfile=dmpSGCIPROD160118.log TABLES=SGCEIPROD.GININFORMES CLUSTER=N
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 58.30 GB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TRIGGER
ORA-31693: Table data object "SGCEIPROD"."GININFORMES" failed to load/unload and is being skipped due to error:
ORA-02354: …
Hi everyone,

My TSM backup is failing with the following error:

ANS1377W The client was unable to obtain a snapshot of '\\vwarbass0000002\e$'. The operation will continue without snapshot support.
ANS1512E Scheduled event 'DIARIO_0130' failed.  Return code = 12.
ANS1403E Error loading a required ad_dll.dll DLL
ANS1403E Error loading a required ad_dll.dll DLL
ANS1403E Error loading a required ad_dll.dll DLL

No modifications were made.

Any idea about it?

Thanks in advance.
Hi experts.

We're upgrading from Exchange 2010 to 2016, and I already have the design in my head, which will be similar to what we have on 2010, but I'm unsure how many servers we will need to accommodate our mailboxes and run smoothly.

I've tried to use the Exchange Server Role Requirements Calculator v9.1, but it keeps recommending so many servers that it can't be right!
The idea is to have enough servers to run smoothly even in the case of one of them failing (DAG).
If someone could give me some thoughts on the below, that would be much appreciated.

Six databases, with around 200GB each
Total number of users is 1800, around 300 mailboxes per database
Average size of each mailbox is around 400 MB (limit is 3 GB); they're small because we have a third-party archiving solution stubbing messages.
Growth: I don't expect us to grow a lot in mailbox numbers, let's say 25% in the next 2 years. In terms of mailbox size, I would say each mailbox could grow up to 50% in the next 2 years, so, each database would go up to 300GB or 350GB.
Outlook - 80% users will work in Online Mode and 20% in Offline Mode.
Daily email flow - Average of 5000 outbound and 12000 inbound emails. In terms of size, they're mostly small emails with a few kb.
Servers will be virtual machines in VMware.
Host servers have Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz CPUs (32 logical processors) and no RAM restrictions (can go up to 92 GB).

One …
I have a 2TB Apple Time Capsule (Model A1409)

4th Generation

I have a MacBook, and when I plug the drive in with a USB cable (with the Time Capsule powered on), I do not see any icon appear in Finder.

What steps do I follow to:
1) wipe the Time Capsule
2) verify it's been cleared

Hi, one of my disks failed in my RN10400 device. I have replaced the disk, but now the error above appears. There are 4 disks in a RAID 5. I can't get into the OS; I have tried reinstalling it.

Does anyone have an idea how I can get this working again, or recover the data?
Hi experts,

Is this the correct answer:

VMkernel is a virtualization interface between a virtual machine and the ESXi host that stores VMs. It is responsible for allocating the ESXi host's available resources, such as memory, CPU, and storage, to the VMs.

or this:

The VMkernel networking layer provides connectivity to hosts and handles the standard system traffic of vSphere vMotion, IP storage, Fault Tolerance, vSAN, and others. You can also create VMkernel adapters on the source and target vSphere Replication hosts to isolate the replication data traffic.
My network included a NAS (N:), which crashed. I recovered its data and installed another NAS, which I want to name N: as well for convenience, and because many of my applications "look" for the N: designation.

I have found it impossible to get rid of N:. It continues to "exist" on one of my computers, which will not let me use the N: designation. I have tried rebooting. I have tried deleting the N: designation from my list of disks, but it keeps coming back, even though the computer will not let me "click" it.

Please, someone, help me get rid of this "ghost" N:.

Hi All,

I'm in the process of migrating an old file server to a new file server.  They both sit on the same SAN volume and are both thin provisioned.

I thought that as the data moved across, it would release the space. But this doesn't seem to be the case.

Is there a way to release unused thin provisioned space?

Many thanks

Background Information:

- We have two separate but identical sites (Production and Disaster Recovery)
- Both sites will have their own infrastructure of 3x servers, 2x iSCSI switches, and 1x SAN
- Data from production site will be replicated to DR site quite regularly (may be real-time)
- Sites utilize VMware ESXi/vSphere 6.5 and will have Acceleration Kits

We are in the process of an infrastructure upgrade and are in the planning phase of implementation. Once we have all of the above fully implemented, how can I configure a proper failover and failback configuration? I am pretty sure I can configure High Availability on a single site for the failure of a single host; would it be possible to configure some type of failover should a disaster completely bring down an entire site (e.g., if the building hosting the production equipment burns down)?

Is it even possible to utilize VMware High Availability to fail over to another site/SAN? If it was a temporary failover (power at the production site is lost, then comes back), would there be an automatic failback?

Can I use this method to temporarily take down a site for maintenance while having no impact to end users?

Thank you in advance for any assistance provided.
Is there a ELI5 (explain like I'm 5) for this process?  I'm not looking for a software recommendation.

I searched offline storage table (OST) and read that it "is an offline Outlook Data File used by Microsoft Exchange Server that enables users to work with their messages and mail data even when access to the mail server isn't available."  I also read a lot of content about OST and PST software.

Why would a person work with their messages and mail data even when they can't access a server?  What kind of work is a person doing that this is even necessary to do offline?
Any ideas what is causing the weird issue below? I would like to see all five folders when doing steps 3 and 10.

 1. login to Windows 2012 file server as "admin"
 2. open local C:
 3. see four of the five folders I created
 4. type "C:\hiddenfoldername" to see the hidden folder
 5. check to make sure above folder is NOT marked as hidden
 6. open "Server Manager, File and Storage Services, Shares"
 7. click "Tasks", select "New Share"
 8. select "Type a custom path"
 9. click "Browse", select "C:"
10. see four of the five folders I created
With reference to the attached slide showing the actual occupation of my VTL: can I get a detailed report of how, and by what, it is occupied, using a TSM script?
We have an HP ProLiant ML350 Gen9 E5-2620v3 Server using a Smart Array P440ar controller. Smart storage administrator shows the Logical Drive 1 is queued for rebuilding, this is a RAID 1 array.
We can find no reports of a problem with either of the hard drives, and our management system does not show any evidence of faulty drives.
This is a 2012 R2 server. Has anyone seen this? And if so, what did you do to correct it?

Thanks All!
My understanding of write-through is that the controller puts the data directly to/from memory, whereas write-back lets the CPU do it. Is that correct?

So neither method has anything to do with data loss due to a power outage? Only a cache battery backup (or a UPS) can save you there?

So if you spend the bucks on a cache battery backup, don't you still have the possibility of losing data that is in the cache of the hard drives themselves?
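For what it's worth, the distinction between the two policies can be sketched in a toy model (illustrative only, not any specific controller's implementation): write-through persists to the backing store before acknowledging a write, while write-back acknowledges once the data is in cache and persists it later, which is exactly the window a cache battery is meant to protect.

```python
# Toy model of write-through vs. write-back caching (illustrative only).
class TinyCache:
    def __init__(self, write_back=False):
        self.write_back = write_back
        self.cache = {}     # fast tier (e.g. controller cache)
        self.disk = {}      # slow tier (backing store)
        self.dirty = set()  # blocks acknowledged but not yet persisted

    def write(self, key, value):
        self.cache[key] = value
        if self.write_back:
            self.dirty.add(key)     # ack now, persist later
        else:
            self.disk[key] = value  # write-through: persist before ack

    def flush(self):
        for key in self.dirty:
            self.disk[key] = self.cache[key]
        self.dirty.clear()

wb = TinyCache(write_back=True)
wb.write("block0", b"data")
# A power cut at this point would lose block0: it is acknowledged
# but only present in the (volatile) cache, not on disk.
wb.flush()  # now block0 is persisted
```

In this model, whatever is in `dirty` when the power fails is what a battery-backed (or flash-backed) controller cache saves; drive-level caches are a separate layer, which is why controllers often disable them.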





