






Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media used to retain digital data. Local storage devices such as optical drives, hard drives, flash drives, and solid-state drives can hold enormous amounts of data in very small packages. Cloud services and other forms of remote storage further extend a device's capacity and its ability to access more data without building additional storage into the device itself.


I am advising a new customer on replacing or upgrading an existing server that hosts an Electronic Medical Records package called Accuro. The server's performance is acceptable to them at this point, but local storage on the server is exhausted, and adding more drives is not possible. Adding a NAS to the network and connecting volumes via iSCSI seems like a possible solution, but I am not sure how this would perform as a database repository. The software support folks say they are happy putting the data files anywhere that can be browsed to with File Explorer, so technically speaking this solution should work, but I am in the dark with respect to how it would perform.

Can any of you database gurus weigh in on what would be expected in this scenario?

Thanks for your time.
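Before committing to iSCSI for the database, it may be worth measuring what the storage can actually do for small synchronous writes, since transaction-log fsync latency usually dominates database performance. A minimal sketch in Python (run it against a folder on a test iSCSI volume and against the local disk; the write count and block size are arbitrary assumptions):

```python
import os
import tempfile
import time

def fsync_latency_ms(path, writes=100, block=8192):
    """Average latency of small synchronous writes (the pattern a DB log produces)."""
    fd, name = tempfile.mkstemp(dir=path)
    try:
        data = os.urandom(block)
        start = time.perf_counter()
        for _ in range(writes):
            os.write(fd, data)
            os.fsync(fd)  # force the write to stable storage, as a database would
        return (time.perf_counter() - start) / writes * 1000.0
    finally:
        os.close(fd)
        os.unlink(name)

if __name__ == "__main__":
    # point this at a folder on the candidate iSCSI volume vs. the local disk
    print(f"avg fsync latency: {fsync_latency_ms(tempfile.gettempdir()):.2f} ms")
```

Comparing the two numbers gives a rough idea of the penalty the database would pay on the NAS before any data is moved.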

My hard drive says it is full; last month it was not even close. I have transferred folders and files to an external drive and very little space was freed. I had this problem before and I think it had something to do with Sage 50: my drive looked full and then everything went back to normal. Any suggestions?


A customer of mine has failed a PCI scan, mainly due to files stored on two bookkeeping computers, which contain sensitive data like SSNs for employees, tax returns, and a small number of credit card numbers. Some of it is easy: old mailboxes, old emails, and duplicate files that can just be deleted.

Some of that data will need to be kept, though, possibly for long-term storage, but in a way that is PCI Compliant.

The credit card numbers are most likely internal, not customers - the business mainly transacts with their customers via checks, which are electronically deposited and then shredded when the accounts are reconciled.

What is the best/correct method to recommend to them for storing and accessing this data going forward that is both compliant and usable by not-very-technical bookkeeping staff?

They are a network of 10 total active users, all running Windows 10 Pro and joined to Active Directory via Windows Small Business Server 2011, and they do have shared file access on the servers. For compliance, I'm thinking it would be best to have this data on the server, where it is assuredly backed up and permissions are stricter, but does that create a more centralized potential point of failure?

Your advice and recommendations are appreciated!
Server 2016 using Windows Server Backup with 2 rotating external USB hard drives. Can I do this, and if one drive fails, will the remaining drive be able to recover the system?
I have a bit of an urgent problem

I am trying to rebuild my SOFS storage.

I am running Server 2016 with 3 JBODs (Dell MD1420s).

I have purchased 9x 1.6TB SSDs and 48x 1TB HDDs.

I am trying to build this into a new virtual disk with tiers, ideally with 3 data columns and 2 data copies.

I am using the commands below, and have tried all different sizes without any luck.

$disks = Get-PhysicalDisk | ? {$_.CanPool -eq $True}
New-StoragePool -StorageSubSystemFriendlyName "Clustered Windows Storage on MR-HY-SSC" -FriendlyName "ClusterPool" -PhysicalDisks $disks -EnclosureAwareDefault $True

# create storage tiers
$ssdTier = New-StorageTier -StoragePoolFriendlyName ClusterPool -FriendlyName "SSD_Tier" -MediaType SSD -ResiliencySettingName "Mirror" -NumberOfColumns 1 -NumberOfDataCopies 2 -FaultDomainAwareness StorageEnclosure
$hddTier = New-StorageTier -StoragePoolFriendlyName ClusterPool -FriendlyName "HDD_Tier" -MediaType HDD -ResiliencySettingName "Mirror" -NumberOfColumns 1 -NumberOfDataCopies 2 -FaultDomainAwareness StorageEnclosure
# create and initialize the virtual disk
Get-StoragePool ClusterPool | New-VirtualDisk -FriendlyName ClusterDisk -StorageTiers $ssdTier, $hddTier -StorageTierSizes 13000GB, 22000GB -WriteCacheSize 50GB -IsEnclosureAware $True
Initialize-Disk -VirtualDisk (Get-VirtualDisk -FriendlyName ClusterDisk)

All I get is the error message below. I need this rebuilt ASAP so I can start moving data back to my servers. Any …
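One sanity check before digging further, assuming the error is about size: with -NumberOfDataCopies 2, a mirror consumes twice its size in raw pool capacity, so the requested -StorageTierSizes may simply not fit. Rough arithmetic in Python (the 0.95 factor for pool metadata overhead is an assumption):

```python
def usable_mirror_tb(drives, size_tb, copies=2, overhead=0.95):
    """Rough usable capacity of a two-copy mirror tier: raw * overhead / copies."""
    return drives * size_tb * overhead / copies

ssd = usable_mirror_tb(9, 1.6)   # 9 x 1.6TB SSDs
hdd = usable_mirror_tb(48, 1.0)  # 48 x 1TB HDDs
print(f"SSD tier usable ~{ssd:.1f} TB, HDD tier usable ~{hdd:.1f} TB")
# a 13000GB (13TB) SSD tier cannot fit in ~6.8TB; the 22000GB HDD tier is borderline
```

If this matches the error, shrinking the SSD tier to well under the usable figure would be the first thing to try.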
Is there any way we can find info on the SFPs of HBA cards, like vendor/speed, or get some diagnostic report from a Linux OS?
Hello, I have a 6-node Server 2012 cluster that I am running Hyper-V on. Storage is currently a couple of EqualLogic arrays attached via iSCSI. Are there known issues with installing the SOFS role on top of this? I would like to create a CSV to host User Profile Disks, as I am currently setting up a proper Server 2012 RDS environment using VMs.
We run VMware 6.0 on a Cisco UCS host. There are 6 x 1.2TB HDDs in RAID 10 giving ~3.4TB, and we are running low on space on Datastore1. The LSI RAID controller won't let us add disks to the existing RAID, so we've created a new RAID 1 with 2 x 1.2TB drives.

Using VCSA, we have moved 3 of our VMs to Datastore2, freeing up 400GB on Datastore1, and now we are trying to extend the size of the virtual disk of our Exchange server by 50GB, but we get the error

Insufficient disk space on datastore ''.

When we look at the error stack we see "The disk extend operation failed: msg.disklib.NOSPACE"

We can't understand why we are not allowed to increase a vdisk when it is reporting that there is more than enough free space. Any help much appreciated.
Hi Experts,

We have 2 identical ESXi 6.0 hosts directly attached to a PowerVault MD3400. On 1 host we have 2 active paths, 1 active and 1 active with (I/O), and the multipathing policy is VMW_SATP_ALUA.

On the 2nd host I see 2 active paths, but neither one is an active (I/O) path, nor is there a multipathing policy available to select in the drop-down, and the storage type is blank.

Is there an option to enable multipathing on ESXi 6.0? I did not see an option in the Modular Disk Storage Manager on the storage array, and I did not enable any specific settings on the host that displays the paths correctly.

We have a NetApp FAS2220 SAN and 3x Fujitsu PRIMERGY ESX host servers, and we are using the VMware vCenter appliance.
On the NetApp filer the current ONTAP version is 8.1.2 - 7 Mode.

I would like to upgrade the filer from ONTAP 8.1.2 to 8.2.5. Please point me to any tutorials or videos for this upgrade. I have access to NetApp OnCommand System Manager (version 3.0); will I be able to do the upgrade from the GUI, or do I need to do it through the command line, logging into the filer through a PuTTY session?

I have downloaded the upgrade file  for Ontap version 8.2.5 (825P1_q_image.tgz)

Any help much appreciated  and thanks in advance.

Good evening,

I am trying to build a Fedora server with RAID 1+0.

The motherboard is a B450-F (Socket AM4).

I was able to get Fedora 24 to somewhat work, but I made a newbie mistake on a partition and am reinstalling the system.
But here is my main problem:

I have 4 identical solid state drives and want them set up as RAID 1+0.
The hardware does support it, but when I try to install Linux after the RAID has been set up, the installer does not recognize the raided drives.
If I have to deal with CentOS 7 that would be fine. I just need a system that handles Motif programming, and Fedora and CentOS still support this.

Any help would be great.
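For what it's worth, the RAID on consumer boards like this is usually firmware "fakeraid", which the Fedora/CentOS installers often will not see as a single volume; the usual route on Linux is to skip the BIOS RAID entirely and build a software RAID 10 with mdadm, either in the installer's custom partitioning or afterwards. A sketch that only prints the commands (the device names are assumptions; verify with lsblk first, since mdadm --create destroys existing data):

```python
# Prints the mdadm commands for a 4-disk software RAID 10; nothing is executed,
# because the real device names and the destructive create step are site-specific.
devices = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # assumption

create = ["mdadm", "--create", "/dev/md0", "--level=10",
          f"--raid-devices={len(devices)}"] + devices
mkfs = ["mkfs.xfs", "/dev/md0"]

for cmd in (create, mkfs):
    print(" ".join(cmd))
```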
Hi Community,

We have one TS + one AD/FS.
We enabled UPD (User Profile Disks) on the TS.
Now storage is becoming a problem; some users have .ost files filling up the UPD disk.
What I'm looking to do is migrate some of the users (not everyone) to the FS server.
Migrating UPDs while keeping permissions could be achieved this way:

To my understanding, I will need to redeclare the UPD path on the RDS server.
Is it possible to have a different UPD location for specific users?

Thanks for your hints!

Looking for a cheap but good phone which has a lot of storage, like 128GB, and good RAM, like 4GB.

Any suggestions on where to see and compare the various features, costs, etc. available in US markets?

I would also like to put in a 400GB microSD card (I think that is the maximum available on the market now, right?).
Please advise.
Yesterday I put a 256GB microSD card (with about 220GB of content) into my Motorola 4G Plus phone (below).


I played one video and my phone heated up a bit.

Am I not supposed to load that big a microSD card into my phone?

What is the maximum microSD card that I can put in safely without overheating or other issues?

Please advise.
I have a PERC 6/i integrated RAID controller. It's a RAID 5 with about 2TB of storage. There are 4 disks, and I replaced each drive one at a time with 4TB drives. I thought the volume was supposed to expand, but I'm obviously wrong. Am I going to have to copy all the data to a separate drive and reconfigure this thing?

I see in Open Manage there is a reconfigure option for that Virtual Disk.

Here is the physical disk information.
Physical Disks
ID      0:0:2
Status      OK
Name      Physical Disk 0:0:2
State      Online
Bus Protocol      SATA
Media      HDD
Revision      00.0NS05
T10 PI Capable      No
Capacity      2,047.38GB
Used RAID Disk Space      931.00GB
Available RAID Disk Space      1,116.38GB
Hot Spare      No
Vendor ID      
Product ID      WDC WD4000FYYZ-05UL1B0
Serial No.      WD-WMC130E7MAVD
Sector Size      512B
SAS Address      1221000002000000
Non-RAID HDD Disk Cache Policy      Not Applicable
SBS2008 boots and functions as a DC, gateway, etc., but the D drive is not recognised.
In Device Manager and Disk Management the drives are recognised, but Disk 1 comes up Unknown, Not Initialized.
If I try to initialize it, it comes up "The system cannot find the file specified".

I am running into quite an unusual problem and was wondering if anyone has experienced this issue before. I have an ActiveX control that I made that uses a PictureBox and displays an image from Azure Blob Storage by converting that image's memory stream into an image. I use this ActiveX control in a Microsoft Access 2002 application. It works great on forms, but on reports it doesn't display fully (please see attached images). Anyone know why?
Hello, I have a Dell PowerEdge T320 and I want to upgrade my 300GB 3.5" 15k SAS drives, and am considering SSDs. Can anyone recommend a SAS SSD with a size around 600GB or higher? Currently I have a RAID 10, and the T320 has a PERC H310.
Azure Indexer Error.

I have been running the Azure indexer on a blob storage container that contains PDFs. The indexer indexed more than 7000 blobs successfully, then started throwing these errors for any additional documents.

        "key": "https://yoursiteurl/169292.pdf",
        "errorMessage": "Invalid document key: 'https://yoursiteurl/processed-documents/169292.pdf'. Keys can only contain letters, digits, underscore (_), dash (-), or equal sign (=). Please see https://docs.microsoft.com/azure/search/search-howto-indexing-azure-blob-storage#DocumentKeys"

I see nothing wrong with the document itself and am wondering if this has anything to do with keys.

Here is index metadata:

Here is the blob metadata:

Hi Guys,

I have an interesting issue. After replacing a faulty HDD and allowing the RAID to rebuild successfully, a single user mailbox acted weirdly, and I had to investigate and follow the usual routes and causes. Ultimately nothing worked, and I created a new user in AD with a new mailbox account, which was working correctly.

The disconnected mailbox will not reconnect to any other user account, and I am afraid it may be corrupted, but before I pull the plug I thought of asking for help. Following is a shortened version of the message I got on the user PC; in hindsight I should have gone offline and exported everything to PST, but........
"Your mailbox has been temporarily moved to ... A temporary mailbox exists, but might not have all of your previous data. You can connect to the temporary mailbox or work offline with all of you"

Any solutions or suggestions? Or is there even a free tool to convert an OST to a PST?
I have a public API endpoint that I am pulling a JSON file from every 30 minutes. Right now I am using a Python pandas DataFrame to pull and upload the file to a Cloud Storage bucket and then sending it to Pub/Sub to process and place into BigQuery. The problem with this is that the file name stays the same, and even though I have a GCS-text-stream-to-Pub/Sub job, once it reads the file it never reads it again, even though the file attributes have changed. My question here is: can anyone help me with code that will pull from an API web link and stream the data directly to Pub/Sub?

Sample code below:
import pandas as pd
from sodapy import Socrata
import datalab.storage as gcs

# pull the latest results from the Socrata API
client = Socrata("sample.org", None)
results = client.get("xxx")

# convert to a pandas DataFrame
results_df = pd.DataFrame.from_records(results, columns=['segmentid','street','_direction','_fromst','_tost','_length','_strheading','_comments','start_lon','_lif_lat','lit_lon','_lit_lat','_traffic','_last_updt'])
# send results to GCS as newline-delimited JSON
gcs.Bucket('test-temp').item('data.json').write_to(results_df.to_json(orient='records', lines=True), 'text/json')
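To the actual question: skipping the GCS file entirely and publishing each record straight to Pub/Sub is possible with the google-cloud-pubsub client. A hedged sketch (the project/topic names are placeholders, and the serialization helper is kept separate from the publish call so it can be checked without GCP credentials):

```python
import json

def to_messages(records):
    """One Pub/Sub payload per record: deterministic UTF-8 JSON bytes."""
    return [json.dumps(r, sort_keys=True).encode("utf-8") for r in records]

def publish_all(project_id, topic_id, records):
    """Publish each record straight to Pub/Sub, no GCS file in between."""
    from google.cloud import pubsub_v1  # pip install google-cloud-pubsub
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic_id)
    futures = [publisher.publish(topic_path, data) for data in to_messages(records)]
    for f in futures:
        f.result()  # block until each message is accepted by the service

# usage (needs GCP credentials; names are placeholders):
# publish_all("my-project", "my-topic", client.get("xxx"))
```

Publishing per record also sidesteps the unchanged-filename problem, since every fetch produces fresh messages rather than overwriting one object.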
It appears there are some chkdsk errors... this is an old SBS 2008 server, already EOL; we are planning to replace it first thing in 2019.

This server has some critical files on it (Accounting, HR, etc.)

I am a bit nervous to run chkdsk /f due to the risk of data loss/problems/it running for days... etc. But of course I realize there must be risk in running with these errors.

Curious if anyone has any input on how bad these errors might be (given that it does at least say 0 bad sectors).

See screenshots for details

A Dell T30 with SATA disks is encountering a disk performance issue.
3 x SATA 1TB HDDs (software RAID 5)
* does not support a hardware controller
* Windows Server 2012 R2 Std

After doing some checking, I noticed the disks encounter very high latency on reading/writing files.

- The system will hang when there is read and write activity.
- Responsiveness is very slow when the disks are busy (e.g. copy/pasting documents).
- On checking, CPU and memory usage are low; only the disks show high activity.

Any idea how I can overcome this issue without buying new hardware to replace this?

I'm thinking of adding a Synology NAS, attaching it as an iSCSI volume, and storing all the data there.

What do you think?
I have a problem with the Windows 10 app store: all the apps are crashing as soon as you start them (including the app store itself).
It is happening to multiple computers on my domain (I get this problem at least once a week), where I get a notice from a user that they are unable to use any app like Calculator or the Camera app.
I tried several solutions that I found online, which I will add below, but none of them worked. All computers are up to date, and apps won't work even if I switch to a new profile on the computer.
Here are the solutions that I tried, which basically remove the app store, reinstall it, and delete everything in it:

Make sure the Storage Service is running and set to Manual (Trigger Start) for Store updates to work properly.

run wsreset.exe
rename C:\Windows\SoftwareDistribution to softwaredistribution.old
taskkill /F /FI "SERVICES eq wuauserv" (do this multiple times)
net stop cryptSvc
net stop bits
net stop msiserver

If you can't rename SoftwareDistribution, reboot, run the 4 commands above, and then rename it. Make sure Symantec is off.

powershell as admin
Paste the following command and hit enter:
$manifest = (Get-AppxPackage Microsoft.WindowsStore).InstallLocation + '\AppxManifest.xml' ; Add-AppxPackage -DisableDevelopmentMode -Register $manifest

Add-AppxPackage -DisableDevelopmentMode -Register $Env:C:\“Program …
I have rebuilt a bad RAID array back to healthy. Now when I boot, it goes to "Preparing Automatic Repair". Any ideas on why it is not booting? I have run diskpart, and the System Reserved, OS, and data volumes are all there, showing NTFS Healthy.





