$CSV = Invoke-Sqlcmd -Query $Query
I'm using PowerShell version 4, so the solution needs to work with version 3 or 4.
In this example, $CSV.Count returns 4,567 records. The same issue happens whether the data comes from Import-Csv or Invoke-Sqlcmd.
I then have to massage the data: for each record whose $CSV.FileList field contains more than one filename, I must add duplicate records to the end of $CSV.
For example, a record's FileList might equal "00310508.tif 00310509.tif 00310510.tif 00310511.tif 00310512.tif 00310513.tif", and I would then have to duplicate that row once for each filename in the FileList field. The requirement is one filename per record in $CSV.
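For what it's worth, here is a sketch of that expansion, using a small hand-built $CSV for illustration (the Id and FileList property names are just examples). Emitting copies from a foreach loop builds a brand-new collection, so nothing is ever appended to the original array:

```powershell
# Sample data standing in for the Invoke-Sqlcmd result (illustrative only).
# If the real rows are DataRow objects, `$CSV = $CSV | Select-Object *` first
# converts them to PSCustomObjects that can be copied safely.
$CSV = @(
    [pscustomobject]@{ Id = 1; FileList = '00310508.tif 00310509.tif' }
    [pscustomobject]@{ Id = 2; FileList = '00310510.tif' }
)

# Emit one copy of each row per filename; the foreach loop's output is
# collected into a new array, so the original $CSV is never resized.
$Expanded = foreach ($Row in $CSV) {
    foreach ($File in ($Row.FileList -split '\s+' | Where-Object { $_ })) {
        $Copy = $Row.PSObject.Copy()   # shallow, independent copy of the row
        $Copy.FileList = $File         # one filename per record
        $Copy
    }
}

$Expanded.Count   # 3 records: two for Id 1, one for Id 2
```

The copy step matters: emitting $Row itself repeatedly would put the same object reference in the output several times, so editing one "duplicate" would change them all.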
I've found a few ideas on the Internet, but they cause massive memory corruption; my guess is that the additional records are added to $CSV without any memory being allocated to hold the new data.
Nothing I have found has worked, so my current workaround is clumsy: export $CSV to a file, open it in Excel, append several thousand copies of the last record (copied and pasted repeatedly), and re-import the file. I then remember the original $CSV.Count, reuse the newly added records as placeholders, treat the new total as the original count plus the number of records added, and finally export the result to a file.
Do you know of a way to duplicate the last record of $CSV a thousand times using PowerShell?
Is there a way to append records to $CSV without the memory corruption? Like I said, I need to duplicate the record I am currently processing once for each filename in its FileList field.
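One approach I'd expect to work, sketched below with illustrative data: the array Invoke-Sqlcmd returns is fixed-size, so instead of appending to it, copy the rows into a resizable System.Collections.Generic.List (available in PowerShell 3/4) and append copies there:

```powershell
$CSV = @([pscustomobject]@{ Id = 1; FileList = 'a.tif' })  # illustrative data

# A List[object] grows as needed, unlike the fixed-size array from
# Invoke-Sqlcmd, so .Add() never fails or reallocates the whole collection.
$List = New-Object 'System.Collections.Generic.List[object]'
foreach ($Row in $CSV) { $List.Add($Row) }

# Duplicate the last record 1000 times. Select-Object -Property * produces an
# independent copy each time, so editing one duplicate later does not change
# the others.
$Last = $CSV[-1]
for ($i = 0; $i -lt 1000; $i++) {
    $List.Add(($Last | Select-Object -Property *))
}

$CSV = $List
$CSV.Count   # 1001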