I'm very new to Python so please bear with me.
I currently receive a CSV file every day that (unfortunately for me) has an unpredictable structure. I never know exactly how many columns of data I have until I open up the file; luckily, I do know it will always be one of two formats. The CSV files are either 6 or 10 columns wide, standard comma-delimited with CRLF line endings. So, with that said, I'm trying to put together a Python script that will dynamically identify the file format (presumably from the header row) and then process the data appropriately.
I've done some reading on the csv module on the Python website and googled around for similar topics. I've made some progress, but now I'm a bit stuck. Here's the meager code I have so far:
import csv

file_name = r"C:\temp\file1.txt"  # raw string, otherwise \t is read as a tab character
with open(file_name, newline='') as f:  # newline='' is what the csv module expects
    csv_reader = csv.reader(f, delimiter=',')
    ncol = len(next(csv_reader))  # read the header row and count columns
So this presumably gets me the number of columns, which I can use in an if statement to fork the logic. The next step I'm trying to get at is loading the CSV data into a SQL Server table, and I'm unsure what the appropriate way to do this is. Do I load each line of the CSV into a list and then insert the list into the SQL table? Or is there a way to build a two-dimensional array so this works in a bulk-load fashion? One thing that makes this easier is that the CSV column names and the SQL table column names are the same, so I just have to match them up for each line of data.
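For what it's worth, here is a rough sketch of one way the two pieces could fit together: read the header to detect which of the two formats arrived, collect the data rows as a list of lists, and then hand the whole batch to `cursor.executemany()` on a DB-API connection (pyodbc is a common choice for SQL Server). The table name and connection details below are hypothetical placeholders, not anything from the question.

```python
import csv

def read_rows(f):
    """Read an open CSV file object; return the header columns and the data rows.

    Because the header names match the SQL column names, the header itself
    tells us which format (6 or 10 columns) we received.
    """
    reader = csv.reader(f, delimiter=',')
    header = next(reader)
    if len(header) not in (6, 10):
        raise ValueError("Unexpected column count: %d" % len(header))
    return header, list(reader)  # rows as a list of lists (a 2-D structure)

def insert_rows(conn, table, columns, rows):
    """Bulk-insert rows into `table`, matching SQL columns to CSV headers by name.

    `conn` is assumed to be a DB-API connection (e.g. from pyodbc.connect(...)).
    """
    placeholders = ", ".join("?" * len(columns))
    sql = "INSERT INTO %s (%s) VALUES (%s)" % (
        table, ", ".join(columns), placeholders)
    cur = conn.cursor()
    # With pyodbc you can also set cur.fast_executemany = True here,
    # which batches the parameter binding for a large speed-up.
    cur.executemany(sql, rows)  # one bulk round-trip instead of row-by-row
    conn.commit()
```

Because the column list is taken from the header, the same `insert_rows` call handles both the 6-column and the 10-column file without separate code paths; the if/else fork only matters if the two formats go to different tables.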
Thanks in advance,