
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 514

Memory Errors: Large CSV files into SQL Server

Hello All,

This Python neophyte is finally (hopefully) nearing the end of his project, thanks to the advice of some kind and knowledgeable programmers here (as well as Google). But I'm running into what is hopefully my last roadblock. I have large CSV files, typically around 100k rows and 200+ columns wide, which makes for some pretty large files (100-200 MB).

I'm attempting to use the executemany() function to load these rows quickly, since inserting them one by one would take far longer than time permits. But I'm running into memory errors. Here's what I have so far:

import csv
import ceODBC

fname = r'c:\temp\myfile.csv'   # raw string so \t is not treated as a tab character

f = open(fname,'r',encoding='utf_8',newline='')
reader = csv.reader(f,delimiter=',')

conn = ceODBC.connect('***connection info***;Trusted_Connection=yes',autocommit=False)
cursor = conn.cursor()

header = next(reader)

#target columns explicitly defined
query = 'insert into mytable (col1,col2,col3...) values ({0})'

query = query.format(','.join('?' * len(header)))

columns = []

#Create a list of lists (aka 2D array)
for row in reader:
    columns.append(row)

#Load all rows in a batch
cursor.executemany(query,columns)

conn.commit()

cursor.close

conn.close

f.close()



Can anyone suggest a modification, or a better way to get this data into my table?

Thanks in advance,
Glen

Asked by: jisoo411

jisoo411 (Author) commented:
Rearranging the for loop to commit more frequently seems to help somewhat:

import csv
import ceODBC

fname = r'c:\temp\myfile.csv'   # raw string so \t is not treated as a tab character

f = open(fname,'r',encoding='utf_8',newline='')
reader = csv.reader(f,delimiter=',')

conn = ceODBC.connect('***connection info***;Trusted_Connection=yes',autocommit=False)
cursor = conn.cursor()

header = next(reader)

#target columns explicitly defined
query = 'insert into mytable (col1,col2,col3...) values ({0})'

query = query.format(','.join('?' * len(header)))

columns = []

#Create a list of lists (aka 2D array)
for row in reader:
    columns.append(row)

    if len(columns) > 1000:
        #Insert and commit this batch of rows
        cursor.executemany(query, columns)
        conn.commit()
        columns = []

#Pick up the scraps
cursor.executemany(query,columns)
conn.commit()

cursor.close

conn.close

f.close()


 
pepr commented:
Warning: cursor.close and conn.close are never actually called; without the parentheses those lines only reference the methods, they do not invoke them. You have to append the parentheses: cursor.close() and conn.close().
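For illustration, the end of your script could look like this (a minimal sketch reusing your own conn, cursor, and f objects; a with block around open() would be another way to make sure the file is closed):

conn.commit()

cursor.close()   # the parentheses actually call the method
conn.close()
f.close()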

I did look inside the cursor.executemany implementation. It is written in C and it really requires being passed a list of rows. (It does not accept a generator.)

To make the code more readable and understandable, I suggest defining your own chunking reader that wraps the normal reader and yields lists of rows of a predefined length.

Have a look at the following example:
#!python3

import csv

def chunking_reader(reader, chunk_size=10):
    try:
        while True:
            # Initialize the list of rows as empty, and then append
            # chunk_size rows from the reader.
            result = []
            for counter in range(chunk_size):
                row = next(reader)
                result.append(row)

            # Yield the chunk; when the generator is resumed, it continues
            # with the next loop iteration.
            # (The yield makes it a generator instead of a plain function.)
            yield result

    except StopIteration:
        # When the reader has no more rows, next(reader) raises
        # StopIteration and we get here. Yield whatever was collected
        # so far (fewer than chunk_size rows). Returning ends the
        # generator cleanly; re-raising StopIteration inside a generator
        # would become a RuntimeError in Python 3.7+ (PEP 479).
        if result:
            yield result
        return


# Simulated content of the csv file.
fname = 'data.csv'
with open(fname, 'w', encoding='utf_8', newline='') as f:
    writer = csv.writer(f)
    for x in range(95):            # here 95 rows hardwired -- for demonstration only
        writer.writerow([x] * 3)   # the row has 3 elements all with the value x

# Now access the csv file via csv.reader and loop via the chunking reader.
with open(fname, 'r', encoding='utf_8', newline='') as f:
    reader = csv.reader(f)
    for list_of_rows in chunking_reader(reader):
        print(list_of_rows)
        print('-----------------------------------------')


It prints
[['0', '0', '0'], ['1', '1', '1'], ['2', '2', '2'], ['3', '3', '3'], ['4', '4',
'4'], ['5', '5', '5'], ['6', '6', '6'], ['7', '7', '7'], ['8', '8', '8'], ['9',
'9', '9']]
-----------------------------------------
[['10', '10', '10'], ['11', '11', '11'], ['12', '12', '12'], ['13', '13', '13'],
 ['14', '14', '14'], ['15', '15', '15'], ['16', '16', '16'], ['17', '17', '17'],
 ['18', '18', '18'], ['19', '19', '19']]
-----------------------------------------
[['20', '20', '20'], ['21', '21', '21'], ['22', '22', '22'], ['23', '23', '23'],
 ['24', '24', '24'], ['25', '25', '25'], ['26', '26', '26'], ['27', '27', '27'],
 ['28', '28', '28'], ['29', '29', '29']]
-----------------------------------------
[['30', '30', '30'], ['31', '31', '31'], ['32', '32', '32'], ['33', '33', '33'],
 ['34', '34', '34'], ['35', '35', '35'], ['36', '36', '36'], ['37', '37', '37'],
 ['38', '38', '38'], ['39', '39', '39']]
-----------------------------------------
[['40', '40', '40'], ['41', '41', '41'], ['42', '42', '42'], ['43', '43', '43'],
 ['44', '44', '44'], ['45', '45', '45'], ['46', '46', '46'], ['47', '47', '47'],
 ['48', '48', '48'], ['49', '49', '49']]
-----------------------------------------
[['50', '50', '50'], ['51', '51', '51'], ['52', '52', '52'], ['53', '53', '53'],
 ['54', '54', '54'], ['55', '55', '55'], ['56', '56', '56'], ['57', '57', '57'],
 ['58', '58', '58'], ['59', '59', '59']]
-----------------------------------------
[['60', '60', '60'], ['61', '61', '61'], ['62', '62', '62'], ['63', '63', '63'],
 ['64', '64', '64'], ['65', '65', '65'], ['66', '66', '66'], ['67', '67', '67'],
 ['68', '68', '68'], ['69', '69', '69']]
-----------------------------------------
[['70', '70', '70'], ['71', '71', '71'], ['72', '72', '72'], ['73', '73', '73'],
 ['74', '74', '74'], ['75', '75', '75'], ['76', '76', '76'], ['77', '77', '77'],
 ['78', '78', '78'], ['79', '79', '79']]
-----------------------------------------
[['80', '80', '80'], ['81', '81', '81'], ['82', '82', '82'], ['83', '83', '83'],
 ['84', '84', '84'], ['85', '85', '85'], ['86', '86', '86'], ['87', '87', '87'],
 ['88', '88', '88'], ['89', '89', '89']]
-----------------------------------------
[['90', '90', '90'], ['91', '91', '91'], ['92', '92', '92'], ['93', '93', '93'],
 ['94', '94', '94']]
-----------------------------------------


Instead of printing, you would call executemany and commit for each chunk. Set the default value of the chunk size to fit your needs, or pass it explicitly.
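For instance, wiring the chunking reader into your loading script could look roughly like this (a sketch only, not tested against a real server; the table name, column list, and connection string are the placeholders from your own code, and chunk_size=1000 matches your follow-up):

#!python3
import csv
import ceODBC

# (chunking_reader is the generator defined above)

fname = r'c:\temp\myfile.csv'

with open(fname, 'r', encoding='utf_8', newline='') as f:
    reader = csv.reader(f, delimiter=',')
    header = next(reader)

    # Target columns explicitly defined, as in your original query.
    query = 'insert into mytable (col1,col2,col3...) values ({0})'
    query = query.format(','.join('?' * len(header)))

    conn = ceODBC.connect('***connection info***;Trusted_Connection=yes',
                          autocommit=False)
    cursor = conn.cursor()

    # Insert and commit one chunk at a time so that only chunk_size rows
    # are ever held in memory.
    for list_of_rows in chunking_reader(reader, chunk_size=1000):
        cursor.executemany(query, list_of_rows)
        conn.commit()

    cursor.close()
    conn.close()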