Python Question

How to resolve pandas memory issues while reading big CSV files

I have a 100 GB CSV file with millions of rows. I need to read, say, 10,000 rows at a time into a pandas DataFrame and write them to SQL Server in chunks.

I have used chunksize as well as iterator as suggested, and have gone through many similar questions, but I am still getting an out-of-memory error.

Can you suggest code to read a very big CSV file into a pandas DataFrame iteratively?



import pandas as pd

for chunk in pd.read_csv(filename, chunksize=10**5):
    chunk.to_sql('table_name', conn, if_exists='append')

where conn is a SQLAlchemy engine (created by sqlalchemy.create_engine(...))
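
For completeness, here is a minimal end-to-end sketch, assuming SQL Server is reached through the pyodbc driver; the connection string, file path, chunk size, and table name below are placeholders you would adapt to your setup.

import pandas as pd
import sqlalchemy

# Placeholder connection string for SQL Server via pyodbc; replace the
# credentials, host, database, and driver name with your own.
engine = sqlalchemy.create_engine(
    "mssql+pyodbc://user:password@my_server/my_database"
    "?driver=ODBC+Driver+17+for+SQL+Server",
    fast_executemany=True,  # optional: batches the INSERTs issued by to_sql
)

csv_path = "big_file.csv"  # placeholder path to the 100 GB file
rows_per_chunk = 10_000    # the 10,000-row batches mentioned in the question

# chunksize makes read_csv return an iterator of DataFrames, so only one
# chunk of rows is held in memory at a time instead of the whole file.
for chunk in pd.read_csv(csv_path, chunksize=rows_per_chunk):
    chunk.to_sql(
        "table_name",
        engine,
        if_exists="append",  # keep appending rows after the first chunk
        index=False,         # do not write the DataFrame index as a column
    )

If memory still grows, you can also pass a chunksize to to_sql to limit how many rows are buffered per INSERT, and use the usecols and dtype arguments of read_csv so each chunk carries only the columns and types you actually need.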