I have a large CSV file (~250000 rows) and before I work on fully parsing and sorting it I was trying to display only a part of it by writing it to a text file.
csvfile = open(file_path, "rb")
rows = csvfile.readlines()
text_file = open("output.txt", "w")
row_num = 0
while row_num < 20:
    text_file.write(rows[row_num])
    row_num += 1
text_file.close()
There's nothing specifically wrong with what you're doing, but it's not particularly Pythonic. In particular, reading the whole file into memory with readlines() at the start is wasteful if you only need 20 lines.
Instead you could use a for loop with enumerate and break when necessary.
csvfile = open(file_path, "rb")
text_file = open("output.txt", "w")
for i, row in enumerate(csvfile):
    text_file.write(row)
    if i >= 19:  # enumerate starts at 0, so this writes 20 rows
        break
text_file.close()
You could further improve this by using
with blocks to open the files, rather than closing them explicitly. For example:
with open(file_path, "rb") as csvfile:
    # your code here involving csvfile
# now the csvfile is closed!
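Putting the two suggestions together, here is a minimal runnable sketch. The file name and the sample data are illustrative assumptions, not from your actual CSV:

```python
# Sketch combining both ideas: "with" blocks plus enumerate/break.
# file_path and the generated sample rows are hypothetical.
file_path = "input.csv"

# Create a small demo CSV so the sketch runs on its own.
with open(file_path, "wb") as f:
    for n in range(100):
        f.write(b"col_a,col_b,%d\n" % n)

# Copy only the first 20 rows; both files are closed automatically
# when the "with" block exits, even if an exception is raised.
with open(file_path, "rb") as csvfile, open("output.txt", "wb") as out:
    for i, row in enumerate(csvfile):
        out.write(row)
        if i >= 19:  # enumerate starts at 0, so this stops after 20 rows
            break
```

Note that the output file is opened in "wb" to match the bytes read back from a file opened in "rb"; in Python 2 you could use "w" and "r" interchangeably here.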
Also note that Python might not be the best tool for this - you could do it directly from Bash, for example, with just
head -n20 csvfile.csv > output.txt.