user3264280 - 6 months ago
Python Question

Writing to a csv file in a particular format

I have a huge csv file which contains timestamps and seven other columns.

The timestamps are spaced at 10-second intervals. I've calculated the average, min, and max of these values on an hourly basis.

I need to put the output in another csv file organized in the following manner:

timestamp, Name_of_header, min, max, average
2012-05-04 06:00:00, "data1", 25.0, 56.0, 40.5
2012-05-04 06:00:00, "data2", 30.0, 50.0, 40.0
..
..
..
2012-05-04 08:00:00, "data1", 10.0, 20.0, 15.0
2012-05-04 08:00:00, "data2", 15.0, 30.0, 22.5
..
..


My working code, which gives the average, min, and max on an hourly basis, is as follows:

from datetime import datetime
import numpy as np
import pandas as pd

def same_day(date_string):  # remove year
    return datetime.strptime(date_string, "%Y-%m-%d %H:%M:%S").strftime('%H:%M')

df = pd.read_csv('convertcsv.csv', parse_dates=True, index_col=0,
                 usecols=[0, 1, 2, 3, 4, 5, 6, 7],
                 names=['date', 'data1', 'data2', 'data3', 'data4', 'data5', 'data6', 'data7'])
hourly_avg = df.groupby(pd.TimeGrouper('H'))
print(hourly_avg['data1'].agg([np.mean, np.min, np.max]))
hourly_avg.mean().to_csv('file.csv')


I'm stuck at getting all the values in the format that I need. How can I put all of data1, data2, data3 .. under one header and then print their average, min, and max values one row at a time?

Answer

It seems that you have quite a few columns to aggregate - assuming 'date' is your timestamp, there are seven of them, i.e. data1 - data7. If you apply three aggregation functions (mean, min, max) to these seven columns, you'll get 7 x 3 = 21 columns with a hierarchical MultiIndex (where .agg(dict) works differently than for 'ordinary' columns). An example follows, including saving to csv at the end. See the GroupBy docs and to_csv docs.
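As an aside on the .agg(dict) form mentioned above: with ordinary (non-hierarchical) columns, the dict maps each column to its own function or list of functions, and the result has one row per function name, with NaN where a function wasn't applied to a column. A minimal sketch with made-up column names:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'data1': [1.0, 2.0, 3.0],
                   'data2': [4.0, 5.0, 6.0]})

# Each column gets its own aggregation(s); the result is indexed by
# function name, with NaN for combinations that weren't requested.
out = df.agg({'data1': ['mean', 'min'], 'data2': ['max']})
```
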

Sample data:

import numpy as np
import pandas as pd
from datetime import date

df = pd.DataFrame(np.random.random((1000, 7)),
                  columns=['data_' + str(i) for i in range(7)],
                  index=pd.date_range(date(2015, 11, 29), periods=1000, freq='15S'))

                       data_0    data_1    data_2    data_3    data_4  \
2015-11-29 00:00:00  0.331546  0.317863  0.616388  0.784063  0.968004   
2015-11-29 00:00:15  0.299468  0.971885  0.954483  0.856748  0.230223   
2015-11-29 00:00:30  0.246160  0.618193  0.902762  0.518121  0.705881   
2015-11-29 00:00:45  0.875753  0.629994  0.029394  0.883197  0.939194   
2015-11-29 00:01:00  0.696372  0.536321  0.632785  0.009850  0.951280   

                       data_5    data_6  
2015-11-29 00:00:00  0.829210  0.323449  
2015-11-29 00:00:15  0.753155  0.619336  
2015-11-29 00:00:30  0.149181  0.679280  
2015-11-29 00:00:45  0.236912  0.024689  
2015-11-29 00:01:00  0.171320  0.520080

Grouping by hour, calculating group-wise stats:

hourly_stats = df.groupby(pd.TimeGrouper('H')).agg([np.mean, np.min, np.max])

                       data_0                        data_1            \
                         mean      amin      amax      mean      amin   
2015-11-29 00:00:00  0.493442  0.003107  0.999432  0.495162  0.003796   
2015-11-29 01:00:00  0.506108  0.001073  0.988778  0.485588  0.001207   
2015-11-29 02:00:00  0.484978  0.003669  0.993387  0.495174  0.006063   
2015-11-29 03:00:00  0.510354  0.011436  0.997326  0.489296  0.000456   
2015-11-29 04:00:00  0.544405  0.040678  0.965896  0.542452  0.049254   

                                 data_2                        data_3  \
                         amax      mean      amin      amax      mean   
2015-11-29 00:00:00  0.994239  0.487602  0.001981  0.994280  0.491253   
2015-11-29 01:00:00  0.998454  0.467457  0.001216  0.997955  0.511204   
2015-11-29 02:00:00  0.999235  0.513108  0.003046  0.998595  0.524719   
2015-11-29 03:00:00  0.996543  0.528340  0.002706  0.995325  0.488485   
2015-11-29 04:00:00  0.984905  0.506434  0.025606  0.925806  0.535858   

                       ...                 data_4                      \
                       ...         amax      mean      amin      amax   
2015-11-29 00:00:00    ...     0.991856  0.487274  0.010722  0.998514   
2015-11-29 01:00:00    ...     0.999489  0.533160  0.003481  0.982976   
2015-11-29 02:00:00    ...     0.998216  0.482723  0.001257  0.999445   
2015-11-29 03:00:00    ...     0.999063  0.495377  0.002491  0.997803   
2015-11-29 04:00:00    ...     0.948984  0.522290  0.011772  0.998149   

                       data_5                        data_6            \
                         mean      amin      amax      mean      amin   
2015-11-29 00:00:00  0.523222  0.005653  0.999984  0.488832  0.004678   
2015-11-29 01:00:00  0.490922  0.000483  0.998804  0.500331  0.001843   
2015-11-29 02:00:00  0.478979  0.011122  0.998308  0.517226  0.001161   
2015-11-29 03:00:00  0.527019  0.008794  0.999160  0.461164  0.002979   
2015-11-29 04:00:00  0.415962  0.004843  0.975577  0.523150  0.050968   


                         amax  
2015-11-29 00:00:00  0.993302  
2015-11-29 01:00:00  0.998207  
2015-11-29 02:00:00  0.997752  
2015-11-29 03:00:00  0.999652  
2015-11-29 04:00:00  0.998608  

You can save the result straight to csv (option: rename the 21 columns first with hourly_stats.columns = ['col_name_1', 'col_name_2', .... 'col_name_21']):

hourly_stats.to_csv('path/to/file.csv')
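To get the exact long format the question asks for (one row per timestamp and column name, with min, max, and average as columns), the first level of the hierarchical columns can be moved into the index with stack. A sketch, assuming the same sample data as above; note it uses pd.Grouper (the newer name for TimeGrouper) and string aggregator names, so the stat columns are simply 'mean', 'min', 'max':

```python
import numpy as np
import pandas as pd
from datetime import date

# Same sample data as above
df = pd.DataFrame(np.random.random((1000, 7)),
                  columns=['data_' + str(i) for i in range(7)],
                  index=pd.date_range(date(2015, 11, 29), periods=1000, freq='15S'))

# String aggregator names give plain 'mean'/'min'/'max' column labels
hourly_stats = df.groupby(pd.Grouper(freq='H')).agg(['mean', 'min', 'max'])

# Stack the first column level: one row per (hour, data column) pair,
# with the three stats as ordinary columns.
long_form = hourly_stats.stack(level=0)
long_form = long_form[['min', 'max', 'mean']].reset_index()
long_form.columns = ['timestamp', 'Name_of_header', 'min', 'max', 'average']

long_form.to_csv('hourly_long.csv', index=False)
```

With the 1000-row sample (5 hour groups x 7 data columns) this yields 35 rows, matching the requested layout: timestamp, Name_of_header, min, max, average.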