
Chaining grouping, filtration and aggregation

The `DataFrameGroupBy.filter` method filters the groups and returns the `DataFrame` that contains the rows that passed the filter.
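
For instance, a minimal sketch (with a made-up two-column frame, just to show the types involved):

import pandas as pd

toy = pd.DataFrame({'A': [1, 1, 2], 'B': [10, 20, 30]})
gb = toy.groupby('A')

# filter hands back a plain DataFrame; the grouping is lost
filtered = gb.filter(lambda g: len(g) >= 2)
print(type(filtered).__name__)  # DataFrame, not DataFrameGroupBy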

But what can I do to obtain a new `DataFrameGroupBy` object instead of a `DataFrame` after filtration?

For example, let's say I have a `DataFrame` `df` with two columns, `A` and `B`. I want to obtain the average value of column `B` for each value of column `A`, as long as there are at least 5 rows in that group:

# pandas 0.18.0

# doesn't work because `filter` returns a DataFrame, not a GroupBy object
df.groupby('A').filter(lambda x: len(x) >= 5).mean()

# works, but slower and awkward to write because it needs to groupby('A') twice
df.groupby('A').filter(lambda x: len(x) >= 5).reset_index().groupby('A').mean()

# works, but more verbose than chaining
groups = df.groupby('A')
groups.mean()[groups.size() >= 5]

Answer

Here is some reproducible data:

import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame(np.random.randint(0, 10, (10, 2)), columns=list('AB'))

>>> df
   A  B
0  5  0
1  3  3
2  7  9
3  3  5
4  2  4
5  7  6
6  8  8
7  1  6
8  7  7
9  8  1

A sample `filter` application, demonstrating how it works on this data:

gb = df.groupby('A')
>>> gb.filter(lambda group: group.A.count() >= 3)
   A  B
2  7  9
5  7  6
8  7  7
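
For contrast, chaining .mean() directly onto that result (the question's first attempt) computes a single overall mean across the surviving rows, because filter hands back a plain DataFrame rather than a GroupBy object:

>>> gb.filter(lambda group: group.A.count() >= 3).mean()
A    7.000000
B    7.333333
dtype: float64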

Here are some of your options:

1) You can first filter based on the value counts, and then group:

vc = df.A.value_counts()

>>> df.loc[df.A.isin(vc[vc >= 2].index)].groupby('A').mean()
          B
A          
3  4.000000
7  7.333333
8  4.500000
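
A closely related variant uses transform to broadcast each group's size back onto the rows; this should be equivalent, though it isn't included in the timings below:

>>> df[df.groupby('A')['A'].transform('size') >= 2].groupby('A').mean()
          B
A          
3  4.000000
7  7.333333
8  4.500000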

2) Perform groupby twice, before and after the filter:

>>> (df.groupby('A', as_index=False)
       .filter(lambda group: group.A.count() >= 2)
       .groupby('A')
       .mean())
          B
A          
3  4.000000
7  7.333333
8  4.500000

3) Given that your first groupby exposes the groups (via its `.groups` attribute), you can also filter on those:

d = {k: v
     for k, v in gb.groups.items()
     if len(v) >= 2}  # gb.groups.iteritems() for Python 2

>>> d
{3: [1, 3], 7: [2, 5, 8], 8: [6, 9]}

This is a bit of a hack, but should be relatively efficient as you don't need to regroup.

>>> pd.DataFrame({col: [df.loc[d[col], 'B'].mean()] for col in d}).T.rename(columns={0: 'B'})
          B
3  4.000000
7  7.333333
8  4.500000
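
If you prefer to avoid the transpose-and-rename dance, an equivalent sketch builds the result as a Series directly from the same d:

>>> pd.Series({k: df.loc[v, 'B'].mean() for k, v in d.items()}, name='B').to_frame()
          B
3  4.000000
7  7.333333
8  4.500000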

Timings with 100k rows

np.random.seed(0)
df = pd.DataFrame(np.random.randint(0, 10, (100000, 2)), columns=list('AB'))

%timeit df.groupby('A', as_index=False).filter(lambda group: group['A'].count() >= 5).groupby('A').mean()
100 loops, best of 3: 18 ms per loop

%%timeit
vc = df.A.value_counts()
df.loc[df.A.isin(vc[vc >= 2].index)].groupby('A').mean()
100 loops, best of 3: 15.7 ms per loop