
I have the following dataframe:

import pandas as pd

df = pd.DataFrame([
    (1, 1, 'term1'),
    (1, 2, 'term2'),
    (1, 1, 'term1'),
    (1, 1, 'term2'),
    (2, 2, 'term3'),
    (2, 3, 'term1'),
    (2, 2, 'term1')
], columns=['id', 'group', 'term'])

I want to group it by id and group and count the occurrences of each term for each (id, group) pair.

So in the end I should get something like this:

term      term1  term2  term3
id group
1  1          2      1      0
   2          0      1      0
2  2          1      0      1
   3          1      0      0

I was able to achieve what I want by looping over all the rows with df.iterrows() and creating a new dataframe, but this is clearly inefficient. (If it helps, I know the list of all terms beforehand and there are ~10 of them).

It looks like I have to group by and then count values, so I tried df.groupby(['id', 'group']).value_counts(), which does not work because value_counts is a Series method, not a method of a grouped dataframe.

Is there any way I can achieve this without looping?
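
Selecting the term column first does run (a SeriesGroupBy has value_counts; the dataframe-level variant only exists in newer pandas), but it gives a long Series rather than the wide table I'm after:

df.groupby(['id', 'group'])['term'].value_counts()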

 Answers


Use groupby and size:

df.groupby(['id', 'group', 'term']).size().unstack(fill_value=0)

term      term1  term2  term3
id group
1  1          2      1      0
   2          0      1      0
2  2          1      0      1
   3          1      0      0
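
For comparison (not part of the original answer), pd.crosstab produces the same table in one call:

# Cross-tabulate (id, group) pairs against term; absent combinations become 0
pd.crosstab([df.id, df.group], df.term)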


Timing


1,000,000 rows

import numpy as np
import pandas as pd

df = pd.DataFrame(dict(id=np.random.choice(100, 1000000),
                       group=np.random.choice(20, 1000000),
                       term=np.random.choice(10, 1000000)))
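
The original benchmark plots are not reproduced here; a minimal sketch for timing the approaches yourself (the repetition count is an assumption):

import timeit

# Time each approach a few times on the 1,000,000-row frame built above
for label, fn in [
    ('groupby.size + unstack',
     lambda: df.groupby(['id', 'group', 'term']).size().unstack(fill_value=0)),
    ('pd.crosstab',
     lambda: pd.crosstab([df.id, df.group], df.term)),
]:
    print(label, timeit.timeit(fn, number=5))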


answered by maniclorn

One idea with pivoting. The input frame isn't shown in this answer, so the sketch below reconstructs it from the printed output:
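
import pandas as pd

# Long-format input, reconstructed from the merged output printed further down
df = pd.DataFrame({
    'traffic_type': ['desktop', 'mobileweb', 'total'] * 2,
    'date':         ['01/04/2018'] * 6,
    'region':       ['aug'] * 3 + ['world'] * 3,
    'total_views':  [50, 60, 200, 20, 30, 40],
})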

# Wide view: one column per traffic_type, views summed per (date, region)
df1 = df.pivot_table(index=['date', 'region'],
                     columns='traffic_type',
                     values='total_views',
                     aggfunc='sum')
print(df1)
traffic_type       desktop  mobileweb  total
date       region                           
01/04/2018 aug          50         60    200
           world        20         30     40

# Desktop share = desktop / total per (date, region); tag the rows as
# traffic_type 'total' so the merge attaches the ratio only there
df2 = (df1['desktop'].div(df1['total'])
          .reset_index(name='desktop_share')
          .assign(traffic_type='total'))

df = df.merge(df2, how='left')
print(df)
  traffic_type        date region  total_views  desktop_share
0      desktop  01/04/2018    aug           50            NaN
1    mobileweb  01/04/2018    aug           60            NaN
2        total  01/04/2018    aug          200           0.25
3      desktop  01/04/2018  world           20            NaN
4    mobileweb  01/04/2018  world           30            NaN
5        total  01/04/2018  world           40           0.50

Another idea with MultiIndex:

df1 = df.set_index(['traffic_type', 'date', 'region'])

# Slice out the desktop and total rows; relabelling desktop -> total makes
# the two Series share an index, so the division aligns row by row
a = df1.xs('desktop', drop_level=False).rename({'desktop': 'total'})
b = df1.xs('total', drop_level=False)

# assign aligns on the index, so only the total rows receive a ratio
df = df1.assign(desktop_share=a['total_views'].div(b['total_views'])).reset_index()
print(df)
  traffic_type        date region  total_views  desktop_share
0      desktop  01/04/2018    aug           50            NaN
1    mobileweb  01/04/2018    aug           60            NaN
2        total  01/04/2018    aug          200           0.25
3      desktop  01/04/2018  world           20            NaN
4    mobileweb  01/04/2018  world           30            NaN
5        total  01/04/2018  world           40           0.50
answered by gMale

You could group by both the bins and the username, compute the group sizes and then use unstack(). Neither bins nor df is defined in this answer, so the sketch below reconstructs plausible values consistent with the output:
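
import pandas as pd

# Hypothetical reconstruction: the bin edges and view counts are assumed,
# chosen so each user lands exactly once in every bin
bins = [1, 10, 25, 50, 100]
df = pd.DataFrame({
    'username': ['john', 'jane'] * 4,
    'views':    [3, 7, 15, 20, 30, 40, 60, 80],
})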

>>> groups = df.groupby(['username', pd.cut(df.views, bins)])
>>> groups.size().unstack()
views     (1, 10]  (10, 25]  (25, 50]  (50, 100]
username
jane            1         1         1          1
john            1         1         1          1
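
An aside, not from the original answer: pd.cut returns a Categorical, and newer pandas asks you to be explicit about whether unobserved categories are kept when grouping; passing observed=False preserves empty bins as columns:

groups = df.groupby(['username', pd.cut(df.views, bins)], observed=False)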
answered by Evernoob

Another method, without pivot_table: use np.where with groupby + agg. The input frame isn't shown here, so the sketch below reconstructs hypothetical data consistent with the printed result:
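
import numpy as np
import pandas as pd

# Hypothetical reconstruction -- city names and per-city populations are
# assumed, chosen to reproduce the printed totals
df = pd.DataFrame({
    'Area':       ['A', 'A', 'A', 'A', 'B', 'C', 'D'],
    'City':       ['a1', 'a2', 'a3', 'a4', 'b1', 'c1', 'd1'],
    'Condition':  ['Good', 'Bad', 'Good', 'Bad', 'Good', 'Good', 'Good'],
    'Population': [400, 300, 140, 100, 50, 170, 80],
})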

# Blank out Condition for non-Good rows so nunique counts only good cities
df['Condition'] = np.where(df['Condition'] == 'Good', df['City'], np.nan)
df = (df.groupby('Area')
        .agg({'City': 'nunique', 'Condition': 'nunique', 'Population': 'sum'})
        .rename(columns={'City': 'city_count', 'Condition': 'good_city_count'}))
df.loc['All', :] = df.sum()
df = df.astype(int).reset_index()

print(df)
  Area  city_count  good_city_count  Population
0    A           4                2         940
1    B           1                1          50
2    C           1                1         170
3    D           1                1          80
4  All           7                5        1240
answered by rici

Use GroupBy.cumcount to get a per-group counter, then reshape with unstack:

import pandas as pd

df1 = pd.DataFrame([["John", "guitar"],
                    ["Michael", "football"],
                    ["Andrew", "running"],
                    ["John", "dancing"],
                    ["Andrew", "cars"]], columns=['a', 'b'])

         a         b
0     John    guitar
1  Michael  football
2   Andrew   running
3     John   dancing
4   Andrew      cars


# Make the per-name counter the second index level, then pivot that
# level into columns
df = (df1.set_index(['a', df1.groupby('a').cumcount()])['b']
         .unstack()
         .rename_axis(-1)   # index name -1 becomes column 0 after the +1 rename
         .reset_index()
         .rename(columns=lambda x: x + 1))
print(df)

         0         1        2
0   Andrew   running     cars
1     John    guitar  dancing
2  Michael  football      NaN

Or aggregate lists and build a new DataFrame with the constructor:

# Collect each name's items into a list; the DataFrame constructor
# pads the ragged lists with None
s = df1.groupby('a')['b'].agg(list)
df = pd.DataFrame(s.values.tolist(), index=s.index).reset_index()
print(df)
         a         0        1
0   Andrew   running     cars
1     John    guitar  dancing
2  Michael  football     None
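
For comparison (not part of the original answer), the same expansion can be written with apply(pd.Series), which reads shorter but is generally slower since it builds one Series per row:

# Each list becomes a row; ragged ends are padded with NaN
df = s.apply(pd.Series).reset_index()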
answered by Ali