START END DAYS MIN
2007-06-29 2007-06-29 1 -0.006408
2007-07-04 2007-07-04 1 -0.000003
2007-07-10 2007-07-11 2 -0.018858
2007-07-16 2007-07-17 2 -0.008562
Ideally the day count would exclude weekends (so 7/13 through 7/16 would count as 1 day), but I know that usually gets complicated.
The […] method seems to do a version of what I want, but according to the documentation, setting axis=1 does not return the collection of index values I expected.
Edit: I should have specified that I'm looking for a solution that does not require looping.
5 Answers
0 votes
Solution with named aggregation, working in pandas 0.25+:
# convert the DatetimeIndex to a column
df = df.reset_index()
# mask rows where VAL equals 0
m = df['VAL'].eq(0)
# build group ids, keeping only the non-zero rows via the inverted mask ~m
g = m.ne(m.shift()).cumsum()[~m]
# aggregate per group
df1 = df.groupby(g).agg(START=('DATE','first'),
                        END=('DATE','last'),
                        DAYS=('DATE','size'),
                        MIN=('VAL','min')).reset_index(drop=True)
print(df1)
START END DAYS MIN
0 2007-06-29 2007-06-29 1 -0.006408
1 2007-07-04 2007-07-04 1 -0.000003
2 2007-07-10 2007-07-11 2 -0.018858
3 2007-07-16 2007-07-17 2 -0.008562
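To see what the grouping key actually contains, here is a minimal sketch (assuming df, m and g are as defined above and the data is the 16-row example frame shown later in the post): every flip of m between True and False increments the cumulative sum, and indexing with ~m keeps only the non-zero rows, so each run of negative values shares one id.
# sketch: inspect the run ids that groupby receives
# m                        -> True on zero rows, False on negative rows
# m.ne(m.shift()).cumsum() -> a new integer every time m changes value
# [~m]                     -> restrict to the negative rows only
print(g.tolist())   # [2, 4, 6, 6, 8, 8] for the example data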
A solution for pandas <0.25 is possible by passing a dictionary to agg
and then setting the new column names afterwards:
df = df.reset_index()
m = df['VAL'].eq(0)
g = m.ne(m.shift()).cumsum()[~m]
df1 = df.groupby(g).agg({'DATE':['first','last','size'], 'VAL':'min'}).reset_index(drop=True)
df1.columns = ['START','END','DAYS','MIN']
print (df1)
START END DAYS MIN
0 2007-06-29 2007-06-29 1 -0.006408
1 2007-07-04 2007-07-04 1 -0.000003
2 2007-07-10 2007-07-11 2 -0.018858
3 2007-07-16 2007-07-17 2 -0.008562
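The question also notes that, ideally, DAYS would not count weekends. That is not part of this answer, but a minimal sketch of one way to do it (assuming df and g from above, and that the DATE values carry no time component) is to compute a business-day count from the START/END pairs with np.busday_count:
import numpy as np

df1 = df.groupby(g).agg(START=('DATE','first'),
                        END=('DATE','last'),
                        MIN=('VAL','min')).reset_index(drop=True)
# np.busday_count counts weekdays in the half-open interval [start, end),
# so shift the end date by one day to make the range inclusive
end_incl = df1['END'].values.astype('datetime64[D]') + np.timedelta64(1, 'D')
df1['DAYS'] = np.busday_count(df1['START'].values.astype('datetime64[D]'), end_incl)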
0 votes
First create a flag that marks the non-zero records and assigns each consecutive run to the same group, then groupby and compute the attributes you want.
import numpy as np
import pandas as pd

(df.assign(Flag = np.where(df.VAL.ge(0), 1, np.nan))
   .assign(Flag = lambda x: x.Flag.fillna(x.Flag.cumsum().ffill()))
   .loc[lambda x: x.Flag.ne(1)]
   .groupby('Flag')
   .apply(lambda x: [x.DATE.iloc[0], x.DATE.iloc[-1], len(x), x.VAL.min()])
   .apply(pd.Series)
   .set_axis(['START','END','DAYS','MIN'], axis=1, inplace=False))
START END DAYS MIN
3.0 2007-06-29 2007-06-29 1 -0.006408
5.0 2007-07-04 2007-07-04 1 -0.000003
8.0 2007-07-10 2007-07-11 2 -0.018858
10.0 2007-07-16 2007-07-17 2 -0.008562
0 votes
This uses some of the same logic as the earlier solution (by Allen), but with fewer apply calls. Not sure how the performance compares.
# a new group begins when the previous value is 0 but the current one is negative
df['NEW_GROUP'] = df['VAL'].shift(1) == 0
df['NEW_GROUP'] &= df['VAL'] < 0
# Group by the running count of new-group starts, which gives the group number.
# Return a Series directly from `apply` to avoid further transformations.
print(df.loc[df['VAL'] < 0]
        .groupby(df['NEW_GROUP'].cumsum())
        .apply(lambda x: pd.Series([x.DATE.iloc[0], x.DATE.iloc[-1], x.VAL.min(), len(x)],
                                   index=['START','END','MIN','DAYS'])))
Output:
START END MIN DAYS
NEW_GROUP
1 2007-06-29 2007-06-29 -0.006408 1
2 2007-07-04 2007-07-04 -0.000003 1
3 2007-07-10 2007-07-11 -0.018858 2
4 2007-07-16 2007-07-17 -0.008562 2
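As a quick illustration (a sketch against the 16-row example frame shown in the last answer, assuming the code above has already added the NEW_GROUP column): NEW_GROUP is True only on the first negative row of each run, so its cumulative sum labels the runs 1 through 4, and restricting to VAL < 0 keeps exactly the rows fed to groupby.
# sketch: the group labels used as the groupby key
print(df['NEW_GROUP'].cumsum()[df['VAL'] < 0].tolist())   # [1, 2, 3, 3, 4, 4]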
0 votes
A numpy solution, where df is your example DataFrame.
import numpy as np
import pandas as pd

# get the data into numpy
date = df.index.to_numpy(dtype='M8[D]')
val = df['VAL'].to_numpy()
# find the switches between zero and non-zero runs
on, off = np.diff(val != 0.0, prepend=False, append=False).nonzero()[0].reshape(-1, 2).T
# use the switch points to calculate all desired quantities
out = pd.DataFrame({'START': date[on],
                    'END': date[off - 1],
                    'DAYS': np.busday_count(date[on], date[off - 1]) + 1,
                    'MIN': np.minimum.reduceat(val, on)})
# admire
# START END DAYS MIN
# 0 2007-06-29 2007-06-29 1 -0.006408
# 1 2007-07-04 2007-07-04 1 -0.000003
# 2 2007-07-10 2007-07-11 2 -0.018858
# 3 2007-07-16 2007-07-17 2 -0.008562
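A quick sanity check of the edge-detection step (a sketch, using the 16-row example frame from the last answer): every non-zero run contributes exactly one False-to-True and one True-to-False transition in the padded mask, so the nonzero positions pair up into start indices and one-past-the-end indices.
# sketch: the switch points found for the example data
# on  -> index of the first non-zero value in each run
# off -> index one past the last non-zero value in each run
print(on, off)   # [ 3  6 10 14] [ 4  7 12 16]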
0 votes
You can use this approach.
First, read the DataFrame from the file:
import pandas as pd
df=pd.read_csv("file.csv")
DATE VAL
0 2007-06-26 0.000000
1 2007-06-27 0.000000
2 2007-06-28 0.000000
3 2007-06-29 -0.006408
4 2007-07-02 0.000000
5 2007-07-03 0.000000
6 2007-07-04 -0.000003
7 2007-07-05 0.000000
8 2007-07-06 0.000000
9 2007-07-09 0.000000
10 2007-07-10 -0.018858
11 2007-07-11 -0.015624
12 2007-07-12 0.000000
13 2007-07-13 0.000000
14 2007-07-16 -0.008562
15 2007-07-17 -0.006587
And the main code:
from datetime import datetime, timedelta

st = 0
last_date = 0
min_val = 0
mat = []
for index, row in df.iterrows():
    if row['VAL'] != 0:
        st = st + 1
        datetime_object = datetime.strptime(row['DATE'], '%Y-%m-%d')
        if st == 1:
            # first non-zero row overall: open the first period
            start = datetime_object
            last_date = start
            if row['VAL'] < min_val:
                min_val = row['VAL']
        else:
            if last_date + timedelta(days=1) == datetime_object:
                # consecutive calendar day: extend the current period
                last_date = datetime_object
                if row['VAL'] < min_val:
                    min_val = row['VAL']
            else:
                # gap found: close the current period and start a new one
                arr = []
                arr.append(str(start.date()))
                arr.append(str(last_date.date()))
                arr.append(((last_date - start).days) + 1)
                arr.append(min_val)
                start = datetime_object
                last_date = datetime_object
                min_val = row['VAL']
                mat.append(arr)
# close the last open period
arr = []
arr.append(str(start.date()))
arr.append(str(last_date.date()))
arr.append(((last_date - start).days) + 1)
arr.append(min_val)
mat.append(arr)
df = pd.DataFrame(mat, columns=['start', 'end', 'days', 'min'])
start end days min