How do I modify the values of a column in a PySpark DataFrame?

The data looks like this:

Survived age
0 22.0
1 38.0
1 26.0
1 35.0
0 35.0
0 null
0 54.0
0 2.0
1 27.0
1 14.0
1 4.0
1 58.0
0 20.0
0 39.0
0 14.0
1 55.0
0 2.0
1 null
0 31.0
1 null
age_interval = [(lower, upper) for lower, upper in zip(range(0, 96, 5), range(5, 101, 5))]

def age_partition(age):
    """Bin an age into a 5-year interval label."""
    if age is None:  # check once, before looping over the intervals
        return "None"
    for lower, upper in age_interval:
        if lower <= age <= upper:
            return f"({lower}, {upper})"
    return "None"  # ages outside 0-100

I want to rewrite the age column so that each value is replaced by its interval label, e.g. 22.0 becomes (20, 25) and 38.0 becomes (35, 40).
The function above converts a single age value.

How do I apply it to the whole age column?

1 Answer
import pandas as pd

df = pd.read_csv('xxx.csv', header=0, encoding='utf-8')

age_interval = [(lower, upper) for lower, upper in zip(range(0, 96, 5), range(5, 101, 5))]

def age_partition(age):
    """Bin an age into a 5-year interval label."""
    if pd.isna(age):  # pandas reads missing values as NaN, not None
        return "None"
    for lower, upper in age_interval:
        if lower <= age <= upper:
            return f"({lower}, {upper})"
    return "None"  # ages outside 0-100

df['new_col'] = df.age.apply(age_partition)