There are many ways to do this:

Option 1. Use selectExpr.

data = sqlContext.createDataFrame([("Alberto", 2), ("Dakota", 2)],
                                  ["Name", "askdaosdka"])
data.show()
data.printSchema()

Output:
+-------+----------+
| Name|askdaosdka|
+-------+----------+
|Alberto| 2|
| Dakota| 2|
+-------+----------+
root
 |-- Name: string (nullable = true)
 |-- askdaosdka: long (nullable = true)
df = data.selectExpr("Name as name", "askdaosdka as age")
df.show()
df.printSchema()

Output:
+-------+---+
|   name|age|
+-------+---+
|Alberto| 2|
| Dakota| 2|
+-------+---+
root
 |-- name: string (nullable = true)
 |-- age: long (nullable = true)
Option 2. Use withColumnRenamed; note that this method lets you "overwrite" the same column. For Python 3, replace xrange with range.

from functools import reduce

oldColumns = data.schema.names
newColumns = ["name", "age"]

df = reduce(lambda data, idx: data.withColumnRenamed(oldColumns[idx], newColumns[idx]),
            xrange(len(oldColumns)), data)
df.printSchema()
df.show()

Option 3. Use alias; in Scala, you can also use as.
from pyspark.sql.functions import col

data = data.select(col("Name").alias("name"), col("askdaosdka").alias("age"))
data.show()

Output:
+-------+---+
|   name|age|
+-------+---+
|Alberto| 2|
| Dakota| 2|
+-------+---+
Option 4. Use sqlContext.sql, which lets you run SQL queries on DataFrames registered as tables.

sqlContext.registerDataFrameAsTable(data, "myTable")
df2 = sqlContext.sql("SELECT Name AS name, askdaosdka AS age FROM myTable")
df2.show()

Output:
+-------+---+
|   name|age|
+-------+---+
|Alberto| 2|
| Dakota| 2|
+-------+---+
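The reduce call in Option 2 can feel opaque at first glance: it folds over the column indices, threading the DataFrame through one withColumnRenamed per step. Here is a minimal pure-Python sketch of that same fold that runs without Spark; the dict-of-lists "frame" and the rename_column helper are stand-ins for illustration, not Spark APIs.

```python
from functools import reduce

# A toy "DataFrame": column name -> list of values (illustration only).
frame = {"Name": ["Alberto", "Dakota"], "askdaosdka": [2, 2]}

def rename_column(df, old, new):
    """Return a copy of the toy frame with one column renamed
    (stand-in for DataFrame.withColumnRenamed)."""
    return {new if k == old else k: v for k, v in df.items()}

old_columns = list(frame)  # ["Name", "askdaosdka"]
new_columns = ["name", "age"]

# Fold over the column indices, passing the accumulated frame into each
# single-column rename -- the same shape as the reduce(...) in Option 2.
renamed = reduce(
    lambda df, idx: rename_column(df, old_columns[idx], new_columns[idx]),
    range(len(old_columns)),
    frame,
)

print(list(renamed))  # ['name', 'age']
```

The accumulator starts as the original frame, and each lambda invocation returns a new frame with one more column renamed, which is exactly why the Spark version can chain an arbitrary number of withColumnRenamed calls in one expression.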