Spark DataFrame join with a Scala Seq throws NotSerializableException?

I want to use the DataFrame multi-column join in Java spark-sql. Looking at the API, a multi-column join requires passing a usingColumns parameter:

public org.apache.spark.sql.DataFrame join(org.apache.spark.sql.DataFrame right, scala.collection.Seq<java.lang.String> usingColumns, java.lang.String joinType)


So in my Java code I converted a List into a Scala Seq, like this:

List<String> tmp = Arrays.asList(
        ColumnUtil.PRODUCT_COLUMN,
        ColumnUtil.EVENT_ID_COLUMN
);
scala.collection.Seq<String> usingColumns = JavaConverters.asScalaIteratorConverter(tmp.iterator()).asScala().toSeq();
DataFrame unionDf = uvDataframe.join(deviceUvDataframe, usingColumns, "inner");


But when the join is executed it fails with:

Caused by: java.io.NotSerializableException: java.util.AbstractList$Itr
Serialization stack:

    at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:101)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:301)
    ... 49 more
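For context, the exception names java.util.AbstractList$Itr, i.e. the Java iterator that asScalaIteratorConverter wraps. A stdlib-only sketch (column names here are placeholders, not the real ColumnUtil constants) showing that a Java List is serializable while its iterator is not, which is what ends up captured when the Seq is built lazily from the iterator:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.Arrays;
import java.util.List;

public class IteratorSerializationDemo {
    // Try to Java-serialize an object, as Spark's JavaSerializer does.
    static boolean isSerializable(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (IOException e) {
            return false; // NotSerializableException is an IOException
        }
    }

    public static void main(String[] args) {
        List<String> cols = Arrays.asList("product", "event_id");
        // The List itself serializes fine...
        System.out.println(isSerializable(cols));
        // ...but its iterator (java.util.AbstractList$Itr) does not,
        // matching the class named in the Spark serialization stack.
        System.out.println(isSerializable(cols.iterator()));
    }
}
```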

I have tested the following two join overloads and both work fine; only this multi-column variant runs into the serialization problem. Does anyone know how to fix it?

public org.apache.spark.sql.DataFrame join(org.apache.spark.sql.DataFrame right)

public org.apache.spark.sql.DataFrame join(org.apache.spark.sql.DataFrame right, java.lang.String usingColumn)