Today I submitted my code to Spark on YARN in cluster mode and ran into problem after problem, so I'm writing it all down. Submitting with --master yarn-client worked fine, but --master yarn-cluster kept failing, and with all sorts of different errors.
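For context, the two submissions looked roughly like this; the class name and jar path below are placeholders, not the real ones from this job:

spark-submit --master yarn-client  --class com.example.MyApp my-app.jar    # driver runs on the submitting machine, works
spark-submit --master yarn-cluster --class com.example.MyApp my-app.jar    # driver runs inside a YARN container, fails

The first error that came back: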
java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2233)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1405)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2284)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2202)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2278)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2202)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:427)
...
This error is usually taken to mean that the jar has not been deployed to every slave node. But in yarn-cluster mode the jars are normally distributed by the RPC framework anyway, and copying the jar onto every slave by hand made no difference at all; the error stayed.
So I went digging on Google and Stack Overflow. The reported causes are all over the map: some blame the classloader, some blame UDFs. One answer caught my attention:
If the code references Java code, it is best to put the built jar under $SPARK_HOME/jars, so that the jar is guaranteed to be on the classpath.
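A minimal sketch of what that means in practice, assuming the dependency jar is called my-deps.jar and the worker nodes are slave1 and slave2 (all placeholder names):

# put the jar on Spark's classpath on every node
cp  my-deps.jar $SPARK_HOME/jars/
scp my-deps.jar slave1:$SPARK_HOME/jars/
scp my-deps.jar slave2:$SPARK_HOME/jars/

Passing the jar with --jars on spark-submit is another way to get it onto the executors' classpath.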
I rearranged the jars as that answer suggested and resubmitted the job. Watching the logs through the YARN web UI, that error was gone, but the job still failed with a different one:
java.io.FileNotFoundException: File does not exist: hdfs://master:9000/xxx/xxxx/xxxx/application_1495996836198_0003/__spark_libs__1200479165381142167.zip
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1309)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
...
This error I recognized right away. When creating the SparkSession in my code I had set the master, and that master URL pointed at the Spark standalone master. So when the job was submitted on YARN, the configuration in the code took over and the job ended up running in standalone mode, which creates exactly this kind of mess and produces seemingly inexplicable bugs. Modifying the code and repackaging fixed it.
The fix:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  // .master("spark://master:7077")  // comment out the hard-coded master
  .appName("xxxxxxx")
  .getOrCreate()
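With the hard-coded master removed, the master is whatever spark-submit is given, so the same jar works in any deploy mode without code changes; for example (placeholder class and jar names again):

spark-submit --master yarn-cluster --class com.example.MyApp my-app.jar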
Along the way there were plenty of other errors, for example deserialization failures:
SerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:85)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Or class-cast errors like this one:
org.apache.spark.SparkException: Task failed while writing rows
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:191)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:190)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassCastException: scala.Tuple2 cannot be cast to com.xxx.xxxxx.ResultMerge
All of these errors disappeared once the master setting in the code was commented out. With so many unrelated exceptions showing up at once it is easy to get lost; fortunately a familiar java.io.FileNotFoundException eventually appeared, and that is what led to the solution.
One more error I hit along the way:
java.io.IOException: Cannot obtain block length for LocatedBlock{BP-1729427003-192.168.1.219-1527744820505:blk_1073742492_1669; getBlockSize()=24; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[192.168.1.219:50010,DS-e478076c-c3aa-4870-adce-7ffd6a49efe4,DISK], DatanodeInfoWithStorage[192.168.1.21:50010,DS-af806575-7404-45fd-bae0-0fcc59de7598,DISK]]}
This happens when something tries to read an HDFS file that is still being written, typically a file that Flume failed to close properly or a file left in a bad state by an HDFS restart. You can list which files are OPENFORWRITE or MISSING with:
hadoop fsck / -openforwrite | egrep "MISSING|OPENFORWRITE"
The command above pinpoints the affected files; deleting them makes the error go away.
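For example, if fsck reports a path such as /flume/data/broken.tmp (a made-up path here):

# delete the file stuck in OPENFORWRITE / MISSING state
hdfs dfs -rm /flume/data/broken.tmp

If the file still matters, hdfs debug recoverLease -path <file> can sometimes close it properly instead of deleting it.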