(1)
Continuing from yesterday: running Spark from Java on Windows 10, I hit the error "HADOOP_HOME and hadoop.home.dir are unset".
The articles I referenced are https://blog.csdn.net/u012662688/article/details/118962916 and https://blog.csdn.net/juhua2012/article/details/82215729 (the latter addresses a problem during extraction of the Hadoop archive: "hadoop-2.10.1\lib\native\libhadoop.so - cannot create symbolic link").
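The error means that on Windows, Spark cannot locate the Hadoop native utilities (winutils.exe). Besides setting the HADOOP_HOME environment variable as the referenced article does, the same directory can be supplied from Java code via the hadoop.home.dir system property. A minimal sketch, assuming winutils.exe has been unpacked under C:\hadoop\bin (the path is a placeholder, adjust to your install):

```java
public class HadoopHomeFix {
    // Set hadoop.home.dir programmatically; this must run before any
    // Spark/Hadoop class tries to use the Hadoop shell utilities.
    static void configureHadoopHome(String hadoopHome) {
        System.setProperty("hadoop.home.dir", hadoopHome);
    }

    public static void main(String[] args) {
        // "C:\\hadoop" is an assumed location; winutils.exe must sit in its bin\ subfolder.
        configureHadoopHome("C:\\hadoop");
        // SparkSession / JavaSparkContext creation would follow here.
        System.out.println(System.getProperty("hadoop.home.dir"));
    }
}
```

Setting the property in code keeps the fix inside the project, which is convenient when teammates have Hadoop unpacked to different paths.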
(2)
Immediately afterwards another error appeared:

Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 8.0 failed 1 times, most recent failure: Lost task 1.0 in stage 8.0 (TID 207, localhost, executor driver): java.sql.SQLException: The server time zone value '�й���ʱ��' is unrecognized or represents more than one time zone. You must configure either the server or JDBC driver (via the 'serverTimezone' configuration property) to use a more specifc time zone value if you want to utilize time zone support.
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:129)

The root cause is an incorrect database connection URL: the MySQL driver cannot interpret the server's locale-specific time-zone name (the '�й���ʱ��' in the message is that name, garbled).
The article I referenced is https://blog.csdn.net/qq_36350532/article/details/81534812. Following its first method, appending an explicit time zone to the database URL, solved the problem.
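That first method amounts to adding a serverTimezone parameter to the JDBC URL. A minimal sketch; the host, port, and database name are placeholders, and UTC is used here, use the zone that matches your server (e.g. Asia/Shanghai):

```java
public class JdbcUrlFix {
    // Append serverTimezone to a JDBC URL, using '?' or '&' as appropriate,
    // so MySQL Connector/J no longer tries to parse the server's locale name.
    static String withTimezone(String baseUrl, String zone) {
        String sep = baseUrl.contains("?") ? "&" : "?";
        return baseUrl + sep + "serverTimezone=" + zone;
    }

    public static void main(String[] args) {
        String url = "jdbc:mysql://localhost:3306/mydb?useSSL=false";
        System.out.println(withTimezone(url, "UTC"));
        // prints jdbc:mysql://localhost:3306/mydb?useSSL=false&serverTimezone=UTC
    }
}
```

The same URL is then passed unchanged to Spark's JDBC reader or DriverManager.getConnection.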
A takeaway:
When you hit a problem, read the error message carefully, then search for a fix based on what it actually says. If you misread the error, you look in the wrong direction, never find the answer, and get half the result for twice the effort. Read it correctly and it points you straight to the root cause.



