- User zhangsan runs an INSERT OVERWRITE:
INSERT OVERWRITE TABLE temp.push_temp PARTITION(d_layer='app_video_uid_d_1') SELECT ...
- It fails with an error saying the destination directory could not be cleaned up:
Failed with exception Directory hdfs://Ucluster/user/hive/warehouse/temp.db/push_temp/d_layer=app_video_uid_d_1 could not be cleaned up. FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. Directory hdfs://Ucluster/user/hive/warehouse/temp.db/push_temp/d_layer=app_video_uid_d_1 could not be cleaned up.
- Checking the HDFS directory permissions shows the directory is world-writable, and its owner is lisi:
drwxrwxrwt - lisi supergroup 0 2021-11-29 15:04 /user/hive/warehouse/temp.db/push_temp/d_layer=app_video_uid_d_1
- Running the same SQL from step 1 as user lisi succeeds
The culprit, in a word: the sticky bit.
Look closely at the directory permissions above: the final character is "t", which means the sticky bit is set on this directory. With the sticky bit on, a file inside the directory can only be deleted by the file's owner (or the directory's owner, or the superuser).
# A non-owner tries to delete a file under the sticky-bit directory
$ hadoop fs -rm /user/hive/warehouse/temp.db/push_temp/d_layer=app_video_uid_d_1/000000_0
21/11/29 16:32:59 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 7320 minutes, Emptier interval = 0 minutes.
rm: Failed to move to trash: hdfs://Ucluster/user/hive/warehouse/temp.db/push_temp/d_layer=app_video_uid_d_1/000000_0: Permission denied by sticky bit setting: user=admin, inode=000000_0
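The sticky-bit semantics are the same as on a local POSIX filesystem, so the trailing "t" can be reproduced locally without a cluster. A minimal sketch (using a temp directory and plain chmod, not hadoop fs):

```shell
#!/bin/sh
# Create a local directory with mode 1777, the same mode (rwxrwxrwt)
# as the Hive warehouse directory in the listing above.
demo=$(mktemp -d)
chmod 1777 "$demo"

# ls shows the trailing "t"; [ -k ] is the POSIX test for the sticky bit.
ls -ld "$demo"
[ -k "$demo" ] && echo "sticky bit is set"

rmdir "$demo"
```

The `[ -k dir ]` test is a quick way to confirm whether the sticky bit is the reason a delete is being refused.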
INSERT OVERWRITE has to delete the existing files in the target directory first, but the sticky bit blocks that deletion (zhangsan owns neither the files nor the directory), so the HQL fails.
Solution: remove the sticky bit from the directory.
# Remove the sticky bit
hadoop fs -chmod -R o-t /user/hive/warehouse/temp.db/push_temp/d_layer=app_video_uid_d_1
# Restore the sticky bit
hadoop fs -chmod -R o+t /user/hive/warehouse/temp.db/push_temp/d_layer=app_video_uid_d_1
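The o-t / o+t toggle above uses standard chmod symbolic mode, so its effect can be sketched locally with plain chmod (a temp directory stands in for the real warehouse path; this is an illustration, not a change to the cluster):

```shell
#!/bin/sh
# Start from a sticky-bit directory, like the problem directory.
d=$(mktemp -d)
chmod 1777 "$d"

# o-t removes the sticky bit: non-owners can delete files again,
# which is what INSERT OVERWRITE needs for its cleanup step.
chmod o-t "$d"
[ -k "$d" ] || echo "sticky bit cleared"

# o+t puts it back once the overwrite is done, if the protection is wanted.
chmod o+t "$d"
[ -k "$d" ] && echo "sticky bit restored"

rmdir "$d"
```

Note that the hadoop command in the fix uses -R, so the bit is toggled on the partition directory and everything beneath it.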



