Resource Center upload #10448
novice-gamer started this conversation in General
-
Same problem here, how to solve it?
-
Version 2.0.5
Background: the Hadoop cluster is in the cloud, and DolphinScheduler runs on a local machine.
HDFS is configured in common.properties. In the Resource Center's file management, creating folders works, but creating and uploading files gets stuck.
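A plausible explanation for that split behavior (an assumption, not confirmed in this thread): creating a folder is a metadata-only RPC to the NameNode, while creating or uploading a file also opens a write pipeline from the client directly to the DataNodes. With the cluster in the cloud and DolphinScheduler on a local machine, the DataNode addresses the NameNode hands back are often private IPs the local host cannot reach, so the write blocks. A minimal probe to run on the DolphinScheduler host, assuming the Hadoop client jars are on the classpath (x.x.x.x and the upload path are placeholders taken from the config below):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsProbe {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://x.x.x.x:8020"); // same value as common.properties

        try (FileSystem fs = FileSystem.get(conf)) {
            // mkdirs only talks to the NameNode -- this mirrors the folder
            // creation that already works in the Resource Center.
            Path dir = new Path("/daas/dcfl/test/lgx/probe");
            System.out.println("mkdirs: " + fs.mkdirs(dir));

            // Writing bytes opens a pipeline to the DataNodes -- if their
            // addresses are unreachable from this host, this call hangs
            // exactly like the file upload does.
            try (FSDataOutputStream out = fs.create(new Path(dir, "probe.txt"))) {
                out.writeBytes("hello\n");
            }
            System.out.println("write: ok");
        }
    }
}
```

If mkdirs prints true but the write never returns, the problem is DataNode reachability rather than DolphinScheduler itself.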
Config:
```
[root@localhost conf]# cat common.properties |grep -Ev "^#|^$"
data.basedir.path=/tmp/dolphinscheduler
resource.storage.type=HDFS
resource.upload.path=/daas/dcfl/test/lgx
hadoop.security.authentication.startup.state=false
java.security.krb5.conf.path=/application/dolphinscheduler/conf/krb5.conf
login.user.keytab.username=[email protected]
login.user.keytab.path=/application/dolphinscheduler/conf/hdfs.headless.keytab
kerberos.expire.time=2
hdfs.root.user=hadoop
fs.defaultFS=hdfs://x.x.x.x:8020
resource.manager.httpaddress.port=8088
yarn.resourcemanager.ha.rm.ids=x.x.x.x,x.x.x.x
yarn.application.status.address=http://yarnIp1:%s/ws/v1/cluster/apps/%s
yarn.job.history.status.address=http://yarnIp1:19888/ws/v1/history/mapreduce/jobs/%s
datasource.encryption.enable=false
datasource.encryption.salt=!@#$%^&*
support.hive.oneSession=false
sudo.enable=true
development.state=false
datasource.plugin.dir=lib/plugin/datasource
```
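If the probe above confirms the hang happens on the write, a common workaround for cloud clusters behind NAT is the standard HDFS client property dfs.client.use.datanode.hostname, which makes the client dial DataNodes by hostname instead of the IP the NameNode reports; the hostnames must then resolve (e.g. via /etc/hosts) to addresses reachable from the local machine. A sketch under that assumption, not a confirmed fix for this report:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HdfsProbeByHostname {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://x.x.x.x:8020");
        // Connect to DataNodes by hostname rather than the (possibly
        // private) IP returned by the NameNode.
        conf.setBoolean("dfs.client.use.datanode.hostname", true);

        try (FileSystem fs = FileSystem.get(conf)) {
            System.out.println("connected to " + fs.getUri());
        }
    }
}
```

For DolphinScheduler itself, the property has to reach its HDFS client; placing the cluster's core-site.xml and hdfs-site.xml (with the property set) into the conf directory next to common.properties is the usual way to pass extra Hadoop client settings to it, then restart the api-server and re-test the upload.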