Showing 48 changed files with 2,760 additions and 2 deletions.
@@ -32,3 +32,7 @@ awscli-exe*
*.key
*.repo
*.jar

# detritus produced by kuttl
kubeconfig*
kuttl-report-*.xml
@@ -0,0 +1,39 @@
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Base

Some values such as the SERVICE name, the SERVICEACCOUNT name,
and the RBAC role are hard-coded in environment-configmap.yaml
and supplied to the pods as environment variables. Other
hard-codings include the service name ('hadoop') and the
namespace we run in (also 'hadoop').

The Hadoop Configuration system can interpolate environment variables
into '\*.xml' file values ONLY. See the
[Configuration Javadoc](http://hadoop.apache.org/docs/current/api/org/apache/hadoop/conf/Configuration.html).
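For instance, value-only interpolation means a property value can pull from the pod environment at read time, while key names cannot. A hypothetical hdfs-site.xml snippet (using the `${env.VAR-default}` expansion syntax described in the Configuration Javadoc, and the `DFS_REPLICATION` variable defined in environment-configmap.yaml):

```xml
<!-- Hypothetical snippet: the VALUE is interpolated from the environment
     when the configuration is read; no such expansion happens in key NAMES. -->
<property>
  <name>dfs.replication</name>
  <value>${env.DFS_REPLICATION-1}</value>
</property>
```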
|
||
...but we can not do interpolation of SERVICE name into '\*.xml' file key names | ||
as is needed when doing HA in hdfs-site.xml... so for now, we have | ||
hard-codings in 'hdfs-site.xml' key names. For example, the property key name | ||
`dfs.ha.namenodes.hadoop` has the SERVICE name ('hadoop') in it or the key | ||
`dfs.namenode.http-address.hadoop` (TODO: Fix/Workaround). | ||
|
||
Edit of pod resources or jvm args for a process are | ||
done in place in the yaml files or in kustomization | ||
replacements in overlays. |
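As a sketch of the overlay approach just described (hypothetical resource and key names; standard kustomize `replacements` syntax), an overlay might patch a container's memory limit from a ConfigMap value like so:

```yaml
# Hypothetical overlay fragment: copy a value out of the 'environment'
# ConfigMap into a statefulset's resource limits. The NAMENODE_MEMORY_LIMIT
# key and the 'namenode' statefulset are assumptions for illustration.
replacements:
  - source:
      kind: ConfigMap
      name: environment
      fieldPath: data.NAMENODE_MEMORY_LIMIT
    targets:
      - select:
          kind: StatefulSet
          name: namenode
        fieldPaths:
          - spec.template.spec.containers.[name=namenode].resources.limits.memory
```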
hbase-kubernetes-deployment/base/delete-format-hdfs-configmap-job.yaml (89 additions, 0 deletions)
@@ -0,0 +1,89 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Job to delete the 'format-hdfs' configmap after hdfs has come up
# successfully. The 'format-hdfs' configmap is added by running
# 'kubectl -n hadoop apply -k tools/format-hdfs' (you need the
# '-n hadoop' to apply the configmap to the 'hadoop' namespace).
# Add the configmap if you want hdfs to format the filesystem.
# Do this on initial install only, or if you want to clean out
# the current HDFS data.
#
# If the 'format-hdfs' configmap is NOT present, this Job exits/completes.
# Otherwise, it keeps probing until HDFS is up and healthy, and then
# removes the 'format-hdfs' configmap. The presence of the
# 'format-hdfs' configmap is checked by all hdfs pods on startup. If
# the configmap is present, they clean out their data directories and
# format/recreate them. To install the 'format-hdfs'
# configmap, do it before launching hdfs. See tools/format-hdfs.
---
apiVersion: batch/v1
kind: Job
metadata:
  name: delete-format-hdfs-configmap
spec:
  ttlSecondsAfterFinished: 300
  template:
    spec:
      containers:
      - image: hadoop
        name: delete-format-hdfs-configmap
        imagePullPolicy: IfNotPresent
        command:
        - /bin/bash
        - -c
        - |-
          set -xe
          # See if the 'format-hdfs' configmap is present.
          # If not, there is nothing for this job to do; complete, exit 0.
          /tmp/scripts/exists_configmap.sh format-hdfs || {
            echo "No 'format-hdfs' configmap found so no work to do; exiting"
            exit 0
          }
          # The 'format-hdfs' configmap is present. Remove it after HDFS is fully up.
          /tmp/scripts/jmxping.sh namenode ${HADOOP_SERVICE}
          /tmp/scripts/jmxping.sh datanode ${HADOOP_SERVICE}
          # TODO: Should we check if ha and if so, if a NN active... get a report on health?
          # HDFS is up. Delete the format-hdfs flag.
          /tmp/scripts/delete_configmap.sh format-hdfs
        resources:
          requests:
            cpu: '0.2'
            memory: 256Mi
          limits:
            cpu: '0.5'
            memory: 512Mi
        envFrom:
        - configMapRef:
            name: environment
        volumeMounts:
        - mountPath: /tmp/scripts
          name: scripts
        # Scratch dir is a location where init containers place items for later
        # use by the main containers when they run.
        - mountPath: /tmp/scratch
          name: scratch
      serviceAccountName: hadoop
      restartPolicy: Never
      volumes:
      - configMap:
          name: scripts
          defaultMode: 0555
        name: scripts
      # Scratch dir is a location where init containers place items for later
      # use by the main containers when they run.
      - emptyDir: {}
        name: scratch
hbase-kubernetes-deployment/base/environment-configmap.yaml (70 additions, 0 deletions)
@@ -0,0 +1,70 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Common environment variables shared across pods.
# Include w/ the 'envFrom:' directive.
# We have to be pedantic in here. We cannot have a value
# refer to a define made earlier; the interpolation
# doesn't work.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: environment
data:
  DOMAIN: svc.cluster.local
  # HADOOP_HOME, HADOOP_HDFS_HOME, etc., and HBASE_HOME are provided by the images.
  #
  # The headless-service our statefulset pods come up in.
  # See https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id
  # The headless-service is defined in the adjacent rbac.yaml.
  # Matches the serviceName we have on our statefulsets.
  # We are required to create it per https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations
  HADOOP_SERVICE: hadoop
  # dfs.http.policy
  # If HTTPS_ONLY or HTTPS_OR_HTTP, then we'll depend on https in UI and jmx'ing
  # and will adjust schema and ports accordingly. If https, we need to get certificates,
  # so cert-manager, etc., needs to be installed.
  HTTP_POLICY: HTTP_ONLY
  DFS_HTTPS_ENABLE: "false"
  HBASE_SSL_ENABLED: "false"
  HTTP_AUTH: kerberos
  # The insecure port for now.
  DATANODE_DATA_DIR: /data00/dn
  JOURNALNODE_DATA_DIR: /data00/jn
  NAMENODE_DATA_DIR: /data00/nn
  HDFS_AUDIT_LOGGER: INFO,RFAAUDIT
  HADOOP_DAEMON_ROOT_LOGGER: INFO,RFA,CONSOLE
  HADOOP_ROOT_LOGGER: INFO,RFA,CONSOLE
  HADOOP_SECURITY_LOGGER: INFO,RFAS
  HADOOP_CONF_DIR: /etc/hadoop
  HADOOP_LOG_DIR: /var/log/hadoop
  HADOOP_SECURE_LOG: /var/log/hadoop
  HBASE_ROOT_LOGGER: DEBUG,RFA,console
  HBASE_LOG_DIR: /var/log/hbase
  HBASE_CONF_DIR: /etc/hbase
  # if [ "$HBASE_NO_REDIRECT_LOG" != "" ]; then ... so we are asking for NO redirect of logs.
  HBASE_NO_REDIRECT_LOG: "true"
  HBASE_MANAGES_ZK: "false"
  DFS_REPLICATION: "1"
  # What percentage of the container memory to give over to the JVM.
  # Be aware that we look at the container resource limit, NOT the request: e.g. if
  # the resource request memory is set to 8G and the limit is 16G and
  # JVM_HEAP_PERCENTAGE_OF_RESOURCE_LIMIT is 50, as in 50%,
  # the heap will be set to 8G, i.e. 1/2 of the 16G limit.
  # See https://dzone.com/articles/best-practices-java-memory-arguments-for-container
  JVM_HEAP_PERCENTAGE_OF_RESOURCE_LIMIT: "45"
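The heap arithmetic the comment describes can be sketched in a few lines of bash. This is an illustration only, not the repo's actual sizing script; the variable names are made up, and the 16G limit and 50% figure come from the worked example in the comment above:

```shell
#!/bin/bash
# Derive -Xmx from the container memory LIMIT (not the request) and the
# percentage knob, as described in the configmap comment above.
limit_bytes=$((16 * 1024 * 1024 * 1024))   # e.g. a 16G resource limit
heap_percentage=50                          # JVM_HEAP_PERCENTAGE_OF_RESOURCE_LIMIT
heap_bytes=$(( limit_bytes * heap_percentage / 100 ))
echo "-Xmx$(( heap_bytes / 1024 / 1024 ))m"  # prints -Xmx8192m: half the 16G limit
```

In the shipped configmap the knob is "45" rather than 50, leaving headroom for non-heap JVM memory (metaspace, thread stacks, direct buffers).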
@@ -0,0 +1,18 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

networkaddress.cache.ttl=1
networkaddress.cache.negative.ttl=0
@@ -0,0 +1,32 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# We run the jmxexporter on most processes to convert jmx metrics to prometheus.
# This is the config file it uses.
#
# Don't lowercase. Leave the metric names in camelcase. Do this because while
# jmxexporter can lowercase metric names, telegraf can't.
#
#lowercaseOutputName: false
#lowercaseOutputLabelNames: false
# From https://godatadriven.com/blog/monitoring-hbase-with-prometheus/
#rules:
#  - pattern: HadoopNamespace_([^\W_]+)_table_([^\W_]+)_region_([^\W_]+)_metric_(\w+)
#    name: HBase_metric_$4
#    labels:
#      namespace: "$1"
#      table: "$2"
#      region: "$3"
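The commented-out rule rewrites per-region metric names into one metric with namespace/table/region labels. A rough bash approximation of the rewrite (the sample metric name is invented for illustration, and `[^_]` stands in for the original `[^\W_]` classes, which POSIX ERE lacks):

```shell
#!/bin/bash
# Sketch of what the jmxexporter rule above does to a region-level metric name.
metric="HadoopNamespace_default_table_meta_region_1588230740_metric_storeFileCount"
if [[ $metric =~ ^HadoopNamespace_([^_]+)_table_([^_]+)_region_([^_]+)_metric_(.+)$ ]]; then
  # Rule output: name 'HBase_metric_$4', labels from captures $1..$3.
  echo "HBase_metric_${BASH_REMATCH[4]}{namespace=\"${BASH_REMATCH[1]}\",table=\"${BASH_REMATCH[2]}\",region=\"${BASH_REMATCH[3]}\"}"
fi
```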
@@ -0,0 +1,71 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

configMapGenerator:
- name: hadoop-configuration
  # Base set of hadoop configurations. Overlays will add to the set here.
  files:
  - log4j.properties=log4j.properties.hadoop
- name: scripts
  # Useful scripts
  files:
  - scripts/jmxping.sh
  - scripts/apiserver_access.sh
  - scripts/get_statefulset_replica_count.sh
  - scripts/get_statefulset.sh
  - scripts/exists_configmap.sh
  - scripts/delete_configmap.sh
  - scripts/topology.sh
  - scripts/describe_node.sh
  - scripts/get_node_name_from_pod_IP.sh
  - scripts/get_node_labels.sh
  - scripts/get_node_labels_from_pod_IP.sh
  - scripts/log.sh
  options:
    disableNameSuffixHash: true
- name: global-files
  # Add files used by most/all processes into a global configuration configmap
  # accessible to all processes. The environment-configmap defines env variables used by
  # all processes and pods. This configmap loads files used by each process.
  files:
  - jmxexporter.yaml
  - java.security
  - ssl-client.xml
  - ssl-server.xml
  options:
    disableNameSuffixHash: true

secretGenerator:
- name: keystore-password
  type: Opaque
  options:
    disableNameSuffixHash: true
  literals:
  - password=changeit

resources:
- namespace.yaml
# Global environment variables read in by pods
- environment-configmap.yaml
- rbac.yaml
- delete-format-hdfs-configmap-job.yaml
# These depend on cert-manager being installed.
# See https://cert-manager.io/docs/installation/
#- clusterissuer.yaml
#- certificate.yaml
@@ -0,0 +1,55 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

hadoop.console.threshold=LOG
hadoop.log.maxbackupindex=20
hadoop.log.maxfilesize=256MB
hadoop.root.logger=TRACE,CONSOLE
hadoop.security.log.file=SecurityAuth-${user.name}.audit
hadoop.security.log.maxbackupindex=20
hadoop.security.log.maxfilesize=256MB
hadoop.security.logger=INFO,RFAS
hdfs.audit.log.maxbackupindex=20
hdfs.audit.log.maxfilesize=256MB
hdfs.audit.logger=INFO,RFAAUDIT
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n
log4j.appender.CONSOLE.Threshold=${hadoop.console.threshold}
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n
log4j.appender.RFA.MaxBackupIndex=${hadoop.log.maxbackupindex}
log4j.appender.RFA.MaxFileSize=${hadoop.log.maxfilesize}
log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n
log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
log4j.appender.RFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n
log4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex}
log4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize}
log4j.category.SecurityLogger=${hadoop.security.logger}
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.logger.org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy=DEBUG
log4j.logger.org.apache.hadoop.net.NetworkTopology=DEBUG
log4j.rootLogger=${hadoop.root.logger}