[DNM] Horizon k8s cluster logging #399
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull request has been approved by: mcgonago. The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of the affected files. Approvers can indicate their approval by writing /approve in a comment.
pkg/horizon/volumes.go
```go
func GetLogVolumeMount() corev1.VolumeMount {
	return corev1.VolumeMount{
		Name:      logVolume,
		MountPath: "/var/log/manila",
```
Should this path be `/var/log/horizon`?
```go
// Horizon is the global ServiceType that refers to all the components deployed
// by the horizon-operator
Horizon storage.PropagationType = "Horizon"

// LogFile - the default path of the Horizon application log file
LogFile = "/var/log/horizon/horizon.log"
```
Are we also planning to capture Apache logs as part of this? Or do you feel that just the Horizon application logs are sufficient for any support/debugging requirements your team may have?

Current logging for Apache goes only to the stdout of the container, so that might be sufficient:
https://github.com/openstack-k8s-operators/horizon-operator/blob/main/templates/horizon/config/httpd.conf#L25-L26

It just means those logs will be lost if/when the pod is evicted from a node.

This can probably be a separate PR and topic, but I am asking just to make sure it has been considered.
The other thing about writing it to a specific log file is that we won't be able to see the logs simply by running `oc logs` on the Horizon pod. So we will also need to add a sidecar container to the pod which just runs `tail -f /var/log/horizon/horizon.log`. That architecture is described under this section:
https://kubernetes.io/docs/concepts/cluster-administration/logging/#sidecar-container-with-logging-agent

We can just add a new container to the pod called `horizon-logs` or something similar. That way, users will be able to clearly tell which container to check for the Horizon Django application logs.
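A minimal sketch of what that sidecar might look like, using the container name and log path from this discussion. The helper name, the image reuse, and the volume name "logs" are illustrative assumptions, not the PR's actual code:

```go
import (
	corev1 "k8s.io/api/core/v1"
)

// logSidecarContainer sketches the streaming-sidecar pattern from the
// Kubernetes docs: it follows the Horizon log file and writes it to its
// own stdout, so it is visible via `oc logs <pod> -c horizon-logs`.
func logSidecarContainer(image string) corev1.Container {
	return corev1.Container{
		Name:    "horizon-logs",
		Image:   image, // reusing the main Horizon image is an assumption
		Command: []string{"tail", "-n", "+1", "-F", "/var/log/horizon/horizon.log"},
		VolumeMounts: []corev1.VolumeMount{{
			Name:      "logs", // hypothetical; must match the pod's log volume (logVolume in this PR)
			MountPath: "/var/log/horizon",
			ReadOnly:  true, // the sidecar only reads the log file
		}},
	}
}
```

Using `-F` rather than `-f` makes tail keep following if the log file is rotated or recreated.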
recheck
@mcgonago: The following tests failed, say /retest to rerun all failed tests:

Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
```go
func GetLogVolumeMount() corev1.VolumeMount {
	return corev1.VolumeMount{
		Name:      logVolume,
		MountPath: "/var/log/horizon",
		ReadOnly:  false,
	}
}
```
It would be easier to return a slice of `corev1.VolumeMount` here. Then you can just call it from within the pod definition above without having to wrap the result in a slice.
Suggested change:

```go
// Before:
func GetLogVolumeMount() corev1.VolumeMount {
	return corev1.VolumeMount{
		Name:      logVolume,
		MountPath: "/var/log/horizon",
		ReadOnly:  false,
	}
}

// After:
func GetLogVolumeMount() []corev1.VolumeMount {
	return []corev1.VolumeMount{
		{
			Name:      logVolume,
			MountPath: "/var/log/horizon",
			ReadOnly:  false,
		},
	}
}
```
```go
			RunAsUser: &runAsUser,
		},
		Env:          env.MergeEnvs([]corev1.EnvVar{}, envVars),
		VolumeMounts: []corev1.VolumeMount{GetLogVolumeMount()},
```
With the change suggested to the `GetLogVolumeMount()` function below, you can just call the function here:
Suggested change:

```go
// Before:
VolumeMounts: []corev1.VolumeMount{GetLogVolumeMount()},
// After:
VolumeMounts: GetLogVolumeMount(),
```
```go
@@ -146,6 +162,7 @@ func Deployment(
			},
			Env:          env.MergeEnvs([]corev1.EnvVar{}, envVars),
			VolumeMounts: append(volumeMounts,
				[]corev1.VolumeMount{GetLogVolumeMount()}...),
```
Same here, you can just call the function once it's changed to return a slice.
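For illustration, assuming `GetLogVolumeMount()` is changed to return a slice as suggested above, a call site that already has its own mounts could merge them like this (variable names here are hypothetical):

```go
VolumeMounts: append(volumeMounts, GetLogVolumeMount()...),
```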
```go
@@ -158,6 +175,9 @@ func Deployment(
			},
		},
	}
	deployment.Spec.Template.Spec.Volumes = append(GetVolumes(
		instance.Name,
		instance.Spec.ExtraMounts), GetLogVolume())
```
The suggested change to `GetLogVolume()` means that this would require merging the two slices at this point, though.

The other way you could do it is to keep the function the same and then just append this to the `volumeMounts` variable defined on line 99, so that everything uses the same mounts.

I would probably opt for the first option though, just to keep the mounts minimal on the log pod. It just means you'll need to merge the slices here rather than appending, since appending the whole slice would not give you a flat slice of volumes. See the sketch below.
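For reference, the Go semantics behind that remark (names here are hypothetical): passing one slice to `append` as a single element does not type-check against `[]corev1.Volume`; spreading it with `...` is what merges the elements.

```go
import corev1 "k8s.io/api/core/v1"

func mergeVolumes() []corev1.Volume {
	base := []corev1.Volume{{Name: "scripts"}} // hypothetical existing volumes
	logs := []corev1.Volume{{Name: "logs"}}    // hypothetical log volume(s)

	// append(base, logs) would not compile: logs is a []corev1.Volume,
	// not a single corev1.Volume. Spreading with ... merges the elements:
	return append(base, logs...) // [{scripts} {logs}]
}
```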
Actually, sorry, I just realised this is the Volume, not the VolumeMount. The only thing missing here is the `HorizonPropagation` variable. So it should be:
Suggested change:

```go
// Before:
	instance.Spec.ExtraMounts), GetLogVolume())
// After:
	instance.Spec.ExtraMounts, HorizonPropagation), GetLogVolume())
```
```go
@@ -158,6 +175,9 @@ func Deployment(
			},
		},
	}
	deployment.Spec.Template.Spec.Volumes = append(GetVolumes(
```
`getVolumes`? `GetVolumes` is undefined.
```go
	templateParameters := map[string]interface{}{
		"LogFile": horizon.LogFile,
	}
```
Oh, this will override the entire `templateParameters` map, and you will just end up with `LogFile` in there. So you want to set just the `LogFile` key:
Suggested change:

```go
// Before:
templateParameters := map[string]interface{}{
	"LogFile": horizon.LogFile,
}
// After:
templateParameters["LogFile"] = horizon.LogFile
```
Add support to the Horizon operator for k8s cluster logging.