[YUNIKORN-2068] E2E Test for Preemption #705

Closed. Wants to merge 27 commits.

Changes from all commits (27 commits):
b83bdd5  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Oct 26, 2023)
3830426  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Oct 26, 2023)
72ef0d8  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Oct 30, 2023)
f3e8b57  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Oct 30, 2023)
46eb327  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Oct 30, 2023)
856b629  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Oct 31, 2023)
7e22832  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Nov 2, 2023)
353bf51  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Nov 2, 2023)
2206722  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Nov 2, 2023)
c972734  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Nov 2, 2023)
daac46d  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Nov 2, 2023)
93ab76e  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Nov 3, 2023)
d916cd9  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Nov 3, 2023)
bca7152  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Nov 3, 2023)
b30f5ec  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Nov 3, 2023)
726c154  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Nov 6, 2023)
ad8e7eb  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Nov 7, 2023)
c4f17b0  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Nov 8, 2023)
ce0351b  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Nov 8, 2023)
fae4a0f  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Nov 8, 2023)
d2d1130  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Nov 8, 2023)
5fbc97f  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Nov 8, 2023)
c86acb1  [YUNIKORN-2068] E2E Test for Preemption  (rrajesh-cloudera, Nov 9, 2023)
d85fa9a  cdh kli  (rrajesh-cloudera, Nov 9, 2023)
87494ce  [YUNIKORN-2068] E2E Preemption tests  (rrajesh-cloudera, Nov 21, 2023)
7a30f8e  [YUNIKORN-2068] E2E Preemption tests  (rrajesh-cloudera, Nov 21, 2023)
461cff1  [YUNIKORN-2068] E2E Preemption tests  (rrajesh-cloudera, Nov 27, 2023)
test/e2e/preemption/preemption_test.go: 94 changes (93 additions, 1 deletion)

@@ -41,6 +41,7 @@ var kClient k8s.KubeCtl
var restClient yunikorn.RClient
var ns *v1.Namespace
var dev = "dev" + common.RandSeq(5)
var devNew = "dev" + common.RandSeq(5)
var oldConfigMap = new(v1.ConfigMap)
var annotation = "ann-" + common.RandSeq(10)

@@ -49,6 +50,8 @@ var Worker = ""
var WorkerMemRes int64
var sleepPodMemLimit int64
var sleepPodMemLimit2 int64
var sleepPodMemOverLimit int64
var nodeName string
var taintKey = "e2e_test_preemption"
var nodesToTaint []string

@@ -102,12 +105,14 @@ var _ = ginkgo.BeforeSuite(func() {
for _, node := range *nodesDAOInfo {
if node.NodeID == Worker {
WorkerMemRes = node.Available["memory"]
nodeName = node.HostName
}
}
WorkerMemRes /= (1000 * 1000) // change to M
fmt.Fprintf(ginkgo.GinkgoWriter, "Worker node %s available memory %dM\n", Worker, WorkerMemRes)

sleepPodMemLimit = int64(float64(WorkerMemRes) / 3)
sleepPodMemOverLimit = int64(float64(WorkerMemRes) * 1.5)
Ω(sleepPodMemLimit).NotTo(gomega.BeZero(), "Sleep pod memory limit cannot be zero")
fmt.Fprintf(ginkgo.GinkgoWriter, "Sleep pod limit memory %dM\n", sleepPodMemLimit)

@@ -175,7 +180,7 @@ var _ = ginkgo.Describe("Preemption", func() {
// Wait for pod to move to running state
podErr = kClient.WaitForPodBySelectorRunning(dev,
fmt.Sprintf("app=%s", sleepRespPod.ObjectMeta.Labels["app"]),
- 60)
+ 120)
gomega.Ω(podErr).NotTo(gomega.HaveOccurred())
}

@@ -546,6 +551,85 @@ var _ = ginkgo.Describe("Preemption", func() {
gomega.Ω(err).ShouldNot(HaveOccurred())
})

ginkgo.It("Verify_preemption_on_specific_node", func() {
/*
1. Create Two Queue (High and Low Guaranteed Limit)
2. Select a schedulable node from the cluster
3. Schedule a number of small, Low priority pause tasks on Low Guaranteed queue (Enough to fill the node)
4. Schedule a large task in High Priority queue with same node
5. Wait for few seconds to schedule the task
6. This should trigger preemption on low-priority queue and remove or preempt task from low priority queue
7. Do cleanup once test is done either passed or failed
*/
time.Sleep(20 * time.Second)
Contributor:
Why do we start with a 20 second sleep? Looks very arbitrary.

Contributor Author:
The cleanup for the other tests takes time, and because of that the test I added was failing intermittently on a few k8s versions. I added a sleep before the test executes so that it only starts once the cleanup is done and resources are available to allocate.

Contributor:
That's interesting. I think the reason is that the tests are not using different namespaces. See my comment below: you need to create/destroy a namespace for each test. This is what solved many problems for the admission controller tests; they often interfered with each other and debugging was difficult. A new namespace for every test solved it.

Contributor Author:
Hi Peter, I checked your comments and tried that approach, but it requires a lot of changes since we use a common namespace for all the tests, so each test's namespace usage would need to be updated as well. I am also facing some issues with that approach.

Contributor:
Why does it need a lot of changes? That's what we're doing everywhere. Tests should be isolated in different namespaces. This 20-second sleep is only acceptable as a workaround. I'll try to see how difficult that is, but at the very least this must be addressed in a follow-up JIRA. This cannot stay as is.

Contributor:
OK, I can see what the problem is. I'll create a JIRA to separate the test cases. It's unacceptable to have all tests in a single namespace; they can easily interfere with each other.

https://issues.apache.org/jira/browse/YUNIKORN-2247

ginkgo.By("A queue uses resource more than the guaranteed value even after removing one of the pods. The cluster doesn't have enough resource to deploy a pod in another queue which uses resource less than the guaranteed value.")
// update config
ginkgo.By(fmt.Sprintf("Update root.sandbox1 and root.sandbox2 with guaranteed memory %dM", sleepPodMemLimit))
annotation = "ann-" + common.RandSeq(10)
yunikorn.UpdateCustomConfigMapWrapper(oldConfigMap, "", annotation, func(sc *configs.SchedulerConfig) error {
// remove placement rules so we can control queue
sc.Partitions[0].PlacementRules = nil
var err error
if err = common.AddQueue(sc, "default", "root", configs.QueueConfig{
Name: "sandbox1",
Resources: configs.Resources{Guaranteed: map[string]string{"memory": fmt.Sprintf("%dM", sleepPodMemLimit)}},
Properties: map[string]string{"preemption.delay": "1s"},
}); err != nil {
return err
}

if err = common.AddQueue(sc, "default", "root", configs.QueueConfig{
Name: "sandbox2",
Resources: configs.Resources{Guaranteed: map[string]string{"memory": fmt.Sprintf("%dM", sleepPodMemOverLimit)}},
Properties: map[string]string{"preemption.delay": "1s"},
}); err != nil {
return err
}
return nil
})

newNamespace, err := kClient.CreateNamespace(devNew, nil)
gomega.Ω(err).NotTo(gomega.HaveOccurred())
gomega.Ω(newNamespace.Status.Phase).To(gomega.Equal(v1.NamespaceActive))
// Define sleepPod
sleepPodConfigs := createSandbox1SleepPodCofigsWithStaticNode(3, 600)
sleepPod4Config := k8s.SleepPodConfig{Name: "sleepjob4", NS: devNew, Mem: sleepPodMemLimit, Time: 600, Optedout: k8s.Allow, Labels: map[string]string{"queue": "root.sandbox2"}, RequiredNode: nodeName}
sleepPodConfigs = append(sleepPodConfigs, sleepPod4Config)

for _, config := range sleepPodConfigs {
ginkgo.By("Deploy the sleep pod " + config.Name + " to the development namespace")
sleepObj, podErr := k8s.InitSleepPod(config)
Ω(podErr).NotTo(gomega.HaveOccurred())
sleepRespPod, podErr := kClient.CreatePod(sleepObj, devNew)
gomega.Ω(podErr).NotTo(gomega.HaveOccurred())

// Wait for pod to move to running state
podErr = kClient.WaitForPodBySelectorRunning(devNew,
fmt.Sprintf("app=%s", sleepRespPod.ObjectMeta.Labels["app"]),
120)
gomega.Ω(podErr).NotTo(gomega.HaveOccurred())
}

// assert one of the pods in root.sandbox1 is preempted
ginkgo.By("One of the pods in root.sanbox1 is preempted")
sandbox1RunningPodsCnt := 0
pods, err := kClient.ListPodsByLabelSelector(devNew, "queue=root.sandbox1")
gomega.Ω(err).NotTo(gomega.HaveOccurred())
for _, pod := range pods.Items {
if pod.DeletionTimestamp != nil {
continue
}
if pod.Status.Phase == v1.PodRunning {
sandbox1RunningPodsCnt++
}
}
Ω(sandbox1RunningPodsCnt).To(gomega.Equal(2), "One of the pods in root.sandbox1 should be preempted")
errNew := kClient.DeletePods(newNamespace.Name)
Contributor:
Already performed inside AfterEach()

Contributor Author:
AfterEach() does the cleanup for the global namespace that is created for all the tests. In this test I create a new namespace and do the cleanup at the end of the test function to make sure no resources are left behind. I tried using the global namespace and ran into some issues, so I create the namespace inside the test itself and clean it up once execution is done.

Contributor:
If you need a separate namespace for each test case, that should be done in BeforeEach/AfterEach. If something fails before this code, it will never be cleaned up.

Example:

ginkgo.BeforeEach(func() {
		kubeClient = k8s.KubeCtl{}
		gomega.Expect(kubeClient.SetClient()).To(gomega.BeNil())
		ns = "ns-" + common.RandSeq(10)
		ginkgo.By(fmt.Sprintf("Creating namespace: %s for admission controller tests", ns))
		var ns1, err1 = kubeClient.CreateNamespace(ns, nil)
		gomega.Ω(err1).NotTo(gomega.HaveOccurred())
		gomega.Ω(ns1.Status.Phase).To(gomega.Equal(v1.NamespaceActive))
	})

...

	ginkgo.AfterEach(func() {
		ginkgo.By("Tear down namespace: " + ns)
		err := kubeClient.TearDownNamespace(ns)
		gomega.Ω(err).NotTo(gomega.HaveOccurred())
		// call the healthCheck api to check scheduler health
		ginkgo.By("Check YuniKorn's health")
		checks, err2 := yunikorn.GetFailedHealthChecks()
		gomega.Ω(err2).ShouldNot(gomega.HaveOccurred())
		gomega.Ω(checks).Should(gomega.Equal(""), checks)
	})

if errNew != nil {
fmt.Fprintf(ginkgo.GinkgoWriter, "Failed to delete pods in namespace %s - reason is %s\n", newNamespace.Name, err.Error())
}
})

ginkgo.AfterEach(func() {
testDescription := ginkgo.CurrentSpecReport()
if testDescription.Failed() {
@@ -572,3 +656,11 @@ func createSandbox1SleepPodCofigs(cnt, time int) []k8s.SleepPodConfig {
}
return sandbox1Configs
}

func createSandbox1SleepPodCofigsWithStaticNode(cnt, time int) []k8s.SleepPodConfig {
Contributor:
"WithRequiredNode" or "WithNodeSelector" sounds better

Contributor Author:
Acknowledged.

sandbox1Configs := make([]k8s.SleepPodConfig, 0, cnt)
for i := 0; i < cnt; i++ {
sandbox1Configs = append(sandbox1Configs, k8s.SleepPodConfig{Name: fmt.Sprintf("sleepjob%d", i+1), NS: devNew, Mem: sleepPodMemLimit2, Time: time, Optedout: k8s.Allow, Labels: map[string]string{"queue": "root.sandbox1"}, RequiredNode: nodeName})
Contributor:

This part is very suspicious. You're using the "RequiredNode" setting here, which will trigger the "simple" (aka. RequiredNode) preemption logic inside Yunikorn. That is not the generic preemption logic that Craig worked on.

Please check if the console log of Yunikorn contains this:

log.Log(log.SchedApplication).Info("Triggering preemption process for daemon set ask",
		zap.String("ds allocation key", ask.GetAllocationKey()))

If this is the case (which is VERY likely), changes are necessary.

A trivial solution is to enhance the RequiredNode field; it has to be a struct like:

type RequiredNode struct {
  Node      string
  DaemonSet bool
}

type SleepPodConfig struct {
...
	Mem          int64
	RequiredNode RequiredNode 
	Optedout     AllowPreemptOpted
}

If DaemonSet == false, then this code doesn't run:

owner := metav1.OwnerReference{APIVersion: "v1", Kind: constants.DaemonSetType, Name: "daemonset job", UID: "daemonset"}
owners = []metav1.OwnerReference{owner}

So this line effectively becomes:

sandbox1Configs = append(sandbox1Configs,
	k8s.SleepPodConfig{
		Name:         fmt.Sprintf("sleepjob%d", i+1),
		NS:           devNew,
		Mem:          sleepPodMemLimit2,
		Time:         time,
		Optedout:     k8s.Allow,
		Labels:       map[string]string{"queue": "root.sandbox1"},
		RequiredNode: RequiredNode{Node: nodeName, DaemonSet: false},
	},
)

The existing tests inside simple_preemptor_test.go are affected.
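
To make the suggestion concrete, here is a minimal sketch of how the pod builder could decide whether to attach the daemon-set owner reference. It assumes the RequiredNode struct proposed above is added to the e2e framework; the helper name is hypothetical, and metav1/constants refer to the same packages used in the snippet earlier in this thread.

// Hypothetical helper, assuming the RequiredNode struct suggested above.
// It only attaches the DaemonSet owner reference (which makes YuniKorn run
// the simple RequiredNode preemptor) when the test explicitly opts in.
func ownerReferencesFor(rn RequiredNode) []metav1.OwnerReference {
	if rn.Node == "" || !rn.DaemonSet {
		// Pod merely pinned to a node (or not pinned at all):
		// no owner reference, so the generic preemption logic applies.
		return nil
	}
	// DaemonSet-style ask: the simple preemptor is triggered.
	return []metav1.OwnerReference{{
		APIVersion: "v1",
		Kind:       constants.DaemonSetType,
		Name:       "daemonset job",
		UID:        "daemonset",
	}}
}

With a flag like this, createSandbox1SleepPodCofigsWithStaticNode could pin pods to a node with DaemonSet: false and still exercise the generic preemption path.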

Contributor (pbacsko, Dec 5, 2023):
Have you looked at this? Again, the tests should address YUNIKORN-2068. RequiredNodePreemptor does not use the predicates.

}
return sandbox1Configs
}
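
Related to the log check requested in the thread above, a hedged sketch of how a test could detect whether the simple (RequiredNode) preemptor ran, by scanning the scheduler pod log for the quoted message. The "yunikorn" namespace, the "app=yunikorn" label selector, and the helper name are assumptions, not part of this PR.

import (
	"context"
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// Hypothetical check, assuming a client-go clientset is available to the test.
// Adjust the namespace and label selector to match the actual scheduler deployment.
func simplePreemptorTriggered(clients kubernetes.Interface) (bool, error) {
	pods, err := clients.CoreV1().Pods("yunikorn").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "app=yunikorn"})
	if err != nil {
		return false, err
	}
	if len(pods.Items) == 0 {
		return false, fmt.Errorf("no scheduler pod found")
	}
	raw, err := clients.CoreV1().Pods("yunikorn").
		GetLogs(pods.Items[0].Name, &corev1.PodLogOptions{}).
		DoRaw(context.TODO())
	if err != nil {
		return false, err
	}
	// This message is only logged by the RequiredNode (simple) preemptor,
	// so its presence means the generic preemption path was bypassed.
	return strings.Contains(string(raw),
		"Triggering preemption process for daemon set ask"), nil
}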