<?xml version="1.0" encoding="utf-8"?>
<search>
<entry>
<title>Ceph</title>
<url>/2021/10/05/ceph/</url>
<content><![CDATA[<h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p>Ceph is an open source storage system which supports three types of storage:</p>
<ul>
<li>block storage: supports snapshots</li>
<li>file system: POSIX interface, supports snapshots</li>
<li>object storage: S3 compatible</li>
</ul>
<span id="more"></span>
<h2 id="Core-components-and-concepts"><a href="#Core-components-and-concepts" class="headerlink" title="Core components and concepts"></a>Core components and concepts</h2><img src="/2021/10/05/ceph/ceph-architecture.png" class="" title="Ceph Architecture">
<ul>
<li><strong>Monitor</strong>: maintains the cluster state (monitor map, manager map, OSD map, CRUSH map)</li>
<li><strong>OSD</strong>: Object Storage Device, a daemon that stores objects and serves them to clients</li>
<li><strong>MDS</strong>: Ceph Metadata Server, the metadata service for CephFS</li>
<li><strong>RGW</strong>: Rados Gateway, provides the object storage service</li>
<li><strong>RBD</strong>: Rados Block Device, the block storage service</li>
<li><strong>CRUSH</strong>: the algorithm Ceph uses to decide where data is placed</li>
<li><strong>PG</strong>: Placement Group, a logical grouping of objects for better data distribution and localization</li>
</ul>
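<p>How an object ends up on a set of OSDs can be illustrated with a toy script (this is an illustration only, not Ceph's real CRUSH algorithm, which walks a hierarchical cluster map): the object name is hashed onto one of <code>pg_num</code> placement groups, and the PG then selects the OSDs. On a live cluster the real mapping can be inspected with <code>ceph osd map &lt;pool-name&gt; &lt;object-name&gt;</code>.</p>

```shell
# Toy sketch of the object -> PG step (illustration only, not CRUSH):
# hash the object name and take it modulo the number of PGs.
pg_of() {
  local obj=$1 pg_num=$2
  echo $(( 0x$(printf '%s' "$obj" | md5sum | cut -c1-8) % pg_num ))
}

pg_of my-object 64   # deterministic: the same object always lands in the same PG
```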
<h2 id="Setup"><a href="#Setup" class="headerlink" title="Setup"></a>Setup</h2><h3 id="1-Cephadm"><a href="#1-Cephadm" class="headerlink" title="1. Cephadm"></a>1. Cephadm</h3><p>For setup with cephadm please go to <a href="https://naomilyj.github.io/2021/10/05/cephadm/">here</a></p>
<h3 id="2-Rook-ceph"><a href="#2-Rook-ceph" class="headerlink" title="2. Rook-ceph"></a>2. Rook-ceph</h3><p>For setup with rook please go to <a href="https://naomilyj.github.io/2021/10/06/rook-ceph/">here</a></p>
<h2 id="References"><a href="#References" class="headerlink" title="References"></a>References</h2><ol>
<li><a href="https://www.cnblogs.com/hukey/p/11899710.html">https://www.cnblogs.com/hukey/p/11899710.html</a></li>
</ol>
]]></content>
<categories>
<category>ceph</category>
</categories>
<tags>
<tag>storage</tag>
<tag>cloud</tag>
<tag>ceph</tag>
</tags>
</entry>
<entry>
<title>helm charts</title>
<url>/2022/02/06/helmchart/</url>
<content><![CDATA[<h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p>This note is about</p>
<ul>
<li>what is helm charts</li>
<li>how to use helm charts</li>
<li>how to create helm charts</li>
<li>helm charts with helm secrets</li>
</ul>
<span id="more"></span>
<p>Based on helm <code>v3.7.1</code>.</p>
<p>You can install helm with </p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ curl -sSL https://get.helm.sh/helm-v3.7.1-linux-amd64.tar.gz | tar --strip-components 1 -xzC /usr/bin linux-amd64/helm && chmod +x /usr/bin/helm</span><br></pre></td></tr></table></figure>
<h2 id="What-is-helm-charts"><a href="#What-is-helm-charts" class="headerlink" title="What is helm charts"></a>What is helm charts</h2><p>According to definition from helm official website (<a href="https://helm.sh/">https://helm.sh</a>), <code>helm</code> helps you manage Kubernetes applications, and <code>helm charts</code> help you define, install, and upgrade even the most complex Kubernetes application.</p>
<p>Helm charts package all the k8s manifests (YAML files) of your application and make managing them easier. A helm chart has the following structure</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">mychart</span><br><span class="line">├── Chart.yaml</span><br><span class="line">├── charts <span class="comment"># contains other sub charts</span></span><br><span class="line">├── templates <span class="comment"># chart templates (gotemplates),which are used to render the final k8s manifests with variables defined in values.yaml</span></span><br><span class="line">│ ├── NOTES.txt <span class="comment"># hint message when the user run command helm install</span></span><br><span class="line">│ ├── _helpers.tpl <span class="comment"># some helper code for templates</span></span><br><span class="line">│ ├── xx.yaml <span class="comment"># Kubernetes manifest</span></span><br><span class="line">│ ├── ... <span class="comment"># Kubernetes manifest</span></span><br><span class="line">│ └── tests</span><br><span class="line">│ └── test-connection.yaml</span><br><span class="line">└── values.yaml <span class="comment"># variables for rendering templates when the user run command helm install</span></span><br></pre></td></tr></table></figure>
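<p>For orientation, here is what the two central files might minimally contain (the values below are illustrative, not from any real chart); the templates reference them as <code>.Chart.Name</code>, <code>.Values.replicaCount</code>, and so on:</p>

```yaml
# Chart.yaml -- metadata identifying the chart
apiVersion: v2
name: mychart
version: 0.1.0
appVersion: "1.0.0"

# values.yaml -- defaults injected into the templates at render time
replicaCount: 1
image:
  repository: nginx
  tag: "1.21"
```

<p>You can preview the rendered manifests without installing anything by running <code>helm template mychart</code>.</p>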
<h2 id="How-to-use-helm-charts"><a href="#How-to-use-helm-charts" class="headerlink" title="How to use helm charts"></a>How to use helm charts</h2><h3 id="1-local-chart"><a href="#1-local-chart" class="headerlink" title="1. local chart"></a>1. local chart</h3><p>You could install your local helm chart with </p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ helm install your-release-name /path/to/your/chart --namespace your-namespace -f /path/to/your/values.yaml</span><br><span class="line"><span class="comment"># Upgrade</span></span><br><span class="line">$ helm upgrade --install your-release-name /path/to/your/chart --namespace your-namespace -f /path/to/your/values.yaml</span><br></pre></td></tr></table></figure>
<h3 id="2-chart-from-repository"><a href="#2-chart-from-repository" class="headerlink" title="2. chart from repository"></a>2. chart from repository</h3><p>To install a public helm chart, you first have to add its chart repository</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># Add a public repo</span></span><br><span class="line">$ helm repo add repo-name repo-url</span><br><span class="line"><span class="comment"># Add a private repo with self-signed certificate</span></span><br><span class="line">$ helm repo add repo-name repo-url --insecure-skip-tls-verify</span><br><span class="line"><span class="comment"># e.g. add a harbor chart repository</span></span><br><span class="line">$ helm repo add myharbor https://harbor.mydomain.com/chartrepo/library --insecure-skip-tls-verify</span><br><span class="line"><span class="comment"># Login to your private registry</span></span><br><span class="line">$ helm registry login https://harbor.mydomain.com -u xxx -p xxx --insecure</span><br><span class="line"></span><br><span class="line"><span class="comment"># List all helm repositories in your local cache</span></span><br><span class="line">$ helm repo list</span><br><span class="line"><span class="comment"># Update your local Helm chart repository cache</span></span><br><span class="line">$ helm repo update</span><br><span class="line"></span><br><span class="line"><span class="comment"># Search all charts under a repo</span></span><br><span class="line">$ helm search repo repo-name</span><br><span class="line"><span class="comment"># Download a chart package from helm repository</span></span><br><span class="line">$ helm fetch repo-name/chart-name --version=xxx --insecure-skip-tls-verify</span><br><span class="line"></span><br><span class="line"><span class="comment"># Install a helm chart</span></span><br><span class="line">$ helm install --cleanup-on-fail your-release-name repo-name/chart-name --namespace your-namespace --version xxx -f /path/to/your/values.yaml</span><br><span class="line"><span class="comment"># Upgrade </span></span><br><span class="line">$ helm upgrade --install --cleanup-on-fail your-release-name repo-name/chart-name --namespace your-namespace --version xxx -f /path/to/your/values.yaml</span><br><span class="line"></span><br></pre></td></tr></table></figure>
<h2 id="How-to-create-helm-charts"><a href="#How-to-create-helm-charts" class="headerlink" title="How to create helm charts"></a>How to create helm charts</h2><h3 id="Create"><a href="#Create" class="headerlink" title="Create"></a>Create</h3><p>To create a helm chart, you could simply run </p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ helm create foo</span><br><span class="line"><span class="comment"># The following files will be generated</span></span><br><span class="line">foo/</span><br><span class="line">├── .helmignore <span class="comment"># Contains patterns to ignore when packaging Helm charts.</span></span><br><span class="line">├── Chart.yaml <span class="comment"># Information about your chart</span></span><br><span class="line">├── values.yaml <span class="comment"># The default values for your templates</span></span><br><span class="line">├── charts/ <span class="comment"># Charts that this chart depends on</span></span><br><span class="line">└── templates/ <span class="comment"># The template files</span></span><br><span class="line"> └── tests/ <span class="comment"># The test files</span></span><br></pre></td></tr></table></figure>
<p>For details of helm chart grammar, please refer to <a href="https://naomilyj.github.io/2022/02/07/helmtemplates/">helm templates</a></p>
<h3 id="Lint"><a href="#Lint" class="headerlink" title="Lint"></a>Lint</h3><p>To lint your chart</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ helm lint /path/to/your/chart</span><br></pre></td></tr></table></figure>
<h3 id="Package-and-Push"><a href="#Package-and-Push" class="headerlink" title="Package and Push"></a>Package and Push</h3><p>To package your chart and push it to a helm chart registry, there are two methods</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># Method 1</span></span><br><span class="line">$ helm package /path/to/your/chart</span><br><span class="line">Successfully packaged chart and saved it to: /my/path/hello-world-0.1.0.tgz</span><br><span class="line">$ helm push hello-world-0.1.0.tgz oci://harbor.mydomain.com/chartrepo/library --insecure-skip-tls-verify</span><br><span class="line"><span class="comment"># Directly push directory</span></span><br><span class="line">$ helm push /path/to/your/chart oci://harbor.mydomain.com/chartrepo/library --insecure-skip-tls-verify</span><br><span class="line"></span><br><span class="line"><span class="comment"># Method 2 (legacy commands, removed in newer Helm versions)</span></span><br><span class="line">$ helm chart save /path/to/your/chart harbor.mydomain.com/chartrepo/library/chart-name:chart-version</span><br><span class="line">$ helm chart push harbor.mydomain.com/chartrepo/library/chart-name:chart-version</span><br><span class="line"></span><br><span class="line"><span class="comment"># Note: before pushing, log in to the helm registry and</span></span><br><span class="line">$ <span class="built_in">export</span> HELM_EXPERIMENTAL_OCI=1</span><br><span class="line">$ helm plugin install https://github.com/chartmuseum/helm-push.git --version master</span><br></pre></td></tr></table></figure>
<h2 id="Helm-charts-with-helm-secrets"><a href="#Helm-charts-with-helm-secrets" class="headerlink" title="Helm charts with helm secrets"></a>Helm charts with helm secrets</h2><p>Some variables in <code>values.yaml</code> are sensitive data, e.g. passwords, which should be encrypted. <a href="https://github.com/jkroepke/helm-secrets/wiki/Usage">helm secrets</a> is a helm plugin for this job.</p>
<p>This plugin provides the ability to encrypt/decrypt secrets files so they can be stored in less secure places before they are installed with Helm. To decrypt/encrypt/edit, you first need to encrypt the secrets with sops - <a href="https://github.com/mozilla/sops">https://github.com/mozilla/sops</a></p>
<h3 id="To-install-sops-and-helm-secrets"><a href="#To-install-sops-and-helm-secrets" class="headerlink" title="To install sops and helm secrets"></a>To install <code>sops</code> and <code>helm secrets</code></h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ curl -L -C - https://github.com/mozilla/sops/releases/download/v3.7.1/sops-v3.7.1.linux -o /usr/bin/sops && chmod +x /usr/bin/sops</span><br><span class="line">$ helm plugin install https://github.com/jkroepke/helm-secrets --version v3.11.0</span><br></pre></td></tr></table></figure>
<h3 id="To-generate-keys-for-encryption"><a href="#To-generate-keys-for-encryption" class="headerlink" title="To generate keys for encryption"></a>To generate keys for encryption</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ gpg --gen-key</span><br><span class="line">enter username, email and password</span><br><span class="line"></span><br><span class="line">gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.</span><br><span class="line">This is free software: you are free to change and redistribute it.</span><br><span class="line">There is NO WARRANTY, to the extent permitted by law.</span><br><span class="line"></span><br><span class="line">Note: Use <span class="string">"gpg --full-generate-key"</span> <span class="keyword">for</span> a full featured key generation dialog.</span><br><span class="line"></span><br><span class="line">GnuPG needs to construct a user ID to identify your key.</span><br><span class="line"></span><br><span class="line">Real name: user-name</span><br><span class="line">Email address: [email protected]</span><br><span class="line">You selected this USER-ID:</span><br><span class="line"> <span class="string">"user-name <[email protected]>"</span></span><br><span class="line"></span><br><span class="line">Change (N)ame, (E)mail, or (O)kay/(Q)uit? O</span><br><span class="line">We need to generate a lot of random bytes. It is a good idea to perform</span><br><span class="line">some other action (<span class="built_in">type</span> on the keyboard, move the mouse, utilize the</span><br><span class="line">disks) during the prime generation; this gives the random number</span><br><span class="line">generator a better chance to gain enough entropy.</span><br><span class="line">We need to generate a lot of random bytes. 
It is a good idea to perform</span><br><span class="line">some other action (<span class="built_in">type</span> on the keyboard, move the mouse, utilize the</span><br><span class="line">disks) during the prime generation; this gives the random number</span><br><span class="line">generator a better chance to gain enough entropy.</span><br><span class="line">gpg: key B56B3F7B2A11A0D6 marked as ultimately trusted</span><br><span class="line">gpg: directory <span class="string">'/root/.gnupg/openpgp-revocs.d'</span> created</span><br><span class="line">gpg: revocation certificate stored as <span class="string">'/root/.gnupg/openpgp-revocs.d/52A05B5F40C7F10EA56D3A38B56B3F7B2A11A0D6.rev'</span></span><br><span class="line">public and secret key created and signed.</span><br><span class="line"></span><br><span class="line">pub rsa3072 2022-02-07 [SC] [expires: 2024-02-07]</span><br><span class="line"> 52A05B5F40C7F10EA56D3A38B56B3F7B2A11A0D6</span><br><span class="line">uid user-name <[email protected]></span><br><span class="line">sub rsa3072 2022-02-07 [E] [expires: 2024-02-07]</span><br><span class="line"></span><br><span class="line"><span class="comment"># Check your key</span></span><br><span class="line">$ gpg --fingerprint</span><br><span class="line">gpg: checking the trustdb</span><br><span class="line">gpg: marginals needed: 3 completes needed: 1 trust model: pgp</span><br><span class="line">gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u</span><br><span class="line">gpg: next trustdb check due at 2024-02-07</span><br><span class="line">/root/.gnupg/pubring.kbx</span><br><span class="line">------------------------</span><br><span class="line">pub rsa3072 2022-02-07 [SC] [expires: 2024-02-07]</span><br><span class="line"> 52A0 5B5F 40C7 F10E A56D 3A38 B56B 3F7B 2A11 A0D6</span><br><span class="line">uid [ultimate] user-name <[email protected]></span><br><span class="line">sub rsa3072 2022-02-07 [E] [expires: 2024-02-07]</span><br><span 
class="line"></span><br><span class="line"><span class="comment"># Export your private key (for CICD)</span></span><br><span class="line"><span class="comment"># Method 1</span></span><br><span class="line">$ gpg --export-secret-key --armor <span class="string">"user-name"</span> > private.key</span><br><span class="line"><span class="comment"># Method 2</span></span><br><span class="line">$ gpg --export-secret-key --armor <span class="string">"<span class="variable">${KEY_FP}</span>"</span> > private.key</span><br><span class="line"></span><br><span class="line"><span class="comment"># Export your public key (for CICD)</span></span><br><span class="line">$ gpg --<span class="built_in">export</span> --armor <span class="string">"<span class="variable">${KEY_FP}</span>"</span> > public.key</span><br></pre></td></tr></table></figure>
<h3 id="To-use-keys-for-encryption"><a href="#To-use-keys-for-encryption" class="headerlink" title="To use keys for encryption"></a>To use keys for encryption</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># Import keys (in CICD, otherwise skip it)</span></span><br><span class="line">$ gpg --import public.key</span><br><span class="line">$ gpg --import private.key</span><br></pre></td></tr></table></figure>
<p>Set the fingerprint in <code>.sops.yaml</code>, which lives in the same folder as your secrets.yaml</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">creation_rules:</span></span><br><span class="line"> <span class="comment"># encrypted using user-name <[email protected]></span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">pgp:</span> <span class="string">"52A05B5F40C7F10EA56D3A38B56B3F7B2A11A0D6"</span></span><br></pre></td></tr></table></figure>
<p>or <code>export SOPS_PGP_FP="52A05B5F40C7F10EA56D3A38B56B3F7B2A11A0D6"</code></p>
<p>Encryption and Decryption</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># Encryption and Decryption of helm secrets is a wrapper of sops</span></span><br><span class="line"><span class="comment"># Encryption (secrets.yaml will be encrypted)</span></span><br><span class="line">$ helm secrets enc secrets.yaml</span><br><span class="line">$ cat secrets.yaml</span><br><span class="line">podAnnotations:</span><br><span class="line"> secret: ENC[AES256_GCM,data:R5pFRNs=,iv:2OF6j34rusZYulfkalXlgOs4+3M9t57R3rJwy/NJKos=,tag:SQTMAQg1HsTjdnAqeEWWnQ==,<span class="built_in">type</span>:str]</span><br><span class="line">sops:</span><br><span class="line"> kms: []</span><br><span class="line"> gcp_kms: []</span><br><span class="line"> azure_kv: []</span><br><span class="line"> hc_vault: []</span><br><span class="line"> age: []</span><br><span class="line"> lastmodified: <span class="string">"2021-10-04T16:34:43Z"</span></span><br><span class="line"> mac: ENC[AES256_GCM,data:BZLaP0aV0xqU6863VULSFnFiZkDvaWNT91mBvUxbgQVjUQkPwNCtj7bFx7zRLutf684xr6Xvx7EjRc0KmA/q7w9elLpj5XP6lvHwVcw2dYwnc/nXMvfAcHQTv0Gl3Ey+PXZ7lECouciZtyKd9ib9IEMnKrwzqs4ZDPC9Y6DZitU=,iv:BQk4/siDQxkemO3g3SEGzoeuka5BmoAL/23oRT4sM60=,tag:Ef9tL6h2MNAAV7o2RI3SPg==,<span class="built_in">type</span>:str]</span><br><span class="line"> pgp:</span><br><span class="line"> - created_at: <span class="string">"2021-10-04T16:34:43Z"</span></span><br><span class="line"> enc: |</span><br><span class="line"> -----BEGIN PGP MESSAGE-----</span><br><span class="line"> hQEMA9ce5qCwOO4MAQf/UI8ggX3hR0ZrVeZ4j5MiYsl7O1lDAS6xWLGivRfOfy4l</span><br><span class="line"> UYBMZi9E7LYNN47xXgbbGUJ8MXrCEp+vQR0AUqG+K/X6OPP6pmeeAlEGH0o9Fab0</span><br><span class="line"> 0f0sU3/h9juST0RBtTDa8YTmjTglD5uAzjYNqVsYe0YLNv6HxDw6Fu/h/sXI3Ekn</span><br><span class="line"> PCYw3E+ONjOAQWfCGgkiIQkdPmnB0kZD+bA3U+3EGSnPPljTWYyGuGyonEm4IckV</span><br><span class="line"> 
AGgzhtsPWKmh/SwVa603eVD/+JvBzszyUao9JinijZHJJmcHJg6TjuOOUUlTmRbq</span><br><span class="line"> 8Fgf3NUE3G5BQgeH1nFzLzlNYg6MVSceaUIX7vilwtJeAVHt2EIxGz6oZO0vFGYc</span><br><span class="line"> NoACwB6FkVEp3jS4QR0wMhtaflpGtoaooc+BIWxrbf9S0XIv0RHdf33/X7vbMRsz</span><br><span class="line"> tUw10Hsbl13DeySp+6uwoom3VVGCuisQdewoIf1ntg==</span><br><span class="line"> =c0YW</span><br><span class="line"> -----END PGP MESSAGE-----</span><br><span class="line"> fp: D6174A02027050E59C711075B430C4E58E2BBBA3</span><br><span class="line"> unencrypted_suffix: _unencrypted</span><br><span class="line"> version: 3.7.1</span><br><span class="line"></span><br><span class="line"><span class="comment"># Decryption</span></span><br><span class="line">$ helm secrets dec secrets.yaml</span><br><span class="line">secrets.yaml.dec will be generated</span><br><span class="line">$ cat secrets.yaml.dec</span><br><span class="line">podAnnotations:</span><br><span class="line"> secret: value</span><br><span class="line"></span><br><span class="line"><span class="comment"># View decryption in console</span></span><br><span class="line">$ helm secrets view secrets.yaml</span><br><span class="line">podAnnotations:</span><br><span class="line"> secret: value</span><br><span class="line"></span><br><span class="line"><span class="comment"># Edit your secret value</span></span><br><span class="line">$ helm secrets edit secrets.yaml</span><br></pre></td></tr></table></figure>
<h3 id="To-install-helm-chart-with-secrets"><a href="#To-install-helm-chart-with-secrets" class="headerlink" title="To install helm chart with secrets"></a>To install helm chart with secrets</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ helm secrets upgrade \</span><br><span class="line"> helloworld \</span><br><span class="line"> stable/java-app \</span><br><span class="line"> --install \</span><br><span class="line"> --timeout 600 \</span><br><span class="line"> --<span class="built_in">wait</span> \</span><br><span class="line"> --kube-context=sandbox \</span><br><span class="line"> --namespace=projectx \</span><br><span class="line"> --<span class="built_in">set</span> global.app_version=bff8fc4 \</span><br><span class="line"> -f helm_vars/projectx/sandbox/us-east-1/java-app/helloworld/secrets.yaml \</span><br><span class="line"> -f helm_vars/projectx/sandbox/us-east-1/java-app/helloworld/values.yaml \</span><br><span class="line"> -f helm_vars/secrets.yaml \</span><br><span class="line"> -f helm_vars/values.yaml</span><br><span class="line"></span><br></pre></td></tr></table></figure>
<h2 id="References"><a href="#References" class="headerlink" title="References"></a>References</h2><ol>
<li><a href="https://helm.sh/">https://helm.sh</a></li>
<li><a href="https://github.com/chartmuseum/helm-push">https://github.com/chartmuseum/helm-push</a></li>
<li><a href="https://github.com/jkroepke/helm-secrets/wiki/Usage">https://github.com/jkroepke/helm-secrets/wiki/Usage</a></li>
</ol>
]]></content>
<categories>
<category>helm</category>
</categories>
<tags>
<tag>helm</tag>
<tag>k8s</tag>
</tags>
</entry>
<entry>
<title>Install ceph with cephadm</title>
<url>/2021/10/05/cephadm/</url>
<content><![CDATA[<h1 id="Cephadm"><a href="#Cephadm" class="headerlink" title="Cephadm"></a>Cephadm</h1><h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p>This note is about how to install a ceph cluster with cephadm.</p>
<ul>
<li>What are the preparations for the installation</li>
<li>How to install components</li>
<li>Basic usage</li>
</ul>
<span id="more"></span>
<h2 id="Layout"><a href="#Layout" class="headerlink" title="Layout"></a>Layout</h2><p>node01 10.0.2.7 /dev/sda mon<br>node02 10.0.2.4 /dev/sdc mon<br>node03 10.0.2.5 /dev/sdb mon</p>
<h2 id="Prepare-Environment"><a href="#Prepare-Environment" class="headerlink" title="Prepare Environment"></a>Prepare Environment</h2><p>To install ceph with cephadm, the following requirements must be met</p>
<ul>
<li>ntp</li>
<li>python3 </li>
<li>container runtime (each component of ceph would be started as a container)</li>
<li>ssh environment</li>
<li>repo source <figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ wget --silent --remote-name --location https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm</span><br><span class="line">$ chmod +x cephadm</span><br><span class="line">$ ./cephadm add-repo --release pacific</span><br><span class="line">$ ./cephadm install</span><br><span class="line">$ apt-get update</span><br></pre></td></tr></table></figure></li>
</ul>
<h2 id="Installation"><a href="#Installation" class="headerlink" title="Installation"></a>Installation</h2><p>Note: install as the root user to avoid permission issues</p>
<h3 id="Initialization"><a href="#Initialization" class="headerlink" title="Initialization"></a>Initialization</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">cephadm bootstrap --mon-ip 10.0.2.7 --cluster-network 10.0.2.0/24</span><br><span class="line"><span class="comment"># Note: store the dashboard login credentials in the initialization logs</span></span><br><span class="line"><span class="comment"># Create user for dashboard</span></span><br><span class="line">$ ceph dashboard set-login-credentials yuanjing -i secret-yuanjing</span><br></pre></td></tr></table></figure>
<h3 id="Access-cluster"><a href="#Access-cluster" class="headerlink" title="Access cluster"></a>Access cluster</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># check cluster status</span></span><br><span class="line">$ cephadm shell -- ceph -s</span><br><span class="line"><span class="comment"># check osd status</span></span><br><span class="line">$ cephadm shell -- ceph osd tree</span><br></pre></td></tr></table></figure>
<h3 id="Add-other-nodes-to-cluster"><a href="#Add-other-nodes-to-cluster" class="headerlink" title="Add other nodes to cluster"></a>Add other nodes to cluster</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># copy ssh pub key (ceph.pub) generated by ceph to other nodes</span></span><br><span class="line">$ ssh-copy-id -f -i /etc/ceph/ceph.pub node02</span><br><span class="line">$ ssh-copy-id -f -i /etc/ceph/ceph.pub node03</span><br><span class="line"><span class="comment"># add nodes</span></span><br><span class="line">$ ceph orch host add node02 10.0.2.4</span><br><span class="line">$ ceph orch host add node03 10.0.2.5</span><br><span class="line"></span><br><span class="line"><span class="comment"># check cluster status (mon and mgr will be deployed on nodes automatically)</span></span><br><span class="line">$ ceph -s</span><br></pre></td></tr></table></figure>
<h3 id="Deploy-osd"><a href="#Deploy-osd" class="headerlink" title="Deploy osd"></a>Deploy osd</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># check available devices</span></span><br><span class="line">$ ceph orch device ls</span><br><span class="line"><span class="comment"># A storage device is considered available if all of the following conditions are met:</span></span><br><span class="line"><span class="comment"># The device must have no partitions.</span></span><br><span class="line"><span class="comment"># The device must not have any LVM state.</span></span><br><span class="line"><span class="comment"># The device must not be mounted.</span></span><br><span class="line"><span class="comment"># The device must not contain a file system.</span></span><br><span class="line"><span class="comment"># The device must not contain a Ceph BlueStore OSD.</span></span><br><span class="line"><span class="comment"># The device must be larger than 5 GB.</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># deploy all available devices</span></span><br><span class="line">$ ceph orch apply osd --all-available-devices</span><br><span class="line"></span><br><span class="line"><span class="comment"># add new osd</span></span><br><span class="line">$ ceph orch daemon add osd hostname:/dev/sdb</span><br></pre></td></tr></table></figure>
<h3 id="Deploy-mds-CephFS"><a href="#Deploy-mds-CephFS" class="headerlink" title="Deploy mds (CephFS)"></a>Deploy mds (CephFS)</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># create pool for data and metadata </span></span><br><span class="line">$ ceph osd pool create cephfs1_data 64 64</span><br><span class="line">$ ceph osd pool create cephfs1_metadata 64 64</span><br><span class="line"><span class="comment"># create a fs</span></span><br><span class="line">$ ceph fs new cephfs-demo1 cephfs1_metadata cephfs1_data</span><br><span class="line"></span><br><span class="line"><span class="comment"># or with fs volume, pool will be automatically created in this case with default pg and pgp</span></span><br><span class="line">$ ceph fs volume create cephfs-demo2 <span class="string">"node01,node02,node03"</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># or with orch apply, multiple mds will be created, if cephfs-demo1 is already created, the placement will be updated</span></span><br><span class="line">$ ceph orch apply mds cephfs-demo1 --placement=<span class="string">"3 node01 node02 node03"</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># check</span></span><br><span class="line">$ ceph fs ls</span><br><span class="line"></span><br><span class="line"><span class="comment"># check fs with admin config</span></span><br><span class="line">$ ceph -m 10.0.2.4:6789 --id admin --key=AQCEKzthORrmJhAA3PH7m+9kldDLLRQXuscofg== -c /etc/ceph/ceph.conf fs ls --format=json</span><br></pre></td></tr></table></figure>
<h3 id="Deploy-rgw-Ceph-Object-Storage"><a href="#Deploy-rgw-Ceph-Object-Storage" class="headerlink" title="Deploy rgw (Ceph Object Storage)"></a>Deploy rgw (Ceph Object Storage)</h3><ul>
<li>Create Realm, Zonegroup, Zone <figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># radosgw-admin realm create --rgw-realm=<realm-name> --default</span></span><br><span class="line">$ radosgw-admin realm create --rgw-realm=testenv --default</span><br><span class="line"><span class="comment"># radosgw-admin zonegroup create --rgw-zonegroup=<zonegroup-name> --master --default</span></span><br><span class="line">$ radosgw-admin zonegroup create --rgw-zonegroup=muc --master --default</span><br><span class="line"><span class="comment"># radosgw-admin zone create --rgw-zonegroup=<zonegroup-name> --rgw-zone=<zone-name> --master --default</span></span><br><span class="line">$ radosgw-admin zone create --rgw-zonegroup=muc --rgw-zone=room1 --master --default</span><br><span class="line"><span class="comment"># This will generate .rgw.root</span></span><br></pre></td></tr></table></figure></li>
<li>Create rgw service <figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># create a single instance service on node03</span></span><br><span class="line">$ ceph orch apply rgw myrgw --realm_name=testenv --placement=<span class="string">"1 node03"</span> --port=7480 --zone_name=room1</span><br><span class="line"><span class="comment"># This will generate:</span></span><br><span class="line"><span class="comment"># room1.rgw.log</span></span><br><span class="line"><span class="comment"># room1.rgw.control</span></span><br><span class="line"><span class="comment"># room1.rgw.meta</span></span><br></pre></td></tr></table></figure></li>
</ul>
<h2 id="Use-ceph-storage-inside-a-k8s-cluster"><a href="#Use-ceph-storage-inside-a-k8s-cluster" class="headerlink" title="Use ceph storage inside a k8s cluster"></a>Use ceph storage inside a k8s cluster</h2><h3 id="CephRBD-use-ceph-block-storage-in-k8s"><a href="#CephRBD-use-ceph-block-storage-in-k8s" class="headerlink" title="CephRBD (use ceph block storage in k8s)"></a>CephRBD (use ceph block storage in k8s)</h3><ul>
<li><p>prepare ceph pool</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># create a pool for block storage</span></span><br><span class="line">$ ceph osd pool create kubernetes</span><br><span class="line"><span class="comment"># create user (if this does not work, use admin instead and update the keyring in csi-rbd-secret.yaml)</span></span><br><span class="line">$ ceph auth get-or-create client.kubernetes mon <span class="string">'profile rbd'</span> osd <span class="string">'profile rbd pool=kubernetes'</span></span><br></pre></td></tr></table></figure></li>
<li><p>prepare rbd dynamic provisioner and plugin</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># refer to https://github.com/ceph/ceph-csi/tree/devel/deploy/rbd/kubernetes</span></span><br><span class="line">$ kubectl apply -f ceph-block-provisioner/deploy/ -n ceph</span><br></pre></td></tr></table></figure></li>
<li><p>test dynamic provisioning of rbd of block mode</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ kubectl apply -f test-raw-block-pvc.yaml</span><br><span class="line">$ kubectl apply -f test-raw-block-pod.yaml</span><br></pre></td></tr></table></figure></li>
<li><p>test dynamic provisioning of rbd of filesystem mode</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ kubectl apply -f test-filesystem-pvc.yaml</span><br><span class="line">$ kubectl apply -f test-filesystem-pod.yaml</span><br></pre></td></tr></table></figure></li>
<li><p><strong>Note</strong>: </p>
<ul>
<li>ERROR: MountVolume.SetUp failed for volume “registration-dir” : hostPath type check failed: /var/lib/kubelet/plugins_registry/ is not a directory.<br><strong>Change the kubelet dir in the manifests to <code>/data/kubelet</code></strong> if you installed k8s from binaries and set the kubelet folder to <code>/data/kubelet</code> (ignore this when installing with kubeadm)</li>
<li>rbd-plugin is a daemonset, add tolerations to deploy to all nodes <figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">tolerations:</span></span><br><span class="line"><span class="bullet">-</span> <span class="attr">key:</span> <span class="string">"master"</span></span><br><span class="line"> <span class="attr">value:</span> <span class="string">"prometheus"</span></span><br><span class="line"> <span class="attr">effect:</span> <span class="string">"NoSchedule"</span></span><br><span class="line"><span class="bullet">-</span> <span class="attr">key:</span> <span class="string">"master"</span></span><br><span class="line"> <span class="attr">value:</span> <span class="string">"nowork"</span></span><br><span class="line"> <span class="attr">effect:</span> <span class="string">"NoSchedule"</span></span><br></pre></td></tr></table></figure></li>
</ul>
</li>
</ul>
<h3 id="CephFS-use-cephfs-in-k8s"><a href="#CephFS-use-cephfs-in-k8s" class="headerlink" title="CephFS (use cephfs in k8s)"></a>CephFS (use cephfs in k8s)</h3><ul>
<li><p>prepare cephfs server</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># create pool for data and metadata </span></span><br><span class="line">$ ceph osd pool create cephfs1_data 64 64</span><br><span class="line">$ ceph osd pool create cephfs1_metadata 64 64</span><br><span class="line"><span class="comment"># create a fs</span></span><br><span class="line">$ ceph fs new cephfs-demo1 cephfs1_metadata cephfs1_data</span><br><span class="line"><span class="comment"># update placement</span></span><br><span class="line">$ ceph orch apply mds cephfs-demo1 --placement=<span class="string">"3 node01 node02 node03"</span></span><br></pre></td></tr></table></figure></li>
<li><p>prepare cephfs dynamic provisioner and plugin </p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># refer to https://github.com/ceph/ceph-csi/tree/devel/deploy/cephfs/kubernetes</span></span><br><span class="line">$ kubectl apply -f cephfs-provisioner/deploy/ -n ceph</span><br><span class="line"></span><br><span class="line"><span class="comment"># test cephfs pvc</span></span><br><span class="line">kubectl apply -f test_pvc.yaml</span><br><span class="line">kubectl apply -f test_pvc_pod.yaml</span><br></pre></td></tr></table></figure></li>
</ul>
<h3 id="Ceph-Object-Storage"><a href="#Ceph-Object-Storage" class="headerlink" title="Ceph Object Storage"></a>Ceph Object Storage</h3><ul>
<li>Create a user <figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ radosgw-admin user create --uid=yuanjing --display-name=Yuanjing</span><br></pre></td></tr></table></figure></li>
<li>Create a bucket<ul>
<li>command: s3cmd <figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ s3cmd --configure</span><br><span class="line"><span class="comment"># have the following configurations, configurations are saved in </span></span><br><span class="line">Access Key: <Key></span><br><span class="line">Secret Key: <Secret></span><br><span class="line">Default Region: US</span><br><span class="line">S3 Endpoint: 10.0.2.5:7480</span><br><span class="line">DNS-style bucket+hostname:port template <span class="keyword">for</span> accessing a bucket: 10.0.2.5:7480/%(bucket)s</span><br><span class="line">Encryption password: </span><br><span class="line">Path to GPG program: /usr/bin/gpg</span><br><span class="line">Use HTTPS protocol: False</span><br><span class="line">HTTP Proxy server name: </span><br><span class="line">HTTP Proxy server port: 0</span><br><span class="line"></span><br><span class="line"><span class="comment"># create a bucket</span></span><br><span class="line">$ s3cmd mb s3://demo-bucket</span><br><span class="line"><span class="comment"># upload a file</span></span><br><span class="line">$ s3cmd put file.py s3://demo-bucket</span><br></pre></td></tr></table></figure></li>
</ul>
</li>
</ul>
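The bucket+hostname template in the s3cmd configuration expands per bucket. A tiny sketch of what the resulting path-style object URL looks like, using the endpoint and names from the example above:

```shell
# Build the path-style URL for an uploaded object, mirroring the
# "10.0.2.5:7480/%(bucket)s" template from the s3cmd configuration.
endpoint="10.0.2.5:7480"
bucket="demo-bucket"
object="file.py"

echo "http://${endpoint}/${bucket}/${object}"
```

This is the URL form clients use when DNS-style bucket hostnames are not set up for the rgw endpoint.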
<h2 id="Trouble-shooting"><a href="#Trouble-shooting" class="headerlink" title="Trouble shooting"></a>Trouble shooting</h2><ol>
<li>HTTP 416 (InvalidRange) when putting an object to a bucket: raise mon_max_pg_per_osd <figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">ceph config <span class="built_in">set</span> mon mon_max_pg_per_osd 1000</span><br></pre></td></tr></table></figure></li>
</ol>
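The 416 typically appears because creating the rgw pools would push the PGs-per-OSD count over the monitor limit (250 by default in recent releases). A rough estimate of the current load — `pgs_per_osd` and its `pg_num:replicas` input format are made up for illustration:

```shell
# Estimate PGs per OSD from a list of pools given as pg_num:replicas pairs.
# Each PG is counted once per replica, then divided over the OSDs.
pgs_per_osd() {
  local num_osds=$1; shift
  local total=0 pool pg rep
  for pool in "$@"; do
    pg=${pool%%:*}
    rep=${pool##*:}
    total=$(( total + pg * rep ))
  done
  echo $(( total / num_osds ))
}

# e.g. two 64-PG pools, 3x replicated, on 3 OSDs:
pgs_per_osd 3 64:3 64:3
```

Each extra rgw pool (log, control, meta, bucket index/data) adds to this total, which is why small clusters hit the limit quickly.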
<p><strong>Note</strong>:</p>
<p>Ceph RBD and CephFS both support filesystem-mode volumes</p>
<ul>
<li>ceph rbd: supports a volume shared by pods running on the same node; low latency, good I/O</li>
<li>cephfs: supports a volume shared by pods running across multiple nodes; higher latency, good I/O, especially for large files, but can become a bottleneck as storage grows</li>
</ul>
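In Kubernetes terms this difference shows up in the PVC access modes: RBD volumes are normally requested as ReadWriteOnce, CephFS volumes as ReadWriteMany. A minimal illustration — the PVC and storage class names here are placeholders assumed from the CSI setup above, not fixed names:

```yaml
# RBD: block-device semantics, one node at a time
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc                 # placeholder name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc  # assumed rbd CSI storage class name
---
# CephFS: shared filesystem, writable from many nodes
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc                 # placeholder name
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs-sc  # assumed cephfs CSI storage class name
```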
<h4 id="References"><a href="#References" class="headerlink" title="References"></a>References</h4><ol>
<li><a href="https://www.cnblogs.com/zjz20/p/14136349.html">https://www.cnblogs.com/zjz20/p/14136349.html</a></li>
<li><a href="https://dylanyang.top/post/2021/05/15/k8s%E4%BD%BF%E7%94%A8ceph-csi%E6%8C%81%E4%B9%85%E5%8C%96%E5%AD%98%E5%82%A8cephfs/">https://dylanyang.top/post/2021/05/15/k8s%E4%BD%BF%E7%94%A8ceph-csi%E6%8C%81%E4%B9%85%E5%8C%96%E5%AD%98%E5%82%A8cephfs/</a></li>
<li><a href="https://blog.51cto.com/leejia/2583381">https://blog.51cto.com/leejia/2583381</a></li>
<li><a href="https://docs.ceph.com/en/octopus/rbd/rbd-kubernetes/">https://docs.ceph.com/en/octopus/rbd/rbd-kubernetes/</a></li>
<li><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/">https://kubernetes.io/docs/concepts/storage/persistent-volumes/</a></li>
<li><a href="https://blog.51cto.com/renlixing/3134294">https://blog.51cto.com/renlixing/3134294</a> (ceph rbd and cephfs comparison)</li>
<li><a href="https://xiebiao.top/post/storage/ceph_octopus_ops/">https://xiebiao.top/post/storage/ceph_octopus_ops/</a> (cephadm)</li>
<li><a href="https://docs.ceph.com/en/mimic/man/8/radosgw-admin/">https://docs.ceph.com/en/mimic/man/8/radosgw-admin/</a> (radosgw-admin)</li>
<li><a href="https://docs.ceph.com/en/latest/radosgw/s3/python/">https://docs.ceph.com/en/latest/radosgw/s3/python/</a> (boto s3)</li>
<li><a href="https://knowledgebase.45drives.com/kb/kb450422-configuring-ceph-object-storage-to-use-multiple-data-pools/">https://knowledgebase.45drives.com/kb/kb450422-configuring-ceph-object-storage-to-use-multiple-data-pools/</a> (placement)</li>
<li><a href="https://ci-jie.github.io/2019/08/24/Ceph-Object-Storage-Placement-%E4%BB%8B%E7%B4%B9/">https://ci-jie.github.io/2019/08/24/Ceph-Object-Storage-Placement-%E4%BB%8B%E7%B4%B9/</a> (placement)</li>
</ol>
]]></content>
<categories>
<category>ceph</category>
</categories>
<tags>
<tag>cloud</tag>
<tag>ceph</tag>
<tag>cephadm</tag>
</tags>
</entry>
<entry>
<title>Install ceph with rook-ceph</title>
<url>/2021/10/06/rook-ceph/</url>
<content><![CDATA[<h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p>This note describes how to install a Ceph cluster inside a Kubernetes cluster with rook-ceph:</p>
<ul>
<li>Preparations for the installation</li>
<li>How to install with k8s manifests (release-1.3)</li>
<li>How to install rook-ceph with helm charts (v1.8.3)</li>
</ul>
<span id="more"></span>
<h2 id="Layout"><a href="#Layout" class="headerlink" title="Layout"></a>Layout</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># Cluster Environment</span></span><br><span class="line"><span class="comment"># k8s 1.22.7</span></span><br><span class="line"><span class="comment"># NAME ROLES Disk</span></span><br><span class="line"><span class="comment"># node01 master, mon /dev/sda</span></span><br><span class="line"><span class="comment"># node02 worker, mon /dev/sdc</span></span><br><span class="line"><span class="comment"># node03 worker, mon /dev/sdb</span></span><br></pre></td></tr></table></figure>
<h2 id="Preparation"><a href="#Preparation" class="headerlink" title="Preparation"></a>Preparation</h2><p>For installing ceph in k8s with rook, the following requirements must be fulfilled</p>
<ul>
<li>kubernetes > v1.17.0</li>
<li>a minimum of 3 kubernetes nodes in the cluster</li>
<li>disk <figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">1. The device must have no partitions.</span><br><span class="line">2. The device must not have any LVM state.</span><br><span class="line">3. The device must not be mounted.</span><br><span class="line">4. The device must not contain a file system.</span><br><span class="line">5. The device must not contain a Ceph BlueStore OSD.</span><br><span class="line">6. The device must be larger than 5 GB.</span><br><span class="line"><span class="comment"># check with lsblk</span></span><br></pre></td></tr></table></figure></li>
</ul>
<h2 id="label-your-node-for-setup"><a href="#label-your-node-for-setup" class="headerlink" title="label your node for setup"></a>label your nodes for setup</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@node01 ceph]<span class="comment"># kubectl taint node node01 node-role.kubernetes.io/master="":NoSchedule</span></span><br><span class="line"></span><br><span class="line">[root@node01 ceph]<span class="comment"># kubectl label nodes {node01,node02,node03} ceph-osd=enabled</span></span><br><span class="line"></span><br><span class="line">[root@node01 ceph]<span class="comment"># kubectl label nodes {node01,node02,node03} ceph-mon=enabled</span></span><br><span class="line"></span><br><span class="line">[root@node01 ceph]<span class="comment"># kubectl label nodes node01 ceph-mgr=enabled</span></span><br></pre></td></tr></table></figure>
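The labels above are consumed by the CephCluster placement spec. A sketch of how cephcluster.yaml can pin mons to the labelled nodes (assuming the standard rook placement schema; the osd and mgr sections follow the same pattern with their labels):

```yaml
# excerpt for cephcluster.yaml — schedule mons only on nodes labelled ceph-mon=enabled
placement:
  mon:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: ceph-mon
                operator: In
                values: ["enabled"]
```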
<h2 id="Install-with-k8s-manifests"><a href="#Install-with-k8s-manifests" class="headerlink" title="Install with k8s manifests"></a>Install with k8s manifests</h2><h3 id="Install-Rook-Operator"><a href="#Install-Rook-Operator" class="headerlink" title="Install Rook Operator"></a>Install Rook Operator</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># download rook release 1.3</span></span><br><span class="line">$ git <span class="built_in">clone</span> --single-branch --branch release-1.3 https://github.com/rook/rook.git</span><br><span class="line"><span class="comment"># configuration files</span></span><br><span class="line">$ <span class="built_in">cd</span> rook/cluster/examples/kubernetes/ceph</span><br><span class="line"></span><br><span class="line">$ kubectl create -f common.yaml</span><br><span class="line">$ kubectl apply -f operator.yaml</span><br><span class="line"><span class="comment">## Output</span></span><br><span class="line">$ configmap/rook-ceph-operator-config created</span><br><span class="line">$ deployment.apps/rook-ceph-operator created</span><br><span class="line"></span><br><span class="line"><span class="comment"># check pods created in rook-ceph</span></span><br><span class="line">$ kubectl get pod -n rook-ceph</span><br><span class="line"><span class="comment">## Output</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE</span><br><span class="line">rook-ceph-operator-599765ff49-fhbz9 1/1 Running 0 92s</span><br><span class="line">rook-discover-6fhlb 1/1 Running 0 55s</span><br><span class="line">rook-discover-97kmz 1/1 Running 0 55s</span><br><span class="line">rook-discover-z5k2z 1/1 Running 0 55s</span><br></pre></td></tr></table></figure>
<h3 id="Create-a-ceph-cluster"><a href="#Create-a-ceph-cluster" class="headerlink" title="Create a ceph cluster"></a>Create a ceph cluster</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># apply cluster resource</span></span><br><span class="line">kubectl apply -f cephcluster.yaml</span><br><span class="line"></span><br><span class="line"><span class="comment">## Output</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE</span><br><span class="line">csi-cephfsplugin-lz6dn 3/3 Running 0 3m54s</span><br><span class="line">csi-cephfsplugin-provisioner-674847b584-4j9jw 5/5 Running 0 3m54s</span><br><span class="line">csi-cephfsplugin-provisioner-674847b584-h2cgl 5/5 Running 0 3m54s</span><br><span class="line">csi-cephfsplugin-qbpnq 3/3 Running 0 3m54s</span><br><span class="line">csi-cephfsplugin-qzsvr 3/3 Running 0 3m54s</span><br><span class="line">csi-rbdplugin-kk9sw 3/3 Running 0 3m55s</span><br><span class="line">csi-rbdplugin-l95f8 3/3 Running 0 3m55s</span><br><span class="line">csi-rbdplugin-provisioner-64ccb796cf-8gjwv 6/6 Running 0 3m55s</span><br><span class="line">csi-rbdplugin-provisioner-64ccb796cf-dhpwt 6/6 Running 0 3m55s</span><br><span class="line">csi-rbdplugin-v4hk6 3/3 Running 0 3m55s</span><br><span class="line">rook-ceph-crashcollector-pool-33zy7-68cdfb6bcf-9cfkn 1/1 Running 0 109s</span><br><span class="line">rook-ceph-crashcollector-pool-33zyc-565559f7-7r6rt 1/1 Running 0 53s</span><br><span class="line">rook-ceph-crashcollector-pool-33zym-749dcdc9df-w4xzl 1/1 Running 0 78s</span><br><span class="line">rook-ceph-mgr-a-7fdf77cf8d-ppkwl 1/1 Running 0 53s</span><br><span class="line">rook-ceph-mon-a-97d9767c6-5ftfm 1/1 Running 0 109s</span><br><span class="line">rook-ceph-mon-b-9cb7bdb54-lhfkj 1/1 Running 0 96s</span><br><span class="line">rook-ceph-mon-c-786b9f7f4b-jdls4 1/1 Running 0 78s</span><br><span class="line">rook-ceph-operator-599765ff49-fhbz9 1/1 Running 0 
6m58s</span><br><span class="line">rook-ceph-osd-prepare-pool-33zy7-c2hww 1/1 Running 0 21s</span><br><span class="line">rook-ceph-osd-prepare-pool-33zyc-szwsc 1/1 Running 0 21s</span><br><span class="line">rook-ceph-osd-prepare-pool-33zym-2p68b 1/1 Running 0 21s</span><br><span class="line">rook-discover-6fhlb 1/1 Running 0 6m21s</span><br><span class="line">rook-discover-97kmz 1/1 Running 0 6m21s</span><br><span class="line">rook-discover-z5k2z 1/1 Running 0 6m21s</span><br></pre></td></tr></table></figure>
<h2 id="Install-with-helm-charts"><a href="#Install-with-helm-charts" class="headerlink" title="Install with helm charts"></a>Install with helm charts</h2><p>There’re 2 charts (<a href="https://github.com/rook/rook/tree/v1.8.3/deploy/charts">https://github.com/rook/rook/tree/v1.8.3/deploy/charts</a>)</p>
<ul>
<li>rook-ceph: install rook-operator resources</li>
<li>rook-ceph-cluster: install cluster resources</li>
</ul>
<h3 id="rook-ceph-chart"><a href="#rook-ceph-chart" class="headerlink" title="rook-ceph chart"></a>rook-ceph chart</h3><ul>
<li><p>update configurations in values.yaml</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">allowMultipleFilesystems: <span class="literal">true</span></span><br><span class="line">hostpathRequiresPrivileged: <span class="literal">true</span></span><br></pre></td></tr></table></figure></li>
<li><p>apply installation</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">helm upgrade --install rook-ceph rook-release/rook-ceph -n rook-ceph --version v1.8.3 -f values.yaml</span><br></pre></td></tr></table></figure></li>
</ul>
<h3 id="rook-ceph-cluster-chart"><a href="#rook-ceph-cluster-chart" class="headerlink" title="rook-ceph-cluster chart"></a>rook-ceph-cluster chart</h3><ul>
<li><p>Modify the disk configuration in values.yaml according to your hardware disk setup. Make sure that the disks fulfill the requirements</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">storage:</span></span><br><span class="line"> <span class="attr">useAllNodes:</span> <span class="literal">false</span></span><br><span class="line"> <span class="attr">useAllDevices:</span> <span class="literal">false</span></span><br><span class="line"> <span class="attr">nodes:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">"node01"</span></span><br><span class="line"> <span class="attr">devices:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">"sda"</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">"node02"</span></span><br><span class="line"> <span class="attr">devices:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">"sdc"</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">"node03"</span></span><br><span class="line"> <span class="attr">devices:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">"sdb"</span></span><br></pre></td></tr></table></figure></li>
<li><p>apply installation</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">helm upgrade --install rook-ceph-cluster rook-release/rook-ceph-cluster -n rook-ceph --version v1.8.3 -f values.yaml</span><br></pre></td></tr></table></figure></li>
</ul>
<h3 id="Usage"><a href="#Usage" class="headerlink" title="Usage"></a>Usage</h3><h4 id="CephFS"><a href="#CephFS" class="headerlink" title="CephFS"></a>CephFS</h4><ul>
<li><p>create file system (CephFilesystem and StorageClass)<br>Fill in the templates according to your setup</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">ceph.rook.io/v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">CephFilesystem</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> {{<span class="string">.Values.hddFS.name</span>}}</span><br><span class="line"> <span class="attr">namespace:</span> <span class="string">rook-ceph</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"> <span class="attr">metadataPool:</span></span><br><span class="line"> <span class="attr">replicated:</span></span><br><span class="line"> <span class="attr">size:</span> {{<span class="string">.Values.hddFS.replicated</span>}}</span><br><span class="line"> <span class="attr">dataPools:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">replicated-{{.Values.hddFS.replicated}}</span></span><br><span class="line"> <span class="attr">replicated:</span></span><br><span class="line"> <span class="attr">size:</span> {{<span class="string">.Values.hddFS.replicated</span>}}</span><br><span class="line"> <span class="attr">deviceClass:</span> <span class="string">hdd</span></span><br><span class="line"> <span class="attr">preserveFilesystemOnDelete:</span> <span class="literal">true</span></span><br><span class="line"> <span class="attr">metadataServer:</span></span><br><span class="line"> <span class="attr">activeCount:</span> <span class="number">1</span></span><br><span class="line"> <span class="attr">activeStandby:</span> <span class="literal">true</span></span><br><span class="line"></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">storage.k8s.io/v1</span></span><br><span 
class="line"><span class="attr">kind:</span> <span class="string">StorageClass</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> {{<span class="string">.Values.hddFS.name</span>}}</span><br><span class="line"><span class="comment"># Change "rook-ceph" provisioner prefix to match the operator namespace if needed</span></span><br><span class="line"><span class="attr">provisioner:</span> <span class="string">rook-ceph.cephfs.csi.ceph.com</span></span><br><span class="line"><span class="attr">parameters:</span></span><br><span class="line"> <span class="comment"># clusterID is the namespace where the rook cluster is running</span></span><br><span class="line"> <span class="comment"># If you change this namespace, also change the namespace below where the secret namespaces are defined</span></span><br><span class="line"> <span class="attr">clusterID:</span> <span class="string">rook-ceph</span></span><br><span class="line"></span><br><span class="line"> <span class="comment"># CephFS filesystem name into which the volume shall be created</span></span><br><span class="line"> <span class="attr">fsName:</span> {{<span class="string">.Values.hddFS.name</span>}}</span><br><span class="line"></span><br><span class="line"> <span class="comment"># Ceph pool into which the volume shall be created</span></span><br><span class="line"> <span class="comment"># Required for provisionVolume: "true"</span></span><br><span class="line"> <span class="attr">pool:</span> {{<span class="string">.Values.hddFS.name</span>}}<span class="string">-replicated-{{.Values.hddFS.replicated}}</span></span><br><span class="line"></span><br><span class="line"> <span class="comment"># The secrets contain Ceph admin credentials. 
These are generated automatically by the operator</span></span><br><span class="line"> <span class="comment"># in the same namespace as the cluster.</span></span><br><span class="line"> <span class="attr">csi.storage.k8s.io/provisioner-secret-name:</span> <span class="string">rook-csi-cephfs-provisioner</span></span><br><span class="line"> <span class="attr">csi.storage.k8s.io/provisioner-secret-namespace:</span> <span class="string">rook-ceph</span></span><br><span class="line"> <span class="attr">csi.storage.k8s.io/controller-expand-secret-name:</span> <span class="string">rook-csi-cephfs-provisioner</span></span><br><span class="line"> <span class="attr">csi.storage.k8s.io/controller-expand-secret-namespace:</span> <span class="string">rook-ceph</span></span><br><span class="line"> <span class="attr">csi.storage.k8s.io/node-stage-secret-name:</span> <span class="string">rook-csi-cephfs-node</span></span><br><span class="line"> <span class="attr">csi.storage.k8s.io/node-stage-secret-namespace:</span> <span class="string">rook-ceph</span></span><br><span class="line"></span><br><span class="line"><span class="attr">reclaimPolicy:</span> <span class="string">Delete</span></span><br><span class="line"></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">ceph.rook.io/v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">CephFilesystem</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> {{<span class="string">.Values.ssdFS.name</span>}}</span><br><span class="line"> <span class="attr">namespace:</span> <span class="string">rook-ceph</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"> <span class="attr">metadataPool:</span></span><br><span class="line"> <span class="attr">replicated:</span></span><br><span 
class="line"> <span class="attr">size:</span> {{<span class="string">.Values.ssdFS.replicated</span>}}</span><br><span class="line"> <span class="attr">dataPools:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">replicated-{{.Values.ssdFS.replicated}}</span></span><br><span class="line"> <span class="attr">replicated:</span></span><br><span class="line"> <span class="attr">size:</span> {{<span class="string">.Values.ssdFS.replicated</span>}}</span><br><span class="line"> <span class="attr">deviceClass:</span> <span class="string">ssd</span></span><br><span class="line"> <span class="attr">preserveFilesystemOnDelete:</span> <span class="literal">true</span></span><br><span class="line"> <span class="attr">metadataServer:</span></span><br><span class="line"> <span class="attr">activeCount:</span> <span class="number">1</span></span><br><span class="line"> <span class="attr">activeStandby:</span> <span class="literal">true</span></span><br></pre></td></tr></table></figure></li>
<li><p>use the created file system in the current k8s cluster</p>
</li>
</ul>
<h4 id="Ceph-Object-Storage"><a href="#Ceph-Object-Storage" class="headerlink" title="Ceph Object Storage"></a>Ceph Object Storage</h4><h5 id="create-object-storage-service"><a href="#create-object-storage-service" class="headerlink" title="create object storage service"></a>create object storage service</h5><ul>
<li><p>object storage service without multisite settings (only one type of bucket, e.g., hdd)</p>
<ul>
<li>CephObjectStorage</li>
<li>StorageClass</li>
</ul>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">ceph.rook.io/v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">CephObjectStore</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="comment"># do not change it, pool name, </span></span><br><span class="line"><span class="comment"># pool will not be deleted even this resource is deleted</span></span><br><span class="line"><span class="attr">name:</span> {{<span class="string">.Values.oss.hdd.storeName</span>}}</span><br><span class="line"><span class="attr">namespace:</span> <span class="string">rook-ceph</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"><span class="attr">metadataPool:</span></span><br><span class="line"> <span class="attr">failureDomain:</span> <span class="string">host</span></span><br><span class="line"> <span class="attr">replicated:</span></span><br><span class="line"> <span class="attr">size:</span> {{<span class="string">.Values.oss.hdd.replicated</span>}}</span><br><span class="line"><span class="attr">dataPool:</span></span><br><span class="line"> <span class="attr">failureDomain:</span> <span class="string">host</span></span><br><span class="line"> <span class="attr">replicated:</span></span><br><span class="line"> <span class="attr">size:</span> {{<span class="string">.Values.oss.hdd.replicated</span>}}</span><br><span class="line"> <span class="attr">deviceClass:</span> <span class="string">hdd</span></span><br><span class="line"> <span class="comment">#erasureCoded:</span></span><br><span class="line"> <span class="comment"># dataChunks: 2</span></span><br><span class="line"> <span class="comment"># codingChunks: 1</span></span><br><span class="line"><span class="attr">preservePoolsOnDelete:</span> <span class="literal">true</span></span><br><span 
class="line"><span class="attr">gateway:</span></span><br><span class="line"> <span class="attr">sslCertificateRef:</span></span><br><span class="line"> <span class="attr">port:</span> {{<span class="string">.Values.oss.hdd.port</span>}}</span><br><span class="line"> <span class="comment"># securePort: 443</span></span><br><span class="line"> <span class="attr">instances:</span> <span class="number">1</span></span><br><span class="line"><span class="attr">healthCheck:</span></span><br><span class="line"> <span class="attr">bucket:</span></span><br><span class="line"> <span class="attr">disabled:</span> <span class="literal">false</span></span><br><span class="line"> <span class="attr">interval:</span> <span class="string">60s</span></span><br><span class="line"></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">storage.k8s.io/v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">StorageClass</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> {{<span class="string">.Values.oss.hdd.storageClassName</span>}}</span><br><span class="line"><span class="comment"># Change "rook-ceph" provisioner prefix to match the operator namespace if needed</span></span><br><span class="line"><span class="attr">provisioner:</span> <span class="string">rook-ceph.ceph.rook.io/bucket</span></span><br><span class="line"><span class="attr">reclaimPolicy:</span> <span class="string">Delete</span></span><br><span class="line"><span class="attr">parameters:</span></span><br><span class="line"><span class="attr">objectStoreName:</span> {{<span class="string">.Values.oss.hdd.storeName</span>}}</span><br><span class="line"><span class="attr">objectStoreNamespace:</span> <span class="string">rook-ceph</span></span><br></pre></td></tr></table></figure></li>
<li><p>another object storage service with multisite settings (making good use of CRUSH rules to allow the creation of both SSD and HDD buckets)</p>
<ul>
<li>CephObjectRealm</li>
<li>CephObjectZoneGroup</li>
<li>CephObjectZone</li>
<li>CephObjectStore</li>
<li>StorageClass</li>
<li>Service</li>
<li>Ingress<br>Fill the templates according to your setup<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="comment">#################################################################################################################</span></span><br><span class="line"><span class="comment"># Create an object store with multisite settings for a production environment. A minimum of 3 hosts with OSDs</span></span><br><span class="line"><span class="comment"># are required in this example.</span></span><br><span class="line"><span class="comment">#################################################################################################################</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">ceph.rook.io/v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">CephObjectRealm</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr">name:</span> {{<span class="string">.Values.oss.multisite.realm</span>}}</span><br><span class="line"><span class="attr">namespace:</span> <span class="string">rook-ceph</span> <span class="comment"># namespace:cluster</span></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">ceph.rook.io/v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">CephObjectZoneGroup</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr">name:</span> {{<span class="string">.Values.oss.multisite.zonegroup</span>}}</span><br><span class="line"><span class="attr">namespace:</span> <span class="string">rook-ceph</span> <span class="comment"># namespace:cluster</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"><span 
class="attr">realm:</span> {{<span class="string">.Values.oss.multisite.realm</span>}}</span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">ceph.rook.io/v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">CephObjectZone</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr">name:</span> {{<span class="string">.Values.oss.multisite.zone</span>}}</span><br><span class="line"><span class="attr">namespace:</span> <span class="string">rook-ceph</span> <span class="comment"># namespace:cluster</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"><span class="attr">zoneGroup:</span> {{<span class="string">.Values.oss.multisite.zonegroup</span>}}</span><br><span class="line"><span class="attr">metadataPool:</span></span><br><span class="line"> <span class="attr">failureDomain:</span> <span class="string">host</span></span><br><span class="line"> <span class="attr">replicated:</span></span><br><span class="line"> <span class="attr">size:</span> {{<span class="string">.Values.oss.multisite.replicated</span>}}</span><br><span class="line"> <span class="comment">#requireSafeReplicaSize: true</span></span><br><span class="line"><span class="attr">dataPool:</span></span><br><span class="line"> <span class="attr">failureDomain:</span> <span class="string">host</span></span><br><span class="line"> <span class="attr">replicated:</span></span><br><span class="line"> <span class="attr">size:</span> {{<span class="string">.Values.oss.multisite.replicated</span>}}</span><br><span class="line"> <span class="comment">#requireSafeReplicaSize: true</span></span><br><span class="line"> <span class="attr">deviceClass:</span> <span class="string">hdd</span></span><br><span class="line"> <span class="attr">parameters:</span></span><br><span class="line"> 
<span class="attr">compression_mode:</span> <span class="string">none</span></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">ceph.rook.io/v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">CephObjectStore</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr">name:</span> {{<span class="string">.Values.oss.multisite.storeName</span>}}</span><br><span class="line"><span class="attr">namespace:</span> <span class="string">rook-ceph</span> <span class="comment"># namespace:cluster</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"><span class="attr">gateway:</span></span><br><span class="line"> <span class="attr">port:</span> {{<span class="string">.Values.oss.multisite.port</span>}}</span><br><span class="line"> <span class="comment"># securePort: 443</span></span><br><span class="line"> <span class="attr">instances:</span> <span class="number">1</span></span><br><span class="line"><span class="attr">zone:</span></span><br><span class="line"> <span class="comment"># do not change it, pool name</span></span><br><span class="line"> <span class="attr">name:</span> {{<span class="string">.Values.oss.multisite.zone</span>}}</span><br><span class="line"></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Service</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr">name:</span> {{<span class="string">.Values.oss.multisite.storeName</span>}}<span class="string">-svc</span></span><br><span class="line"><span class="attr">namespace:</span> <span 
class="string">rook-ceph</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"><span class="attr">selector:</span></span><br><span class="line"> <span class="attr">rgw:</span> <span class="string">ar-store-multisite</span></span><br><span class="line">{{<span class="bullet">-</span> <span class="string">if</span> <span class="string">.Values.oss.multisite.nodePort</span> }}</span><br><span class="line"><span class="attr">type:</span> <span class="string">NodePort</span></span><br><span class="line">{{<span class="bullet">-</span> <span class="string">end</span> }}</span><br><span class="line"><span class="attr">ports:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">protocol:</span> <span class="string">TCP</span></span><br><span class="line"> <span class="attr">port:</span> <span class="number">8080</span></span><br><span class="line"> <span class="attr">targetPort:</span> {{<span class="string">.Values.oss.multisite.port</span>}}</span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">if</span> <span class="string">.Values.oss.multisite.nodePort</span> }}</span><br><span class="line"> <span class="attr">nodePort:</span> {{ <span class="string">.Values.oss.multisite.nodePort</span> }}</span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">end</span> }}</span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="meta"></span></span><br><span class="line">{{<span class="bullet">-</span> <span class="string">if</span> <span class="string">.Values.oss.multisite.ingress.enabled</span> }} </span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">networking.k8s.io/v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Ingress</span></span><br><span class="line"><span 
class="attr">metadata:</span></span><br><span class="line"><span class="attr">name:</span> {{<span class="string">.Values.oss.multisite.storeName</span>}}</span><br><span class="line"><span class="attr">namespace:</span> <span class="string">rook-ceph</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"><span class="attr">ingressClassName:</span> {{<span class="string">.Values.oss.multisite.ingress.ingressClass</span>}}</span><br><span class="line"><span class="attr">rules:</span></span><br><span class="line"><span class="bullet">-</span> <span class="attr">host:</span> {{<span class="string">.Values.oss.multisite.ingress.hostName</span>}}</span><br><span class="line"> <span class="attr">http:</span></span><br><span class="line"> <span class="attr">paths:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">pathType:</span> <span class="string">Prefix</span></span><br><span class="line"> <span class="attr">path:</span> <span class="string">"/"</span></span><br><span class="line"> <span class="attr">backend:</span></span><br><span class="line"> <span class="attr">service:</span> </span><br><span class="line"> <span class="attr">name:</span> {{<span class="string">.Values.oss.multisite.storeName</span>}}<span class="string">-svc</span></span><br><span class="line"> <span class="attr">port:</span></span><br><span class="line"> <span class="attr">number:</span> {{<span class="string">.Values.oss.multisite.port</span>}}</span><br><span class="line"></span><br><span class="line">{{<span class="bullet">-</span> <span class="string">end</span> }}</span><br></pre></td></tr></table></figure></li>
<li>manual configurations of the realm to update the crush rule to support creation of different types of buckets<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># use ceph-tools execute ceph commands</span></span><br><span class="line">kubectl <span class="built_in">exec</span> -it -n rook-ceph rook-ceph-tools-769bdf4bdd-hdx6r bash</span><br><span class="line"><span class="comment"># create a placement target in zonegroup</span></span><br><span class="line">$ radosgw-admin zonegroup placement list</span><br><span class="line">$ radosgw-admin zonegroup placement add --rgw-zonegroup zonegroup-a --placement-id test-placement --tags test-placement</span><br><span class="line"></span><br><span class="line"><span class="comment"># set placement pools of test-placement (index and data)</span></span><br><span class="line">$ radosgw-admin zone placement add --rgw-zone zone-a --placement-id test-placement --index-pool zone-a.rgw.buckets.test.index --data-pool zone-a.rgw.buckets.test.data --data-extra-pool zone-a.rgw.buckets.test.non-ec</span><br><span class="line"></span><br><span class="line"><span class="comment"># create crush rule</span></span><br><span class="line">$ ceph osd crush rule create-replicated replicated_ssd default host ssd</span><br><span class="line"></span><br><span class="line"><span class="comment"># create placement pools for ssd-placement</span></span><br><span class="line">$ ceph osd pool create zone-a.rgw.buckets.test.index 8 8</span><br><span class="line">$ ceph osd pool create zone-a.rgw.buckets.test.data 8 8</span><br><span class="line">$ ceph osd pool create zone-a.rgw.buckets.test.non-ec 8 8</span><br><span class="line"></span><br><span class="line"><span class="comment"># apply crush rule to newly create pools for ssd-placement</span></span><br><span class="line">$ ceph osd pool <span class="built_in">set</span> zone-a.rgw.buckets.test.index crush_rule replicated_ssd</span><br><span 
class="line">$ ceph osd pool <span class="built_in">set</span> zone-a.rgw.buckets.test.data crush_rule replicated_ssd</span><br><span class="line">$ ceph osd pool <span class="built_in">set</span> zone-a.rgw.buckets.test.non-ec crush_rule replicated_ssd</span><br><span class="line"></span><br><span class="line"><span class="comment"># add a placement tag to default-placement (by default it has no tag; it is used for creating buckets)</span></span><br><span class="line">$ radosgw-admin zonegroup placement modify --rgw-zonegroup zonegroup-a --placement-id default-placement --tags default-placement</span><br><span class="line"></span><br><span class="line"><span class="comment"># enable the user to create buckets on test-placement</span></span><br><span class="line">$ radosgw-admin user create --uid yuanjing --display-name <span class="string">"Yuanjing"</span> --rgw-realm=realm-a --rgw-zonegroup=zonegroup-a</span><br><span class="line">$ radosgw-admin metadata get user:yuanjing > yuanjing.json</span><br><span class="line"><span class="comment"># add "placement_tags": ["default-placement", "test-placement"]</span></span><br><span class="line">$ radosgw-admin metadata put user:yuanjing < yuanjing.json</span><br><span class="line"></span><br><span class="line"><span class="comment"># commit the update</span></span><br><span class="line">radosgw-admin period update --commit --rgw-realm=realm-a --rgw-zonegroup=zonegroup-a</span><br></pre></td></tr></table></figure></li>
<li>operate on a bucket<ul>
<li>configure s5cmd <figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># configure for s5cmd</span></span><br><span class="line">cat > ~/.aws/credentials << <span class="string">EOF</span></span><br><span class="line"><span class="string">[default]</span></span><br><span class="line"><span class="string">aws_access_key_id = ESYOVEPAMYMR4WM86FFR</span></span><br><span class="line"><span class="string">aws_secret_access_key = xdujJER3kVt1DM5irnQvexXcF35NRjJg40hBdM4Z</span></span><br><span class="line"><span class="string">EOF</span></span><br></pre></td></tr></table></figure></li>
<li>create a bucket<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># s5cmd expects an s3:// URL for bucket operations</span></span><br><span class="line">s5cmd mb s3://default-placement-bucket</span><br></pre></td></tr></table></figure></li>
<li>delete a bucket <figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># remove bucket</span></span><br><span class="line">radosgw-admin bucket rm --bucket=default-placement-bucket --purge-objects</span><br></pre></td></tr></table></figure></li>
<li>create bucket with placement <figure class="highlight py"><table><tr><td class="code"><pre><span class="line"><span class="keyword">import</span> boto3</span><br><span class="line"></span><br><span class="line">bucket = <span class="string">"test-bucket"</span></span><br><span class="line"><span class="comment"># location = "zonegroup:placement-tag"</span></span><br><span class="line">location = <span class="string">"zonegroup-a:test-placement"</span></span><br><span class="line"></span><br><span class="line">s3 = boto3.client(</span><br><span class="line"> <span class="string">'s3'</span>,</span><br><span class="line"> endpoint_url=<span class="string">"http://10.0.2.4:8080"</span>,</span><br><span class="line"> aws_access_key_id=<span class="string">"VKOCRRX0PBCGG1SNXXVB"</span>,</span><br><span class="line"> aws_secret_access_key=<span class="string">"cw4Z2zxYY56d6iqax8aUifJa0Wf654Nwsmiz6HU0"</span>,</span><br><span class="line">)</span><br><span class="line"></span><br><span class="line">s3.create_bucket(</span><br><span class="line"> Bucket=bucket,</span><br><span class="line"> CreateBucketConfiguration={<span class="string">'LocationConstraint'</span>: location},</span><br><span class="line">)</span><br></pre></td></tr></table></figure></li>
</ul>
</li>
</ul>
</li>
</ul>
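All of the <code>{{ .Values.… }}</code> references in the templates above resolve against the chart's values file. A sketch of what that file might look like — <code>realm-a</code>, <code>zonegroup-a</code>, <code>zone-a</code>, and <code>ar-store-multisite</code> come from the commands and selectors in this post; the remaining values are assumed examples:

```yaml
oss:
  hdd:
    storeName: hdd-store            # assumed example name
    replicated: 3
    port: 8080
    storageClassName: oss-hdd-bucket
  multisite:
    realm: realm-a
    zonegroup: zonegroup-a
    zone: zone-a
    replicated: 3
    storeName: ar-store-multisite   # matches the rgw selector in the Service
    port: 8080
    nodePort: 31080                 # optional; omit to keep a plain ClusterIP Service
    ingress:
      enabled: true
      ingressClass: nginx
      hostName: oss.example.com
```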
<h5 id="Deploy-MINIO-GUI-for-Ceph-Obkect-Storage"><a href="#Deploy-MINIO-GUI-for-Ceph-Obkect-Storage" class="headerlink" title="Deploy MinIO GUI for Ceph Object Storage"></a>Deploy MinIO GUI for Ceph Object Storage</h5><ul>
<li>Deployment</li>
<li>PersistentVolumeClaim (etcd storage for GUI users)</li>
<li>Service</li>
<li>Ingress<br>Fill the templates according to your setup<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line">{{<span class="bullet">-</span> <span class="string">if</span> <span class="string">and</span> <span class="string">.Values.oss.multisite.dashboard.enabled</span> <span class="string">.Values.oss.multisite.ingress.enabled</span>}} </span><br><span class="line"><span class="comment"># https://medium.com/techlogs/mino-s3-gateway-gui-for-rook-ceph-s3-93a8b4c7040b </span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">apps/v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Deployment</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> <span class="string">ceph-oss-minio-gw</span></span><br><span class="line"> <span class="attr">namespace:</span> <span class="string">rook-ceph</span></span><br><span class="line"> <span class="attr">labels:</span></span><br><span class="line"> <span class="attr">app:</span> <span class="string">ceph-oss-minio-gw</span></span><br><span class="line"> <span class="attr">purpose:</span> <span class="string">ceph-oss-mino-UI</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"> <span class="attr">replicas:</span> <span class="number">1</span></span><br><span class="line"> <span class="attr">selector:</span></span><br><span class="line"> <span class="attr">matchLabels:</span></span><br><span class="line"> <span class="attr">app:</span> <span class="string">ceph-oss-minio-gw</span></span><br><span class="line"> <span class="attr">purpose:</span> <span class="string">ceph-oss-mino-UI</span></span><br><span class="line"> <span class="attr">template:</span></span><br><span class="line"> <span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">labels:</span></span><br><span 
class="line"> <span class="attr">app:</span> <span class="string">ceph-oss-minio-gw</span></span><br><span class="line"> <span class="attr">purpose:</span> <span class="string">ceph-oss-mino-UI</span></span><br><span class="line"> <span class="attr">spec:</span> </span><br><span class="line"> <span class="attr">containers:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">ceph-oss-minio-gw</span></span><br><span class="line"> <span class="attr">image:</span> {{<span class="string">.Values.oss.multisite.dashboard.image</span>}}</span><br><span class="line"> <span class="attr">command:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">sh</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">-c</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">minio</span> <span class="string">gateway</span> <span class="string">s3</span> <span class="string">http://rook-ceph-rgw-{{.Values.oss.multisite.storeName}}:8080</span> <span class="string">--console-address</span> <span class="string">":9001"</span></span><br><span class="line"> <span class="comment">#command: ["/bin/sh"] </span></span><br><span class="line"> <span class="comment">#args: ["minio gateway s3 http://s3.mystore.10.x.y.112.nip.io:80"]</span></span><br><span class="line"> <span class="attr">ports:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">containerPort:</span> <span class="number">9001</span></span><br><span class="line"> <span class="attr">env:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">MINIO_ACCESS_KEY</span></span><br><span class="line"> <span class="attr">value:</span> {{<span class="string">.Values.oss.multisite.dashboard.admin</span>}}</span><br><span class="line"> <span class="bullet">-</span> <span 
class="attr">name:</span> <span class="string">MINIO_SECRET_KEY</span></span><br><span class="line"> <span class="attr">value:</span> {{<span class="string">.Values.oss.multisite.dashboard.adminPassword</span>}}</span><br><span class="line"> <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">MINIO_ETCD_ENDPOINTS</span></span><br><span class="line"> <span class="attr">value:</span> <span class="string">http://localhost:2379</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">MINIO_ETCD_PATH_PREFIX</span></span><br><span class="line"> <span class="attr">value:</span> <span class="string">minio/</span></span><br><span class="line"> <span class="comment"># https://programming.vip/docs/a-little-problem-in-building-minio-gateway.html</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">etcd</span></span><br><span class="line"> <span class="attr">image:</span> <span class="string">docker.io/bitnami/etcd:3.5</span></span><br><span class="line"> <span class="attr">ports:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">containerPort:</span> <span class="number">2379</span></span><br><span class="line"> <span class="attr">env:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">ALLOW_NONE_AUTHENTICATION</span></span><br><span class="line"> <span class="attr">value:</span> <span class="string">"yes"</span></span><br><span class="line"> <span class="attr">volumeMounts:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">mountPath:</span> <span class="string">/bitnami/etcd</span></span><br><span class="line"> <span class="attr">name:</span> <span class="string">etcd-data</span></span><br><span class="line"> <span class="attr">volumes:</span></span><br><span class="line"> <span 
class="bullet">-</span> <span class="attr">name:</span> <span class="string">etcd-data</span></span><br><span class="line"> <span class="attr">persistentVolumeClaim:</span></span><br><span class="line"> <span class="attr">claimName:</span> <span class="string">ceph-oss-minio-gw-etcd</span></span><br><span class="line"><span class="meta">--- </span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">PersistentVolumeClaim</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> <span class="string">ceph-oss-minio-gw-etcd</span></span><br><span class="line"> <span class="attr">namespace:</span> <span class="string">rook-ceph</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"> <span class="attr">accessModes:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">ReadWriteMany</span></span><br><span class="line"> <span class="attr">resources:</span></span><br><span class="line"> <span class="attr">requests:</span></span><br><span class="line"> <span class="attr">storage:</span> {{ <span class="string">.Values.oss.multisite.dashboard.persistence.size</span> }}</span><br><span class="line"> <span class="attr">storageClassName:</span> {{ <span class="string">.Values.oss.multisite.dashboard.persistence.storageClass</span> }}</span><br><span class="line"></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Service</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> <span class="string">ceph-oss-minio-gw-svc</span></span><br><span 
class="line"> <span class="attr">namespace:</span> <span class="string">rook-ceph</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"> <span class="attr">selector:</span></span><br><span class="line"> <span class="attr">app:</span> <span class="string">ceph-oss-minio-gw</span></span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">if</span> <span class="string">.Values.oss.multisite.dashboard.nodePort</span> }}</span><br><span class="line"> <span class="attr">type:</span> <span class="string">NodePort</span></span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">end</span> }}</span><br><span class="line"> <span class="attr">ports:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">protocol:</span> <span class="string">TCP</span></span><br><span class="line"> <span class="attr">port:</span> <span class="number">80</span></span><br><span class="line"> <span class="attr">targetPort:</span> <span class="number">9001</span></span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">if</span> <span class="string">.Values.oss.multisite.dashboard.nodePort</span> }}</span><br><span class="line"> <span class="attr">nodePort:</span> {{ <span class="string">.Values.oss.multisite.dashboard.nodePort</span> }}</span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">end</span> }}</span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">networking.k8s.io/v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Ingress</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> <span class="string">ceph-oss-minio-gw-ingress</span></span><br><span class="line"> <span class="attr">namespace:</span> <span 
class="string">rook-ceph</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"> <span class="attr">ingressClassName:</span> {{<span class="string">.Values.oss.multisite.dashboard.ingress.ingressClass</span>}}</span><br><span class="line"> <span class="attr">rules:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">host:</span> {{<span class="string">.Values.oss.multisite.dashboard.ingress.hostName</span>}}</span><br><span class="line"> <span class="attr">http:</span></span><br><span class="line"> <span class="attr">paths:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">pathType:</span> <span class="string">Prefix</span></span><br><span class="line"> <span class="attr">path:</span> <span class="string">"/"</span></span><br><span class="line"> <span class="attr">backend:</span></span><br><span class="line"> <span class="attr">service:</span> </span><br><span class="line"> <span class="attr">name:</span> <span class="string">ceph-oss-minio-gw-svc</span></span><br><span class="line"> <span class="attr">port:</span></span><br><span class="line"> <span class="attr">number:</span> <span class="number">80</span></span><br><span class="line"></span><br><span class="line">{{<span class="bullet">-</span> <span class="string">end</span> }}</span><br></pre></td></tr></table></figure></li>
</ul>
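The dashboard manifests above read from an <code>oss.multisite.dashboard</code> block in the values file. A hedged sketch of those values — every name here is an example; note that upstream MinIO removed <code>minio gateway</code> mode in 2022, so the image must be pinned to an older tag that still ships it:

```yaml
oss:
  multisite:
    dashboard:
      enabled: true
      image: minio/minio        # pin to a 2022-or-earlier tag that still ships `minio gateway`
      admin: minio-admin        # becomes MINIO_ACCESS_KEY
      adminPassword: change-me  # becomes MINIO_SECRET_KEY
      nodePort: 31090           # optional; omit for ClusterIP only
      persistence:
        size: 1Gi
        storageClass: fs-hdd    # any RWX-capable class, e.g. the cephfs one set as default above
      ingress:
        ingressClass: nginx
        hostName: minio.example.com
```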
<h2 id="Cleanup-rook-ceph"><a href="#Cleanup-rook-ceph" class="headerlink" title="Cleanup rook-ceph"></a>Cleanup rook-ceph</h2><ul>
<li><p>first clean up all the resources in use, and then bring down the ceph cluster</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># cleanup resources</span></span><br><span class="line">kubectl delete obc -A</span><br><span class="line">kubectl delete pvc -A</span><br><span class="line"><span class="comment"># down the ceph cluster</span></span><br><span class="line">helmfile -f helmfile.yaml -e k8s01 -l bundle=rook-ceph destroy</span><br></pre></td></tr></table></figure></li>
<li><p>cleanup on the host</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">rm -rf /var/lib/rook/*</span><br></pre></td></tr></table></figure></li>
<li><p>cleanup osd device and reboot</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="meta">#!/bin/bash</span></span><br><span class="line"><span class="comment"># unformat the disk device</span></span><br><span class="line">DISK=<span class="string">"/dev/sdc"</span></span><br><span class="line"><span class="keyword">if</span> [ ! -z <span class="variable">$1</span> ];<span class="keyword">then</span></span><br><span class="line">DISK=<span class="variable">$1</span></span><br><span class="line"><span class="keyword">fi</span></span><br><span class="line"><span class="comment"># Zap the disk to a fresh, usable state (zap-all is important, b/c MBR has to be clean)</span></span><br><span class="line"><span class="comment"># You will have to run this step for all disks.</span></span><br><span class="line">sgdisk --zap-all <span class="variable">$DISK</span></span><br><span class="line">dd <span class="keyword">if</span>=/dev/zero of=<span class="string">"<span class="variable">$DISK</span>"</span> bs=1M count=100 oflag=direct,dsync</span><br></pre></td></tr></table></figure></li>
<li><p>These steps only have to be run once on each node. If Rook set up the OSDs using ceph-volume, teardown can leave some device mappings behind that keep the disks locked.</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">ls /dev/mapper/ceph-xxx | xargs -I% -- dmsetup remove %</span><br><span class="line"><span class="comment"># ceph-volume setup can leave ceph-<UUID> directories in /dev (unnecessary clutter)</span></span><br><span class="line">rm -rf /dev/ceph-xxx</span><br><span class="line"><span class="comment"># check</span></span><br><span class="line">lsblk -f</span><br></pre></td></tr></table></figure></li>
</ul>
<h2 id="Trouble-shooting-ceph-installation"><a href="#Trouble-shooting-ceph-installation" class="headerlink" title="Troubleshooting ceph installation"></a>Troubleshooting ceph installation</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># check why osd is not up</span></span><br><span class="line">kubectl -n rook-ceph logs rook-ceph-osd-prepare-node01--1-wnq6d provision</span><br></pre></td></tr></table></figure>
<h2 id="Usage-1"><a href="#Usage-1" class="headerlink" title="Usage"></a>Usage</h2><h3 id="ceph-osd-pool-management"><a href="#ceph-osd-pool-management" class="headerlink" title="ceph osd: pool management"></a>ceph osd: pool management</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># delete pool</span></span><br><span class="line"><span class="comment">#!/bin/bash</span></span><br><span class="line">pool_list=<span class="string">".rgw.root zone-muc.rgw.control ar-store.rgw.control zone-muc.rgw.meta ar-store.rgw.meta zone-muc.rgw.log ar-store.rgw.log ar-store.rgw.buckets.index zone-muc.rgw.buckets.index zone-muc.rgw.buckets.non-ec ar-store.rgw.buckets.non-ec zone-muc.rgw.buckets.data ar-store.rgw.buckets.data default.rgw.log default.rgw.control default.rgw.meta"</span></span><br><span class="line"><span class="keyword">for</span> pool <span class="keyword">in</span> <span class="variable">$pool_list</span></span><br><span class="line"><span class="keyword">do</span></span><br><span class="line"> ceph osd pool rm <span class="variable">$pool</span> <span class="variable">$pool</span> --yes-i-really-really-mean-it</span><br><span class="line"><span class="keyword">done</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># check storage of each pool</span></span><br><span class="line">ceph df</span><br><span class="line"><span class="comment"># details of a pool</span></span><br><span class="line">ceph osd pool ls detail</span><br></pre></td></tr></table></figure>
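Per-pool usage from <code>ceph df</code> can also be consumed programmatically. A minimal sketch — the JSON layout of <code>ceph df --format json</code> is assumed from the fields used below, and a canned sample stands in for a live cluster:

```python
import json

# Sample in the shape of `ceph df --format json` (assumed layout, trimmed to
# the fields used below); on a live cluster: ceph df --format json
sample = '''
{
  "pools": [
    {"name": ".rgw.root", "stats": {"stored": 4096, "objects": 17, "max_avail": 107374182400}},
    {"name": "zone-a.rgw.buckets.data", "stats": {"stored": 52428800, "objects": 120, "max_avail": 107374182400}}
  ]
}
'''

def pool_usage(ceph_df_json: str):
    """Return {pool_name: (stored_bytes, object_count)} from `ceph df` JSON."""
    report = json.loads(ceph_df_json)
    return {p["name"]: (p["stats"]["stored"], p["stats"]["objects"])
            for p in report["pools"]}

if __name__ == "__main__":
    for name, (stored, objects) in pool_usage(sample).items():
        print(f"{name}: {stored / 1024 / 1024:.1f} MiB in {objects} objects")
```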
<h3 id="ceph-filesystem"><a href="#ceph-filesystem" class="headerlink" title="ceph filesystem"></a>ceph filesystem</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># check status</span></span><br><span class="line">ceph fs status</span><br><span class="line"><span class="comment"># check available filesystem</span></span><br><span class="line">ceph fs ls</span><br><span class="line"><span class="comment"># check pg and osd</span></span><br><span class="line">ceph pg dump pgs|awk <span class="string">'{print $1,$2,$16}'</span></span><br><span class="line"><span class="comment">#first column, pg id (the first part is the pool id)</span></span><br><span class="line"><span class="comment">#second column: number of objects</span></span><br><span class="line"><span class="comment">#third column: osds</span></span><br><span class="line"></span><br><span class="line"><span class="comment">#when we increase pg, the objects in current pgs will split into new pg (still use the same osds) -> the objects are not rebalanced</span></span><br><span class="line"><span class="comment">#when we increase pgp, the objects are rebalanced on osds</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># check crush rule</span></span><br><span class="line">ceph osd crush rule dump</span><br><span class="line"></span><br><span class="line"><span class="comment"># in k8s there should be csi created by provisioner</span></span><br><span class="line">ceph fs subvolumegroup ls myfs</span><br><span class="line">ceph fs subvolumegroup create myfs csi</span><br><span class="line"></span><br><span class="line"><span class="comment"># set a storageclass to a default</span></span><br><span class="line">kubectl patch storageclass fs-hdd -p <span class="string">'{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'</span></span><br><span class="line"></span><br><span 
class="line"><span class="comment"># remove fs, which are created with commandline tool</span></span><br><span class="line"><span class="comment">## down the fs</span></span><br><span class="line">ceph fs fail cephfs-demo</span><br><span class="line"><span class="comment">## remove the fs</span></span><br><span class="line">ceph fs rm cephfs-demo --yes-i-really-mean-it</span><br><span class="line"><span class="comment">## remove the pools</span></span><br><span class="line">ceph osd lspools</span><br><span class="line">ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it</span><br><span class="line"><span class="comment"># the above will raise error</span></span><br><span class="line"><span class="comment"># Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool</span></span><br><span class="line">ceph tell mon.\* injectargs <span class="string">'--mon-allow-pool-delete=true'</span></span><br><span class="line"></span><br><span class="line">ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it</span><br><span class="line">ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it</span><br></pre></td></tr></table></figure>
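<p>The pg/pgp split-vs-rebalance behaviour noted above can be exercised with <code>ceph osd pool set</code>; a hedged sketch (the pool name and target count are made-up values, and the <code>run</code> wrapper only echoes the commands so the snippet is safe to dry-run):</p>

```shell
# Sketch: grow placement groups in two steps. "myfs-data0" and 64 are assumptions.
pool="myfs-data0"
new_pg=64
run() { echo "+ $*"; }   # dry-run wrapper; replace the body with "$@" to really execute
run ceph osd pool set "$pool" pg_num "$new_pg"    # step 1: objects split into new PGs (same OSDs)
run ceph osd pool set "$pool" pgp_num "$new_pg"   # step 2: PGs are rebalanced across OSDs
```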
<h3 id="ceph-osd-remove-a-osd-device"><a href="#ceph-osd-remove-a-osd-device" class="headerlink" title="ceph osd: remove an osd device"></a>ceph osd: remove an osd device</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># https://www.icode9.com/content-4-1244528.html</span></span><br><span class="line"><span class="comment"># stop operator</span></span><br><span class="line">kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0 </span><br><span class="line"><span class="comment"># edit cluster to remove the osd</span></span><br><span class="line">kubectl edit cephclusters.ceph.rook.io -n rook-ceph rook-ceph </span><br><span class="line"></span><br><span class="line"><span class="comment"># use ceph-tools to down the osd</span></span><br><span class="line">kubectl <span class="built_in">exec</span> -it -n rook-ceph rook-ceph-tools-769bdf4bdd-hdx6r bash</span><br><span class="line">ceph osd <span class="built_in">set</span> noup</span><br><span class="line">ceph osd down 0</span><br><span class="line">ceph osd out 0</span><br><span class="line"><span class="comment"># check rebalancing status</span></span><br><span class="line">ceph -w</span><br><span class="line"></span><br><span class="line"><span class="comment"># when rebalancing has finished, delete the osd</span></span><br><span class="line">ceph osd purge 0 --yes-i-really-mean-it</span><br><span class="line">ceph auth del osd.0</span><br><span class="line"></span><br><span class="line"><span class="comment"># remove the host from crushmap when there is no osd on that host</span></span><br><span class="line">ceph osd crush remove host-name</span><br><span class="line"></span><br><span class="line"><span class="comment"># check status</span></span><br><span class="line">ceph -s</span><br><span class="line">ceph osd tree</span><br><span class="line"></span><br><span class="line"><span class="comment"># clear the noup flag</span></span><br><span class="line">ceph osd <span class="built_in">unset</span> noup</span><br><span class="line"></span><br><span class="line"><span class="comment"># delete osd job </span></span><br><span class="line">kubectl delete deploy -n rook-ceph rook-ceph-osd-0</span><br><span class="line"></span><br><span class="line"><span class="comment"># restart operator</span></span><br><span class="line">kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=1</span><br></pre></td></tr></table></figure>
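<p>Between <code>ceph osd out</code> and <code>ceph osd purge</code> it is worth waiting until rebalancing has completed; a small helper sketch, assuming <code>ceph health</code> eventually reports <code>HEALTH_OK</code> (the 10-second interval is arbitrary):</p>

```shell
# Poll the given command (e.g. `ceph health`) until it reports HEALTH_OK.
wait_health_ok() {
  local status
  while true; do
    status=$("$@")
    [ "$status" = "HEALTH_OK" ] && break
    echo "cluster is ${status:-unknown}, waiting..."
    sleep 10
  done
}
# usage: wait_health_ok ceph health && ceph osd purge 0 --yes-i-really-mean-it
```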
<h3 id="ceph-osd-add-a-new-osd"><a href="#ceph-osd-add-a-new-osd" class="headerlink" title="ceph osd: add a new osd"></a>ceph osd: add a new osd</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># stop operator</span></span><br><span class="line">kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0 </span><br><span class="line"><span class="comment"># edit cluster to add the osd</span></span><br><span class="line">kubectl edit cephclusters.ceph.rook.io -n rook-ceph rook-ceph</span><br><span class="line"><span class="comment"># restart operator</span></span><br><span class="line">kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=1</span><br><span class="line"></span><br><span class="line"><span class="comment"># or</span></span><br><span class="line"><span class="comment"># edit the osd entry in the helmfile, and apply</span></span><br></pre></td></tr></table></figure>
<h3 id="ceph-object-storage-data-pools"><a href="#ceph-object-storage-data-pools" class="headerlink" title="ceph object storage: data pools"></a>ceph object storage: data pools</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># create a CephObjectStore</span></span><br><span class="line">kubectl apply -f os.yaml</span><br><span class="line"><span class="comment"># after creating CephObjectStore, the following pool will be created</span></span><br><span class="line"><span class="comment"># realm: my-store, zonegroup: my-store, zone: my-store</span></span><br><span class="line">12 my-store.rgw.control</span><br><span class="line">13 my-store.rgw.meta</span><br><span class="line">14 my-store.rgw.log</span><br><span class="line">15 my-store.rgw.buckets.index</span><br><span class="line">16 my-store.rgw.buckets.non-ec</span><br><span class="line">17 .rgw.root</span><br><span class="line">18 my-store.rgw.buckets.data</span><br><span class="line"></span><br><span class="line"><span class="comment"># based on above buckets</span></span><br><span class="line"><span class="comment"># sc for auto generating bucket (this is already included in rook-ceph-resources)</span></span><br><span class="line">kubectl apply -f os_sc.yaml</span><br><span class="line"></span><br><span class="line"><span class="comment"># claim a bucket</span></span><br><span class="line">kubectl apply -f os_bucket.yaml</span><br></pre></td></tr></table></figure>
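<p>The contents of <code>os_bucket.yaml</code> are not shown here; a minimal ObjectBucketClaim sketch with made-up names (the real file may differ, and the storage class name is an assumption):</p>

```shell
# Write a hypothetical ObjectBucketClaim manifest; apply with kubectl when ready.
cat <<'EOF' > /tmp/os_bucket_example.yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: my-bucket-claim             # made-up claim name
spec:
  generateBucketName: my-bucket     # prefix for the generated bucket name
  storageClassName: rook-ceph-bucket  # assumed SC backed by the CephObjectStore
EOF
# kubectl apply -f /tmp/os_bucket_example.yaml
```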
<h3 id="ceph-object-storage-multisite-management"><a href="#ceph-object-storage-multisite-management" class="headerlink" title="ceph object storage: multisite management"></a>ceph object storage: multisite management</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># get and set zonegroup info</span></span><br><span class="line">radosgw-admin zonegroup get --rgw-zonegroup=zonegroup-a > zonegroup_a.json</span><br><span class="line">radosgw-admin zonegroup <span class="built_in">set</span> --infile ./zonegroup_a.json</span><br><span class="line"><span class="comment"># enable update</span></span><br><span class="line">radosgw-admin period update --commit --rgw-realm=realm-a --rgw-zonegroup=zonegroup-a</span><br><span class="line"></span><br><span class="line"><span class="comment"># remove realm, zonegroup, zone</span></span><br><span class="line"><span class="comment"># radosgw-admin zonegroup remove --rgw-zonegroup=default --rgw-zone=default</span></span><br><span class="line"><span class="comment"># radosgw-admin period update --commit</span></span><br><span class="line"><span class="comment"># radosgw-admin zone rm --rgw-zone=default</span></span><br><span class="line"><span class="comment"># radosgw-admin period update --commit</span></span><br><span class="line"><span class="comment"># radosgw-admin zonegroup delete --rgw-zonegroup=default</span></span><br><span class="line"><span class="comment"># radosgw-admin period update --commit</span></span><br></pre></td></tr></table></figure>
<h3 id="ceph-object-storage-user-management"><a href="#ceph-object-storage-user-management" class="headerlink" title="ceph object storage: user management"></a>ceph object storage: user management</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># create a user (in a zonegroup)</span></span><br><span class="line">$ radosgw-admin user create --uid yuanjing --display-name <span class="string">"Yuanjing"</span> --rgw-realm=realm-a --rgw-zonegroup=zonegroup-a</span><br><span class="line"><span class="comment"># edit user</span></span><br><span class="line">$ radosgw-admin metadata get user:yuanjing --rgw-realm=realm-a --rgw-zonegroup=zonegroup-a > yuanjing.json</span><br><span class="line"><span class="comment"># add something in yuanjing.json and apply the change</span></span><br><span class="line">$ radosgw-admin metadata put user:yuanjing --rgw-realm=realm-a --rgw-zonegroup=zonegroup-a < yuanjing.json</span><br></pre></td></tr></table></figure>
<h3 id="ceph-object-storage-bucket-policy"><a href="#ceph-object-storage-bucket-policy" class="headerlink" title="ceph object storage: bucket policy"></a>ceph object storage: bucket policy</h3><p><a href="https://docs.ceph.com/en/latest/radosgw/bucketpolicy/">https://docs.ceph.com/en/latest/radosgw/bucketpolicy/</a><br><a href="https://www.jianshu.com/p/a1aab0d3eeef">https://www.jianshu.com/p/a1aab0d3eeef</a><br><a href="https://zhoubofsy.github.io/2019/10/17/storage/ceph/rgw-bucket-policy/">https://zhoubofsy.github.io/2019/10/17/storage/ceph/rgw-bucket-policy/</a><br><a href="https://www.modb.pro/db/134277">https://www.modb.pro/db/134277</a><br>example policies</p>
<figure class="highlight json"><table><tr><td class="code"><pre><span class="line">{</span><br><span class="line"> <span class="attr">"Version"</span>:</span><br><span class="line"> <span class="string">"2012-10-17"</span>,</span><br><span class="line"> <span class="attr">"Statement"</span>: [{</span><br><span class="line"> <span class="attr">"Sid"</span>: <span class="string">"AddPerm"</span>,</span><br><span class="line"> <span class="attr">"Effect"</span>: <span class="string">"Allow"</span>,</span><br><span class="line"> <span class="attr">"Principal"</span>: {<span class="attr">"AWS"</span>: [<span class="string">"*"</span>]},</span><br><span class="line"> <span class="attr">"Action"</span>: [<span class="string">"s3:ListBucket"</span>,</span><br><span class="line"> <span class="string">"s3:PutObject"</span>,</span><br><span class="line"> <span class="string">"s3:DeleteObject"</span>,</span><br><span class="line"> <span class="string">"s3:GetObject"</span>],</span><br><span class="line"> <span class="attr">"Resource"</span>: [<span class="string">"arn:aws:s3:::my-new-bucket"</span>, <span class="string">"arn:aws:s3:::my-new-bucket/*"</span>]</span><br><span class="line"> }]</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">{</span><br><span class="line"> <span class="attr">"Version"</span>:</span><br><span class="line"> <span class="string">"2012-10-17"</span>,</span><br><span class="line"> <span class="attr">"Statement"</span>: [{</span><br><span class="line"> <span class="attr">"Sid"</span>: <span class="string">"AddPerm"</span>,</span><br><span class="line"> <span class="attr">"Effect"</span>: <span class="string">"Allow"</span>,</span><br><span class="line"> <span class="attr">"Principal"</span>: {<span class="attr">"AWS"</span>: [<span class="string">"arn:aws:iam:::user/yuanjing"</span>,<span class="string">"arn:aws:iam::tenanttwo:user/userthree"</span>]},</span><br><span class="line"> <span 
class="attr">"Action"</span>: [<span class="string">"s3:ListBucket"</span>,</span><br><span class="line"> <span class="string">"s3:PutObject"</span>,</span><br><span class="line"> <span class="string">"s3:DeleteObject"</span>,</span><br><span class="line"> <span class="string">"s3:GetObject"</span>],</span><br><span class="line"> <span class="attr">"Resource"</span>: [<span class="string">"arn:aws:s3:::my-new-bucket"</span>, <span class="string">"arn:aws:s3:::my-new-bucket/*"</span>]</span><br><span class="line"> }]</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">{</span><br><span class="line"> <span class="attr">"Version"</span>:</span><br><span class="line"> <span class="string">"2012-10-17"</span>,</span><br><span class="line"> <span class="attr">"Statement"</span>: [{</span><br><span class="line"> <span class="attr">"Sid"</span>: <span class="string">"AddPerm"</span>,</span><br><span class="line"> <span class="attr">"Effect"</span>: <span class="string">"Allow"</span>,</span><br><span class="line"> <span class="attr">"Principal"</span>: {<span class="attr">"AWS"</span>: [<span class="string">"arn:aws:iam:::user/yuanjing"</span>,<span class="string">"arn:aws:iam::tenanttwo:user/userthree"</span>]},</span><br><span class="line"> <span class="attr">"Action"</span>: [<span class="string">"s3:*"</span>],</span><br><span class="line"> <span class="attr">"Resource"</span>: [<span class="string">"arn:aws:s3:::my-new-bucket"</span>, <span class="string">"arn:aws:s3:::my-new-bucket/*"</span>]</span><br><span class="line"> }]</span><br><span class="line">}</span><br><span class="line"></span><br></pre></td></tr></table></figure>
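<p>To try one of the policies above, save it to a file first; for example the public-access variant:</p>

```shell
# Save the first example policy so it can be passed to `s3cmd setpolicy`.
cat <<'EOF' > policy.json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AddPerm",
    "Effect": "Allow",
    "Principal": {"AWS": ["*"]},
    "Action": ["s3:ListBucket", "s3:PutObject", "s3:DeleteObject", "s3:GetObject"],
    "Resource": ["arn:aws:s3:::my-new-bucket", "arn:aws:s3:::my-new-bucket/*"]
  }]
}
EOF
```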
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># To apply the policy on a bucket with s3cmd</span></span><br><span class="line">s3cmd setpolicy policy.json s3://my-new-bucket</span><br></pre></td></tr></table></figure>
<h3 id="ceph-dashboard"><a href="#ceph-dashboard" class="headerlink" title="ceph dashboard"></a>ceph dashboard</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">user: admin</span><br><span class="line">password: kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath=<span class="string">"{['data']['password']}"</span> | base64 --decode && <span class="built_in">echo</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># enable object gateway in dashboard</span></span><br><span class="line">https://github.com/rook/rook/issues/3026</span><br><span class="line">ceph dashboard set-rgw-api-access-key -i access_key</span><br><span class="line">ceph dashboard set-rgw-api-secret-key -i PATH_TO_FILE</span><br></pre></td></tr></table></figure>
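<p>The password one-liner above is just a jsonpath lookup plus a base64 decode; the decode step in isolation, with a sample value standing in for the real secret:</p>

```shell
# "cGFzc3dvcmQxMjM=" is base64 for "password123" (a stand-in, not a real credential).
encoded="cGFzc3dvcmQxMjM="
echo "$encoded" | base64 --decode && echo
```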
<h2 id="References"><a href="#References" class="headerlink" title="References"></a>References</h2><ol>
<li><p><a href="https://kuboard.cn/learning/k8s-intermediate/persistent/ceph/rook-config.html#%E5%AE%89%E8%A3%85-rook-ceph">https://kuboard.cn/learning/k8s-intermediate/persistent/ceph/rook-config.html#%E5%AE%89%E8%A3%85-rook-ceph</a></p>
</li>
<li><p><a href="https://www.joyk.com/dig/detail/1585287051590150">https://www.joyk.com/dig/detail/1585287051590150</a></p>
</li>
<li><p><a href="https://www.qikqiak.com/post/deploy-ceph-cluster-with-rook/">https://www.qikqiak.com/post/deploy-ceph-cluster-with-rook/</a></p>
</li>
<li><p><a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-a-ceph-cluster-within-kubernetes-using-rook">https://www.digitalocean.com/community/tutorials/how-to-set-up-a-ceph-cluster-within-kubernetes-using-rook</a></p>
</li>
<li><p><a href="https://xiebiao.top/post/storage/rook_install_operator/">https://xiebiao.top/post/storage/rook_install_operator/</a></p>
</li>
<li><p><a href="https://github.com/rook/rook/tree/master/cluster/examples">https://github.com/rook/rook/tree/master/cluster/examples</a></p>
</li>
</ol>
]]></content>
<categories>
<category>ceph</category>
</categories>
<tags>
<tag>cloud</tag>
<tag>ceph</tag>
<tag>rook-ceph</tag>
</tags>
</entry>
<entry>
<title>gitlab</title>
<url>/2022/02/06/gitlab/</url>
<content><![CDATA[<h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p>Here’s a collection of GitLab-related commands for personal use.</p>
<span id="more"></span>
<h2 id="Commands"><a href="#Commands" class="headerlink" title="Commands"></a>Commands</h2><ul>
<li><p>tag a branch</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># Checkout to the target branch</span></span><br><span class="line">$ git checkout master</span><br><span class="line"><span class="comment"># Tag the branch</span></span><br><span class="line">$ git tag <tag> / git tag -a <tag> -m <span class="string">"this is tag"</span></span><br><span class="line"><span class="comment"># Push the tag to remote</span></span><br><span class="line">$ git push origin --tags</span><br></pre></td></tr></table></figure></li>
<li><p>delete a local tag</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ git tag -d <tag></span><br></pre></td></tr></table></figure></li>
<li><p>delete a remote tag</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ git push --delete origin <tag></span><br></pre></td></tr></table></figure></li>
<li><p>merge changes from master to feature branch (merge)</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ git checkout FEATURE-BRANCH</span><br><span class="line">$ git merge --no-edit master</span><br><span class="line">$ git push origin FEATURE-BRANCH</span><br><span class="line"><span class="comment"># This results in non-linear history</span></span><br></pre></td></tr></table></figure></li>
<li><p>merge changes from master to feature branch (rebase)</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># Checkout the FEATURE-BRANCH</span></span><br><span class="line">$ git checkout FEATURE-BRANCH</span><br><span class="line"><span class="comment"># Sync your main branch (master) with the latest changes</span></span><br><span class="line">$ git fetch origin</span><br><span class="line">$ git rebase origin/master</span><br><span class="line"><span class="comment"># Fix merge conflicts and add the fixed files</span></span><br><span class="line">$ git add FILE</span><br><span class="line"><span class="comment"># Continue</span></span><br><span class="line">$ git rebase --<span class="built_in">continue</span></span><br><span class="line"><span class="comment"># or skip it if there is complain that no changes after resolving all conflicts</span></span><br><span class="line">$ git rebase --skip</span><br><span class="line"><span class="comment"># Repeat in the subsequent commits until complete (Without the --force flag, your remote branch will continue to believe it is out of sync and will claim a merge conflict)</span></span><br><span class="line">$ git push origin HEAD --force</span><br></pre></td></tr></table></figure></li>
<li><p>rename a branch name</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># Checkout the old branch</span></span><br><span class="line">$ git checkout OLD-NAME</span><br><span class="line"><span class="comment"># Give new name</span></span><br><span class="line">$ git branch -m NEW-NAME</span><br><span class="line"><span class="comment"># Push to the remote</span></span><br><span class="line">$ git push origin -u NEW-NAME</span><br><span class="line"><span class="comment"># Delete the old one</span></span><br><span class="line">$ git push origin --delete OLD-NAME</span><br><span class="line"></span><br></pre></td></tr></table></figure></li>
</ul>
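<p>The tagging commands above can be tried end-to-end in a throwaway repository; a self-contained sketch (the identity, commit message, and tag name are arbitrary):</p>

```shell
# Create a scratch repo, make one commit, attach an annotated tag, list tags.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m "chore: initial commit"
git -C "$repo" tag -a v1.0.0 -m "this is tag"
git -C "$repo" tag -l    # prints the new tag
```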
<h2 id="Configurations"><a href="#Configurations" class="headerlink" title="Configurations"></a>Configurations</h2><ul>
<li>push rules<br><code>^(build|ci|docs|feat|fix|perf|refactor|revert|style|test|chore)(\((\S+\s*)+\))?: \S+|Merge \S+|Revert "|Apply</code></li>
</ul>
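<p>The push rule above can be checked locally before pushing; a small sketch feeding candidate commit messages through <code>grep -E</code> with the same pattern:</p>

```shell
# Validate commit messages against the push-rule regex before pushing.
pattern='^(build|ci|docs|feat|fix|perf|refactor|revert|style|test|chore)(\((\S+\s*)+\))?: \S+|Merge \S+|Revert "|Apply'
check() { echo "$1" | grep -qE "$pattern" && echo ok || echo rejected; }
check "feat(api): add upload endpoint"   # → ok
check "fixed a bug"                      # → rejected (no type prefix)
```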
<h2 id="CI-CD"><a href="#CI-CD" class="headerlink" title="CI/CD"></a>CI/CD</h2><ul>
<li>variables: variables defined in a job can only be used within that job</li>
<li>stages: stage lists cannot be merged from multiple templates</li>
<li>stages with <code>needs</code>: if one job fails, the whole pipeline fails, but a job still runs as long as the jobs it needs in previous stages succeed</li>
<li>stages without <code>needs</code>: if one job fails, all jobs in the next stages are skipped</li>
<li>artifacts: artifacts from all previous stages are passed by default</li>
<li>needs: does not work together with <code>rules</code>, but does with <code>except</code></li>
<li>MR: any update to the MR triggers a push event; CI_COMMIT_BRANCH -> target branch</li>
<li>tag: pushing a tag is a push event; CI_COMMIT_BRANCH=null, CI_COMMIT_TAG!=null</li>
<li>the docs generation job must be named exactly <code>pages</code> (for GitLab Pages)</li>
</ul>
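<p>The <code>needs</code> behaviour described above, as a minimal pipeline sketch (job and stage names are made up); the snippet only writes an example file:</p>

```shell
# A hypothetical .gitlab-ci.yml illustrating `needs` across stages.
cat <<'EOF' > gitlab-ci-example.yml
stages: [build, test, deploy]
build-job:
  stage: build
  script: [echo building]
test-job:
  stage: test
  script: [echo testing]
deploy-job:
  stage: deploy
  needs: [test-job]          # starts as soon as test-job succeeds,
  script: [echo deploying]   # even if another test-stage job fails
EOF
```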
<h2 id="References"><a href="#References" class="headerlink" title="References"></a>References</h2><ol>
<li><a href="https://www.verdantfox.com/blog/view/how-to-git-rebase-mainmaster-onto-your-feature-branch-even-with-merge-conflicts">https://www.verdantfox.com/blog/view/how-to-git-rebase-mainmaster-onto-your-feature-branch-even-with-merge-conflicts</a></li>
</ol>
]]></content>
</entry>
<entry>
<title>helmfile</title>
<url>/2022/02/06/helmfile/</url>
<content><![CDATA[<h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p>Helmfile is a <a href="https://github.com/roboll/helmfile">tool</a> for managing helm charts. Instead of installing and upgrading each chart one by one, helmfile lets you converge them all with a single command, driven by a helmfile.yaml and a directory of chart value files.</p>
<p>This note will introduce</p>
<ul>
<li>basics of helmfile</li>
<li>helmfile installation and cli</li>
<li>advanced configuration: nested states</li>
<li>helmfile with environment management</li>
<li>helmfile with secrets management</li>
</ul>
<span id="more"></span>
<h2 id="Basics-of-helmfile"><a href="#Basics-of-helmfile" class="headerlink" title="Basics of helmfile"></a>Basics of helmfile</h2><p>A basic and simple helmfile looks like</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">repositories:</span></span><br><span class="line"><span class="comment"># To use official "stable" charts a.k.a https://github.com/helm/charts/tree/master/stable</span></span><br><span class="line"><span class="bullet">-</span> <span class="attr">name:</span> <span class="string">stable</span></span><br><span class="line"> <span class="attr">url:</span> <span class="string">https://charts.helm.sh/stable</span></span><br><span class="line"><span class="comment"># To use official "incubator" charts a.k.a https://github.com/helm/charts/tree/master/incubator</span></span><br><span class="line"><span class="bullet">-</span> <span class="attr">name:</span> <span class="string">incubator</span></span><br><span class="line"> <span class="attr">url:</span> <span class="string">https://charts.helm.sh/incubator</span></span><br><span class="line"><span class="comment"># helm-git powered repository: You can treat any Git repository as a charts repository</span></span><br><span class="line"><span class="bullet">-</span> <span class="attr">name:</span> <span class="string">polaris</span></span><br><span class="line"> <span class="attr">url:</span> <span class="string">git+https://github.com/reactiveops/polaris@deploy/helm?ref=master</span></span><br><span class="line"><span class="comment"># Advanced configuration: You can setup basic or tls auth and optionally enable helm OCI integration</span></span><br><span class="line"><span class="bullet">-</span> <span class="attr">name:</span> <span class="string">roboll</span></span><br><span class="line"> <span class="attr">url:</span> <span class="string">http://roboll.io/charts</span></span><br><span class="line"> <span class="attr">certFile:</span> <span class="string">optional_client_cert</span></span><br><span class="line"> <span class="attr">keyFile:</span> <span class="string">optional_client_key</span></span><br><span 
class="line"> <span class="attr">username:</span> <span class="string">optional_username</span></span><br><span class="line"> <span class="attr">password:</span> <span class="string">optional_password</span></span><br><span class="line"> <span class="attr">oci:</span> <span class="literal">true</span></span><br><span class="line"> <span class="attr">passCredentials:</span> <span class="literal">true</span></span><br><span class="line"><span class="comment"># Advanced configuration: You can use a ca bundle to use an https repo</span></span><br><span class="line"><span class="comment"># with a self-signed certificate</span></span><br><span class="line"><span class="bullet">-</span> <span class="attr">name:</span> <span class="string">insecure</span></span><br><span class="line"> <span class="attr">url:</span> <span class="string">https://charts.my-insecure-domain.com</span></span><br><span class="line"> <span class="attr">caFile:</span> <span class="string">optional_ca_crt</span></span><br><span class="line"><span class="comment"># Advanced configuration: You can skip the verification of TLS for an https repo</span></span><br><span class="line"><span class="bullet">-</span> <span class="attr">name:</span> <span class="string">skipTLS</span></span><br><span class="line"> <span class="attr">url:</span> <span class="string">https://ss.my-insecure-domain.com</span></span><br><span class="line"> <span class="attr">skipTLSVerify:</span> <span class="literal">true</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># Default values to set for args along with dedicated keys that can be set by contributors, cli args take precedence over these.</span></span><br><span class="line"><span class="comment"># In other words, unset values results in no flags passed to helm.</span></span><br><span class="line"><span class="comment"># See the helm usage (helm SUBCOMMAND -h) for more info on default values when those flags aren't 
provided.</span></span><br><span class="line"><span class="attr">helmDefaults:</span></span><br><span class="line"> <span class="attr">kubeContext:</span> <span class="string">kube-context</span> <span class="comment">#dedicated default key for kube-context (--kube-context)</span></span><br><span class="line"> <span class="attr">cleanupOnFail:</span> <span class="literal">false</span> <span class="comment">#dedicated default key for helm flag --cleanup-on-fail</span></span><br><span class="line"> <span class="comment"># verify the chart before upgrading (only works with packaged charts not directories) (default false)</span></span><br><span class="line"> <span class="attr">verify:</span> <span class="literal">true</span></span><br><span class="line"> <span class="comment"># wait for k8s resources via --wait. (default false)</span></span><br><span class="line"> <span class="attr">wait:</span> <span class="literal">true</span></span><br><span class="line"> <span class="comment"># if set and --wait enabled, will wait until all Jobs have been completed before marking the release as successful. It will wait for as long as --timeout (default false, Implemented in Helm3.5)</span></span><br><span class="line"> <span class="attr">waitForJobs:</span> <span class="literal">true</span></span><br><span class="line"> <span class="comment"># time in seconds to wait for any individual Kubernetes operation (like Jobs for hooks, and waits on pod/pvc/svc/deployment readiness) (default 300)</span></span><br><span class="line"> <span class="attr">timeout:</span> <span class="number">600</span></span><br><span class="line"> <span class="comment"># limit the maximum number of revisions saved per release. Use 0 for no limit. 
(default 10)</span></span><br><span class="line"> <span class="attr">historyMax:</span> <span class="number">10</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># The desired states of Helm releases.</span></span><br><span class="line"><span class="comment">#</span></span><br><span class="line"><span class="comment"># Helmfile runs various helm commands to converge the current state in the live cluster to the desired state defined here.</span></span><br><span class="line"><span class="attr">releases:</span></span><br><span class="line"> <span class="comment"># Published chart example</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">vault</span> <span class="comment"># name of this release</span></span><br><span class="line"> <span class="attr">namespace:</span> <span class="string">vault</span> <span class="comment"># target namespace</span></span><br><span class="line"> <span class="attr">createNamespace:</span> <span class="literal">true</span> <span class="comment"># helm 3.2+ automatically create release namespace (default true)</span></span><br><span class="line"> <span class="attr">labels:</span> <span class="comment"># Arbitrary key value pairs for filtering releases</span></span><br><span class="line"> <span class="attr">foo:</span> <span class="string">bar</span></span><br><span class="line"> <span class="attr">chart:</span> <span class="string">roboll/vault-secret-manager</span> <span class="comment"># the chart being installed to create this release, referenced by `repository/chart` syntax</span></span><br><span class="line"> <span class="attr">version:</span> <span class="string">~1.24.1</span> <span class="comment"># the semver of the chart. 
range constraint is supported</span></span><br><span class="line"> <span class="attr">installed:</span> {{ <span class="string">.Values.vault.enabled</span> }}</span><br><span class="line"> <span class="attr">missingFileHandler:</span> <span class="string">Warn</span> <span class="comment"># set to either "Error" or "Warn". "Error" instructs helmfile to fail when unable to find a values or secrets file. When "Warn", it prints the file and continues.</span></span><br><span class="line"> <span class="comment"># Values files used for rendering the chart</span></span><br><span class="line"> <span class="attr">values:</span></span><br><span class="line"> <span class="comment"># Value files passed via --values</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">vault.yaml</span></span><br><span class="line"> <span class="comment"># Inline values, passed via a temporary values file and --values, so that it doesn't suffer from type issues like --set</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">address:</span> <span class="string">https://vault.example.com</span></span><br><span class="line"> <span class="comment"># Go template available in inline values and values files.</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">image:</span></span><br><span class="line"> <span class="comment"># The end result is more or less YAML. 
So do `quote` to prevent number-like strings from accidentally parsed into numbers!</span></span><br><span class="line"> <span class="comment"># See https://github.com/roboll/helmfile/issues/608</span></span><br><span class="line"> <span class="attr">tag:</span> {{ <span class="string">requiredEnv</span> <span class="string">"IMAGE_TAG"</span> <span class="string">|</span> <span class="string">quote</span> }}</span><br><span class="line"> <span class="comment"># Otherwise:</span></span><br><span class="line"> <span class="comment"># tag: "{{ requiredEnv "IMAGE_TAG" }}"</span></span><br><span class="line"> <span class="comment"># tag: !!string {{ requiredEnv "IMAGE_TAG" }}</span></span><br><span class="line"> <span class="attr">db:</span></span><br><span class="line"> <span class="attr">username:</span> {{ <span class="string">requiredEnv</span> <span class="string">"DB_USERNAME"</span> }}</span><br><span class="line"> <span class="comment"># value taken from environment variable. Quotes are necessary. Will throw an error if the environment variable is not set. 
$DB_PASSWORD needs to be set in the calling environment ex: export DB_PASSWORD='password1'</span></span><br><span class="line"> <span class="attr">password:</span> {{ <span class="string">requiredEnv</span> <span class="string">"DB_PASSWORD"</span> }}</span><br><span class="line"> <span class="attr">proxy:</span></span><br><span class="line"> <span class="comment"># Interpolate environment variable with a fixed string</span></span><br><span class="line"> <span class="attr">domain:</span> {{ <span class="string">requiredEnv</span> <span class="string">"PLATFORM_ID"</span> }}<span class="string">.my-domain.com</span></span><br><span class="line"> <span class="attr">scheme:</span> {{ <span class="string">env</span> <span class="string">"SCHEME"</span> <span class="string">|</span> <span class="string">default</span> <span class="string">"https"</span> }}</span><br><span class="line"> <span class="comment"># Use `values` whenever possible!</span></span><br><span class="line"> <span class="comment"># `set` translates to helm's `--set key=val`, that is known to suffer from type issues like https://github.com/roboll/helmfile/issues/608</span></span><br><span class="line"> <span class="attr">set:</span></span><br><span class="line"> <span class="comment"># single value loaded from a local file, translates to --set-file foo.config=path/to/file</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">foo.config</span></span><br><span class="line"> <span class="attr">file:</span> <span class="string">path/to/file</span></span><br><span class="line"> <span class="comment"># set a single array value in an array, translates to --set bar[0]={1,2}</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">bar[0]</span></span><br><span class="line"> <span class="attr">values:</span></span><br><span class="line"> <span class="bullet">-</span> <span 
class="number">1</span></span><br><span class="line"> <span class="bullet">-</span> <span class="number">2</span></span><br><span class="line"> <span class="comment"># set a templated value</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">namespace</span></span><br><span class="line"> <span class="attr">value:</span> {{ <span class="string">.Namespace</span> }}</span><br><span class="line"> <span class="comment"># will attempt to decrypt it using helm-secrets plugin</span></span><br><span class="line"> <span class="attr">secrets:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">vault_secret.yaml</span></span><br><span class="line"> <span class="attr">kubeContext:</span> <span class="string">kube-context</span></span><br><span class="line"> <span class="comment"># Local chart example</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">name:</span> <span class="string">grafana</span> <span class="comment"># name of this release</span></span><br><span class="line"> <span class="attr">namespace:</span> <span class="string">another</span> <span class="comment"># target namespace</span></span><br><span class="line"> <span class="attr">chart:</span> <span class="string">../my-charts/grafana</span> <span class="comment"># the chart being installed to create this release, referenced by relative path to local helmfile</span></span><br><span class="line"> <span class="attr">values:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">"../../my-values/grafana/values.yaml"</span> <span class="comment"># Values file (relative path to manifest)</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">./values/{{</span> <span class="string">requiredEnv</span> <span class="string">"PLATFORM_ENV"</span> <span class="string">}}/config.yaml</span> <span class="comment"># Values 
file taken from path with environment variable. $PLATFORM_ENV must be set in the calling environment.</span></span><br><span class="line"> <span class="attr">wait:</span> <span class="literal">true</span></span><br><span class="line"></span><br></pre></td></tr></table></figure>
<p>A basic helmfile consists of three parts:</p>
<ul>
<li>repositories: the helm chart repositories that charts are pulled from</li>
<li>helmDefaults: default settings applied to the helm commands helmfile runs</li>
<li>releases: the desired helm releases to deploy, from local charts or repository charts</li>
</ul>
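<p>Putting the three parts together, a minimal helmfile.yaml could look like the sketch below (the repository, chart, and release names are placeholders):</p>

```yaml
repositories:
- name: stable
  url: https://charts.helm.sh/stable

helmDefaults:
  wait: true
  timeout: 600

releases:
- name: myapp            # release installed from a repository chart
  namespace: myapp
  chart: stable/myapp
- name: local-app        # release installed from a local chart
  namespace: myapp
  chart: ./charts/local-app
```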
<h2 id="helmfile-installation-and-cli"><a href="#helmfile-installation-and-cli" class="headerlink" title="helmfile installation and cli"></a>helmfile installation and cli</h2><p>For installation</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ sudo curl -L https://github.com/roboll/helmfile/releases/download/v0.143.0/helmfile_linux_amd64 -o /usr/bin/helmfile</span><br><span class="line">$ sudo chmod +x /usr/bin/helmfile</span><br><span class="line"></span><br><span class="line"><span class="comment"># Note: helm and the helm-diff plugin must be installed as well</span></span><br></pre></td></tr></table></figure>
<p>Some frequently used commands</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">$ helmfile -f helmfile.yaml diff</span><br><span class="line">$ helmfile -f helmfile.yaml apply</span><br><span class="line">$ helmfile -f helmfile.yaml sync</span><br><span class="line">$ helmfile -f helmfile.yaml destroy</span><br></pre></td></tr></table></figure>
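<p>Releases can also be filtered with label selectors (<code>-l</code>), matching the <code>labels:</code> defined on each release, so one helmfile can drive partial deployments. For example (the <code>foo=bar</code> label below assumes the release labels shown earlier):</p>

```bash
# only releases labeled foo=bar
$ helmfile -f helmfile.yaml -l foo=bar diff
# filter by the name label that helmfile sets from each release's name
$ helmfile -f helmfile.yaml -l name=vault sync
```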
<h2 id="advanced-configuration-nestated-states"><a href="#advanced-configuration-nestated-states" class="headerlink" title="advanced configuration: nested states"></a>advanced configuration: nested states</h2><p>Nested states let you split your releases across separate helmfiles and manage them all from one top-level helmfile.yaml.</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="comment"># Advanced Configuration: Nested States</span></span><br><span class="line"><span class="comment">#</span></span><br><span class="line"><span class="attr">helmfiles:</span></span><br><span class="line"><span class="bullet">-</span> <span class="comment"># Path to the helmfile state file being processed BEFORE releases in this state file</span></span><br><span class="line"> <span class="attr">path:</span> <span class="string">path/to/subhelmfile.yaml</span></span><br><span class="line"> <span class="comment"># Label selector used for filtering releases in the nested state.</span></span><br><span class="line"> <span class="comment"># For example, `name=prometheus` in this context is equivalent to processing the nested state like</span></span><br><span class="line"> <span class="comment"># helmfile -f path/to/subhelmfile.yaml -l name=prometheus sync</span></span><br><span class="line"> <span class="attr">selectors:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">name=prometheus</span></span><br><span class="line"> <span class="comment"># Override state values</span></span><br><span class="line"> <span class="attr">values:</span></span><br><span class="line"> <span class="comment"># Values files merged into the nested state's values</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">additional.values.yaml</span></span><br><span class="line"> <span class="comment"># One important aspect of using values here is that they first need to be defined in the values section</span></span><br><span class="line"> <span class="comment"># of the origin helmfile, so in this example key1 needs to be in the values or environments.NAME.values of path/to/subhelmfile.yaml</span></span><br><span class="line"> <span class="comment"># Inline state values merged into the nested state's 
values</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">key1:</span> <span class="string">val1</span></span><br><span class="line"><span class="bullet">-</span> <span class="comment"># All the nested state files under `helmfiles:` is processed in the order of definition.</span></span><br><span class="line"> <span class="comment"># So it can be used for preparation for your main `releases`. An example would be creating CRDs required by `releases` in the parent state file.</span></span><br><span class="line"> <span class="attr">path:</span> <span class="string">path/to/mycrd.helmfile.yaml</span></span><br><span class="line"><span class="bullet">-</span> <span class="comment"># Terraform-module-like URL for importing a remote directory and use a file in it as a nested-state file</span></span><br><span class="line"> <span class="comment"># The nested-state file is locally checked-out along with the remote directory containing it.</span></span><br><span class="line"> <span class="comment"># Therefore all the local paths in the file are resolved relative to the file</span></span><br><span class="line"> <span class="attr">path:</span> <span class="string">git::https://github.com/cloudposse/helmfiles.git@releases/kiam.yaml?ref=0.40.0</span></span><br><span class="line"><span class="comment"># If set to "Error", return an error when a subhelmfile points to a</span></span><br><span class="line"><span class="comment"># non-existent path. The default behavior is to print a warning and continue.</span></span><br><span class="line"><span class="attr">missingFileHandler:</span> <span class="string">Error</span></span><br></pre></td></tr></table></figure>
<p>Under <code>helmfiles:</code>, each independent sub-helmfile is included via its path; we can also pass values files down to it and use selectors to choose which of its releases are executed.</p>
<h2 id="helmfile-with-environment-management"><a href="#helmfile-with-environment-management" class="headerlink" title="helmfile with environment management"></a>helmfile with environment management</h2><p>We can also manage deployments with per-environment values files. Environment values let you inject a set of values specific to the selected environment into values.yaml.gotmpl templates. </p>
<p>e.g., <code>helmfile -f helmfile.yaml -e dev01 apply</code></p>
<p>The helmfile.yaml is configured with the different environments:</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="comment"># The list of environments managed by helmfile.</span></span><br><span class="line"><span class="comment">#</span></span><br><span class="line"><span class="comment"># The default is `environments: {"default": {}}` which implies:</span></span><br><span class="line"><span class="comment">#</span></span><br><span class="line"><span class="comment"># - `{{ .Environment.Name }}` evaluates to "default"</span></span><br><span class="line"><span class="comment"># - `{{ .Values }}` being empty</span></span><br><span class="line"><span class="attr">environments:</span></span><br><span class="line"> <span class="comment"># The "default" environment is available and used when `helmfile` is run without `--environment NAME`.</span></span><br><span class="line"> <span class="attr">default:</span></span><br><span class="line"> <span class="comment"># Everything from the values.yaml is available via `{{ .Values.KEY }}`.</span></span><br><span class="line"> <span class="comment"># Suppose `{"foo": {"bar": 1}}` contained in the values.yaml below,</span></span><br><span class="line"> <span class="comment"># `{{ .Values.foo.bar }}` is evaluated to `1`.</span></span><br><span class="line"> <span class="attr">values:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">environments/default/values.yaml</span></span><br><span class="line"> <span class="comment"># Each entry in values can be either a file path or inline values.</span></span><br><span class="line"> <span class="comment"># The below is an example of inline values, which is merged to the `.Values`</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">myChartVer:</span> <span class="number">1.0</span><span class="number">.0</span><span class="string">-dev</span></span><br><span class="line"> <span class="comment"># Any environment other than `default` is 
used only when `helmfile` is run with `--environment NAME`.</span></span><br><span class="line"> <span class="comment"># That is, the "production" env below is used when and only when it is run like `helmfile --environment production sync`.</span></span><br><span class="line"> <span class="attr">production:</span></span><br><span class="line"> <span class="attr">values:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">environments/production/values.yaml</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">myChartVer:</span> <span class="number">1.0</span><span class="number">.0</span></span><br><span class="line"> <span class="comment"># disable vault release processing</span></span><br><span class="line"> <span class="bullet">-</span> <span class="attr">vault:</span></span><br><span class="line"> <span class="attr">enabled:</span> <span class="literal">false</span></span><br><span class="line"> <span class="comment">## `secrets.yaml` is decrypted by `helm-secrets` and available via `{{ .Environment.Values.KEY }}`</span></span><br><span class="line"> <span class="attr">secrets:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">environments/production/secrets.yaml</span></span><br><span class="line"> <span class="comment"># Instructs helmfile to fail when unable to find a environment values file listed under `environments.NAME.values`.</span></span><br><span class="line"> <span class="comment">#</span></span><br><span class="line"> <span class="comment"># Possible values are "Error", "Warn", "Info", "Debug". 
The default is "Error".</span></span><br><span class="line"> <span class="comment">#</span></span><br><span class="line"> <span class="comment"># Use "Warn", "Info", or "Debug" if you want helmfile to not fail when a values file is missing, while just leaving</span></span><br><span class="line"> <span class="comment"># a message about the missing file at the log-level.</span></span><br><span class="line"> <span class="attr">missingFileHandler:</span> <span class="string">Error</span></span><br><span class="line"> <span class="comment"># kubeContext to use for this environment</span></span><br><span class="line"> <span class="attr">kubeContext:</span> <span class="string">kube-context</span></span><br><span class="line"><span class="attr">releases:</span></span><br><span class="line"><span class="bullet">-</span> <span class="attr">name:</span> <span class="string">myapp</span></span><br><span class="line"> <span class="attr">chart:</span> <span class="string">myrepo/myapp</span></span><br><span class="line"> <span class="attr">values:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">values.yaml.gotmpl</span></span><br></pre></td></tr></table></figure>
<p><code>environments/xx/values.yaml</code> should contain the values to be fed into values.yaml.gotmpl:</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">domain:</span> <span class="string">myapp.example.com</span></span><br><span class="line"><span class="attr">releaseName:</span> <span class="string">myapp</span></span><br></pre></td></tr></table></figure>
<p>in <code>values.yaml.gotmpl</code></p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">domain:</span> {{ <span class="string">.Values</span> <span class="string">|</span> <span class="string">get</span> <span class="string">"domain"</span> <span class="string">"dev.example.com"</span> }}</span><br></pre></td></tr></table></figure>
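<p>With the files above, <code>get "domain" "dev.example.com"</code> returns the <code>domain</code> key from the selected environment's values when it is present, and the given default otherwise. Sketching the rendered result (assuming only the production environment defines <code>domain</code>):</p>

```yaml
# helmfile -e production apply  ->  domain: myapp.example.com   (from the environment values file)
# helmfile apply                ->  domain: dev.example.com     (fallback default)
```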
<h2 id="helmfile-with-secrets-management"><a href="#helmfile-with-secrets-management" class="headerlink" title="helmfile with secrets management"></a>helmfile with secrets management</h2><p>Among environment values, sensitive variables should always be stored encrypted. Helmfile also supports secrets: the secrets parameter in a helmfile.yaml causes the helm-secrets plugin to be executed to decrypt the secrets.yaml file before rendering the templates (the helm-secrets plugin must be installed to use this feature).</p>
<p>The secrets usage in helmfile is the same as in a helm chart; a detailed introduction can be found in the previous post about <a href="https://naomilyj.github.io/2022/02/06/helmchart/">helm chart</a>.</p>
<h2 id="helmfile-hook-and-kustomize"><a href="#helmfile-hook-and-kustomize" class="headerlink" title="helmfile hook and kustomize"></a>helmfile hook and kustomize</h2><h3 id="hook"><a href="#hook" class="headerlink" title="hook"></a>hook</h3><p>A Helmfile hook is a per-release extension point that is composed of:</p>
<ul>
<li>events</li>
<li>command</li>
<li>args</li>
<li>showlogs</li>
</ul>
<p>Helmfile triggers various events while it is running. Once an event is triggered, the associated hooks are executed by running the command with args. The standard output of the command is displayed when showlogs is set to true.</p>
<p>Currently supported events are:</p>
<ul>
<li>prepare</li>
<li>presync</li>
<li>preuninstall</li>
<li>postuninstall</li>
<li>postsync</li>
<li>cleanup</li>
</ul>
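<p>For example, a release-level hook that runs commands around a sync, following the event/command/args/showlogs fields above (the commands and release names are placeholders):</p>

```yaml
releases:
- name: myapp
  chart: myrepo/myapp
  hooks:
  - events: ["presync"]
    showlogs: true
    command: "kubectl"
    args: ["get", "nodes"]
  - events: ["postsync"]
    showlogs: true
    command: "echo"
    args: ["myapp synced"]
```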
<h3 id="kustomize"><a href="#kustomize" class="headerlink" title="kustomize"></a>kustomize</h3><p>Helmfile can manage kustomize manifests through hooks: a prepare hook renders the resources with the kustomize command and packages them into a temporary helm chart, which helmfile then installs like any other release.</p>
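<p>A sketch of this pattern, assuming kustomize is installed and all paths are placeholders: a prepare-event hook runs kustomize build and writes the rendered manifests into a local chart's templates directory, which the release then installs.</p>

```yaml
releases:
- name: kustomized-app
  chart: ./tmp-chart        # minimal local chart whose templates/ is filled by the hook
  hooks:
  - events: ["prepare"]
    showlogs: true
    command: "bash"
    args: ["-c", "kustomize build ./overlays/prod > ./tmp-chart/templates/all.yaml"]
```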
<h2 id="References"><a href="#References" class="headerlink" title="References"></a>References</h2><ol>
<li><a href="https://github.com/roboll/helmfile">https://github.com/roboll/helmfile</a></li>
<li><a href="https://github.com/NaomiLYJ/helmfile-templates">https://github.com/NaomiLYJ/helmfile-templates</a></li>
</ol>
]]></content>
<categories>
<category>helm</category>
</categories>
<tags>
<tag>helm</tag>
<tag>k8s</tag>
<tag>helm charts</tag>
</tags>
</entry>
<entry>
<title>k8s cluster installation with kubeadm</title>
<url>/2022/02/06/k8s/</url>
<content><![CDATA[<h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p>This note covers installing a k8s cluster with kubeadm.</p>
<h2 id="Environment"><a href="#Environment" class="headerlink" title="Environment"></a>Environment</h2><ul>
<li>ubuntu 22.04</li>
<li>containerd: 1.5.10</li>
<li>k8s v1.22.7<ul>
<li>calico 3.22</li>
<li>nginx-ingress</li>
</ul>
</li>
</ul>
<span id="more"></span>
<h2 id="Basic-package-installation"><a href="#Basic-package-installation" class="headerlink" title="Basic package installation"></a>Basic package installation</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="meta">#!/bin/bash</span></span><br><span class="line"></span><br><span class="line">CONTAINERD_VERSION=1.5.10</span><br><span class="line">K8S_VERSION=1.22.7-00</span><br><span class="line"></span><br><span class="line"><span class="comment"># Basic environment</span></span><br><span class="line">sudo apt-get install vim bridge-utils ebtables -y</span><br><span class="line"><span class="comment">## disable firewall</span></span><br><span class="line">sudo ufw <span class="built_in">disable</span></span><br><span class="line"><span class="comment">## disable swap</span></span><br><span class="line">sudo swapoff -a</span><br><span class="line">sudo modprobe nf_tables</span><br><span class="line">sudo modprobe br_netfilter</span><br><span class="line"></span><br><span class="line"><span class="comment">## enable forwarding</span></span><br><span class="line">sudo tee /etc/sysctl.d/k8s.conf << <span class="string">EOF</span></span><br><span class="line"><span class="string">net.bridge.bridge-nf-call-iptables = 1</span></span><br><span class="line"><span class="string">net.bridge.bridge-nf-call-ip6tables = 1</span></span><br><span class="line"><span class="string">net.ipv4.ip_forward = 1</span></span><br><span class="line"><span class="string">vm.swappiness = 0</span></span><br><span class="line"><span class="string">EOF</span></span><br><span class="line">sudo sysctl -p /etc/sysctl.d/k8s.conf</span><br><span class="line"></span><br><span class="line"><span class="comment">## ipvs</span></span><br><span class="line">sudo apt-get install ipvsadm ipset -y</span><br><span class="line">find /lib/modules/$(uname -r)/ -iname <span class="string">"**.ko*"</span> | cut -d/ -f5-|grep ip_vs</span><br><span 
class="line"></span><br><span class="line"><span class="comment">## ntp</span></span><br><span class="line">sudo apt-get install chrony -y</span><br><span class="line">sudo systemctl <span class="built_in">enable</span> chrony --now</span><br><span class="line"></span><br><span class="line"><span class="comment">## other required packages</span></span><br><span class="line">sudo apt-get install -y \</span><br><span class="line"> apt-transport-https \</span><br><span class="line"> ca-certificates \</span><br><span class="line"> curl \</span><br><span class="line"> gnupg-agent \</span><br><span class="line"> software-properties-common</span><br><span class="line"></span><br><span class="line"><span class="comment">## check if libseccomp2 is installed</span></span><br><span class="line">dpkg --list | grep libseccomp</span><br><span class="line"></span><br><span class="line"><span class="comment"># Install Containerd</span></span><br><span class="line">wget https://github.com/containerd/containerd/releases/download/v<span class="variable">${CONTAINERD_VERSION}</span>/cri-containerd-cni-<span class="variable">${CONTAINERD_VERSION}</span>-linux-amd64.tar.gz</span><br><span class="line">sudo tar --no-overwrite-dir -C / -xzf cri-containerd-cni-<span class="variable">${CONTAINERD_VERSION}</span>-linux-amd64.tar.gz</span><br><span class="line"></span><br><span class="line">sudo mkdir -p /etc/containerd && sudo containerd config default | sudo sed -r -e <span class="string">"s@SystemdCgroup = false@SystemdCgroup = true@g"</span> | sudo tee /etc/containerd/config.toml</span><br><span class="line"></span><br><span class="line">sudo systemctl daemon-reload</span><br><span class="line">sudo systemctl <span class="built_in">enable</span> --now containerd</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment"># Install kubeadm, kubelet, kubectl</span></span><br><span class="line">sudo curl -fsSLo 
/usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg</span><br><span class="line"><span class="built_in">echo</span> <span class="string">"deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main"</span> | sudo tee /etc/apt/sources.list.d/kubernetes.list</span><br><span class="line"></span><br><span class="line">sudo apt-get update \</span><br><span class="line"> && sudo apt-get install -y -q kubelet=<span class="variable">${K8S_VERSION}</span> kubectl=<span class="variable">${K8S_VERSION}</span> kubeadm=<span class="variable">${K8S_VERSION}</span></span><br><span class="line">sudo systemctl <span class="built_in">enable</span> --now kubelet</span><br></pre></td></tr></table></figure>
<p>For GPU worker nodes, install nvidia-container-toolkit:</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">distribution=$(. /etc/os-release;<span class="built_in">echo</span> $ID<span class="variable">$VERSION_ID</span>) \</span><br><span class="line"> && curl -s -L https://nvidia.github.io/libnvidia-container/gpgkey | sudo apt-key add - \</span><br><span class="line"> && curl -s -L https://nvidia.github.io/libnvidia-container/<span class="variable">$distribution</span>/libnvidia-container.list | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list</span><br><span class="line"></span><br><span class="line">sudo apt-get update \</span><br><span class="line"> && sudo apt-get install -y nvidia-container-toolkit</span><br></pre></td></tr></table></figure>
<p>and add the following to the containerd config.toml to enable the nvidia-container-runtime:</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"> [plugins.<span class="string">"io.containerd.grpc.v1.cri"</span>.containerd.runtimes.runc.options]</span><br><span class="line">+ SystemdCgroup = <span class="literal">true</span></span><br><span class="line">+ [plugins.<span class="string">"io.containerd.grpc.v1.cri"</span>.containerd.runtimes.nvidia]</span><br><span class="line">+ privileged_without_host_devices = <span class="literal">false</span></span><br><span class="line">+ runtime_engine = <span class="string">""</span></span><br><span class="line">+ runtime_root = <span class="string">""</span></span><br><span class="line">+ runtime_type = <span class="string">"io.containerd.runc.v1"</span></span><br><span class="line">+ [plugins.<span class="string">"io.containerd.grpc.v1.cri"</span>.containerd.runtimes.nvidia.options]</span><br><span class="line">+ BinaryName = <span class="string">"/usr/bin/nvidia-container-runtime"</span></span><br><span class="line">+ SystemdCgroup = <span class="literal">true</span></span><br></pre></td></tr></table></figure>
<p>and change the default runtime from “runc” to “nvidia”</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">sudo sed -i <span class="string">'s/default_runtime_name = "runc"/default_runtime_name = "nvidia"/g'</span> /etc/containerd/config.toml</span><br><span class="line">sudo systemctl restart containerd</span><br></pre></td></tr></table></figure>
<p>Configure containerd to allow a private registry with a self-signed certificate:</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[plugins.<span class="string">"io.containerd.grpc.v1.cri"</span>.registry.configs]</span><br><span class="line"> [plugins.<span class="string">"io.containerd.grpc.v1.cri"</span>.registry.configs.<span class="string">"private.registry.com:8080"</span>.tls]</span><br><span class="line"> insecure_skip_verify = <span class="literal">true</span></span><br></pre></td></tr></table></figure>
<h2 id="Setup-the-cluster"><a href="#Setup-the-cluster" class="headerlink" title="Setup the cluster"></a>Setup the cluster</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># Start installation</span></span><br><span class="line"><span class="comment"># get initialization files, modify the master ip and add serviceSubnet, podSubnet accordingly, and master taint</span></span><br><span class="line">kubeadm config <span class="built_in">print</span> init-defaults --component-configs KubeletConfiguration > kubeadmin.yaml</span><br><span class="line"><span class="comment"># check images should be used for initialization</span></span><br><span class="line">sudo kubeadm config images list --config kubeadmin.yaml</span><br><span class="line"><span class="comment"># images will be saved under containerd k8s.io ns</span></span><br><span class="line">sudo kubeadm config images pull --config kubeadmin.yaml </span><br><span class="line"><span class="comment"># initialize the cluster</span></span><br><span class="line">sudo kubeadm init --config kubeadmin.yaml --v 5</span><br><span class="line"><span class="comment"># join other nodes (invalid after 24h), regenerate command: sudo kubeadm token create --print-join-command</span></span><br><span class="line">kubeadm join master-ip:6443 --token abcdef.0123456789abcdef \</span><br><span class="line"> --discovery-token-ca-cert-hash sha256:8345453e3d08fb73e90d916adee187182cdb01af8c999ca0bd6f47e7d2089dee</span><br></pre></td></tr></table></figure>
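<p>After kubeadm init succeeds, it prints instructions for configuring kubectl access on the master; the standard steps are:</p>

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# verify access
kubectl get nodes
```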
<h2 id="Troubleshot-setup-k8s"><a href="#Troubleshot-setup-k8s" class="headerlink" title="Troubleshoot k8s setup"></a>Troubleshoot k8s setup</h2><ul>
<li>if docker is also installed, you need to edit the kubelet configuration file /var/lib/kubelet/kubeadm-flags.env and add the following flags to the KUBELET_KUBEADM_ARGS variable (adapt the container-runtime-endpoint path if needed):<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">sudo vim /var/lib/kubelet/kubeadm-flags.env</span><br><span class="line">--container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock</span><br><span class="line"><span class="comment"># or </span></span><br><span class="line">sudo vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf</span><br><span class="line">Environment=<span class="string">"KUBELET_EXTRA_ARGS=--container-runtime remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock"</span></span><br></pre></td></tr></table></figure></li>
</ul>
<h3 id="Reset-k8s-cluster"><a href="#Reset-k8s-cluster" class="headerlink" title="Reset k8s cluster"></a>Reset k8s cluster</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">sudo kubeadm reset</span><br></pre></td></tr></table></figure>
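<p>Note that kubeadm reset does not clean up CNI configuration, ipvs tables, or iptables rules; before re-initializing a node, clean them up manually (a sketch of the usual steps):</p>

```bash
sudo rm -rf /etc/cni/net.d
sudo ipvsadm --clear
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
rm -rf $HOME/.kube/config
```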
<h2 id="Deploy-networkplugin"><a href="#Deploy-networkplugin" class="headerlink" title="Deploy network plugin"></a>Deploy network plugin</h2><p><a href="https://www.cjavapy.com/article/2394/">https://www.cjavapy.com/article/2394/</a><br><a href="https://www.teanote.pub/archives/300">https://www.teanote.pub/archives/300</a></p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">CALICO_VERSION=3.22</span><br><span class="line">curl -o calico.yaml https://docs.projectcalico.org/archive/v<span class="variable">${CALICO_VERSION}</span>/manifests/calico.yaml</span><br><span class="line"><span class="comment"># check images needed</span></span><br><span class="line">grep image calico.yaml</span><br><span class="line"><span class="comment"># modify calico.yaml (match the k8s podSubnet, and set the network interface)</span></span><br><span class="line">- name: CALICO_IPV4POOL_CIDR</span><br><span class="line"> value: <span class="string">"192.168.0.0/16"</span> </span><br><span class="line">- name: IP_AUTODETECTION_METHOD</span><br><span class="line"> value: <span class="string">"interface=eth0"</span></span><br><span class="line"><span class="comment"># do not let NetworkManager manage the calico interfaces</span></span><br><span class="line">cat <<<span class="string">EOF>> /etc/NetworkManager/conf.d/calico.conf</span></span><br><span class="line"><span class="string">[keyfile]</span></span><br><span class="line"><span class="string">unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico</span></span><br><span class="line"><span class="string">EOF</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># apply bgp</span></span><br><span class="line">kubectl apply -f calico_3_22_bgp.yaml</span><br><span class="line"></span><br><span class="line"><span class="comment"># apply vxlan</span></span><br><span class="line">kubectl apply -f calico_3_22_vxlan.yaml</span><br></pre></td></tr></table></figure>
<h2 id="Troubleshot-networkplugin"><a href="#Troubleshot-networkplugin" class="headerlink" title="Troubleshot networkplugin"></a>Troubleshoot network plugin</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">default ip route:</span><br><span class="line"></span><br><span class="line">default via 10.0.2.1 dev eth0 proto dhcp src 10.0.2.4 metric 100 </span><br><span class="line">10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.4 </span><br><span class="line">168.63.129.16 via 10.0.2.1 dev eth0 proto dhcp src 10.0.2.4 metric 100 </span><br><span class="line">169.254.169.254 via 10.0.2.1 dev eth0 proto dhcp src 10.0.2.4 metric 100 </span><br><span class="line">172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown </span><br><span class="line">172.18.0.0/16 dev br-e468dacf0271 proto kernel scope link src 172.18.0.1 linkdown</span><br><span class="line"></span><br><span class="line">calico_vxlan ip route:</span><br><span class="line"></span><br><span class="line">default via 10.0.2.1 dev eth0 proto dhcp src 10.0.2.5 metric 100 </span><br><span class="line">10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.5 </span><br><span class="line">168.63.129.16 via 10.0.2.1 dev eth0 proto dhcp src 10.0.2.5 metric 100 </span><br><span class="line">169.254.169.254 via 10.0.2.1 dev eth0 proto dhcp src 10.0.2.5 metric 100 </span><br><span class="line">172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown </span><br><span class="line">172.18.0.0/16 dev br-7420f731c7b2 proto kernel scope link src 172.18.0.1 linkdown </span><br><span class="line">blackhole 192.168.27.0/26 proto 80 </span><br><span class="line">192.168.27.1 dev cali351751788c1 scope link </span><br><span class="line">192.168.27.2 dev cali6212a2caaa0 scope link </span><br><span class="line">192.168.27.3 dev calid0c10cbe515 scope link </span><br><span class="line">192.168.214.0/26 via 192.168.214.0 dev vxlan.calico onlink</span><br></pre></td></tr></table></figure>
<h2 id="Reset-networkplugin"><a href="#Reset-networkplugin" class="headerlink" title="Reset networkplugin"></a>Reset network plugin</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># flannel</span></span><br><span class="line">sudo ifconfig cni0 down && sudo ip link delete cni0</span><br><span class="line">sudo ifconfig flannel.1 down && sudo ip link delete flannel.1</span><br><span class="line"></span><br><span class="line"><span class="comment"># calico vxlan</span></span><br><span class="line">sudo ifconfig vxlan.calico down && sudo ip link delete vxlan.calico</span><br><span class="line"></span><br><span class="line"><span class="comment"># remove cni state</span></span><br><span class="line">sudo rm -rf /var/lib/cni/</span><br><span class="line"></span><br><span class="line"><span class="comment"># remove leftover pod routes</span></span><br><span class="line">sudo ip route del xx</span><br><span class="line"></span><br><span class="line"><span class="comment"># back up the network config files</span></span><br><span class="line">sudo mv /etc/cni/net.d/10-containerd-net.conflist /etc/cni/net.d/10-containerd-net.conflist.bk</span><br><span class="line">sudo mv /etc/cni/net.d/10-flannel.conflist /etc/cni/net.d/10-flannel.conflist.bk</span><br><span class="line">sudo mv /etc/cni/net.d/10-calico.conflist /etc/cni/net.d/10-calico.conflist.bk</span><br><span class="line">sudo rm -rf /etc/cni/net.d/calico-kubeconfig</span><br></pre></td></tr></table></figure>
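<p>The interface teardown above can be collapsed into one idempotent loop (run as root on the node; an interface that does not exist is skipped silently):</p>

```shell
# Delete leftover CNI interfaces only if they exist
for ifc in cni0 flannel.1 vxlan.calico; do
  ip link show "$ifc" >/dev/null 2>&1 && sudo ip link delete "$ifc"
done
true  # an absent interface is not an error
```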
<h2 id="Troubleshot-gpu-node"><a href="#Troubleshot-gpu-node" class="headerlink" title="Troubleshot gpu node"></a>Troubleshoot GPU node</h2><p>Symptom: Failed to initialize NVML: Driver/library version mismatch</p>
<p>yj@gpu01:~$ cat /proc/driver/nvidia/version<br>NVRM version: NVIDIA UNIX x86_64 Kernel Module 510.47.03 Mon Jan 24 22:58:54 UTC 2022<br>GCC version: gcc version 9.4.0 (Ubuntu 9.4.0-1ubuntu1~20.04.1) </p>
<p>yj@gpu01:~$ dpkg -l | grep -i nvidia<br>ii libnvidia-cfg1-510-server:amd64 510.73.08-0ubuntu0.20.04.1 amd64 NVIDIA binary OpenGL/GLX configuration library<br>ii libnvidia-common-510-server 510.73.08-0ubuntu0.20.04.1 all Shared files used by the NVIDIA libraries<br>ii libnvidia-compute-510-server:amd64 510.73.08-0ubuntu0.20.04.1 amd64 NVIDIA libcompute package<br>ii libnvidia-container-tools 1.9.0-1 amd64 NVIDIA container runtime library (command-line tools)<br>ii libnvidia-container1:amd64 1.9.0-1 amd64 NVIDIA container runtime library<br>ii libnvidia-decode-510-server:amd64 510.73.08-0ubuntu0.20.04.1 amd64 NVIDIA Video Decoding runtime libraries<br>ii libnvidia-encode-510-server:amd64 510.73.08-0ubuntu0.20.04.1 amd64 NVENC Video Encoding runtime library<br>ii libnvidia-extra-510-server:amd64 510.73.08-0ubuntu0.20.04.1 amd64 Extra libraries for the NVIDIA Server Driver<br>ii libnvidia-fbc1-510-server:amd64 510.73.08-0ubuntu0.20.04.1 amd64 NVIDIA OpenGL-based Framebuffer Capture runtime library<br>ii libnvidia-gl-510-server:amd64 510.73.08-0ubuntu0.20.04.1 amd64 NVIDIA OpenGL/GLX/EGL/GLES GLVND libraries and Vulkan ICD<br>ii nvidia-compute-utils-510-server 510.73.08-0ubuntu0.20.04.1 amd64 NVIDIA compute utilities<br>ii nvidia-container-toolkit 1.9.0-1 amd64 NVIDIA container runtime hook<br>ii nvidia-dkms-510-server 510.73.08-0ubuntu0.20.04.1 amd64 NVIDIA DKMS package<br>ii nvidia-driver-510-server 510.73.08-0ubuntu0.20.04.1 amd64 NVIDIA Server Driver metapackage<br>ii nvidia-kernel-common-510-server 510.73.08-0ubuntu0.20.04.1 amd64 Shared files used with the kernel module<br>ii nvidia-kernel-source-510-server 510.73.08-0ubuntu0.20.04.1 amd64 NVIDIA kernel source package<br>ii nvidia-utils-510-server 510.73.08-0ubuntu0.20.04.1 amd64 NVIDIA Server Driver support binaries<br>ii xserver-xorg-video-nvidia-510-server 510.73.08-0ubuntu0.20.04.1 amd64 NVIDIA binary Xorg driver</p>
<p>Check whether the driver packages have been upgraded behind the loaded kernel module (from 510.47.03 to 510.73.08):</p>
<p>yj@gpu01:~$ cat /var/log/dpkg.log|grep nvidia<br>2022-06-09 06:16:53 upgrade nvidia-driver-510-server:amd64 510.73.05-0ubuntu0.20.04.1 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:16:53 status half-configured nvidia-driver-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:16:53 status unpacked nvidia-driver-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:16:53 status half-installed nvidia-driver-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:16:53 status unpacked nvidia-driver-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:16:53 upgrade libnvidia-gl-510-server:amd64 510.73.05-0ubuntu0.20.04.1 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:16:53 status half-configured libnvidia-gl-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:16:53 status unpacked libnvidia-gl-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:16:53 status half-installed libnvidia-gl-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:05 status unpacked libnvidia-gl-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:05 upgrade nvidia-dkms-510-server:amd64 510.73.05-0ubuntu0.20.04.1 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:05 status half-configured nvidia-dkms-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:08 status unpacked nvidia-dkms-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:08 status half-installed nvidia-dkms-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:08 status unpacked nvidia-dkms-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:08 upgrade nvidia-kernel-source-510-server:amd64 510.73.05-0ubuntu0.20.04.1 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:08 status half-configured nvidia-kernel-source-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:08 status unpacked nvidia-kernel-source-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:08 status half-installed 
nvidia-kernel-source-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:09 status unpacked nvidia-kernel-source-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:09 upgrade nvidia-kernel-common-510-server:amd64 510.73.05-0ubuntu0.20.04.1 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:09 status half-configured nvidia-kernel-common-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:09 status unpacked nvidia-kernel-common-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:09 status half-installed nvidia-kernel-common-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:10 status unpacked nvidia-kernel-common-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:10 upgrade libnvidia-decode-510-server:amd64 510.73.05-0ubuntu0.20.04.1 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:10 status half-configured libnvidia-decode-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:10 status unpacked libnvidia-decode-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:10 status half-installed libnvidia-decode-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:10 status unpacked libnvidia-decode-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:10 upgrade libnvidia-compute-510-server:amd64 510.73.05-0ubuntu0.20.04.1 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:10 status half-configured libnvidia-compute-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:10 status unpacked libnvidia-compute-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:10 status half-installed libnvidia-compute-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 status unpacked libnvidia-compute-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 upgrade libnvidia-extra-510-server:amd64 510.73.05-0ubuntu0.20.04.1 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 status half-configured libnvidia-extra-510-server:amd64 
510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 status unpacked libnvidia-extra-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 status half-installed libnvidia-extra-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 status unpacked libnvidia-extra-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 upgrade nvidia-compute-utils-510-server:amd64 510.73.05-0ubuntu0.20.04.1 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 status half-configured nvidia-compute-utils-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 status unpacked nvidia-compute-utils-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 status half-installed nvidia-compute-utils-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 status unpacked nvidia-compute-utils-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 upgrade libnvidia-encode-510-server:amd64 510.73.05-0ubuntu0.20.04.1 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 status half-configured libnvidia-encode-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 status unpacked libnvidia-encode-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 status half-installed libnvidia-encode-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 status unpacked libnvidia-encode-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 upgrade nvidia-utils-510-server:amd64 510.73.05-0ubuntu0.20.04.1 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 status half-configured nvidia-utils-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 status unpacked nvidia-utils-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:14 status half-installed nvidia-utils-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:15 status unpacked nvidia-utils-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:15 upgrade xserver-xorg-video-nvidia-510-server:amd64 
510.73.05-0ubuntu0.20.04.1 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:15 status half-configured xserver-xorg-video-nvidia-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:15 status unpacked xserver-xorg-video-nvidia-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:15 status half-installed xserver-xorg-video-nvidia-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:15 status unpacked xserver-xorg-video-nvidia-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:15 upgrade libnvidia-fbc1-510-server:amd64 510.73.05-0ubuntu0.20.04.1 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:15 status half-configured libnvidia-fbc1-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:15 status unpacked libnvidia-fbc1-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:15 status half-installed libnvidia-fbc1-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:15 status unpacked libnvidia-fbc1-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:15 upgrade libnvidia-cfg1-510-server:amd64 510.73.05-0ubuntu0.20.04.1 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:15 status half-configured libnvidia-cfg1-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:15 status unpacked libnvidia-cfg1-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:15 status half-installed libnvidia-cfg1-510-server:amd64 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:17:15 status unpacked libnvidia-cfg1-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:15 configure nvidia-kernel-common-510-server:amd64 510.73.08-0ubuntu0.20.04.1 <none><br>2022-06-09 06:17:15 status unpacked nvidia-kernel-common-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:15 status half-configured nvidia-kernel-common-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status installed nvidia-kernel-common-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 
configure libnvidia-cfg1-510-server:amd64 510.73.08-0ubuntu0.20.04.1 <none><br>2022-06-09 06:17:37 status unpacked libnvidia-cfg1-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status half-configured libnvidia-cfg1-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status installed libnvidia-cfg1-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 configure libnvidia-compute-510-server:amd64 510.73.08-0ubuntu0.20.04.1 <none><br>2022-06-09 06:17:37 status unpacked libnvidia-compute-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status half-configured libnvidia-compute-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status installed libnvidia-compute-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 configure libnvidia-gl-510-server:amd64 510.73.08-0ubuntu0.20.04.1 <none><br>2022-06-09 06:17:37 status unpacked libnvidia-gl-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status half-configured libnvidia-gl-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status installed libnvidia-gl-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 configure nvidia-kernel-source-510-server:amd64 510.73.08-0ubuntu0.20.04.1 <none><br>2022-06-09 06:17:37 status unpacked nvidia-kernel-source-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status half-configured nvidia-kernel-source-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status installed nvidia-kernel-source-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 configure nvidia-utils-510-server:amd64 510.73.08-0ubuntu0.20.04.1 <none><br>2022-06-09 06:17:37 status unpacked nvidia-utils-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status half-configured nvidia-utils-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status installed nvidia-utils-510-server:amd64 
510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 configure libnvidia-fbc1-510-server:amd64 510.73.08-0ubuntu0.20.04.1 <none><br>2022-06-09 06:17:37 status unpacked libnvidia-fbc1-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status half-configured libnvidia-fbc1-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status installed libnvidia-fbc1-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 configure xserver-xorg-video-nvidia-510-server:amd64 510.73.08-0ubuntu0.20.04.1 <none><br>2022-06-09 06:17:37 status unpacked xserver-xorg-video-nvidia-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status half-configured xserver-xorg-video-nvidia-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status installed xserver-xorg-video-nvidia-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 configure libnvidia-decode-510-server:amd64 510.73.08-0ubuntu0.20.04.1 <none><br>2022-06-09 06:17:37 status unpacked libnvidia-decode-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status half-configured libnvidia-decode-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status installed libnvidia-decode-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 configure libnvidia-extra-510-server:amd64 510.73.08-0ubuntu0.20.04.1 <none><br>2022-06-09 06:17:37 status unpacked libnvidia-extra-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status half-configured libnvidia-extra-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status installed libnvidia-extra-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 configure nvidia-compute-utils-510-server:amd64 510.73.08-0ubuntu0.20.04.1 <none><br>2022-06-09 06:17:37 status unpacked nvidia-compute-utils-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status half-configured nvidia-compute-utils-510-server:amd64 
510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status installed nvidia-compute-utils-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 configure nvidia-dkms-510-server:amd64 510.73.08-0ubuntu0.20.04.1 <none><br>2022-06-09 06:17:37 status unpacked nvidia-dkms-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:17:37 status half-configured nvidia-dkms-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:18:35 status installed nvidia-dkms-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:18:35 configure libnvidia-encode-510-server:amd64 510.73.08-0ubuntu0.20.04.1 <none><br>2022-06-09 06:18:35 status unpacked libnvidia-encode-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:18:35 status half-configured libnvidia-encode-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:18:35 status installed libnvidia-encode-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:18:35 configure nvidia-driver-510-server:amd64 510.73.08-0ubuntu0.20.04.1 <none><br>2022-06-09 06:18:35 status unpacked nvidia-driver-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:18:35 status half-configured nvidia-driver-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:18:35 status installed nvidia-driver-510-server:amd64 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:19:29 upgrade libnvidia-common-510-server:all 510.73.05-0ubuntu0.20.04.1 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:19:29 status half-configured libnvidia-common-510-server:all 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:19:29 status unpacked libnvidia-common-510-server:all 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:19:29 status half-installed libnvidia-common-510-server:all 510.73.05-0ubuntu0.20.04.1<br>2022-06-09 06:19:29 status unpacked libnvidia-common-510-server:all 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:19:29 configure libnvidia-common-510-server:all 510.73.08-0ubuntu0.20.04.1 <none><br>2022-06-09 06:19:29 status unpacked 
libnvidia-common-510-server:all 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:19:29 status half-configured libnvidia-common-510-server:all 510.73.08-0ubuntu0.20.04.1<br>2022-06-09 06:19:29 status installed libnvidia-common-510-server:all 510.73.08-0ubuntu0.20.04.1</p>
<p>sudo reboot to sync the kernel module with the upgraded userspace libraries, then hold the driver package to prevent further unattended upgrades:<br>sudo apt-mark hold nvidia-driver-510-server</p>
<p>or disable the unattended upgrade of driver packages by editing<br>sudo vim /etc/apt/apt.conf.d/50unattended-upgrades</p>
<p>after reboot, the cluster restarts automatically (prerequisites: the docker and kubelet services are enabled)</p>
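<p>The root cause is easy to confirm by comparing the version of the still-loaded kernel module with the upgraded userspace libraries. A minimal sketch with the two versions hard-coded from the outputs above (on a live node they would be parsed from <code>/proc/driver/nvidia/version</code> and <code>dpkg -l</code>):</p>

```shell
KMOD_VER="510.47.03"  # loaded kernel module, from /proc/driver/nvidia/version
LIB_VER="510.73.08"   # installed userspace driver, from dpkg -l
if [ "$KMOD_VER" != "$LIB_VER" ]; then
  echo "mismatch: kernel module $KMOD_VER vs userspace $LIB_VER -> reboot or reload the module"
fi
```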
<h2 id="Test-gpu-job"><a href="#Test-gpu-job" class="headerlink" title="Test gpu job"></a>Test GPU job</h2><figure class="highlight python"><table><tr><td class="code"><pre><span class="line"><span class="comment"># exec into the container, then start a python shell</span></span><br><span class="line"><span class="comment"># pytorch</span></span><br><span class="line">import torch</span><br><span class="line">torch.cuda.is_available()</span><br><span class="line">torch.cuda.current_device()</span><br><span class="line">torch.cuda.device(0)</span><br><span class="line">torch.cuda.device_count()</span><br><span class="line">torch.cuda.get_device_name(0)</span><br><span class="line"></span><br><span class="line"><span class="comment"># tensorflow</span></span><br><span class="line">import tensorflow as tf</span><br><span class="line">from tensorflow.python.client import device_lib</span><br><span class="line">gpus = tf.config.experimental.list_physical_devices(<span class="string">'GPU'</span>)</span><br><span class="line"></span><br><span class="line"><span class="comment"># get all the devices</span></span><br><span class="line">local_device_protos = device_lib.list_local_devices()</span><br><span class="line"><span class="comment">#print(local_device_protos)</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># print only gpu</span></span><br><span class="line">[print(x) for x in local_device_protos if x.device_type == <span class="string">'GPU'</span>]</span><br></pre></td></tr></table></figure>
<h2 id="Install-with-Docker"><a href="#Install-with-Docker" class="headerlink" title="Install with Docker"></a>Install with Docker</h2><p>daemon.json</p>
<figure class="highlight json"><table><tr><td class="code"><pre><span class="line"></span><br><span class="line">{</span><br><span class="line"> <span class="attr">"graph"</span>: <span class="string">"/data/docker"</span>,</span><br><span class="line"> <span class="attr">"exec-opts"</span>: [<span class="string">"native.cgroupdriver=systemd"</span>],</span><br><span class="line"> <span class="attr">"default-runtime"</span>: <span class="string">"nvidia"</span>,</span><br><span class="line"> <span class="attr">"runtimes"</span>: {</span><br><span class="line"> <span class="attr">"nvidia"</span>: {</span><br><span class="line"> <span class="attr">"path"</span>: <span class="string">"/usr/bin/nvidia-container-runtime"</span>,</span><br><span class="line"> <span class="attr">"runtimeArgs"</span>: []</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> <span class="attr">"log-driver"</span>: <span class="string">"json-file"</span>,</span><br><span class="line"> <span class="attr">"insecure-registries"</span>: [<span class="string">"private.registry.com:8080"</span>],</span><br><span class="line"> <span class="attr">"live-restore"</span>: <span class="literal">true</span></span><br><span class="line">}</span><br></pre></td></tr></table></figure>
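<p>A malformed <code>daemon.json</code> prevents the docker daemon from starting, so it is worth validating the JSON before restarting the service. A quick check (the content is inlined here for illustration; on a node you would point <code>json.tool</code> at <code>/etc/docker/daemon.json</code>):</p>

```shell
# Validate JSON syntax with the stdlib json.tool module
python3 -m json.tool >/dev/null <<'EOF' && echo "daemon.json: valid JSON"
{
  "default-runtime": "nvidia",
  "live-restore": true
}
EOF
```

<p>which prints <code>daemon.json: valid JSON</code>; a syntax error would make the check exit non-zero instead.</p>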
<h2 id="Create-kubeconfig-based-on-service-account"><a href="#Create-kubeconfig-based-on-service-account" class="headerlink" title="Create kubeconfig based on service account"></a>Create kubeconfig based on service account</h2><p>create a service account and bind a cluster role to it</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">kubectl -n kube-system create serviceaccount <service-account-name></span><br><span class="line">kubectl create clusterrolebinding <clusterrole-binding-name> --clusterrole=cluster-admin --serviceaccount=namespace_name:<service-account-name></span><br></pre></td></tr></table></figure>
<p>check the created service account</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">ServiceAccount</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> <span class="string">soeren-cluster-admin</span></span><br><span class="line"> <span class="attr">namespace:</span> <span class="string">kube-system</span></span><br><span class="line"><span class="attr">secrets:</span></span><br><span class="line"><span class="bullet">-</span> <span class="attr">name:</span> <span class="string">xxx-token-dg9k7</span></span><br></pre></td></tr></table></figure>
<p>Secret_name=<code>kubectl -n namespace_name get serviceaccount/<service-account-name> -o jsonpath='{.secrets[0].name}'</code></p>
<p>TOKEN=<code>kubectl -n namespace_name get secret ${Secret_name} -o jsonpath='{.data.token}'| base64 --decode</code></p>
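<p>The token stored in the secret is base64-encoded, which is why the <code>base64 --decode</code> step is needed. A quick round-trip to illustrate (the token string here is made up):</p>

```shell
# Encode then decode a fake token to show what the secret contains
TOKEN_B64=$(printf 'my-sa-token' | base64)
printf '%s' "$TOKEN_B64" | base64 --decode  # prints: my-sa-token
```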
<p>set the credentials in .kube/config</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">kubectl config set-credentials <service-account-name> --token=<span class="variable">$TOKEN</span></span><br><span class="line">kubectl config set-context context-name --user=<service-account-name> --cluster=kubernetes</span><br></pre></td></tr></table></figure>
<p>switch to that context</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">kubectl config use-context context-name</span><br></pre></td></tr></table></figure>
<h2 id="References"><a href="#References" class="headerlink" title="References"></a>References</h2><p><a href="https://containerd.io/docs/getting-started/">https://containerd.io/docs/getting-started/</a><br><a href="https://dev.to/stack-labs/how-to-switch-container-runtime-in-a-kubernetes-cluster-1628">https://dev.to/stack-labs/how-to-switch-container-runtime-in-a-kubernetes-cluster-1628</a><br><a href="https://mdnice.com/writing/3e3ec25bfa464049ae173c31a6d98cf8">https://mdnice.com/writing/3e3ec25bfa464049ae173c31a6d98cf8</a><br><a href="https://www.i4k.xyz/article/Scarborought/107247296">https://www.i4k.xyz/article/Scarborought/107247296</a></p>
<p>calico<br><a href="https://projectcalico.docs.tigera.io/archive/v3.22/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-kubernetes-api-datastore-50-nodes-or-less">https://projectcalico.docs.tigera.io/archive/v3.22/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-kubernetes-api-datastore-50-nodes-or-less</a><br><a href="https://www.cnblogs.com/ssgeek/p/13194687.html">https://www.cnblogs.com/ssgeek/p/13194687.html</a><br><a href="https://github.com/projectcalico/calico/issues/2166">https://github.com/projectcalico/calico/issues/2166</a><br><a href="https://github.com/projectcalico/calico/issues/2834">https://github.com/projectcalico/calico/issues/2834</a><br><a href="https://www.cnblogs.com/Christine-ting/p/12837250.html">https://www.cnblogs.com/Christine-ting/p/12837250.html</a><br><a href="https://www.teanote.pub/archives/300">https://www.teanote.pub/archives/300</a><br><a href="https://feisky.gitbooks.io/kubernetes/content/network/calico/calico.html">https://feisky.gitbooks.io/kubernetes/content/network/calico/calico.html</a></p>
<p>flannel<br><a href="https://www.modb.pro/db/149337">https://www.modb.pro/db/149337</a></p>
<p>cni<br><a href="https://ronaknathani.com/blog/2020/08/how-a-kubernetes-pod-gets-an-ip-address/">https://ronaknathani.com/blog/2020/08/how-a-kubernetes-pod-gets-an-ip-address/</a></p>
]]></content>
<categories>
<category>k8s</category>
</categories>
<tags>
<tag>k8s</tag>
<tag>container</tag>
</tags>
</entry>
<entry>
<title>kubeflow</title>
<url>/2022/02/06/kubeflow/</url>
<content><![CDATA[]]></content>
</entry>
<entry>
<title>helm templates</title>
<url>/2022/02/07/helmtemplates/</url>
<content><![CDATA[<h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p>This article covers some frequently used objects and functions in Go templates, as used by Helm:</p>
<ul>
<li>Internal Objects<ul>
<li>Release</li>
<li>Values</li>
<li>Chart</li>
<li>Files</li>
</ul>
</li>
<li>Functions<ul>
<li>default</li>
<li>operators: eq, ne, lt, gt, and, or</li>
</ul>
</li>
<li>Flow Control<ul>
<li>if/else</li>
<li>with</li>
<li>range</li>
</ul>
</li>
<li>variable</li>
<li>define and template</li>
<li>include</li>
<li>subchart values</li>
</ul>
<span id="more"></span>
<h2 id="Release"><a href="#Release" class="headerlink" title="Release"></a>Release</h2><ul>
<li><code>Release.Name</code>: name of the release, set at installation time, e.g. <code>helm install release-name chart</code></li>
<li><code>Release.Time</code></li>
<li><code>Release.Namespace</code>: namespace the chart is deployed into, set by the installation flag <code>--namespace</code></li>
<li><code>Release.Revision</code>: revision number, starting from 1</li>
<li><code>Release.IsUpgrade</code>: true if the current operation is an upgrade or rollback</li>
<li><code>Release.IsInstall</code>: true if the current operation is an install</li>
</ul>
<h2 id="Values"><a href="#Values" class="headerlink" title="Values"></a>Values</h2><p>values from <code>values.yaml</code> (or user-supplied overrides such as <code>--set</code>), e.g.</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">image:</span> <span class="string">gcr.io/app:latest</span></span><br></pre></td></tr></table></figure>
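<p>Such a value is then referenced in a template through the <code>.Values</code> prefix, e.g. (a minimal fragment, assuming the <code>image</code> key above):</p>

```yaml
# templates/deployment.yaml (fragment)
spec:
  containers:
    - name: app
      image: {{ .Values.image }}
```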
<h2 id="Chart"><a href="#Chart" class="headerlink" title="Chart"></a>Chart</h2><ul>
<li><code>Chart.Name</code>: name of the chart, defined in <code>Chart.yaml</code></li>
</ul>
<h2 id="Files"><a href="#Files" class="headerlink" title="Files"></a>Files</h2><p><code>Files</code> provides access to the non-special files in a chart (templates are excluded).</p>
<p>Say we have three files under <code>mychart/</code>:</p>
<p>config1.toml</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">message = Hello from config 1</span><br></pre></td></tr></table></figure>
<p>config2.toml</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">message = Hello from config 2</span><br></pre></td></tr></table></figure>
<p>config3.toml</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">message = Hello from config 3</span><br></pre></td></tr></table></figure>
<p>We could read the content of the files and feed to <code>ConfigMap</code> template</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">ConfigMap</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> {{ <span class="string">.Release.Name</span> }}<span class="string">-configmap</span></span><br><span class="line"><span class="attr">data:</span></span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">$files</span> <span class="string">:=</span> <span class="string">.Files</span> }}</span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">range</span> <span class="string">tuple</span> <span class="string">"config1.toml"</span> <span class="string">"config2.toml"</span> <span class="string">"config3.toml"</span> }}</span><br><span class="line"> {{ <span class="string">.</span> }}<span class="string">:</span> <span class="string">|-</span></span><br><span class="line"><span class="string"> {{ $files.Get . }}</span></span><br><span class="line"><span class="string"></span> {{<span class="bullet">-</span> <span class="string">end</span> }}</span><br></pre></td></tr></table></figure>
<p>Here we declare a variable <code>$files</code> to hold a reference to the <code>.Files</code> object, and use the <code>tuple</code> function to loop through the file list. The rendered template looks like:</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="comment"># Source: mychart/templates/configmap.yaml</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">ConfigMap</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> <span class="string">mychart-1576046462-configmap</span></span><br><span class="line"><span class="attr">data:</span></span><br><span class="line"> <span class="attr">config1.toml:</span> <span class="string">|-</span></span><br><span class="line"><span class="string"> message = Hello from config 1</span></span><br><span class="line"><span class="string"></span></span><br><span class="line"> <span class="attr">config2.toml:</span> <span class="string">|-</span></span><br><span class="line"><span class="string"> message = Hello from config 2</span></span><br><span class="line"><span class="string"></span></span><br><span class="line"> <span class="attr">config3.toml:</span> <span class="string">|-</span></span><br><span class="line"><span class="string"> message = Hello from config 3</span></span><br><span class="line"><span class="string"></span></span><br></pre></td></tr></table></figure>
<h3 id="Files-Glob"><a href="#Files-Glob" class="headerlink" title="Files.Glob"></a>Files.Glob</h3><p><code>Files.Glob</code> retrieves files matching a glob pattern.<br>For example, if we have the files</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">foo/</span><br><span class="line"> foo.txt</span><br><span class="line"> foo.yaml</span><br><span class="line">bar/</span><br><span class="line"> bar.txt</span><br><span class="line"> bar.yaml</span><br></pre></td></tr></table></figure>
<p>We could use <code>Files.Glob</code> to select all YAML files</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line">{{ <span class="string">range</span> <span class="string">$path</span> <span class="string">:=</span> <span class="string">.Files.Glob</span> <span class="string">"**.yaml"</span> }}</span><br><span class="line">{{ <span class="string">$path</span> }}<span class="string">:</span> <span class="string">|</span></span><br><span class="line">{{ <span class="string">.Files.Get</span> <span class="string">$path</span> }}</span><br><span class="line">{{ <span class="string">end</span> }}</span><br></pre></td></tr></table></figure>
<p>or all the files under a specified folder</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line">{{ <span class="string">range</span> <span class="string">$path</span>, <span class="string">$bytes</span> <span class="string">:=</span> <span class="string">.Files.Glob</span> <span class="string">"foo/*"</span> }}</span><br><span class="line">{{ <span class="string">$path</span> }}<span class="string">:</span> <span class="string">'<span class="template-variable">{{ b64enc $bytes }}</span>'</span></span><br><span class="line">{{ <span class="string">end</span> }}</span><br></pre></td></tr></table></figure>
<p>We could also generate ConfigMaps and Secrets directly from files</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">ConfigMap</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> <span class="string">conf</span></span><br><span class="line"><span class="attr">data:</span></span><br><span class="line">{{ <span class="string">(.Files.Glob</span> <span class="string">"foo/*"</span><span class="string">).AsConfig</span> <span class="string">|</span> <span class="string">indent</span> <span class="number">2</span> }}</span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Secret</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> <span class="string">very-secret</span></span><br><span class="line"><span class="attr">type:</span> <span class="string">Opaque</span></span><br><span class="line"><span class="attr">data:</span></span><br><span class="line">{{ <span class="string">(.Files.Glob</span> <span class="string">"bar/*"</span><span class="string">).AsSecrets</span> <span class="string">|</span> <span class="string">indent</span> <span class="number">2</span> }}</span><br></pre></td></tr></table></figure>
<h3 id="Files-lines"><a href="#Files-lines" class="headerlink" title="Files.Lines"></a>Files.Lines</h3><p>Use <code>Lines</code> to loop through a file’s content line by line:</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">data:</span></span><br><span class="line"> <span class="attr">some-file.txt:</span> {{ <span class="string">range</span> <span class="string">.Files.Lines</span> <span class="string">"foo/bar.txt"</span> }}</span><br><span class="line"> {{ <span class="string">.</span> }}{{ <span class="string">end</span> }}</span><br></pre></td></tr></table></figure>
<h2 id="default"><a href="#default" class="headerlink" title="default"></a>default</h2><p>The <code>default</code> function lets you define a fallback value for a variable, e.g., <code>favorite.food</code> renders as “RICE” if it is not set in <code>values.yaml</code>:</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">food:</span> {{ <span class="string">.Values.favorite.food</span> <span class="string">|</span> <span class="string">default</span> <span class="string">"rice"</span> <span class="string">|</span> <span class="string">upper</span> <span class="string">|</span> <span class="string">quote</span> }}</span><br></pre></td></tr></table></figure>
<h2 id="if-else"><a href="#if-else" class="headerlink" title="if/else"></a>if/else</h2><figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line">{{<span class="bullet">-</span> <span class="string">if</span> <span class="string">CONDITION</span> }}</span><br><span class="line"> <span class="comment"># Do something</span></span><br><span class="line">{{<span class="bullet">-</span> <span class="string">else</span> <span class="string">if</span> <span class="string">CONDITION</span> }}</span><br><span class="line"> <span class="comment"># Do something else</span></span><br><span class="line">{{<span class="bullet">-</span> <span class="string">else</span> }}</span><br><span class="line"> <span class="comment"># Default case</span></span><br><span class="line">{{<span class="bullet">-</span> <span class="string">end</span> }}</span><br></pre></td></tr></table></figure>
<p>A condition evaluates to false if it is:</p>
<ul>
<li>a boolean <code>false</code>, e.g. the result of <code>eq .Values.favorite.drink "coffee"</code> or <code>and () ()</code></li>
<li>the number 0</li>
<li>an empty string “”</li>
<li>null (nil)</li>
<li>an empty collection (map, slice, tuple, dict, array)</li>
</ul>
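<p>As a concrete sketch (assuming the <code>favorite.drink</code> value used later in this note; the <code>mug</code> key is purely illustrative), a conditional that only emits a key when the drink is coffee could look like:</p>

```yaml
data:
  myvalue: "Hello World"
  drink: {{ .Values.favorite.drink | default "tea" | quote }}
  {{- if eq .Values.favorite.drink "coffee" }}
  mug: "true"
  {{- end }}
```

<p>With <code>drink: coffee</code> in <code>values.yaml</code> the <code>mug: "true"</code> line is rendered; otherwise it is omitted.</p>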
<h2 id="with"><a href="#with" class="headerlink" title="with"></a>with</h2><p><code>with</code> restricts the current scope: inside the block below we can write <code>.drink</code> instead of <code>.Values.favorite.drink</code>.</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">ConfigMap</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> {{ <span class="string">.Release.Name</span> }}<span class="string">-configmap</span></span><br><span class="line"><span class="attr">data:</span></span><br><span class="line"> <span class="attr">myvalue:</span> <span class="string">"Hello World"</span></span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">with</span> <span class="string">.Values.favorite</span> }}</span><br><span class="line"> <span class="attr">drink:</span> {{ <span class="string">.drink</span> <span class="string">|</span> <span class="string">default</span> <span class="string">"tea"</span> <span class="string">|</span> <span class="string">quote</span> }}</span><br><span class="line"> <span class="attr">food:</span> {{ <span class="string">.food</span> <span class="string">|</span> <span class="string">upper</span> <span class="string">|</span> <span class="string">quote</span> }}</span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">end</span> }}</span><br></pre></td></tr></table></figure>
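<p>One caveat: inside a <code>with</code> block, <code>.</code> no longer refers to the top-level scope, so fields like <code>.Release.Name</code> are not directly reachable there. The built-in <code>$</code> always points at the root scope; a minimal sketch:</p>

```yaml
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  # .Release.Name would fail to resolve here; $ still refers to the root scope
  release: {{ $.Release.Name }}
  {{- end }}
```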
<h2 id="range"><a href="#range" class="headerlink" title="range"></a>range</h2><p><code>range</code> loops through a list, dict, etc. If we have the following <code>values.yaml</code>:</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">favorite:</span></span><br><span class="line"> <span class="attr">drink:</span> <span class="string">coffee</span></span><br><span class="line"> <span class="attr">food:</span> <span class="string">pizza</span></span><br><span class="line"><span class="attr">pizzaToppings:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">mushrooms</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">cheese</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">peppers</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">onions</span></span><br></pre></td></tr></table></figure>
<p>we could use <code>range</code> to emit all the values of the <code>pizzaToppings</code> list:</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">ConfigMap</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> {{ <span class="string">.Release.Name</span> }}<span class="string">-configmap</span></span><br><span class="line"><span class="attr">data:</span></span><br><span class="line"> <span class="attr">myvalue:</span> <span class="string">"Hello World"</span></span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">with</span> <span class="string">.Values.favorite</span> }}</span><br><span class="line"> <span class="attr">drink:</span> {{ <span class="string">.drink</span> <span class="string">|</span> <span class="string">default</span> <span class="string">"tea"</span> <span class="string">|</span> <span class="string">quote</span> }}</span><br><span class="line"> <span class="attr">food:</span> {{ <span class="string">.food</span> <span class="string">|</span> <span class="string">upper</span> <span class="string">|</span> <span class="string">quote</span> }}</span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">end</span> }}</span><br><span class="line"> <span class="attr">toppings:</span> <span class="string">|-</span></span><br><span class="line"><span class="string"> {{- range .Values.pizzaToppings }}</span></span><br><span class="line"><span class="string"> - {{ . | title | quote }}</span></span><br><span class="line"><span class="string"> {{- end }}</span></span><br></pre></td></tr></table></figure>
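<p>For reference, with the values above the <code>toppings</code> block should render roughly as follows (an output sketch, not captured from a real run; <code>title</code> capitalizes each word and <code>quote</code> wraps it in double quotes):</p>

```yaml
  toppings: |-
    - "Mushrooms"
    - "Cheese"
    - "Peppers"
    - "Onions"
```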
<p>Note: YAML multiline strings</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">string:</span> <span class="string">|</span></span><br><span class="line"><span class="string"> I am a coder.</span></span><br><span class="line"><span class="string"> My blog is didispace.com.</span></span><br><span class="line"><span class="string"></span></span><br><span class="line"><span class="attr">string:</span> <span class="string">|+</span></span><br><span class="line"><span class="string"> I am a coder.</span></span><br><span class="line"><span class="string"> My blog is didispace.com.</span></span><br><span class="line"><span class="string"></span></span><br><span class="line"><span class="attr">string:</span> <span class="string">|-</span></span><br><span class="line"><span class="string"> I am a coder.</span></span><br><span class="line"><span class="string"> My blog is didispace.com.</span></span><br><span class="line"><span class="string"></span></span><br></pre></td></tr></table></figure>
<ul>
<li><code>|</code>: keeps line breaks; the value ends with a single trailing newline (clip)</li>
<li><code>|+</code>: keeps line breaks and all trailing newlines (keep)</li>
<li><code>|-</code>: keeps line breaks but strips the trailing newline (strip)</li>
</ul>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">string:</span> <span class="string">></span></span><br><span class="line"><span class="string"> I am a coder.</span></span><br><span class="line"><span class="string"> My blog is didispace.com.</span></span><br><span class="line"><span class="string"></span></span><br><span class="line"><span class="attr">string:</span> <span class="string">>+</span></span><br><span class="line"><span class="string"> I am a coder.</span></span><br><span class="line"><span class="string"> My blog is didispace.com.</span></span><br><span class="line"><span class="string"></span></span><br><span class="line"><span class="attr">string:</span> <span class="string">>-</span></span><br><span class="line"><span class="string"> I am a coder.</span></span><br><span class="line"><span class="string"> My blog is didispace.com.</span></span><br><span class="line"><span class="string"></span></span><br></pre></td></tr></table></figure>
<ul>
<li><code>></code>: folds line breaks into spaces; the value ends with a single trailing newline (clip)</li>
<li><code>>+</code>: folds line breaks into spaces; keeps all trailing newlines (keep)</li>
<li><code>>-</code>: folds line breaks into spaces; strips the trailing newline (strip)</li>
</ul>
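<p>To make the difference concrete, here is roughly what the styles above parse to, written as equivalent double-quoted YAML scalars (assuming the two-line example text):</p>

```yaml
literal: "I am a coder.\nMy blog is didispace.com.\n"          # what | produces
literal_stripped: "I am a coder.\nMy blog is didispace.com."   # what |- produces
folded: "I am a coder. My blog is didispace.com.\n"            # what > produces
```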
<h2 id="variable"><a href="#variable" class="headerlink" title="variable"></a>variable</h2><p>Define a variable with <code>$relname := .Release.Name</code>:</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">ConfigMap</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> {{ <span class="string">.Release.Name</span> }}<span class="string">-configmap</span></span><br><span class="line"><span class="attr">data:</span></span><br><span class="line"> <span class="attr">myvalue:</span> <span class="string">"Hello World"</span></span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">$relname</span> <span class="string">:=</span> <span class="string">.Release.Name</span> <span class="string">-</span>}}</span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">with</span> <span class="string">.Values.favorite</span> }}</span><br><span class="line"> <span class="attr">drink:</span> {{ <span class="string">.drink</span> <span class="string">|</span> <span class="string">default</span> <span class="string">"tea"</span> <span class="string">|</span> <span class="string">quote</span> }}</span><br><span class="line"> <span class="attr">food:</span> {{ <span class="string">.food</span> <span class="string">|</span> <span class="string">upper</span> <span class="string">|</span> <span class="string">quote</span> }}</span><br><span class="line"> <span class="attr">release:</span> {{ <span class="string">$relname</span> }}</span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">end</span> }}</span><br></pre></td></tr></table></figure>
<p>Variables in a <code>range</code> over a list:</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">toppings:</span> <span class="string">|-</span></span><br><span class="line"><span class="string"> {{- range $index, $topping := .Values.pizzaToppings }}</span></span><br><span class="line"><span class="string"> {{ $index }}: {{ $topping }}</span></span><br><span class="line"><span class="string"> {{- end }}</span></span><br></pre></td></tr></table></figure>
<p>Variables in a <code>range</code> over a dict:</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">ConfigMap</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> {{ <span class="string">.Release.Name</span> }}<span class="string">-configmap</span></span><br><span class="line"><span class="attr">data:</span></span><br><span class="line"> <span class="attr">myvalue:</span> <span class="string">"Hello World"</span></span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">range</span> <span class="string">$key</span>, <span class="string">$val</span> <span class="string">:=</span> <span class="string">.Values.favorite</span> }}</span><br><span class="line"> {{ <span class="string">$key</span> }}<span class="string">:</span> {{ <span class="string">$val</span> <span class="string">|</span> <span class="string">quote</span> }}</span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">end</span> }}</span><br></pre></td></tr></table></figure>
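<p>With the <code>favorite</code> map shown earlier, this loop should render roughly as below (Go templates iterate map keys in sorted order, so the output is deterministic):</p>

```yaml
data:
  myvalue: "Hello World"
  drink: "coffee"
  food: "pizza"
```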
<p>Access the global (root) scope with <code>$</code>:</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line">{{<span class="bullet">-</span> <span class="string">range</span> <span class="string">.Values.tlsSecrets</span> }}</span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Secret</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> {{ <span class="string">.name</span> }}</span><br><span class="line"> <span class="attr">labels:</span></span><br><span class="line"> <span class="attr">app.kubernetes.io/name:</span> {{ <span class="string">template</span> <span class="string">"fullname"</span> <span class="string">$</span> }}</span><br><span class="line"> <span class="comment"># can't use `.Chart.Name` in range, but `$.Chart.Name`</span></span><br><span class="line"> <span class="attr">helm.sh/chart:</span> <span class="string">"<span class="template-variable">{{ $.Chart.Name }}</span>-<span class="template-variable">{{ $.Chart.Version }}</span>"</span></span><br><span class="line"> <span class="attr">app.kubernetes.io/instance:</span> <span class="string">"<span class="template-variable">{{ $.Release.Name }}</span>"</span></span><br><span class="line"> <span class="attr">app.kubernetes.io/version:</span> <span class="string">"<span class="template-variable">{{ $.Chart.AppVersion }}</span>"</span></span><br><span class="line"> <span class="attr">app.kubernetes.io/managed-by:</span> <span class="string">"<span class="template-variable">{{ $.Release.Service }}</span>"</span></span><br><span class="line"><span class="attr">type:</span> <span class="string">kubernetes.io/tls</span></span><br><span class="line"><span class="attr">data:</span></span><br><span class="line"> <span class="attr">tls.crt:</span> {{ <span class="string">.certificate</span> }}</span><br><span class="line"> <span 
class="attr">tls.key:</span> {{ <span class="string">.key</span> }}</span><br><span class="line"><span class="meta">---</span></span><br><span class="line">{{<span class="bullet">-</span> <span class="string">end</span> }}</span><br></pre></td></tr></table></figure>
<h2 id="define-and-template"><a href="#define-and-template" class="headerlink" title="define and template"></a>define and template</h2><p>Define a named template with <code>define</code>, then render it with <code>template</code>:</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line">{{<span class="string">/*</span> <span class="string">this</span> <span class="string">is</span> <span class="string">label</span> <span class="string">template</span> <span class="string">*/</span>}}</span><br><span class="line">{{<span class="bullet">-</span> <span class="string">define</span> <span class="string">"mychart.labels"</span> }}</span><br><span class="line"> <span class="attr">labels:</span></span><br><span class="line"> <span class="attr">generator:</span> <span class="string">helm</span></span><br><span class="line"> <span class="attr">date:</span> {{ <span class="string">now</span> <span class="string">|</span> <span class="string">htmlDate</span> }}</span><br><span class="line"> <span class="attr">chart:</span> {{ <span class="string">.Chart.Name</span> }}</span><br><span class="line">{{<span class="bullet">-</span> <span class="string">end</span> }}</span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">ConfigMap</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> {{ <span class="string">.Release.Name</span> }}<span class="string">-configmap</span></span><br><span class="line"> <span class="comment"># note: "." passes the current (top-level) scope, so the template can resolve .Chart.Name</span></span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">template</span> <span class="string">"mychart.labels"</span> <span class="string">.</span> }}</span><br><span class="line"><span class="attr">data:</span></span><br><span class="line"> <span class="attr">myvalue:</span> <span class="string">"Hello World"</span></span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">range</span> <span class="string">$key</span>, <span class="string">$val</span> <span class="string">:=</span> <span class="string">.Values.favorite</span> }}</span><br><span class="line"> {{ <span class="string">$key</span> }}<span class="string">:</span> {{ <span class="string">$val</span> <span class="string">|</span> <span class="string">quote</span> }}</span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">end</span> }}</span><br></pre></td></tr></table></figure>
<h2 id="include"><a href="#include" class="headerlink" title="include"></a>include</h2><p><code>include</code> is similar to <code>template</code>, but it returns the rendered output as a string, so we can pipe it into other functions such as <code>indent</code>:</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line">{{<span class="bullet">-</span> <span class="string">define</span> <span class="string">"mychart.app"</span> <span class="string">-</span>}}</span><br><span class="line"><span class="attr">app_name:</span> {{ <span class="string">.Chart.Name</span> }}</span><br><span class="line"><span class="attr">app_version:</span> <span class="string">"<span class="template-variable">{{ .Chart.Version }}</span>"</span></span><br><span class="line">{{<span class="bullet">-</span> <span class="string">end</span> <span class="string">-</span>}}</span><br><span class="line"></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">ConfigMap</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"> <span class="attr">name:</span> {{ <span class="string">.Release.Name</span> }}<span class="string">-configmap</span></span><br><span class="line"> <span class="attr">labels:</span></span><br><span class="line"><span class="comment"># use the indent pipe function</span></span><br><span class="line">{{ <span class="string">include</span> <span class="string">"mychart.app"</span> <span class="string">.</span> <span class="string">|</span> <span class="string">indent</span> <span class="number">4</span> }}</span><br><span class="line"><span class="attr">data:</span></span><br><span class="line"> <span class="attr">myvalue:</span> <span class="string">"Hello World"</span></span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">range</span> <span class="string">$key</span>, <span class="string">$val</span> <span class="string">:=</span> <span class="string">.Values.favorite</span> }}</span><br><span class="line"> {{ <span class="string">$key</span> }}<span class="string">:</span> {{ <span class="string">$val</span> <span class="string">|</span> <span class="string">quote</span> }}</span><br><span class="line"> {{<span class="bullet">-</span> <span class="string">end</span> }}</span><br><span class="line"><span class="comment"># use the indent pipe function </span></span><br><span class="line">{{ <span class="string">include</span> <span class="string">"mychart.app"</span> <span class="string">.</span> <span class="string">|</span> <span class="string">indent</span> <span class="number">2</span> }}</span><br></pre></td></tr></table></figure>
<h2 id="NOTES-txt"><a href="#NOTES-txt" class="headerlink" title="NOTES.txt"></a>NOTES.txt</h2><p>Here you can put information that is shown to the user after the chart is installed, e.g.:</p>
<figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">Thank you for installing {{ .Chart.Name }}.</span><br><span class="line"></span><br><span class="line">Your release is named {{ .Release.Name }}.</span><br><span class="line"></span><br><span class="line">To learn more about the release, try:</span><br><span class="line"></span><br><span class="line"> $ helm status {{ .Release.Name }}</span><br><span class="line"> $ helm get {{ .Release.Name }}</span><br></pre></td></tr></table></figure>
<p>Normally, named templates are placed in <code>templates/_helpers.tpl</code>.</p>
<h2 id="subchart-values"><a href="#subchart-values" class="headerlink" title="subchart values"></a>subchart values</h2><p>We can pass values from the parent chart’s <code>values.yaml</code> to a child chart:</p>
<figure class="highlight yaml"><table><tr><td class="code"><pre><span class="line"><span class="attr">favorite:</span></span><br><span class="line"> <span class="attr">drink:</span> <span class="string">coffee</span></span><br><span class="line"> <span class="attr">food:</span> <span class="string">pizza</span></span><br><span class="line"><span class="attr">pizzaToppings:</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">mushrooms</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">cheese</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">peppers</span></span><br><span class="line"> <span class="bullet">-</span> <span class="string">onions</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># subchart name</span></span><br><span class="line"><span class="attr">mysubchart:</span></span><br><span class="line"> <span class="attr">dessert:</span> <span class="string">ice</span> <span class="string">cream</span></span><br><span class="line"></span><br><span class="line"><span class="attr">global:</span></span><br><span class="line"> <span class="attr">salad:</span> <span class="string">caesar</span></span><br></pre></td></tr></table></figure>
<p><code>.Values.global.salad</code> can be used in both the parent chart’s templates and the child chart’s templates.</p>
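<p>For illustration (the manifest below is a hypothetical subchart template), a template inside <code>mysubchart/templates/</code> sees the values scoped under the <code>mysubchart:</code> key at the top of its own <code>.Values</code>, plus the shared globals:</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-subchart-configmap
data:
  # the parent's mysubchart.dessert arrives as .Values.dessert here
  dessert: {{ .Values.dessert }}
  # globals are visible under .Values.global in every chart
  salad: {{ .Values.global.salad }}
```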
<h2 id="References"><a href="#References" class="headerlink" title="References"></a>References</h2><ol>
<li><a href="https://www.qikqiak.com/k8strain/helm/templates/function/">https://www.qikqiak.com/k8strain/helm/templates/function/</a></li>
<li><a href="https://www.codenong.com/j5ef5bee8e51d45346f783a30/">https://www.codenong.com/j5ef5bee8e51d45346f783a30/</a></li>
</ol>
]]></content>
<categories>
<category>helm</category>
</categories>
<tags>
<tag>helm</tag>
<tag>k8s</tag>
</tags>
</entry>
<entry>
<title>Ceph - crushmap and crushrule</title>
<url>/2022/11/20/ceph-crushrule/</url>
<content><![CDATA[<h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p>This note introduces the CRUSH rule feature in Ceph storage.</p>
<p>The CRUSH algorithm determines how to store and retrieve data by computing storage locations. CRUSH uses a map of your cluster (the CRUSH map) to pseudo-randomly map data to OSDs, distributing it across the cluster according to configured replication policy and failure domain. This ensures that replicas or erasure code shards are distributed across hosts and that a single host or other failure will not affect availability.</p>
<span id="more"></span>
<h2 id="Ceph-crush-rule-and-placement"><a href="#Ceph-crush-rule-and-placement" class="headerlink" title="Ceph crush rule and placement"></a>Ceph crush rule and placement</h2><h3 id="Define-a-crush-rule-and-apply-it-to-a-pool"><a href="#Define-a-crush-rule-and-apply-it-to-a-pool" class="headerlink" title="Define a crush rule and apply it to a pool"></a>Define a crush rule and apply it to a pool</h3><ul>
<li><p>modify crushmap</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># get crushmap</span></span><br><span class="line">$ ceph osd getcrushmap -o oldcrushmap.bin</span><br><span class="line"><span class="comment"># decompile crushmap</span></span><br><span class="line">$ crushtool -d oldcrushmap.bin -o decompiled-oldcrushmap.txt</span><br><span class="line"><span class="comment"># modify</span></span><br><span class="line"><span class="comment"># change class device, host, root, crush rule</span></span><br><span class="line"><span class="comment"># device classes</span></span><br><span class="line"><span class="comment"># $ ceph osd crush set-device-class <class> <osd-name> [...]</span></span><br><span class="line"><span class="comment"># $ ceph osd crush rm-device-class <osd-name> [...]</span></span><br><span class="line"><span class="comment"># create a rule (replicated rule or EC rule)</span></span><br><span class="line"><span class="comment"># $ ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class></span></span><br><span class="line"><span class="comment"># recompile </span></span><br><span class="line">$ crushtool -c decompiled-oldcrushmap.txt -o compiled-newcrushmap.bin</span><br><span class="line"><span class="comment"># set</span></span><br><span class="line">$ ceph osd setcrushmap -i compiled-newcrushmap.bin</span><br><span class="line"></span><br><span class="line"><span class="comment"># modify via CLI</span></span><br><span class="line">ceph osd crush <span class="built_in">set</span> {name} {weight} root={root} [{bucket-type}={bucket-name} ...]</span><br><span class="line">ceph osd crush <span class="built_in">set</span> osd.0 1.0 root=default datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1</span><br></pre></td></tr></table></figure></li>
<li><p>an example of crushmap</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># begin crush map</span></span><br><span class="line">tunable choose_local_tries 0</span><br><span class="line">tunable choose_local_fallback_tries 0</span><br><span class="line">tunable choose_total_tries 50</span><br><span class="line">tunable chooseleaf_descend_once 1</span><br><span class="line">tunable chooseleaf_vary_r 1</span><br><span class="line">tunable chooseleaf_stable 1</span><br><span class="line">tunable straw_calc_version 1</span><br><span class="line">tunable allowed_bucket_algs 54</span><br><span class="line"></span><br><span class="line"><span class="comment"># devices</span></span><br><span class="line">device 0 osd.0 class ssd</span><br><span class="line">device 1 osd.1 class ssd</span><br><span class="line">device 2 osd.2 class ssd</span><br><span class="line">device 3 osd.3 class hdd</span><br><span class="line">device 4 osd.4 class hdd</span><br><span class="line">device 5 osd.5 class hdd</span><br><span class="line">device 6 osd.6 class hdd</span><br><span class="line">device 7 osd.7 class hdd</span><br><span class="line">device 8 osd.8 class hdd</span><br><span class="line">device 9 osd.9 class hdd</span><br><span class="line">device 10 osd.10 class hdd</span><br><span class="line">device 11 osd.11 class hdd</span><br><span class="line"></span><br><span class="line"><span class="comment"># types</span></span><br><span class="line"><span class="built_in">type</span> 0 osd</span><br><span class="line"><span class="built_in">type</span> 1 host</span><br><span class="line"><span class="built_in">type</span> 2 chassis</span><br><span class="line"><span class="built_in">type</span> 3 rack</span><br><span class="line"><span class="built_in">type</span> 4 row</span><br><span class="line"><span class="built_in">type</span> 5 pdu</span><br><span class="line"><span class="built_in">type</span> 6 pod</span><br><span class="line"><span 
class="built_in">type</span> 7 room</span><br><span class="line"><span class="built_in">type</span> 8 datacenter</span><br><span class="line"><span class="built_in">type</span> 9 zone</span><br><span class="line"><span class="built_in">type</span> 10 region</span><br><span class="line"><span class="built_in">type</span> 11 root</span><br><span class="line"></span><br><span class="line"><span class="comment"># buckets</span></span><br><span class="line">host cpu02 {</span><br><span class="line"> id -4 <span class="comment"># do not change unnecessarily</span></span><br><span class="line"> id -5 class ssd <span class="comment"># do not change unnecessarily</span></span><br><span class="line"> id -6 class hdd <span class="comment"># do not change unnecessarily</span></span><br><span class="line"> <span class="comment"># weight 56.100</span></span><br><span class="line"> alg straw2</span><br><span class="line"> <span class="built_in">hash</span> 0 <span class="comment"># rjenkins1</span></span><br><span class="line"> item osd.3 weight 16.371</span><br><span class="line"> item osd.9 weight 16.371</span><br><span class="line"> item osd.0 weight 6.986</span><br><span class="line"> item osd.6 weight 16.371</span><br><span class="line">}</span><br><span class="line">host cpu03 {</span><br><span class="line"> id -7 <span class="comment"># do not change unnecessarily</span></span><br><span class="line"> id -8 class ssd <span class="comment"># do not change unnecessarily</span></span><br><span class="line"> id -9 class hdd <span class="comment"># do not change unnecessarily</span></span><br><span class="line"> <span class="comment"># weight 56.100</span></span><br><span class="line"> alg straw2</span><br><span class="line"> <span class="built_in">hash</span> 0 <span class="comment"># rjenkins1</span></span><br><span class="line"> item osd.7 weight 16.371</span><br><span class="line"> item osd.1 weight 6.986</span><br><span class="line"> item osd.10 weight 
16.371</span><br><span class="line"> item osd.4 weight 16.371</span><br><span class="line">}</span><br><span class="line">host cpu01 {</span><br><span class="line"> id -10 <span class="comment"># do not change unnecessarily</span></span><br><span class="line"> id -11 class ssd <span class="comment"># do not change unnecessarily</span></span><br><span class="line"> id -12 class hdd <span class="comment"># do not change unnecessarily</span></span><br><span class="line"> <span class="comment"># weight 56.100</span></span><br><span class="line"> alg straw2</span><br><span class="line"> <span class="built_in">hash</span> 0 <span class="comment"># rjenkins1</span></span><br><span class="line"> item osd.11 weight 16.371</span><br><span class="line"> item osd.2 weight 6.986</span><br><span class="line"> item osd.8 weight 16.371</span><br><span class="line"> item osd.5 weight 16.371</span><br><span class="line">}</span><br><span class="line">root default {</span><br><span class="line"> id -1 <span class="comment"># do not change unnecessarily</span></span><br><span class="line"> id -2 class ssd <span class="comment"># do not change unnecessarily</span></span><br><span class="line"> id -3 class hdd <span class="comment"># do not change unnecessarily</span></span><br><span class="line"> <span class="comment"># weight 168.299</span></span><br><span class="line"> alg straw2</span><br><span class="line"> <span class="built_in">hash</span> 0 <span class="comment"># rjenkins1</span></span><br><span class="line"> item cpu02 weight 56.100</span><br><span class="line"> item cpu03 weight 56.100</span><br><span class="line"> item cpu01 weight 56.100</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="comment"># rules</span></span><br><span class="line">rule replicated_rule {</span><br><span class="line"> id 0</span><br><span class="line"> <span class="built_in">type</span> replicated</span><br><span class="line"> min_size 
1</span><br><span class="line"> max_size 10</span><br><span class="line"> step take default</span><br><span class="line"> step chooseleaf firstn 0 <span class="built_in">type</span> host</span><br><span class="line"> step emit</span><br><span class="line">}</span><br><span class="line"></span><br></pre></td></tr></table></figure></li>
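<li><p>The <code>alg straw2</code> lines above mean CRUSH computes a hash-derived "straw" for every item in a bucket and picks the longest one, so each item is chosen with probability proportional to its weight. A minimal Python sketch of the idea (an illustration only, not the real CRUSH implementation; the hash function and item names here are arbitrary):</p>

```python
import hashlib
import math
from collections import Counter

def straw2_choose(items, key):
    """Pick one (name, weight) item: the longest hash-derived straw wins."""
    best, best_draw = None, None
    for name, weight in items:
        digest = hashlib.sha256(f"{key}:{name}".encode()).digest()
        h = int.from_bytes(digest[:8], "big")
        u = (h + 1) / 2.0 ** 64        # deterministic "uniform" value in (0, 1]
        draw = math.log(u) / weight    # straw2 draw: ln(u)/w, higher weight -> longer straw
        if best_draw is None or draw > best_draw:
            best, best_draw = name, draw
    return best

# weights mirror host cpu02 in the map above; osd.0 should win least often
items = [("osd.3", 16.371), ("osd.9", 16.371), ("osd.0", 6.986), ("osd.6", 16.371)]
counts = Counter(straw2_choose(items, pg) for pg in range(10000))
print(counts)
```

<p>Because each draw depends only on the (key, item) pair, changing one item's weight only remaps the keys that item wins or loses; this stability is the main advantage of straw2 over the original straw algorithm. To inspect a real map in the text form shown above, use <code>ceph osd getcrushmap -o map.bin</code> followed by <code>crushtool -d map.bin -o map.txt</code>.</p></li>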
<li><p>set crush rule for pool</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># create a replicated rule named demo_rule: root default, failure domain host, ssd devices only</span></span><br><span class="line">$ ceph osd crush rule create-replicated demo_rule default host ssd</span><br><span class="line"><span class="comment"># check the current crush rule of the pool ceph-demo</span></span><br><span class="line">$ ceph osd pool get ceph-demo crush_rule</span><br><span class="line"><span class="comment"># list all crush rules</span></span><br><span class="line">$ ceph osd crush rule ls</span><br><span class="line"><span class="comment"># switch the pool to the new rule</span></span><br><span class="line">$ ceph osd pool <span class="built_in">set</span> ceph-demo crush_rule demo_rule</span><br></pre></td></tr></table></figure></li>
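<li><p>A quick way to confirm the change took effect, assuming the rule and pool names used above:</p>

```shell
# show the compiled steps of the rule (take default, device class ssd, chooseleaf host)
ceph osd crush rule dump demo_rule
# the pool should now reference the new rule
ceph osd pool get ceph-demo crush_rule
# watch the resulting data migration
ceph -s
```

</li>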
</ul>
<h3 id="Crush-rule-for-object-storage"><a href="#Crush-rule-for-object-storage" class="headerlink" title="Crush rule for object storage"></a>Crush rule for object storage</h3><ul>
<li>placement target and placement pools<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># based on the previous setup, we should have the following osd tree (note the two separate root buckets)</span></span><br><span class="line">$ ceph osd tree</span><br><span class="line"></span><br><span class="line">ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF</span><br><span class="line">-9 3.00000 root ssd</span><br><span class="line">-10 1.00000 host OSD04</span><br><span class="line">3 ssd 1.00000 osd.3 up 1.00000 1.00000</span><br><span class="line">-11 1.00000 host OSD05</span><br><span class="line">4 ssd 1.00000 osd.4 up 1.00000 1.00000</span><br><span class="line">-12 1.00000 host OSD06</span><br><span class="line">5 ssd 1.00000 osd.5 up 1.00000 1.00000</span><br><span class="line">-1 3.00000 root default</span><br><span class="line">-3 1.00000 host OSD01</span><br><span class="line">0 hdd 1.00000 osd.0 up 1.00000 1.00000</span><br><span class="line">-4 1.00000 host OSD02</span><br><span class="line">1 hdd 1.00000 osd.1 up 1.00000 1.00000</span><br><span class="line">-5 1.00000 host OSD03</span><br><span class="line">2 hdd 1.00000 osd.2 up 1.00000 1.00000</span><br><span class="line"></span><br><span class="line"><span class="comment"># create a placement target in the zonegroup</span></span><br><span class="line">$ radosgw-admin zonegroup placement add --rgw-zonegroup ge --placement-id ssd-placement --tags ssd</span><br><span class="line"></span><br><span class="line"><span class="comment"># map ssd-placement to its placement pools in the zone (index, data, and non-ec)</span></span><br><span class="line">$ radosgw-admin zone placement add --rgw-zone room1 --placement-id ssd-placement --index-pool room1.rgw.buckets.ssd.index --data-pool room1.rgw.buckets.ssd.data --data-extra-pool room1.rgw.buckets.ssd.non-ec</span><br><span class="line"></span><br><span class="line"><span class="comment"># create placement pools for ssd-placement</span></span><br><span 
class="line">$ ceph osd pool create room1.rgw.buckets.ssd.index 8 8</span><br><span class="line">$ ceph osd pool create room1.rgw.buckets.ssd.data 8 8</span><br><span class="line">$ ceph osd pool create room1.rgw.buckets.ssd.non-ec 8 8</span><br><span class="line"></span><br><span class="line"><span class="comment"># apply the crush rule to the newly created pools for ssd-placement</span></span><br><span class="line">$ ceph osd pool <span class="built_in">set</span> room1.rgw.buckets.ssd.index crush_rule demo_rule</span><br><span class="line">$ ceph osd pool <span class="built_in">set</span> room1.rgw.buckets.ssd.data crush_rule demo_rule</span><br><span class="line">$ ceph osd pool <span class="built_in">set</span> room1.rgw.buckets.ssd.non-ec crush_rule demo_rule</span><br><span class="line"></span><br><span class="line"><span class="comment"># commit the updated period so the changes take effect</span></span><br><span class="line">$ radosgw-admin period update --commit</span><br><span class="line"></span><br><span class="line"><span class="comment"># allow the user to create buckets on ssd-placement</span></span><br><span class="line">$ radosgw-admin metadata get user:yuanjing > yuanjing.json</span><br><span class="line"><span class="comment"># edit yuanjing.json to add "ssd" to "placement_tags", then write it back</span></span><br><span class="line">$ radosgw-admin metadata put user:yuanjing &lt; yuanjing.json</span><br><span class="line"></span><br><span class="line"><span class="comment"># add a tag to default-placement as well (it is untagged by default, so any user can create buckets on it)</span></span><br><span class="line">$ radosgw-admin zonegroup placement modify --rgw-zonegroup ge --placement-id default-placement --tags default-placement</span><br></pre></td></tr></table></figure></li>
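<li><p>The placement configuration can be inspected after committing; a quick check assuming the zonegroup, zone, and user names above:</p>

```shell
# placement_targets should now contain ssd-placement with the "ssd" tag
radosgw-admin zonegroup get --rgw-zonegroup ge
# placement_pools should map ssd-placement to the three room1.rgw.buckets.ssd.* pools
radosgw-admin zone get --rgw-zone room1
# the user's placement_tags should include "ssd"
radosgw-admin metadata get user:yuanjing
```

</li>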
<li><p>test: create a bucket on the new placement target</p><figure class="highlight python"><table><tr><td class="code"><pre><span class="line">import boto3</span><br><span class="line"></span><br><span class="line">bucket = <span class="string">"ssd-bucket"</span></span><br><span class="line"><span class="comment"># LocationConstraint format is "zonegroup-api-name:placement-id"</span></span><br><span class="line"><span class="comment">#location = "zonegroup:default-placement"</span></span><br><span class="line">location = <span class="string">"ge:ssd-placement"</span></span><br><span class="line"></span><br><span class="line">s3 = boto3.client(</span><br><span class="line"><span class="string">'s3'</span>,</span><br><span class="line">endpoint_url=<span class="string">"xxx"</span>,</span><br><span class="line">aws_access_key_id=<span class="string">"xxx"</span>,</span><br><span class="line">aws_secret_access_key=<span class="string">"xxx"</span>,</span><br><span class="line">)</span><br><span class="line"></span><br><span class="line">s3.create_bucket(</span><br><span class="line">Bucket=bucket,</span><br><span class="line">CreateBucketConfiguration={<span class="string">'LocationConstraint'</span>:location},</span><br><span class="line">)</span><br></pre></td></tr></table></figure></li>
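<li><p>After the bucket is created, where it actually landed can be verified from both sides; a sketch reusing the placeholder endpoint and credentials from the script above (this needs a live RGW endpoint to run):</p>

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="xxx",              # placeholder, same as above
    aws_access_key_id="xxx",
    aws_secret_access_key="xxx",
)

# RGW returns the LocationConstraint the bucket was created with
resp = s3.get_bucket_location(Bucket="ssd-bucket")
print(resp["LocationConstraint"])
```

<p>On the admin side, <code>radosgw-admin bucket stats --bucket ssd-bucket</code> shows the resolved <code>placement_rule</code>, and objects written to the bucket should land in <code>room1.rgw.buckets.ssd.data</code>.</p></li>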
</ul>
<h2 id="References"><a href="#References" class="headerlink" title="References"></a>References</h2><ol>
<li><a href="https://docs.ceph.com/en/quincy/rados/operations/crush-map/">https://docs.ceph.com/en/quincy/rados/operations/crush-map/</a></li>
</ol>
]]></content>
<categories>
<category>ceph</category>
</categories>
<tags>
<tag>storage</tag>
<tag>cloud</tag>
<tag>ceph</tag>
<tag>crushrule</tag>
</tags>
</entry>
</search>