26. Configuration Management and Automation Tools
Configuration management tools:
Automate the provisioning and deployment of applications and infrastructure
No knowledge of programming required
Leverages software development practices for deployments:
Version Control
Design Patterns
Testing
Common tools: Puppet, Ansible, Chef, and SaltStack
When there is a defined manual workflow for a set of tasks, proper tools should be used to automate it. It makes no sense to spend an hour performing a change that could take just a few minutes with a properly engineered tool. This is where open source tools such as Puppet, Chef, Ansible, and SaltStack can dramatically reduce the number of manual interactions with the network
These tools are often referred to as DevOps tools. They are more specifically configuration management and automation tools that happen to be used by those organizations that have implemented some form of DevOps practices
Puppet:
was created in 2005 and has been around longer than Chef and Ansible. Puppet manages systems in a declarative manner, meaning you define the state the target system should be in without worrying about how it happens (in reality, that is true for all these tools). Puppet is written in Ruby and refers to its automation instruction set as Puppet manifests. The major point to realize is that Puppet is agent-based: a software agent needs to be installed on every device you want to manage with Puppet, such as servers, routers, switches, and firewalls. It is often not possible to load an agent on many network devices, which limits the number of devices that can be used with Puppet out of the box. By out of the box, you can infer that it is possible to use proxy devices with Puppet; however, this means that Puppet has a greater barrier to entry to getting started
Chef:
another popular configuration management tool, follows much the same model as Puppet. Chef is written in Ruby, uses a declarative model, is agent-based, and refers to its automation instructions as recipes (grouped together, they are cookbooks)
Note: It’s often difficult to load agents onto machines in order to automate them. Even when it is technically possible, it often increases the time that it takes to get the solution or tool deployed. Hence, I love Ansible :)
Ansible:
was created in 2012 as an alternative to Puppet and Chef, and was acquired by Red Hat in 2015. The two notable differences between Ansible and the other two tools are that Ansible is written in Python and that it is agentless. Being natively agentless significantly lowers the barrier to entry from an automation perspective. Since Ansible is agentless, it can integrate with and automate any device using any API; for example, integrations can use REST APIs, NETCONF, SSH, or even SNMP, if desired. Playbooks are Ansible's sets of tasks (instructions) used to automate devices. Each playbook is made up of one or more plays, each of which consists of individual tasks
Ansible overview:
Ansible can be considered a complete IT automation framework and not only a configuration management system. In this lesson you will see how it can help to automate, deploy and maintain an entire network infrastructure
Ansible is an agent-less solution. This feature means you do not need to install any third-party software on infrastructure devices to start automating with Ansible. It also uses YAML (a human-readable data serialization language) to write playbooks, resulting in human-readable scripts
Ansible itself is open source, and you can get started with Ansible in a matter of minutes. However, Ansible also offers a commercial product that is called Tower that acts as a wrapper for Ansible open source. Tower includes a Web UI, REST API, role based access control, integration to cloud platforms and GitHub, and much more
Ansible now has native support for several Cisco operating systems including IOS, IOS-XR, and NX-OS. These integrations support both SSH and device-specific APIs, including NX-API and NETCONF
Installing Ansible:
The Ansible installation process is relatively straightforward, due to its agent-less nature, since it only needs to be installed on a control host (server) or laptop. This feature means that no installation is required on individual nodes you want to automate
There are three main requirements for the machine on which you’re installing Ansible:
It must be installed on a Linux operating system
Python 2.6 or 2.7 must be installed.
Install Ansible dependencies using pip. These dependencies include Paramiko, PyYAML, Jinja2, Httplib2 and six:
$ sudo pip install paramiko PyYAML Jinja2 httplib2 six
then:
$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible
or:
$ sudo easy_install pip
$ sudo pip install ansible
How Ansible works:
It connects to network devices (IOS XE, IOS XR, NX-OS) over SSH, NETCONF, RESTCONF, NX-API, or SNMP; runs Python modules on the control host (that is, your laptop or server); and returns a JSON object for each task (play) executed
Ansible is an agent-less solution, which means that from a networking perspective, all Ansible jobs (Python code) run locally on the control host. This design is in contrast to how Ansible works for Linux servers. By default, Ansible uses SSH to log in to the server. It then copies Python code to the server, and the Python code (Ansible tasks) run on each server
Ansible components:
There are several important components and new terms to understand when working with Ansible. Using a top-down approach, you will go through them, starting with the inventory file and finishing with variables
inventory:
When working with a configuration management system like Ansible, the goal may be to automate many devices. These devices are listed inside an Ansible inventory file. In this file, you can define groups such as nxos, iosxr, and iosxe. From there you can list the devices belonging to each group using either an IP address or FQDN. In this way, you can decide to run an Ansible playbook against a given host, a group of hosts, or a combination of both. Ansible inventory files also support variable definitions that then become accessible to Ansible tasks. As an example, you can define variables per host and per group directly in the inventory file
[nxos:vars]
username=cisco
password=nexus
[nxos]
nxosv
By adding these lines, you have created two group-based variables for the group called nxos. This practice can be done for any group, including a predefined group that is called all. If credentials are the same for all devices, you can use the following notation:
[all:vars]
username=cisco
password=cisco
Within the inventory file, you can also specify variables that are only scoped, or usable, for a given device. The syntax has the following format:
[nxos]
nxosv username=cisco password=nxosv
sample inventory file:
[all:vars]
username=cisco
password=cisco
[nxos]
nxosv
[iosxr]
xrv
[iosxe]
csr1kv
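As a rough illustration of the structure above (this is not Ansible's actual inventory parser), an INI-style inventory can be split into host groups and group variables with a few lines of Python:

```python
import configparser

# Minimal sketch: read an INI-style inventory and separate host
# groups from ":vars" sections. Illustrative only.
INVENTORY = """
[all:vars]
username=cisco
password=cisco

[nxos]
nxosv

[iosxe]
csr1kv
"""

parser = configparser.ConfigParser(allow_no_value=True)
parser.read_string(INVENTORY)

groups, group_vars = {}, {}
for section in parser.sections():
    if section.endswith(":vars"):
        # "[all:vars]" holds variables for the group "all"
        group_vars[section.split(":")[0]] = dict(parser[section])
    else:
        # plain sections list host names, one per line
        groups[section] = list(parser[section])

print(groups)      # {'nxos': ['nxosv'], 'iosxe': ['csr1kv']}
print(group_vars)  # {'all': {'username': 'cisco', 'password': 'cisco'}}
```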
playbook:
Ansible playbooks contain a set of instructions and tasks that will be automated when the playbook is executed. The playbook, as you can see in the figure, utilizes the YAML structured data format
It is common to call the “main” playbook of a project site.yml. However, there is no requirement to do so, as you will see throughout the course. For this course, playbooks are given names that map back to the function being performed:
---
- name: manage IOSXE devices
  hosts: iosxe
  tasks:
    - name: show version
      ios_command:
        commands:
          - show version
        username: "{{ username }}"
        password: "{{ password }}"
        host: "{{ inventory_hostname }}"
Note: Remember that all YAML files start with “---”. This includes Ansible Playbooks
plays:
Within a playbook, there are one or more plays. The number of plays depends on the different groups of devices being automated.
A play begins by using the Ansible name parameter. While name is technically an optional parameter, it is recommended because it helps provide more context to the play. The associated value of name should be an arbitrary string that describes the actions that are taken by the play. This text is displayed in real time as the playbook is executed.
The play definition also defines what group of nodes the tasks will be executed against. In the previous example, all iosxe devices are automated as part of this playbook. The host or group of devices being automated must match a name defined in the inventory file
Tasks:
Within each play, there are groups of tasks. Each task, like the play itself, could be given an arbitrary name that is displayed during task execution. If you do not use name, you can simply put a hyphen “-” next to the module name as the following shows:
- ios_command:
    commands:
      - show version
    username: "{{ username }}"
    password: "{{ password }}"
    host: "{{ inventory_hostname }}"
Note: This code is still valid syntax, but you lose context of the task without a one-line description, especially as the playbook runs
Modules and variables:
Each module is a parameterized Python file. You pass parameters, or key-value pairs, into the module so it knows the action to be performed on the device. In this example, the module called ios_command, which simply executes commands on IOS devices, is used. To perform this task, you pass in the required parameters, which include the following:
commands: This list provides the commands that will be executed on the IOS device.
username: Username that is used to log in to the switch
password: Password that is used to log in to the switch
host: inventory_hostname is an Ansible built-in variable that is equal to the device’s name (or IP) as defined in the inventory file. As an example, if there were 5 devices in the iosxe group with the names r1, r2, r3, r4, and r5, this task would iterate over all five, and inventory_hostname would first be equal to r1, then r2, then r3, and so on. You can think of it as a for loop.
While inventory_hostname is a built-in Ansible variable, username and password are user-defined variables. Recall these variables were in the inventory from a previous figure as follows:
[all:vars]
username=cisco
password=cisco
[nxos]
nxosv
[iosxr]
xrv
[iosxe]
csr1kv
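The for-loop analogy can be sketched in Python; run_task here is a hypothetical stand-in for a module call such as ios_command:

```python
# Hypothetical illustration of how Ansible fans a task out over a
# group: inventory_hostname takes each host's name in turn.
iosxe_group = ["r1", "r2", "r3", "r4", "r5"]

def run_task(inventory_hostname):
    # Stand-in for a real module call such as ios_command
    return {"host": inventory_hostname, "changed": False}

results = [run_task(host) for host in iosxe_group]
print([r["host"] for r in results])  # ['r1', 'r2', 'r3', 'r4', 'r5']
```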
Module documentation:
On the Ansible website, you will find a table like this for every module Ansible supports. This table is how you know what parameters are supported for a given module. You can see the supported parameters, whether each parameter is required, whether there is a default, whether there are specific choices (such as true or false), and a short description or comment on each parameter
Note: there are two files that are required to get started with Ansible: an inventory file and a playbook
Executing ansible playbook:
$ ansible-playbook -i hosts site.yml
You can see that the -i flag is what tells Ansible which inventory file should be used when the playbook is executed. Within the playbook, you specify the host or group of hosts being automated in the play definition. This example automates the iosxe group which, in turn, contains just one device: csr1kv.
As you saw earlier, you can define host variables and group variables in an inventory file such as the following:
[all:vars]
username=cisco
password=cisco
[iosxe]
csr1kv password=cisco
While this code works adequately for a few variables, it is not scalable to store variables in the inventory for a production rollout of Ansible. The more efficient and recommended approach is to use directories for each type of variable. These directories must be called host_vars and group_vars, and within each you create YAML files:
group_vars: a directory dedicated to variables related to the groups specified in the inventory file. Each file must be a YAML file whose name matches a group name in the inventory.
host_vars: a directory dedicated to variables related to the hosts specified in the inventory
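A plausible project layout under this convention might look like the following (file names here are illustrative; each YAML file name must match a group or host from the inventory):

```
project/
├── hosts              # the inventory file
├── group_vars/
│   ├── all.yml        # variables for every host
│   └── iosxe.yml      # variables for the [iosxe] group
└── host_vars/
    └── csr1kv.yml     # variables only for the host csr1kv
```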
Ansible base modules:
Ansible supports a series of modules for NXOS, IOS, and IOS-XR platforms. Note: the term “base” modules is informal and is used because Ansible provides these modules across a wide number of device platforms and vendors. The three core “base” modules are *_command, *_config, and *_template
Here is the formal definition for each as described by Ansible:
*_command - Sends arbitrary commands to an ios node and returns the results read from the device. The ios_command module includes an argument that will cause the module to wait for a specific condition before timing out. Note: While this module can push configuration or show commands, it is primarily used for show commands.
*_config - Cisco IOS configurations use a simple block indent file syntax for segmenting configuration into sections. This module provides an implementation for working with IOS configuration sections in a deterministic way
Note: The *_config module compares the commands being pushed against the running configuration pushing only the commands needed to get the device to its desired state
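The comparison idea behind *_config can be sketched as follows; this is a simplified model, not Ansible's actual implementation (which also handles sections, parents, and match modes):

```python
# Hedged sketch of the idea behind *_config: compare the desired
# lines against the running configuration and push only what is
# missing, so the device converges to its desired state.
running_config = [
    "hostname csr1kv",
    "snmp-server community public123 ro",
]
desired_config = [
    "snmp-server community public123 ro",
    "snmp-server community private123 rw",
]

running = set(running_config)
to_push = [line for line in desired_config if line not in running]
print(to_push)  # ['snmp-server community private123 rw']
```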
ios_command module:
The ios_command module is used to send arbitrary commands to devices running Cisco IOS. This figure introduces the concept of executing a playbook in verbose mode. If you run the playbook with the “-v” flag, you can see the JSON object that is returned. Every module returns a JSON object with specific data from the operation that was executed: whether the task passed or failed, and whether a change was made on the device
sample playbook:
- ios_command:
    commands:
      - show version
    username: "{{ username }}"
    password: "{{ password }}"
    host: "{{ inventory_hostname }}"
Playbook:
test-ios_command.yml
$ ansible-playbook -i hosts test-ios_command.yml -v
PLAY [testing ios_command] *****************************************************
TASK [show version] ************************************************************
ok: [csr1kv] => {"changed": false, "stdout": ["Cisco IOS XE Software, Version BLD_V163_THROTTLE_LATEST_20160624_090103_V16_3_0_241\nCisco IOS Software [Denali], CSR1000V Software (X86_64_LINUX_IOSD-UNIVERSALK9-M), Experimental Version 16.3(20160624:092502) [v163_throttle-BLD-BLD_V163_THROTTLE_LATEST_20160624_090103
........
........
PLAY RECAP *********************************************************************
csr1kv : ok=1 changed=0 unreachable=0 failed=0
executing same playbook without verbose:
$ ansible-playbook -i hosts test-ios_command.yml
PLAY [testing ios_command] ****************************************************************
TASK [show version] ****************************************************************
ok: [csr1kv]
PLAY RECAP *********************************************************************
csr1kv : ok=1 changed=0 unreachable=0 failed=0
Note: running a playbook in verbose mode outputs the return object in JSON. Verbose mode also proves extremely valuable for troubleshooting and for collecting data, such as output from show commands
ios_config:
This module is used to manage Cisco IOS configuration sections. The same operation can be achieved on IOS-XR and NXOS devices using iosxr_config or nxos_config.
This module supports many optional parameters. A few of them as shown in the figure include parents, before, and match.
parents: The ordered set of parents that uniquely identify the section that the commands should be checked against. If the parents argument is omitted, the commands are checked against the set of top level or global commands.
before: The ordered set of commands to push onto the command stack if a change needs to be made. This gives the playbook designer the opportunity to perform configuration commands before any changes are pushed, without affecting how the set of commands is matched against the system.
match: Instructs the module on the way to perform the matching of the set of commands against the current device config. If match is set to “line”, commands are matched line by line. If match is set to “strict”, command lines are matched with respect to position. If match is set to “exact”, command lines must be an equal match. If match is set to “none”, the module will not attempt to compare the source configuration with the running configuration on the remote device
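As a rough illustration of how two of the match options differ (simplified; not Ansible's code, and strict/exact are omitted):

```python
# Illustrative only: how "line" and "none" matching differ when
# comparing desired command lines with the running configuration.
running = ["snmp-server community public123 ro"]
desired = [
    "snmp-server community public123 ro",
    "ntp server 10.0.0.5",
]

def lines_to_push(desired, running, match="line"):
    if match == "none":
        # no comparison: push everything as-is
        return list(desired)
    if match == "line":
        # line-by-line: push only lines missing from the device
        return [line for line in desired if line not in running]
    raise NotImplementedError("strict/exact also consider position")

print(lines_to_push(desired, running))           # ['ntp server 10.0.0.5']
print(lines_to_push(desired, running, "none"))   # both lines
```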
Note: take a good look at the Ansible documentation to understand each module and its parameters
Note: The *_config module also supports a parameter called “src” that points to a template or a config file
You have seen variables in playbooks that are denoted with curly braces such as {{ username }}. This syntax is actually using Jinja2 variables within a playbook. Jinja2 is a templating language that is supported in Python (remember that Ansible is written in Python). One of the common tasks that are done with Ansible is to template device configurations using Jinja2 templates and push them using ios_config.
For example, you may define variables such as these:
snmp_ro: public123
snmp_rw: private123
And you can create a template that is called config.j2 that looks like the following:
snmp-server community {{ snmp_ro }} ro
snmp-server community {{ snmp_rw }} rw
In this task, there is a parameter called src that references config.j2. When executed, Ansible automatically inserts the variables where appropriate, creates the following two commands, and sends them to the device:
snmp-server community public123 ro
snmp-server community private123 rw
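The substitution step can be mimicked in plain Python; this is only a stand-in for Jinja2, which Ansible invokes for real when you use the src parameter (the read-write community string is mapped to rw here):

```python
import re

# Minimal stand-in for Jinja2 variable substitution. Real Ansible
# hands the template file to the Jinja2 library itself.
template = (
    "snmp-server community {{ snmp_ro }} ro\n"
    "snmp-server community {{ snmp_rw }} rw\n"
)
variables = {"snmp_ro": "public123", "snmp_rw": "private123"}

# replace every "{{ name }}" with the value of that variable
rendered = re.sub(
    r"\{\{\s*(\w+)\s*\}\}",
    lambda m: variables[m.group(1)],
    template,
)
print(rendered)
# snmp-server community public123 ro
# snmp-server community private123 rw
```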
Performing compliance checks using ansible:
So far, the ios_command and nxos_command modules have been investigated. Now the Ansible directive called register and a module called assert will be discussed, which together show how to do compliance checks and validations using Ansible.
In this example, you want to be sure that the running OS version is the one expected. In order to do so, you will use the register directive with the assert module
- name: Ensure proper OS version is present on device
  hosts: nxos
  connection: local
  tasks:
    - name: show version
      nxos_command:
        commands:
          - show version
        username: "{{ username }}"
        password: "{{ password }}"
        host: "{{ inventory_hostname }}"
      register: output

    - debug: var=output
$ ansible-playbook -i hosts test-nxos_command.yml
PLAY [Print output] *******************
TASK [debug] *******************************************************************
ok: [nxosv] => {
"output": {
"changed": false,
"response": [
"\nCisco Nexus Operating System (NX-OS) Software\nTAC support: http://www.cisco.com/tac\nDocuments: http://www.cisco.com/en/US/products/ps9372/tsd_products_support_series_home.html\nCopyrOMITTEDOUTPUT
]
}
}
The register directive automatically creates (registers) a new variable and assigns it the value of the JSON object returned
The assert module is used for validation and you can now easily assert various conditions or configurations are true on a given device
In the playbook, you retrieve show version output and store it into a variable called “output” using register in the first task. Then, in the second task, you ensure that the proper OS version is present on the device
In general, before you do assertions you need to know what is possible to assert, which means knowing what is in the object you registered. One way to do that is to use the debug module, which allows you to print any variable to the terminal as the playbook is executed. Once you see and understand the object being saved and registered, you can intelligently use assertions. The object may be a raw string as shown in the figure, or it may be a nested object, which makes it much easier to work with
Another option is to use the verbose flag instead of debug if you simply want to view the data a module returns. The one benefit of using debug is that the output is pretty-printed
Note: you can run ansible-doc <ansiblemodulename> from a terminal to get module documentation. ansible-doc debug. ansible-doc assert
sample playbook:
---
- name: PERFORM COMPLIANCE CHECKS
  hosts: ios
  connection: local
  gather_facts: no

  vars:
    provider:
      username: cisco
      password: cisco
      host: "{{ inventory_hostname }}"

  tasks:
    - name: GATHER SHOW VERSION
      ios_command:
        commands:
          - show version
        provider: "{{ provider }}"
      register: output

    - name: DUMP OUTPUT TO TERMINAL
      debug:
        var: output

    - name: VERIFY OS AND CONFIG REGISTER
      assert:
        that:
          - "'Version 16.3' in output['stdout'][0]"
          - "'Configuration register is 0x2102' in output['stdout'][0]"

    - name: ENSURE SNMP RO EXISTS
      ios_config:
        commands:
          - snmp-server community PUBLIC_SECURITY ro
        provider: "{{ provider }}"
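The register + assert pattern boils down to checking substrings inside the returned object; here is a Python analogue, with a hand-written sample of the stdout data:

```python
# Rough Python analogue of register + assert: inspect the JSON-like
# object a task returns and assert compliance conditions against it.
# The output dict is hand-constructed sample data, not real output.
output = {
    "changed": False,
    "stdout": [
        "Cisco IOS XE Software, Version 16.3.1\n"
        "Configuration register is 0x2102"
    ],
}

checks = [
    "Version 16.3" in output["stdout"][0],
    "Configuration register is 0x2102" in output["stdout"][0],
]
assert all(checks), "device is out of compliance"
print("compliance checks passed")
```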
Ansible nexus features modules:
There is a distinction between base modules and feature modules. Base modules provide a way to manage devices using CLI commands, but in an automated fashion. Feature modules, on the other hand, eliminate the need to know the precise commands required to configure a particular feature. Instead, you simply declare the state the resource should be in, and the module ensures that the resource is in that state. These modules are also idempotent (unless otherwise stated), which means the change will only occur once no matter how many times a given task or module is executed
e.g:
nxos_bgp
nxos_bgp_af
nxos_bgp_neighbor
nxos_bgp_neighbor_af
nxos_evpn_global
nxos_evpn_vni
nxos_facts
nxos_feature
Supports Nexus Application Programming Interface (NX-API) and CLI
Ansible Core now supports an increasing number of feature modules for Cisco NX-OS. This list continues to grow, and Ansible has more feature modules for Nexus than for any other vendor or platform due to open source contributions
it’s worth noting that many of these modules support a parameter that is called state, which usually has options of present or absent. This means the resource (or configuration) should either be present on the device or be absent from the device (remove the configuration)
Nearly all Cisco NXOS feature modules return six key-value pairs in the JSON object being returned:
changed: If true, the module pushed changes to the device; otherwise false
proposed: Key-value pairs of the parameters passed into the module
existing: Key-value pairs of the existing feature configuration that map back to the parameters the module supports
end_state: Key-value pairs of the feature configuration after module execution
state: State as sent in from the playbook
updates: List of commands sent to the device
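A hand-constructed example of a return object in this shape (all values here are hypothetical):

```python
import json

# Hypothetical return object in the shape NXOS feature modules
# describe: changed, proposed, existing, end_state, state, updates.
result = json.loads("""
{
  "changed": true,
  "proposed": {"vlan_id": "113", "name": "native"},
  "existing": {},
  "end_state": {"vlan_id": "113", "name": "native"},
  "state": "present",
  "updates": ["vlan 113", "name native"]
}
""")

# A change was pushed exactly because existing differed from end_state
assert result["changed"] == (result["existing"] != result["end_state"])
print(result["updates"])  # ['vlan 113', 'name native']
```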
Example ansible nexus playbook:
let's play with two ansible modules: nxos_interface and nxos_ip_interface
nxos_interface:
- name: configure ethernet2/1 interface
  nxos_interface:
    interface: ethernet2/1
    admin_state: up
    mode: layer3
    description: "Configured with Ansible"
    state: present
    username: "{{ username }}"
    password: "{{ password }}"
    host: "{{ inventory_hostname }}"
nxos_ip_interface:
- name: configure ethernet2/1 ip address
  nxos_ip_interface:
    interface: ethernet2/1
    addr: "10.0.0.1"
    mask: 30
    state: present
    username: "{{ username }}"
    password: "{{ password }}"
    host: "{{ inventory_hostname }}"
- name: ENSURE VLAN EXISTS
  nxos_vlan:
    vlan_id: 113
    name: native
    vlan_state: active
    host: "{{ inventory_hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"

- name: ENSURE INTERFACE IS L2
  nxos_interface:
    interface: eth2/1
    mode: layer2
    host: "{{ inventory_hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"

- name: ENSURE INTERFACE IS CONFIGURED FOR V113
  nxos_switchport:
    interface: eth2/1
    mode: access
    access_vlan: 113
    host: "{{ inventory_hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
then run this playbook:
$ ansible-playbook -i hosts test_nxos.yml
PLAY [configure nxos device] ***************************************************
TASK [CREATE VLAN] *************************************************************
changed: [nxosv]
TASK [ENSURE INTERFACE IS L2] **************************************************
changed: [nxosv]
TASK [CONFIGURE TRUNK] *********************************************************
changed: [nxosv]
Note: If you execute the tasks that were shown in the previous figure, you would see the output above. All tasks show as changed. If you execute the same playbook again, you would see ok instead of changed, showing that the change happens only once. This is the concept of idempotency that was previously covered. No matter how many more times the playbook is run, it will always show as “ok”, as the change only occurs once
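Idempotency can be modeled in a few lines; ensure_vlan here is a toy stand-in for a feature module such as nxos_vlan:

```python
# Toy model of idempotency: applying the same desired state twice
# reports "changed" the first time and "ok" every time afterwards.
device_config = {}  # simulated device state

def ensure_vlan(config, vlan_id, name):
    if config.get(vlan_id) == name:
        return "ok"               # already in desired state
    config[vlan_id] = name        # push the change
    return "changed"

print(ensure_vlan(device_config, 113, "native"))  # changed
print(ensure_vlan(device_config, 113, "native"))  # ok
```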
Puppet:
Puppet models desired system states, enforces those states, and reports any variances so you can track what Puppet is doing. To model system states, Puppet uses a declarative resource-based language. This feature means that a user describes a desired final state (“this package must be installed” or “this service must be running”) rather than describing a series of steps to execute
You use the declarative, readable Puppet Domain Specific Language to define the desired end state of your environment, and Puppet converges the infrastructure to that state. If you have a predefined configuration that every new switch should receive, the Puppet intent-based automation solution can automate these repetitive configuration management tasks quickly
Puppet is an agent-based system. It is designed in a client/server manner with an agent software running on the remote hosts. Agents check configuration every 30 minutes by default and ensure that a match exists between expected configuration and their own
similar to ansible, puppet is available in a free open source version and a commercial version called puppet enterprise
puppet supports several agents for network devices, such as nxos and ios-xr
Puppet has several different components that make up their architecture:
They have a central control server, called the Puppet Master, that provides Enterprise features like reporting, security, a UI, and the like
They have a software agent that runs on the target node being managed; there are software agents for Linux hosts, Windows machines, and various network devices. The software agents initiate the connection back to the Master, and if an agent does not have the correct configuration, the Master ensures it is reapplied to the device
Puppet Forge is a cloud-based repository, that is community-based, and is a place for people to share their “Puppet programs” and manifests
puppet open source:
Puppet open source is a collection of smaller projects and binaries: the main puppet agent binary, mcollective, and facter
mcollective = provides orchestration-like capabilities
facter = separate binary that resides on the target node and gathers facts about the device
puppet enterprise stack:
puppet agent = agent that sits on nodes such as servers, F5s, nx and ios-xr
puppet master = reporting, single pane of glass, manifests. logging, etc.
puppet enterprise console = user interface, live management and admin/security
puppet forge = free, cloud repository for sharing modules. manifests
how puppet works:
Step 1: Define using the Puppet DSL (domain specific language) what you want the desired state of the infrastructure to be. This procedure is done with Puppet manifests. Define what you want the desired state of the systems and network devices to be in. As an example, which VLANs should exist, which interfaces should be configured, and the like
Step 2: Simulate the change, that is, see what would happen if a push were made. Puppet allows you to run your manifest in “dry run” mode, so you can see which changes would happen if you actually ran it. As you continue to run in dry-run mode, you can also detect unauthorized changes taking place on a device
Step 3: Enforce the change. You know what changes are going to be made after the simulation. Do you want to enforce them? If so, you can now execute. If not, you can continue to run in dry-run mode and catch the people making changes in the environment who should not be making them
Step 4: Report back to the Puppet Master. The dashboard within the console will track the status of the tasks being executed
puppet dataflow:
Remember that Puppet has an agent on the target device. The agent controls when “things happen.” First, assuming facter is installed on the target device. It runs on the device and collects facts about the device. Facts from a high level were covered earlier, but these facts are characteristics about the device: OS, hostname, number of interfaces, IP addresses, HW, and the like. The facts are then sent back to the Puppet Master.
The Puppet Master analyzes these facts and compares them against its known database. Puppet then builds a catalog that describes how the device should be configured. In other words, based on the facts reported back, the Master figures out what the device is and what role or configuration should be assigned. Then, the device-specific catalog is created and pushed back to the device.
The device receives the catalog and because Puppet is declarative, the policy is compared against its current state. Only if the policies are different is a change a made. If everything is already in the desired state, no change is made.
After the agent runs, a report is sent back to the master
Note: puppet implements a resource abstraction layer (RAL). you declare the end state through a policy and puppet translates that into what the device understands. it doesn't matter if that device runs NX-OS or IOS - how sweet abstraction is!!
Resource types are objects that express the high-level "what" of your policy. Related resources are grouped into platform-independent resource types
Resource providers are the "enablers" or the "how" for the resource types. Providers give the resource types the abstraction between OSs when needed
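The split between "what" and "how" shows up in any resource declaration: you write the platform-independent type, and the matching provider translates it for the target OS. A minimal sketch using the cisco_vlan type from the ciscopuppet module (the VLAN ID and name here are made-up illustration values):

```puppet
# Resource type (the "what"): cisco_vlan, platform independent.
# The NX-OS provider (the "how") translates this into device commands.
cisco_vlan { '220':
  ensure    => present,
  vlan_name => 'web_servers',   # hypothetical name
  state     => 'active',
  shutdown  => false,
}
```

The manifest never mentions the CLI commands needed to create the VLAN; that translation is the provider's job.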
example puppet manifest:
#Configuring Interface eth1/1
cisco_interface { "Ethernet1/1" :
  shutdown            => true,
  switchport_mode     => disabled,
  description         => 'managed by puppet',
  ipv4_address        => '1.1.43.43',
  ipv4_netmask_length => 24,
}
#Configuring interface
cisco_interface { "Vlan22" :
  svi_autostate  => false,
  svi_management => true,
}
puppet components:
The main components of Puppet are the server, the node (the device being automated), the agent (installed on the node), modules, and classes. The server, node, and agent components of the Puppet architecture are examined next
puppet server:
source of truth
holds manifests and resources
central server or point of configuration and control for your datacenter, both switching and compute
installing puppet:
First, you need to enable the Puppet package repositories. This task depends on your environment, so you should pick the one related to the OS you’re running. Assuming the local server runs Ubuntu 16.04 OS, you can use the following commands to install Puppet:
wget https://apt.puppetlabs.com/puppetlabs-release-pc1-xenial.deb
sudo dpkg -i puppetlabs-release-pc1-xenial.deb
sudo apt-get update
sudo apt-get install puppetserver
Note: For more information, visit: https://docs.puppet.com/puppetserver/2.4/install_from_packages.htm
puppet manifests:
Manifests are files containing Puppet code. They are standard text files that are saved with the .pp extension.
The Puppet master always uses the main manifest set by the current node’s environment. The default main manifest name is site.pp, but you can change it according to your environment.
“The main manifest can be a single file or a directory of .pp files. By default, the main manifest for a given environment is <ENVIRONMENTS DIRECTORY>/<ENVIRONMENT>/manifests (for example, /etc/puppetlabs/code/environments/production/manifests). You can configure the manifest per-environment, and you can also configure the default for all environments. An environment can use the manifest setting in environment.conf to choose its main manifest.
If the main manifest is a directory, Puppet parses every .pp file in the directory in alphabetical order and evaluates the combined manifest.” (https://docs.puppet.com/puppet/latest/reference/dirs_manifest.html)
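To illustrate the directory form of the main manifest, here is a sketch of two .pp files (filenames and contents are hypothetical examples) that Puppet would parse in alphabetical order and evaluate as one combined manifest:

```puppet
# /etc/puppetlabs/code/environments/production/manifests/10_vlans.pp
# Parsed first (alphabetical order), so the VLAN is declared here.
cisco_vlan { '22':
  ensure => present,
}

# /etc/puppetlabs/code/environments/production/manifests/20_interfaces.pp
# Parsed second; references the VLAN declared in the first file.
cisco_interface { 'Ethernet1/1':
  switchport_mode => access,
  access_vlan     => 22,
}
```

Because the files are evaluated together, the result is the same as if both resources had been written in a single site.pp.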
an example of a manifest to ensure that an interface is in Layer 3 mode:
# Configuring the interface as a routed interface
cisco_interface { "Ethernet1/1" :
  switchport_mode => disabled,
}
If you’re the root user, the default location for your manifests is /etc/puppetlabs/code/environments/production/manifests/. If you’re not root, your default location is /home/user/.puppetlabs/etc/code/environments/production/manifests/
You can clearly understand what is going on without knowing how it is configured on each type of device.
Puppet DSL is said to be “executable documentation” because it is descriptive and transparent (although more complex manifests may not be as transparent).
A Puppet module is a collection of files and directories that can contain Puppet manifests, as well as other objects such as files and templates, all packaged and organized in a way that Puppet can understand and use. When you download a module from Puppet Forge, you are downloading a top-level directory with several subdirectories that contain the components needed to specify the desired state. When you want to use that module to manage your nodes, you classify each node by assigning it a class within the module.
A class is a set of common configuration: resources, variables, and more advanced attributes. Any time you assign this class to a machine, it applies the configurations within the class.
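As a sketch of how classification works (the class name, node name, and interface values are invented for illustration), a class wraps one or more resources, and including the class on a node applies everything inside it:

```puppet
# A class groups common configuration into a reusable unit.
class access_port_defaults {
  cisco_interface { 'Ethernet1/5':
    switchport_mode => access,
    access_vlan     => 10,
    description     => 'managed by puppet',
  }
}

# Classifying a node: assigning the class applies its resources.
node 'nxosv' {
  include access_port_defaults
}
```

Any other node assigned the same class would converge to the same interface configuration.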
Note: For a complete list of supported Puppet types, please reference the following: https://forge.puppet.com/puppetlabs/ciscopuppet
puppet resources:
Puppet ships with several predefined resources, which are the fundamental components of your infrastructure. Puppet revolves around the management of these resources, which describe some aspect of a system. A resource declaration is composed of a “type”, a “name”, and some “attributes”. In the first example below, “cisco_interface” is the resource type, “Ethernet1/2” is the managed resource name, and the attributes are “description”, “shutdown”, and “access_vlan”. If you are running Puppet as a user other than root, you can save the manifest into the /home/user1/.puppetlabs/etc/code/environments/production/manifests/ path.
You can also modify the default manifest for this environment and change it to interface.pp.
The same is true for the “cisco_snmp_community” type, where you are ensuring that the “puppet” SNMP community exists
/home/user1/.puppetlabs/etc/code/environments/production/manifests/interface.pp
# A resource declaration:
cisco_interface { "Ethernet1/2" :
  description => 'default',
  shutdown    => 'default',
  access_vlan => 'default',
}
/home/user2/.puppetlabs/etc/code/environments/production/manifests/snmp.pp
# Configure snmp community
cisco_snmp_community { "puppet":
  ensure => present,
  group  => "network-operator",
}
puppet node and agent:
Node = any physical, virtual, or cloud machine or switch, whether server or network switch, that is configured and maintained by a Puppet client
Agent = runs locally on every node managed by the Puppet master server. Performs all configuration tasks specified by the manifest and converges the node to its desired state
puppet agent installation:
The agent installation process depends on a few things. First, you need to make sure that your OS version supports the Puppet agent. If so, you can decide the environment in which you want to install it. For n3k and n9k, you can choose between the Guestshell and the Bash-shell. For n5k, n6k, and n7k, you must use the Open Agent Container (OAC)
Note: for most nx platforms, nxos 7.0 and later required for puppet agent
Note: see the following for more info on installing agent https://github.com/cisco/cisco-network-puppet-module/blob/develop/docs/README-agent-install.md
puppet installation with OAC:
The OAC is a 32-bit CentOS-based container created specifically for running the Puppet agent software. After you have downloaded the proper OVA (container) version for your device OS, copy it to your switch; you can use scp for this task. Then install the OVA environment with the virtual-service install name oac package bootflash:oac.1.0.0.ova command and verify the installation status with the show virtual-service list command. Once the install has finished, the OAC can be activated (nxosv(config)# virtual-service oac, then activate). Use show virtual-service list to track progress
Now you’re ready to access the OAC environment for the first time. The virtual-service connect name oac console command achieves this goal. Log in with the default credentials root/oac; you will then be required to change the access password immediately
configuring OAC environment:
The Open Agent Container is an independent CentOS container that does not inherit settings from NX-OS; thus it requires additional network configuration, which is applied inside the OAC container.
If your device is using the mgmt0 interface (and the management VRF), you need to ensure that the container is also using the management VRF. You enter the management VRF with the chvrf management command. Now you can set up the hostname and DNS configuration. The default name servers are two OpenDNS public DNS servers
[root@localhost ~]# chvrf management
[root@localhost ~]#
[root@localhost ~]# hostname nxosv
[root@localhost ~]#
[root@localhost ~]# echo 'nxosv' > /etc/hostname
[root@localhost ~]#
[root@localhost ~]# cat /etc/resolv.conf
nameserver 208.67.222.222
nameserver 208.67.220.220
[root@localhost ~]#
puppet installation with guest shell:
The guestshell container environment is enabled by default on most supported Nexus platforms. However, the default disk and memory resources allocated to the guestshell container might be too small to meet the Puppet agent requirements. These resource limits can be increased with the NX-OS guestshell resize CLI commands. To enter the guestshell, use the guestshell command
Guestshell environment:
A secure Linux container environment running CentOS
Enabled by default
Resize disk and memory (if needed) to minimum recommended values:
Disk: 400 MB
Memory: 300 MB
Enter the guestshell with the guestshell command
The guestshell is an independent CentOS container that does not inherit settings from NX-OS; thus it requires additional network configuration. First, become root with the sudo su - command. Then, if your device uses the management interface for connectivity, enter the management namespace with chvrf management. Now you can set up the hostname and DNS configuration
[guestshell@guestshell ~]$ sudo su -
[root@guestshell ~]#
[root@guestshell ~]# chvrf management
[root@guestshell ~]#
[root@guestshell ~]# hostname nxosv
[root@guestshell ~]#
[root@guestshell ~]# echo 'nxosv' > /etc/hostname
[root@guestshell ~]#
[root@guestshell ~]# cat /etc/resolv.conf
nameserver 208.67.222.222
nameserver 208.67.220.220
[root@guestshell ~]#
puppet installation with bash shell:
The bash-shell is the native Wind River Linux environment underlying NX-OS. It is disabled by default, so the first step is to enable it with the feature bash-shell command, after which you can access the bash-shell with the run bash command
Now you have to set up the environment. First, become root with the sudo su - command. Then, if your device uses the management interface for connectivity, enter the management namespace with ip netns exec management bash. Now you can set up the DNS configuration
bash-4.2$ sudo su -
root@nxosv#
root@nxosv#ip netns exec management bash
root@nxosv#
root@nxosv#cat /etc/resolv.conf
nameserver 208.67.222.222
nameserver 208.67.220.220
puppet agent setup:
Now that you have activated the proper environment in which to run the Puppet agent, you are ready to install it. The procedure is common to all environments. First, you need to import the Puppet GPG keys. These keys are used for security reasons, to verify RPM authenticity. Then, you need to install the Puppet release RPM. The proper RPM varies based on the environment, so you should choose the one matching your agent environment.
After that, you can install the appropriate RPM and then install Puppet itself. RPM, the Red Hat Package Manager, is one of the oldest utilities used to manage software packages on Linux systems, including their installation and upgrade. It enables the user to install precompiled software packages, providing a very easy and fast way to manage software. As you have just seen, packages can also be cryptographically verified with GPG
Then, you update PATH. This task is needed to be able to execute puppet-related commands using the “puppet” keyword.
The ciscopuppet module has dependencies on the cisco_node_utils gem, so you need to install it too
Note: A gem is a ruby program or library. It can be easily managed with RubyGem, a package management tool for Ruby programs. The gem command is used to build, upload, download, and install these packages in a similar way to yum or apt-get
agent setup steps:
import the Puppet GPG keys
rpm --import http://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs
rpm --import http://yum.puppetlabs.com/RPM-GPG-KEY-reductive
install RPM
The recommended Puppet Red Hat Package Manager (RPM) varies by environment:
bash-shell
http://yum.puppetlabs.com/puppetlabs-release-pc1-cisco-wrlinux-5.noarch.rpm
Guestshell
http://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
open agent container
http://yum.puppetlabs.com/puppetlabs-release-pc1-el-6.noarch.rpm
using the appropriate RPM for your environment
yum install $PUPPET_RPM
yum install puppet
update PATH var
export PATH=/opt/puppetlabs/puppet/bin:/opt/puppetlabs/puppet/lib:$PATH
install the cisco_node_utils gem
/opt/puppetlabs/puppet/bin/gem install cisco_node_utils
Note: “The CiscoNodeUtils gem provides utilities for management of Cisco network nodes. It is designed to work with Puppet and Chef and other open source management tools.” (https://github.com/cisco/cisco-network-node-utils)
Establishing server-agent connection:
The first step is to run the Puppet master. You can do this using puppet master --verbose --no-daemonize. This command causes the master to print debug information and keeps the process in the foreground.
The second step is to be sure that the agent can reach the server with proper IP/name mapping. When the agent runs, by default it looks for the server using the name “puppet”. This can be changed in the puppet.conf file.
Puppet uses SSL to create a secure channel over the network between server and agent. When the agent starts and attempts to connect to the server, it sends its certificate so that the server can authorize the communication by signing it. The puppet agent -t command starts the agent immediately, while puppet cert sign --all signs all incoming certificate requests on the server. The server can also sign a single certificate by using puppet cert sign <certificate name>. Pending certificates can be listed with puppet cert list -a
Puppet examples:
as an example, the cisco_interface type can be used to manage interface configuration. Here, you shut down the interface with the shutdown attribute, make it a Layer 3 port by disabling switchport_mode, add a description with the description attribute, and configure the IPv4 address and mask with the ipv4_address and ipv4_netmask_length attributes.
this task can be stored in a manifest called site.pp, or anything else according to your environment. Remember that site.pp is the default main manifest and that it is stored in /etc/puppetlabs/code/environments/production/manifests by default
#Configuring Interface eth1/1
cisco_interface { "Ethernet1/1" :
  shutdown            => true,
  switchport_mode     => disabled,
  description         => 'managed by puppet',
  ipv4_address        => '1.1.43.43',
  ipv4_netmask_length => 24,
}
another example: configuring ospf:
Three types are needed to add OSPF support on an interface: cisco_ospf, cisco_ospf_vrf, and cisco_interface_ospf. First, configure cisco_ospf to enable OSPF on the device. Then put the OSPF router under a VRF, and add the corresponding OSPF configuration; if the configuration is global, use 'default' as the VRF name. Finally, apply the OSPF configuration to the interface
cisco_ospf { "Sample":
  ensure => present,
}
cisco_ospf_vrf { "Sample default":
  ensure         => 'present',
  default_metric => '5',
  auto_cost      => '46000',
}
cisco_interface_ospf { "Ethernet1/2 Sample":
  ensure => present,
  area   => 200,
  cost   => "200",
}