index.json
433 lines (1 loc) · 375 KB
[{"categories":["Windows 365","Azure Image Builder","Azure"],"contents":"Welcome along for the ride as we talk about Windows 365 Custom Images, and how we can use Azure Image Builder to create these images. In the upcoming series of posts, we will cover the following topics;\nPart 1 - The Foundations (This Post) Part 2 - PowerShell Deployment Part 3 - DevOps Deployment Part 4 - Azure Virtual Desktop UI Deployment In this post we are going to discuss the foundations \u0026amp; requirements for creating a custom image. It is intended that this post is read first before moving on to the other parts of the series, to ensure you fully understand the required components and how they work together. So without any further ado, let\u0026rsquo;s get started.\nPermission Requirements This first part of the series will cover the requirements for the Azure Infrastructure and the Azure Image Builder prerequisites, and as such will require the following permissions;\nOwner permissions on the target Azure Subscription. Really, nothing more than that for creating the infrastructure.\nWhat is Azure Image Builder? Azure Image Builder is a service that allows you to create custom images in Azure; it is based on HashiCorp Packer. Until recently, Image Templates had to be specified in either ARM (Azure Resource Manager) Templates or BICEP, or created with PowerShell. However, as of May 2023 there is a new kid on the block; you can now create images in the Azure Portal\u0026hellip; see [Part 4] for more information on this.\nWhat will we need to create for the Foundations? Well the answer is\u0026hellip; not a lot, and certainly nothing manually. This section will first highlight what will be done in the Infrastructure setup script, and then we will go through the steps to execute the script. No one wants to blindly run it without knowing what it is doing\u0026hellip;. 
Right?\u0026hellip; Right?\u0026hellip;\nAzure Resource Group This will come as no surprise to many, but we will need a resource group to store all of the resources we will create. This will be created in the subscription of your choice, and will have the name you specify with the -aibRG parameter. This will also be used to scope the permissions for the User Managed Identity and the Custom Role.\nCustom Azure Role Azure Image Builder requires some specific permissions to be able to manage different aspects of the deployment. Rather than giving it full access to the subscription, we will create a custom role that will be scoped to the resource group we created above.\nThe role will have the following permissions;\nMicrosoft.Compute/galleries/read Microsoft.Compute/galleries/images/read Microsoft.Compute/galleries/images/versions/read Microsoft.Compute/galleries/images/versions/write Microsoft.Compute/images/write Microsoft.Compute/images/read Microsoft.Compute/images/delete Microsoft.ManagedIdentity/userAssignedIdentities/assign/action See the Microsoft Documentation for more information on the permissions.\nThe Microsoft.ManagedIdentity/userAssignedIdentities/assign/action permission is not defined in the documentation, but is required when using Azure DevOps for deployments with this solution.\nUser Managed Identity The Azure Image Builder service requires a User Managed Identity to be able to perform the actions required to create the image. 
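For context, creating a custom role like the one above and a user managed identity by hand would look roughly like the following Az PowerShell sketch. The scope, names and location are placeholders, and the infrastructure script handles all of this for you:

```powershell
# Illustrative only - the Create-AIBInfrastructure.ps1 script does this for you.
# Requires the Az.Resources and Az.ManagedServiceIdentity modules.
$rgName  = 'W365-CI-EUC365'                                   # placeholder
$rgScope = "/subscriptions/<SubscriptionID>/resourceGroups/$rgName"

# Clone a built-in role as a starting point, then swap in the AIB permissions
$role = Get-AzRoleDefinition -Name 'Reader'
$role.Id = $null
$role.Name = "Azure Image Builder Image Definition for $rgName"
$role.Actions = @(
    'Microsoft.Compute/galleries/read'
    'Microsoft.Compute/galleries/images/read'
    'Microsoft.Compute/galleries/images/versions/read'
    'Microsoft.Compute/galleries/images/versions/write'
    'Microsoft.Compute/images/write'
    'Microsoft.Compute/images/read'
    'Microsoft.Compute/images/delete'
    'Microsoft.ManagedIdentity/userAssignedIdentities/assign/action'
)
$role.AssignableScopes = @($rgScope)
New-AzRoleDefinition -Role $role

# Create the identity, then assign it the custom role at resource group scope
$umi = New-AzUserAssignedIdentity -ResourceGroupName $rgName -Name "$rgName-UMI" -Location 'uksouth'
New-AzRoleAssignment -ObjectId $umi.PrincipalId -RoleDefinitionName $role.Name -Scope $rgScope
```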
This will be created in the resource group we created above, and will be given the name of the resource group with the suffix of -UMI, unless you specify something different with the -identityName parameter.\nOnce the User Managed Identity has been created, it will be assigned the custom role we created above.\nAzure Resource Providers To support the provisioning of the Azure resources within your subscription, the following resource providers will need to be registered;\nMicrosoft.Compute Microsoft.Storage Microsoft.VirtualMachineImages Microsoft.Network (Optional) Microsoft.KeyVault - This is only required if you are using a Key Vault to store your secrets. Documentation can be found on this LINK.\nDeploying the Foundations Now that we understand what the script will do, let\u0026rsquo;s go through the steps to execute the script. The script can be found on my GitHub repository, and can be downloaded from the link below;\nThe Create-AIBInfrastructure.ps1 script, once executed, will install the required PowerShell modules, and then prompt you to log in to Azure.\nBelow is an example of the parameters that can be used to execute the script;\n$splat = @{ SubscriptionID = \u0026#34;b493a1f9-4895-45fe-bb71-152b36eea469\u0026#34; # The ID of the Azure Subscription where the resources will be created. 
geoLocation = \u0026#34;UKSouth\u0026#34; # The Azure region in which resources will be provisioned aibRG = \u0026#34;W365-CI-EUC365\u0026#34; # The name of the resource group to be created } .\\Create-AIBInfrastructure.ps1 @splat Once the script is executed, you will start to see the resource information in the console;\nOnce the execution has completed, you will have a resource group with the Custom Roles \u0026amp; the User Managed Identity with the Custom Role assigned to it.\nConclusion Stick around and check out the other parts of the series noted in the introduction, and if you have any questions or comments, please feel free to reach out to me on Twitter or leave a comment below.\n","image":"https://hugo.euc365.com/images/post/w365/customimage/part1_hua37ff77ba8250a3fc6703372336519dc_53493_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/windows-365-custom-images-part-1-foundations/","tags":["Windows 365","Azure Image Builder","Azure","PowerShell"],"title":"Windows 365 Custom Images - Part 1 - The Foundations"},{"categories":["Windows 365","Azure Image Builder","Azure"],"contents":"Welcome to the second post in the series talking about Windows 365 Custom Images, and how we can use Azure Image Builder to create these images. In this series of posts, we will cover the following topics;\nPart 1 - The Foundations Part 2 - PowerShell Deployment (This Post) Part 3 - DevOps Deployment Part 4 - Azure Virtual Desktop UI Deployment The first post in the series covered the foundations for all of the deployment methods, and as such, this post will cover the PowerShell deployment method. 
We will cover additional requirements and what to expect as an output from the script.\nThe PowerShell deployment method will not only create the Image, but it will also upload the image to the Windows 365 Service, and then create a new Windows 365 Cloud PC Provisioning Policy, or update an existing policy if one already exists.\nThe IDs in the script are fictitious and for example purposes only; please do not use these in your environment.\nPermission Requirements As we are executing this manually, the Owner permission on the Subscription is not required; you will require the following permissions to execute the script;\nIntune Administrator Contributor on the Resource Group where the resources will be created Getting Prepared Before we can execute the script, we will need to ensure that we gather all of the information we will need to execute the script and achieve our goal of creating a custom image for Windows 365.\nImage Offer and SKU One of the first things we need to obtain is the Image Offer we will use as our base template. To do so, follow the below steps;\nObtain the Get-ImageOptions.ps1 script Run this script, specifying your Subscription ID, Geo Location (e.g. UKSouth, EastUS, etc.) and the Image Publisher (which in this case is MicrosoftWindowsDesktop), as below. Get-ImageOptions.ps1 -SubscriptionId \u0026lt;Subid\u0026gt; -geoLocation \u0026quot;UKSouth\u0026quot; -imagePublisher \u0026quot;MicrosoftWindowsDesktop\u0026quot;\nLocate the windows-ent-cpc heading, and take note of an image offer you wish to use. For those wondering, this denotes Windows Enterprise Cloud PC. There are two options for later versions of the OS, which are M365 or OS. To help make your decision, please review the Cloud PC Device images overview documentation.\nImage Customisations As you will see in the script, there are three customisations, two \u0026lsquo;Inline\u0026rsquo; and one script URI. 
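To make that concrete, here is an illustrative pair of customiser objects in the shape Azure Image Builder expects. These are not the actual objects from the script; the names, inline command and script URL are placeholders:

```powershell
# Illustrative customiser objects only - not the actual $ImgCustomParams from
# the script. Names, the inline command and the script URL are placeholders.
$inlineCustomiser = @{
    Type        = 'PowerShell'
    Name        = 'CreateBuildFolder'
    RunElevated = $true
    RunAsSystem = $true   # Gen2 templates need both of these set to $true
    Inline      = @('New-Item -Path C:\Image -ItemType Directory -Force')
}

$scriptCustomiser = @{
    Type        = 'PowerShell'
    Name        = 'InstallApps'
    RunElevated = $true
    RunAsSystem = $true
    ScriptUri   = 'https://example.com/scripts/Install-Apps.ps1'
}
```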
Now these are the bits that make your images do the business; there is a bit of trial and error sometimes, but when you find your groove, it becomes like shelling peas.\nIf you search for $ImgCustomParams, this will locate the customisations. If you add or remove one, please do not forget to update the $ImgTemplateParams object! More information on customiser objects in PowerShell can be found HERE.\nYou can use the Managed Identity to access Azure Storage Accounts for files; there is documentation on this HERE. This is not in a PowerShell format, but it does outline the concept.\nGeneration 2 Virtual Machine Templates require both RunAsSystem \u0026amp; RunAsElevated to be set to True. If you do not do this, the image will fail to build.\nExecuting the Script Ok, let\u0026rsquo;s get some resources in the oven, go grab a coffee, and come back to a fully built image and a new Windows 365 Provisioning Policy\u0026hellip; honestly, this process from start to finish takes longer than 1 hour\u0026hellip; Why not use a Windows 365 to do it to avoid any unexpected interruptions 😋😋!!\nThis script will create a shared image gallery, which the image will be built to before the managed disk is created. This also provides additional value, in that you can create multiple images in the same gallery, and then use the same image for multiple purposes. For example, you could create a Windows 10 Enterprise image, and then use this for both Windows 365 and Azure Virtual Desktop.\nTo execute the script, ensure you have added your Customisations and gathered all the information you need to pass into the script, and then execute the script as in the below example;\n$Params = @{ subscriptionID = \u0026#34;b493a1f9-4895-45fe-bb71-152b36eea469\u0026#34; #The ID of the Azure Subscription where the resources will be created. geoLocation = \u0026#34;uksouth\u0026#34; #The Azure region in which resources will be provisioned. 
aibRG = \u0026#34;w365-CICD-rg\u0026#34; #The name of the resource group to be created imageTemplateName = \u0026#34;w365CustomCICD\u0026#34; #The name of the Image Template to Create. aibGalleryName = \u0026#39;sigw365\u0026#39; #The name of the Image Gallery to create/update. You cannot use special characters or spaces in this field. imageDefinitionName = \u0026#39;w365Images\u0026#39; #The name of the image definition to create provisioningPolicyDisplayName = \u0026#34;W365 Demo\u0026#34; #The name of your Windows 365 Provisioning Policy. publisher = \u0026#34;MicrosoftWindowsDesktop\u0026#34; #This value is set by default, but please do update to suit your needs, please see the Image Offer and SKU section above offerName = \u0026#34;windows-ent-cpc\u0026#34; #This value is set by default, but please do update to suit your needs, please see the Image Offer and SKU section above offerSku = \u0026#34;win11-22h2-ent-cpc-m365\u0026#34; #This value is set by default, but please do update to suit your needs, please see the Image Offer and SKU section above runOutputName = \u0026#34;w365DistResult\u0026#34; #The Result Output Name. } \u0026amp; \u0026#39;.\\Create-Windows365AIB.ps1\u0026#39; @Params Don\u0026rsquo;t worry, you will be able to see it is still working, as the script will check the progress of actions periodically and output information to the console\u0026hellip; I have been there, and I know the feeling of \u0026lsquo;is it still working\u0026rsquo; 🤣🤣\u0026hellip;\nConclusion So there you have it, a fully automated (yet manually invoked) image build process for Windows 365, and a new Windows 365 Provisioning Policy. 
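For the curious, the periodic progress check mentioned above boils down to a polling loop along these lines. This is an illustrative sketch assuming the Az.ImageBuilder module; the resource names are placeholders, and property names may differ between module versions:

```powershell
# Illustrative polling loop - not the script's actual implementation.
do {
    Start-Sleep -Seconds 300   # check every 5 minutes
    $template = Get-AzImageBuilderTemplate -ResourceGroupName 'w365-CICD-rg' -Name 'w365CustomCICD'
    Write-Host "$(Get-Date -Format 'HH:mm') - Build state: $($template.LastRunStatusRunState)"
} while ($template.LastRunStatusRunState -eq 'Running')
```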
Stick around for the next post in the series, where we will cover the Azure DevOps deployment method for a completely hands-off approach to deploying your Windows 365 Custom Images.\nIf you have any questions, please do reach out to me on Twitter or in the comments below.\n","image":"https://hugo.euc365.com/images/post/w365/customimage/part2_hu76c2f010c792201aeb658fe248f3b77c_43489_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/windows-365-custom-images-part-2-powershell-deployment/","tags":["Windows 365","Azure Image Builder","Azure","PowerShell"],"title":"Windows 365 Custom Images - Part 2 - PowerShell Deployment"},{"categories":["Windows 365","Azure Image Builder","Azure","DevOps","Bicep","YAML"],"contents":"Welcome to the third post in the series talking about Windows 365 Custom Images, and how we can use Azure Image Builder to create these images. In this series of posts, we will cover the following topics;\nPart 1 - The Foundations Part 2 - PowerShell Deployment Part 3 - DevOps Deployment (This Post) Part 4 - Azure Virtual Desktop UI Deployment Some of you may be like Veruca Salt from Charlie and the Chocolate Factory, and wanted all of the goodies now and opted to skip the second post in the series, and well, that is fine, I would have done the same thing 🤣, sorry not sorry.\nIn this post, we will be looking at how we can use Azure DevOps to deploy our Windows 365 Custom Image. Please follow the process carefully, as missing any of the steps could leave you scratching your head as to why it is not working.\nThis post assumes you have basic source control competency, and that you are familiar with terminology such as pull, push, etc. 
If you are not, then I would recommend you read up on this before continuing. You will also need to have the ability to use Git on your workstation.\nPermission Requirements For this post, we do need some additional permissions; however, these can be short-lived, as once we configure the pipelines, service principals, etc., we will only need the ability to manage the source data.\nAzure DevOps You will need the ability to Create a Project in Azure DevOps You will need the ability to Create a Service Connection in an Azure DevOps Project Have at least 1 Self-Hosted Agent or 1 Microsoft Hosted Agent available to run the pipeline (Free Tier is fine, the code is designed to run from a free tier account) - Documentation on Agents The free agents have a maximum run time of 1 hour, and there are also monthly execution limits. As we go through this guide, you will notice that there are two pipelines to work around this being a problem.\nAzure Global Administrator permissions to create the Service Principal \u0026amp; grant the required Application Permissions. Getting Prepared Azure DevOps Project \u0026amp; Service Connection First of all, let\u0026rsquo;s get the Azure DevOps Project created; I would recommend following the Microsoft Documentation on this. The name of the project is not important, but I would recommend using something that is meaningful to you.\nOnce you have the project created, we will need to create a Service Connection; again, I would recommend following the Microsoft Documentation, ensuring you select the Resource Group created in the Foundations post and you select Grant access permission to all pipelines.\nService Principal (App Registration) You only need to run through this section if you want to upload the image to Windows 365, which I assume you do. 
If you do not, then you can skip this section.\nNot that it\u0026rsquo;s a habit of this post\u0026hellip; but I have another link\u0026hellip; this time it\u0026rsquo;s to one of my previous posts, Create an Azure App Registration. The service principal will need the following permissions;\nCloudPC.ReadWrite.All - This is required to upload the image to Windows 365 It will also need the Custom Role assigned on the resource group you created in the Foundations post. (By default this will be called Azure Image Builder Image Definition for \u0026lt;Resource Group Name\u0026gt;) Once you have the Service Principal created, you will need to create a secret, and take note of the Application (client) ID and Client Secret \u0026amp; the Tenant ID for later use.\nThe Code Ok, now we are ready to start getting into the juicy bits!!\nBefore we go any further, you will need to ensure you have cloned your repository to your local machine, ready to copy the code into the Azure DevOps Project.\nYou can get all of the code using the GitHub Resource link below.\nOnce you have downloaded the code and copied it into your local repository, we can start to update the code to suit your needs.\nImage Template (BICEP) The Image template file itself is located in the Templates folder, and is called Windows365.bicep. The only edits we need to make to this file for the purpose of this guide are the customizations; this is where we will define the applications we want to install on the image, along with any other scripted or inline customization.\nAs you will see above, there are two marked areas; the RED area is where you will define the customisation objects, and the YELLOW area is a sample of what the customisation object looks like. 
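In text form, customisation entries for the customize array generally take a shape like the following. The names, inline command and script URL below are illustrative placeholders, not values from the template:

```bicep
// Illustrative entries for the 'customize' array in Windows365.bicep -
// the names, inline command and script URL are placeholders.
{
  type: 'PowerShell'
  name: 'CreateBuildFolder'
  runElevated: true
  runAsSystem: true
  inline: [
    'New-Item -Path C:\\Image -ItemType Directory -Force'
  ]
}
{
  type: 'PowerShell'
  name: 'InstallApps'
  runElevated: true
  runAsSystem: true
  scriptUri: 'https://example.com/scripts/Install-Apps.ps1'
}
```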
You can read about more BICEP customisation objects HERE.\nOnce you have made all your customisations, we can move on to looking at the parameters file.\nParameters You will notice from the codeset that there is a parameters file; this is what drives the BICEP file (apart from Customisations) in the Templates folder.\nSeparating the parameters from the BICEP file allows us to use the same BICEP file for multiple deployments, without having to edit the BICEP file each time.\nOpen up the Windows365.parameters.json file and take a look at the parameters that are available to set, all of which have descriptions to help you understand what they are for.\nThere is one key parameter that you will need to set that is unique to your environment: the AIBMSIName parameter. This is the name of the User Managed Identity in your resource group. If you are unsure of the name, you can find this in the Azure Portal, by navigating to the resource group you created in the Foundations post.\nAll of the other parameters are set to default values, which you can change if you wish, but the image would provision with the default values.\nPipelines (YAML) As mentioned in the Getting Prepared section, we will be using two pipelines to cater for those using the free tier of Azure DevOps. 
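For reference, the AIBMSIName parameter mentioned above is a standard ARM parameters-file entry along these lines (the identity name shown is a placeholder):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "AIBMSIName": {
      "value": "W365-CI-EUC365-UMI"
    }
  }
}
```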
The first pipeline will be used to create the image template and then invoke the build of the Managed Image, and the second pipeline will be used to upload the image to Windows 365.\nIf you are using a paid pipeline, you can combine the two pipelines into one; you will just need to ensure all of the correct variables are set and that you do not have any duplicate steps.\nPipeline 1 - Create Image Template \u0026amp; Build Managed Image Pipeline Name - CreateManagedImage.yaml This pipeline will create the image template and then invoke the build of the Managed Image. There are a couple of variables that you will need to update prior to running the pipeline; these are;\nConnection - The name of the Azure DevOps Service Connector with access to the subscription (created above) subscriptionID - The subscription ID of the subscription you are deploying to resourceGroup - The name of the resource group you created in the Foundations post imageTemplateName - A name for the image template you want to create location - The region you want to deploy the resources to, i.e. \u0026ldquo;UK South\u0026rdquo; template - Path to the BICEP file e.g. Templates/Windows365.bicep templateParameters - Path to the parameters file e.g. Parameters/Windows365.parameters.json You will notice a commented-out section in the pipeline; this section will allow you to create a schedule to run the pipeline on.\nThis pipeline also handles some other actions, such as clearing up existing templates, as updating a template in place is not currently possible. We will cover the DeploymentActions.ps1 file in a later section.\nPipeline 2 - Upload Image to Windows 365 Pipeline Name - DeployToW365.yaml This pipeline will upload the image to Windows 365. Again, there are a couple of variables that you will need to update prior to running the pipeline; these are the same as above. 
However, we will be adding Pipeline Variables later in the post, once we have published the code and created the pipeline in DevOps.\nDeploymentActions.ps1 This script can be thought of as a boilerplate script; it is used to handle some of the actions that we run in PowerShell, and it is customisable to your needs.\nIf you wish to amend the Provisioning Profile type, data, etc., this can be found in this script; the same goes for the Required Modules sections of the script.\nPublish the Code \u0026amp; Create/Run the Pipelines Once you have updated the code to suit your needs, you will need to push the code to your Azure DevOps Project, and then we can head over to Azure DevOps to create the pipelines.\nOk, so first of all, let us create the pipeline to create the image template and build the managed image.\nOpen your DevOps Project In the left hand menu, select Pipelines Click Create Pipeline Select Azure Repos Git Select your repository Select Existing Azure Pipelines YAML file Select the CreateManagedImage.yaml file, and click Continue Check the details in the pipeline, and click Run Ok, now that we have the first pipeline running, we can create the second pipeline to upload the image to Windows 365.\nIn the left hand menu, select Pipelines Click New Pipeline (Top Right) Select Azure Repos Git Select your repository Select Existing Azure Pipelines YAML file Select the DeployToW365.yaml file, and click Continue Click Variables Click New variable Enter ClientID in the Name field, and the Application (client) ID from the Service Principal you created earlier in the Value field, and click OK Click the + icon in the top right of the variables section Enter ClientSecret in the Name field, and the Client Secret from the Service Principal you created earlier in the Value field, select Keep this value secret and click OK Click the + icon in the top right of the variables section Enter TenantID in the Name field, and the Directory (tenant) ID from the Service Principal you 
created earlier in the Value field, and click OK You should end up with something that looks like this;\nThis time, don\u0026rsquo;t click Run; instead, use the dropdown next to it and click Save. We will amend the name of the Pipeline by clicking the ellipsis next to the Run Pipeline button (Top Right), and then clicking Rename/Move. Once you have renamed your pipeline, you can click Run Pipeline.\nWhen you click run, it will queue this job, and it will wait for the first pipeline to complete before running.\nSample Output(s) Overview of Pipelines Create Managed Image Pipeline Deploy to Windows 365 Pipeline Windows 365 Image In the Console Conclusion Well, this has been a fun post; it is by far the most in-depth post in the series, but man is it worth it!!\nStick around for the next post in the series, where we will be looking at how to create Image templates with the new UI Features in Azure Virtual Desktop within the Azure Portal.\nAs always, if you have any questions, please feel free to reach out to me on Twitter or leave a comment below.\n","image":"https://hugo.euc365.com/images/post/w365/customimage/part3_hu31fc839efd189df211df0a1ccb454234_44245_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/windows-365-custom-images-part-3-azure-devops-deployment/","tags":["Windows 365","Azure Image Builder","Azure","DevOps","Bicep","YAML"],"title":"Windows 365 Custom Images - Part 3 - Azure DevOps Deployment"},{"categories":["Windows 365","Azure Image Builder","Azure"],"contents":"Welcome to the fourth (\u0026amp; final\u0026hellip; for now) post in the series talking about Windows 365 Custom Images, and how we can use Azure Image Builder to create these images. 
In this series of posts, we will cover the following topics;\nPart 1 - The Foundations Part 2 - PowerShell Deployment Part 3 - DevOps Deployment Part 4 - Azure Virtual Desktop UI Deployment (This Post) This has been a long time coming for people not so involved with ARM, BICEP, DevOps or PowerShell, as there are now UI capabilities for you to create your own custom images. You can even use some \u0026lsquo;in-built\u0026rsquo; customisation options to give you that extra hand along the way.\nThis is by far the most manually involved piece, but it is a great way to get started with Azure Image Builder, and then you can start to explore the other options available to you.\nSo without further ado, let\u0026rsquo;s get into it!\nYou must still use the Foundations post to create the required resources, as this post will not cover the creation of the required resources.\nPermission Requirements For this post, you will need access to the Subscription and Resource Group created in the Foundations post. During testing, no other permissions were required, but as with everything, this may change in the future.\nGetting Prepared Ok, before we start creating the template, ensure you have publicly available links to your resources; even if you are using something with a SAS token or in a storage account, it needs to be accessible from the internet.\nIf you are using the Managed Identity to get data from a storage account, please ensure that this has the relevant permissions on that account.\nCreating the Image We are now ready to start creating the image, so let\u0026rsquo;s head over to the Azure Portal and then head to the Azure Virtual Desktop blade.\nOnce you are in the Azure Virtual Desktop blade, you will need to select the Custom image templates option from the left hand menu.\nCreate a Custom Image The same rule still applies here: you cannot currently update an image template. 
However, as you go through the UI selection options, you will see that it gives you the ability to select a previous template to use as a base. We won\u0026rsquo;t cover that here, but I wanted to make you aware of it.\nClick Add custom image template from the ribbon Enter the following information on the basic pane; Template Name: The name you wish to call the image template Import from existing template: If you have a previous template you wish to use as a base, select it here, otherwise select No Subscription: The subscription your resource group is in Resource Group: The resource group you created in the Foundations post Location: The region you wish to deploy the image to Managed Identity: The Managed Identity created in the Foundations post On the Source Image pane, enter the following details; Source Type: For this post we will be using a Platform Image, but if you have other image types you can explore them here. Select Image: Select the image you wish to use as a base for your custom image (If you choose a Windows 10 Image, I recommend using the ones appended with Gen2) On the Distribution Targets pane, enter the following details, and then click Next; Managed Image: Select this option as this is what we are focusing on in this post Resource Group: Select the resource group you created in the Foundations post Image Name: The name you wish to call the image Location: The region you wish to deploy the image to Run output name: The name you wish to call the run output On the Build Properties Pane, we can leave all options as default. However, this is where you could add a VNET, increase the size of the Packer VM to improve build speed, etc.; for now, we can just click Next.\nOk, now we are on to the Customizations pane; this is where you will add in links to your scripts, or use the built-in scripts to help tailor the experience for your needs. 
Feel free to play around here and then click Next.\nAdd any tags you wish to add to the image, and then click Next.\nReview the information you have entered, and then click Create.\nOnce you click create, you will be taken back to the Custom Image Templates blade, and you will see your new template being created; this normally only takes a few minutes, but the image is not quite ready at this point.\nOnce the template status changes to Success, you can select the template and then click Run from the ribbon.\nThe creation time can vary; it all depends on the customisations you have added and the size of the image you are creating, so go grab a coffee and come back in a little while.\nUploading the Image to Windows 365 A little while later\u0026hellip; we can finally upload the image to Windows 365.\nLet\u0026rsquo;s start by heading over to the Intune Console and then head to the Devices Windows 365 blade.\nFrom here we need to select the Custom Images tab, and then click Add from the ribbon. The configuration menu will appear to the right of the screen, where you should enter the following information;\nImage Name: The display name of the image in Windows 365 Image Version: A version number for the image; I use the date format, for example 22.05.26. 
Subscription: The subscription your image is in Source Image: The image you created in the previous section Once you have entered the information, click Add, and you will see the image appear in the list. Once the upload completes, the status will change to Upload successful, and the image will be available for selection in the provisioning policy.\nConclusion I love that this is now making image creation more accessible to admins; however, I find this the most labour-intensive process, as with the PowerShell and DevOps options you can automate the process and not have to worry about checking the status of the image creation \u0026amp; upload.\nI hope you have enjoyed this series, and I hope it has helped you on your journey to Windows 365 with Custom Images.\nAs always, if you have any questions, please feel free to reach out to me on Twitter or leave a comment below.\n","image":"https://hugo.euc365.com/images/post/w365/customimage/part4_hud641fded3c78fbf324c0982cbae1d7f6_45012_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/windows-365-custom-images-part-4-azure-avd-ui/","tags":["Windows 365","Azure Image Builder","Azure","PowerShell"],"title":"Windows 365 Custom Images - Part 4 - Azure AVD UI"},{"categories":["Power BI","Intune","Analytics","Security"],"contents":"At MMS MOA 2023, I presented a session alongside Kenny Buntinx on Attack Surface Reduction (ASR) rules, a session filled with lessons learnt, interaction and Belgian chocolate. 
During this session, I showed a custom data collection script that I had written to collect the ASR events from the event log and send them to a Log Analytics workspace, and then how to inject that data into a Power BI report.\nThis post will cover the configuration of the Power BI Report and the Intune Remediation Script, to help you better report on ASR events in your environment, without having to pay for an E5 license.\nGetting Prepared First of all, you will need a Log Analytics Workspace, if you don\u0026rsquo;t have one already, you can create one in the Azure Portal. Once you have the workspace created, we will need the Workspace ID and Primary Key for the workspace, these can be found in the Agents section of the workspace.\nThe second thing we will need is the Intune Remediation Script, this is available for download from the below button. The script will need to be configured with the Workspace ID and Primary Key from the Log Analytics Workspace at a minimum, there are other configurable options in the script, but these are optional, and are noted in the HelpMessage of the parameters.\nOnce the script has been amended, details of the tested Intune configuration are available in the README file in the repository.\nConfiguring the Power BI Report Ok, so this section assumes that there is now data flowing into the Log Analytics Workspace, as without the data, the report will be empty.\nThe first thing we need to do in this section is create an Azure App Registration that can be used to access both the Log Analytics Workspace and the Graph API.\nTo do this, we will need to create a new App Registration in the Azure Portal, and give it the following Graph API Application permissions:\nDeviceManagementManagedDevices.Read.All You can follow my guide on how to create an App Registration here, and then how to grant access to the Log Analytics Workspace here.\nOnce the App Registration has been created, we will need to download
the Power BI Report, from the below button, and open it in Power BI Desktop.\nWhen you first open the report, you will be prompted to enter the following information:\nTenant ID - This is the tenant ID of the Azure AD Tenant Application ID - This is the Application ID of the App Registration Application Secret - An application secret key of the App Registration Log Analytics Workspace ID - This is the Workspace ID of the Log Analytics Workspace You will also need to select the Timeframe you wish to query from a drop down list, however this is further configurable in the report itself.\nOnce you click Load, you will be prompted about privacy levels, ensure you configure these as Public for the purposes of this guide, this is the only configuration that has been tested and confirmed working. This will then initiate the data load from the APIs and Log Analytics Workspace.\nOnce the Data has been loaded, you will be presented with the following report visual:\nOnce you click on an Event in the bottom left-hand table, the event data will be displayed to the right-hand side to help you make informed decisions on your data.\nCredits \u0026amp; Conclusion This post utilises frameworks from the Guys and Girls over at MSEndpointMGR, the function used to post the data to the LAW is based on a function used in other inventory collection scripts.\nYou can also look to further secure your data collection by utilising one of their other frameworks which utilises Azure Function Apps (SEE HERE)\nI hope you find this post useful, and if you have any questions, please reach out to me or leave a comment below.\n","image":"https://hugo.euc365.com/images/post/asr/featuredImage_hue5e0e8fb6a833477482d0b7c10ff2963_70415_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/asr-custom-data-collection-intune/","tags":["Power BI","Intune","Log Analytics","ASR"],"title":"ASR Custom Data Collection with Intune"},{"categories":["Power
BI","Intune","Analytics"],"contents":"What a blast this year at MMS MOA 2023! This time last year I spoke for the first time and did a quick 3-minute session in Tips \u0026amp; Tricks and this year (2023) I returned as a speaker, with an improved and fleshed out Power BI Session alongside Steve Beaumont. Some may recall my previous post Microsoft Graph API and PowerBI, there will be some reference back to this as we go through this one.\nWe covered a lot on stage, and this post is here to help flesh out those configuration areas that we didn\u0026rsquo;t cover in the session, and also has the link to the goodies for you to start the journey to feature-rich reports with Intune \u0026amp; Power BI.\nGetting Prepared As per the previous post, we will still be using a service principal to access the data, however we will also be expanding this scope to allow data read access to your log analytics workspace.\nThe dataset that is available for download has the following Application API permission requirements for Intune and AAD:\nDeviceManagementManagedDevices.Read.All (Intune Devices Data \u0026amp; Autopilot Events) Device.Read.All (AAD Data - Device) User.Read.All (AAD Data - User) For information on creating a service principal (App Registration) and assigning the permissions, please refer to one of my previous posts.\nAside from the above, we also need to grant access to the Log Analytics Workspace that holds the Windows Update for Business reports data, please refer to the following post for more information on how to grant access to the service principal.\nYou can utilise the same service principal for both the Intune and Log Analytics APIs, just ensure you have the correct permissions assigned.\nOnce all of the permissions have been granted, you will need the following information to configure the Power BI report:\nTenant ID - This is the tenant ID of the Azure AD Tenant Application ID - This is the Application ID of the App
Registration Application Secret - An application secret key of the App Registration Log Analytics Workspace ID - This is the Workspace ID of the Log Analytics Workspace Configuring the Power BI Report The Power BI Files used in the session are available for download from the below button.\nOnce you have downloaded the files, you will need to open the MMS 2023 - Intune Data Model.pbit file, once launched you will be prompted to enter the information we gathered in the previous section. In addition to this, you will also need to select a time period for the data from the Log Analytics Workspace, I have put this in a drop down option, but this can be amended to suit your needs once the report is loaded.\nThe logicAppURL is not required for the report to function, however there is a template query which will use this for data ingestion and this was again the premise of my previous post, where the Deploy to Azure Button will allow you to deploy the Logic App in under 3 minutes.\nIt is important that you select the beta option from the Graph Version dropdown.\nThis should leave you with something like the below configuration screen. Once you click Load, you will be prompted about privacy levels, ensure you configure these as Public for the purposes of this guide, this is the only configuration that has been tested and confirmed working. This will then initiate the data load from the API\u0026rsquo;s and Log Analytics Workspace.\nNow we have the data loaded, we can start to build out the reports, you will see that there is a template page with the report which demonstrates basic usage of the data model, this is a good starting point for you to build out your own reports.\nTo further enhance the usage of the report, you should look to publish the report to the Power BI Service, this will allow you to configure the report to refresh on a schedule, and also allow you to share the report with other users. 
Please see the Microsoft Documentation for more information on publishing Power BI Desktop files to the Power BI Service.\nYou will need a Power BI Pro License to publish the data model to the Power BI Service.\nExpanding the Report So now we have the report loaded, we can start to expand on the report and add additional data from other Microsoft Graph and Log Analytics endpoints. A few things do need to be taken into account when adding additional data to the report, and these are as follows:\nPermissions - The service principal will need to have the correct permissions assigned to access the data. Error Fields - You may see additional fields in the data table that you do not see when querying the Graph API directly, it is best to only select the fields you need. Standardisation - Standardise your queries where possible. Resource Authentication - The GraphToken function accepts an input which is used to determine the resource to authenticate against, this is important when you are querying different APIs, i.e. Graph and Log Analytics. Template Queries There are a number of template queries that are available in the report, these are as follows:\nNative Web Contents - This query is used to query the Graph API directly, this will not handle pagination over 1000 objects. Invoked Logic App - If you implemented the logic app to query the data, this query will be used to interact with the Logic App to gather data. Template Graph Call - This query will use the odata query to obtain the data from the Graph API, this will handle pagination natively. There is not a template query for the Log Analytics Workspace, however you can copy the query of the WUfB report and modify it to suit your needs.\nTips \u0026amp; Tricks from the Session Friendly Names Everywhere - Rename your columns and tables en masse to something that makes sense, this will make it easier to build out your reports collaboratively.
Set your Data Types - Ensure you set the data types for your columns, this will ensure that the data is displayed correctly in the report \u0026amp; it will also enhance the report performance. Only Get the data that is required - When querying the Graph API, only select the fields that you need, this will reduce the amount of data that is returned and will also improve the performance of the report. ","image":"https://hugo.euc365.com/images/post/powerbi/graphpbimms_huca6733458c13dbb19797d4ddcbaa639d_97980_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/intune-power-bi-mms-edition/","tags":["Power BI","Intune","Log Analytics"],"title":"Graph API \u0026 Power BI - MMS Edition"},{"categories":["Analytics"],"contents":"As IT Professionals start to look to the cloud for solutions, more and more of us are starting to utilise Log Analytics, whether that be to underpin other services \u0026amp; solutions or writing our own custom log collection scripts, it is becoming a very key piece of many deployments.\nAs much as I love using Kusto (KQL), Log Analytics Workbooks are very tedious to create and can take a long time to get right, those that have gone to the effort of doing so, I tip my hat to you!!\nBecause of this tedious nature, I decided I wanted to get the data into Power BI, which is easy enough when you authenticate with your organisational account, but the problem with this, is just that, it ties it to that org account. Using service principals (App Registrations), removes such reliance.\nThis post will not focus on getting the data into Power BI, there will be further posts to follow on such subjects, it will focus on Authenticating and Retrieving the data using the API, with the tool shown in my case being PowerShell.\nPrerequisites For this post we will need the following;\nAzure App Registration, please see my previous post (Create an Azure App Registration).
A log analytics workspace Your preferred client to Access APIs Granting Authorisation Log Analytics Data When you create an Application Registration, you will have a service principal that you can use to grant access to resources, for example, if I created a registration with the name EUC365 - Reporting it would show as below.\nThis is the principal we will be using to grant access to the log analytics workspace to consume your data. The only thing we can do with this data is read it, so we will only be granting Reader access to the data, to grant authorisation follow the below steps.\nLocate the Log Analytics workspace you wish to use In the left-hand pane select Access Control (IAM). Select the Role Assignments tab Click Add \u0026gt; Add role assignment Select Reader from the Role pane, then click Next Member type should be User, group, or service principal Click + Select Members Search for the name of the service principal you created earlier, then click Select Click Review + assign, and then click it again The principal now has access to read the data, however we have not yet granted access to the actual API from the Service Principal, so let\u0026rsquo;s take a look at that in the next section.\nAPI Authorisation Let\u0026rsquo;s dive straight into this, to grant authorisation to the Log Analytics APIs, follow the below steps;\nLocate the Application Registration In the left-hand pane, select API permissions Click Add a permission Select the APIs my organisation uses tab Start typing Log Analytics in the search bar Select the Log Analytics API result Select Application permissions. Select the Data.Read permission Click Add permissions Click Grant admin consent for Click Yes This will now allow you to use the service principal to call the API.\nTesting it out Now that we have granted the service principal access to the data and the API, we can now test it out.
For this I will be using PowerShell, but you can use any client you wish, such as PostMan, Python, etc.\nPowerShell Below is a quick script to gather the data in PowerShell, it will prompt you for the Tenant ID, Client ID, Client Secret and Workspace ID, it will then connect to Azure with the Service Principal and retrieve the data from the API.\nparam ( [Parameter(Mandatory = $true)] [string] $TenantId, [Parameter(Mandatory = $true)] [string] $ClientId, [Parameter(Mandatory = $true)] [string] $ClientSecret, [Parameter(Mandatory = $true)] [string] $workspaceId ) #Create the Service Principal Credential object $SPCredentials = [System.Management.Automation.PSCredential]::new($ClientId, (ConvertTo-SecureString $ClientSecret -AsPlainText -Force)) #Connect to Azure with the Service Principal Connect-AzAccount -Tenant $TenantId -Credential $SPCredentials -ServicePrincipal | Out-Null $AccessToken = (Get-AzAccessToken -ResourceUrl \u0026#34;https://api.loganalytics.io\u0026#34;).Token #Get WUfB Reports Data $WUfBReportsData = Invoke-RestMethod -Method Get \u0026#34;https://api.loganalytics.io/v1/workspaces/$workspaceId/query?query=UCClient | summarize arg_max(TimeGenerated,*) by AzureADDeviceId | project-away TenantId, TimeGenerated, AzureADTenantId, SourceSystem, Type\u0026#34; -Headers @{Authorization = \u0026#34;Bearer $AccessToken\u0026#34;} #Display Columns $WUfBReportsData.Tables[0].Columns #Display Data Rows $WUfBReportsData.Tables[0].Rows The data you gather back from the API will be broken down into Columns and Rows, and the matching up of such data can be a bit of a pain, however this post is focusing on simply getting the data, not manipulating it, as the manipulation of the data will be done in Power BI which will be covered in a future post.
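That said, if you do want a quick way to pair each row with its column names while still in PowerShell, a minimal sketch could look like the below. The function name and the sample table are my own illustrations, assuming a response shaped like the $WUfBReportsData object above (tables with Columns and Rows properties):

```powershell
# Sketch: flatten a Log Analytics API table (Columns + Rows) into PSCustomObjects.
# ConvertTo-LogAnalyticsObjects is a hypothetical helper name, not part of any module.
function ConvertTo-LogAnalyticsObjects {
    param([Parameter(Mandatory = $true)] $Table)

    $columnNames = $Table.Columns.Name
    foreach ($row in $Table.Rows) {
        $object = [ordered]@{}
        for ($i = 0; $i -lt $columnNames.Count; $i++) {
            # Pair each cell with the column name at the same index
            $object[$columnNames[$i]] = $row[$i]
        }
        [pscustomobject]$object
    }
}

# Hand-built sample table in the same shape as the API response, for illustration
$table = [pscustomobject]@{
    Columns = @([pscustomobject]@{ Name = 'DeviceName' }, [pscustomobject]@{ Name = 'OSVersion' })
    Rows    = @(, @('PC-001', '22H2'))
}
$result = ConvertTo-LogAnalyticsObjects -Table $table
# $result.DeviceName is 'PC-001'
```

In the real script you would pass $WUfBReportsData.Tables[0] instead of the sample table.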
The returned data directly from the API needs some work to match it up to the columns, but this is usable data that can be used in Power BI or any other tool you wish to use.\nI hope you found this post useful, and if you have any questions, please feel free to reach out to me!! If you have a function or code snippet that you think would be useful to others, please feel free to reach out to me and I will add it to the post.\n","image":"https://hugo.euc365.com/images/post/loganalytics/logAnalyticAPIFeatured_hu61ab4d2283fd403867058adf38e50939_288953_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/log-analytics-api-data-access-service-principals/","tags":["Log Analytics"],"title":"Log Analytics API Data Access with Service Principals"},{"categories":["Graph API","PowerShell","Driver and Firmware"],"contents":"It\u0026rsquo;s been just over a month since the announcement of Commercial Driver and Firmware Servicing by Microsoft. Since then I have been working on delivering this to businesses, and sometimes it can be a challenge to keep on top of all of the different Graph endpoints that are required to keep the cogs turning. So, to save my own sanity, I pulled together a PowerShell Module (Driver.Firmware.Servicing) to help make the service a lot more consumable for admins.\nMicrosoft provided some great conceptual documentation on the service, but I wanted to make it a lot easier to consume. So, I have created a PowerShell Module that abstracts away the complexity of the Graph API, and provides a simple interface to manage the service. See the Microsoft documentation for more information on the service.\nWhat is Driver and Firmware Servicing? Let us quickly recap on what Driver and Firmware Servicing is. Utilising the Windows Update for Business Deployment Service (WUfBDS), Driver and Firmware Servicing is a service that allows you to manage the drivers and firmware that are deployed to your devices.
It is available to commercial customers, with one of the following licencing SKUs.\nMicrosoft 365 E3 \u0026amp; E5 Microsoft 365 A3 \u0026amp; A5 Microsoft 365 Business Premium What does the PowerShell Module do? The PowerShell Module is designed to make the management of Driver and Firmware Servicing a lot easier. Behind the scenes it is using the Graph API to make the calls to the service, with the PowerShell Module abstracting away the complexity of the Graph API.\nWhere can I get the PowerShell Module? As mentioned in the opening paragraph, the PowerShell Module is available from the PowerShell Gallery. You can install it by running the following command in PowerShell.\nInstall-Module Driver.Firmware.Servicing #-MinimumVersion 1.0.0 is recommended How do I use the PowerShell Module? Whilst I have also documented these on the GitHub repo (linked above), alongside the source code, I will also cover them here.\nCreate a new policy #Create a deployment audience $deploymentAudience = New-DeploymentAudience #create a new automatic policy, deferring updates for 1 day $policy = New-DriverUpdatePolicy -audienceID $deploymentAudience.id -policyType \u0026#34;Automatic\u0026#34; -deferralTime \u0026#34;P1D\u0026#34; Add a device to a policy #Array of Azure AD Device IDs $deviceIDs = @(\u0026#34;deviceID1\u0026#34;,\u0026#34;deviceID2\u0026#34;) #Explicitly Enrol the devices to the WUfBDS Driver Feature Push-EnrollUpdateableAsset -deviceIDs $deviceIDs #Add the devices to the deployment audience Add-DeploymentAudienceMember -audienceID $deploymentAudience.id -azureDeviceIDs $deviceIDs Get a list of applicable content #Get a list of applicable content for the policy Get-DriverUpdatePolicyApplicableContent -policyID $policy.id Get a list of compliance changes \u0026amp; view update schedule #Get a list of compliance changes for the policy $complianceChanges = Get-DriverUpdatePolicyComplianceChange -policyID $policy.id #View Update Schedule $updateEntry = $complianceChanges |
Where-Object {$_.content.catalogEntry.displayName -eq \u0026#34;Intel - System - 4/12/2017 12:00:00 AM - 14.28.47.630\u0026#34;} $updateEntry.deploymentSettings.schedule Add a Driver Update Approval #Get the Update Catalog ID for the driver update. $catalogID = (Get-DriverUpdatePolicyApplicableContent -policyID $policy.id | Where-Object {$_.catalogEntry.displayName -eq \u0026#34;Intel - System - 4/12/2017 12:00:00 AM - 14.28.47.630\u0026#34;}).catalogEntry.id #Add the driver update approval and defer it for 2 days (Deferral time is set to 0 days in the policy) Add-DriverUpdateApproval -policyIDs @($($policy.id),\u0026#34;PolicyID2\u0026#34;) -catalogEntryID $catalogID -deferDays 2 Revoke a Driver Update Approval #Get the Update Catalog ID for the driver update. $catalogID = (Get-DriverUpdatePolicyApplicableContent -policyID $policy.id | Where-Object {$_.catalogEntry.displayName -eq \u0026#34;Intel - System - 4/12/2017 12:00:00 AM - 14.28.47.630\u0026#34;}).catalogEntry.id #Revoke the driver update approval Revoke-DriverUpdateApproval -policyIDs @($($policy.id),\u0026#34;PolicyID2\u0026#34;) -catalogEntryID $catalogID Update a Driver Update Deferral #Update the deferral time for the policy Update-DriverUpdatePatchDeferral -policyID $policy.id -deferralTime \u0026#34;P2D\u0026#34; Conclusion There are so many more things that you can do with the PowerShell Module, and I will be adding more functionality to it over time.
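A quick way to explore that extra functionality is standard PowerShell command discovery; nothing here is module-specific beyond the module and command names already used in this post:

```powershell
# List every command the Driver.Firmware.Servicing module exports
Get-Command -Module Driver.Firmware.Servicing

# Read the built-in help for any command shown in the examples above
Get-Help New-DriverUpdatePolicy -Full
```

This requires the module to be installed first, as shown in the Install-Module step earlier.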
If you have any suggestions, please let me know by using the discussions and issues tabs on the GitHub repo.\n","image":"https://hugo.euc365.com/images/post/daf/PowerShellModuleLogo-DAF_hu37b13791a92314445fa2a76775b3a02d_16280_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/driver-firmware-servicing-powershell-module/","tags":["Drivers","Firmware","Powershell","Updates","Graph API"],"title":"Driver and Firmware Servicing PowerShell Module"},{"categories":["Driver and Firmware","Graph API","Updates","PowerShell"],"contents":"The driver and firmware servicing solution currently allows you to onboard up to 2000 devices at once using a CSV file, and even then most of the legwork needs to be completed by the Admin to export, populate a CSV and then upload it and so on. It isn\u0026rsquo;t yet possible to map an AAD Group to the deployment audience either which again, isn\u0026rsquo;t always ideal if you have devices in their thousands and you want to enrol them into the service to bring back some more control.\nWith all that said and done, it is completely possible to automate this process with the Graph API and PowerShell. In this article we will focus on how to achieve just that with very little effort required from admins.\nWhile this solution will check the current update audience for a device, it will not check all of your audiences.
This can leave devices ending up in multiple policies if they exist in multiple groups, and this is not a recommended practice, although it is allowed.\nPrerequisites Permissions to connect to the Graph API with the following scopes WindowsUpdates.ReadWrite.All Group.Read.All Permissions to Azure AD with the Ability to connect via PowerShell Microsoft.Graph PowerShell Module The script snippets and process will all be driven around a script called Update-BulkMembers.ps1 which is stored on my GitHub repo, this can be accessed using the button below;\nExecuting the Script There are two things needed to execute the script;\nAzure AD Group ID Update Audience ID (for assistance on finding this, look at THIS POST) It is super simple to achieve the end goal by just executing the script with the following command;\nUpdate-BulkMembers.ps1 -aadGroupID \u0026lt;AADGroupID\u0026gt; -audienceId \u0026lt;updateAudienceID\u0026gt;\nThis will then get all group members, break them up into chunks of 2000 devices and then onboard them into the service and populate the audience.
However, if you want to know how it works, stick around and lets break it down.\nBreaking down the script We will skip past the first few bits which handle parameters, module installation and authentication and skip straight to the goodies!\nThe first part of the code (below), will get the ObjectID\u0026rsquo;s of the members of the Azure AD Group, and then display the count of devices, before then breaking it down into chunks of 2000 ids, and then getting all of the members of the current audience.\n#Get Group Members IDs $GroupMemberIDs = (Get-MgGroupMember -GroupId $aadGroupID -All).id \u0026#34;$aadgroupID $($GroupMemberIDs.Count) members\u0026#34; #Break the id\u0026#39;s into chunks of 2000 $chunks = [System.Collections.ArrayList]::new() for ($i = 0; $i -lt $GroupMemberIDs.Count; $i += 2000) { if (($GroupMemberIDs.Count - $i) -gt 1999 ) { $chunks.add($GroupMemberIDs[$i..($i + 1999)]) } else { $chunks.add($GroupMemberIDs[$i..($GroupMemberIDs.Count - 1)]) } } $updateAudienceMembers = Invoke-GetRequest ` -Uri \u0026#34;https://graph.microsoft.com/beta/admin/windows/updates/deploymentAudiences(\u0026#39;$audienceId\u0026#39;)/members\u0026#34; -All Following on from this, we enter the foreach loop for the chunks that were created in the previous snippet. Inside this foreach loop, it will get the DeviceID for each of the members and add them to the $azureDeviceIDs variable as follows;\n$AzureDeviceIDs = @() $GroupMemberIDs | foreach-object { $DeviceID = (Get-AzureADDevice -ObjectID $_).DeviceID $AzureDeviceIDs += $DeviceID } Following that, we create two post body objects, one for the enrollment into the service, the second for adding the device to the audience. $enrollParamBody = @{ updateCategory = \u0026#34;driver\u0026#34; assets = @( ) } $audienceParamBody = @{ addMembers = @( ) } We then do a foreach loop of the Azure AD DeviceID`s, and check if they exist in the policy audience, and also check if they are enrolled into the service. 
If they are NOT, it will add them to the respective post bodies for invocation right at the end.\nforeach ($id in $azureDeviceIDs) { IF (-Not($updateAudienceMembers.id -contains $id)) { $memberObject = @{ \u0026#34;@odata.type\u0026#34; = \u0026#34;#microsoft.graph.windowsUpdates.azureADDevice\u0026#34; id = $id } $audienceParamBody.addMembers += $memberObject } IF(-Not($updateAudienceMembers.id -contains $id) -or ($updateAudienceMembers | Where-Object {$_.id -match $id}).enrollments.updateCategory -notcontains \u0026#34;driver\u0026#34;){ $memberObject = @{ \u0026#34;@odata.type\u0026#34; = \u0026#34;#microsoft.graph.windowsUpdates.azureADDevice\u0026#34; id = $id } $enrollParamBody.assets += $memberObject } } Last, but not least we post the relevant bodies to the relevant endpoints and that\u0026rsquo;s a wrap! #Explicitly Enrol Devices Invoke-MgGraphRequest ` -Method POST ` -Uri \u0026#34;https://graph.microsoft.com/beta/admin/windows/updates/updatableAssets/enrollAssets\u0026#34; ` -Body $enrollParamBody #Post Audience Members Invoke-MgGraphRequest ` -Method POST ` -Uri \u0026#34;https://graph.microsoft.com/beta/admin/windows/updates/deploymentAudiences(\u0026#39;$audienceId\u0026#39;)/updateAudience\u0026#34; ` -Body ( $audienceParamBody | ConvertTo-Json -Depth 5) Closing Thoughts I hope this will help you further onboard devices at scale to really put the services to the test before we get the Intune capabilities or even the ability to use native Azure AD groups.\nPlease reach out with any feedback you may have too :).\n","image":"https://hugo.euc365.com/images/post/daf/bulkenrolFeature_hu61f59686cb6a339dc8eb17b50edb5be2_32032_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/bulk-enrol-device-driver-firmware-servicing/","tags":["Drivers","Firmware"],"title":"Bulk Enrol Device to Driver and Firmware Servicing"},{"categories":["Driver and Firmware","Graph API","KQL","Log Analytics","Updates","PowerShell"],"contents":"The commercial Driver and
Firmware servicing has been big talk across the system management community (SEE PRESS RELEASE HERE) since its release on Valentine\u0026rsquo;s Day. It can be challenging though to find out the applicable devices, the UI does offer an applicable device count in its current state, however this post will show you how to find the applicable devices in your audiences.\nWe will be looking at how we can do this via the Graph API, and also how you can retrieve this data from the Windows Update for Business Reports log analytics workspace.\nIf you don\u0026rsquo;t have WUfB Reports configured, you can take a look at my VLOG to assist with the configuration.\nPrerequisites Permissions to connect to the Graph API with the following scopes WindowsUpdates.ReadWrite.All Permission to query your Log Analytics workspace hosting WUfB Reports data Permissions to Azure AD with the Ability to connect via PowerShell Microsoft.Graph PowerShell Module Connecting to the Graph API Connecting via this module could not be easier, follow the below steps after ensuring the Microsoft.Graph module is installed;\nLaunch a PowerShell prompt Enter Connect-MgGraph -Scopes WindowsUpdates.ReadWrite.All -ContextScope Process, hit Enter Sign in. If not already consented, you will be prompted with an image like the below. You can choose to grant for yourself or your organisation if you have the permissions. Microsoft Graph Profile Selection The first thing we need to do before running any commands is call Select-MgProfile -Name beta to ensure we are using the Beta endpoint of the Microsoft Graph.\nFind Applicable Device IDs using the Graph This will only display the AzureAD Device ID, this will not give you the device name.\nTo list all of the applicable content from a policy, you must first know the deploymentAudience id to which your policy is targeted.
You can take a look at my Driver Management via Graph API and PowerShell post on how to obtain this information.\nOnce you have your deployment audience you can run (Invoke-MgGraphRequest -Method GET -Uri \u0026quot;https://graph.microsoft.com/beta/admin/windows/updates/deploymentAudiences('\u0026lt;audienceID\u0026gt;')/applicableContent?`$expand=catalogEntry\u0026quot;).Value, replacing the \u0026lt;audienceID\u0026gt; with your audience id, this will return something like the following;\nName Value ---- ----- matchedDevices {8exxxxxx-2bbe-xxxx-ba2c-xxxxxxxxxxf9, 16xxxxxx-aee7-xxxx-xxxx-xxxxxxxxxx39} catalogEntry {[deployableUntilDateTime, ], [setupInformationFile, ], [provider, Intel], [versionDate… matchedDevices {24xxxxx-xxxx-xxxx-b50c-xxxxxxxxxxda, 8exxxxxx-2bbe-xxxx-ba2c-xxxxxxxxxxf9, 16xxxxxx-a… catalogEntry {[deployableUntilDateTime, ], [setupInformationFile, ], [provider, Intel], [versionDate… matchedDevices {8exxxxxx-2bbe-xxxx-ba2c-xxxxxxxxxxf9, 16xxxxxx-aee7-xxxx-xxxx-xxxxxxxxxx39, dd8af46f-4… catalogEntry {[deployableUntilDateTime, ], [setupInformationFile, ], [provider, Intel], [versionDate… matchedDevices {8exxxxxx-2bbe-xxxx-ba2c-xxxxxxxxxxf9} As you can see the matchedDevices has your Device IDs, however, from the above view, it isn\u0026rsquo;t very consumable as you cannot see what driver is which, so you could run something like the following to try and make it a bit better;\n$applicableContent = (Invoke-MgGraphRequest -Method GET -Uri \u0026#34;https://graph.microsoft.com/beta/admin/windows/updates/deploymentAudiences(\u0026#39;\u0026lt;audienceID\u0026gt;\u0026#39;)/applicableContent?`$expand=catalogEntry\u0026#34;).Value $consumableContent = @() FOREACH ($dObj in $applicableContent) { $dContent = @{} $dContent.driverDisplayName += $dObj.catalogEntry.displayName $dContent.matchedDevices += $dObj.matchedDevices $consumableContent += $dContent } #Run this to view an example of your output $consumableContent[0] If you have run the above you should
then see something like the following.\nName Value ---- ----- matchedDevices {8exxxxxx-2bbe-xxxx-ba2c-xxxxxxxxxxf9, 16xxxxxx-aee7-xxxx-xxxx-xxxxxxxxxx39} driverDisplayName Intel - net - 22.190.0.4 This is a lot more consumable, however, it is still only a list of IDs which you would then have to cycle through with the AzureAD PowerShell module to help this be of more use. You also have to do this for each deployment audience you have, so if you have multiple, this could soon become very complex.\nSo, with that in mind, let\u0026rsquo;s take a look at a more consumable, and user-friendly way to do this with WUfB Reports Log Data.\nWUfB Reports Data This data will have a delay of up to 24 hours.\nWe will not be looking at the Monitor Workbook for this section, we will only be looking at the Log Data in the Log Analytics workspace. So if you browse to your workspace and open up the logs section we will then be able to run a couple of queries.\nLet\u0026rsquo;s break the first couple of lines of the queries down a little before we look at the final summarization.\nWe select the table where the data is stored We filter for the DriverUpdate Category We ensure we are only picking up devices with applicable content, and where the status is not cancelled We join the UCClient table on the AzureADDeviceID so we can consume the DeviceName We then summarize the results to get the latest entry per device object.
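Assembled, those steps form the shared base that both of the grouped queries in the following section build on (this is simply the common prefix of those queries, before any grouping is applied):

```kql
UCServiceUpdateStatus
| where UpdateCategory == "DriverUpdate"
| where isnotempty(ServiceState) and ServiceState !in ("Cancelled")
| join UCClient on AzureADDeviceId
| summarize arg_max(TimeGenerated, *) by AzureADDeviceId
```

Each variant then appends a final summarize ... by clause to this base to group the results.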
From the above we then summarize this data in a couple of ways to get different views on the data.\nBy CatalogID There are numerous ways we can group the data for review, however, no matter what the PolicyID, or DeploymentID is, if the update CatalogID is the same it makes sense to logically group them by this for a view across the entire estate.\nSo with that you end up with the following query, which will display a count of devices, the Device DisplayNames along with the catalogID and Update DisplayName as in the image at the end.\nUCServiceUpdateStatus | where UpdateCategory == \u0026#34;DriverUpdate\u0026#34; | where isnotempty(ServiceState) and ServiceState !in (\u0026#34;Cancelled\u0026#34;) | join UCClient on AzureADDeviceId | summarize arg_max(TimeGenerated,*) by AzureADDeviceId | summarize DeviceCount=count(), Devices=make_list(DeviceName) by CatalogId, UpdateDisplayName By Policy If you would prefer to view applicable devices by policy, the following query will do just that; the output will be very similar to the above, however it will be broken down by the PolicyID as shown in the image.\nUCServiceUpdateStatus | where UpdateCategory == \u0026#34;DriverUpdate\u0026#34; | where isnotempty(ServiceState) and ServiceState !in (\u0026#34;Cancelled\u0026#34;) | join UCClient on AzureADDeviceId | summarize arg_max(TimeGenerated,*) by AzureADDeviceId | summarize DeviceCount=count(), Devices=make_list(DeviceName) by PolicyId, UpdateDisplayName Closing Thoughts When all is said and done, I would choose the Log Analytics data to report on this, I know it\u0026rsquo;s 24 hours behind, but how often do you need to know the data at that single point in time?
And if you do, you could then always perform it semi-manually.\nThere are also many other ways you can manipulate this data for your needs; these are basic examples to set you on your way!\n","image":"https://hugo.euc365.com/images/post/daf/deploymentsPerDeviceFeature_huefc0c7c1cf085a68f039545df14c5611_19881_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/view-applicable-devices-driver-deployments/","tags":["Drivers","Firmware"],"title":"View Applicable Devices for Driver Deployments"},{"categories":["Driver and Firmware"],"contents":"It\u0026rsquo;s been a day or so since the release of the public preview of Driver and Firmware management (SEE PRESS RELEASE HERE), and many of you may have used the GUI to create your policies, and have been left scratching your head as to why you cannot specify a deferral period. Well, let me let you in on a secret, you CAN.\nIf you have looked at my Driver Management via Graph API and PowerShell post, I call it out within the Create an Update Policy section.\nSo, how do we add this to our policies so you don\u0026rsquo;t have to re-create your policies? Let\u0026rsquo;s take a look shall we.\nPrerequisites Permissions to connect to the Graph API with the following scopes WindowsUpdates.ReadWrite.All Microsoft.Graph PowerShell Module Connecting to the Graph API Connecting via this module could not be easier; follow the below steps after ensuring the Microsoft.Graph module is installed:\nLaunch a PowerShell prompt Enter Connect-MgGraph -Scopes WindowsUpdates.ReadWrite.All, hit Enter Sign in. If not already consented, you will be prompted with an image like the one below. You can choose to grant for yourself or your organisation if you have the permissions.
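The connection steps above boil down to a couple of lines; the scope is taken verbatim from the steps, while the Get-MgContext check at the end is just an optional extra I would suggest to confirm what was actually granted:

```powershell
# Connect with the scope required for driver/firmware policy changes
Connect-MgGraph -Scopes 'WindowsUpdates.ReadWrite.All'

# Optional sanity check: list the scopes on the current connection
(Get-MgContext).Scopes
```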
Microsoft Graph Profile Selection The first thing we need to do before running any commands is call Select-MgProfile -Name beta to ensure we are using the Beta endpoint of the Microsoft Graph.\nAdd/Update Deferral Dates Throughout the rest of this article we will be referring to a script called Update-PatchDeferrals.ps1; this script can be found by using the GitHub Resource link below.\nThe aim of this script is to simplify the process as much as possible for everyone. I will step through the manual process so the understanding of what is happening is there, but for simplicity, the use of the script (or the function within) will be the better option.\nTo update a policy using the script, it simply needs to be called as follows;\nUpdate-PatchDeferrals.ps1 -updatePolicyID 'cc4fbe71-a024-41c2-a99f-559dcde6e916' -deferralTime 'PT5D'\nLooking through the Function The basic premise of this script is to use the current complianceChangeRules, but only modify the durationBeforeDeploymentStart property, which controls the deferral date. I thought the simplest way to do this was to pull in the current configuration, and amend only the durationBeforeDeploymentStart property. Let\u0026rsquo;s take a look at how we achieve this then, shall we.\nFirst of all, we have our Mandatory parameters, updatePolicyID and deferralTime. The update policy ID can be found by looking at the Listing Update Policies section in my original article.\nThe deferralTime parameter is the one that gives us what we need. This needs to be formatted in the ISO8601 standard for durations (LINK); for example, if we added PT1H, that would defer the update for an hour before offering.
If we were to use P10D, that would defer the offer for 10 days.\n[CmdletBinding()] param ( # The Update Policy ID [Parameter(Mandatory = $true)] [string] $updatePolicyID, # ISO8601 Timeformat for Deferral [Parameter(Mandatory = $true)] [string] $deferralTime ) The first thing the function then does in the begin section is form the base object for the request body used later in the script; it then populates the $complianceChangeRules variable with the current settings from the policy.\nbegin { #The Base Object for the post Body $paramBody = @{ \u0026#34;@odata.type\u0026#34; = \u0026#34;#microsoft.graph.windowsUpdates.updatePolicy\u0026#34; complianceChangeRules = @() } # Create the param body base $complianceChangeRules = (Invoke-MgGraphRequest -Uri \u0026#34;https://graph.microsoft.com/beta/admin/windows/updates/updatePolicies/$updatePolicyID\u0026#34; -Method GET).complianceChangeRules } The process section is where we combine the $paramBody and the $complianceChangeRules objects, and then for each object in the complianceChangeRules array, update the deferral time.\nprocess { $paramBody.complianceChangeRules += $complianceChangeRules $paramBody.complianceChangeRules | foreach-object { $_.durationBeforeDeploymentStart = $deferralTime } } Finally, within the end section, we send the $paramBody object to the Graph API as a PATCH request to update the deferral on the policy.\nend { Invoke-MgGraphRequest -Uri \u0026#34;https://graph.microsoft.com/beta/admin/windows/updates/updatePolicies/$updatePolicyID\u0026#34; -Method PATCH -Body $paramBody } This deferral will only apply to updates approved Automatically after the change has been made. Any current approvals are unaffected.\nClosing Thoughts From what I have been seeing in the community, there seems to be an air of expectation that this would have been released with Intune UI Capability.
However, what has been released is the foundation of the building; it is everything that underpins the structural walls of what is to come! Without what we have today, the roof could never be added!\n","image":"https://hugo.euc365.com/images/post/daf/deferralsFeatured_hu80ef44dc91ea78c34e9dd486ddd5d526_19709_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/add-offer-deferrals-driver-firmware-policies/","tags":["Drivers","Firmware","Updates"],"title":"Add Offer Deferrals to Driver and Firmware Policies"},{"categories":["Driver and Firmware","Graph API","Updates","PowerShell"],"contents":"Microsoft have now released Driver and Firmware Update management via the Graph API to the public!! This is one small step for some, but a giant leap in terms of the management of Drivers and Firmware using the Windows Update for Business Deployment Service (WUfB DS).\nMany of you may have been anticipating this for a while, as there have been various posts about it, like the Tech Community Post from March 2021.\nWhile the solution isn\u0026rsquo;t yet baked into Intune, it is on the horizon and anticipated to land in preview some time this year (but don\u0026rsquo;t hold me to it!).
The Driver and Firmware team though are committed to delivering the solution components via WUfB DS for organisations and SMBs to start to take control of their environments.\nThe solution may only be configurable via the Graph API; however, the product team behind the solution have delivered an application that can be used to drive this in a GUI format. Take a look at my VLOG Post to find out how to configure this.\nWithout further ado though, let\u0026rsquo;s take a look at how we can interact with the service with the Graph API and PowerShell.\nPrerequisites Permissions to connect to the Graph API with the following scopes WindowsUpdates.ReadWrite.All Permission to view DeviceIDs in Azure AD Microsoft.Graph PowerShell Module Connecting to the Graph API Connecting via this module could not be easier; follow the below steps after ensuring the Microsoft.Graph module is installed:\nLaunch a PowerShell prompt Enter Connect-MgGraph -Scopes WindowsUpdates.ReadWrite.All -ContextScope Process, hit Enter Sign in. If not already consented, you will be prompted with an image like the one below. You can choose to grant for yourself or your organisation if you have the permissions. Microsoft Graph Profile Selection The first thing we need to do before running any commands is call Select-MgProfile -Name beta to ensure we are using the Beta endpoint of the Microsoft Graph.\nManaging Update Policies At the time of writing this article, the Policy Name was not available in the Graph API; this change is expected to be implemented in the future, but all of the work in this article is based on IDs. If you use the GUI, the Policy Names only exist on that device; if you were to try and use another machine with the same configuration the names would not appear.\nSo before we get to looking at the graph calls and creating policies, it is important to know that each Update Policy has an Update Audience which holds the Azure AD DeviceID.
Each device then also needs to be enrolled into the driver updateCategory.\nMake sure you use the DeviceID from Azure AD, and not the Object ID.\nListing Update Policies As mentioned in the note at the start of this section, there are no policy names stored in the API, so knowing the IDs for the policies is imperative if you wish to undertake certain operations on a specific policy. For me, I store these in a hash table in my IDE as follows;\n$policyMap = @{ Test = \u0026lt;TestGUID\u0026gt; Pilot = \u0026lt;PilotGUID\u0026gt; Production = \u0026lt;ProductionGUID\u0026gt; } This not only helps me understand which ID I am interacting with, but it allows me to utilise this for some mapping to audiences later down the line.\nTo list all of the policies within your environment, you can run (Invoke-MgGraphRequest -Method GET -Uri \u0026quot;https://graph.microsoft.com/beta/admin/windows/updates/updatePolicies\u0026quot;).Value which will return something like the following;\nName Value ---- ----- audience {id, applicableContent} id \u0026lt;GUID\u0026gt; createdDateTime 03/02/2023 16:17:44 deploymentSettings {schedule, monitoring, expedite, userExperience...} autoEnrollmentUpdateCategories {driver} complianceChangeRules {} You can also call the policy directly if you know the ID by running (Invoke-MgGraphRequest -Method GET -Uri \u0026quot;https://graph.microsoft.com/beta/admin/windows/updates/updatePolicies('\u0026lt;GUID\u0026gt;')\u0026quot;).Value, replacing the GUID placeholder.\nListing Deployment Audience Now that\u0026rsquo;s great, we can see the policies, but what is the deployment audience you speak of? Well, let\u0026rsquo;s look at that.
If you re-run the command above but with a few changes as follows, you will see the deployment audience id.\nPS C:\\\u0026gt; (Invoke-MgGraphRequest -Method GET -Uri \u0026#34;https://graph.microsoft.com/beta/admin/windows/updates/updatePolicies\u0026#34;).Value[0].audience Name Value ---- ----- id 8f636944-xxxx-xxxx-xxxx-a8abd4179687 applicableContent {} Once you have the deployment audience ID, you can see the members of the audience by running (Invoke-MgGraphRequest -Method GET -Uri \u0026quot;https://graph.microsoft.com/beta/admin/windows/updates/deploymentAudiences('GUID')/members\u0026quot;).Value, replacing the GUID placeholder with the id, which will return something like the below;\nName Value ---- ----- id xxxxxx-15f2-xxxx-b195-316d6xxxxxx @odata.type #microsoft.graph.windowsUpdates.azureADDevice errors {} enrollments {System.Collections.Hashtable, System.Collections.Hashtable} id xxxxxx-15f2-xxxx-b195-316d7xxxxxx @odata.type #microsoft.graph.windowsUpdates.azureADDevice errors {} enrollments {System.Collections.Hashtable, System.Collections.Hashtable} These IDs relate to the Azure AD DeviceID property; if you further expand one of the values using (Invoke-MgGraphRequest -Method GET -Uri \u0026quot;https://graph.microsoft.com/beta/admin/windows/updates/deploymentAudiences('8f636944-xxxx-xxxx-xxxx-a8abd4179687')/members\u0026quot;).Value[0].enrollments you will see the device is on-boarded for Driver management.\nName Value ---- ----- @odata.type #microsoft.graph.windowsUpdates.updateManagementEnrollment updateCategory feature @odata.type #microsoft.graph.windowsUpdates.updateManagementEnrollment updateCategory driver Ok, so now we\u0026rsquo;ve seen them listed, let\u0026rsquo;s look at how to conceptually create a policy from the ground up.\nCreating an Update Policy At the start of the Managing Update Policies section we mentioned the fact that each update policy requires an audience right?
Well you cannot create a policy without an audience, so that is the first item on the agenda.\nCreating an Update Audience You CANNOT add members to an audience during its creation; this has to be done once it is created.\nCreating an Audience is easy, it doesn\u0026rsquo;t require a post body as such, just a blank JSON object. In the below example we are going to create this and assign it to a variable that will be used in other parts of the process.\n$daAudience = Invoke-MgGraphRequest -Uri \u0026#34;https://graph.microsoft.com/beta/admin/windows/updates/deploymentAudiences\u0026#34; -Method POST -Body @{} -ContentType \u0026#39;application/json\u0026#39; # Calling the variable will return the response PS C:\\\u0026gt; $daAudience Name Value ---- ----- @odata.context https://graph.microsoft.com/beta/$metadata#admin/windows/updates/deploymentAudiences/$entity id dbe37901-xxxx-xxxx-xxxx-4745de6ee147 applicableContent {} Adding Members to the policy The recommended device limit is 2000 when using the Graph API.\nNow we have our audience, you will want to add devices right? For this you will need a list of your device IDs in an object as below, and then make the post request to add the members to the audience.\n$addMembersPostBody = @{ addMembers = @( @{ \u0026#34;@odata.type\u0026#34; = \u0026#34;#microsoft.graph.windowsUpdates.azureADDevice\u0026#34; id = \u0026lt;DeviceID\u0026gt; } @{ \u0026#34;@odata.type\u0026#34; = \u0026#34;#microsoft.graph.windowsUpdates.azureADDevice\u0026#34; id = \u0026lt;DeviceID2\u0026gt; } ) } Invoke-MgGraphRequest -Method POST -Uri \u0026#34;https://graph.microsoft.com/beta/admin/windows/updates/deploymentAudiences(\u0026#39;$($daAudience.id)\u0026#39;)/updateAudience\u0026#34; -Body $addMembersPostBody -ContentType \u0026#39;application/json\u0026#39; Ok, we are kind of close, but as of yet, no cigar. There are two ways to enrol your devices, Implicitly or Explicitly.
You can create your update policies to Implicitly enroll devices; however, the larger the number of devices (more likely during initial on-boarding), the longer this will take. You also have to bear in mind that this is a global service, so if everyone relies on implicit enrolment, your devices will be enrolled more slowly.\nIt is recommended that devices are Explicitly enrolled as per the Explicitly Enrolling Devices section below.\nExplicitly Enrolling Devices While there isn\u0026rsquo;t a limit on the number of devices you can post, the more devices you add, the longer it will take to complete the request.\nEnrolling a device is somewhat similar to adding members; you need an object with your devices in it to be used as the body of the Graph call. A sample of this object would look like the following snippet, with the invoked call following.\n$enrollPostBody = @{ updateCategory = \u0026#34;driver\u0026#34; assets = @( @{ \u0026#34;@odata.type\u0026#34; = \u0026#34;#microsoft.graph.windowsUpdates.azureADDevice\u0026#34; id = \u0026lt;DeviceID\u0026gt; } @{ \u0026#34;@odata.type\u0026#34; = \u0026#34;#microsoft.graph.windowsUpdates.azureADDevice\u0026#34; id = \u0026lt;DeviceID2\u0026gt; } ) } Invoke-MgGraphRequest -Method POST -Uri \u0026#34;https://graph.microsoft.com/beta/admin/windows/updates/updatableAssets/enrollAssets\u0026#34; -Body $enrollPostBody -ContentType \u0026#39;application/json\u0026#39; Once enrolled, you can review the enrolment configuration as per the end of the Listing Deployment Audience section.\nCreating an update policy There are two types of policies to create, Manual and Automatic.
So in the drop-downs below there are code snippets that will create the policies for you.\nManual $manualUpdatePolicyParams = @{ \u0026#34;@odata.type\u0026#34; = \u0026#34;#microsoft.graph.windowsUpdates.updatePolicy\u0026#34; audience = @{ id = $daAudience.id } autoEnrollmentUpdateCategories = @( \u0026#34;driver\u0026#34; ) complianceChanges = @() deploymentSettings = @{ schedule = $null monitoring = $null contentApplicability = $null userExperience = $null expedite = $null } } Invoke-MgGraphRequest -Uri \u0026#34;https://graph.microsoft.com/beta/admin/windows/updates/updatePolicies\u0026#34; -Method POST -Body $manualUpdatePolicyParams -ContentType \u0026#39;application/json\u0026#39; Automatic $automaticUpdatePolicyParams = @{ \u0026#34;@odata.type\u0026#34; = \u0026#34;#microsoft.graph.windowsUpdates.updatePolicy\u0026#34; audience = @{ id = $daAudience.id } autoEnrollmentUpdateCategories = @( \u0026#34;driver\u0026#34; ) complianceChangeRules = @( @{ \u0026#34;@odata.type\u0026#34; = \u0026#34;#microsoft.graph.windowsUpdates.contentApprovalRule\u0026#34; durationBeforeDeploymentStart = \u0026#34;PT0S\u0026#34; contentFilter = @{ \u0026#34;@odata.type\u0026#34; = \u0026#34;#microsoft.graph.windowsUpdates.driverUpdateFilter\u0026#34; } } ) deploymentSettings = @{ schedule = $null monitoring = $null contentApplicability = @{ offerWhileRecommendedBy = @( \u0026#34;microsoft\u0026#34; ) safeguard = $null } userExperience = $null expedite = $null } } Invoke-MgGraphRequest -Uri \u0026#34;https://graph.microsoft.com/beta/admin/windows/updates/updatePolicies\u0026#34; -Method POST -Body $automaticUpdatePolicyParams -ContentType \u0026#39;application/json\u0026#39; There are a couple of properties that are worth noting.\nautoEnrollmentUpdateCategories - This array allows you to specify auto enrolment into the driver service, meaning if you forget to manually enrol them, it will take care of it for you, albeit slower.
durationBeforeDeploymentStart - This property specifies the deferral in ISO8601 format, e.g. P1D = 1 day, PT2H30M = 2 hours and 30 minutes. You can also dig further into these settings in the Graph API Documentation.\nListing Applicable Content Now we have our policies, let\u0026rsquo;s look at how to see the applicable content for devices in a deployment audience. We will use the same audience we created earlier, so we will continue to use the $daAudience variable.\nTo list the applicable content, you can run (Invoke-MgGraphRequest -Method GET -Uri \u0026quot;https://graph.microsoft.com/beta/admin/windows/updates/deploymentAudiences/$($daAudience.id)/applicableContent\u0026quot;).value and it will return all applicable content, with the respective applicable device ids. You can further expand into one of these values by appending either .value[0].catalogentry or .value[0].matchedDevices in place of the .value.\nManually Approving Driver Content While I am writing this up for visibility, I would highly recommend standing up a machine to host the GUI with the configuration as required.\nTo manually approve the driver content, you will first need to find the catalogID of the desired update.
This will take us on a journey back through the ecosystem, firstly finding the policy you want to review, followed by locating the audience id, and then reviewing the applicable content manually, noting the catalogEntry id.\nI won\u0026rsquo;t walk through the whole process, as the steps are laid out along the way in this article, but I will summarise the order below.\nLocate the Policy Locate the audience id Review the Applicable Content Note the catalogEntry id Once you have done the above, you can compile another object as per below to approve the content, replacing the CatalogEntryID and UpdatePolicyID placeholders with the correct values.\n$contentApprovalParams = @{ \u0026#34;@odata.type\u0026#34; = \u0026#34;#microsoft.graph.windowsUpdates.contentApproval\u0026#34; content = @{ \u0026#34;@odata.type\u0026#34; = \u0026#34;#microsoft.graph.windowsUpdates.catalogContent\u0026#34; catalogEntry = @{ \u0026#34;@odata.type\u0026#34; = \u0026#34;#microsoft.graph.windowsUpdates.driverUpdateCatalogEntry\u0026#34; id = \u0026lt;CatalogEntryID\u0026gt; } } } Invoke-MgGraphRequest -Uri \u0026#34;https://graph.microsoft.com/beta/admin/windows/updates/updatePolicies/\u0026lt;UpdatePolicyID\u0026gt;/complianceChanges\u0026#34; -Method POST -Body $contentApprovalParams -ContentType \u0026#39;application/json\u0026#39; Closing Thoughts It has been a long time in the making, but gosh, it\u0026rsquo;s going to help so many organisations, and in the future it will be a must-have implementation. But for now, let\u0026rsquo;s feed back to the Microsoft Team and get this service in use to ensure we can build a brighter and better future for device management.\nTime to sign off on this one now. We\u0026rsquo;ve covered a lot in this article, and there is surely more to come.
I always love hearing from people, so please comment, share and give feedback on the article :).\n","image":"https://hugo.euc365.com/images/post/daf/graphDAF_hud59e2bc3e261e6e8ada28944555bddc7_60265_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/driver-management-graph-api/","tags":["Drivers","Firmware"],"title":"Driver Management via Graph API and PowerShell"},{"categories":["VLOG","Driver and Firmware"],"contents":"In this video, we’ll walk you through:\nInstalling required components Configuring the App Registration Securing your Web Application with User or Group Access Launching the GUI A brief look around the GUI Useful Links Node.JS Downloads WUfB DS Web Application Repo Graph API Documentation Driver Management via Graph API and PowerShell ","image":"https://hugo.euc365.com/images/post/vlog/wufbdsVlogFI_huc9af4502a5f53400e7f943d6ca6e35da_172901_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/vlog-driver-firmware-updates-public-preview-gui-configuration/","tags":["Drivers","Firmware","Windows","Updates"],"title":"VLOG - Driver and Firmware Updates Preview GUI Configuration"},{"categories":["Windows 365"],"contents":"Windows 365 and Azure Network Connections offer additional control over routing, IP Addresses, Subnets, Peerings and so on. They can also be incredibly useful to help with testing a move to Azure AD Only (Can you tell, I\u0026rsquo;m already not a fan of HAADJ), as with the right configuration on your Azure vNet you can connect back to your internal network and get straight to testing.\nUsing an ANC with Windows 365 and Cloud PCs does bring an additional cost, not to the license or compute though, just the network usage cost.
Microsoft do recommend using their Hosted Network, but they are not fools; they have built a system for the enterprise, and allowing the capability of Bring-Your-Own-Network (BYON) is a clear no-brainer for organisations with existing on-premise infrastructure.\nBYON doesn\u0026rsquo;t just have to be used for a gateway connection back to on-premise though; it can be used simply to add more control over the network traffic on these endpoints with firewalls etc. There is some further documentation on Azure Network Connections over on the Microsoft Learn Page.\nFor this post, I wanted to explore creating these connections with PowerShell and the Graph API. We will explore both Azure AD Only and Hybrid Joined ANCs and the differences in the policy creation.\nPrerequisites Azure Subscription Azure Virtual Network (vNet) Owner Permissions on the Subscription where the Network Resides Be an Intune Administrator Hybrid Azure AD Join If the chosen deployment option is HAADJ, you will need to ensure the network has line of sight to your domain controllers and that DNS can be resolved.\nIn addition to the above, you will require a domain Service Account which can join devices to the domain.\nCreating the Policies You may need to grant permissions to the PowerShell module within your Tenant.
If this is the case and you do not have access to grant the relevant permissions, please reach out to an administrator who can grant these rights.\nIn both instances, there are details we require for the creation of the connection, so let\u0026rsquo;s grab them first.\nSubscriptionID - The Subscription ID in which the Virtual Network Resides Resource Group Name - The name of the resource group where the Virtual Network resides Virtual Network Name - The name of the Virtual Network vNet Subnet Name - The name of the Subnet which resides within the vNet In addition to the above, for Hybrid machines you will also require the following;\nDomain Join Service Account OU Path Domain DNS Suffix (e.g. euc365.lab) As noted in the prerequisites, ensure you have Owner rights at the time of creation on the Subscription.\nThe scripts quoted in this article are available on GitHub; please use the link below.\nFirst of all, let\u0026rsquo;s look at creating a NON Hybrid connection. If you download the files from the GitHub Repo, this will be the create-w365-ANC-aad.ps1 script.\nIf you open it up and take a look inside, you can see there is no major magic happening; it is just taking our passed-in variables and building up the $params variable, which will be used as the body of our Graph Call.\nOnce the call is made, Windows 365 then runs checks on the Network connection to make sure everything is good to run machines from; for that reason you cannot know for sure if it will work straight away, which is why at the end there is a DO/WHILE loop to ensure we know the final outcome.\nThese tests can vary in the amount of time taken to complete, but I haven\u0026rsquo;t seen one take more than 10 Minutes.\nIf we look at how we can then execute this script using the command line, we would use the following command with the placeholders replaced with their representational values.\ncreate-w365-ANC-aad.ps1 -subscription \u0026lt;SubscriptionID\u0026gt; -resourceGroupName \u0026lt;RGName\u0026gt;
-vNetName \u0026lt;vNetName\u0026gt; -subnetName \u0026lt;subnetName\u0026gt; -ancName \u0026lt;ancName\u0026gt;\nOnce executed, you will see the following in the shell window.\nThe Network connection will also be available in the Windows 365 blade within Intune.\nNow if we take a look at the HAADJ Scenario, there are a few more variables required, which are $domainOU, $domainDNSSuffix, $domainJoinUser and $domainJoinUserPWD. The $domainOU variable must be populated with the distinguished name of the OU, for example OU=Devices,DC=EUC365,DC=LAB, and the $domainJoinUser must be the user principal name, for example [email protected].\nThe password will be plain text.\nWe execute this script in the same way as the above, with the additional parameters as shown below with the additional placeholders.\ncreate-w365-ANC-haadk.ps1 -subscription \u0026lt;SubscriptionID\u0026gt; -resourceGroupName \u0026lt;RGName\u0026gt; -vNetName \u0026lt;vNetName\u0026gt; -subnetName \u0026lt;subnetName\u0026gt; -ancName \u0026lt;ancName\u0026gt; -domainOU \u0026lt;e.g OU=Devices,DC=EUC365,DC=LAB\u0026gt; -domainDNSSuffix \u0026lt;e.g euc365.lab\u0026gt; -domainJoinUser \u0026lt;e.g [email protected]\u0026gt; -domainJoinUserPWD \u0026quot;iamplaintext\u0026quot;\nThe platform will then again perform the relevant checks to ensure it can connect to the domain, look up DNS etc., so please ensure your vNet and subnet are configured accordingly for this test to pass.\nConclusion This is only one piece of the puzzle, but what I would say is, if you do not need one for a certain purpose (i.e. routing back to on-premise) do not create one.
It adds another level of management and complexity where it is not required, and if you use the motto of less is more, then you can save on management in the long run.\nPlease leave comments and reactions below, and let me know your experiences.\n","image":"https://hugo.euc365.com/images/post/w365/ancFeature_hu9332d07731226b04fa6e5eb784787c2b_18764_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/create-windows-365-azure-network-connections-powershell/","tags":["Windows 365","Intune","Cloud PC","PowerShell","Graph API"],"title":"Create Windows 365 Azure Network Connections with PowerShell"},{"categories":["Microsoft Intune","Microsoft Store Apps"],"contents":"While performing some work out in the wild, it was noted that one of the Microsoft Store Apps (New) would not install unless excluded from all policies. The app in this case was the Company Portal app, which is pretty vital for self-service application delivery. The error displayed was 0x00000000 in the Intune console, as below.\nSo I put down my configuration hat, brushed the dust off the troubleshooting hat, got it nicely placed on my noggin and got stuck in.\nFinding the Problem As with all troubleshooting the first step is Logs, so let\u0026rsquo;s open up C:\\ProgramData\\Microsoft\\IntuneManagementExtension\\Logs\\IntuneManagementExtension.log. I did my troubleshooting using Notepad\u0026hellip; if you\u0026rsquo;re going to troubleshoot, you need to do it hardcore right?
But you can open it in any editor or CMTrace etc.\nOnce launched, if you search for WinGet you will be able to locate the events that relate to the \u0026lsquo;New\u0026rsquo; Store App deployments via Intune.\nLooking through the logs I came across the lovely gem below;\nThis was somewhat of a relief to see, as it had actual error information, rather than just the 0x00000000 error code.\nFollowing on from finding this, I opened up PowerShell as the user and tried to install the app manually for additional validation, and I got the below;\nTaking a quick look at the configuration applied to the machine, I noted that there was an ADMX-backed policy that Turned Off the Microsoft Store.\nI didn\u0026rsquo;t want to go through the hassle of changing the policy off the bat, I wanted to validate my thinking. I located the policy it controls in the registry under the HKEY_LOCAL_MACHINE\\SOFTWARE\\Policies\\Microsoft\\WindowsStore Key. The value I was looking for was RemoveWindowsStore. When located, as shown below, the value was set to 1.\nA quick change to set it to 0, and then back over to PowerShell to try it out again. And lo and behold, it works first time.\nThis is then identified by Intune as installed, even after reverting the registry key back to 1.\nConclusion I am by no means suggesting this is the only thing that causes the 0x00000000 error code. It has also been noted that it can be caused by Insufficient Connectivity to the store, which in the cases I\u0026rsquo;ve seen looks to be caused by proxy/network configuration.
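If you would rather script the registry check described above than open regedit, a minimal sketch might look like the following; the key path and value name are the ones quoted earlier, and the snippet simply reads the value rather than changing it:

```powershell
# Hedged sketch: read the Store policy value discussed above.
# Returns $null if the policy value is not present on the machine.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\WindowsStore'
(Get-ItemProperty -Path $key -Name 'RemoveWindowsStore' -ErrorAction SilentlyContinue).RemoveWindowsStore
```

A value of 1 matches the blocked state seen in the log, and 0 (or no value at all) means the Store is not being turned off by this policy.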
I am sure this will not be the last of the phantom errors either, so please comment below if you come across this in any other scenario to help keep this post updated.\nI hope you find this useful and it solves your problem.\n","image":"https://hugo.euc365.com/images/post/winget/0x0/wingetSuccess_hub2c1ae989c73ab9e916ee8602b3da013_436836_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/troubleshooting-ms-store-app-install-error-0x00000000/","tags":["WinGet","Intune","Application Deployment"],"title":"Troubleshooting New MS Store App Install Error 0x00000000"},{"categories":["VLOG"],"contents":"In this video, we’ll walk you through:\nSelf service methods of Access Approval Steps Bringing this into your SLAM Process Why group based assignments are optimal Useful Links Entitlement Management (Access Packages) Getting Started with Windows 365 Enterprise Windows 365 Business vs Enterprise Comparison Windows 365 Pricing and Plans Microsoft Flow \u0026amp; Power Automate ","image":"https://hugo.euc365.com/images/post/vlog/w365AccessSolutions_hu11dcf5261802c0f6b320834264855c4b_179218_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/vlog-windows-365-access-solutions/","tags":["Windows 365","Intune","Windows","Entitlement Management"],"title":"VLOG - Windows 365 Access Solutions"},{"categories":["Windows 365"],"contents":"Windows 365 (Cloud PC) is the hot topic of the moment, and why wouldn\u0026rsquo;t it be? For many years, before most can remember, it has been the norm for IT Departments to Image, configure and deploy Laptops and Workstations for end-users.\nHowever, over the past couple of years, with COVID and other factors, the world has moved to a more distributed workforce, where country boundaries are no longer the border of employment. 
Companies invested in further development of their virtualisation offerings, such as Azure Virtual Desktop, VMware Horizon, Citrix etc.; however, all of these solutions require specialised skill sets and are very complex and often high maintenance.\nMicrosoft has taken this one step further and created a PaaS (Platform as a Service) solution that integrates with and is managed via Microsoft Intune, delivering the \u0026lsquo;Single Pane of Glass\u0026rsquo; management that many businesses seek. The beauty of this is that you do not need to maintain complex infrastructure and configuration sets or perform platform upgrades (including hardware replacements).\nAs demonstrated in my VLOG post Getting Started with Windows 365 Enterprise, the deployment of a Windows 365 Machine is Quick, Simple and Easy. There are additional components that Windows 365 is capable of using, such as Customer Managed Networks and Custom Images, which add other complexities, but these are optional.\nOrganisational Advantages and Use Cases Windows 365 and Cloud PCs can benefit organisations in many ways, from onboarding users in remote locations to exploring an Azure AD Join strategy without requiring complex setups of multiple physical machines, VMs etc.\nAs Cloud PCs are subscription-based, it is simple to cancel the subscription, and all the costs for that license will no longer be billed. The user can use the Cloud PC until the end of the license term, and then the platform destroys the machine after a brief grace period.\nOne of the most significant use cases for the platform is Temporary Staff (Contractors, Consultancies, short-term employees etc.), as it removes the cost of purchasing additional hardware. If the end-user is in a remote location, it also saves on shipping costs and the risk of damage or total loss of the device.\nLet\u0026rsquo;s look at this scenario. Think of an end-user having an issue with a device whereby the device requires re-provisioning. 
While this is still a simple task on a well-constructed Modern Implementation of Device Management, an endpoint could be out of action for two or more hours. This also assumes the user does not request additional support. Well, with Cloud PCs, the IT Administrator can re-provision the machine within the Intune UI, and within an hour, the endpoint will be back up and running, ready for the user to be productive once again.\nMaybe re-provisioning isn\u0026rsquo;t the desired approach for every scenario, so Windows 365 also offers device restore points, which are configurable by administrators, and users can even restore the devices themselves using the WebUI or the Windows 365 Application.\nIf you are using Intune and Autopilot to manage devices, you can also use the same configuration policies (minus BitLocker) to manage these devices. Windows 365 Machines also support the use of ESP Policies, so they can be managed the same way as standard physical endpoints; you can read more on this in the Microsoft Documentation.\nDepending on the implementation strategy, it can be integrated into your current SLAM (Starters, Leavers and Movers) process with ease. For example, if you have a Microsoft Form that triggers a Flow, you could add a step to add the new account to the Windows 365 Licensed Group that is in the scope of a provisioning policy, and the machine creation will trigger and be available in a short space of time.\nThere are so many other use cases and benefits to using Windows 365, and this post is to get you started on your journey and get people thinking about Why Windows 365.\nI use a Cloud PC to perform all of my development activities; this allows me to install all of the software I use without ever worrying about it being on my physical device, which, as we all know, can have unexpected issues and sometimes require a reset. 
It also allows me to separate my workloads and have a more Powerful Cloud PC and a less powerful device if needed.\nAccessibility Users can access Windows 365 from any device (Windows, Mac, iOS/iPadOS, Android and Linux), using the Web UI or OS-specific applications, making the platform readily available to users no matter where they are or what device they are using.\nHybrid Join Although I would recommend that this is avoided, there are still use cases where this may be a necessary evil. Windows 365 does support Hybrid Joining clients to the Windows Domain, and all of the information can be found in the Microsoft Documentation.\nConclusion I cannot rave about Cloud PC and Windows 365 enough; it has really changed how I work. I hope this post has been useful and highlights the reasons to implement and invest in Windows 365 for your organisations.\nSummary of Benefits Cost Saving Scalability Ease of Management No need to carry around multiple devices Easy to integrate with current process No need for an additional policy set ","image":"https://hugo.euc365.com/images/post/w365/whyW365_hue7ea29f65ba39963d65e6c21905bfacd_38262_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/why-windows-365/","tags":["Windows 365","Intune","Cloud PC"],"title":"Why Windows 365?"},{"categories":["Graph API","Windows 365","Powershell"],"contents":" This article was composed using Windows 365 Enterprise licences; the experience may differ if using Windows 365 Business.\nWindows 365 is still a relatively \u0026lsquo;New Kid\u0026rsquo; on the block; however, it is developing at a rapid pace, with the technology now being used behind the scenes of DevBox. Windows 365 offers the ability to deploy machines at scale for the workforce without having to worry about the underlying maintenance within data centres etc.\nIn this post we will look at some of the basics using the Graph API natively, and also the Microsoft Graph PowerShell Module. 
We will look at SKUs and Provisioning Policies, and by the end of this article you should be able to deploy your Windows 365 profiles with ease.\nPrerequisites To get started you will need the following;\nVisual Studio Code (or another IDE) The following PowerShell Modules Microsoft.Graph MSAL.PS Assumptions There is an assumption made that you as the system administrator have the necessary permissions to perform the actions mentioned.\nConnecting to the Graph API As we will be covering two different ways of interacting with the service, we will look at two types of authentication. First up, we will look at how to obtain a bearer token (Access Token) for use with direct endpoint invocation (using PostMan or Invoke-RestMethod), followed by the simplicity of connecting with the Microsoft Graph Module.\nIf your organisation restricts creating applications in Azure, you may need to take additional measures to be able to authenticate.\nBearer Token (Access) This is where the MSAL.PS module is required; you can create your own Azure AD App Registration for this, however for this guide I will utilise the Microsoft PowerShell App Registration with defined scopes.\nLaunch a PowerShell Prompt Enter $Token = Get-MsalToken -ClientId d1ddf0e4-d672-4dae-b554-9d5bdfd93547 -Scopes CloudPC.ReadWrite.All -RedirectUri \u0026quot;urn:ietf:wg:oauth:2.0:oob\u0026quot;, hit Enter Sign in. If not already consented, you will be prompted with an image as below. You can choose to grant for yourself or your organisation if you have the permissions. If you now call $Token.AccessToken, this will be the bearer token we will use. 
If you call $Token.ExpiresOn you will be able to see the lifespan of the token, which is usually 1 hour.\nWe will come back to using this further down the article, as at the moment we just need to get connected.\nMicrosoft.Graph PowerShell Module Connecting via this module could not be easier; follow the below steps after ensuring the Microsoft.Graph module is installed.\nLaunch a PowerShell prompt Enter Connect-MgGraph -Scopes CloudPC.ReadWrite.All, hit Enter Sign in. If not already consented, you will be prompted with an image as below. You can choose to grant for yourself or your organisation if you have the permissions. That\u0026rsquo;s it, no Redirect URIs or Client App IDs to remember, just clean authentication.\nMaking your first call These methodologies can be used across the Graph API, by amending the scopes, URIs (for Access Tokens) and using alternate Microsoft.Graph Module commands.\nThis section will cover how to make your first Graph API Call; in this instance we will be listing all of the Windows 365 (Cloud PCs).\nBearer Token (Access) Method Using this method is great for more advanced users, but it has its pitfalls nonetheless; for example, the Graph API does have a limit on the number of resources it returns before adding an @odata.nextLink to the response. So to cover this scenario, I will talk about two ways to make this call, that way you have all of the tools you need.\nBasic Call If you look at the code snippet in the Basic Call collapse below, you will see it\u0026rsquo;s already commented for ease. 
The basic premise of this is to build up the call using an object so you do not have a long-winded command to run.\nSo let\u0026rsquo;s look at the $GraphParams object: as you can see, we are making a GET request to the URI with a set of specified Headers.\nYou will see, within the Headers object, that we are calling $Token.AccessToken to place the Bearer token in the authorization header.\nBasic Call #Build up the Restmethod Parameters $GraphParams = @{ Method = \u0026#34;GET\u0026#34; #Perform a GET Action URI = \u0026#34;https://graph.microsoft.com/beta/deviceManagement/virtualEndpoint/cloudPCs\u0026#34; #Against this Endpoint Headers = @{ Authorization = \u0026#34;Bearer $($Token.AccessToken)\u0026#34; Accept = \u0026#34;application/json\u0026#34; } #Using the Token as the Authorisation header, and accept only a JSON object in return } #Invoke the request $GraphRequest = Invoke-RestMethod @GraphParams -ErrorAction Stop #View Values $GraphRequest.value If you take a look at the demonstration below, you will see that this returns the devices that you have within your environment.\nHandling Next Links I won\u0026rsquo;t dig into this too much as it\u0026rsquo;s a nice bonus; the premise is the same, however with the addition of an array and a while loop this code snippet will recursively gather your data.\nHandling NextLinks #Build up the Restmethod Parameters $GraphParams = @{ Method = \u0026#34;GET\u0026#34; #Perform a GET Action URI = \u0026#34;https://graph.microsoft.com/beta/deviceManagement/virtualEndpoint/cloudPCs\u0026#34; #Against this Endpoint Headers = @{ Authorization = \u0026#34;Bearer $($Token.AccessToken)\u0026#34; Accept = \u0026#34;application/json\u0026#34; } #Using the Token as the Authorisation header, and accept only a JSON object in return } #Invoke the request $GraphRequest = Invoke-RestMethod @GraphParams -ErrorAction Stop $All_GraphRequests = @() #Create a blank array $All_GraphRequests += $GraphRequest #Add the original request results #While there is a 
NextLink Available, loop through and append the array. while ($GraphRequest.\u0026#39;@odata.nextLink\u0026#39;) { $GraphRequest_NextLink = @{ Method = \u0026#34;GET\u0026#34; URI = $GraphRequest.\u0026#39;@odata.nextLink\u0026#39; Headers = @{ Authorization = \u0026#34;Bearer $($Token.AccessToken)\u0026#34; Accept = \u0026#34;application/json\u0026#34; } } $GraphRequest = Invoke-RestMethod @GraphRequest_NextLink -ErrorAction Stop $All_GraphRequests += $GraphRequest } $All_GraphRequests.Value #View Results Microsoft.Graph Call The first thing we need to do before running any commands is call Select-MgProfile -Name beta to ensure we do not run into any issues.\nTo achieve the same as above with the Microsoft.Graph module, simply run Get-MgDeviceManagementVirtualEndpointCloudPC\nLet\u0026rsquo;s take a look at how this looks, shall we?\nHandling Next Links You will be flabbergasted at how complex this is\u0026hellip; Honestly. All you need to do is add the -All parameter.\nBuilding the basics Now we can authenticate and have run our first call, let\u0026rsquo;s put some of those skills into practice and create a provisioning policy.\nMicrosoft.Graph Module For this basics blog, I am only going to focus on using the Microsoft Hosted Network and AzureAD Joined devices.\nLet\u0026rsquo;s get going, shall we? One of the first things we need to do is select what gallery image we want. There are two types of images, OS Optimized (light) in terms of the Graph API and Microsoft 365 Apps (heavy). For this example we will be using the heavy type.\nIf we first run Get-MgDeviceManagementVirtualEndpointGalleryImage it will list the available images in the gallery, but we only want one for the import. So let\u0026rsquo;s filter to a specific image using Where-Object {($_.RecommendedSku -EQ \u0026quot;heavy\u0026quot;) -and ($_.DisplayName -match \u0026quot;11\u0026quot;) -and ($_.SkuDisplayName -eq \u0026quot;22H2\u0026quot;)}. This will return the Windows 11, 22H2 Microsoft 365 image as shown in the preview below. 
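It is worth noting that the @odata.nextLink pattern from the Handling NextLinks snippet is not PowerShell-specific. A minimal language-agnostic sketch of the same loop (Python, with a caller-supplied fetch function standing in for Invoke-RestMethod; names are illustrative only) might look like this:

```python
def collect_all_pages(fetch, url):
    """Follow @odata.nextLink pages until exhausted, accumulating the
    'value' items from each page. 'fetch' is any callable that takes a
    URL and returns the parsed JSON page as a dict (in real use it would
    send the request with the bearer token in the Authorization header)."""
    items = []
    while url:
        page = fetch(url)
        items.extend(page.get("value", []))
        url = page.get("@odata.nextLink")  # absent on the last page
    return items

# Demonstration with two fake pages instead of live Graph API calls.
pages = {
    "page1": {"value": [1, 2], "@odata.nextLink": "page2"},
    "page2": {"value": [3]},
}
print(collect_all_pages(pages.get, "page1"))  # [1, 2, 3]
```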
So let\u0026rsquo;s assign the whole command to a $galleryImage variable.\n$galleryImage = Get-MgDeviceManagementVirtualEndpointGalleryImage | Where-Object {($_.RecommendedSku -EQ \u0026quot;heavy\u0026quot;) -and ($_.DisplayName -match \u0026quot;11\u0026quot;) -and ($_.SkuDisplayName -eq \u0026quot;22H2\u0026quot;)}\nNow we have our selected image, we can create a very basic provisioning policy. If you look inside the Create Provisioning Policy collapse below you will see the code snippet which will create your provisioning policy.\nAs mentioned, we will only be focusing on AzureAD Joined machines; as you can see below, when you specify you want it AAD only, you will need to specify a region.\nCreate Provisioning Policy $params = @{ DisplayName = \u0026#34;PowerShell Demo5\u0026#34; Description = \u0026#34;\u0026#34; ImageId = $galleryImage.id ImageType = \u0026#34;gallery\u0026#34; MicrosoftManagedDesktop = @{ Type = \u0026#34;notManaged\u0026#34; } DomainJoinConfiguration = @{ Type = \u0026#34;azureADJoin\u0026#34; RegionName = \u0026#34;automatic\u0026#34; RegionGroup = \u0026#34;usWest\u0026#34; } } $provisioningPolicy = New-MgDeviceManagementVirtualEndpointProvisioningPolicy -BodyParameter $params After creating our provisioning policy, we will want to assign this to an Azure AD Group; for this you will need the ObjectID of the group. 
Once you have the ID, you can amend the code snippet below to add the assignment.\nProvisioning Policy Assignment $assignmentParams = @{ Assignments = @( @{ Target = @{ GroupId = \u0026#34;\u0026lt;GROUPID\u0026gt;\u0026#34; } } ) } Set-MgDeviceManagementVirtualEndpointProvisioningPolicy -CloudPcProvisioningPolicyId $provisioningPolicy.id -BodyParameter $assignmentParams I would advise using group-based licensing and using that group to assign the provisioning profile to, as that way the machine will provision when a user is dropped into that group.\nNative Endpoints So now we have fleshed this out with the PowerShell module, let\u0026rsquo;s take a look at doing this using your access token and the native endpoints.\nI will break this down in the collapse sections below. One of the things that you will notice is that on the creation of the provisioning policy and the assignment snippets, we switch from a GET to a POST method and we also add in the ContentType = \u0026quot;application/json\u0026quot; property to ensure the policy gets created without any errors.\nGet Gallery Image $GraphParams = @{ Method = \u0026#34;GET\u0026#34; URI = \u0026#34;$graphEndpoint/deviceManagement/virtualEndpoint/galleryImages\u0026#34; Headers = @{ Authorization = \u0026#34;Bearer $($Token.AccessToken)\u0026#34; Accept = \u0026#34;application/json\u0026#34; } } #Invoke the request $GraphRequest = Invoke-RestMethod @GraphParams -ErrorAction Stop #View Values $galleryImage = $GraphRequest.value | Where-Object {($_.RecommendedSku -EQ \u0026#34;heavy\u0026#34;) -and ($_.DisplayName -match \u0026#34;11\u0026#34;) -and ($_.SkuDisplayName -eq \u0026#34;22H2\u0026#34;)} Create Provisioning Policy $params = @{ DisplayName = \u0026#34;PowerShell Demo\u0026#34; Description = \u0026#34;\u0026#34; ImageId = $galleryImage.id ImageType = \u0026#34;gallery\u0026#34; MicrosoftManagedDesktop = @{ Type = \u0026#34;notManaged\u0026#34; } DomainJoinConfiguration = @{ Type = \u0026#34;azureADJoin\u0026#34; 
RegionName = \u0026#34;automatic\u0026#34; RegionGroup = \u0026#34;usWest\u0026#34; } } $GraphParams = @{ Method = \u0026#34;POST\u0026#34; URI = \u0026#34;$graphEndpoint/deviceManagement/virtualEndpoint/provisioningPolicies\u0026#34; Headers = @{ Authorization = \u0026#34;Bearer $($Token.AccessToken)\u0026#34; Accept = \u0026#34;application/json\u0026#34; } ContentType = \u0026#34;application/json\u0026#34; Body = ($params | ConvertTo-Json -Depth 5) } #Invoke the request $GraphRequest = Invoke-RestMethod @GraphParams -ErrorAction Stop #View Values $provisioningPolicyID = $GraphRequest.id Add Assignment $assignmentParams = @{ Assignments = @( @{ Target = @{ GroupId = \u0026#34;\u0026lt;GROUPID\u0026gt;\u0026#34; } } ) } $GraphParams = @{ Method = \u0026#34;POST\u0026#34; URI = \u0026#34;$graphEndpoint/deviceManagement/virtualEndpoint/provisioningPolicies/$($provisioningPolicyID)/assign\u0026#34; Headers = @{ Authorization = \u0026#34;Bearer $($Token.AccessToken)\u0026#34; Accept = \u0026#34;application/json\u0026#34; } ContentType = \u0026#34;application/json\u0026#34; Body = ($assignmentParams | ConvertTo-Json -Depth 5) } #Invoke the request $GraphRequest = Invoke-RestMethod @GraphParams -ErrorAction Stop Next Steps Now you have the provisioning policy effectively \u0026lsquo;as code\u0026rsquo;, you can put this together in a PowerShell Script and create consistent deployments.\nIf you want to provision a machine, ensure the user has a license assigned (Direct or Group Based) and is within Scope of the provisioning policy.\nAt the time of writing this article, a user can only provision devices using one provisioning policy. 
For example, if you have a CloudPC provisioned with Demo1Policy but you then assign Demo2Policy and another Licence SKU, the new SKU will provision with Demo1Policy.\nConclusion I hope this article has been useful for you; there is also a link below to a script that contains the snippets used in this article.\nFurther Reading CloudPC Graph API Beta Reference Windows 365 Enterprise Documentation Windows 365 Supported Regions ","image":"https://hugo.euc365.com/images/post/w365/w365graphPS_hu878ffe44869f258a6370c5933422c221_41130_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/windows-365-graph-api-powershell-basics/","tags":["Graph API","Windows 365","Powershell"],"title":"Windows 365 - Graph API and PowerShell Basics"},{"categories":["VLOG"],"contents":" In this video, we’ll walk you through Configuring Windows Update for Business Reports, Creating a Device Configuration Profile for Windows Devices, Ways to Access the Reports and a brief look at using the Logs and Kusto Queries (KQL).\nLinks WUfB Reports Prerequisites Configuring Clients with Microsoft Intune Using WUfB Reports WUfB Reports Schema Check out our YouTube channel here: https://www.youtube.com/@euc365 Don’t forget to subscribe!\n","image":"https://hugo.euc365.com/images/post/vlog/blogthumb_WUFBR_hu346e0f00d030600cbed24384886149a0_187420_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/vlog-wufb-reports/","tags":["Updates","Windows"],"title":"VLOG - Windows Update for Business Reports"},{"categories":["Graph API","PowerShell","Microsoft Intune"],"contents":"Many organisations are starting to adopt cloud technologies, some of which decide to start again with a clean slate and add in policies where necessary. However, some organisations still look to migrate their complex Group Policies.\nI would recommend taking the approach of a clean slate, aligned to relevant framework(s) such as NCSC, NIST or CIS, and applying only relevant policies thereafter. 
CIS: CIS Critical Security Controls Cloud Companion Guide (cisecurity.org) NCSC: Windows - NCSC.GOV.UK NIST: NATIONAL CHECKLIST PROGRAM\nThis post is to aid any IT Administrator in achieving these goals, whether that be to analyse all of your current policies or only a selection of policies. There will be a script to export the Group Policies from a specific OU, and a script to recursively import the XML files to Intune utilising the Graph API.\nThe scripts detailed in this post are available on GitHub.\nGetting Started Permissions Intune One of the following permissions is required to use Group Policy Analytics.\nIntune Administrator Any role that includes the Security Baseline permission Group Policy This article assumes the Administrator has access to read and export the GPOs within the targeted scope. The export script will need to be run on an endpoint with Group Policy Management Tools installed Running the Script(s) All scripts will need to be run with the Execution Policy of the PowerShell terminal set to bypass. If preferred, scripts can be launched prefixed with the below;\npowershell.exe -executionpolicy Bypass -File\nExport Group Policies This script is used to export Group Policy Objects using PowerShell. 
When executing the script you will need to specify which OU ( -OU ) you want to export the policies from and also the folder ( -GPOFolder ) where you want the exports to be stored.\nThe script is extensible, so if you want to widen the scope or make amendments, make it work for you.\nIf you execute the script with a command like the below, you will see the policy GUIDs that are exported, as displayed in the clip.\n\u0026quot;\u0026lt;Path\u0026gt;\\Get-LinkedGPOs.ps1\u0026quot; -OU \u0026quot;OU=Managed_Devices,DC=Domain,DC=LAB\u0026quot; -GPOFolder \u0026quot;$env:SystemDrive\\Temp\\GPOs\u0026quot;\nImporting Group Policies to Group Policy Analytics Importing the policy exports to Group Policy Analytics is just as simple as exporting them, with the use of the Import-GroupPolicyAnalyticsPolicy.ps1 script. This script was designed for a specific purpose: to save time and clicks!\nFor this example, we will start by taking a look at some of the parameters that are used upon launch.\nGPOFolder (Mandatory): This parameter is to be used to point the script to your .XML files. Recurse: If you have group policies nested inside other folders, this parameter is advised to recursively import them. LogOutputLocation: A location for the created logfile output, default is C:\\Temp. TenantID: If you are calling this script for any other tenant, other than the one you have previously logged into, you will need to specify the TenantID. UseDeviceAuthentication: Offers the ability to use Device Authentication. 
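To make the -GPOFolder and -Recurse behaviour concrete, here is an illustrative sketch of the folder walk (Python rather than the script's PowerShell; the function name is hypothetical):

```python
from pathlib import Path

def find_gpo_exports(folder, recurse=False):
    """Collect exported GPO .xml files from a folder; with recurse=True
    the search also walks nested folders, mirroring the -Recurse switch."""
    pattern = "**/*.xml" if recurse else "*.xml"
    return sorted(str(p) for p in Path(folder).glob(pattern))
```

Pointing this at the export folder with recurse=True would pick up XML files nested in sub-folders, just as the import script does when -Recurse is supplied.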
This script will only import Group Policies with unique names; it performs a check on the names prior to import.\nOnce you are ready, you can execute the script with the relevant parameters; for example, you can run the following command to import the policies.\n\u0026quot;\u0026lt;Path\u0026gt;\\Import-GroupPolicyAnalyticsPolicy.ps1\u0026quot; -GPOFolder \u0026quot;$env:SystemDrive\\Temp\\GPOs\u0026quot; -UseDeviceAuthentication\nResults This results in the Group Policy being available within Intune, as below;\nConclusion This was quite a good one to pull together; I hope you can put it to good use and make your life a lot easier and less painstaking.\nPlease leave feedback and comments below if you would like to see more things like this.\nFurther Reading For further reading on Group Policy Analytics, please review the Microsoft Documentation\n","image":"https://hugo.euc365.com/images/post/grouppolicy/gpoAnalytics_hu0d1abe62d3af20e61022459fc5af455f_70449_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/import-gpos-group-policy-analytics-graph-api/","tags":["Graph API","Intune","PowerShell","Group Policy"],"title":"Import GPOs to Group Policy Analytics using Graph API"},{"categories":["Graph API","PowerShell","Microsoft Intune"],"contents":"Often, during the initial adoption of Microsoft Intune, you will see organisations and admins try to maintain some of their technical debt of old. One of those areas is often the device naming conventions. 
In the modern management world, tracking assets via this method is long outdated, and if you are using Hybrid Azure AD Joined Devices you end up with an entirely new challenge on your hands anyway.\nThis post is aimed at organisations and admins who have decided to remove some of this technical debt and move towards using the options available within the Deployment Profiles for Autopilot.\nIn this post we will be using a PowerShell Script with the Microsoft.Graph module to achieve our goal and also take a backup of the devices\u0026rsquo; previously specified DisplayNames.\nYou can obtain the script we will use from my Git Repo by using the link below.\nAssumptions and Getting Started The current script will remove ALL Display Names from devices within your tenant; by all means customise the logic to ensure this only handles devices in an array etc., but my need was to remove this from the entire fleet of devices on a tenant.\nAn assumption is made that your devices have the following value set on their Autopilot entity.\nThere is also an assumption that you have the relevant rights to perform this action and also grant application consent to run the PowerShell script.\nRunning the Script This part is fairly simple; however, you will need to run this under a PowerShell session that is at least in bypass mode. My recommendation, instead of changing the execution policy for PowerShell in its entirety, is to run it with the following command in an elevated PowerShell prompt.\npowershell.exe -executionpolicy Bypass -File \u0026quot;\u0026lt;Path\u0026gt;\\Remove-AutopilotDisplayNameProperty.ps1\u0026quot; -LogOutputLocation \u0026quot;$env:ProgramData\\Logs\u0026quot;\nOnce the script has completed you will have a log created in your specified location, which will look something like this.\nConclusion I hope this script can prove useful to you, your peers and also your organisation. 
Please don\u0026rsquo;t forget to comment and/or provide feedback below.\n","image":"https://hugo.euc365.com/images/post/autopilot/remove_ap_identity_hu30c4bc797a171d211907a9c505d94a3f_50109_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/remove-autopilot-displayname/","tags":["Graph API","Intune","PowerShell","Autopilot"],"title":"Bulk Remove Autopilot DisplayName Property"},{"categories":["VLOG"],"contents":"A look into a \u0026lsquo;From the Ground Up\u0026rsquo; approach to Windows 365, including Licensing, differences in Product SKUs, configuration and also three ways to access Windows 365.\nUseful Links Business vs Enterprise Comparison Windows 365 Pricing and Plans Gallery Image Information Windows 365 Data Encryption Access Links: Windows 365 Preview Store App Remote Desktop Clients Windows 365 Web Access ","image":"https://hugo.euc365.com/images/post/vlog/gsw365_huf91e7b40b955d513d36a4fb0b7f781f7_273453_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/vlog-gs-windows365/","tags":["Windows 365","Intune","Windows"],"title":"VLOG - Getting Started with Windows 365 Enterprise"},{"categories":["VLOG"],"contents":"This video goes over what Windows Autopatch is, and how the tenant enrollment flow happens.\n","image":"https://hugo.euc365.com/images/post/vlog/gsap_hu488b5017bdcff61b49bb65d2b4cef499_275805_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/vlog-windows-autopatch/","tags":["Autopatch","Intune","Updates"],"title":"VLOG - Getting Started with Windows Autopatch"},{"categories":["Azure","PowerShell"],"contents":"Self-Service Password Reset is just one of many features that reduce the pressure on support staff. Often users and admins get frustrated when it comes to resetting passwords: \u0026ldquo;Can you try Bf756dsgT!\u0026rdquo; Short Pause\u0026hellip; \u0026ldquo;Is that F for foxtrot or S for sugar?\u0026rdquo;. 
Once you get past this stage, the user then has to type it again and then think of a new password; the whole process is just sub-optimal.\nWith today\u0026rsquo;s cloud infrastructure you can relieve both end users and admins of this stress and streamline the process with Self-Service Password Reset, or SSPR for short. I am not going to tell you that this is a silver bullet that will clear out all password reset calls with one shot, as it won\u0026rsquo;t. The key to the success of SSPR and the ROI is stakeholder buy-in and great communication.\nI have seen SSPR used each time a user needs to update their password. This is due to focusing on a Passwordless strategy, which provides a more secure method of authentication. If this is a goal for you, then this may be a piece in your puzzle!\nFor those of you awesome ladies and gentlemen that follow me on Twitter, or have seen my recent VLOGs, you may have seen that I blew my entire lab away and started afresh, with the aim of blogging/vlogging/tweeting about elements of the rebuild along the way.\nThis time around, I chose to use Cloud Sync as my gateway to hybrid identities as it is lightweight, provides a more seamless High-Availability offering and fits perfectly with what I want to achieve, so this will be the area we focus on in this post for SSPR.\nPre-Requisites Azure AD tenant with at least an Azure AD Premium P1 or trial license enabled. If needed, create one for free. Global Administrator Account Azure AD Connect cloud sync version 1.1.972.0 or later Configuring SSPR Enable Self-Service Password Reset This may seem an obvious step, but I have often seen it missed.\nHead over to the Azure Active Directory Portal Click Azure Active Directory in the left-hand pane Click Password Reset On the Properties page you will see the below options; ensure you configure this to suit your organisational needs. For this Lab I will be setting it to All. 
Once you have made your selection, click Save.\nConfigure On-Premise Integration On the assumption that you are still on the Password Reset blade from the above section.\nClick on On-premises integration Select Enable password write back for synced users Select Write back password with Azure AD Connect Cloud Sync Click Save Personally, I would leave the Allow users to Unlock accounts without resetting their passwords un-selected, but this would be a decision you can take away to discuss with peers and the organisation.\nPowerShell You can also use PowerShell to configure Password Writeback; however, when using PowerShell to complete this you will not see it visually in the Azure Portal (or at least you couldn\u0026rsquo;t at the time of publishing this article).\nLogon to the Server hosting the Agent Launch an Administrative PowerShell Prompt Run the following commands; Import-Module 'C:\\\\Program Files\\\\Microsoft Azure AD Connect Provisioning Agent\\\\Microsoft.CloudSync.Powershell.dll' Set-AADCloudSyncPasswordWritebackConfiguration -Enable $true -Credential $(Get-Credential) Enter your Global Administrator credentials Using SSPR Using SSPR is super simple: all the user has to do is browse to https://aka.ms/sspr and enter their username, complete the captcha and then follow the prompts to use one (or two) of their chosen security methods, and then they can enter a new password.\nThe whole process takes about 1-2 minutes. This is often quicker than the wait in the queue for a support staff call.\nIf, after completing the configuration, you receive error SSPR_010 when attempting SSPR, try turning SSPR off and on again (Yes!! Really!!).\nThank you to Maurice Daly for his input on this one!! I was searching for a mountain and missing a mole hill. 
Sandy Zeng also had a similar issue with Azure AD Connect previously; take a look at Sunday debug: password reset failed for the things Sandy tried and the process she went through.\n","image":"https://hugo.euc365.com/images/post/azure/ssprFeaturedImage_hu7fe8d9aaffdc6a568026ee45c593d736_72505_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/configure-service-password-reset-cloud-sync/","tags":["Azure","Azure AD","Identity"],"title":"Configure Self-Service Password Reset with Cloud Sync"},{"categories":["VLOG"],"contents":"What is this video about? This video goes over the configuration of Azure AD Connect Cloud Sync, with some insight into what the differences are with its bigger brother AADC, the Pre-Reqs and some other insights.\n","image":"https://hugo.euc365.com/images/post/vlog/CloudSyncFeature_hu267c69196e8e9904407f898eff51afae_672666_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/vlog-azuread-connect-cloud-sync/","tags":["Azure","Azure AD","Identity"],"title":"VLOG - Azure AD Connect Cloud Sync"},{"categories":["VLOG"],"contents":"My first Vlog Post This video is the first of what I hope to be many Vlogs. I recently destroyed my old Lab setup, and I thought I would Vlog the steps I take to re-create it, along with some useful information along the way when I start connecting my Lab up with Azure and Intune.\nWhat is this video about? 
This video explains how I isolate my \u0026lsquo;Lab\u0026rsquo; network from my external network to avoid any DHCP and/or DNS crossover, providing a contained environment behind a single NAT address using Windows Server Routing and Remote Access.\nPlease provide any feedback, good, bad or indifferent to help me improve my delivery and bring you content that is meaningful.\n","image":"https://hugo.euc365.com/images/post/vlog/isolatelabnetwork_hu15c42a98a054587f32a47154ed8f9d41_351379_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/vlog-isolate-hyper-v-lab-environment/","tags":["Hyper-V","Windows Server","Lab"],"title":"VLOG - Isolate your Hyper V Lab Environment from your network"},{"categories":["Graph API","PowerShell","Teams"],"contents":"Why use an API to create a Teams Channel? Using APIs and Automation accounts helps create a continuously repeatable process whilst minimising human error and providing a consistent experience. It has many purposes, whether that be providing a script to a managed client or creating a team from a form submission, the list could go on, and if you\u0026rsquo;ve found this post by organic search, then it must be at least in some way what you are looking for.\nSo how do we do it? This guide will focus on using PowerShell to call the Graph API using the Microsoft.Graph module. However, the key takeaway is that this can be achieved via alternative API calling methods.\nWhat will we be deploying? We will look at deploying a Microsoft Team, with an additional channel, whilst also removing Tabs from the channels and adding custom tabs for Web Links.\nPre-requisites Microsoft.Graph PowerShell Module Teams Administrator (or equivalent/higher) privileges Building up the channel Object Before we can POST anything to the Graph API, we need to start by building up our Team and channels. In the drop down below, there is a sample of the $params object which will later be used to create a Team. 
We will be referencing back to this throughout this section.\nTeam Object $params = @{ \u0026#34;[email protected]\u0026#34; = \u0026#34;https://graph.microsoft.com/beta/teamsTemplates(\u0026#39;standard\u0026#39;)\u0026#34; Visibility = \u0026#34;Private\u0026#34; DisplayName = $TeamName Description = \u0026#34;This Teams Channel will be used for collaboration.\u0026#34; Channels = @( @{ DisplayName = \u0026#34;General\u0026#34; IsFavoriteByDefault = $true Description = \u0026#34;This channel will be used for communication purposes\u0026#34; Tabs = @( @{ \u0026#34;[email protected]\u0026#34; = \u0026#34;https://graph.microsoft.com/v1.0/appCatalogs/teamsApps(\u0026#39;com.microsoft.teamspace.tab.web\u0026#39;)\u0026#34; DisplayName = \u0026#34;Microsoft Intune\u0026#34; Configuration = @{ ContentUrl = \u0026#34;https://endpoint.microsoft.com\u0026#34; } } ) }, @{ DisplayName = \u0026#34;Service Announcements\u0026#34; IsFavoriteByDefault = $true Description = \u0026#34;This tab will be used for things like Third Party Patching and other Service Related Alerts\u0026#34; } ) MemberSettings = @{ AllowCreateUpdateChannels = $true AllowDeleteChannels = $true AllowAddRemoveApps = $true AllowCreateUpdateRemoveTabs = $true AllowCreateUpdateRemoveConnectors = $true } GuestSettings = @{ AllowCreateUpdateChannels = $true AllowDeleteChannels = $true } FunSettings = @{ AllowGiphy = $true AllowStickersAndMemes = $true AllowCustomMemes = $true } MessagingSettings = @{ AllowUserEditMessages = $true AllowUserDeleteMessages = $true AllowOwnerDeleteMessages = $true AllowTeamMentions = $true AllowChannelMentions = $true } DiscoverySettings = @{ ShowInTeamsSearchAndSuggestions = $true } InstalledApps = @( @{ \u0026#34;[email protected]\u0026#34; = \u0026#34;https://graph.microsoft.com/v1.0/appCatalogs/teamsApps(\u0026#39;com.microsoft.teamspace.tab.web\u0026#39;)\u0026#34; } @{ #The invoke webhook \u0026#34;[email protected]\u0026#34; = 
\u0026#34;https://graph.microsoft.com/v1.0/appCatalogs/teamsApps(\u0026#39;203a1e2c-26cc-47ca-83ae-be98f960b6b2\u0026#39;)\u0026#34; } ) } So let\u0026rsquo;s look at some of the main properties;\n\u0026ldquo;[email protected]\u0026rdquo; - This is the Teams template you want to base your Team on. This can be a custom template, or an in-built one. CLICK HERE: Other Inbuilt Template Types CLICK HERE: For information on custom templates Visibility - Your channel visibility, either public or private. DisplayName - The display name of the Team you want to create Description - A brief description of the purpose of this team. Channels - The channels you want to create within the Team. There are other options within the object, which are comparable to their GUI counterparts; I have left them in the object to make it easy to update these values if you need to change them.\nChannels Let us explore the channel array a bit further, this is where you create additional channels within the team. This is also the section where you will add any custom tabs you may want, as demonstrated within the object.\nEach channel will be an object within the channel array, and as before, there are some basic properties like DisplayName and Description, then you have the IsFavoriteByDefault property, which controls whether the channel is displayed or hidden upon creation based on a boolean input. Then you have Tabs, where you can add apps.\nYou can find the Apps available to add to this array by calling the TeamsApp API. An example query would be GET https://graph.microsoft.com/beta/appCatalogs/teamsApps?$expand=appDefinitions($select=id,displayName,allowedInstallationScopes). 
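The example query above can be run with the same Microsoft.Graph module used later in this post; a minimal sketch (the AppCatalog.Read.All scope is an assumption, use whichever scope your tenant requires):

```powershell
# Assumes the Microsoft.Graph module is installed; the scope below is an assumption
Connect-MgGraph -Scopes 'AppCatalog.Read.All'

# The example query from above; single quotes stop PowerShell expanding $expand/$select
$uri = 'https://graph.microsoft.com/beta/appCatalogs/teamsApps?$expand=appDefinitions($select=id,displayName,allowedInstallationScopes)'
$apps = (Invoke-MgGraphRequest -Method GET -Uri $uri).value

# List each app ID and display name so you can pick values for the Tabs array
$apps | ForEach-Object { '{0} - {1}' -f $_.id, $_.appDefinitions.displayName }
```

The IDs returned here are what you bind to in the [email protected] URLs within the Tabs and InstalledApps arrays.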
Using this query you could find the app IDs.\nMy recommendation for this would be to export a template that already has the application within it and obtain the values you need to ensure you enrich the app properly with configurations, alternatively seek these configuration values from the app vendor.\nAll of the apps within the array are defined as objects, as with the channels. If we look at the below object as an example, you can see the Teams app is bound to a URL similar to the one above, followed by a DisplayName and the Configuration for the App. In the below example I will be creating a Tab for the Microsoft Intune Console.\n@{ \u0026#34;[email protected]\u0026#34; = \u0026#34;https://graph.microsoft.com/v1.0/appCatalogs/teamsApps(\u0026#39;com.microsoft.teamspace.tab.web\u0026#39;)\u0026#34; DisplayName = \u0026#34;Microsoft Intune\u0026#34; Configuration = @{ ContentUrl = \u0026#34;https://endpoint.microsoft.com\u0026#34; } } If you prefer to use the direct API with a JSON object from Graph Explorer or Postman you can use the following command to convert your object to JSON.\n$params | ConvertTo-Json -Depth 5\nNOTE: MAKE SURE YOU FILL OUT YOUR VARIABLES WHERE THEY ARE CALLED WITHIN THE OBJECT\nOK, so now we\u0026rsquo;ve explored the channels, let\u0026rsquo;s explore how we POST it to the Graph API with PowerShell.\nCreate the Team with PowerShell One thing I found when creating the Team via the Graph API is that you will only receive a success status code when posting the object to the Graph. This is because the API is more like an orchestrator, which means we need to do some additional bits to track the creation. 
This is more of a requirement if you want to amend the team after creation, for things like removing the Wiki tab etc, which will all be described in the following sections.\nAs mentioned in the Pre-requisites, we will need the Microsoft.Graph PowerShell module, you can install this by running Install-Module -Name Microsoft.Graph -AllowClobber in an elevated shell, or append with -Scope CurrentUser from a non-elevated prompt.\nPowerShell Script Example [CmdletBinding()] param ( [Parameter(DontShow = $true)] [Array] $ModuleNames = @(\u0026#34;Microsoft.Graph.Teams\u0026#34;), # Teams Channel Name [Parameter(Mandatory = $true)] [String] $TeamName ) #TeamsAdmin and Groups admin Required FOREACH ($Module in $ModuleNames) { IF (!(Get-Module -ListAvailable -Name $Module)) { try { Write-Output \u0026#34;Attempting to install $Module Module for the Current Device\u0026#34; Install-Module -Name $Module -Force -AllowClobber } catch { Write-Output \u0026#34;Attempting to install $Module Module for the Current User\u0026#34; Install-Module -Name $Module -Force -AllowClobber -Scope CurrentUser } } Import-Module $Module } $params = @{ \u0026#34;[email protected]\u0026#34; = \u0026#34;https://graph.microsoft.com/beta/teamsTemplates(\u0026#39;standard\u0026#39;)\u0026#34; Visibility = \u0026#34;Private\u0026#34; DisplayName = $TeamName Description = \u0026#34;This Teams Channel will be used for collaboration.\u0026#34; Channels = @( @{ DisplayName = \u0026#34;General\u0026#34; IsFavoriteByDefault = $true Description = \u0026#34;This channel will be used for communication purposes\u0026#34; Tabs = @( @{ \u0026#34;[email protected]\u0026#34; = \u0026#34;https://graph.microsoft.com/v1.0/appCatalogs/teamsApps(\u0026#39;com.microsoft.teamspace.tab.web\u0026#39;)\u0026#34; DisplayName = \u0026#34;Microsoft Intune\u0026#34; Configuration = @{ ContentUrl = \u0026#34;https://endpoint.microsoft.com\u0026#34; } } ) }, @{ DisplayName = \u0026#34;Service Announcements\u0026#34; 
IsFavoriteByDefault = $true Description = \u0026#34;This tab will be used for things like Third Party Patching and other Service Related Alerts\u0026#34; } ) MemberSettings = @{ AllowCreateUpdateChannels = $true AllowDeleteChannels = $true AllowAddRemoveApps = $true AllowCreateUpdateRemoveTabs = $true AllowCreateUpdateRemoveConnectors = $true } GuestSettings = @{ AllowCreateUpdateChannels = $true AllowDeleteChannels = $true } FunSettings = @{ AllowGiphy = $true AllowStickersAndMemes = $true AllowCustomMemes = $true } MessagingSettings = @{ AllowUserEditMessages = $true AllowUserDeleteMessages = $true AllowOwnerDeleteMessages = $true AllowTeamMentions = $true AllowChannelMentions = $true } DiscoverySettings = @{ ShowInTeamsSearchAndSuggestions = $true } InstalledApps = @( @{ \u0026#34;[email protected]\u0026#34; = \u0026#34;https://graph.microsoft.com/v1.0/appCatalogs/teamsApps(\u0026#39;com.microsoft.teamspace.tab.web\u0026#39;)\u0026#34; } @{ #The invoke webhook \u0026#34;[email protected]\u0026#34; = \u0026#34;https://graph.microsoft.com/v1.0/appCatalogs/teamsApps(\u0026#39;203a1e2c-26cc-47ca-83ae-be98f960b6b2\u0026#39;)\u0026#34; } ) } Connect-MgGraph $Team = Invoke-MgGraphRequest -Uri \u0026#34;https://graph.microsoft.com/beta/teams\u0026#34; -Body $params -Method POST -OutputType HttpResponseMessage #Wait while the team is created, this below link tracks the job. 
while ((Invoke-MGGraphRequest -URI \u0026#34;https://graph.microsoft.com/beta$($Team.Headers.Location.OriginalString)\u0026#34;).status -ne \u0026#34;succeeded\u0026#34;) { Start-Sleep 60 \u0026#34;Awaiting the team creation to complete...\u0026#34; } In the above drop-down you will see an example script, which contains the same object we have been working on previously in this post, so if you have started making your own object, simply replace the object in the example script.\nIn this section we will focus on everything after the object and then how the script can be invoked from the command line using parameters.\nThe first thing that we need to do is authenticate to the Microsoft Graph; we use the Connect-MgGraph command for this when using direct execution. For automation scenarios, please review the Microsoft Documentation.\nOnce we have authenticated, we use Invoke-MgGraphRequest to POST the param object to the Graph API. In this example, we assign this call to the $Team variable so we can then track the team creation.\nAfter the initial POST to the API, the example then uses a while loop to track the creation of the team. As mentioned in the tip at the start of this section, the API call to create the team is more of an orchestration API, which is the reason we need to go to the additional effort to track the progress of creation.\n#Wait while the team is created, this below link tracks the job. 
while ((Invoke-MGGraphRequest -URI \u0026#34;https://graph.microsoft.com/beta$($Team.Headers.Location.OriginalString)\u0026#34;).status -ne \u0026#34;succeeded\u0026#34;) { Start-Sleep 60 \u0026#34;Awaiting the team creation to complete...\u0026#34; } Once the operation has succeeded, you can then layer on additional customisations, such as removing the Wiki tab as shown in the example below.\n#Get the Teams ID from the Output of the header $TeamID = (Select-String -Pattern \u0026#34;\\\u0026#39;([^\\\u0026#39;]*)\\\u0026#39;\u0026#34; -InputObject $Team.Content.Headers.ContentLocation.OriginalString).Matches.Groups[1].Value #Get the Teams Channels for the new Team $TeamChannels = Get-MgTeamChannel -TeamId $TeamID #For Each of the Channels, remove the Wiki Tab and ensure they are all set to show by default ForEach ($Channel in $TeamChannels) { $wikiTab = (Get-MgTeamChannelTab -ChannelId $Channel.id -TeamId $TeamID | Where-Object {$_.DisplayName -eq \u0026#34;Wiki\u0026#34;}).id Remove-MGTeamChannelTab -TeamId $TeamID -ChannelID $Channel.id -TeamsTabId $wikiTab Update-MGTeamChannel -TeamId $TeamID -ChannelID $Channel.id -IsFavoriteByDefault } Conclusion As mentioned at the start, automation is the key to consistency when performing repetitive tasks. 
Hopefully this post can aid with the understanding of how to achieve an automated approach to creating Teams and Channels within your organisation.\nResources Teams Resource Beta Graph Reference Microsoft Graph PowerShell Module Documentation Inbuilt Template Types Custom Templates ","image":"https://hugo.euc365.com/images/post/teams/TeamsGraphFeaturedImage_hu6d0b03eee463f0955841d4d27954408c_23636_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/create-a-microsoft-team-with-graph-api-powershell/","tags":["Graph API","PowerShell","Teams","Microsoft Graph Module"],"title":"Create a Microsoft Team with Graph API and PowerShell"},{"categories":["Bicep","Azure Resource Manager"],"contents":"What is Bicep? Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. In a Bicep file, you define the infrastructure you want to deploy to Azure, and then use that file throughout the development lifecycle to repeatedly deploy your infrastructure. Your resources are deployed in a consistent manner.\n-Microsoft Why use Bicep over generic ARM Templates? Although generic JSON ARM Templates are still needed for Azure Resource Manager, using Bicep makes creating ARM templates more consumable by the masses as it uses very simple syntax compared to its big sister; see the example in the Benefits of Bicep section of the documentation.\nGetting started with Bicep Let\u0026rsquo;s assume that you already have an IaC (Infrastructure as Code) strategy, and you are already using ARM Templates for deployment. Sometimes it can be a struggle for other colleagues/teams to reverse engineer your complex templates. So you decide to take a look at Bicep to help make them more consumable to your colleagues and team members.\nWell, Microsoft and the Bicep team could not have made this simpler to achieve! 
Using the Bicep command line tools, you can easily decompile (convert) your standard JSON template to Bicep\u0026hellip; and the best bit, your JSON template file (if used) still works seamlessly.\nPre-requisites I will be using a Windows Client and VSCode for this post; if you are using a different setup, your experience may differ.\nPlease visit the Install Bicep tools page to install the Bicep tools.\nDecompile an ARM Template This really couldn\u0026rsquo;t be any simpler, but just for the sake of completeness before you see the warning anyway, the decompilation of the templates is on a best-efforts basis; if your template does not decompile then you can raise an issue on the link provided in your shell.\nOpen VSCode to the directory where your templates are stored Type bicep decompile \u0026lt;template\u0026gt;.json Review the output of the file The output file will be located in the same directory as the original file, but with a bicep extension. You can specify the output directory (which will need to exist first) by using the \u0026ndash;outdir parameter, or you can specify the file name by using the \u0026ndash;outfile parameter.\nYou can not specify both \u0026ndash;outdir and \u0026ndash;outfile together.\nCompile an ARM Template from Bicep Again, this is relatively straightforward. My biggest use case for this at the moment is when I want to publish a Deploy to Azure Button as this still requires a JSON template to be passed in.\nOpen VSCode to the directory where your templates are stored Type bicep build \u0026lt;template\u0026gt;.bicep Review the output of the file The output file will be located in the same directory as the original file, but with a JSON extension. 
You can specify the output directory (which will need to exist first) by using the \u0026ndash;outdir parameter, or you can specify the file name by using the \u0026ndash;outfile parameter.\nYou can not specify both \u0026ndash;outdir and \u0026ndash;outfile together.\nConclusion I enjoyed learning about Bicep over the course of a weekend, and going forward it will be my chosen language for building template files for Azure as it\u0026rsquo;s so much easier than ARM templates for others to read.\nWhy not start your journey using some of the resources below.\nWatch out for more Bicep and ARM posts coming soon.\nResources Bicep Documentation Bicep Microsoft Learn Modules Bicep with Pipelines ","image":"https://hugo.euc365.com/images/post/azure/FeaturedImage_hub0c9b0557b7c4046bd893566db26fc3c_36499_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/use-bicep-to-compile-or-decompile-arm-templates/","tags":["Infrastructure as Code","Bicep","ARM"],"title":"Use Bicep to Compile or Decompile ARM Templates"},{"categories":["Azure","Governance"],"contents":"What is Azure Policy? Azure Policy is a way to help organisations enforce standards and compliance. Azure Policy is commonly used for things like enforcing only certain allowed locations for resources, locating non-compliant Azure resources and for many, many other reasons.\nFor a full overview of Azure Policy, please take a look at the Microsoft Documentation.\nWhy would you want to deny resource creation? There could be many reasons, such as wanting to prevent the creation of Virtual Machines, Network Security Groups, Storage Accounts, to name a few. A requirement I came up against recently was that a client wanted to prevent the creation of new Virtual Machines inside a subscription.\nWhat is the Solution? 
The solution to the problem is to create an Azure Policy definition and assign it to your Management Group, Subscription or to a specific resource group.\nYou can assign a Policy Definition more than once; it can also be assigned at different levels each time.\nThere are various ways you can create the definition, for example, you can hardcode your parameters, you can make use of Azure strongTypes and have a drop down menu to select your resource types, or you can simply apply them in an anyOf array; see the different definition snippets, along with their assignment experiences, in the drop-downs below.\nanyOf Array { \u0026#34;mode\u0026#34;: \u0026#34;All\u0026#34;, \u0026#34;policyRule\u0026#34;: { \u0026#34;if\u0026#34;: { \u0026#34;anyOf\u0026#34;: [ { \u0026#34;field\u0026#34;: \u0026#34;type\u0026#34;, \u0026#34;equals\u0026#34;: \u0026#34;Microsoft.Compute/virtualMachines\u0026#34; }, { \u0026#34;field\u0026#34;: \u0026#34;type\u0026#34;, \u0026#34;like\u0026#34;: \u0026#34;Microsoft.Network*\u0026#34; } ] }, \u0026#34;then\u0026#34;: { \u0026#34;effect\u0026#34;: \u0026#34;deny\u0026#34; } }, \u0026#34;parameters\u0026#34;: {} } Assignment Experience Hardcoded Array { \u0026#34;mode\u0026#34;: \u0026#34;All\u0026#34;, \u0026#34;policyRule\u0026#34;: { \u0026#34;if\u0026#34;: { \u0026#34;anyOf\u0026#34;: [ { \u0026#34;field\u0026#34;: \u0026#34;type\u0026#34;, \u0026#34;in\u0026#34;: \u0026#34;[parameters(\u0026#39;deniedResouces\u0026#39;)]\u0026#34; } ] }, \u0026#34;then\u0026#34;: { \u0026#34;effect\u0026#34;: \u0026#34;deny\u0026#34; } }, \u0026#34;parameters\u0026#34;: { \u0026#34;deniedResouces\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;Array\u0026#34;, \u0026#34;metadata\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;Denied Resources\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;The list of resources that are denied for creation.\u0026#34; }, \u0026#34;allowedValues\u0026#34;: [ 
\u0026#34;Microsoft.Compute/VirtualMachines\u0026#34;, \u0026#34;Microsoft.Network/virtualNetworks/subnets\u0026#34; ] } } } Assignment Experience Azure Resource Drop-down (strongType) { \u0026#34;mode\u0026#34;: \u0026#34;All\u0026#34;, \u0026#34;policyRule\u0026#34;: { \u0026#34;if\u0026#34;: { \u0026#34;anyOf\u0026#34;: [ { \u0026#34;field\u0026#34;: \u0026#34;type\u0026#34;, \u0026#34;in\u0026#34;: \u0026#34;[parameters(\u0026#39;deniedResouces\u0026#39;)]\u0026#34; } ] }, \u0026#34;then\u0026#34;: { \u0026#34;effect\u0026#34;: \u0026#34;deny\u0026#34; } }, \u0026#34;parameters\u0026#34;: { \u0026#34;deniedResouces\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;Array\u0026#34;, \u0026#34;metadata\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;Denied Resources\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;The list of resources that are denied for creation.\u0026#34;, \u0026#34;strongType\u0026#34;: \u0026#34;resourceTypes\u0026#34; } } } } Assignment Experience Create an Azure Policy Definition Browse to the Azure Portal\nUse the search bar and locate Policy\nSelect Definitions from the left-hand pane\nClick + Policy definition\nEnter the following details;\nSelect your desired definition location Enter a descriptive name Add a description Select or Create a new category for your definition to live within Copy and paste your desired definition snippet from one of the above drop-downs. Click Save\nAssigning the Policy Once saved, you then have the ability to assign the Policy to a Subscription, Management Group or a Resource group within a subscription. You can view the assignment experience within the drop-downs above to see how it behaves depending on your chosen method.\nConclusion The route I would take would be to use the strongType list and name the definition something along the lines of \u0026lt;COMPANY SHORTCODE\u0026gt;-Deny Resource Types. 
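The definition and an assignment can also be created from the command line. A minimal sketch with the Az PowerShell module (the names and file paths are illustrative; rule.json and params.json are assumed to hold the policyRule and parameters sections of the strongType snippet above, and the parameter name deliberately matches the deniedResouces spelling used in that snippet):

```powershell
# Hypothetical files: rule.json = the "policyRule" section, params.json = the "parameters" section
$definition = New-AzPolicyDefinition -Name 'ACME-Deny Resource Types' `
    -DisplayName 'ACME-Deny Resource Types' `
    -Mode 'All' `
    -Policy '.\rule.json' `
    -Parameter '.\params.json'

# Assign the definition at subscription scope, passing in the resource types to deny
New-AzPolicyAssignment -Name 'deny-resource-types' `
    -Scope '/subscriptions/<subscription-id>' `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{ deniedResouces = @('Microsoft.Compute/virtualMachines') }
```

The same $definition object can be re-used in further New-AzPolicyAssignment calls against other Management Group, Subscription or Resource Group scopes.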
This will allow you to use a single definition to define which resources to deny in different Subscriptions, Management Groups or Resource Groups.\nResources Azure Policy Documentation Azure Policy definition structure ","image":"https://hugo.euc365.com/images/post/azurepolicy/mainicon_hu63a42c266a2c7f414152444b4ea2c821_3847_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/deny-resource-type-creation-azure-policy/","tags":["Compliance","Azure Policy"],"title":"Deny Resource Type Creation with Azure Policy"},{"categories":["Samsung Knox","Microsoft Intune","Mobile Device Management"],"contents":"Why disable MAC Randomization? In some scenarios setting static IP Addresses for mobile devices is a requirement (such as EPOS, Kiosks, Meeting Room Self-Service Tablets etc.). Most modern Mobile Devices are shipped with MAC Randomization enabled by default, and that is certainly the case for Samsung Tablets.\nThis was introduced back in Android 8.0 when probing for new networks; however, starting in Android 10 this was enabled by default for client mode activities as mentioned in the Android Documentation.\nMAC randomization prevents listeners from using MAC addresses to build a history of device activity, thus increasing user privacy.\n-Samsung Documentation What is the solution? The solution is to use the Knox Service Plugin from Samsung, coupled with an Intune OEM Configuration profile. By using both of these elements you can control various aspects of the device configuration, however, we are only going to cover MAC Randomization.\nMy colleague and I have seen issues with this when using Certificate Authentication for your WiFi. That is not to say it will not work, but you will have to configure the Knox Service Plugin to connect to your network with a certificate. 
For this post I will be focusing on the core configuration and a WPA2 network.\nPre-requisites To have the ability to disable MAC Randomization on Samsung Devices with Intune you will need the following;\nSamsung Knox Platform for Enterprise Commercial Key Samsung Knox Service Plugin Managed Google Play App Intune Licenses Intune Administrator Role (Custom RBAC Roles are not in scope) If you do not have a Samsung Knox license or account, take a look at my Getting Started with Samsung Knox for Enterprise post.\nImport the Samsung Knox Service Plugin App Browse to Microsoft Intune Select Apps from the left-hand pane, then select Android Select Managed Google Play app from the App type list, then press Select Search Knox Service Plugin, select the app shown below Select Approve, read the permission page, if you are happy click Approve Select Keep approved when app requests new permissions, then click Done Click Sync Allow up to 15 minutes for the application to appear, if it doesn\u0026rsquo;t appear go back to the Managed Google Play Store and click Sync again.\nOnce the application has synced assign this to your devices.\nObtain the commercial key for Knox Login to Samsung Knox Hover over Knox Platform for Enterprise, then click See License Select the Commercial Keys, locate the Knox Platform for Enterprise: Premium Edition key Copy the contents of the License Number field. 
Creating the OEM Config Profile Browse to Microsoft Intune Select Devices from the left-hand pane, then select Android Select Configuration Profiles Click Create Profile Platform: Android Enterprise Profile Type: OEMConfig Click Create Enter a Name and Description Click Select an OEMConfig app Select Knox Service Plugin, then click Select Configure the following settings: Profile name: \u0026lt;suitable name for your organization\u0026gt; KPE Premium or Knox Suite License key: The commercial key you obtained in the previous section Debug Mode: I would only change this to true during testing, I would not change this if your devices in production have the KSP installed. Locate Device-wide policies (Selectively applicable to Fully Manage Device (DO) or Work Profile-on company owned devices (WP-C) mode as noted), then click Configure Select true on the Enable device policy controls slider Locate Device customization controls (Premium), then click Configure Select true on the Enable device customization slider In the left-pane select Device-wide policies (Selectively applicable to Fully Manage Device (DO) or Work Profile-on company owned devices (WP-C) mode as noted) Locate Device Controls, then click Configure Locate Wi-Fi Policy, then click Configure and set the following settings Enable Wi-Fi policy controls: true Allow Automatic Wi-Fi Connection to saved SSIDs: true Allow Wi-Fi State Change: true Allow to configure Wi-Fi (Configure details below): true In the left-pane select Knox Service Plugin Locate Wi-Fi Configuration, then click Configure In the left-hand pane click the three ellipses (\u0026hellip;) next to Wi-Fi Configurations, then click Add Setting Enter your network details, then change the Skip Mac randomization slider to true Your Wireless Credentials can be seen in plain text when using a PSK.\nThe policy is now in a state to disable MAC randomization; complete the policy creation and add any scope tags and Assignments on the next pages. 
Testing Now the policy is created, if you haven\u0026rsquo;t already done so, assign this to a group that contains test devices. If you want visibility that the settings have applied properly, change the Debug mode setting to true within your OEMConfig policy.\nYou will also need to assign the Knox Service plugin to the same group as the OEMConfig Profile.\nYou can only assign one KSP profile to a device at any given time.\nOnce you deploy the application and configuration to the device, you will be prompted to agree to the licence terms first; without doing so the Knox Service Plugin will not function.\nAfter you have agreed the licence terms, launch the Knox Service Plugin app. Press on the Configuration on \u0026hellip;. section, here you will see the configuration applied to your device. If you press on the Configuration results in the top left-hand corner, and then select Policies received you will see the JSON representation of the policy you have defined.\nAs mentioned previously, you will see your PSK in plain text, as shown below.\nYou will now be able to see, on the network side, that this device is connecting with the correct MAC Address.\nPlease don\u0026rsquo;t forget to change debug mode to false before a production rollout\nConclusion There are many, many things you can do with the Knox Service Plugin, and I have been told by a Samsung Support rep that this can work with a Certificate Based network. However, I do not have the means to test, therefore it is not included in this guide. 
Should I have a requirement for it in the future you can bet your last dollar I will blog about it :).\n","image":"https://hugo.euc365.com/images/post/knox/knoxmacrandom_hude25847a0c83a85455a8238693b079eb_13513_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/post/disable-wifi-mac-randomization-samsung-devices/","tags":["Knox","Samsung"],"title":"Disable WiFi MAC Randomization on Samsung with Intune"},{"categories":["Samsung Knox","Microsoft Intune","Mobile Device Management"],"contents":"Why should I configure SSO? Configuring SSO in Knox Login to Samsung Knox\nIn the top-right hand corner, click your Avatar icon, then select Account Information\nOn the left-hand side click SSO Settings\nBrowse to the Azure Active Directory Portal\nIn the left-hand pane, click Azure Active Directory, then click Enterprise Applications\nClick New Application\nEnter Samsung Knox and Business Services into the search box Click the app, then click Create\nOnce created, click Users and group\nClick Add user/group, then click None Selected under Users and Groups to add your assignment.\nI recommend using a dedicated Azure AD group for Samsung Knox Administrators\nOnce you have selected your user/group, click Assign\nClick Single sign-on in the left-hand pane\nClick SAML\nConfigure the following Basic SAML Configuration settings, then click Save\nIdentifier (Entity ID): Leave default (https://www.samsungknox.com) Reply URL: https://central.samsungknox.com/ams/ad/saml/acs Sign on URL: https://accounts.samsung.com/ Copy the contents of the App Federation Metadata Url under SAML Signing Certificate\nNavigate back to the SSO Settings page in Samsung Knox\nPaste the copied contents into the App federation metadata URL box\nClick Connect to SSO\nSign in with your AAD Credential\nThe user you initially configure SSO with must be the Super Admin Account. 
Ensure the user was selected or is within a selected group in steps 10/11.\nThings to be aware of If your account already has permissions to another Knox Suite, you will not be able to use your account Once you configure SSO for Knox, you can not use a mixture of Samsung and SSO Accounts, you can only use SSO Accounts. The App on the My Apps page will not sign you into Knox Any account that has already been configured will continue to work with their SSO Credentials providing they are in scope of the enterprise app To add a user to Knox, you are still required to send the invite in the first instance from Knox; adding them to the scope does not suffice If, when you sign in, you receive a Sorry, you don\u0026rsquo;t have access screen, ensure that a Samsung Knox administrator within your organization has configured an account for you. Conclusion Using SSO for applications such as Knox will save admins time and effort storing multiple passwords and identities.\nI tried numerous ways to configure the application to open the Knox portal from the MyApps page to no avail. If you have managed to succeed in doing so, I would love to hear from you :).\n","image":"https://hugo.euc365.com/images/post/knox/knoxsso-1_hu20f6cc02eb057f0d8cbde8d4fec91028_23585_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/configure-samsung-knox-sso-azure/","tags":["Azure","Knox","Samsung","SSO"],"title":"Configure Samsung Knox SSO with Azure "},{"categories":["Microsoft Intune","Mobile Device Management","Samsung Knox"],"contents":"What is Samsung Knox? Samsung Knox is a service made up of a number of solutions such as Knox Mobile Enrollment and Knox Platform for Enterprise. Samsung Knox solutions provide extra security and configurability and also improve the end-user onboarding experience.\nKnox Mobile Enrollment is Samsung\u0026rsquo;s alternative to Apple Business/School Manager for devices. 
KME offers the ability to upload devices using an app on Samsung devices, or via a vendor/reseller channel.\nOnce devices are onboarded into Samsung Knox you can assign profiles to those devices to integrate with your chosen MDM provider. Using profiles linked to your MDM will vastly improve your onboarding experience for end-users and also secure the device further.\nWhat do I need to get started? Do I need a license? Knox Mobile Enrollment is a free IT solution offered by Samsung and does not require a license key.\n-Samsung As the quote says, no license is required for the KME service. All you need to do is sign up for an account on the Register for Knox page.\nEnter your Work email address, then click Next\nClick Agree\nComplete the information on the Create your Samsung account form, then click Next If you try to enter a password longer than 15 characters you will be presented with the above message. Enter the Verification Code sent to the e-mail provided, then click Next\nClick Done (Optional: you can at this point choose to configure Multi-factor Authentication; I would recommend doing so, however this is not a requirement.)\nOnce you have returned to the Registration system, click Next\nEnter your Company Information, then click Next\nReview each of the agreements, and if you accept them tick each box (excl. marketing information), then click Submit\nYou will now be taken to the Knox Landing page where you can see all of the services that are available through the Knox Platform. As you can see, once you first configure your account the available solutions are set to Pending.\nIf you click on the Licenses tab on the top ribbon, then click on Knox license keys, this will then submit a request automatically for the solutions.\nAt this point, it is a case of awaiting a response from Samsung. The SLA for this is 48 hours (working days only).
Knox Service Plugin If you plan to use the Knox Service Plugin on your device estate, you will need to generate a Commercial Key from the Samsung Knox Portal. You can achieve this by doing the following;\nLog into Samsung Knox\nHover over the Knox Platform for Enterprise tile, then click Generate\nTo obtain the key, hover back over the Knox Platform for Enterprise tile, then click See License\nThe Knox Platform for Enterprise: Premium Edition license is the one required for the Knox Service Plugin.\nGetting started with Knox Mobile Enrollment Hover over Knox Mobile Enrollment, select Launch Console Tick Don\u0026rsquo;t show me again, then click Got It Select your services, for this post I will be selecting the following; Click Confirm Knox Mobile Enrollment is now configured; we can now start to take a look at adding devices, creating Profiles and reseller registration.\nManually adding a device To manually add devices to the Knox Mobile Enrollment solution you will need the Knox Deployment app on two devices. The first is a device on which you are logged into the Knox Deployment app as an Admin from your Knox solution, and the second is the device you wish to import.\nThe device where the Admin is going to be signing in cannot already be signed in with an account that is not an Administrator within your Knox solution. If it is, the Knox Deployment app will advise you to log out of your Samsung account and log in with an account which has Knox Deployment permission.\nSamsung has a comprehensive guide on using the Knox Deployment app for enrolling devices. (See: Samsung Knox Deployment App)\nReseller Registration To streamline the process further, you can have your device Resellers import your devices for you. The major benefit of doing this is reducing the time taken by engineers to configure the device ready for enterprise use.\nA list of Resellers can be found using the following link: Resellers | Samsung Knox.
Configuring a reseller is simple and can be done by following Register resellers | Knox Mobile Enrollment.\nCreating an MDM profile for Intune One of the biggest benefits of Knox Mobile Enrollment in an enterprise which uses Intune is the ability to assign a device profile to an enrollment token to remove the need to scan QR codes or enter enrollment tokens. Configuring the profile for this is easy, and in the long run, you will thank yourself for doing it.\nIf you plan to use this for Corporate, Dedicated devices, please note that the Enrollment Token expires after a maximum of 90 days. The enrollment profile within Samsung Knox would need to be updated when the token is renewed.\nObtaining your enrollment token from Intune Log in to Microsoft Intune Select Devices in the left-hand pane, then select Android Select Android Enrollment Corporate-owned, fully managed user devices Click Corporate-owned, fully managed user devices (If applicable) Switch the slider to Yes Take a note of the Token above the QR code Corporate-owned dedicated devices Click Corporate-owned dedicated devices Select your desired token Select Token from the left-hand pane, then click Show token Take note of the Token above the QR code Corporate-owned devices with work profile Click Corporate-owned devices with work profile Select your desired token Select Token from the left-hand pane, then click Show token Take note of the Token above the QR code Creating your profile in Samsung Knox Log into Samsung Knox Hover over Knox Mobile Enrollment, select Launch Console Click Profiles from the left-hand pane Click Create Profile in the top right-hand corner Click Android Enterprise Enter a Profile Name and Description In the Pick your MDM drop-down, select Microsoft Intune Leave the rest as default, click Continue Enter {\u0026quot;com.google.android.apps.work.clouddpc.EXTRA_ENROLLMENT_TOKEN\u0026quot;:\u0026quot;\u0026lt;TOKENHERE\u0026gt;\u0026quot;} into the Custom JSON data field, replacing \u0026lt;TOKENHERE\u0026gt; with
your Token (Optional) To remove the default bloatware, select Disable system applications Enter your Company Name Click Create Assigning a profile to a device Click Devices in the left-hand pane Locate the Device using the IMEI or Serial Number Select the device by placing a tick in the selection box, then select Actions Select Configure Device, then select the relevant profile for the enrollment Click Save After you have assigned a profile to the device and it is then powered on, the end-user will be able to self-enroll the device without having to worry about pressing the screen 5 times and then scanning a QR code. This makes it a lot easier to ship a device directly to an end-user.\nConclusion There are going to be further blogs covering the Knox Service Plugin and other Samsung Knox services, however, I hope this post gives you insight into the benefits this solution can provide along with a nice guide to get you started.\n","image":"https://hugo.euc365.com/images/post/knox/knoxgettingstarted_hu94bb4decba523efdfc9de8af16144630_23808_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/getting-started-with-knox-mobile-enrollment/","tags":["Intune","Knox","Samsung"],"title":"Getting Started with Samsung Knox for Enterprise"},{"categories":["Azure","Microsoft Intune","Graph API","PowerBI"],"contents":"A little bit about this post At MMS 2022 I took a huge leap and briefly presented this in the Tips \u0026amp; Tricks Session. This was the first time I had ever presented in front of an audience.\nI thought it would be good to give a bit of context on how this was conceptualised and why certain aspects are the way they are (some of them still make me shudder).\nOn a day to day basis, I work for a company that provides both Professional and Managed Services. As part of our service offering, we provide reporting for technical and C-Suite stakeholders. 
While functional, our original report ran from the Intune Data Warehouse, and it wasn\u0026rsquo;t extensible and didn\u0026rsquo;t offer much room for adding in additional metrics and visuals.\nAs a man who likes working with the Graph API, I thought, why can we not use the data directly from there??? Here began many hours of visiting dead ends and rabbit holes. Before we get going, let\u0026rsquo;s talk about some of the challenges faced.\nThe first challenge was authentication. We wanted to make this work without the need for Service Accounts and Passwords etc., so we decided to use App Registrations. Unfortunately, OData feeds in PowerBI did not like using dynamically generated content within the headers for authentication when creating our dashboards. This may have changed now, but if it ain\u0026rsquo;t broke don\u0026rsquo;t fix it, right?\nThe next challenge after overcoming authentication was pagination. Now, it\u0026rsquo;s tough to do without some very complex PowerBI wizardry with loops etc. So this led me to look at using Logic Apps to paginate the data for me and send it back in a response JSON.\nOne of the final challenges was publishing the report to the PowerBI service. When published, the data refresh schedule was not available. The first issue I faced was PowerBI telling me that I had an issue with Query1\u0026hellip; This query didn\u0026rsquo;t even exist. After hours of stripping the report right back, we found it to be the way we were passing in the authentication bearer token. We resolved this by nesting (Yes!!! Nesting, I told you things made me shudder.) the Authentication function within each query. After we fixed this and re-published the report, we were hopeful, but that hope lasted no more than a minute. PowerBI then suggested the data could not be accessed directly, so it was back into the Transform Data screen to see what we could do\u0026hellip; The final solution to this??? (Shudders at the ready!!)
Nest our other two functions into the queries too.\nMoving swiftly on, let\u0026rsquo;s look at some of this in action. During this post, we will create a basic report with PowerBI. We will pull in two tables, Devices and Device Hardware information.\nWhat do we need for the basics? If you would like to follow along with the post you will need the following;\nThe ability to create an Azure AD App Registration and Grant Admin Consent The ability to create an Azure Logic App PowerBI Desktop Let\u0026rsquo;s get started During this post, we will be looking at the web call to gather the bearer token and two different methods of getting data from the Graph API, and how we use them side by side to ensure we are not burning money on a consumption Logic App.\nAuthentication Let\u0026rsquo;s start by creating an App Registration with the following Application permissions.\nDeviceManagementManagedDevices.Read.All If you are not familiar with creating App Registrations, take a look at my Create an Azure App Registration post.\nOnce you have added permissions and granted consent, you need to obtain a secret; this can be done from the App Registration by;
You can always create another, so all is not lost.\nYou now have the start of the authentication piece, so let\u0026rsquo;s head over to PowerBI, enter the Power Query Editor (Transform Data) and create three Parameters.\nApplicationID: This will be the Application (client) ID of the App Registration ApplicationSecret: This will be the secret value copied previously TenantID: Your Azure Tenant ID We will halt the authentication section here for now as we need some other components to start retrieving data.\nThe Logic App You can simply deploy the logic app using the button below, or follow the manual steps detailed below.\nLogic App Blade Click Add Select your Subscription and Resource Group Give your Logic App a meaningful name (e.g. EUC-MSGRAPHCALL-v1) For Publish, select Workflow and select your desired Region Select your desired plan, for this post I will be using Consumption (Optional) Select Zone Redundancy (Optional) Add Tags Click Review + Create, then click Create Once the deployment is complete click Go to resource Select a trigger, template or click Blank Logic App From the ribbon, click the Code view button Copy and paste the JSON from the Logic App Code drop down below Click Save Logic App Code { \u0026#34;definition\u0026#34;: { \u0026#34;$schema\u0026#34;: \u0026#34;https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#\u0026#34;, \u0026#34;actions\u0026#34;: { \u0026#34;Append_to_array_variable_-_GraphReturn\u0026#34;: { \u0026#34;inputs\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;GraphReturn\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;@body(\u0026#39;HTTP_-_Initial_Request\u0026#39;)\u0026#34; }, \u0026#34;runAfter\u0026#34;: { \u0026#34;Parse_JSON_-_Initial_Response_(for_NextLink)\u0026#34;: [ \u0026#34;Succeeded\u0026#34; ] }, \u0026#34;type\u0026#34;: \u0026#34;AppendToArrayVariable\u0026#34; }, \u0026#34;Condition_-_If_NextLink_is_not_blank\u0026#34;: { \u0026#34;actions\u0026#34;: {
\u0026#34;Response_-_NextLink\u0026#34;: { \u0026#34;inputs\u0026#34;: { \u0026#34;body\u0026#34;: \u0026#34;@variables(\u0026#39;GraphReturn\u0026#39;)\u0026#34;, \u0026#34;statusCode\u0026#34;: 200 }, \u0026#34;kind\u0026#34;: \u0026#34;Http\u0026#34;, \u0026#34;runAfter\u0026#34;: { \u0026#34;Until_-_NextLink_is_Blank\u0026#34;: [ \u0026#34;Succeeded\u0026#34; ] }, \u0026#34;type\u0026#34;: \u0026#34;Response\u0026#34; }, \u0026#34;Set_variable_-_NextLink_for_Next_Call\u0026#34;: { \u0026#34;inputs\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;nextlink\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;@body(\u0026#39;Parse_JSON_-_Initial_Response_(for_NextLink)\u0026#39;)?[\u0026#39;@odata.nextLink\u0026#39;]\u0026#34; }, \u0026#34;runAfter\u0026#34;: {}, \u0026#34;type\u0026#34;: \u0026#34;SetVariable\u0026#34; }, \u0026#34;Until_-_NextLink_is_Blank\u0026#34;: { \u0026#34;actions\u0026#34;: { \u0026#34;Compose_-_Union_GraphReturn_and_the_additional_data_from_NextLink_Call\u0026#34;: { \u0026#34;inputs\u0026#34;: \u0026#34;@union(variables(\u0026#39;GraphReturn\u0026#39;),body(\u0026#39;HTTP__-_Get_NextLink_Data\u0026#39;)[\u0026#39;value\u0026#39;])\u0026#34;, \u0026#34;runAfter\u0026#34;: { \u0026#34;Parse_JSON_-_NextLink_Response_(for_NextLink)\u0026#34;: [ \u0026#34;Succeeded\u0026#34; ] }, \u0026#34;type\u0026#34;: \u0026#34;Compose\u0026#34; }, \u0026#34;Condition_-_If_NextLink_is_not_blank_(Until_Loop)\u0026#34;: { \u0026#34;actions\u0026#34;: { \u0026#34;Set_variable_-_NextLink_to_Blank\u0026#34;: { \u0026#34;inputs\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;nextlink\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;\\\u0026#34;\\\u0026#34;\u0026#34; }, \u0026#34;runAfter\u0026#34;: {}, \u0026#34;type\u0026#34;: \u0026#34;SetVariable\u0026#34; } }, \u0026#34;else\u0026#34;: { \u0026#34;actions\u0026#34;: { \u0026#34;Set_variable_-_NextLink_to_Next_Page\u0026#34;: { \u0026#34;inputs\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;nextlink\u0026#34;,
\u0026#34;value\u0026#34;: \u0026#34;@body(\u0026#39;Parse_JSON_-_NextLink_Response_(for_NextLink)\u0026#39;)?[\u0026#39;@odata.nextLink\u0026#39;]\u0026#34; }, \u0026#34;runAfter\u0026#34;: {}, \u0026#34;type\u0026#34;: \u0026#34;SetVariable\u0026#34; } } }, \u0026#34;expression\u0026#34;: { \u0026#34;and\u0026#34;: [ { \u0026#34;equals\u0026#34;: [ \u0026#34;@body(\u0026#39;Parse_JSON_-_NextLink_Response_(for_NextLink)\u0026#39;)?[\u0026#39;@odata.nextLink\u0026#39;]\u0026#34;, \u0026#34;\u0026#34; ] } ] }, \u0026#34;runAfter\u0026#34;: { \u0026#34;Set_variable_-_GraphReturn_from_Compose_Output\u0026#34;: [ \u0026#34;Succeeded\u0026#34; ] }, \u0026#34;type\u0026#34;: \u0026#34;If\u0026#34; }, \u0026#34;HTTP__-_Get_NextLink_Data\u0026#34;: { \u0026#34;inputs\u0026#34;: { \u0026#34;headers\u0026#34;: { \u0026#34;Authorization\u0026#34;: \u0026#34;bearer @{triggerBody()?[\u0026#39;accesstoken\u0026#39;]}\u0026#34;, \u0026#34;content-type\u0026#34;: \u0026#34;application/json\u0026#34; }, \u0026#34;method\u0026#34;: \u0026#34;GET\u0026#34;, \u0026#34;uri\u0026#34;: \u0026#34;@variables(\u0026#39;nextlink\u0026#39;)\u0026#34; }, \u0026#34;runAfter\u0026#34;: {}, \u0026#34;type\u0026#34;: \u0026#34;Http\u0026#34; }, \u0026#34;Parse_JSON_-_NextLink_Response_(for_NextLink)\u0026#34;: { \u0026#34;inputs\u0026#34;: { \u0026#34;content\u0026#34;: \u0026#34;@body(\u0026#39;HTTP__-_Get_NextLink_Data\u0026#39;)\u0026#34;, \u0026#34;schema\u0026#34;: { \u0026#34;properties\u0026#34;: { \u0026#34;@@odata.context\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34; }, \u0026#34;@@odata.nextLink\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34; } }, \u0026#34;type\u0026#34;: \u0026#34;object\u0026#34; } }, \u0026#34;runAfter\u0026#34;: { \u0026#34;HTTP__-_Get_NextLink_Data\u0026#34;: [ \u0026#34;Succeeded\u0026#34; ] }, \u0026#34;type\u0026#34;: \u0026#34;ParseJson\u0026#34; }, \u0026#34;Set_variable_-_GraphReturn_from_Compose_Output\u0026#34;: { 
\u0026#34;inputs\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;GraphReturn\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;@outputs(\u0026#39;Compose_-_Union_GraphReturn_and_the_additional_data_from_NextLink_Call\u0026#39;)\u0026#34; }, \u0026#34;runAfter\u0026#34;: { \u0026#34;Compose_-_Union_GraphReturn_and_the_additional_data_from_NextLink_Call\u0026#34;: [ \u0026#34;Succeeded\u0026#34; ] }, \u0026#34;type\u0026#34;: \u0026#34;SetVariable\u0026#34; } }, \u0026#34;expression\u0026#34;: \u0026#34;@equals(variables(\u0026#39;nextlink\u0026#39;), \u0026#39;\u0026#39;)\u0026#34;, \u0026#34;limit\u0026#34;: { \u0026#34;count\u0026#34;: 60, \u0026#34;timeout\u0026#34;: \u0026#34;PT1H\u0026#34; }, \u0026#34;runAfter\u0026#34;: { \u0026#34;Set_variable_-_NextLink_for_Next_Call\u0026#34;: [ \u0026#34;Succeeded\u0026#34; ] }, \u0026#34;type\u0026#34;: \u0026#34;Until\u0026#34; } }, \u0026#34;else\u0026#34;: { \u0026#34;actions\u0026#34;: { \u0026#34;Response_-_No_NextLink\u0026#34;: { \u0026#34;inputs\u0026#34;: { \u0026#34;body\u0026#34;: \u0026#34;@variables(\u0026#39;GraphReturn\u0026#39;)\u0026#34;, \u0026#34;statusCode\u0026#34;: 200 }, \u0026#34;kind\u0026#34;: \u0026#34;Http\u0026#34;, \u0026#34;runAfter\u0026#34;: {}, \u0026#34;type\u0026#34;: \u0026#34;Response\u0026#34; } } }, \u0026#34;expression\u0026#34;: { \u0026#34;and\u0026#34;: [ { \u0026#34;not\u0026#34;: { \u0026#34;equals\u0026#34;: [ \u0026#34;@body(\u0026#39;Parse_JSON_-_Initial_Response_(for_NextLink)\u0026#39;)?[\u0026#39;@odata.nextLink\u0026#39;]\u0026#34;, \u0026#34;\u0026#34; ] } }, { \u0026#34;not\u0026#34;: { \u0026#34;equals\u0026#34;: [ \u0026#34;@body(\u0026#39;Parse_JSON_-_Initial_Response_(for_NextLink)\u0026#39;)?[\u0026#39;@odata.nextLink\u0026#39;]\u0026#34;, \u0026#34;@null\u0026#34; ] } } ] }, \u0026#34;runAfter\u0026#34;: { \u0026#34;Append_to_array_variable_-_GraphReturn\u0026#34;: [ \u0026#34;Succeeded\u0026#34; ] }, \u0026#34;type\u0026#34;: \u0026#34;If\u0026#34; }, 
\u0026#34;HTTP_-_Initial_Request\u0026#34;: { \u0026#34;inputs\u0026#34;: { \u0026#34;headers\u0026#34;: { \u0026#34;Authorization\u0026#34;: \u0026#34;bearer @{variables(\u0026#39;AccessToken\u0026#39;)}\u0026#34;, \u0026#34;content-type\u0026#34;: \u0026#34;application/json\u0026#34; }, \u0026#34;method\u0026#34;: \u0026#34;GET\u0026#34;, \u0026#34;uri\u0026#34;: \u0026#34;@triggerBody()?[\u0026#39;Url\u0026#39;]\u0026#34; }, \u0026#34;runAfter\u0026#34;: { \u0026#34;Initialize_variable_-_AccessToken\u0026#34;: [ \u0026#34;Succeeded\u0026#34; ] }, \u0026#34;type\u0026#34;: \u0026#34;Http\u0026#34; }, \u0026#34;Initialize_variable_-_AccessToken\u0026#34;: { \u0026#34;inputs\u0026#34;: { \u0026#34;variables\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;AccessToken\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;@{string(triggerBody()?[\u0026#39;accesstoken\u0026#39;])}\u0026#34; } ] }, \u0026#34;runAfter\u0026#34;: { \u0026#34;Initialize_variable__-_NextLink\u0026#34;: [ \u0026#34;Succeeded\u0026#34; ] }, \u0026#34;type\u0026#34;: \u0026#34;InitializeVariable\u0026#34; }, \u0026#34;Initialize_variable_-_Graph_Return\u0026#34;: { \u0026#34;inputs\u0026#34;: { \u0026#34;variables\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;GraphReturn\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;array\u0026#34; } ] }, \u0026#34;runAfter\u0026#34;: {}, \u0026#34;type\u0026#34;: \u0026#34;InitializeVariable\u0026#34; }, \u0026#34;Initialize_variable__-_NextLink\u0026#34;: { \u0026#34;inputs\u0026#34;: { \u0026#34;variables\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;nextlink\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34; } ] }, \u0026#34;runAfter\u0026#34;: { \u0026#34;Initialize_variable_-_Graph_Return\u0026#34;: [ \u0026#34;Succeeded\u0026#34; ] }, \u0026#34;type\u0026#34;: \u0026#34;InitializeVariable\u0026#34; }, \u0026#34;Parse_JSON_-_Initial_Response_(for_NextLink)\u0026#34;: { 
\u0026#34;inputs\u0026#34;: { \u0026#34;content\u0026#34;: \u0026#34;@body(\u0026#39;HTTP_-_Initial_Request\u0026#39;)\u0026#34;, \u0026#34;schema\u0026#34;: { \u0026#34;properties\u0026#34;: { \u0026#34;@@odata.context\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34; }, \u0026#34;@@odata.nextLink\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34; } }, \u0026#34;type\u0026#34;: \u0026#34;object\u0026#34; } }, \u0026#34;runAfter\u0026#34;: { \u0026#34;HTTP_-_Initial_Request\u0026#34;: [ \u0026#34;Succeeded\u0026#34; ] }, \u0026#34;type\u0026#34;: \u0026#34;ParseJson\u0026#34; } }, \u0026#34;contentVersion\u0026#34;: \u0026#34;1.0.0.0\u0026#34;, \u0026#34;outputs\u0026#34;: {}, \u0026#34;parameters\u0026#34;: {}, \u0026#34;triggers\u0026#34;: { \u0026#34;manual\u0026#34;: { \u0026#34;inputs\u0026#34;: { \u0026#34;schema\u0026#34;: { \u0026#34;properties\u0026#34;: { \u0026#34;Url\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34; }, \u0026#34;accesstoken\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34; } }, \u0026#34;type\u0026#34;: \u0026#34;object\u0026#34; } }, \u0026#34;kind\u0026#34;: \u0026#34;Http\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Request\u0026#34; } } }, \u0026#34;parameters\u0026#34;: {} } Feel free to dig into the Logic App and look at how the data is being processed, but for now, the Logic App is ready for use!\nOne of the final things to do before we start building up our queries is to add a parameter in PowerBI called LogicAppURL.
The value of this parameter can be found on the HTTP trigger from within the logic app as shown below.\nCreating the first query Let\u0026rsquo;s head back into the Power Query Editor (Transform Data) and start building up our first query.\nFrom the ribbon select New Source \u0026gt; Blank Query Right-click on your new query, select Rename and enter Devices Right-click on the Devices query, select Advanced Editor Paste the following content into the editor, then click Done PowerBI Query // All functions and the SessionToken Query must be within EACH query as when publishing to PowerBI Online you will receive an unable to access data source error. let // ************************************* SessionToken ************************************* SessionToken = let TokenUri = \u0026#34;https://login.microsoftonline.com/\u0026#34;, ResourceId = \u0026#34;https://graph.microsoft.com\u0026#34;, TokenResponse = Json.Document(Web.Contents(TokenUri, [ Content = Text.ToBinary(Uri.BuildQueryString([client_id = #\u0026#34;ApplicationID\u0026#34;, resource = ResourceId, grant_type = \u0026#34;client_credentials\u0026#34;, client_secret = #\u0026#34;ApplicationSecret\u0026#34;])), Headers = [Accept = \u0026#34;application/json\u0026#34;], ManualStatusHandling = {400}, RelativePath = #\u0026#34;TenantID\u0026#34; \u0026amp; \u0026#34;/oauth2/token\u0026#34; ] )), AzureAccessToken = TokenResponse[access_token] in AzureAccessToken, // ************************************* SessionToken ************************************* // ************************************* LogicAppCall ************************************* MSGraphLA = let Source = (#\u0026#34;GraphURL\u0026#34; as any) =\u0026gt; let Url = LogicAppURL, body = \u0026#34;{\u0026#34;\u0026#34;accesstoken\u0026#34;\u0026#34;:\u0026#34;\u0026#34;\u0026#34; \u0026amp; SessionToken \u0026amp; \u0026#34;\u0026#34;\u0026#34;, \u0026#34;\u0026#34;URL\u0026#34;\u0026#34;:\u0026#34;\u0026#34;\u0026#34; \u0026amp; #\u0026#34;GraphURL\u0026#34; \u0026amp;
\u0026#34;\u0026#34;\u0026#34;}\u0026#34;, Source = Json.Document(Web.Contents(Url, [Headers=[#\u0026#34;Content-Type\u0026#34;=\u0026#34;application/json\u0026#34;],Content = Text.ToBinary(body)])) in Source in Source, // ************************************* LogicAppCall ************************************* // ************************************* MS Graph Call ************************************* GraphCall = let Source = (#\u0026#34;GraphURL\u0026#34; as any) =\u0026gt; let Source = Json.Document( Web.Contents( \u0026#34;https://graph.microsoft.com/beta/\u0026#34;, [ RelativePath = #\u0026#34;GraphURL\u0026#34;, Headers= [ Authorization= \u0026#34;Bearer \u0026#34; \u0026amp; #\u0026#34;SessionToken\u0026#34;,Accept=\u0026#34;application/json\u0026#34; ] ] ) ) in Source in Source, // ************************************* MS Graph Call ************************************* Source = #\u0026#34;MSGraphLA\u0026#34;(\u0026#34;https://graph.microsoft.com/beta/deviceManagement/managedDevices\u0026#34;) in Source A yellow banner will appear stating Information is required about data privacy, click Continue Select Ignore Privacy (as per the below screenshot); alternatively, change the data sources to private, then click Save Let\u0026rsquo;s break the query down to gain further understanding of it. First, let\u0026rsquo;s look at the SessionToken nested function; this is where the bearer token is obtained by using the provided ApplicationID and ApplicationSecret values. The function builds up a web call with additional headers and requests the response as JSON; from this point we then extract the returned access_token.\nThe second of three nested functions is MSGraphLA; this is the function used to send information to the Logic App that you created. The way it handles the data is pretty simple, again this function uses a web call to post a JSON object to the Logic App.
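That POST can also be sketched outside of PowerBI, which is handy for testing the Logic App on its own. The Python below is only an illustration of the same call: the helper names are hypothetical, and the HTTP client is left as a pluggable callable (e.g. a thin wrapper around an HTTP library of your choice) rather than a real network call.

```python
import json

def build_logic_app_payload(access_token, graph_url):
    """Build the JSON body sent to the Logic App HTTP trigger.
    The keys match the trigger schema from the Logic App code above
    ("accesstoken" and "Url")."""
    return json.dumps({"accesstoken": access_token, "Url": graph_url})

def call_logic_app(logic_app_url, access_token, graph_url, http_post):
    """Post the payload to the Logic App trigger URL.
    http_post is any callable taking (url, body, headers) and returning
    the response - a stand-in for a real HTTP POST."""
    body = build_logic_app_payload(access_token, graph_url)
    headers = {"Content-Type": "application/json"}
    return http_post(logic_app_url, body, headers)
```

This mirrors what the M `body` string concatenation produces, just with a JSON serializer doing the quoting instead of doubled quote marks.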
The JSON is made up of the SessionToken, which slots into the accesstoken JSON value, and the GraphURL, which is the value that is put into the function.\nThe final nested function is GraphCall; this function is to be used on an ad-hoc basis. For example, I use this function when I collect the Hardware Information for a device, and the reason I do that is because when you query the device endpoint like we have done here, you do not get all of the values from the Hardware Information field. This function is basically the direct call to the Graph API, without pagination.\nOk, so that\u0026rsquo;s a start, you are now at a point where you have data within your PowerBI Report. Now you can feel free to continue the drill down of data yourself, however for anyone who wants that old Art Attack moment of \u0026lsquo;Here\u0026rsquo;s one I made earlier\u0026rsquo; you can use the below query.\nNOTE: There are still unexpanded columns within the query below, please expand them if required.\nPowerBI Query (Drilled Down) // All functions and the SessionToken Query must be within EACH query as when publishing to PowerBI Online you will receive an unable to access data source error.
let // ************************************* SessionToken ************************************* SessionToken = let TokenUri = \u0026#34;https://login.microsoftonline.com/\u0026#34;, ResourceId = \u0026#34;https://graph.microsoft.com\u0026#34;, TokenResponse = Json.Document(Web.Contents(TokenUri, [ Content = Text.ToBinary(Uri.BuildQueryString([client_id = #\u0026#34;ApplicationID\u0026#34;, resource = ResourceId, grant_type = \u0026#34;client_credentials\u0026#34;, client_secret = #\u0026#34;ApplicationSecret\u0026#34;])), Headers = [Accept = \u0026#34;application/json\u0026#34;], ManualStatusHandling = {400}, RelativePath = #\u0026#34;TenantID\u0026#34; \u0026amp; \u0026#34;/oauth2/token\u0026#34; ] )), AzureAccessToken = TokenResponse[access_token] in AzureAccessToken, // ************************************* SessionToken ************************************* // ************************************* LogicAppCall ************************************* MSGraphLA = let Source = (#\u0026#34;GraphURL\u0026#34; as any) =\u0026gt; let Url = LogicAppURL, body = \u0026#34;{\u0026#34;\u0026#34;accesstoken\u0026#34;\u0026#34;:\u0026#34;\u0026#34;\u0026#34; \u0026amp; SessionToken \u0026amp; \u0026#34;\u0026#34;\u0026#34;, \u0026#34;\u0026#34;URL\u0026#34;\u0026#34;:\u0026#34;\u0026#34;\u0026#34; \u0026amp; #\u0026#34;GraphURL\u0026#34; \u0026amp; \u0026#34;\u0026#34;\u0026#34;}\u0026#34;, Source = Json.Document(Web.Contents(Url, [Headers=[#\u0026#34;Content-Type\u0026#34;=\u0026#34;application/json\u0026#34;],Content = Text.ToBinary(body)])) in Source in Source, // ************************************* LogicAppCall ************************************* // ************************************* MS Graph Call ************************************* GraphCall = let Source = (#\u0026#34;GraphURL\u0026#34; as any) =\u0026gt; let Source = Json.Document( Web.Contents( \u0026#34;https://graph.microsoft.com/beta/\u0026#34;, [ RelativePath = #\u0026#34;GraphURL\u0026#34;, Headers= [ 
Authorization= \u0026#34;Bearer \u0026#34; \u0026amp; #\u0026#34;SessionToken\u0026#34;,Accept=\u0026#34;application/json\u0026#34; ] ] ) ) in Source in Source, // ************************************* MS Graph Call ************************************* Source = #\u0026#34;MSGraphLA\u0026#34;(\u0026#34;https://graph.microsoft.com/beta/deviceManagement/managedDevices\u0026#34;), // ************************************* MS Graph Call ************************************* Source1 = Source{0}, #\u0026#34;Converted to Table\u0026#34; = Record.ToTable(Source1), Value = #\u0026#34;Converted to Table\u0026#34;{2}[Value], #\u0026#34;Converted to Table1\u0026#34; = Table.FromList(Value, Splitter.SplitByNothing(), null, null, ExtraValues.Error), #\u0026#34;Expanded Column1\u0026#34; = Table.ExpandRecordColumn(#\u0026#34;Converted to Table1\u0026#34;, \u0026#34;Column1\u0026#34;, {\u0026#34;id\u0026#34;, \u0026#34;userId\u0026#34;, \u0026#34;deviceName\u0026#34;, \u0026#34;ownerType\u0026#34;, \u0026#34;managedDeviceOwnerType\u0026#34;, \u0026#34;managementState\u0026#34;, \u0026#34;enrolledDateTime\u0026#34;, \u0026#34;lastSyncDateTime\u0026#34;, \u0026#34;chassisType\u0026#34;, \u0026#34;operatingSystem\u0026#34;, \u0026#34;deviceType\u0026#34;, \u0026#34;complianceState\u0026#34;, \u0026#34;jailBroken\u0026#34;, \u0026#34;managementAgent\u0026#34;, \u0026#34;osVersion\u0026#34;, \u0026#34;easActivated\u0026#34;, \u0026#34;easDeviceId\u0026#34;, \u0026#34;easActivationDateTime\u0026#34;, \u0026#34;aadRegistered\u0026#34;, \u0026#34;azureADRegistered\u0026#34;, \u0026#34;deviceEnrollmentType\u0026#34;, \u0026#34;lostModeState\u0026#34;, \u0026#34;activationLockBypassCode\u0026#34;, \u0026#34;emailAddress\u0026#34;, \u0026#34;azureActiveDirectoryDeviceId\u0026#34;, \u0026#34;azureADDeviceId\u0026#34;, \u0026#34;deviceRegistrationState\u0026#34;, \u0026#34;deviceCategoryDisplayName\u0026#34;, \u0026#34;isSupervised\u0026#34;, 
\u0026#34;exchangeLastSuccessfulSyncDateTime\u0026#34;, \u0026#34;exchangeAccessState\u0026#34;, \u0026#34;exchangeAccessStateReason\u0026#34;, \u0026#34;remoteAssistanceSessionUrl\u0026#34;, \u0026#34;remoteAssistanceSessionErrorDetails\u0026#34;, \u0026#34;isEncrypted\u0026#34;, \u0026#34;userPrincipalName\u0026#34;, \u0026#34;model\u0026#34;, \u0026#34;manufacturer\u0026#34;, \u0026#34;imei\u0026#34;, \u0026#34;complianceGracePeriodExpirationDateTime\u0026#34;, \u0026#34;serialNumber\u0026#34;, \u0026#34;phoneNumber\u0026#34;, \u0026#34;androidSecurityPatchLevel\u0026#34;, \u0026#34;userDisplayName\u0026#34;, \u0026#34;configurationManagerClientEnabledFeatures\u0026#34;, \u0026#34;wiFiMacAddress\u0026#34;, \u0026#34;deviceHealthAttestationState\u0026#34;, \u0026#34;subscriberCarrier\u0026#34;, \u0026#34;meid\u0026#34;, \u0026#34;totalStorageSpaceInBytes\u0026#34;, \u0026#34;freeStorageSpaceInBytes\u0026#34;, \u0026#34;managedDeviceName\u0026#34;, \u0026#34;partnerReportedThreatState\u0026#34;, \u0026#34;retireAfterDateTime\u0026#34;, \u0026#34;preferMdmOverGroupPolicyAppliedDateTime\u0026#34;, \u0026#34;autopilotEnrolled\u0026#34;, \u0026#34;requireUserEnrollmentApproval\u0026#34;, \u0026#34;managementCertificateExpirationDate\u0026#34;, \u0026#34;iccid\u0026#34;, \u0026#34;udid\u0026#34;, \u0026#34;roleScopeTagIds\u0026#34;, \u0026#34;windowsActiveMalwareCount\u0026#34;, \u0026#34;windowsRemediatedMalwareCount\u0026#34;, \u0026#34;notes\u0026#34;, \u0026#34;configurationManagerClientHealthState\u0026#34;, \u0026#34;configurationManagerClientInformation\u0026#34;, \u0026#34;ethernetMacAddress\u0026#34;, \u0026#34;physicalMemoryInBytes\u0026#34;, \u0026#34;processorArchitecture\u0026#34;, \u0026#34;specificationVersion\u0026#34;, \u0026#34;joinType\u0026#34;, \u0026#34;skuFamily\u0026#34;, \u0026#34;skuNumber\u0026#34;, \u0026#34;managementFeatures\u0026#34;, \u0026#34;enrollmentProfileName\u0026#34;, \u0026#34;hardwareInformation\u0026#34;, 
\u0026#34;deviceActionResults\u0026#34;, \u0026#34;usersLoggedOn\u0026#34;, \u0026#34;chromeOSDeviceInfo\u0026#34;}, {\u0026#34;id\u0026#34;, \u0026#34;userId\u0026#34;, \u0026#34;deviceName\u0026#34;, \u0026#34;ownerType\u0026#34;, \u0026#34;managedDeviceOwnerType\u0026#34;, \u0026#34;managementState\u0026#34;, \u0026#34;enrolledDateTime\u0026#34;, \u0026#34;lastSyncDateTime\u0026#34;, \u0026#34;chassisType\u0026#34;, \u0026#34;operatingSystem\u0026#34;, \u0026#34;deviceType\u0026#34;, \u0026#34;complianceState\u0026#34;, \u0026#34;jailBroken\u0026#34;, \u0026#34;managementAgent\u0026#34;, \u0026#34;osVersion\u0026#34;, \u0026#34;easActivated\u0026#34;, \u0026#34;easDeviceId\u0026#34;, \u0026#34;easActivationDateTime\u0026#34;, \u0026#34;aadRegistered\u0026#34;, \u0026#34;azureADRegistered\u0026#34;, \u0026#34;deviceEnrollmentType\u0026#34;, \u0026#34;lostModeState\u0026#34;, \u0026#34;activationLockBypassCode\u0026#34;, \u0026#34;emailAddress\u0026#34;, \u0026#34;azureActiveDirectoryDeviceId\u0026#34;, \u0026#34;azureADDeviceId\u0026#34;, \u0026#34;deviceRegistrationState\u0026#34;, \u0026#34;deviceCategoryDisplayName\u0026#34;, \u0026#34;isSupervised\u0026#34;, \u0026#34;exchangeLastSuccessfulSyncDateTime\u0026#34;, \u0026#34;exchangeAccessState\u0026#34;, \u0026#34;exchangeAccessStateReason\u0026#34;, \u0026#34;remoteAssistanceSessionUrl\u0026#34;, \u0026#34;remoteAssistanceSessionErrorDetails\u0026#34;, \u0026#34;isEncrypted\u0026#34;, \u0026#34;userPrincipalName\u0026#34;, \u0026#34;model\u0026#34;, \u0026#34;manufacturer\u0026#34;, \u0026#34;imei\u0026#34;, \u0026#34;complianceGracePeriodExpirationDateTime\u0026#34;, \u0026#34;serialNumber\u0026#34;, \u0026#34;phoneNumber\u0026#34;, \u0026#34;androidSecurityPatchLevel\u0026#34;, \u0026#34;userDisplayName\u0026#34;, \u0026#34;configurationManagerClientEnabledFeatures\u0026#34;, \u0026#34;wiFiMacAddress\u0026#34;, \u0026#34;deviceHealthAttestationState\u0026#34;, \u0026#34;subscriberCarrier\u0026#34;, 
\u0026#34;meid\u0026#34;, \u0026#34;totalStorageSpaceInBytes\u0026#34;, \u0026#34;freeStorageSpaceInBytes\u0026#34;, \u0026#34;managedDeviceName\u0026#34;, \u0026#34;partnerReportedThreatState\u0026#34;, \u0026#34;retireAfterDateTime\u0026#34;, \u0026#34;preferMdmOverGroupPolicyAppliedDateTime\u0026#34;, \u0026#34;autopilotEnrolled\u0026#34;, \u0026#34;requireUserEnrollmentApproval\u0026#34;, \u0026#34;managementCertificateExpirationDate\u0026#34;, \u0026#34;iccid\u0026#34;, \u0026#34;udid\u0026#34;, \u0026#34;roleScopeTagIds\u0026#34;, \u0026#34;windowsActiveMalwareCount\u0026#34;, \u0026#34;windowsRemediatedMalwareCount\u0026#34;, \u0026#34;notes\u0026#34;, \u0026#34;configurationManagerClientHealthState\u0026#34;, \u0026#34;configurationManagerClientInformation\u0026#34;, \u0026#34;ethernetMacAddress\u0026#34;, \u0026#34;physicalMemoryInBytes\u0026#34;, \u0026#34;processorArchitecture\u0026#34;, \u0026#34;specificationVersion\u0026#34;, \u0026#34;joinType\u0026#34;, \u0026#34;skuFamily\u0026#34;, \u0026#34;skuNumber\u0026#34;, \u0026#34;managementFeatures\u0026#34;, \u0026#34;enrollmentProfileName\u0026#34;, \u0026#34;hardwareInformation\u0026#34;, \u0026#34;deviceActionResults\u0026#34;, \u0026#34;usersLoggedOn\u0026#34;, \u0026#34;chromeOSDeviceInfo\u0026#34;}) in #\u0026#34;Expanded Column1\u0026#34; Using the Ad-hoc GraphCall Function In the previous part of this post, you created the first query, which used the Logic App to gather all of its data. This section will explain how to use this side by side with an ad-hoc function to gather additional data from the Graph API.\nFor this example we will focus on gathering the Hardware Information of the devices. As mentioned above, calling the API Endpoint for the devices does not provide all of the information within the Hardware Information object, so to get the most out of the available dataset we need to call the device endpoint with the device id to expand the hardware information.
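The per-device call pattern described above can be sketched as follows (an illustrative Python snippet, not part of the original queries; it only builds the relative Graph URL that the CallURL custom column later in this post concatenates, and the device id shown is a placeholder):

```python
# Builds the relative beta Graph URL used to expand a single device's
# hardwareInformation (mirrors the CallURL custom column in this post).
def hardware_info_url(device_id: str) -> str:
    return "deviceManagement/managedDevices/" + device_id + "?$select=hardwareInformation"

print(hardware_info_url("00000000-0000-0000-0000-000000000000"))
```

One URL like this is built per device id, which is why the refresh issues one Graph call per device.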
Let\u0026rsquo;s jump in and get going.\nFrom the ribbon select New Source \u0026gt; Blank Query Right-click on your new query, select Rename and enter Device Hardware Information Right-click on the Device Hardware Information query, select Advanced Editor Paste the following content into the editor, then click Done PowerBI Query (Device IDs) // All functions and the SessionToken Query must be within EACH query, as when publishing to PowerBI Online you will receive an \u0026#39;unable to access data source\u0026#39; error. let // ************************************* SessionToken ************************************* SessionToken = let TokenUri = \u0026#34;https://login.microsoftonline.com/\u0026#34;, ResourceId = \u0026#34;https://graph.microsoft.com\u0026#34;, TokenResponse = Json.Document(Web.Contents(TokenUri, [ Content = Text.ToBinary(Uri.BuildQueryString([client_id = #\u0026#34;ApplicationID\u0026#34;, resource = ResourceId, grant_type = \u0026#34;client_credentials\u0026#34;, client_secret = #\u0026#34;ApplicationSecret\u0026#34;])), Headers = [Accept = \u0026#34;application/json\u0026#34;], ManualStatusHandling = {400}, RelativePath = #\u0026#34;TenantID\u0026#34; \u0026amp; \u0026#34;/oauth2/token\u0026#34; ] )), AzureAccessToken = TokenResponse[access_token] in AzureAccessToken, // ************************************* SessionToken ************************************* // ************************************* LogicAppCall ************************************* MSGraphLA = let Source = (#\u0026#34;GraphURL\u0026#34; as any) =\u0026gt; let Url = LogicAppURL, body = \u0026#34;{\u0026#34;\u0026#34;accesstoken\u0026#34;\u0026#34;:\u0026#34;\u0026#34;\u0026#34; \u0026amp; SessionToken \u0026amp; \u0026#34;\u0026#34;\u0026#34;, \u0026#34;\u0026#34;URL\u0026#34;\u0026#34;:\u0026#34;\u0026#34;\u0026#34; \u0026amp; #\u0026#34;GraphURL\u0026#34; \u0026amp; \u0026#34;\u0026#34;\u0026#34;}\u0026#34;, Source = Json.Document(Web.Contents(Url,
[Headers=[#\u0026#34;Content-Type\u0026#34;=\u0026#34;application/json\u0026#34;],Content = Text.ToBinary(body)])) in Source in Source, // ************************************* LogicAppCall ************************************* // ************************************* MS Graph Call ************************************* GraphCall = let Source = (#\u0026#34;GraphURL\u0026#34; as any) =\u0026gt; let Source = Json.Document( Web.Contents( \u0026#34;https://graph.microsoft.com/beta/\u0026#34;, [ RelativePath = #\u0026#34;GraphURL\u0026#34;, Headers= [ Authorization= \u0026#34;Bearer \u0026#34; \u0026amp; #\u0026#34;SessionToken\u0026#34;,Accept=\u0026#34;application/json\u0026#34; ] ] ) ) in Source in Source, // ************************************* MS Graph Call ************************************* Source = #\u0026#34;MSGraphLA\u0026#34;(\u0026#34;https://graph.microsoft.com/beta/deviceManagement/managedDevices?$select=id\u0026#34;), // ************************************* MS Graph Call ************************************* Source1 = Source{0}, #\u0026#34;Converted to Table\u0026#34; = Record.ToTable(Source1), Value = #\u0026#34;Converted to Table\u0026#34;{2}[Value], #\u0026#34;Converted to Table1\u0026#34; = Table.FromList(Value, Splitter.SplitByNothing(), null, null, ExtraValues.Error), ShowIDs = Table.ExpandRecordColumn(#\u0026#34;Converted to Table1\u0026#34;, \u0026#34;Column1\u0026#34;, {\u0026#34;id\u0026#34;}, {\u0026#34;id\u0026#34;}) in ShowIDs Notice you now have all of your device ids, from the ribbon click Add Column Click Custom Column, enter CallURL as the column name Enter each \u0026quot;deviceManagement/managedDevices/\u0026quot; \u0026amp; [id] \u0026amp; \u0026quot;?$select=hardwareInformation\u0026quot;, then click OK Create another Custom Column, enter Graph Web Call as the name Enter each #\u0026quot;GraphCall\u0026quot;([CallURL]), then click OK You will be prompted to Enter Credentials, click on the button, ensure Anonymous is selected, then 
click Connect (This may fail the first time, if it does cancel the prompt and then click Edit Credentials again) You will now see an additional column per device, click on the icon next to Graph Web Call in the column header Un-tick all options but hardwareInformation (I would also un-tick Use original column name as prefix), then click OK (Optional) Right-click the CallURL column, then select Remove Click on the icon next to hardwareInformation in the column header, un-tick Use original column name as prefix, then click Load More, then click OK Voilà! You now have the hardware information for each device. There are still further columns you can expand should you see fit, but the data is now usable to create some lovely visuals. You can compare the hardwareInformation from this table to the one in the devices table and you will see the comparable difference in the datasets.\nLinking the datasets The final piece of the puzzle is to link the tables together so you can create free-flowing reports and visuals. You may find that this has been done Automagically for you, however it is better to cover it just in the edge case that it doesn\u0026rsquo;t.\nIf you haven\u0026rsquo;t already done so, hit the save button and save your progress. Once saved, minimise or close the Transform data window as it is not required.\nFrom the main PowerBI Screen click the Model button in the left-hand pane Find id in the Devices table, drag this to the id field within the Device Hardware Information table Conclusion You now have the tools to gather all of your data into PowerBI; one thing to note is that depending on the amount of objects you are querying, your refresh time may be extended. Using the method where you collect device ids and then call on each object to gather data also extends the refresh time.
However, if you are running this on a refresh cycle that may not be such an issue, but it is worth noting.\nThere may be better ways to do this, but this is only the first iteration of a working model, there may very well be more to come in the future.\nIf you manage multiple environments, you will only need the one Logic App. The ApplicationID, ApplicationSecret and TenantID are the only changes that would need to be made.\nIf you like the post, be sure to leave feedback below.\n","image":"https://hugo.euc365.com/images/post/graphbi/logo_hue10b3e762eaa52b2b3eb36e6a0738f46_56001_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/microsoft-graph-and-powerbi/","tags":["Azure","Azure AD","App Registrations","Powershell","Intune","Graph API","PowerBI","Logic Apps"],"title":"Microsoft Graph API and PowerBI"},{"categories":["Apple","Azure"],"contents":"What is SCIM? Meaning: SCIM (System for Cross-domain Identity Management)\nPurpose: SCIM allows organisations to provision Managed Apple IDs immediately and to combine Apple School Manager or Apple Business Manager properties (such as SIS username and grade levels for Apple School Manager and roles) over account data imported from Azure AD. When an organisation imports users with SCIM, the account information is added as read-only in Apple School Manager and Apple Business Manager until they disconnect from SCIM, in which case the accounts become manual accounts and attributes in these accounts can then be edited. Changes made to accounts in Azure AD sync to Apple School Manager and Apple Business Manager accounts every 20 to 40 minutes.\nTaken from Apple: Integrate Apple devices with Azure AD SCIM Tokens expire xxx days, each Administrator will be notified via e-mail 60 days prior to expiry.\\\nHow do I update it? 
The process is fairly simple, and requires no more than 5 minutes of your time to complete.\nHead over to Apple Business Manager/Apple School Manager Sign in Click the profile icon in the top right-hand corner, Click Preferences Click Directory Sync, then click Edit Click Generate Token, this will display the following pop-up Click Copy, then click Close\nHead over to Azure Active Directory\nClick Enterprise Applications from the left-hand pane\nLocate the Apple Business Manager/Apple School Manager App, Open it by clicking on the name\nSelect Provisioning, then click Edit provisioning\nExpand Admin Credentials, paste your previously copied token into the token box, then click Test Connection Ensure the connection was successful, then click Save. Conclusion And that\u0026rsquo;s a wrap!!! It really is as simple as that. Once you have completed the update you should receive an e-mail with confirmation of the setup being complete.\n","image":"https://hugo.euc365.com/images/post/updatescim/scimemail_hu56a8f6042ba923dc6def0edf3cf1e83b_21130_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/update-apple-scim-token/","tags":["Azure","Azure AD","Azure Enterprise App","Apple Business Manager","Apple School Manager"],"title":"Update Apple SCIM Token"},{"categories":["Microsoft Intune","Azure"],"contents":"Why would I want to do this? What\u0026rsquo;s the Purpose? You may be asking yourself why you should be adding Terms of Use to Autopilot Enrolment. Well, there are a few use cases that spring to mind.\nThe first being an IT acceptable use policy; for many years users have come to collect devices from a field office and/or had the devices delivered to them by an internal tech.
Well, in the current day and age of Modern Management and Windows Autopilot you now have the option to ship straight from the vendor to the end user, so ensuring that the user accepts the policy prior to using the device may be an absolute must for your organisation.\nNot only can you present these terms of use, but you can also see who has accepted and/or denied these from the Azure portal, cool right?\nDid I also mention that you can add multiple languages for your terms of use? No? Well it\u0026rsquo;s actually super easy to do so without creating additional policies etc.\nCreating your Terms of Use We will be making use of conditional access and the inbuilt terms of use from the Azure portal, meaning we are taking advantage of products you may already have licenses for.\nWithout further ado, let\u0026rsquo;s get started.\nHead over to the Conditional Access Pane in the Azure portal.\nUnder the Manage section, click Terms of Use\nIn the right-hand pane, click New terms\nComplete the Name and Display Name fields Click on the box that says Upload required PDF, Locate and upload the PDF of your Terms.\nSelect your default language\nAt this point you can upload the same terms in a different language, simply click + Add language, Upload the PDF and then select its language from the drop down to the right-hand side. Require user to expand the terms of use, now this is optional, you can choose to require the users to expand the terms of use or not. Personally, I prefer to, as if the terms are broken you can show that they would have had to read them. Require users to consent on every device, this one HAS to be set to Off. This is because, if you do switch it on, the device has to be joined to Azure AD already and in full working order. You will see this warning if you do attempt to switch it on.\nExpire Consents, again this one is optional, if you would like the user to accept it once and then never have to see it again then you can leave this off.
However, if you would like users to have to accept this on another Autopilot build after a specified period of time then flick the switch on. Expire Starting On, use the date picker and select the date you are on. Frequency, Set this to your desired frequency. I like to set this to Monthly. Duration before re-acceptance required (days), set this to the amount of days you would like before the end user has to perform re-acceptance. Your final terms of use setting should look something like this; Conditional Access, you can choose to create a policy later, or select Custom Policy which will allow you to create the policy now.\nClick Create\nIf you chose to create a Custom Policy you will be redirected to a Conditional Access policy configuration. Give your Policy a name e.g. Autopilot Enrolment Terms of Use Policy Assignments, To start with I would test this out with a bunch of your techs, or users who give good feedback to ensure that this suits your organizational needs. Make sure this is targeted to users. Cloud apps and actions, now this is where we specify it to only apply to Intune Enrolment (Autopilot). Under the Cloud Apps Slider, select Select Apps, Click on the selection, type Microsoft Intune Enrollment and click the app to select it, Click Select. Conditions, You can change this to suit your needs, I generally select the Device Platform as Windows\nAccess Control - Grant, Select Grant Access, and then select your Terms of Use policy like below\nClick Select Session, You don\u0026rsquo;t need to select a Session At the bottom of the browser window, ensure that you have set Enable Policy to On. Click Create Well that\u0026rsquo;s a wrap from a configuration perspective, let\u0026rsquo;s jump into some testing and see what the end user will experience.\nSo what does it look like? Firstly, you will hit the standard Autopilot Screen where you log in with your details and MFA etc.
You will then notice that you are re-directed to a screen that represents the below;\nIf you selected to force the users to expand the terms, but just click accept you will see the message below pop up. If you expand the terms you will see your terms in an embedded PDF viewer; once you have finished click Accept, you will then just continue along your way on a standard Autopilot build.\nIf you have this conditional access policy enabled and a user does not accept the policy, they will be prevented from performing an Intune Enrolment. They are however able to Accept the policy on further attempts.\nHow do I see who\u0026rsquo;s Accepted/Declined? This is super easy to check, Head back over to the Conditional Access Pane in the Azure portal. From here click Terms of Use, Straight away you can see the numbers for Accepted and Declined.\nIf you go ahead and click those numbers, you can see who has/hasn\u0026rsquo;t accepted the policy; you can also download the list should it be required.\nConclusion For something so simple, it is quite effective and there may be a ton of use cases for this in other scenarios, however, I was asked to scope this out for a customer I was working with.\nI hope it may be of some use :D, enjoy your day guys and girls!!!\n","image":"https://hugo.euc365.com/images/post/apterms/FeaturedImage_hub577f64f88aa630e6b5a874de9878d48_16119_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/autopilot-enrolment-terms-of-use/","tags":["Azure","Azure AD","Conditional Access","Autopilot","Intune","Terms of Use"],"title":"Autopilot Enrolment Terms of Use"},{"categories":["PowerShell","Microsoft Intune","MEMCM"],"contents":"Dynamic? In what way?
When I say the packages are dynamic, I mean that you don\u0026rsquo;t have to update the application package when a new version is released by the vendor.\nThere are caveats to both methods that we walk through below, and there are also other community and paid-for tools to do similar things.\nHowever, the reason I wrote this post and started focusing on packaging applications in this way was to avoid having support tickets when a new version is released; I also wanted to use this with ConfigMGR and Intune without the requirement of additional modules.\nShow me the way! Well, let\u0026rsquo;s show you a couple of ways to do this, Web (HTML) Scraping and using the GitHub API.\nWeb Scraping In my opinion, this is the most flawed method, as this relies on the website layout and/or table structure to stay the same as when you write your script. However, it is still an option and it works really well.\nTo be able to get the data from the tables in PowerShell we are going to need to use Invoke-WebRequest. Normally this would be super easy to use as it parses the HTML data for you. However, as this script will run as system in Intune you will need to launch it with -UseBasicParsing, which complicates things a little more.\nFor this example we will use the Microsoft Remote Desktop Client, Are you ready? Let\u0026rsquo;s begin.
(You can achieve this using API Calls; however, this is a good example of table structure for Web Scraping)\nDetection Let\u0026rsquo;s start by looking at the way we obtain the latest version and check it against the version in the registry.\nAs you can see from the image below, there is a version table right at the top of the web page.\nIf you press F12 and open the developer options, you can click through the HTML sections in the Elements tab and find the table like below;\nAs you can see from the snippet below we have to use a HTMLContent COM object to parse the HTML data so we can interact with the tables.\nIn its simplest form we get the RawContent and then write it to the IHTMLDocument2 object with the COM object, giving us the functionality to work with the tables.\nfunction Get-LatestVersion { [String]$URL = \u0026#34;https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/windowsdesktop-whatsnew\u0026#34; $WebResult = Invoke-WebRequest -Uri $URL -UseBasicParsing $WebResultHTML = $WebResult.RawContent $HTML = New-Object -Com \u0026#34;HTMLFile\u0026#34; $HTML.IHTMLDocument2_write($WebResultHTML) $Tables = @($html.all.tags(\u0026#39;table\u0026#39;)) $LatestVer = $null [System.Collections.ArrayList]$LatestVer = New-Object -TypeName psobject for ($i = 0; $i -le $tables.count; $i++) { $table = $tables[0] $titles = @() $rows = @($table.Rows) ## Go through all of the rows in the table foreach ($row in $rows) { $cells = @($row.Cells) ## If we\u0026#39;ve found a table header, remember its titles if ($cells[0].tagName -eq \u0026#34;TH\u0026#34;) { $titles = @($cells | ForEach-Object { (\u0026#34;\u0026#34; + $_.InnerText).Trim() }) continue } $resultObject = [Ordered] @{} $counter = 0 foreach ($cell in $cells) { $title = $titles[$counter] if (-not $title) { continue } $resultObject[$title] = (\u0026#34;\u0026#34; + $cell.InnerText).Trim() $Counter++ } #$Version_Data = @() $Version_Data = [PSCustomObject]@{
\u0026#39;LatestVersion\u0026#39; = $resultObject.\u0026#39;Latest version\u0026#39; } $LatestVer.Add($Version_Data) | Out-null } } $LatestVer } Let\u0026rsquo;s take a closer look at the interaction with the tables; as you can see, the variable $Tables uses the $HTML variable, which contains the COM object data, to select everything with the tag of table ($Tables = @($html.all.tags(\u0026#39;table\u0026#39;))). From this point it uses a for loop to gather the table data, until finally we decide which part of the table we want to use.\nFor example, we are focusing on the latest version, so if you run the for loop manually and look at $resultObject in PowerShell it will return something like this;\nFrom this point you can create a PSCustomObject with the table header you want. Now this is kind of over-complicating it for this example, as you could just return $resultObject.\u0026#39;Latest version\u0026#39;; however, I use this loop for other methods and keeping it in this format helps me standardise the way I work, but it also gives you the ability to use it for other things too.\nAll of this is wrapped inside a function (Get-LatestVersion) as I plan on using the same script for the detection method as for the install, but I also like to re-check in my install script that the application definitely is not installed before the install action executes.
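The header/row loop above can be sketched language-agnostically as follows (an illustrative Python helper, not part of the original script): capture the titles from the header row, then pair each data row's cells with those titles, trimming each value, to get one record per row.

```python
# Sketch of the table-parsing loop: pair each data row's cells with the
# header titles captured from the TH row, trimming whitespace, to build
# one record (dictionary) per row.
def rows_to_records(titles, rows):
    records = []
    for cells in rows:
        record = {}
        for counter, cell in enumerate(cells):
            # Skip cells that have no matching header title, as the
            # original loop does with its "if (-not $title) { continue }".
            if counter >= len(titles) or not titles[counter]:
                continue
            record[titles[counter]] = cell.strip()
        records.append(record)
    return records

print(rows_to_records(["Latest version"], [[" 1.2.4487 "]]))
```

This mirrors why `$resultObject.'Latest version'` works: each data row becomes a record keyed by the header titles.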
If you look at the Detect-Application function you can see that I check both the 64-bit and 32-bit registry locations with an IF statement based on the variables below;\n$UninstallKey = \u0026#34;HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\\u0026#34; $UninstallKeyWow6432Node = \u0026#34;HKLM:\\SOFTWARE\\WOW6432Node\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\\u0026#34; $LatestVersion = ((Get-LatestVersion | Get-Unique | Sort-Object $_.LatestVersion)[0]).LatestVersion $AppName = \u0026#34;Remote Desktop\u0026#34; function Detect-Application { IF (((Get-ChildItem -Path $UninstallKey | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$AppName*\u0026#34;}).DisplayVersion -Match $LatestVersion) -or ((Get-ChildItem -Path $UninstallKeyWow6432Node | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$AppName*\u0026#34;}).DisplayVersion -Match $LatestVersion)) { $True } } The IF statement uses an -or operator, meaning if either of the conditions matches, the code within the brackets is run. As you can see from the variables, $LatestVersion uses the Get-LatestVersion function, which is used to match the display version in the registry.\nThis is the fundamental foundation of the operation, as we can now detect the application without using any additional modules. In the next section we will look at the download and installation of the app.\nDownload Link Now we know that we can Detect the application, let\u0026rsquo;s look at obtaining the download link.\nIf you look at the below snippet, you can see we use a variable which calls a function to get the download link ($DownloadLink = Get-DownloadLink). For this to work there is a reliance on the $Arch variable being set; by default this is set to 64-bit.
However, this is available as a command line parameter.\nparam ( [ValidateSet(\u0026#39;64-bit\u0026#39;,\u0026#39;32-bit\u0026#39;,\u0026#39;ARM64\u0026#39;)] [String]$Arch = \u0026#39;64-bit\u0026#39;, [ValidateSet(\u0026#39;Install\u0026#39;,\u0026#39;Uninstall\u0026#39;,\u0026#34;Detect\u0026#34;)] [string]$ExecutionType, [string]$DownloadPath = \u0026#34;$env:Temp\\RDInstaller\\\u0026#34; ) $DownloadLink = Get-DownloadLink Let\u0026rsquo;s take a look at the Get-DownloadLink function; the basics of getting the data and writing it to an HTML COM object are the same as the detection method, however this time we do not need to look at a table, we are specifically looking for a link which matches the $Arch variable.\nfunction Get-DownloadLink { $URL = \u0026#34;https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/windowsdesktop\u0026#34; $WebResult = Invoke-WebRequest -Uri $URL -UseBasicParsing $WebResultHTML = $WebResult.RawContent $HTML = New-Object -Com \u0026#34;HTMLFile\u0026#34; $HTML.IHTMLDocument2_write($WebResultHTML) ($HTML.links | Where-Object {$_.InnerHTMl -Like \u0026#34;*$Arch*\u0026#34;}).href } If you look at the web page for the downloads you will see that the links are in an unordered list;\nAgain, if you hit F12 and look at the html content behind the table, you will see the data we are looking for.\nTo get this using the script we simply run ($HTML.links | Where-Object {$_.InnerHTMl -Like \u0026quot;*$Arch*\u0026quot;}).href simple, right?\nDownload \u0026amp; Install Before we look at the install function, let\u0026rsquo;s look at the logic that calls the install.\nLet\u0026rsquo;s just assume you called the script with the Install execution type (.\\\u0026lt;ScriptName\u0026gt;.ps1 -ExecutionType Install) or launched it without any parameters.\nLet\u0026rsquo;s look inside the default section highlighted below; firstly it will check if the latest version of the application is not installed using an IF statement, ELSE it returns that it is already installed.\nIf the
application is not installed it then proceeds to attempt the installation in a try{} catch{} statement. The basics of this are as it says: it will try the install, and if it fails it will catch it and throw back the Write-Error text.\nswitch ($ExecutionType) { Detect { Detect-Application } Uninstall { try { Uninstall-Application -ErrorAction Stop \u0026#34;Uninstallation Complete\u0026#34; } catch { Write-Error \u0026#34;Failed to Uninstall $AppName\u0026#34; } } Default { IF (!(Detect-Application)) { try { \u0026#34;The latest version is not installed, Attempting install\u0026#34; Install-Application -ErrorAction Stop \u0026#34;Installation Complete\u0026#34; } catch { Write-Error \u0026#34;Failed to Install $AppName\u0026#34; } } ELSE { \u0026#34;The Latest Version ($LatestVersion) of $AppName is already installed\u0026#34; } } } Let\u0026rsquo;s take a look at the Install-Application function that is called in the statement.\nLet\u0026rsquo;s break it down into stages.\nChecks if the $DownloadPath exists, if not it will try to create it. Download the installer from the Link to the Download folder ($DownloadPath) Install the MSI with the additional command line arguments \u0026quot;$DownloadPath\\$InstallerName\u0026quot;\u0026quot; /qn /norestart /l* \u0026quot;\u0026quot;$DownloadPath\\RDINSTALL$(get-Date -format yyyy-MM-dd).log\u0026quot;\u0026quot; When using double quotes (\u0026quot;) inside double quotes you must double them up.
For Example \u0026quot;The file is located: \u0026quot;\u0026quot;$Variable\\Path.txt\u0026quot;\u0026quot;\u0026quot;\n$LatestVersion = ((Get-LatestVersion | Get-Unique | Sort-Object $_.LatestVersion)[0]).LatestVersion $InstallerName = \u0026#34;RemoteDesktop-$LatestVersion-$Arch.msi\u0026#34; function Install-Application { IF (!(Test-Path $DownloadPath)) { try { Write-Verbose \u0026#34;$DownloadPath Does not exist, Creating the folder\u0026#34; MKDIR $DownloadPath -ErrorAction Stop | Out-Null } catch { Write-Verbose \u0026#34;Failed to create folder $DownloadPath\u0026#34; } } try { Write-Verbose \u0026#34;Attempting client download\u0026#34; Invoke-WebRequest -Usebasicparsing -URI $DownloadLink -Outfile \u0026#34;$DownloadPath\\$InstallerName\u0026#34; -ErrorAction Stop } catch { Write-Error \u0026#34;Failed to download $AppName\u0026#34; } try { \u0026#34;Installing $AppName v$($LatestVersion)\u0026#34; Start-Process \u0026#34;MSIEXEC.exe\u0026#34; -ArgumentList \u0026#34;/I \u0026#34;\u0026#34;$DownloadPath\\$InstallerName\u0026#34;\u0026#34; /qn /norestart /l* \u0026#34;\u0026#34;$DownloadPath\\RDINSTALL$(get-Date -format yyyy-MM-dd).log\u0026#34;\u0026#34;\u0026#34; -Wait } catch { Write-Error \u0026#34;failed to Install $AppName\u0026#34; } } Uninstall As we have a dynamic installation, we want the same for the uninstall, right?\nWell this is also achievable, take a look at the Uninstall-Application function below;\n$UninstallKey = \u0026#34;HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\\u0026#34; $UninstallKeyWow6432Node = \u0026#34;HKLM:\\SOFTWARE\\WOW6432Node\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\\u0026#34; $AppName = \u0026#34;Remote Desktop\u0026#34; function Uninstall-Application { try { \u0026#34;Uninstalling $AppName\u0026#34; IF (Get-ChildItem -Path $UninstallKey | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$AppName*\u0026#34;} -ErrorAction SilentlyContinue) { \u0026#34;Uninstalling
$AppName\u0026#34; $UninstallGUID = (Get-ChildItem -Path $UninstallKey | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$AppName*\u0026#34;}).PSChildName $UninstallArgs = \u0026#34;/X \u0026#34; + $UninstallGUID + \u0026#34; /qn\u0026#34; Start-Process \u0026#34;MSIEXEC.EXE\u0026#34; -ArgumentList $UninstallArgs -Wait } IF (Get-ChildItem -Path $UninstallKeyWow6432Node | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$AppName*\u0026#34;} -ErrorAction SilentlyContinue) { \u0026#34;Uninstalling $AppName\u0026#34; $UninstallGUID = (Get-ChildItem -Path $UninstallKeyWow6432Node | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$AppName*\u0026#34;}).PSChildName $UninstallArgs = \u0026#34;/X \u0026#34; + $UninstallGUID + \u0026#34; /qn\u0026#34; Start-Process \u0026#34;MSIEXEC.EXE\u0026#34; -ArgumentList $UninstallArgs -Wait } } catch { Write-Error \u0026#34;failed to Uninstall $AppName\u0026#34; } } This is using some of the same logic as the Detection method; it checks both the 64-bit and the 32-bit registry keys to see if an application whose display name is like our application\u0026rsquo;s is present.\nIf a registry entry is detected, it will obtain the Key Name; in this case, as we are dealing with an MSI, the Key Name is the product GUID. It will then build up the MSIEXEC arguments for the uninstall.
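The argument string that gets handed to MSIEXEC can be sketched as follows (an illustrative Python snippet, not the original PowerShell; it mirrors the $UninstallArgs concatenation above, and the GUID shown is a made-up placeholder):

```python
# Mirrors the "/X " + $UninstallGUID + " /qn" concatenation: /X is the
# MSIEXEC uninstall switch and /qn suppresses the UI; the GUID comes from
# the registry key name of the MSI's uninstall entry.
def msiexec_uninstall_args(product_guid: str) -> str:
    return "/X " + product_guid + " /qn"

print(msiexec_uninstall_args("{00000000-0000-0000-0000-000000000000}"))
```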
After it has completed both steps it will then proceed with the uninstallation.\nFinished Script If you compile all of the sections together with a little bit of formatting you will end up with a script like the one below.\nExamples To Install the 64-Bit version .\\Dynamic-RemoteDesktopClient.ps1\nTo Install the 32-Bit version .\\Dynamic-RemoteDesktopClient.ps1 -Arch '32-bit'\nTo detect the installation only .\\Dynamic-RemoteDesktopClient.ps1 -ExecutionType Detect\nTo uninstall the application .\\Dynamic-RemoteDesktopClient.ps1 -ExecutionType Uninstall\nYou will need to change the param block variable for $ExecutionType to $ExecutionType = 'Detect' when using this as a detection method within Intune or ConfigMGR.\n\u0026lt;# .SYNOPSIS This is a script to Dynamically Detect, Install and Uninstall the Microsoft Remote Desktop Client for Windows. https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/windowsdesktop .DESCRIPTION Use this script to detect, install or uninstall the Microsoft Remote Desktop client for Windows .PARAMETER Arch Select the architecture you would like to install, select from the following - 64-bit (Default) - 32-bit - ARM64 .PARAMETER ExecutionType Select the Execution type; this determines if you will be detecting, installing or uninstalling the application. The options are as follows; - Install (Default) - Detect - Uninstall .PARAMETER DownloadPath The location you would like the downloaded installer to go.
Default: $env:TEMP\\RDInstaller .NOTES Version: 1.2 Author: David Brook Creation Date: 21/02/2021 Purpose/Change: Initial script development #\u0026gt; param ( [ValidateSet(\u0026#39;64-bit\u0026#39;,\u0026#39;32-bit\u0026#39;,\u0026#39;ARM64\u0026#39;)] [String]$Arch = \u0026#39;64-bit\u0026#39;, [ValidateSet(\u0026#39;Install\u0026#39;,\u0026#39;Uninstall\u0026#39;,\u0026#34;Detect\u0026#34;)] [string]$ExecutionType, [string]$DownloadPath = \u0026#34;$env:Temp\\RDInstaller\\\u0026#34; ) function Get-LatestVersion { [String]$URL = \u0026#34;https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/windowsdesktop-whatsnew\u0026#34; $WebResult = Invoke-WebRequest -Uri $URL -UseBasicParsing $WebResultHTML = $WebResult.RawContent $HTML = New-Object -Com \u0026#34;HTMLFile\u0026#34; $HTML.IHTMLDocument2_write($WebResultHTML) $Tables = @($html.all.tags(\u0026#39;table\u0026#39;)) $LatestVer = $null [System.Collections.ArrayList]$LatestVer = New-Object -TypeName psobject for ($i = 0; $i -le $tables.count; $i++) { $table = $tables[0] $titles = @() $rows = @($table.Rows) ## Go through all of the rows in the table foreach ($row in $rows) { $cells = @($row.Cells) ## If we\u0026#39;ve found a table header, remember its titles if ($cells[0].tagName -eq \u0026#34;TH\u0026#34;) { $titles = @($cells | ForEach-Object { (\u0026#34;\u0026#34; + $_.InnerText).Trim() }) continue } $resultObject = [Ordered] @{} $counter = 0 foreach ($cell in $cells) { $title = $titles[$counter] if (-not $title) { continue } $resultObject[$title] = (\u0026#34;\u0026#34; + $cell.InnerText).Trim() $Counter++ } #$Version_Data = @() $Version_Data = [PSCustomObject]@{ \u0026#39;LatestVersion\u0026#39; = $resultObject.\u0026#39;Latest version\u0026#39; } $LatestVer.Add($Version_Data) | Out-null } } $LatestVer } function Get-DownloadLink { $URL = \u0026#34;https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/windowsdesktop\u0026#34; $WebResult = 
Invoke-WebRequest -Uri $URL -UseBasicParsing $WebResultHTML = $WebResult.RawContent $HTML = New-Object -Com \u0026#34;HTMLFile\u0026#34; $HTML.IHTMLDocument2_write($WebResultHTML) ($HTML.links | Where-Object {$_.InnerHTMl -Like \u0026#34;*$Arch*\u0026#34;}).href } function Detect-Application { IF (((Get-ChildItem -Path $UninstallKey | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$AppName*\u0026#34;}).DisplayVersion -Match $LatestVersion) -or ((Get-ChildItem -Path $UninstallKeyWow6432Node | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$AppName*\u0026#34;}).DisplayVersion -Match $LatestVersion)) { $True } } function Install-Application { IF (!(Test-Path $DownloadPath)) { try { Write-Verbose \u0026#34;$DownloadPath Does not exist, Creating the folder\u0026#34; MKDIR $DownloadPath -ErrorAction Stop | Out-Null } catch { Write-Verbose \u0026#34;Failed to create folder $DownloadPath\u0026#34; } } try { Write-Verbose \u0026#34;Attempting client download\u0026#34; Invoke-WebRequest -Usebasicparsing -URI $DownloadLink -Outfile \u0026#34;$DownloadPath\\$InstallerName\u0026#34; -ErrorAction Stop } catch { Write-Error \u0026#34;Failed to download $AppName\u0026#34; } try { \u0026#34;Installing $AppName v$($LatestVersion)\u0026#34; Start-Process \u0026#34;MSIEXEC.exe\u0026#34; -ArgumentList \u0026#34;/I \u0026#34;\u0026#34;$DownloadPath\\$InstallerName\u0026#34;\u0026#34; /qn /norestart /l* \u0026#34;\u0026#34;$DownloadPath\\RDINSTALL$(get-Date -format yyyy-MM-dd).log\u0026#34;\u0026#34;\u0026#34; -Wait } catch { Write-Error \u0026#34;failed to Install $AppName\u0026#34; } } function Uninstall-Application { try { \u0026#34;Uninstalling $AppName\u0026#34; IF (Get-ChildItem -Path $UninstallKey | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$AppName*\u0026#34;} -ErrorAction SilentlyContinue) { \u0026#34;Uninstalling $AppName\u0026#34; $UninstallGUID = (Get-ChildItem -Path $UninstallKey | Get-ItemProperty | Where-Object 
{$_.DisplayName -like \u0026#34;*$AppName*\u0026#34;}).PSChildName $UninstallArgs = \u0026#34;/X \u0026#34; + $UninstallGUID + \u0026#34; /qn\u0026#34; Start-Process \u0026#34;MSIEXEC.EXE\u0026#34; -ArgumentList $UninstallArgs -Wait } IF (Get-ChildItem -Path $UninstallKeyWow6432Node | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$AppName*\u0026#34;} -ErrorAction SilentlyContinue) { \u0026#34;Uninstalling $AppName\u0026#34; $UninstallGUID = (Get-ChildItem -Path $UninstallKeyWow6432Node | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$AppName*\u0026#34;}).PSChildName $UninstallArgs = \u0026#34;/X \u0026#34; + $UninstallGUID + \u0026#34; /qn\u0026#34; Start-Process \u0026#34;MSIEXEC.EXE\u0026#34; -ArgumentList $UninstallArgs -Wait } } catch { Write-Error \u0026#34;Failed to Uninstall $AppName\u0026#34; } } $UninstallKey = \u0026#34;HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\\u0026#34; $UninstallKeyWow6432Node = \u0026#34;HKLM:\\SOFTWARE\\WOW6432Node\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\\u0026#34; $LatestVersion = ((Get-LatestVersion | Get-Unique | Sort-Object LatestVersion)[0]).LatestVersion $InstallerName = \u0026#34;RemoteDesktop-$LatestVersion-$Arch.msi\u0026#34; $AppName = \u0026#34;Remote Desktop\u0026#34; $DownloadLink = Get-DownloadLink switch ($ExecutionType) { Detect { Detect-Application } Uninstall { try { Uninstall-Application -ErrorAction Stop \u0026#34;Uninstallation Complete\u0026#34; } catch { Write-Error \u0026#34;Failed to Uninstall $AppName\u0026#34; } } Default { IF (!(Detect-Application)) { try { \u0026#34;The latest version is not installed, Attempting install\u0026#34; Install-Application -ErrorAction Stop \u0026#34;Installation Complete\u0026#34; } catch { Write-Error \u0026#34;Failed to Install $AppName\u0026#34; } } ELSE { \u0026#34;The Latest Version ($LatestVersion) of $AppName is already installed\u0026#34; } } } That wraps up the web scraping method; I hope this
proves useful when trying to make your apps more dynamic.\nGitHub API Using API calls is a better way to do dynamic updates. Some vendors host their content on GitHub, as it provides build pipelines, wikis, projects and a whole host of other things. This is the method that is least likely to change, and if it does change it will be documented in the GitHub API Docs.\nFor this example we are going to look at Git for Windows; we will use its GitHub repo to query the version and also get the download.\nGitHub rate-limits API calls: unauthenticated calls have a limit of 60, authenticated accounts have a limit of 5000 and GitHub Enterprise accounts have a limit of 15000 calls. Each time the script is launched it uses 1 call, so a detection plus an installation will need 2 API calls. You will need to take this into account if you plan to package multiple applications in this way; you could use multiple accounts and randomise the personal access token from an array, but it is something to be aware of.\nGIT Detection Let's start by looking at the latest releases page.\nThe first thing you may notice is that it automatically redirects the URL, but we just want to check the version.\nNow that we know what the latest version is on the GitHub page, let's take a look at the API.
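Before querying release data, it can be handy to see how much of the rate limit you have left. GitHub exposes a rate_limit endpoint for this; a quick unauthenticated check (this particular call does not count against your quota) might look like this:

```powershell
# Query the GitHub rate-limit endpoint to see remaining API calls
$Rate = Invoke-RestMethod -Method GET -Uri "https://api.github.com/rate_limit" -ContentType "application/json"
"Remaining calls: $($Rate.rate.remaining) of $($Rate.rate.limit)"
```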
If you change the URL in your browser to https://api.github.com/repos/git-for-windows/git/releases/latest, you will see a JSON response like the below.\n{ \u0026#34;url\u0026#34;: \u0026#34;https://api.github.com/repos/git-for-windows/git/releases/37800609\u0026#34;, \u0026#34;assets_url\u0026#34;: \u0026#34;https://api.github.com/repos/git-for-windows/git/releases/37800609/assets\u0026#34;, \u0026#34;upload_url\u0026#34;: \u0026#34;https://uploads.github.com/repos/git-for-windows/git/releases/37800609/assets{?name,label}\u0026#34;, \u0026#34;html_url\u0026#34;: \u0026#34;https://github.com/git-for-windows/git/releases/tag/v2.30.1.windows.1\u0026#34;, \u0026#34;id\u0026#34;: 37800609, \u0026#34;author\u0026#34;: { \u0026#34;login\u0026#34;: \u0026#34;git-for-windows-ci\u0026#34;, \u0026#34;id\u0026#34;: 24522801, \u0026#34;node_id\u0026#34;: \u0026#34;MDQ6VXNlcjI0NTIyODAx\u0026#34;, \u0026#34;avatar_url\u0026#34;: \u0026#34;https://avatars.githubusercontent.com/u/24522801?v=4\u0026#34;, \u0026#34;gravatar_id\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;url\u0026#34;: \u0026#34;https://api.github.com/users/git-for-windows-ci\u0026#34;, \u0026#34;html_url\u0026#34;: \u0026#34;https://github.com/git-for-windows-ci\u0026#34;, \u0026#34;followers_url\u0026#34;: \u0026#34;https://api.github.com/users/git-for-windows-ci/followers\u0026#34;, \u0026#34;following_url\u0026#34;: \u0026#34;https://api.github.com/users/git-for-windows-ci/following{/other_user}\u0026#34;, \u0026#34;gists_url\u0026#34;: \u0026#34;https://api.github.com/users/git-for-windows-ci/gists{/gist_id}\u0026#34;, \u0026#34;starred_url\u0026#34;: \u0026#34;https://api.github.com/users/git-for-windows-ci/starred{/owner}{/repo}\u0026#34;, \u0026#34;subscriptions_url\u0026#34;: \u0026#34;https://api.github.com/users/git-for-windows-ci/subscriptions\u0026#34;, \u0026#34;organizations_url\u0026#34;: \u0026#34;https://api.github.com/users/git-for-windows-ci/orgs\u0026#34;, \u0026#34;repos_url\u0026#34;: 
\u0026#34;https://api.github.com/users/git-for-windows-ci/repos\u0026#34;, \u0026#34;events_url\u0026#34;: \u0026#34;https://api.github.com/users/git-for-windows-ci/events{/privacy}\u0026#34;, \u0026#34;received_events_url\u0026#34;: \u0026#34;https://api.github.com/users/git-for-windows-ci/received_events\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;User\u0026#34;, \u0026#34;site_admin\u0026#34;: false }, \u0026#34;node_id\u0026#34;: \u0026#34;MDc6UmVsZWFzZTM3ODAwNjA5\u0026#34;, \u0026#34;tag_name\u0026#34;: \u0026#34;v2.30.1.windows.1\u0026#34;, \u0026#34;target_commitish\u0026#34;: \u0026#34;main\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;Git for Windows 2.30.1\u0026#34;, \u0026#34;draft\u0026#34;: false, \u0026#34;prerelease\u0026#34;: false, \u0026#34;created_at\u0026#34;: \u0026#34;2021-02-09T12:53:04Z\u0026#34;, \u0026#34;published_at\u0026#34;: \u0026#34;2021-02-09T13:41:03Z\u0026#34;, \u0026#34;assets\u0026#34;: [ All objects in assets, this is just a snippet] } If you look at the highlighted line above, you will notice that the versions matches the one on the latest release page.\nNow we know what property within the API we are looking for and how it displays, we can head into PowerShell and start working on the detection.\nFirst of all we need to get the latest version, to do this we first perform and API Call to get all of the information and store the information in the $RestResult variable.\nTake a look at the below snippet;\nparam ( [ValidateSet(\u0026#39;64-bit\u0026#39;,\u0026#39;32-bit\u0026#39;,\u0026#39;ARM64\u0026#39;)] [String]$Arch = \u0026#39;64-bit\u0026#39;, [ValidateSet(\u0026#39;Install\u0026#39;,\u0026#39;Uninstall\u0026#39;,\u0026#34;Detect\u0026#34;)] [string]$ExecutionType = \u0026#34;Detect\u0026#34;, [string]$DownloadPath = \u0026#34;$env:Temp\\GitInstaller\\\u0026#34;, [string]$GITPAC ) ############################################################################## ##################### Get the Information from the API 
####################### ############################################################################## [String]$GitHubURI = \u0026#34;https://api.github.com/repos/git-for-windows/git/releases/latest\u0026#34; IF ($GITPAC) { $RestResult = Invoke-RestMethod -Method GET -Uri $GitHubURI -ContentType \u0026#34;application/json\u0026#34; -Headers @{Authorization = \u0026#34;token $GITPAC\u0026#34;} } ELSE { $RestResult = Invoke-RestMethod -Method GET -Uri $GitHubURI -ContentType \u0026#34;application/json\u0026#34; } ############################################################################## ########################## Set Required Variables ############################ ############################################################################## $LatestVersion = $RestResult.name.split()[-1] } The first thing to note on this snippet is the method it will use to connect to the API, If you specify a Personal Access Token with the -GITPAC parameter or via the variable in the script you will be able to have 5000 API calls for your application installs.\nIn short we specify the $URL variable and then run a GET request with Invoke-RestMethod and specify that we want the output as application/json. Once it has the data we want to then format the $LatestVersion variable to return just the version number, for this we use the .split() operator, by default this splits on spaces, you can specify other characters to split it with by adding in something like '.' and it would split the string at every point there is a dot. Now we have split the string, we want to select the index, for this example as the version number is at the end we want to select the index [-1]. 
If the index was at the start we would use [0], feel free to experiment with this.\nThis variable is then used to call the Detect-Application function which will return True if the application is installed, otherwise it will return null.\n$LatestVersion = $RestResult.name.split()[-1] $UninstallKey = \u0026#34;HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\\u0026#34; $UninstallKeyWow6432Node = \u0026#34;HKLM:\\SOFTWARE\\WOW6432Node\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\\u0026#34; $DetectionString = \u0026#34;Git version\u0026#34; $AppName = \u0026#34;Git For Windows\u0026#34; ############################################################################## ########################## Application Detection ############################# ############################################################################## function Detect-Application { IF (((Get-ChildItem -Path $UninstallKey | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$DetectionString*\u0026#34;}).DisplayVersion -Match $LatestVersion) -or ((Get-ChildItem -Path $UninstallKeyWow6432Node | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$DetectionString*\u0026#34;}).DisplayVersion -Match $LatestVersion)) { Write-Output \u0026#34;$AppName is installed\u0026#34; $True } } GIT Download Link If you take a look back at the latest releases page, and scroll down to Assets, if you hover over one of them you will see the URL it links to in the bottom left-hand corner of your browser.\nNow we know that we can Detect the application, lets look at obtaining the download link.\nIf you look at the script snippet below, you can see that we are still using the $RestResult to obtain the download link. 
To get the download link for the architecture you specify, we first have to build up the $EXEName variable; this uses the $LatestVersion and $Arch variables to bring the name together.\nOnce the EXE name is built, we use it to get the link, using the Where-Object cmdlet to select the download URL from the asset whose $_.name matches $EXEName.\nparam ( [ValidateSet(\u0026#39;64-bit\u0026#39;,\u0026#39;32-bit\u0026#39;,\u0026#39;ARM64\u0026#39;)] [String]$Arch = \u0026#39;64-bit\u0026#39;, [ValidateSet(\u0026#39;Install\u0026#39;,\u0026#39;Uninstall\u0026#39;,\u0026#34;Detect\u0026#34;)] [string]$ExecutionType = \u0026#34;Detect\u0026#34;, [string]$DownloadPath = \u0026#34;$env:Temp\\GitInstaller\\\u0026#34;, [string]$GITPAC ) ############################################################################## ########################## Set Required Variables ############################ ############################################################################## $LatestVersion = $RestResult.name.split()[-1] $EXEName = \u0026#34;Git-$LatestVersion-$Arch.exe\u0026#34; $DownloadLink = ($RestResult.assets | Where-Object {$_.Name -Match $EXEName}).browser_download_url GIT Download \u0026amp; Install The Install logic is the same as web scraping, however we will cover it here too so you don\u0026rsquo;t need to scroll up.\nLet's assume you called the script with the Install execution type (.\\\u0026lt;ScriptName\u0026gt;.ps1 -ExecutionType Install) or launched it without any parameters.\nLet's look inside the default section highlighted below: firstly it will check, using an IF statement, whether the latest version of the application is not installed, otherwise returning that it is already installed.\nIf the application is not installed it then proceeds to attempt the installation in a try{} catch{} statement.
This does exactly what it says: it will try the installation, and if it fails it will catch the error and throw back the Write-Error text.\nswitch ($ExecutionType) { Detect { Detect-Application } Uninstall { try { Uninstall-Application -ErrorAction Stop \u0026#34;Uninstallation Complete\u0026#34; } catch { Write-Error \u0026#34;Failed to Uninstall $AppName\u0026#34; } } Default { IF (!(Detect-Application)) { try { \u0026#34;The latest version is not installed, Attempting install\u0026#34; Install-Application -ErrorAction Stop \u0026#34;Installation Complete\u0026#34; } catch { Write-Error \u0026#34;Failed to Install $AppName\u0026#34; } } ELSE { \u0026#34;The Latest Version is already installed\u0026#34; } } } Let's take a look at the Install-Application function that is called in the statement.\nLet's break it down into stages.\nCheck if the $DownloadPath exists; if not, try to create it. Download the installer from the link to the download folder ($DownloadPath). Install the application with the additional command-line arguments stored in the $InstallArgs variable.
param ( [ValidateSet(\u0026#39;64-bit\u0026#39;,\u0026#39;32-bit\u0026#39;,\u0026#39;ARM64\u0026#39;)] [String]$Arch = \u0026#39;64-bit\u0026#39;, [ValidateSet(\u0026#39;Install\u0026#39;,\u0026#39;Uninstall\u0026#39;,\u0026#34;Detect\u0026#34;)] [string]$ExecutionType = \u0026#34;Detect\u0026#34;, [string]$DownloadPath = \u0026#34;$env:Temp\\GitInstaller\\\u0026#34;, [string]$GITPAC ) $LatestVersion = $RestResult.name.split()[-1] $EXEName = \u0026#34;Git-$LatestVersion-$Arch.exe\u0026#34; $DownloadLink = ($RestResult.assets | Where-Object {$_.Name -Match $EXEName}).browser_download_url $InstallArgs = \u0026#34;/SP- /VERYSILENT /SUPPRESSMSGBOXES /NORESTART\u0026#34; $AppName = \u0026#34;Git For Windows\u0026#34; ############################################################################## ################## Application Installation/Uninstallation ################### ############################################################################## function Install-Application { # If the Download Path does not exist, then try to create it.
IF (!(Test-Path $DownloadPath)) { try { Write-Verbose \u0026#34;$DownloadPath Does not exist, Creating the folder\u0026#34; New-Item -Path $DownloadPath -ItemType Directory -ErrorAction Stop | Out-Null } catch { Write-Verbose \u0026#34;Failed to create folder $DownloadPath\u0026#34; } } # Once the folder exists, download the installer try { Write-Verbose \u0026#34;Downloading Application Binaries for $AppName\u0026#34; Invoke-WebRequest -Usebasicparsing -URI $DownloadLink -Outfile \u0026#34;$DownloadPath\\$EXEName\u0026#34; -ErrorAction Stop } catch { Write-Error \u0026#34;Failed to download application binaries\u0026#34; } # Once Downloaded, Install the application try { \u0026#34;Installing $AppName $($LatestVersion)\u0026#34; Start-Process \u0026#34;$DownloadPath\\$EXEName\u0026#34; -ArgumentList $InstallArgs -Wait } catch { Write-Error \u0026#34;Failed to Install $AppName, please check the transcript file ($TranscriptFile) for further details.\u0026#34; } } GIT Uninstall As we have a dynamic installation, we want the same for the uninstall, right?\nWell, this is also achievable; take a look at the Uninstall-Application function below.\nLet's break this down:\nCheck if an application is installed with a display name like the string stored in $DetectionString (checks both the 64-bit and 32-bit uninstall keys). If the application is installed, get the UninstallString from the key and store it in the $UninstallEXE variable. Uninstall the application using $UninstallEXE with the command-line arguments stored in the $UninstallArgs variable.
############################################################################## ################## Application Installation/Uninstallation ################### ############################################################################## $UninstallKey = \u0026#34;HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\\u0026#34; $UninstallKeyWow6432Node = \u0026#34;HKLM:\\SOFTWARE\\WOW6432Node\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\\u0026#34; $DetectionString = \u0026#34;Git version\u0026#34; $UninstallArgs = \u0026#34;/VERYSILENT /NORESTART\u0026#34; function Uninstall-Application { try { IF (Get-ChildItem -Path $UninstallKey | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$DetectionString*\u0026#34;} -ErrorAction SilentlyContinue) { \u0026#34;Uninstalling $AppName\u0026#34; $UninstallExe = (Get-ChildItem -Path $UninstallKey | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$DetectionString*\u0026#34;}).UninstallString Start-Process $UninstallExe -ArgumentList $UninstallArgs -Wait } IF (Get-ChildItem -Path $UninstallKeyWow6432Node | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$DetectionString*\u0026#34;} -ErrorAction SilentlyContinue) { \u0026#34;Uninstalling $AppName\u0026#34; $UninstallExe = (Get-ChildItem -Path $UninstallKeyWow6432Node | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$DetectionString*\u0026#34;}).UninstallString Start-Process $UninstallExe -ArgumentList $UninstallArgs -Wait } } catch { Write-Error \u0026#34;failed to Uninstall $AppName\u0026#34; } } GIT Finished Script If you compile all of the sections together with a little bit of formatting you will end up with a script like the one below.\nExamples To install the 64-Bit version .\\Dynamic-GitforWindows.ps1\nTo install the 32-Bit version .\\Dynamic-GitforWindows.ps1 -Arch '32-bit'\nTo detect the installation only .\\Dynamic-GitforWindows.ps1 -ExecutionType Detect\nTo install the application with a Git 
Personal Access Key .\\Dynamic-GitforWindows.ps1 -ExecutionType Install -GITPAC \u0026lt;YourPAC\u0026gt;\nTo uninstall the application .\\Dynamic-GitforWindows.ps1 -ExecutionType Uninstall\nYou will need to change the param block variable for $ExecutionType to $ExecutionType = Detect when using this as a detection method within Intune or ConfigMGR.\n\u0026lt;# .SYNOPSIS This is a script to Dynamically Detect, Install and Uninstall the Git for Windows Client. https://gitforwindows.org/ .DESCRIPTION Use this script to detect, install or uninstall the Git for Windows client. .PARAMETER Arch Select the architecture you would like to install, select from the following - 64-bit (Default) - 32-bit - ARM64 .PARAMETER ExecutionType Select the Execution type, this determines if you will be detecting, installing uninstalling the application. The options are as follows; - Install (Default) - Detect - Uninstall .Parameter DownloadPath The location you would like the downloaded installer to go. Default: $env:TEMP\\GitInstall .NOTES Version: 1.0 Author: David Brook Creation Date: 21/02/2021 Purpose/Change: Initial script development #\u0026gt; param ( [ValidateSet(\u0026#39;64-bit\u0026#39;,\u0026#39;32-bit\u0026#39;,\u0026#39;ARM64\u0026#39;)] [String]$Arch = \u0026#39;64-bit\u0026#39;, [ValidateSet(\u0026#39;Install\u0026#39;,\u0026#39;Uninstall\u0026#39;,\u0026#34;Detect\u0026#34;)] [string]$ExecutionType = \u0026#34;Detect\u0026#34;, [string]$DownloadPath = \u0026#34;$env:Temp\\GitInstaller\\\u0026#34;, [string]$GITPAC ) $TranscriptFile = \u0026#34;$env:SystemRoot\\Logs\\Software\\GitForWindows_Dynamic_Install.Log\u0026#34; IF (-Not ($ExecutionType -Match \u0026#34;Detect\u0026#34;)) { Start-Transcript -Path $TranscriptFile } ############################################################################## ########################## Application Detection ############################# ############################################################################## function 
Detect-Application { IF (((Get-ChildItem -Path $UninstallKey | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$DetectionString*\u0026#34;}).DisplayVersion -Match $LatestVersion) -or ((Get-ChildItem -Path $UninstallKeyWow6432Node | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$DetectionString*\u0026#34;}).DisplayVersion -Match $LatestVersion)) { Write-Output \u0026#34;$AppName is installed\u0026#34; $True } } ############################################################################## ################## Application Installation/Uninstallation ################### ############################################################################## function Install-Application { # If the Download Path does not exist, Then try and crate it. IF (!(Test-Path $DownloadPath)) { try { Write-Verbose \u0026#34;$DownloadPath Does not exist, Creating the folder\u0026#34; New-Item -Path $DownloadPath -ItemType Directory -ErrorAction Stop | Out-Null } catch { Write-Verbose \u0026#34;Failed to create folder $DownloadPath\u0026#34; } } # Once the folder exists, download the installer try { Write-Verbose \u0026#34;Downloading Application Binaries for $AppName\u0026#34; Invoke-WebRequest -Usebasicparsing -URI $DownloadLink -Outfile \u0026#34;$DownloadPath\\$EXEName\u0026#34; -ErrorAction Stop } catch { Write-Error \u0026#34;Failed to download application binaries\u0026#34; } # Once Downloaded, Install the application try { \u0026#34;Installing $AppName $($LatestVersion)\u0026#34; Start-Process \u0026#34;$DownloadPath\\$EXEName\u0026#34; -ArgumentList $InstallArgs -Wait } catch { Write-Error \u0026#34;Failed to Install $AppName, please check the transcript file ($TranscriptFile) for further details.\u0026#34; } } function Uninstall-Application { try { IF (Get-ChildItem -Path $UninstallKey | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$DetectionString*\u0026#34;} -ErrorAction SilentlyContinue) { \u0026#34;Uninstalling 
$AppName\u0026#34; $UninstallExe = (Get-ChildItem -Path $UninstallKey | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$DetectionString*\u0026#34;}).UninstallString Start-Process $UninstallExe -ArgumentList $UninstallArgs -Wait } IF (Get-ChildItem -Path $UninstallKeyWow6432Node | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$DetectionString*\u0026#34;} -ErrorAction SilentlyContinue) { \u0026#34;Uninstalling $AppName\u0026#34; $UninstallExe = (Get-ChildItem -Path $UninstallKeyWow6432Node | Get-ItemProperty | Where-Object {$_.DisplayName -like \u0026#34;*$DetectionString*\u0026#34;}).UninstallString Start-Process $UninstallExe -ArgumentList $UninstallArgs -Wait } } catch { Write-Error \u0026#34;failed to Uninstall $AppName\u0026#34; } } ############################################################################## ##################### Get the Information from the API ####################### ############################################################################## [String]$GitHubURI = \u0026#34;https://api.github.com/repos/git-for-windows/git/releases/latest\u0026#34; IF ($GITPAC) { $RestResult = Invoke-RestMethod -Method GET -Uri $GitHubURI -ContentType \u0026#34;application/json\u0026#34; -Headers @{Authorization = \u0026#34;token $GITPAC\u0026#34;} } ELSE { $RestResult = Invoke-RestMethod -Method GET -Uri $GitHubURI -ContentType \u0026#34;application/json\u0026#34; } ############################################################################## ########################## Set Required Variables ############################ ############################################################################## $LatestVersion = $RestResult.name.split()[-1] $EXEName = \u0026#34;Git-$LatestVersion-$Arch.exe\u0026#34; $DownloadLink = ($RestResult.assets | Where-Object {$_.Name -Match $EXEName}).browser_download_url ############################################################################## ########################## Install/Uninstall 
Params ########################## ############################################################################## $UninstallKey = \u0026#34;HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\\u0026#34; $UninstallKeyWow6432Node = \u0026#34;HKLM:\\SOFTWARE\\WOW6432Node\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\\u0026#34; $DetectionString = \u0026#34;Git version\u0026#34; $UninstallArgs = \u0026#34;/VERYSILENT /NORESTART\u0026#34; $InstallArgs = \u0026#34;/SP- /VERYSILENT /SUPPRESSMSGBOXES /NORESTART\u0026#34; $AppName = \u0026#34;Git For Windows\u0026#34; ############################################################################## ############################# Do the Business ################################ ############################################################################## switch ($ExecutionType) { Detect { Detect-Application } Uninstall { try { Uninstall-Application -ErrorAction Stop \u0026#34;Uninstallation Complete\u0026#34; } catch { Write-Error \u0026#34;Failed to Uninstall $AppName\u0026#34; } } Default { IF (!(Detect-Application)) { try { \u0026#34;The latest version is not installed, Attempting install\u0026#34; Install-Application -ErrorAction Stop \u0026#34;Installation Complete\u0026#34; } catch { Write-Error \u0026#34;Failed to Install $AppName\u0026#34; } } ELSE { \u0026#34;The Latest Version is already installed\u0026#34; } } } IF (-Not ($ExecutionType -Match \u0026#34;Detect\u0026#34;)) { Stop-Transcript } Application Deployment Please see Creating Intune Win32 Apps for creating an Intune Win32 App Package.\nLet's look at how we deploy these applications from ConfigMGR (MEMCM) and Intune.\nIntune Load up Microsoft Intune\n Select Apps from the navigation pane Select All Apps, Click Add Select App type Other\u0026gt;Windows app (Win32), Click Select Click Select app package file, Click the Blue Folder icon to open the browse window Select the .intunewin file you have created containing a copy of the script, Click Open and
then click OK Fill out the Name and Publisher mandatory fields, and any other fields you desire Upload an icon if you desire, I would recommend doing this if you are deploying this to users via the Company Portal Click Next Enter your install command powershell.exe -executionpolicy bypass \u0026quot;.\\\u0026lt;Script Name.ps1\u0026gt;\u0026quot; Enter your uninstall command powershell.exe -executionpolicy bypass \u0026quot;.\\\u0026lt;Script Name.ps1\u0026gt;\u0026quot; -ExecutionType Uninstall Select your install behaviour as System Select your desired restart behaviour, Adding custom return codes if required Click Next Complete your OS Requirements, At a minimum you need to specify the Architecture and the minimum OS Version (e.g. 1607/1703 etc.) Click Next For Detection rules, select Use a custom detection script Script File: Browse to a copy of the Script where the ExecutionType was amended to $ExecutionType = \u0026quot;Detect\u0026quot;. Assign the application to your desired group If you want to display the app in the company portal, it MUST be assigned to a group containing that user. Required Assignments will force the app to install, whereas Available will show this in the Company Portal. 
Click Next\nClick Create ConfigMGR Head over to your Software Library and Start Creating an application in your desired folder\nGeneral Tab - Select Manually Specify the application information General Information - Input the information for your app Software Center - Input any additional information and upload an icon Deployment Types - Click Add Deployment Type - General - Change the Type to Script Installer Deployment Type - General Information - Provide a name and admin comments for your deployment type Deployment Type - Content Content Location - Select your content location (Where you saved the PowerShell Script) Installation Program - Powershell.exe -ExecutionPolicy Bypass -File \u0026ldquo;..ps1\u0026rdquo; -ExecutionType Install Uninstallation Program - Powershell.exe -ExecutionPolicy Bypass -File \u0026ldquo;..ps1\u0026rdquo; -ExecutionType Uninstall Detection Method - Select Use a custom script and click Edit Script Type - PowerShell Script Content - Paste the content of the script adding Detect to the header (If you are using a GitHub personal access token, you will also need to add this in) Installation Behavior - Install for System (Leave the rest as default or change as you desire) Dependencies \u0026amp; Requirements - Add any dependencies and requirements you wish Click through the windows to complete the creation Deploy the app to your desired collection During the installation and the uninstallation of the apps, there is a transcript of the session that is by default stored in C:\\Windows\\Logs\\Software.
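The transcript mentioned above comes from the Start-Transcript / Stop-Transcript pattern used in the scripts; stripped back, it looks like this (the log path mirrors the one in the Git for Windows script):

```powershell
$TranscriptFile = "$env:SystemRoot\Logs\Software\GitForWindows_Dynamic_Install.Log"

# Only capture a transcript for install/uninstall runs, not detection,
# so Intune/ConfigMGR detection output stays clean
IF (-Not ($ExecutionType -Match "Detect")) { Start-Transcript -Path $TranscriptFile }

# ... install or uninstall work happens here ...

IF (-Not ($ExecutionType -Match "Detect")) { Stop-Transcript }
```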
This will help in troubleshooting the install should you have any issues.\nOther Blogs and Tools Evergreen - Aaron Parker I came across this when putting a tweet out to see if this post was worthwhile. Well worth a read.\nGitHub - aaronparker/Evergreen: Create evergreen Windows image build scripts with the latest version and download links for applications\nGaryTown Blog Post Using Ninite Apps - Gary Blok Ninite is an awesome tool and Gary used this along with ConfigMGR to deploy applications with no content.\nConfigMgr Lab – Adding Ninite Apps – GARYTOWN ConfigMgr Blog\nPatch My PC - A leader in the 3rd Party Patching world Now, this is not a community tool and it is licensed, however if you want to have this manage some of your Third Party apps with ConfigMGR, Intune or WSUS I would highly recommend them. This will save you a ton of time and help you on your way to having a fully patched estate.\nPatch My PC: Simplify Third-Party Patching in Microsoft SCCM and Intune\n","image":"https://hugo.euc365.com/images/post/dynamicappinstall/FeaturedImage_hu8a41ba5a60045e0bb1d63743bb32acd5_24516_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/create-dynamic-application-packages-with-powershell/","tags":["PowerShell","Intune","GitHub API","HTML Web Scraping","Application Deployment"],"title":"Create Dynamic Application Packages with PowerShell"},{"categories":["Powershell","Microsoft Intune","MEMCM","Windows Store for Business"],"contents":"What is the Windows Subsystem for Linux? As per the Microsoft Documentation, The Windows Subsystem for Linux (WSL) is a new Windows 10 feature that enables you to run native Linux command-line tools directly on Windows, alongside your traditional Windows desktop and modern store apps.\nNow what does that mean? Can you run a full Linux setup in this way? Well the answer to that is NO unfortunately not. 
This tool is designed for developers and other users who use bash and other common Linux tools.\nI won\u0026rsquo;t go on as all of the information about WSL is in the link on the Microsoft Documentation text above.\nI will however provide the two scripts I use and wrote (nothing special just a couple of lines) for deployment and detection and demonstrate how to deploy with MEMCM.\nOnce the subsystem is installed it doesn\u0026rsquo;t mean that a Linux distribution is automatically installed. You can access these distributions via the Public Microsoft Store, however if you use this in an Enterprise and would like them added to your Enterprise Store you will need to contact your Store Admin. I will touch on how to add these just for simple convenience.\n(Un)Installation Script The script below can be used to both Enable and Disable the Windows Subsystem for Linux depending on what command line switch you specify. As mentioned this is not a complex script and it is easily edited for other Windows Optional Features\n\u0026lt;# .SYNOPSIS This script is used to Enable and Disable the Windows Subsystem for Linux Depending on the command line switch it is called with .DESCRIPTION This script is used to Enable and Disable the Windows Subsystem for Linux Depending on the command line switch it is called with .PARAMETER Enable Enables the Windows Subsystem for Linux .PARAMETER Disable Disables the Windows Subsystem for Linux .INPUTS None .OUTPUTS None .NOTES Version: 1.0 Author: David Brook Creation Date: 13/08/2020 Purpose/Change: Initial script creation .EXAMPLE Windows_SubSystem_for_Linux.ps1 -Enable #\u0026gt; param ( [switch] $Enable, [switch] $Disable ) IF ($Enable) { Enable-WindowsOptionalFeature -Online -FeatureName \u0026#34;Microsoft-Windows-Subsystem-Linux\u0026#34; -All -NoRestart } IF ($Disable) { Disable-WindowsOptionalFeature -Online -FeatureName \u0026#34;Microsoft-Windows-Subsystem-Linux\u0026#34; -NoRestart } Detection Script The script below can 
be used for detection of the Windows Subsystem for Linux. I did try to use Get-WindowsOptionalFeature -Online, however the feature never seemed to be detected.\nIF ( Get-WmiObject -Class Win32_OptionalFeature | Where-Object {($_.Name -Match \u0026#34;Microsoft-Windows-Subsystem-Linux\u0026#34;) -and ($_.InstallState -eq 1)} ){ $True } MEMCM Application Head over to your Software Library and Start Creating an application in your desired folder\nGeneral Tab - Select Manually Specify the application information General Information - Input your desired information, I called this Windows Subsystem for Linux but this is entirely your choice Software Center - Check the information and upload an icon if you would like, I used the below, feel free to save it :D Deployment Types - Click Add Deployment Type - General - Change the Type to Script Installer Deployment Type - General Information - Provide a name and admin comments for your deployment type Deployment Type - Content Content Location - Select your content location (Where you saved the PowerShell Script) Installation Program - Powershell.exe -ExecutionPolicy Bypass -File \"..ps1\" -Enable Uninstall Program - Powershell.exe -ExecutionPolicy Bypass -File \"..ps1\" -Disable Deployment Type - Detection Method - Select Use custom script to detect the presence of this deployment type and click Edit Script Type - PowerShell Script Content - Use the detection method script above Deployment Type - User Experience Installation Behavior - Install for System (Leave the rest as default or change as you desire) Deployment Type - Requirements - Add any requirements you want it to meet (The application does not require anything to install) Deployment Type - Dependencies - Add any dependencies you want it to meet (The application does not require any to install) Finish off both of the dialog windows through the summary panes and then deploy the application to your desired collections. 
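Before packaging, it can be worth a quick local sanity check from an elevated PowerShell prompt: run the (un)installation script, then the same WMI query the detection script uses. A minimal sketch, assuming the script above is saved as Windows_SubSystem_for_Linux.ps1 in the current directory (the filename is an assumption):

```powershell
# Enable the feature (the script itself does not trigger a restart)
powershell.exe -ExecutionPolicy Bypass -File .\\Windows_SubSystem_for_Linux.ps1 -Enable

# Mirror the detection logic: only outputs $True when the feature reports InstallState 1
IF ( Get-WmiObject -Class Win32_OptionalFeature | Where-Object {($_.Name -Match \u0026#34;Microsoft-Windows-Subsystem-Linux\u0026#34;) -and ($_.InstallState -eq 1)} ){ $True }
```

Bear in mind the note further down: even once detection passes, WSL needs a reboot before it will actually function.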
As mentioned above you will need to use a Linux distribution with the Windows Subsystem for Linux; these are available in the Microsoft Store.\nIntune Application Please see Creating Intune Win32 Apps for creating an Intune Win32 App Package.\nSelect Apps from the navigation pane Select All Apps, Click Add Select App type Other\u0026gt;Windows app (Win32), Click Select Click Select app package file, Click the Blue Folder icon to open the browse window Select the .intunewin file you have created containing a copy of the script above, Click Open and then click OK Fill out the Name and Publisher mandatory fields, and any other fields you desire Upload an icon if you desire, I would recommend doing this if you are deploying this to users via the Company Portal Click Next Enter your install command powershell.exe -executionpolicy bypass \u0026quot;.\\\u0026lt;Script Name.ps1\u0026gt;\u0026quot; -Enable Enter your uninstall command powershell.exe -executionpolicy bypass \u0026quot;.\\\u0026lt;Script Name.ps1\u0026gt;\u0026quot; -Disable Select your install behavior as System Select your desired restart behavior, Adding custom return codes if required WSL does require a reboot to function, so please bear that in mind.\nClick Next Complete your OS Requirements, At a minimum you need to specify the Architecture (x86/x64) and the minimum OS Version (e.g. 1607/1703 etc.) Click Next For Detection rules, select Use a custom detection script Script File: Browse to a copy of the Detection Script provided above. Assign the application to your desired group If you want to display the app in the company portal, it MUST be assigned to a group containing that user. Required Assignments will force the app to install, whereas Available will show this in the Company Portal. 
Click Next\nClick Create Microsoft Store For Business The assignments are only user targeted; if you use groups and only the device you are using is in that group and not the user, nothing will appear in the store.\nThe Linux Distributions are available in the Microsoft Store for Business (MSfB), you and/or your company may restrict what apps can be installed from the store.\nBelow is a rundown on how to deploy these Distros to Azure AD/Microsoft 365 Groups.\nWe will also look at how to deploy these in Offline mode.\nTo get started launch the Microsoft Store for Business page.\nClick Sign in in the top right-hand corner and complete the sign-in process Type Linux in the search bar You will receive the WSL Distros at the top if you use the Developer Tools category filter\nClick on the Distro you would like to use/deploy Select your Licence Type, See Microsoft Documentation, Click Get App. Select the drop down below for your method of distribution. Online This option allows you to publish the Distro to the Microsoft Private Store.\nClick the Ellipses (\u0026hellip;) next to the Install button, Select Manage See the options in the drop downs below Users I would suggest using groups instead of assigning this to individual users. Please see the Private store availability section below\nIf you want to deploy the application to just a specific user(s), you can add them individually.\nClick Assign to Users Enter their Name or Email Address Select the User Click Assign Wait for the process to complete, click Close Private store availability I would suggest using Specific Groups for the distros, as these have a requirement of WSL being enabled, unless you deploy this as a required deployment.\nNo one Make sure you remove it from any Users in the users tab if you want to ensure No One has access to it.\nIf you want to stop deploying the application, you simply have to select No one. 
No options for confirmation, it just removes it from the Microsoft Store.\nEveryone If you want to deploy the application to your whole organisation, you simply have to select Everyone. No options for confirmation, it just makes this available in the Microsoft Store. Specific Groups If you want to deploy the application to a group of Users select Specific Groups.\nClick Assign Groups Enter the Name of the group Select the Group Click Add Groups Offline This option allows you to download the AppX Package for installation with DISM, PowerShell CmdLets or your MDM Provider.\nClick Manage Select your Platform, Minimum Version, Architecture, App Metadata You will then see something like the image below, this contains the Package Identity Name, Package family name, Package full name, Package format and the Supported architectures. Click Download I will demonstrate how to install this using PowerShell, however, please see the Distribute Offline Apps Microsoft Documentation for alternative methods.\nLaunch an Admin PowerShell console Browse to the directory where the AppXBundle is stored Type Add-AppxPackage -Path .\\\u0026lt;PackageName\u0026gt;.AppxBundle, Hit Enter The distro is now installed You can check that the distro is installed by using the Package Identity Name.\nGet-AppxPackage -Name \u0026lt;Package Identity Name\u0026gt;\nDistribute the content using your preferred method The Distro will now appear in your Start Menu To enable WSL for use, the device must be restarted. If you see the below message, WSL is either not installed or your device is pending a reboot.\nSummary I hope that you find this useful if you ever need to deploy WSL. 
If you have any questions please do not hesitate to reach out using the Contact page or in the comment section below.\nI had to use WSL the other day when deploying Docker Desktop as a dependency, the script came in handy for sure.\n","image":"https://hugo.euc365.com/images/post/deploywsl/FeaturedImage_hu88af6168fb2889d0285544f033f3f748_59352_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/deploy-windows-subsystem-for-linux/","tags":["Powershell","Intune","MEMCM","Application Deployment"],"title":"Deploy Windows Subsystem for Linux"},{"categories":["PowerShell","Microsoft Intune"],"contents":"Win32 Apps, What Are They? If you\u0026rsquo;re familiar with Configuration Manager/MEMCM then think of these files as your source directory, the difference being you are effectively zipping it up and then uploading to Intune.\nAccording to Microsoft, if you decide to use Win32 Apps, it is advised that you use these exclusively and NOT \u0026lsquo;Mix and Match\u0026rsquo; these with Line of Business applications when using Autopilot (See Microsoft Doc link below).\nWhat content can be in a Win32 App Package? The answer to that is, well, pretty much anything, to a certain extent. These files are just proprietary files for Intune; however, under the hood they are just zip files that are then hashed and encoded.\nWhat uses are there for Win32 Apps? Simply put, to install apps. Now don\u0026rsquo;t be thrown by the 32 as these are not just for 32-bit apps, they can be used for any app.\nYou can use Win32 apps to just launch PowerShell scripts, Batch scripts, VBScripts etc. as long as you have a detection method to confirm they succeeded.\nMainly they are used for installing custom app packages like Greenshot, Citrix, PSADT Apps etc.\nMicrosoft Doc: Win32 app management in Microsoft Intune | Microsoft Docs\nPackage Creation Methods IntuneWinAppUtil Application The first method is creating a package using the GUI (well, kind of a GUI) that is mentioned in the Microsoft Doc. 
You can grab the utility from the below link;\nGitHub - Microsoft/Microsoft-Win32-Content-Prep-Tool: A tool to wrap Win32 App and then it can be uploaded to Intune\nClone/download the files and extract them to a suitable location to work with.\nLet\u0026rsquo;s get started. The below works on the assumption you have your files in a folder with nothing other than those required for the app. (You don\u0026rsquo;t want to be uploading your entire desktop do you :P)\nLaunch the IntuneWinAppUtil.exe Type/Paste your Source Directory (e.g. C:/Win 32 Apps/7-Zip), hit Enter. Type/Paste your setup file name (e.g. 7z2002-x64.exe or MyScript.ps1), hit Enter Type/Paste your Output Directory (e.g. C:/Win 32 Apps), hit Enter. When prompted about catalogue files type N unless you are deploying to Windows S Mode, hit Enter The window will automatically close when your .intunewin file is finished, if you head over to your output folder you will be able to get your file for upload. PowerShell PowerShell Gallery | IntuneWin32App 1.2.0\nFor you command-line gurus and script lovers out there, you will be pleased to know that there is a PowerShell module for bundling your apps up, you can even go a step further and import them via a script, but we will save that for another post :D.\nYou can install the module using the following command;\n# # To install the module for the current user add -Scope CurrentUser to the below command # Install-Module IntuneWin32App Once you have the module installed you can type a command like this;\n# # Setup File example: Powershell.ps1, setup.exe, MyInstaller.msi # New-IntuneWin32AppPackage -SourceFolder \u0026#34;C:\\Win32 Apps\\7-Zip\u0026#34; -OutputFolder \u0026#34;C:\\Win32 Apps\\Outputs\u0026#34; -SetupFile 7z2002-x64.exe This command will create a .intunewin file in the output location named 7z2002-x64.intunewin, this is because it takes the installer\u0026rsquo;s name for the output. 
Unfortunately at the time of writing this, you can\u0026rsquo;t do it natively with this module. However, you can add a Rename-Item into your script to change it.\nUsing the packages with Intune Head over to Microsoft Intune admin center (Intune) to get started\nSelect Apps from the navigation pane Select All Apps, Click Add Select App type Other\u0026gt;Windows app (Win32), Click Select Click Select app package file, Click the Blue Folder icon to open the browse window Select the .intunewin file you have created, Click Open and then click OK Fill out the Name and Publisher mandatory fields, and any other fields you desire Upload an icon if you desire, I would recommend doing this if you are deploying this to users via the Company Portal Click Next Enter your install command (e.g. 7z2002-x64.exe /S) Enter your uninstall command (e.g. \"C:\\Program Files\\7-Zip\\Uninstall.exe\" /S) Select your install behavior, if this is a machine-wide installation you will need to select System, otherwise select User if this is installing to the user profile Select your desired restart behavior, Adding custom return codes if required Click Next Complete your OS Requirements, At a minimum you need to specify the Architecture (x86/x64) and the minimum OS Version (e.g. 1607/1703 etc.) Click Next For Detection rules, See the Detection Rules section below, Once complete click Next Add any dependent Intune Apps you may require, Click Next Assign the application to your desired group, just as a NOTE if you want to display the app in the company portal, it MUST be assigned to a group containing that user. Required Assignments will force the app to install, whereas Available will show this in the Company Portal. Click Next Click Create That is your app finished and deploying, it is worth noting it may take 15/20 minutes to be available on the device, the device must also perform a sync to check for the app. 
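The Rename-Item workaround mentioned earlier can be sketched as follows; the folders and the new name (7-Zip.intunewin) are illustrative values reused from the example, not anything the module produces for you:

```powershell
# Package the source folder; the module names the output after the setup file
New-IntuneWin32AppPackage -SourceFolder \u0026#34;C:\\Win32 Apps\\7-Zip\u0026#34; -OutputFolder \u0026#34;C:\\Win32 Apps\\Outputs\u0026#34; -SetupFile \u0026#34;7z2002-x64.exe\u0026#34;

# Rename the generated file to something friendlier before uploading to Intune
Rename-Item -Path \u0026#34;C:\\Win32 Apps\\Outputs\\7z2002-x64.intunewin\u0026#34; -NewName \u0026#34;7-Zip.intunewin\u0026#34;
```

The rename is purely cosmetic; the hash and content inside the .intunewin file are untouched.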
Detection Rules\nDetection rules have 4 options, you can use a Custom Detection Script, Registry, File(Folder) and MSI; let\u0026rsquo;s look at them in a little more detail.\nWhen you first reach the Detection Rule Screen you will have a single Drop-Down box with two options, Use a custom detection script and Manually configure detection rules. File, Registry and MSI are all available under the Manual option. It is worth noting that you can mix and match these rules; however, they are combined as AND conditions. If you are looking to do an AND/OR detection you will need to use a custom PowerShell Script.\nWe will dive into all of the options below.\nFile As you can see above using this detection method is fairly straightforward, however it can get a bit messy if you use the Date Created/Modified options.\nLet\u0026rsquo;s put a rule together.\nRule Type - File Path - \"YourPath\" (e.g. C:\\Program Files\\7-Zip\\) File or Folder - \"YourFileFolder\" (e.g. 7z.exe) Detection Method - File or Folder Exists Associated with a 32-bit app on a 64-bit client, No. Now that rule is very quick and simple, as mentioned you can use the date modified or created option, and that would look something like below. Rule Type - File Path - \"YourPath\" (e.g. C:\\Program Files\\7-Zip\\) File or Folder - \"YourFileFolder\" (e.g. 7z.exe) Detection Method - Date Modified Operator, select the option that you wish to validate against (e.g. Equals, Greater than etc.) Select the date using the date picker and enter the time using the 12 hour format Associated with a 32-bit app on a 64-bit client, No. Registry The registry option is fairly straightforward, and is the most likely option you are going to select if you are just installing a simple application and just want to check that the program itself exists. Again for the detection method you have various options, for this example we will just use Key Exists\nRule Type - Registry Key Path - \"Path to key\" (e.g. 
HKEY_LOCAL_MACHINE/SOFTWARE/Microsoft/Windows/CurrentVersion/Uninstall/7-Zip) Value Name - \"Value Name\" (e.g. DisplayVersion) Detection Method - Key Exists Associated with a 32-bit app on a 64-bit client, No. MSI MSI detections are quick and easy if you are installing an MSI application, all you need is the GUID; for the 7-zip app this is not applicable, however below is a basic example. You can also perform version checks on the MSI apps.\nRule Type - MSI MSI Product Code - \"Product GUID\" (e.g. {8C3A8923-0000-0000-0000-C82C1BE7294D}) MSI product version check - Yes Select your operator (e.g. Equals, Greater than etc.) Value - Product Version (e.g. 20.02) Detection Script For me, this is the most favorable option, but I love to script :D. But that aside, you can check multiple conditions; the only thing you need to do is return any value other than Null for the detection to pass. For example the below script checks for the registry value and also that the file exists, if they do it will return a True value, else it will return nothing.\n# $7zReg = \u0026#34;HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\7-Zip\u0026#34; $7zExe = \u0026#34;$env:ProgramFiles\\7-zip\\7z.exe\u0026#34; IF ((Get-ItemPropertyValue -Path $7zReg -Name DisplayVersion) -and (Test-Path $7zExe)) { $true } else {} You will need to have the file saved and ready to be uploaded to Intune. The above is written in PowerShell so will need a .ps1 extension. To use this method follow the below steps.\nRule format - Use custom detection script Script file - Upload yours using the blue folder icon Run Script as a 32-bit process on 64-bit clients - No (This is entirely your choice again but for this example it is not required) Enforce script signature check and run script silently - No That covers the basics of all the detection methods, if you have any further questions please reach out or review the Microsoft Docs. 
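As noted above, the manual rules only combine as AND; a custom script is the route for OR logic. A minimal sketch reusing the 7-Zip paths from the example above, which passes if either check succeeds (the -ErrorAction handling is an addition to stop a missing registry key from throwing):

```powershell
# OR-style detection: output a value if the registry value OR the file is present
$7zReg = \u0026#34;HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\7-Zip\u0026#34;
$7zExe = \u0026#34;$env:ProgramFiles\\7-zip\\7z.exe\u0026#34;
IF ((Get-ItemProperty -Path $7zReg -Name DisplayVersion -ErrorAction SilentlyContinue) -or (Test-Path $7zExe)) { $true }
```

As with the AND example, returning any non-null output marks the app as detected; returning nothing marks it as not installed.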
3rd Party/Community Tools \u0026amp; Blogs Here are some of the 3rd Party and Community Tools and Blogs that I have found useful and they may help you in your hour of need!!\nSyst \u0026amp; Deploy - Intune Win32 App Tool This is a great tool to create and extract/decode Win32 apps if you prefer a GUI to creating your intunewin files, this tool also has a feature to decode the packages you already have in case you lose the source files but have the intunewin file.\nIntune Win32app tool - Create and Extract Intunewin | Syst \u0026amp; Deploy (systanddeploy.com)\nOliver Kieselbach - How to decode Win32 App Packages This is a great guide and it can truly help pull you out of the gutter if you have lost all of your intunewin files, although it\u0026rsquo;s not straightforward to get them back (not Oliver\u0026rsquo;s fault), this guide provides an in-depth walkthrough on how to retrieve the intunewin packages. Truly worth a read and Kudos to Oliver for giving us this gift.\nHow to decode Intune Win32 App Packages – Modern IT – Cloud – Workplace (oliverkieselbach.com)\n","image":"https://hugo.euc365.com/images/post/createwin32/FeaturedImage_hud9c2deeabf5570fc1ba59ea2e2ca97e1_36785_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/creating-intune-win32-apps/","tags":["Powershell","Intune","Application Deployment"],"title":"Creating Intune Win32 Apps"},{"categories":["Azure","Microsoft Intune","Graph API"],"contents":"Why do I need to find these? 
Welcome (Back) to another post about the Graph APIs. This time it isn\u0026rsquo;t so much about rambling through documentation but about giving you a nice handy tip on your route to finding what API is being called when browsing in the Web Console.\nThe only things you are going to need for this will be an Azure AD Account with permissions to at least Read policies and a modern web browser. Yes, really, that\u0026rsquo;s it.\nTo test the API Paths you can use Postman or Microsoft Graph Explorer\nLet\u0026rsquo;s get started We are going to be using Azure AD groups for this but the same methods can be used across the board (Users, Intune etc.).\nLet\u0026rsquo;s start by opening your browser and browse to https://portal.azure.com, Once loaded if you hit the F12 key you will see the Developer Tools pane open. The next steps may vary based on your browser choice (I\u0026rsquo;m using Edge), If you click on the Network tab (shown below) you will notice it is blank.\nIf you navigate your way to Groups (Azure Active Directory \u0026gt; Groups) you will notice it starts to populate the left pane (As below)\nAs you can see there are some little cogs next to some of the rows, If you click on one of these you will see there will be data within them\u0026hellip; In this case I am going to use the one that is a darker grey in the above image. If you click on it in the right pane you will see what data is returned (As shown Below).\nIf you expand the value field you will see the data that it has returned and will notice that it matches up to what you\u0026rsquo;re seeing on screen. 
Now if you hover over the field you will see what API call has been made (See Below).\nIf you right-click on the entry and click Copy\u0026gt;Copy Link Address and head over to the Graph Explorer and paste the URL into the URL box, you will receive the same results, albeit better formatted.\nTo Summarise Now this is only the start of the journey, you can use the knowledge you gain and already have to combine the two and make your own apps/scripts off of the back of it. This guide is just to help you find the APIs you need to be calling :D.\n","image":"https://hugo.euc365.com/images/post/findapis/FeaturedImage_hua9f5a063cc8a59b9fe8a432de9662652_22801_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/finding-microsoft-graph-apis-from-the-web/","tags":["Azure","Graph API"],"title":"Finding Microsoft Graph API's from the Web"},{"categories":["Powershell","Microsoft Intune"],"contents":"Why back up App Protection Policies? Why would you need to back up something that is stored and hosted in Azure? Well there are numerous answers to this question really.\nIf you make a change and you break something you can look back and analyse what it was You can make copies of the policy easily rather than having two windows side by side In case one is deleted… (Let’s hope this is never the case) To review the maturity of your policies (Let’s say you started from ground zero and now have over 100 policy settings… Might be nice to review what you’ve done) We also live in a world of change management and service improvement so there is always a need to make changes to policies and configurations. If you have or are moving from traditional group policy you will know that you can run an HTML report or a backup before you go ahead and make any changes. 
Well when using Intune there was no way to export or back up your profiles or policies from the console… I have seen people taking screenshots of the pages as a backup of the policies which is far from an ideal scenario.\nWhat if I told you there is a way you can back up your configuration policies using the Microsoft Graph API?? Well it’s possible and it’s easier than you think.\nThis is part of a series of posts about backing up and importing policies and profiles, so if you feel like you\u0026rsquo;ve read this part before then you probably have.\nBack when I wrote my first post about these (HERE) the script just backed up the policies/profiles, however over time they have grown into scripts that you can also use to re-import these policies/profiles.\nThis one is the fourth in the series, where we will focus on App Protection Policies. Each one has brought its own challenges which are hopefully mitigated within the script, but if not you can always get in touch and let me know.\nThe Script You will notice that most of this (the authentication part and most of the param block at least) is the same as my other scripts\u0026hellip; But if it’s not broke why fix it? (Those famous last words!!!). Although this script does have an alternative run method, if you run it directly without the ClientID, ClientSecret and TenantID parameters it will install the Azure AD Powershell module and use a custom Function (Connect-AzAD_Token) to enable users to interact with a login Window if they do not wish to use Azure AD App Registrations with client secrets.\nThis script can be run from anywhere, as a user (if using the command-line parameters or if the AzureAD module is already installed), as an Administrator or even as System. 
You could put this into an Automation Engine to do backups on a schedule if that is your desire but this would need to be done with an Azure App Registration.\nparam( [Parameter(DontShow = $true)] [string] $MsGraphVersion = \u0026#34;beta\u0026#34;, [Parameter(DontShow = $true)] [string] $MsGraphHost = \u0026#34;graph.microsoft.com\u0026#34;, #The AzureAD ClientID (Application ID) of your registered AzureAD App [string] $ClientID, #The Client Secret for your AzureAD App [string] $ClientSecret, #Your Azure Tenant ID [string] $TenantId, [Parameter()] [string] $OutputFolder = \u0026#34;.\\AppProtectionPolicyBackup\u0026#34;, [switch] $Import, [string] $ImportJSON )# FUNCTION Connect-AzAD_Token { Write-Host -ForegroundColor Cyan \u0026#34;Checking for AzureAD module...\u0026#34; $AADMod = Get-Module -Name \u0026#34;AzureAD\u0026#34; -ListAvailable if (!($AADMod)) { Write-Host -ForegroundColor Yellow \u0026#34;AzureAD PowerShell module not found, looking for AzureADPreview\u0026#34; $AADModPrev = Get-Module -Name \u0026#34;AzureADPreview\u0026#34; -ListAvailable #Check to see if the AzureAD Preview Module is installed, if so set that as the AAD Module, else install the AzureAD Module IF ($AADModPrev) { $AADMod = Get-Module -Name \u0026#34;AzureADPreview\u0026#34; -ListAvailable } else { try { Write-Host -ForegroundColor Yellow \u0026#34;AzureAD Preview is not installed...\u0026#34; Write-Host -ForegroundColor Cyan \u0026#34;Attempting to Install the AzureAD Powershell module...\u0026#34; Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force -ErrorAction Stop | Out-Null Install-Module AzureAD -Force -ErrorAction Stop } catch { Write-Host -ForegroundColor Red \u0026#34;Failed to install the AzureAD PowerShell Module `n $($Error[0])\u0026#34; break } } } else { Write-Host -ForegroundColor Green \u0026#34;AzureAD Powershell Module Found\u0026#34; } $AADMod = ($AADMod | Select-Object -Unique | Sort-Object)[-1] $ADAL = Join-Path $AADMod.ModuleBase 
\u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.dll\u0026#34; $ADALForms = Join-Path $AADMod.ModuleBase \u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.Platform.dll\u0026#34; [System.Reflection.Assembly]::LoadFrom($ADAL) | Out-Null [System.Reflection.Assembly]::LoadFrom($ADALForms) | Out-Null $UserInfo = Connect-AzureAD # Microsoft Intune PowerShell Enterprise Application ID $MIPEAClientID = \u0026#34;d1ddf0e4-d672-4dae-b554-9d5bdfd93547\u0026#34; # The redirectURI $RedirectURI = \u0026#34;urn:ietf:wg:oauth:2.0:oob\u0026#34; #The Authority to connect with (Your Tenant) Write-Host -Foregroundcolor Cyan \u0026#34;Connected to Tenant: $($UserInfo.TenantID)\u0026#34; $Auth = \u0026#34;https://login.microsoftonline.com/$($UserInfo.TenantID)\u0026#34; try { $AuthContext = New-Object \u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext\u0026#34; -ArgumentList $Auth # https://msdn.microsoft.com/en-us/library/azure/microsoft.identitymodel.clients.activedirectory.promptbehavior.aspx # Change the prompt behaviour to force credentials each time: Auto, Always, Never, RefreshSession $platformParameters = New-Object \u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.PlatformParameters\u0026#34; -ArgumentList \u0026#34;Auto\u0026#34; $userId = New-Object \u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.UserIdentifier\u0026#34; -ArgumentList ($UserInfo.Account, \u0026#34;OptionalDisplayableId\u0026#34;) $authResult = $AuthContext.AcquireTokenAsync((\u0026#34;https://\u0026#34; + $MSGraphHost),$MIPEAClientID,$RedirectURI,$platformParameters,$userId).Result # If the access token is valid then create the authentication header if($authResult.AccessToken){ # Creating header for Authorization token $AADAccessToken = $authResult.AccessToken return $AADAccessToken } else { Write-Host -ForegroundColor Red \u0026#34;Authorization Access Token is null, please re-run authentication...\u0026#34; break } } catch { Write-Host 
-ForegroundColor Red $_.Exception.Message Write-Host -ForegroundColor Red $_.Exception.ItemName break } } # Web page used to help with getting the access token #https://morgantechspace.com/2019/08/get-graph-api-access-token-using-client-id-and-client-secret.html if (($ClientID) -and ($ClientSecret) -and ($TenantId) ) { #Create the body of the Authentication of the request for the OAuth Token $Body = @{client_id=$ClientID;client_secret=$ClientSecret;grant_type=\u0026#34;client_credentials\u0026#34;;scope=\u0026#34;https://$MSGraphHost/.default\u0026#34;;} #Get the OAuth Token $OAuthReq = Invoke-RestMethod -Method Post -Uri \u0026#34;https://login.microsoftonline.com/$TenantId/oauth2/v2.0/token\u0026#34; -Body $Body #Set your access token as a variable $global:AccessToken = $OAuthReq.access_token } else { $global:AccessToken = Connect-AzAD_Token } IF (!($Import)) { $FormattedOutputFolder = \u0026#34;$OutputFolder\\$(Get-Date -Format yyyyMMdd_HH-mm-ss)\u0026#34; IF (!(Test-Path $FormattedOutputFolder)){ try { mkdir $FormattedOutputFolder -ErrorAction Stop | Out-Null } catch { Write-Host -ForegroundColor Red \u0026#34;Failed to create $FormattedOutputFolder\u0026#34; $Error[0] break } } Invoke-RestMethod -Method GET -Uri \u0026#34;https://$MSGraphHost/$MsGraphVersion/deviceAppManagement/managedAppPolicies\u0026#34; -Headers @{Authorization = \u0026#34;Bearer $AccessToken\u0026#34;} | Select-Object -ExpandProperty \u0026#34;Value\u0026#34; | %{ $_ | ConvertTo-Json | Out-File \u0026#34;$FormattedOutputFolder\\$($_.displayname).json\u0026#34; } }elseif ($Import) { IF($ImportJSON) { $JSON = Get-Content $ImportJSON | ConvertFrom-Json | Select-Object -Property * -ExcludeProperty Version,LastModifiedTime,CreatedDateTime,id | ConvertTo-Json Invoke-RestMethod -Method POST -Uri \u0026#34;https://$MSGraphHost/$MsGraphVersion/deviceAppManagement/managedAppPolicies\u0026#34; -Headers @{Authorization = \u0026#34;Bearer $AccessToken\u0026#34;} -Body $JSON -ContentType 
\u0026#34;application/json\u0026#34; } else { Write-Host -ForegroundColor RED \u0026#34;You must specify a JSON file using the -ImportJSON parameter\u0026#34; } } The Pre-Reqs Azure AD App Registration To make the script work without user interaction from an automation engine you will need an Azure App Registration with the following permissions for the Microsoft Graph API;\nBacking Up App Protection Policies Only DeviceManagementApps.Read.All (Application Permission) Importing App Protection Policies DeviceManagementApps.ReadWrite.All (Application Permission) GRAPH API DOCUMENTATION If you are not executing the script directly, you will also need the Tenant ID, and the account the script runs as will need permission to the Output folder for backups.\nIf you\u0026rsquo;re not sure how to create an Azure AD App Registration head over to one of my other posts by clicking HERE. Don\u0026rsquo;t forget to store your Client ID and Secret securely and also have it to hand for the rest of the post :D.\nExecuting the Script Unattended with an Azure AD App Registration You can run this script directly from a PowerShell console, using Task Scheduler or using a 3rd party automation product that supports PowerShell.\nThe main thing we will go through here is just the parameters and then putting them all together from the command line, it\u0026rsquo;s really that simple.\nFor Backup Only Client ID: This is the Client ID for your Azure AD App ClientSecret: The Client Secret for the Azure AD App TenantID: Your Azure Tenant ID OutputFolder: Your desired Output folder `./Backup_Import_AppProtectionPolicies.ps1 -ClientID \"\" -ClientSecret \"\" -TenantID \"\" -OutputFolder \"./YourServerBackups/AppProtectionPolicies\"` For Importing Policies Client ID: This is the Client ID for your Azure AD App ClientSecret: The Client Secret for the Azure AD App TenantID: Your Azure Tenant ID Import: This is a switch parameter which states if your intention is to import or not ImportJSON: the 
path to your JSON file. You will finally end up with something like this; ./Backup_Import_AppProtectionPolicies.ps1 -ClientID \u0026quot;\u0026quot; -ClientSecret \u0026quot;\u0026quot; -TenantID \u0026quot;\u0026quot; -Import -ImportJSON \u0026quot;./YourServerBackups/AppProtectionPolicies/ImportMe.JSON\u0026quot;\nDirect Execution If you launch the script without the Client ID, Secret and Tenant ID you will be prompted with a Microsoft Logon Window similar to the below.\nOnce you log in the script will continue to run and then output the configuration files in the same way it would using the App Registration. You will need an account with permissions to read (for backups only) or read and write the App Protection Policies. However the likelihood is that if you are looking at this guide you are probably an Intune Service Administrator or Global Administrator on your Tenant.\nWhen you run it directly without any switches the script will prompt you to log in and will only perform a backup of your profiles, outputting the configurations to the folder you are executing it from.\nIf you add the -OutputFolder parameter you can change the destination of the base output folder. However if you wish to use the script to import policies you can add the -Import and -ImportJSON parameters. If you specify the -Import parameter you must also specify the -ImportJSON parameter with a path to the JSON file (e.g. C:/ImportMe.json) otherwise the script will display a message that you did not specify the -ImportJSON Parameter.\nYou will notice that when you run the script, if the folder does not exist it will create it. It also puts the backup into a dated folder in the yyyyMMdd_HH-mm-ss format, leaving you with something like 20200901_16-05-36\nSummary This can also be useful if you are wanting to make a copy of your policies to assign to a test machine. 
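The dated yyyyMMdd_HH-mm-ss folder naming mentioned above is easy to reproduce outside PowerShell. The following is an illustrative Python sketch (not part of the author's script) of the same naming scheme:

```python
from datetime import datetime
from pathlib import Path

def dated_backup_folder(base: str, now: datetime) -> Path:
    # Mirror the script's Get-Date -Format yyyyMMdd_HH-mm-ss folder name
    return Path(base) / now.strftime("%Y%m%d_%H-%M-%S")

folder = dated_backup_folder("./AppProtectionPolicyBackup", datetime(2020, 9, 1, 16, 5, 36))
print(folder.name)  # 20200901_16-05-36
```

Each run therefore lands in its own timestamped subfolder, so repeated backups never overwrite each other.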
All you need to do is back up your current policies and amend the JSON file; change the displayName field, save the file, and you will be able to re-import the same settings. All you then need to do is assign it.\nI have tested this myself at the time of writing the post but if you come across any information you think may be wrong then please leave a comment or e-mail me on [email protected].\nI hope this is useful for your needs.\n","image":"https://hugo.euc365.com/images/post/backupappprotection/FeaturedImage_hue9f4eda435c769dc3995279027cb34d2_65426_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/backup-and-import-app-protection-policies/","tags":["App Registrations","Powershell","Intune","Graph API","App Protection"],"title":"Backup and Import App Protection Policies"},{"categories":["Graph API","PowerShell","Microsoft Intune","Azure"],"contents":"Why backup Conditional Access Policies? Why would you need to back up something that is stored and hosted in Azure? Well there are numerous answers to this question really.\nIf you make a change and you break something you can look back and analyse what it was You can make copies of the policy easily rather than having two windows side by side In case one is deleted… (Let’s hope this is never the case) To review the maturity of your policies (Let’s say you started from ground zero and now have over 100 policy settings… Might be nice to review what you’ve done) We also live in a world of change management and service improvement so there is always a need to make changes to policies and configurations. If you have or are moving from traditional group policy you will know that you can run an HTML report or a backup before you go ahead and make any changes. 
Well when using Intune there was no way to export or backup your profiles or policies from the console… I have seen people taking screenshots of the pages as a backup of the policies, which is far from an ideal scenario.\nWhat if I told you there is a way you can back up your configuration policies using the Microsoft Graph API?? Well it’s possible and it’s easier than you think.\nThis is part of a series of posts about backing up and importing policies and profiles, so if you feel like you\u0026rsquo;ve read this part before then you probably have.\nBack when I wrote my first post about these (HERE) the script just backed up the policies/profiles, however over time they have grown into scripts that you can also use to re-import these policies/profiles.\nThis one is the third in the series, where we will focus on Conditional Access Policies. Each one has brought its own challenges which are hopefully mitigated within the script, but if not you can always get in touch and let me know.\nThe Script You will notice that most of this (the authentication part and most of the param block at least) is the same as my other scripts\u0026hellip; But if it\u0026rsquo;s not broke why fix it? (Those famous last words!!!). Although this script does have an alternative run method: if you run it directly without the ClientID, ClientSecret and TenantID parameters it will install the Azure AD PowerShell module and use a custom function (Connect-AzAD_Token) to enable users to interact with a login Window if they do not wish to use Azure AD App Registrations with client secrets.\nThis Conditional Access script WILL ALWAYS REQUIRE A CUSTOM APP REGISTRATION. 
I\u0026rsquo;ve not put that in bold to be shouty but just to highlight it and make it stand out, as I was going around in circles for a couple of days trying to figure out why this one would not work!!!\nThis script can be run from anywhere, as a user (if using the command-line parameters or if the AzureAD Module is installed already), as an Administrator or even as System. You could put this into an Automation Engine to do backups on a schedule if that is your desire, but this would need to be done with an Azure App Registration.\nparam( [Parameter(DontShow = $true)] [string] $MsGraphVersion = \u0026#34;beta\u0026#34;, [Parameter(DontShow = $true)] [string] $MsGraphHost = \u0026#34;graph.microsoft.com\u0026#34;, #The AzureAD ClientID (Application ID) of your registered AzureAD App with Delegate permissions [string] $DelegateClientID, #The AzureAD ClientID (Application ID) of your registered AzureAD App [string] $ClientID, #The Client Secret for your AzureAD App [string] $ClientSecret, #Your Azure Tenant ID [string] $TenantId, [Parameter()] [string] $OutputFolder = \u0026#34;./ConditionalAccessPolicyBackup\u0026#34;, [switch] $Import, [string] $ImportJSON )# FUNCTION Connect-AzAD_Token ($DelegateID){ Write-Host -ForegroundColor Cyan \u0026#34;Checking for AzureAD module...\u0026#34; $AADMod = Get-Module -Name \u0026#34;AzureAD\u0026#34; -ListAvailable if (!($AADMod)) { Write-Host -ForegroundColor Yellow \u0026#34;AzureAD PowerShell module not found, looking for AzureADPreview\u0026#34; $AADModPrev = Get-Module -Name \u0026#34;AzureADPreview\u0026#34; -ListAvailable #Check to see if the AzureAD Preview module is installed; if so, use that as the AAD module, else install the AzureAD module IF ($AADModPrev) { $AADMod = Get-Module -Name \u0026#34;AzureADPreview\u0026#34; -ListAvailable } else { try { Write-Host -ForegroundColor Yellow \u0026#34;AzureAD Preview is not installed...\u0026#34; Write-Host -ForegroundColor Cyan \u0026#34;Attempting to Install the AzureAD Powershell 
module...\u0026#34; Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force -ErrorAction Stop | Out-Null Install-Module AzureAD -Force -ErrorAction Stop } catch { Write-Host -ForegroundColor Red \u0026#34;Failed to install the AzureAD PowerShell Module `n $($Error[0])\u0026#34; break } } } else { Write-Host -ForegroundColor Green \u0026#34;AzureAD Powershell Module Found\u0026#34; } $AADMod = ($AADMod | Select-Object -Unique | Sort-Object)[-1] $ADAL = Join-Path $AADMod.ModuleBase \u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.dll\u0026#34; $ADALForms = Join-Path $AADMod.ModuleBase \u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.Platform.dll\u0026#34; [System.Reflection.Assembly]::LoadFrom($ADAL) | Out-Null [System.Reflection.Assembly]::LoadFrom($ADALForms) | Out-Null $UserInfo = Connect-AzureAD # Your Azure Application ID $MIPEAClientID = $DelegateID # The redirectURI $RedirectURI = \u0026#34;urn:ietf:wg:oauth:2.0:oob\u0026#34; #The Authority to connect with (YOur Tenant) Write-Host -Foregroundcolor Cyan \u0026#34;Connected to Tenant: $($UserInfo.TenantID)\u0026#34; $Auth = \u0026#34;https://login.microsoftonline.com/$($UserInfo.TenantID)\u0026#34; try { $AuthContext = New-Object \u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext\u0026#34; -ArgumentList $Auth # https://msdn.microsoft.com/en-us/library/azure/microsoft.identitymodel.clients.activedirectory.promptbehavior.aspx # Change the prompt behaviour to force credentials each time: Auto, Always, Never, RefreshSession $platformParameters = New-Object \u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.PlatformParameters\u0026#34; -ArgumentList \u0026#34;Auto\u0026#34; $userId = New-Object \u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.UserIdentifier\u0026#34; -ArgumentList ($UserInfo.Account, \u0026#34;OptionalDisplayableId\u0026#34;) $authResult = $AuthContext.AcquireTokenAsync((\u0026#34;https://\u0026#34; + 
$MSGraphHost),$MIPEAClientID,$RedirectURI,$platformParameters,$userId).Result # If the accesstoken is valid then create the authentication header if($authResult.AccessToken){ # Creating header for Authorization token $AADAccessToken = $authResult.AccessToken return $AADAccessToken } else { Write-Host -ForegroundColor Red \u0026#34;Authorization Access Token is null, please re-run authentication...\u0026#34; break } } catch { Write-Host -ForegroundColor Red $_.Exception.Message Write-Host -ForegroundColor Red $_.Exception.ItemName break } } # Web page used to help with getting the access token #https://morgantechspace.com/2019/08/get-graph-api-access-token-using-client-id-and-client-secret.html if (($ClientID) -and ($ClientSecret) -and ($TenantId) ) { #Create the body of the Authentication of the request for the OAuth Token $Body = @{client_id=$ClientID;client_secret=$ClientSecret;grant_type=\u0026#34;client_credentials\u0026#34;;scope=\u0026#34;https://$MSGraphHost/.default\u0026#34;;} #Get the OAuth Token $OAuthReq = Invoke-RestMethod -Method Post -Uri \u0026#34;https://login.microsoftonline.com/$TenantId/oauth2/v2.0/token\u0026#34; -Body $Body #Set your access token as a variable $global:AccessToken = $OAuthReq.access_token } else { if (!($DelegateClientID)) { Write-Host -ForegroundColor Red \u0026#34;You must specify a clientID which has the correct delegate permissions and URI Re-write configuration \u0026#34; break } $global:AccessToken = Connect-AzAD_Token -DelegateID $DelegateClientID } IF (!($Import)) { $FormattedOutputFolder = \u0026#34;$OutputFolder$(Get-Date -Format yyyyMMdd_HH-mm-ss)\u0026#34; IF (!(Test-Path $FormattedOutputFolder)){ try { mkdir $FormattedOutputFolder -ErrorAction Stop | Out-Null } catch { Write-Host -ForegroundColor Red \u0026#34;Failed to create $FormattedOutputFolder\u0026#34; $Error[0] break } } Invoke-RestMethod -Method GET -Uri \u0026#34;https://$MSGraphHost/$MsGraphVersion/identity/conditionalAccess/policies\u0026#34; -Headers 
@{Authorization = \u0026#34;Bearer $AccessToken\u0026#34;} -ContentType \u0026#34;application/json\u0026#34; | Select-Object -ExpandProperty \u0026#34;Value\u0026#34; | %{ $_ | ConvertTo-Json -Depth 10 | Out-File \u0026#34;$FormattedOutputFolder$($_.displayname).json\u0026#34; } }elseif ($Import) { $JSON = Get-Content $ImportJSON | ConvertFrom-Json | Select-Object -Property * -ExcludeProperty Version,modifiedDateTime,CreatedDateTime,id,sessionControls | ConvertTo-Json -Depth 10 Invoke-RestMethod -Method POST -Uri \u0026#34;https://$MSGraphHost/$MsGraphVersion/identity/conditionalAccess/policies\u0026#34; -Headers @{Authorization = \u0026#34;Bearer $AccessToken\u0026#34;} -Body $JSON -ContentType \u0026#34;application/json\u0026#34; } The Pre-Reqs Azure AD App Registration To make the script work you will need an Azure App Registration with the following permissions for the Microsoft Graph API;\nBacking Up Conditional Access Policies Only For Direct Execution (Using the login box) you will need;\nPolicy.Read.All (Delegate Permission) Using the ClientID, ClientSecret and TenantID (Unattended) you will need; Policy.Read.All (Application Permission) Importing Conditional Access Policies For Direct Execution (Using the login box) you will need;\nPolicy.Read.All (Delegate Permission) Policy.ReadWrite.ConditionalAccess (Delegate Permission) Application.Read.All (Delegate Permission) Using the ClientID, ClientSecret and TenantID (Unattended) you will need; Policy.Read.All (Application Permission) Policy.ReadWrite.ConditionalAccess (ApplicationPermission) Application.Read.All (Application Permission) GRAPH API DOCUMENTATION If you are not executing the script directly, you will also need the Tenant ID and the account that the script will be running as will need permission to the Output folder for backups.\nIf your not sure how to create an Azure AD App Registration head over to one of my other posts by clicking HERE, Don\u0026rsquo;t forget to store your Client ID and 
Secret securely and also have it to hand for the rest of the post :D.\nRedirect URI For this one, there is a little bit more to do with the Azure AD Application. We are going to need to add a redirect URI for authentication when using the Login Prompt (Delegated Permissions). If you do not have a Redirect URI, or it is not the correct one, you will receive an error like below.\nWe need to add urn:ietf:wg:oauth:2.0:oob as a redirect URI for the application. To do so follow the below steps;\nBrowse to your Azure AD Application Registration Click on Authentication located in the left pane Click Add a Platform Click Mobile and Desktop applications Copy and paste urn:ietf:wg:oauth:2.0:oob into the Redirect URI field Click Configure This will enable the Authentication box to work with Conditional Access. Unfortunately I was unable to add these permissions to the Microsoft Intune PowerShell Enterprise Application, otherwise I could have used its well-known ClientID as a default in each tenant. Executing the Script Unattended with an Azure AD App Registration You can run this script directly from a PowerShell console, using Task Scheduler or using a 3rd party automation product that supports PowerShell.\nThe main thing we will go through here is just the parameters and then putting them all together from the command line, it\u0026rsquo;s really that simple.\nFor Backup Only Client ID: This is the Client ID for your Azure AD App ClientSecret: The Client Secret for the Azure AD App TenantID: Your Azure Tenant ID OutputFolder: Your desired Output folder ./Backup_Import_ConditionalAccessPolicies.ps1 -ClientID \u0026quot;\u0026quot; -ClientSecret \u0026quot;\u0026quot; -TenantID \u0026quot;\u0026quot; -OutputFolder \u0026quot;./YourServerBackups/ConditionalAccessPolicies\u0026quot;\nFor Importing Policies Client ID: This is the Client ID for your Azure AD App ClientSecret: The Client Secret for the Azure AD App TenantID: Your Azure Tenant ID Import: This is a switch parameter which states 
if your intention is to import or not ImportJSON: the path to your JSON file. You will finally end up with something like this; ./Backup_Import_ConditionalAccessPolicies.ps1 -ClientID \u0026quot;\u0026quot; -ClientSecret \u0026quot;\u0026quot; -TenantID \u0026quot;\u0026quot; -Import -ImportJSON \u0026quot;./YourServerBackups/ConditionalAccessPolicies/ImportMe.JSON\u0026quot;\nDirect Execution (With your Azure AD App Registration) There is a slight change here from my previous posts as mentioned above in the Script section we need an Azure AD App registration for this one in any case. The fundamental difference here though is the permission type (Delegate) and it does not require a Secret and TenantID.\nIf you launch the script without the required -DelegateClientID parameter you will be prompted with a message saying you need to launch it with one. So for this direct execution you will need to launch the script like below;\nFor Backup ./Backup_Import_ConditionalAccessPolicies.ps1 -DelegateClientID \u0026quot;\u0026quot;\nFor Import ./Backup_Import_ConditionalAccessPolicies.ps1 -DelegateClientID \u0026quot;\u0026quot; -Import -ImportJSON \u0026quot;./YourServerBackups/ConditionalAccessPolicies/ImportMe.JSON\u0026quot;\nYou will then be prompted with a Microsoft Logon Window similar to the below.\nOnce you login the script will continue to run and then output the policy files in the same way it would using the App Registration.\nIf you add the -OutputFolder parameter you can change the destination of the base output folder. However if you are wishing to use the script to Import policies you can add the -Import and -ImportJSON parameters, If you specify the -Import parameter you must also specify the -ImportJSON parameter with a path to the JSON file (e.g. C:/ImportMe.json) otherwise the script will display a message that you did not specify the -ImportJSON Parameter.\nYou will notice that when you run the script, if the folder does not exist it will create it. 
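Under the hood, the -Import path strips the read-only properties (Version, modifiedDateTime, CreatedDateTime, id and sessionControls) from the exported JSON before POSTing it back, because the service rejects them on create. A rough Python illustration of that clean-up (a sketch only; the property list is taken from the script above):

```python
import json

# Properties Graph will not accept on create; matches the script's -ExcludeProperty list
READ_ONLY = {"Version", "modifiedDateTime", "CreatedDateTime", "id", "sessionControls"}

def prepare_for_import(exported_json: str) -> str:
    # Drop the read-only keys and re-serialise the policy for the POST body
    policy = json.loads(exported_json)
    return json.dumps({k: v for k, v in policy.items() if k not in READ_ONLY})

backup = '{"displayName": "Block legacy auth", "id": "1234", "modifiedDateTime": "2020-09-14"}'
print(prepare_for_import(backup))  # {"displayName": "Block legacy auth"}
```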
The script also puts the output into a dated folder in the yyyyMMdd_HH-mm-ss format, leaving you with something like 20200914_11-52-22\nSummary This can also be useful if you are wanting to make a copy of your policies to assign to a test machine. All you need to do is back up your current policies and amend the JSON file; change the displayName field, save the file, and you will be able to re-import the same settings. All you then need to do is assign it.\nI have tested this myself at the time of writing the post but if you come across any information you think may be wrong then please leave a comment or e-mail me on [email protected].\nI hope this is useful for your needs.\n","image":"https://hugo.euc365.com/images/post/backupconditionalaccess/FeaturedImage_hubd800af2b061297bd97bdf31d47a608b_49712_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/backup-and-import-conditional-access-policies/","tags":["Azure","App Registrations","Powershell","Intune","Graph API","Conditional Access"],"title":"Backup and Import Conditional Access Policies"},{"categories":["Powershell","Microsoft Intune","Graph API"],"contents":"Why backup Compliance Policies Why would you need to back up something that is stored and hosted in Azure? Well there are numerous answers to this question really.\nIf you make a change and you break something you can look back and analyse what it was You can make copies of the policy easily rather than having two windows side by side In case one is deleted… (Let’s hope this is never the case) To review the maturity of your policies (Let’s say you started from ground zero and now have over 100 policy settings… Might be nice to review what you’ve done) We also live in a world of change management and service improvement so there is always a need to make changes to policies and configurations. 
If you have or are moving from traditional group policy you will know that you can run an HTML report or a backup before you go ahead and make any changes. Well when using Intune there was no way to export or backup your profiles or policies from the console… I have seen people taking screenshots of the pages as a backup of the policies, which is far from an ideal scenario.\nWhat if I told you there is a way you can back up your configuration policies using the Microsoft Graph API?? Well it’s possible and it’s easier than you think.\nThis is part of a series of posts about backing up and importing policies and profiles, so if you feel like you’ve read this part before then you probably have.\nBack when I wrote my first post about these (HERE) the script just backed up the policies/profiles, however over time they have grown into scripts that you can also use to re-import these policies/profiles.\nThis one is the second in the series, where we will focus on Compliance Policies. Each one has brought its own challenges which are hopefully mitigated within the script, but if not you can always get in touch and let me know.\nThe Script You will notice that most of this (the authentication part and most of the param block at least) is the same as my other scripts… But if it’s not broke why fix it? (Those famous last words!!!). Although this script does have an alternative run method: if you run it directly without the ClientID, ClientSecret and TenantID parameters it will install the Azure AD PowerShell module and use a custom function (Connect-AzAD_Token) to enable users to interact with a login Window if they do not wish to use Azure AD App Registrations with client secrets.\nThis script can be run from anywhere, as a user (if using the command-line parameters or if the AzureAD Module is installed already), as an Administrator or even as System. 
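The unattended authentication the script performs is a standard OAuth 2.0 client-credentials exchange: a single POST to the tenant's v2.0 token endpoint with the app's ID, secret and the Graph /.default scope. A hedged Python sketch of the request it builds (this only constructs the URL and body; no call is made, and the IDs shown are placeholders):

```python
def build_token_request(tenant_id: str, client_id: str, client_secret: str,
                        graph_host: str = "graph.microsoft.com"):
    # Same fields the script places in $Body for Invoke-RestMethod
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = {
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "client_credentials",
        "scope": f"https://{graph_host}/.default",
    }
    return url, body

url, body = build_token_request("your-tenant-id", "your-app-id", "your-app-secret")
print(url)
```

The access_token field of the JSON response is then sent as a Bearer token on every Graph call.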
You could put this into an Automation Engine to do backups on a schedule if that is your desire, but this would need to be done with an Azure App Registration.\nparam( [Parameter(DontShow = $true)] [string] $MsGraphVersion = \u0026#34;beta\u0026#34;, [Parameter(DontShow = $true)] [string] $MsGraphHost = \u0026#34;graph.microsoft.com\u0026#34;, #The AzureAD ClientID (Application ID) of your registered AzureAD App [string] $ClientID, #The Client Secret for your AzureAD App [string] $ClientSecret, #Your Azure Tenant ID [string] $TenantId, [Parameter()] [string] $OutputFolder = \u0026#34;./CompliancePolicyBackup\u0026#34;, [switch] $Import, [string] $ImportJSON )# FUNCTION Connect-AzAD_Token { Write-Host -ForegroundColor Cyan \u0026#34;Checking for AzureAD module...\u0026#34; $AADMod = Get-Module -Name \u0026#34;AzureAD\u0026#34; -ListAvailable if (!($AADMod)) { Write-Host -ForegroundColor Yellow \u0026#34;AzureAD PowerShell module not found, looking for AzureADPreview\u0026#34; $AADModPrev = Get-Module -Name \u0026#34;AzureADPreview\u0026#34; -ListAvailable #Check to see if the AzureAD Preview module is installed; if so, use that as the AAD module, else install the AzureAD module IF ($AADModPrev) { $AADMod = Get-Module -Name \u0026#34;AzureADPreview\u0026#34; -ListAvailable } else { try { Write-Host -ForegroundColor Yellow \u0026#34;AzureAD Preview is not installed...\u0026#34; Write-Host -ForegroundColor Cyan \u0026#34;Attempting to Install the AzureAD Powershell module...\u0026#34; Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force -ErrorAction Stop | Out-Null Install-Module AzureAD -Force -ErrorAction Stop } catch { Write-Host -ForegroundColor Red \u0026#34;Failed to install the AzureAD PowerShell Module `n $($Error[0])\u0026#34; break } } } else { Write-Host -ForegroundColor Green \u0026#34;AzureAD Powershell Module Found\u0026#34; } $AADMod = ($AADMod | Select-Object -Unique | Sort-Object)[-1] $ADAL = Join-Path $AADMod.ModuleBase 
\u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.dll\u0026#34; $ADALForms = Join-Path $AADMod.ModuleBase \u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.Platform.dll\u0026#34; [System.Reflection.Assembly]::LoadFrom($ADAL) | Out-Null [System.Reflection.Assembly]::LoadFrom($ADALForms) | Out-Null $UserInfo = Connect-AzureAD # Microsoft Intune PowerShell Enterprise Application ID $MIPEAClientID = \u0026#34;d1ddf0e4-d672-4dae-b554-9d5bdfd93547\u0026#34; # The redirectURI $RedirectURI = \u0026#34;urn:ietf:wg:oauth:2.0:oob\u0026#34; #The Authority to connect with (YOur Tenant) Write-Host -Foregroundcolor Cyan \u0026#34;Connected to Tenant: $($UserInfo.TenantID)\u0026#34; $Auth = \u0026#34;https://login.microsoftonline.com/$($UserInfo.TenantID)\u0026#34; try { $AuthContext = New-Object \u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext\u0026#34; -ArgumentList $Auth # https://msdn.microsoft.com/en-us/library/azure/microsoft.identitymodel.clients.activedirectory.promptbehavior.aspx # Change the prompt behaviour to force credentials each time: Auto, Always, Never, RefreshSession $platformParameters = New-Object \u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.PlatformParameters\u0026#34; -ArgumentList \u0026#34;Auto\u0026#34; $userId = New-Object \u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.UserIdentifier\u0026#34; -ArgumentList ($UserInfo.Account, \u0026#34;OptionalDisplayableId\u0026#34;) $authResult = $AuthContext.AcquireTokenAsync((\u0026#34;https://\u0026#34; + $MSGraphHost),$MIPEAClientID,$RedirectURI,$platformParameters,$userId).Result # If the accesstoken is valid then create the authentication header if($authResult.AccessToken){ # Creating header for Authorization token $AADAccessToken = $authResult.AccessToken return $AADAccessToken } else { Write-Host -ForegroundColor Red \u0026#34;Authorization Access Token is null, please re-run authentication...\u0026#34; break } } catch { Write-Host 
-ForegroundColor Red $_.Exception.Message Write-Host -ForegroundColor Red $_.Exception.ItemName break } } # Web page used to help with getting the access token #https://morgantechspace.com/2019/08/get-graph-api-access-token-using-client-id-and-client-secret.html if (($ClientID) -and ($ClientSecret) -and ($TenantId) ) { #Create the body of the Authentication of the request for the OAuth Token $Body = @{client_id=$ClientID;client_secret=$ClientSecret;grant_type=\u0026#34;client_credentials\u0026#34;;scope=\u0026#34;https://$MSGraphHost/.default\u0026#34;;} #Get the OAuth Token $OAuthReq = Invoke-RestMethod -Method Post -Uri \u0026#34;https://login.microsoftonline.com/$TenantId/oauth2/v2.0/token\u0026#34; -Body $Body #Set your access token as a variable $global:AccessToken = $OAuthReq.access_token } else { $global:AccessToken = Connect-AzAD_Token } IF (!($Import)) { $FormattedOutputFolder = \u0026#34;$OutputFolder$(Get-Date -Format yyyyMMdd_HH-mm-ss)\u0026#34; IF (!(Test-Path $FormattedOutputFolder)){ try { mkdir $FormattedOutputFolder -ErrorAction Stop | Out-Null } catch { Write-Host -ForegroundColor Red \u0026#34;Failed to create $FormattedOutputFolder\u0026#34; $Error[0] break } } Invoke-RestMethod -Method GET -Uri \u0026#34;https://$MSGraphHost/$MsGraphVersion/deviceManagement/deviceCompliancePolicies\u0026#34; -Headers @{Authorization = \u0026#34;Bearer $AccessToken\u0026#34;} | Select-Object -ExpandProperty \u0026#34;Value\u0026#34; | %{ $_ | ConvertTo-Json | Out-File \u0026#34;$FormattedOutputFolder$($_.displayname).json\u0026#34; } }elseif ($Import) { IF ($ImportJSON){ #$JSON = GET-Content $ImportJSON $JSON = Get-Content $ImportJSON | ConvertFrom-Json | Select-Object -Property * -ExcludeProperty Version,LastModifiedTime,CreatedDateTime,id | ConvertTo-Json $SAFRule = \u0026#39;\u0026#34;scheduledActionsForRule\u0026#34;: [ { \u0026#34;ruleName\u0026#34;: \u0026#34;PasswordRequired\u0026#34;, \u0026#34;scheduledActionConfigurations\u0026#34;: [ { 
\u0026#34;actionType\u0026#34;: \u0026#34;block\u0026#34;, \u0026#34;gracePeriodHours\u0026#34;: 0, \u0026#34;notificationTemplateId\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;notificationMessageCCList\u0026#34;: [ ] } ] } ]\u0026#39; $JSON = $Json.trimend(\u0026#34;}\u0026#34;) + \u0026#34;,\u0026#34; + \u0026#34;`r`n\u0026#34; + $SAFRule + \u0026#34;`r`n\u0026#34; + \u0026#34;}\u0026#34; Invoke-RestMethod -Method POST -Uri \u0026#34;https://$MSGraphHost/$MsGraphVersion/deviceManagement/deviceCompliancePolicies\u0026#34; -Headers @{Authorization = \u0026#34;Bearer $AccessToken\u0026#34;} -Body $JSON -ContentType \u0026#34;application/json\u0026#34; } else { Write-Host -ForegroundColor RED \u0026#34;You must specify an a JSON file using the -ImportJSON parameter\u0026#34; } } The Pre-Reqs With an Azure AD App Registration To make the script work you will need an Azure App Registration with the following permissions for the Microsoft Graph API;\nFor backing up the Compliance Policies you will need the DeviceManagementConfiguration.Read.All permission (NOTE: This will need to be Application permissions if you are using the App Registration).\nIf you wish to import Compliance Policies you will need the DeviceManagementConfiguration.ReadWrite.All permission.\nGRAPH API DOCUMENTATION\nYou will also need the Tenant ID and the account that the script will be running as will need permission to the Output folder.\nIf your not sure how to create an Azure AD App Registration head over to one of my other posts by clicking HERE, Don\u0026rsquo;t forget to store your Client ID and Secret securely and also have it to hand for the rest of the post :D.\nExecuting the Script With an Azure AD App Registration You can run this script directly from a PowerShell console, using Task Scheduler or using a 3rd party automation product that supports Powershell.\nThe main thing we will go through here is just the parameters and then putting them all together from the command line, 
it\u0026rsquo;s really that simple.\nFor Backup Only Client ID: This is the Client ID for your Azure AD App ClientSecret: The Client Secret for the Azure AD App TenantID: Your Azure Tenant ID OutputFolder: Your desired Output folder ./Backup_Import_CompliancePolicies.ps1 -ClientID \"\" -ClientSecret \"\" -TenantID \"\" -OutputFolder \"./YourServerBackups/CompliancePolicies\" For Importing Policies Client ID: This is the Client ID for your Azure AD App ClientSecret: The Client Secret for the Azure AD App TenantID: Your Azure Tenant ID Import: This is a switch parameter which states if your intention is to import or not ImportJSON: the path to your JSON file. You will finally end up with something like this; ./Backup_Import_CompliancePolicies.ps1 -ClientID \u0026quot;\u0026quot; -ClientSecret \u0026quot;\u0026quot; -TenantID \u0026quot;\u0026quot; -Import -ImportJSON \u0026quot;./YourServerBackups/CompliancePolicies/ImportMe.JSON\u0026quot;\nDirect Execution If you launch the script without the Client ID, Secret and Tenant ID you will be prompted with a Microsoft Logon Window similar to the below. Once you log in the script will continue to run and then output the configuration files in the same way it would using the App Registration. You will need an account with permissions to read (for backups only) or read and write the Compliance Policies. However the likelihood is that if you are looking at this guide you are probably an Intune Service Administrator or Global Administrator on your Tenant.\nWhen you run it directly without any switches the script will prompt you to log in and will only perform a backup of your profiles, outputting the configurations to the folder you are executing it from.\nIf you add the -OutputFolder parameter you can change the destination of the base output folder. 
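The backup itself is a single Graph GET whose value array gets fanned out into one JSON file per policy, named after its displayName. A minimal Python rendering of that loop (the response dict here is a stand-in for the real Graph payload, not live data):

```python
import json

def fan_out_policies(graph_response: dict) -> dict:
    # One serialised document per policy, keyed by the file name the script would write
    return {f"{p['displayName']}.json": json.dumps(p) for p in graph_response["value"]}

sample = {"value": [{"displayName": "Baseline"}, {"displayName": "Kiosk"}]}
print(sorted(fan_out_policies(sample)))  # ['Baseline.json', 'Kiosk.json']
```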
However, if you wish to use the script to import policies you can add the -Import and -ImportJSON parameters. If you specify the -Import parameter you must also specify the -ImportJSON parameter with a path to the JSON file (e.g. C:/ImportMe.json), otherwise the script will display a message that you did not specify the -ImportJSON parameter.\nYou will notice that when you run the script, if the folder does not exist it will create it. It also puts the backups into a dated folder in the yyyyMMdd_HH-mm-ss format, leaving you with something like 20200901_16-05-36\nSummary This can also be useful if you want to make a copy of your policies to assign to a test machine. All you need to do is back up your current policies and amend the JSON file; if you amend the displayName field in the JSON file and save it, you will be able to re-import the same settings. All you then need to do is assign it.\nI have tested this myself at the time of writing the post but if you come across any information you think may be wrong then please leave a comment or e-mail me on [email protected].\nI hope this is useful for your needs.\n","image":"https://hugo.euc365.com/images/post/backupcompliencepolicy/FeaturedImage_huf593cc9f2e18770358a68f3fa6bb35ce_39438_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/backup-and-import-intune-compliance-policies/","tags":["App Registrations","Powershell","Intune","Graph API","Compliance"],"title":"Backup and Import Intune Compliance Policies"},{"categories":["Microsoft Intune","Powershell","Graph API"],"contents":"Why backup Device Configuration Profiles? Why would you need to back up something that is stored and hosted in Azure? Well there are numerous answers to this question really.\nIf you make a change and you break something you can look back and analyse what it was You can make copies of the policy easily rather than having two windows side by side In case one is deleted... 
(Let's hope this is never the case) To review the maturity of your policies (Let's say you started from ground zero and now have over 100 policy settings... Might be nice to review what you've done) We also live in a world of change management and service improvement so there is always a need to make changes to policies and configurations. If you have, or are moving from, traditional Group Policy you will know that you can run an HTML report or a backup before you go ahead and make any changes. Well, when using Intune there was no way to export or back up your profiles or policies from the console\u0026hellip; I have seen people taking screenshots of the pages as a backup of the policies, which is far from an ideal scenario.\nWhat if I told you there is a way you can back up your configuration policies using the Microsoft Graph API? Well it\u0026rsquo;s possible and it\u0026rsquo;s easier than you think.\nThis is the first of a series of guides on how to backup and import different types of policies and profiles using the API. This one will be focusing on Device Configuration Profiles.\nThe Script You will notice that most of this (the authentication part and most of the param block at least) are the same as my other script\u0026hellip; But if it\u0026rsquo;s not broke why fix it? (Those famous last words!!!). This script does have an alternative run method; if you run it directly without the ClientID, ClientSecret and TenantID parameters it will install the Azure AD PowerShell module and use a custom function Connect-AzAD_Token to enable users to interact with a login window if they do not wish to use Azure AD App Registrations with client secrets.\nThis script can be run from anywhere, as a user, as an Admin or even as System. 
You could put this into an Automation Engine to do backups on a schedule if that is your desire.\nparam( [Parameter(DontShow = $true)] [string] $MsGraphVersion = \u0026#34;beta\u0026#34;, [Parameter(DontShow = $true)] [string] $MsGraphHost = \u0026#34;graph.microsoft.com\u0026#34;, #The AzureAD ClientID (Application ID) of your registered AzureAD App [string] $ClientID, #The Client Secret for your AzureAD App [string] $ClientSecret, #Your Azure Tenant ID [string] $TenantId, [Parameter()] [string] $OutputFolder = \u0026#34;.\\ConfigurationProfileBackup\u0026#34;, [switch] $Import, [string] $importJSON ) FUNCTION Connect-AzAD_Token { Write-Host -ForegroundColor Cyan \u0026#34;Checking for AzureAD module...\u0026#34; $AADMod = Get-Module -Name \u0026#34;AzureAD\u0026#34; -ListAvailable if (!($AADMod)) { Write-Host -ForegroundColor Yellow \u0026#34;AzureAD PowerShell module not found, looking for AzureADPreview\u0026#34; $AADModPrev = Get-Module -Name \u0026#34;AzureADPreview\u0026#34; -ListAvailable #Check to see if the AzureAD Preview module is installed; if so, use that as the AAD module, else install the AzureAD module IF ($AADModPrev) { $AADMod = Get-Module -Name \u0026#34;AzureADPreview\u0026#34; -ListAvailable } else { try { Write-Host -ForegroundColor Yellow \u0026#34;AzureAD Preview is not installed...\u0026#34; Write-Host -ForegroundColor Cyan \u0026#34;Attempting to Install the AzureAD Powershell module...\u0026#34; Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force -ErrorAction Stop | Out-Null Install-Module AzureAD -Force -ErrorAction Stop } catch { Write-Host -ForegroundColor Red \u0026#34;Failed to install the AzureAD PowerShell Module `n $($Error[0])\u0026#34; break } } } else { Write-Host -ForegroundColor Green \u0026#34;AzureAD Powershell Module Found\u0026#34; } $AADMod = ($AADMod | Select-Object -Unique | Sort-Object)[-1] $ADAL = Join-Path $AADMod.ModuleBase \u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.dll\u0026#34; 
$ADALForms = Join-Path $AADMod.ModuleBase \u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.Platform.dll\u0026#34; [System.Reflection.Assembly]::LoadFrom($ADAL) | Out-Null [System.Reflection.Assembly]::LoadFrom($ADALForms) | Out-Null $UserInfo = Connect-AzureAD # Microsoft Intune PowerShell Enterprise Application ID $MIPEAClientID = \u0026#34;d1ddf0e4-d672-4dae-b554-9d5bdfd93547\u0026#34; # The redirectURI $RedirectURI = \u0026#34;urn:ietf:wg:oauth:2.0:oob\u0026#34; #The Authority to connect with (Your Tenant) Write-Host -Foregroundcolor Cyan \u0026#34;Connected to Tenant: $($UserInfo.TenantID)\u0026#34; $Auth = \u0026#34;https://login.microsoftonline.com/$($UserInfo.TenantID)\u0026#34; try { $AuthContext = New-Object \u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext\u0026#34; -ArgumentList $Auth # https://msdn.microsoft.com/en-us/library/azure/microsoft.identitymodel.clients.activedirectory.promptbehavior.aspx # Change the prompt behaviour to force credentials each time: Auto, Always, Never, RefreshSession $platformParameters = New-Object \u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.PlatformParameters\u0026#34; -ArgumentList \u0026#34;Auto\u0026#34; $userId = New-Object \u0026#34;Microsoft.IdentityModel.Clients.ActiveDirectory.UserIdentifier\u0026#34; -ArgumentList ($UserInfo.Account, \u0026#34;OptionalDisplayableId\u0026#34;) $authResult = $AuthContext.AcquireTokenAsync((\u0026#34;https://\u0026#34; + $MSGraphHost),$MIPEAClientID,$RedirectURI,$platformParameters,$userId).Result # If the access token is valid then create the authentication header if($authResult.AccessToken){ # Creating header for Authorization token $AADAccessToken = $authResult.AccessToken return $AADAccessToken } else { Write-Host -ForegroundColor Red \u0026#34;Authorization Access Token is null, please re-run authentication...\u0026#34; break } } catch { Write-Host -ForegroundColor Red $_.Exception.Message Write-Host -ForegroundColor Red 
$_.Exception.ItemName break } } # Web page used to help with getting the access token #https://morgantechspace.com/2019/08/get-graph-api-access-token-using-client-id-and-client-secret.html if (($ClientID) -and ($ClientSecret) -and ($TenantId) ) { #Create the body of the Authentication of the request for the OAuth Token $Body = @{client_id=$ClientID;client_secret=$ClientSecret;grant_type=\u0026#34;client_credentials\u0026#34;;scope=\u0026#34;https://$MSGraphHost/.default\u0026#34;;} #Get the OAuth Token $OAuthReq = Invoke-RestMethod -Method Post -Uri \u0026#34;https://login.microsoftonline.com/$TenantId/oauth2/v2.0/token\u0026#34; -Body $Body #Set your access token as a variable $global:AccessToken = $OAuthReq.access_token } else { $global:AccessToken = Connect-AzAD_Token } if ($Import) { IF ($ImportJSON){ $JSON = Get-Content $ImportJSON | ConvertFrom-Json | Select-Object -Property * -ExcludeProperty Version,LastModifiedTime,CreatedDateTime,id,supportsScopeTags | ConvertTo-Json Invoke-RestMethod -Method POST -Uri \u0026#34;https://$MSGraphHost/$MsGraphVersion/deviceManagement/deviceConfigurations\u0026#34; -Headers @{Authorization = \u0026#34;Bearer $AccessToken\u0026#34;} -Body $JSON -ContentType \u0026#34;application/json\u0026#34; } else { Write-Host -ForegroundColor RED \u0026#34;You must specify a JSON file using the -ImportJSON parameter\u0026#34; } } else { $FormattedOutputFolder = \u0026#34;$OutputFolder\\$(Get-Date -Format yyyyMMdd_HH-mm-ss)\u0026#34; IF (!(Test-Path $FormattedOutputFolder)){ try { mkdir $FormattedOutputFolder -ErrorAction Stop | Out-Null } catch { Write-Host -ForegroundColor Red \u0026#34;Failed to create $FormattedOutputFolder\u0026#34; $Error[0] break } } Invoke-RestMethod -Method GET -Uri \u0026#34;https://$MSGraphHost/$MsGraphVersion/deviceManagement/deviceConfigurations\u0026#34; -Headers @{Authorization = \u0026#34;Bearer $AccessToken\u0026#34;} | Select-Object -ExpandProperty 
\u0026#34;Value\u0026#34; | %{ $_ | ConvertTo-Json | Out-File \u0026#34;$FormattedOutputFolder\\$($_.displayname).json\u0026#34; } } Pre-Reqs Azure AD App Registration To make the script work without any interaction you will need an Azure App Registration with the following permissions for the Microsoft Graph API.\nBacking Up Device Configuration Profiles DeviceManagementConfiguration.Read.All (Application Permission) Importing Device Configuration Profiles DeviceManagementConfiguration.ReadWrite.All (Application Permission) GRAPH API DOCUMENTATION If you are not executing the script directly, you will also need the Tenant ID, and the account the script runs as will need permission to the Output folder for backups.\nIf you\u0026rsquo;re not sure how to create an Azure AD App Registration head over to one of my other posts by clicking HERE. Don\u0026rsquo;t forget to store your Client ID and Secret securely and also have it to hand for the rest of the post :D.\nExecuting the Script Using Azure AD App Registrations You can run this script directly from a PowerShell console, using Task Scheduler or using a 3rd party automation product that supports PowerShell.\nThe main thing we will go through here is just the parameters and then putting them all together from the command line, it\u0026rsquo;s really that simple.\nFor Backup Only Client ID: This is the Client ID for your Azure AD App ClientSecret: The Client Secret for the Azure AD App TenantID: Your Azure Tenant ID OutputFolder: Your desired Output folder ./Backup_Import_DeviceConfigurationPolicies.ps1 -ClientID \"\" -ClientSecret \"\" -TenantID \"\" -OutputFolder \"./YourServerBackups/ConfigurationPolicies\" For Importing Policies Client ID: This is the Client ID for your Azure AD App ClientSecret: The Client Secret for the Azure AD App TenantID: Your Azure Tenant ID Import: This is a switch parameter which states whether your intention is to import or not ImportJSON: the path to your JSON file. 
You will finally end up with something like this; ./Backup_Import_DeviceConfigurationPolicies.ps1 -ClientID \"\" -ClientSecret \"\" -TenantID \"\" -Import -ImportJSON \"./YourServerBackups/ConfigurationPolicies/ImportMe.JSON\"\nDirect Execution If you launch the script without the Client ID, Secret and Tenant ID you will be prompted with a Microsoft Logon Window similar to the below. Once you log in, the script will continue to run and then output the configuration files in the same way it would using the App Registration. You will need an account with permissions to be able to read (for backups only) or read and write the Device Configuration Profiles. However the likelihood is that if you are looking at this guide you are probably an Intune Service Administrator or Global Administrator on your Tenant.\nWhen you run it directly without any switches the script will prompt you to log in and will only perform a backup of your profiles, outputting the configurations to the folder you are executing it from.\nIf you add the -OutputFolder parameter you can change the destination of the base output folder. However, if you wish to use the script to import policies you can add the -Import and -ImportJSON parameters. If you specify the -Import parameter you must also specify the -ImportJSON parameter with a path to the JSON file (e.g. C:/ImportMe.json), otherwise the script will display a message that you did not specify the -ImportJSON parameter.\nYou will notice that when you run the script, if the folder does not exist it will create it. It also puts the backups into a dated folder in the yyyyMMdd_HH-mm-ss format, leaving you with something like 20200901_16-05-36\nSummary This can also be useful if you want to make a copy of your policies to assign to a test machine. All you need to do is back up your current policies and amend the JSON file; if you amend the displayName field in the JSON file and save it, you will be able to re-import the same settings. 
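To illustrate the displayName trick just described, a minimal sketch (the folder and profile names are example placeholders, not values from the post) that clones a backed-up profile under a new name ready for re-import:

```powershell
# Sketch: copy a backed-up profile under a new display name so it imports as a new profile
# Paths and names below are examples
$Source = ".\ConfigurationProfileBackup\20200901_16-05-36\My Profile.json"
$Policy = Get-Content $Source -Raw | ConvertFrom-Json
$Policy.displayName = "$($Policy.displayName) - TEST"
$Policy | ConvertTo-Json -Depth 20 | Out-File ".\ImportMe.json"
```

The resulting ImportMe.json can then be passed to the script with -Import and -ImportJSON; the -Depth parameter matters because ConvertTo-Json truncates nested objects at a shallow depth by default.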
All you then need to do is assign it.\nI have tested this myself at the time of writing the post but if you come across any information you think may be wrong then please leave a comment or e-mail me on [email protected].\n","image":"https://hugo.euc365.com/images/post/backupconfigprofile/FeaturedImage_hue851cb8723407c34c7bcfe49139a1b62_65052_460x200_fill_q100_box_smart1.jpg","permalink":"https://hugo.euc365.com/backing-up-intune-device-configuration-profiles/","tags":["App Registrations","Powershell","Intune","Graph API","Configuration Profiles"],"title":"Backup and Import Intune Device Configuration Profiles"},{"categories":["Azure","Powershell","Microsoft Intune"],"contents":"Have you ever needed to add a device to an Azure AD Group as part of your MEMCM or Autopilot deployment for specific apps, profiles or scripts?\nWell it became the case that my organisation needed to do so for a couple of reasons; one was to disable Windows Hello and the other was for devices migrating from a previous Configuration Manager (not MEMCM).\nBecause I was using Hybrid AD Join Autopilot Deployments it became the case that I had to use the device\u0026rsquo;s computer name and get the device information that way.\nFor those of you that don\u0026rsquo;t use the Hybrid AD Join Autopilot method, this creates two Azure AD computers: the first being purely Azure AD Joined and the second being an Intune (MDM Enrolled) object. Microsoft do link these together for the Bitlocker Keys etc. and from my understanding they are looking at making them just one object, but at the time this article was written they remain two separate objects\u0026hellip;. I look forward to the day when two become one ;).\nOh\u0026hellip; did I also mention that you do not need to install any other modules for PowerShell to be able to run these scripts? No? 
Well, that\u0026rsquo;s the nature of the game for me: as little reliance on modules as possible, so the scripts can be run practically anywhere :D.\nThe Script You will notice that most of this (the authentication part and most of the param block at least) are the same as my other script\u0026hellip; But if it\u0026rsquo;s not broke why fix it? (Those famous last words!!!).\nAs mentioned above, this uses the computer name to identify the device and then uses the information from that device object to add it to the Azure AD Group. When the device is identified from the name it gets the device\u0026rsquo;s Azure ID, then proceeds to create the JSON body for the request and submits this to the API.\nparam( [Parameter(DontShow = $true)] [string] $MsGraphVersion = \u0026#34;beta\u0026#34;, [Parameter(DontShow = $true)] [string] $MsGraphHost = \u0026#34;graph.microsoft.com\u0026#34;, #The AzureAD ClientID (Application ID) of your registered AzureAD App [string] $ClientID = \u0026#34;\u0026lt;YourClientID\u0026gt;\u0026#34;, #The Client Secret for your AzureAD App [string] $ClientSecret = \u0026#34;\u0026lt;YourClientSecret\u0026gt;\u0026#34;, #Your Azure Tenant ID [string] $TenantId = \u0026#34;\u0026lt;YourTenantID\u0026gt;\u0026#34;, #The Azure AD Group Object ID [string] $GroupID = \u0026#34;\u0026lt;YourGroupID\u0026gt;\u0026#34;, #The name of the device [string] $InputDevice ) IF (!($InputDevice)) { $InputDevice = $env:COMPUTERNAME } #Create the body of the Authentication of the request for the OAuth Token $Body = @{client_id=$ClientID;client_secret=$ClientSecret;grant_type=\u0026#34;client_credentials\u0026#34;;scope=\u0026#34;https://$MSGraphHost/.default\u0026#34;;} #Get the OAuth Token $OAuthReq = Invoke-RestMethod -Method Post -Uri \u0026#34;https://login.microsoftonline.com/$TenantId/oauth2/v2.0/token\u0026#34; -Body $Body #Set your access token as a variable $global:AccessToken = $OAuthReq.access_token $GroupMembers = Invoke-RestMethod -Method Get -uri 
\u0026#34;https://$MSGraphHost/$MsGraphVersion/groups/$GroupID/members\u0026#34; -Headers @{Authorization = \u0026#34;Bearer $AccessToken\u0026#34;} | Select-Object -ExpandProperty Value $Devices = Invoke-RestMethod -Method Get -uri \u0026#34;https://$MSGraphHost/$MSGraphVersion/devices?`$filter=startswith(displayName,\u0026#39;$InputDevice\u0026#39;)\u0026#34; -Headers @{Authorization = \u0026#34;Bearer $AccessToken\u0026#34;} | Select-Object -ExpandProperty Value | %{ if ($GroupMembers.ID -contains $_.id) { Write-Host -ForegroundColor Yellow \u0026#34;$($_.DisplayName) ($($_.ID)) is in the Group\u0026#34; } else { Write-Host -ForegroundColor Green \u0026#34;Adding $($_.DisplayName) ($($_.ID)) To The Group\u0026#34; $BodyContent = @{ \u0026#34;@odata.id\u0026#34;=\u0026#34;https://graph.microsoft.com/v1.0/devices/$($_.id)\u0026#34; } | ConvertTo-Json Invoke-RestMethod -Method POST -uri \u0026#34;https://$MSGraphHost/$MsGraphVersion/groups/$GroupID/members/`$ref\u0026#34; -Headers @{Authorization = \u0026#34;Bearer $AccessToken\u0026#34;; \u0026#39;Content-Type\u0026#39; = \u0026#39;application/json\u0026#39;} -Body $BodyContent } } The Pre-Reqs To make the script work you will need an Azure App Registration with the following permissions for the Microsoft Graph API;\nGroupMember.ReadWrite.All Group.ReadWrite.All Directory.ReadWrite.All GRAPH API DOCUMENTATION You will also need the Group ID and Tenant ID; to find these, follow these steps\nLogin to the Azure AD console (You can get your Tenant ID from the Overview tab under Tenant Information) Select Groups Search for the group you want to utilise and open it From the Group overview page copy the Object ID as this is the Group ID we need. If you're not sure how to create an Azure AD App Registration head over to one of my other posts by clicking HERE. Don't forget to store your Client ID and Secret securely and also have it to hand for the rest of the post :D. 
Executing the Script There are numerous ways you can execute this script: you could use it as a Script in MEMCM or Intune, in a Task Sequence, as an Application or Package (you will need to add some form of check file for the detection rule), or you could execute it directly from the command line.\nI will demonstrate the Script in MEMCM and Intune for you.\nScript in MEMCM This is the best option if you want to do it manually on a case by case basis (i.e. right click on the computer object and select Run Script).\nJump into the Script section in MEMCM (Software Library \u0026gt; Scripts) and click Create Script from the ribbon.\nGive the script a Name, select the language as PowerShell and then copy and paste the script above (Tip: In the top right corner of the script block you can click Copy Script Text).\nClick Next. This is where you need the details we noted earlier. MEMCM is great at pulling through the Param block parameters; all we need to do is amend the ClientID, ClientSecret and TenantId arguments as below.\nAs we are using the environment variable for the InputDevice we will need to hide this from selection, as the script will use the Environment Variable if the parameter is not used.\nDouble click on InputDevice, change the Hidden drop down to True and click OK.\nWhen finished click Next, review the settings, then click Next and then Close.\nDon\u0026rsquo;t forget to Approve your Script\nNow let\u0026rsquo;s choose a client computer from Assets and Compliance \u0026gt; Devices. 
Right click on the object and select Run Script, select the script object you created, review the details and then let the script run.\nThis does not take long to run, and the output of the script if the device is successfully added to the group is as below;\nAs mentioned before, as these devices are Hybrid Joined they have two entries in Azure AD, which is why the output shows it adding the device twice with two different GUIDs.\nScript in Intune This time the script needs to be saved as a .ps1 file to be uploaded and used by Intune. Unfortunately, using the Scripts section in Intune you cannot specify parameters, so you will need to put your Client ID, Secret, TenantID and Group ID into the script before uploading. You could use a Win32 App as an alternative method if you wish to use them via the command line. Once you\u0026rsquo;ve saved the script launch the Endpoint Manager Console from your favourite web browser.\nSelect Devices from the left hand pane, under the Policy section click Scripts.\nClick Add \u0026gt; Windows 10, name your script appropriately and enter a short description (even a link to this blog :P). Once you\u0026rsquo;ve done that, hit Next and then select your script to use.\nLeave all of the sliders as No;\nClick Next, add your Scope Tags (if any) and your assignments. Review the configuration and click Add.\nThis doesn\u0026rsquo;t run instantaneously, please refer to the Microsoft Documentation which also has some other notable considerations listed.\nTo Conclude These are just two of the ways you can run the script; you could also potentially run this in the back end of a web application for people who want to request to disable things like Windows Hello (as I mentioned at the start). I could spend days, weeks, even months writing articles on some of the uses. 
A user one will no doubt follow in due course so watch this space :D.\nI did fully test these methods at the time of writing the blog but if you come across any information you think may be wrong then please leave a comment or e-mail me on [email protected].\nI hope this is useful for your needs.\n","image":"https://hugo.euc365.com/images/post/addtoaadgroup/Featuredimage_hu3628e62f4b830086042ed40985c25021_8390_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/add-devices-to-an-azure-ad-group-using-the-microsoft-graph-api/","tags":["Azure AD","Azure","App Registrations","Graph API"],"title":"Add devices to an Azure Group using the Microsoft Graph API"},{"categories":["Powershell","Microsoft Intune","MEMCM","Graph API"],"contents":"I have been working on a project at the company I work for, and up to this point we have been primarily focused on getting new devices imported and deploying via Autopilot.\nNow we have successfully leaped over that hurdle with very few issues (apart from the odd TPM attestation issue here and there and the ESP Profile page being skipped), we moved on to focusing on our current estate and how to import these devices into Autopilot.\nThere are a couple of ways to do this: you could run this in a package, as an application, as a script or in a task sequence for when you decide to re-build the machines.\nNow the choice is yours on which method will suit your organization the best.\nThe Script Now on GitHub Now, let\u0026rsquo;s talk about the script itself. When I started out on this path I used Michael Niehaus\u0026rsquo; Get-WindowsAutoPilotInfo script, even before this had the -online parameter. I was also hoping to leverage the same script for importing devices into Autopilot silently.\nThere were however a couple of stumbling blocks for me, the first being that Connect-MSGraph would not connect using the ClientID and Secret from the Azure App Registration and kept prompting for credentials. 
The second being that it downloaded other PowerShell modules. This was an issue for us as firstly it added a further time delay to the script, and secondly one of our security products blocked it during this process.\nI had also recently started leveraging the Microsoft Graph API and decided to find a way to do this without the additional modules while achieving the same outcome. The following is the result.\nI have recently updated the script (28/08/2020) to include the use of Group Tags, but also to add the -Hash parameter. The Hash parameter allows you to use any device hash to register it with your tenant; for example, if you had a folder with a set of .csv files containing the device hashes you could do a recursive import of all of these.\nIf you want to export a device hash to a CSV file to test this, use the following command, which will create the CSV.\nYou can either copy and paste the hash or import the CSV into PowerShell and reference it that way.\nGet-CimInstance -Namespace root/cimv2/mdm/dmmap -Class MDM_DevDetail_Ext01 -Filter \u0026#34;InstanceID=\u0026#39;Ext\u0026#39; AND ParentID=\u0026#39;./DevDetail\u0026#39;\u0026#34; | Export-CSV \u0026#34;C:\\$($ENV:ComputerName)_HardwareInformation.csv\u0026#34; -NoTypeInformation \u0026lt;#PSScriptInfo .VERSION 2.0 .AUTHOR David Brook .COMPANYNAME EUC365 .COPYRIGHT .TAGS Autopilot; Intune; Mobile Device Management .LICENSEURI .PROJECTURI .ICONURI .EXTERNALMODULEDEPENDENCIES .REQUIREDSCRIPTS .EXTERNALSCRIPTDEPENDENCIES .RELEASENOTES Version 2.0: Added the ability to make the script accept command line arguments for just the Hash and also allow Group Tags Version 1.0: Original published version. #\u0026gt; \u0026lt;# .SYNOPSIS This script will import devices to Microsoft Intune Autopilot using the device\u0026#39;s hardware hash. .DESCRIPTION This script will import devices to Microsoft Intune Autopilot using the device\u0026#39;s hardware hash, with the added capability of being able to add a Group Tag. 
.PARAMETER MSGraphVersion The Version of the MS Graph API to use Default: Beta e.g: 1.0 .PARAMETER MsGraphHost The MS Graph API Host Default: graph.microsoft.com .PARAMETER ClientID This is the Azure AD App Registration Client ID .PARAMETER ClientSecret This is the Azure AD App Registration Client Secret .PARAMETER TenantId Your Azure Tenant ID .PARAMETER Hash This parameter is to be used if you want to import a specific hash from either a file or copying and pasting from an application. .PARAMETER GroupTag This parameter is to be used if you want to tag your devices with a specific group tag. .EXAMPLE .\\Enroll_to_Autopliot_Unattended.ps1 -ClientID \u0026#34;\u0026lt;Your Client ID\u0026gt;\u0026#34; -ClientSecret \u0026#34;\u0026lt;YourClientSecret\u0026gt;\u0026#34; -TenantID \u0026#34;\u0026lt;YourTenantID\u0026gt;\u0026#34; This will enroll the device it is running on to Autopilot. Please note this will need to be done as an administrator .EXAMPLE .\\Enroll_to_Autopliot_Unattended.ps1 -ClientID \u0026#34;\u0026lt;Your Client ID\u0026gt;\u0026#34; -ClientSecret \u0026#34;\u0026lt;YourClientSecret\u0026gt;\u0026#34; -TenantID \u0026#34;\u0026lt;YourTenantID\u0026gt;\u0026#34; -GroupTag \u0026#34;Sales Device\u0026#34; This will enroll the device it is running on to Autopilot with a Group Tag of Sales Device. Please note this will need to be done as an administrator .EXAMPLE .\\Enroll_to_Autopliot_Unattended.ps1 -ClientID \u0026#34;\u0026lt;Your Client ID\u0026gt;\u0026#34; -ClientSecret \u0026#34;\u0026lt;YourClientSecret\u0026gt;\u0026#34; -TenantID \u0026#34;\u0026lt;YourTenantID\u0026gt;\u0026#34; -Hash \u0026#34;\u0026lt;A Hash\u0026gt;\u0026#34; This will enroll the inputted device hash to Autopilot; this can be done against a group of CSV files etc. 
#\u0026gt; param( [Parameter(DontShow = $true)] [string] $MsGraphVersion = \u0026#34;beta\u0026#34;, [Parameter(DontShow = $true)] [string] $MsGraphHost = \u0026#34;graph.microsoft.com\u0026#34;, #The AzureAD ClientID (Application ID) of your registered AzureAD App [string] $ClientID = \u0026#34;\u0026lt;YourClientID\u0026gt;\u0026#34;, #The Client Secret for your AzureAD App [string] $ClientSecret = \u0026#34;\u0026lt;YourSecret\u0026gt;\u0026#34;, #Your Azure Tenant ID [string] $TenantId = \u0026#34;\u0026lt;YourTenant\u0026gt;\u0026#34;, [string] $Hash, [string] $GroupTag ) Begin { #Create the body of the Authentication of the request for the OAuth Token $Body = @{client_id=$ClientID;client_secret=$ClientSecret;grant_type=\u0026#34;client_credentials\u0026#34;;scope=\u0026#34;https://$MSGraphHost/.default\u0026#34;;} #Get the OAuth Token $OAuthReq = Invoke-RestMethod -Method Post -Uri \u0026#34;https://login.microsoftonline.com/$TenantId/oauth2/v2.0/token\u0026#34; -Body $Body #Set your access token as a variable $global:AccessToken = $OAuthReq.access_token } Process { if(!$Hash) { $session = New-CimSession # Get the common properties. 
Write-Verbose \u0026#34;Checking $env:COMPUTERNAME\u0026#34; $serial = (Get-CimInstance -CimSession $session -Class Win32_BIOS).SerialNumber # Get the hash (if available) $devDetail = (Get-CimInstance -CimSession $session -Namespace root/cimv2/mdm/dmmap -Class MDM_DevDetail_Ext01 -Filter \u0026#34;InstanceID=\u0026#39;Ext\u0026#39; AND ParentID=\u0026#39;./DevDetail\u0026#39;\u0026#34;) if ($devDetail) { $hash = $devDetail.DeviceHardwareData } else { $hash = \u0026#34;\u0026#34; } Remove-CimSession $session } } End { if(!($GroupTag)) { $PostData = @{ \u0026#39;hardwareIdentifier\u0026#39; = \u0026#34;$hash\u0026#34; } | ConvertTo-Json } else { $PostData = @{ \u0026#39;hardwareIdentifier\u0026#39; = \u0026#34;$hash\u0026#34; \u0026#39;groupTag\u0026#39; = \u0026#34;$GroupTag\u0026#34; } | ConvertTo-Json } $Post = Invoke-RestMethod -Method POST -Uri \u0026#34;https://$MSGraphHost/$MsGraphVersion/devicemanagement/importedWindowsAutopilotDeviceIdentities\u0026#34; -Headers @{Authorization = \u0026#34;Bearer $AccessToken\u0026#34;; \u0026#39;Content-Type\u0026#39; = \u0026#39;application/json\u0026#39;} -Body $PostData DO { Write-Host \u0026#34;Waiting for device import\u0026#34; Start-Sleep 10 } UNTIL ((Invoke-RestMethod -Method Get -Uri \u0026#34;https://$MsGraphHost/$MsGraphVersion/Devicemanagement/importedwindowsautopilotdeviceidentities/$($Post.ID)\u0026#34; -Headers @{Authorization = \u0026#34;Bearer $AccessToken\u0026#34;} | Select-Object -ExpandProperty State) -NOTmatch \u0026#34;unknown\u0026#34;) Invoke-RestMethod -Method Get -Uri \u0026#34;https://$MsGraphHost/$MsGraphVersion/Devicemanagement/importedwindowsautopilotdeviceidentities/$($Post.ID)\u0026#34; -Headers @{Authorization = \u0026#34;Bearer $AccessToken\u0026#34;} | Select-Object -ExpandProperty State } The Pre-Reqs To make the script work you will need an Azure App Registration with the DeviceManagementServiceConfig.ReadWrite.All Application permission for the Microsoft Graph API.\nIf you\u0026rsquo;re not sure how to create 
an Azure AD App Registration, head over to one of my other posts by clicking HERE. Don\u0026rsquo;t forget to store your Client ID and Secret securely and also have it to hand for the rest of the post :D.\nExecuting the Script As mentioned before, there are numerous ways you can run this script; I will demonstrate two of them. I will just mention though that if you do use this as an Application you will need to amend the script to add some form of check file or registry key.\nScript in MEMCM This is the best option if you want to do it manually on a case by case basis (i.e. right click on the computer object and select Run Script).\nJump into the Script section in MEMCM (Software Library \u0026gt; Scripts) and click Create Script from the ribbon.\nGive the script a Name, select the language as PowerShell and then copy and paste the script above (Tip: In the top right corner of the script block you can click Copy Script Text).\nClick Next. This is where you need the details we noted earlier. MEMCM is great at pulling through the Param block parameters; all we need to do is amend the ClientID, ClientSecret and TenantId arguments as below. When finished click Next, review the settings, then click Next and then Close.\nDon\u0026rsquo;t forget to Approve your Script\nNow choose your victim\u0026hellip; erm, I mean client computer from Assets and Compliance \u0026gt; Devices. Right click on the object and select Run Script, select the script object you created, review the details and then let the script run :D. This can take about 2 to 5 minutes, as it keeps a loop going until the device is imported. 
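The import-status loop just described can also be given an upper bound so an unattended run never hangs indefinitely. A sketch of that variation, assuming the $AccessToken, $MsGraphHost, $MsGraphVersion and $Post variables from the script above (the 10-minute deadline is an example choice):

```powershell
# Sketch: poll the Autopilot import status, but give up after 10 minutes
# Assumes $AccessToken, $MsGraphHost, $MsGraphVersion and $Post from the main script
$Deadline = (Get-Date).AddMinutes(10)
do {
    Write-Host "Waiting for device import"
    Start-Sleep -Seconds 10
    $State = Invoke-RestMethod -Method Get `
        -Uri "https://$MsGraphHost/$MsGraphVersion/deviceManagement/importedWindowsAutopilotDeviceIdentities/$($Post.ID)" `
        -Headers @{ Authorization = "Bearer $AccessToken" } |
        Select-Object -ExpandProperty state
} until (("$State" -notmatch "unknown") -or ((Get-Date) -gt $Deadline))
```

In a Task Sequence a bounded loop like this is worth considering, since a hung step would otherwise stall the whole deployment.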
When the script finishes, if you look at the script output you will see the following;\nNotice the last output shows the import status of the device.\nIn a Task Sequence in MEMCM I won\u0026rsquo;t go into how to create an entire Task Sequence for a device rebuild; however, I will explain how you can use the script to import the device into Autopilot during a Task Sequence, whether it is a new one or an existing one.\nHead over to Software Library \u0026gt; Operating System \u0026gt; Task Sequences so we can get started.\nI will be using a current Task Sequence for this demo. There may be a future post on how to create a Task Sequence to re-build your device to a standard OS with drivers and import it to Autopilot.\nMy existing Task Sequence looks like this;\nThis is a very basic TS which just boots to WinPE, installs Windows and loads driver packs for VMware Virtual Machines (only a test TS).\nI no longer want to have to re-build the device and then import it to Autopilot manually, so instead we add the script to the top of the TS as follows.\nClick Add \u0026gt; General \u0026gt; Run PowerShell Script Enter a Name and Description for the script Select Enter a PowerShell Script Click Add Script Copy the script above and paste it into the window and click OK In the Parameters box enter -ClientID \"\" -ClientSecret \"\" -TenantId \"\" Select Bypass under the PowerShell Execution Policy drop-down Your window should then look like this;\nHit Apply and then OK and give it a whirl on your machine (well, not yours\u0026hellip; always be sure to test it first :P)\nWhen it runs you will see the following appear (depending on your Task Sequence);\nThe device will then be enrolled into Autopilot;\nWhen the device then reboots after my task sequence, I am presented with the expected Autopilot Enrolment window.\nTo Conclude So I have shown two ways of using the script to enrol to Autopilot unattended; there is nothing preventing you running this from the command line with the same 
parameters; however, if you wanted to do it that way I would definitely look at Michael Niehaus\u0026rsquo; Get-WindowsAutopilotInfo script (see the opening few paragraphs for the links to these) as it does not require an App Registration.\nI did fully test these methods at the time of writing the blog, but if you come across any information you think may be wrong then please leave a comment or e-mail me on [email protected].\nI hope this is useful for your needs.\n","image":"https://hugo.euc365.com/images/post/enroltoap/featuredImage_hu4a55fcf092277c676faee897c45089d9_131576_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/enrol-devices-to-autopilot-unattended/","tags":["App Registrations","Graph API","Autopilot"],"title":"Enrol Devices to Autopilot (Unattended)"},{"categories":["Azure"],"contents":"What can you use an Azure App Registration for An Azure App Registration has many uses; in my case I use it mainly with the Microsoft Graph API to perform Intune Configuration Profile backups, list devices, update a local CMDB dynamically and also enrol devices into Autopilot.\nYou will see the vast amount of options available for the API when you start adding the API permissions below.\nIf you are also using MEMCM and co-managing devices with Intune, or you have your Azure tenant attached, that will also be using an Azure App Registration to read user details and use user impersonation.\nHowever, as you will notice, this guide is focused on using the App Registration with the Microsoft Graph API with client secrets.\nFinding what API Permission is required for your Microsoft Graph API Call Each API can have a different set of permissions required to be able to read and/or write data. 
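As a quick illustration of the read/write split (a sketch, assuming you already hold a valid access token in an $AccessToken variable): simply listing the imported Autopilot devices only exercises the read side of the permission, e.g. Invoke-RestMethod -Method GET -Uri \u0026#34;https://graph.microsoft.com/beta/deviceManagement/importedWindowsAutopilotDeviceIdentities\u0026#34; -Headers @{Authorization = \u0026#34;Bearer $AccessToken\u0026#34;}, whereas creating or deleting entries through that same endpoint is what pulls in the ReadWrite requirement. 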
The best way to find these is by using the Microsoft Graph API Reference Guide.\nOnce you have loaded the API Reference guide, you will notice a list of categories for the API like below.\nFor this post I am using the importedWindowsAutopilotDeviceIdentities API reference, which is in the Beta API. You can change which API reference you are using via the drop-down under the Version header in the left-hand pane.\nIf you browse to Devices and apps \u0026gt; Corporate management \u0026gt; Imported windows autopilot device identity \u0026gt; List, you will see under the prerequisites which permissions that particular API requires, as highlighted below.\nI will be using the Application permissions, as the app this was created for runs unattended\nYou will notice that even though you are only listing devices, the ReadWrite permission is listed in the permission set. If you look closer at the permission table headers you will see it states Permissions (from most to least privileged), meaning that to use the full functionality of this API (such as Create, Delete and Update) you will need the ReadWrite permission; however, if you just want to list the data you only need the least-privileged permission, which is Read.\nHave a browse around and notice the differences in different categories before moving on.\nCreating the Azure App Registration Head over to the Azure Portal and launch Azure Active Directory.\nFrom the pane on the left select App Registrations; from here you can either choose to use an existing registration or create one for this specific purpose. 
I would, however, recommend that you use a dedicated one for this purpose; this way the app does not have more permissions than it requires.\nLet\u0026rsquo;s get started;\nClick New Registration from the ribbon Give the app a name that represents its purpose, leave the rest as default and click Register From the left pane, select API Permissions; this is where we grant the app permission to the Microsoft Graph API Select Add a permission from the ribbon and you will see a pop-out like the below; Click on Microsoft Graph \u0026gt; Application Permissions In the search box type Service and this will show the permissions we require Click Add Permissions You will then see an orange banner stating that the permissions are being edited and consent will need to be given Click Grant admin consent for, then click Yes on the banner to confirm you would like to grant consent Next we need a client secret... you will need to store this in a safe place, as once you click off the page it hides all but a few characters. Click Certificates \u0026amp; secrets from the pane on the left Click New Client Secret Specify a description; if you are going to put this in numerous locations and let multiple people use it, you could relate it to that team/department, but for this example we will keep it simple. Specify a validity period; you have three options: 1 year, 2 years or never. I would not recommend using the latter, and would ensure that you have processes in place to review the application When you have added the secret, copy the value as you will need it later. To go with the client secret you will also need the Application (client) ID and the Directory (tenant) ID. These can be found on the Overview page.\nWith the details you have gathered from this article you can perform unattended actions against the Microsoft Graph API and other services. 
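To sketch how those three values come together (variable names here are my own), a client credentials token request in PowerShell might look like this: $Body = @{ grant_type = \u0026#39;client_credentials\u0026#39;; client_id = $ClientId; client_secret = $ClientSecret; scope = \u0026#39;https://graph.microsoft.com/.default\u0026#39; }; $Token = (Invoke-RestMethod -Method POST -Uri \u0026#34;https://login.microsoftonline.com/$TenantId/oauth2/v2.0/token\u0026#34; -Body $Body).access_token. The resulting $Token is then passed as a Bearer token in the Authorization header of your Graph calls. 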
I will be posting some other blog posts which relate to using these details, so keep an eye on the blog for interesting ways to use App Registrations.\n","image":"https://hugo.euc365.com/images/post/createazureapp/featuredImage_hu59fdad6db3c4c8c16140cbf07aa02e2c_218680_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/create_an_azure_app_registration/","tags":["Azure","Azure AD","App Registrations"],"title":"Create an Azure App Registration"},{"categories":["Microsoft Intune","Group Policy"],"contents":"What is an OMA-URI Policy? Meaning: Open Mobile Alliance Uniform Resource Identifier Traditionally when you implement a new piece of software which contains Group Policy Objects (GPOs) to customise the features of the application, you would import the .adml and .admx files into the PolicyDefinitions folder located within SYSVOL for the domain, or into the local PolicyDefinitions (C:\Windows\PolicyDefinitions) folder.\nIf like me you are wanting to move most if not all of your policies to Intune for better MDM (Mobile Device Management), then you can run into a rather complex scenario where you need to ingest the .admx file and then work out how to configure your policies correctly. Now the first time I looked at this I thought \u0026lsquo;Oh man, this looks tricky, it\u0026rsquo;s a job for later on\u0026rsquo;. However, after putting it off for so long I finally dived into it and it turns out it isn\u0026rsquo;t as complicated as it looks.\nThe link to the Microsoft Guide can be found HERE!! Feel free to head over and check it out. Hopefully this post/guide can help you out when it comes to creating your own.\nImporting your ADMX file For this example we will be using Google Chrome\u0026rsquo;s GPOs; not because they are simple (each policy has its own complexities) but because it is a common policy set people will look to migrate to Intune. 
The ADMX files can be found on the Chrome for Enterprise Download Page; you can obtain the GPO files on their own by finding the Manage Chrome Browser section.\nOnce downloaded, extract the .zip file and find the Chrome.admx (normally located within Windows\ADMX), and open this in a text editor (Notepad will do fine). Now we will switch over to Intune. From the Endpoint Manager Admin Center homepage select Devices \u0026gt; Configuration Profiles from the navigation bar. Click Create profile.\nClick Create Profile Select the following and click Create; Enter the name for your profile (i.e. Windows - Google Chrome Policy) and any other information you wish to add, click Next Now here is the fun part, Click Add Enter a Name (use something descriptive) and enter a description The OMA-URI field should be something like the below ./Vendor/MSFT/Policy/ConfigOperations/ADMXInstall/{AppName}/Policy/{ADMXFileName} Data Type: String Value: Paste the contents of the chrome.admx file Click Add Now proceed to deploy the policy (I would recommend doing this to a test group first) The above path determines what the OMA-URI is for future policies; for example, if you enter ./Vendor/MSFT/Policy/ConfigOperations/ADMXInstall/Chrome/Policy/ChromeAdmx as the OMA-URI for the device import, the policies within the ADMX will then be formatted like Chrome~Policy~googlechrome~Startup.\nThe full path for the OMA-URI for the Homepage setting would be ./Device/Vendor/MSFT/Policy/Config/Chrome~Policy~googlechrome~Startup/HomepageLocation (the first segment is the class, Device or User, which is covered further down). 
The format of the URI is {AppName}~{SettingType}~{CategoryAndSubCategory}.\nNow the category path may not just be a root category; for example, the Startup settings will look like this: Chrome~Policy~googlechrome~Startup. To break it down\nAppName: Chrome (Set in the ADMX File Ingest) SettingType: Policy (Set in the ADMX File Ingest) CategoryAndSubCategory: Category: googlechrome Sub Category: Startup Now this may seem complicated at first glance; however, do keep reading and I will explain how to obtain the paths and settings.\nFinding and Creating Policies Before we get started, if you haven\u0026rsquo;t already, re-open the properties of the policy you created previously. You will also need to open the .admx files in your favourite text editor. Personally I use VSCode as it has syntax highlighting for XML files and many other useful features.\nThis section requires the policy to have been deployed to a device.\nTo get started, open the Chrome.admx and Chrome.adml (the latter will be in the language folder, i.e. en-US, within the .zip file) and launch REGEDIT (Registry Editor).\nLet\u0026rsquo;s get started; within REGEDIT browse to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PolicyManager\AdmxDefault (there is always a random GUID per device; however, this doesn\u0026rsquo;t make a difference to the policy so don\u0026rsquo;t worry). From here, if you deployed the Chrome policy as per the above steps you will see something like the below. 
(Note: this is showing Google Updates, but we will be working with the Chrome policy.)\nAs mentioned above they are broken down as below; we will be sticking with the Homepage setting, which is within the Startup category.\nAppName: Chrome SettingType: Policy Categories: Root Category: googlechrome Sub Category: startup Now if you look at the .admx file, you will see the following under the categories section\nAs you can see, this correlates with the policy in the registry; now if you expand the policy within the registry you can see that the HomepageLocation setting is created in there\nThis is the part where you actually need the .admx file. Let\u0026rsquo;s start by explaining why you need the .admx file. For this example you are not only enabling or disabling a policy but actually specifying a value. When you go to specify this value in Intune you will need to use the ValueName from within the ADMX file. See the below example;\nLet\u0026rsquo;s break the policy setting down first. As always, I would recommend naming the setting descriptively and also adding a description if you want more information about it. The OMA-URI would then be ./Device/Vendor/MSFT/Policy/Config/Chrome~Policy~googlechrome~Startup/HomepageLocation.\nClass (Device or User) One thing to note here is the first section of the OMA-URI. It seems simple, but you will need to specify if you want to apply this to a Device or User. Some policies are only available for Devices, and likewise some only for Users. You can see this in the ADMX file by looking at the policy like below\nNow the HomepageLocation setting can be User or Device based, so the class shows as Both. 
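In practice that first section is simply a choice between two prefixes; the Homepage setting could therefore be written as ./Device/Vendor/MSFT/Policy/Config/Chrome~Policy~googlechrome~Startup/HomepageLocation to target the device, or ./User/Vendor/MSFT/Policy/Config/Chrome~Policy~googlechrome~Startup/HomepageLocation to target the user. 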
If this were a device-only policy it would show class=\u0026ldquo;Machine\u0026rdquo;, and for a user-only policy it would show class=\u0026ldquo;User\u0026rdquo;.\nAppName As mentioned above, the AppName is totally customisable; however, I would keep it as simple and descriptive as possible.\nFor example, for this post I used Chrome as the AppName, so the policies show as Chrome~{SettingType}~{Root Category}~{Sub Category}. If I were to do another policy, for Lenovo Vantage for example, and set the AppName to LenovoVantage, the policies would show LenovoVantage~{SettingType}~{Root Category}~{Sub Category}.\nSettingType As we imported the ADMX into the Policy section (./Vendor/MSFT/Policy/ConfigOperations/ADMXInstall/{AppName}/Policy/{ADMXFileName}) of the ADMXInstall OMA-URI, this part of the OMA-URI will always be Policy.\nCategories and Sub-Categories This is detailed above but I will elaborate a little bit more here.\nIf you look at the .admx file and browse to the categories section, you will see lots of categories set with a parent category within there.\nIf you work backwards you can see how this all comes together.\nThe OMA-URI is Chrome~Policy~{Root Category}~{Sub Category}\nSub Category: Startup Root Category: googlechrome Something to note here is that it is possible for a parent category to have another parent category, to make matters more confusing. You could end up having a policy like {AppName}~Policy~{Parent Category 1}~{Parent Category 2}~{Sub Category} and so on\u0026hellip; Just something to always check.\nThis is partly the reason to always check the registry when creating policies.\nThe Policy Itself So finally we come to the policy itself; at the end of the OMA-URI string comes the setting, in this case HomepageLocation.\nThis is derived again from the .admx file; if you search for the policy you will come across something like the above screenshot. You will notice that the policy is based on the DisplayName property. 
We won\u0026rsquo;t cover setting the value in this section as I feel it deserves its own.\nThe Policy Value/Setting Some settings are really simple to configure; for example, if you want to disable something like MetricsReportingEnabled you can simply add it like below\nOthers, like HomepageLocation, are not complicated but require a little more than a simple disable or enable switch, as you need to specify a value for the property.\nAgain jumping back into the .admx file, if you look at the last highlighted section of the below screenshot you can see that there is a ValueName; this is the value we want to change.\nTo do this you would need to write the policy string like the below screenshot\nIf you look at the data id section, this is where you put the value name from the .admx file; from there you then set the value of that property, for example https://euc365.com.\nIf you apply this policy and sync it to your device you will notice that it changes the homepage upon start-up.\nNow\u0026hellip; Here is another example of adding a string to a policy which requires multiple values. This one is not so much tricky as messy.\nFor this example we will use the policy to allow pop-ups for certain sites; I will start off by showing you the screenshot of the policy itself\nYou will notice that at the start and end of each URL there is a rather ugly-looking set of characters; this is where it can get messy. The pre-URL string is the only one that needs to be changed per entry. 
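To make that concrete, a rough reconstruction of the assembled value for two allowed sites (the data id PopupsAllowedForUrlsDesc is taken from my copy of the Chrome ADMX, so double-check it against yours) would be \u0026lt;enabled/\u0026gt; \u0026lt;data id=\u0026#34;PopupsAllowedForUrlsDesc\u0026#34; value=\u0026#34;1\u0026amp;#xF000;http://euc365.com\u0026amp;#xF000;2\u0026amp;#xF000;http://bbc.co.uk\u0026#34;/\u0026gt;. 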
If you think of this like a list, option 1 in the list is http://euc365.com and option 2 is http://bbc.co.uk. As you can see in the screenshot, the pre-URL has a number which needs to be incremented each time you add another URL.\nIf you only have one URL you don’t need to add the final \u0026amp;#xF000;; however, if you are using multiple values you will need to add that to the end of the URL and then follow it up with your next URL.\nThe last URL in the sequence should not be followed by \u0026amp;#xF000;\nLooking at putting the URL together you will end up with something like 1\u0026amp;#xF000; + https://euc365.com + \u0026amp;#xF000; + following URLs. For a single URL it would be 1\u0026amp;#xF000; + https://euc365.com\n","image":"https://hugo.euc365.com/images/post/omauri/chromeadmx-300x208_hu9ed1ef16713ef4b381fc8134635f7fa9_21756_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/breaking-custom-oma-uri-csp-policies/","tags":["Intune","Configuration Profiles","Group Policy"],"title":"Breaking Down Custom OMA-URI (CSP) Policies"},{"categories":["PowerShell","Community Tools"],"contents":"SO\u0026hellip; I\u0026rsquo;ve been working tirelessly, along with my colleagues, in trying to get an Autopilot Hybrid Deployment working. 
As any tech does, I give everything my blood, sweat and tears before logging a call with support (sounds silly, as we could waste hours\u0026hellip; but where\u0026rsquo;s the fun in just logging a call); however, on occasion you have to admit defeat and raise a service request.\nWhen logging an SR (Service Request) it\u0026rsquo;s always best to provide as much information as possible about the device you are using (the one that\u0026rsquo;s having the issues) so the Microsoft engineer can do as much fault finding and troubleshooting as they can before they contact you.\nWhen troubleshooting an Autopilot deployment it\u0026rsquo;s useful to have the following information from a support engineer\u0026rsquo;s point of view, but it is also helpful when speaking with Microsoft;\nAzure Device ID (GUID) Intune Device ID (GUID) Name (Not vital as it can be obtained from the above) IP Address Logon Server Useful for Hybrid Deployments in an organisation with multiple Domain Controllers when using Azure AD Connect Device Serial Number (Easiest way to find a device in the Windows Enrolment Devices screen)\nAll of that information is not something that you can just grab in one place off the shelf. Obtaining all of it from various locations got very long in the tooth and became a bit of a drag. 
In true IT fashion I spent hours writing a PowerShell script which displays as a Windows Form, gathers all of this information and copies it to your clipboard.\nThe form will look like the below when run. It will only display active networks, it can handle more than one network (this is dynamic) and the form will resize dynamically depending on content.\nThis will also copy the following information to your clipboard; Computer Name: Device Serial: Device Manufacturer: Device Model: Logon Server: Intune Device ID: AzureAD Device ID: IP Information: Interface Name: WiFi Interface Description: Intel(R) Wi-Fi 6 AX201 160MHz Profile Name: IPv4 Address: 192.168.0.141 IPv6 Address: Interface Name: Microsoft IP-HTTPS Platform Interface Interface Description: Profile Name: IPv4 Address: IPv6 Address: fd59:a9a9:6c55:1000:8c4a:531c:b003:cf56 Information gathered 23/05/2020 22:08:30 The form can be branded and amended to your heart\u0026rsquo;s content. I have uploaded it to GitHub in a public repository if anyone wants to head over and download it or fork it (still not sure what that does; new to this GitHub stuff). I haven’t got around to putting any information on the GitHub page yet, but if you need any help drop me a message from the Contact page or on the GitHub page.\n","image":"https://hugo.euc365.com/images/post/getdeviceinfo/banner_huacdb1a8a65f90a7d35e95159e3b3149e_57105_460x200_fill_box_smart1_3.png","permalink":"https://hugo.euc365.com/getdeviceinfo/","tags":["PowerShell Tools"],"title":"Getting Useful Device Information… Troubleshooting Made Easy"}]