
Commit 565e054

Merge pull request MicrosoftDocs#2287 from Microsoft/FromPublicRepo
From public repo
2 parents 6de4df4 + 777cfa9 commit 565e054

5 files changed, +27 −31 lines

articles/container-service/container-service-kubernetes-walkthrough.md (−8)

@@ -183,11 +183,3 @@ To use pscp from [putty](http://www.chiark.greenend.org.uk/~sgtatham/putty/downl
 1. [Kubernetes Bootcamp](https://katacoda.com/embed/kubernetes-bootcamp/1/) - shows you how to deploy, scale, update, and debug containerized applications.
 2. [Kubernetes Userguide](http://kubernetes.io/docs/user-guide/) - provides information on running programs in an existing Kubernetes cluster.
 3. [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/master/examples) - provides a number of examples on how to run real applications with Kubernetes.
-
-## Kubernetes Production Users
-Here are a sampling of production Kubernetes users from recent talks at the Kubernetes Conference (KubeCon)
-
-* [TicketMaster](https://www.youtube.com/watch?v=PSPNg5AU_II)
-* [Comcast](https://www.youtube.com/watch?v=lmeFkH-rHII)
-* [Buffer](https://www.youtube.com/watch?v=EC_ZRLsw58M)
-* [Concur](https://www.youtube.com/watch?v=eQ9R8prQUHU)

articles/data-factory/data-factory-monitor-manage-pipelines.md (+6 −7)

@@ -219,29 +219,29 @@ If the activity run fails in a pipeline, the dataset produced by the pipeline is
 1. Launch **Azure PowerShell**.
 2. Run **Get-AzureRmDataFactorySlice** command to see the slices and their statuses. You should see a slice with the status: **Failed**.

-Get-AzureRmDataFactorySlice [-ResourceGroupName] <String> [-DataFactoryName] <String> [-TableName] <String> [-StartDateTime] <DateTime> [[-EndDateTime] <DateTime> ] [-Profile <AzureProfile> ] [ <CommonParameters>]
+Get-AzureRmDataFactorySlice [-ResourceGroupName] <String> [-DataFactoryName] <String> [-DatasetName] <String> [-StartDateTime] <DateTime> [[-EndDateTime] <DateTime> ] [-Profile <AzureProfile> ] [ <CommonParameters>]

 For example:

-Get-AzureRmDataFactorySlice -ResourceGroupName ADF -DataFactoryName LogProcessingFactory -TableName EnrichedGameEventsTable -StartDateTime 2014-05-04 20:00:00
+Get-AzureRmDataFactorySlice -ResourceGroupName ADF -DataFactoryName LogProcessingFactory -DatasetName EnrichedGameEventsTable -StartDateTime 2014-05-04 20:00:00

 Replace **StartDateTime** with the StartDateTime value you specified for the Set-AzureRmDataFactoryPipelineActivePeriod.
 3. Now, run the **Get-AzureRmDataFactoryRun** cmdlet to get details about activity run for the slice.

-Get-AzureRmDataFactoryRun [-ResourceGroupName] <String> [-DataFactoryName] <String> [-TableName] <String> [-StartDateTime]
+Get-AzureRmDataFactoryRun [-ResourceGroupName] <String> [-DataFactoryName] <String> [-DatasetName] <String> [-StartDateTime]
 <DateTime> [-Profile <AzureProfile> ] [ <CommonParameters>]

 For example:

-Get-AzureRmDataFactoryRun -ResourceGroupName ADF -DataFactoryName LogProcessingFactory -TableName EnrichedGameEventsTable -StartDateTime "5/5/2014 12:00:00 AM"
+Get-AzureRmDataFactoryRun -ResourceGroupName ADF -DataFactoryName LogProcessingFactory -DatasetName EnrichedGameEventsTable -StartDateTime "5/5/2014 12:00:00 AM"

 The value of StartDateTime is the start time for the error/problem slice you noted from the previous step. The date-time should be enclosed in double quotes.
 4. You should see the output with details about the error (similar to the following):

 Id : 841b77c9-d56c-48d1-99a3-8c16c3e77d39
 ResourceGroupName : ADF
 DataFactoryName : LogProcessingFactory3
-TableName : EnrichedGameEventsTable
+DatasetName : EnrichedGameEventsTable
 ProcessingStartTime : 10/10/2014 3:04:52 AM
 ProcessingEndTime : 10/10/2014 3:06:49 AM
 PercentComplete : 0
@@ -279,7 +279,7 @@ The following example sets the status of all slices for the table 'DAWikiAggrega

 The UpdateType is set to UpstreamInPipeline, which means that statuses of each slice for the table and all the dependent (upstream) tables are set to "Waiting." Other possible value for this parameter is "Individual."

-Set-AzureRmDataFactorySliceStatus -ResourceGroupName ADF -DataFactoryName WikiADF -TableName DAWikiAggregatedData -Status Waiting -UpdateType UpstreamInPipeline -StartDateTime 2014-05-21T16:00:00 -EndDateTime 2014-05-21T20:00:00
+Set-AzureRmDataFactorySliceStatus -ResourceGroupName ADF -DataFactoryName WikiADF -DatasetName DAWikiAggregatedData -Status Waiting -UpdateType UpstreamInPipeline -StartDateTime 2014-05-21T16:00:00 -EndDateTime 2014-05-21T20:00:00


 ## Create alerts
@@ -615,4 +615,3 @@ You can move a data factory to a different resource group or a different subscri
 You can also move any related resources (such as alerts associated with the data factory) along with the data factory.

 ![Move Resources dialog box](./media/data-factory-monitor-manage-pipelines/MoveResources.png)
-
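The hunks above rename the `-TableName` parameter to `-DatasetName` across the monitoring cmdlets. For reference, a minimal end-to-end sketch of the troubleshooting flow they describe, using the renamed parameter. The resource group, factory, and dataset names are the article's own example values; the slice time window in step 3 and the `Status` property filter are illustrative assumptions, not something this diff specifies:

```powershell
# Sketch of the slice-troubleshooting flow with the renamed -DatasetName parameter.
# Example values (ADF, LogProcessingFactory, EnrichedGameEventsTable) come from the article.

# 1. List slices for the dataset and pick out the failed ones.
#    (The Status property name is assumed from the article's "status: Failed" description.)
$slices = Get-AzureRmDataFactorySlice -ResourceGroupName ADF `
    -DataFactoryName LogProcessingFactory `
    -DatasetName EnrichedGameEventsTable `
    -StartDateTime "2014-05-04 20:00:00"
$failedSlices = $slices | Where-Object { $_.Status -eq "Failed" }

# 2. Inspect the activity run for a failed slice, using that slice's start time.
Get-AzureRmDataFactoryRun -ResourceGroupName ADF `
    -DataFactoryName LogProcessingFactory `
    -DatasetName EnrichedGameEventsTable `
    -StartDateTime "5/5/2014 12:00:00 AM"

# 3. After fixing the root cause, set the slice (and its upstream slices) back to
#    Waiting so Data Factory reruns them. The window below is illustrative; use your
#    dataset's own availability window.
Set-AzureRmDataFactorySliceStatus -ResourceGroupName ADF `
    -DataFactoryName LogProcessingFactory `
    -DatasetName EnrichedGameEventsTable `
    -Status Waiting -UpdateType UpstreamInPipeline `
    -StartDateTime "5/5/2014 12:00:00 AM" -EndDateTime "5/5/2014 1:00:00 AM"
```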

articles/iot-suite/iot-suite-what-are-preconfigured-solutions.md (+1 −1)

@@ -86,7 +86,7 @@ In this preconfigured solution, the ASA jobs form part of to the **IoT solution
 ## Event processor
 In this preconfigured solution, the event processor forms part of the **IoT solution back end** in a typical [IoT solution architecture][lnk-what-is-azure-iot].

-The **DeviceInfo** and **Rules** ASA jobs send their output to Event hubs for delivery to other back-end services. The solution uses an [EventPocessorHost][lnk-event-processor] instance, running in a [WebJob][lnk-web-job], to read the messages from these Event hubs. The **EventProcessorHost** uses the **DeviceInfo** data to update the device data in the DocumentDB database, and uses the **Rules** data to invoke the Logic app and update the alerts display in the solution portal.
+The **DeviceInfo** and **Rules** ASA jobs send their output to Event hubs for delivery to other back-end services. The solution uses an [EventProcessorHost][lnk-event-processor] instance, running in a [WebJob][lnk-web-job], to read the messages from these Event hubs. The **EventProcessorHost** uses the **DeviceInfo** data to update the device data in the DocumentDB database, and uses the **Rules** data to invoke the Logic app and update the alerts display in the solution portal.

 ## Device identity registry and DocumentDB
 Every IoT hub includes a [device identity registry][lnk-identity-registry] that stores device keys. IoT Hub uses this information authenticate devices - a device must be registered and have a valid key before it can connect to the hub.

articles/security/azure-security-disk-encryption.md (+19 −14)

@@ -409,11 +409,14 @@ Disk encryption can be enabled on customer encrypted VHD using the PS cmdlets pu
 Follow the steps below to enable disk encryption for this scenario using CLI commands:

 1. Set access policies on Key Vault:
-* Set ‘EnabledForDiskEncryption’ flag: `azure keyvault set-policy --vault-name <keyVaultName> --enabled-for-disk-encryption true`
-* Set permissions to Azure AD app to write secrets to KeyVault: `azure keyvault set-policy --vault-name <keyVaultName> --spn <aadClientID> --perms-to-keys [\"all\"] --perms-to-secrets [\"all\"]`
+* Set ‘EnabledForDiskEncryption’ flag:
+`azure keyvault set-policy --vault-name <keyVaultName> --enabled-for-disk-encryption true`
+* Set permissions to Azure AD app to write secrets to KeyVault:
+`azure keyvault set-policy --vault-name <keyVaultName> --spn <aadClientID> --perms-to-keys '["wrapKey"]' --perms-to-secrets '["set"]'`
 2. To enable encryption on an existing/running VM, type:
-*azure vm enable-disk-encryption --resource-group <resourceGroupName> --name <vmName> --aad-client-id <aadClientId> --aad-client-secret <aadClientSecret> --disk-encryption-key-vault-url <keyVaultURL> --disk-encryption-key-vault-id <keyVaultResourceId>*
-3. Get encryption status: *“azure vm show-disk-encryption-status --resource-group <resourceGroupName> --name <vmName> --json”*
+`azure vm enable-disk-encryption --resource-group <resourceGroupName> --name <vmName> --aad-client-id <aadClientId> --aad-client-secret <aadClientSecret> --disk-encryption-key-vault-url <keyVaultURL> --disk-encryption-key-vault-id <keyVaultResourceId> --volume-type [All|OS|Data]`
+3. Get encryption status:
+`azure vm show-disk-encryption-status --resource-group <resourceGroupName> --name <vmName> --json`
 4. To enable encryption on a new VM from customer encrypted VHD, use the below parameters with “azure vm create” command:
 * disk-encryption-key-vault-id <disk-encryption-key-vault-id>
 * disk-encryption-key-url <disk-encryption-key-url>
@@ -447,11 +450,10 @@ Refer to the **Explore Azure disk encryption with Azure PowerShell** blog post [
 Follow the steps below to enable encryption on existing/running IaaS Windows VM in Azure using CLI commands:

 1. Set access policies on Key Vault:
-* Set ‘EnabledForDiskEncryption’ flag: “azure keyvault set-policy --vault-name <keyVaultName> --enabled-for-disk-encryption true”
-* Set permissions to Azure AD app to write secrets to KeyVault: “azure keyvault set-policy --vault-name <keyVaultName> --spn <aadClientID> --perms-to-keys [\"all\"] --perms-to-secrets [\"all\"]
-2. To enable encryption on an existing/running VM, type:
-*azure vm enable-disk-encryption --resource-group <resourceGroupName> --name <vmName> --aad-client-id <aadClientId> --aad-client-secret <aadClientSecret> --disk-encryption-key-vault-url <keyVaultURL> --disk-encryption-key-vault-id <keyVaultResourceId>*
-3. Get encryption status: *“azure vm show-disk-encryption-status --resource-group <resourceGroupName> --name <vmName> --json”*
+* Set ‘EnabledForDiskEncryption’ flag: `azure keyvault set-policy --vault-name <keyVaultName> --enabled-for-disk-encryption true`
+* Set permissions to Azure AD app to write secrets to KeyVault: `azure keyvault set-policy --vault-name <keyVaultName> --spn <aadClientID> --perms-to-keys '["wrapKey"]' --perms-to-secrets '["set"]'`
+2. To enable encryption on an existing/running VM: `azure vm enable-disk-encryption --resource-group <resourceGroupName> --name <vmName> --aad-client-id <aadClientId> --aad-client-secret <aadClientSecret> --disk-encryption-key-vault-url <keyVaultURL> --disk-encryption-key-vault-id <keyVaultResourceId> --volume-type [All|OS|Data]`
+3. Get encryption status: `azure vm show-disk-encryption-status --resource-group <resourceGroupName> --name <vmName> --json`
 4. To enable encryption on a new VM from customer encrypted VHD, use the below parameters with “azure vm create” command:
 * disk-encryption-key-vault-id <disk-encryption-key-vault-id>
 * disk-encryption-key-url <disk-encryption-key-url>
@@ -480,11 +482,14 @@ The Resource Manager template parameters details for existing/running VM scenari
 Disk encryption can be enabled on customer encrypted VHD using the CLI command installed from [here](../xplat-cli-install.md). Follow the steps below to enable encryption on existing/running IaaS Linux VM in Azure using CLI commands:

 1. Set access policies on Key Vault:
-* Set ‘EnabledForDiskEncryption’ flag: “azure keyvault set-policy --vault-name <keyVaultName> --enabled-for-disk-encryption true”
-* Set permissions to Azure AD app to write secrets to KeyVault: “azure keyvault set-policy --vault-name <keyVaultName> --spn <aadClientID> --perms-to-keys [\"all\"] --perms-to-secrets [\"all\"]
-2. To enable encryption on an existing/running VM, type:
-*azure vm enable-disk-encryption --resource-group <resourceGroupName> --name <vmName> --aad-client-id <aadClientId> --aad-client-secret <aadClientSecret> --disk-encryption-key-vault-url <keyVaultURL> --disk-encryption-key-vault-id <keyVaultResourceId>*
-3. Get encryption status: “azure vm show-disk-encryption-status --resource-group <resourceGroupName> --name <vmName> --json”
+* Set ‘EnabledForDiskEncryption’ flag:
+`azure keyvault set-policy --vault-name <keyVaultName> --enabled-for-disk-encryption true`
+* Set permissions to Azure AD app to write secrets to KeyVault:
+`azure keyvault set-policy --vault-name <keyVaultName> --spn <aadClientID> --perms-to-keys '["wrapKey"]' --perms-to-secrets '["set"]'`
+2. To enable encryption on an existing/running VM:
+`azure vm enable-disk-encryption --resource-group <resourceGroupName> --name <vmName> --aad-client-id <aadClientId> --aad-client-secret <aadClientSecret> --disk-encryption-key-vault-url <keyVaultURL> --disk-encryption-key-vault-id <keyVaultResourceId> --volume-type [All|OS|Data]`
+3. Get encryption status:
+`azure vm show-disk-encryption-status --resource-group <resourceGroupName> --name <vmName> --json`
 4. To enable encryption on a new VM from customer encrypted VHD, use the below parameters with “azure vm create” command.
 * *disk-encryption-key-vault-id <disk-encryption-key-vault-id>*
 * *disk-encryption-key-url <disk-encryption-key-url>*

articles/virtual-machines/virtual-machines-windows-reset-local-password-without-agent.md (+1 −1)

@@ -122,7 +122,7 @@ Always try to reset a password using the [Azure portal or Azure PowerShell](virt
 ![Copy disk URI](./media/virtual-machines-windows-reset-local-password-without-guest-agent/copy_source_vhd_uri.png)
 9. Create a VM from the source VM’s OS disk:

-* Use [this Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/201-vm-from-specialized-vhd) to create a VM from a specialized VHD. Click the `Deploy to Azure` button to open the Azure portal with the templated details populated for you.
+* Use [this Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/201-vm-specialized-vhd) to create a VM from a specialized VHD. Click the `Deploy to Azure` button to open the Azure portal with the templated details populated for you.
 * If you want to retain all the previous settings for the VM, select *Edit template* to provide your existing VNet, subnet, network adapter, or public IP.
 * In the `OSDISKVHDURI` parameter text box, paste the URI of your source VHD obtain in the preceding step:
128128
