|
<properties
   pageTitle="Collect logs by using Linux Azure Diagnostics | Microsoft Azure"
   description="This article describes how to set up Azure Diagnostics to collect logs from a Service Fabric Linux cluster running in Azure."
   services="service-fabric"
   documentationCenter=".net"
   ms.author="subramar"/>
|
# Collect logs by using Azure Diagnostics
|
> [AZURE.SELECTOR]
- [Windows](service-fabric-diagnostics-how-to-setup-wad.md)
- [Linux](service-fabric-diagnostics-how-to-setup-lad.md)
|
When you're running an Azure Service Fabric cluster, it's a good idea to collect the logs from all the nodes in a central location. Having the logs in a central location makes it easy to analyze and troubleshoot issues, whether they are in your services, your application, or the cluster itself. One way to upload and collect logs is to use the Azure Diagnostics extension, which uploads logs to Azure Storage. You can read the events from storage and place them in a product such as [Elastic Search](service-fabric-diagnostic-how-to-use-elasticsearch.md) or another log-parsing solution.
|
## Log sources that you might want to collect
- **Service Fabric logs**: Emitted from the platform via [LTTng](http://lttng.org) and uploaded to your storage account. Logs can be operational events or runtime events that the platform emits. These logs are stored in the location that the cluster manifest specifies. (To get the storage account details, search for the tag **AzureTableWinFabETWQueryable** and look for **StoreConnectionString**.)
- **Application events**: Emitted from your service's code. You can use any logging solution that writes text-based log files, such as LTTng. For more information, see the LTTng documentation on tracing your application. A minimal sketch of writing application events to a text log file follows this list.
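
For illustration, here is a minimal sketch of a service writing events to a plain-text log file that the Diagnostics extension can later upload. It uses only the Python standard library; the log path and logger name are placeholders, not names that Service Fabric or LAD require.

```python
import logging

# Hypothetical path; use any location that your service account can write to.
# The LAD configuration sketch later in this article refers to the same path.
LOG_PATH = '/var/log/myapp/app.log'

logger = logging.getLogger('myapp')
logger.setLevel(logging.INFO)

handler = logging.FileHandler(LOG_PATH)
handler.setFormatter(
    logging.Formatter('%(asctime)s %(levelname)s %(name)s %(message)s'))
logger.addHandler(handler)

# Each call appends one line to the file; LAD can be configured to ship
# every new line to your storage account (see the next section).
logger.info('Service started')
logger.warning('Request queue length is high: %d', 1200)
```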
|
## Deploy the Diagnostics extension
The first step in collecting logs is to deploy the Diagnostics extension on each of the VMs in the Service Fabric cluster. The Diagnostics extension collects logs on each VM and uploads them to the storage account that you specify. The steps vary based on whether you use the Azure portal or Azure Resource Manager.
|
To deploy the Diagnostics extension to the VMs in the cluster as part of cluster creation, set **Diagnostics** to **On**. After you create the cluster, you can't change this setting by using the portal.
|
Then, configure Linux Azure Diagnostics (LAD) to collect the files and place them into your storage account. This process is explained as scenario 3 ("Upload your own log files") in the article [Using LAD to monitor and diagnose Linux VMs](../virtual-machines/virtual-machines-linux-classic-diagnostic-extension.md). Following this process gets you access to the traces. You can upload the traces to a visualizer of your choice.
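
As a rough sketch, the LAD settings for that scenario might look like the following. The `fileCfg` property and its `file`/`table` fields follow the linked LAD article for the classic extension, and the paths and table name are placeholders; confirm the exact schema against that article and your extension version. The snippet only builds the settings files; you still apply them with the Azure CLI or PowerShell as the article describes.

```python
import json

# Public settings: which log files LAD should watch and which table each
# file's lines should be sent to. Property names are taken from the linked
# LAD article (scenario 3); verify them for your LAD version.
public_settings = {
    "fileCfg": [
        {"file": "/var/log/myapp/app.log", "table": "MyAppEvents"}
    ]
}

# Protected settings: the storage account that receives the entries.
# Replace the placeholders with your own account name and key.
protected_settings = {
    "storageAccountName": "<your-storage-account>",
    "storageAccountKey": "<your-storage-account-key>"
}

with open("lad-public.json", "w") as f:
    json.dump(public_settings, f, indent=2)

with open("lad-protected.json", "w") as f:
    json.dump(protected_settings, f, indent=2)
```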
|
You can also deploy the Diagnostics extension by using Azure Resource Manager. The process is similar for Windows and Linux and is documented for Windows clusters in [How to collect logs with Azure Diagnostics](service-fabric-diagnostics-how-to-setup-wad.md).
|
You can also use Operations Management Suite, as described in [Operations Management Suite Log Analytics with Linux](https://blogs.technet.microsoft.com/hybridcloud/2016/01/28/operations-management-suite-log-analytics-with-linux/).
|
After you finish this configuration, the LAD agent monitors the specified log files. Whenever a new line is appended to one of those files, the agent creates a syslog entry that is sent to the storage account that you specified.
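
Once entries start landing in table storage, you can read them back with any Azure Storage client. The sketch below uses the `azure-storage` Python package and assumes the hypothetical `MyAppEvents` table name from the earlier settings sketch; substitute the table that your configuration actually writes to, and your own account credentials.

```python
from azure.storage.table import TableService  # pip install azure-storage

# Connect to the storage account that the Diagnostics extension uploads to.
table_service = TableService(account_name='<your-storage-account>',
                             account_key='<your-storage-account-key>')

# Pull a small batch of recent entries and print them so you can see exactly
# what the agent wrote before wiring up Elastic Search or another
# log-parsing solution.
entities = table_service.query_entities('MyAppEvents', num_results=20)
for entity in entities:
    print(entity)
```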
|
## Next steps
|
To understand in more detail what events you should examine while troubleshooting issues, see the [LTTng documentation](http://lttng.org/docs) and [Using LAD](../virtual-machines/virtual-machines-linux-classic-diagnostic-extension.md).