**docs/automated-testing/e2e-testing/README.md** (+3 -3)

Katalon is endorsed by Gartner, IT professionals, and a large testing community.

**BugBug** is an easy way to automate tests for web applications. The tool focuses on simplicity, yet allows you to cover all essential test cases without coding. It's an all-in-one solution - you can easily create tests and use the built-in cloud to run them on a schedule or from your CI/CD, without changes to your own infrastructure.

BugBug is an interesting alternative to Selenium because it's actually a completely different technology. It is based on a Chrome extension that allows BugBug to record and run tests faster than old-school frameworks.

The biggest advantage of BugBug is its user-friendliness. Most tests created with BugBug simply work out of the box. This makes it easier for non-technical people to maintain tests - with BugBug you can save money on hiring a QA engineer.
**docs/machine-learning/ml-feasibility-study.md** (+36 -36)

# Feasibility Studies

The main goal of feasibility studies is to assess whether it is feasible to solve the problem satisfactorily using ML with the available data. We want to avoid investing too much in the solution before we have:

* Sufficient evidence that a solution would be the best technical solution given the business case
* Sufficient evidence that a solution is compatible with the problem context
* Sufficient evidence that a solution is possible
* Some vetted direction on what a solution should look like

This effort ensures quality solutions backed by the appropriate, thorough amount of consideration and evidence.

## When are feasibility studies useful?

…

### Problem definition and desired outcome

* Ensure that the problem is complex enough that coding rules or manual scaling is unrealistic
* Clear definition of the problem from business and technical perspectives

### Deep contextual understanding

Confirm that the following questions can be answered based on what was learned during the Discovery Phase of the project. For items that cannot be satisfactorily answered, undertake additional investigation to answer.

* Understanding the people who are using and/or affected by the solution
* Understanding the contextual forces at play around the problem, including goals, culture, and historical context
* To accomplish this, a researcher will:
  * Collaborate with customers and colleagues to explore the landscape of people who relate to and may be affected by the problem space being explored (users, stakeholders, subject matter experts, etc.)
  * Formulate the research question(s) to be addressed
  * Select and design research to best serve the research question(s)
  * Identify and select representative research participants across the problem space with whom to conduct the research
  * Construct a research plan and necessary preparation documents for the selected research method(s)
  * Conduct research activity with the participants via the selected method(s)
  * Synthesize, analyze, and interpret research findings
  * Where relevant, build frameworks, artefacts and processes that help explore the findings and implications of the research across the team
  * Share what was uncovered and understood, and the implications thereof, across the engagement team and relevant stakeholders
* If the above research was conducted during the Discovery phase, it should be reviewed, and any substantial knowledge gaps should be identified and filled by following the above process.

### Data access

…

### Concept ideation and iteration

* Develop value proposition(s) for users and stakeholders based on the contextual understanding developed through the discovery process (e.g. key elements of value, benefits)
* As relevant, make use of:
  * Co-creation with the team
  * Co-creation with users and stakeholders
* As relevant, create vignettes, narratives or other materials to communicate the concept
* Identify the next set of hypotheses or unknowns to be tested (see concept testing)
* Revisit and iterate on the concept throughout discovery as understanding of the problem space evolves

### Exploratory data analysis (EDA)

…

### Concept testing

* Where relevant, to test the value proposition, concepts or aspects of the experience:
  * Plan user, stakeholder and expert research
  * Develop and design necessary research materials
  * Synthesize and evaluate feedback to incorporate into concept development
* Continue to iterate and test different elements of the concept as necessary, including testing to best serve RAI goals and guidelines
* Ensure that the proposed solution and framing are compatible with and acceptable to affected people
* Ensure that the proposed solution and framing are compatible with existing business goals and context

### Risk assessment

…

### Responsible AI

* Consideration of responsible AI principles
* Understanding of users' and stakeholders' contexts, needs and concerns to inform development of RAI
* Testing AI concept and experience elements with users and stakeholders
* Discussion and feedback from diverse perspectives around any responsible AI concerns
**docs/observability/tools/OpenTelemetry.md** (+1 -1)

When configuring this option, the Application Insights agent file is added when executing the application. The `applicationinsights.json` configuration file must also be added as part of the application's artifacts. Pay close attention to the preview section, where the `"openTelemetryApiSupport": true` property is set, enabling the agent to intercept OpenTelemetry telemetry created in the application code and push it to Azure Monitor.
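As a rough sketch, a minimal `applicationinsights.json` enabling this preview flag might look like the following (the connection string value here is a placeholder; substitute your own):

```json
{
  "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
  "preview": {
    "openTelemetryApiSupport": true
  }
}
```

The agent itself is typically attached at JVM startup, e.g. `java -javaagent:applicationinsights-agent.jar -jar app.jar`; by default the agent looks for the JSON configuration file in the same directory as the agent jar.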
OpenTelemetry Java Agent instrumentation supports many [libraries and frameworks and application servers](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/docs/supported-libraries.md#supported-libraries-frameworks-application-servers-and-jvms). Application Insights Java Agent [enhances](https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-in-process-agent#auto-instrumentation) this list.
Therefore, the main difference between running the OpenTelemetry Java agent and the Application Insights Java agent is the amount of telemetry logged in Azure Monitor: the Application Insights Java agent pushes more telemetry to Azure Monitor. Note, however, that when running in Application Insights agent mode, nothing gets logged to Jaeger (or any other OpenTelemetry exporter); all traces are pushed exclusively to Azure Monitor. This includes both the manual instrumentation done via the OpenTelemetry SDK and all the automatic traces, dependencies, performance counters, and metrics collected by the Application Insights agent. Although the agent automatically instruments a rich amount of additional data, that data is not necessarily OpenTelemetry compliant; only the traces logged via manual OpenTelemetry SDK instrumentation are.
#### OpenTelemetry vs Application Insights agents compared