
Commit b42b903

Fix linter flow not triggered (microsoft#897)
* testcraft got rebranded to perfecto
* We get 403 from perfecto; they might have blocked bots, as the site is working ...
* Ignore ranorex
* [MegaLinter] Apply linters fixes
* Update mega-linter.yml: remove master
* Fix linter
* Update mega-linter.yml: fix linter
* Update mega-linter.yml: fix linter
* Update mega-linter.yml

Co-authored-by: shiranr <[email protected]>
1 parent c144d7d commit b42b903

File tree

6 files changed: +46 −44 lines changed

.cspell.json

Lines changed: 3 additions & 2 deletions
```diff
@@ -424,9 +424,10 @@
     "zstd",
     "Apdex",
     "Aurélien",
-    "Aviˇzienis",
+    "Avi",
+    "zienis",
     "Customizabile",
-    "Dodd’s",
+    "Dodd's",
     "Géron's"
   ],
   "version": "0.2"
```

.github/workflows/mega-linter.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -7,7 +7,7 @@ on:
   # Trigger mega-linter at every push. Action will also be visible from pull requests to main
   push: # Comment this line to trigger action only on pull-requests (not recommended if you don't pay for GH Actions)
   pull_request:
-    branches: [master, main]
+    branches: [main]
 
 env: # Comment env block if you do not want to apply fixes
   # Apply linter fixes configuration
```
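Putting the hunk in context, the resulting trigger block in `.github/workflows/mega-linter.yml` would look roughly like this (a sketch reconstructed from the hunk; surrounding keys are abbreviated, and indentation is assumed):

```yaml
# Trigger mega-linter at every push. Action will also be visible from pull requests to main
on:
  push: # Comment this line to trigger action only on pull-requests
  pull_request:
    branches: [main]  # "master" removed; only PRs targeting main now trigger the workflow
```

With `master` dropped from the filter, pull requests against the old default branch no longer trigger the linter, which is the behavior the commit title describes.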

.markdown-link-check.json

Lines changed: 2 additions & 1 deletion
```diff
@@ -20,7 +20,8 @@
     {"pattern": "^https://www.researchgate.net/publication/301839557_The_landscape_of_software_failure_cause_models"},
     {"pattern": "^https://www.cmu.edu/iso/governance/guidelines/data-classification.html"},
     {"pattern": "^https://machinelearningmastery.com/how-to-get-baseline-results-and-why-they-matter/"},
-    {"pattern": "^https://www.perfecto.io/"}
+    {"pattern": "^https://www.perfecto.io/"},
+    {"pattern": "^https://www.ranorex.com/free-trial/"}
   ],
   "httpHeaders": [
     {
```
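markdown-link-check skips any link whose URL matches one of the `ignorePatterns` regexes, which is why the 403-returning perfecto.io and ranorex.com URLs are added here. A minimal Python sketch of that matching logic (the patterns are copied verbatim from the config above; `is_ignored` is an illustrative helper, not part of the tool):

```python
import re

# Patterns taken from the updated ignorePatterns list in .markdown-link-check.json.
IGNORE_PATTERNS = [
    r"^https://www.perfecto.io/",
    r"^https://www.ranorex.com/free-trial/",
]

def is_ignored(url: str) -> bool:
    """Return True if the URL matches any ignore pattern, i.e. the link check skips it."""
    return any(re.search(pattern, url) for pattern in IGNORE_PATTERNS)

print(is_ignored("https://www.ranorex.com/free-trial/"))  # True
print(is_ignored("https://example.com/"))                 # False
```

Note that the patterns are anchored only at the start (`^`), so any URL under those prefixes is skipped, not just the exact page.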

docs/automated-testing/e2e-testing/README.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -159,11 +159,11 @@ Katalon is endorsed by Gartner, IT professionals, and a large testing community.
 
 ![BugBug](./images/bugbug-logo-208x65.png)
 
-**BugBug** is an easy way to automate tests for web applications. The tool focuses on simplicity, yet allows you to cover all essential test cases without coding. It's an all-in-one solution - you can easily create tests and use the built-in cloud to run them on schedule or from your CI/CD, without changes to your own infrastructure.
+**BugBug** is an easy way to automate tests for web applications. The tool focuses on simplicity, yet allows you to cover all essential test cases without coding. It's an all-in-one solution - you can easily create tests and use the built-in cloud to run them on schedule or from your CI/CD, without changes to your own infrastructure.
 
-BugBug is an interesting alternative to Selenium because it's actually a completely different technology. It is based on a Chrome extension that allows BugBug to record and run tests faster than old-school frameworks.
+BugBug is an interesting alternative to Selenium because it's actually a completely different technology. It is based on a Chrome extension that allows BugBug to record and run tests faster than old-school frameworks.
 
-The biggest advantage of BugBug is its user-friendliness. Most tests created with BugBug simply work out of the box. This makes it easier for non-technical people to maintain tests - with BugBug you can save money on hiring a QA engineer.
+The biggest advantage of BugBug is its user-friendliness. Most tests created with BugBug simply work out of the box. This makes it easier for non-technical people to maintain tests - with BugBug you can save money on hiring a QA engineer.
 
 [BugBug Website](https://bugbug.io?utm_source=microsoft_github&utm_medium=referral)
```
docs/machine-learning/ml-feasibility-study.md

Lines changed: 36 additions & 36 deletions
```diff
@@ -1,10 +1,10 @@
 # Feasibility Studies
 
 The main goal of feasibility studies is to assess whether it is feasible to solve the problem satisfactorily using ML with the available data. We want to avoid investing too much in the solution before we have:
-* Sufficient evidence that a solution would be the best technical solution given the business case
-* Sufficient evidence that a solution is compatible with the problem context
-* Sufficient evidence that a solution is possible
-* Some vetted direction on what a solution should look like
+* Sufficient evidence that a solution would be the best technical solution given the business case
+* Sufficient evidence that a solution is compatible with the problem context
+* Sufficient evidence that a solution is possible
+* Some vetted direction on what a solution should look like
 This effort ensures quality solutions backed by the appropriate, thorough amount of consideration and evidence.
 
 ## When are feasibility studies useful?
@@ -23,25 +23,25 @@ Collaboration from individuals with diverse skill sets is desired at this stage,
 
 ### Problem definition and desired outcome
 
-* Ensure that the problem is complex enough that coding rules or manual scaling is unrealistic
-* Clear definition of the problem from business and technical perspectives
+* Ensure that the problem is complex enough that coding rules or manual scaling is unrealistic
+* Clear definition of the problem from business and technical perspectives
 
 ### Deep contextual understanding
 
 Confirm that the following questions can be answered based on what was learned during the Discovery Phase of the project. For items that can not be satisfactorily answered, undertake additional investigation to answer.
-* Understanding the people who are using and/or affected by the solution
-* Understanding the contextual forces at play around the problem, including goals, culture, and historical context
-* To accomplish this a researcher will:
-  * Collaborate with customers and colleagues to explore the landscape of people who relate to and may be affected by the problem space being explored (Users, stakeholders, subject matter experts, etc)
-  * Formulate the research question(s) to be addressed
-  * Select and design research to best serve the research question(s)
-  * Identify and select representative research participants across the problem space with whom to conduct the research
-  * Construct a research plan and necessary preparation documents for the selected research method(s)
-  * Conduct research activity with the participants via the selected method(s)
-  * Synthesize, analyze, and interpret research findings
-  * Where relevant, build frameworks, artefacts and processes that help explore the findings and implications of the research across the team
-  * Share what was uncovered and understood, and the implications thereof across the engagement team and relevant stakeholders.
-* If the above research was conducted during the Discovery phase, it should be reviewed, and any substantial knowledge gaps should be identified and filled by following the above process.
+* Understanding the people who are using and/or affected by the solution
+* Understanding the contextual forces at play around the problem, including goals, culture, and historical context
+* To accomplish this a researcher will:
+  * Collaborate with customers and colleagues to explore the landscape of people who relate to and may be affected by the problem space being explored (Users, stakeholders, subject matter experts, etc)
+  * Formulate the research question(s) to be addressed
+  * Select and design research to best serve the research question(s)
+  * Identify and select representative research participants across the problem space with whom to conduct the research
+  * Construct a research plan and necessary preparation documents for the selected research method(s)
+  * Conduct research activity with the participants via the selected method(s)
+  * Synthesize, analyze, and interpret research findings
+  * Where relevant, build frameworks, artefacts and processes that help explore the findings and implications of the research across the team
+  * Share what was uncovered and understood, and the implications thereof across the engagement team and relevant stakeholders.
+* If the above research was conducted during the Discovery phase, it should be reviewed, and any substantial knowledge gaps should be identified and filled by following the above process.
 
 ### Data access
 
@@ -69,13 +69,13 @@ Confirm that the following questions can be answered based on what was learned d
 
 ### Concept ideation and iteration
 
-* Develop value proposition(s) for users and stakeholders based on the contextual understanding developed through the discovery process (e.g. key elements of value, benefits)
-* As relevant, make use of
-  * Co-creation with team
-  * Co-creation with users and stakeholders
-* As relevant, create vignettes, narratives or other materials to communicate the concept
-* Identify the next set of hypotheses or unknowns to be tested (see concept testing)
-* Revisit and iterate on the concept throughout discovery as understanding of the problem space evolves
+* Develop value proposition(s) for users and stakeholders based on the contextual understanding developed through the discovery process (e.g. key elements of value, benefits)
+* As relevant, make use of
+  * Co-creation with team
+  * Co-creation with users and stakeholders
+* As relevant, create vignettes, narratives or other materials to communicate the concept
+* Identify the next set of hypotheses or unknowns to be tested (see concept testing)
+* Revisit and iterate on the concept throughout discovery as understanding of the problem space evolves
 
 ### Exploratory data analysis (EDA)
 
@@ -106,13 +106,13 @@ Confirm that the following questions can be answered based on what was learned d
 
 ### Concept testing
 
-* Where relevant, to test the value proposition, concepts or aspects of the experience
-  * Plan user, stakeholder and expert research
-  * Develop and design necessary research materials
-  * Synthesize and evaluate feedback to incorporate into concept development
-* Continue to iterate and test different elements of the concept as necessary, including testing to best serve RAI goals and guidelines
-* Ensure that the proposed solution and framing are compatible with and acceptable to affected people
-* Ensure that the proposed solution and framing is compatible with existing business goals and context
+* Where relevant, to test the value proposition, concepts or aspects of the experience
+  * Plan user, stakeholder and expert research
+  * Develop and design necessary research materials
+  * Synthesize and evaluate feedback to incorporate into concept development
+* Continue to iterate and test different elements of the concept as necessary, including testing to best serve RAI goals and guidelines
+* Ensure that the proposed solution and framing are compatible with and acceptable to affected people
+* Ensure that the proposed solution and framing is compatible with existing business goals and context
 
 ### Risk assessment
 
@@ -121,9 +121,9 @@ Confirm that the following questions can be answered based on what was learned d
 ### Responsible AI
 
 * Consideration of responsible AI principles
-  * Understanding of users and stakeholders’ contexts, needs and concerns to inform development of RAI
-  * Testing AI concept and experience elements with users and stakeholders
-  * Discussion and feedback from diverse perspectives around any responsible AI concerns
+  * Understanding of users and stakeholders’ contexts, needs and concerns to inform development of RAI
+  * Testing AI concept and experience elements with users and stakeholders
+  * Discussion and feedback from diverse perspectives around any responsible AI concerns
 
 
 ## Output of a feasibility study
```

docs/observability/tools/OpenTelemetry.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -92,7 +92,7 @@ Java OpenTelemetry instrumentation provides another way to integrate with Azure
 
 When configuring this option, the Applications Insights Agent file is added when executing the application. The `applicationinsights.json` configuration file must be also be added as part of the applications artifacts. Pay close attention to the preview section, where the `"openTelemetryApiSupport": true,` property is set to true, enabling the agent to intercept OpenTelemetry telemetry created in the application code pushing it to Azure Monitor.
 
-OpenTelemetry Java Agent instrumentation supports many [libraries and frameworks and application servers](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/docs/supported-libraries.md#supported-libraries-frameworks-application-servers-and-jvms). Application Insights Java Agent [enhances](https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-in-process-agent#auto-instrumentation) this list.
+OpenTelemetry Java Agent instrumentation supports many [libraries and frameworks and application servers](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/docs/supported-libraries.md#supported-libraries-frameworks-application-servers-and-jvms). Application Insights Java Agent [enhances](https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-in-process-agent#auto-instrumentation) this list.
 Therefore, the main difference between running the OpenTelemetry Java Agent vs. the Application Insights Java Agent is demonstrated in the amount of traces getting logged in Azure Monitor. When running with Application Insights Java agent there's more telemetry getting pushed to Azure Monitor. On the other hand, when running the solution using the Application Insights agent mode, it is essential to highlight that nothing gets logged on Jaeger (or any other OpenTelemetry exporter). All traces will be pushed exclusively to Azure Monitor. However, both manual instrumentation done via the OpenTelemetry SDK and all automatic traces, dependencies, performance counters, and metrics being instrumented by the Application Insights agent are sent to Azure Monitor. Although there is a rich amount of additional data automatically instrumented by the Application Insights agent, it can be deduced that it is not necessarily OpenTelemetry compliant. Only the traces logged by the manual instrumentation using the OpenTelemetry SDK are.
 
 #### OpenTelemetry vs Application Insights agents compared
```
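The section above mentions a preview section containing `"openTelemetryApiSupport": true`. A minimal sketch of what that `applicationinsights.json` might look like — only the `preview` property comes from the text above; the `connectionString` key and its placeholder value are illustrative assumptions about the rest of the agent's configuration layout:

```json
{
  "connectionString": "<your-application-insights-connection-string>",
  "preview": {
    "openTelemetryApiSupport": true
  }
}
```

With this flag enabled, the agent intercepts spans created through the OpenTelemetry API in application code and forwards them to Azure Monitor alongside its own auto-collected telemetry.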
