\begin{document}
\begin{multicols}{3}
\noindent\underline{\textbf{Week 1}}\\
\textbf{Software testing}: process of executing program/system with intent of finding errors\\
\textbf{Fault}: incorrect portions of code (can be missing as well as incorrect)\\
\includegraphics[width=\linewidth]{44.pdf}\\
\includegraphics[width=\linewidth]{45.pdf}\\
\textbf{Coverage types}: statement, branch, path (infinite if loop exists), each strictly subsumes those before it\\
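A minimal sketch of the statement-vs-branch distinction, using a hypothetical function: one test executes every statement, but branch coverage additionally needs a test that skips the if-body.

```python
# Hypothetical function with one decision (two branches).
def f(x):
    y = 0
    if x > 0:        # true branch: if-body runs; false branch: it is skipped
        y = x * 2
    return y

# f(3) alone gives 100% statement coverage (every line executes),
# but only 50% branch coverage; f(-1) covers the false branch.
assert f(3) == 6
assert f(-1) == 0
```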
\vfill\null\columnbreak\noindent\underline{\textbf{Week 2}}\\
\textbf{Test oracle}: expected output of software for given input, part of test case\\
\textbf{Test driver}: software framework that can load collection of test cases or test suite\\
\textbf{Test suite}: collection of test cases\\
\textbf{MC/DC coverage}: each entry \& exit point invoked, each decision takes every possible outcome, each condition in a decision takes every possible outcome, each condition in a decision is shown to independently affect outcome of decision, independence of condition is shown by proving that only one condition changes at a time\\
\includegraphics[width=\linewidth]{148.pdf}\\
\includegraphics[width=\linewidth]{152.pdf}\\
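The independence requirement above can be checked mechanically. A sketch for a hypothetical three-condition decision: the four tests form an MC/DC set because for each condition there is a pair of tests differing only in that condition, with different decision outcomes.

```python
# Hypothetical decision under test.
def decision(a, b, c):
    return a and (b or c)

# n+1 = 4 tests suffice for MC/DC of this 3-condition decision.
tests = [(True, True, False), (False, True, False),
         (True, False, False), (True, False, True)]

for i in range(3):  # for each condition position...
    # ...some pair of tests toggles only condition i and flips the outcome
    shown = any(
        t1[i] != t2[i]
        and all(t1[j] == t2[j] for j in range(3) if j != i)
        and decision(*t1) != decision(*t2)
        for t1 in tests for t2 in tests)
    assert shown
```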
\vfill\null\columnbreak\noindent\underline{\textbf{Week 4}}\\
\textbf{Dataflow Coverage}: considers how data gets accessed \& modified in system \& how it can get corrupted\\
\textbf{Common access-related bugs}: using undefined/uninitialised variable, deallocating/reinitialising variable before constructed/initialised/used, deleting collection object leaving members inaccessible\\
\textbf{Variable definition}: defined whenever value modified (LHS of assignment, input statement, call-by-reference)\\
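A tiny hypothetical snippet annotated with the definitions, computation uses (c-uses) and predicate uses (p-uses) that dataflow coverage counts:

```python
def classify(x):           # def of x (parameter binding)
    count = 0              # def of count
    if x > 0:              # p-use of x (variable used in a predicate)
        count = count + 1  # c-use of count on RHS, then new def of count
    return count           # c-use of count
```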
\textbf{All-C-Uses for above}: $dcu(x,1) + dcu(y,1) + dcu(y,6) + dcu(z,1) + dcu(z,4) + dcu(z,5) + dcu(count,1) + dcu(count,6) = 2 + 2 + 2 + 3 + 3 + 3 + 1 + 1 = 17$\\
\textbf{All-P-Uses for above}: $dpu(x,1) + dpu(y,1) + dpu(y,6) + dpu(z,1) + dpu(z,4) + dpu(z,5) + dpu(count,1) + dpu(count,6) = 2 + 2 + 2 + 0 + 0 + 0 + 2 + 2 = 10$ (note this includes using the initial count definition even though it will always be redefined (-1) before the comparison)\\
\includegraphics[width=\linewidth]{203.pdf}\\
\vfill\null\columnbreak\noindent\underline{\textbf{Week 5}}\\
\textbf{Program mutation}: create artificial bugs by injecting changes to statements of programs, simulate subtle bugs in real programs\\
\textbf{Mutation testing}: software testing technique based on program mutation, can be used to evaluate test effectiveness \& enhance test suite, can be stronger than control/data-flow coverage, extremely costly since need to run whole test suite against each mutant\\
\textbf{Mutation testing steps}: applies artificial changes based on mutation operators to generate mutants (each mutant with only one artificial bug), run test suite against each mutant (if any test fails mutant killed, else survives), compute mutation score\\
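A minimal sketch of these steps with one hypothetical mutant, produced by an arithmetic-operator-replacement operator (`*` becomes `+`):

```python
# Original code under test.
def area(w, h):
    return w * h

# Mutant: single injected artificial bug (* replaced by +).
def area_mutant(w, h):
    return w + h

# Run the suite against the mutant; note (2, 2) alone could NOT
# kill it, since 2*2 == 2+2 -- the second test is what kills it.
tests = [(2, 2), (2, 3)]
killed = any(area(w, h) != area_mutant(w, h) for w, h in tests)
score = (1 if killed else 0) / 1   # killed mutants / total mutants
```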
\textbf{Symbolic execution/evaluation}: analyse program to determine what inputs cause each part of program to execute, execute programs with symbols (track symbolic state rather than concrete input; executing one path actually simulates many test inputs, since it considers all inputs that can exercise the same path)\\
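A toy illustration of the idea: the two paths of a hypothetical function have path conditions `x > 10` and `not (x > 10)`; solving a path condition (here by brute force over a small domain, where a real tool uses a constraint solver) yields a concrete input that exercises that path.

```python
def f(x):
    if x > 10:
        return x - 10   # path 1, path condition: x > 10
    return x            # path 2, path condition: not (x > 10)

path_conditions = [lambda x: x > 10, lambda x: not (x > 10)]
# "solve" each condition: first value in a small domain satisfying it
inputs = [next(v for v in range(-100, 100) if pc(v))
          for pc in path_conditions]
```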
\textbf{Problems with symbolic execution}:\\
\textit{Path explosion}: $2^n$ paths for $n$ branches, infinite paths for unbounded loops, calculating constraints for all paths infeasible for real software\\
\textit{Constraint too complex}: especially for large programs, constraint solving is NP-complete\\
\textbf{Input sub-domain}: set of inputs satisfying path condition\\
\textbf{Searching input to execute path}: equivalent to solving associated path condition\\
\includegraphics[width=\linewidth]{242.pdf}\\
\vfill\null\columnbreak\noindent\underline{\textbf{Week 6}}\\
\textbf{Random testing}: random number generator (monkeys) to generate test cases, also called fuzz testing or monkey testing, selects tests from entire input domain (set of all possible inputs) randomly \& independently, no guidance towards failure-causing inputs\\
\textbf{Adaptive Random Testing}: achieve even spread of test cases\\
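A sketch of the even-spread idea for a hypothetical one-dimensional input domain: generate several random candidates, then execute the one farthest from all already-executed tests.

```python
import random

def art_pick(executed, lo=0.0, hi=100.0, k=10):
    """Pick the candidate maximising its distance to the nearest executed test."""
    candidates = [random.uniform(lo, hi) for _ in range(k)]
    return max(candidates,
               key=lambda c: min(abs(c - e) for e in executed))

random.seed(0)            # reproducible sketch
executed = [50.0]         # first test chosen purely at random
for _ in range(5):
    executed.append(art_pick(executed))
```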
\includegraphics[width=\linewidth]{259.pdf}\\
\textit{Genetic algorithm}: simulate process of evolution, start with random points, select number of best points, combine \& mutate points until no more improvements can be made\\
\textbf{Transform testing to search}: list of random test cases as start point, each test case is point in input domain, use various metaheuristic search algorithms to find test cases, measure how well we have solved the problem (use simple fitness function: how far already-covered elements are from target code elements, try to make it 0)\\
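A minimal sketch of such a fitness function, assuming a hypothetical target branch `x == 42`: the branch distance `|x - 42|` is driven to 0 by simple hill climbing from a random start point.

```python
def fitness(x):
    # branch distance to the target branch condition x == 42;
    # 0 means the target code element is covered
    return abs(x - 42)

def hill_climb(x):
    # move to whichever neighbour improves fitness, until fitness is 0
    while fitness(x) > 0:
        x = x + 1 if fitness(x + 1) < fitness(x) else x - 1
    return x
```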
\includegraphics[width=\linewidth]{293.pdf}\\
\vfill\null\columnbreak\noindent\underline{\textbf{Week 7}}\\
\textbf{Combinatorial Testing}: instead of all possible combinations generate subset to satisfy some well-defined combination strategies, not every variable contributes to every fault, often fault caused by interactions among few variables, can dramatically reduce number of combinations to be covered but remains very effective in terms of fault detection\\
\textbf{t-way Interaction}: fault triggered by certain combination of $t$ input values, simple fault is $t=1$, pairwise is $t=2$\\
\textbf{Best size for t}: 70\% of failures detected by $t=2$, max for fault triggering was $t=6$ for certain interactions (medical devices \& NASA distributed database $t=4$, medical 98\% $t=2$, web server \& browser actually 6)\\
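A sketch of greedy pairwise ($t=2$) test generation for three hypothetical parameters with three values each: exhaustive testing needs $3^3 = 27$ combinations, while a pairwise suite covering every value pair is much smaller.

```python
from itertools import combinations, product

params = [['a1', 'a2', 'a3'], ['b1', 'b2', 'b3'], ['c1', 'c2', 'c3']]

# every (parameter, value) pair that must be covered at least once
all_pairs = {((i, v1), (j, v2))
             for i, j in combinations(range(3), 2)
             for v1 in params[i] for v2 in params[j]}

def pairs_of(test):
    return {((i, test[i]), (j, test[j]))
            for i, j in combinations(range(3), 2)}

# greedy strategy: repeatedly add the test covering most uncovered pairs
suite, uncovered = [], set(all_pairs)
while uncovered:
    best = max(product(*params),
               key=lambda t: len(pairs_of(t) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)
```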
\textit{Usability}: concerned mainly with use of the software, assess user friendliness \& suitability by gathering info about how users interact with site, study what user actually does\\
\textbf{Stress testing tool report}: number of requests, transactions, KBps, round trip time (from user making request to receiving result), number of concurrent connections, performance degradation, types \& numbers of visitors to site, CPU \& memory use of app server\\
\textbf{Top 2 Web App Security Risks}:\\
\textit{Injection}: SQL, OS, LDAP; occurs when untrusted data sent to interpreter as part of command/query\\
\textit{Cross Site Scripting}: occurs whenever app takes untrusted data \& sends to web browser without proper validation \& escaping\\
\textbf{Injection Protection}: validate input (careful with special characters, whitelist, validate length, type, syntax), avoid use of interpreter (use stored procedures), otherwise use safe APIs (strongly typed parameterised queries, such as PreparedStatement), use Object Relational Mapper\\
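A sketch of the parameterised-query point using the stdlib `sqlite3` (table and data are hypothetical): the placeholder treats the malicious string as data, not SQL, while naive string concatenation lets the payload rewrite the query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

payload = "alice' OR '1'='1"

# safe: parameterised query, payload compared as a literal string
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)).fetchall()
# rows is empty: no user is literally named "alice' OR '1'='1"

# vulnerable: string concatenation, the OR clause matches every row
leaked = conn.execute(
    "SELECT * FROM users WHERE name = '" + payload + "'").fetchall()
```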
\textbf{XSS Protection}: appropriate encoding of all output data (HTML/XML depending on output mechanism, encode all characters other than very limited subset, specify character encoding)\\
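A sketch of output encoding with the stdlib: `html.escape` encodes the characters that would otherwise let untrusted data execute as script in the browser.

```python
import html

untrusted = '<script>alert("xss")</script>'
# encode <, >, &, and quotes so the browser renders text, not markup
safe = html.escape(untrusted, quote=True)
```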
\textbf{Usability Testing Steps}: identify website purpose, identify intended users, define tests \& conduct usability testing, analyze acquired info\\
\includegraphics[width=\linewidth]{340.pdf}\\
\textbf{Compatibility Testing}: ensures product functionality \& reliability on supported browsers \& platforms that exist on customer computer\\
\vfill\null\columnbreak\noindent\underline{\textbf{Week 8}}\\
\textbf{Test management}: manage test plans \& cases, track requirements \& defects, execute tests, measure progress\\
\textbf{Test Report Contents}:\\
\textit{Test objective}: identifying objectives of testing, should be planned so all requirements individually tested, state exit criteria\\
\textbf{Mean time to failure}: mean of probability density, expected value of $T$, average lifetime of system, $E(T) = \int_0^\infty t \, f(t)\,dt = \int_0^\infty R(t)\,dt$, for exponential is $\frac{1}{\lambda}$\\
\textbf{Mean time between failures}: $MTTF + MTTR$ (mean time to repair)\\
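A quick numeric check of $E(T) = \int_0^\infty R(t)\,dt = \frac{1}{\lambda}$ for the exponential case $R(t) = e^{-\lambda t}$ (the value $\lambda = 0.5$ is chosen arbitrarily for illustration):

```python
import math

lam, dt, T = 0.5, 0.001, 50.0   # truncate the integral at T; tail is negligible
# left Riemann sum of R(t) = exp(-lam * t) from 0 to T
mttf = sum(math.exp(-lam * i * dt) * dt for i in range(int(T / dt)))
# mttf is close to 1/lam = 2.0
```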
|
192 | 191 | \textbf{Software reliability tools tasks}: collecting failure \& test time info, calculating estimates of model parameters using this onfo, testing to fit model against collected info, selecting model to make predictions of remaining faults, time to test, apply model\\
|
193 |
| - \underline{\textbf{Week 9}}\\ |
| 192 | + \vfill\null\columnbreak\noindent\underline{\textbf{Week 9}}\\ |
\textbf{Software reviews}: quality improvement processes for written material, by detecting defects early \& preventing leakage downstream the higher cost of later detection \& rework is eliminated\\
\textbf{Software products that can be reviewed}: requirements specifications, design descriptions, source code (code review), release notes\\
\textbf{Code review types}: ad-hoc review, pass-around, walkthrough, group review, formal inspection\\
\textbf{Static Analysis}: analyse program without executing it, doesn't depend on test cases, generally doesn't know what the software is supposed to do, looks for bug patterns, no replacement for testing, many defects can't be found with static analysis\\
\textbf{Patterns to be checked}: bad practice, correctness, performance, dodgy code, vulnerability to malicious code\\
\textbf{Pattern examples}: equals method should not assume type of object argument, collection should not contain itself ($!s.contains(s)$), should not call $String.toString()$\\
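A sketch of pattern-based static analysis using the stdlib `ast` module: flag comparisons of a name with itself (e.g. `x == x`, a classic "dodgy code" pattern) without ever running the program.

```python
import ast

def self_comparisons(source):
    """Return line numbers where a name is compared against itself."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Compare)
                and isinstance(node.left, ast.Name)
                and len(node.comparators) == 1
                and isinstance(node.comparators[0], ast.Name)
                and node.left.id == node.comparators[0].id):
            findings.append(node.lineno)
    return findings
```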
\vfill\null\columnbreak\noindent\underline{\textbf{Week 10}}\\
\includegraphics[width=\linewidth]{435.pdf}\\
\includegraphics[width=\linewidth]{436.pdf}\\
\includegraphics[width=\linewidth]{437.pdf}\\
\includegraphics[width=\linewidth]{439.pdf}\\
\includegraphics[width=\linewidth]{440.pdf}\\
\includegraphics[width=\linewidth]{441.pdf}\\
\includegraphics[width=\linewidth]{442.pdf}\\
\includegraphics[width=\linewidth]{443.pdf}\\
\includegraphics[width=\linewidth]{444.pdf}\\
\includegraphics[width=\linewidth]{448.pdf}\\
\includegraphics[width=\linewidth]{449.pdf}\\
\includegraphics[width=\linewidth]{451.pdf}\\
\includegraphics[width=\linewidth]{452.pdf}\\
\includegraphics[width=\linewidth]{453.pdf}\\
\includegraphics[width=\linewidth]{454.png}\\
\includegraphics[width=\linewidth]{455.pdf}\\
\includegraphics[width=\linewidth]{456.pdf}\\
\includegraphics[width=\linewidth]{457.pdf}\\
\includegraphics[width=\linewidth]{458.pdf}\\
\includegraphics[width=\linewidth]{459.pdf}\\
\includegraphics[width=\linewidth]{460.pdf}\\
\includegraphics[width=\linewidth]{461.pdf}\\
\end{multicols}
\end{document}