Diffstat (limited to 'usth/ICT2.7/P4L3 White-Box Testing Subtitles/30 - Summary - lang_en_vs4.srt')
-rw-r--r--  usth/ICT2.7/P4L3 White-Box Testing Subtitles/30 - Summary - lang_en_vs4.srt  127
1 file changed, 127 insertions(+), 0 deletions(-)
diff --git a/usth/ICT2.7/P4L3 White-Box Testing Subtitles/30 - Summary - lang_en_vs4.srt b/usth/ICT2.7/P4L3 White-Box Testing Subtitles/30 - Summary - lang_en_vs4.srt
new file mode 100644
index 0000000..264491d
--- /dev/null
+++ b/usth/ICT2.7/P4L3 White-Box Testing Subtitles/30 - Summary - lang_en_vs4.srt
@@ -0,0 +1,127 @@
+1

+00:00:00,190 --> 00:00:02,800

+Now let me conclude the lesson by summarizing a few important

+

+2

+00:00:02,800 --> 00:00:06,840

+aspects of white-box testing. The first important aspect is that white-box

+

+3

+00:00:06,840 --> 00:00:09,770

+testing works on a formal model: the code itself, or models

+

+4

+00:00:09,770 --> 00:00:12,430

+derived from the code. So when we do white-box testing, we

+

+5

+00:00:12,430 --> 00:00:15,160

+don't need to make subjective decisions, for example, on the level

+

+6

+00:00:15,160 --> 00:00:18,890

+of abstraction of our models. Normally, we simply represent what's there.

+

+7

+00:00:18,890 --> 00:00:22,010

+And so what we will obtain are objective results and objective

+

+8

+00:00:22,010 --> 00:00:25,400

+measures. As I also said at the beginning, coverage criteria allow

+

+9

+00:00:25,400 --> 00:00:28,790

+us to compare different test suites, different sets of tests,

+

+10

+00:00:28,790 --> 00:00:31,780

+because I can measure the coverage achieved by one test suite

+

+11

+00:00:31,780 --> 00:00:34,280

+and by the other, and then decide which one to use

+

+12

+00:00:34,280 --> 00:00:37,400

+based on this measure. And again, remember, these measures aren't perfect,

+

+13

+00:00:37,400 --> 00:00:40,700

+but they at least give you an objective number, an objective

+

+14

+00:00:40,700 --> 00:00:44,470

+measure of the likely effectiveness of your tests. So even though

+

+15

+00:00:44,470 --> 00:00:48,130

+achieving 100% coverage does not mean that you have identified all the

+

+16

+00:00:48,130 --> 00:00:50,710

+problems in the code for sure, if your level of coverage

+

+17

+00:00:50,710 --> 00:00:53,700

+is 10%, for example, for statement coverage, that means that

+

+18

+00:00:53,700 --> 00:00:57,360

+you haven't exercised 90% of your code, and therefore the trouble

+

+19

+00:00:57,360 --> 00:01:00,565

+is that you have a piece of software that is inadequately tested

+

+20

+00:01:00,565 --> 00:01:03,600

+and likely to be of inadequate quality. We also saw that

+

+21

+00:01:03,600 --> 00:01:07,370

+there are two broad classes of coverage criteria, practical criteria

+

+22

+00:01:07,370 --> 00:01:10,430

+that we can actually use, and theoretical criteria that are interesting

+

+23

+00:01:10,430 --> 00:01:13,410

+from a conceptual standpoint, but that are totally impractical.

+

+24

+00:01:13,410 --> 00:01:16,170

+They are too expensive to be used on real-world software.

+

+25

+00:01:16,170 --> 00:01:18,350

+And finally, as we also said at the beginning,

+

+26

+00:01:18,350 --> 00:01:20,820

+one of the great things about white-box testing and

+

+27

+00:01:20,820 --> 00:01:23,910

+coverage criteria is that they are fully automatable. There are

+

+28

+00:01:23,910 --> 00:01:27,380

+tools that can take your code and instrument it automatically. And

+

+29

+00:01:27,380 --> 00:01:29,050

+when you run your test cases, they will tell

+

+30

+00:01:29,050 --> 00:01:31,330

+you what level of coverage you achieve

+

+31

+00:01:31,330 --> 00:01:34,410

+with your tests, at no cost to you. So there's

+

+32

+00:01:34,410 --> 00:01:36,763

+really no reason not to measure coverage of your code.
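
To make the statement-coverage measure summarized in this transcript concrete, here is a minimal, hypothetical Python sketch. The ten-statement "program" and the two test suites are invented for illustration and are not part of the lesson; only the arithmetic (coverage = executed statements / total statements) comes from the material above.

    # Statement coverage, expressed as a percentage of statements executed.
    def statement_coverage(executed, total):
        return 100.0 * len(executed) / len(total)

    # An imaginary program whose statements are simply numbered 1..10.
    all_statements = set(range(1, 11))

    # Statements each hypothetical test suite actually executes.
    suite_a = {1}                          # exercises 10% of the code
    suite_b = {1, 2, 3, 4, 5, 6, 7, 8, 9}  # exercises 90% of the code

    print(statement_coverage(suite_a, all_statements))  # 10.0
    print(statement_coverage(suite_b, all_statements))  # 90.0

These two percentages are the kind of objective measure the lesson uses to compare test suites: suite_a leaves 90% of the statements unexercised, so suite_b would be preferred. In practice, a coverage tool performs the instrumentation and reports such numbers automatically; for Python, for example, coverage.py does this when you run "coverage run -m pytest" followed by "coverage report".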