Auto Testing Concepts

Hello, my name is Dmitry Karlovsky and, unfortunately, I do not have time to write a long article, but I really want to share some ideas, so here is a short programming note instead. Today we will talk about automated testing:


  1. Why do we write tests?
  2. What kinds of tests are there?
  3. How do we write tests?
  4. How should we write them?
  5. Why is unit testing bad?

The correct testing pyramid


Automated Testing Tasks


From most important to least:


  1. Detect defects as early as possible: before the user sees them, before the code reaches the server, before it is handed over for testing, before it is committed.
  2. Localize the problem. A test exercises only part of the code, which narrows down where a defect can be.
  3. Speed up development. Test execution is much faster than manual verification.
  4. Up-to-date documentation. A test is a simple usage example that is guaranteed to stay current.

Orthogonal classifications


  1. Classification by test object
  2. Classification by test type
  3. Classification by testing process

Just in case, let me emphasize that we are talking exclusively about automated testing.


Test Objects


  1. A module, or unit, is a minimal piece of code that can be tested independently of the rest of the code. Module testing is also known as unit testing.
  2. A component is a relatively independent part of the application. It may include other components and modules.
  3. An application or system is a degenerate case of a component that indirectly includes all other components.

Test types


  1. Functional - verify compliance with the functional requirements
  2. Integration - verify that neighboring test objects work together correctly
  3. Load - verify compliance with the performance requirements

Types of Testing Processes


  1. Acceptance - checking the new or changed functionality.
  2. Regression - checking for the absence of defects in unchanged functionality.
  3. Smoke - checking the basic functionality for obvious defects.
  4. Full - checking all functionality.
  5. Configuration - checking all functionality on different configurations.

Number of tests


  • Tests are code.
  • Any code takes time to write.
  • Any code takes time to support.
  • Any code may contain errors.

The more tests, the slower the development.


Completeness of testing


  • Tests should verify all user scenarios.
  • Tests should exercise every branch of the logic.
  • Tests should check all equivalence classes.
  • Tests should check all boundary conditions.
  • Tests should check the response to abnormal conditions.

The more complete the tests, the faster refactoring and testing go, and as a result the faster new functionality is delivered.
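
For example, checks of equivalence classes and boundary conditions for a tiny hypothetical clamp function might look like this (the function and the test names are invented purely for illustration):


// Hypothetical function under test
const clamp = ( value : number , min : number , max : number )=> Math.min( max , Math.max( min , value ) )

// Equivalence classes and boundary conditions of clamp
$mol_test({

    'value inside the range is returned as is'() {
        $mol_assert_equal( clamp( 5 , 0 , 10 ) , 5 )
    } ,

    'value below the range is raised to the lower bound'() {
        $mol_assert_equal( clamp( -1 , 0 , 10 ) , 0 )
    } ,

    'value above the range is lowered to the upper bound'() {
        $mol_assert_equal( clamp( 11 , 0 , 10 ) , 10 )
    } ,

    'boundary values stay on the boundaries'() {
        $mol_assert_equal( clamp( 0 , 0 , 10 ) , 0 )
        $mol_assert_equal( clamp( 10 , 0 , 10 ) , 10 )
    } ,

})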


Business priorities


  1. Maximize development speed. The developer should write a minimum of tests, and they should run quickly.
  2. Minimize defects. Maximum coverage is required.
  3. Minimize development cost. A minimum of effort should be spent on writing and maintaining code (including tests).

Testing Strategies


Depending on the priorities, several main strategies can be distinguished:


  1. Quality. We write functional tests for all modules. We check their compatibility with integration tests. We add tests for all non-degenerate components, not forgetting about integration between components. Finally, we sprinkle tests over the entire application. Such multilevel comprehensive testing requires a lot of time and resources, but makes it more likely that defects will be detected.
  2. Speed. We use only smoke testing of the application. We know for sure that the basic functions work, and we will fix the rest if something comes up. We deliver functionality quickly this way, but we spend a lot of resources on polishing it afterwards.
  3. Cost. We write tests only for the application as a whole. Critical defects are detected in advance, which reduces the cost of support and, as a result, gives a relatively high speed of delivery of new functionality.
  4. Quality + speed. We cover all (including degenerate) components with tests, which gives maximum coverage with a minimum number of tests, and therefore a minimum of defects at high speed, resulting in a relatively low cost.

Application example


So that my analysis is not completely unfounded, let's create the simplest application of two components. It will contain a name input field and a block that displays a greeting addressed to that name.


$my_hello $mol_list
    rows /
        <= Input $mol_string
            value?val <=> name?val \
        <= Output $my_hello_message
            target <= name -

$my_hello_message $mol_view
    sub /
        \Hello, 
        <= target \

For those who are not familiar with this notation, I suggest taking a look at the equivalent TypeScript code:


export class $my_hello extends $mol_list {

    rows() {
        return [ this.Input() , this.Output() ]
    }

    @mem
    Input() {
        return this.$.$mol_string.make({
            value : next => this.name( next ) ,
        })
    }

    @mem
    Output() {
        return this.$.$my_hello_message.make({
            target : ()=> this.name() ,
        })
    }

    @mem
    name( next = '' ) { return next }

}

export class $my_hello_message extends $mol_view {

    sub() {
        return [ 'Hello, ' , this.target() ]
    }

    target() {
        return ''
    }

}

@mem is a reactive caching decorator. this.$ is the DI context. Binding happens through redefinition of properties. .make simply instantiates the class and overrides the specified properties.
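
To make this a bit less magical, here is a rough sketch of what a caching method decorator of this kind might look like. It is only an approximation of the idea (the name mem_sketch is mine), not the real $mol implementation, and it ignores reactivity and cache invalidation entirely:


// Simplified sketch of a caching method decorator (not the real @mem)
function mem_sketch( proto : object , field : string , descr : PropertyDescriptor ) {
    const calc = descr.value
    const cache = new WeakMap< object , any >()

    descr.value = function( this : object , ...args : any[] ) {
        // when a value is pushed in, remember it for this instance
        if( args.length > 0 && args[ 0 ] !== undefined ) cache.set( this , calc.apply( this , args ) )
        // compute lazily on first read, then reuse the cached value
        if( !cache.has( this ) ) cache.set( this , calc.call( this ) )
        return cache.get( this )
    }

    return descr
}

With such a decorator, repeated calls to Input() return the same instance, and name( 'Jin' ) both stores and returns the new value.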


Component testing


With this approach, we use real dependencies whenever possible.


What should be mocked in any case:


  1. Interaction with the outside world (http, localStorage, location, etc.)
  2. Non-determinism (Math.random, Date.now, etc.) - see the stubbing sketch after this list
  3. Especially slow things (calculating a crypto hash and so on)
  4. Asynchrony (synchronous tests are easier to understand and debug)
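
As an illustration of the second point, a non-deterministic dependency such as Date.now can be frozen for the duration of a single test. The scenario below (a greeting that mentions the current year) is invented purely for the example:


// Sketch: stubbing Date.now to make a test deterministic
$mol_test({

    'greeting mentions the current year'() {
        const real_now = Date.now
        Date.now = ()=> new Date( '2020-01-01T00:00:00Z' ).getTime() // frozen clock
        try {
            // hypothetical code under test that uses the current date
            const year = new Date( Date.now() ).getUTCFullYear()
            $mol_assert_equal( String( year ) , '2020' )
        } finally {
            Date.now = real_now // always restore the real clock
        }
    } ,

})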

So, first we write a test for a nested component:


// Components tests of $my_hello_message
$mol_test({

    'print greeting to defined target'() {
        const app = new $my_hello_message
        app.target = ()=> 'Jin'
        $mol_assert_equal( app.sub().join( '' ) , 'Hello, Jin' )
    } ,

})

And now we add tests for the outer component:


// Components tests of $my_hello
$mol_test({

    'contains Input and Output'() {
        const app = new $my_hello

        $mol_assert_like( app.sub() , [
            app.Input() ,
            app.Output() ,
        ] )
    } ,

    'print greeting with name from input'() {
        const app = new $my_hello
        $mol_assert_equal( app.Output().sub().join( '' ) , 'Hello, ' )

        app.Input().value( 'Jin' )
        $mol_assert_equal( app.Output().sub().join( '' ), 'Hello, Jin' )
    } ,

})

As you can see, all we needed was the public interface of the component. Note that it does not matter to us through which property, or how, the value is passed to Output. We check exactly the requirement: that the displayed greeting corresponds to the name entered by the user.


Unit testing


For unit tests, you must isolate the module from the rest of the code. When a module does not interact with other modules in any way, the tests are the same as the component ones:


// Unit tests of $my_hello_message
$mol_test({

    'print greeting to defined target'() {
        const app = new $my_hello_message
        app.target = ()=> 'Jin'
        $mol_assert_equal( app.sub().join( '' ), 'Hello, Jin' )
    } ,

})

If the module needs other modules, they are replaced by stubs and we verify that communication with them occurs as expected.


// Unit tests of $my_hello
$mol_test({

    'contains Input and Output'() {
        const app = new $my_hello

        const Input = {} as $mol_string
        app.Input = ()=> Input

        const Output = {} as $my_hello_message
        app.Output = ()=> Output

        $mol_assert_like( app.sub() , [
            Input ,
            Output ,
        ] )
    } ,

    'Input value binds to name'() {
        const app = new $my_hello
        app.$ = Object.create( $ )

        const Input = {} as $mol_string
        app.$.$mol_string = function(){ return Input } as any

        $mol_assert_equal( app.name() , '' )

        Input.value( 'Jin' )
        $mol_assert_equal( app.name() , 'Jin' )
    } ,

    'Output target binds to name'() {
        const app = new $my_hello
        app.$ = Object.create( $ )

        const Output = {} as $my_hello_message
        app.$.$my_hello_message = function(){ return Output } as any

        $mol_assert_equal( Output.title() , '' )

        app.name( 'Jin' )
        $mol_assert_equal( Output.title() , 'Jin' )
    } ,

})

Mocking is not free - it leads to more complicated tests. But the saddest thing is that after testing against mocks you cannot be sure that all of this will work correctly with the real modules. If you were attentive, you already noticed that in the last test we expect the name to be passed through the property title. And this leads us to two types of errors:


  1. Correct module code may produce errors when run against a mock.
  2. Defective module code may produce no errors when run against mocks.

And finally, it turns out that such tests do not check the requirements (a greeting with the entered name should be displayed, I remind you) but the implementation (such and such a method is called inside with such and such parameters). This means the tests are fragile.


Fragile tests are tests that break under equivalent changes to the implementation.

Equivalent changes are changes to the implementation that do not break the code's compliance with the functional requirements.

Test-driven development


The TDD algorithm is quite simple and quite useful:


  1. We write a test and make sure it fails, which means the test really tests something and changes to the code are really necessary.
  2. We write code until the test stops failing, which means we have met all the requirements.
  3. We refactor the code, making sure the test does not start failing, which means the code still complies with the requirements.

If we write fragile tests, then at the refactoring step they will constantly break, requiring investigation and adjustment, which reduces the programmer's productivity.
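
For example, one pass through this cycle for a hypothetical capitalized-greeting requirement might look like this (the greet function and the requirement itself are invented for illustration):


// Red: the requirement expressed as a test; with no implementation (or a stub) it fails
$mol_test({

    'greeting is capitalized'() {
        $mol_assert_equal( greet( 'jin' ) , 'Hello, Jin' )
    } ,

})

// Green: the minimal implementation that makes the test pass
// (a function declaration is hoisted, so it may follow the test in this sketch)
function greet( name : string ) {
    return 'Hello, ' + name[ 0 ].toUpperCase() + name.slice( 1 )
}

// Refactor: reshape the internals of greet freely while the test keeps passing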


Integration tests


To cover the cases that unit tests leave out, an additional kind of test was invented - integration tests. Here we take several modules together and verify that they interact correctly:


// Integration tests of $my_hello
$mol_test({

    'print greeting with name'() {
        const app = new $my_hello

        $mol_assert_equal( app.Output().sub().join( '' ) , 'Hello, ' )

        app.Input().value( 'Jin' )
        $mol_assert_equal( app.Output().sub().join( '' ), 'Hello, Jin' )
    } ,

})

Yep, we ended up with exactly the last of the component tests. In other words, one way or another we have written all the same component tests that check the requirements, but in addition we have pinned a concrete implementation of the logic in the tests. This is usually redundant.


Statistics


Criteria           Cascaded component       Unit + integration
Lines of code      17                       34 + 8
Complexity         Simple                   Complex
Encapsulation      Black box                White box
Fragility          Low                      High
Coverage           Full                     Redundant
Velocity           High                     Low
Duration           Low                      High

Related Links