You do not need autotest developers

In the era of widespread adoption of agile methodologies and DevOps, no one doubts that regression testing should be automated, especially if the company is aiming for Continuous Delivery. Everyone has rushed to hunt for autotest developers, and the market has become overheated.


In this article I will argue that the autotest developer is actually not such an important role on the team. You don't need them, especially if you are implementing Scrum, and all that agile and DevOps can be achieved without these people. So if someone tells you that their team tests everything by hand because they somehow can't find an autotest developer, don't believe them. They test by hand because they are too lazy to do it any other way. Or don't know how.


The problem with a separate team of autotest developers


At Alfa Bank I worked on test automation, and over more than two years we managed to completely eliminate the position of autotest developer. Let me clarify right away: I am talking about the units that build digital products, not about the whole bank.
When I first arrived, the organizational structure and process strongly resembled this picture, typical of many companies:


There are:


  • a development department/team;
  • a testing department/team;
  • an analyst department/team.

All of them work simultaneously on different products/projects. I was a bit luckier: at that time the company had started thinking about test automation, so a test automation team was also present.


And to manage these functional silos, you usually need a project manager. At the time, our testing process looked like this:
Testing process


  1. The testing team received an artifact from the development team as input. They performed functional testing; at the same time, the automated regression suite was run if one had been developed for that product. Most of the test cases were tested manually.
  2. Then the artifact was deployed to production, where real customers use it.
  3. And only after that were Jira tasks created for the autotest team (this is what I will call the autotest developers from here on).
  4. Those tasks were completed with a delay of one to four iterations.

As a rule, test automation was achieved through the joint work of a tester and an autotest developer: one came up with test cases and tested the application by hand, and the other automated those test cases. Their interaction was coordinated by the project manager and an army of managers: test managers, test leads, and so on.


I will not explain "what is wrong with functional silos." Everything is wrong with them, if only because each department produces "puzzle-piece artifacts" that are fragments of a future feature/product. Team leads and PMs constantly have to keep their finger on the pulse of this process: hand the artifact puzzle over to another department, make sure the departments communicate with each other, and so on.


Therefore, we will not dwell on this separately.


Let's move straight to a more common workflow: product teams plus a separate service team of autotest developers. By then some of our teams were working in Kanban and some in Scrum, yet the problems stayed the same. Now we had teams that contained all the roles needed for product/project development, while the autotest developers were still in a separate automation team. For us this approach was a forced necessity, because:


  • we did not have enough autotest developers to provide every team with test automation;
  • because of a large technical debt, they could not simply "drop everything" and switch to what the teams needed right now;
  • there were also systems whose autotests had been written long ago and needed maintenance, and the teams did not have enough expertise for that.

Meanwhile, our testing process did change and began to look like this:
Testing workflow in a Scrum team


  1. An artifact, a small part of a user story, comes to the tester on the team.
  2. The tester performs acceptance testing and runs the automated regression suite.
  3. If regression is not fully automated, the remaining part is tested manually.
  4. Then the artifact is deployed to production, and the tester hands a task over to the autotest developers for automation.

And then, the same old waiting for the test cases to be automated. The problems remained; in fact, there were more of them, because now the teams also could not close their sprints: there were unfinished test automation tasks.


Let's summarize which issues remained open with a separate autotest team.


Problems of a dedicated autotester team


  1. No quick value from autotests. As a rule, they were developed with a delay of one or more sprints, because there was a queue for the autotest developers. The team received value from them only in the form of automated regression, and very often, by the time a feature was automated, the team had already reworked it, so the autotests quickly became obsolete.
  2. The cost of the product grew. This follows from the first point: since manual testing was reduced only with a significant delay, the team had to pay for test automation whose value was knowingly deferred. Not to mention that proxy management lengthened the feedback loop, which also increased the cost of the product.
  3. No transparency in the testing process. No one on the teams knew what the autotest developers were doing, understood their workload, or knew how they prioritized tasks.
  4. Autotest developers were "bought-in" people (outsourced). Because of this, the SM or PO considered them "visitors" who should not be part of the team. And the developers themselves did not really want to become part of the team either: where is the guarantee that their employer will not "transfer" them to yet another project?
  5. The business did not understand why it should pay now in the hope of a long-term payoff.

And now, attention: what should a PM/SM do whose PO did not budget any money for automation? Or whose product has been in development for about three months, while they still cannot find an autotest developer for it? Or the autotest team is short-handed because someone went on vacation or got sick? Forget about automation, and long live two weeks of manual regression? Or wait until the right number of people shows up, and meanwhile incur losses either from a late product launch or, on the contrary, from shipping the product in a half-baked state?


So, obviously, the approach with a dedicated autotester team is not applicable in an agile environment, and we need to look for other ways to restructure the process to achieve test automation.

To do that, let's look at how autotest developers cooperate and interact with the other members of product teams.


How testers work with autotest developers


  • Testers are customers of the autotest developers.
  • The autotester team is a service team for the product teams. Such a team has its own team lead, who proxies requests from testers to the autotest developers. In the worst case, the testers also have their own team lead, who proxies their requests to the autotest team lead. In effect we get a game of telephone, a long feedback loop, and nearly zero efficiency. It is usually people from this kind of org structure who give conference talks about how test automation is long and expensive, and how there is no shame in testing "by hand."
  • Even if communication is not that bad, you can run into other problems, such as: at any given moment the developers are busy with other projects and cannot respond to a tester's requests here and now.
  • Testers do not trust the autotests, and re-run automated suites manually.

How developers "collaborate" with autotest developers


  • In theory, developers are also customers of the test automation team, but in practice they rarely actually sent requests to it.

When a developer creates a library to solve his product's problems, and it could be useful for testing, it is in his own interest to hand this tool over to the autotest developers. But if there is a separate team, and especially if it has its own team lead, many developers simply refuse to get drawn into communication debates. As a result, we get the following problems:


  • Duplication of automation tools/frameworks due to a lack of communication with the developers.
  • The same problems are often solved with different tools. For example, testing requires pulling logs from Elasticsearch. The developers may already have their own library for this task, but instead of reusing and extending the existing tool, the autotest developers usually write their own or find yet another tool.
  • Autotest developers use tools that, for one reason or another, the developers may later refuse to support or adopt. For example, our autotest developers used Maven as the build tool. When the task arose of integrating test automation into the development process, we had to abandon the old framework entirely, since migrating it to Gradle turned out to be harder than writing a new one. And the developers could not use Maven, because all of our infrastructure and environment were already built around Gradle.

How management "collaborates" with autotest developers


Management mostly "collaborates" with autotest developers by requesting and receiving regular progress reports, which are expected to answer standard questions such as:


  • What do we get from test automation?
  • How much does the product cost?
  • Given the cost, do we benefit from test automation enough?
  • What bad things could happen if funding for automation is suspended?
  • What is the cost of technical support for written autotests?

As a result, here are the main test automation fails when a dedicated autotest team does it:


  1. No visibility into how much of the test coverage has been automated.
  2. We develop things that are not used when testing the product. For example, test cases were written or automated but never included in the regression suite runs; such autotests bring no value.
  3. We do not analyze test results: developed -> launched -> never analyzed. After a while the team discovers autotests that have "fallen over" because the test data is badly outdated. And that is the better case.
  4. Flaky test cases: we constantly spend a lot of money stabilizing them. This is a consequence of not diving into the context of the application we are trying to test automatically.

Searching for a solution


Attempt #1: one tester and one autotester allocated per team


The first thing that occurred to me was to try to provide each team with its own autotest developer. The key word being "try": over a whole year I interviewed more than 200 candidates, and only three of them ended up on our team. Nevertheless, we decided to at least pilot a process with an autotest developer inside the team. Our testing process changed once again:
Testing process with an autotester inside the team


  1. Now, when an artifact arrived for testing, the tester performed acceptance testing.
  2. The tests for this artifact were then automated immediately.
  3. As a result, all of our regression was automated.
  4. And the artifact was deployed to production with the autotests already implemented.

It would seem perfect. But after a couple of sprints, the following became clear:


  • The product/project did not generate enough work to keep the autotester busy, even despite the team's large number of meetings, which on average ate up about 10 hours of a one-week sprint.
  • At the same time, the autotester refused to do functional testing if you tried to leave him as the only tester on the team.
  • Meanwhile, the tester was not skilled enough to write autotests himself.
  • When the autotest developer was offered the chance to write application services, he refused because his skills were not up to it. And strangely enough, not every autotester is interested in growing as a developer.

Attempt #2: an autotest developer shared across 2-3 teams


Then I thought: if the autotester is only about 50% busy, why not try to "share" him between two teams? Here is what we got:


It turned out the developer had about 20 hours left for coding in a one-week sprint, and the problem was simple: he just did not have enough time. Context switching between products meant he could no longer dive quickly into the automation work, and once again autotest development lagged behind product development. On top of that, it proved very difficult for the teams to synchronize so that their meetings did not overlap and the developer could attend all of them.


At the same time, he could not skip those meetings either, because then he lost the context of the application under development and automation became less effective.


Attempt #3, the successful one: teach testers to write autotests


Then a hypothesis came to mind: if we teach testers to develop autotests on their own, we will fix all of our pains. So where did we start?


  1. First, we built a proper testing pyramid. According to it, our test strategy was to have tests at every layer of the application, with integration tests between the layers. Only acceptance tests were run by hand, and immediately after acceptance they were automated. That way, in the next iteration the tester again tests ONLY the change.
    Testing pyramid
  2. We "spread" test automation across the team. Since the tests were of different kinds, they were developed by different team members: API tests by the API developer, UI component tests by the front-end developer, while the tester designed the test model and implemented the integration tests driven through the browser (Selenium tests).
  3. We used simple test automation tools. We decided not to complicate our lives and chose the simplest wrapper around Selenium; today that is Selenide, if your autotests are written in Java.
  4. We created a tool that lets non-programmers write autotests. An article about this library (Akita BDD) has already been published on our blog: https://habrahabr.ru/company/alfa/blog/350238/. Thanks to BDD, we were even able to involve analysts in writing autotests. By the way, the library is open source: https://github.com/alfa-laboratory/akita and https://github.com/alfa-laboratory/akita-testing-template
  5. We taught the testers a little programming. Training took from two weeks to two months on average.
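To make this concrete, here is a sketch of what a BDD scenario in the Cucumber/Gherkin style that Akita BDD builds on might look like. The feature name, page name, and step phrasing below are invented for illustration; they are not the library's actual step definitions:

```gherkin
# Hypothetical example: page name and step phrasing are illustrative only,
# not the actual steps shipped with Akita BDD.
Feature: Payment form

  Scenario: Client submits a payment and sees a confirmation
    Given the "PaymentPage" page is opened
    When the user fills the "amount" field with "100"
    And the user clicks the "submit" button
    Then the "confirmation" block is displayed
```

Because scenarios read like plain text mapped onto a shared library of reusable steps, a tester or analyst can add coverage without writing Java, while the step implementations remain ordinary Selenium/Selenide code maintained by the team.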

Because we "spread" test automation across the team, the tester had time for both manual testing and automation. Thanks to this cross-functionality within the team, some testers leveled up so much that they occasionally even started helping the team develop microservices.


When the team itself develops the autotests, it owns the test coverage: everyone understands how many tests have already been written and what needs to be added to cut manual regression time. There is no duplicate automation of the same scenarios, because the developers know about them and, while writing their e2e and unit tests, can tell the tester which scenarios no longer need automating.


The problem of outdated autotests resolves itself quickly. When a product evolves rapidly, having a separate person responsible for automation generates a lot of pointless work for him: while he is automating a prototype of some new set of test cases today, the designer and PO may decide the logic will be completely different, and the next day he has to redo his tests all over again. Only by always being inside the team can you keep a finger on the pulse and understand which tests are worth automating now and which should wait.


The core of the library itself is maintained by the testers, the ones most interested in it, who enjoy writing code and contributing to akita-bdd. As a rule, new features and steps come from other teams that have tried something internally and decided to share it via a pull request to the library. All communication happens in a community channel in the bank's Slack, where people discuss whether a particular step is needed, and then share it.


But will the quality of the tests suffer?


Some of you may be wondering: what if the team cannot handle building a new test framework? Or the autotests turn out to be poor quality, since the team lacks the unique expertise of an autotest developer? I am of the opinion that autotest developers are not unicorns, and the ideal candidate to write such a framework is a member of the very team that needs the automation.


Let me tell you a story. When I first came to Alfa Bank, there was already an in-house test automation framework, built over two or three years by the dedicated autotest team I mentioned earlier. It was a monstrous Frankenstein that even an experienced developer struggled to learn. Naturally, every attempt to teach testers to automate tests with it ended in failure, as did every attempt to drag the tool into the teams.


Then we decided to pilot the development of a new tool, but this time it had to be built by a team, not by a person or group detached from production. One project yielded a prototype, which we dragged into a couple of new teams. By the time their products shipped, the prototype had grown a large number of extensions the teams had created themselves while solving problems in a shared context. We analyzed what they had done and, picking the best, turned it into a library.


Essentially it was the same prototype, just polished a little by a Java developer from one of those teams: he brought order and elegance to the architecture and raised the quality of the library's code so that it was no longer a mess.


Today this library is used by more than 20 teams, and it keeps evolving: testers from the teams constantly contribute to it, adding new steps as needed.


And all the innovations, as a rule, happened precisely in the context of a team. The diverse experience mixed together there led to better understanding and better solutions.

You may ask: where does getting rid of autotest developers come in, when I have just been talking about the whole team working on autotests? The thing is, having both an autotest developer and application developers inside the team is a "too many cooks in the kitchen" situation: team members step on each other (or on each other's code). In the end, the application developers stop writing autotest code, which means they lose the context of the autotest problems and can no longer solve, or prevent, those problems while writing their own part of the code.


Another reason to have teams write autotests without a dedicated person for the task: because people work together on one team, they have a better picture of the entire stack and context of the application, which means they will build it knowing that they will later have to write autotests for it too.


Here is a concrete example: when our front-end developer started writing autotests and felt the pain of writing XPath queries against the various UI components on a page, he suggested adding a unique CSS class name to each component during page layout, to make elements easy to find. This let us stabilize the tests and speed up writing them by simplifying element lookup, and the front-end developer simply gained one new rule in his work, which did not complicate his workflow one bit.
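The convention above can be sketched in code. The class name, helper method, and the "at-" class prefix below are hypothetical, invented for illustration; the point is only the contrast between a position-based XPath and a selector that travels with the component:

```java
// Hypothetical sketch of the "unique CSS class per component" convention.
// The Selectors class and the "at-" prefix are invented for this example.
public class Selectors {

    // Assumed convention: every component rendered by the front end
    // carries a unique marker class such as "at-login-button".
    static final String TEST_CLASS_PREFIX = "at-";

    // Build a stable CSS selector from a component's logical name.
    public static String byTestClass(String componentName) {
        return "." + TEST_CLASS_PREFIX + componentName;
    }

    public static void main(String[] args) {
        // Brittle: tied to markup structure, breaks when the layout changes.
        String brittle = "//div[2]/form/div[3]/button[1]";

        // Stable: the marker class moves with the component, so the
        // selector survives layout refactoring.
        String stable = byTestClass("login-button"); // ".at-login-button"

        System.out.println(brittle + "  ->  " + stable);
    }
}
```

In Selenium or Selenide such a selector is then used as-is, e.g. `$(".at-login-button").click()` in Selenide.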


When we combine everything into one cross-functional team, all these dependencies live inside it and no external coordination is needed. The amount of management overhead shrinks dramatically.


Conclusions


  • Having a dedicated autotester team is not agile.
  • Having dedicated autotest developers is expensive for the product/team.
  • Searching for and hunting such people is costly and slow, and it slows down product development.
  • Autotest development lags far behind product development, so we get less than the maximum value from test automation.

I would like to add that my story is not a silver bullet for everyone. But many of our approaches may well work for you, if you keep the following in mind:


  • A tester is an engineer too. Until the team starts treating the tester as an engineer, he will have no motivation to grow or learn to program.
  • The team is responsible for quality too. Not just the tester, who will be blamed for product quality if anything goes wrong, but the whole team, including the customer (PO).
  • Autotests are part of the product too. As long as you think of autotests as a product for another product, sprints will keep closing without autotests written, and autotests will remain technical debt that, as a rule, never gets paid off. It is important to understand that autotests are what guarantees the quality of your product.

And finally: the team itself should want to write autotests and decide for itself who does what, without coercion "from above."


P.S. If my experience interests you, I invite you to my blog (http://travieso.me), where all my talks, articles, lectures, and notes are published.
