On the life, liberty, and pursuit of happiness of the API user

We are constantly confronted with systems created by other people. Whether it's the UI of a smartphone application or the cloud infrastructure of the modern Internet, it is the process of interaction that shapes our feelings, impressions, and ultimately our attitude toward technology. We may be in the role of engineers, developers, or ordinary users; user experience matters everywhere. Around systems with good UX, a community of happy, contented, and productive people forms; poor UX brings only pain and suffering.

Even if you don't consciously realize it, whenever you create new software you are also creating a user experience. Once the code is written, people begin to interact with it. Maybe it's the developers on your own team. Maybe it's mobile developers trying to use your API, or system administrators trying to figure out why everything broke in the middle of the night. The examples may differ completely in substance, but the same general principles apply to them all. In this hub we'll talk about UX, API design, the psychology of learning, and related areas, and consider how to apply good practices at various levels of application development. Whatever you build - databases, libraries, hypermedia APIs, or mobile applications - sooner or later someone will touch your code. Let them enjoy it, right?

This post is based on Dylan Beattie's talk at DotNext 2017 Moscow, which attendees' reviews placed among the top three talks of the conference. Dylan has been building websites since 1992, which in web-development terms really is ancient history. He is a Microsoft MVP in the Visual Studio and Developer Technologies category, and in his hometown of London he organizes .NET User Group events. Dylan always finds time to chat and exchange ideas, so you can email him at dylan@dylanbeattie.net, visit his website www.dylanbeattie.net, or tweet @dylanbeattie.


Today I would like to talk with you about the idea of "happy" code. Spotlight, where I have worked for 15 years, is in show business: acting, television, cinema, and so on. In that world there is a well-known piece of clip art depicting the two Greek theater masks, comedy and tragedy (or joy and sadness). Over my professional career I have watched a variety of projects. There were those that seemed to deserve success in every respect. They had everything: an excellent team, an interesting problem, good technology. Yet, strangely enough, failure awaited them. Some three months in, they would come to the office to find nothing but a vast sea of problems. Other projects, on the contrary, looked crazy at the start. They didn't even have money, just two people and one impossible task. And in a completely improbable way, those projects succeeded. Three months later the team had a gorgeous product, morale was sky-high, work hummed along, and tasks were knocked out one after another. There is something to think about here. In my opinion, teams that always feel bad, that don't like coming to work, that grind on only because they have to, produce bad software. And conversely, teams that run on positive energy deliver good products. In my entire career I have never seen a team whose people were weighed down by their work and yet produced excellent IT systems. I decided to look into this and try to understand how to make teams happy. It is not, say, about buying a foosball table for the office. Understand that most programmers genuinely like to code. These people came to programming not by chance, but because they once sat down at a computer and realized they found it fascinating.
That they can earn a living from this occupation is a great piece of luck for them. But experience leaves its mark, and we will discuss how that mark is made and how it affects a programmer's happiness.

Discoverability and dopamine

A discoverable system is one whose workings you can study freely and on your own. Interacting with such a system is not scary: it won't let you do anything irreparable, because it is set up safely, on the sandbox principle. Moreover, such a system usually actively offers you ways to master it, and not boring ones but engaging ones, as if you had a toy or a puzzle in your hands. One advantage of such a system is that it genuinely needs no training manuals or documentation. But there is another feature. Imagine this situation: you are sitting at work and start fixing bugs. One fix, another, a third. You feel the work flowing and now you can't stop, as with Super Mario: you keep trying for more. It's 3 a.m. and you're still at it. Research has shown that at the moment a person manages to solve a problem, a surge of dopamine occurs in the brain. The same substance is known to be responsible for the sensations a gamer feels on yet another win, or an addict preparing to take a dose. Thanks to dopamine, a person absorbs things faster and better; the knowledge sticks longer and is easier to apply in practice.

It turns out that if we create a system that makes the programmer feel as if he is cracking interesting puzzles one after another, he will experience those small releases of dopamine, and he will work more productively and more skillfully.

Learning curve

In psychology there is the notion of a learning curve. It shows the relationship between the amount of time a person has spent studying something and the degree of competence acquired.

Here are two curves: a blue one with a steep rise and a red one that is more gentle. Which is better? Opinions usually split down the middle. Some prefer the "steep" path: make a hard push but reach results faster. These are the people who learn Haskell in 24 hours. Others advocate a slower but "even" learning process. These are the people who have used Microsoft Word all their lives but still don't know how to insert a page break. Both approaches are fine and have a right to exist.

Taboo #1: dips in the curve

A common problem is a curve like this:

The declining section of the graph immediately after the local peak reflects the situation where the user is forced to roll back: while studying the material he misunderstood something and now has to relearn it. Anyone who has worked with ASP.NET WebForms probably remembers how easily OnClick, OnItemDataBound, and the rest are mastered one after another. Soon you feel you've learned how to build websites. But then the boss tells you to hook up a streaming media system and set up a video broadcast. You open Visual Studio and look for a streaming media control, but there isn't one. ASP.NET WebForms created a narrow, specific view of the web. The abstraction built into it was probably designed to lure VB developers into web programming, because all WebForms deals in is buttons, events, clicks, and data binding. But it turns out you need to understand HTTP and HTML, requests and responses, stateless protocols, and more. So you find yourself sliding down from the peak of the curve. Most likely you will leave work that day in a bad mood, realizing you spent your energy in vain and now have to start over. Nobody likes working like that.

Taboo #2: leaps in the curve

Another scenario that you want to avoid looks like this:

It's a kind of collision with a brick wall. You have started to make some progress, and suddenly something incredibly complex, resisting your understanding, appears in your path, and you are well and truly stuck. I remember how the concept of recursion in functional programming was first explained to me. I understood absolutely nothing. I was advised simply to practice: apply it again and again. But that is a dubious approach. Strictly speaking, this isn't even a leap but a break in the curve:

In fact there is not one curve but two, and getting from one to the other is a real challenge. And having crossed that chasm, you find it is just as hard to help someone else across. Any concept can, in the end, be figured out. Be it recursion or the Y combinator, sooner or later it will "click". I personally still don't understand monads, though it's clear that one day I will conquer them. A person is capable of a great deal. Nevertheless, we should try to design our systems so that, as far as possible, the user studying them doesn't run into especially difficult obstacles.

User experience

I first thought about UX while working with Castle Windsor. After downloading and installing it, I pasted in code from a tutorial blog post, pressed F5, and saw the following error:

I read the whole text. "It looks like you forgot to register your HTTP module..." Notice how friendly it is. The developers didn't just throw an error at you. They seemed to know that you probably wouldn't yet understand what you were working with, so they included a snippet of XML in the message and told you to copy it into your config file. After I followed their instructions, everything really did start. The person who wrote the handling of this exception could easily have left you with a NullReferenceException or a ConfigurationException, but he didn't. He knew that a large number of users would hit this problem, and he knew the reason: roughly speaking, those users hadn't performed a certain step because it wasn't obvious to them; they hadn't opened any tutorial, they had just found ready-made code and launched it. The developers made sure help arrived exactly at launch time. Among developers there is a curious belief that UX is not their concern, that their business is backends, databases, APIs, and so on. I disagree. One way or another, people will interact with any code you write. If you are designing a database schema, be aware that someone will try to retrieve data from it. Write an API, and someone will use it in an application. Build an application, and someone will be responsible for supporting it while you're on vacation.
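The same idea can be sketched in a few lines. The class and message below are purely hypothetical illustrations (not Castle Windsor's actual code): the point is that a known failure mode earns a message explaining the remedy, instead of surfacing as a bare NullReferenceException.

```csharp
using System;

// Hypothetical sketch of a "friendly" failure: when a well-known
// misconfiguration is detected, the exception message itself tells
// the user exactly what to paste into their config file.
public class ModuleRegistry
{
    private readonly object _httpModule; // null when the module was never registered

    public ModuleRegistry(object httpModule) => _httpModule = httpModule;

    public object GetHttpModule()
    {
        if (_httpModule == null)
        {
            throw new InvalidOperationException(
                "Looks like you forgot to register the HTTP module. " +
                "Add this to the <httpModules> section of your web.config:\n" +
                "  <add name=\"PerRequestLifestyle\" type=\"...\" />");
        }
        return _httpModule;
    }
}
```

The design choice is simple: whoever writes the null check already knows why the value is null in the common case, so that knowledge belongs in the message rather than in the user's six-week debugging session.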

When we write code, we determine how various groups of our users (fellow developers, customers, partners) will interact with it. It is very important to consider their experience and work patterns: what you learn will help you design your systems more intelligently in the future. So let's look at some examples.

First day at work, company X

Imagine you are a web developer and you have just been hired onto a new team. You arrive at the office on Monday morning. They show you where the coffee machine is and where the fire exit is, then finally sit you down at a desk and tell you that, for starters, you should get familiar with the site and run it locally. So first you need the site's code. You go to GitHub. You open the repository called "website", but it turns out to be empty. You see another repository, "website2"; you open it, but it's empty too. Then you find "website2015" and think everything must surely be here, but no. You find yet another "website", but it contains only a few Perl scripts. Then you politely ask the colleague next to you where the site code actually lives. It turns out everything is stored in the finance repository. Why finance? Because, as they explain, the site's developer used to work on the finance team... Okay, you move on. You find the code and run it, but alas, nothing works: a pile of required DLL files is missing. Again you turn to the colleague nearby, this time about the DLLs. He quickly understands what you mean, says it's fine, he'll share his C drive now and you can copy over all the necessary files. A convoluted scheme, you think... But in any case, you now have all the files. You press Start. Another problem: no connection to the database. In the best case you simply keep tugging at your colleague all day and by evening finally get the site running on your machine. If you are unlucky, an error like this awaits you:

Someone was actually paid for this exception and the text you see. That someone knew more about the nature of the error but decided not to tell you anything (apparently he didn't care), leaving you to figure it out somehow yourself. Frankly, not the best experience. Usually good developers, having gone through such torment, immediately set about documenting the process. They write up a small guide describing the whole sequence of steps, attach the necessary files, explain what has to be done to connect to the database, and so on. But this kind of documentation, as a rule, goes stale very quickly: within a week someone will update one of the DLLs and your thoughtful guide will instantly lose its value.

First day at work, company Y

You have just been hired as a web developer. On Monday morning you are at the office for the first time. The first thing you are asked to do is run the site locally; its code is in the Applejack repository. You go to GitHub, clone the repository, press F5, and see: "Restoring NuGet packages". Take note: the company has a NuGet server configured; all the DLLs are kept there in shared access, so nobody has to beg them from the colleague at the next desk. The restore completes, but the launch fails: "Error connecting to the database." You go back to the same repository and find all the database scripts in a folder called "SQL". Let's stop and take stock. There is definitely a hit of dopamine here: you have just completed the task. You didn't have to ask anyone for help; you brought the system up and started it yourself. And in solving the problem easily, you simultaneously learned how everything works. You effortlessly registered, for example, that the repository has a separate folder for the database and that the schema is described there. When, a few days later, you need to figure out how a particular table is laid out, you will easily retrace the logic and find the schema file without asking anyone. The whole procedure of bringing up the site takes no more than a couple of hours. Afterwards your colleague suggests going for lunch. You go out for a hamburger with him and along the way ask why the site's repository is called "Applejack". He explains that all the repositories here are named after characters from "My Little Pony". It doesn't really matter what you call the code; what matters is that it has a name. So they printed out a list of all the pony names from the cartoon, and every time they create a new project they simply pick the next one. It sounds silly, but this approach turns out to be incredibly convenient.
When the monitoring system sends you a notification saying a problem has been found in Applejack, rest assured you will immediately know which code it is talking about. Names are also very handy when, say, you are doing financial analysis and need a picture of how resources are distributed across projects. Programmers have the notion of a bounded context, the idea of splitting a system into domains, and names help you build similar structures. "Applejack", "Brandy Snap", "Cherry Jubilee": each refers to its own system and covers everything connected with it. Within a day the names stop amusing you, and you come to appreciate how convenient the scheme is.

UX: a bit of history

This is what my first computer looked like:

It was an Intel 286 running MS-DOS, of floppy-disk fame. When you switched it on, it first emitted a whole series of sounds, after which the following appeared on the screen:

You had a working computer on your hands. But from then on, whatever you tried to do, it most often answered the same way:

It was an expensive computer and, at the same time, an impenetrable concrete wall. Working with it was impossible: you endlessly stared at "A>", and it swore at every command you entered. So you bought a thick reference manual and looked up the exact syntax of each command there. Another popular PC of the time was the Macintosh, running System 7. When you switched it on, a small icon of a smiling computer appeared first:

Then, “Welcome to Macintosh” lights up:

The computer clicked and whirred for a while, after which startup finished and an interactive desktop appeared on the screen, full of things to click:

You had a mouse at hand. You quickly mastered it and could easily steer the cursor around the screen. You explored the desktop a bit: "File" - obviously for managing stored files; "Edit" - editing; "View" - viewing. The Mac embodied the idea of affordance (the idea of perceived possibility): the system demonstrates what it can do. Instead of digging through a reference manual, you could enjoy exploring the system: choosing from what is offered, trying out the possibilities. Sometimes people abuse this approach. We have features, they say, so let's add a button for each of them in the interface; that will be convenient. They end up with something like this:

As you can see, there are countless buttons and a small window in the middle where the user is supposed to write code. The bottom line is that such an interface is no better than the DOS one with its eternal "A>", because it is madness to surface every available function as a button. The user won't know where to start, what this or that button does, or how to work with any of it.

UX at its best

Speaking about UX, I always highly recommend watching this video:

I think many of you remember the first level of Portal 2. Not only is it funny and addictive; within a minute its authors have taught you how to move, how to interact with the environment, and which keys do what. That is how movement along a learning curve should feel!

Configurable functionality

Another example I'll give is Microsoft Edge, Microsoft's newer web browser. Suppose you are a programmer and you have just installed Edge. You open a page, right-click on it, and for some reason see only two options: "Select All" and "Print". How come, you think, I'm a software engineer, where are all my tools? You go to the three-dot menu and find "F12 Developer Tools" there. Next time you will spot this option quickly: it is memorably labeled with the "F12" hotkey. You enable it, and Edge tells you that the "Inspect element" and "View source" items will now appear in the context menu. The developers thought Edge's setup through well. As for the source code, as you can imagine, 99% of all people who use web browsers never open it in their lives. It's hard to picture your mother calling to say her Internet is broken and she has opened the source code but forgot what to do next. The first thing you would advise her is to close it. For a developer, though, this tool is very convenient: it lets you solve some problems in ten to fifteen seconds. So Edge, in its default configuration, does not burden you with its full range of functionality, and if you do need that functionality, you can enable it without any trouble.

Now let's talk about what we can add to our own code to improve the experience and workflow of those who use it. I assume you are all familiar with the idea of code completion: you type the name of an object, type a dot, and see a list of everything available for use.

You type Console.ForegroundColor, type an equals sign, and the system, realizing you are assigning to an enumerated type, offers you the available options:

The technology is called IntelliSense and it greatly eases a programmer's work. You don't need to consult a reference: just type a dot and choose from what's on offer. We all use this feature, but how many of us have tried to provide it in our own code, to help the people who will use it? I recently realized that despite 15 years of working with .NET, I still can't write SQL connection strings: I simply don't remember their syntax. When I declare a new SqlConnection, I'm told I need to pass in a string, but nothing is said about how to construct it.

I have to go to ConnectionStrings.com and look up the information there. As a solution, I created my little SQL.Connect library. I put a single class in it, defined a static method on it, and placed a piece of the documentation in the comment on that method. Now, the moment I need to specify a connection string, a syntax hint pops up for me. Problem solved. Implementing such a solution is very simple: you literally insert a piece of XML documentation on the method (a special kind of comment).
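A minimal sketch of the technique, with invented names (SqlHelpers.Sql.Connect is my own illustration, not a published package): the XML documentation comment on the method is exactly what IntelliSense displays as a tooltip the moment you open the parenthesis.

```csharp
using System;

namespace SqlHelpers
{
    public static class Sql
    {
        /// <summary>
        /// Builds a SQL Server connection string.
        /// Syntax reminder: Server=HOST;Database=DB;User Id=USER;Password=PASS;
        /// </summary>
        /// <param name="server">Host name or IP of the SQL Server instance.</param>
        /// <param name="database">Name of the database to open.</param>
        /// <param name="userId">SQL login name.</param>
        /// <param name="password">Password for the login.</param>
        public static string Connect(string server, string database,
                                     string userId, string password)
            => $"Server={server};Database={database};User Id={userId};Password={password};";
    }
}
```

Typing `Sql.Connect(` now brings the syntax reminder to the call site, so nobody has to switch to a browser just to recall the format.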

If you use Visual Studio, Visual Studio for Mac, Code Writer, or any other environment that supports documentation comments, your comment will be recognized and shown in a pop-up hint as soon as you open the parenthesis.

In that case you don't have to stop, switch to a browser window, and google the syntax; writing the code becomes much more convenient. But it's important to note that such solutions work well only when you know exactly what the user will do and how. Sometimes the scenario for the target user of your class library is strict, with no branching. But if things aren't that simple, you must take care to help the user: suggest which options are currently available and offer a convenient interface for choosing among them.


UX has a pattern called "signposting" (literally, road signs). Suppose you know that, while working with your application, the user stands at some crossroads, is changing some code, and so on. What can we do to show him the available options and resources? Let's talk about HTTP APIs, one of the best implementations of the signposting idea today. Why are web pages so easy for us to use? Because they are full of links, and links are powerful navigation tools. The links you see on a page show you where you can go. Click a link and you reach another page; didn't like the page, click "Back", find another link, click it, and you reach a new page. The web is such a comfortably explorable system thanks to competent signposting. These patterns can well be borrowed for our APIs. Over the past year my team and I have been working on a project originally conceived as a REST API with built-in hypermedia resources. We take some JSON and, using HAL (the Hypertext Application Language), insert links into it that point to the resources reachable from that page. At some point we needed tools to help us explore and debug what we had written, so we built them. Later we decided to publish these tools alongside our API's documentation so that our users could use them too if needed. One of these applications looks like this:

In theory, anyone with the necessary rights can go in, switch to sandbox mode, and then watch the main application at work. You can click the links inside the JSON and thus navigate between pages.

So, to explore our API, developers won't need to write their own programs or pore over documentation: they can explore the system by looking at it and interacting with it. We started precisely by supporting clicks, navigation, and moving from one resource to another. Later we added tools for interacting with hypermedia. Now you can select a resource, display it in a browser, and execute a PATCH, POST, PUT, or DELETE request right from there. Solutions like these, again, remove the need for documentation; you will consult it more like an engineering specification (when you need, say, the encoding table used for a particular field). If you are simply exploring the system and want to get an idea of how it works in order to implement your own solution on top of it, then I like the idea of just putting it on the open Internet so that people can play with it however they please.
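A hedged sketch of what such a HAL response might look like (the resource and link names here are invented for illustration): the `_links` object is the signposting, telling the client where it can go next without consulting any documentation.

```json
{
  "id": "applejack",
  "status": "active",
  "_links": {
    "self":     { "href": "/projects/applejack" },
    "releases": { "href": "/projects/applejack/releases" },
    "owner":    { "href": "/teams/web" }
  }
}
```

A client that understands HAL can render every `href` as a clickable link, which is exactly what makes an API explorable the way a website is.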

For the most part we have been talking about our interaction with other developers. But there is another large category of people who will work with our code: those who support the system. These are the engineers who advise customers over the phone, your back-office team, the operations teams of the companies you supply your product to, and so on. So I will devote the rest of the talk to monitoring and logging.


Scenario one

Morning; you are sitting at work. The phone rings. The person on the other end of the line states right away: "You have a problem with the system!" Great start to the day! You listen, take moderately useless notes, ask for contact details, and hang up. And you're left thinking: "A problem with the system? What does that even mean?" The phone rings again, and again it's "you have a problem with the system." You ask: "With which system, at least?" The client: "The one on the Internet! It doesn't work!" People keep calling and calling, but can't explain anything useful.

Time passes, and finally someone promises to send you a screenshot. You perk up: a picture should make your job easier! But what they send looks like this:

What are your chances of fixing this problem? The application's codebase is about 60,000 lines of code and probably some 5,000 HTTP requests. At 20 minutes to check each request, are you really going to spend six weeks on one bug? All you know is: "The request timed out." You know neither which subsystem failed nor why; the client's network may simply be down. Apparently you will have to work through all 5,000 requests after all. The prospect of a six-week quest pleases neither you nor your boss. But there is no other way: complaints keep coming in from customers.

Scenario two

You come to work in the morning. The calls haven't started yet. You walk up to a large screen hanging on the wall.

It displays the monitoring status of the various parts of the system running in production. Normally the fields are green. Today three of them glow red.

You see problems with the main site, the intranet, and the CDN. No calls yet, and no angry posts on Twitter either. That's good: users haven't run into the problems yet. You urgently get to work. Obviously the main site and the intranet cannot work while the CDN is down, so first of all you fix the CDN. You go into the relevant subsystem and start looking for the cause of the failure. Soon you find it: one of the certificates was either revoked or expired. That's it; the main work is done, and the further steps are clear. So you were able to identify, diagnose, and fix the problem before users ran into it. Meanwhile your team was also aware of what was happening and in control of the situation. The photo of the monitor, by the way, was taken in my London office. We built this system purely for convenience, so the team could see the current state of the system as they worked. And you know what side effect we noticed? Sometimes a colleague from another team walks into your room with some trivial question, such as whether you've chosen your dish for the upcoming Christmas lunch. Now, if someone comes in on such business and sees red fields on the screen, he leaves at once without asking, realizing this is not the time for his nonsense. Sometimes people even offered to bring us coffee and sandwiches. This screen unexpectedly raised our team spirit. And dealing with crashes, exceptions, and other production problems became much easier.

Balance required

Just as with interfaces, your monitoring screens must not be overloaded with information. On the one hand, the system should not be a single light bulb that merely turns red when a problem appears somewhere; such monitoring is practically useless. On the other hand, you cannot display everything or almost everything; that is just as unhelpful.

You must understand that some small part of your live system will always be working not as planned. According to people from NASA, when the Saturn V, a system with about 6,000,000 moving parts, was launched, 99.9% of them worked well. That means about 6,000 parts failed, yet the flight was considered a success. In any sufficiently complex engineered system, at any given moment something will always be working slowly or incorrectly. Some node in one of your clusters is bound to slow down because one of its databases decided to reindex. When choosing what data about the system's state to display, you must find a balance: show not too little, but not too much.

Cars have an excellent monitoring system. When fuel runs low, one light comes on; when there is a brake problem, another. In total you monitor about six indicators, each responsible for its own subsystem. Having received a signal, you then work out what action to take. Now let's digress for a minute and recall the old Star Trek series.

The engine of the starship Enterprise, which the crew worked on, was installed in the very center of the engine room and was thus visible to everyone. Its housing was transparent: everything usually hidden from us behind a dashboard was, on the contrary, in plain view. The show's creators did this for a reason: the engine was to become part of the setting and, in a sense, part of the narrative. Try to make your application the same. You can create dashboards that pull back the curtain for your users and give them the chance to survey the system and what is happening inside it.


The API exploration tool I demonstrated earlier is powered on the backend by Nancy, a framework for building HTTP APIs on .NET.

If you have a production system, you simply append the URI "/_Nancy/" and, after entering your credentials, land in a personal dashboard.

You see the current state of your live application. This is no longer a site you run locally through Visual Studio; here you observe the application in real conditions, and you can inspect any configuration, components, or registered services. This tool is especially useful when you need to diagnose a problem urgently. With Nancy you can see all the routes compiled into the application. In the code you may have given the program quite definite directions, but there is always a chance something went wrong: perhaps a case-sensitivity issue somewhere, a missed piece of code, and so on. You can enable special tracing and watch all requests, including successful ones. For example, ask it to display all requests arriving at the production system over the next 30 seconds, and then examine any of them in detail (headers, payload, authentication details, and the like). It is a very powerful and convenient tool. Organizing monitoring properly matters a great deal. For some time now we have constantly thought about how best to give access to application state to the people who work with it, what is important to show about its current operation, and so on. But displaying the current state of the system is only one side of the coin. The other is logging.


You come to work and see a red alert on the big wallboard screen: the database server's CPU, it turns out, is running at 100%.

But that is nothing more than a point on a chart. It does not reveal the cause. I know at least three scenarios in which a database server's CPU can climb to 100%:

1. The load grew gradually and steadily, and only after a long time did it reach the maximum and start causing failures.

2. Another scenario looks like this:

This pattern of resource consumption is very characteristic of systems that regularly perform large backups. We had exactly this case in our practice: our server backed up the database every night, and the amount of data grew every day. Each successive backup was proportionally larger until, one day, it crossed a critical threshold and consumed the entire CPU; the server went down right in the middle of the night.

3. A third pattern of server behavior:

We observed this scenario more often than the others: during development everything goes fine, and CPU load spikes the moment you deploy the project to production. As you can see, the bare fact of CPU overload is not enough by itself. To take any action, you first need some picture of what happened, starting with the time span the chart covers: are you looking at the last 5 hours or the last six months?

If you do not know what happened to your system last week, it is hard to judge how normal today looks. Your application, its infrastructure, and its operating conditions are unique; the only baseline you can compare against is the picture of the system before the critical event.


Here is a status chart of one of our production systems, displayed by Redgate SQL Monitor.

It shows our customers' activity in the system over 24 hours. Most users log in at around 10 am, use the system actively for several hours, take a lunch break, and then continue working until 6 pm. If we find a problem, the first thing we do is look at this chart: does it differ from the one a week ago, or two weeks ago? Redgate lets you overlay charts for different time periods, which gives us a very convenient diagnostic tool: we can detect a problem, its time frame, and what it might correlate with (the latest deployment, a load spike, and so on).

In addition, the graphs let us see trends. Looking at them, we can predict, for example, when we will need to grow the database. It is much better to be able to predict and plan that kind of spending: the fact that you need money for extra hardware or for the Amazon cloud is far more pleasant to learn in advance than when the thunder strikes.

Application Logging

Now let's talk about logging your application: how to write code so that those who use it can more easily understand what happened. The customary logging levels for .NET are five: FATAL, ERROR, WARN, INFO, and DEBUG.

The first and most important rule: do not keep the logs inside the main system. Logs must not be written to drive C of the production machine, or it will quickly fill up; ship them out to a separate logging system. Investing in such a system matters, especially when several teams work on the projects. You can use ready-made systems such as Graphite, Splunk, or Logstash; there are SaaS solutions such as Graylog and Logit.io; the system may also be your own development.

But as soon as the team has a shared logging system, it turns out that everyone writes to the logs in their own way: one person adds an INFO message every 10 seconds, another writes FATAL every time a user forgets a password. For the combined logs to yield anything intelligible, you need to establish rules: a clear, memorable logic for the situations in which each level is used. Below is my version.


Fatal level: the application is not responding, many users are affected, and immediate attention is required. Not every part of your system should be able to log at this level: if an application is responsible for a low-priority task, its errors can no longer be considered fatal. A job that runs overnight to generate the seating arrangement for an upcoming dinner should not write FATAL to the log even on complete failure. FATAL implies a reason to wake the team in the middle of the night; anything that can wait until morning hardly belongs here.


Errors and warnings: an API exceeding its response timeout, a deadlock in database operations, and so on. ERROR is something that someone did notice, but which the system may still try to fix on its own (for example, by retrying). WARN is something that nobody noticed. If you request an exchange rate through an API and the request exceeds the allowed waiting time, you can quite happily use the cached value for the next 10 minutes or even an hour. Of course, if the API stays down for a long time, you escalate the level of the log message.
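The exchange-rate example might be sketched like this, using Serilog's static logger. All names, values, and thresholds here are illustrative, not from the talk: the point is only that an absorbed timeout is a WARN, and it becomes an ERROR once the cached fallback is too stale.

```csharp
using System;
using Serilog;

// Sketch: a timeout we can absorb with a cached value is a WARN;
// escalate to ERROR once the outage lasts long enough that the
// fallback itself becomes stale. Names are invented for illustration.
public static class ExchangeRates
{
    private static decimal cachedRate = 1.31m;          // last good value
    private static DateTime cachedAt = DateTime.UtcNow; // when we got it

    public static decimal GetUsdGbpRate(Func<decimal> fetchFromApi)
    {
        try
        {
            cachedRate = fetchFromApi();
            cachedAt = DateTime.UtcNow;
        }
        catch (TimeoutException)
        {
            var age = DateTime.UtcNow - cachedAt;
            if (age < TimeSpan.FromHours(1))
                Log.Warning("Rate API timed out; using cached rate {Rate} ({Age} old)", cachedRate, age);
            else
                Log.Error("Rate API down for {Age}; cached rate {Rate} is stale", age, cachedRate);
        }
        return cachedRate;
    }
}
```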


Informational messages report that everything in the system is in order. If you come back from a four-day holiday weekend and see not a single entry in the logs, you have two hypotheses: either the system really did run perfectly, without a single error or warning, or your logging system crashed at the very start of the weekend. The second is, of course, far more likely. INFO messages must be in the log. They report things like cache flushes, servers being turned on and off, elastic scaling, and load balancing. These messages exist so you can see that the system is working and feel its pulse; there is no need to generate thousands of them, one message a minute is enough.


I would really like you to take one practical thing away from this lecture: the next time you are debugging code and about to write Console.WriteLine, pause, install log4net, NLog, or Serilog, and write that debugging output to the log instead. When the system goes to production, you simply switch those messages off (one extra flag in your configuration file is enough). The important thing is that all that debugging output can always be brought back if someone needs to work with your code again.
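With Serilog, that "one flag" can be a level switch: the call sites stay exactly as they are, and only the minimum level changes between development and production. A minimal sketch, assuming the Serilog and Serilog.Sinks.Console packages; how `isProduction` is read from configuration is up to you.

```csharp
using Serilog;
using Serilog.Core;
using Serilog.Events;

// Sketch: instead of Console.WriteLine, write through a logger whose
// minimum level is controlled by a single switch. Flip it to Information
// in production and the DEBUG chatter disappears without touching call sites.
public static class Logging
{
    public static readonly LoggingLevelSwitch Level =
        new LoggingLevelSwitch(LogEventLevel.Debug);

    public static void Configure(bool isProduction)
    {
        if (isProduction)
            Level.MinimumLevel = LogEventLevel.Information; // the "one flag"

        Log.Logger = new LoggerConfiguration()
            .MinimumLevel.ControlledBy(Level)
            .WriteTo.Console()
            .CreateLogger();

        Log.Debug("Cache warmed with {Count} entries", 42); // visible only in dev
    }
}
```

Because the switch is live, the debugging output can also be turned back on at runtime, which is exactly what the person diagnosing a 3 a.m. outage will thank you for.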

At the DEBUG level you can and should record everything: method entry, execution time, return values, recursion, callbacks, and so on. All of this matters when debugging complex algorithms. There is a class of bug that does not appear until you launch the product in production, and then appears suddenly, at 3 a.m. When that happens, for the people who have to bring the system back up urgently, the ability to switch your debugging output back on will be, if not a lifesaver, then certainly no less useful than it once was for you. They will be able to see your logic, and that will help them figure things out faster.
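The kind of detail worth emitting at DEBUG might look like this; the method and its business rule are invented purely for illustration.

```csharp
using System.Diagnostics;
using Serilog;

// Sketch: DEBUG records method entry, the arguments, the execution
// time, and the return value, so someone replaying a failure can
// follow the logic without reading the source first.
public static class Pricing
{
    public static decimal CalculateDiscount(decimal total, int items)
    {
        Log.Debug("CalculateDiscount({Total}, {Items}) starting", total, items);
        var sw = Stopwatch.StartNew();

        var discount = items > 10 ? total * 0.05m : 0m; // illustrative rule

        sw.Stop();
        Log.Debug("CalculateDiscount returned {Discount} in {Elapsed} ms",
                  discount, sw.ElapsedMilliseconds);
        return discount;
    }
}
```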

Other names?

In my opinion, the reason developers confuse the logging levels is their bad names, which say nothing. I really liked the alternative proposed by Daniel Lebrero on his blog. In a slightly adapted form, it looks like this:

In conclusion: the rules of the "happy" code

  1. Names. Remember Applejack and the names from My Little Pony. If you do not like ponies, let them be brands of malt whisky, city names, whatever you like. Come up with your own scheme and start using names: they will greatly help you and your team in everything concerning the separation of contexts and interface boundaries.
  2. Learning Curves. Try to make them smooth. A curve can be steep (for those ready to learn your system overnight) or gentle (for those unwilling to overwork). Avoid dips: do not create your own version of ASP.NET WebForms, where people have to be trained and then retrained. If possible, keep the curve free of cliffs, places so complex that the user gets stuck on them. To that end, consider every point where the user can fail and think through the error messages: give the user maximum support.
  3. Signposting. Analyze what actions your application offers the user and what choices they have at each step. If, for example, something needs configuring, provide a fluent interface whose design visibly shows the available options, so that after typing "Database" followed by a dot, the user sees a list: "Encryption", "Credentials", "Timeout", and so on. Demonstrate to the user what your system can do.
  4. Transparency. Show people what's under the hood. Give access to dashboards and logs, expose metrics. Collect and share all the information you used during development: since it was useful to you while creating the system, it may well prove useful later.
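The "Database" example from the signposting rule can be sketched as a fluent builder: each method returns the builder, so autocomplete shows exactly what can come next. Every name here is invented for illustration.

```csharp
// Sketch of a fluent configuration interface: typing a dot after any
// step makes IntelliSense list the remaining options, which is the
// signposting the talk describes.
public class DatabaseConfig
{
    public DatabaseConfig Encryption(bool enabled) { /* record setting */ return this; }
    public DatabaseConfig Credentials(string user, string password) { /* record setting */ return this; }
    public DatabaseConfig Timeout(int seconds) { /* record setting */ return this; }
}

// Usage: the chain reads like a sentence, and the IDE shows the choices.
// new DatabaseConfig().Encryption(true).Credentials("app", "pw").Timeout(30);
```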

And most importantly, remember: when you develop, you largely define the user experience. Whatever development tools you create, someone will use them, and it depends only on you whether that user goes home tired and upset or happy and energetic, ready to return with pleasure and continue work tomorrow. Do what you can for their happiness and pass the baton: your happy users will start creating good products themselves.