GraphQL for web development

GraphQL solves similar problems. With GraphQL, instead of a lot of "dumb" endpoints, you have one smart endpoint that can handle complex queries and return data in exactly the shape the client requests.
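As a sketch of the idea: with REST you might call `/user/42` and then `/user/42/orders`; with GraphQL the client sends one query describing the exact shape it wants, and the response mirrors that shape. All names below (`user`, `orders`, `total`) are illustrative, not from any real API.

```javascript
// The client, not the server, decides which fields come back.
const query = `
  query {
    user(id: 42) {
      name
      orders { id total }
    }
  }
`;

// A hypothetical server would answer with data mirroring the query's shape:
const response = {
  data: {
    user: {
      name: "Alice",
      orders: [{ id: 1, total: 100 }],
    },
  },
};
```

Note that adding a field to the UI means adding a line to the query, not a new endpoint to the backend.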

At the same time, GraphQL can work on top of REST, i.e. the data source is not a database but a REST API. Thanks to GraphQL's declarative nature, and to how well it fits with React and Redux, your client becomes simpler.

In fact, GraphQL seems to me to be an implementation of BFF with its own protocol and strict query language.

This is an excellent solution, but it has several drawbacks, in particular with typing and with access control, and in general it is a relatively young approach. That's why we haven't switched to it yet, but in the future it seems to me the best way to build an API.

Best Friends Forever

No technical solution will work correctly without organizational changes. You still need documentation, guarantees that the response format will not suddenly change, etc.

At the same time, you need to understand that we are all in the same boat. To an abstract customer, whether a manager or your supervisor, it largely doesn't matter whether you have GraphQL or a BFF in there. What matters to him is that the task is solved and no errors pop up in production. To him there is no particular difference whether the error was the front's fault or the back's. Therefore, you need to come to an agreement with the backenders.

In addition, the flaws of the backend that I mentioned at the beginning of the talk do not always arise from someone's malice. It is quite possible that the fesh parameter has some meaning too.

Pay attention to the commit date. It turns out that fesh recently celebrated its seventeenth anniversary.

Do you see the strange identifiers on the left? Those are SVN revisions, simply because Git did not exist in 2001: not GitHub the service, but Git the version control system. It appeared only in 2005.

Documentation

So, what we need is not to quarrel with the backenders but to come to an agreement. That is only possible if we find a single source of truth, and that source should be the documentation.

The most important thing here is to write the documentation before we start working on the functionality. As with a prenuptial agreement, it is better to settle everything up front.

How does it work? Roughly speaking, three people get together: a manager, a frontender and a backender. The frontender is well versed in the subject area, so his participation is critically important. They get together and start designing the API: which paths there will be, which responses should be returned, down to the names and formats of the fields.

Swagger

A good option for API documentation is the Swagger format, now called OpenAPI. It is better to write Swagger in YAML: unlike JSON, it is easier for a person to read, and for a machine there is no difference.
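As a sketch, here is what a minimal OpenAPI spec in YAML might look like for a single endpoint. The path and field names are illustrative, not taken from any real contract:

```yaml
openapi: "3.0.0"
info:
  title: Example API        # illustrative name
  version: "1.0.0"
paths:
  /users/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: A single user
          content:
            application/json:
              schema:
                type: object
                required: [id, name]
                properties:
                  id:
                    type: integer
                  name:
                    type: string
```

Everything the front and back agreed on, down to which fields are required, is spelled out here and can be reviewed like any other code.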

As a result, all agreements are recorded in Swagger format and published to a shared repository. The documentation for the production backend should live in master.

Master is protected from direct commits: code gets into it only through pull requests, you cannot push to it. A representative of the front team is obliged to review each pull request, and without his approval the code does not get into master. This protects you from unexpected API changes without prior notice.
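On GitHub, for example, this can be enforced mechanically with branch protection rules plus a CODEOWNERS file, so the review requirement does not depend on anyone's memory. The paths and team name below are illustrative:

```
# .github/CODEOWNERS (GitHub syntax; org and team names are illustrative)
# Any change to the API specs requires review from the front team.
/docs/api/  @our-org/frontend-team
```

With "Require review from Code Owners" enabled on the master branch, a pull request touching the spec simply cannot be merged without the front team's approval.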

So, you got together, wrote the Swagger spec, and thereby effectively signed a contract. From this moment, as a frontender, you can start working without waiting for the real API to appear. After all, what was the point of splitting into client and server if we cannot work in parallel and client developers have to wait for server developers? If we have a "contract", we can safely parallelize the work.

Faker.js

Faker is perfect for these purposes. It is a library for generating huge amounts of fake data of different types: dates, names, addresses, etc. All of it is well localized, and there is support for Russian.
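To show the idea without pulling in the library itself, here is a tiny hand-rolled stand-in for what Faker does: picking plausible values from word lists. The real library (today published as `@faker-js/faker`) ships thousands of entries per locale; everything below is illustrative.

```javascript
// A toy fake-data generator in the spirit of Faker.
// Names and cities are illustrative samples, not Faker's real data.
const firstNames = ["Anna", "Ivan", "Olga", "Pavel"];
const cities = ["Moscow", "Kazan", "Tver", "Omsk"];

const pick = (arr) => arr[Math.floor(Math.random() * arr.length)];

function fakeUser(id) {
  return {
    id,
    name: pick(firstNames),
    city: pick(cities),
    // a random registration date some time in the recent past
    registered: new Date(Date.now() - Math.random() * 1e10).toISOString(),
  };
}

const user = fakeUser(1);
```

With the real Faker you would call things like name and address generators for the locale you need instead of maintaining word lists by hand.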

Faker also plays well with Swagger: you can easily spin up a mock server that, based on the Swagger schema, generates fake responses for you on the right paths.

Validation

Swagger can be converted into a JSON Schema, and with tools such as ajv you can validate backend responses right at runtime, in your BFF, and in case of discrepancies report them to testers, to the backenders themselves, etc.
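To show what such runtime validation amounts to, here is a deliberately tiny validator that only checks required fields and primitive types. In production you would compile the JSON Schema derived from Swagger with ajv instead; the schema and payloads below are illustrative.

```javascript
// A toy subset of JSON Schema validation: required fields + primitive types.
// Real-world code would use ajv with the full schema from the Swagger spec.
const userSchema = {
  required: ["id", "name"],
  types: { id: "number", name: "string" },
};

function validate(schema, payload) {
  const errors = [];
  for (const field of schema.required) {
    if (!(field in payload)) errors.push(`missing field: ${field}`);
  }
  for (const [field, type] of Object.entries(schema.types)) {
    if (field in payload && typeof payload[field] !== type) {
      errors.push(`wrong type for ${field}: expected ${type}`);
    }
  }
  return errors;
}

// A backend response that violates the "contract":
// "name" is missing and "id" arrived as a string.
const errors = validate(userSchema, { id: "42" });
```

When `errors` is non-empty, the BFF can log the violation and notify whoever owns the contract, instead of letting a malformed response silently break the UI.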

Let's say a tester finds a bug on the site: for example, nothing happens when a button is clicked. What does the tester do? Files a ticket on the frontender: "it's your button, it doesn't respond, fix it."

If there is a validator between you and the backend, the tester will know that the button actually works; it is just that the backend sends a wrong response. A wrong response is one the front does not expect, i.e. one that does not match the "contract". At that point you either fix the back or change the contract.

Conclusions

We take an active part in API design. We design the API so that it is still convenient to use 17 years later.

We require Swagger documentation. If there is no documentation, the backend's work is not done.

Once there is documentation, we publish it in Git, and any change to the API interface must be approved by a representative of the front team.

We spin up a mock server and start working on the front without waiting for the real API.

We put a Node.js layer under the frontend and validate all responses there. As a bonus, we get the ability to aggregate, normalize and cache data.
