Please contact the owner of the site that linked you to the original URL and let them know their link is broken.
+
+
+
+
\ No newline at end of file
diff --git a/docs/static/CNAME b/CNAME
similarity index 100%
rename from docs/static/CNAME
rename to CNAME
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
deleted file mode 100644
index 6844fff5..00000000
--- a/CONTRIBUTING.md
+++ /dev/null
@@ -1,184 +0,0 @@
-# How do we work together?
-
-## You belong with us
-
-If you've reached this page, you probably belong with us 💜. We are on an ongoing quest for better software practices. This journey can bring two benefits your way: a lot of learning, and a global impact on many people's craft. Does this sound attractive?
-
-## Consider the shortened guide first
-
-Every small change can make this repo much better. If you intend to contribute a relatively small change like a documentation fix, a small code enhancement or anything that is small and obvious - start by reading the [shortened guide here](/docs/docs/contribution/contribution-short-guide.md). As you expand your engagement with this repo, it might be a good idea to visit this long guide again
-
-
-## Philosophy
-
-Our main selling point is our philosophy: 'make it SIMPLE'. There is one really important holy grail in software - speed. The faster you move, the more features and value are created for the users. The faster you move, the more improvement cycles are deployed, and the software/ops become better. [Research shows](https://puppet.com/resources/report/2020-state-of-devops-report) that faster teams produce software that is more reliable. Complexity is the enemy of speed - commonly apps are big, sophisticated, have a lot of internal abstractions and demand long training before one becomes productive. Our mission is to minimize complexity and get onboarded developers up to speed quickly, or in simple words - let the reader of the code understand it in a breeze. If you make simplicity a first principle - great things will come your way.
-
-
-
-Big words, but how exactly? Here are a few examples:
-
-**- Simple language -** We use TypeScript because we believe in types, but we minimize advanced features. This boils down to using mostly plain functions, sometimes also classes. No abstract classes, generics, complex types or anything that demands more CPU cycles from the reader.
-
-**- Less generic -** Yes, you read that right. If you can code a function that covers fewer scenarios but is shorter and simpler to understand - consider this option first. Sometimes one is forced to make things generic - that's fine, at least we minimized the number of complex code locations
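-
-To illustrate (a hypothetical example, not code from this repo) - the specific version reads in a breeze, while the generic one demands more effort from the reader:
-
-```ts
-type Order = { id: number; totalPrice: number };
-
-// Specific: covers fewer scenarios, but anyone can follow it
-function getOrderById(orders: Order[], id: number): Order | undefined {
-  return orders.find((order) => order.id === id);
-}
-
-// Generic: covers more scenarios at the cost of more CPU cycles from the reader
-function getEntityById<T extends { id: number }>(entities: T[], id: number): T | undefined {
-  return entities.find((entity) => entity.id === id);
-}
-```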
-
-**- Simple tools -** Need some 3rd party library for a task? Choose the library that does the minimal amount of work. For example, when seeking a library that parses JWT tokens - avoid picking a super-fancy framework that can solve any authorization path (e.g., Passport). Instead, opt for a library that does exactly this. This results in code that is simpler to understand and a reduced bug surface
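-
-As a sketch, assuming the popular `jsonwebtoken` npm package (illustrative, not this repo's code):
-
-```ts
-import jwt, { JwtPayload } from 'jsonwebtoken';
-
-// One focused library, one focused task: verify and decode a JWT
-function verifyToken(token: string, secret: string): string | JwtPayload {
-  // Throws if the token is invalid or expired - no full-blown framework needed
-  return jwt.verify(token, secret);
-}
-```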
-
-**- Prefer Node/JavaScript built-in tooling -** Some new frameworks have abstractions over standard tooling. They have their own way of defining modules, libraries and other concepts, which demands learning one more concept and being exposed to an unnecessary layer of bugs. Our preferred way is the vanilla way: if it's part of JavaScript/Node - we use it. For example, should we need to group a bunch of files as a logical module - we use ESM to export the relevant files and functions
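-
-For example, a minimal `index.ts` entry file (the file and function names are hypothetical):
-
-```ts
-// Group the folder's files into one logical module using plain ESM re-exports
-export { createOrder, deleteOrder } from './order-service';
-export { validateOrder } from './order-validator';
-
-// Consumers now import from the module root:
-// import { createOrder } from './order';
-```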
-
-
-
-
-
-## Workflow
-
-### Got a small change? Choose the fast lane
-
-Every small change can make this repo much better. If you intend to contribute a relatively small change like a documentation change, linting rules, look & feel fixes, typo fixes, comments or anything that is small and obvious - just fork to your machine, code, ensure all tests pass (e.g., `npm test`), open a PR with a meaningful title, and get **1** approval before merging. That's it.
-
-
-
-### Need to change the code itself? Here is a typical workflow
-
-| | **➡️ Idea** | **➡️ Design decisions** | **➡️ Code** | **➡️ Merge** |
-|---|---|---|---|---|
-| **When** | Got an idea how to improve? Want to handle an existing issue? | When the change implies some major decisions, those should be discussed in advance | When a core maintainer has confirmed that the design decisions are sensible | When you have accomplished a *short iteration*. If the whole change is small, open the PR at the end |
-| **What** | **1.** Create an issue (if one doesn't exist) **2.** Label the issue with its type (e.g., question, bug) and the area of improvement (e.g., area-generator, area-express) **3.** Comment and specify your intent to handle this issue | **1.** Within the issue, specify your overall approach/design, or just open a discussion **2.** If choosing a 3rd party library, ensure to follow our standard decision and comparison template. [An example can be found here](./docs/decisions/configuration-library.md) | **1.** Do it with passion 💜 **2.** Follow our coding guide. Keep it simple. Stay loyal to our philosophy **3.** Run all the quality measures frequently (testing, linting) | **1.** Share your progress early by submitting a [work in progress PR](https://github.blog/2019-02-14-introducing-draft-pull-requests/) **2.** Ensure all CI checks pass (e.g., testing) **3.** Get at least one approval before merging |
-
-## Roles
-
-
-## Project structure
-
-### High-level sections
-
-The repo has 3 root folders that represent what we do:
-
-- **docs** - Anything we write to make this project super easy to work with
-- **code-generator** - A tool with great DX to choose and generate the right app for the user
-- **code-templates** - The code that we generate with the right patterns and practices
-
-```mermaid
-%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#99BF2C','secondaryColor':'#C2DF84','lineColor':'#ABCA64','fontWeight': 'bold', 'fontFamily': 'comfortaa, Roboto'}}}%%
-graph
- A[Practica] -->|How we create apps| B(Code Generators)
- A -->|The code that we generate!| C(Code Templates)
- A -->|How we explain ourself| D(Docs)
-
-
-```
-
-### The code templates
-
-Typically, the two main sections are the Microservice (apps) and cross-cutting-concern libraries:
-
-```mermaid
-%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#99BF2C','secondaryColor':'#C2DF84','lineColor':'#ABCA64','fontWeight': 'bold', 'fontFamily': 'comfortaa, Roboto'}}}%%
-graph
- A[Code Templates] -->|The example Microservice/app| B(Services)
- B -->|Where the API, logic and data lives| D(Example Microservice)
- A -->|Cross Microservice concerns| C(Libraries)
- C -->|Explained in a dedicated section| K(Multiple libraries, like logger)
- style D stroke:#333,stroke-width:4px
-
-
-```
-
-**The Microservice structure**
-
-
-The entry point of the generated code is an example Microservice that exposes an API and has the traditional layers of a component:
-
-```mermaid
-%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#99BF2C','secondaryColor':'#C2DF84','lineColor':'#ABCA64','fontWeight': 'bold', 'fontFamily': 'comfortaa, Roboto'}}}%%
-graph
- A[Services] -->|Where the API, logic and data lives| D(Example Microservice)
- A -->|Almost empty, used to exemplify Microservice communication| E(Collaborator Microservice)
- D -->|The web layer with REST/Graph| G(Web/API layer)
- N -->|Docker-compose based DB, MQ and Cache| F(Infrastructure)
- D -->|Where the business lives| M(Domain layer)
- D -->|Anything related with database| N(Data-access layer)
- D -->|Component-wide testing| S(Testing)
- style D stroke:#333,stroke-width:4px
-```
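-
-To make this flow concrete, here is a minimal, hypothetical sketch of how the layers might call each other (all names are illustrative, not the generated code):
-
-```ts
-type Order = { id: number; price: number };
-
-// Data-access layer: anything related with the database
-async function getOrderFromDb(id: number): Promise<Order | null> {
-  return { id, price: 100 }; // a real implementation would query the DB via the ORM
-}
-
-// Domain layer: where the business lives
-async function getOrder(id: number): Promise<Order> {
-  const order = await getOrderFromDb(id);
-  if (!order) {
-    throw new Error(`Order ${id} was not found`);
-  }
-  return order;
-}
-
-// Web/API layer: a route handler that only delegates to the domain, e.g.:
-// app.get('/order/:id', async (req) => getOrder(Number(req.params.id)));
-```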
-
-**Libraries**
-
-All libraries are independent npm packages that can be tested in isolation.
-
-```mermaid
-%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#99BF2C','secondaryColor':'#C2DF84','lineColor':'#ABCA64','fontWeight': 'bold', 'fontFamily': 'comfortaa, Roboto'}}}%%
-graph
- A[Libraries] --> B(Logger)
- A[Libraries] --> |Token-based auth| C(Authorization)
- A[Libraries] --> |Retrieve and validate the configuration| D(Configuration)
- A[Libraries] --> E(Error handler)
- A[Libraries] --> F(MetricsService)
- A[Libraries] --> Z(More to come...)
- style Z stroke:#333,stroke-width:4px
-```
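-
-Because each library is a standalone package, a test can import it directly - no app bootstrap, no DB. A minimal sketch (the package and API names are hypothetical):
-
-```ts
-import assert from 'assert';
-import { logger } from '@practica/logger'; // hypothetical package name
-
-// The package is imported and exercised directly, in isolation
-assert.strictEqual(typeof logger.info, 'function');
-logger.info('hello from an isolated test');
-```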
-
-### The code generator structure
-
-## Packages (domains)
-
-This solution is built around independent domains that share _almost_ nothing with others. It is recommended to start by understanding a single, small domain (package), then expand and get acquainted with more. This is also an opportunity to master a specific topic that you're passionate about. Following is our package list; choose where you wish to contribute first
-
-
-
-
-
-| **Package** | **What** | **Status** | **Chosen libs** | **Quick links** |
-|---|---|---|---|---|
-| microservice/express | A web layer of an example Microservice based on expressjs | 🧓🏽 Stable | - | - [Code & readme]() - [Issues & ideas]() |
-| microservice/fastify | A web layer of an example Microservice based on Fastify | 🐣 Not started (take the helm, open an issue) | - | - [Code & readme]() - [Issues & ideas]() |
-| microservice/dal/prisma | A DAL layer of an example Microservice based on Prisma.js | 🐥 Beta/skeleton | - | - [Code & readme]() - [Issues & ideas]() |
-| library/logger | A logging library wrapper | 🐥 Beta/skeleton | Why: [Decision here](https://github.com/bestpractices/practica/blob/main/docs/decisions/configuration-library.md) | - [Code & readme]() - [Issues & ideas]() |
-| library/jwt-based-authentication | A library that authenticates requests with a JWT token | 🧓🏽 Stable | [jsonwebtoken](https://www.npmjs.com/package/jsonwebtoken) Why: [Decision here](https://github.com/bestpractices/practica/blob/main/docs/decisions/configuration-library.md) | - [Code & readme]() - [Issues & ideas]() |
-
-
-
-## Development machine setup
-
-✅ Ensure Node, Docker and [NVM](https://github.com/nvm-sh/nvm#installing-and-updating) are installed
-
-✅ Configure GitHub and npm 2FA!
-
-✅ Clone the repo if you are a maintainer, or fork it if you have no collaborator permissions
-
-✅ With your terminal, ensure the right Node version is installed:
-
-```
-nvm use
-```
-
-✅ Install dependencies:
-
-
-```
-npm i
-```
-
-✅ Ensure all tests pass:
-
-```
-npm t
-```
-
-✅ Code. Run the tests. And vice versa
-
-
-## Areas to focus on
-
-
-
-
-## Supported Node.js version
-
-- The generated code should be compatible with Node.js versions >14.0.0.
-- It's fair to demand an LTS Node.js version from the repository maintainers (for the generator code)
-
-
-## Code structure
-
-Soon
diff --git a/LICENSE b/LICENSE
deleted file mode 100644
index 15dba040..00000000
--- a/LICENSE
+++ /dev/null
@@ -1,21 +0,0 @@
-MIT License
-
-Copyright (c) 2022 bestpractices
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
diff --git a/README.md b/README.md
deleted file mode 100644
index 14a66c1c..00000000
--- a/README.md
+++ /dev/null
@@ -1,278 +0,0 @@
-
-
-
-
-### Generate a Node.js app that is packed with best practices AND simplicity in mind. Based on our repo [Node.js best practices](https://github.com/goldbergyoni/nodebestpractices) (96,100 stars)
-
-
-
-
- [Twitter](https://twitter.com/nodepractices) |  [Documentation site](https://practica.dev/)
-
-
-
-
-# A One Paragraph Overview
-
-Although Node.js has great frameworks 💚, they were never meant to be dev & production ready immediately (e.g., no architecture layers, DB schemas, Dockerfile, etc.). Practica.js aims to bridge this gap. Based on your preferred framework, we generate example code that demonstrates a full Microservice flow, from API to DB, that is packed with good practices. For example, we include a battle-tested error handler, sanitized API responses, a hardened Dockerfile, a thoughtful 3-tier folder structure, great testing templates with a DB, and more. This saves a great deal of time and can prevent painful mistakes. All decisions made are [neatly and thoughtfully documented](https://practica.dev/decisions). We strive to keep things as simple and standard as possible and base our work on the popular guide [Node.js Best Practices](https://github.com/goldbergyoni/nodebestpractices)
-
-**1 min video 👇, ensure audio is activated**
-
-
-
-https://user-images.githubusercontent.com/8571500/170464232-43355e43-98cf-4069-b9fc-6bc303a39efc.mp4
-
-
-
-
-# `Table of Contents`
-
-- [`Super-Quick Setup`](#super-quick-setup)
-- [`Our Philosophies and Unique Values`](#our-philosophies-and-unique-values)
-- [`Practices and Features`](#practices-and-features)
-- [`The People Behind Practica.js`](#the-people-behind-practicajs)
-- [`Our best practices guide, 78,000 stars ✨`](https://github.com/goldbergyoni/nodebestpractices)
-- [`Contribution guide`](https://github.com/practicajs/practica/blob/main/CONTRIBUTING.md)
-- [`Documentation site`](https://practica.dev/)
-- [`YouTube`](https://www.youtube.com/channel/UCKrSJ0-jm7YVTM_hO7Me9eA)
-- Coming Soon:
- - Example Applications
- - [Express, PostgreSQL, with common best practices](https://github.com/practicajs/practica/blob/main/docs/not-ready-yet.md)
- - [Express, mongo-db, with common best practices](https://github.com/practicajs/practica/blob/main/docs/not-ready-yet.md)
- - [Express, PostgreSQL, with all best practices (advanced)](https://github.com/practicajs/practica/blob/main/docs/not-ready-yet.md)
- - [Minimal with project setup configuration only](https://github.com/practicajs/practica/blob/main/docs/not-ready-yet.md)
-  - More Flavours
- - Fastify, PostgreSQL
- - Fastify, mongo-db
- - Generate Your Own Interactively
- - More coming soon
-
-
-
-
-# Super-Quick Setup
-
-
-
-### Run Practica.js from the Command Line
-
-
-Run the Practica CLI and generate our default app (you can customize it using different flags):
-
-```bash
-npx @practica/create-node-app immediate --install-dependencies
-```
-
-✨ And you're done! That's it, the code has been generated. Our default setup includes Fastify for the web layer, Sequelize for data access, and PostgreSQL as the database
-
-Prefer Express and Prisma? Just pass the right flags to the CLI:
-
-```bash
-npx @practica/create-node-app immediate --install-dependencies --web-framework=express --orm=prisma
-```
-
-Prefer another DB? We use standard ORMs; read their docs to switch the DB. This is your code, do whatever you like
-
-
-
-
-### Start the Project
-
-```bash
-cd {your chosen folder name}
-npm install
-```
-
-Then choose whether to start the app:
-
-```bash
-npm start
-```
-
-or run the tests:
-
-```bash
-npm test
-```
-
-Pretty straightforward, right?
-
-You just got a Node.js Monorepo solution with one example component/Microservice and multiple libraries. Based on this hardened solution you can build a robust application. The example component/Microservice is located under *{your chosen folder name}/services/order-service*. This is where you'll find the API and a good spot to start your journey from
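-
-A rough sketch of the generated layout (assuming the default names):
-
-```
-{your chosen folder name}/
-├── services/
-│   └── order-service/   <- the example component/Microservice, start here
-└── libraries/           <- cross-cutting concerns (logger, error handler, etc.)
-```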
-
-
-
-### Next Steps
-
-- ✅ Start coding. The code we generate is minimal by design and based on known libraries. This should help you get up to speed quickly.
-- ✅ Read our ['coding with practica'](https://practica.dev/the-basics/coding-with-practica/) guide
-- ✅ Master it by reading our [docs at https://practica.dev](https://practica.dev).
-
-
-
-
-# Our Philosophies and Unique Values
-
-### 1. Best Practices _on top of_ known Node.js frameworks
-
-We don't re-invent the wheel. Rather, we use your favorite framework and empower it with structure and real examples. With a single command you can get an Express/Fastify-based codebase with many thoughtful best practices inside
-
-
-
-### 2. Simplicity, how Node.js was intended
-
-Keeping it simple, flat, and based on native Node/JS capabilities is part of this project's DNA. We believe that too many abstractions, high complexity or fancy language features can quickly become a stumbling block for the team
-
-To name a few examples: our code flow is flat with almost no indirection, and there is no DI - just simple functions calling other functions. Although we use TypeScript, almost no features are used besides types; for modularization we simply use... Node.js modules
-
-
-
-### 3. Supports many technologies and frameworks
-
-Good practices and simplicity are the name of the game with Practica, so there is no need to narrow our code to a specific framework or database. We aim to support the popular Node.js frameworks and data-access approaches
-
-
-
-
-
-# Practices and Features
-
-We apply dozens of practices and optimizations. You can opt in or out of most of these features using option flags on our CLI. The following table lists just a few examples out of the [full list of features we provide](https://practicajs.org/features).
-
-| **Feature** | **Explanation** | **Flag** | **Docs** |
-| ----------- | --------------- | -------- | -------- |
-| Monorepo setup | Generates two components (e.g., Microservices) in a single repository with interactions between the two | --mr, --monorepo | [Docs here]() |
-| Output escaping and sanitizing | Cleans outgoing responses of potential HTML security risks like XSS | --oe, --output-escape | [Docs here]() |
-| Integration (component) testing | Generates a full-blown component/integration test setup, including the DB | --t, --tests | [Docs here]() |
-| Unique request ID (Correlation ID) | Generates a module that creates a unique correlation/request ID for every incoming request. This is available to any other object during the request lifespan. Internally it uses Node's built-in [AsyncLocalStorage](https://nodejs.org/api/async_hooks.html#class-asynclocalstorage) - see the sketch after this table | --coi, --correlation-id | [Docs here]() |
-| Dockerfile | Generates a Dockerfile that embodies >20 best practices | --df, --docker-file | [Docs here]() |
-| Strong-schema configuration | A configuration module that dynamically loads run-time configuration keys and includes a strong schema so it can fail fast | Built-in with basic app | [Docs here](https://github.com/bestpractices/practica/blob/main/docs/decisions/configuration-library.MD) |
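-
-To illustrate the correlation-ID mechanism, here is a minimal sketch built on Node's AsyncLocalStorage (illustrative only - the generated module is more complete):
-
-```ts
-import { AsyncLocalStorage } from 'async_hooks';
-import { randomUUID } from 'crypto';
-
-const requestContext = new AsyncLocalStorage<{ requestId: string }>();
-
-// Wrap every incoming request so a unique ID travels with its async call chain
-function handleIncomingRequest(processRequest: () => void): void {
-  requestContext.run({ requestId: randomUUID() }, processRequest);
-}
-
-// Any module can later fetch the current request's ID, e.g., for logging
-function getCurrentRequestId(): string | undefined {
-  return requestContext.getStore()?.requestId;
-}
-```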
-
-📗 **See our full list of features [here](https://practica.dev/features)**
-
-
-
-# The People Behind Practica.js
-
-## Steering Committee
-
-Practica is a community-driven open-source project. It's led voluntarily by engineers from many different companies. These are just a few of the companies that encourage their engineers to contribute and keep this project moving. 💚
-
-
-
-A Nasdaq 100 company, a world leader in design software
-
-
-
-A leading IoT provider, part of 'Cox Communications', the 3rd largest cable company in the US
-
-## Core Team
-
-- **Yoni Goldberg** - Independent Node.js consultant
-- **Michael Solomon** - Node.js lead
-- **Raz Luvaton** - Node.js developer
-- **Daniel Gluskin** - Node.js lead
-- **Ariel Steiner** - Node.js developer
-- **Tomer Kohane** - Frontend geek
-- **Dan Goldberg** - Node.js lead
-- **Ron Dahan** - Node.js expert
-
-
-
-
-
-
-# Partners
-
-These companies are keen on continuous improvement, and their engineers have been known to contribute during work hours.
-
-
-
-
-## Our Amazing Contributors 💚
-
-A million thanks to these great people who have contributed code to our project:
-
-
-
-
-
-
-
-
-
-
-
diff --git a/assets/css/styles.aebdace5.css b/assets/css/styles.aebdace5.css
new file mode 100644
index 00000000..a23e2d6e
--- /dev/null
+++ b/assets/css/styles.aebdace5.css
@@ -0,0 +1 @@
+.col,.container{padding:0 var(--ifm-spacing-horizontal);width:100%}.markdown>h2,.markdown>h3,.markdown>h4,.markdown>h5,.markdown>h6{margin-bottom:calc(var(--ifm-heading-vertical-rhythm-bottom)*var(--ifm-leading))}.markdown li,body{word-wrap:break-word}body,ol ol,ol ul,ul ol,ul ul{margin:0}pre,table{overflow:auto}blockquote,pre{margin:0 0 var(--ifm-spacing-vertical)}.breadcrumbs__link,.button{transition-timing-function:var(--ifm-transition-timing-default)}.button,code{vertical-align:middle}.button--outline.button--active,.button--outline:active,.button--outline:hover,:root{--ifm-button-color:var(--ifm-font-color-base-inverse)}.menu__link:hover,a{transition:color var(--ifm-transition-fast) var(--ifm-transition-timing-default)}.navbar--dark,:root{--ifm-navbar-link-hover-color:var(--ifm-color-primary)}.menu,.navbar-sidebar{overflow-x:hidden}:root,html[data-theme=dark]{--ifm-color-emphasis-500:var(--ifm-color-gray-500)}.toggleButton_gllP,html{-webkit-tap-highlight-color:transparent}.clean-list,.containsTaskList_mC6p,.details_lb9f>summary,.dropdown__menu,.menu__list{list-style:none}:root{--ifm-color-scheme:light;--ifm-dark-value:10%;--ifm-darker-value:15%;--ifm-darkest-value:30%;--ifm-light-value:15%;--ifm-lighter-value:30%;--ifm-lightest-value:50%;--ifm-contrast-background-value:90%;--ifm-contrast-foreground-value:70%;--ifm-contrast-background-dark-value:70%;--ifm-contrast-foreground-dark-value:90%;--ifm-color-primary:#3578e5;--ifm-color-secondary:#ebedf0;--ifm-color-success:#00a400;--ifm-color-info:#54c7ec;--ifm-color-warning:#ffba00;--ifm-color-danger:#fa383e;--ifm-color-primary-dark:#306cce;--ifm-color-primary-darker:#2d66c3;--ifm-color-primary-darkest:#2554a0;--ifm-color-primary-light:#538ce9;--ifm-color-primary-lighter:#72a1ed;--ifm-color-primary-lightest:#9abcf2;--ifm-color-primary-contrast-background:#ebf2fc;--ifm-color-primary-contrast-foreground:#102445;--ifm-color-secondary-dark:#d4d5d8;--ifm-color-secondary-darker:#c8c9cc;--ifm-color-secondary-darkest:#a4a6a8;--ifm-color-secondary-light:#eef0f2;--ifm-color-secondary-lighter:#f1f2f5;--ifm-color-secondary-lightest:#f5f6f8;--ifm-color-secondary-contrast-background:#fdfdfe;--ifm-color-secondary-contrast-foreground:#474748;--ifm-color-success-dark:#009400;--ifm-color-success-darker:#008b00;--ifm-color-success-darkest:#007300;--ifm-color-success-light:#26b226;--ifm-color-success-lighter:#4dbf4d;--ifm-color-success-lightest:#80d280;--ifm-color-success-contrast-background:#e6f6e6;--ifm-color-success-contrast-foreground:#003100;--ifm-color-info-dark:#4cb3d4;--ifm-color-info-darker:#47a9c9;--ifm-color-info-darkest:#3b8ba5;--ifm-color-info-light:#6ecfef;--ifm-color-info-lighter:#87d8f2;--ifm-color-info-lightest:#aae3f6;--ifm-color-info-contrast-background:#eef9fd;--ifm-color-info-contrast-foreground:#193c47;--ifm-color-warning-dark:#e6a700;--ifm-color-warning-darker:#d99e00;--ifm-color-warning-darkest:#b38200;--ifm-color-warning-light:#ffc426;--ifm-color-warning-lighter:#ffcf4d;--ifm-color-warning-lightest:#ffdd80;--ifm-color-warning-contrast-background:#fff8e6;--ifm-color-warning-contrast-foreground:#4d3800;--ifm-color-danger-dark:#e13238;--ifm-color-danger-darker:#d53035;--ifm-color-danger-darkest:#af272b;--ifm-color-danger-light:#fb565b;--ifm-color-danger-lighter:#fb7478;--ifm-color-danger-lightest:#fd9c9f;--ifm-color-danger-contrast-background:#ffebec;--ifm-color-danger-contrast-foreground:#4b1113;--ifm-color-white:#fff;--ifm-color-black:#000;--ifm-color-gray-0:var(--ifm-color-white);--ifm-color-gray-100:#f5f6f7;--ifm-color-gray-200:#ebedf0
;--ifm-color-gray-300:#dadde1;--ifm-color-gray-400:#ccd0d5;--ifm-color-gray-500:#bec3c9;--ifm-color-gray-600:#8d949e;--ifm-color-gray-700:#606770;--ifm-color-gray-800:#444950;--ifm-color-gray-900:#1c1e21;--ifm-color-gray-1000:var(--ifm-color-black);--ifm-color-emphasis-0:var(--ifm-color-gray-0);--ifm-color-emphasis-100:var(--ifm-color-gray-100);--ifm-color-emphasis-200:var(--ifm-color-gray-200);--ifm-color-emphasis-300:var(--ifm-color-gray-300);--ifm-color-emphasis-400:var(--ifm-color-gray-400);--ifm-color-emphasis-600:var(--ifm-color-gray-600);--ifm-color-emphasis-700:var(--ifm-color-gray-700);--ifm-color-emphasis-800:var(--ifm-color-gray-800);--ifm-color-emphasis-900:var(--ifm-color-gray-900);--ifm-color-emphasis-1000:var(--ifm-color-gray-1000);--ifm-color-content:var(--ifm-color-emphasis-900);--ifm-color-content-inverse:var(--ifm-color-emphasis-0);--ifm-color-content-secondary:#525860;--ifm-background-color:#0000;--ifm-background-surface-color:var(--ifm-color-content-inverse);--ifm-global-border-width:1px;--ifm-global-radius:0.4rem;--ifm-hover-overlay:#0000000d;--ifm-font-color-base:var(--ifm-color-content);--ifm-font-color-base-inverse:var(--ifm-color-content-inverse);--ifm-font-color-secondary:var(--ifm-color-content-secondary);--ifm-font-family-base:system-ui,-apple-system,Segoe UI,Roboto,Ubuntu,Cantarell,Noto Sans,sans-serif,BlinkMacSystemFont,"Segoe UI",Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol";--ifm-font-family-monospace:SFMono-Regular,Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;--ifm-font-size-base:100%;--ifm-font-weight-light:300;--ifm-font-weight-normal:400;--ifm-font-weight-semibold:500;--ifm-font-weight-bold:700;--ifm-font-weight-base:var(--ifm-font-weight-normal);--ifm-line-height-base:1.65;--ifm-global-spacing:1rem;--ifm-spacing-vertical:var(--ifm-global-spacing);--ifm-spacing-horizontal:var(--ifm-global-spacing);--ifm-transition-fast:200ms;--ifm-transition-slow:400ms;--ifm-transition-timing-default:cubic-bezier(0.08,0.52,0.52,1);--ifm-global-shadow-lw:0 1px 2px 0 #0000001a;--ifm-global-shadow-md:0 5px 40px #0003;--ifm-global-shadow-tl:0 12px 28px 0 #0003,0 2px 4px 0 
#0000001a;--ifm-z-index-dropdown:100;--ifm-z-index-fixed:200;--ifm-z-index-overlay:400;--ifm-container-width:1140px;--ifm-container-width-xl:1320px;--ifm-code-background:#f6f7f8;--ifm-code-border-radius:var(--ifm-global-radius);--ifm-code-font-size:90%;--ifm-code-padding-horizontal:0.1rem;--ifm-code-padding-vertical:0.1rem;--ifm-pre-background:var(--ifm-code-background);--ifm-pre-border-radius:var(--ifm-code-border-radius);--ifm-pre-color:inherit;--ifm-pre-line-height:1.45;--ifm-pre-padding:1rem;--ifm-heading-color:inherit;--ifm-heading-margin-top:0;--ifm-heading-margin-bottom:var(--ifm-spacing-vertical);--ifm-heading-font-family:var(--ifm-font-family-base);--ifm-heading-font-weight:var(--ifm-font-weight-bold);--ifm-heading-line-height:1.25;--ifm-h1-font-size:2rem;--ifm-h2-font-size:1.5rem;--ifm-h3-font-size:1.25rem;--ifm-h4-font-size:1rem;--ifm-h5-font-size:0.875rem;--ifm-h6-font-size:0.85rem;--ifm-image-alignment-padding:1.25rem;--ifm-leading-desktop:1.25;--ifm-leading:calc(var(--ifm-leading-desktop)*1rem);--ifm-list-left-padding:2rem;--ifm-list-margin:1rem;--ifm-list-item-margin:0.25rem;--ifm-list-paragraph-margin:1rem;--ifm-table-cell-padding:0.75rem;--ifm-table-background:#0000;--ifm-table-stripe-background:#00000008;--ifm-table-border-width:1px;--ifm-table-border-color:var(--ifm-color-emphasis-300);--ifm-table-head-background:inherit;--ifm-table-head-color:inherit;--ifm-table-head-font-weight:var(--ifm-font-weight-bold);--ifm-table-cell-color:inherit;--ifm-link-color:var(--ifm-color-primary);--ifm-link-decoration:none;--ifm-link-hover-color:var(--ifm-link-color);--ifm-link-hover-decoration:underline;--ifm-paragraph-margin-bottom:var(--ifm-leading);--ifm-blockquote-font-size:var(--ifm-font-size-base);--ifm-blockquote-border-left-width:2px;--ifm-blockquote-padding-horizontal:var(--ifm-spacing-horizontal);--ifm-blockquote-padding-vertical:0;--ifm-blockquote-shadow:none;--ifm-blockquote-color:var(--ifm-color-emphasis-800);--ifm-blockquote-border-color:var(--ifm-color-emphasis-300);--ifm-hr-background-color:var(--ifm-color-emphasis-500);--ifm-hr-height:1px;--ifm-hr-margin-vertical:1.5rem;--ifm-scrollbar-size:7px;--ifm-scrollbar-track-background-color:#f1f1f1;--ifm-scrollbar-thumb-background-color:silver;--ifm-scrollbar-thumb-hover-background-color:#a7a7a7;--ifm-alert-background-color:inherit;--ifm-alert-border-color:inherit;--ifm-alert-border-radius:var(--ifm-global-radius);--ifm-alert-border-width:0px;--ifm-alert-border-left-width:5px;--ifm-alert-color:var(--ifm-font-color-base);--ifm-alert-padding-horizontal:var(--ifm-spacing-horizontal);--ifm-alert-padding-vertical:var(--ifm-spacing-vertical);--ifm-alert-shadow:var(--ifm-global-shadow-lw);--ifm-avatar-intro-margin:1rem;--ifm-avatar-intro-alignment:inherit;--ifm-avatar-photo-size:3rem;--ifm-badge-background-color:inherit;--ifm-badge-border-color:inherit;--ifm-badge-border-radius:var(--ifm-global-radius);--ifm-badge-border-width:var(--ifm-global-border-width);--ifm-badge-color:var(--ifm-color-white);--ifm-badge-padding-horizontal:calc(var(--ifm-spacing-horizontal)*0.5);--ifm-badge-padding-vertical:calc(var(--ifm-spacing-vertical)*0.25);--ifm-breadcrumb-border-radius:1.5rem;--ifm-breadcrumb-spacing:0.5rem;--ifm-breadcrumb-color-active:var(--ifm-color-primary);--ifm-breadcrumb-item-background-active:var(--ifm-hover-overlay);--ifm-breadcrumb-padding-horizontal:0.8rem;--ifm-breadcrumb-padding-vertical:0.4rem;--ifm-breadcrumb-size-multiplier:1;--ifm-breadcrumb-separator:url('data:image/svg+xml;utf8,');--ifm-breadcrumb-separator-filter:none;--i
fm-breadcrumb-separator-size:0.5rem;--ifm-breadcrumb-separator-size-multiplier:1.25;--ifm-button-background-color:inherit;--ifm-button-border-color:var(--ifm-button-background-color);--ifm-button-border-width:var(--ifm-global-border-width);--ifm-button-font-weight:var(--ifm-font-weight-bold);--ifm-button-padding-horizontal:1.5rem;--ifm-button-padding-vertical:0.375rem;--ifm-button-size-multiplier:1;--ifm-button-transition-duration:var(--ifm-transition-fast);--ifm-button-border-radius:calc(var(--ifm-global-radius)*var(--ifm-button-size-multiplier));--ifm-button-group-spacing:2px;--ifm-card-background-color:var(--ifm-background-surface-color);--ifm-card-border-radius:calc(var(--ifm-global-radius)*2);--ifm-card-horizontal-spacing:var(--ifm-global-spacing);--ifm-card-vertical-spacing:var(--ifm-global-spacing);--ifm-toc-border-color:var(--ifm-color-emphasis-300);--ifm-toc-link-color:var(--ifm-color-content-secondary);--ifm-toc-padding-vertical:0.5rem;--ifm-toc-padding-horizontal:0.5rem;--ifm-dropdown-background-color:var(--ifm-background-surface-color);--ifm-dropdown-font-weight:var(--ifm-font-weight-semibold);--ifm-dropdown-link-color:var(--ifm-font-color-base);--ifm-dropdown-hover-background-color:var(--ifm-hover-overlay);--ifm-footer-background-color:var(--ifm-color-emphasis-100);--ifm-footer-color:inherit;--ifm-footer-link-color:var(--ifm-color-emphasis-700);--ifm-footer-link-hover-color:var(--ifm-color-primary);--ifm-footer-link-horizontal-spacing:0.5rem;--ifm-footer-padding-horizontal:calc(var(--ifm-spacing-horizontal)*2);--ifm-footer-padding-vertical:calc(var(--ifm-spacing-vertical)*2);--ifm-footer-title-color:inherit;--ifm-footer-logo-max-width:min(30rem,90vw);--ifm-hero-background-color:var(--ifm-background-surface-color);--ifm-hero-text-color:var(--ifm-color-emphasis-800);--ifm-menu-color:var(--ifm-color-emphasis-700);--ifm-menu-color-active:var(--ifm-color-primary);--ifm-menu-color-background-active:var(--ifm-hover-overlay);--ifm-menu-color-background-hover:var(--ifm-hover-overlay);--ifm-menu-link-padding-horizontal:0.75rem;--ifm-menu-link-padding-vertical:0.375rem;--ifm-menu-link-sublist-icon:url('data:image/svg+xml;utf8,');--ifm-menu-link-sublist-icon-filter:none;--ifm-navbar-background-color:var(--ifm-background-surface-color);--ifm-navbar-height:3.75rem;--ifm-navbar-item-padding-horizontal:0.75rem;--ifm-navbar-item-padding-vertical:0.25rem;--ifm-navbar-link-color:var(--ifm-font-color-base);--ifm-navbar-link-active-color:var(--ifm-link-color);--ifm-navbar-padding-horizontal:var(--ifm-spacing-horizontal);--ifm-navbar-padding-vertical:calc(var(--ifm-spacing-vertical)*0.5);--ifm-navbar-shadow:var(--ifm-global-shadow-lw);--ifm-navbar-search-input-background-color:var(--ifm-color-emphasis-200);--ifm-navbar-search-input-color:var(--ifm-color-emphasis-800);--ifm-navbar-search-input-placeholder-color:var(--ifm-color-emphasis-500);--ifm-navbar-search-input-icon:url('data:image/svg+xml;utf8,');--ifm-navbar-sidebar-width:83vw;--ifm-pagination-border-radius:var(--ifm-global-radius);--ifm-pagination-color-active:var(--ifm-color-primary);--ifm-pagination-font-size:1rem;--ifm-pagination-item-active-background:var(--ifm-hover-overlay);--ifm-pagination-page-spacing:0.2em;--ifm-pagination-padding-horizontal:calc(var(--ifm-spacing-horizontal)*1);--ifm-pagination-padding-vertical:calc(var(--ifm-spacing-vertical)*0.25);--ifm-pagination-nav-border-radius:var(--ifm-global-radius);--ifm-pagination-nav-color-hover:var(--ifm-color-primary);--ifm-pills-color-active:var(--ifm-color-primary);--ifm-pills-color-
background-active:var(--ifm-hover-overlay);--ifm-pills-spacing:0.125rem;--ifm-tabs-color:var(--ifm-font-color-secondary);--ifm-tabs-color-active:var(--ifm-color-primary);--ifm-tabs-color-active-border:var(--ifm-tabs-color-active);--ifm-tabs-padding-horizontal:1rem;--ifm-tabs-padding-vertical:1rem;--docusaurus-progress-bar-color:var(--ifm-color-primary);--ifm-color-primary:#2e8555;--ifm-color-primary-dark:#29784c;--ifm-color-primary-darker:#277148;--ifm-color-primary-darkest:#205d3b;--ifm-color-primary-light:#33925d;--ifm-color-primary-lighter:#359962;--ifm-color-primary-lightest:#3cad6e;--ifm-code-font-size:95%;--docusaurus-announcement-bar-height:auto;--docusaurus-collapse-button-bg:#0000;--docusaurus-collapse-button-bg-hover:#0000001a;--doc-sidebar-width:300px;--doc-sidebar-hidden-width:30px;--docusaurus-tag-list-border:var(--ifm-color-emphasis-300)}.badge--danger,.badge--info,.badge--primary,.badge--secondary,.badge--success,.badge--warning{--ifm-badge-border-color:var(--ifm-badge-background-color)}.button--link,.button--outline{--ifm-button-background-color:#0000}*{box-sizing:border-box}html{-webkit-font-smoothing:antialiased;-webkit-text-size-adjust:100%;text-size-adjust:100%;background-color:var(--ifm-background-color);color:var(--ifm-font-color-base);color-scheme:var(--ifm-color-scheme);font:var(--ifm-font-size-base)/var(--ifm-line-height-base) var(--ifm-font-family-base);text-rendering:optimizelegibility}iframe{border:0;color-scheme:auto}.container{margin:0 auto;max-width:var(--ifm-container-width)}.container--fluid{max-width:inherit}.row{display:flex;flex-wrap:wrap;margin:0 calc(var(--ifm-spacing-horizontal)*-1)}.margin-bottom--none,.margin-vert--none,.markdown>:last-child{margin-bottom:0!important}.margin-top--none,.margin-vert--none{margin-top:0!important}.row--no-gutters{margin-left:0;margin-right:0}.margin-horiz--none,.margin-right--none{margin-right:0!important}.row--no-gutters>.col{padding-left:0;padding-right:0}.row--align-top{align-items:flex-start}.row--align-bottom{align-items:flex-end}.menuExternalLink_NmtK,.row--align-center{align-items:center}.row--align-stretch{align-items:stretch}.row--align-baseline{align-items:baseline}.col{--ifm-col-width:100%;flex:1 0;margin-left:0;max-width:var(--ifm-col-width)}.padding-bottom--none,.padding-vert--none{padding-bottom:0!important}.padding-top--none,.padding-vert--none{padding-top:0!important}.padding-horiz--none,.padding-left--none{padding-left:0!important}.padding-horiz--none,.padding-right--none{padding-right:0!important}.col[class*=col--]{flex:0 0 
var(--ifm-col-width)}.col--1{--ifm-col-width:8.33333%}.col--offset-1{margin-left:8.33333%}.col--2{--ifm-col-width:16.66667%}.col--offset-2{margin-left:16.66667%}.col--3{--ifm-col-width:25%}.col--offset-3{margin-left:25%}.col--4{--ifm-col-width:33.33333%}.col--offset-4{margin-left:33.33333%}.col--5{--ifm-col-width:41.66667%}.col--offset-5{margin-left:41.66667%}.col--6{--ifm-col-width:50%}.col--offset-6{margin-left:50%}.col--7{--ifm-col-width:58.33333%}.col--offset-7{margin-left:58.33333%}.col--8{--ifm-col-width:66.66667%}.col--offset-8{margin-left:66.66667%}.col--9{--ifm-col-width:75%}.col--offset-9{margin-left:75%}.col--10{--ifm-col-width:83.33333%}.col--offset-10{margin-left:83.33333%}.col--11{--ifm-col-width:91.66667%}.col--offset-11{margin-left:91.66667%}.col--12{--ifm-col-width:100%}.col--offset-12{margin-left:100%}.margin-horiz--none,.margin-left--none{margin-left:0!important}.margin--none{margin:0!important}.margin-bottom--xs,.margin-vert--xs{margin-bottom:.25rem!important}.margin-top--xs,.margin-vert--xs{margin-top:.25rem!important}.margin-horiz--xs,.margin-left--xs{margin-left:.25rem!important}.margin-horiz--xs,.margin-right--xs{margin-right:.25rem!important}.margin--xs{margin:.25rem!important}.margin-bottom--sm,.margin-vert--sm{margin-bottom:.5rem!important}.margin-top--sm,.margin-vert--sm{margin-top:.5rem!important}.margin-horiz--sm,.margin-left--sm{margin-left:.5rem!important}.margin-horiz--sm,.margin-right--sm{margin-right:.5rem!important}.margin--sm{margin:.5rem!important}.margin-bottom--md,.margin-vert--md{margin-bottom:1rem!important}.margin-top--md,.margin-vert--md{margin-top:1rem!important}.margin-horiz--md,.margin-left--md{margin-left:1rem!important}.margin-horiz--md,.margin-right--md{margin-right:1rem!important}.margin--md{margin:1rem!important}.margin-bottom--lg,.margin-vert--lg{margin-bottom:2rem!important}.margin-top--lg,.margin-vert--lg{margin-top:2rem!important}.margin-horiz--lg,.margin-left--lg{margin-left:2rem!important}.margin-horiz--lg,.margin-right--lg{margin-right:2rem!important}.margin--lg{margin:2rem!important}.margin-bottom--xl,.margin-vert--xl{margin-bottom:5rem!important}.margin-top--xl,.margin-vert--xl{margin-top:5rem!important}.margin-horiz--xl,.margin-left--xl{margin-left:5rem!important}.margin-horiz--xl,.margin-right--xl{margin-right:5rem!important}.margin--xl{margin:5rem!important}.padding--none{padding:0!important}.padding-bottom--xs,.padding-vert--xs{padding-bottom:.25rem!important}.padding-top--xs,.padding-vert--xs{padding-top:.25rem!important}.padding-horiz--xs,.padding-left--xs{padding-left:.25rem!important}.padding-horiz--xs,.padding-right--xs{padding-right:.25rem!important}.padding--xs{padding:.25rem!important}.padding-bottom--sm,.padding-vert--sm{padding-bottom:.5rem!important}.padding-top--sm,.padding-vert--sm{padding-top:.5rem!important}.padding-horiz--sm,.padding-left--sm{padding-left:.5rem!important}.padding-horiz--sm,.padding-right--sm{padding-right:.5rem!important}.padding--sm{padding:.5rem!important}.padding-bottom--md,.padding-vert--md{padding-bottom:1rem!important}.padding-top--md,.padding-vert--md{padding-top:1rem!important}.padding-horiz--md,.padding-left--md{padding-left:1rem!important}.padding-horiz--md,.padding-right--md{padding-right:1rem!important}.padding--md{padding:1rem!important}.padding-bottom--lg,.padding-vert--lg{padding-bottom:2rem!important}.padding-top--lg,.padding-vert--lg{padding-top:2rem!important}.padding-horiz--lg,.padding-left--lg{padding-left:2rem!important}.padding-horiz--lg,.padding-right--lg{padding-right:2r
em!important}.padding--lg{padding:2rem!important}.padding-bottom--xl,.padding-vert--xl{padding-bottom:5rem!important}.padding-top--xl,.padding-vert--xl{padding-top:5rem!important}.padding-horiz--xl,.padding-left--xl{padding-left:5rem!important}.padding-horiz--xl,.padding-right--xl{padding-right:5rem!important}.padding--xl{padding:5rem!important}code{background-color:var(--ifm-code-background);border:.1rem solid #0000001a;border-radius:var(--ifm-code-border-radius);font-family:var(--ifm-font-family-monospace);font-size:var(--ifm-code-font-size);padding:var(--ifm-code-padding-vertical) var(--ifm-code-padding-horizontal)}a code{color:inherit}pre{background-color:var(--ifm-pre-background);border-radius:var(--ifm-pre-border-radius);color:var(--ifm-pre-color);font:var(--ifm-code-font-size)/var(--ifm-pre-line-height) var(--ifm-font-family-monospace);padding:var(--ifm-pre-padding)}pre code{background-color:initial;border:none;font-size:100%;line-height:inherit;padding:0}kbd{background-color:var(--ifm-color-emphasis-0);border:1px solid var(--ifm-color-emphasis-400);border-radius:.2rem;box-shadow:inset 0 -1px 0 var(--ifm-color-emphasis-400);color:var(--ifm-color-emphasis-800);font:80% var(--ifm-font-family-monospace);padding:.15rem .3rem}h1,h2,h3,h4,h5,h6{color:var(--ifm-heading-color);font-family:var(--ifm-heading-font-family);font-weight:var(--ifm-heading-font-weight);line-height:var(--ifm-heading-line-height);margin:var(--ifm-heading-margin-top) 0 var(--ifm-heading-margin-bottom) 0}h1{font-size:var(--ifm-h1-font-size)}h2{font-size:var(--ifm-h2-font-size)}h3{font-size:var(--ifm-h3-font-size)}h4{font-size:var(--ifm-h4-font-size)}h5{font-size:var(--ifm-h5-font-size)}h6{font-size:var(--ifm-h6-font-size)}img{max-width:100%}img[align=right]{padding-left:var(--image-alignment-padding)}img[align=left]{padding-right:var(--image-alignment-padding)}.markdown{--ifm-h1-vertical-rhythm-top:3;--ifm-h2-vertical-rhythm-top:2;--ifm-h3-vertical-rhythm-top:1.5;--ifm-heading-vertical-rhythm-top:1.25;--ifm-h1-vertical-rhythm-bottom:1.25;--ifm-heading-vertical-rhythm-bottom:1}.markdown:after,.markdown:before{content:"";display:table}.markdown:after{clear:both}.markdown h1:first-child{--ifm-h1-font-size:3rem;margin-bottom:calc(var(--ifm-h1-vertical-rhythm-bottom)*var(--ifm-leading))}.markdown>h2{--ifm-h2-font-size:2rem;margin-top:calc(var(--ifm-h2-vertical-rhythm-top)*var(--ifm-leading))}.markdown>h3{--ifm-h3-font-size:1.5rem;margin-top:calc(var(--ifm-h3-vertical-rhythm-top)*var(--ifm-leading))}.markdown>h4,.markdown>h5,.markdown>h6{margin-top:calc(var(--ifm-heading-vertical-rhythm-top)*var(--ifm-leading))}.markdown>p,.markdown>pre,.markdown>ul{margin-bottom:var(--ifm-leading)}.markdown li>p{margin-top:var(--ifm-list-paragraph-margin)}.markdown li+li{margin-top:var(--ifm-list-item-margin)}ol,ul{margin:0 0 var(--ifm-list-margin);padding-left:var(--ifm-list-left-padding)}ol ol,ul ol{list-style-type:lower-roman}ol ol ol,ol ul ol,ul ol ol,ul ul ol{list-style-type:lower-alpha}table{border-collapse:collapse;display:block;margin-bottom:var(--ifm-spacing-vertical)}table thead tr{border-bottom:2px solid var(--ifm-table-border-color)}table thead,table tr:nth-child(2n){background-color:var(--ifm-table-stripe-background)}table tr{background-color:var(--ifm-table-background);border-top:var(--ifm-table-border-width) solid var(--ifm-table-border-color)}table td,table th{border:var(--ifm-table-border-width) solid var(--ifm-table-border-color);padding:var(--ifm-table-cell-padding)}table 
th{background-color:var(--ifm-table-head-background);color:var(--ifm-table-head-color);font-weight:var(--ifm-table-head-font-weight)}table td{color:var(--ifm-table-cell-color)}strong{font-weight:var(--ifm-font-weight-bold)}a{color:var(--ifm-link-color);text-decoration:var(--ifm-link-decoration)}a:hover{color:var(--ifm-link-hover-color);text-decoration:var(--ifm-link-hover-decoration)}.button:hover,.text--no-decoration,.text--no-decoration:hover,a:not([href]){text-decoration:none}p{margin:0 0 var(--ifm-paragraph-margin-bottom)}blockquote{border-left:var(--ifm-blockquote-border-left-width) solid var(--ifm-blockquote-border-color);box-shadow:var(--ifm-blockquote-shadow);color:var(--ifm-blockquote-color);font-size:var(--ifm-blockquote-font-size);padding:var(--ifm-blockquote-padding-vertical) var(--ifm-blockquote-padding-horizontal)}blockquote>:first-child{margin-top:0}blockquote>:last-child{margin-bottom:0}hr{background-color:var(--ifm-hr-background-color);border:0;height:var(--ifm-hr-height);margin:var(--ifm-hr-margin-vertical) 0}.shadow--lw{box-shadow:var(--ifm-global-shadow-lw)!important}.shadow--md{box-shadow:var(--ifm-global-shadow-md)!important}.shadow--tl{box-shadow:var(--ifm-global-shadow-tl)!important}.text--primary,.wordWrapButtonEnabled_EoeP .wordWrapButtonIcon_Bwma{color:var(--ifm-color-primary)}.text--secondary{color:var(--ifm-color-secondary)}.text--success{color:var(--ifm-color-success)}.text--info{color:var(--ifm-color-info)}.text--warning{color:var(--ifm-color-warning)}.text--danger{color:var(--ifm-color-danger)}.text--center{text-align:center}.text--left{text-align:left}.text--justify{text-align:justify}.text--right{text-align:right}.text--capitalize{text-transform:capitalize}.text--lowercase{text-transform:lowercase}.admonitionHeading_tbUL,.alert__heading,.text--uppercase{text-transform:uppercase}.text--light{font-weight:var(--ifm-font-weight-light)}.text--normal{font-weight:var(--ifm-font-weight-normal)}.text--semibold{font-weight:var(--ifm-font-weight-semibold)}.text--bold{font-weight:var(--ifm-font-weight-bold)}.text--italic{font-style:italic}.text--truncate{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.text--break{word-wrap:break-word!important;word-break:break-word!important}.clean-btn{background:none;border:none;color:inherit;cursor:pointer;font-family:inherit;padding:0}.alert,.alert 
.close{color:var(--ifm-alert-foreground-color)}.clean-list{padding-left:0}.alert--primary{--ifm-alert-background-color:var(--ifm-color-primary-contrast-background);--ifm-alert-background-color-highlight:#3578e526;--ifm-alert-foreground-color:var(--ifm-color-primary-contrast-foreground);--ifm-alert-border-color:var(--ifm-color-primary-dark)}.alert--secondary{--ifm-alert-background-color:var(--ifm-color-secondary-contrast-background);--ifm-alert-background-color-highlight:#ebedf026;--ifm-alert-foreground-color:var(--ifm-color-secondary-contrast-foreground);--ifm-alert-border-color:var(--ifm-color-secondary-dark)}.alert--success{--ifm-alert-background-color:var(--ifm-color-success-contrast-background);--ifm-alert-background-color-highlight:#00a40026;--ifm-alert-foreground-color:var(--ifm-color-success-contrast-foreground);--ifm-alert-border-color:var(--ifm-color-success-dark)}.alert--info{--ifm-alert-background-color:var(--ifm-color-info-contrast-background);--ifm-alert-background-color-highlight:#54c7ec26;--ifm-alert-foreground-color:var(--ifm-color-info-contrast-foreground);--ifm-alert-border-color:var(--ifm-color-info-dark)}.alert--warning{--ifm-alert-background-color:var(--ifm-color-warning-contrast-background);--ifm-alert-background-color-highlight:#ffba0026;--ifm-alert-foreground-color:var(--ifm-color-warning-contrast-foreground);--ifm-alert-border-color:var(--ifm-color-warning-dark)}.alert--danger{--ifm-alert-background-color:var(--ifm-color-danger-contrast-background);--ifm-alert-background-color-highlight:#fa383e26;--ifm-alert-foreground-color:var(--ifm-color-danger-contrast-foreground);--ifm-alert-border-color:var(--ifm-color-danger-dark)}.alert{--ifm-code-background:var(--ifm-alert-background-color-highlight);--ifm-link-color:var(--ifm-alert-foreground-color);--ifm-link-hover-color:var(--ifm-alert-foreground-color);--ifm-link-decoration:underline;--ifm-tabs-color:var(--ifm-alert-foreground-color);--ifm-tabs-color-active:var(--ifm-alert-foreground-color);--ifm-tabs-color-active-border:var(--ifm-alert-border-color);background-color:var(--ifm-alert-background-color);border:var(--ifm-alert-border-width) solid var(--ifm-alert-border-color);border-left-width:var(--ifm-alert-border-left-width);border-radius:var(--ifm-alert-border-radius);box-shadow:var(--ifm-alert-shadow);padding:var(--ifm-alert-padding-vertical) var(--ifm-alert-padding-horizontal)}.alert__heading{align-items:center;display:flex;font:700 var(--ifm-h5-font-size)/var(--ifm-heading-line-height) var(--ifm-heading-font-family);margin-bottom:.5rem}.alert__icon{display:inline-flex;margin-right:.4em}.alert__icon svg{fill:var(--ifm-alert-foreground-color);stroke:var(--ifm-alert-foreground-color);stroke-width:0}.alert .close{margin:calc(var(--ifm-alert-padding-vertical)*-1) calc(var(--ifm-alert-padding-horizontal)*-1) 0 0;opacity:.75}.alert .close:focus,.alert .close:hover{opacity:1}.alert a{text-decoration-color:var(--ifm-alert-border-color)}.alert a:hover{text-decoration-thickness:2px}.avatar{column-gap:var(--ifm-avatar-intro-margin);display:flex}.avatar__photo{border-radius:50%;display:block;height:var(--ifm-avatar-photo-size);overflow:hidden;width:var(--ifm-avatar-photo-size)}.card--full-height,.navbar__logo img,body,html{height:100%}.avatar__photo--sm{--ifm-avatar-photo-size:2rem}.avatar__photo--lg{--ifm-avatar-photo-size:4rem}.avatar__photo--xl{--ifm-avatar-photo-size:6rem}.avatar__intro{display:flex;flex:1 
1;flex-direction:column;justify-content:center;text-align:var(--ifm-avatar-intro-alignment)}.badge,.breadcrumbs__item,.breadcrumbs__link,.button,.dropdown>.navbar__link:after{display:inline-block}.avatar__name{font:700 var(--ifm-h4-font-size)/var(--ifm-heading-line-height) var(--ifm-font-family-base)}.avatar__subtitle{margin-top:.25rem}.avatar--vertical{--ifm-avatar-intro-alignment:center;--ifm-avatar-intro-margin:0.5rem;align-items:center;flex-direction:column}.badge{background-color:var(--ifm-badge-background-color);border:var(--ifm-badge-border-width) solid var(--ifm-badge-border-color);border-radius:var(--ifm-badge-border-radius);color:var(--ifm-badge-color);font-size:75%;font-weight:var(--ifm-font-weight-bold);line-height:1;padding:var(--ifm-badge-padding-vertical) var(--ifm-badge-padding-horizontal)}.badge--primary{--ifm-badge-background-color:var(--ifm-color-primary)}.badge--secondary{--ifm-badge-background-color:var(--ifm-color-secondary);color:var(--ifm-color-black)}.breadcrumbs__link,.button.button--secondary.button--outline:not(.button--active):not(:hover){color:var(--ifm-font-color-base)}.badge--success{--ifm-badge-background-color:var(--ifm-color-success)}.badge--info{--ifm-badge-background-color:var(--ifm-color-info)}.badge--warning{--ifm-badge-background-color:var(--ifm-color-warning)}.badge--danger{--ifm-badge-background-color:var(--ifm-color-danger)}.breadcrumbs{margin-bottom:0;padding-left:0}.breadcrumbs__item:not(:last-child):after{background:var(--ifm-breadcrumb-separator) center;content:" ";display:inline-block;filter:var(--ifm-breadcrumb-separator-filter);height:calc(var(--ifm-breadcrumb-separator-size)*var(--ifm-breadcrumb-size-multiplier)*var(--ifm-breadcrumb-separator-size-multiplier));margin:0 var(--ifm-breadcrumb-spacing);opacity:.5;width:calc(var(--ifm-breadcrumb-separator-size)*var(--ifm-breadcrumb-size-multiplier)*var(--ifm-breadcrumb-separator-size-multiplier))}.breadcrumbs__item--active .breadcrumbs__link{background:var(--ifm-breadcrumb-item-background-active);color:var(--ifm-breadcrumb-color-active)}.breadcrumbs__link{border-radius:var(--ifm-breadcrumb-border-radius);font-size:calc(1rem*var(--ifm-breadcrumb-size-multiplier));padding:calc(var(--ifm-breadcrumb-padding-vertical)*var(--ifm-breadcrumb-size-multiplier)) calc(var(--ifm-breadcrumb-padding-horizontal)*var(--ifm-breadcrumb-size-multiplier));transition-duration:var(--ifm-transition-fast);transition-property:background,color}.breadcrumbs__link:any-link:hover,.breadcrumbs__link:link:hover,.breadcrumbs__link:visited:hover,area[href].breadcrumbs__link:hover{background:var(--ifm-breadcrumb-item-background-active);text-decoration:none}.breadcrumbs--sm{--ifm-breadcrumb-size-multiplier:0.8}.breadcrumbs--lg{--ifm-breadcrumb-size-multiplier:1.2}.button{background-color:var(--ifm-button-background-color);border:var(--ifm-button-border-width) solid var(--ifm-button-border-color);border-radius:var(--ifm-button-border-radius);cursor:pointer;font-size:calc(.875rem*var(--ifm-button-size-multiplier));font-weight:var(--ifm-button-font-weight);line-height:1.5;padding:calc(var(--ifm-button-padding-vertical)*var(--ifm-button-size-multiplier)) 
calc(var(--ifm-button-padding-horizontal)*var(--ifm-button-size-multiplier));text-align:center;transition-duration:var(--ifm-button-transition-duration);transition-property:color,background,border-color;-webkit-user-select:none;user-select:none;white-space:nowrap}.button,.button:hover{color:var(--ifm-button-color)}.button--outline{--ifm-button-color:var(--ifm-button-border-color)}.button--outline:hover{--ifm-button-background-color:var(--ifm-button-border-color)}.button--link{--ifm-button-border-color:#0000;color:var(--ifm-link-color);text-decoration:var(--ifm-link-decoration)}.button--link.button--active,.button--link:active,.button--link:hover{color:var(--ifm-link-hover-color);text-decoration:var(--ifm-link-hover-decoration)}.button.disabled,.button:disabled,.button[disabled]{opacity:.65;pointer-events:none}.button--sm{--ifm-button-size-multiplier:0.8}.button--lg{--ifm-button-size-multiplier:1.35}.button--block{display:block;width:100%}.button.button--secondary{color:var(--ifm-color-gray-900)}:where(.button--primary){--ifm-button-background-color:var(--ifm-color-primary);--ifm-button-border-color:var(--ifm-color-primary)}:where(.button--primary):not(.button--outline):hover{--ifm-button-background-color:var(--ifm-color-primary-dark);--ifm-button-border-color:var(--ifm-color-primary-dark)}.button--primary.button--active,.button--primary:active{--ifm-button-background-color:var(--ifm-color-primary-darker);--ifm-button-border-color:var(--ifm-color-primary-darker)}:where(.button--secondary){--ifm-button-background-color:var(--ifm-color-secondary);--ifm-button-border-color:var(--ifm-color-secondary)}:where(.button--secondary):not(.button--outline):hover{--ifm-button-background-color:var(--ifm-color-secondary-dark);--ifm-button-border-color:var(--ifm-color-secondary-dark)}.button--secondary.button--active,.button--secondary:active{--ifm-button-background-color:var(--ifm-color-secondary-darker);--ifm-button-border-color:var(--ifm-color-secondary-darker)}:where(.button--success){--ifm-button-background-color:var(--ifm-color-success);--ifm-button-border-color:var(--ifm-color-success)}:where(.button--success):not(.button--outline):hover{--ifm-button-background-color:var(--ifm-color-success-dark);--ifm-button-border-color:var(--ifm-color-success-dark)}.button--success.button--active,.button--success:active{--ifm-button-background-color:var(--ifm-color-success-darker);--ifm-button-border-color:var(--ifm-color-success-darker)}:where(.button--info){--ifm-button-background-color:var(--ifm-color-info);--ifm-button-border-color:var(--ifm-color-info)}:where(.button--info):not(.button--outline):hover{--ifm-button-background-color:var(--ifm-color-info-dark);--ifm-button-border-color:var(--ifm-color-info-dark)}.button--info.button--active,.button--info:active{--ifm-button-background-color:var(--ifm-color-info-darker);--ifm-button-border-color:var(--ifm-color-info-darker)}:where(.button--warning){--ifm-button-background-color:var(--ifm-color-warning);--ifm-button-border-color:var(--ifm-color-warning)}:where(.button--warning):not(.button--outline):hover{--ifm-button-background-color:var(--ifm-color-warning-dark);--ifm-button-border-color:var(--ifm-color-warning-dark)}.button--warning.button--active,.button--warning:active{--ifm-button-background-color:var(--ifm-color-warning-darker);--ifm-button-border-color:var(--ifm-color-warning-darker)}:where(.button--danger){--ifm-button-background-color:var(--ifm-color-danger);--ifm-button-border-color:var(--ifm-color-danger)}:where(.button--danger):not(.button--outline):h
over{--ifm-button-background-color:var(--ifm-color-danger-dark);--ifm-button-border-color:var(--ifm-color-danger-dark)}.button--danger.button--active,.button--danger:active{--ifm-button-background-color:var(--ifm-color-danger-darker);--ifm-button-border-color:var(--ifm-color-danger-darker)}.button-group{display:inline-flex;gap:var(--ifm-button-group-spacing)}.button-group>.button:not(:first-child){border-bottom-left-radius:0;border-top-left-radius:0}.button-group>.button:not(:last-child){border-bottom-right-radius:0;border-top-right-radius:0}.button-group--block{display:flex;justify-content:stretch}.button-group--block>.button{flex-grow:1}.card{background-color:var(--ifm-card-background-color);border-radius:var(--ifm-card-border-radius);box-shadow:var(--ifm-global-shadow-lw);display:flex;flex-direction:column;overflow:hidden}.card__image{padding-top:var(--ifm-card-vertical-spacing)}.card__image:first-child{padding-top:0}.card__body,.card__footer,.card__header{padding:var(--ifm-card-vertical-spacing) var(--ifm-card-horizontal-spacing)}.card__body:not(:last-child),.card__footer:not(:last-child),.card__header:not(:last-child){padding-bottom:0}.card__body>:last-child,.card__footer>:last-child,.card__header>:last-child{margin-bottom:0}.card__footer{margin-top:auto}.table-of-contents{font-size:.8rem;margin-bottom:0;padding:var(--ifm-toc-padding-vertical) 0}.table-of-contents,.table-of-contents ul{list-style:none;padding-left:var(--ifm-toc-padding-horizontal)}.table-of-contents li{margin:var(--ifm-toc-padding-vertical) var(--ifm-toc-padding-horizontal)}.table-of-contents__left-border{border-left:1px solid var(--ifm-toc-border-color)}.table-of-contents__link{color:var(--ifm-toc-link-color);display:block}.table-of-contents__link--active,.table-of-contents__link--active code,.table-of-contents__link:hover,.table-of-contents__link:hover code{color:var(--ifm-color-primary);text-decoration:none}.close{color:var(--ifm-color-black);float:right;font-size:1.5rem;font-weight:var(--ifm-font-weight-bold);line-height:1;opacity:.5;padding:1rem;transition:opacity var(--ifm-transition-fast) var(--ifm-transition-timing-default)}.close:hover{opacity:.7}.close:focus,.theme-code-block-highlighted-line .codeLineNumber_Tfdd:before{opacity:.8}.dropdown{display:inline-flex;font-weight:var(--ifm-dropdown-font-weight);position:relative;vertical-align:top}.dropdown--hoverable:hover .dropdown__menu,.dropdown--show .dropdown__menu{opacity:1;pointer-events:all;transform:translateY(-1px);visibility:visible}#nprogress,.dropdown__menu,.navbar__item.dropdown .navbar__link:not([href]){pointer-events:none}.dropdown--right .dropdown__menu{left:inherit;right:0}.dropdown--nocaret .navbar__link:after{content:none!important}.dropdown__menu{background-color:var(--ifm-dropdown-background-color);border-radius:var(--ifm-global-radius);box-shadow:var(--ifm-global-shadow-md);left:0;max-height:80vh;min-width:10rem;opacity:0;overflow-y:auto;padding:.5rem;position:absolute;top:calc(100% - var(--ifm-navbar-item-padding-vertical) + .3rem);transform:translateY(-.625rem);transition-duration:var(--ifm-transition-fast);transition-property:opacity,transform,visibility;transition-timing-function:var(--ifm-transition-timing-default);visibility:hidden;z-index:var(--ifm-z-index-dropdown)}.sidebar_re4s,.tableOfContents_bqdL{max-height:calc(100vh - var(--ifm-navbar-height) - 2rem);overflow-y:auto}.menu__caret,.menu__link,.menu__list-item-collapsible{border-radius:.25rem;transition:background var(--ifm-transition-fast) 
var(--ifm-transition-timing-default)}.dropdown__link{border-radius:.25rem;color:var(--ifm-dropdown-link-color);display:block;font-size:.875rem;margin-top:.2rem;padding:.25rem .5rem;white-space:nowrap}.dropdown__link--active,.dropdown__link:hover{background-color:var(--ifm-dropdown-hover-background-color);color:var(--ifm-dropdown-link-color);text-decoration:none}.dropdown__link--active,.dropdown__link--active:hover{--ifm-dropdown-link-color:var(--ifm-link-color)}.dropdown>.navbar__link:after{border-color:currentcolor #0000;border-style:solid;border-width:.4em .4em 0;content:"";margin-left:.3em;position:relative;top:2px;transform:translateY(-50%)}.footer{background-color:var(--ifm-footer-background-color);color:var(--ifm-footer-color);padding:var(--ifm-footer-padding-vertical) var(--ifm-footer-padding-horizontal)}.footer--dark{--ifm-footer-background-color:#303846;--ifm-footer-color:var(--ifm-footer-link-color);--ifm-footer-link-color:var(--ifm-color-secondary);--ifm-footer-title-color:var(--ifm-color-white)}.footer__links{margin-bottom:1rem}.footer__link-item{color:var(--ifm-footer-link-color);line-height:2}.footer__link-item:hover{color:var(--ifm-footer-link-hover-color)}.footer__link-separator{margin:0 var(--ifm-footer-link-horizontal-spacing)}.footer__logo{margin-top:1rem;max-width:var(--ifm-footer-logo-max-width)}.footer__title{color:var(--ifm-footer-title-color);font:700 var(--ifm-h4-font-size)/var(--ifm-heading-line-height) var(--ifm-font-family-base);margin-bottom:var(--ifm-heading-margin-bottom)}.menu,.navbar__link{font-weight:var(--ifm-font-weight-semibold)}.docItemContainer_Djhp article>:first-child,.docItemContainer_Djhp header+*,.footer__item{margin-top:0}.admonitionContent_S0QG>:last-child,.collapsibleContent_i85q>:last-child,.footer__items{margin-bottom:0}.codeBlockStandalone_MEMb,[type=checkbox]{padding:0}.hero{align-items:center;background-color:var(--ifm-hero-background-color);color:var(--ifm-hero-text-color);display:flex;padding:4rem 2rem}.hero--primary{--ifm-hero-background-color:var(--ifm-color-primary);--ifm-hero-text-color:var(--ifm-font-color-base-inverse)}.hero--dark{--ifm-hero-background-color:#303846;--ifm-hero-text-color:var(--ifm-color-white)}.hero__title,.title_f1Hy{font-size:3rem}.hero__subtitle{font-size:1.5rem}.menu__list{margin:0;padding-left:0}.menu__caret,.menu__link{padding:var(--ifm-menu-link-padding-vertical) var(--ifm-menu-link-padding-horizontal)}.menu__list .menu__list{flex:0 0 100%;margin-top:.25rem;padding-left:var(--ifm-menu-link-padding-horizontal)}.menu__list-item:not(:first-child){margin-top:.25rem}.menu__list-item--collapsed .menu__list{height:0;overflow:hidden}.details_lb9f[data-collapsed=false].isBrowser_bmU9>summary:before,.details_lb9f[open]:not(.isBrowser_bmU9)>summary:before,.menu__list-item--collapsed .menu__caret:before,.menu__list-item--collapsed .menu__link--sublist:after{transform:rotate(90deg)}.menu__list-item-collapsible{display:flex;flex-wrap:wrap;position:relative}.menu__caret:hover,.menu__link:hover,.menu__list-item-collapsible--active,.menu__list-item-collapsible:hover{background:var(--ifm-menu-color-background-hover)}.menu__list-item-collapsible .menu__link--active,.menu__list-item-collapsible 
.menu__link:hover{background:none!important}.menu__caret,.menu__link{align-items:center;display:flex}.menu__link{color:var(--ifm-menu-color);flex:1;line-height:1.25}.menu__link:hover{color:var(--ifm-menu-color);text-decoration:none}.menu__caret:before,.menu__link--sublist-caret:after{content:"";height:1.25rem;transform:rotate(180deg);transition:transform var(--ifm-transition-fast) linear;width:1.25rem;filter:var(--ifm-menu-link-sublist-icon-filter)}.menu__link--sublist-caret:after{background:var(--ifm-menu-link-sublist-icon) 50%/2rem 2rem;margin-left:auto;min-width:1.25rem}.menu__link--active,.menu__link--active:hover{color:var(--ifm-menu-color-active)}.navbar__brand,.navbar__link{color:var(--ifm-navbar-link-color)}.menu__link--active:not(.menu__link--sublist){background-color:var(--ifm-menu-color-background-active)}.menu__caret:before{background:var(--ifm-menu-link-sublist-icon) 50%/2rem 2rem}.navbar--dark,html[data-theme=dark]{--ifm-menu-link-sublist-icon-filter:invert(100%) sepia(94%) saturate(17%) hue-rotate(223deg) brightness(104%) contrast(98%)}.navbar{background-color:var(--ifm-navbar-background-color);box-shadow:var(--ifm-navbar-shadow);height:var(--ifm-navbar-height);padding:var(--ifm-navbar-padding-vertical) var(--ifm-navbar-padding-horizontal)}.navbar,.navbar>.container,.navbar>.container-fluid{display:flex}.navbar--fixed-top{position:sticky;top:0;z-index:var(--ifm-z-index-fixed)}.navbar-sidebar,.navbar-sidebar__backdrop{bottom:0;opacity:0;position:fixed;transition-duration:var(--ifm-transition-fast);transition-timing-function:ease-in-out;left:0;top:0;visibility:hidden}.navbar__inner{display:flex;flex-wrap:wrap;justify-content:space-between;width:100%}.navbar__brand{align-items:center;display:flex;margin-right:1rem;min-width:0}.navbar__brand:hover{color:var(--ifm-navbar-link-hover-color);text-decoration:none}.announcementBarContent_xLdY,.navbar__title{flex:1 1 auto}.navbar__toggle{display:none;margin-right:.5rem}.navbar__logo{flex:0 0 auto;height:2rem;margin-right:.5rem}.navbar__items{align-items:center;display:flex;flex:1;min-width:0}.navbar__items--center{flex:0 0 auto}.navbar__items--center .navbar__brand{margin:0}.navbar__items--center+.navbar__items--right{flex:1}.navbar__items--right{flex:0 0 auto;justify-content:flex-end}.navbar__items--right>:last-child{padding-right:0}.navbar__item{display:inline-block;padding:var(--ifm-navbar-item-padding-vertical) var(--ifm-navbar-item-padding-horizontal)}.navbar__link--active,.navbar__link:hover{color:var(--ifm-navbar-link-hover-color);text-decoration:none}.navbar--dark,.navbar--primary{--ifm-menu-color:var(--ifm-color-gray-300);--ifm-navbar-link-color:var(--ifm-color-gray-100);--ifm-navbar-search-input-background-color:#ffffff1a;--ifm-navbar-search-input-placeholder-color:#ffffff80;color:var(--ifm-color-white)}.navbar--dark{--ifm-navbar-background-color:#242526;--ifm-menu-color-background-active:#ffffff0d;--ifm-navbar-search-input-color:var(--ifm-color-white)}.navbar--primary{--ifm-navbar-background-color:var(--ifm-color-primary);--ifm-navbar-link-hover-color:var(--ifm-color-white);--ifm-menu-color-active:var(--ifm-color-white);--ifm-navbar-search-input-color:var(--ifm-color-emphasis-500)}.navbar__search-input{-webkit-appearance:none;appearance:none;background:var(--ifm-navbar-search-input-background-color) var(--ifm-navbar-search-input-icon) no-repeat .75rem center/1rem 1rem;border:none;border-radius:2rem;color:var(--ifm-navbar-search-input-color);cursor:text;display:inline-block;font-size:.9rem;height:2rem;padding:0 .5rem 0 
2.25rem;width:12.5rem}.navbar__search-input::placeholder{color:var(--ifm-navbar-search-input-placeholder-color)}.navbar-sidebar{background-color:var(--ifm-navbar-background-color);box-shadow:var(--ifm-global-shadow-md);transform:translate3d(-100%,0,0);transition-property:opacity,visibility,transform;width:var(--ifm-navbar-sidebar-width)}.navbar-sidebar--show .navbar-sidebar,.navbar-sidebar__items{transform:translateZ(0)}.navbar-sidebar--show .navbar-sidebar,.navbar-sidebar--show .navbar-sidebar__backdrop{opacity:1;visibility:visible}.navbar-sidebar__backdrop{background-color:#0009;right:0;transition-property:opacity,visibility}.navbar-sidebar__brand{align-items:center;box-shadow:var(--ifm-navbar-shadow);display:flex;flex:1;height:var(--ifm-navbar-height);padding:var(--ifm-navbar-padding-vertical) var(--ifm-navbar-padding-horizontal)}.navbar-sidebar__items{display:flex;height:calc(100% - var(--ifm-navbar-height));transition:transform var(--ifm-transition-fast) ease-in-out}.navbar-sidebar__items--show-secondary{transform:translate3d(calc((var(--ifm-navbar-sidebar-width))*-1),0,0)}.navbar-sidebar__item{flex-shrink:0;padding:.5rem;width:calc(var(--ifm-navbar-sidebar-width))}.navbar-sidebar__back{background:var(--ifm-menu-color-background-active);font-size:15px;font-weight:var(--ifm-button-font-weight);margin:0 0 .2rem -.5rem;padding:.6rem 1.5rem;position:relative;text-align:left;top:-.5rem;width:calc(100% + 1rem)}.navbar-sidebar__close{display:flex;margin-left:auto}.pagination{column-gap:var(--ifm-pagination-page-spacing);display:flex;font-size:var(--ifm-pagination-font-size);padding-left:0}.pagination--sm{--ifm-pagination-font-size:0.8rem;--ifm-pagination-padding-horizontal:0.8rem;--ifm-pagination-padding-vertical:0.2rem}.pagination--lg{--ifm-pagination-font-size:1.2rem;--ifm-pagination-padding-horizontal:1.2rem;--ifm-pagination-padding-vertical:0.3rem}.pagination__item{display:inline-flex}.pagination__item>span{padding:var(--ifm-pagination-padding-vertical)}.pagination__item--active .pagination__link{color:var(--ifm-pagination-color-active)}.pagination__item--active .pagination__link,.pagination__item:not(.pagination__item--active):hover .pagination__link{background:var(--ifm-pagination-item-active-background)}.pagination__item--disabled,.pagination__item[disabled]{opacity:.25;pointer-events:none}.pagination__link{border-radius:var(--ifm-pagination-border-radius);color:var(--ifm-font-color-base);display:inline-block;padding:var(--ifm-pagination-padding-vertical) var(--ifm-pagination-padding-horizontal);transition:background var(--ifm-transition-fast) var(--ifm-transition-timing-default)}.pagination__link:hover,.sidebarItemLink_mo7H:hover{text-decoration:none}.pagination-nav{grid-gap:var(--ifm-spacing-horizontal);display:grid;gap:var(--ifm-spacing-horizontal);grid-template-columns:repeat(2,1fr)}.pagination-nav__link{border:1px solid var(--ifm-color-emphasis-300);border-radius:var(--ifm-pagination-nav-border-radius);display:block;height:100%;line-height:var(--ifm-heading-line-height);padding:var(--ifm-global-spacing);transition:border-color var(--ifm-transition-fast) var(--ifm-transition-timing-default)}.pagination-nav__link:hover{border-color:var(--ifm-pagination-nav-color-hover);text-decoration:none}.pagination-nav__link--next{grid-column:2/3;text-align:right}.pagination-nav__label{font-size:var(--ifm-h4-font-size);font-weight:var(--ifm-heading-font-weight);word-break:break-word}.pagination-nav__link--prev .pagination-nav__label:before{content:"« "}.pagination-nav__link--next 
.pagination-nav__label:after{content:" »"}.pagination-nav__sublabel{color:var(--ifm-color-content-secondary);font-size:var(--ifm-h5-font-size);font-weight:var(--ifm-font-weight-semibold);margin-bottom:.25rem}.pills__item,.sidebarItemTitle_pO2u,.tabs{font-weight:var(--ifm-font-weight-bold)}.pills{display:flex;gap:var(--ifm-pills-spacing);padding-left:0}.pills__item{border-radius:.5rem;cursor:pointer;display:inline-block;padding:.25rem 1rem;transition:background var(--ifm-transition-fast) var(--ifm-transition-timing-default)}.tabs,:not(.containsTaskList_mC6p>li)>.containsTaskList_mC6p{padding-left:0}.pills__item--active{color:var(--ifm-pills-color-active)}.pills__item--active,.pills__item:not(.pills__item--active):hover{background:var(--ifm-pills-color-background-active)}.pills--block{justify-content:stretch}.pills--block .pills__item{flex-grow:1;text-align:center}.tabs{color:var(--ifm-tabs-color);display:flex;margin-bottom:0;overflow-x:auto}.tabs__item{border-bottom:3px solid #0000;border-radius:var(--ifm-global-radius);cursor:pointer;display:inline-flex;padding:var(--ifm-tabs-padding-vertical) var(--ifm-tabs-padding-horizontal);transition:background-color var(--ifm-transition-fast) var(--ifm-transition-timing-default)}.tabs__item--active{border-bottom-color:var(--ifm-tabs-color-active-border);border-bottom-left-radius:0;border-bottom-right-radius:0;color:var(--ifm-tabs-color-active)}.tabs__item:hover{background-color:var(--ifm-hover-overlay)}.tabs--block{justify-content:stretch}.tabs--block .tabs__item{flex-grow:1;justify-content:center}html[data-theme=dark]{--ifm-color-scheme:dark;--ifm-color-emphasis-0:var(--ifm-color-gray-1000);--ifm-color-emphasis-100:var(--ifm-color-gray-900);--ifm-color-emphasis-200:var(--ifm-color-gray-800);--ifm-color-emphasis-300:var(--ifm-color-gray-700);--ifm-color-emphasis-400:var(--ifm-color-gray-600);--ifm-color-emphasis-600:var(--ifm-color-gray-400);--ifm-color-emphasis-700:var(--ifm-color-gray-300);--ifm-color-emphasis-800:var(--ifm-color-gray-200);--ifm-color-emphasis-900:var(--ifm-color-gray-100);--ifm-color-emphasis-1000:var(--ifm-color-gray-0);--ifm-background-color:#1b1b1d;--ifm-background-surface-color:#242526;--ifm-hover-overlay:#ffffff0d;--ifm-color-content:#e3e3e3;--ifm-color-content-secondary:#fff;--ifm-breadcrumb-separator-filter:invert(64%) sepia(11%) saturate(0%) hue-rotate(149deg) brightness(99%) contrast(95%);--ifm-code-background:#ffffff1a;--ifm-scrollbar-track-background-color:#444;--ifm-scrollbar-thumb-background-color:#686868;--ifm-scrollbar-thumb-hover-background-color:#7a7a7a;--ifm-table-stripe-background:#ffffff12;--ifm-toc-border-color:var(--ifm-color-emphasis-200);--ifm-color-primary-contrast-background:#102445;--ifm-color-primary-contrast-foreground:#ebf2fc;--ifm-color-secondary-contrast-background:#474748;--ifm-color-secondary-contrast-foreground:#fdfdfe;--ifm-color-success-contrast-background:#003100;--ifm-color-success-contrast-foreground:#e6f6e6;--ifm-color-info-contrast-background:#193c47;--ifm-color-info-contrast-foreground:#eef9fd;--ifm-color-warning-contrast-background:#4d3800;--ifm-color-warning-contrast-foreground:#fff8e6;--ifm-color-danger-contrast-background:#4b1113;--ifm-color-danger-contrast-foreground:#ffebec}#nprogress .bar{background:var(--docusaurus-progress-bar-color);height:2px;left:0;position:fixed;top:0;width:100%;z-index:1031}#nprogress .peg{box-shadow:0 0 10px var(--docusaurus-progress-bar-color),0 0 5px 
var(--docusaurus-progress-bar-color);height:100%;opacity:1;position:absolute;right:0;transform:rotate(3deg) translateY(-4px);width:100px}[data-theme=dark]{--ifm-color-primary:#25c2a0;--ifm-color-primary-dark:#21af90;--ifm-color-primary-darker:#1fa588;--ifm-color-primary-darkest:#1a8870;--ifm-color-primary-light:#29d5b0;--ifm-color-primary-lighter:#32d8b4;--ifm-color-primary-lightest:#4fddbf}.docusaurus-highlight-code-line{background-color:#0000001a;display:block;margin:0 calc(var(--ifm-pre-padding)*-1);padding:0 var(--ifm-pre-padding)}[data-theme=dark] .docusaurus-highlight-code-line{background-color:#0000004d}.skipToContent_fXgn{background-color:var(--ifm-background-surface-color);color:var(--ifm-color-emphasis-900);left:100%;padding:calc(var(--ifm-global-spacing)/2) var(--ifm-global-spacing);position:fixed;top:1rem;z-index:calc(var(--ifm-z-index-fixed) + 1)}.skipToContent_fXgn:focus{box-shadow:var(--ifm-global-shadow-md);left:1rem}.closeButton_CVFx{line-height:0;padding:0}.content_knG7{font-size:85%;padding:5px 0;text-align:center}.content_knG7 a{color:inherit;text-decoration:underline}.announcementBar_mb4j{align-items:center;background-color:var(--ifm-color-white);border-bottom:1px solid var(--ifm-color-emphasis-100);color:var(--ifm-color-black);display:flex;height:var(--docusaurus-announcement-bar-height)}#__docusaurus-base-url-issue-banner-container,.docSidebarContainer_b6E3,.sidebarLogo_isFc,.themedImage_ToTc,[data-theme=dark] .lightToggleIcon_pyhR,[data-theme=light] .darkToggleIcon_wfgR,html[data-announcement-bar-initially-dismissed=true] .announcementBar_mb4j{display:none}.announcementBarPlaceholder_vyr4{flex:0 0 10px}.announcementBarClose_gvF7{align-self:stretch;flex:0 0 30px}.toggle_vylO{height:2rem;width:2rem}.toggleButton_gllP{align-items:center;border-radius:50%;display:flex;height:100%;justify-content:center;transition:background var(--ifm-transition-fast);width:100%}.toggleButton_gllP:hover{background:var(--ifm-color-emphasis-200)}.toggleButtonDisabled_aARS{cursor:not-allowed}.darkNavbarColorModeToggle_X3D1:hover{background:var(--ifm-color-gray-800)}[data-theme=dark] .themedImage--dark_i4oU,[data-theme=light] .themedImage--light_HNdA,html:not([data-theme]) .themedComponent--light_NU7w{display:initial}.iconExternalLink_nPIU{margin-left:.3rem}.iconLanguage_nlXk{margin-right:5px;vertical-align:text-bottom}.navbarHideable_m1mJ{transition:transform var(--ifm-transition-fast) ease}.navbarHidden_jGov{transform:translate3d(0,calc(-100% - 2px),0)}.errorBoundaryError_a6uf{color:red;white-space:pre-wrap}body:not(.navigation-with-keyboard) :not(input):focus{outline:0}.footerLogoLink_BH7S{opacity:.5;transition:opacity var(--ifm-transition-fast) var(--ifm-transition-timing-default)}.footerLogoLink_BH7S:hover,.hash-link:focus,:hover>.hash-link{opacity:1}.mainWrapper_z2l0{display:flex;flex:1 0 auto;flex-direction:column}.docusaurus-mt-lg{margin-top:3rem}#__docusaurus{display:flex;flex-direction:column;min-height:100%}.sidebar_re4s{position:sticky;top:calc(var(--ifm-navbar-height) + 2rem)}.sidebarItemTitle_pO2u{font-size:var(--ifm-h3-font-size)}.container_mt6G,.sidebarItemList_Yudw{font-size:.9rem}.sidebarItem__DBe{margin-top:.7rem}.sidebarItemLink_mo7H{color:var(--ifm-font-color-base);display:block}.sidebarItemLinkActive_I1ZP{color:var(--ifm-color-primary)!important}.backToTopButton_sjWU{background-color:var(--ifm-color-emphasis-200);border-radius:50%;bottom:1.3rem;box-shadow:var(--ifm-global-shadow-lw);height:3rem;opacity:0;position:fixed;right:1.3rem;transform:scale(0);transition:all 
var(--ifm-transition-fast) var(--ifm-transition-timing-default);visibility:hidden;width:3rem;z-index:calc(var(--ifm-z-index-fixed) - 1)}.buttonGroup__atx button,.codeBlockContainer_Ckt0{background:var(--prism-background-color);color:var(--prism-color)}.backToTopButton_sjWU:after{background-color:var(--ifm-color-emphasis-1000);content:" ";display:inline-block;height:100%;-webkit-mask:var(--ifm-menu-link-sublist-icon) 50%/2rem 2rem no-repeat;mask:var(--ifm-menu-link-sublist-icon) 50%/2rem 2rem no-repeat;width:100%}.backToTopButtonShow_xfvO{opacity:1;transform:scale(1);visibility:visible}[data-theme=dark]:root{--docusaurus-collapse-button-bg:#ffffff0d;--docusaurus-collapse-button-bg-hover:#ffffff1a}.collapseSidebarButton_PEFL{display:none;margin:0}.docMainContainer_gTbr,.docPage__5DB{display:flex;width:100%}.docPage__5DB{flex:1 0}.docsWrapper_BCFX{display:flex;flex:1 0 auto}.authorCol_Hf19{flex-grow:1!important;max-width:inherit!important}.imageOnlyAuthorRow_pa_O{display:flex;flex-flow:row wrap}.imageOnlyAuthorCol_G86a{margin-left:.3rem;margin-right:.3rem}.codeBlockContainer_Ckt0{border-radius:var(--ifm-code-border-radius);box-shadow:var(--ifm-global-shadow-lw);margin-bottom:var(--ifm-leading)}.codeBlockContent_biex{border-radius:inherit;direction:ltr;position:relative}.codeBlockTitle_Ktv7{border-bottom:1px solid var(--ifm-color-emphasis-300);border-top-left-radius:inherit;border-top-right-radius:inherit;font-size:var(--ifm-code-font-size);font-weight:500;padding:.75rem var(--ifm-pre-padding)}.codeBlock_bY9V{--ifm-pre-background:var(--prism-background-color);margin:0;padding:0}.codeBlockTitle_Ktv7+.codeBlockContent_biex .codeBlock_bY9V{border-top-left-radius:0;border-top-right-radius:0}.codeBlockLines_e6Vv{float:left;font:inherit;min-width:100%;padding:var(--ifm-pre-padding)}.codeBlockLinesWithNumbering_o6Pm{display:table;padding:var(--ifm-pre-padding) 0}.buttonGroup__atx{column-gap:.2rem;display:flex;position:absolute;right:calc(var(--ifm-pre-padding)/2);top:calc(var(--ifm-pre-padding)/2)}.buttonGroup__atx button{align-items:center;border:1px solid var(--ifm-color-emphasis-300);border-radius:var(--ifm-global-radius);display:flex;line-height:0;opacity:0;padding:.4rem;transition:opacity var(--ifm-transition-fast) ease-in-out}.buttonGroup__atx button:focus-visible,.buttonGroup__atx button:hover{opacity:1!important}.theme-code-block:hover .buttonGroup__atx button{opacity:.4}.iconEdit_Z9Sw{margin-right:.3em;vertical-align:sub}:where(:root){--docusaurus-highlighted-code-line-bg:#484d5b}:where([data-theme=dark]){--docusaurus-highlighted-code-line-bg:#646464}.theme-code-block-highlighted-line{background-color:var(--docusaurus-highlighted-code-line-bg);display:block;margin:0 calc(var(--ifm-pre-padding)*-1);padding:0 var(--ifm-pre-padding)}.codeLine_lJS_{counter-increment:a;display:table-row}.codeLineNumber_Tfdd{background:var(--ifm-pre-background);display:table-cell;left:0;overflow-wrap:normal;padding:0 var(--ifm-pre-padding);position:sticky;text-align:right;width:1%}.codeLineNumber_Tfdd:before{content:counter(a);opacity:.4}.codeLineContent_feaV{padding-right:var(--ifm-pre-padding)}.tag_zVej{border:1px solid var(--docusaurus-tag-list-border);transition:border var(--ifm-transition-fast)}.tag_zVej:hover{--docusaurus-tag-list-border:var(--ifm-link-color);text-decoration:none}.tagRegular_sFm0{border-radius:var(--ifm-global-radius);font-size:90%;padding:.2rem .5rem .3rem}.tagWithCount_h2kH{align-items:center;border-left:0;display:flex;padding:0 .5rem 0 
1rem;position:relative}.tagWithCount_h2kH:after,.tagWithCount_h2kH:before{border:1px solid var(--docusaurus-tag-list-border);content:"";position:absolute;top:50%;transition:inherit}.tagWithCount_h2kH:before{border-bottom:0;border-right:0;height:1.18rem;right:100%;transform:translate(50%,-50%) rotate(-45deg);width:1.18rem}.tagWithCount_h2kH:after{border-radius:50%;height:.5rem;left:0;transform:translateY(-50%);width:.5rem}.tagWithCount_h2kH span{background:var(--ifm-color-secondary);border-radius:var(--ifm-global-radius);color:var(--ifm-color-black);font-size:.7rem;line-height:1.2;margin-left:.3rem;padding:.1rem .4rem}.tag_Nnez{display:inline-block;margin:.5rem .5rem 0 1rem}.theme-code-block:hover .copyButtonCopied_obH4{opacity:1!important}.copyButtonIcons_eSgA{height:1.125rem;position:relative;width:1.125rem}.copyButtonIcon_y97N,.copyButtonSuccessIcon_LjdS{fill:currentColor;height:inherit;left:0;opacity:inherit;position:absolute;top:0;transition:all var(--ifm-transition-fast) ease;width:inherit}.copyButtonSuccessIcon_LjdS{color:#00d600;left:50%;opacity:0;top:50%;transform:translate(-50%,-50%) scale(.33)}.copyButtonCopied_obH4 .copyButtonIcon_y97N{opacity:0;transform:scale(.33)}.copyButtonCopied_obH4 .copyButtonSuccessIcon_LjdS{opacity:1;transform:translate(-50%,-50%) scale(1);transition-delay:75ms}.tags_jXut{display:inline}.tag_QGVx{display:inline-block;margin:0 .4rem .5rem 0}.lastUpdated_vwxv{font-size:smaller;font-style:italic;margin-top:.2rem}.tocCollapsibleButton_TO0P{align-items:center;display:flex;font-size:inherit;justify-content:space-between;padding:.4rem .8rem;width:100%}.tocCollapsibleButton_TO0P:after{background:var(--ifm-menu-link-sublist-icon) 50% 50%/2rem 2rem no-repeat;content:"";filter:var(--ifm-menu-link-sublist-icon-filter);height:1.25rem;transform:rotate(180deg);transition:transform var(--ifm-transition-fast);width:1.25rem}.tocCollapsibleButtonExpanded_MG3E:after,.tocCollapsibleExpanded_sAul{transform:none}.tocCollapsible_ETCw{background-color:var(--ifm-menu-color-background-active);border-radius:var(--ifm-global-radius);margin:1rem 0}.tocCollapsibleContent_vkbj>ul{border-left:none;border-top:1px solid var(--ifm-color-emphasis-300);font-size:15px;padding:.2rem 0}.tocCollapsibleContent_vkbj ul li{margin:.4rem .8rem}.tocCollapsibleContent_vkbj a{display:block}.wordWrapButtonIcon_Bwma{height:1.2rem;width:1.2rem}.details_lb9f{--docusaurus-details-summary-arrow-size:0.38rem;--docusaurus-details-transition:transform 200ms ease;--docusaurus-details-decoration-color:grey}.details_lb9f>summary{cursor:pointer;padding-left:1rem;position:relative}.details_lb9f>summary::-webkit-details-marker{display:none}.details_lb9f>summary:before{border-color:#0000 #0000 #0000 var(--docusaurus-details-decoration-color);border-style:solid;border-width:var(--docusaurus-details-summary-arrow-size);content:"";left:0;position:absolute;top:.45rem;transform:rotate(0);transform-origin:calc(var(--docusaurus-details-summary-arrow-size)/2) 50%;transition:var(--docusaurus-details-transition)}.collapsibleContent_i85q{border-top:1px solid var(--docusaurus-details-decoration-color);margin-top:1rem;padding-top:1rem}.details_b_Ee{--docusaurus-details-decoration-color:var(--ifm-alert-border-color);--docusaurus-details-transition:transform var(--ifm-transition-fast) ease;border:1px solid var(--ifm-alert-border-color);margin:0 0 var(--ifm-spacing-vertical)}.anchorWithStickyNavbar_LWe7{scroll-margin-top:calc(var(--ifm-navbar-height) + 
.5rem)}.anchorWithHideOnScrollNavbar_WYt5{scroll-margin-top:.5rem}.hash-link{opacity:0;padding-left:.5rem;transition:opacity var(--ifm-transition-fast);-webkit-user-select:none;user-select:none}.hash-link:before{content:"#"}.img_ev3q{height:auto}.admonition_LlT9{margin-bottom:1em}.admonitionHeading_tbUL{font:var(--ifm-heading-font-weight) var(--ifm-h5-font-size)/var(--ifm-heading-line-height) var(--ifm-heading-font-family);margin-bottom:.3rem}.admonitionHeading_tbUL code{text-transform:none}.admonitionIcon_kALy{display:inline-block;margin-right:.4em;vertical-align:middle}.admonitionIcon_kALy svg{fill:var(--ifm-alert-foreground-color);display:inline-block;height:1.6em;width:1.6em}.blogPostFooterDetailsFull_mRVl{flex-direction:column}.tableOfContents_bqdL{position:sticky;top:calc(var(--ifm-navbar-height) + 1rem)}.breadcrumbHomeIcon_YNFT{height:1.1rem;position:relative;top:1px;vertical-align:top;width:1.1rem}.breadcrumbsContainer_Z_bl{--ifm-breadcrumb-size-multiplier:0.8;margin-bottom:.8rem}@media (min-width:997px){.collapseSidebarButton_PEFL,.expandButton_m80_{background-color:var(--docusaurus-collapse-button-bg)}:root{--docusaurus-announcement-bar-height:30px}.announcementBarClose_gvF7,.announcementBarPlaceholder_vyr4{flex-basis:50px}.searchBox_ZlJk{padding:var(--ifm-navbar-item-padding-vertical) var(--ifm-navbar-item-padding-horizontal)}.collapseSidebarButton_PEFL{border:1px solid var(--ifm-toc-border-color);border-radius:0;bottom:0;display:block!important;height:40px;position:sticky}.collapseSidebarButtonIcon_kv0_{margin-top:4px;transform:rotate(180deg)}.expandButtonIcon_BlDH,[dir=rtl] .collapseSidebarButtonIcon_kv0_{transform:rotate(0)}.collapseSidebarButton_PEFL:focus,.collapseSidebarButton_PEFL:hover,.expandButton_m80_:focus,.expandButton_m80_:hover{background-color:var(--docusaurus-collapse-button-bg-hover)}.menuHtmlItem_M9Kj{padding:var(--ifm-menu-link-padding-vertical) var(--ifm-menu-link-padding-horizontal)}.menu_SIkG{flex-grow:1;padding:.5rem}@supports (scrollbar-gutter:stable){.menu_SIkG{padding:.5rem 0 .5rem .5rem;scrollbar-gutter:stable}}.menuWithAnnouncementBar_GW3s{margin-bottom:var(--docusaurus-announcement-bar-height)}.sidebar_njMd{display:flex;flex-direction:column;height:100%;padding-top:var(--ifm-navbar-height);width:var(--doc-sidebar-width)}.sidebarWithHideableNavbar_wUlq{padding-top:0}.sidebarHidden_VK0M{opacity:0;visibility:hidden}.sidebarLogo_isFc{align-items:center;color:inherit!important;display:flex!important;margin:0 var(--ifm-navbar-padding-horizontal);max-height:var(--ifm-navbar-height);min-height:var(--ifm-navbar-height);text-decoration:none!important}.sidebarLogo_isFc img{height:2rem;margin-right:.5rem}.expandButton_m80_{align-items:center;display:flex;height:100%;justify-content:center;position:absolute;right:0;top:0;transition:background-color var(--ifm-transition-fast) ease;width:100%}[dir=rtl] .expandButtonIcon_BlDH{transform:rotate(180deg)}.docSidebarContainer_b6E3{border-right:1px solid var(--ifm-toc-border-color);-webkit-clip-path:inset(0);clip-path:inset(0);display:block;margin-top:calc(var(--ifm-navbar-height)*-1);transition:width var(--ifm-transition-fast) ease;width:var(--doc-sidebar-width);will-change:width}.docSidebarContainerHidden_b3ry{cursor:pointer;width:var(--doc-sidebar-hidden-width)}.sidebarViewport_Xe31{height:100%;max-height:100vh;position:sticky;top:0}.docMainContainer_gTbr{flex-grow:1;max-width:calc(100% - var(--doc-sidebar-width))}.docMainContainerEnhanced_Uz_u{max-width:calc(100% - 
var(--doc-sidebar-hidden-width))}.docItemWrapperEnhanced_czyv{max-width:calc(var(--ifm-container-width) + var(--doc-sidebar-width))!important}.lastUpdated_vwxv{text-align:right}.tocMobile_ITEo{display:none}.docItemCol_VOVn{max-width:75%!important}}@media (min-width:1440px){.container{max-width:var(--ifm-container-width-xl)}}@media (max-width:996px){.col{--ifm-col-width:100%;flex-basis:var(--ifm-col-width);margin-left:0}.footer{--ifm-footer-padding-horizontal:0}.colorModeToggle_DEke,.footer__link-separator,.navbar__item,.sidebar_re4s,.tableOfContents_bqdL{display:none}.footer__col{margin-bottom:calc(var(--ifm-spacing-vertical)*3)}.footer__link-item{display:block}.hero{padding-left:0;padding-right:0}.navbar>.container,.navbar>.container-fluid{padding:0}.navbar__toggle{display:inherit}.navbar__search-input{width:9rem}.pills--block,.tabs--block{flex-direction:column}.searchBox_ZlJk{position:absolute;right:var(--ifm-navbar-padding-horizontal)}.docItemContainer_F8PC{padding:0 .3rem}}@media (max-width:576px){.markdown h1:first-child{--ifm-h1-font-size:2rem}.markdown>h2{--ifm-h2-font-size:1.5rem}.markdown>h3{--ifm-h3-font-size:1.25rem}.title_f1Hy{font-size:2rem}}@media (hover:hover){.backToTopButton_sjWU:hover{background-color:var(--ifm-color-emphasis-300)}}@media (pointer:fine){.thin-scrollbar{scrollbar-width:thin}.thin-scrollbar::-webkit-scrollbar{height:var(--ifm-scrollbar-size);width:var(--ifm-scrollbar-size)}.thin-scrollbar::-webkit-scrollbar-track{background:var(--ifm-scrollbar-track-background-color);border-radius:10px}.thin-scrollbar::-webkit-scrollbar-thumb{background:var(--ifm-scrollbar-thumb-background-color);border-radius:10px}.thin-scrollbar::-webkit-scrollbar-thumb:hover{background:var(--ifm-scrollbar-thumb-hover-background-color)}}@media (prefers-reduced-motion:reduce){:root{--ifm-transition-fast:0ms;--ifm-transition-slow:0ms}}@media print{.announcementBar_mb4j,.footer,.menu,.navbar,.pagination-nav,.table-of-contents,.tocMobile_ITEo{display:none}.tabs{page-break-inside:avoid}.codeBlockLines_e6Vv{white-space:pre-wrap}}
\ No newline at end of file
diff --git a/docs/static/img/3-tiers.png b/assets/images/3-tiers-fb96effa6ad8f8f08b594f3455628305.png
similarity index 100%
rename from docs/static/img/3-tiers.png
rename to assets/images/3-tiers-fb96effa6ad8f8f08b594f3455628305.png
diff --git a/docs/static/img/abstractions-vs-simplicity.png b/assets/images/abstractions-vs-simplicity-a30a663aac02326729e09af03290388e.png
similarity index 100%
rename from docs/static/img/abstractions-vs-simplicity.png
rename to assets/images/abstractions-vs-simplicity-a30a663aac02326729e09af03290388e.png
diff --git a/docs/static/img/docs/balance.png b/assets/images/balance-fd441003eba7cf60655af6099ee55ce6.png
similarity index 100%
rename from docs/static/img/docs/balance.png
rename to assets/images/balance-fd441003eba7cf60655af6099ee55ce6.png
diff --git a/docs/blog/use-case/blocking-complexity-tree.jpg b/assets/images/blocking-complexity-tree-dd1cde956e00160fe4fadf67d6dd3649.jpg
similarity index 100%
rename from docs/blog/use-case/blocking-complexity-tree.jpg
rename to assets/images/blocking-complexity-tree-dd1cde956e00160fe4fadf67d6dd3649.jpg
diff --git a/docs/blog/is-prisma-better/count-docs.png b/assets/images/count-docs-71e2e829f7c59b9d652603c03c373dea.png
similarity index 100%
rename from docs/blog/is-prisma-better/count-docs.png
rename to assets/images/count-docs-71e2e829f7c59b9d652603c03c373dea.png
diff --git a/docs/blog/pattern-to-reconsider/crab.webp b/assets/images/crab-161f2b8e5ab129c2a175920691a845c0.webp
similarity index 100%
rename from docs/blog/pattern-to-reconsider/crab.webp
rename to assets/images/crab-161f2b8e5ab129c2a175920691a845c0.webp
diff --git a/docs/blog/use-case/deferred-complexity-tree.jpg b/assets/images/deferred-complexity-tree-3407b9e6f355d2e32aacfc0bd7216de4.jpg
similarity index 100%
rename from docs/blog/use-case/deferred-complexity-tree.jpg
rename to assets/images/deferred-complexity-tree-3407b9e6f355d2e32aacfc0bd7216de4.jpg
diff --git a/docs/blog/use-case/library-catalog.webp b/assets/images/library-catalog-37d0f18aa61b71ed77ae72a945f3c1de.webp
similarity index 100%
rename from docs/blog/use-case/library-catalog.webp
rename to assets/images/library-catalog-37d0f18aa61b71ed77ae72a945f3c1de.webp
diff --git a/docs/blog/which-monorepo/monorepo-high-level.png b/assets/images/monorepo-high-level-291b29cc962144a43d78143889ba5d3b.png
similarity index 100%
rename from docs/blog/which-monorepo/monorepo-high-level.png
rename to assets/images/monorepo-high-level-291b29cc962144a43d78143889ba5d3b.png
diff --git a/docs/static/img/monorepo-structure.png b/assets/images/monorepo-structure-d3796dd4b9597a4f74c8c13fcb055511.png
similarity index 100%
rename from docs/static/img/monorepo-structure.png
rename to assets/images/monorepo-structure-d3796dd4b9597a4f74c8c13fcb055511.png
diff --git a/docs/static/img/on-top-of-frameworks.png b/assets/images/on-top-of-frameworks-ae0faae30dd942814098bd544a00e13f.png
similarity index 100%
rename from docs/static/img/on-top-of-frameworks.png
rename to assets/images/on-top-of-frameworks-ae0faae30dd942814098bd544a00e13f.png
diff --git a/docs/blog/is-prisma-better/one-hump.png b/assets/images/one-hump-dbd2860e9cff3ebe16ced6cf7c4ec64f.png
similarity index 100%
rename from docs/blog/is-prisma-better/one-hump.png
rename to assets/images/one-hump-dbd2860e9cff3ebe16ced6cf7c4ec64f.png
diff --git a/docs/blog/is-prisma-better/pg-driver-is-faster.png b/assets/images/pg-driver-is-faster-88ee7217dd06fff1cc35ee2e8ccc3736.png
similarity index 100%
rename from docs/blog/is-prisma-better/pg-driver-is-faster.png
rename to assets/images/pg-driver-is-faster-88ee7217dd06fff1cc35ee2e8ccc3736.png
diff --git a/docs/static/img/practica-logo.png b/assets/images/practica-logo-dec9868d9568eacfa5507f97b16271d8.png
similarity index 100%
rename from docs/static/img/practica-logo.png
rename to assets/images/practica-logo-dec9868d9568eacfa5507f97b16271d8.png
diff --git a/docs/blog/10-masterpiece-articles/selective-unit-tests.png b/assets/images/selective-unit-tests-b5303f3a425ab038c9aede3d14214abc.png
similarity index 100%
rename from docs/blog/10-masterpiece-articles/selective-unit-tests.png
rename to assets/images/selective-unit-tests-b5303f3a425ab038c9aede3d14214abc.png
diff --git a/docs/blog/is-prisma-better/sequelize-log.png b/assets/images/sequelize-log-af147131006e4207620f8e3918724ecc.png
similarity index 100%
rename from docs/blog/is-prisma-better/sequelize-log.png
rename to assets/images/sequelize-log-af147131006e4207620f8e3918724ecc.png
diff --git a/docs/blog/10-masterpiece-articles/spectrum-of-testing.png b/assets/images/spectrum-of-testing-16da74a9b2c05eee95923f75e09bc713.png
similarity index 100%
rename from docs/blog/10-masterpiece-articles/spectrum-of-testing.png
rename to assets/images/spectrum-of-testing-16da74a9b2c05eee95923f75e09bc713.png
diff --git a/docs/blog/is-prisma-better/suite.png b/assets/images/suite-4d046fac9ca9db57eafa55c4a7eac116.png
similarity index 100%
rename from docs/blog/is-prisma-better/suite.png
rename to assets/images/suite-4d046fac9ca9db57eafa55c4a7eac116.png
diff --git a/docs/static/img/tech-stack.png b/assets/images/tech-stack-2703d0573d35db925b7d317e9e2d1827.png
similarity index 100%
rename from docs/static/img/tech-stack.png
rename to assets/images/tech-stack-2703d0573d35db925b7d317e9e2d1827.png
diff --git a/docs/blog/10-masterpiece-articles/the-3-phases.jpeg b/assets/images/the-3-phases-06497437466da49c00ce842bb19d7a6d.jpeg
similarity index 100%
rename from docs/blog/10-masterpiece-articles/the-3-phases.jpeg
rename to assets/images/the-3-phases-06497437466da49c00ce842bb19d7a6d.jpeg
diff --git a/docs/blog/crucial-tests/the-hidden-corners.png b/assets/images/the-hidden-corners-44855c2e5d9184502e1dc72b07d53cef.png
similarity index 100%
rename from docs/blog/crucial-tests/the-hidden-corners.png
rename to assets/images/the-hidden-corners-44855c2e5d9184502e1dc72b07d53cef.png
diff --git a/docs/blog/is-prisma-better/throughput-benchmark.png b/assets/images/throughput-benchmark-91b84b17d860e3769a11be3835d6961a.png
similarity index 100%
rename from docs/blog/is-prisma-better/throughput-benchmark.png
rename to assets/images/throughput-benchmark-91b84b17d860e3769a11be3835d6961a.png
diff --git a/docs/blog/is-prisma-better/two-humps.png b/assets/images/two-humps-c54bed6a1428c1ad0f7e028d10a44206.png
similarity index 100%
rename from docs/blog/is-prisma-better/two-humps.png
rename to assets/images/two-humps-c54bed6a1428c1ad0f7e028d10a44206.png
diff --git a/docs/blog/use-case/use-case-coverage.png b/assets/images/use-case-coverage-3f223674f7783dfc904109647ad99304.png
similarity index 100%
rename from docs/blog/use-case/use-case-coverage.png
rename to assets/images/use-case-coverage-3f223674f7783dfc904109647ad99304.png
diff --git a/docs/blog/use-case/use-code-example.png b/assets/images/use-code-example-6d6c34330ad8a86f7c511123d4d5f654.png
similarity index 100%
rename from docs/blog/use-case/use-code-example.png
rename to assets/images/use-code-example-6d6c34330ad8a86f7c511123d4d5f654.png
diff --git a/assets/js/01a85c17.3889a5e1.js b/assets/js/01a85c17.3889a5e1.js
new file mode 100644
index 00000000..42cc55b9
--- /dev/null
+++ b/assets/js/01a85c17.3889a5e1.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[8209],{6669:(e,t,a)=>{a.d(t,{A:()=>p});var l=a(6540),r=a(53),n=a(9408),s=a(4581),i=a(5489),c=a(1312);const m={sidebar:"sidebar_re4s",sidebarItemTitle:"sidebarItemTitle_pO2u",sidebarItemList:"sidebarItemList_Yudw",sidebarItem:"sidebarItem__DBe",sidebarItemLink:"sidebarItemLink_mo7H",sidebarItemLinkActive:"sidebarItemLinkActive_I1ZP"};function o(e){let{sidebar:t}=e;return l.createElement("aside",{className:"col col--3"},l.createElement("nav",{className:(0,r.A)(m.sidebar,"thin-scrollbar"),"aria-label":(0,c.T)({id:"theme.blog.sidebar.navAriaLabel",message:"Blog recent posts navigation",description:"The ARIA label for recent posts in the blog sidebar"})},l.createElement("div",{className:(0,r.A)(m.sidebarItemTitle,"margin-bottom--md")},t.title),l.createElement("ul",{className:(0,r.A)(m.sidebarItemList,"clean-list")},t.items.map((e=>l.createElement("li",{key:e.permalink,className:m.sidebarItem},l.createElement(i.A,{isNavLink:!0,to:e.permalink,className:m.sidebarItemLink,activeClassName:m.sidebarItemLinkActive},e.title)))))))}var u=a(5600);function g(e){let{sidebar:t}=e;return l.createElement("ul",{className:"menu__list"},t.items.map((e=>l.createElement("li",{key:e.permalink,className:"menu__list-item"},l.createElement(i.A,{isNavLink:!0,to:e.permalink,className:"menu__link",activeClassName:"menu__link--active"},e.title)))))}function b(e){return l.createElement(u.GX,{component:g,props:e})}function d(e){let{sidebar:t}=e;const a=(0,s.l)();return t?.items.length?"mobile"===a?l.createElement(b,{sidebar:t}):l.createElement(o,{sidebar:t}):null}function p(e){const{sidebar:t,toc:a,children:s,...i}=e,c=t&&t.items.length>0;return l.createElement(n.A,i,l.createElement("div",{className:"container margin-vert--lg"},l.createElement("div",{className:"row"},l.createElement(d,{sidebar:t}),l.createElement("main",{className:(0,r.A)("col",{"col--7":c,"col--9 col--offset-1":!c}),itemScope:!0,itemType:"http://schema.org/Blog"},s),a&&l.createElement("div",{className:"col col--2"},a))))}},9158:(e,t,a)=>{a.r(t),a.d(t,{default:()=>p});var l=a(6540),r=a(53),n=a(1312);const s=()=>(0,n.T)({id:"theme.tags.tagsPageTitle",message:"Tags",description:"The title of the tag list page"});var i=a(1003),c=a(7559),m=a(6669),o=a(6133);const u={tag:"tag_Nnez"};function g(e){let{letterEntry:t}=e;return l.createElement("article",null,l.createElement("h2",null,t.letter),l.createElement("ul",{className:"padding--none"},t.tags.map((e=>l.createElement("li",{key:e.permalink,className:u.tag},l.createElement(o.A,e))))),l.createElement("hr",null))}function b(e){let{tags:t}=e;const a=function(e){const t={};return Object.values(e).forEach((e=>{const a=function(e){return e[0].toUpperCase()}(e.label);t[a]??=[],t[a].push(e)})),Object.entries(t).sort(((e,t)=>{let[a]=e,[l]=t;return a.localeCompare(l)})).map((e=>{let[t,a]=e;return{letter:t,tags:a.sort(((e,t)=>e.label.localeCompare(t.label)))}}))}(t);return l.createElement("section",{className:"margin-vert--lg"},a.map((e=>l.createElement(g,{key:e.letter,letterEntry:e}))))}var d=a(1463);function p(e){let{tags:t,sidebar:a}=e;const n=s();return l.createElement(i.e3,{className:(0,r.A)(c.G.wrapper.blogPages,c.G.page.blogTagsListPage)},l.createElement(i.be,{title:n}),l.createElement(d.A,{tag:"blog_tags_list"}),l.createElement(m.A,{sidebar:a},l.createElement("h1",null,n),l.createElement(b,{tags:t})))}},6133:(e,t,a)=>{a.d(t,{A:()=>i});var l=a(6540),r=a(53),n=a(5489);const 
s={tag:"tag_zVej",tagRegular:"tagRegular_sFm0",tagWithCount:"tagWithCount_h2kH"};function i(e){let{permalink:t,label:a,count:i}=e;return l.createElement(n.A,{href:t,className:(0,r.A)(s.tag,i?s.tagWithCount:s.tagRegular)},a,i&&l.createElement("span",null,i))}}}]);
\ No newline at end of file
diff --git a/assets/js/04975d12.2f38dbfa.js b/assets/js/04975d12.2f38dbfa.js
new file mode 100644
index 00000000..7b79ee8f
--- /dev/null
+++ b/assets/js/04975d12.2f38dbfa.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[2933],{3901:a=>{a.exports=JSON.parse('{"permalink":"/blog/tags/nock","page":1,"postsPerPage":10,"totalPages":1,"totalCount":1,"blogDescription":"Blog","blogTitle":"Blog"}')}}]);
\ No newline at end of file
diff --git a/assets/js/0a44bc10.88e5bed1.js b/assets/js/0a44bc10.88e5bed1.js
new file mode 100644
index 00000000..ed88fc05
--- /dev/null
+++ b/assets/js/0a44bc10.88e5bed1.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[8480],{5070:a=>{a.exports=JSON.parse('{"permalink":"/blog/tags/practica","page":1,"postsPerPage":10,"totalPages":1,"totalCount":3,"blogDescription":"Blog","blogTitle":"Blog"}')}}]);
\ No newline at end of file
diff --git a/assets/js/0f5bddc1.afe0e6f6.js b/assets/js/0f5bddc1.afe0e6f6.js
new file mode 100644
index 00000000..8863275a
--- /dev/null
+++ b/assets/js/0f5bddc1.afe0e6f6.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[7144],{9443:s=>{s.exports=JSON.parse('{"label":"supertest","permalink":"/blog/tags/supertest","allTagsPath":"/blog/tags","count":2}')}}]);
\ No newline at end of file
diff --git a/assets/js/14f3c1c8.ad35a58d.js b/assets/js/14f3c1c8.ad35a58d.js
new file mode 100644
index 00000000..0e0dba50
--- /dev/null
+++ b/assets/js/14f3c1c8.ad35a58d.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[2976],{4474:a=>{a.exports=JSON.parse('{"permalink":"/blog/tags/prisma","page":1,"postsPerPage":10,"totalPages":1,"totalCount":1,"blogDescription":"Blog","blogTitle":"Blog"}')}}]);
\ No newline at end of file
diff --git a/assets/js/15b89b76.c736a9a1.js b/assets/js/15b89b76.c736a9a1.js
new file mode 100644
index 00000000..0728cfa1
--- /dev/null
+++ b/assets/js/15b89b76.c736a9a1.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[1341],{3247:a=>{a.exports=JSON.parse('{"label":"testing","permalink":"/blog/tags/testing","allTagsPath":"/blog/tags","count":4}')}}]);
\ No newline at end of file
diff --git a/assets/js/1774.120cdaed.js b/assets/js/1774.120cdaed.js
new file mode 100644
index 00000000..25e897ec
--- /dev/null
+++ b/assets/js/1774.120cdaed.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[1774],{1774:(e,t,a)=>{a.r(t),a.d(t,{default:()=>c});var n=a(6540),l=a(1312),o=a(1003),r=a(9408);function c(){return n.createElement(n.Fragment,null,n.createElement(o.be,{title:(0,l.T)({id:"theme.NotFound.title",message:"Page Not Found"})}),n.createElement(r.A,null,n.createElement("main",{className:"container margin-vert--xl"},n.createElement("div",{className:"row"},n.createElement("div",{className:"col col--6 col--offset-3"},n.createElement("h1",{className:"hero__title"},n.createElement(l.A,{id:"theme.NotFound.title",description:"The title of the 404 page"},"Page Not Found")),n.createElement("p",null,n.createElement(l.A,{id:"theme.NotFound.p1",description:"The first paragraph of the 404 page"},"We could not find what you were looking for.")),n.createElement("p",null,n.createElement(l.A,{id:"theme.NotFound.p2",description:"The 2nd paragraph of the 404 page"},"Please contact the owner of the site that linked you to the original URL and let them know their link is broken.")))))))}}}]);
\ No newline at end of file
diff --git a/assets/js/17896441.a4ccf7d4.js b/assets/js/17896441.a4ccf7d4.js
new file mode 100644
index 00000000..2db6d591
--- /dev/null
+++ b/assets/js/17896441.a4ccf7d4.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[8401],{5022:(e,t,n)=>{n.r(t),n.d(t,{default:()=>ie});var a=n(6540),l=n(1003),o=n(9532);const r=a.createContext(null);function s(e){let{children:t,content:n}=e;const l=function(e){return(0,a.useMemo)((()=>({metadata:e.metadata,frontMatter:e.frontMatter,assets:e.assets,contentTitle:e.contentTitle,toc:e.toc})),[e])}(n);return a.createElement(r.Provider,{value:l},t)}function c(){const e=(0,a.useContext)(r);if(null===e)throw new o.dV("DocProvider");return e}function i(){const{metadata:e,frontMatter:t,assets:n}=c();return a.createElement(l.be,{title:e.title,description:e.description,keywords:t.keywords,image:n.image??t.image})}var d=n(53),m=n(4581),u=n(8168),v=n(1312),b=n(9022);function p(e){const{previous:t,next:n}=e;return a.createElement("nav",{className:"pagination-nav docusaurus-mt-lg","aria-label":(0,v.T)({id:"theme.docs.paginator.navAriaLabel",message:"Docs pages",description:"The ARIA label for the docs pagination"})},t&&a.createElement(b.A,(0,u.A)({},t,{subLabel:a.createElement(v.A,{id:"theme.docs.paginator.previous",description:"The label used to navigate to the previous doc"},"Previous")})),n&&a.createElement(b.A,(0,u.A)({},n,{subLabel:a.createElement(v.A,{id:"theme.docs.paginator.next",description:"The label used to navigate to the next doc"},"Next"),isNext:!0})))}function h(){const{metadata:e}=c();return a.createElement(p,{previous:e.previous,next:e.next})}var f=n(4586),E=n(5489),g=n(4070),A=n(7559),L=n(5597),C=n(2252);const N={unreleased:function(e){let{siteTitle:t,versionMetadata:n}=e;return a.createElement(v.A,{id:"theme.docs.versions.unreleasedVersionLabel",description:"The label used to tell the user that he's browsing an unreleased doc version",values:{siteTitle:t,versionLabel:a.createElement("b",null,n.label)}},"This is unreleased documentation for {siteTitle} {versionLabel} version.")},unmaintained:function(e){let{siteTitle:t,versionMetadata:n}=e;return a.createElement(v.A,{id:"theme.docs.versions.unmaintainedVersionLabel",description:"The label used to tell the user that he's browsing an unmaintained doc version",values:{siteTitle:t,versionLabel:a.createElement("b",null,n.label)}},"This is documentation for {siteTitle} {versionLabel}, which is no longer actively maintained.")}};function _(e){const t=N[e.versionMetadata.banner];return a.createElement(t,e)}function x(e){let{versionLabel:t,to:n,onClick:l}=e;return a.createElement(v.A,{id:"theme.docs.versions.latestVersionSuggestionLabel",description:"The label used to tell the user to check the latest version",values:{versionLabel:t,latestVersionLink:a.createElement("b",null,a.createElement(E.A,{to:n,onClick:l},a.createElement(v.A,{id:"theme.docs.versions.latestVersionLinkLabel",description:"The label used for the latest version suggestion link label"},"latest version")))}},"For up-to-date documentation, see the {latestVersionLink} ({versionLabel}).")}function T(e){let{className:t,versionMetadata:n}=e;const{siteConfig:{title:l}}=(0,f.A)(),{pluginId:o}=(0,g.vT)({failfast:!0}),{savePreferredVersionName:r}=(0,L.g1)(o),{latestDocSuggestion:s,latestVersionSuggestion:c}=(0,g.HW)(o),i=s??(m=c).docs.find((e=>e.id===m.mainDocId));var m;return a.createElement("div",{className:(0,d.A)(t,A.G.docs.docVersionBanner,"alert alert--warning 
margin-bottom--md"),role:"alert"},a.createElement("div",null,a.createElement(_,{siteTitle:l,versionMetadata:n})),a.createElement("div",{className:"margin-top--md"},a.createElement(x,{versionLabel:c.label,to:i.path,onClick:()=>r(c.name)})))}function k(e){let{className:t}=e;const n=(0,C.r)();return n.banner?a.createElement(T,{className:t,versionMetadata:n}):null}function H(e){let{className:t}=e;const n=(0,C.r)();return n.badge?a.createElement("span",{className:(0,d.A)(t,A.G.docs.docVersionBadge,"badge badge--secondary")},a.createElement(v.A,{id:"theme.docs.versionBadge.label",values:{versionLabel:n.label}},"Version: {versionLabel}")):null}function U(e){let{lastUpdatedAt:t,formattedLastUpdatedAt:n}=e;return a.createElement(v.A,{id:"theme.lastUpdated.atDate",description:"The words used to describe on which date a page has been last updated",values:{date:a.createElement("b",null,a.createElement("time",{dateTime:new Date(1e3*t).toISOString()},n))}}," on {date}")}function y(e){let{lastUpdatedBy:t}=e;return a.createElement(v.A,{id:"theme.lastUpdated.byUser",description:"The words used to describe by who the page has been last updated",values:{user:a.createElement("b",null,t)}}," by {user}")}function w(e){let{lastUpdatedAt:t,formattedLastUpdatedAt:n,lastUpdatedBy:l}=e;return a.createElement("span",{className:A.G.common.lastUpdated},a.createElement(v.A,{id:"theme.lastUpdated.lastUpdatedAtBy",description:"The sentence used to display when a page has been last updated, and by who",values:{atDate:t&&n?a.createElement(U,{lastUpdatedAt:t,formattedLastUpdatedAt:n}):"",byUser:l?a.createElement(y,{lastUpdatedBy:l}):""}},"Last updated{atDate}{byUser}"),!1)}var M=n(1943),I=n(2053);const B={lastUpdated:"lastUpdated_vwxv"};function O(e){return a.createElement("div",{className:(0,d.A)(A.G.docs.docFooterTagsRow,"row margin-bottom--sm")},a.createElement("div",{className:"col"},a.createElement(I.A,e)))}function V(e){let{editUrl:t,lastUpdatedAt:n,lastUpdatedBy:l,formattedLastUpdatedAt:o}=e;return a.createElement("div",{className:(0,d.A)(A.G.docs.docFooterEditMetaRow,"row")},a.createElement("div",{className:"col"},t&&a.createElement(M.A,{editUrl:t})),a.createElement("div",{className:(0,d.A)("col",B.lastUpdated)},(n||l)&&a.createElement(w,{lastUpdatedAt:n,formattedLastUpdatedAt:o,lastUpdatedBy:l})))}function P(){const{metadata:e}=c(),{editUrl:t,lastUpdatedAt:n,formattedLastUpdatedAt:l,lastUpdatedBy:o,tags:r}=e,s=r.length>0,i=!!(t||n||o);return s||i?a.createElement("footer",{className:(0,d.A)(A.G.docs.docFooter,"docusaurus-mt-lg")},s&&a.createElement(O,{tags:r}),i&&a.createElement(V,{editUrl:t,lastUpdatedAt:n,lastUpdatedBy:o,formattedLastUpdatedAt:l})):null}var S=n(1422),D=n(5195);const G={tocCollapsibleButton:"tocCollapsibleButton_TO0P",tocCollapsibleButtonExpanded:"tocCollapsibleButtonExpanded_MG3E"};function R(e){let{collapsed:t,...n}=e;return a.createElement("button",(0,u.A)({type:"button"},n,{className:(0,d.A)("clean-btn",G.tocCollapsibleButton,!t&&G.tocCollapsibleButtonExpanded,n.className)}),a.createElement(v.A,{id:"theme.TOCCollapsible.toggleButtonLabel",description:"The label used by the button on the collapsible TOC component"},"On this page"))}const F={tocCollapsible:"tocCollapsible_ETCw",tocCollapsibleContent:"tocCollapsibleContent_vkbj",tocCollapsibleExpanded:"tocCollapsibleExpanded_sAul"};function z(e){let{toc:t,className:n,minHeadingLevel:l,maxHeadingLevel:o}=e;const{collapsed:r,toggleCollapsed:s}=(0,S.u)({initialState:!0});return 
a.createElement("div",{className:(0,d.A)(F.tocCollapsible,!r&&F.tocCollapsibleExpanded,n)},a.createElement(R,{collapsed:r,onClick:s}),a.createElement(S.N,{lazy:!0,className:F.tocCollapsibleContent,collapsed:r},a.createElement(D.A,{toc:t,minHeadingLevel:l,maxHeadingLevel:o})))}const j={tocMobile:"tocMobile_ITEo"};function q(){const{toc:e,frontMatter:t}=c();return a.createElement(z,{toc:e,minHeadingLevel:t.toc_min_heading_level,maxHeadingLevel:t.toc_max_heading_level,className:(0,d.A)(A.G.docs.docTocMobile,j.tocMobile)})}var $=n(7763);function W(){const{toc:e,frontMatter:t}=c();return a.createElement($.A,{toc:e,minHeadingLevel:t.toc_min_heading_level,maxHeadingLevel:t.toc_max_heading_level,className:A.G.docs.docTocDesktop})}var Y=n(1107),Z=n(7780);function J(e){let{children:t}=e;const n=function(){const{metadata:e,frontMatter:t,contentTitle:n}=c();return t.hide_title||void 0!==n?null:e.title}();return a.createElement("div",{className:(0,d.A)(A.G.docs.docMarkdown,"markdown")},n&&a.createElement("header",null,a.createElement(Y.A,{as:"h1"},n)),a.createElement(Z.A,null,t))}var K=n(1754),Q=n(9169),X=n(6025);function ee(e){return a.createElement("svg",(0,u.A)({viewBox:"0 0 24 24"},e),a.createElement("path",{d:"M10 19v-5h4v5c0 .55.45 1 1 1h3c.55 0 1-.45 1-1v-7h1.7c.46 0 .68-.57.33-.87L12.67 3.6c-.38-.34-.96-.34-1.34 0l-8.36 7.53c-.34.3-.13.87.33.87H5v7c0 .55.45 1 1 1h3c.55 0 1-.45 1-1z",fill:"currentColor"}))}const te={breadcrumbHomeIcon:"breadcrumbHomeIcon_YNFT"};function ne(){const e=(0,X.A)("/");return a.createElement("li",{className:"breadcrumbs__item"},a.createElement(E.A,{"aria-label":(0,v.T)({id:"theme.docs.breadcrumbs.home",message:"Home page",description:"The ARIA label for the home page in the breadcrumbs"}),className:"breadcrumbs__link",href:e},a.createElement(ee,{className:te.breadcrumbHomeIcon})))}const ae={breadcrumbsContainer:"breadcrumbsContainer_Z_bl"};function le(e){let{children:t,href:n,isLast:l}=e;const o="breadcrumbs__link";return l?a.createElement("span",{className:o,itemProp:"name"},t):n?a.createElement(E.A,{className:o,href:n,itemProp:"item"},a.createElement("span",{itemProp:"name"},t)):a.createElement("span",{className:o},t)}function oe(e){let{children:t,active:n,index:l,addMicrodata:o}=e;return a.createElement("li",(0,u.A)({},o&&{itemScope:!0,itemProp:"itemListElement",itemType:"https://schema.org/ListItem"},{className:(0,d.A)("breadcrumbs__item",{"breadcrumbs__item--active":n})}),t,a.createElement("meta",{itemProp:"position",content:String(l+1)}))}function re(){const e=(0,K.OF)(),t=(0,Q.Dt)();return e?a.createElement("nav",{className:(0,d.A)(A.G.docs.docBreadcrumbs,ae.breadcrumbsContainer),"aria-label":(0,v.T)({id:"theme.docs.breadcrumbs.navAriaLabel",message:"Breadcrumbs",description:"The ARIA label for the breadcrumbs"})},a.createElement("ul",{className:"breadcrumbs",itemScope:!0,itemType:"https://schema.org/BreadcrumbList"},t&&a.createElement(ne,null),e.map(((t,n)=>{const l=n===e.length-1;return a.createElement(oe,{key:n,active:l,index:n,addMicrodata:!!t.href},a.createElement(le,{href:t.href,isLast:l},t.label))})))):null}const se={docItemContainer:"docItemContainer_Djhp",docItemCol:"docItemCol_VOVn"};function ce(e){let{children:t}=e;const n=function(){const{frontMatter:e,toc:t}=c(),n=(0,m.l)(),l=e.hide_table_of_contents,o=!l&&t.length>0;return{hidden:l,mobile:o?a.createElement(q,null):void 0,desktop:!o||"desktop"!==n&&"ssr"!==n?void 0:a.createElement(W,null)}}();return 
a.createElement("div",{className:"row"},a.createElement("div",{className:(0,d.A)("col",!n.hidden&&se.docItemCol)},a.createElement(k,null),a.createElement("div",{className:se.docItemContainer},a.createElement("article",null,a.createElement(re,null),a.createElement(H,null),n.mobile,a.createElement(J,null,t),a.createElement(P,null)),a.createElement(h,null))),n.desktop&&a.createElement("div",{className:"col col--3"},n.desktop))}function ie(e){const t=`docs-doc-id-${e.content.metadata.unversionedId}`,n=e.content;return a.createElement(s,{content:e.content},a.createElement(l.e3,{className:t},a.createElement(i,null),a.createElement(ce,null,a.createElement(n,null))))}},7763:(e,t,n)=>{n.d(t,{A:()=>d});var a=n(8168),l=n(6540),o=n(53),r=n(5195);const s={tableOfContents:"tableOfContents_bqdL",docItemContainer:"docItemContainer_F8PC"},c="table-of-contents__link toc-highlight",i="table-of-contents__link--active";function d(e){let{className:t,...n}=e;return l.createElement("div",{className:(0,o.A)(s.tableOfContents,"thin-scrollbar",t)},l.createElement(r.A,(0,a.A)({},n,{linkClassName:c,linkActiveClassName:i})))}},5195:(e,t,n)=>{n.d(t,{A:()=>b});var a=n(8168),l=n(6540),o=n(6342);function r(e){const t=e.map((e=>({...e,parentIndex:-1,children:[]}))),n=Array(7).fill(-1);t.forEach(((e,t)=>{const a=n.slice(2,e.level);e.parentIndex=Math.max(...a),n[e.level]=t}));const a=[];return t.forEach((e=>{const{parentIndex:n,...l}=e;n>=0?t[n].children.push(l):a.push(l)})),a}function s(e){let{toc:t,minHeadingLevel:n,maxHeadingLevel:a}=e;return t.flatMap((e=>{const t=s({toc:e.children,minHeadingLevel:n,maxHeadingLevel:a});return function(e){return e.level>=n&&e.level<=a}(e)?[{...e,children:t}]:t}))}function c(e){const t=e.getBoundingClientRect();return t.top===t.bottom?c(e.parentNode):t}function i(e,t){let{anchorTopOffset:n}=t;const a=e.find((e=>c(e).top>=n));if(a){return function(e){return e.top>0&&e.bottom{e.current=t?0:document.querySelector(".navbar").clientHeight}),[t]),e}function m(e){const t=(0,l.useRef)(void 0),n=d();(0,l.useEffect)((()=>{if(!e)return()=>{};const{linkClassName:a,linkActiveClassName:l,minHeadingLevel:o,maxHeadingLevel:r}=e;function s(){const e=function(e){return Array.from(document.getElementsByClassName(e))}(a),s=function(e){let{minHeadingLevel:t,maxHeadingLevel:n}=e;const a=[];for(let l=t;l<=n;l+=1)a.push(`h${l}.anchor`);return Array.from(document.querySelectorAll(a.join()))}({minHeadingLevel:o,maxHeadingLevel:r}),c=i(s,{anchorTopOffset:n.current}),d=e.find((e=>c&&c.id===function(e){return decodeURIComponent(e.href.substring(e.href.indexOf("#")+1))}(e)));e.forEach((e=>{!function(e,n){n?(t.current&&t.current!==e&&t.current.classList.remove(l),e.classList.add(l),t.current=e):e.classList.remove(l)}(e,e===d)}))}return document.addEventListener("scroll",s),document.addEventListener("resize",s),s(),()=>{document.removeEventListener("scroll",s),document.removeEventListener("resize",s)}}),[e,n])}function u(e){let{toc:t,className:n,linkClassName:a,isChild:o}=e;return t.length?l.createElement("ul",{className:o?void 0:n},t.map((e=>l.createElement("li",{key:e.id},l.createElement("a",{href:`#${e.id}`,className:a??void 0,dangerouslySetInnerHTML:{__html:e.value}}),l.createElement(u,{isChild:!0,toc:e.children,className:n,linkClassName:a}))))):null}const v=l.memo(u);function b(e){let{toc:t,className:n="table-of-contents table-of-contents__left-border",linkClassName:c="table-of-contents__link",linkActiveClassName:i,minHeadingLevel:d,maxHeadingLevel:u,...b}=e;const 
p=(0,o.p)(),h=d??p.tableOfContents.minHeadingLevel,f=u??p.tableOfContents.maxHeadingLevel,E=function(e){let{toc:t,minHeadingLevel:n,maxHeadingLevel:a}=e;return(0,l.useMemo)((()=>s({toc:r(t),minHeadingLevel:n,maxHeadingLevel:a})),[t,n,a])}({toc:t,minHeadingLevel:h,maxHeadingLevel:f});return m((0,l.useMemo)((()=>{if(c&&i)return{linkClassName:c,linkActiveClassName:i,minHeadingLevel:h,maxHeadingLevel:f}}),[c,i,h,f])),l.createElement(v,(0,a.A)({toc:E,className:n,linkClassName:c},b))}},2252:(e,t,n)=>{n.d(t,{n:()=>r,r:()=>s});var a=n(6540),l=n(9532);const o=a.createContext(null);function r(e){let{children:t,version:n}=e;return a.createElement(o.Provider,{value:n},t)}function s(){const e=(0,a.useContext)(o);if(null===e)throw new l.dV("DocsVersionProvider");return e}}}]);
\ No newline at end of file
diff --git a/assets/js/17da2d17.4d3b35b8.js b/assets/js/17da2d17.4d3b35b8.js
new file mode 100644
index 00000000..696fe9cb
--- /dev/null
+++ b/assets/js/17da2d17.4d3b35b8.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[3768],{5680:(e,t,r)=>{r.d(t,{xA:()=>p,yg:()=>f});var a=r(6540);function o(e,t,r){return t in e?Object.defineProperty(e,t,{value:r,enumerable:!0,configurable:!0,writable:!0}):e[t]=r,e}function i(e,t){var r=Object.keys(e);if(Object.getOwnPropertySymbols){var a=Object.getOwnPropertySymbols(e);t&&(a=a.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),r.push.apply(r,a)}return r}function n(e){for(var t=1;t=0||(o[r]=e[r]);return o}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(a=0;a=0||Object.prototype.propertyIsEnumerable.call(e,r)&&(o[r]=e[r])}return o}var l=a.createContext({}),c=function(e){var t=a.useContext(l),r=t;return e&&(r="function"==typeof e?e(t):n(n({},t),e)),r},p=function(e){var t=c(e.components);return a.createElement(l.Provider,{value:t},e.children)},d="mdxType",u={inlineCode:"code",wrapper:function(e){var t=e.children;return a.createElement(a.Fragment,{},t)}},g=a.forwardRef((function(e,t){var r=e.components,o=e.mdxType,i=e.originalType,l=e.parentName,p=s(e,["components","mdxType","originalType","parentName"]),d=c(r),g=o,f=d["".concat(l,".").concat(g)]||d[g]||u[g]||i;return r?a.createElement(f,n(n({ref:t},p),{},{components:r})):a.createElement(f,n({ref:t},p))}));function f(e,t){var r=arguments,o=t&&t.mdxType;if("string"==typeof e||o){var i=r.length,n=new Array(i);n[0]=g;var s={};for(var l in t)hasOwnProperty.call(t,l)&&(s[l]=t[l]);s.originalType=e,s[d]="string"==typeof e?e:o,n[1]=s;for(var c=2;c{r.r(t),r.d(t,{assets:()=>l,contentTitle:()=>n,default:()=>u,frontMatter:()=>i,metadata:()=>s,toc:()=>c});var a=r(8168),o=(r(6540),r(5680));const i={slug:"practica-is-alive",date:"2022-07-15T10:00",hide_table_of_contents:!0,title:"Practica.js v0.0.1 is alive",authors:["goldbergyoni"],tags:["node.js","express","fastify"]},n="Practica.js v0.0.1 is alive",s={permalink:"/blog/practica-is-alive",editUrl:"https://github.com/practicajs/practica/tree/main/docs/blog/practica-is-alive/index.md",source:"@site/blog/practica-is-alive/index.md",title:"Practica.js v0.0.1 is alive",description:"\ud83e\udd73 We're thrilled to launch the very first version of Practica.js.",date:"2022-07-15T10:00:00.000Z",formattedDate:"July 15, 2022",tags:[{label:"node.js",permalink:"/blog/tags/node-js"},{label:"express",permalink:"/blog/tags/express"},{label:"fastify",permalink:"/blog/tags/fastify"}],readingTime:1.21,hasTruncateMarker:!1,authors:[{name:"Yoni Goldberg",title:"Practica.js core maintainer",url:"https://github.com/goldbergyoni",imageURL:"https://github.com/goldbergyoni.png",key:"goldbergyoni"}],frontMatter:{slug:"practica-is-alive",date:"2022-07-15T10:00",hide_table_of_contents:!0,title:"Practica.js v0.0.1 is alive",authors:["goldbergyoni"],tags:["node.js","express","fastify"]},prevItem:{title:"Popular Node.js patterns and tools to re-consider",permalink:"/blog/popular-nodejs-pattern-and-tools-to-reconsider"}},l={authorsImageUrls:[void 0]},c=[{value:"What is Practica is one paragraph",id:"what-is-practica-is-one-paragraph",level:2},{value:"90 seconds video",id:"90-seconds-video",level:2},{value:"How to get started",id:"how-to-get-started",level:2}],p={toc:c},d="wrapper";function u(e){let{components:t,...r}=e;return(0,o.yg)(d,(0,a.A)({},p,r,{components:t,mdxType:"MDXLayout"}),(0,o.yg)("p",null,"\ud83e\udd73 We're thrilled to launch the very first version of Practica.js."),(0,o.yg)("h2",{id:"what-is-practica-is-one-paragraph"},"What is Practica is one 
paragraph"),(0,o.yg)("p",null,"Although Node.js has great frameworks \ud83d\udc9a, they were never meant to be production ready immediately. Practica.js aims to bridge the gap. Based on your preferred framework, we generate some example code that demonstrates a full workflow, from API to DB, that is packed with good practices. For example, we include a hardened dockerfile, N-Tier folder structure, great testing templates, and more. This saves a great deal of time and can prevent painful mistakes. All decisions made are ",(0,o.yg)("a",{parentName:"p",href:"./decisions/index"},"neatly and thoughtfully documented"),". We strive to keep things as simple and standard as possible and base our work off the popular guide: ",(0,o.yg)("a",{parentName:"p",href:"https://github.com/goldbergyoni/nodebestpractices"},"Node.js Best Practices"),"."),(0,o.yg)("p",null,"Your developer experience would look as follows: Generate our starter using the CLI and get an example Node.js solution. This solution is a typical Monorepo setup with an example Microservice and libraries. All is based on super-popular libraries that we merely stitch together. It also constitutes tons of optimization - linters, libraries, Monorepo configuration, tests and much more. Inside the example Microservice you'll find an example flow, from API to DB. Based on this, you can modify the entity and DB fields and build you app. "),(0,o.yg)("h2",{id:"90-seconds-video"},"90 seconds video"),(0,o.yg)("iframe",{width:"1024",height:"768",src:"https://www.youtube.com/embed/F6kAs2VEcKw",title:"YouTube video player",frameborder:"0",allow:"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture",allowfullscreen:!0}),(0,o.yg)("h2",{id:"how-to-get-started"},"How to get started"),(0,o.yg)("p",null,"To get up to speed quickly, read our ",(0,o.yg)("a",{parentName:"p",href:"https://practica.dev/the-basics/getting-started-quickly"},"getting started guide"),"."))}u.isMDXComponent=!0}}]);
\ No newline at end of file
diff --git a/assets/js/1a7530a6.e927eca4.js b/assets/js/1a7530a6.e927eca4.js
new file mode 100644
index 00000000..daa4f50b
--- /dev/null
+++ b/assets/js/1a7530a6.e927eca4.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[2505],{5680:(e,r,t)=>{t.d(r,{xA:()=>p,yg:()=>y});var n=t(6540);function a(e,r,t){return r in e?Object.defineProperty(e,r,{value:t,enumerable:!0,configurable:!0,writable:!0}):e[r]=t,e}function o(e,r){var t=Object.keys(e);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);r&&(n=n.filter((function(r){return Object.getOwnPropertyDescriptor(e,r).enumerable}))),t.push.apply(t,n)}return t}function i(e){for(var r=1;r=0||(a[t]=e[t]);return a}(e,r);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);for(n=0;n=0||Object.prototype.propertyIsEnumerable.call(e,t)&&(a[t]=e[t])}return a}var g=n.createContext({}),s=function(e){var r=n.useContext(g),t=r;return e&&(t="function"==typeof e?e(r):i(i({},r),e)),t},p=function(e){var r=s(e.components);return n.createElement(g.Provider,{value:r},e.children)},u="mdxType",c={inlineCode:"code",wrapper:function(e){var r=e.children;return n.createElement(n.Fragment,{},r)}},m=n.forwardRef((function(e,r){var t=e.components,a=e.mdxType,o=e.originalType,g=e.parentName,p=l(e,["components","mdxType","originalType","parentName"]),u=s(t),m=a,y=u["".concat(g,".").concat(m)]||u[m]||c[m]||o;return t?n.createElement(y,i(i({ref:r},p),{},{components:t})):n.createElement(y,i({ref:r},p))}));function y(e,r){var t=arguments,a=r&&r.mdxType;if("string"==typeof e||a){var o=t.length,i=new Array(o);i[0]=m;var l={};for(var g in r)hasOwnProperty.call(r,g)&&(l[g]=r[g]);l.originalType=e,l[u]="string"==typeof e?e:a,i[1]=l;for(var s=2;s{t.r(r),t.d(r,{assets:()=>g,contentTitle:()=>i,default:()=>c,frontMatter:()=>o,metadata:()=>l,toc:()=>s});var n=t(8168),a=(t(6540),t(5680));const o={id:"features",sidebar_position:5},i="Coming soon: Features and practices",l={unversionedId:"features",id:"features",title:"Coming soon: Features and practices",description:"WIP - This doc is being written these days",source:"@site/docs/features-reference.md",sourceDirName:".",slug:"/features",permalink:"/features",draft:!1,editUrl:"https://github.com/practicajs/practica/tree/main/docs/docs/features-reference.md",tags:[],version:"current",sidebarPosition:5,frontMatter:{id:"features",sidebar_position:5},sidebar:"tutorialSidebar",previous:{title:"Docker base image",permalink:"/decisions/docker-base-image"},next:{title:"Common questions",permalink:"/questions"}},g={},s=[{value:"1. Logger",id:"1-logger",level:2},{value:"1.1 Logger Library",id:"11-logger-library",level:3},{value:"1.2 Prevent infinite logger serialization loop",id:"12-prevent-infinite-logger-serialization-loop",level:3},{value:"2. Configuration",id:"2-configuration",level:2},{value:"2.1 Configuration retriever module",id:"21-configuration-retriever-module",level:3},{value:"3. Testing experience",id:"3-testing-experience",level:2},{value:"3.1 Slow tests detection",id:"31-slow-tests-detection",level:3},{value:"3.2 Autocomplete",id:"32-autocomplete",level:3},{value:"4. Docker",id:"4-docker",level:2},{value:"4.1 Secured dockerfile",id:"41-secured-dockerfile",level:3},{value:"4.1 Layered build",id:"41-layered-build",level:3},{value:"4.2 Compact base image",id:"42-compact-base-image",level:3},{value:"4.2 Testing docker-compose",id:"42-testing-docker-compose",level:3},{value:"5. Database",id:"5-database",level:2},{value:"5.1 Sequelize ORM",id:"51-sequelize-orm",level:3},{value:"5.2 Prisma ORM",id:"52-prisma-orm",level:3},{value:"5.3 Migration",id:"53-migration",level:3},{value:"6. 
Request-level store",id:"6-request-level-store",level:2},{value:"6.1 Automatic correlation-id",id:"61-automatic-correlation-id",level:3}],p={toc:s},u="wrapper";function c(e){let{components:r,...t}=e;return(0,a.yg)(u,(0,n.A)({},p,t,{components:r,mdxType:"MDXLayout"}),(0,a.yg)("h1",{id:"coming-soon-features-and-practices"},"Coming soon: Features and practices"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},(0,a.yg)("em",{parentName:"strong"},"WIP - This doc is being written these days"))),(0,a.yg)("p",null,"This list will outline all the capabilities and roadmap of Practica.js"),(0,a.yg)("p",null,"Here will come a filter panel to search by categories, what's strategic, and more"),(0,a.yg)("h2",{id:"1-logger"},"1. Logger"),(0,a.yg)("h3",{id:"11-logger-library"},"1.1 Logger Library"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"What:")," A reputable and hardened logger"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"Tags:")," #strategic #logger"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc77\ud83c\udffe Status:")," ",(0,a.yg)("img",{src:"/img/full.png"})," Production-ready, more hardening is welcome"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Chosen libraries:")," ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/pinojs/pino"},"Pino.js")," ",(0,a.yg)("a",{parentName:"p",href:"https://github.com"},"(Decision log here)")),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udf81 Bundles:")," example-flow, full-flow"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udfc1 CLI flags:")," ",(0,a.yg)("inlineCode",{parentName:"p"},"--logger=true|false")),(0,a.yg)("h3",{id:"12-prevent-infinite-logger-serialization-loop"},"1.2 Prevent infinite logger serialization loop"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"What:")," Limit logged JSON depth when cyclic reference is introduced"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"Tags:")," #logger"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc77\ud83c\udffe Status:")," ",(0,a.yg)("img",{src:"/img/partial.png"})," Idea, not implemented"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Chosen libraries:")," ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/pinojs/pino"},"Pino.js")," ",(0,a.yg)("a",{parentName:"p",href:"https://github.com"},"(Decision log here)")),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udf81 Bundles:")," example-flow, full-flow"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udfc1 CLI flags:")," None, always true"),(0,a.yg)("h2",{id:"2-configuration"},"2. 
Configuration"),(0,a.yg)("h3",{id:"21-configuration-retriever-module"},"2.1 Configuration retriever module"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"What:")," A configuration retriever module that packs good practices"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"Tags:")," #strategic #configuration"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc77\ud83c\udffe Status:")," ",(0,a.yg)("img",{src:"/img/full.png"})," Production-ready, more hardening is welcome"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Chosen libraries:")," ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/mozilla/node-convict"},"Convict")," ",(0,a.yg)("a",{parentName:"p",href:"/decisions/configuration-library"},"(Decision log here)")),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udf81 Bundles:")," example-flow, full-flow"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udfc1 CLI flags:")," -"),(0,a.yg)("h2",{id:"3-testing-experience"},"3. Testing experience"),(0,a.yg)("h3",{id:"31-slow-tests-detection"},"3.1 Slow tests detection"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"What:")," Slow tests automatically shown clearly in the console and exported to a json report"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"Tags:")," #dx #testing"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc77\ud83c\udffe Status:")," ",(0,a.yg)("img",{src:"/img/full.png"})," Production-ready, more hardening is welcome"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Chosen libraries:")," ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/sholzmayer/jest-performance-reporter"},"jest-performance-reporter")),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udf81 Bundles:")," example-flow, full-flow"),(0,a.yg)("h3",{id:"32-autocomplete"},"3.2 Autocomplete"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"What:")," When running tests in watch mode and choosing filename or test name patterns autocomplete will assist you"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"Tags:")," #dx #testing"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc77\ud83c\udffe Status:")," ",(0,a.yg)("img",{src:"/img/full.png"})," Production-ready, more hardening is welcome"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Chosen libraries:")," ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/jest-community/jest-watch-typeahead"},"jest-watch-typeahead")),(0,a.yg)("h2",{id:"4-docker"},"4. 
Docker"),(0,a.yg)("h3",{id:"41-secured-dockerfile"},"4.1 Secured dockerfile"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"What:")," We build a production-ready .dockerfile that avoids leaking secrets and leaving dev dependencies in"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"Tags:")," #security #docker"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc77\ud83c\udffe Status:")," ",(0,a.yg)("img",{src:"/img/full.png"})," Production-ready, more hardening is welcome"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Chosen libraries:")," N/A"),(0,a.yg)("h3",{id:"41-layered-build"},"4.1 Layered build"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"What:")," The poduction artifact omit building tools to stay more compact and minimize attack sutface"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"Tags:")," #security #docker"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc77\ud83c\udffe Status:")," ",(0,a.yg)("img",{src:"/img/full.png"})," Production-ready, more hardening is welcome"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Chosen libraries:")," N/A"),(0,a.yg)("h3",{id:"42-compact-base-image"},"4.2 Compact base image"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"What:")," A small, ~100MB, base image of Node is used"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"Tags:")," #docker"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc77\ud83c\udffe Status:")," ",(0,a.yg)("img",{src:"/img/full.png"})," Production-ready, more hardening is welcome"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Chosen libraries:")," N/A"),(0,a.yg)("h3",{id:"42-testing-docker-compose"},"4.2 Testing docker-compose"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"What:")," Testing optimized database and other infrastrucuture running from docker-compose during the automated tests"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"Tags:")," #testing #docker #database"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc77\ud83c\udffe Status:")," ",(0,a.yg)("img",{src:"/img/full.png"})," Production-ready, more hardening is welcome"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Chosen libraries:")," N/A"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"Additional 100 features will come here")),(0,a.yg)("h2",{id:"5-database"},"5. 
Database"),(0,a.yg)("h3",{id:"51-sequelize-orm"},"5.1 Sequelize ORM"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"What:")," Support for one of the most popular and matured ORM - Sequelize"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"Tags:")," #orm #db"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc77\ud83c\udffe Status:")," ",(0,a.yg)("img",{src:"/img/full.png"})," Production-ready, more hardening is welcome"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Chosen libraries:")," Sequelize"),(0,a.yg)("h3",{id:"52-prisma-orm"},"5.2 Prisma ORM"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"What:")," Support for one of an emerging and type safe ORM - Prisma"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"Tags:")," #orm #db"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc77\ud83c\udffe Status:")," ",(0,a.yg)("img",{src:"/img/full.png"})," Production-ready, more hardening is welcome"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Chosen libraries:")," Prisma"),(0,a.yg)("h3",{id:"53-migration"},"5.3 Migration"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"What:")," Includes migration files and commands for production-safe updates"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"Tags:")," #orm #db"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc77\ud83c\udffe Status:")," ",(0,a.yg)("img",{src:"/img/full.png"})," Production-ready, more hardening is welcome"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Chosen libraries:")," Prisma"),(0,a.yg)("h2",{id:"6-request-level-store"},"6. Request-level store"),(0,a.yg)("h3",{id:"61-automatic-correlation-id"},"6.1 Automatic correlation-id"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"What:")," Automatically emit unique correlation id to every log line"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"Tags:")," #log #tracing"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc77\ud83c\udffe Status:")," ",(0,a.yg)("img",{src:"/img/full.png"})," Production-ready, more hardening is welcome"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Chosen libraries:")," N/A"))}c.isMDXComponent=!0}}]);
\ No newline at end of file
diff --git a/assets/js/1be78505.3aed8880.js b/assets/js/1be78505.3aed8880.js
new file mode 100644
index 00000000..d4d6ca06
--- /dev/null
+++ b/assets/js/1be78505.3aed8880.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[8714,1774],{10:(e,t,n)=>{n.r(t),n.d(t,{default:()=>ge});var a=n(6540),o=n(53),l=n(1003),r=n(7559),c=n(2967),i=n(1754),s=n(2252),d=n(6588),m=n(9408),u=n(1312),b=n(3104),p=n(5062);const h={backToTopButton:"backToTopButton_sjWU",backToTopButtonShow:"backToTopButtonShow_xfvO"};function E(){const{shown:e,scrollToTop:t}=function(e){let{threshold:t}=e;const[n,o]=(0,a.useState)(!1),l=(0,a.useRef)(!1),{startScroll:r,cancelScroll:c}=(0,b.gk)();return(0,b.Mq)(((e,n)=>{let{scrollY:a}=e;const r=n?.scrollY;r&&(l.current?l.current=!1:a>=r?(c(),o(!1)):a{e.location.hash&&(l.current=!0,o(!1))})),{shown:n,scrollToTop:()=>r(0)}}({threshold:300});return a.createElement("button",{"aria-label":(0,u.T)({id:"theme.BackToTopButton.buttonAriaLabel",message:"Scroll back to top",description:"The ARIA label for the back to top button"}),className:(0,o.A)("clean-btn",r.G.common.backToTopButton,h.backToTopButton,e&&h.backToTopButtonShow),type:"button",onClick:t})}var f=n(3109),g=n(6347),v=n(4581),_=n(6342),A=n(3465),C=n(8168);function k(e){return a.createElement("svg",(0,C.A)({width:"20",height:"20","aria-hidden":"true"},e),a.createElement("g",{fill:"#7a7a7a"},a.createElement("path",{d:"M9.992 10.023c0 .2-.062.399-.172.547l-4.996 7.492a.982.982 0 01-.828.454H1c-.55 0-1-.453-1-1 0-.2.059-.403.168-.551l4.629-6.942L.168 3.078A.939.939 0 010 2.528c0-.548.45-.997 1-.997h2.996c.352 0 .649.18.828.45L9.82 9.472c.11.148.172.347.172.55zm0 0"}),a.createElement("path",{d:"M19.98 10.023c0 .2-.058.399-.168.547l-4.996 7.492a.987.987 0 01-.828.454h-3c-.547 0-.996-.453-.996-1 0-.2.059-.403.168-.551l4.625-6.942-4.625-6.945a.939.939 0 01-.168-.55 1 1 0 01.996-.997h3c.348 0 .649.18.828.45l4.996 7.492c.11.148.168.347.168.55zm0 0"})))}const S={collapseSidebarButton:"collapseSidebarButton_PEFL",collapseSidebarButtonIcon:"collapseSidebarButtonIcon_kv0_"};function N(e){let{onClick:t}=e;return a.createElement("button",{type:"button",title:(0,u.T)({id:"theme.docs.sidebar.collapseButtonTitle",message:"Collapse sidebar",description:"The title attribute for collapse button of doc sidebar"}),"aria-label":(0,u.T)({id:"theme.docs.sidebar.collapseButtonAriaLabel",message:"Collapse sidebar",description:"The title attribute for collapse button of doc sidebar"}),className:(0,o.A)("button button--secondary button--outline",S.collapseSidebarButton),onClick:t},a.createElement(k,{className:S.collapseSidebarButtonIcon}))}var T=n(5041),I=n(9532);const x=Symbol("EmptyContext"),B=a.createContext(x);function w(e){let{children:t}=e;const[n,o]=(0,a.useState)(null),l=(0,a.useMemo)((()=>({expandedItem:n,setExpandedItem:o})),[n]);return a.createElement(B.Provider,{value:l},t)}var y=n(1422),L=n(9169),M=n(5489),H=n(2303);function P(e){let{categoryLabel:t,onClick:n}=e;return a.createElement("button",{"aria-label":(0,u.T)({id:"theme.DocSidebarItem.toggleCollapsedCategoryAriaLabel",message:"Toggle the collapsible sidebar category '{label}'",description:"The ARIA label to toggle the collapsible sidebar category"},{label:t}),type:"button",className:"clean-btn menu__caret",onClick:n})}function G(e){let{item:t,onItemClick:n,activePath:l,level:c,index:s,...d}=e;const{items:m,label:u,collapsible:b,className:p,href:h}=t,{docs:{sidebar:{autoCollapseCategories:E}}}=(0,_.p)(),f=function(e){const t=(0,H.A)();return(0,a.useMemo)((()=>e.href?e.href:!t&&e.collapsible?(0,i._o)(e):void 
0),[e,t])}(t),g=(0,i.w8)(t,l),v=(0,L.ys)(h,l),{collapsed:A,setCollapsed:k}=(0,y.u)({initialState:()=>!!b&&(!g&&t.collapsed)}),{expandedItem:S,setExpandedItem:N}=function(){const e=(0,a.useContext)(B);if(e===x)throw new I.dV("DocSidebarItemsExpandedStateProvider");return e}(),T=function(e){void 0===e&&(e=!A),N(e?null:s),k(e)};return function(e){let{isActive:t,collapsed:n,updateCollapsed:o}=e;const l=(0,I.ZC)(t);(0,a.useEffect)((()=>{t&&!l&&n&&o(!1)}),[t,l,n,o])}({isActive:g,collapsed:A,updateCollapsed:T}),(0,a.useEffect)((()=>{b&&null!=S&&S!==s&&E&&k(!0)}),[b,S,s,k,E]),a.createElement("li",{className:(0,o.A)(r.G.docs.docSidebarItemCategory,r.G.docs.docSidebarItemCategoryLevel(c),"menu__list-item",{"menu__list-item--collapsed":A},p)},a.createElement("div",{className:(0,o.A)("menu__list-item-collapsible",{"menu__list-item-collapsible--active":v})},a.createElement(M.A,(0,C.A)({className:(0,o.A)("menu__link",{"menu__link--sublist":b,"menu__link--sublist-caret":!h&&b,"menu__link--active":g}),onClick:b?e=>{n?.(t),h?T(!1):(e.preventDefault(),T())}:()=>{n?.(t)},"aria-current":v?"page":void 0,"aria-expanded":b?!A:void 0,href:b?f??"#":f},d),u),h&&b&&a.createElement(P,{categoryLabel:u,onClick:e=>{e.preventDefault(),T()}})),a.createElement(y.N,{lazy:!0,as:"ul",className:"menu__list",collapsed:A},a.createElement(K,{items:m,tabIndex:A?-1:0,onItemClick:n,activePath:l,level:c+1})))}var F=n(6654),W=n(3186);const D={menuExternalLink:"menuExternalLink_NmtK"};function V(e){let{item:t,onItemClick:n,activePath:l,level:c,index:s,...d}=e;const{href:m,label:u,className:b,autoAddBaseUrl:p}=t,h=(0,i.w8)(t,l),E=(0,F.A)(m);return a.createElement("li",{className:(0,o.A)(r.G.docs.docSidebarItemLink,r.G.docs.docSidebarItemLinkLevel(c),"menu__list-item",b),key:u},a.createElement(M.A,(0,C.A)({className:(0,o.A)("menu__link",!E&&D.menuExternalLink,{"menu__link--active":h}),autoAddBaseUrl:p,"aria-current":h?"page":void 0,to:m},E&&{onClick:n?()=>n(t):void 0},d),u,!E&&a.createElement(W.A,null)))}const U={menuHtmlItem:"menuHtmlItem_M9Kj"};function z(e){let{item:t,level:n,index:l}=e;const{value:c,defaultStyle:i,className:s}=t;return a.createElement("li",{className:(0,o.A)(r.G.docs.docSidebarItemLink,r.G.docs.docSidebarItemLinkLevel(n),i&&[U.menuHtmlItem,"menu__list-item"],s),key:l,dangerouslySetInnerHTML:{__html:c}})}function R(e){let{item:t,...n}=e;switch(t.type){case"category":return a.createElement(G,(0,C.A)({item:t},n));case"html":return a.createElement(z,(0,C.A)({item:t},n));default:return a.createElement(V,(0,C.A)({item:t},n))}}function j(e){let{items:t,...n}=e;return a.createElement(w,null,t.map(((e,t)=>a.createElement(R,(0,C.A)({key:t,item:e,index:t},n)))))}const K=(0,a.memo)(j),q={menu:"menu_SIkG",menuWithAnnouncementBar:"menuWithAnnouncementBar_GW3s"};function O(e){let{path:t,sidebar:n,className:l}=e;const c=function(){const{isActive:e}=(0,T.Mj)(),[t,n]=(0,a.useState)(e);return(0,b.Mq)((t=>{let{scrollY:a}=t;e&&n(0===a)}),[e]),e&&t}();return a.createElement("nav",{"aria-label":(0,u.T)({id:"theme.docs.sidebar.navAriaLabel",message:"Docs sidebar",description:"The ARIA label for the sidebar navigation"}),className:(0,o.A)("menu thin-scrollbar",q.menu,c&&q.menuWithAnnouncementBar,l)},a.createElement("ul",{className:(0,o.A)(r.G.docs.docSidebarMenu,"menu__list")},a.createElement(K,{items:n,activePath:t,level:1})))}const X="sidebar_njMd",Y="sidebarWithHideableNavbar_wUlq",Z="sidebarHidden_VK0M",$="sidebarLogo_isFc";function 
J(e){let{path:t,sidebar:n,onCollapse:l,isHidden:r}=e;const{navbar:{hideOnScroll:c},docs:{sidebar:{hideable:i}}}=(0,_.p)();return a.createElement("div",{className:(0,o.A)(X,c&&Y,r&&Z)},c&&a.createElement(A.A,{tabIndex:-1,className:$}),a.createElement(O,{path:t,sidebar:n}),i&&a.createElement(N,{onClick:l}))}const Q=a.memo(J);var ee=n(5600),te=n(9876);const ne=e=>{let{sidebar:t,path:n}=e;const l=(0,te.M)();return a.createElement("ul",{className:(0,o.A)(r.G.docs.docSidebarMenu,"menu__list")},a.createElement(K,{items:t,activePath:n,onItemClick:e=>{"category"===e.type&&e.href&&l.toggle(),"link"===e.type&&l.toggle()},level:1}))};function ae(e){return a.createElement(ee.GX,{component:ne,props:e})}const oe=a.memo(ae);function le(e){const t=(0,v.l)(),n="desktop"===t||"ssr"===t,o="mobile"===t;return a.createElement(a.Fragment,null,n&&a.createElement(Q,e),o&&a.createElement(oe,e))}const re={expandButton:"expandButton_m80_",expandButtonIcon:"expandButtonIcon_BlDH"};function ce(e){let{toggleSidebar:t}=e;return a.createElement("div",{className:re.expandButton,title:(0,u.T)({id:"theme.docs.sidebar.expandButtonTitle",message:"Expand sidebar",description:"The ARIA label and title attribute for expand button of doc sidebar"}),"aria-label":(0,u.T)({id:"theme.docs.sidebar.expandButtonAriaLabel",message:"Expand sidebar",description:"The ARIA label and title attribute for expand button of doc sidebar"}),tabIndex:0,role:"button",onKeyDown:t,onClick:t},a.createElement(k,{className:re.expandButtonIcon}))}const ie={docSidebarContainer:"docSidebarContainer_b6E3",docSidebarContainerHidden:"docSidebarContainerHidden_b3ry",sidebarViewport:"sidebarViewport_Xe31"};function se(e){let{children:t}=e;const n=(0,d.t)();return a.createElement(a.Fragment,{key:n?.name??"noSidebar"},t)}function de(e){let{sidebar:t,hiddenSidebarContainer:n,setHiddenSidebarContainer:l}=e;const{pathname:c}=(0,g.zy)(),[i,s]=(0,a.useState)(!1),d=(0,a.useCallback)((()=>{i&&s(!1),!i&&(0,f.O)()&&s(!0),l((e=>!e))}),[l,i]);return a.createElement("aside",{className:(0,o.A)(r.G.docs.docSidebarContainer,ie.docSidebarContainer,n&&ie.docSidebarContainerHidden),onTransitionEnd:e=>{e.currentTarget.classList.contains(ie.docSidebarContainer)&&n&&s(!0)}},a.createElement(se,null,a.createElement("div",{className:(0,o.A)(ie.sidebarViewport,i&&ie.sidebarViewportHidden)},a.createElement(le,{sidebar:t,path:c,onCollapse:d,isHidden:i}),i&&a.createElement(ce,{toggleSidebar:d}))))}const me={docMainContainer:"docMainContainer_gTbr",docMainContainerEnhanced:"docMainContainerEnhanced_Uz_u",docItemWrapperEnhanced:"docItemWrapperEnhanced_czyv"};function ue(e){let{hiddenSidebarContainer:t,children:n}=e;const l=(0,d.t)();return a.createElement("main",{className:(0,o.A)(me.docMainContainer,(t||!l)&&me.docMainContainerEnhanced)},a.createElement("div",{className:(0,o.A)("container padding-top--md padding-bottom--lg",me.docItemWrapper,t&&me.docItemWrapperEnhanced)},n))}const be={docPage:"docPage__5DB",docsWrapper:"docsWrapper_BCFX","themedComponent--light":"themedComponent--light_NU7w"};function pe(e){let{children:t}=e;const n=(0,d.t)(),[o,l]=(0,a.useState)(!1);return a.createElement(m.A,{wrapperClassName:be.docsWrapper},a.createElement(E,null),a.createElement("div",{className:be.docPage},n&&a.createElement(de,{sidebar:n.items,hiddenSidebarContainer:o,setHiddenSidebarContainer:l}),a.createElement(ue,{hiddenSidebarContainer:o},t)))}var he=n(1774),Ee=n(1463);function fe(e){const{versionMetadata:t}=e;return 
a.createElement(a.Fragment,null,a.createElement(Ee.A,{version:t.version,tag:(0,c.tU)(t.pluginId,t.version)}),a.createElement(l.be,null,t.noIndex&&a.createElement("meta",{name:"robots",content:"noindex, nofollow"})))}function ge(e){const{versionMetadata:t}=e,n=(0,i.mz)(e);if(!n)return a.createElement(he.default,null);const{docElement:c,sidebarName:m,sidebarItems:u}=n;return a.createElement(a.Fragment,null,a.createElement(fe,e),a.createElement(l.e3,{className:(0,o.A)(r.G.wrapper.docsPages,r.G.page.docsDocPage,e.versionMetadata.className)},a.createElement(s.n,{version:t},a.createElement(d.V,{name:m,items:u},a.createElement(pe,null,c)))))}},1774:(e,t,n)=>{n.r(t),n.d(t,{default:()=>c});var a=n(6540),o=n(1312),l=n(1003),r=n(9408);function c(){return a.createElement(a.Fragment,null,a.createElement(l.be,{title:(0,o.T)({id:"theme.NotFound.title",message:"Page Not Found"})}),a.createElement(r.A,null,a.createElement("main",{className:"container margin-vert--xl"},a.createElement("div",{className:"row"},a.createElement("div",{className:"col col--6 col--offset-3"},a.createElement("h1",{className:"hero__title"},a.createElement(o.A,{id:"theme.NotFound.title",description:"The title of the 404 page"},"Page Not Found")),a.createElement("p",null,a.createElement(o.A,{id:"theme.NotFound.p1",description:"The first paragraph of the 404 page"},"We could not find what you were looking for.")),a.createElement("p",null,a.createElement(o.A,{id:"theme.NotFound.p2",description:"The 2nd paragraph of the 404 page"},"Please contact the owner of the site that linked you to the original URL and let them know their link is broken.")))))))}},2252:(e,t,n)=>{n.d(t,{n:()=>r,r:()=>c});var a=n(6540),o=n(9532);const l=a.createContext(null);function r(e){let{children:t,version:n}=e;return a.createElement(l.Provider,{value:n},t)}function c(){const e=(0,a.useContext)(l);if(null===e)throw new o.dV("DocsVersionProvider");return e}}}]);
\ No newline at end of file
diff --git a/assets/js/1fe9a2e9.fafd22ae.js b/assets/js/1fe9a2e9.fafd22ae.js
new file mode 100644
index 00000000..128a20c4
--- /dev/null
+++ b/assets/js/1fe9a2e9.fafd22ae.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[6763],{5680:(e,t,r)=>{r.d(t,{xA:()=>p,yg:()=>m});var o=r(6540);function a(e,t,r){return t in e?Object.defineProperty(e,t,{value:r,enumerable:!0,configurable:!0,writable:!0}):e[t]=r,e}function n(e,t){var r=Object.keys(e);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);t&&(o=o.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),r.push.apply(r,o)}return r}function s(e){for(var t=1;t=0||(a[r]=e[r]);return a}(e,t);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);for(o=0;o=0||Object.prototype.propertyIsEnumerable.call(e,r)&&(a[r]=e[r])}return a}var l=o.createContext({}),c=function(e){var t=o.useContext(l),r=t;return e&&(r="function"==typeof e?e(t):s(s({},t),e)),r},p=function(e){var t=c(e.components);return o.createElement(l.Provider,{value:t},e.children)},d="mdxType",g={inlineCode:"code",wrapper:function(e){var t=e.children;return o.createElement(o.Fragment,{},t)}},u=o.forwardRef((function(e,t){var r=e.components,a=e.mdxType,n=e.originalType,l=e.parentName,p=i(e,["components","mdxType","originalType","parentName"]),d=c(r),u=a,m=d["".concat(l,".").concat(u)]||d[u]||g[u]||n;return r?o.createElement(m,s(s({ref:t},p),{},{components:r})):o.createElement(m,s({ref:t},p))}));function m(e,t){var r=arguments,a=t&&t.mdxType;if("string"==typeof e||a){var n=r.length,s=new Array(n);s[0]=u;var i={};for(var l in t)hasOwnProperty.call(t,l)&&(i[l]=t[l]);i.originalType=e,i[d]="string"==typeof e?e:a,s[1]=i;for(var c=2;c{r.r(t),r.d(t,{assets:()=>l,contentTitle:()=>s,default:()=>g,frontMatter:()=>n,metadata:()=>i,toc:()=>c});var o=r(8168),a=(r(6540),r(5680));const n={slug:"popular-nodejs-pattern-and-tools-to-reconsider",date:"2022-08-02T10:00",hide_table_of_contents:!0,title:"Popular Node.js patterns and tools to re-consider",authors:["goldbergyoni"],tags:["node.js","express","nestjs","fastify","passport","dotenv","supertest","practica","testing"]},s="Popular Node.js tools and patterns to re-consider",i={permalink:"/blog/popular-nodejs-pattern-and-tools-to-reconsider",editUrl:"https://github.com/practicajs/practica/tree/main/docs/blog/pattern-to-reconsider/index.md",source:"@site/blog/pattern-to-reconsider/index.md",title:"Popular Node.js patterns and tools to re-consider",description:"Node.js is maturing. Many patterns and frameworks were embraced - it's my belief that developers' productivity dramatically increased in the past years. One downside of maturity is habits - we now reuse existing techniques more often. 
How is this a problem?",date:"2022-08-02T10:00:00.000Z",formattedDate:"August 2, 2022",tags:[{label:"node.js",permalink:"/blog/tags/node-js"},{label:"express",permalink:"/blog/tags/express"},{label:"nestjs",permalink:"/blog/tags/nestjs"},{label:"fastify",permalink:"/blog/tags/fastify"},{label:"passport",permalink:"/blog/tags/passport"},{label:"dotenv",permalink:"/blog/tags/dotenv"},{label:"supertest",permalink:"/blog/tags/supertest"},{label:"practica",permalink:"/blog/tags/practica"},{label:"testing",permalink:"/blog/tags/testing"}],readingTime:21.09,hasTruncateMarker:!0,authors:[{name:"Yoni Goldberg",title:"Practica.js core maintainer",url:"https://github.com/goldbergyoni",imageURL:"https://github.com/goldbergyoni.png",key:"goldbergyoni"}],frontMatter:{slug:"popular-nodejs-pattern-and-tools-to-reconsider",date:"2022-08-02T10:00",hide_table_of_contents:!0,title:"Popular Node.js patterns and tools to re-consider",authors:["goldbergyoni"],tags:["node.js","express","nestjs","fastify","passport","dotenv","supertest","practica","testing"]},prevItem:{title:"Which Monorepo is right for a Node.js BACKEND\xa0now?",permalink:"/blog/monorepo-backend"},nextItem:{title:"Practica.js v0.0.1 is alive",permalink:"/blog/practica-is-alive"}},l={authorsImageUrls:[void 0]},c=[{value:"TOC - Patterns to reconsider",id:"toc---patterns-to-reconsider",level:2}],p={toc:c},d="wrapper";function g(e){let{components:t,...n}=e;return(0,a.yg)(d,(0,o.A)({},p,n,{components:t,mdxType:"MDXLayout"}),(0,a.yg)("p",null,"Node.js is maturing. Many patterns and frameworks were embraced - it's my belief that developers' productivity dramatically increased in the past years. One downside of maturity is habits - we now reuse existing techniques more often. How is this a problem?"),(0,a.yg)("p",null,"In his novel book 'Atomic Habits' the author James Clear states that:"),(0,a.yg)("blockquote",null,(0,a.yg)("p",{parentName:"blockquote"},'"Mastery is created by habits. However, sometimes when we\'re on auto-pilot performing habits, we tend to slip up... Just being we are gaining experience through performing the habits does not mean that we are improving. We actually go backwards on the improvement scale with most habits that turn into auto-pilot". In other words, practice makes perfect, and bad practices make things worst')),(0,a.yg)("p",null,"We copy-paste mentally and physically things that we are used to, but these things are not necessarily right anymore. Like animals who shed their shells or skin to adapt to a new reality, so the Node.js community should constantly gauge its existing patterns, discuss and change"),(0,a.yg)("p",null,"Luckily, unlike other languages that are more committed to specific design paradigms (Java, Ruby) - Node is a house of many ideas. In this community, I feel safe to question some of our good-old tooling and patterns. The list below contains my personal beliefs, which are brought with reasoning and examples. "),(0,a.yg)("p",null,"Are those disruptive thoughts surely correct? I'm not sure. There is one things I'm sure about though - For Node.js to live longer, we need to encourage critics, focus our loyalty on innovation, and keep the discussion going. 
The outcome of this discussion is not \"don't use this tool!\" but rather becoming familiar with other techniques that, ",(0,a.yg)("em",{parentName:"p"},"under some circumstances"),", might be a better fit"),(0,a.yg)("p",null,(0,a.yg)("img",{alt:"Animals and frameworks shed their skin",src:r(6738).A,width:"600",height:"400"})),(0,a.yg)("p",null,(0,a.yg)("em",{parentName:"p"},"The True Crab's exoskeleton is hard and inflexible; he must shed his restrictive exoskeleton to grow and reveal the new roomier shell")),(0,a.yg)("h2",{id:"toc---patterns-to-reconsider"},"TOC - Patterns to reconsider"),(0,a.yg)("ol",null,(0,a.yg)("li",{parentName:"ol"},"Dotenv"),(0,a.yg)("li",{parentName:"ol"},"Calling a service from a controller"),(0,a.yg)("li",{parentName:"ol"},"Nest.js dependency injection for all classes"),(0,a.yg)("li",{parentName:"ol"},"Passport.js"),(0,a.yg)("li",{parentName:"ol"},"Supertest"),(0,a.yg)("li",{parentName:"ol"},"Fastify utility decoration"),(0,a.yg)("li",{parentName:"ol"},"Logging from a catch clause"),(0,a.yg)("li",{parentName:"ol"},"Morgan logger"),(0,a.yg)("li",{parentName:"ol"},"NODE_ENV")))}g.isMDXComponent=!0},6738:(e,t,r)=>{r.d(t,{A:()=>o});const o=r.p+"assets/images/crab-161f2b8e5ab129c2a175920691a845c0.webp"}}]);
\ No newline at end of file
diff --git a/assets/js/211a5e1a.bae98c00.js b/assets/js/211a5e1a.bae98c00.js
new file mode 100644
index 00000000..dd59457c
--- /dev/null
+++ b/assets/js/211a5e1a.bae98c00.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[7266],{5680:(e,t,a)=>{a.d(t,{xA:()=>p,yg:()=>h});var n=a(6540);function r(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function o(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);t&&(n=n.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,n)}return a}function i(e){for(var t=1;t=0||(r[a]=e[a]);return r}(e,t);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);for(n=0;n=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(r[a]=e[a])}return r}var s=n.createContext({}),g=function(e){var t=n.useContext(s),a=t;return e&&(a="function"==typeof e?e(t):i(i({},t),e)),a},p=function(e){var t=g(e.components);return n.createElement(s.Provider,{value:t},e.children)},d="mdxType",u={inlineCode:"code",wrapper:function(e){var t=e.children;return n.createElement(n.Fragment,{},t)}},c=n.forwardRef((function(e,t){var a=e.components,r=e.mdxType,o=e.originalType,s=e.parentName,p=l(e,["components","mdxType","originalType","parentName"]),d=g(a),c=r,h=d["".concat(s,".").concat(c)]||d[c]||u[c]||o;return a?n.createElement(h,i(i({ref:t},p),{},{components:a})):n.createElement(h,i({ref:t},p))}));function h(e,t){var a=arguments,r=t&&t.mdxType;if("string"==typeof e||r){var o=a.length,i=new Array(o);i[0]=c;var l={};for(var s in t)hasOwnProperty.call(t,s)&&(l[s]=t[s]);l.originalType=e,l[d]="string"==typeof e?e:r,i[1]=l;for(var g=2;g{a.r(t),a.d(t,{assets:()=>s,contentTitle:()=>i,default:()=>u,frontMatter:()=>o,metadata:()=>l,toc:()=>g});var n=a(8168),r=(a(6540),a(5680));const o={sidebar_position:2,sidebar_label:"Long guide"},i="The comprehensive contribution guide",l={unversionedId:"contribution/contribution-long-guide",id:"contribution/contribution-long-guide",title:"The comprehensive contribution guide",description:"You belong with us",source:"@site/docs/contribution/contribution-long-guide.md",sourceDirName:"contribution",slug:"/contribution/contribution-long-guide",permalink:"/contribution/contribution-long-guide",draft:!1,editUrl:"https://github.com/practicajs/practica/tree/main/docs/docs/contribution/contribution-long-guide.md",tags:[],version:"current",sidebarPosition:2,frontMatter:{sidebar_position:2,sidebar_label:"Long guide"},sidebar:"tutorialSidebar",previous:{title:"Short guide",permalink:"/contribution/contribution-short-guide"},next:{title:"Library picking guidelines",permalink:"/contribution/vendor-pick-guidelines"}},s={},g=[{value:"You belong with us",id:"you-belong-with-us",level:2},{value:"Consider the shortened guide first",id:"consider-the-shortened-guide-first",level:2},{value:"Philosophy",id:"philosophy",level:2},{value:"Workflow",id:"workflow",level:2},{value:"Got a small change? Choose the fast lane",id:"got-a-small-change-choose-the-fast-lane",level:3},{value:"Need to change the code itself? 
Here is a typical workflow",id:"need-to-change-the-code-itself-here-is-a-typical-workflow",level:3},{value:"Roles",id:"roles",level:2},{value:"Project structure",id:"project-structure",level:2},{value:"High-level sections",id:"high-level-sections",level:3},{value:"The code templates",id:"the-code-templates",level:3},{value:"The code generator structure",id:"the-code-generator-structure",level:3},{value:"Packages (domains)",id:"packages-domains",level:2},{value:"Development machine setup",id:"development-machine-setup",level:2},{value:"Areas to focus on",id:"areas-to-focus-on",level:2},{value:"Supported Node.js version",id:"supported-nodejs-version",level:2},{value:"Code structure",id:"code-structure",level:2}],p={toc:g},d="wrapper";function u(e){let{components:t,...o}=e;return(0,r.yg)(d,(0,n.A)({},p,o,{components:t,mdxType:"MDXLayout"}),(0,r.yg)("h1",{id:"the-comprehensive-contribution-guide"},"The comprehensive contribution guide"),(0,r.yg)("h2",{id:"you-belong-with-us"},"You belong with us"),(0,r.yg)("p",null,"If you reached down to this page, you probably belong with us \ud83d\udc9c. We are in an ever-going quest for better software practices. This journey can bring two things to your benefit: A lot of learning and global impact on many people's craft. Does this sound attractive?"),(0,r.yg)("h2",{id:"consider-the-shortened-guide-first"},"Consider the shortened guide first"),(0,r.yg)("hr",null),(0,r.yg)("p",null,"Every small change can make this repo much better. If you intend to contribute a relatively small change like documentation change, small code enhancement or anything that is small and obvious - start by reading the ",(0,r.yg)("a",{parentName:"p",href:"/contribution/contribution-short-guide"},"shortened guide here"),". As you'll expand your engagement with this repo, it might be a good idea to visit this long guide again"),(0,r.yg)("h2",{id:"philosophy"},"Philosophy"),(0,r.yg)("p",null,"Our main selling point is our philosophy, our philosophy is 'make it SIMPLE'. There is one really important holy grail in software - Speed. The faster you move, the more features and value is created for the users. The faster you move, more improvement cycles are deployed and the software/ops become better. ",(0,r.yg)("a",{parentName:"p",href:"https://puppet.com/resources/report/2020-state-of-devops-report"},"Research shows")," that faster teams produce software that is more reliable. Complexity is the enemy of speed - Commonly, apps are big, sophisticated, have a lot of internal abstractions and demand long training before being productive. Our mission is to minimize complexity, get onboarded developers up to speed quickly, or in simple words - Let the reader of the code understand it in a breeze. If you make simplicity a 1st principle - Great things will come your way."),(0,r.yg)("p",null,(0,r.yg)("img",{alt:"The sweet spot",src:a(4971).A,width:"1150",height:"713"})),(0,r.yg)("p",null,"Big words, how exactly? Here are a few examples:"),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"- Simple language -")," We use TypeScript because we believe in types, but we minimize advanced features. This boils down to using functions only, sometimes also classes. No abstracts, generics, complex types or anything that demands more CPU cycles from the reader."),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"- Less generic -")," Yes, you read it right. If you can code a function that covers fewer scenarios but is shorter and simpler to understand - Consider this option first. 
Sometimes one is forced to make things generic - That's fine, at least we minimized the amount of complex code locations"),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"- Simple tools -")," Need to use some 3rd party for some task? Choose the library that is doing the minimal amount of work. For example, when seeking a library that parses JWT tokens - avoid picking a super-fancy framework that can solve any authorization path (e.g., Passport). Instead, opt for a library that is doing exactly this. This will result in code that is simpler to understand and a reduced bug surface"),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"- Prefer Node/JavaScript built-in tooling -")," Some new frameworks have abstractions over some standard tooling. They have their way of defining modules, libraries and others which demand learning one more concept and being exposed to an unnecessary layer of bugs. Our preferred way is the vanilla way, if it's part of JavaScript/Node - We use it. For example, should we need to group a bunch of files as a logical module - We use ESM to export the relevant files and functions"),(0,r.yg)("p",null,(0,r.yg)("a",{parentName:"p",href:"http://www/no-link-yet"},"Our full coding guide will come here soon")),(0,r.yg)("h2",{id:"workflow"},"Workflow"),(0,r.yg)("h3",{id:"got-a-small-change-choose-the-fast-lane"},"Got a small change? Choose the fast lane"),(0,r.yg)("p",null,"Every small change can make this repo much better. If you intend to contribute a relatively small change like documentation change, linting rules, look&feel fixes, fixing TYPOs, comments or anything that is small and obvious - Just fork to your machine, code, ensure all tests pass (e.g., ",(0,r.yg)("inlineCode",{parentName:"p"},"npm test"),"), PR with a meaningful title, get ",(0,r.yg)("strong",{parentName:"p"},"1")," approver before merging. That's it."),(0,r.yg)("h3",{id:"need-to-change-the-code-itself-here-is-a-typical-workflow"},"Need to change the code itself? Here is a typical workflow"),(0,r.yg)("table",null,(0,r.yg)("thead",{parentName:"table"},(0,r.yg)("tr",{parentName:"thead"},(0,r.yg)("th",{parentName:"tr",align:null}),(0,r.yg)("th",{parentName:"tr",align:null},(0,r.yg)("strong",{parentName:"th"},"\u27a1\ufe0f Idea")),(0,r.yg)("th",{parentName:"tr",align:null},(0,r.yg)("strong",{parentName:"th"},"\u27a1 Design decisions")),(0,r.yg)("th",{parentName:"tr",align:null},(0,r.yg)("strong",{parentName:"th"},"\u27a1 Code")),(0,r.yg)("th",{parentName:"tr",align:null},(0,r.yg)("strong",{parentName:"th"},"\u27a1\ufe0f Merge")))),(0,r.yg)("tbody",{parentName:"table"},(0,r.yg)("tr",{parentName:"tbody"},(0,r.yg)("td",{parentName:"tr",align:null},(0,r.yg)("strong",{parentName:"td"},"When")),(0,r.yg)("td",{parentName:"tr",align:null},"Got an idea how to improve? Want to handle an existing issue?"),(0,r.yg)("td",{parentName:"tr",align:null},"When the change implies some major decisions, those should be discussed in advance"),(0,r.yg)("td",{parentName:"tr",align:null},"When you got confirmation from a core maintainer that the design decisions are sensible"),(0,r.yg)("td",{parentName:"tr",align:null},"When you have accomplished a ",(0,r.yg)("em",{parentName:"td"},"short iteration"),". 
If the whole change is small, PR in the end")),(0,r.yg)("tr",{parentName:"tbody"},(0,r.yg)("td",{parentName:"tr",align:null},(0,r.yg)("strong",{parentName:"td"},"What")),(0,r.yg)("td",{parentName:"tr",align:null},(0,r.yg)("strong",{parentName:"td"},"1.")," Create an issue (if it doesn't exist) ",(0,r.yg)("br",null)," ",(0,r.yg)("strong",{parentName:"td"},"2.")," Label the issue with its type (e.g., question, bug) and the area of improvement (e.g., area-generator, area-express) ",(0,r.yg)("br",null)," ",(0,r.yg)("strong",{parentName:"td"},"3.")," Comment and specify your intent to handle this issue"),(0,r.yg)("td",{parentName:"tr",align:null},(0,r.yg)("strong",{parentName:"td"},"1.")," Within the issue, specify your overall approach/design. Or just open a discussion ",(0,r.yg)("strong",{parentName:"td"},"2.")," If choosing a 3rd party library, ensure to follow our standard decision and comparison template. ",(0,r.yg)("a",{parentName:"td",href:"/decisions/configuration-library"},"Example can be found here")),(0,r.yg)("td",{parentName:"tr",align:null},(0,r.yg)("strong",{parentName:"td"},"1.")," Do it with passion \ud83d\udc9c ",(0,r.yg)("br",null)," ",(0,r.yg)("strong",{parentName:"td"},"2.")," Follow our coding guide. Keep it simple. Stay loyal to our philosophy ",(0,r.yg)("br",null)," ",(0,r.yg)("strong",{parentName:"td"},"3.")," Run all the quality measures frequently (testing, linting)"),(0,r.yg)("td",{parentName:"tr",align:null},(0,r.yg)("strong",{parentName:"td"},"1.")," Share your progress early by submitting a ",(0,r.yg)("a",{parentName:"td",href:"https://github.blog/2019-02-14-introducing-draft-pull-requests/"},"work in progress PR")," ",(0,r.yg)("br",null)," ",(0,r.yg)("strong",{parentName:"td"},"2.")," Ensure all CI checks pass (e.g., testing) ",(0,r.yg)("br",null)," ",(0,r.yg)("strong",{parentName:"td"},"3.")," Get at least one approval before merging")))),(0,r.yg)("h2",{id:"roles"},"Roles"),(0,r.yg)("h2",{id:"project-structure"},"Project structure"),(0,r.yg)("h3",{id:"high-level-sections"},"High-level sections"),(0,r.yg)("p",null,"The repo has 3 root folders that represent what we do:"),(0,r.yg)("ul",null,(0,r.yg)("li",{parentName:"ul"},(0,r.yg)("strong",{parentName:"li"},"docs")," - Anything we write to make this project super easy to work with"),(0,r.yg)("li",{parentName:"ul"},(0,r.yg)("strong",{parentName:"li"},"code-generator")," - A tool with great DX to choose and generate the right app for the user"),(0,r.yg)("li",{parentName:"ul"},(0,r.yg)("strong",{parentName:"li"},"code-templates")," - The code that we generate with the right patterns and practices")),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-mermaid"},"%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#99BF2C','secondaryColor':'#C2DF84','lineColor':'#ABCA64','fontWeight': 'bold', 'fontFamily': 'comfortaa, Roboto'}}}%%\ngraph\n A[Practica] --\x3e|How we create apps| B(Code Generators)\n A --\x3e|The code that we generate!| C(Code Templates)\n A --\x3e|How we explain ourself| D(Docs)\n\n\n")),(0,r.yg)("h3",{id:"the-code-templates"},"The code templates"),(0,r.yg)("p",null,"Typically, the two main sections are the Microservice (apps) and cross-cutting-concern libraries:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-mermaid"},"%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#99BF2C','secondaryColor':'#C2DF84','lineColor':'#ABCA64','fontWeight': 'bold', 'fontFamily': 'comfortaa, Roboto'}}}%%\ngraph\n A[Code Templates] --\x3e|The example 
Microservice/app| B(Services)\n B --\x3e|Where the API, logic and data lives| D(Example Microservice)\n A --\x3e|Cross Microservice concerns| C(Libraries)\n C --\x3e|Explained in a dedicated section| K(*Multiple libraries like logger)\n style D stroke:#333,stroke-width:4px\n\n\n")),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"The Microservice structure")),(0,r.yg)("p",null,"The entry-point of the generated code is an example Microservice that exposes an API and has the traditional layers of a component:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-mermaid"},"%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#99BF2C','secondaryColor':'#C2DF84','lineColor':'#ABCA64','fontWeight': 'bold', 'fontFamily': 'comfortaa, Roboto'}}}%%\ngraph\n A[Services] --\x3e|Where the API, logic and data lives| D(Example Microservice)\n A --\x3e|Almost empty, used to exemplify Microservice communication| E(Collaborator Microservice)\n D --\x3e|The web layer with REST/Graph| G(Web/API layer) \n N --\x3e|Docker-compose based DB, MQ and Cache| F(Infrastructure)\n D --\x3e|Where the business lives| M(Domain layer) \n D --\x3e|Anything related with database| N(Data-access layer)\n D --\x3e|Component-wide testing| S(Testing)\n style D stroke:#333,stroke-width:4px\n")),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"Libraries")),(0,r.yg)("p",null,"All libraries are independent npm packages that can be tested in isolation"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-mermaid"},"%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#99BF2C','secondaryColor':'#C2DF84','lineColor':'#ABCA64','fontWeight': 'bold', 'fontFamily': 'comfortaa, Roboto'}}}%%\ngraph\n A[Libraries] --\x3e B(Logger)\n A[Libraries] --\x3e |Token-based auth| C(Authorization)\n A[Libraries] --\x3e |Retrieve and validate the configuration| D(Configuration)\n A[Libraries] --\x3e E(Error handler)\n A[Libraries] --\x3e E(MetricsService)\n A[Libraries] --\x3e Z(More to come...)\n style Z stroke:#333,stroke-width:4px\n")),(0,r.yg)("h3",{id:"the-code-generator-structure"},"The code generator structure"),(0,r.yg)("h2",{id:"packages-domains"},"Packages (domains)"),(0,r.yg)("p",null,"This solution is built around independent domains that share ",(0,r.yg)("em",{parentName:"p"},"almost")," nothing with others. It is recommended to start with understanding a single and small domain (package), then expanding and getting acquainted with more. This is also an opportunity to master a specific topic that you're passionate about. 
Following is our packages list, choose where you wish to contribute first"),(0,r.yg)("table",null,(0,r.yg)("thead",{parentName:"table"},(0,r.yg)("tr",{parentName:"thead"},(0,r.yg)("th",{parentName:"tr",align:null},(0,r.yg)("strong",{parentName:"th"},"Package")),(0,r.yg)("th",{parentName:"tr",align:null},(0,r.yg)("strong",{parentName:"th"},"What")),(0,r.yg)("th",{parentName:"tr",align:null},(0,r.yg)("strong",{parentName:"th"},"Status")),(0,r.yg)("th",{parentName:"tr",align:null},(0,r.yg)("strong",{parentName:"th"},"Chosen libs")),(0,r.yg)("th",{parentName:"tr",align:null},(0,r.yg)("strong",{parentName:"th"},"Quick links")))),(0,r.yg)("tbody",{parentName:"table"},(0,r.yg)("tr",{parentName:"tbody"},(0,r.yg)("td",{parentName:"tr",align:null},"microservice/express"),(0,r.yg)("td",{parentName:"tr",align:null},"A web layer of an example Microservice based on expressjs"),(0,r.yg)("td",{parentName:"tr",align:null},"\ud83e\uddd3\ud83c\udffd Stable"),(0,r.yg)("td",{parentName:"tr",align:null},"-"),(0,r.yg)("td",{parentName:"tr",align:null},"- ",(0,r.yg)("a",{parentName:"td",href:"http://not-exist-yet"},"Code & readme"),(0,r.yg)("br",null),"- ",(0,r.yg)("a",{parentName:"td",href:"http://not-exist-yet"},"Issues & ideas"))),(0,r.yg)("tr",{parentName:"tbody"},(0,r.yg)("td",{parentName:"tr",align:null},"microservice/fastify"),(0,r.yg)("td",{parentName:"tr",align:null},"A web layer of an example Microservice based on Fastify"),(0,r.yg)("td",{parentName:"tr",align:null},"\ud83d\udc23 Not started",(0,r.yg)("br",null),(0,r.yg)("br",null),"(Take the heel, open an issue)"),(0,r.yg)("td",{parentName:"tr",align:null},"-"),(0,r.yg)("td",{parentName:"tr",align:null},"- ",(0,r.yg)("a",{parentName:"td",href:"http://not-exist-yet"},"Code & readme"),(0,r.yg)("br",null),"- ",(0,r.yg)("a",{parentName:"td",href:"http://not-exist-yet"},"Issues & ideas"))),(0,r.yg)("tr",{parentName:"tbody"},(0,r.yg)("td",{parentName:"tr",align:null},"microservice/dal/prisma"),(0,r.yg)("td",{parentName:"tr",align:null},"A DAL layer of an example Microservice based on Prisma.js"),(0,r.yg)("td",{parentName:"tr",align:null},"\ud83d\udc25 Beta/skeleton"),(0,r.yg)("td",{parentName:"tr",align:null},"-"),(0,r.yg)("td",{parentName:"tr",align:null},"- ",(0,r.yg)("a",{parentName:"td",href:"http://not-exist-yet"},"Code & readme"),(0,r.yg)("br",null),"- ",(0,r.yg)("a",{parentName:"td",href:"http://not-exist-yet"},"Issues & ideas"))),(0,r.yg)("tr",{parentName:"tbody"},(0,r.yg)("td",{parentName:"tr",align:null},"library/logger"),(0,r.yg)("td",{parentName:"tr",align:null},"A logging library wrapper"),(0,r.yg)("td",{parentName:"tr",align:null},"\ud83d\udc25 Beta/skeleton",(0,r.yg)("br",null),(0,r.yg)("br",null),"(Take it!)"),(0,r.yg)("td",{parentName:"tr",align:null},(0,r.yg)("a",{parentName:"td",href:"https://github.com/pinojs/pino"},"Pino"),(0,r.yg)("br",null),(0,r.yg)("br",null),"Why: ",(0,r.yg)("a",{parentName:"td",href:"http://not-exist-yet"},"Decision here")),(0,r.yg)("td",{parentName:"tr",align:null},"- ",(0,r.yg)("a",{parentName:"td",href:"http://not-exist-yet"},"Code & readme"),(0,r.yg)("br",null),"- ",(0,r.yg)("a",{parentName:"td",href:"http://not-exist-yet"},"Issues & ideas"))),(0,r.yg)("tr",{parentName:"tbody"},(0,r.yg)("td",{parentName:"tr",align:null},"library/configuration"),(0,r.yg)("td",{parentName:"tr",align:null},"A library that validates, reads and serve configuration"),(0,r.yg)("td",{parentName:"tr",align:null},"\ud83e\uddd2\ud83c\udffb Solid",(0,r.yg)("br",null),(0,r.yg)("br",null),"(Improvements 
needed)"),(0,r.yg)("td",{parentName:"tr",align:null},(0,r.yg)("a",{parentName:"td",href:"https://www.npmjs.com/package/convict"},"Convict"),(0,r.yg)("br",null),(0,r.yg)("br",null),"Why: ",(0,r.yg)("a",{parentName:"td",href:"https://github.com/bestpractices/practica/blob/main/docs/decisions/configuration-library.md"},"Decision here")),(0,r.yg)("td",{parentName:"tr",align:null},"- ",(0,r.yg)("a",{parentName:"td",href:"http://not-exist-yet"},"Code & readme"),(0,r.yg)("br",null),"- ",(0,r.yg)("a",{parentName:"td",href:"http://not-exist-yet"},"Issues & ideas"))),(0,r.yg)("tr",{parentName:"tbody"},(0,r.yg)("td",{parentName:"tr",align:null},"library/jwt-based-authentication"),(0,r.yg)("td",{parentName:"tr",align:null},"A library that authenticates requests with JWT token"),(0,r.yg)("td",{parentName:"tr",align:null},"\ud83e\uddd3\ud83c\udffd Stable"),(0,r.yg)("td",{parentName:"tr",align:null},(0,r.yg)("a",{parentName:"td",href:"https://www.npmjs.com/package/jsonwebtoken"},"jsonwebtoken"),(0,r.yg)("br",null),(0,r.yg)("br",null),"Why: ",(0,r.yg)("br",null),(0,r.yg)("a",{parentName:"td",href:"https://github.com/bestpractices/practica/blob/main/docs/decisions/configuration-library.md"},"Decision here")),(0,r.yg)("td",{parentName:"tr",align:null},"- ",(0,r.yg)("a",{parentName:"td",href:"http://not-exist-yet"},"Code & readme"),(0,r.yg)("br",null),"- ",(0,r.yg)("a",{parentName:"td",href:"http://not-exist-yet"},"Issues & ideas"))))),(0,r.yg)("h2",{id:"development-machine-setup"},"Development machine setup"),(0,r.yg)("p",null,"\u2705 Ensure Node, Docker and ",(0,r.yg)("a",{parentName:"p",href:"https://github.com/nvm-sh/nvm#installing-and-updating"},"NVM")," are installed"),(0,r.yg)("p",null,"\u2705 Configure GitHub and npm 2FA!"),(0,r.yg)("p",null,"\u2705 Close the repo if you are a maintainer, or fork it if have no collaborators permissions"),(0,r.yg)("p",null,"\u2705 With your terminal, ensure the right Node version is installed:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre"},"nvm use\n")),(0,r.yg)("p",null,"\u2705 Install dependencies:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre"},"nvm i\n")),(0,r.yg)("p",null,"\u2705 Ensure all tests pass:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre"},"npm t\n")),(0,r.yg)("p",null,"\u2705 Code. Run the test. And vice versa"),(0,r.yg)("h2",{id:"areas-to-focus-on"},"Areas to focus on"),(0,r.yg)("p",null,(0,r.yg)("img",{parentName:"p",src:"https://user-images.githubusercontent.com/8571500/157631757-849584a3-1701-4248-8516-a7d60066089c.png",alt:"domains"})),(0,r.yg)("h2",{id:"supported-nodejs-version"},"Supported Node.js version"),(0,r.yg)("ul",null,(0,r.yg)("li",{parentName:"ul"},"The generated code should be compatible with Node.js versions >14.0.0."),(0,r.yg)("li",{parentName:"ul"},"It's fair to demand LTS version from the repository maintainers (the generator code)")),(0,r.yg)("h2",{id:"code-structure"},"Code structure"),(0,r.yg)("p",null,"Soon"))}u.isMDXComponent=!0},4971:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/balance-fd441003eba7cf60655af6099ee55ce6.png"}}]);
\ No newline at end of file
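To make the library structure described above concrete, here is a minimal sketch of what a thin wrapper like the library/logger package could look like. This is an illustration only - the `configure` function, its options, and the module layout are assumptions made for the example, not Practica's actual API; only the underlying Pino calls are real:

```javascript
// logger.js - a hypothetical, deliberately thin wrapper around Pino.
// The names and options below are assumptions for illustration,
// not the real Practica library/logger API.
const pino = require('pino');

let underlyingLogger;

function configure(options = {}) {
  // Delegate everything to Pino - the wrapper adds no extra abstractions
  underlyingLogger = pino({ level: options.level || 'info' });
}

function info(message, metadata = {}) {
  if (!underlyingLogger) {
    configure(); // lazy default initialization
  }
  underlyingLogger.info(metadata, message);
}

module.exports = { configure, info };
```

Keeping the wrapper this small is what lets each library be published and tested in isolation, while consumers stay decoupled from the specific logging implementation.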
diff --git a/assets/js/27c1859b.6664616b.js b/assets/js/27c1859b.6664616b.js
new file mode 100644
index 00000000..cc17f778
--- /dev/null
+++ b/assets/js/27c1859b.6664616b.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[2642],{8799:a=>{a.exports=JSON.parse('{"label":"practica","permalink":"/blog/tags/practica","allTagsPath":"/blog/tags","count":3}')}}]);
\ No newline at end of file
diff --git a/assets/js/2b2237c5.2e8d1fa3.js b/assets/js/2b2237c5.2e8d1fa3.js
new file mode 100644
index 00000000..21411ab4
--- /dev/null
+++ b/assets/js/2b2237c5.2e8d1fa3.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[3849],{1966:c=>{c.exports=JSON.parse('{"name":"docusaurus-plugin-content-docs","id":"default"}')}}]);
\ No newline at end of file
diff --git a/assets/js/2b812b66.476980f6.js b/assets/js/2b812b66.476980f6.js
new file mode 100644
index 00000000..647a6957
--- /dev/null
+++ b/assets/js/2b812b66.476980f6.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[6230],{5680:(e,t,a)=>{a.d(t,{xA:()=>g,yg:()=>d});var n=a(6540);function i(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function s(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);t&&(n=n.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,n)}return a}function o(e){for(var t=1;t=0||(i[a]=e[a]);return i}(e,t);if(Object.getOwnPropertySymbols){var s=Object.getOwnPropertySymbols(e);for(n=0;n=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(i[a]=e[a])}return i}var l=n.createContext({}),p=function(e){var t=n.useContext(l),a=t;return e&&(a="function"==typeof e?e(t):o(o({},t),e)),a},g=function(e){var t=p(e.components);return n.createElement(l.Provider,{value:t},e.children)},c="mdxType",h={inlineCode:"code",wrapper:function(e){var t=e.children;return n.createElement(n.Fragment,{},t)}},u=n.forwardRef((function(e,t){var a=e.components,i=e.mdxType,s=e.originalType,l=e.parentName,g=r(e,["components","mdxType","originalType","parentName"]),c=p(a),u=i,d=c["".concat(l,".").concat(u)]||c[u]||h[u]||s;return a?n.createElement(d,o(o({ref:t},g),{},{components:a})):n.createElement(d,o({ref:t},g))}));function d(e,t){var a=arguments,i=t&&t.mdxType;if("string"==typeof e||i){var s=a.length,o=new Array(s);o[0]=u;var r={};for(var l in t)hasOwnProperty.call(t,l)&&(r[l]=t[l]);r.originalType=e,r[c]="string"==typeof e?e:i,o[1]=r;for(var p=2;p{a.r(t),a.d(t,{assets:()=>l,contentTitle:()=>o,default:()=>h,frontMatter:()=>s,metadata:()=>r,toc:()=>p});var n=a(8168),i=(a(6540),a(5680));const s={slug:"a-compilation-of-outstanding-testing-articles-with-javaScript",date:"2023-08-06T10:00",hide_table_of_contents:!0,title:"A compilation of outstanding testing articles (with JavaScript)",authors:["goldbergyoni"],tags:["node.js","testing","javascript","tdd","unit","integration"]},o=void 0,r={permalink:"/blog/a-compilation-of-outstanding-testing-articles-with-javaScript",editUrl:"https://github.com/practicajs/practica/tree/main/docs/blog/10-masterpiece-articles/index.md",source:"@site/blog/10-masterpiece-articles/index.md",title:"A compilation of outstanding testing articles (with JavaScript)",description:"What's special about this article?",date:"2023-08-06T10:00:00.000Z",formattedDate:"August 6, 2023",tags:[{label:"node.js",permalink:"/blog/tags/node-js"},{label:"testing",permalink:"/blog/tags/testing"},{label:"javascript",permalink:"/blog/tags/javascript"},{label:"tdd",permalink:"/blog/tags/tdd"},{label:"unit",permalink:"/blog/tags/unit"},{label:"integration",permalink:"/blog/tags/integration"}],readingTime:12.025,hasTruncateMarker:!1,authors:[{name:"Yoni Goldberg",title:"Practica.js core maintainer",url:"https://github.com/goldbergyoni",imageURL:"https://github.com/goldbergyoni.png",key:"goldbergyoni"}],frontMatter:{slug:"a-compilation-of-outstanding-testing-articles-with-javaScript",date:"2023-08-06T10:00",hide_table_of_contents:!0,title:"A compilation of outstanding testing articles (with JavaScript)",authors:["goldbergyoni"],tags:["node.js","testing","javascript","tdd","unit","integration"]},prevItem:{title:"About the sweet and powerful 'use case' code pattern",permalink:"/blog/about-the-sweet-and-powerful-use-case-code-pattern"},nextItem:{title:"Testing the dark scenarios of your Node.js 
application",permalink:"/blog/testing-the-dark-scenarios-of-your-nodejs-application"}},l={authorsImageUrls:[void 0]},p=[{value:"What's special about this article?",id:"whats-special-about-this-article",level:2},{value:"\ud83d\udcc4 1. 'Selective Unit Testing \u2013 Costs and Benefits'",id:"-1-selective-unit-testing--costs-and-benefits",level:2},{value:"\ud83d\udcc4 2. 'Testing implementation details' (JavaScript example)",id:"-2-testing-implementation-details-javascript-example",level:2},{value:"\ud83d\udcc4 3. 'Testing Microservices, the sane way'",id:"-3-testing-microservices-the-sane-way",level:2},{value:"\ud83d\udcc4 4. 'How to Unit Test with Node.js?' (JavaScript examples, for beginners)",id:"-4-how-to-unit-test-with-nodejs-javascript-examples-for-beginners",level:2},{value:"\ud83d\udcc4 5. 'Unit test fetish'",id:"-5-unit-test-fetish",level:2},{value:"\ud83d\udcc4 6. 'Mocking is a Code Smell' (JavaScript examples)",id:"-6-mocking-is-a-code-smell-javascript-examples",level:2},{value:"\ud83d\udcc4 7. 'Why Good Developers Write Bad Unit Tests'",id:"-7-why-good-developers-write-bad-unit-tests",level:2},{value:"\ud83d\udcc4 8. 'An Overview of JavaScript Testing in 2022' (JavaScript examples)",id:"-8-an-overview-of-javascript-testing-in-2022-javascript-examples",level:2},{value:"\ud83d\udcc4 9. Testing in Production, the safe way",id:"-9-testing-in-production-the-safe-way",level:2},{value:"\ud83d\udcc4 10. 'Please don't mock me' (JavaScript examples, from JSConf)",id:"-10-please-dont-mock-me-javascript-examples-from-jsconf",level:2},{value:"\ud83d\udcc4 Shameless plug: my articles",id:"-shameless-plug-my-articles",level:3},{value:"\ud83c\udf81 Bonus: Some other great testing content",id:"-bonus-some-other-great-testing-content",level:3}],g={toc:p},c="wrapper";function h(e){let{components:t,...s}=e;return(0,i.yg)(c,(0,n.A)({},g,s,{components:t,mdxType:"MDXLayout"}),(0,i.yg)("h2",{id:"whats-special-about-this-article"},"What's special about this article?"),(0,i.yg)("p",null,"As a testing consultant, I read tons of testing articles throughout the years. The majority is nice-to-read, casual pieces of content which not always worth your precious time. Once in a while, not very often, I landed on an article that was ",(0,i.yg)("em",{parentName:"p"},"shockingly good")," and could genuinely improve your test writing skills. I've cherry-picked these outstanding articles for you, and added my abstract nearby. Half of these articles are related directly to JavaScript/Node.js, the second half covers ubiquitous testing concepts that are applicable in every language"),(0,i.yg)("p",null,"Why did I find these articles to be outstanding? First, the writing quality is excellent. Second, they deal with the 'new world of testing', not the commonly known 'TDD-ish' stuff but rather modern concepts and tooling"),(0,i.yg)("p",null,"Too busy to read them all? Search for articles that are decorated with a medal \ud83c\udfc5, these are a true masterpiece pieces of content that you never wanna miss"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"Before we start:")," If you haven't heard, I launched my comprehensive Node.js testing course a week ago (",(0,i.yg)("a",{parentName:"p",href:"https://testjavascript.com/curriculum2/"},"curriculum here"),"). 
There are less than 48 hours left for the ",(0,i.yg)("a",{parentName:"p",href:"https://courses.testjavascript.com/p/node-js-javascript-testing-from-a-to-z"},"\ud83c\udf81 special launch deal")),(0,i.yg)("p",null,"Here they are, 10 outstanding testing articles:"),(0,i.yg)("br",null),(0,i.yg)("h2",{id:"-1-selective-unit-testing--costs-and-benefits"},"\ud83d\udcc4 1. 'Selective Unit Testing \u2013 Costs and Benefits'"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\u270d\ufe0f Author:")," Steve Sanderson"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd16 Abstract:")," We all found ourselves at least once in the ongoing and flammable discussion about 'units' vs 'integration'. This article delves into a greater level of specificity and discusses WHEN unit tests shine by considering the costs of writing these tests under ",(0,i.yg)("em",{parentName:"p"},"various scenarios"),'. Many treat their testing strategy as a static model - a testing technique they always apply regardless of the context. "Always write unit tests against functions", "Write mostly integration tests" are the types of arguments often heard. Conversely, this article suggests that the attractiveness of unit tests should be evaluated based on the ',(0,i.yg)("em",{parentName:"p"},"costs and benefits per module"),". The article classifies multiple scenarios where the net value of unit tests is high or low, for example:"),(0,i.yg)("blockquote",null,(0,i.yg)("p",{parentName:"blockquote"},"If your code is basically obvious \u2013 so at a glance you can see exactly what it does \u2013 then additional design and verification (e.g., through unit testing) yields extremely minimal benefit, if any")),(0,i.yg)("p",null,"The author also offers a 2x2 model to visualize when the attractiveness of unit tests is high or low"),(0,i.yg)("p",null,(0,i.yg)("img",{alt:"When unit shines",src:a(5603).A,width:"453",height:"328"})),(0,i.yg)("p",null,"Side note, not part of the article: Personally I (Yoni) always start with component tests, outside-in, covering first the high-level user flow details (a.k.a ",(0,i.yg)("a",{parentName:"p",href:"https://www.crispy-engineering.com/p/why-test-diamond-model-makes-sense"},"the testing diamond"),"). Then later, once I have functions, I add unit tests based on their net value. This article helped me a lot in classifying and evaluating the benefits of units in various scenarios"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udc53 Read time:")," 9 min (1850 words)"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd17 Link:")," ",(0,i.yg)("a",{parentName:"p",href:"https://blog.stevensanderson.com/2009/11/04/selective-unit-testing-costs-and-benefits/"},"https://blog.stevensanderson.com/2009/11/04/selective-unit-testing-costs-and-benefits/")),(0,i.yg)("br",null),(0,i.yg)("h2",{id:"-2-testing-implementation-details-javascript-example"},"\ud83d\udcc4 2. 'Testing implementation details' (JavaScript example)"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\u270d\ufe0f Author:")," Kent C Dodds"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd16 Abstract:")," The author outlines with a code example the unavoidable tragic fate of a tester who asserts on implementation details. Put aside the effort in testing so many details, going this route always ends with 'false positives' and 'false negatives' that cloud the tests' reliability. 
The article illustrates this with a frontend code example, but the lesson takeaway is ubiquitous to any kind of testing"),(0,i.yg)("blockquote",null,(0,i.yg)("p",{parentName:"blockquote"},"\"There are two distinct reasons that it's important to avoid testing implementation details. Tests which test implementation details: "),(0,i.yg)("ol",{parentName:"blockquote"},(0,i.yg)("li",{parentName:"ol"},"Can break when you refactor application code. ",(0,i.yg)("em",{parentName:"li"},"False negatives")),(0,i.yg)("li",{parentName:"ol"},"May not fail when you break application code. ",(0,i.yg)("em",{parentName:"li"},"False positives"),'"'))),(0,i.yg)("p",null,"p.s. This author has another outstanding post about a modern testing strategy, check out this one as well - ",(0,i.yg)("a",{parentName:"p",href:"https://kentcdodds.com/blog/write-tests"},"'Write tests. Not too many. Mostly integration'")),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udc53 Read time:")," 13 min (2600 words)"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd17 Link:")," ",(0,i.yg)("a",{parentName:"p",href:"https://kentcdodds.com/blog/testing-implementation-details"},"https://kentcdodds.com/blog/testing-implementation-details")),(0,i.yg)("br",null),(0,i.yg)("h2",{id:"-3-testing-microservices-the-sane-way"},"\ud83d\udcc4 3. 'Testing Microservices, the sane way'"),(0,i.yg)("p",null,"\ud83c\udfc5 This is a masterpiece"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\u270d\ufe0f Author:")," Cindy Sridharan"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd16 Abstract:")," This one is the entire Microservices and distributed modern testing bible packed in a single long article that is also super engaging. I remember when I came across it four years ago, winter time; I spent an hour every day under my blanket before sleep with a smile spread over my face. I clicked on every link, paused after every paragraph to think - a whole new world was opening in front of me. In fact, it was so fascinating that it made me want to specialize in this domain. Fast forward, years later, this is a major part of my work and I enjoy every moment"),(0,i.yg)("p",null,"This paper starts by explaining why E2E, unit tests and exploratory QA will fall short in a distributed environment. Not only this, it explains why any kind of coded test won't be enough and why a rich toolbox of techniques is needed. It goes through a handful of modern testing techniques that are unfamiliar to most developers. One of its key parts deals with what should be the canonical developer's testing technique: the author advocates for \"big unit tests\" (i.e., component tests) as it strikes a great balance between developer comfort and realism"),(0,i.yg)("blockquote",null,(0,i.yg)("p",{parentName:"blockquote"},"I coined the term \u201cstep-up testing\u201d, the general idea being to test at one layer above what\u2019s generally advocated for. Under this model, unit tests would look more like integration tests (by treating I/O as a part of the unit under test within a bounded context), integration testing would look more like testing against real production, and testing in production looks more like, well, monitoring and exploration. The restructured test pyramid (test funnel?) 
for distributed systems would look like the following:")),(0,i.yg)("p",null,(0,i.yg)("img",{alt:"When unit shines",src:a(5894).A,width:"546",height:"409"})),(0,i.yg)("p",null,"Beyond its main scope, whatever type of system you are dealing with - this article will broaden your perspective on testing and expose you to many new ideas that are highly applicable"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udc53 Read time:")," > 2 hours (10,500 words with many links)"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd17 Link:")," ",(0,i.yg)("a",{parentName:"p",href:"https://copyconstruct.medium.com/testing-microservices-the-sane-way-9bb31d158c16"},"https://copyconstruct.medium.com/testing-microservices-the-sane-way-9bb31d158c16")),(0,i.yg)("br",null),(0,i.yg)("h2",{id:"-4-how-to-unit-test-with-nodejs-javascript-examples-for-beginners"},"\ud83d\udcc4 4. 'How to Unit Test with Node.js?' (JavaScript examples, for beginners)"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\u270d\ufe0f Author:")," Ryan Jones"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd16 Abstract:")," ",(0,i.yg)("em",{parentName:"p"},"One single recommendation for beginners:")," All the other articles on this list cover advanced testing. This article, and only this one, is meant for testing newbies who are looking to take their first practical steps in this world"),(0,i.yg)("p",null,"This tutorial was chosen from a handful of other alternatives because it's well-written and also relatively comprehensive. It covers the first-steps 'kata' that a beginner should learn: the test anatomy syntax, the test runner's CLI, assertions and asynchronous tests. It goes without saying that this knowledge won't be sufficient for covering a real-world app with testing, but it gets you safely to the next phase. My personal advice: after reading this one, your next step is learning about ",(0,i.yg)("a",{parentName:"p",href:"https://www.testim.io/blog/sinon-js-tutorial/"},"test doubles (mocking)")),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udc53 Read time:")," 16 min (3000 words)"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd17 Link:")," ",(0,i.yg)("a",{parentName:"p",href:"https://medium.com/serverlessguru/how-to-unit-test-with-nodejs-76967019ba56"},"https://medium.com/serverlessguru/how-to-unit-test-with-nodejs-76967019ba56")),(0,i.yg)("br",null),(0,i.yg)("h2",{id:"-5-unit-test-fetish"},"\ud83d\udcc4 5. 'Unit test fetish'"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\u270d\ufe0f Author:")," Martin S\xfastrik"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd16 Abstract:")," The article opens with 'I hear that people feel an uncontrollable urge to write unit tests nowadays. If you are one of those affected, spare a few minutes and consider these reasons for NOT writing unit tests'. Despite these words, the article is not against unit tests as a principle; rather, it highlights when & where unit tests fall short. In these cases, other techniques should be considered. Here is an example: Unit tests inherently have a lower return on investment, and the author offers a sound analogy for this: 'If you are painting a house, you want to start with the biggest brush at hand and spare the tiny brush for the end to deal with fine details. 
If you begin your QA work with unit tests, you are essentially trying to paint the entire house using the finest Chinese calligraphy brush...'"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udc53 Read time:")," 5 min (1000 words)"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd17 Link:")," ",(0,i.yg)("a",{parentName:"p",href:"https://250bpm.com/blog:40/"},"https://250bpm.com/blog:40/")),(0,i.yg)("br",null),(0,i.yg)("h2",{id:"-6-mocking-is-a-code-smell-javascript-examples"},"\ud83d\udcc4 6. 'Mocking is a Code Smell' (JavaScript examples)"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\u270d\ufe0f Author:")," Eric Elliott"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd16 Abstract:")," Most of the articles here belong more to the 'modern wave of testing'; here is something more 'classic' and appealing to TDD lovers or just anyone with a need to write unit tests. This article is about HOW to reduce the amount of mocking (test doubles) in your tests. Not only because mocking is an overhead in test writing, but also because it hints that something might be wrong. In other words, mocking is not definitely wrong and in need of an immediate fix, but ",(0,i.yg)("em",{parentName:"p"},"many")," mocks are a sign of something not ideal. Consider a module that inherits from many others, or a chatty one that collaborates with a handful of other modules to do its job - testing and changing this structure is a burden:"),(0,i.yg)("blockquote",null,(0,i.yg)("p",{parentName:"blockquote"},'"Mocking is required when our decomposition strategy has failed"')),(0,i.yg)("p",null,"The author goes through a variety of techniques to design more autonomous units, like using pure functions by isolating side-effects from the rest of the program logic, using pub/sub, isolating I/O, composing units with patterns like monadic compositions, and some more"),(0,i.yg)("p",null,"The overall article tone is balanced. In some parts, it encourages functional programming and techniques that are far from the mainstream - consider reading these few parts with a grain of salt"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udc53 Read time:")," 32 min (6,300 words)"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd17 Link:")," ",(0,i.yg)("a",{parentName:"p",href:"https://medium.com/javascript-scene/mocking-is-a-code-smell-944a70c90a6a"},"https://medium.com/javascript-scene/mocking-is-a-code-smell-944a70c90a6a")),(0,i.yg)("br",null),(0,i.yg)("h2",{id:"-7-why-good-developers-write-bad-unit-tests"},"\ud83d\udcc4 7. 'Why Good Developers Write Bad Unit Tests'"),(0,i.yg)("p",null,"\ud83c\udfc5 This is a masterpiece"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\u270d\ufe0f Author:")," Michael Lynch"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd16 Abstract:")," I love this one so much. The author exemplifies how ",(0,i.yg)("em",{parentName:"p"},"unexpectedly")," it is sometimes the good developers with their great intentions who write bad tests:"),(0,i.yg)("blockquote",null,(0,i.yg)("p",{parentName:"blockquote"},"Too often, software developers approach unit testing with the same flawed thinking... They mechanically apply all the \u201crules\u201d they learned in production code without examining whether they\u2019re appropriate for tests. 
As a result, they build skyscrapers at the beach")),(0,i.yg)("p",null,"Concrete code examples show how the test readability deteriorates once we apply 'skyscraper' thinking and how to keep it simple. In one part, he demonstrates how violating the DRY principle thoughtfully allows the reader to stay within the test while still keeping the code maintainable. This article alone, in 11 minutes, can greatly improve the tests of developers who tend to write sophisticated tests. If you have someone like this in your team, you now know what to do"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udc53 Read time:")," 11 min (2,200 words)"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd17 Link:")," ",(0,i.yg)("a",{parentName:"p",href:"https://mtlynch.io/good-developers-bad-tests/"},"https://mtlynch.io/good-developers-bad-tests/")),(0,i.yg)("br",null),(0,i.yg)("h2",{id:"-8-an-overview-of-javascript-testing-in-2022-javascript-examples"},"\ud83d\udcc4 8. 'An Overview of JavaScript Testing in 2022' (JavaScript examples)"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\u270d\ufe0f Author:")," Vitali Zaidman"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd16 Abstract:")," This paper is unique here as it doesn't cover a single topic; rather, it is a rundown of (almost) all JavaScript testing tools. This allows you to enrich the toolbox in your mind, and have more screwdrivers for more types of screws. For example, knowing that there are IDE extensions that show coverage information right within the code might help you boost test adoption in the team, if needed. Knowing that there are solid, free, and open source visual regression tools might encourage you to dip your toes in this water, to name a few examples."),(0,i.yg)("blockquote",null,(0,i.yg)("p",{parentName:"blockquote"},'"We reviewed the most trending testing strategies and tools in the web development community and hopefully made it easier for you to test your sites. In the end, the best decisions regarding application architecture today are made by understanding general patterns that are trending in the very active community of developers, and combining them with your own experience and the characteristics of your application."')),(0,i.yg)("p",null," The author was also kind enough to leave pros/cons alongside most tools so the reader can quickly get a sense of how the various options stack up against each other. The article covers categories like assertion libraries, test runners, code coverage tools, visual regression tools, E2E suites and more"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udc53 Read time:")," 37 min (7,400 words)"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd17 Link:")," ",(0,i.yg)("a",{parentName:"p",href:"https://medium.com/welldone-software/an-overview-of-javascript-testing-7ce7298b9870"},"https://medium.com/welldone-software/an-overview-of-javascript-testing-7ce7298b9870")),(0,i.yg)("br",null),(0,i.yg)("h2",{id:"-9-testing-in-production-the-safe-way"},"\ud83d\udcc4 9. Testing in Production, the safe way"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\u270d\ufe0f Author:")," Cindy Sridharan"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd16 Abstract:")," 'Testing in production' is a provocative term that sounds like a risky and careless approach of testing on production instead of verifying the delivery beforehand (yet another case of bad testing terminology). 
In practice, testing in production doesn't replace coding-time testing; it just adds an ",(0,i.yg)("em",{parentName:"p"},"additional")," layer of confidence by ",(0,i.yg)("em",{parentName:"p"},"safely")," testing in 3 more phases: deployment, release and post-release. This comprehensive article covers dozens of techniques, some of them unusual, like traffic shadowing, tap compare and more. More than anything else, it illustrates a holistic testing workflow, building confidence cumulatively from the developer machine until the new version is serving users in production"),(0,i.yg)("blockquote",null,(0,i.yg)("p",{parentName:"blockquote"},"I\u2019m more and more convinced that staging environments are like mocks - at best a pale imitation of the genuine article and the worst form of confirmation bias. ")),(0,i.yg)("blockquote",null,(0,i.yg)("p",{parentName:"blockquote"},"It\u2019s still better than having nothing - but \u201cworks in staging\u201d is only one step better than \u201cworks on my machine\u201d.")),(0,i.yg)("p",null,(0,i.yg)("img",{alt:"Testing in production",src:a(233).A,width:"680",height:"480"})),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udc53 Read time:")," 54 min (10,725 words)"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd17 Link:")," ",(0,i.yg)("a",{parentName:"p",href:"https://copyconstruct.medium.com/testing-in-production-the-safe-way-18ca102d0ef1"},"https://copyconstruct.medium.com/testing-in-production-the-safe-way-18ca102d0ef1")),(0,i.yg)("br",null),(0,i.yg)("h2",{id:"-10-please-dont-mock-me-javascript-examples-from-jsconf"},"\ud83d\udcc4 10. 'Please don't mock me' (JavaScript examples, from JSConf)"),(0,i.yg)("p",null,"\ud83c\udfc5 This is a masterpiece"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\u270d\ufe0f Author:")," Justin Searls "),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd16 Abstract:")," This fantastic YouTube talk deals with the Achilles heel of testing: where exactly to mock. The dilemma of where to end the test scope, what should be mocked and what's not - is presumably the most strategic test design decision. Consider for example having module A which interacts with module B. If you isolate A by mocking B, A will always pass, even when B's interface has changed and A's code didn't follow. This makes A's tests highly stable but... production will fail in hours. In his talk Justin says:"),(0,i.yg)("blockquote",null,(0,i.yg)("p",{parentName:"blockquote"},'"A test that never fails is a bad test because it doesn\'t tell you anything. Design tests to fail"')),(0,i.yg)("p",null,"Then he goes on and tackles many other interesting mocking crossroads, with beautiful visuals and tons of insights. Please don't miss this one"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udc53 Read time:")," 39 min"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udd17 Link:")," ",(0,i.yg)("a",{parentName:"p",href:"https://www.youtube.com/watch?v=x8sKpJwq6lY&list=PL1CRgzydk3vzk5nMZNLTODfMartQQzInE&index=148"},"https://www.youtube.com/watch?v=x8sKpJwq6lY&list=PL1CRgzydk3vzk5nMZNLTODfMartQQzInE&index=148")),(0,i.yg)("br",null),(0,i.yg)("h3",{id:"-shameless-plug-my-articles"},"\ud83d\udcc4 Shameless plug: my articles"),(0,i.yg)("p",null,"Here are a few articles that I wrote. Obviously I don't 'recommend' my own craft - I'm just checking modestly whether they appeal to you. 
Together, these articles gained 25,000 GitHub stars - maybe you'll find one of them useful?"),(0,i.yg)("ul",null,(0,i.yg)("li",{parentName:"ul"},(0,i.yg)("a",{parentName:"li",href:"https://github.com/testjavascript/nodejs-integration-tests-best-practices"},"Node.js testing - beyond the basics")),(0,i.yg)("li",{parentName:"ul"},(0,i.yg)("a",{parentName:"li",href:"https://github.com/goldbergyoni/javascript-testing-best-practices"},"50+ JavaScript testing best practices")),(0,i.yg)("li",{parentName:"ul"},(0,i.yg)("a",{parentName:"li",href:"https://yonigoldberg.medium.com/fighting-javascript-tests-complexity-with-the-basic-principles-87b7622eac9a"},"Writing clean JavaScript tests"))),(0,i.yg)("h3",{id:"-bonus-some-other-great-testing-content"},"\ud83c\udf81 Bonus: Some other great testing content"),(0,i.yg)("p",null,"These articles are also great, some are highly popular:"),(0,i.yg)("ul",null,(0,i.yg)("li",{parentName:"ul"},(0,i.yg)("a",{parentName:"li",href:"https://www.youtube.com/watch?v=5pwv3cuo3Qk"},"Property-Based Testing for everyone")),(0,i.yg)("li",{parentName:"ul"},(0,i.yg)("a",{parentName:"li",href:"https://www.hillelwayne.com/post/metamorphic-testing/"},"METAMORPHIC TESTING")),(0,i.yg)("li",{parentName:"ul"},(0,i.yg)("a",{parentName:"li",href:"https://medium.com/@eugenkiss/lean-testing-or-why-unit-tests-are-worse-than-you-think-b6500139a009"},"Lean Testing or Why Unit Tests are Worse than You Think")),(0,i.yg)("li",{parentName:"ul"},(0,i.yg)("a",{parentName:"li",href:"https://martinfowler.com/articles/microservice-testing/?utm_source=pocket_saves"},"Testing Strategies in a Microservice Architecture")),(0,i.yg)("li",{parentName:"ul"},(0,i.yg)("a",{parentName:"li",href:"https://kentbeck.github.io/TestDesiderata/"},"Test Desiderata")),(0,i.yg)("li",{parentName:"ul"},(0,i.yg)("a",{parentName:"li",href:"https://dhh.dk/2014/tdd-is-dead-long-live-testing.html"},"TDD is dead. Long live testing")),(0,i.yg)("li",{parentName:"ul"},(0,i.yg)("a",{parentName:"li",href:"https://dhh.dk/2014/test-induced-design-damage.html"},"Test-induced-design-damage")),(0,i.yg)("li",{parentName:"ul"},(0,i.yg)("a",{parentName:"li",href:"https://www.jamesshore.com/v2/projects/nullables/testing-without-mocks"},"testing-without-mocks")),(0,i.yg)("li",{parentName:"ul"},(0,i.yg)("a",{parentName:"li",href:"https://blog.developer.adobe.com/testing-error-handling-in-node-js-567323397114"},"Testing Node.js error handling"))),(0,i.yg)("p",null,"p.s. Last reminder, less than 48 hours left for my ",(0,i.yg)("a",{parentName:"p",href:"https://courses.testjavascript.com/p/node-js-javascript-testing-from-a-to-z"},"online course \ud83c\udf81 special launch offer")))}h.isMDXComponent=!0},5603:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/selective-unit-tests-b5303f3a425ab038c9aede3d14214abc.png"},5894:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/spectrum-of-testing-16da74a9b2c05eee95923f75e09bc713.png"},233:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/the-3-phases-06497437466da49c00ce842bb19d7a6d.jpeg"}}]);
\ No newline at end of file
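To illustrate the 'implementation details' trap that article #2 above warns about, here is a tiny, hypothetical Jest example; the `makeCounter` module is invented purely for this demonstration and is not from any of the articles:

```javascript
// A hypothetical counter module, invented only for this demonstration
function makeCounter() {
  return {
    _count: 0, // "private" by convention only
    increment() {
      this._count += 1;
    },
    value() {
      return this._count;
    },
  };
}

// ❌ Implementation-detail test: breaks on harmless refactors (false negative)
// and can keep passing even when the public behavior is broken (false positive)
test('increment bumps the internal _count field', () => {
  const counter = makeCounter();
  counter.increment();
  expect(counter._count).toBe(1);
});

// ✅ Behavior test: asserts only on the public contract
test('value() returns 1 after a single increment', () => {
  const counter = makeCounter();
  counter.increment();
  expect(counter.value()).toBe(1);
});
```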
diff --git a/assets/js/2bae3136.3b5cd56d.js b/assets/js/2bae3136.3b5cd56d.js
new file mode 100644
index 00000000..75b5e75b
--- /dev/null
+++ b/assets/js/2bae3136.3b5cd56d.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[5342],{6126:a=>{a.exports=JSON.parse('{"label":"domain","permalink":"/blog/tags/domain","allTagsPath":"/blog/tags","count":1}')}}]);
\ No newline at end of file
diff --git a/assets/js/2e5a46d8.8e4229de.js b/assets/js/2e5a46d8.8e4229de.js
new file mode 100644
index 00000000..891574a6
--- /dev/null
+++ b/assets/js/2e5a46d8.8e4229de.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[6872],{5093:t=>{t.exports=JSON.parse('{"permalink":"/blog/tags/unit","page":1,"postsPerPage":10,"totalPages":1,"totalCount":1,"blogDescription":"Blog","blogTitle":"Blog"}')}}]);
\ No newline at end of file
diff --git a/assets/js/2e8e3662.93591541.js b/assets/js/2e8e3662.93591541.js
new file mode 100644
index 00000000..d867df3c
--- /dev/null
+++ b/assets/js/2e8e3662.93591541.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[2618],{1510:a=>{a.exports=JSON.parse('{"label":"nock","permalink":"/blog/tags/nock","allTagsPath":"/blog/tags","count":1}')}}]);
\ No newline at end of file
diff --git a/assets/js/2fdff385.88afe582.js b/assets/js/2fdff385.88afe582.js
new file mode 100644
index 00000000..d7f7437d
--- /dev/null
+++ b/assets/js/2fdff385.88afe582.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[1833],{3621:s=>{s.exports=JSON.parse('{"label":"decisions","permalink":"/blog/tags/decisions","allTagsPath":"/blog/tags","count":1}')}}]);
\ No newline at end of file
diff --git a/assets/js/379b65ab.f26029b3.js b/assets/js/379b65ab.f26029b3.js
new file mode 100644
index 00000000..61b8d562
--- /dev/null
+++ b/assets/js/379b65ab.f26029b3.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[9022],{5680:(e,t,n)=>{n.d(t,{xA:()=>p,yg:()=>g});var r=n(6540);function a(e,t,n){return t in e?Object.defineProperty(e,t,{value:n,enumerable:!0,configurable:!0,writable:!0}):e[t]=n,e}function o(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,r)}return n}function s(e){for(var t=1;t=0||(a[n]=e[n]);return a}(e,t);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(a[n]=e[n])}return a}var l=r.createContext({}),c=function(e){var t=r.useContext(l),n=t;return e&&(n="function"==typeof e?e(t):s(s({},t),e)),n},p=function(e){var t=c(e.components);return r.createElement(l.Provider,{value:t},e.children)},h="mdxType",d={inlineCode:"code",wrapper:function(e){var t=e.children;return r.createElement(r.Fragment,{},t)}},u=r.forwardRef((function(e,t){var n=e.components,a=e.mdxType,o=e.originalType,l=e.parentName,p=i(e,["components","mdxType","originalType","parentName"]),h=c(n),u=a,g=h["".concat(l,".").concat(u)]||h[u]||d[u]||o;return n?r.createElement(g,s(s({ref:t},p),{},{components:n})):r.createElement(g,s({ref:t},p))}));function g(e,t){var n=arguments,a=t&&t.mdxType;if("string"==typeof e||a){var o=n.length,s=new Array(o);s[0]=u;var i={};for(var l in t)hasOwnProperty.call(t,l)&&(i[l]=t[l]);i.originalType=e,i[h]="string"==typeof e?e:a,s[1]=i;for(var c=2;c{n.r(t),n.d(t,{assets:()=>l,contentTitle:()=>s,default:()=>d,frontMatter:()=>o,metadata:()=>i,toc:()=>c});var r=n(8168),a=(n(6540),n(5680));const o={slug:"popular-nodejs-pattern-and-tools-to-reconsider",date:"2022-08-02T10:00",hide_table_of_contents:!0,title:"Popular Node.js patterns and tools to re-consider",authors:["goldbergyoni"],tags:["node.js","express","nestjs","fastify","passport","dotenv","supertest","practica","testing"]},s="Popular Node.js tools and patterns to re-consider",i={permalink:"/blog/popular-nodejs-pattern-and-tools-to-reconsider",editUrl:"https://github.com/practicajs/practica/tree/main/docs/blog/pattern-to-reconsider/index.md",source:"@site/blog/pattern-to-reconsider/index.md",title:"Popular Node.js patterns and tools to re-consider",description:"Node.js is maturing. Many patterns and frameworks were embraced - it's my belief that developers' productivity dramatically increased in the past years. One downside of maturity is habits - we now reuse existing techniques more often. 
How is this a problem?",date:"2022-08-02T10:00:00.000Z",formattedDate:"August 2, 2022",tags:[{label:"node.js",permalink:"/blog/tags/node-js"},{label:"express",permalink:"/blog/tags/express"},{label:"nestjs",permalink:"/blog/tags/nestjs"},{label:"fastify",permalink:"/blog/tags/fastify"},{label:"passport",permalink:"/blog/tags/passport"},{label:"dotenv",permalink:"/blog/tags/dotenv"},{label:"supertest",permalink:"/blog/tags/supertest"},{label:"practica",permalink:"/blog/tags/practica"},{label:"testing",permalink:"/blog/tags/testing"}],readingTime:21.09,hasTruncateMarker:!0,authors:[{name:"Yoni Goldberg",title:"Practica.js core maintainer",url:"https://github.com/goldbergyoni",imageURL:"https://github.com/goldbergyoni.png",key:"goldbergyoni"}],frontMatter:{slug:"popular-nodejs-pattern-and-tools-to-reconsider",date:"2022-08-02T10:00",hide_table_of_contents:!0,title:"Popular Node.js patterns and tools to re-consider",authors:["goldbergyoni"],tags:["node.js","express","nestjs","fastify","passport","dotenv","supertest","practica","testing"]},prevItem:{title:"Which Monorepo is right for a Node.js BACKEND\xa0now?",permalink:"/blog/monorepo-backend"},nextItem:{title:"Practica.js v0.0.1 is alive",permalink:"/blog/practica-is-alive"}},l={authorsImageUrls:[void 0]},c=[{value:"TOC - Patterns to reconsider",id:"toc---patterns-to-reconsider",level:2},{value:"1. Dotenv as your configuration source",id:"1-dotenv-as-your-configuration-source",level:2},{value:"2. Calling a 'fat' service from the API controller",id:"2-calling-a-fat-service-from-the-api-controller",level:2},{value:"3. Nest.js: Wire everything with dependency injection",id:"3-nestjs-wire-everything-with-dependency-injection",level:2},{value:"1 min pause: A word or two about me, the author",id:"1-min-pause-a-word-or-two-about-me-the-author",level:2},{value:"4. Passport.js for token authentication",id:"4-passportjs-for-token-authentication",level:2},{value:"5. Supertest for integration/API testing",id:"5-supertest-for-integrationapi-testing",level:2},{value:"6. Fastify decorate for non request/web utilities",id:"6-fastify-decorate-for-non-requestweb-utilities",level:2},{value:"7. Logging from a catch clause",id:"7-logging-from-a-catch-clause",level:2},{value:"8. Use Morgan logger for express web requests",id:"8-use-morgan-logger-for-express-web-requests",level:2},{value:"9. Having conditional code based on NODE_ENV value",id:"9-having-conditional-code-based-on-node_env-value",level:2},{value:"Closing",id:"closing",level:2},{value:"Some of my other articles",id:"some-of-my-other-articles",level:2}],p={toc:c},h="wrapper";function d(e){let{components:t,...o}=e;return(0,a.yg)(h,(0,r.A)({},p,o,{components:t,mdxType:"MDXLayout"}),(0,a.yg)("p",null,"Node.js is maturing. Many patterns and frameworks were embraced - it's my belief that developers' productivity dramatically increased in the past years. One downside of maturity is habits - we now reuse existing techniques more often. How is this a problem?"),(0,a.yg)("p",null,"In his novel book 'Atomic Habits' the author James Clear states that:"),(0,a.yg)("blockquote",null,(0,a.yg)("p",{parentName:"blockquote"},'"Mastery is created by habits. However, sometimes when we\'re on auto-pilot performing habits, we tend to slip up... Just being we are gaining experience through performing the habits does not mean that we are improving. We actually go backwards on the improvement scale with most habits that turn into auto-pilot". 
In other words, practice makes perfect, and bad practices make things worse')),(0,a.yg)("p",null,"We copy-paste mentally and physically things that we are used to, but these things are not necessarily right anymore. Like animals who shed their shells or skin to adapt to a new reality, so the Node.js community should constantly gauge its existing patterns, discuss and change"),(0,a.yg)("p",null,"Luckily, unlike other languages that are more committed to specific design paradigms (Java, Ruby) - Node is a house of many ideas. In this community, I feel safe to question some of our good-old tooling and patterns. The list below contains my personal beliefs, which are brought with reasoning and examples. "),(0,a.yg)("p",null,"Are those disruptive thoughts surely correct? I'm not sure. There is one thing I'm sure about though - For Node.js to live longer, we need to encourage criticism, focus our loyalty on innovation, and keep the discussion going. The outcome of this discussion is not \"don't use this tool!\" but rather becoming familiar with other techniques that, ",(0,a.yg)("em",{parentName:"p"},"under some circumstances")," might be a better fit"),(0,a.yg)("p",null,(0,a.yg)("img",{alt:"Animals and frameworks shed their skin",src:n(6738).A,width:"600",height:"400"})),(0,a.yg)("p",null,(0,a.yg)("em",{parentName:"p"},"The True Crab's exoskeleton is hard and inflexible; he must shed his restrictive exoskeleton to grow and reveal the new roomier shell")),(0,a.yg)("h2",{id:"toc---patterns-to-reconsider"},"TOC - Patterns to reconsider"),(0,a.yg)("ol",null,(0,a.yg)("li",{parentName:"ol"},"Dotenv"),(0,a.yg)("li",{parentName:"ol"},"Calling a service from a controller"),(0,a.yg)("li",{parentName:"ol"},"Nest.js dependency injection for all classes"),(0,a.yg)("li",{parentName:"ol"},"Passport.js"),(0,a.yg)("li",{parentName:"ol"},"Supertest"),(0,a.yg)("li",{parentName:"ol"},"Fastify utility decoration"),(0,a.yg)("li",{parentName:"ol"},"Logging from a catch clause"),(0,a.yg)("li",{parentName:"ol"},"Morgan logger"),(0,a.yg)("li",{parentName:"ol"},"NODE_ENV")),(0,a.yg)("h2",{id:"1-dotenv-as-your-configuration-source"},"1. Dotenv as your configuration source"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc81\u200d\u2642\ufe0f What is it about:")," A super popular technique in which the app configurable values (e.g., DB user name) are stored in a simple text file. Then, when the app loads, the dotenv library sets all the text file values as environment variables so the code can read this"),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-javascript"},"// .env file\nUSER_SERVICE_URL=https://users.myorg.com\n\n//start.js\nrequire('dotenv').config();\n\n//blog-post-service.js\nrepository.savePost(post);\n//update the user number of posts, read the users service URL from an environment variable\nawait axios.put(`${process.env.USER_SERVICE_URL}/api/user/${post.userId}/incrementPosts`)\n\n")),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udcca How popular:")," 21,806,137 downloads/week!"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83e\udd14 Why it might be wrong:")," Dotenv is so easy and intuitive to start with that one might easily overlook fundamental features: For example, it's hard to infer the configuration schema and realize the meaning of each key and its typing. 
Consequently, there is no built-in way to fail fast when a mandatory key is missing - a flow might fail after starting and presenting some side effects (e.g., DB records were already mutated before the failure). In the example above, the blog post will be saved to DB, and only then will the code realize that a mandatory key is missing - This leaves the app hanging in an invalid state. On top of this, in the presence of many keys, it's impossible to organize them hierarchically. If not enough, it encourages developers to commit this .env file which might contain production values - this happens because there is no clear way to define development defaults. Teams usually work around this by committing .env.example file and then asking whoever pulls code to rename this file manually. If they remember to of course"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\u2600\ufe0f Better alternative:")," Some configuration libraries provide out of the box solution to all of these needs. They encourage a clear schema and the possibility to validate early and fail if needed. See ",(0,a.yg)("a",{parentName:"p",href:"https://practica.dev/decisions/configuration-library"},"comparison of options here"),". One of the better alternatives is ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/mozilla/node-convict"},"'convict'"),", down below is the same example, this time with Convict, hopefully it's better now:"),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-javascript"},'// config.js\nexport default {\n userService: {\n url: {\n // Hierarchical, documented and strongly typed \ud83d\udc47\n doc: "The URL of the user management service including a trailing slash",\n format: "url",\n default: "http://localhost:4001",\n nullable: false,\n env: "USER_SERVICE_URL",\n },\n },\n //more keys here\n};\n\n//start.js\nimport convict from "convict";\nimport configSchema from "config";\nconvict(configSchema);\n// Fail fast!\nconvictConfigurationProvider.validate();\n\n//blog-post.js\nrepository.savePost(post);\n// Will never arrive here if the URL is not set\nawait axios.put(\n `${convict.get(userService.url)}/api/user/${post.userId}/incrementPosts`\n);\n')),(0,a.yg)("h2",{id:"2-calling-a-fat-service-from-the-api-controller"},"2. Calling a 'fat' service from the API controller"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc81\u200d\u2642\ufe0f What is it about:")," Consider a reader of our code who wishes to understand the entire ",(0,a.yg)("em",{parentName:"p"},"high-level")," flow or delve into a very ",(0,a.yg)("em",{parentName:"p"},"specific")," part. She first lands on the API controller, where requests start. Unlike what its name implies, this controller layer is just an adapter and kept really thin and straightforward. Great thus far. Then the controller calls a big 'service' with thousands of lines of code that represent the entire logic"),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-javascript"},"// user-controller\nrouter.post('/', async (req, res, next) => {\n await userService.add(req.body);\n // Might have here try-catch or error response logic\n}\n\n// user-service\nexports function add(newUser){\n // Want to understand quickly? 
Need to understand the entire user service, 1500 loc\n // It uses technical language and reuse narratives of other flows\n this.copyMoreFieldsToUser(newUser)\n const doesExist = this.updateIfAlreadyExists(newUser)\n if(!doesExist){\n addToCache(newUser);\n }\n // 20 more lines that demand navigating to other functions in order to get the intent\n}\n\n\n")),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udcca How popular:")," It's hard to pull solid numbers here, I could confidently say that in ",(0,a.yg)("em",{parentName:"p"},"most")," of the app that I see, this is the case"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83e\udd14 Why it might be wrong:")," We're here to tame complexities. One of the useful techniques is deferring a complexity to the later stage possible. In this case though, the reader of the code (hopefully) starts her journey through the tests and the controller - things are simple in these areas. Then, as she lands on the big service - she gets tons of complexity and small details, although she is focused on understanding the overall flow or some specific logic. This is ",(0,a.yg)("strong",{parentName:"p"},"unnecessary")," complexity"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\u2600\ufe0f Better alternative:")," The controller should call a particular type of service, a ",(0,a.yg)("strong",{parentName:"p"},"use-case")," , which is responsible for ",(0,a.yg)("em",{parentName:"p"},"summarizing")," the flow in a business and simple language. Each flow/feature is described using a use-case, each contains 4-10 lines of code, that tell the story without technical details. It mostly orchestrates other small services, clients, and repositories that hold all the implementation details. With use cases, the reader can grasp the high-level flow easily. She can now ",(0,a.yg)("strong",{parentName:"p"},"choose")," where she would like to focus. She is now exposed only to ",(0,a.yg)("strong",{parentName:"p"},"necessary")," complexity. This technique also encourages partitioning the code to the smaller object that the use-case orchestrates. Bonus: By looking at coverage reports, one can tell which features are covered, not just files/functions"),(0,a.yg)("p",null,"This idea by the way is formalized in the ",(0,a.yg)("a",{parentName:"p",href:"https://www.bookdepository.com/Clean-Architecture-Robert-Martin/9780134494166?redirected=true&utm_medium=Google&utm_campaign=Base1&utm_source=IL&utm_content=Clean-Architecture&selectCurrency=ILS&w=AFF9AU99ZB4MTDA8VTRQ&gclid=Cj0KCQjw3eeXBhD7ARIsAHjssr92kqLn60dnfQCLjbkaqttdgvhRV5dqKtnY680GCNDvKp-16HtZp24aAg6GEALw_wcB"},"'clean architecture' book")," - I'm not a big fan of 'fancy' architectures, but see - it's worth cherry-picking techniques from every source. You may walk-through our ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/practicajs/practica"},"Node.js best practices starter, practica.js"),", and examine the use-cases code"),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-javascript"},"// add-order-use-case.js\nexport async function addOrder(newOrder: addOrderDTO) {\n orderValidation.assertOrderIsValid(newOrder);\n const userWhoOrdered = await userServiceClient.getUserWhoOrdered(\n newOrder.userId\n );\n paymentTermsService.assertPaymentTerms(\n newOrder.paymentTermsInDays,\n userWhoOrdered.terms\n );\n\n const response = await orderRepository.addOrder(newOrder);\n\n return response;\n}\n")),(0,a.yg)("h2",{id:"3-nestjs-wire-everything-with-dependency-injection"},"3. 
Nest.js: Wire ",(0,a.yg)("em",{parentName:"h2"},"everything")," with dependency injection"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc81\u200d\u2642\ufe0f What is it about:")," If you're doing Nest.js, besides having a powerful framework in your hands, you probably use DI for ",(0,a.yg)("em",{parentName:"p"},"everything")," and make every class injectable. Say you have a weather-service that depends upon humidity-service, and ",(0,a.yg)("strong",{parentName:"p"},"there is no requirement to swap"),' the humidity-service with alternative providers. Nevertheless, you inject humidity-service into the weather-service. It becomes part of your development style, "why not" you think - I may need to stub it during testing or replace it in the future'),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-typescript"},"// humidity-service.ts - not customer facing\n@Injectable()\nexport class GoogleHumidityService {\n\n async getHumidity(when: Datetime): Promise {\n // Fetches from some specific cloud service\n }\n}\n\n// weather-service.ts - customer facing\nimport { GoogleHumidityService } from './humidity-service.ts';\n\nexport type weatherInfo{\n temperature: number,\n humidity: number\n}\n\nexport class WeatherService {\n constructor(private humidityService: GoogleHumidityService) {}\n\n async GetWeather(when: Datetime): Promise {\n // Fetch temperature from somewhere and then humidity from GoogleHumidityService\n }\n}\n\n// app.module.ts\n@Module({\n providers: [GoogleHumidityService, WeatherService],\n})\nexport class AppModule {}\n")),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udcca How popular:")," No numbers here but I could confidently say that in ",(0,a.yg)("em",{parentName:"p"},"all")," of the Nest.js app that I've seen, this is the case. In the popular ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/lujakob/nestjs-realworld-example-app"},"'nestjs-realworld-example-ap[p']("),") all the services are 'injectable'"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83e\udd14 Why it might be wrong:")," Dependency injection is not a priceless coding style but a pattern you should pull in the right moment, like any other pattern. Why? Because any pattern has a price. What price, you ask? First, encapsulation is violated. Clients of the weather-service are now aware that other providers are being used ",(0,a.yg)("em",{parentName:"p"},"internally"),". Some clients may get tempted to override providers also it's not under their responsibility. Second, it's another layer of complexity to learn, maintain, and one more way to shoot yourself in the legs. StackOverflow owes some of its revenues to Nest.js DI - plenty of discussions try to solve this puzzle (e.g. did you know that in case of circular dependencies the order of imports matters?). Third, there is the performance thing - Nest.js, for example struggled to provide a decent start time for serverless environments and had to introduce ",(0,a.yg)("a",{parentName:"p",href:"https://docs.nestjs.com/fundamentals/lazy-loading-modules"},"lazy loaded modules"),". Don't get me wrong, ",(0,a.yg)("strong",{parentName:"p"},"in some cases"),", there is a good case for DI: When a need arises to decouple a dependency from its caller, or to allow clients to inject custom implementations (e.g., the strategy pattern). 
",(0,a.yg)("strong",{parentName:"p"},"In such case"),", when there is a value, you may consider whether the ",(0,a.yg)("em",{parentName:"p"},"value of DI is worth its price"),". If you don't have this case, why pay for nothing?"),(0,a.yg)("p",null,"I recommend reading the first paragraphs of this blog post ",(0,a.yg)("a",{parentName:"p",href:"https://www.tonymarston.net/php-mysql/dependency-injection-is-evil.html"},"'Dependency Injection is EVIL'")," (and absolutely don't agree with this bold words)"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\u2600\ufe0f Better alternative:")," 'Lean-ify' your engineering approach - avoid using any tool unless it serves a real-world need immediately. Start simple, a dependent class should simply import its dependency and use it - Yeah, using the plain Node.js module system ('require'). Facing a situation when there is a need to factor dynamic objects? There are a handful of simple patterns, simpler than DI, that you should consider, like 'if/else', factory function, and more. Are singletons requested? Consider techniques with lower costs like the module system with factory function. Need to stub/mock for testing? Monkey patching might be better than DI: better clutter your test code a bit than clutter your production code. Have a strong need to hide from an object where its dependencies are coming from? You sure? Use DI!"),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-typescript"},'// humidity-service.ts - not customer facing\nexport async function getHumidity(when: Datetime): Promise {\n // Fetches from some specific cloud service\n}\n\n// weather-service.ts - customer facing\nimport { getHumidity } from "./humidity-service.ts";\n\n// \u2705 No wiring is happening externally, all is flat and explicit. Simple\nexport async function getWeather(when: Datetime): Promise {\n // Fetch temperature from somewhere and then humidity from GoogleHumidityService\n // Nobody needs to know about it, its an implementation details\n await getHumidity(when);\n}\n')),(0,a.yg)("hr",null),(0,a.yg)("h2",{id:"1-min-pause-a-word-or-two-about-me-the-author"},"1 min pause: A word or two about me, the author"),(0,a.yg)("p",null,"My name is Yoni Goldberg, I'm a Node.js developer and consultant. I wrote few code-books like ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/goldbergyoni/javascript-testing-best-practices"},"JavaScript testing best practices")," and ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/goldbergyoni/nodebestpractices"},"Node.js best practices")," (100,000 stars \u2728\ud83e\udd79). That said, my best guide is ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/testjavascript/nodejs-integration-tests-best-practices"},"Node.js testing practices")," which only few read \ud83d\ude1e. I shall release ",(0,a.yg)("a",{parentName:"p",href:"https://testjavascript.com/"},"an advanced Node.js testing course soon")," and also hold workshops for teams. I'm also a core maintainer of ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/practicajs/practica"},"Practica.js")," which is a Node.js starter that creates a production-ready example Node Monorepo solution that is based on the standards and simplicity. It might be your primary option when starting a new Node.js solution"),(0,a.yg)("hr",null),(0,a.yg)("h2",{id:"4-passportjs-for-token-authentication"},"4. 
Passport.js for token authentication"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc81\u200d\u2642\ufe0f What is it about:")," Commonly, you need to issue and/or authenticate JWT tokens. Similarly, you might need to allow login from ",(0,a.yg)("em",{parentName:"p"},"one")," single social network like Google/Facebook. When faced with these kinds of needs, Node.js developers rush to the glorious library ",(0,a.yg)("a",{parentName:"p",href:"https://www.passportjs.org/"},"Passport.js")," as butterflies are attracted to light"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udcca How popular:")," 1,389,720 weekly downloads"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83e\udd14 Why it might be wrong:")," When tasked with guarding your routes with a JWT token - you're just a few lines of code shy of achieving the goal. Instead of messing with a new framework, instead of introducing levels of indirection (you call passport, then it calls you), instead of spending time learning new abstractions - use a JWT library directly. Libraries like ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/auth0/node-jsonwebtoken"},"jsonwebtoken")," or ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/nearform/fast-jwt"},"fast-jwt")," are simple and well maintained. Have concerns about security hardening? Good point, your concerns are valid. But would you not get better hardening with a direct understanding of your configuration and flow? Will hiding things behind a framework help? Even if you prefer the hardening of a battle-tested framework, Passport doesn't handle a handful of security risks like secret/token management, secure user management, DB protection, and more. My point: you probably need a fully-featured user and authentication management platform anyway. Various cloud services and OSS projects can tick all of those security boxes. Why then start in the first place with a framework that doesn't satisfy your security needs? It seems like many who opt for Passport.js are not fully aware of which needs are satisfied and which are left open. All of that said, Passport definitely shines when looking for a quick way to support ",(0,a.yg)("em",{parentName:"p"},"many")," social login providers"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\u2600\ufe0f Better alternative:")," Is token authentication in order? These few lines of code below might be all you need. You may also glimpse into ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/practicajs/practica/tree/main/src/code-templates/libraries/jwt-token-verifier"},"Practica.js wrapper around these libraries"),". A real-world project at scale typically needs more: supporting async JWT ",(0,a.yg)("a",{parentName:"p",href:"https://auth0.com/docs/secure/tokens/json-web-tokens/json-web-key-sets"},"(JWKS)")," and securely managing and rotating the secrets, to name a few examples. 
In these cases, OSS solutions like ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/keycloak/keycloak"},"Keycloak")," or commercial options like ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/auth0"},"Auth0")," are alternatives to consider"),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-javascript"},"// jwt-middleware.js, a simplified version - Refer to Practica.js to see some more corner cases\nconst jwt = require('jsonwebtoken');\n\nconst middleware = (req, res, next) => {\n if(!req.headers.authorization){\n return res.sendStatus(401);\n }\n\n jwt.verify(req.headers.authorization, options.secret, (err, jwtContent) => {\n if (err) {\n return res.sendStatus(401);\n }\n\n req.user = jwtContent.data;\n\n next();\n });\n};\n")),(0,a.yg)("h2",{id:"5-supertest-for-integrationapi-testing"},"5. Supertest for integration/API testing"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc81\u200d\u2642\ufe0f What is it about:")," When testing against an API (i.e., component, integration, E2E tests), the library ",(0,a.yg)("a",{parentName:"p",href:"https://www.npmjs.com/package/supertest"},"supertest")," provides a sweet syntax that can both detect the web server address, make HTTP calls and also assert on the response. Three in one"),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-javascript"},'test("When adding invalid user, then the response is 400", (done) => {\n const request = require("supertest");\n const express = require("express");\n const app = express();\n // Arrange\n const userToAdd = {\n name: undefined,\n };\n\n // Act\n request(app)\n .post("/user")\n .send(userToAdd)\n .expect("Content-Type", /json/)\n .expect(400, done);\n\n // Assert\n // We already asserted above \u261d\ud83c\udffb as part of the request\n});\n')),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udcca How popular:")," 2,717,744 weekly downloads"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83e\udd14 Why it might be wrong:")," You already have your assertion library (Jest? Chai?), it has great error highlighting and comparison - you trust it. Why code some tests using another assertion syntax? Not to mention, Supertest's assertion errors are not as descriptive as Jest's and Chai's. It's also cumbersome to mix an HTTP client + assertion library instead of choosing the best for each mission. Speaking of the best, there are more standard, popular, and better-maintained HTTP clients (like fetch, axios and other friends). Need another reason? Supertest might encourage coupling the tests to Express as it offers a constructor that gets an Express object. This constructor infers the API address automatically (useful when using dynamic test ports). This couples the test to the implementation and won't work in the case where you wish to run the same tests against a remote process (the API doesn't live with the tests). My repository ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/testjavascript/nodejs-integration-tests-best-practices"},"'Node.js testing best practices'")," holds examples of how tests can infer the API port and address"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\u2600\ufe0f Better alternative:")," A popular and standard HTTP client library like Node.js Fetch or Axios. In ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/practicajs/practica"},"Practica.js")," (a Node.js starter that packs many best practices) we use Axios. It allows us to configure an HTTP client that is shared among all the tests: We bake in a JWT token, headers, and a base URL. 
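"),(0,a.yg)("p",null,"A minimal sketch of such a shared client - assuming a hypothetical apiPort that is inferred during test setup and a pre-signed testToken:"),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-javascript"},"// test-setup.js - a sketch of one Axios instance that all tests share\nconst axios = require('axios');\n\nconst apiClient = axios.create({\n baseURL: `http://localhost:${apiPort}`, // apiPort is assumed to be inferred at setup time\n headers: { Authorization: `Bearer ${testToken}` }, // a pre-baked JWT token\n validateStatus: () => true, // never throw - let each test assert on the status code\n});\n\nmodule.exports = { apiClient };\n")),(0,a.yg)("p",null,"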
Another good pattern worth looking at is making each Microservice generate an HTTP client library for its consumers. This brings a strongly-typed experience to the clients, synchronizes the provider-consumer versions and, as a bonus - the provider can test itself with the same library that its consumers are using"),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-javascript"},'test("When adding invalid user, then the response is 400 and includes a reason", async () => {\n // The API under test is assumed to be running and listening on apiPort\n // Arrange\n const userToAdd = {\n name: undefined,\n };\n\n // Act\n const receivedResponse = await axios.post(\n `http://localhost:${apiPort}/user`,\n userToAdd,\n { validateStatus: () => true } // don\'t throw on 4xx, assert on it below\n );\n\n // Assert\n // \u2705 Assertion happens in a dedicated stage and a dedicated library\n expect(receivedResponse).toMatchObject({\n status: 400,\n data: {\n reason: "no-name",\n },\n });\n});\n')),(0,a.yg)("h2",{id:"6-fastify-decorate-for-non-requestweb-utilities"},"6. Fastify decorate for non request/web utilities"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc81\u200d\u2642\ufe0f What is it about:")," ",(0,a.yg)("a",{parentName:"p",href:"https://github.com/fastify/fastify"},"Fastify")," introduces great patterns. Personally, I highly appreciate how it preserves the simplicity of Express while bringing more batteries. One thing that got me wondering is the 'decorate' feature which allows placing common utilities/services inside a widely accessible container object. I'm referring here specifically to the case where a cross-cutting concern utility/service is being used. Here is an example:"),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-javascript"},"// An example of a utility that is cross-cutting-concern. Could be logger or anything else\nfastify.decorate('metricsService', {\n fireMetric: ({ name }) => {\n // My code that sends metrics to the monitoring system\n },\n})\n\nfastify.get('/api/orders', async function (request, reply) {\n this.metricsService.fireMetric({name: 'new-request'})\n // Handle the request\n})\n\n// my-business-logic.js\nexport function calculateSomething(){\n // How to fire a metric?\n}\n")),(0,a.yg)("p",null,"It should be noted that 'decoration' is also used to place values (e.g., user) inside a request - this is a slightly different case and a sensible one"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udcca How popular:")," Fastify has 696,122 weekly downloads and is growing rapidly. The decorator concept is part of the framework's core"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83e\udd14 Why it might be wrong:")," Some services and utilities serve cross-cutting-concern needs and should be accessible from other layers like domain (i.e., business logic, DAL). When placing utilities inside this object, the Fastify object might not be accessible to these layers. You probably don't want to couple your web framework with your business logic: Consider that some of your business logic and repositories might get invoked from non-REST clients like CRON, MQ, and similar - In these cases, Fastify won't get involved at all so better not to trust it to be your service locator"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\u2600\ufe0f Better alternative:")," A good old Node.js module is a standard way to expose and consume functionality. Need a singleton? Use the module system caching. Need to instantiate a service in correlation with a Fastify life-cycle hook (e.g., DB connection on start)? Call it from that Fastify hook. 
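"),(0,a.yg)("p",null,"A minimal sketch of this hook-based initialization - assuming a plain metrics module with a hypothetical init function:"),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-javascript"},"// metrics-service.js - a plain module; init is a hypothetical setup function\nexport async function init() {\n // e.g., open a connection to the monitoring system\n}\n\n// server.js\nimport { init } from './metrics-service.js';\n\n// \u2705 The module is initialized from the Fastify life-cycle hook, not located through it\nfastify.addHook('onReady', async () => {\n await init();\n});\n")),(0,a.yg)("p",null,"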
In the rare case where a highly dynamic and complex instantiation of dependencies is needed - DI is also a (complex) option to consider"),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-javascript"},"// \u2705 A simple usage of good old Node.js modules\n// metrics-service.js\n\nexport async function fireMetric({ name }){\n // My code that sends metrics to the monitoring system\n}\n\n// some-route.js\nimport {fireMetric} from './metrics-service.js'\n\nfastify.get('/api/orders', async function (request, reply) {\n fireMetric({name: 'new-request'})\n})\n\n// my-business-logic.js\nimport {fireMetric} from './metrics-service.js'\n\nexport function calculateSomething(){\n fireMetric({name: 'new-request'})\n}\n")),(0,a.yg)("h2",{id:"7-logging-from-a-catch-clause"},"7. Logging from a catch clause"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc81\u200d\u2642\ufe0f What is it about:")," You catch an error somewhere deep in the code (not on the route level), then call logger.error to make this error observable. Seems simple and necessary"),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-javascript"},"try{\n await axios.post('https://thatService.io/api/users');\n}\ncatch(error){\n logger.error(error, this, {operation: 'addNewOrder'});\n}\n")),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udcca How popular:")," Hard to put my hands on numbers but it's quite popular, right?"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83e\udd14 Why it might be wrong:")," First, errors should get handled/logged in a central location. Error handling is a critical path. Various catch clauses are likely to behave differently without a centralized and unified behavior. For example, a requirement might arise to tag all errors with certain metadata, or on top of logging, to also fire a monitoring metric. Applying these requirements in ~100 locations is not a walk in the park. Second, catch clauses should be minimized to particular scenarios. By default, the natural flow of an error is bubbling up to the route/entry-point - from there, it will get forwarded to the error handler. Catch clauses are more verbose and error-prone - therefore they should serve two very specific needs: When one wishes to change the flow based on the error or enrich the error with more information (which is not the case in this example)"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\u2600\ufe0f Better alternative:")," By default, let the error bubble up through the layers and get caught by the entry-point global catch (e.g., Express error middleware). In cases where the error should trigger a different flow (e.g., retry) or there is value in enriching the error with more context - use a catch clause. In this case, ensure the .catch code also reports to the error handler"),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-javascript"},"// A case where we wish to retry upon failure\ntry{\n await axios.post('https://thatService.io/api/users');\n}\ncatch(error){\n // \u2705 A central location that handles errors\n errorHandler.handle(error, this, {operation: 'addNewOrder'});\n callTheUserService(numOfRetries++);\n}\n")),(0,a.yg)("h2",{id:"8-use-morgan-logger-for-express-web-requests"},"8. 
Use Morgan logger for express web requests"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc81\u200d\u2642\ufe0f What is it about:")," In many web apps, you are likely to find a pattern that has been copy-pasted for ages - using the Morgan logger to log request information:"),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-javascript"},'const express = require("express");\nconst morgan = require("morgan");\n\nconst app = express();\n\napp.use(morgan("combined"));\n')),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udcca How popular:")," 2,901,574 downloads/week"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83e\udd14 Why it might be wrong:")," Wait a second, you already have your main logger, right? Is it Pino? Winston? Something else? Great. Why deal with and configure yet another logger? I do appreciate the HTTP domain-specific language (DSL) of Morgan. The syntax is sweet! But does it justify having two loggers?"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\u2600\ufe0f Better alternative:")," Put your chosen logger in a middleware and log the desired request/response properties:"),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-javascript"},'// \u2705 Use your preferred logger for all the tasks\nconst logger = require("pino")();\napp.use((req, res, next) => {\n res.on("finish", () => {\n logger.info(`${req.url} ${res.statusCode}`); // Add other properties here\n });\n next();\n});\n')),(0,a.yg)("h2",{id:"9-having-conditional-code-based-on-node_env-value"},"9. Having conditional code based on ",(0,a.yg)("inlineCode",{parentName:"h2"},"NODE_ENV")," value"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udc81\u200d\u2642\ufe0f What is it about:"),' To differentiate between development vs production configuration, it\'s common to set the environment variable NODE_ENV with "production|test". Doing so allows the various tooling to act differently. For example, some templating engines will cache compiled templates only in production. Beyond tooling, custom applications use this to specify behaviours that are unique to the development or production environment:'),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-javascript"},'if (process.env.NODE_ENV === "production") {\n // This is unlikely to be tested since test runners usually set NODE_ENV=test\n setLogger({ stdout: true, prettyPrint: false });\n // If this code branch above exists, why not add more production-only configurations:\n collectMetrics();\n} else {\n setLogger({ splunk: true, prettyPrint: true });\n}\n')),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83d\udcca How popular:"),' 5,034,323 code results on GitHub when searching for "NODE_ENV". It doesn\'t seem like a rare pattern'),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\ud83e\udd14 Why it might be wrong:")," Anytime your code checks whether it's production or not, this branch won't get hit by default in some test runners (e.g., Jest sets ",(0,a.yg)("inlineCode",{parentName:"p"},"NODE_ENV=test"),"). In ",(0,a.yg)("em",{parentName:"p"},"any")," test runner, the developer must remember to test for each possible value of this environment variable. In the example above, ",(0,a.yg)("inlineCode",{parentName:"p"},"collectMetrics()")," will be tested for the first time in production. Sad smiley. 
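"),(0,a.yg)("p",null,"To feel the mental load, here is a sketch (assuming Jest and a hypothetical bootstrap entry point) of the ceremony needed to cover that production-only branch:"),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-javascript"},'test("When NODE_ENV is production, then metrics are collected", () => {\n const originalEnv = process.env.NODE_ENV;\n process.env.NODE_ENV = "production";\n jest.resetModules(); // re-evaluate modules that read NODE_ENV on load\n const { bootstrap } = require("./app"); // a hypothetical entry point\n bootstrap();\n // assert here that collectMetrics was invoked...\n process.env.NODE_ENV = originalEnv;\n});\n')),(0,a.yg)("p",null,"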
Additionally, putting these conditions opens the door to adding more differences between production and the developer machine - when this variable and its conditions exist, a developer gets tempted to put some logic for production only. Theoretically, this can be tested: one can set ",(0,a.yg)("inlineCode",{parentName:"p"},'NODE_ENV = "production"')," in testing and cover the production branches as sketched above (if she remembers...). But then, if you can test with ",(0,a.yg)("inlineCode",{parentName:"p"},"NODE_ENV='production'"),", what's the point in separating? Just consider everything to be 'production' and avoid this error-prone mental load"),(0,a.yg)("p",null,(0,a.yg)("strong",{parentName:"p"},"\u2600\ufe0f Better alternative:")," Any code that was written by us must be tested. This implies avoiding any form of if(production)/else(development) conditions. Wouldn't developer machines anyway have different surrounding infrastructure than production (e.g., the logging system)? They do, the environments are quite different, but we feel comfortable with that. These infrastructural things are battle-tested, external, and not part of our code. To keep the same code between dev/prod and still use different infrastructure - we put different values in the configuration (not in the code). For example, a typical logger emits JSON in production but in a development machine it emits 'pretty-print' colorful lines. To meet this, we set an environment variable that tells which logging style we aim for:"),(0,a.yg)("pre",null,(0,a.yg)("code",{parentName:"pre",className:"language-javascript"},'//package.json\n"scripts": {\n "start": "LOG_PRETTY_PRINT=false node index.js",\n "test": "LOG_PRETTY_PRINT=true jest"\n}\n\n//index.js\n//\u2705 No condition, same code for all the environments. The variations are defined externally in config or deployment files\nsetLogger({ prettyPrint: process.env.LOG_PRETTY_PRINT === "true" }) // env vars are strings, compare explicitly\n')),(0,a.yg)("h2",{id:"closing"},"Closing"),(0,a.yg)("p",null,"I hope that these thoughts, at least one of them, made you re-consider adding a new technique to your toolbox. In any case, let's keep our community vibrant, disruptive and kind. Respectful discussions are almost as important as the event loop. Almost."),(0,a.yg)("h2",{id:"some-of-my-other-articles"},"Some of my other articles"),(0,a.yg)("ul",null,(0,a.yg)("li",{parentName:"ul"},(0,a.yg)("a",{parentName:"li",href:"https://github.com/testjavascript/nodejs-integration-tests-best-practices"},"Book: Node.js testing best practices")),(0,a.yg)("li",{parentName:"ul"},(0,a.yg)("a",{parentName:"li",href:"https://github.com/goldbergyoni/javascript-testing-best-practices"},"Book: JavaScript testing best practices")),(0,a.yg)("li",{parentName:"ul"},(0,a.yg)("a",{parentName:"li",href:"https://yonigoldberg.medium.com/20-ways-to-become-a-better-node-js-developer-in-2020-d6bd73fcf424"},"How to be a better Node.js developer in 2020"),". The 2023 version is coming soon"),(0,a.yg)("li",{parentName:"ul"},(0,a.yg)("a",{parentName:"li",href:"https://github.com/practicajs/practica"},"Practica.js - A Node.js starter")),(0,a.yg)("li",{parentName:"ul"},(0,a.yg)("a",{parentName:"li",href:"https://github.com/goldbergyoni/nodebestpractices"},"Node.js best practices"))))}d.isMDXComponent=!0},6738:(e,t,n)=>{n.d(t,{A:()=>r});const r=n.p+"assets/images/crab-161f2b8e5ab129c2a175920691a845c0.webp"}}]);
\ No newline at end of file
diff --git a/assets/js/39bbf0fd.5d67b8c0.js b/assets/js/39bbf0fd.5d67b8c0.js
new file mode 100644
index 00000000..0b515880
--- /dev/null
+++ b/assets/js/39bbf0fd.5d67b8c0.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[4903],{5680:(e,t,a)=>{a.d(t,{xA:()=>p,yg:()=>h});var r=a(6540);function i(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function n(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,r)}return a}function o(e){for(var t=1;t=0||(i[a]=e[a]);return i}(e,t);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(i[a]=e[a])}return i}var s=r.createContext({}),c=function(e){var t=r.useContext(s),a=t;return e&&(a="function"==typeof e?e(t):o(o({},t),e)),a},p=function(e){var t=c(e.components);return r.createElement(s.Provider,{value:t},e.children)},u="mdxType",d={inlineCode:"code",wrapper:function(e){var t=e.children;return r.createElement(r.Fragment,{},t)}},m=r.forwardRef((function(e,t){var a=e.components,i=e.mdxType,n=e.originalType,s=e.parentName,p=l(e,["components","mdxType","originalType","parentName"]),u=c(a),m=i,h=u["".concat(s,".").concat(m)]||u[m]||d[m]||n;return a?r.createElement(h,o(o({ref:t},p),{},{components:a})):r.createElement(h,o({ref:t},p))}));function h(e,t){var a=arguments,i=t&&t.mdxType;if("string"==typeof e||i){var n=a.length,o=new Array(n);o[0]=m;var l={};for(var s in t)hasOwnProperty.call(t,s)&&(l[s]=t[s]);l.originalType=e,l[u]="string"==typeof e?e:i,o[1]=l;for(var c=2;c{a.r(t),a.d(t,{assets:()=>s,contentTitle:()=>o,default:()=>d,frontMatter:()=>n,metadata:()=>l,toc:()=>c});var r=a(8168),i=(a(6540),a(5680));const n={slug:"practica-v0.0.6-is-alive",date:"2022-12-10T10:00",hide_table_of_contents:!0,title:"Practica v0.0.6 is alive",authors:["goldbergyoni","razluvaton","danielgluskin","michaelsalomon"],tags:["node.js","express","practica","prisma"]},o=void 0,l={permalink:"/blog/practica-v0.0.6-is-alive",editUrl:"https://github.com/practicajs/practica/tree/main/docs/blog/v0.6-is-alive/index.md",source:"@site/blog/v0.6-is-alive/index.md",title:"Practica v0.0.6 is alive",description:"Where is our focus now?",date:"2022-12-10T10:00:00.000Z",formattedDate:"December 10, 2022",tags:[{label:"node.js",permalink:"/blog/tags/node-js"},{label:"express",permalink:"/blog/tags/express"},{label:"practica",permalink:"/blog/tags/practica"},{label:"prisma",permalink:"/blog/tags/prisma"}],readingTime:1.47,hasTruncateMarker:!1,authors:[{name:"Yoni Goldberg",title:"Practica.js core maintainer",url:"https://github.com/goldbergyoni",imageURL:"https://github.com/goldbergyoni.png",key:"goldbergyoni"},{name:"Raz Luvaton",title:"Practica.js core maintainer",url:"https://github.com/rluvaton",imageURL:"https://avatars.githubusercontent.com/u/16746759?v=4",key:"razluvaton"},{name:"Daniel Gluskin",title:"Practica.js core maintainer",url:"https://github.com/DanielGluskin",imageURL:"https://avatars.githubusercontent.com/u/17989958?v=4",key:"danielgluskin"},{name:"Michael Salomon",title:"Practica.js core maintainer",url:"https://github.com/mikicho",imageURL:"https://avatars.githubusercontent.com/u/11459632?v=4",key:"michaelsalomon"}],frontMatter:{slug:"practica-v0.0.6-is-alive",date:"2022-12-10T10:00",hide_table_of_contents:!0,title:"Practica v0.0.6 is alive",authors:["goldbergyoni","razluvaton","danielgluskin","michaelsalomon"],tags:["node.js","express","practica","prisma"]},prevItem:{title:"Testing the dark scenarios of your Node.js 
application",permalink:"/blog/testing-the-dark-scenarios-of-your-nodejs-application"},nextItem:{title:"Is Prisma better than your 'traditional' ORM?",permalink:"/blog/is-prisma-better-than-your-traditional-orm"}},s={authorsImageUrls:[void 0,void 0,void 0,void 0]},c=[{value:"Where is our focus now?",id:"where-is-our-focus-now",level:2},{value:"What's new?",id:"whats-new",level:2},{value:"Request-level store",id:"request-level-store",level:3},{value:"Hardened .dockerfile",id:"hardened-dockerfile",level:3},{value:"Additional ORM option: Prisma",id:"additional-orm-option-prisma",level:3},{value:"Many small enhancements",id:"many-small-enhancements",level:3},{value:"Where do I start?",id:"where-do-i-start",level:2}],p={toc:c},u="wrapper";function d(e){let{components:t,...a}=e;return(0,i.yg)(u,(0,r.A)({},p,a,{components:t,mdxType:"MDXLayout"}),(0,i.yg)("h2",{id:"where-is-our-focus-now"},"Where is our focus now?"),(0,i.yg)("p",null,"We work in two parallel paths: enriching the supported best practices to make the code more production ready and at the same time enhance the existing code based off the community feedback"),(0,i.yg)("h2",{id:"whats-new"},"What's new?"),(0,i.yg)("h3",{id:"request-level-store"},"Request-level store"),(0,i.yg)("p",null,"Every request now has its own store of variables, you may assign information on the request-level so every code which was called from this specific request has access to these variables. For example, for storing the user permissions. One special variable that is stored is 'request-id' which is a unique UUID per request (also called correlation-id). The logger automatically will emit this to every log entry. We use the built-in ",(0,i.yg)("a",{parentName:"p",href:"https://nodejs.org/api/async_context.html"},"AsyncLocal")," for this task"),(0,i.yg)("h3",{id:"hardened-dockerfile"},"Hardened .dockerfile"),(0,i.yg)("p",null,"Although a Dockerfile may contain 10 lines, it easy and common to include 20 mistakes in these short artifact. For example, commonly npmrc secrets are leaked, usage of vulnerable base image and other typical mistakes. Our .Dockerfile follows the best practices from ",(0,i.yg)("a",{parentName:"p",href:"https://snyk.io/blog/10-best-practices-to-containerize-nodejs-web-applications-with-docker/"},"this article")," and already apply 90% of the guidelines"),(0,i.yg)("h3",{id:"additional-orm-option-prisma"},"Additional ORM option: Prisma"),(0,i.yg)("p",null,"Prisma is an emerging ORM with great type safe support and awesome DX. We will keep Sequelize as our default ORM while Prisma will be an optional choice using the flag: --orm=prisma"),(0,i.yg)("p",null,"Why did we add it to our tools basket and why Sequelize is still the default? We summarized all of our thoughts and data in this ",(0,i.yg)("a",{parentName:"p",href:"https://practica.dev/blog/is-prisma-better-than-your-traditional-orm/"},"blog post")),(0,i.yg)("h3",{id:"many-small-enhancements"},"Many small enhancements"),(0,i.yg)("p",null,"More than 10 PR were merged with CLI experience improvements, bug fixes, code patterns enhancements and more"),(0,i.yg)("h2",{id:"where-do-i-start"},"Where do I start?"),(0,i.yg)("p",null,"Definitely follow the ",(0,i.yg)("a",{parentName:"p",href:"https://practica.dev/the-basics/getting-started-quickly"},"getting started guide first")," and then read the guide ",(0,i.yg)("a",{parentName:"p",href:"https://practica.dev/the-basics/coding-with-practica"},"coding with practica")," to realize its full power and genuine value. 
We would be thankful to receive your feedback"))}d.isMDXComponent=!0}}]);
\ No newline at end of file
diff --git a/assets/js/3a5322a7.2450d04b.js b/assets/js/3a5322a7.2450d04b.js
new file mode 100644
index 00000000..a8cbb20e
--- /dev/null
+++ b/assets/js/3a5322a7.2450d04b.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[5715],{818:e=>{e.exports=JSON.parse('{"permalink":"/blog/tags/node-js","page":1,"postsPerPage":10,"totalPages":1,"totalCount":7,"blogDescription":"Blog","blogTitle":"Blog"}')}}]);
\ No newline at end of file
diff --git a/assets/js/3aded9a5.a2037195.js b/assets/js/3aded9a5.a2037195.js
new file mode 100644
index 00000000..a1e695e6
--- /dev/null
+++ b/assets/js/3aded9a5.a2037195.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[361],{5680:(e,t,a)=>{a.d(t,{xA:()=>p,yg:()=>g});var r=a(6540);function n(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function i(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,r)}return a}function o(e){for(var t=1;t=0||(n[a]=e[a]);return n}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(n[a]=e[a])}return n}var l=r.createContext({}),c=function(e){var t=r.useContext(l),a=t;return e&&(a="function"==typeof e?e(t):o(o({},t),e)),a},p=function(e){var t=c(e.components);return r.createElement(l.Provider,{value:t},e.children)},h="mdxType",d={inlineCode:"code",wrapper:function(e){var t=e.children;return r.createElement(r.Fragment,{},t)}},u=r.forwardRef((function(e,t){var a=e.components,n=e.mdxType,i=e.originalType,l=e.parentName,p=s(e,["components","mdxType","originalType","parentName"]),h=c(a),u=n,g=h["".concat(l,".").concat(u)]||h[u]||d[u]||i;return a?r.createElement(g,o(o({ref:t},p),{},{components:a})):r.createElement(g,o({ref:t},p))}));function g(e,t){var a=arguments,n=t&&t.mdxType;if("string"==typeof e||n){var i=a.length,o=new Array(i);o[0]=u;var s={};for(var l in t)hasOwnProperty.call(t,l)&&(s[l]=t[l]);s.originalType=e,s[h]="string"==typeof e?e:n,o[1]=s;for(var c=2;c{a.r(t),a.d(t,{assets:()=>l,contentTitle:()=>o,default:()=>d,frontMatter:()=>i,metadata:()=>s,toc:()=>c});var r=a(8168),n=(a(6540),a(5680));const i={slug:"is-prisma-better-than-your-traditional-orm",date:"2022-12-07T11:00",hide_table_of_contents:!0,title:"Is Prisma better than your 'traditional' ORM?",authors:["goldbergyoni"],tags:["node.js","express","nestjs","fastify","passport","dotenv","supertest","practica","testing"]},o=void 0,s={permalink:"/blog/is-prisma-better-than-your-traditional-orm",editUrl:"https://github.com/practicajs/practica/tree/main/docs/blog/is-prisma-better/index.md",source:"@site/blog/is-prisma-better/index.md",title:"Is Prisma better than your 'traditional' ORM?",description:"Intro - Why discuss yet another ORM (or the man who had a stain on his fancy suite)?",date:"2022-12-07T11:00:00.000Z",formattedDate:"December 7, 2022",tags:[{label:"node.js",permalink:"/blog/tags/node-js"},{label:"express",permalink:"/blog/tags/express"},{label:"nestjs",permalink:"/blog/tags/nestjs"},{label:"fastify",permalink:"/blog/tags/fastify"},{label:"passport",permalink:"/blog/tags/passport"},{label:"dotenv",permalink:"/blog/tags/dotenv"},{label:"supertest",permalink:"/blog/tags/supertest"},{label:"practica",permalink:"/blog/tags/practica"},{label:"testing",permalink:"/blog/tags/testing"}],readingTime:23.875,hasTruncateMarker:!0,authors:[{name:"Yoni Goldberg",title:"Practica.js core maintainer",url:"https://github.com/goldbergyoni",imageURL:"https://github.com/goldbergyoni.png",key:"goldbergyoni"}],frontMatter:{slug:"is-prisma-better-than-your-traditional-orm",date:"2022-12-07T11:00",hide_table_of_contents:!0,title:"Is Prisma better than your 'traditional' ORM?",authors:["goldbergyoni"],tags:["node.js","express","nestjs","fastify","passport","dotenv","supertest","practica","testing"]},prevItem:{title:"Practica v0.0.6 is alive",permalink:"/blog/practica-v0.0.6-is-alive"},nextItem:{title:"Which Monorepo is right for a Node.js 
BACKEND\xa0now?",permalink:"/blog/monorepo-backend"}},l={authorsImageUrls:[void 0]},c=[{value:"Intro - Why discuss yet another ORM (or the man who had a stain on his fancy suite)?",id:"intro---why-discuss-yet-another-orm-or-the-man-who-had-a-stain-on-his-fancy-suite",level:2}],p={toc:c},h="wrapper";function d(e){let{components:t,...i}=e;return(0,n.yg)(h,(0,r.A)({},p,i,{components:t,mdxType:"MDXLayout"}),(0,n.yg)("h2",{id:"intro---why-discuss-yet-another-orm-or-the-man-who-had-a-stain-on-his-fancy-suite"},"Intro - Why discuss yet another ORM (or the man who had a stain on his fancy suite)?"),(0,n.yg)("p",null,(0,n.yg)("em",{parentName:"p"},"Betteridge's law of headlines suggests that a 'headline that ends in a question mark can be answered by the word NO'. Will this article follow this rule?")),(0,n.yg)("p",null,"Imagine an elegant businessman (or woman) walking into a building, wearing a fancy tuxedo and a luxury watch wrapped around his palm. He smiles and waves all over to say hello while people around are starring admirably. You get a little closer, then shockingly, while standing nearby it's hard ignore a bold a dark stain over his white shirt. What a dissonance, suddenly all of that glamour is stained"),(0,n.yg)("p",null,(0,n.yg)("img",{alt:"Suite with stain",src:a(7424).A,width:"652",height:"489"})),(0,n.yg)("p",null,' Like this businessman, Node is highly capable and popular, and yet, in certain areas, its offering basket is stained with inferior offerings. One of these areas is the ORM space, "I wish we had something like (Java) hibernate or (.NET) Entity Framework" are common words being heard by Node developers. What about existing mature ORMs like TypeORM and Sequelize? We owe so much to these maintainers, and yet, the produced developer experience, the level of maintenance - just don\'t feel delightful, some may say even mediocre. At least so I believed ',(0,n.yg)("em",{parentName:"p"},"before")," writing this article..."),(0,n.yg)("p",null,"From time to time, a shiny new ORM is launched, and there is hope. Then soon it's realized that these new emerging projects are more of the same, if they survive. Until one day, Prisma ORM arrived surrounded with glamour: It's gaining tons of attention all over, producing fantastic content, being used by respectful frameworks and... raised 40,000,000$ (40 million) to build the next generation ORM - Is it the 'Ferrari' ORM we've been waiting for? Is it a game changer? If you're are the 'no ORM for me' type, will this one make you convert your religion?"),(0,n.yg)("p",null,"In ",(0,n.yg)("a",{parentName:"p",href:"https://github.com/practicajs/practica"},"Practica.js")," (the Node.js starter based off ",(0,n.yg)("a",{parentName:"p",href:"https://github.com/goldbergyoni/nodebestpractices"},"Node.js best practices with 83,000 stars"),") we aim to make the best decisions for our users, the Prisma hype made us stop by for a second, evaluate its unique offering and conclude whether we should upgrade our toolbox?"),(0,n.yg)("p",null,"This article is certainly not an 'ORM 101' but rather a spotlight on specific dimensions in which Prisma aims to shine or struggle. It's compared against the two most popular Node.js ORM - TypeORM and Sequelize. Why not others? Why other promising contenders like MikroORM weren't covered? 
Just because they are not as popular yet, and maturity is a critical trait of ORMs"),(0,n.yg)("p",null,"Ready to explore how good Prisma is and whether you should throw away your current tools?"))}d.isMDXComponent=!0},7424:(e,t,a)=>{a.d(t,{A:()=>r});const r=a.p+"assets/images/suite-4d046fac9ca9db57eafa55c4a7eac116.png"}}]);
\ No newline at end of file
diff --git a/assets/js/3d9c95a4.a94f442d.js b/assets/js/3d9c95a4.a94f442d.js
new file mode 100644
index 00000000..dee95f2c
--- /dev/null
+++ b/assets/js/3d9c95a4.a94f442d.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[279],{5680:(e,t,a)=>{a.d(t,{xA:()=>g,yg:()=>m});var n=a(6540);function r(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function o(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);t&&(n=n.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,n)}return a}function i(e){for(var t=1;t=0||(r[a]=e[a]);return r}(e,t);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);for(n=0;n=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(r[a]=e[a])}return r}var s=n.createContext({}),l=function(e){var t=n.useContext(s),a=t;return e&&(a="function"==typeof e?e(t):i(i({},t),e)),a},g=function(e){var t=l(e.components);return n.createElement(s.Provider,{value:t},e.children)},p="mdxType",c={inlineCode:"code",wrapper:function(e){var t=e.children;return n.createElement(n.Fragment,{},t)}},d=n.forwardRef((function(e,t){var a=e.components,r=e.mdxType,o=e.originalType,s=e.parentName,g=A(e,["components","mdxType","originalType","parentName"]),p=l(a),d=r,m=p["".concat(s,".").concat(d)]||p[d]||c[d]||o;return a?n.createElement(m,i(i({ref:t},g),{},{components:a})):n.createElement(m,i({ref:t},g))}));function m(e,t){var a=arguments,r=t&&t.mdxType;if("string"==typeof e||r){var o=a.length,i=new Array(o);i[0]=d;var A={};for(var s in t)hasOwnProperty.call(t,s)&&(A[s]=t[s]);A.originalType=e,A[p]="string"==typeof e?e:r,i[1]=A;for(var l=2;l{a.r(t),a.d(t,{assets:()=>s,contentTitle:()=>i,default:()=>c,frontMatter:()=>o,metadata:()=>A,toc:()=>l});var n=a(8168),r=(a(6540),a(5680));const o={slug:"/",sidebar_position:1},i=void 0,A={unversionedId:"home",id:"home",title:"home",description:"Best practices starter",source:"@site/docs/home.md",sourceDirName:".",slug:"/",permalink:"/",draft:!1,editUrl:"https://github.com/practicajs/practica/tree/main/docs/docs/home.md",tags:[],version:"current",sidebarPosition:1,frontMatter:{slug:"/",sidebar_position:1},sidebar:"tutorialSidebar",next:{title:"What is practica.js",permalink:"/the-basics/what-is-practica"}},s={},l=[{value:"Generate a Node.js app that is packed with best practices AND simplicity in mind. Based off our repo Node.js best practices (77,000 stars)",id:"generate-a-nodejs-app-that-is-packed-with-best-practices-and-simplicity-in-mind-based-off-our-repo-nodejs-best-practices-77000-stars",level:3},{value:"1. Best Practices on top of known Node.js frameworks",id:"1-best-practices-on-top-of-known-nodejs-frameworks",level:3},{value:"2. Simplicity, how Node.js was intended",id:"2-simplicity-how-nodejs-was-intended",level:3},{value:"3. Supports many technologies and frameworks",id:"3-supports-many-technologies-and-frameworks",level:3}],g={toc:l},p="wrapper";function c(e){let{components:t,...o}=e;return(0,r.yg)(p,(0,n.A)({},g,o,{components:t,mdxType:"MDXLayout"}),(0,r.yg)("p",null,(0,r.yg)("img",{alt:"Best practices starter",src:a(9298).A,width:"1397",height:"410"})),(0,r.yg)("br",null),(0,r.yg)("h3",{id:"generate-a-nodejs-app-that-is-packed-with-best-practices-and-simplicity-in-mind-based-off-our-repo-nodejs-best-practices-77000-stars"},"Generate a Node.js app that is packed with best practices AND simplicity in mind. 
Based off our repo ",(0,r.yg)("a",{parentName:"h3",href:"https://github.com/goldbergyoni/nodebestpractices"},"Node.js best practices")," (77,000 stars)"),(0,r.yg)("br",null),(0,r.yg)("p",null,(0,r.yg)("img",{alt:"Discord",src:a(8935).A,width:"20",height:"20"})," ",(0,r.yg)("a",{parentName:"p",href:"https://discord.gg/9Nrarr7p"},"Discord discussions")," | ",(0,r.yg)("img",{alt:"Twitter",src:a(6206).A,width:"20",height:"20"})," ",(0,r.yg)("a",{parentName:"p",href:"https://twitter.com/nodepractices"},"Twitter")),(0,r.yg)("br",null),(0,r.yg)("h1",{id:"a-one-paragraph-overview"},"A One Paragraph Overview"),(0,r.yg)("p",null,"Although Node.js has great frameworks \ud83d\udc9a, they were never meant to be production-ready immediately. Practica.js aims to bridge the gap. Based on your preferred framework, we generate some example code that demonstrates a full workflow, from API to DB, that is packed with good practices. For example, we include a hardened dockerfile, N-Tier folder structure, great testing templates, and more. This saves a great deal of time and can prevent painful mistakes. All decisions made are ",(0,r.yg)("a",{parentName:"p",href:"./decisions/"},"neatly and thoughtfully documented"),". We strive to keep things as simple and standard as possible and base our work off the popular guide: ",(0,r.yg)("a",{parentName:"p",href:"https://github.com/goldbergyoni/nodebestpractices"},"Node.js Best Practices")),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"1 min video \ud83d\udc47")),(0,r.yg)("iframe",{width:"1024",height:"768",src:"https://www.youtube.com/embed/F6kAs2VEcKw",title:"YouTube video player",frameborder:"0",allow:"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture",allowfullscreen:!0}),(0,r.yg)("h1",{id:"our-philosophies-and-unique-values"},"Our Philosophies and Unique Values"),(0,r.yg)("h3",{id:"1-best-practices-on-top-of-known-nodejs-frameworks"},"1. Best Practices ",(0,r.yg)("em",{parentName:"h3"},"on top of")," known Node.js frameworks"),(0,r.yg)("p",null,"We don't re-invent the wheel. Rather, we use your favorite framework and empower it with structure and real examples. With a single command you can get an Express/Fastify-based codebase with ~100 examples of best practices inside."),(0,r.yg)("p",null,(0,r.yg)("img",{alt:"Built on top of known frameworks",src:a(2462).A,width:"1370",height:"589"})),(0,r.yg)("h3",{id:"2-simplicity-how-nodejs-was-intended"},"2. Simplicity, how Node.js was intended"),(0,r.yg)("p",null,"Keeping it simple, flat and based on native Node/JS capabilities is part of this project's DNA. We believe that too many abstractions, high complexity or fancy language features can quickly become a stumbling block for the team. "),(0,r.yg)("p",null,"To name a few examples: our code flow is flat with almost no levels of indirection; although we use TypeScript, almost no features are used besides types; and for modularization we simply use Node.js modules"),(0,r.yg)("p",null,(0,r.yg)("img",{alt:"Built on top of known frameworks",src:a(3114).A,width:"1642",height:"648"})),(0,r.yg)("h3",{id:"3-supports-many-technologies-and-frameworks"},"3. Supports many technologies and frameworks"),(0,r.yg)("p",null,"Good practices and simplicity are the name of the game with Practica. There is no need to narrow our code to a specific framework or database. 
We aim to support a majority of popular Node.js frameworks and databases."),(0,r.yg)("p",null,(0,r.yg)("img",{alt:"Built on top of known frameworks",src:a(6244).A,width:"1249",height:"404"})),(0,r.yg)("br",null),(0,r.yg)("h1",{id:"practices-and-features"},"Practices and Features"),(0,r.yg)("p",null,"We apply more than 100 practices and optimizations. You can opt in or out of most of these features using option flags on our CLI. The following table shows just a few examples of the features we provide. To see the full list of features, please visit our website ",(0,r.yg)("a",{parentName:"p",href:"https://practica.dev/dev/features/"},"here"),"."),(0,r.yg)("table",null,(0,r.yg)("thead",{parentName:"table"},(0,r.yg)("tr",{parentName:"thead"},(0,r.yg)("th",{parentName:"tr",align:null},(0,r.yg)("strong",{parentName:"th"},"Feature")),(0,r.yg)("th",{parentName:"tr",align:null},(0,r.yg)("strong",{parentName:"th"},"Explanation")),(0,r.yg)("th",{parentName:"tr",align:null},(0,r.yg)("strong",{parentName:"th"},"Flag")),(0,r.yg)("th",{parentName:"tr",align:null},(0,r.yg)("strong",{parentName:"th"},"Docs")))),(0,r.yg)("tbody",{parentName:"table"},(0,r.yg)("tr",{parentName:"tbody"},(0,r.yg)("td",{parentName:"tr",align:null},"Monorepo setup"),(0,r.yg)("td",{parentName:"tr",align:null},"Generates two components (e.g., Microservices) in a single repository with interactions between the two"),(0,r.yg)("td",{parentName:"tr",align:null},(0,r.yg)("inlineCode",{parentName:"td"},"--mr"),", ",(0,r.yg)("inlineCode",{parentName:"td"},"--monorepo")),(0,r.yg)("td",{parentName:"tr",align:null},(0,r.yg)("a",{parentName:"td",href:"/decisions/monorepo"},"Docs here"))),(0,r.yg)("tr",{parentName:"tbody"},(0,r.yg)("td",{parentName:"tr",align:null},"Output escaping and sanitizing"),(0,r.yg)("td",{parentName:"tr",align:null},"Clean-out outgoing responses from potential HTML security risks like XSS"),(0,r.yg)("td",{parentName:"tr",align:null},(0,r.yg)("inlineCode",{parentName:"td"},"--oe"),", ",(0,r.yg)("inlineCode",{parentName:"td"},"--output-escape")),(0,r.yg)("td",{parentName:"tr",align:null},"Docs coming soon")),(0,r.yg)("tr",{parentName:"tbody"},(0,r.yg)("td",{parentName:"tr",align:null},"Integration (component) testing"),(0,r.yg)("td",{parentName:"tr",align:null},"Generates full-blown component/integration tests setup including DB"),(0,r.yg)("td",{parentName:"tr",align:null},(0,r.yg)("inlineCode",{parentName:"td"},"--t"),", ",(0,r.yg)("inlineCode",{parentName:"td"},"--tests")),(0,r.yg)("td",{parentName:"tr",align:null},"Docs coming soon")),(0,r.yg)("tr",{parentName:"tbody"},(0,r.yg)("td",{parentName:"tr",align:null},"Unique request ID (Correlation ID)"),(0,r.yg)("td",{parentName:"tr",align:null},"Generates module that creates a unique correlation/request ID for every incoming request. This is available for any other object during the request life-span. 
Internally it uses Node's built-in ",(0,r.yg)("a",{parentName:"td",href:"https://nodejs.org/api/async_hooks.html#class-asynclocalstorage"},(0,r.yg)("inlineCode",{parentName:"a"},"AsyncLocalStorage"))),(0,r.yg)("td",{parentName:"tr",align:null},(0,r.yg)("inlineCode",{parentName:"td"},"--coi"),", ",(0,r.yg)("inlineCode",{parentName:"td"},"--correlation-id")),(0,r.yg)("td",{parentName:"tr",align:null},"Docs coming soon")),(0,r.yg)("tr",{parentName:"tbody"},(0,r.yg)("td",{parentName:"tr",align:null},"Dockerfile"),(0,r.yg)("td",{parentName:"tr",align:null},"Generates dockerfile that embodies 20> best practices"),(0,r.yg)("td",{parentName:"tr",align:null},(0,r.yg)("inlineCode",{parentName:"td"},"--df"),", ",(0,r.yg)("inlineCode",{parentName:"td"},"--docker-file")),(0,r.yg)("td",{parentName:"tr",align:null},"Docs coming soon")),(0,r.yg)("tr",{parentName:"tbody"},(0,r.yg)("td",{parentName:"tr",align:null},"Strong-schema configuration"),(0,r.yg)("td",{parentName:"tr",align:null},"A configuration module that dynamically load run-time configuration keys and includes a strong schema so it can fail fast"),(0,r.yg)("td",{parentName:"tr",align:null},"Built-in with basic app"),(0,r.yg)("td",{parentName:"tr",align:null},(0,r.yg)("a",{parentName:"td",href:"/decisions/configuration-library"},"Docs here"))))),(0,r.yg)("p",null,"\ud83d\udcd7 ",(0,r.yg)("strong",{parentName:"p"},"See our full list of features ",(0,r.yg)("a",{parentName:"strong",href:"https://practica.dev/features"},"here"))))}c.isMDXComponent=!0},3114:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/abstractions-vs-simplicity-a30a663aac02326729e09af03290388e.png"},8935:(e,t,a)=>{a.d(t,{A:()=>n});const n="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAAeGVYSWZNTQAqAAAACAAFARIAAwAAAAEAAQAAARoABQAAAAEAAABKARsABQAAAAEAAABSASgAAwAAAAEAAgAAh2kABAAAAAEAAABaAAAAAAAAAEgAAAABAAAASAAAAAEAAqACAAQAAAABAAAAFKADAAQAAAABAAAAFAAAAAAOLxVMAAAACXBIWXMAAAsTAAALEwEAmpwYAAACOGlUWHRYTUw6Y29tLmFkb2JlLnhtcAAAAAAAPHg6eG1wbWV0YSB4bWxuczp4PSJhZG9iZTpuczptZXRhLyIgeDp4bXB0az0iWE1QIENvcmUgNi4wLjAiPgogICA8cmRmOlJERiB4bWxuczpyZGY9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkvMDIvMjItcmRmLXN5bnRheC1ucyMiPgogICAgICA8cmRmOkRlc2NyaXB0aW9uIHJkZjphYm91dD0iIgogICAgICAgICAgICB4bWxuczpleGlmPSJodHRwOi8vbnMuYWRvYmUuY29tL2V4aWYvMS4wLyIKICAgICAgICAgICAgeG1sbnM6dGlmZj0iaHR0cDovL25zLmFkb2JlLmNvbS90aWZmLzEuMC8iPgogICAgICAgICA8ZXhpZjpQaXhlbFlEaW1lbnNpb24+NTA8L2V4aWY6UGl4ZWxZRGltZW5zaW9uPgogICAgICAgICA8ZXhpZjpQaXhlbFhEaW1lbnNpb24+NTA8L2V4aWY6UGl4ZWxYRGltZW5zaW9uPgogICAgICAgICA8dGlmZjpPcmllbnRhdGlvbj4xPC90aWZmOk9yaWVudGF0aW9uPgogICAgICAgICA8dGlmZjpSZXNvbHV0aW9uVW5pdD4yPC90aWZmOlJlc29sdXRpb25Vbml0PgogICAgICA8L3JkZjpEZXNjcmlwdGlvbj4KICAgPC9yZGY6UkRGPgo8L3g6eG1wbWV0YT4KSRNW3wAABKVJREFUOBFlVFtsVFUU3eeec+6dO49OeRRMKVqB8kjEECAESdRoohF+6A8t+mGkJvDhqzUxIXyYakKMhiAfEsGY6odKbWNSf/jQyCtGjNoIKhqo2sbiUOhrOp2Ze+95us/A4OsmM3fm3n3WXnvttTeB/1zHj3/H9+3bLN1jay091j+2vFoVeQkSAo/OdT+1dpwQot37vRj7zq1Y999d5Obt5vfAgKUdHUT3nbzWVJ6Ou+JYtBtjVktl89pYTABz2porHqVDqTTvO9DVNrkLzwzimTrObcDeXuv19hJz7MRoZxTLwx5PN4tEQBzHIKWEGiBQoDQA8CgkcbnAuffiwe71H9fPOtAaYJ3Z2ydGe5Shh5M4ASkiqZGUMZZagzUiQ6WRnwb3TSwE3PMYQuieN16650idKalrdhSZGUP7K/MlIMTKRFomlUtokaG5xRAgEcbRsAEFhTGc+xnwQHce2r9+wGlaY9g3cK1ptjx/wVrSLEUsy5Hh+SyFxXkGN6YTEKiQYxhg+JIFHEYLCRSmBeRCIi34HHUuNHDY8NqBjZOOMxQr5acZalYtz8hKAmzTuixsWJcHD9NRRsD3a2Eog6tXg7EEvhqegs++mWPZMJGM55tLOu5CqNdZ7+nTTIyoncxKiIW12ZCSB7Y0QT7ng5Aam1ArwuXF6jlIZSDAeh+5n8LwpSIpJcR6VGH9oh17ccgLxlpbMGa1EBGUI03XtaZrYAmyYcxDPQkoDFDKgo
eUfZ9i5zU0NgSwcU0IUzMRNToBDV7bqZ8vLPfkfCWvlckr7IDSQFpb0jUyngfw++gsXPplEoEcMMCFn67D2B/Fm6zxwZpVjRBVBVEiAWN0I94bPLQY6mJAIIPQJ7Cw0ce+YmeR1VvvX4bO/ReRkYTSfAJb9lyEdz8awcPO5BYWL0pj43x8r8AgBgAHJnGcuNVzUpGF+TSxmTQDgdZIYWlPtN8Fjz2YQIBNceV/cnAVLLsjg2V7NStlMxwWL2B2cg6lAFJURpS8rz+duIqdw3EKXMmaUQ+N4MH4n2VY27YIdjy6AjgCpFIMdm5fCW0rFsCVX4tuDIFjEm2IZn7oBn+k8tv4ODtz5iG1ZfsPQ2GGbZ2YEuTkqav24W1LSS4XwOdnx2HiRhXuXJap6Xp1IoJM6MO2zUthZjaCL768ZgvTijQ1BailGRoc7NA1g0Va9en50vPZTNh88nxRfvtjka9fGcDIWAWGr1SBeHOoKnYb/bfh7hD11fD95QqMTSnVsiTLo8p8IYpFn8tK6ivo2VeGO1Nh2G9VFSqxkdenY5bjijRkOYK5ASQ1C5WqBiZLxi7MUZUJKVcWE4io84ND9w3s3Xtr9OqD/cyrw92cp94kFrtmhExiaZNIUIIGdIAGW4urSzMPPUMDrjQFkcQ9Hx7eens5oNsA3D5zK+joy5uOROXKbrRJwZIcZ6kGn6fS1CKiMyPlAWV+1gfWwLHEgoyjTgfmztZ34j/mCqDOdM9z55pYLuhCv7UTStu0Uo2uqyhm0SgzgjYcSmZn+wbf2/G/BesI/utymv79YBfd3X2+9ckXzt37OH52d59uxbS0/t5pVv9dv/8FWzGOWyNPVJ0AAAAASUVORK5CYII="},2462:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/on-top-of-frameworks-ae0faae30dd942814098bd544a00e13f.png"},9298:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/practica-logo-dec9868d9568eacfa5507f97b16271d8.png"},6244:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/tech-stack-2703d0573d35db925b7d317e9e2d1827.png"},6206:(e,t,a)=>{a.d(t,{A:()=>n});const n="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAAEDmlDQ1BrQ0dDb2xvclNwYWNlR2VuZXJpY1JHQgAAOI2NVV1oHFUUPpu5syskzoPUpqaSDv41lLRsUtGE2uj+ZbNt3CyTbLRBkMns3Z1pJjPj/KRpKT4UQRDBqOCT4P9bwSchaqvtiy2itFCiBIMo+ND6R6HSFwnruTOzu5O4a73L3PnmnO9+595z7t4LkLgsW5beJQIsGq4t5dPis8fmxMQ6dMF90A190C0rjpUqlSYBG+PCv9rt7yDG3tf2t/f/Z+uuUEcBiN2F2Kw4yiLiZQD+FcWyXYAEQfvICddi+AnEO2ycIOISw7UAVxieD/Cyz5mRMohfRSwoqoz+xNuIB+cj9loEB3Pw2448NaitKSLLRck2q5pOI9O9g/t/tkXda8Tbg0+PszB9FN8DuPaXKnKW4YcQn1Xk3HSIry5ps8UQ/2W5aQnxIwBdu7yFcgrxPsRjVXu8HOh0qao30cArp9SZZxDfg3h1wTzKxu5E/LUxX5wKdX5SnAzmDx4A4OIqLbB69yMesE1pKojLjVdoNsfyiPi45hZmAn3uLWdpOtfQOaVmikEs7ovj8hFWpz7EV6mel0L9Xy23FMYlPYZenAx0yDB1/PX6dledmQjikjkXCxqMJS9WtfFCyH9XtSekEF+2dH+P4tzITduTygGfv58a5VCTH5PtXD7EFZiNyUDBhHnsFTBgE0SQIA9pfFtgo6cKGuhooeilaKH41eDs38Ip+f4At1Rq/sjr6NEwQqb/I/DQqsLvaFUjvAx+eWirddAJZnAj1DFJL0mSg/gcIpPkMBkhoyCSJ8lTZIxk0TpKDjXHliJzZPO50dR5ASNSnzeLvIvod0HG/mdkmOC0z8VKnzcQ2M/Yz2vKldduXjp9bleLu0ZWn7vWc+l0JGcaai10yNrUnXLP/8Jf59ewX+c3Wgz+B34Df+vbVrc16zTMVgp9um9bxEfzPU5kPqUtVWxhs6OiWTVW+gIfywB9uXi7CGcGW/zk98k/kmvJ95IfJn/j3uQ+4c5zn3Kfcd+AyF3gLnJfcl9xH3OfR2rUee80a+6vo7EK5mmXUdyfQlrYLTwoZIU9wsPCZEtP6BWGhAlhL3p2N6sTjRdduwbHsG9kq32sgBepc+xurLPW4T9URpYGJ3ym4+8zA05u44QjST8ZIoVtu3qE7fWmdn5LPdqvgcZz8Ww8BWJ8X3w0PhQ/wnCDGd+LvlHs8dRy6bLLDuKMaZ20tZrqisPJ5ONiCq8yKhYM5cCgKOu66Lsc0aYOtZdo5QCwezI4wm9J/v0X23mlZXOfBjj8Jzv3WrY5D+CsA9D7aMs2gGfjve8ArD6mePZSeCfEYt8CONWDw8FXTxrPqx/r9Vt4biXeANh8vV7/+/16ffMD1N8AuKD/A/8leAvFY9bLAAAAeGVYSWZNTQAqAAAACAAFARIAAwAAAAEAAQAAARoABQAAAAEAAABKARsABQAAAAEAAABSASgAAwAAAAEAAgAAh2kABAAAAAEAAABaAAAAAAAAAEgAAAABAAAASAAAAAEAAqACAAQAAAABAAAAFKADAAQAAAABAAAAFAAAAAAOLxVMAAAACXBIWXMAAAsTAAALEwEAmpwYAAACOGlUWHRYTUw6Y29tLmFkb2JlLnhtcAAAAAAAPHg6eG1wbWV0YSB4bWxuczp4PSJhZG9iZTpuczptZXRhLyIgeDp4bXB0az0iWE1QIENvcmUgNi4wLjAiPgogICA8cmRmOlJERiB4bWxuczpyZGY9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkvMDIvMjItcmRmLXN5bnRheC1ucyMiPgogICAgICA8cmRmOkRlc2NyaXB0aW9uIHJkZjphYm91dD0iIgogICAgICAgICAgICB4bWxuczpleGlmPSJodHRwOi8vbnMuYWRvYmUuY29tL2V4aWYvMS4wLyIKICAgICAgICAgICAgeG1sbnM6dGlmZj0iaHR0cDovL25zLmFkb2JlLmNvbS90aWZmLzEuMC8iPgogICAgICAgICA8ZXhpZjpQaXhlbFlEaW1lbnNpb24+NTA8L2V4aWY6UGl4ZWxZRGltZW5zaW9uPgogICAgICAgICA8ZXhpZjpQaXhlbFhEaW1lbnNpb24+NTA8L2V4aWY6UGl4ZWxYRGltZW5zaW9uPgogICAgICAgICA8dGlmZjpPcmllbnRhdGlvbj4xPC90aWZmOk9yaWVudGF0aW9uPgogICA
gICAgICA8dGlmZjpSZXNvbHV0aW9uVW5pdD4yPC90aWZmOlJlc29sdXRpb25Vbml0PgogICAgICA8L3JkZjpEZXNjcmlwdGlvbj4KICAgPC9yZGY6UkRGPgo8L3g6eG1wbWV0YT4KSRNW3wAAA0ZJREFUOBF9VE1rE1EUPZNMkjGTtGhpRaGodFFEFxVsKQpCEVxooRvBPyEUt7qRquDWpXuhu65c6aa4sB9WqFgQBGsXNtJYazVNMskkbzznTSathXpJ5s17c++5555333MqlUpEQwQHbgpI888pHAd84Z+j5hq1JLNzjvJpGyBsR4xzuB7BjaMAj2+1EKjWCJ1E2vCjHwL2c0A+6yAII6QY6BrjwMsAHzYNni0blBuAS0Am/q+REFp0OknAe+NpXDxF0BZj3TRQaUR4umAwVwaG85x30A4S1RLV6Jar+otceP0LCN628XwyjWMZMpRufwLgO5ld8OnEBEWWkmZASJRES4/rYqUyBd7gg/G4QAIlxgujQLau9JajytxrU0sm0Kgggdf5rqTvqS2okzLluT7IgAJH+fZRMiUWFrcifkmApUuP2HB9qR5hPO9gg5JMD7qYOJfBymYLm3sGX/Yi7DCBiCg2McXZdtCi2G21IpzxU3gxVcCDoQwWfxiM9aTw5KaPqREPj275mLmRRw9ZBZ2yEzCNXYZKI316WVI5iNBXSOHxZAETQw3bm34uxZ6L+21pPcTLnwZXiw5KzX8ZWkCb4RD1atPguO/g+nkqTTMEk9aytTKFY3JtHHu62/D6ZkvWi8MP2rl+lvLmt8HiVzYVXZsUtc0o4lF4BztVg/lSG2c9fqN/J4cgrHU11Bd+txn7s8DD5QAfv4XISnX+1K+y2ZUA80w4wNpEwLZV/Mk+9zXklCcIuyR2mfrdGc7YIyU0ne8a6cy+q+PuagOXKAWJIkd/u8Vkn9i+hlwRQ+10iW2yvmvgk+Eq/9vVCK82QsxtG4ywjdgI+0bQg1MLqESqXY4a60Se+cSbYo1b2NmJE5RhlMxqynrA1BnJZmnZAkpwHQKdjAwRGYdrvQ6LdaymCtBu6tSogsQkkU6LiAhYxHjb8GR4wGnu2sJWFF8ODBQROQhMCWWHN4BS43MNuD0QY+iUuSGDi1T3/pU0cottXl/xcUoABdTBs4w1S8oUs1FWMj1GpiQVUCVHN7ahh8ero9aMUOWtc5iJRRTdxJSBcwHbC5a922B3qE9dXdu6aesEy7L1vQKdE0oJwBGjEqvMur2tY6e/6ENQh3MBdJ4AAAAASUVORK5CYII="}}]);
\ No newline at end of file
diff --git a/assets/js/4067f4ab.1bebabc4.js b/assets/js/4067f4ab.1bebabc4.js
new file mode 100644
index 00000000..8ac1afec
--- /dev/null
+++ b/assets/js/4067f4ab.1bebabc4.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[2005],{944:t=>{t.exports=JSON.parse('{"label":"unit-test","permalink":"/blog/tags/unit-test","allTagsPath":"/blog/tags","count":1}')}}]);
\ No newline at end of file
diff --git a/assets/js/409973dd.aa38985e.js b/assets/js/409973dd.aa38985e.js
new file mode 100644
index 00000000..4b23c648
--- /dev/null
+++ b/assets/js/409973dd.aa38985e.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[9759],{1339:a=>{a.exports=JSON.parse('{"label":"workflow","permalink":"/blog/tags/workflow","allTagsPath":"/blog/tags","count":1}')}}]);
\ No newline at end of file
diff --git a/assets/js/4bb443f0.529bb3d8.js b/assets/js/4bb443f0.529bb3d8.js
new file mode 100644
index 00000000..4f094145
--- /dev/null
+++ b/assets/js/4bb443f0.529bb3d8.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[1122],{8446:t=>{t.exports=JSON.parse('{"permalink":"/blog/tags/testing","page":1,"postsPerPage":10,"totalPages":1,"totalCount":4,"blogDescription":"Blog","blogTitle":"Blog"}')}}]);
\ No newline at end of file
diff --git a/assets/js/4e20cbbc.24a15328.js b/assets/js/4e20cbbc.24a15328.js
new file mode 100644
index 00000000..655608a4
--- /dev/null
+++ b/assets/js/4e20cbbc.24a15328.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[3194],{3510:a=>{a.exports=JSON.parse('{"label":"integration","permalink":"/blog/tags/integration","allTagsPath":"/blog/tags","count":2}')}}]);
\ No newline at end of file
diff --git a/assets/js/51736f2d.52359975.js b/assets/js/51736f2d.52359975.js
new file mode 100644
index 00000000..bc684c80
--- /dev/null
+++ b/assets/js/51736f2d.52359975.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[8905],{5680:(e,t,a)=>{a.d(t,{xA:()=>h,yg:()=>u});var n=a(6540);function i(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function r(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);t&&(n=n.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,n)}return a}function s(e){for(var t=1;t=0||(i[a]=e[a]);return i}(e,t);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);for(n=0;n=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(i[a]=e[a])}return i}var l=n.createContext({}),p=function(e){var t=n.useContext(l),a=t;return e&&(a="function"==typeof e?e(t):s(s({},t),e)),a},h=function(e){var t=p(e.components);return n.createElement(l.Provider,{value:t},e.children)},c="mdxType",d={inlineCode:"code",wrapper:function(e){var t=e.children;return n.createElement(n.Fragment,{},t)}},g=n.forwardRef((function(e,t){var a=e.components,i=e.mdxType,r=e.originalType,l=e.parentName,h=o(e,["components","mdxType","originalType","parentName"]),c=p(a),g=i,u=c["".concat(l,".").concat(g)]||c[g]||d[g]||r;return a?n.createElement(u,s(s({ref:t},h),{},{components:a})):n.createElement(u,s({ref:t},h))}));function u(e,t){var a=arguments,i=t&&t.mdxType;if("string"==typeof e||i){var r=a.length,s=new Array(r);s[0]=g;var o={};for(var l in t)hasOwnProperty.call(t,l)&&(o[l]=t[l]);o.originalType=e,o[c]="string"==typeof e?e:i,s[1]=o;for(var p=2;p{a.r(t),a.d(t,{assets:()=>l,contentTitle:()=>s,default:()=>d,frontMatter:()=>r,metadata:()=>o,toc:()=>p});var n=a(8168),i=(a(6540),a(5680));const r={slug:"is-prisma-better-than-your-traditional-orm",date:"2022-12-07T11:00",hide_table_of_contents:!0,title:"Is Prisma better than your 'traditional' ORM?",authors:["goldbergyoni"],tags:["node.js","express","nestjs","fastify","passport","dotenv","supertest","practica","testing"]},s=void 0,o={permalink:"/blog/is-prisma-better-than-your-traditional-orm",editUrl:"https://github.com/practicajs/practica/tree/main/docs/blog/is-prisma-better/index.md",source:"@site/blog/is-prisma-better/index.md",title:"Is Prisma better than your 'traditional' ORM?",description:"Intro - Why discuss yet another ORM (or the man who had a stain on his fancy suite)?",date:"2022-12-07T11:00:00.000Z",formattedDate:"December 7, 2022",tags:[{label:"node.js",permalink:"/blog/tags/node-js"},{label:"express",permalink:"/blog/tags/express"},{label:"nestjs",permalink:"/blog/tags/nestjs"},{label:"fastify",permalink:"/blog/tags/fastify"},{label:"passport",permalink:"/blog/tags/passport"},{label:"dotenv",permalink:"/blog/tags/dotenv"},{label:"supertest",permalink:"/blog/tags/supertest"},{label:"practica",permalink:"/blog/tags/practica"},{label:"testing",permalink:"/blog/tags/testing"}],readingTime:23.875,hasTruncateMarker:!0,authors:[{name:"Yoni Goldberg",title:"Practica.js core maintainer",url:"https://github.com/goldbergyoni",imageURL:"https://github.com/goldbergyoni.png",key:"goldbergyoni"}],frontMatter:{slug:"is-prisma-better-than-your-traditional-orm",date:"2022-12-07T11:00",hide_table_of_contents:!0,title:"Is Prisma better than your 'traditional' ORM?",authors:["goldbergyoni"],tags:["node.js","express","nestjs","fastify","passport","dotenv","supertest","practica","testing"]},prevItem:{title:"Practica v0.0.6 is alive",permalink:"/blog/practica-v0.0.6-is-alive"},nextItem:{title:"Which Monorepo is right for a Node.js 
BACKEND\xa0now?",permalink:"/blog/monorepo-backend"}},l={authorsImageUrls:[void 0]},p=[{value:"Intro - Why discuss yet another ORM (or the man who had a stain on his fancy suite)?",id:"intro---why-discuss-yet-another-orm-or-the-man-who-had-a-stain-on-his-fancy-suite",level:2},{value:"TOC",id:"toc",level:2},{value:"Prisma basics in 3 minutes",id:"prisma-basics-in-3-minutes",level:2},{value:"What is the same?",id:"what-is-the-same",level:2},{value:"What is fundamentally different?",id:"what-is-fundamentally-different",level:2},{value:"1. Type safety across the board",id:"1-type-safety-across-the-board",level:3},{value:"2. Make you forget SQL",id:"2-make-you-forget-sql",level:2},{value:"3. Performance",id:"3-performance",level:2},{value:"4. No active records here!",id:"4-no-active-records-here",level:2},{value:"5. Documentation and developer-experience",id:"5-documentation-and-developer-experience",level:2},{value:"6. Observability, metrics, and tracing",id:"6-observability-metrics-and-tracing",level:2},{value:"7. Continuity - will it be here with us in 2024/2025",id:"7-continuity---will-it-be-here-with-us-in-20242025",level:2},{value:"Closing - what should you use now?",id:"closing---what-should-you-use-now",level:2},{value:"When will it shine?",id:"when-will-it-shine",level:3},{value:"Some of my other articles",id:"some-of-my-other-articles",level:2}],h={toc:p},c="wrapper";function d(e){let{components:t,...r}=e;return(0,i.yg)(c,(0,n.A)({},h,r,{components:t,mdxType:"MDXLayout"}),(0,i.yg)("h2",{id:"intro---why-discuss-yet-another-orm-or-the-man-who-had-a-stain-on-his-fancy-suite"},"Intro - Why discuss yet another ORM (or the man who had a stain on his fancy suite)?"),(0,i.yg)("p",null,(0,i.yg)("em",{parentName:"p"},"Betteridge's law of headlines suggests that a 'headline that ends in a question mark can be answered by the word NO'. Will this article follow this rule?")),(0,i.yg)("p",null,"Imagine an elegant businessman (or woman) walking into a building, wearing a fancy tuxedo and a luxury watch wrapped around his palm. He smiles and waves all over to say hello while people around are starring admirably. You get a little closer, then shockingly, while standing nearby it's hard ignore a bold a dark stain over his white shirt. What a dissonance, suddenly all of that glamour is stained"),(0,i.yg)("p",null,(0,i.yg)("img",{alt:"Suite with stain",src:a(7424).A,width:"652",height:"489"})),(0,i.yg)("p",null,' Like this businessman, Node is highly capable and popular, and yet, in certain areas, its offering basket is stained with inferior offerings. One of these areas is the ORM space, "I wish we had something like (Java) hibernate or (.NET) Entity Framework" are common words being heard by Node developers. What about existing mature ORMs like TypeORM and Sequelize? We owe so much to these maintainers, and yet, the produced developer experience, the level of maintenance - just don\'t feel delightful, some may say even mediocre. At least so I believed ',(0,i.yg)("em",{parentName:"p"},"before")," writing this article..."),(0,i.yg)("p",null,"From time to time, a shiny new ORM is launched, and there is hope. Then soon it's realized that these new emerging projects are more of the same, if they survive. Until one day, Prisma ORM arrived surrounded with glamour: It's gaining tons of attention all over, producing fantastic content, being used by respectful frameworks and... raised 40,000,000$ (40 million) to build the next generation ORM - Is it the 'Ferrari' ORM we've been waiting for? Is it a game changer? 
If you're the 'no ORM for me' type, will this one make you convert your religion?"),(0,i.yg)("p",null,"In ",(0,i.yg)("a",{parentName:"p",href:"https://github.com/practicajs/practica"},"Practica.js")," (the Node.js starter based off ",(0,i.yg)("a",{parentName:"p",href:"https://github.com/goldbergyoni/nodebestpractices"},"Node.js best practices with 83,000 stars"),") we aim to make the best decisions for our users. The Prisma hype made us stop for a second, evaluate its unique offering, and conclude whether we should upgrade our toolbox"),(0,i.yg)("p",null,"This article is certainly not an 'ORM 101' but rather a spotlight on specific dimensions in which Prisma shines or struggles. It's compared against the two most popular Node.js ORMs - TypeORM and Sequelize. Why not others? Why weren't other promising contenders like MikroORM covered? Simply because they are not as popular yet, and maturity is a critical trait of ORMs"),(0,i.yg)("p",null,"Ready to explore how good Prisma is and whether you should throw away your current tools?"),(0,i.yg)("h2",{id:"toc"},"TOC"),(0,i.yg)("ol",null,(0,i.yg)("li",{parentName:"ol"},"Prisma basics in 3 minutes"),(0,i.yg)("li",{parentName:"ol"},"Things that are mostly the same"),(0,i.yg)("li",{parentName:"ol"},"Differentiation"),(0,i.yg)("li",{parentName:"ol"},"Closing")),(0,i.yg)("h2",{id:"prisma-basics-in-3-minutes"},"Prisma basics in 3 minutes"),(0,i.yg)("p",null,"Just before delving into the strategic differences, for the benefit of those unfamiliar with Prisma - here is a quick 'hello-world' workflow with Prisma ORM. If you're already familiar with it - skipping to the next section sounds sensible. Simply put, Prisma dictates 3 key steps to get our ORM code working:"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"A. Define a model -")," Unlike almost any other ORM, Prisma brings a unique language (DSL) for modeling the database-to-code mapping. This proprietary syntax aims to express these models with minimum clutter (i.e., TypeScript generics and verbose code). Worried about having intellisense and validation? A well-crafted vscode extension has you covered. In the following example, the prisma.schema file describes a DB with an Order table that has a many-to-one relation with a Country table:"),(0,i.yg)("pre",null,(0,i.yg)("code",{parentName:"pre",className:"language-prisma"},"// prisma.schema file\nmodel Order {\n id Int @id @default(autoincrement())\n userId Int?\n paymentTermsInDays Int?\n deliveryAddress String? @db.VarChar(255)\n country Country @relation(fields: [countryId], references: [id])\n countryId Int\n}\n\nmodel Country {\n id Int @id @default(autoincrement())\n name String @db.VarChar(255)\n Order Order[]\n}\n")),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"B. Generate the client code -")," Another unusual technique: to get the ORM code ready, one must invoke Prisma's CLI and ask for it: "),(0,i.yg)("pre",null,(0,i.yg)("code",{parentName:"pre",className:"language-bash"},"npx prisma generate\n")),(0,i.yg)("p",null,"Alternatively, if you wish to have your DB ready and the code generated with one command, just fire:"),(0,i.yg)("pre",null,(0,i.yg)("code",{parentName:"pre",className:"language-bash"},"npx prisma migrate deploy\n")),(0,i.yg)("p",null,"This will generate migration files that you can execute later in production and the TypeScript ORM code based on the model. 
The generated code location defaults to '","[root]","/node_modules/.prisma/client'. Every time the model changes, the code must be re-generated. While most ORMs name this code 'repository' or 'entity' or 'active record', interestingly, Prisma calls it a 'client'. This shows part of its unique philosophy, which we will explore later"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"C. All good, use the client to interact with the DB -")," The generated client has a rich set of functions and types for your DB interactions. Just import the ORM/client code and use it:"),(0,i.yg)("pre",null,(0,i.yg)("code",{parentName:"pre",className:"language-javascript"},"import { PrismaClient } from '.prisma/client';\n\nconst prisma = new PrismaClient();\n// A query example\nawait prisma.order.findMany({\n where: {\n paymentTermsInDays: 30,\n },\n orderBy: {\n id: 'asc',\n },\n });\n// Use the same client for insertion, deletion, updates, etc\n")),(0,i.yg)("p",null,"That's the nuts and bolts of Prisma. Is it different and better?"),(0,i.yg)("h2",{id:"what-is-the-same"},"What is the same?"),(0,i.yg)("p",null,"When comparing options, before outlining differences, it's useful to state what is actually similar among these products. Here is a partial list of features that TypeORM, Sequelize and Prisma all support"),(0,i.yg)("ul",null,(0,i.yg)("li",{parentName:"ul"},"Casual queries with sorting, filtering, distinct, group by, 'upsert' (update or create), etc"),(0,i.yg)("li",{parentName:"ul"},"Raw queries"),(0,i.yg)("li",{parentName:"ul"},"Full text search"),(0,i.yg)("li",{parentName:"ul"},"Association/relations of any type (e.g., many to many, self-relation, etc)"),(0,i.yg)("li",{parentName:"ul"},"Aggregation queries"),(0,i.yg)("li",{parentName:"ul"},"Pagination"),(0,i.yg)("li",{parentName:"ul"},"CLI"),(0,i.yg)("li",{parentName:"ul"},"Transactions"),(0,i.yg)("li",{parentName:"ul"},"Migration & seeding"),(0,i.yg)("li",{parentName:"ul"},"Hooks/events (called middleware in Prisma)"),(0,i.yg)("li",{parentName:"ul"},"Connection pool"),(0,i.yg)("li",{parentName:"ul"},"Based on various community benchmarks, no dramatic performance differences"),(0,i.yg)("li",{parentName:"ul"},"All have a huge amount of stars and downloads")),(0,i.yg)("p",null,"Overall, I found TypeORM and Sequelize to be a little more feature-rich. For example, the following features are missing only in Prisma: GIS queries, DB-level custom constraints, DB replication, soft delete, caching, exclude queries and some more"),(0,i.yg)("p",null,"With that, shall we focus on what really sets them apart and makes a difference"),(0,i.yg)("h2",{id:"what-is-fundamentally-different"},"What is fundamentally different?"),(0,i.yg)("h3",{id:"1-type-safety-across-the-board"},"1. Type safety across the board"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udc81\u200d\u2642\ufe0f What is it about:")," An ORM's life has not gotten easier since the rise of TypeScript, to say the least. The need to support typed models/queries/etc yields a lot of developer sweat. Sequelize, for example, struggles to stabilize a TypeScript interface and by now offers 3 different syntaxes + one external library (",(0,i.yg)("a",{parentName:"p",href:"https://github.com/sequelize/sequelize-typescript"},"sequelize-typescript"),") that offers yet another style. Look at the syntax below; this feels like an afterthought - a library that was not built for TypeScript and now tries to squeeze it in somehow. 
Despite the major investment, both Sequelize and TypeORM offer only partial type safety. Simple queries do return typed objects, but other common corner cases like attributes/projections leave you with brittle strings. Here are a few examples:"),(0,i.yg)("pre",null,(0,i.yg)("code",{parentName:"pre",className:"language-javascript"},"// Sequelize pesky TypeScript interface\ntype OrderAttributes = {\n id: number,\n price: number,\n // other attributes...\n};\n\ntype OrderCreationAttributes = Optional<OrderAttributes, 'id'>;\n\n//\ud83d\ude2f Isn't this a weird syntax?\nclass Order extends Model<InferAttributes<Order>, InferCreationAttributes<Order>> {\n declare id: CreationOptional<number>;\n declare price: number;\n}\n")),(0,i.yg)("pre",null,(0,i.yg)("code",{parentName:"pre",className:"language-javascript"},"// Sequelize loose query types\nawait getOrderModel().findAll({\n where: { noneExistingField: 'noneExistingValue' }, //\ud83d\udc4d TypeScript will warn here\n attributes: ['none-existing-field', 'another-imaginary-column'], // No errors here although these columns do not exist\n include: 'no-such-table', //\ud83d\ude2f no errors here although this table doesn't exist\n });\n await getCountryModel().findByPk('price'); //\ud83d\ude2f No errors here although the price column is not a primary key\n")),(0,i.yg)("pre",null,(0,i.yg)("code",{parentName:"pre",className:"language-javascript"},"// TypeORM loose query\nconst ordersOnSales: Order[] = await orderRepository.find({\n where: { onSale: true }, //\ud83d\udc4d TypeScript will warn here\n select: ['id', 'price'],\n})\nconsole.log(ordersOnSales[0].userId); //\ud83d\ude2f No errors here although the 'userId' column is not part of the returned object\n")),(0,i.yg)("p",null,"Isn't it ironic that a library called ",(0,i.yg)("strong",{parentName:"p"},"Type"),"ORM bases its queries on strings?"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83e\udd14 How Prisma is different:")," It takes a totally different approach by generating per-project client code that is fully typed. This client embodies types for everything: every query, relations, sub-queries, everything (except migrations). While other ORMs struggle to infer types from discrete models (including associations that are declared in other files), Prisma's offline code generation is easier: It can look through the entire DB relations, use custom generation code and build an almost perfect TypeScript experience. Why 'almost' perfect? For some reason, Prisma advocates using plain SQL for migrations, which might result in a discrepancy between the code models and the DB schema. Other than that, this is how Prisma's client brings end-to-end type safety:"),(0,i.yg)("pre",null,(0,i.yg)("code",{parentName:"pre",className:"language-javascript"},"await prisma.order.findMany({\n where: {\n noneExistingField: 1, //\ud83d\udc4d TypeScript error here\n },\n select: {\n noneExistingRelation: { //\ud83d\udc4d TypeScript error here\n select: { id: true }, \n },\n noneExistingField: true, //\ud83d\udc4d TypeScript error here\n },\n });\n\n await prisma.order.findUnique({\n where: { price: 50 }, //\ud83d\udc4d TypeScript error here\n });\n")),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udcca How important:")," TypeScript support across the board is mostly valuable for DX. Luckily, we have another safety net: the project testing. 
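"),(0,i.yg)("p",null,"For illustration, here is a minimal sketch of such a test; the repository functions and the Jest-style setup are hypothetical, made up for the sake of the example:"),(0,i.yg)("pre",null,(0,i.yg)("code",{parentName:"pre",className:"language-javascript"},"// A hypothetical integration test - it catches at runtime what loose types miss\ntest('When adding a valid order, then it can be fetched back', async () => {\n  const addedOrder = await orderRepository.addOrder({ userId: 1, price: 90 });\n  const fetchedOrder = await orderRepository.getOrderById(addedOrder.id);\n  expect(fetchedOrder).toMatchObject({ userId: 1, price: 90 });\n});\n")),(0,i.yg)("p",null,"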
Since tests are mandatory, having build-time type verification is important but not a life saver"),(0,i.yg)("p",null,(0,i.yg)("img",{alt:"Medium importance",src:a(6574).A,width:"200",height:"52"})),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Is Prisma doing better?:")," Definitely"),(0,i.yg)("h2",{id:"2-make-you-forget-sql"},"2. Make you forget SQL"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udc81\u200d\u2642\ufe0f What is it about:")," Many avoid ORMs while preferring to interact with the DB using lower-level techniques. One of their arguments concerns the efficiency of ORMs: Since the generated queries are not visible immediately to the developers, wasteful queries might get executed unknowingly. While all ORMs provide syntactic sugar over SQL, there are subtle differences in the level of abstraction. The more the ORM syntax resembles SQL, the more likely the developers will understand their own actions"),(0,i.yg)("p",null,"For example, TypeORM's query builder looks like SQL broken into convenient functions"),(0,i.yg)("pre",null,(0,i.yg)("code",{parentName:"pre",className:"language-javascript"},"await createQueryBuilder('order')\n .leftJoinAndSelect('order.country', 'country')\n .select(['order.userId', 'order.productId', 'country.name', 'country.id'])\n .getMany();\n")),(0,i.yg)("p",null,"A developer who reads this code \ud83d\udc46 is likely to infer that a ",(0,i.yg)("em",{parentName:"p"},"join")," query between two tables will get executed"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83e\udd14 How Prisma is different:")," Prisma's mission statement is to simplify DB work; the following statement is taken from their homepage:"),(0,i.yg)("p",null,'"We designed its API to be intuitive, both for SQL veterans and ',(0,i.yg)("em",{parentName:"p"},"developers brand new to databases"),'"'),(0,i.yg)("p",null,"Being ambitious to appeal also to database laymen, Prisma builds a syntax with a slightly higher abstraction, for example:"),(0,i.yg)("pre",null,(0,i.yg)("code",{parentName:"pre",className:"language-javascript"},"await prisma.order.findMany({\n select: {\n userId: true,\n productId: true,\n country: {\n select: { name: true, id: true },\n },\n },\n});\n\n")),(0,i.yg)("p",null,"No join is mentioned here although it fetches records from two related tables (order and country). Could you guess what SQL is being produced here? How many queries? One, right? A simple join? Surprise: actually, two queries are made. Prisma fires one query per table here, as the join logic happens on the ORM client side (not inside the DB). But why? In some cases, mostly where the join would produce a lot of repetition (a large cartesian product), querying each side of the relation is more efficient. But in other cases, it's not. Prisma arbitrarily chose what they believe will perform better in ",(0,i.yg)("em",{parentName:"p"},"most")," cases. I checked; in my case it's ",(0,i.yg)("em",{parentName:"p"},"slower")," than doing a one-join query on the DB side. As a developer, I would miss this deficiency due to the high-level syntax (no join is mentioned). My point is, Prisma's sweet and simple syntax might be a blessing for developers who are brand new to databases and aim to achieve a working solution in a short time. 
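"),(0,i.yg)("p",null,"If the client-side join turns out to be slower for your case, Prisma still lets you fall back to raw SQL and keep the join on the DB side. A minimal sketch, assuming the Order/Country model from above and a PostgreSQL dialect:"),(0,i.yg)("pre",null,(0,i.yg)("code",{parentName:"pre",className:"language-javascript"},"// A hedged sketch - forcing a single DB-side join with Prisma's raw query API\nconst ordersWithCountry = await prisma.$queryRaw`\n  SELECT o.\"userId\", o.\"paymentTermsInDays\", c.\"name\" AS country\n  FROM \"Order\" o JOIN \"Country\" c ON o.\"countryId\" = c.id`;\n")),(0,i.yg)("p",null,"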
For the longer term, having full awareness of the DB interactions is helpful; other ORMs encourage this awareness a little better"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udcca How important:")," Any ORM will hide SQL details from its users - without the developer's awareness, no ORM will save the day"),(0,i.yg)("p",null,(0,i.yg)("img",{alt:"Medium importance",src:a(6574).A,width:"200",height:"52"})),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Is Prisma doing better?:")," Not necessarily"),(0,i.yg)("h2",{id:"3-performance"},"3. Performance"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udc81\u200d\u2642\ufe0f What is it about:")," Speak to an ORM antagonist and you'll hear a common sensible argument: ORMs are much slower than a 'raw' approach. To an extent, this is a legit observation as ",(0,i.yg)("a",{parentName:"p",href:"https://welingtonfidelis.medium.com/pg-driver-vs-knex-js-vs-sequelize-vs-typeorm-f9ed53e9f802"},"most comparisons")," will show non-negligible differences between raw/query-builder and ORM."),(0,i.yg)("p",null,(0,i.yg)("img",{alt:"raw is faster",src:a(6467).A,width:"756",height:"510"}),"\n",(0,i.yg)("em",{parentName:"p"},"Example: a direct insert against the PG driver is much faster ",(0,i.yg)("a",{parentName:"em",href:"https://welingtonfidelis.medium.com/pg-driver-vs-knex-js-vs-sequelize-vs-typeorm-f9ed53e9f802"},"Source"))," "),(0,i.yg)("p",null," It should also be noted that these benchmarks don't tell the entire story - on top of raw queries, every solution must build a mapper layer that maps the raw data to JS objects, nests the results, casts types, and more. This work is included within every ORM but not shown in benchmarks for the raw option. In reality, every team which doesn't use an ORM would have to build their own small \"ORM\", including a mapper, which will also impact performance"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83e\udd14 How Prisma is different:")," It was my hope to see some magic here - eating the ORM cake without counting the calories, seeing Prisma achieve an almost 'raw' query speed. I had some good and logical reasons for this hope: Prisma uses a DB client built with Rust. Theoretically, it could serialize and nest objects faster (in reality, this happens on the JS side). It was also built from the ground up and could build on the knowledge piled up in the ORM space for years. Also, since it returns POJOs only (see bullet 'No active records here!') - no time should be spent on decorating objects with ORM fields"),(0,i.yg)("p",null,"You already got it, this hope was not fulfilled. Going with every community benchmark (",(0,i.yg)("a",{parentName:"p",href:"https://dev.to/josethz00/benchmark-prisma-vs-typeorm-3873"},"one"),", ",(0,i.yg)("a",{parentName:"p",href:"https://github.com/edgedb/imdbench"},"two"),", ",(0,i.yg)("a",{parentName:"p",href:"https://deepkit.io/library"},"three"),"), Prisma at best is not faster than the average ORM. What is the reason? I can't tell exactly, but it might be due to the complicated system that must support Go, future languages, MongoDB and other non-relational DBs"),(0,i.yg)("p",null,(0,i.yg)("img",{alt:"Prisma is not faster",src:a(5542).A,width:"1043",height:"469"}),"\n",(0,i.yg)("em",{parentName:"p"},"Example: Prisma is not faster than others. 
It should be noted that in other benchmarks Prisma scores higher and shows an 'average' performance ",(0,i.yg)("a",{parentName:"em",href:"https://github.com/edgedb/imdbench"},"Source"))),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udcca How important:")," ORM users are expected to live peacefully with inferior performance; for many systems it won't make a great difference. With that, 10%-30% performance differences between various ORMs are not a key factor"),(0,i.yg)("p",null,(0,i.yg)("img",{alt:"Medium importance",src:a(6574).A,width:"200",height:"52"})),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Is Prisma doing better?:")," No"),(0,i.yg)("h2",{id:"4-no-active-records-here"},"4. No active records here!"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udc81\u200d\u2642\ufe0f What is it about:"),' Node in its early days was heavily inspired by Ruby (e.g., testing "describe"); many great patterns were embraced, but ',(0,i.yg)("a",{parentName:"p",href:"https://en.wikipedia.org/wiki/Active_record_pattern"},"Active Record")," is not among the successful ones. What is this pattern about in a nutshell? Say you deal with Orders in your system; with Active Record, an Order object/class will hold the entity properties, possibly also some logic functions, and also CRUD functions. Many find this pattern to be awful. Why? Ideally, when coding some logic/flow, one should not keep her mind busy with side effects and DB narratives. It also might be that accessing some property unconsciously invokes a heavy DB call (i.e., lazy loading). If that's not enough, in case of heavy logic, unit tests might be in order (i.e., read ",(0,i.yg)("a",{parentName:"p",href:"https://blog.stevensanderson.com/2009/11/04/selective-unit-testing-costs-and-benefits/"},"'selective unit tests'"),") - it's going to be much harder to write unit tests against code that interacts with the DB. In fact, all of the respectable and popular architectures (e.g., DDD, clean, 3-tiers, etc) advocate to 'isolate the domain', separate the core/logic of the system from the surrounding technologies. With all of that said, both TypeORM and Sequelize support the Active Record pattern, which is displayed in many examples within their documentation. Both also support other better patterns like the data mapper (see below), but they still open the door for doubtful patterns"),(0,i.yg)("pre",null,(0,i.yg)("code",{parentName:"pre",className:"language-javascript"},'// TypeORM active records \ud83d\ude1f\n\n@Entity()\nclass Order extends BaseEntity {\n @PrimaryGeneratedColumn()\n id: number\n\n @Column()\n price: number\n\n @ManyToOne(() => Product, (product) => product.order)\n products: Product[]\n\n // Other columns here\n}\n\nfunction updateOrder(orderToUpdate: Order){\n if(orderToUpdate.price > 100){\n // some logic here\n orderToUpdate.status = "approval";\n orderToUpdate.save(); // A DB call hiding inside the business flow\n orderToUpdate.products.forEach((product) => {\n // accessing a relation here might trigger lazy loading\n })\n // orderToUpdate.usedConnection = ? Which DB connection is used here?\n }\n}\n')),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83e\udd14 How Prisma is different:")," The better alternative is the data mapper pattern. It acts as a bridge, an adapter, between simple object notations (domain objects with properties) and the DB language, typically SQL. Call it with a plain JS object, a POJO, and get it saved in the DB. Simple. It won't add functions to the result objects or do anything beyond returning pure data, no surprising side effects. 
In its purest sense, this is a DB-related utility and completely detached from the business logic. While both Sequelize and TypeORM support this, Prisma offers ",(0,i.yg)("em",{parentName:"p"},"only")," this style - no room for mistakes."),(0,i.yg)("pre",null,(0,i.yg)("code",{parentName:"pre",className:"language-javascript"},'// Prisma approach with a data mapper \ud83d\udc4d\n\n// This was generated automatically by Prisma\ntype Order = {\n id: number\n\n price: number\n\n products: Product[]\n\n // Other columns here\n}\n\nfunction updateOrder(orderToUpdate: Order){\n if(orderToUpdate.price > 100){\n orderToUpdate.status = "approval";\n prisma.order.update({ where: { id: orderToUpdate.id }, data: orderToUpdate }); \n // Side effect \ud83d\udc46, but an explicit one. The thoughtful coder will move this to another function. Since it\'s happening outside, mocking is possible \ud83d\udc4d\n orderToUpdate.products.forEach((product) => { // No lazy loading, the data is already here \ud83d\udc4d\n\n })\n } \n}\n')),(0,i.yg)("p",null," In ",(0,i.yg)("a",{parentName:"p",href:"https://github.com/practicajs/practica"},"Practica.js"),' we take it one step further and put the Prisma models within the "DAL" layer and wrap them with the ',(0,i.yg)("a",{parentName:"p",href:"https://learn.microsoft.com/en-us/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-design"},"repository pattern"),". You may glimpse ",(0,i.yg)("a",{parentName:"p",href:"https://github.com/practicajs/practica/blob/21ff12ba19cceed9a3735c09d48184b5beb5c410/src/code-templates/services/order-service/domain/new-order-use-case.ts#L21"},"into the code here"),"; this is the business flow that calls the DAL layer"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udcca How important:")," On the one hand, this is a key architectural principle to follow, but on the other hand most ORMs ",(0,i.yg)("em",{parentName:"p"},"allow")," doing it right"),(0,i.yg)("p",null,(0,i.yg)("img",{alt:"Medium importance",src:a(1886).A,width:"200",height:"52"})),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Is Prisma doing better?:")," Yes!"),(0,i.yg)("h2",{id:"5-documentation-and-developer-experience"},"5. Documentation and developer-experience"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udc81\u200d\u2642\ufe0f What is it about:"),' TypeORM and Sequelize documentation is mediocre, though TypeORM is a little better. Based on my personal experience they do get a little better over the years, but still by no means do they deserve to be called "good" or "great". For example, if you seek to learn about \'raw queries\' - Sequelize offers ',(0,i.yg)("a",{parentName:"p",href:"https://sequelize.org/docs/v6/core-concepts/raw-queries/"},"a very short page")," on this matter; TypeORM's info is spread across multiple other pages. Looking to learn about pagination? I couldn't find any Sequelize documents; TypeORM has ",(0,i.yg)("a",{parentName:"p",href:"https://typeorm.io/select-query-builder#using-pagination"},"some short explanation"),", only 150 words"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83e\udd14 How Prisma is different:")," Prisma documentation rocks! 
See their documents on similar topics: ",(0,i.yg)("a",{parentName:"p",href:"https://www.prisma.io/docs/concepts/components/prisma-client/raw-database-access"},"raw queries")," and ",(0,i.yg)("a",{parentName:"p",href:"https://www.prisma.io/docs/concepts/components/prisma-client/pagination"},"pagination"),", thousands of words, and dozens of code examples. The writing itself is also great, feels like some professional writers were involved"),(0,i.yg)("p",null,(0,i.yg)("img",{alt:"Prisma docs are comprehensive",src:a(5457).A,width:"683",height:"458"})),(0,i.yg)("p",null,"The chart above shows how comprehensive Prisma's docs are (obviously this by itself doesn't prove quality)"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udcca How important:")," Great docs are a key to awareness and avoiding pitfalls"),(0,i.yg)("p",null,(0,i.yg)("img",{alt:"Medium importance",src:a(1886).A,width:"200",height:"52"})),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Is Prisma doing better?:")," You bet"),(0,i.yg)("h2",{id:"6-observability-metrics-and-tracing"},"6. Observability, metrics, and tracing"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udc81\u200d\u2642\ufe0f What is it about:")," Good chances are (say about 99.9%) that you'll find yourself diagnosing slow queries in production or any other DB-related quirks. What can you expect from traditional ORMs in terms of observability? Mostly logging. ",(0,i.yg)("a",{parentName:"p",href:"https://sequelize.org/api/v7/interfaces/queryoptions#benchmark"},"Sequelize provides both logging")," of query duration and programmatic access to the connection pool state ({size,available,using,waiting}). ",(0,i.yg)("a",{parentName:"p",href:"https://orkhan.gitbook.io/typeorm/docs/logging"},"TypeORM provides only logging")," of queries that surpass a pre-defined duration threshold. This is better than nothing, but assuming you don't read production logs 24/7, you'd probably need more than logging - an alert to fire when things seem faulty. To achieve this, it's your responsibility to bridge this info to your preferred monitoring system. Another downside of logging here is verbosity - we need to emit tons of information to the logs when all we really care for is the average duration. Metrics can serve this purpose much better, as we're about to see soon with Prisma"),(0,i.yg)("p",null,"What if you need to dig into which specific part of the query is slow? Unfortunately, there is no breakdown of the query phases' duration - it's left to you as a black box"),(0,i.yg)("pre",null,(0,i.yg)("code",{parentName:"pre",className:"language-javascript"},"// Sequelize - logging various DB information\nconst { Sequelize } = require('sequelize');\n\nconst sequelize = new Sequelize('db', 'user', 'password', {\n  benchmark: true, // pass each query duration to the logging callback\n  logging: (sql, durationMs) => console.info(durationMs + 'ms - ' + sql),\n});\n")),(0,i.yg)("p",null,(0,i.yg)("img",{alt:"Logging query duration",src:a(2944).A,width:"1694",height:"130"}),"\nLogging each query in order to realize trends and anomalies in the monitoring system"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83e\udd14 How Prisma is different:")," Since Prisma also targets enterprises, it must bring strong ops capabilities. Beautifully, it packs support for both ",(0,i.yg)("a",{parentName:"p",href:"https://www.prisma.io/docs/concepts/components/prisma-client/metrics"},"metrics")," and ",(0,i.yg)("a",{parentName:"p",href:"https://www.prisma.io/docs/concepts/components/prisma-client/opentelemetry-tracing"},"open telemetry tracing"),"! For metrics, it generates custom JSON with metric keys and values so anyone can adapt this to any monitoring system (e.g., CloudWatch, statsD, etc). 
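"),(0,i.yg)("p",null,"Pulling these metrics programmatically is a one-liner. A minimal sketch, assuming the 'metrics' preview feature is enabled in the generator block of the Prisma schema:"),(0,i.yg)("pre",null,(0,i.yg)("code",{parentName:"pre",className:"language-javascript"},"// A hedged sketch - requires previewFeatures = [\"metrics\"] in the generator block\nconst metrics = await prisma.$metrics.json();\n// Ship the counters/gauges/histograms to your monitoring system of choice\nconsole.log(metrics.counters, metrics.histograms);\n")),(0,i.yg)("p",null,"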
On top of this, it produces out-of-the-box metrics in ",(0,i.yg)("a",{parentName:"p",href:"https://prometheus.io/"},"Prometheus")," format (one of the most popular monitoring platforms). For example, the metric 'prisma_client_queries_duration_histogram_ms' provides the average query duration in the system over time. What is even more impressive is the support for open-tracing - it feeds your OpenTelemetry collector with spans that describe the various phases of every query. For example, it might help you realize where the bottleneck in the query pipeline is: Is it the DB connection, the query itself or the serialization?"),(0,i.yg)("p",null,(0,i.yg)("img",{alt:"prisma tracing",src:a(635).A,width:"975",height:"261"}),"\nPrisma visualizes the various query phases' duration with open-telemetry "),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Is Prisma doing better?:")," Definitely"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udcca How important:")," It goes without saying how impactful observability is; however, filling this gap in other ORMs will demand no more than a few days"),(0,i.yg)("p",null,(0,i.yg)("img",{alt:"Medium importance",src:a(6574).A,width:"200",height:"52"})),(0,i.yg)("h2",{id:"7-continuity---will-it-be-here-with-us-in-20242025"},"7. Continuity - will it be here with us in 2024/2025"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udc81\u200d\u2642\ufe0f What is it about:")," We live quite peacefully with the risk of one of our dependencies disappearing. With ORMs though, this risk demands special attention because our buy-in is higher (i.e., harder to replace) and maintaining them has proven to be harder. Just look at a handful of successful ORMs in the past: objection.js, waterline, bookshelf - all of these respectable projects had 0 commits in the past month. The single maintainer of objection.js ",(0,i.yg)("a",{parentName:"p",href:"https://github.com/Vincit/objection.js/issues/2335"},"announced that he won't work on the project anymore"),". This high churn rate is not surprising given the huge amount of moving parts to maintain, the gazillion corner cases and the modest 'budget' OSS projects live with. Looking at OpenCollective shows that ",(0,i.yg)("a",{parentName:"p",href:"https://opencollective.com/sequelize#category-BUDGET"},"Sequelize")," and ",(0,i.yg)("a",{parentName:"p",href:"https://opencollective.com/typeorm"},"TypeORM")," are funded with ~1500$ a month on average. This is barely enough to cover a daily Starbucks cappuccino and croissant (6.95$ x 365) for 5 maintainers. Nothing contrasts this model more than a startup company that just raised its series B - Prisma is ",(0,i.yg)("a",{parentName:"p",href:"https://www.prisma.io/blog/series-b-announcement-v8t12ksi6x#:~:text=We%20are%20excited%20to%20announce,teams%20%26%20organizations%20in%20this%20article."},"funded with 40,000,000$ (40 millions)")," and recruited 80 people! Shouldn't this inspire us with high confidence about their continuity? I'll surprisingly suggest that quite the opposite is true"),(0,i.yg)("p",null,"See, an OSS ORM has to go over one huge hump, but a startup company must pass through TWO. The OSS project will struggle to achieve the critical mass of features, including some high technical barriers (e.g., TypeScript support, ESM). This typically lasts years, but once it does - a project can focus mostly on maintenance and step out of the danger zone. The good news for TypeORM and Sequelize is that they already did! 
Both struggled to keep their heads above the water; there were rumors in the past that ",(0,i.yg)("a",{parentName:"p",href:"https://github.com/typeorm/typeorm/issues/3267"},"TypeORM is not maintained anymore"),", but they managed to go through this hump. I counted: both projects had approximately ~2000 PRs in the past 3 years! Going with ",(0,i.yg)("a",{parentName:"p",href:"https://repo-tracker.com/r/gh/sequelize/sequelize"},"repo-tracker"),", each sees multiple commits every week. They both have vibrant traction, and the majority of features you would expect from an ORM. TypeORM even supports beyond-the-basics features like multi data source and caching. It's unlikely that now, once they reached the promised land - they will fade away. It might happen, there is no guarantee in the OSS galaxy, but the risk is low"),(0,i.yg)("p",null,(0,i.yg)("img",{alt:"One hump",src:a(7365).A,width:"926",height:"613"})),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83e\udd14 How Prisma is different:")," Prisma lags a little behind in terms of features, but with a budget of 40M$ - there are good reasons to believe that they will pass the first hump, achieving a critical mass of features. I'm more concerned with the second hump - showing revenues in 2 years or saying goodbye. As a company that is backed by venture capital - the model is clear and cruel: In order to secure their next round, series B or C (depending on whether the seed is counted), there must be a viable and proven business model. How do you 'sell' an ORM? Prisma experiments with multiple products, none of which is mature yet or being paid for. How big is this risk? According to ",(0,i.yg)("a",{parentName:"p",href:"https://spdload.com/blog/startup-success-rate/"},"these startup success statistics"),', "About 65% of the Series A startups get series B, while 35% of the companies that get series A fail.". Since Prisma already gained a lot of love and adoption from the community, their success chances are higher than the average round A/B company, but even a 10% or 20% chance of fading away is concerning'),(0,i.yg)("blockquote",null,(0,i.yg)("p",{parentName:"blockquote"},"This is terrifying news - companies happily choose a young commercial OSS product without realizing that there is a 10-30% chance that this product will disappear")),(0,i.yg)("p",null,(0,i.yg)("img",{alt:"Two humps",src:a(3656).A,width:"989",height:"531"})),(0,i.yg)("p",null,"Some startup companies that seek a viable business model do not shut their doors but rather change the product, the license or the free features. This is not my subjective business analysis; here are a few examples: ",(0,i.yg)("a",{parentName:"p",href:"https://techcrunch.com/2018/10/16/mongodb-switches-up-its-open-source-license/"},"MongoDB changed their license"),", which is why the majority had to host their MongoDB with a single vendor. ",(0,i.yg)("a",{parentName:"p",href:"https://techcrunch.com/2019/02/21/redis-labs-changes-its-open-source-license-again/"},"Redis did something similar"),". What are the chances of Prisma pivoting to another type of product? It actually already happened before: Prisma 1 was mostly about a GraphQL client and server, and ",(0,i.yg)("a",{parentName:"p",href:"https://github.com/prisma/prisma1"},"it's now retired")),(0,i.yg)("p",null,"It's just fair to mention the other potential path - most round B companies do succeed in qualifying for the next round, and when this happens even bigger money will be involved in building the 'Ferrari' of JavaScript ORMs. 
I'm surely crossing my fingers for these great people; at the same time, we have to be conscious about our choices"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83d\udcca How important:")," As important as having to re-code the entire DB layer of a big system"),(0,i.yg)("p",null,(0,i.yg)("img",{alt:"Medium importance",src:a(537).A,width:"200",height:"53"})),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"\ud83c\udfc6 Is Prisma doing better?:")," Quite the opposite"),(0,i.yg)("h2",{id:"closing---what-should-you-use-now"},"Closing - what should you use now?"),(0,i.yg)("p",null,"Before proposing my key takeaway - which ORM should be the primary choice - let's repeat the key learnings that were introduced here:"),(0,i.yg)("ol",null,(0,i.yg)("li",{parentName:"ol"},"\ud83e\udd47 Prisma deserves a medal for its awesome DX, documentation, observability support and end-to-end TypeScript coverage"),(0,i.yg)("li",{parentName:"ol"},"\ud83e\udd14 There are reasons to be concerned about Prisma's business continuity as a young startup without a viable business model. Also, Prisma's abstract client syntax might blind developers a little more than other ORMs"),(0,i.yg)("li",{parentName:"ol"},"\ud83c\udfa9 The contenders, TypeORM and Sequelize, have matured and are doing quite well: both have merged thousands of PRs in the past 3 years to become more stable, they keep introducing new releases (see ",(0,i.yg)("a",{parentName:"li",href:"https://repo-tracker.com/r/gh/sequelize/sequelize"},"repo-tracker"),"), and for now hold more features than Prisma. Also, both show solid performance (for an ORM). Hats off to the maintainers!")),(0,i.yg)("p",null,"Based on these observations, which should you pick? Which ORM will we use for ",(0,i.yg)("a",{parentName:"p",href:"https://github.com/practicajs/practica"},"practica.js"),"?"),(0,i.yg)("p",null,"Prisma is an excellent addition to the Node.js ORM family, but not the hassle-free one tool to rule them all. It's a mixed bag of many delicious candies and a few gotchas. Won't it grow to tick all the boxes? Maybe, but it's unlikely. Once built, it's too hard to dramatically change the syntax and engine performance. Then, while writing this article and speaking with the community, including some Prisma enthusiasts, I realized that it doesn't aim to be the can-do-everything 'Ferrari'. Its positioning seems to resemble more a convenient family car with a solid engine and awesome user experience. In other words, it probably aims for the enterprise space where there is mostly demand for great DX, OK performance, and business-class support"),(0,i.yg)("p",null,"At the end of this journey I see no dominant flawless 'Ferrari' ORM. I should probably change my perspective: Building an ORM for the hectic modern JavaScript ecosystem is 10x harder than building a Java ORM back in 2001. There is no stain on the shirt; it's cool JavaScript swag. I learned to accept what we have: a rich set of features, tolerable performance, good enough for many systems. Need more? Don't use an ORM. 
Nothing is going to change dramatically; it's now as good as it can be"),(0,i.yg)("h3",{id:"when-will-it-shine"},"When will it shine?"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"Surely use Prisma under these scenarios -")," If your data needs are rather simple; when time-to-market concerns take precedence over data processing accuracy; when the DB is relatively small; if you're a mobile/frontend developer who is taking her first steps in the backend world; when there is a need for business-class support; AND when Prisma's long-term business continuity risk is a non-issue for you"),(0,i.yg)("p",null,(0,i.yg)("strong",{parentName:"p"},"I'd probably prefer other options under these conditions -")," If the DB layer performance is a major concern; if you're a savvy backend developer with solid SQL capabilities; when there is a need for fine-grained control over the data layer. For all of these cases, Prisma might still work, but my primary choices would be using knex/TypeORM/Sequelize with a data-mapper style"),(0,i.yg)("p",null,"Consequently, we love Prisma and add it behind a flag (--orm=prisma) to Practica.js. At the same time, until some clouds disappear, Sequelize will remain our default ORM"),(0,i.yg)("h2",{id:"some-of-my-other-articles"},"Some of my other articles"),(0,i.yg)("ul",null,(0,i.yg)("li",{parentName:"ul"},(0,i.yg)("a",{parentName:"li",href:"https://github.com/testjavascript/nodejs-integration-tests-best-practices"},"Book: Node.js testing best practices")),(0,i.yg)("li",{parentName:"ul"},(0,i.yg)("a",{parentName:"li",href:"https://github.com/goldbergyoni/javascript-testing-best-practices"},"Book: JavaScript testing best practices")),(0,i.yg)("li",{parentName:"ul"},(0,i.yg)("a",{parentName:"li",href:"https://practica.dev/blog/popular-nodejs-pattern-and-tools-to-reconsider"},"Popular Node.js patterns and tools to re-consider")),(0,i.yg)("li",{parentName:"ul"},(0,i.yg)("a",{parentName:"li",href:"https://github.com/practicajs/practica"},"Practica.js - A Node.js starter")),(0,i.yg)("li",{parentName:"ul"},(0,i.yg)("a",{parentName:"li",href:"https://github.com/goldbergyoni/nodebestpractices"},"Node.js best practices"))))}d.isMDXComponent=!0},5457:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/count-docs-71e2e829f7c59b9d652603c03c373dea.png"},1886:(e,t,a)=>{a.d(t,{A:()=>n});const 
n="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAMgAAAA0CAYAAADPCHf8AAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAAhGVYSWZNTQAqAAAACAAFARIAAwAAAAEAAQAAARoABQAAAAEAAABKARsABQAAAAEAAABSASgAAwAAAAEAAgAAh2kABAAAAAEAAABaAAAAAAAAAE0AAAABAAAATQAAAAEAA6ABAAMAAAABAAEAAKACAAQAAAABAAAAyKADAAQAAAABAAAANAAAAABYRLIqAAAACXBIWXMAAAvXAAAL1wElddLwAAACy2lUWHRYTUw6Y29tLmFkb2JlLnhtcAAAAAAAPHg6eG1wbWV0YSB4bWxuczp4PSJhZG9iZTpuczptZXRhLyIgeDp4bXB0az0iWE1QIENvcmUgNi4wLjAiPgogICA8cmRmOlJERiB4bWxuczpyZGY9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkvMDIvMjItcmRmLXN5bnRheC1ucyMiPgogICAgICA8cmRmOkRlc2NyaXB0aW9uIHJkZjphYm91dD0iIgogICAgICAgICAgICB4bWxuczp0aWZmPSJodHRwOi8vbnMuYWRvYmUuY29tL3RpZmYvMS4wLyIKICAgICAgICAgICAgeG1sbnM6ZXhpZj0iaHR0cDovL25zLmFkb2JlLmNvbS9leGlmLzEuMC8iPgogICAgICAgICA8dGlmZjpZUmVzb2x1dGlvbj43NzwvdGlmZjpZUmVzb2x1dGlvbj4KICAgICAgICAgPHRpZmY6UmVzb2x1dGlvblVuaXQ+MjwvdGlmZjpSZXNvbHV0aW9uVW5pdD4KICAgICAgICAgPHRpZmY6WFJlc29sdXRpb24+Nzc8L3RpZmY6WFJlc29sdXRpb24+CiAgICAgICAgIDx0aWZmOk9yaWVudGF0aW9uPjE8L3RpZmY6T3JpZW50YXRpb24+CiAgICAgICAgIDxleGlmOlBpeGVsWERpbWVuc2lvbj4xMDI0PC9leGlmOlBpeGVsWERpbWVuc2lvbj4KICAgICAgICAgPGV4aWY6Q29sb3JTcGFjZT4xPC9leGlmOkNvbG9yU3BhY2U+CiAgICAgICAgIDxleGlmOlBpeGVsWURpbWVuc2lvbj41NzY8L2V4aWY6UGl4ZWxZRGltZW5zaW9uPgogICAgICA8L3JkZjpEZXNjcmlwdGlvbj4KICAgPC9yZGY6UkRGPgo8L3g6eG1wbWV0YT4KdfdJeQAAF9hJREFUeAHtXQl4VFWWPpWq7AmEbCQQlohsCgjK4tJgu8BIi2uzaM+nM2qPy+hMjzPO93XP9EyPtto92DI42vq12tiujagoMAIqKrJ8KogLoKJsCUtEspOFLJXU/P95dSuvygqmKlVFQdfJ9+q9d5dzzz33nHvPvffcF4cHIAlIcCBGHKC4dXZ2SlJSkjgcjhiVGn4xjoSChM+8RM7QOEDlsCtF4Hto2GKTOik2xSRK+UvngFGGvXv3yoYNG6SiokKVheHxDAkFiefWOQlpe+2112Tq1KmqJKxeQkFOwkZOVCl8DmRlZWnmtLQ0vdtNrvCxRi9nYgSJHm8TmINwoKOjQ0M5UT8RIKEgJ0IrnQQ0GlOqu3u8VjGhIPHaMicZXVzWJbhcLr+7CdfAOPyxqI1DwhIknVwcqK6uFqfTKfX19Vqx2tpa4UXo16+f3uPxJzGCxGOrnEQ0GZNq9erVqgjPPvus1m7BggWSm5srmzdv1vd4nZPExUahYeLxlovjubGruwFxsCVg9rZJinkOp11Mft01x8hx8MABKRk0SFENLCnR9ykTJsiqNWukHxSlE5N3P3PLIAhWuOETCTxWOnve7tIZXEyr+BBAQfAKQ1yMIFzqi4fL4hC5FPsLHIgPHniFQ5dfjaCEcTf5k6AcBCrFn154Xp8H9O+v91/Nn6/KwRdNZy8nySuk9jDzzDgTb+4mrrt7d+kMLhPPuRJxeDcwXe72djl69Kg2jlIdwx8qL3uNOtiiR44cESee7QodC1KoCuzl+hXlSEZWGnoyLD+SQTEECpO7o12+bdiPUsmB2JbPIj2Qi5QWkfwKlB3h4js7PZKWmiqTUrOVq5u3bJGpQ4bJaW6HNHz0mbjd7eweQIND0lrbJL0VhMS4DZTtTjDhKMoeM1pkcIkqiasdCnIEEydqcMxNHWipKzlZyuF+sGfXLknF5lGsbVEKZ3tru5x+3kjJT8kVd7s75p1FksMpR9sbZUvFG9Lp6fSWH6yroOQGC1e5C/vH4QEPXB7JrXbIuDewbhMFu6IWnVDfzAxZOO96+acXn5FbLrpEXB9ulcPonJOSnOJAvd1Y4cqtqpb0sgPiSUkWh+6VRFhbg3GJRbCszHSRJdtE1j/YpSDUVCoHe/KYKwjocqDcZChJenq6pKCXUQWJVe9B5zn0Wq4kl7icyRi1wQcwS82DkHryQKEN1qiBadhSTOdBmUnidLgkzdUHbWBXkO7wWPlMfutOfCzDnifw3Z7GxOEOBUl2eiQ1xSHOHHSUEVYQUuRAMQ4owrmnjJBZYyfIhEFDxZORKs60FIqgxjvRgycJNhI72wVLXiQ2dkCTKjVF5NJTBQLpK1eXeakYx0M5tExv2VQMXibMR2G0H9Bx+MrVsig4BuzPJqwn957mA99VqC3R9ng6BFyAsNiFvCfl9S4NhZf72h6U6+lA2T0lv4fFEh1r1I72LcjKlvuunCeZ6Aw73B1d6kw5ID6ESZsbGyZ4MxkZHm2AGQgbHyZWGxnhKy3CfYUPb+gPsRo1glIWW4G0kxBYcuC7Pe3J8MzROTfT8sfqtj5kgg4r3aaIfIQyHj8Bchg/ChL5KicwxikH3F5/rDglz4+siCsITSSdR/gVc/K9sJ5qDgapmsZxyA4DuOJjoDv8Jj6e793xx14n1tT+fqz6dNrMnmOli3RcRBWElaU7gXFljjSx8YKP9UzBKgv9igIbWHngcmLBIeU7ccemH+KC4T0llRNEy9ByJWN1x3o8dtY4jE0Bb1xcGbXRxudkE05ZQeWYpieQhHyGLz1JH6k0EVUQKkdjY6Ps3LkzROGIVHWij8d0AuVl+6WmusZPSRhHpamrrcfS9T4892TpHBNj1Q2HdGCJ+cDuQ74RuL66ERNZrmpFv16RLCEJBJdVV0pNU6Nvb4vK4cRq3cHaajnccASrZi5pamuVuuYm1A888BJg7nZ6OsCgXcjX1onJuy2tPU20niOmIDSruFy7Y8cOOfucc1RRqDCBPazd/GKcYYg9XXfP0WJCKHjZmFwSX/Cbh+VPTzyvz8xvaHZCKf789BJZOP8RhFq74351hgll0lIzrDgs9TodUvVtnfzsst/ovkxjfbPcNO2XUrGvEntFLp/S+Ggl43zM6yqf8R5bGSzLV54vc3QeDDk0h+5e9pKs37lDkrG0a8pPwhr6k+vekWWfbpY0yMriDzfK9U89Ku10MwFf7WYU8+
g7eN3Y3ibDlzwlZQ1wdPRuR9jTRqc2FtaIKYgh0u9ADCptwDCJ5hcVR00xLPWRMQT2vNb+A5fAaVpY4Xz289ExCI/jnUKdX5ArD973kJSX7YO5laICzPvB/RVy33/Ml9z8XDG8SE1L1fqxHqnpqLO3kbkHxDhulHHDlCMOgcufGdnp8qtFt0teYV/d3Wec4QnTUKF4medkKJHuIwBnMsy05BQX8MKE8Yb7lElzRO+HNHKz0wU6OsAnCrLV9lZnwGZFkLg7O+TC0WPknsvnaFqmS4UMJKO9OUrQpNJ2Z2IvUNmwmaLhlsllYqJ3132QSKI3jah3W+WMULy7dq3wXHJra6tc+qMfycyZM6UJu6mLFy+WOXPmqLCthOfnaaNGyYgRI2T9xo3qpjBh/Hjhrr/BH0maQ8XFBudoSXjj/96Sv7vjRqWLtL339noNT/XOQZhuy4cfy3NPvSg1VTVy9TVXyF9dOh35XXCx6JC1a96RV/68XEqG9Ze0og7Jzk1TAWIZhzB6lI4cKC1Ym1+9eL1ccMUU6ZObpWV9tHa7KsjkC8fJx+u/kIN7vpXaynqpq26QGXPPk8qKGnn31Q9l/A9Gy/mXT5KMzDQorNmEVBKj9kO3EQo8hT0V9eTmp26GQmlc3hGFbkXNMLEOw8UoqcQalXd+c1CWfbAe9d4nFww5ReYOGyk56RmKa1hGtiwv2yVlW7fIEYwot5x2hkzuP8DStqjVJCpOBV3UGmE2AvXmW2/J7NmzZdzYsXL+tGnytzfcIEuXLtWJ2kuvviqHDh1S0+wPTzwhX2Ee09LSIvf/9reqTAZXF/bj90RaGhub5G9u/mtZ+uJy2bNzr2RlZ8mB/QfltZdWyI23XS8NRxolFSPk1k+3y6wLZkMpLpK/v/NmufeX82Xl8tWSnpEua1a/I9f9+Kdy7rTJMnjQEPndnYukoaZFnK4kaWlqkT/81xJpbGiW1pY2ee7BFdLcQLcMa9TYua1MvtyyS5mwc2u5/PG+V6RocIEUFOfKz+ctkA2rPlbFWHT/UtmIZ+M0GAuuUTm417F400ZZ+NZKeejtVbJwzeu4r5RFH6zTkYLm52f7y+Xhd9+AwidJWVWlXLDgXnGCr9eOHiu/3vKB/OOGNdLpdusIs7u5QZbs2iFTi0tkRN9+MmX5Yvmypkp33FletCDiI4ghlGYIL5oZ7EUbGhrknnvukccefVTmzZunZlRhYaHMxfObGDHmQnG+xPxl8ODBiqKurk727N2rLijDhw+HXc293vgAKkhVZbVcNfsyGTV6hLzwzBK5/8G75fVlq+XiSy6UwUNLZMWrq5TY1SvelGuuny1DTxmifODzYwsfh8JcLP/7u8fksacfkrnX/hgT1iPSlP6N3H3T763RCD0sgSMve2QC3WLY2VAe0jM4IsBvDH+cc1zx0wtk5rXT5PDBannxkVUy+5YZcvrE4dKAucyXH++R6XPOU7yKKAY/FFkXJuIcRdS/DHRyxSoVbj0gBLEeSU9OkdK8AjWz3/tym5w7YZL8Yvplkld/RCYWFMnAFx6Xn084W07tk6MUPzz1YjlzAOQDI8g2LAK8V7FfRucXerkTnUpFTUFoj/Mi8E4F2b17t5SWlmqjU3EGec8HtMHcmnjWWfLkokUyCG7Rt992m343aSlGlctnzZKcvn3hgQC3BGVsdBgREla0L88vUAguvXKmjCudLDNmXiQvPvuyPLd0kWz9ZJsKMjuI5uajUnm4SpYvfV2qYWIVFOTLz/71dmmD1+rWj7fJKacOVcF3o6csHJinZGg97b2ipR9aHudkjIKaoAPCA+KYNysbpoh2SFZHwmXmdrhsWHMQL4KQKhl+YtJfixWs2WdNkbkTz4EjZpvPxKo8Ui+teDdmGBWeClSJla3zSgZLOr92gs5nAEagktR0qW456lOQLJq18Pylgp2KUeQonwlRrF7EJ+kUGgKPU/KYZU1NjVRVVUlmZqZcf9118uxzz2l4U1OTLFu2TKadf770hQIUFxVpA7/97rtyJg7S5Oflycb335czxo3TCS4ZGTcAUqj0XNIeNGSg/PqB/5Qrp8+TOT+5WoaWDoZZ1KSdABceho8cpqPlHXfeKg88fL/M/slVMuW8yZKTmyM33HqdPP3k86o4NKPWrvhARk0cKmmZ8FOi2z2A7vcZWfAyBXy+6WudwFd9UyPvLP0AE35rr0WFjcpiA44qFFSOvP4xtkRRfOTiC1enWqG8bbzgzt8GgXarJWAt61KuOVnnfOR0KMcDK16R3Ycq1Cdq1b49cqD1qKSyQ/DSyeVeow3062K9ow0RH0HMxs/ESZP8aH8b84+77rpLxp1xhqxcuVKKi4vliy++kHXr1kkGPHnpzXv2lCkqeAX5+RpPBCUYUdgzxhNQ8NhrqxCikWZgfkFT6pJZ0zEaWKssXInqQOPPwgiz+f0tMv6Us+Wqay6XZ558QV5e9bwMHFQst9xxk9x4za1yxqhJMmLoSPn0089k4fJf6ATedAjEkw6F+bfHbpb7b3tc1r3+EZTGI9+UVVmrfSifadXxD/Ji5igcYUCmKkkrJvnRFyVvC4EWKkcrzD/eeZFf/KO5yLYkbQynuHP+QOvgh1jR+ofzp8u4xxfK+Pz+cmbfXEVIRTDAjUUDxOLGyKPQpTcmOmJ3R3Nzs6cWvbwuqfUSLRkBfFJeXt4l1AijOUJB55yDo0pZWZnG08SionyNuceB/fv14BYn5v1x4oy9cz1WOIq8p896SVq32Ulze6tbRk4eJnnF/Xp0HoS84oScSp1fkKfzLJ4jMUuqtTV1OkkvwejC1RvWaffXe2BmNuLY6UApLilS5eLcrB72dtnucjiRNkt9erlk903X0YMjx2GsROX1z5GUNJgWEAK+Vx+qlVws/XK/hUu/eTjoVfkNPn4AQcsv6qdm1eGDNdKvsI+OPHVVR6SluVXNN6N0gcygN68bXWVOrUNOew9Lqb20Kyj8+7Gxl5mappN1LvcSuFFYUYfNVYwKxTCRDmNfgytZQzAPqYLptQ9XYUur9K+plaPQhdNeflq2XX2djME84ytMyIdm99URhZpf0digSlYEU4z16lKdwNr18B04sFwqsqNc5N5/FjkbHTzojqiCkBQKj1kCNaSpEGKJlsu0jDN7Hnxn5cr27JGyvXslIyNDszDc7H+wp44mhKMgFMZkmFjsDUkfcfBiXXixfuSDqR+faZIhie4NtLdZtjPTUtBTU1JhpzfJprI3ffhYZyocRwaOVJQAup4Ql+WvhbIQzqVipiO42zlP8+ZDp0Ql0/0TTO4Z1x1EWkFYjjU5B30UMm/BqIW1z4F60/wy1gY7ka37y2TmI/Pl4YsulbFQrH95f52cU1gk//ODC3VpmPsfqLB2BMTDkZpAE9Lg14Bwf0BTMAWJuInFyXcwoaYAsXEZR8ExYJSJ8RxqtcJ4Jh5eDI87AE3cxyFthj4qi3lnHSn8rC/DGMdjzQYYTmBcB9I2w52kpb3FZyaZdDzpqPi9LOBIp8JhEiCc8Ry9CJoWd5NP46BAzMQVsFgC5x4s0dDEsvnejnB9Jt1sb
4Y7OmUs5iBv3/nv8vKGtbIB+yF3jBkvc3C4SvdNwD9M5XXE0Lz4oVUSiJ9xkYaIKwgZYmdKIMH2eAqRHbQJkZ9wLBz2PMfr2Qi5Kd9Or72OjA98N3lMHHtQo0y0zw0ECjXfLe6YFNbdXjZD7Pk0LlgmfxQRf6OZFQzstCpfvImoLCMGDJL/njFLsisOWaf7qECQEXs6g7M7/CY+UnerK4sUtgSeBAfC5ADVqZWrXFAKDnncIIyY+RQmTcwW8RGkF7Qksv6Fc6C70eV4siUxghxP7ifK/g4Hghtm30kWs4CEgsSM1YmCTkQOJBTkRGy1BM0x40BCQWLG6kRBJyIHEgpyIrZaguaYcSChIMrqrr2HmHHeW1BgyYHvsabneJd3/OvvT4Eu8+pGDDZ2Ajfuos0sXdbjhhIurl7Yl/miXbbBb+reta8V23UUs/Vn6s+vKiof/NvJkBv5u7e6Wj5agWXH+MOOvjpZbYE+2+uBEHRX1Jc6wg/kN4UAfmJ24HlIdYVgYKwVhGXyA8XqnoINIicuumXEErjrzM0p0sCyO+GBqwIaUyLoowUXHbiEWzvpXqkNlQZ6JnRpeo9zqy8Wftz47GhHO/jPs+7hKKiWz2JDp5++vfTs7WzDpz/hsNgJ/zKH1wetxxVhQtCtu++hutZoWaAbJzghBL4awF/M+o4VXR1CVRDtbciUXgDL51mR7OxsdQDk7mlIAIGg20G4isU6tCe74SiZicM66eJ2hvl1dwpmGLygQvCj2fwWbXZGPwgJfYwoLpaYmTt5Evhs5xNFkg585J9pkUAxNeEmH+MNzg58vDorK0kycnECEKGsjg+RyfB9d/qYkQch8oEjFpW0A/nTcdyYBRu3/e8r0i+elaE7DonneRrWoadAmuFQKqfCTxBHpYmK2R34TI/nq6++kpycnC5PUiZWDgXB7o2jMrHXpeeqgj2P/TkICmUg8BMHhZuHp4grVCUlI6gYbeh1+LWUUBVcSWM90Xu40tBR4IsjJL3nYNiI9sD5Byp7aK3CkixfIzfyN7U2KA6rHsFat6s85rPK6rq34P9qpMET9vtpMHlYPgHv5EMbzp404QvvqEeovGRb0EmTefkcUn5WlSRAQdyQhTTkd7qSgYNCHoQPQcg3fKQc0QGU/ylA29KePVg+E09ZBO2tOGZRMmG8DMfFOrh4PoNfD+F5DeOhqjw7xg8Fmd6p69evlxkzZoTde7MIMnTDhg0yafJk9GBZXUp6jPIZxbrSXZrnRrZ//rlMQf5wRxHi++STT2UsDu1Q4RVPsIZhQj/gOQT8gR/bt28Xnp1PpRt8D7WMbcO0dIXneZgWmBaj8DUX8rbn/0yInQw+sQPB2LRps0w8c6LWoacCatHQiQ4mXbZt2yZJA134osxoabbRYGTIVN3IWdcdgoTz53oaFF+f4f8ebIPHNjs/k8ael8/EyTiC0oCOjp3c+5s+lAFDS6WwoEA7PsutPRgWf6qoTCk4416J06sHKypkLE6iqte4tmNgfqtcOwWqDDh6wDM7VYe/1QQaRnfzIhx3pXDy6inQJMrDsViOPL0FMpS4KCihApnaH8zs06dPqFn90ufn5+HwU75fWCgvhYUFUti/QBUmlHwmbQHyN+Mrg/wYA69woLCoQPLwPa5wITcvB58GTZYM8JRXqJCHNihAW2bAZA4XBuQXSDFkIZNng7zng0LB5UT5HhxF6BNGXpaTjRG4yfvfd/nuopZw5CD01MzhCMI8PD1ICKfnNsOwGY1oJlFBDA3ES9qYLhAYTjB0NGJYJjDcxAXLZ9KYsk0a5uGpP/Y47DCs+gT2cYG9kEUDcfBifn6IgZ/68eeHHY+SiR8rjPmYlqMo+dmK/ARjqnir2WVlBCFB5zBoD/KNo2lgHfzYZ/J778TPeEMDR7COZGsO6KNBKer+R2sCRNqO4EEr2pEKQnqs3r/7vCaGOAwNbMsW8CITnbWRBZL7fcC5F/nIvE12ufRjQHAs9jq0gH7Lo9hK6xozZozvBCALIBjBsZIE/6Upwl6fQOaEA6YcfrTBlE1cJtzcA3HbwymQQ4YM0SQMt8cF5uO7iTd3E8aFAjsNVl6yzoD9mWH+7xzJDB/M3eQMTGuFW/lNWtM5MM7w4Dtt61+koqWJZ/KUlpYGqYNGWz8mv/du8BsaeKLTzCl9NNiyd/doeMn/d24OwJn83eUJDDc00NQ3J0sNDkN2YB77u8crg8xLi4hgcNrTfd8zTVvKFIH16tW/gabWh0NEIJGRwhOIN5R39lZGQULJZ9L2Nr8Z/SLBT0NTqPfe0sB2VKEymhcqAXGQnjwwoxnJwTZEp8f0AHFAX1gksFIneh3CqnhAppOBD/FWh16NIAHtk3hNcOCk48D/A1KW695yKUPuAAAAAElFTkSuQmCC"},537:(e,t,a)=>{a.d(t,{A:()=>n});const 
n="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAMgAAAA1CAYAAAAEVKRZAAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAAhGVYSWZNTQAqAAAACAAFARIAAwAAAAEAAQAAARoABQAAAAEAAABKARsABQAAAAEAAABSASgAAwAAAAEAAgAAh2kABAAAAAEAAABaAAAAAAAAAE0AAAABAAAATQAAAAEAA6ABAAMAAAABAAEAAKACAAQAAAABAAAAyKADAAQAAAABAAAANQAAAABlJJuaAAAACXBIWXMAAAvXAAAL1wElddLwAAACy2lUWHRYTUw6Y29tLmFkb2JlLnhtcAAAAAAAPHg6eG1wbWV0YSB4bWxuczp4PSJhZG9iZTpuczptZXRhLyIgeDp4bXB0az0iWE1QIENvcmUgNi4wLjAiPgogICA8cmRmOlJERiB4bWxuczpyZGY9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkvMDIvMjItcmRmLXN5bnRheC1ucyMiPgogICAgICA8cmRmOkRlc2NyaXB0aW9uIHJkZjphYm91dD0iIgogICAgICAgICAgICB4bWxuczp0aWZmPSJodHRwOi8vbnMuYWRvYmUuY29tL3RpZmYvMS4wLyIKICAgICAgICAgICAgeG1sbnM6ZXhpZj0iaHR0cDovL25zLmFkb2JlLmNvbS9leGlmLzEuMC8iPgogICAgICAgICA8dGlmZjpZUmVzb2x1dGlvbj43NzwvdGlmZjpZUmVzb2x1dGlvbj4KICAgICAgICAgPHRpZmY6UmVzb2x1dGlvblVuaXQ+MjwvdGlmZjpSZXNvbHV0aW9uVW5pdD4KICAgICAgICAgPHRpZmY6WFJlc29sdXRpb24+Nzc8L3RpZmY6WFJlc29sdXRpb24+CiAgICAgICAgIDx0aWZmOk9yaWVudGF0aW9uPjE8L3RpZmY6T3JpZW50YXRpb24+CiAgICAgICAgIDxleGlmOlBpeGVsWERpbWVuc2lvbj4xMDI0PC9leGlmOlBpeGVsWERpbWVuc2lvbj4KICAgICAgICAgPGV4aWY6Q29sb3JTcGFjZT4xPC9leGlmOkNvbG9yU3BhY2U+CiAgICAgICAgIDxleGlmOlBpeGVsWURpbWVuc2lvbj41NzY8L2V4aWY6UGl4ZWxZRGltZW5zaW9uPgogICAgICA8L3JkZjpEZXNjcmlwdGlvbj4KICAgPC9yZGY6UkRGPgo8L3g6eG1wbWV0YT4KdfdJeQAAF11JREFUeAHtXQl8ldWV/7/sIRvZFwga1kiAwLAoOlSWClK1xamoo79fp1pnRmxn+Ok47e/nyLSjU23p2N84OrU6KpVaUaG4gq3oWIW2bAJhkX1JgEBYs2/vJZnzP9+7yZfHy/KSvBfDfBfed+937z33nnPuOfeeu31xtYiD4xwOOBzwy4Ewv7FOpMMBhwPKAUdBHEG4rDnQ3Nys9G3atAn33nsvNmzYoO/dNZwcBbmsxcMhznDg5MmTWLZsGYqLix0FMUxxfIcDhgPR0dEajI2NNVHd8p0RpFtscjINdA4YU8v43aXHUZDucsrJNyA5YOYavn53iXEUpLuccvINOA5QKcLDwxXvyMhI9SMiItQPCwuDUZrOCLNyd5bDSXM4MAA5QOF3uVzYvHkzDh06hJ07dyoVXMWqq6tDXl4epk+frkrCfB05lxTkbBR2xB0nfsBygHMNjhI7i4pQOHGi0pGcnIyLFy9qmIozdepUmHwdEeqYWB1xxokf0BygcgAtmFBYiCefeEJpyR02TP1/XbJElYMvVj6N9vtwFMQvW5zIy4EDzbBMp7+64w4lx32xXP2Fd92lvrWF2DmlOgepralBQ0OD2mydZ+/bVFp31GCPx4MS2cDxuN1wqeb3oB5air62pD3OhI3PKrzh5qZmxCbEIGd45qUTN2OeGkOU775hk8egbU/XerwJBtae35s3zBWGkxVHUNVwEeGuCKnCFGIKDbIvOIU1uZB7wIXIRhda7DgGuWqr+BY0Cw9iRQ4Tq6subcse4OAiC4WO0eItnXsTvv/hGjw2ay7GbdoObNkJmV9ouhbNvBEyoa+QumddB+QLlJhpqiCNjY2gkoTJjD/UUxIqCOsvPnoU9TJ5Ig4quIp18B+uMBfcDR4kZyUhKTcm5PQbCsNc4ThecQBlNYcRGRYr6tGd/s1AB+pT+tsUkILULLZEhNuFpG3hiK4QBaFktGUJtIKA81NYPSKgLhHQxCPFlrBSgHvhOPnWOUZkFG4fnIHvS1l3pmQD73+C5oZGkTUhWusQfrQIv+NkE/HdPcAHYywFkfyqICyIgklh7Q8F4VJcdEyMsqIrm7AX/PILStrDwzyIiooBhdRqEj4pRKFwVo2sOzoiFrERSaIg0YJH74QjIMylqhYqiPjhCeEygolgST9FxSEWdk7wnY5xJs03TjN4HwbWnodJBrYtLEuyYSKD4QLRkC6I9E1HaeYQuWKd/PHm25E3eDCksWHivWgKQoJRjOy2z8oDvEvCTFMFUSQlQ6iVg/WZHzWdPwpsKPHgCNIi9bawB1FnmtL43uigezJmCA7NLU36C7mCCLnN8pPq0eKxxFo7V6HbHydMnPHJHnvYsMs3zv5uD0sDiFIIlEcQaPRI2LSHKal3fpio9LU5uWo2wS3l+zqtXxCoaxBC2jC7RJF84Zx3hwMh5wD109LRPq26pVmUr0vXvmJHQbpkmJPhcuGAqwdaFxQFMWbT5cJYf3SQRpqEHbnO0jqCYbzyzpuhRWyeUJqbneEVaFpHMmCPZ7jZZs50VgfztRk+neXs27SgKEiUTIL4C8S1H9gCgQx9XjZsuKy4xHgXFnwx4DwqJjYmwGVziwORUZEyP7UWS6JiIhEZJdPE/pAMX6ICeCe6kXLmiT+70/jwCERJPHkYKeFYWWHSeac9o4bbS0SYwDBfqF2fKgiJjpDVh40bN2LLli3gwTDG+TrGsYe1p9nDzG9/t4d9ywr1O3GhEJ8uLcP7b3/QTnaZxlU47im9s/J9VFVW62E5xvPnS7PBnfH8UQA+/2wXThWflVW9KOzfcRTHD51GeKSsruloYiDEF7ayTONMHcxnkFIYvofQEY8okYHPjx3B1mOHVR4MbozfXVqCDYf2I1pWik6Wn5fwPpmXN8kKoiyWCJ6KPnllo43hdw/vx+maapmbuNDUuqASfML6TEHIBNMTrF69GmvXrvWr8cxHJWLva0YZxnGp1ywzsxxz6pIsYLg/eg9/7Df4l548hfvuegCHDx5BdHRUq/BTeXbt2IO//9Y/oKKiQtfaKfykgaNKZKS907AUh3yIiZUlRpGQpYtfQemxMzqK/O719di1+YAufXLNPpxLoF7H1bcIbmwJDHnDEY0/jjpMo4uMjkREP4xAtPXX7SnCmp3bVYktwbfkY9PhQ3hpwyeiEGHYe6oUC//nadTKPhgVhKt4UUKnS3hFWVAlkXiPxH/jo/
dwtKpCl2DDZeShY7nBdu3HwD6qLTEpCbF+zA8KF4WhsqoKFy9cAG93ZWRkoEl6kJraWt0wjI+LU7++vh5xEqarrKzUvKHeROyIHUJGq/nw9sr3MGr0CG1Q5q+trsWK5W96Qa0la95mq6yoFJrLkZiUgJTUZKHRrTAREWE4c+Ysmj3NsvroxoSZOaoQbPyF989HdGykLL22oLaqThUgOpamawvcshTaUN+I+KRB8Ei4scEt8S7NNzg9UZXnbOkF9ZNS40WBQyFOFtmcLQwSmmPY8Ymg02nHIuF4ic9ISIS7yYNpeSPwh4eWIDEmVkaFFsSIYlyor0NN+UUkSdvHR7HTsPCePDhVOgKRBdllr3Q3YkhcgnYMLLet27Dq78tnn40gdqSaReDZa9odCeGZ/K1bt+IvZ8zA5ClTMLagAK+tWKHZtu/YgVWrVqkCHT58GA8+/DDcsrnDXvhffvhDVRIym+X0u5MWaZLjKXT/tfQX2L/3gNLG0aNo+y68+epqJIsSNMkRGtL85w2bkT9kEuZd9w0UDJuC9Z/8Sc000rL6jXcxcfg1+IvR1+LZH7+I4u0VKggcFT5bswV7PxczRUaBVS98iJ0b9yFCzC2OFCeOnMbP/2mZdqOlx85i8c1PyujzIh6Y9xhef2YN3nppHe7/6o9w38wl2L5hr45IoeZdlXRy5dLxldfWeP1aOUpTrx0irYgjZ8rw3//7ezQ0uVWRPtq9AyOW/hBDVy5DwqvP4dMTxWpScSTJih2Ef/7zpyhY/WvkrngRj25ejxpRFPIpmBIRlBFEJcf2MMpRVlaGG+fPx78//jgWLFig5/QX3HorsrOy9Hz+j598EnfcfjuOybmss+fO4WJ5OU6dOiUbnNFISEhotdNtRfdLkCYElXf23OsxS34v/3I5/uPZJ7QX//VLr+GnTz+OVW+8LcIcgbLTZ/DNG/8aK9e+imnTp2DLxs9x2/y78cWJbSg9cQrf+85DeO3tZbhqXD7eem8V1v7mM4GTUUOE4pSYWkkp8Tr/OF9WjvraxlZ6Pe4m7N50CDxH5pbRqOJCFR586ltIy0rB9+Y/jsIZY/D8x/+GHX/aiycWvYDlG3+C2LgYFc6gmqvSeVCg46T3X7puDT4+sAel0uuzJ04RIS85fw6LZsxRxa4WZVm5fTN+8s27cKjsNO5Z/gJ+s+BOzEtIxrqSo5i5dhVK7rwPufEJ2HXhHO4YmY83CqegQuZ4Y3+7HBNTM7Bw9FjZ2PSoorQypw8DQVMQNjBHEeNzznHg4EFF/e6770ZiYiJycnLwyCOPYP369bhhzhxkpadjh5zfP3X6NIbn5eGg5N+3fz++JkpFk61BbNWgNm63GWv1WeXlFbjl1q9hycOP4f7Ff4ua6hoxpaow58ZZ+MHiJTp6nCw5qaXuLtqDom271LRiRLmYW0XbduK+7/4Nvjp/tgr6jJumAI/J6CS7yaST5pTOMyS/jhycrAtfOfnmShd7YdoX+i7HNPInDReYaMy6dRrGXT0KV4zORnVlrdbvlt3jQZTS7uyVKUQPH8Ia4l4vHcjCSdOwSA4HchJOFy0m1Kqtm1BeVyN5rKPmmQmDdf6x8/gxzJlyDeaNn4TU02W4c3QBXj3wBbafK0OumGQlDXW4bcQY5CSlIEfGjP+cNgNbz5zCwlFXKXwPse0SLCgKQgZRIQYNGqTzCTYq382XJXg4kZNWnuKtkvmI8ErnG7d8/et4culS3LFwIb5zzz147vnntae+5aab2q1qdElVCDJQQLdt3oGMzDQ896un8fRPn1VBeOChv5PRLl4xIB903iRv4wvHYVD8IBHmZhl5voIhudLMwpeK8kpVCJpiTW4x2+Q+DyfXdIRXXybd7P3Lz1XqqKQrZTL/4GSc/6iuKdkJUo6MJjIXMXCcp1DZtAyWFUxbRGuxHsS6Sei8Mi0DBTlD1RzlvCRGaOTq1uZjlZKDmMuxKFVyly79FldX6wgnTBPZcKOkuhIer6k+LCZOShACZO5CFyl5IrzzG40I0qPPFYSNzkn3unXrMHbsWJl4evQYe5qMDqNGjkShXGB59NFHsWjRIhTJaPHMM8/g/ffe0wlr/mgeTIaaXCOGD8cJ+ZbRrJkzkZaWZjEuSEwIuFgRNo9X8CiEc+bNxKJvL8bV107B1Ksn4+yZc1pkQ30Dho/Kw1Xj83WecrOMNocOHNb0iZMLcd310/Hwdx9BwfixEr4Gr73yrsING5mlgl1XUy8nnBt1zlE4fQx+tvhl5A7PQowoy9M/WC6dh8caocXMOnuyXHtlKk3lxWqdxPPqgJkL6vKvV+ECprcHAHUyP6iXtm+QkYRCTrmgo1nFOQmdR45+HJelXsrLxCuG49Drv8IvP/kQC7KH4p2jB7GrqtxSIIEtqa+R/FKGl4ZaUaBK6WiNY+lWd2Ji+sYP/5G4BplM0aY2PU9Pijaw7N2q5ej8QbkHvHv3buzatUvvBZeKsM+94QbMF3Np3759eHPlSp1f/PyppzBp0iScKCnRCTqJpBJlZ2bq3KNwwgQMGzpUe2dTR0/w6wiGZdKOj4mLRtqQlI6ytcazobkkXSM0pqanYOr0yYiPj8PMG76C+bfMRXZOls4JOJGeMm0S0jPSMHP2DHz8+z/g5edewfatRapIucOGIiUlGfNuvgG/e/9DfPD2h3DFufGPS+9GSsZgFey66gZk5KQga1gaUjMHI/uKdLy17COcOXkeU2dPQOG1Y9Ss4gpWXFIsCqaMVGWqra5DusDlXJmButoGiQvDuGmjdNORWwh+9UQYHyZpmSViusl9kEuPu7ayoMtAhPTuF0UJMpMG46rsIZaSSqUcLagcKXHxmHRFHipF7rISkzBZlCM9PhFzx47HH4sPY83unZiZeyWqpYOZlJ6Bq1LT0SydwbVZQ5AqK15UhXKBTRGrpDDNusPTJ8ohVg3OyVLy7OlA7hAZcWWFTBq8pUImw2xwCndfOfYKpjwKlQnTtOI7N9P0mLtMwGtlpaNo2zY1x5hP+xrJQ0fCrZC+9vmDPa5H7oMkZSQif+rI1p4ukIpID5ev6bOjURq8+DOO5hN9Ll0zzH2TBrmPYMFFqqnZ5GnBoYvbca7uBFxN0lB626cNCyoy90Ia66V82Q8hH8ljCru3qrbM9hAZyAW3ziRIGKzH3WWleMJ6OXZfJWYbbYtgMt6LIyf0NL2OnDmNFUWf495RYzFBFGP7+bOY9s4K7L/t2xidmibL4J6gzjWUiTzuvq8YePxBYPo04Zv3wpQX1z712IC+zggQG5sCxeGf8xE6xjFdnfHlJQRtZNXZiydxbxSBpxC2o0PK5LsZnTkHsxTFur3JNCoKFSpKloi5V+GRntI6itEeIZpInmaPTtaZonMLqc/GqvYA5o0M7Ew5TL5+8rlB6BZFT5WJeJSM5JN/8TNcn52LT08dx29n34TRyTKqs7OVfP3hLpXiPsLC2L724igQ/NGx96MzI4tRhFYl0dQv94OTRmuqKTIoI5HdkQ5Dq/ENzead+c1oo/sqwgRNM8ywFyjFsy7yVetkdXbht4ftcAx3luabtx/eu
YOeEB2DB+fdjDvFzHIdOYqcWTfKZmCi4M7hr/9c0BTELgT+yGO6P2XoKN5fGf0dZ5TDHx7+6PcXR1jGa7/RXsf8FduqdJpoz28P+0J2luabtx/eyUcuBTfJHCVf9jYSG6TzlA6nReL6G/W+m3T0A2OdKi8fDhhFqJfVqRb58TSGietPKh0F6U/uO3W34wAVgnMNjqj9NedohxDx8Y1w3h0OfDk44G8iFnrMHAUJPc+dGgcQBxwFGUCN5aAaeg44ChJ6njs1DiAOOAoygBrr/xeqX4Y1rC/hJD3kUzOpMOR1DgRJ70+m8HiA+XXGK384Ms5ffGflME1h5MF6bU43CrlhZzbtjG/LE9Qg0bHXyX6jPYpBrZ7b063r7XY+BLnWS4pXmpUZ5ID8esOEAJnYLjt3LOXHs1m9wuESCruOaGG9rF/OnDXLz+c4WtcFaA7DOFIVgGNlPIsoh0zpDLQqiHWALrr12AMTWY3xDYCpWt9JjFfbuG5twkyjs8NaMW1xpnKWx6MWfOd1Wl4A4v0J37IMfDB8HhFpCZcTuvzggdzD8OlAAqvSMC0wKMkthznl27zhkfJhBrkKwo8z2Hnd7eIM8hSyQJxUJt9QQJjA8yaKfIinZ5/O7mn9UidR5tZgFAmXey1h3IEw5QVCC/MSLmAeCIzc/UejdQTKVKkKUlpaqh9SMKdOTWKnviDB07hUDl58Chghe+EsS050urh7Si0OkDHmPJe/81/2avyFdVNKpELOAeJ0sXU3gXEUWiHK5vtCM51O8gi+Fg7ySRq542DBW6ldP6Uc+a9018YhqjZbOgvpJAJQEcVSymB7sHPR81pEy1a5oYRR9rC+S8ZmiYwUobyQLcxIlN1sdqTEixl8HMs18SbMeiN4olnOivEuPnlg0gw43+lM/b5l8JRZRWYSyrPlPr+UZ9ItqK6fWp/Uq/dgerITL/xrHpaJ5EEx0IsPUlYEmbl7zx7k5eXpUezuCBmZwdO4/KPsPKrOi1E9+fsiLIdKef78eflmcAMKxo1rPfnaNTt4GlnuccidgL179+oV3my5wuuW08GBCCjL4Cnb48dP4MLJagwZImXwboz8Mw1qcDEN2/5dBCMiUr5MckYvhg2Vuyu8Gsyd4I7g7Q3PPFQut5zqrTnuQsEY+bt5glMg0iFsRJTw8Ysv9srlslRkypdieEra9wClwdvXN+1w7tx5HIstw/ipk72njO2Y+kK1vfMUMnm4S+7/pMifOSMPrVPK3YNnSaYt98hdofjEPAzJzg6QBn4eKlw7+uNy96ggP7/1xHMbph2HyENaMFXy7a3zcqJaDrvrKKoKwk/vTPT+HbeOi7g0hcLJT/KMGjXq0sQAYvjlEn7JZMSIEQFAtWXlKVn+/bnU1NS2yABD/AQRBSorMytASCs7L0Y1yi26nsKzFNY/On9kj+onkLulEZly0WywXFTqiUtNT0bUkXCMujKvJ+CoEgXLEeXIEuHuqXOJcCbLnyhIkVukPXFu6WhT5IMRo/OG9wRcD0juP3DAghWtUROLowDNJN7h6O4IwuGco4a5z2Fd3ul+j0EM2HOZcthr07F+HZ4lraORgHBMY53EmReR+KNjHHtkk0cj/TxMOusjDtrjSpl09jL8gLZGGTxZH3lhaOguvCmI8KzfwLfxQJTGH0u9QxM9ppMWllFfVy/fypLP9ye10dA6jLEcBfD65lXiDQ/Y6zd626GVBsnXmWOxBr5RDhly9KSjPKnJp2+dP1gG62Nb1kk7xggv6bqLg2b28oCWSK1XFoiXOr9MtJJanzZ43qenY/vKvfcwFMj3qUgMHd+7cmwQOuY1QmwPdwVv0g0BLMOUY8Lm3eS1+ybN4BofH6+mFvMYPEweO5w9bNKNb0/rbhl2GHs5DJufPY+/sOGlHd7Q1WG7UqLEeT3rRZ78NBJNXzpDQ7tMBsAH3tTdCiPwBn8DooV28DDwBobZTFndgTf56fttSyZ04bx9hioZ+UBn+NgFqCYbeDuMdqLyMGndKaddHvaaFHKaJ71x7G3Yg/IrKKF2ygSRRDOHMgIWKB7En2WZW4NGaLpbDmE5kpuvSXYXri/z9bYdiD9HgZ7ysK9oMW3ak/Ioz7RGjCzqnfTeFNgTJBwYhwMDhQNqTwXa29mJo3L1heurcvoCl/4q48vAg97g0BvY/uK5v3rtdOgI4i+TE+dwwOGAzGMcJjgccDjQMQf+D/KM2rsXmxV7AAAAAElFTkSuQmCC"},6574:(e,t,a)=>{a.d(t,{A:()=>n});const 
n="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAMgAAAA0CAYAAADPCHf8AAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAAhGVYSWZNTQAqAAAACAAFARIAAwAAAAEAAQAAARoABQAAAAEAAABKARsABQAAAAEAAABSASgAAwAAAAEAAgAAh2kABAAAAAEAAABaAAAAAAAAAE0AAAABAAAATQAAAAEAA6ABAAMAAAABAAEAAKACAAQAAAABAAAAyKADAAQAAAABAAAANAAAAABYRLIqAAAACXBIWXMAAAvXAAAL1wElddLwAAACy2lUWHRYTUw6Y29tLmFkb2JlLnhtcAAAAAAAPHg6eG1wbWV0YSB4bWxuczp4PSJhZG9iZTpuczptZXRhLyIgeDp4bXB0az0iWE1QIENvcmUgNi4wLjAiPgogICA8cmRmOlJERiB4bWxuczpyZGY9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkvMDIvMjItcmRmLXN5bnRheC1ucyMiPgogICAgICA8cmRmOkRlc2NyaXB0aW9uIHJkZjphYm91dD0iIgogICAgICAgICAgICB4bWxuczp0aWZmPSJodHRwOi8vbnMuYWRvYmUuY29tL3RpZmYvMS4wLyIKICAgICAgICAgICAgeG1sbnM6ZXhpZj0iaHR0cDovL25zLmFkb2JlLmNvbS9leGlmLzEuMC8iPgogICAgICAgICA8dGlmZjpZUmVzb2x1dGlvbj43NzwvdGlmZjpZUmVzb2x1dGlvbj4KICAgICAgICAgPHRpZmY6UmVzb2x1dGlvblVuaXQ+MjwvdGlmZjpSZXNvbHV0aW9uVW5pdD4KICAgICAgICAgPHRpZmY6WFJlc29sdXRpb24+Nzc8L3RpZmY6WFJlc29sdXRpb24+CiAgICAgICAgIDx0aWZmOk9yaWVudGF0aW9uPjE8L3RpZmY6T3JpZW50YXRpb24+CiAgICAgICAgIDxleGlmOlBpeGVsWERpbWVuc2lvbj4xMDI0PC9leGlmOlBpeGVsWERpbWVuc2lvbj4KICAgICAgICAgPGV4aWY6Q29sb3JTcGFjZT4xPC9leGlmOkNvbG9yU3BhY2U+CiAgICAgICAgIDxleGlmOlBpeGVsWURpbWVuc2lvbj41NzY8L2V4aWY6UGl4ZWxZRGltZW5zaW9uPgogICAgICA8L3JkZjpEZXNjcmlwdGlvbj4KICAgPC9yZGY6UkRGPgo8L3g6eG1wbWV0YT4KdfdJeQAAFwZJREFUeAHtXQt4lNWZ/jKTyQ0IhkQJt8Ail5CAQFG8YVGQApVqt3QVdNe1PuqyrqvbXZVa+zy2yrq6VbtrvT0VVFrBWgUVFbGyoDxrfUAUKYTAckkit4RAEnKdSTKTfd/vn5P8jDNhZpJMhjIHZv7zn+t3vvPdzjnfmSS1IUgiJDCQwEBQDDiCpiYSExhIYEAxkGCQM5AQjNLfs2eP3H777bJ69WodhUk/A4cUtyAnGCRupyY0YIYRqqqqZOnSpfLVV1+FLpzI6RIGEgzSJfT1buXk5GQFIDMzs3cB+QvuPcEgZ/DkGk3i9XrP4FHEN+gJBonv+ekUOsMgnRZKZHYJAwkG6RL6eqeyw2FNm8vlUgCcTqc+k5KSJME03TsnlhHbvW0mWushDJD4yQTcvdq0aZMcOXJEe9q6dassW7ZMhgwZInPmzOmh3s/OZpOA9F4/KIwDEEB4vUsA4cyCr80nTodTjpYflcGDBivAQ4YMlYaGeqmpqZF16z6Q2bPniNfnFdUyEc5sL6NAJEJ4u23GOHBOAIkggBDiQoNQKp7tIRwUOJMsU2pQ7iB57533Zd5110h29gA5fPiQ3LXobpn9HUt7kIk0nGlo7U14zQQYRvETpDKIx+2WlpYWVd+xJFQKDEq6poYGqaqu9ku92IoR9kbcDBiUJckup7T5KEliiQV2lyQnGsrF0+oWR5IDgjQ0DqhtXckuyR6TKv2Hi1QeO67ATpmdL2V1xdLU1GThMcwhJKErH5Y0GQ1J0v8Y1jDkrdDdh9lqhMWAb4fPJ30Be4xRb43VgV49HpEJBSLnnduhTTAMNbGqQZwN9fXCxV4szR2dbCw0K8rLZfu2bZKSkgIC9VkUGyGOoy1OGBxA0KSrxkt6nzTxen2BWjbapsOulwSm2H5kk1Q2lonLkdbpHJCZaUKlp2XIto175fF/Xipzb7xCbvq3a8SbhElW4vaTGR+dETvyk4DuZlebDDvokLHrk8WbgXUO0mIZfBh/SkuzDN29X5K4ZU2C7Qzu7gSOGiMjTWRVkciXL4hMvgASAwjwb4SoBnEA62QOSvNYMwj75IFXRkaGcFcmlv0Tz+wvCTA4HcmYF4hPUiA/YQXOYmBZe5qJ25+mYdZjOunBIanODMlIzgKDQEichjqUqcUhBReer/UvmztJ+vbtK56mNGV2TQzzixrEBSpISQUNZGP8qaiINPuoDPSBTdrTGWcwo7LXt3Ks0drTrfrceHCIo7VFZHiuRZzEPwm3p4MBNgW7gde2WogI6FMZhKAQ6bEmTsJi+vWBa/mJKQycB5hU3DQ1/ZI47ZNIGDsPwSbSnmbi5mla63hnn1yA+9q8+jkdg7AFarqMvmnyn2/cK7l5OdLSDAIDtdNCjCSoiYUKbW0wryC8+WFaYDOB76aPwHTzbp6mnHkGpiveHUhtRcccQ4wtCAwcA8aMN1H7BkInogxigE88zywM+MANIwuGiQ9mCeNnZGjX1pGJpW4fa4juEweF3Y7p2DbY2tIaTPDFFoi/4N56hEFibir1wgRxjKFCtONXc9PfKE0/Y/aF6ofp8bZFTj3mC2KqBKazDNNCCG4OTYNVrve0Y48wSFpami68w5lgg4gz6UmiTE/HzkeIkIa8aHYEU1JdqGdtlLhSk8WZjEVz79FGiNGFTiaoTuAmDZstdsI36el+1xjmsQzLdkYjrOfABk4SznV6Cw09wiDbsGV77NixkEyikpIShJLGb4PaEWWPczoC35nWG4FwcNfN3eSWLZ99Ia1YWAaT4F9s3iZ1tXUYv7VtznqhNILmIZ/t7N1RKierUA/bSkfLKqWupkEcfoY5ZbxAG1GnwR837XSkhaeB/K10+UF94MRuVL3HLVtL90srNCzHRLisdI98XnoAW9Q+8WJDYm/FUWlsbvaf+1AOYIMBUNiGpfWLjlfIkfpajXvbB91lcMNuoNsYRBGBrWIPDlyunjVLtm/fHnTb1pTj1q5uK/tNFcYZiCDG7YRn8rRAb34BOEr1qqpq+d6MH0px0W5sj6a0776lYLvw0NeH5ZorfyDlRytQNlnzCH9yin+8tkluo+sIcMbtbW+LVx5c8GtlDJZf/MMn5c+f7UY9J6Qx/vkFCYefhHMCnt0QWaBJjZORCBvxS/HNONvR9xjgjN26MJZD1VVy7XNPSk1TozIGTaRkwFZ+shrpT0hDs0eqcDA8/clHZO+xo9hidiqOOD
6nf97bTTQM7vEvN8v6Q2UckOZzKOgqZqHbGMRAzAkpGDcOkg/mQZCQmpoqzZAcdXV1Onl6OIg6ra3Yh0bgmQzvN/BjJFArD4/iIdA24Oz4Z+iNFatVm1jmFHco22Tde39USLm3z0NPM77ak3XihcYx7yyUkpIqHrdHGhsaVVMMnKhVleAffvVuKbxoNOr4sAPKXaqONY/ih9oLTOLzQvIir7XZK+4Gj7iUEZOkqd6NPJ9fi1nt9vz3qaTLueQc2hm8BfOc07efrL3rfhl1Xq404z2VZhTK1YN5qHnUrFJg2yQLOMpAPgaD/Ga1OJQuen4w2kO3MwhbpdsKqP8bQ6BE+/iTT+SGBQtkXGGh3HvffVJaWqpS9JlnnoFP0WEloPfWrpXPNm+GnZ8un8NTdcOGDZZN/40WeyGBgts/tld+86rs/PMuNYlSQJilJWXy8E8fU6DI8C54BhzYVyL33HGvFAy5UBZ+/xaUL1KNQeLetPFTmTP9OrnuqgWy6nfvibNuAAjAwtv2PxXL8fJqJZzXn/1Avt57RPuhO0zRln2yduUn0F4uKdq6T37z8OvyzIOvyj/M+Ll8um6bvPvbjXLzJT+RpY++IVXHToJJYqdJzIyQaWlKGZPKMDhpoMXbKh/s2CY1jY2SCu15uPqE/GLt2zJw+TMy/b0/yMaDJcoIbCsTDLJi7y7524/elX7I/+W2zdKAU/dYMUmPMEi7ivRji8ih5Ny5c6fMnz9fFoJBPvrwQ01bdOedcrK2VlLBDDt37YJnaoO8vHy5bMM963q4v/xx/XpJxyk7ERuM6cyExOQJ2qU05HjGTyyQB35xrywHk1ALOLCQXPPm+3LP4n+SK2ddoRPIdciPFiySK6/+tuwo2yy3LrpZ5l7xfbj1NEjJ/jK5Yd7fyZ133yFPvfioeOudcuRAlTIP23/3pY+lGsTNtcualzZIfS38lKjB0H9tdb3s/vKAMmpVRY1sWL1FvnP95XLXf9woT/3rK1JdWSu/fPNe2bfja/l4zRbVNLHAD1lbQcTz/jdXyK2vvCCLfvei3Ibn4lUrYSbBdEZeEzTBrz/5SNcrHgjT21a+LNVVJ+Sz+TfLjwsmyYy1q2RbZTnKO6UFmuPtQ6Xyj4WTZPO1C+T+rZ/K63uL0VFs6KHHDgopZTnR/Ci3433jxo2yaNEiWbhwoRLCTxYvlnyYYyUHDsic2bNl1VtvyVDcacgeAEmK8rtx76GyslLG5ee3S+1YTHTnfSiXyM7tu2TZyufkpr++Vfbs+j8ZOHigPPvUi/Lhp2/L+nUbwPyQjAePyL49+7W5LZ99LvV19bqwP1ZRKV9u+Upuvm2h3HjL9ZhrLPwzjsvLT7+mZhHxlZc/SDUGF68jC4dZWoBbv2iN65k+memKV5p1sxZcKlOmF0rN8Trt68prp6p5Nvemb0vxF/uVydhmLMP1F10qAzL6SjO0RSqcK8tOVMrW1aUAwTK7CEu6K0UOgTF2fl0iv73vIbmgqVkuyDpX/r22RtaU7JPJg4ZKLZjp5Wmz5PK8v9K6q2ZcI8t275AfjbvAoitNZWs9E3qEQSjtudagiURG4TsX5Vx7MI3vZBwuTl3QDkzPGzZMaqFJnn3+ebn1llvUu/fRxx+XuWCcAVlZukaJ9SSHQjl9pxgGwNX8occelGXPL5dhw4fKI0/8TAYPHSRF24t1jOaueCoW8lx/9D+nv6x852UZOOg81Y59+vZR05FrDN3SRZsOLFoZdNcL7EA7nmzhBvG4YFIRn9bhIFnFMvfIjK1Y5Lc0cx0H2MALHnczZYw4YZLFKthZcNqofMntf45QQ6TDeth99LCCEcinNMMYXFiESxvcPVAgAwxV2tigQtIFzZzK9SzXqMhLBx0lEycYHNd5KKT1e+qr2xmEhE/C2AVzKScnR9xwpac9PmLECJk5c6bMxo23C6dMkfETJsjKFSskB8Q/cuRIqT5xQiZPnCgrfv97GT16tFRUVKi5NWH8eEkBIzX3gjt+cKTDxPJPqhum1bTpl8mim+9Ws6n40JfK7KzH3bwRI4eLC2cinMyr586Ug2UH5eiRCumX2U8unXaxzLz4uzL10otkwuQCWbviE+1u4NBsZaamBlxB8LRKSlqKTJ0xQZY+8gcZPPw8cTd65OnFr8qsGy7xMwsW50hT4aEUSq1tCSUyEvNiFzpw09TiETfWCm6dN8usIhw0v42g88BBcUQ23MsRnvufdfLTsROk+Fi5/HjLJlky+RJlkEaU8UALkTnI+c3YeDiBrWQNTOpZ/uheXywSAjXFvHnzZMmSJdKnTx8QQ6aUlZXJkkcekTvuuEOWY32xGKYVd7nIGKvfeEM1xHGcm0wEgxwoKbFMLIx9fEGBDM/LC3oya2Eoxt+YEG7NUjt+97rZ2nnffn1kxVsvSWNjEzRKFrRgnSz8+79RPGT2z5SP/vcdefiBR+XpJ16Q4p275VcvPKZrlvyCMfLaGuDiX34mTlzCyMrLkP9a84D0OydDtcH4i8dIn37pqmmvnn+p1FbVy11zl8jUmePlqh9MlaEjBymhnZOTKbnDcpRZeO4y7ZpvQTMn63sW8oaMOI90FZPA+c9ISZPvTZis5hM7dXI7GiEDWnROwQRJAX1QK84bP0m3hfulpcun9/1c/nv9+zLs1RfkoUlT5Z78C6SJGgOAnw8tlJvRx2IEbGAMwCH0ZbmDtQ0MsseHpvdBeF2zEYtjXQjrcKL/onSg9OROlmnPMA4Ji1uiXIizDF3c6aZ9+NAhnCkUKVGxHhf01ES6EwTt0WOBBO+XtoWXj/XfBwl++GeHgeOh9yzPQBDFWsE6q2iBxOb5BBftNB+5U0WBwTFxmzcV2oDaQz1v0SAPBBvrG6XZ0ywl9Tuk3ovLT14sZGE5tGDblmcbbI9Pbvc21jVBo7jQplO9eXnaTtOKY+B5CaUpzSyaazyRp1mnW70oF0rSctOsBSgedDhJRv0J7cLd3b+RZh9y2HHFDSwIMoI9ML0Z6dzSZaD2SMMaxA3zeg12tCZjy/cST4s0weL41tuvya8unibzRxci340rABiPahCgB+1wK1jNLnsH0cY5gZhH2XdQ5IkHcWmqkPv1PFzSFk8dRbSd2OoREXQ14VrDHphO04uEzzxqFzIA1x9G5fJpzgnscXs78RAnbHQn4Vg4bzRlGJiu409PRR4Qj2AERXbOADXNyAwsp3lgMrbDtcjBJuzYuGFSOVO1DZ5nsC0GEjrrZGb10TS2TZu9DWcg3MKlpDVlufXLOD9O5LUfHmpLPf+luIFwCNzJZDpdTUw6mQNA4qAwGZ7mbrkah4h3Ywfr6V1fySIwxty8kXr2kca1iS2QUXgAy7qxCKf23k09msVpYHOGMJhvypg0U9YiOouA7HGTHy9Pwm9gN08Dm3Ur0RoD80isZBQGe1nG2Q4JnoRD08MEagVjP5g66trCRPw3+Uon/PJXNelsJzDPtN3TT0r5jpF09GZPN4ySDEl96xUzZPr5Y+Tkzh1y06ixMjknV80v4i1YO6HSO3rqvliPMIiZ0FBgmnwONDCYPKbb44Hlevu9M
9iC5QVLM2NkHgnhFGwEoYxT2rDnh4prB7HHlB0ce+/2dBOnEKT+HQcT69wx8JiAwIDEUKFiytjbYDxUemC57njvEQbpDsDOrjas842za8z+0fqFgxvmtpfmNjZBuH0bSyboDO8JBukMO4m8mGGA5iU/Sbj6G0/BWqrHE0QJWM5ODJxiX8YPChIMEj9zkYAkDjGQYJA4nJQESPGDgQSDxM9cJCCJQwwkGCQOJyUBUvxg4OzexYq7hWFvAcR+sXvUG93zLAy7V9/oPCxYTCGz88V3Ez8dk/n7NU2YZ0A1ZRA9sPMf2gU7vAuo022v7ItwhYCt2/oJ1VD7wZsfpxYeCE9sIeIxofWPCj3cCQ41qijSdWsV/dKxECD4nZWjaCjyKpyDNn5wot4G/zFllmhRwGmLtC5/1ZH9+q8FBFZXBqE/TCscBI1zYbjDNMzUTmjhVvSXY/1k+Ockw4GRbbB/02ZETbVLoYhq6Vy0+UCa6JuOg/RjolMgXiMKhp0irKZsyDp6MIaZSEqmm3p0DBolCtQx0QFnRfbrgm+Xg/gwAwoTC5ZggVsIG4k0AHAfGDMFrjUOuPd3SVxGgwTWgTMkfk0iKOTKIBVwNef1VnqehkOgnFRD3PxhAuOXFCFeFSAeDvHOSBo8e6MJhIWu88a3K9I2OA7+msbJynqpq2oIa/yBfZDBSByKh8DMMN7JIM6GTElrwg8W6N8AiRyTTjj1WTiIrC4P5hxOEkmKHB8JT1mBjxn+sRXD8Pa4GY5JI/6sX1Rx4f6KdS/F5Jmy9qc9z8R5ZYo+2805faWNPmvRSCjU4WUzHx1HDeD2jkPFCQTopy0vV7Lg1dsvoFwyfWH4Ez3Dhw9XCc730wUiha7rxcXF6q7OP/2lF5pOV9GejwGZX/04CHf3AdnZ+NMM56pTX7gaiRqHfw9j//79Mh4Xq8KB3Q6ChRunuvqX7Tmi91PacAc63EDhQ9fzmpqT+jtYecPz1DvZ0sRsvfPA+mROuv43HnZI/uiLbcLm9PXZOomZ/4t2FsnoMeN0XsLHAyoCCLrmF5fvkaSx2ZKDeVAPZBuRBqM3ppEO+LdKTp48KaVfl8nEKReqhzYttVDBnsU4HTXTYL2UH6uQ3bhRmj96jO1yXBg4QBHi2w0clpSVyrixY7XN8DQRIMAYqBiqcOWjGr+qwh+WIQcYXZhMqZObmytTcMsv0kBEZgOhZJCuBP748kDAcC4YJNJA+HnHZBzutkcb6HJP9/t83H2PJjTgeuiJ4yckD5e7ogkqcDKKJL9wbDTVtU6zF5M7Efe0yS1RhEZPg15Oy+LvAUQY+LdlfPgxvcIxYyKs2VE8F3RUhVul548a1ZEYSQyCPZN0MCY6HDaDwXixLzCoidWIn18hoYW7BuCEkihJWJR+DKwfruRneZZlHXIvtQ/bYgiEg+XYn71tvjOo5IB5RhONgekmz15eM/35pj2TxjbMBS+mRSR9YYhofVyQaocfdzdochg4TD+hnmYM5hIV+++AsYPYO2JWS2YjwYzT3ejWi1q8i2PGEIpZLOxZOGVZziWFncc2B7qeIJ5tmiRwDLQAWJfSmz/lw9AO/ylGWmBNvPvbtrfR6J9H/lQQTW8LziB17UlohzhsAgz8MLTTQSewtzfhr88xGBf89jxEsD52quRkJ0S2Qbi9UGC8fQJs5U39wLKdvbOOCaZfA4N5Z749bt4NAZLBMnGt16QHltUM/5fJM0/TBt9N3A6TvW6wOOeYwbSncdgXfLenaaEgX8H6NHVPVz+Q+Hm9l3PJcLoxWMxmfZuypl/WZ5wEighfQwafP98Oq2knEL5vNOKvy90rBl6U4+1SBnVaRH7nvWvRdhg5dlPfwOAv0enDMCHrmPmwV+jSX7ml5iFgXI9EEwgQAWM7JHQiKZpg2ommLutQa5krwNG0wZuRlMDmV1w4pkgCBQ7XUjTzog1dxQHnQH9lBuuRSAPHTy1uCDTS+ixviDNS3Nn76goO7DRgb0cZxJ5g7zART2DgbMeA6rdoudZwfVeR2F3tdAWOrsLQ2/W7MnbWPdPh7+r4Q+GgSyZWdwCVaCOBgXjGQMcqOZ6hTMCWwEAvYeD/AYvJJOKJ2lVnAAAAAElFTkSuQmCC"},7365:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/one-hump-dbd2860e9cff3ebe16ced6cf7c4ec64f.png"},6467:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/pg-driver-is-faster-88ee7217dd06fff1cc35ee2e8ccc3736.png"},2944:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/sequelize-log-af147131006e4207620f8e3918724ecc.png"},7424:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/suite-4d046fac9ca9db57eafa55c4a7eac116.png"},5542:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/throughput-benchmark-91b84b17d860e3769a11be3835d6961a.png"},635:(e,t,a)=>{a.d(t,{A:()=>n});const 
n="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAA88AAAEFCAMAAADquSUKAAAAVFBMVEX///+7Qrjem92nEKT9+/1TYNX50/jD2v71yfT+2Ni9y/X8y8vmRETjMjJpd9vq7Pv/4uLDKir91dX50tLxrq7TWFiEE4XhgIBGSryXqOjEecTpt+gMeeYSAAAACXBIWXMAABYlAAAWJQFJUiTwAAAR1ElEQVR42u2dDXeqvBJGBUQNtBT5MAL//3/eNTPhw9qe2/Zt0dq9zzqKERAe8jCZEOhmAwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADwii0A3JJvtTNnNIAbx9RvXNcuBoDbMWy/0dB9AgC3JBq+ydDbTZwkEQDckCQ5fIuht5uXKOkPOwC4GUOU9N/k5yGJDgBwQ3ZxEm03+BkAP+NnAPwMAPgZAPAzAH7GzwD4GQDwMwDgZwDAzwD4+cN+tgGly4nD/NHex/kO05fLhRel8nq5vnmBi+UAHtitt/RzL8SHwyGeJ+LdYRf3UT/sDkMvnw7Te9z3w7ywLDPsDruh7w/2OvS2wp2tbpDCwebkQMOD+3gIr+oYKxy0ZLg0+Q/6OTkJ/eEQ6UR02PWnaHfo9VO8G06nRDagP5162aQovOum2iKxfJvIKeCU7OKTrW9nK0jiXSSrO0TzYgAPSZ9Eg7yKnXa9eOowJEm8E59Ew24tP/dxL86NbEKc2Ist+6E/JYdBXT36WOyt/laiU9LH0SkZ7BSwi0+RzKBPYJCiOI5CiVid4w0PHZ0lug1S0/tE7XQaNAzGu+TU95f1/0f9HO/Ey/pfjdmrp5PDbiftZ4nZOw274uf+FEXqb9km2fydRt6lnxPNm7VIZtEpOWsQnuGh/SyxbdhFGr2inQbAIZHmqzphmaau5udDFPwszew+lpb/6ZToVlqDOTnFfbCmmfgQTgGL+NxrUh1a7clBVhGd6FSHRzf0LjkNB/k/qJ+jkzhCQ3VinU/r+DnqI2tvJ5GcTtTPhzix7Hc4JZIJ6OvuIBnyEJoOb/s55M+yFgnlIaifTjHhGR4d9XMy+jk+xVHSS823lHrN/rB4F/rD4p35WXq2o9NJgqtZurfmeNL3wZxq4mG4js/6AMPQH2ahPFHnAzw21t4edubnIdHEOR7kEpAUr9YfJg1ryYM1zbX8Oel3u8PpNAynk/Z1D31IB7QTfDfmz8MpSrT7TNPtOX8eM/Eo9BSQPcPf8LO0Zi19Fd8M4udTEg/Jen4OTeHIurWH0B9mvd3qZ0mAx17veBjiUzKEJU4SwxM5IZ0i6xyP5/gcurbxM/yh9ra2cyOp89rs7i31XLO9vfBz6A+Lxta3doxZn11vOb60xE/zIqFNHdLm/rC8/qyXshPiM/wV4vhg47HkynN8kGEkQzzIdDys1R82daTrxNDH+n8a+nWQXm7dNh3zNcwbrJvV97GelGQYmPbh6fgwGWGma7HBYa866wEek3HU8zjMeRcGf07Dolfw8/RL0xDs6+HXYVz24dVQbptJz0DzNocnDB+Wg7d3DN4GWMXP37BxHB+Ah/EzAOBnAPyMnwHwM34GwM8AgJ8BAD8D4Gf8DPAX/XyQJxoBwA3pk2jzPWyjJIoB4Hb0SRJvvitARwkA3JR++13xefPSRwBwQ+Lt5vsMvdkCwO0wG36Xob9vVQDwtaAKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA8GX4U58A4998/fVu5oQG8Chu2G62+wwAsmz/8vvt/FKkACAU2SPYuQCAoijSNPvVTe7tpkuLbg8A+31WpMUvb3IXKXYGCIZO0/3v7hPDzwATBX4GwM/4GQA/42cA/IyfAfAzAH7GzwD4GT8D4Gf8DICf8TMAfgbAz7/Az1nXdV22nNh3nb2OxeN84S0bF+2M8atphnm58YtsmhcAP/+gn7uzUGRhIs322fmc7bNUPnX7faGv+316PofZ03HR1BaVsmy/z2TZfSeF8gvdOZU1dTpfJqs5LxYF+FHkxqmiKPad1vasKDIpSovs8f1cdF2qrky7rjgXZsxCisWOhTh29nFxDv5WnxYaczudRReTyU5PAZ2uST1ufi6Iz7Cin4tCKnNhUUZqZHrWksf3swXTzkx5zrLzWcJz97LvCvWzRN/C/Jyd01mT9Jy9jCG+m04DWjD5WV7NzzYvwEq8dBaOOm1JZtm5eHnprgL0Y/u50x1XY6YWTYtzYWZNzzZ3NrWa07Ol3J18KYuZdef16lJdiM9Teg6wCmkh1TfVpuc50+Zk9gfy57m9XRTp2N7OipAHF+cuFbN36ufi3L0UY4Pb8mcNxd05zSxhzvadJS62JjlBzPlzQSWDtei0+yftzpJLnzV/nnPFv9IfNibC0kFdWP7ciZnTTPwsHVzd1OC2/Fnjs4bxEJ+7NA3hPrN38me4QQYt4VnyvFSrcJZ1+2X78aHjcxaa0mMOvc/kGWMvxbmQaJxpFJb+bfO8+X2ZP6uTrZnehbZ2N/esncmf4TbheV+cNYfszlq3s+L8J/LnaSK1XFkDcZYVFp/Fkpk8aEmibCat827Mn/UZxmPmbf3bXaYzTH4Wp6vRdV6AdcKz1etCO8LkzVK+v9G/PUZZ8WNIgsdmeDH3Vds38yLz9efUGu7ZGMHn68/jxQKuP8O6fp7HRnWZfXoz4Xs4P49P79UmdhjLZemzFRSmhoz3slmmIWNvjg/rwnKMD4PfAOO3AfAzfgbAz/gZAD/jZwD8DICf8TMAfsbPAPgZPwPgZ/wMgJ8B8DN+BrhrsjR9+c1+3m6yNNVbnQD+PF2RFr86Om+22yIFgMAvb25vN9sORwMIRffyu+0sht5sXwDg5eVl89vtvHmEPQDADMskGgCIbAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADwh+HvmX3hb5whEH8Y7j7djASfFwbREOZuD0BZOXhFVf6rbiLaF0SDVexc+qdnuOLJ1+/Wze2mQrS3RXMY+sZ2fnp+foJrnp/eM/R24xDtbc2enzD0bf3sn5+aHK5o2mf/bs1EtHdEe3r2mOqmhn56bkoyv2vK5vmpfDPWaJsG0T4nGqzo56qqHP8u/lUuf3p27/s5d4j2lmj4+S78TGh51VP7ET8j02vRSvx8F36mLl7zf/2MRNfXq/DzvfiZ5uL8T+X4iJ+R6lI0/Ex8Jj4Tn+G7/Ux8uYw1H4rPCPVKNPxMfCY+E5/hZ+LzfGQ+dvz+OftXKvyKyV7Ywmq5F1+Kz8vF/7MjvrCOamXV3pCR/Jn4THwmPsNP+lkGSIw4m3ZzSZh200wu/F+8LNag8St8vFjZcvHF0tPbz0eZ5ba65da76pP9225e2duiueqVaNXXRKtuLdosm3NvHFHi813G58qVZenGal2WMlUGXChxocgOsXPlODZIZx/nHyuAfKyWK1ssbm+Lta4T86q3Ku
bsp8/G58q9L9q8X2FvXbUs+odo7lK0eYYbibY427vq+nxEfL7P+NwItVWVumlyJ6+GDJlqmvCFfC6rsITWwTKXqWl+yy+rstZSXUYKmqZ2YX2yNv2Fyuauw0IrdF1XZenMFOYJZy/lRU748fg8yiFL57pL+SiaU4HCF0EKW8J+WWYvp/lzM02QchRNXqtxhlG0elXRxl8onZ6dy7kVcyEbfr43P7dHwasV/fHYOpcfA7Wr2+NRZraiVseJyrRWQ69lednY3L7UE3fT6py6mjy82m+0Yc62Kf2xdfJz7UpJaVU2besb53yru+N9Xua+bf1X4nPlbIfaWbTSBRFUPxFNfnIhWnPUfQ2i+XoUrTHRgpSubI5tLaK1ef1KtLz0R1/WIlq1WoBWkRpX+lbORrmop7LVxOf79XPrWzF08LYcMS3wodZ587MW5upCLZPvpKy1Ce81/sgHXV9eXvi5lTmmRapcfqduj2FDfj7QyGbJRrXhRGIb1dpufz4+z6LJtO7tKFoVRKunva2DaM0smkjRXomm58aFn0fRWtNZRcuP64gWevxVtqbUQy87IZOtiEh8vks/S6jxpVSoppRq1KpVy/zY1mWpEVTChrqyrDXsiA/G6FqWeetdIxOW3FW2On298HOjyWBzbCurkbKwRZpqpfCcb71VzKaUAJo3R78t/UWA/oyfGxXNIqp8crp30jgtVSCNya0LKog5VVkvv9q06tIp3a5UtDrYfPZzPopmqxlF0/R7rVZNs22O3mSTM1dTy3b7qV2Fn+8vPlsrzpfy0mh1cVpVK6mGTWtOPza517om1g8V+eibqpSStrGUMSxYax289LOmjWPV1NjsLTyvkT9b3tq2Ep/NE8dcmhyNK5cXkz/lZ1eX7Syatanb3PIREc2Uyi2zEIlCiGtVtHIhmplYRbv08yRaaX6WTbYDs058XsrW6rYdGyey1dP9efj5HuOzxg2JqnK4cmtn5VYN7Zt8zPbU+VtdpnqVCuahalZh+av8edHetkbpapGmshavd2Xb+mOuL6Um+s2ygfC5+CxZr5eg1dSTWrW9bVW0KfG106Uuo1lx68fk+jj6OZwP3FX+3EztbTe25K2jfiXZ5JBJk8Mfc3/0oXEzy4af7zc+Szxp2jFftlTZN9JppZGh8Xoc26MPQUe6RiaXeh/8fKxDxLF1LPLnZuohCrl6PseBHw80rs7VVG0uUcYf81x6jpd9S5+Mz5eiWW4sfnZBILOCyic9ZJNozSiaZsejn11tb7KOapk/T6LldvJtVxJtlK3Krb0tsvnm2FR5lTftkfb2Pcdne/Whh7aq5rZyiBH58ViXY/21aFw2Ptccsl6mgtLYdK331n2TW0NRAv+2DI3MkACWzRSeqxUSQemZ1YrZSpO79po/S3Ctv54/a3/eKNoYnxeiWT7dLkVz0sk4GncpWp6baI0K3cz58ySahGU9AmOKslL+rK2H9ihd8Lk/NiKb2LvEz3cbn+WE29Yy1VgQ1gARalIT+raavDHTj2USK2oJRFL78jyvndU4qX3awStrzrWhKG3SPLf8WTvHJj+vMR7ZhR5k8djxWIZ4o11UX+/fNtF0SkUL15qqhUCqy4Vo0h/n89yMq6Lp+AyRsvHjidPWLO3aSbTQqx38vIpoy8sCstnOci/tUTzSv33P8TlcJW00nmo8sW4t66vW8hB02twiU4gwc1Y8XX9202XTanGJdXEpNbdu4aWfq1UijW997pz3LvfWtsi9dE196frzuEO6M9UsmrS3J9GqsP+aeYhofrqYfGwW4siJZOxgWIg2Sq6nidKPl71GP1erXX+Wy/aNrypp+vu8rH3ruf58v/HZt3aAXNN6V9XyJhXM566WUQR1JaMIZBBBa4Mw5IvFgAwJ5q18a+MmKldJaasDKmwG6Tlr9Z9cqKlttZX93Er5cxh8KblqqWOdpnGU1dfyZ90dGckle1GraKqDKNOaQG0+ida0Pq9qFVXOK+aQC9FqKQ3nOZnMSy0aRbOhHCqa9UStOD4syCaaVYvhp8Tne/Szm4cJy4GqbMT2POp6HDC8HEw8DyLW8tdDkeWrMveNDa7UoYHj2ObXq62+fpfl18dvu+V9JVX1hf5tN+9xVZpo5X8QrQoDUMtmKVp1c9Guxm+/JRt+vrf4XF3eO+Uu74q6vE3o1U0+17cKTcYpFzdlubfuMnIXd+msEWou7724LvnK/VWPLNo/Zau4X/JO4/N/PHu7V/Firt9zLfzoMxJudAvv1Hhc4f7n6RavXy7auAX4+X78XN28UiwTtbWeT7L49PphI59+PsnDi3bVEHj9TBX8fF/x+YNttzcukLx2xNUzaty/l798BtCaYfh1lfxCfP450f5Vsr5oVz80bwLPA7zX+Fx9zAzvF736trqOYtV7zxv7bfF5sck/LVr1TmOA+AzE5/fj8zxJfP5EfCZ/frD+sId7tB3PA+R5gL/Wz9TN6zjE36P7/DUO/HwnfobXfOTvxcKVaPj5DvxMaLmGv+f+raLBOnbe+OcnvXMHLmnaZ/+ubIj2jmhP/xAN1vBz+fT8/ATXPD/l70Sa7aZ6QrN3RHOE59sa2vmnZ7imzd+tmdtNjWhv8eRr7HxrQ2/KGq5w23/UzO1m45Dok6LBWoaGzwqDaO8Lgza3PgbwBoj2/aIBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHwP/wP/ftBbduSfyQAAAABJRU5ErkJggg=="},3656:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/two-humps-c54bed6a1428c1ad0f7e028d10a44206.png"}}]);
\ No newline at end of file
diff --git a/assets/js/5484f123.c58e5177.js b/assets/js/5484f123.c58e5177.js
new file mode 100644
index 00000000..39c41db4
--- /dev/null
+++ b/assets/js/5484f123.c58e5177.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[4074],{5680:(e,t,r)=>{r.d(t,{xA:()=>p,yg:()=>y});var a=r(6540);function n(e,t,r){return t in e?Object.defineProperty(e,t,{value:r,enumerable:!0,configurable:!0,writable:!0}):e[t]=r,e}function i(e,t){var r=Object.keys(e);if(Object.getOwnPropertySymbols){var a=Object.getOwnPropertySymbols(e);t&&(a=a.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),r.push.apply(r,a)}return r}function o(e){for(var t=1;t=0||(n[r]=e[r]);return n}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(a=0;a=0||Object.prototype.propertyIsEnumerable.call(e,r)&&(n[r]=e[r])}return n}var s=a.createContext({}),l=function(e){var t=a.useContext(s),r=t;return e&&(r="function"==typeof e?e(t):o(o({},t),e)),r},p=function(e){var t=l(e.components);return a.createElement(s.Provider,{value:t},e.children)},u="mdxType",d={inlineCode:"code",wrapper:function(e){var t=e.children;return a.createElement(a.Fragment,{},t)}},g=a.forwardRef((function(e,t){var r=e.components,n=e.mdxType,i=e.originalType,s=e.parentName,p=c(e,["components","mdxType","originalType","parentName"]),u=l(r),g=n,y=u["".concat(s,".").concat(g)]||u[g]||d[g]||i;return r?a.createElement(y,o(o({ref:t},p),{},{components:r})):a.createElement(y,o({ref:t},p))}));function y(e,t){var r=arguments,n=t&&t.mdxType;if("string"==typeof e||n){var i=r.length,o=new Array(i);o[0]=g;var c={};for(var s in t)hasOwnProperty.call(t,s)&&(c[s]=t[s]);c.originalType=e,c[u]="string"==typeof e?e:n,o[1]=c;for(var l=2;l{r.r(t),r.d(t,{assets:()=>s,contentTitle:()=>o,default:()=>d,frontMatter:()=>i,metadata:()=>c,toc:()=>l});var a=r(8168),n=(r(6540),r(5680));const i={sidebar_position:2},o=void 0,c={unversionedId:"the-basics/getting-started-quickly",id:"the-basics/getting-started-quickly",title:"getting-started-quickly",description:"Run Practica.js from the Command Line",source:"@site/docs/the-basics/getting-started-quickly.md",sourceDirName:"the-basics",slug:"/the-basics/getting-started-quickly",permalink:"/the-basics/getting-started-quickly",draft:!1,editUrl:"https://github.com/practicajs/practica/tree/main/docs/docs/the-basics/getting-started-quickly.md",tags:[],version:"current",sidebarPosition:2,frontMatter:{sidebar_position:2},sidebar:"tutorialSidebar",previous:{title:"What is practica.js",permalink:"/the-basics/what-is-practica"},next:{title:"Coding with Practica",permalink:"/the-basics/coding-with-practica"}},s={},l=[{value:"Run Practica.js from the Command Line",id:"run-practicajs-from-the-command-line",level:3},{value:"Start the Project",id:"start-the-project",level:3},{value:"Next Steps",id:"next-steps",level:3}],p={toc:l},u="wrapper";function d(e){let{components:t,...r}=e;return(0,n.yg)(u,(0,a.A)({},p,r,{components:t,mdxType:"MDXLayout"}),(0,n.yg)("h3",{id:"run-practicajs-from-the-command-line"},"Run Practica.js from the Command Line"),(0,n.yg)("p",null,"Run practica CLI and generate our default app (you can customize it using different flags):"),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-bash"},"npx @practica/create-node-app immediate --install-dependencies\n")),(0,n.yg)("p",null,"\u2728 And you're done! That's it. 
The code's all been generated."),(0,n.yg)("p",null,"We also have a CLI interactive mode:"),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-bash"},"npx @practica/create-node-app interactive\n")),(0,n.yg)("p",null,"Note that for now, it can generate an app that is based on Express and PostgreSQL only. Other options will be added soon."),(0,n.yg)("br",null),(0,n.yg)("h3",{id:"start-the-project"},"Start the Project"),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-bash"},"cd {your chosen folder name}\nnpm install\n")),(0,n.yg)("p",null,"Then choose whether to start the app:"),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-bash"},"npm run\n")),(0,n.yg)("p",null,"or run the tests:"),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-bash"},"npm test\n")),(0,n.yg)("p",null,"Pretty straightforward, right?"),(0,n.yg)("p",null,"You just got a Node.js Monorepo solution with one example component/Microservice and multiple libraries. Based on this hardened solution you can build a robust application. The example component/Microservice is located under: ",(0,n.yg)("em",{parentName:"p"},"{your chosen folder name}/services/order-service"),". This is where you'll find the API, and it's a good spot to start your journey from."),(0,n.yg)("br",null),(0,n.yg)("h3",{id:"next-steps"},"Next Steps"),(0,n.yg)("ul",null,(0,n.yg)("li",{parentName:"ul"},"\u2705 Start coding. The code we generate is minimal by design and based on known libraries. This should help you get up to speed quickly."),(0,n.yg)("li",{parentName:"ul"},"\u2705 Read our ",(0,n.yg)("a",{parentName:"li",href:"https://practica.dev/the-basics/coding-with-practica/"},"'coding with practica'")," guide."),(0,n.yg)("li",{parentName:"ul"},"\u2705 Master it by reading our ",(0,n.yg)("a",{parentName:"li",href:"https://practica.dev"},"docs at https://practica.dev"),".")))}d.isMDXComponent=!0}}]);
\ No newline at end of file
diff --git a/assets/js/5e729dc7.2571a393.js b/assets/js/5e729dc7.2571a393.js
new file mode 100644
index 00000000..29e52794
--- /dev/null
+++ b/assets/js/5e729dc7.2571a393.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[4415],{3197:t=>{t.exports=JSON.parse('{"permalink":"/blog/tags/integration","page":1,"postsPerPage":10,"totalPages":1,"totalCount":2,"blogDescription":"Blog","blogTitle":"Blog"}')}}]);
\ No newline at end of file
diff --git a/assets/js/621e6abe.66fcad17.js b/assets/js/621e6abe.66fcad17.js
new file mode 100644
index 00000000..3583a909
--- /dev/null
+++ b/assets/js/621e6abe.66fcad17.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[2345],{5680:(e,t,a)=>{a.d(t,{xA:()=>d,yg:()=>g});var n=a(6540);function r(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function s(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);t&&(n=n.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,n)}return a}function i(e){for(var t=1;t=0||(r[a]=e[a]);return r}(e,t);if(Object.getOwnPropertySymbols){var s=Object.getOwnPropertySymbols(e);for(n=0;n=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(r[a]=e[a])}return r}var l=n.createContext({}),c=function(e){var t=n.useContext(l),a=t;return e&&(a="function"==typeof e?e(t):i(i({},t),e)),a},d=function(e){var t=c(e.components);return n.createElement(l.Provider,{value:t},e.children)},h="mdxType",u={inlineCode:"code",wrapper:function(e){var t=e.children;return n.createElement(n.Fragment,{},t)}},p=n.forwardRef((function(e,t){var a=e.components,r=e.mdxType,s=e.originalType,l=e.parentName,d=o(e,["components","mdxType","originalType","parentName"]),h=c(a),p=r,g=h["".concat(l,".").concat(p)]||h[p]||u[p]||s;return a?n.createElement(g,i(i({ref:t},d),{},{components:a})):n.createElement(g,i({ref:t},d))}));function g(e,t){var a=arguments,r=t&&t.mdxType;if("string"==typeof e||r){var s=a.length,i=new Array(s);i[0]=p;var o={};for(var l in t)hasOwnProperty.call(t,l)&&(o[l]=t[l]);o.originalType=e,o[h]="string"==typeof e?e:r,i[1]=o;for(var c=2;c{a.r(t),a.d(t,{assets:()=>l,contentTitle:()=>i,default:()=>u,frontMatter:()=>s,metadata:()=>o,toc:()=>c});var n=a(8168),r=(a(6540),a(5680));const s={slug:"about-the-sweet-and-powerful-use-case-code-pattern",date:"2025-03-05T10:00",hide_table_of_contents:!0,title:"About the sweet and powerful 'use case' code pattern",authors:["goldbergyoni"],tags:["node.js","use-case","clean-architecture","javascript","tdd","workflow","domain","tdd"]},i=void 0,o={permalink:"/blog/about-the-sweet-and-powerful-use-case-code-pattern",editUrl:"https://github.com/practicajs/practica/tree/main/docs/blog/use-case/index.md",source:"@site/blog/use-case/index.md",title:"About the sweet and powerful 'use case' code pattern",description:"Intro: A sweet pattern that got lost in time",date:"2025-03-05T10:00:00.000Z",formattedDate:"March 5, 2025",tags:[{label:"node.js",permalink:"/blog/tags/node-js"},{label:"use-case",permalink:"/blog/tags/use-case"},{label:"clean-architecture",permalink:"/blog/tags/clean-architecture"},{label:"javascript",permalink:"/blog/tags/javascript"},{label:"tdd",permalink:"/blog/tags/tdd"},{label:"workflow",permalink:"/blog/tags/workflow"},{label:"domain",permalink:"/blog/tags/domain"}],readingTime:17.875,hasTruncateMarker:!1,authors:[{name:"Yoni Goldberg",title:"Practica.js core maintainer",url:"https://github.com/goldbergyoni",imageURL:"https://github.com/goldbergyoni.png",key:"goldbergyoni"}],frontMatter:{slug:"about-the-sweet-and-powerful-use-case-code-pattern",date:"2025-03-05T10:00",hide_table_of_contents:!0,title:"About the sweet and powerful 'use case' code pattern",authors:["goldbergyoni"],tags:["node.js","use-case","clean-architecture","javascript","tdd","workflow","domain","tdd"]},nextItem:{title:"A compilation of outstanding testing articles (with JavaScript)",permalink:"/blog/a-compilation-of-outstanding-testing-articles-with-javaScript"}},l={authorsImageUrls:[void 0]},c=[{value:"Intro: A sweet pattern that got lost in 
time",id:"intro-a-sweet-pattern-that-got-lost-in-time",level:2},{value:"The problem: too many details, too soon",id:"the-problem-too-many-details-too-soon",level:2},{value:"The use-case pattern",id:"the-use-case-pattern",level:2},{value:"The merits",id:"the-merits",level:2},{value:"1. A navigation index",id:"1-a-navigation-index",level:3},{value:"2. Deferred and spread complexity",id:"2-deferred-and-spread-complexity",level:3},{value:"3. A practical workflow that promotes efficiency",id:"3-a-practical-workflow-that-promotes-efficiency",level:3},{value:"4. The optimal design viewpoint",id:"4-the-optimal-design-viewpoint",level:3},{value:"5. Better coverage reports",id:"5-better-coverage-reports",level:3},{value:"6. Practical domain-driven code",id:"6-practical-domain-driven-code",level:3},{value:"7. Consistent observability",id:"7-consistent-observability",level:3},{value:"Implementation best practices",id:"implementation-best-practices",level:2},{value:"1. Dead-simple 'no code'",id:"1-dead-simple-no-code",level:3},{value:"2. Find the right level of specificity",id:"2-find-the-right-level-of-specificity",level:3},{value:"3. When have no choice, control the DB transaction from the use-case",id:"3-when-have-no-choice-control-the-db-transaction-from-the-use-case",level:3},{value:"4. Aggregate small use-cases in a single file",id:"4-aggregate-small-use-cases-in-a-single-file",level:3},{value:"Closing: Easy to start, use everywhere",id:"closing-easy-to-start-use-everywhere",level:2}],d={toc:c},h="wrapper";function u(e){let{components:t,...s}=e;return(0,r.yg)(h,(0,n.A)({},d,s,{components:t,mdxType:"MDXLayout"}),(0,r.yg)("h2",{id:"intro-a-sweet-pattern-that-got-lost-in-time"},"Intro: A sweet pattern that got lost in time"),(0,r.yg)("p",null,"When was the last time you introduced a new pattern to your code? The use-case pattern is a great candidate: it's powerful, sweet, easy to implement, and can strategically elevate your backend code quality in a short time. "),(0,r.yg)("p",null,"The term 'use case' means many different things in our industry. It's being used by product folks to describe a user journey, mentioned by various famous architecture books to describe vague high-level concepts. this article focuses on its practical application at the ",(0,r.yg)("em",{parentName:"p"},"code level")," by emphasizing its surprising merits how to implement it correctly."),(0,r.yg)("p",null,"Technically, the use-case pattern code belongs between the controller (e.g., API routes) and the business logic services (like those calculating or saving data). The use-case code is called by the controller and tells in high-level words the flow that is about to happen in a simple manner. Doing so increases the code readability, navigability, pushes complexity toward the edges, improves observability and 3 other merits that are shown below with examples."),(0,r.yg)("p",null,"But before we delve into its mechanics, let's first touch on a common problem it aims to address and see some code that calls for trouble."),(0,r.yg)("p",null,(0,r.yg)("em",{parentName:"p"},"Prefer a 10 min video? 
Watch here, or keep reading below")),(0,r.yg)("iframe",{width:"1024",height:"768",src:"https://www.youtube.com/embed/y4mBg920UZA?si=A_ZTVzG0AjVhzQcd",title:"About the use-case code pattern",frameborder:"0",allow:"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture",allowfullscreen:!0}),(0,r.yg)("h2",{id:"the-problem-too-many-details-too-soon"},"The problem: too many details, too soon"),(0,r.yg)("p",null,"Imagine a developer, returning to a codebase she hasn't touched in months, tasked with fixing a bug in the 'new orders flow'\u2014specifically, an issue with price calculation in an electronic shop app."),(0,r.yg)("p",null,"Her journey begins promisingly enough:"),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"- \ud83e\udd17 Testing -")," She starts her journey with the automated tests to learn about the flow in an outside-in approach. The testing code is short and standard, as it should be:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},'test("When adding an order with 100$ product, then the price charge should be 100$ ", async () => {\n // ....\n})\n')),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"- \ud83e\udd17 Controller -")," She moves on to skim through the implementation and starts from the API routes. Unsurprisingly, the Controller code is straightforward:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},'app.post("/api/order", async (req: Request, res: Response) => {\n const newOrder = req.body;\n await orderService.addOrder(newOrder); // \ud83d\udc48 This is where the real work is done\n res.status(200).json({ message: "Order created successfully" });\n});\n')),(0,r.yg)("p",null,"Smooth sailing thus far, almost zero complexity. Typically, the controller would now hand off to a Service where the real implementation begins, so she navigates into the order service to find where and how to fix that pricing bug."),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"- \ud83d\ude32 The service -")," Suddenly she is thrown into hundreds of lines of code (at best) with tons of details. She encounters classes with intricate states, inheritance hierarchies, a dependency injection framework that wires all the dependent services, and other boilerplate code. Here is a sneak peek from a real-world service, already simplified for brevity. Read it, feel it:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},'let DBRepository;\n\nexport class OrderService extends ServiceBase {\n async addOrder(orderRequest: OrderRequest): Promise {\n try {\n ensureDBRepositoryInitialized();\n const { openTelemetry, monitoring, secretManager, priceService, userService } =\n dependencyInjection.getVariousServices();\n logger.info("Add order flow starts now", orderRequest);\n openTelemetry.sendEvent("new order", orderRequest);\n\n const validationRules = await getFromConfigSystem("order-validation-rules");\n const validatedOrder = validateOrder(orderRequest, validationRules);\n if (!validatedOrder) {\n throw new Error("Invalid order");\n }\n this.base.startTransaction();\n const user = await userService.getUserInfo(validatedOrder.customerId);\n if (!user) {\n const savedOrder = await tryAddUserWithLegacySystem(validatedOrder);\n return savedOrder;\n }\n // And it goes on and on until the pricing module is mentioned\n}\n')),(0,r.yg)("p",null,"So many details and things to learn upfront; which of them are crucial for her to learn now before dealing with her task? 
How can she find where that pricing module is?"),(0,r.yg)("p",null,"She is not happy. Right off the bat, she must acquaint herself with a handful of product and technical narratives. She just fell off the complexity cliff: from a zero-complexity controller straight into a 1000-piece puzzle, many of whose pieces are unrelated to her task."),(0,r.yg)("h2",{id:"the-use-case-pattern"},"The use-case pattern"),(0,r.yg)("p",null,"In a perfect world, she would love to first get a high-level brief of the involved steps so she can understand the whole flow, and from this comfortable standpoint choose where to deepen her journey. This is what this pattern is all about."),(0,r.yg)("p",null,"The use-case is a file with a single function that is called by the API controller to orchestrate the various implementation services. It's merely a simple function that enumerates and calls the code that does the actual job:"),(0,r.yg)("p",null,(0,r.yg)("img",{alt:"A use-case code example",src:a(132).A,width:"1321",height:"444"})),(0,r.yg)("p",null,"Each interaction with the system\u2014whether it's posting a new comment, requesting user deletion, or any other action\u2014is managed by a dedicated use-case function. Each use-case consists of multiple 'steps' - function calls that fulfill the desired flow."),(0,r.yg)("p",null,"By design, it's short and flat: no If/else, no try-catch, no algorithms, just plain calls to functions. This way, it tells the story in the simplest manner. Note how it doesn't share too many details, but tells enough for one to understand 'WHAT' is happening here and 'WHO' is doing it, but not 'HOW'."),(0,r.yg)("p",null,"But why is this minimalistic approach so crucial?"),(0,r.yg)("h2",{id:"the-merits"},"The merits"),(0,r.yg)("h3",{id:"1-a-navigation-index"},"1. A navigation index"),(0,r.yg)("p",null,"When seeking a specific book in the local library, the visitor doesn't have to skim through all the shelves to find a specific topic of interest. A library, like any other information system, uses a navigational system, wayfinding signage, to highlight the path to a specific information area."),(0,r.yg)("p",null,(0,r.yg)("img",{alt:"Library catalog",src:a(4186).A,width:"1792",height:"1024"}),"\n",(0,r.yg)("em",{parentName:"p"},"The library catalog redirects the reader to the area of interest")),(0,r.yg)("p",null,"Similarly, in software development, when a developer needs to address a particular issue\u2014such as fixing a bug in pricing calculations\u2014the 'use case' acts like a navigational tool within the application. It serves as a hitchhiker's guide, or the yellow pages, pinpointing exactly where to find the necessary piece of code. While other organizational strategies like modularization and folder structures offer ways to manage code, the 'use case' approach provides a more focused and precise index. It shows only the relevant areas (and not 50 unrelated modules); it tells ",(0,r.yg)("em",{parentName:"p"},"when precisely")," each module is used, what the ",(0,r.yg)("em",{parentName:"p"},"specific")," entry point is, and which ",(0,r.yg)("em",{parentName:"p"},"exact")," parameters are passed."),(0,r.yg)("h3",{id:"2-deferred-and-spread-complexity"},"2. Deferred and spread complexity"),
(0,r.yg)("p",null,"When the code reader's journey starts at the level of implementation-services, she is immediately bombarded with intricate details and exposed to both product and technical complexities right from the start. Typically, like in our example case, the code first uses a dependency injection system to construct some classes, checks for nulls in the state, and gets some values from the distributed config system - all before even starting on the primary task. This is called ",(0,r.yg)("em",{parentName:"p"},"accidental complexity"),". Tackling complexity is one of the finest arts of app design: as the code planner, you can't just eliminate complexity, but you may at least reduce the chances of someone meeting it."),(0,r.yg)("p",null,"Imagine your application as a tree where branches represent functions and the fruits are pockets of embedded complexity, some of which are poisoned (i.e., unnecessary complexities). Your objective is to structure this tree so that navigating through it exposes the visitor to as few poisoned fruits as possible:"),(0,r.yg)("p",null,(0,r.yg)("img",{alt:"The blocking-complexity tree",src:a(7951).A,width:"792",height:"760"}),"\n",(0,r.yg)("em",{parentName:"p"},"The accidental-complexity tree: A visitor aiming to reach a specific leaf must navigate through all the intervening poisoned fruits.")),(0,r.yg)("p",null,"This is where the 'Use Case' approach shines: by prioritizing high-level product steps and minimal technical details at the outset, it acts as a navigation system that simplifies access to various parts of the application. With this navigation tool, she can easily ignore steps that are unrelated to her work, and avoid poisoned fruits. A true strategic design win."),(0,r.yg)("p",null,(0,r.yg)("img",{alt:"The spread-complexity tree",src:a(9635).A,width:"792",height:"760"}),"\n",(0,r.yg)("em",{parentName:"p"},"The spread-complexity tree: Complexity is pushed to the periphery, allowing the reader to navigate directly to the essential fruits only.")),(0,r.yg)("h3",{id:"3-a-practical-workflow-that-promotes-efficiency"},"3. A practical workflow that promotes efficiency"),(0,r.yg)("p",null,"When embarking on a new coding flow, where do you start? After digesting the requirements and setting up some initial API routes and high-level component tests, the next logical step might be less obvious. Here's a strategy: begin with a use-case. This approach promotes an outside-in workflow that not only streamlines development but also exposes potential risks early on."),(0,r.yg)("p",null,"While drafting a new use-case, you essentially map out the various steps of the process. Each step is a call to some service or repository function, sometimes before it even exists. Effortlessly and spontaneously, these steps become your TODO list, a live document that tells not only what should be implemented but also where risky gotchas hide. 
Take, for instance, this straightforward use-case for adding an order:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},"export async function addOrderUseCase(orderRequest: OrderRequest) {\n const orderWithPricing = calculateOrderPricing(orderRequest);\n const purchasingCustomer = await assertCustomerExists(orderWithPricing.customerId);\n const savedOrder = await insertOrder(orderWithPricing);\n await sendSuccessEmailToCustomer(savedOrder, purchasingCustomer.email);\n}\n")),(0,r.yg)("p",null,"This structured approach allows you to preemptively tackle potential implementation hurdles:"),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"- sendSuccessEmailToCustomer -")," What if you lack a necessary email service token from the Ops team? Sometimes, this demands approval and might take more than a week (believe me, I know). Acting ",(0,r.yg)("em",{parentName:"p"},"now"),", before spending 3 days on coding, can make a big difference."),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"- calculateOrderPricing -")," Reminds you to confirm pricing details with the product team\u2014ideally before they're out of office, avoiding delays that could impact your delivery timeline."),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"- assertCustomerExists -")," This call goes to an external Microservice which belongs to the User Management team. Did they already provide an OpenAPI specification of their routes? Check your Slack now; if they haven't yet, asking early can prevent it from becoming a roadblock later."),(0,r.yg)("p",null,"Not only does this high-level thinking highlight your tasks and risks, it's also an optimal spot to start the design from:"),(0,r.yg)("h3",{id:"4-the-optimal-design-viewpoint"},"4. The optimal design viewpoint"),(0,r.yg)("p",null,"Early on, when initiating a use-case, the developers define the various types, function signatures, and their initial skeleton return data. This process naturally evolves into an effective design drill where the overall flow is decomposed into small units that actually fit. This sketching results in discovering early when puzzle pieces don't fit, while considering the underlying technologies. Here is an example: once I sketched a use-case and initially came up with these steps:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},"await sendSuccessEmailToCustomer(savedOrder, purchasingCustomer.email, orderId);\nconst savedOrder = await insertOrder(orderWithPricing);\n")),(0,r.yg)("p",null,"Going with my initial use-case above, an email is sent before the order is saved. Soon enough the compiler yelled at me: the email function signature is not satisfied, an 'Order Id' parameter is needed, but to obtain one the order must be saved to the DB first. I tried to change the order of the steps; unfortunately, it turned out that my ORM does not return the ID of saved entities. I'm stuck and my design struggles, but at least this is realized before spending days on details. Unlike designing with papers and UML, designing with a use-case brings no overhead. Moreover, unlike high-level diagrams detached from implementation realities, use-case design is grounded in the actual constraints of the technology being used."),(0,r.yg)("h3",{id:"5-better-coverage-reports"},"5. Better coverage reports"),(0,r.yg)("p",null,"Say you have 82.35% test coverage: are you happy and confident enough to deploy? 
I'd suggest that anyone below 100% must first clarify which code ",(0,r.yg)("em",{parentName:"p"},"exactly")," is not covered by tests. Is this some nitty-gritty niche code or actually critical business operations that are not fully tested? Typically, answering this requires scrutinizing the coverage of every file in the app, a daunting task."),(0,r.yg)("p",null,"Use-cases simplify the coverage digest: when looking directly into the use-cases folder, one gets ",(0,r.yg)("em",{parentName:"p"},"'features coverage'"),", a unique look into which user features and steps lack testing:"),(0,r.yg)("p",null,(0,r.yg)("img",{alt:"Use case coverage",src:a(2899).A,width:"1327",height:"713"}),"\n",(0,r.yg)("em",{parentName:"p"},"The use-cases folder test coverage report; some use-cases are only partially tested")),(0,r.yg)("p",null,"See how the code above has excellent overall coverage, 82.35%. But what about the remaining 17.65% of the code? Looking at the report triggers a red flag: the unusual 'payment-use-case' is not tested. This flow is where revenues are generated, a critical financial process which, as it turns out, has very low test coverage. This significant observation calls for immediate action. Use-case coverage thus not only helps in understanding what parts of your application are tested but also prioritizes testing efforts based on business criticality rather than mere technical functionality."),(0,r.yg)("h3",{id:"6-practical-domain-driven-code"},"6. Practical domain-driven code"),(0,r.yg)("p",null,'The influential book "Domain-Driven Design" advocates for "committing the team to relentlessly exercise the domain language in all communications within the team and in the code." This principle asserts that aligning code closely with product narratives fosters a common language among diverse stakeholders (e.g., product, team-leads, frontend, backend). While this sounds sensible, this advice is also a little vague - how and where should this happen?'),(0,r.yg)("p",null,"Use-cases bring this idea down to earth: the use-case files are named after user journeys in the system (e.g., purchase-new-goods), and the use-case code itself naturally describes the flow in a product language. For instance, if employees commonly use the term 'cut' at the water cooler to refer to a price reduction, the corresponding use-case should employ a function named 'calculatePriceCut'. This naming convention not only reinforces the domain language but also enhances mutual understanding across the team."),(0,r.yg)("h3",{id:"7-consistent-observability"},"7. Consistent observability"),(0,r.yg)("p",null,"I bet you've encountered the situation where you turn the log level to 'Debug' (or any other verbose mode) and get an overwhelming, unbearable flood of log statements. Chances are you've also met the opposite: the logger level is set to 'Info' but there is almost zero logging for the specific route you're looking into. It's hard to formalize among team members when exactly each type of logging should be invoked; the typical result is inconsistent and lacking observability."),(0,r.yg)("p",null,"Use-cases can drive trustworthy and consistent monitoring by taking advantage of the produced use-case steps. Since the precious work of breaking down the flow into meaningful steps was already done (e.g., send-email, charge-credit-card), each step can produce the desired level of logging. 
For example, one team's approach might be to emit logger.info on a use-case start and use-case end, and then each step will emit logger.debug. Whatever the chosen level is, use-case steps bring consistency and automation. Logging aside, the same can be applied to any other observability technique, like OpenTelemetry, to produce custom spans for every flow step."),(0,r.yg)("p",null,"The implementation, though, demands some thinking: cluttering every step with a log statement is both verbose and depends on manual human work:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},'// \u2757\ufe0fVerbose use case\nexport async function addOrderUseCase(orderRequest: OrderRequest): Promise {\n logger.info("Add order use case - Adding order starts now", orderRequest);\n const validatedOrder = validateAndCoerceOrder(orderRequest);\n logger.debug("Add order use case - The order was validated", validatedOrder);\n const orderWithPricing = calculateOrderPricing(validatedOrder);\n logger.debug("Add order use case - The order pricing was decided", orderWithPricing);\n const purchasingCustomer = await assertCustomerHasEnoughBalance(orderWithPricing);\n logger.debug("Add order use case - Verified the user balance already", purchasingCustomer);\n const returnOrder = mapFromRepositoryToDto(purchasingCustomer as unknown as OrderRecord);\n logger.info("Add order use case - About to return result", returnOrder);\n return returnOrder;\n}\n')),(0,r.yg)("p",null,"One way around this is creating a step wrapper function that makes each step observable. This wrapper function will get called for each step:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},'import { openTelemetry } from "@opentelemetry";\nasync function runUseCaseStep(stepName, stepFunction) {\n logger.debug(`Use case step ${stepName} starts now`);\n // Create Open Telemetry custom span\n openTelemetry.startSpan(stepName);\n return await stepFunction();\n}\n')),(0,r.yg)("p",null,"Now the use-case gets automated and consistent transparency:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},'export async function addOrderUseCase(orderRequest: OrderRequest) {\n // \ud83d\uddbc This is a use case - the story of the flow. Only simple, flat and high-level code is allowed\n const validatedOrder = await runUseCaseStep("Validation", validateAndCoerceOrder.bind(null, orderRequest));\n const orderWithPricing = await runUseCaseStep("Calculate price", calculateOrderPricing.bind(null, validatedOrder));\n await runUseCaseStep("Send email", sendSuccessEmailToCustomer.bind(null, orderWithPricing));\n}\n')),(0,r.yg)("p",null,"The code is a little simplified; in a real-world wrapper you'll have to add try-catch and cover other corner cases, but it makes the point: each step is a meaningful milestone in the user's journey that gets ",(0,r.yg)("em",{parentName:"p"},"automated and consistent")," observability."),(0,r.yg)("h2",{id:"implementation-best-practices"},"Implementation best practices"),(0,r.yg)("h3",{id:"1-dead-simple-no-code"},"1. Dead-simple 'no code'"),(0,r.yg)("p",null,"Since use-cases are mostly about zero complexity, use no code constructs, just flat calls to functions. No If/Else, no switch, no try/catch; nothing but a simple list of steps. 
A while ago I decided to put ",(0,r.yg)("em",{parentName:"p"},"only one")," If/Else in a use-case: "),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},"export async function addOrderUseCase(orderRequest: OrderRequest) {\n const validatedOrder = validateAndCoerceOrder(orderRequest);\n const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);\n if (purchasingCustomer.isPremium) {//\u2757\ufe0f\n sendEmailToPremiumCustomer(purchasingCustomer);\n // This easily will grow with time to multiple if/else\n }\n}\n")),(0,r.yg)("p",null,"A month later, when I visited the code above, there were already three nested If/Elses. A year from now, the function above will host typical imperative code with many nested branches. Avoid this slippery road by drawing a very strict border: put the conditions within the step functions:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},"export async function addOrderUseCase(orderRequest: OrderRequest) {\n const validatedOrder = validateAndCoerceOrder(orderRequest);\n const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);\n await sendEmailIfPremiumCustomer(purchasingCustomer); //\ud83d\ude42\n}\n")),(0,r.yg)("h3",{id:"2-find-the-right-level-of-specificity"},"2. Find the right level of specificity"),(0,r.yg)("p",null,"The finest art of a great use case is finding the right level of detail. At this early stage, the reader is like a traveler who uses the map to get some sense of the area, or to find a specific road, definitely not to learn about every road in the country. On the other hand, a good map doesn't show only the main highway and nothing else. For example, the following use-case is too short and vague:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},"export async function addOrderUseCase(orderRequest: OrderRequest) {\n const validatedOrder = validateAndCoerceOrder(orderRequest);\n const finalOrderToSave = await applyAllBusinessLogic(validatedOrder);//\ud83e\udd14\n await insertOrder(finalOrderToSave);\n}\n")),(0,r.yg)("p",null,"The code above doesn't tell a story, nor does it eliminate any paths from the journey. Conversely, the following code does better at telling the story in brief:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},"export async function addOrderUseCase(orderRequest: OrderRequest) {\n const validatedOrder = validateAndCoerceOrder(orderRequest);\n const pricedOrder = await calculatePrice(validatedOrder);\n const purchasingCustomer = await assertCustomerHasEnoughBalance(pricedOrder);\n const orderWithShippingInstructions = await addShippingInfo(pricedOrder, purchasingCustomer);\n await insertOrder(orderWithShippingInstructions);\n}\n")),(0,r.yg)("p",null,"Things get a little more challenging when dealing with long flows. What if there are a handful of important steps, say 20? What if multiple use-cases have a lot of repetition and shared steps? Consider the case where 'admin approval' is a multi-step process which is invoked by a handful of different use-cases. When facing this, consider breaking the flow down into multiple use-cases where one is allowed to call the other."),(0,r.yg)("h3",{id:"3-when-have-no-choice-control-the-db-transaction-from-the-use-case"},"3. When you have no choice, control the DB transaction from the use-case"),(0,r.yg)("p",null,"What if step 2 and step 5 both deal with data and must be atomic (fail or succeed together)? 
Typically you'll handle this with DB transactions, but since each step is discrete, how can a transaction be shared among the coupled steps?"),(0,r.yg)("p",null,"If the steps take place one after the other, it makes sense to let the downstream service/repository handle them together and abstract the transaction from the use-case. What if the atomic steps are not consecutive? In this case, though not ideal, there is no escape from making the use-case acquainted with a transaction object:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},"export async function addOrderUseCase(orderRequest: OrderRequest) {\n // \ud83d\uddbc This is a use case - the story of the flow. Only simple, flat and high-level code is allowed\n const transaction = Repository.startTransaction();\n const purchasingCustomer = await assertCustomerHasEnoughBalance(orderRequest, transaction);\n const orderWithPricing = calculateOrderPricing(purchasingCustomer);\n const savedOrder = await insertOrder(orderWithPricing, transaction);\n const returnOrder = mapFromRepositoryToDto(savedOrder);\n Repository.commitTransaction(transaction);\n return returnOrder;\n}\n")),(0,r.yg)("h3",{id:"4-aggregate-small-use-cases-in-a-single-file"},"4. Aggregate small use-cases in a single file"),(0,r.yg)("p",null,"A use-case file is created per user-flow that is triggered from an API route. This model makes sense for significant flows, but how about small operations like getting an order by id? A 'get-order-by-id' use case is likely to have one line of code, so it seems like unnecessary overhead to create a use-case file for every small request. In this case, consider aggregating multiple operations under a single conceptual use-case file. In the example below, all the order queries co-live under the query-orders use-case file:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},"// query-orders-use-cases.ts\nexport async function getOrder(id) {\n // \ud83d\uddbc This is a use case - the story of the flow. Only simple, flat and high-level code is allowed\n const result = await orderRepository.getOrderByID(id);\n return result;\n}\n\nexport async function getAllOrders(criteria) {\n // \ud83d\uddbc This is a use case - the story of the flow. Only simple, flat and high-level code is allowed\n const result = await orderRepository.queryOrders(criteria);\n return result;\n}\n")),(0,r.yg)("h2",{id:"closing-easy-to-start-use-everywhere"},"Closing: Easy to start, use everywhere"),(0,r.yg)("p",null,"If you find it valuable, you'll also get a great return for your modest investment: no fancy tooling is needed, and the learning time is close to zero (in fact, you just read one of the longest articles on this matter...). There is also no need to refactor a whole system; rather, implement it gradually, per feature."),(0,r.yg)("p",null,"Once you become accustomed to using it, you'll find that this technique extends well beyond API routes. It's equally beneficial for managing message queue subscriptions and scheduled jobs. Backend aside, use it as the facade of every module or library - the code that is called by the entry file and orchestrates the internals. The same idea can be applied in Frontend as well: declare the core actors at the component top level. 
Without implementation details, just put the reference to the component's event handlers and hooks - now the reader knows about the key events that will drive this component."),(0,r.yg)("p",null,"You might think this all sounds remarkably straightforward\u2014and it is. My apologies, this article wasn't about cutting-edge technologies. Neither did it cover shiny new dev toolings or AI-based rocket-science. In a land where complexity is the key enemy, simple ideas can be more impactful than sophisticated tooling and the Use-case is a powerful and sweet pattern that meant to live in every piece of software."))}u.isMDXComponent=!0},7951:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/blocking-complexity-tree-dd1cde956e00160fe4fadf67d6dd3649.jpg"},9635:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/deferred-complexity-tree-3407b9e6f355d2e32aacfc0bd7216de4.jpg"},4186:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/library-catalog-37d0f18aa61b71ed77ae72a945f3c1de.webp"},2899:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/use-case-coverage-3f223674f7783dfc904109647ad99304.png"},132:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/use-code-example-6d6c34330ad8a86f7c511123d4d5f654.png"}}]);
\ No newline at end of file
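
One caveat about the transaction-controlling use-case shown in the bundle above: if any step throws, the transaction is never rolled back. Below is a minimal sketch of the same flow with rollback handling, reusing only names from the article; `Repository.rollbackTransaction` is a hypothetical counterpart to the `startTransaction`/`commitTransaction` calls the article shows, not a confirmed API:

```javascript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 Still a use case - the story stays flat; only the try/catch is added
  const transaction = Repository.startTransaction();
  try {
    const purchasingCustomer = await assertCustomerHasEnoughBalance(orderRequest, transaction);
    const orderWithPricing = calculateOrderPricing(purchasingCustomer);
    const savedOrder = await insertOrder(orderWithPricing, transaction);
    Repository.commitTransaction(transaction);
    return mapFromRepositoryToDto(savedOrder);
  } catch (error) {
    // Hypothetical API: undo any partial DB work before re-throwing to the error handler
    Repository.rollbackTransaction(transaction);
    throw error;
  }
}
```
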
diff --git a/assets/js/621e7957.f3ba0d17.js b/assets/js/621e7957.f3ba0d17.js
new file mode 100644
index 00000000..816dc44c
--- /dev/null
+++ b/assets/js/621e7957.f3ba0d17.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[1277],{5680:(e,t,r)=>{r.d(t,{xA:()=>p,yg:()=>f});var a=r(6540);function o(e,t,r){return t in e?Object.defineProperty(e,t,{value:r,enumerable:!0,configurable:!0,writable:!0}):e[t]=r,e}function i(e,t){var r=Object.keys(e);if(Object.getOwnPropertySymbols){var a=Object.getOwnPropertySymbols(e);t&&(a=a.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),r.push.apply(r,a)}return r}function n(e){for(var t=1;t=0||(o[r]=e[r]);return o}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(a=0;a=0||Object.prototype.propertyIsEnumerable.call(e,r)&&(o[r]=e[r])}return o}var l=a.createContext({}),c=function(e){var t=a.useContext(l),r=t;return e&&(r="function"==typeof e?e(t):n(n({},t),e)),r},p=function(e){var t=c(e.components);return a.createElement(l.Provider,{value:t},e.children)},d="mdxType",u={inlineCode:"code",wrapper:function(e){var t=e.children;return a.createElement(a.Fragment,{},t)}},g=a.forwardRef((function(e,t){var r=e.components,o=e.mdxType,i=e.originalType,l=e.parentName,p=s(e,["components","mdxType","originalType","parentName"]),d=c(r),g=o,f=d["".concat(l,".").concat(g)]||d[g]||u[g]||i;return r?a.createElement(f,n(n({ref:t},p),{},{components:r})):a.createElement(f,n({ref:t},p))}));function f(e,t){var r=arguments,o=t&&t.mdxType;if("string"==typeof e||o){var i=r.length,n=new Array(i);n[0]=g;var s={};for(var l in t)hasOwnProperty.call(t,l)&&(s[l]=t[l]);s.originalType=e,s[d]="string"==typeof e?e:o,n[1]=s;for(var c=2;c{r.r(t),r.d(t,{assets:()=>l,contentTitle:()=>n,default:()=>u,frontMatter:()=>i,metadata:()=>s,toc:()=>c});var a=r(8168),o=(r(6540),r(5680));const i={slug:"practica-is-alive",date:"2022-07-15T10:00",hide_table_of_contents:!0,title:"Practica.js v0.0.1 is alive",authors:["goldbergyoni"],tags:["node.js","express","fastify"]},n="Practica.js v0.0.1 is alive",s={permalink:"/blog/practica-is-alive",editUrl:"https://github.com/practicajs/practica/tree/main/docs/blog/practica-is-alive/index.md",source:"@site/blog/practica-is-alive/index.md",title:"Practica.js v0.0.1 is alive",description:"\ud83e\udd73 We're thrilled to launch the very first version of Practica.js.",date:"2022-07-15T10:00:00.000Z",formattedDate:"July 15, 2022",tags:[{label:"node.js",permalink:"/blog/tags/node-js"},{label:"express",permalink:"/blog/tags/express"},{label:"fastify",permalink:"/blog/tags/fastify"}],readingTime:1.21,hasTruncateMarker:!1,authors:[{name:"Yoni Goldberg",title:"Practica.js core maintainer",url:"https://github.com/goldbergyoni",imageURL:"https://github.com/goldbergyoni.png",key:"goldbergyoni"}],frontMatter:{slug:"practica-is-alive",date:"2022-07-15T10:00",hide_table_of_contents:!0,title:"Practica.js v0.0.1 is alive",authors:["goldbergyoni"],tags:["node.js","express","fastify"]},prevItem:{title:"Popular Node.js patterns and tools to re-consider",permalink:"/blog/popular-nodejs-pattern-and-tools-to-reconsider"}},l={authorsImageUrls:[void 0]},c=[{value:"What is Practica is one paragraph",id:"what-is-practica-is-one-paragraph",level:2},{value:"90 seconds video",id:"90-seconds-video",level:2},{value:"How to get started",id:"how-to-get-started",level:2}],p={toc:c},d="wrapper";function u(e){let{components:t,...r}=e;return(0,o.yg)(d,(0,a.A)({},p,r,{components:t,mdxType:"MDXLayout"}),(0,o.yg)("p",null,"\ud83e\udd73 We're thrilled to launch the very first version of Practica.js."),(0,o.yg)("h2",{id:"what-is-practica-is-one-paragraph"},"What is Practica is one 
paragraph"),(0,o.yg)("p",null,"Although Node.js has great frameworks \ud83d\udc9a, they were never meant to be production ready immediately. Practica.js aims to bridge the gap. Based on your preferred framework, we generate some example code that demonstrates a full workflow, from API to DB, that is packed with good practices. For example, we include a hardened dockerfile, N-Tier folder structure, great testing templates, and more. This saves a great deal of time and can prevent painful mistakes. All decisions made are ",(0,o.yg)("a",{parentName:"p",href:"./decisions/index"},"neatly and thoughtfully documented"),". We strive to keep things as simple and standard as possible and base our work off the popular guide: ",(0,o.yg)("a",{parentName:"p",href:"https://github.com/goldbergyoni/nodebestpractices"},"Node.js Best Practices"),"."),(0,o.yg)("p",null,"Your developer experience would look as follows: Generate our starter using the CLI and get an example Node.js solution. This solution is a typical Monorepo setup with an example Microservice and libraries. All is based on super-popular libraries that we merely stitch together. It also constitutes tons of optimization - linters, libraries, Monorepo configuration, tests and much more. Inside the example Microservice you'll find an example flow, from API to DB. Based on this, you can modify the entity and DB fields and build you app. "),(0,o.yg)("h2",{id:"90-seconds-video"},"90 seconds video"),(0,o.yg)("iframe",{width:"1024",height:"768",src:"https://www.youtube.com/embed/F6kAs2VEcKw",title:"YouTube video player",frameborder:"0",allow:"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture",allowfullscreen:!0}),(0,o.yg)("h2",{id:"how-to-get-started"},"How to get started"),(0,o.yg)("p",null,"To get up to speed quickly, read our ",(0,o.yg)("a",{parentName:"p",href:"https://practica.dev/the-basics/getting-started-quickly"},"getting started guide"),"."))}u.isMDXComponent=!0}}]);
\ No newline at end of file
diff --git a/assets/js/6739c067.e5f6376f.js b/assets/js/6739c067.e5f6376f.js
new file mode 100644
index 00000000..4c12a524
--- /dev/null
+++ b/assets/js/6739c067.e5f6376f.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[8022],{6746:o=>{o.exports=JSON.parse('{"permalink":"/blog/tags/workflow","page":1,"postsPerPage":10,"totalPages":1,"totalCount":1,"blogDescription":"Blog","blogTitle":"Blog"}')}}]);
\ No newline at end of file
diff --git a/assets/js/6875c492.6c420d12.js b/assets/js/6875c492.6c420d12.js
new file mode 100644
index 00000000..d17b3781
--- /dev/null
+++ b/assets/js/6875c492.6c420d12.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[4813],{7713:(e,t,a)=>{a.d(t,{A:()=>s});var n=a(6540),l=a(1312),r=a(9022);function s(e){const{metadata:t}=e,{previousPage:a,nextPage:s}=t;return n.createElement("nav",{className:"pagination-nav","aria-label":(0,l.T)({id:"theme.blog.paginator.navAriaLabel",message:"Blog list page navigation",description:"The ARIA label for the blog pagination"})},a&&n.createElement(r.A,{permalink:a,title:n.createElement(l.A,{id:"theme.blog.paginator.newerEntries",description:"The label used to navigate to the newer blog posts page (previous page)"},"Newer Entries")}),s&&n.createElement(r.A,{permalink:s,title:n.createElement(l.A,{id:"theme.blog.paginator.olderEntries",description:"The label used to navigate to the older blog posts page (next page)"},"Older Entries"),isNext:!0}))}},3892:(e,t,a)=>{a.d(t,{A:()=>s});var n=a(6540),l=a(7131),r=a(8258);function s(e){let{items:t,component:a=r.A}=e;return n.createElement(n.Fragment,null,t.map((e=>{let{content:t}=e;return n.createElement(l.i,{key:t.metadata.permalink,content:t},n.createElement(a,null,n.createElement(t,null)))})))}},3069:(e,t,a)=>{a.r(t),a.d(t,{default:()=>E});var n=a(6540),l=a(53),r=a(1312),s=a(5846),o=a(1003),i=a(7559),c=a(5489),g=a(6669),m=a(7713),p=a(1463),u=a(3892);function d(e){const t=function(){const{selectMessage:e}=(0,s.W)();return t=>e(t,(0,r.T)({id:"theme.blog.post.plurals",description:'Pluralized label for "{count} posts". Use as much plural forms (separated by "|") as your language support (see https://www.unicode.org/cldr/cldr-aux/charts/34/supplemental/language_plural_rules.html)',message:"One post|{count} posts"},{count:t}))}();return(0,r.T)({id:"theme.blog.tagTitle",description:"The title of the page for a blog tag",message:'{nPosts} tagged with "{tagName}"'},{nPosts:t(e.count),tagName:e.label})}function h(e){let{tag:t}=e;const a=d(t);return n.createElement(n.Fragment,null,n.createElement(o.be,{title:a}),n.createElement(p.A,{tag:"blog_tags_posts"}))}function b(e){let{tag:t,items:a,sidebar:l,listMetadata:s}=e;const o=d(t);return n.createElement(g.A,{sidebar:l},n.createElement("header",{className:"margin-bottom--xl"},n.createElement("h1",null,o),n.createElement(c.A,{href:t.allTagsPath},n.createElement(r.A,{id:"theme.tags.tagsPageLink",description:"The label of the link targeting the tag list page"},"View All Tags"))),n.createElement(u.A,{items:a}),n.createElement(m.A,{metadata:s}))}function E(e){return n.createElement(o.e3,{className:(0,l.A)(i.G.wrapper.blogPages,i.G.page.blogTagPostListPage)},n.createElement(h,e),n.createElement(b,e))}}}]);
\ No newline at end of file
diff --git a/assets/js/69404bc7.f6bfce87.js b/assets/js/69404bc7.f6bfce87.js
new file mode 100644
index 00000000..a455af96
--- /dev/null
+++ b/assets/js/69404bc7.f6bfce87.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[9480],{5680:(e,t,a)=>{a.d(t,{xA:()=>c,yg:()=>g});var r=a(6540);function n(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function o(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,r)}return a}function i(e){for(var t=1;t=0||(n[a]=e[a]);return n}(e,t);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(n[a]=e[a])}return n}var l=r.createContext({}),d=function(e){var t=r.useContext(l),a=t;return e&&(a="function"==typeof e?e(t):i(i({},t),e)),a},c=function(e){var t=d(e.components);return r.createElement(l.Provider,{value:t},e.children)},p="mdxType",u={inlineCode:"code",wrapper:function(e){var t=e.children;return r.createElement(r.Fragment,{},t)}},h=r.forwardRef((function(e,t){var a=e.components,n=e.mdxType,o=e.originalType,l=e.parentName,c=s(e,["components","mdxType","originalType","parentName"]),p=d(a),h=n,g=p["".concat(l,".").concat(h)]||p[h]||u[h]||o;return a?r.createElement(g,i(i({ref:t},c),{},{components:a})):r.createElement(g,i({ref:t},c))}));function g(e,t){var a=arguments,n=t&&t.mdxType;if("string"==typeof e||n){var o=a.length,i=new Array(o);i[0]=h;var s={};for(var l in t)hasOwnProperty.call(t,l)&&(s[l]=t[l]);s.originalType=e,s[p]="string"==typeof e?e:n,i[1]=s;for(var d=2;d{a.r(t),a.d(t,{assets:()=>l,contentTitle:()=>i,default:()=>u,frontMatter:()=>o,metadata:()=>s,toc:()=>d});var r=a(8168),n=(a(6540),a(5680));const o={sidebar_position:3},i="Coding with Practica",s={unversionedId:"the-basics/coding-with-practica",id:"the-basics/coding-with-practica",title:"Coding with Practica",description:"Now that you have Practice installed (if not, do this first), it's time to code a great app using it and understand its unique power. This journey will inspire you with good patterns and practices. All the concepts in this guide are not our unique ideas, quite the opposite, they are all standard patterns or libraries that we just put together. 
In this tutorial we will implement a simple feature using Practica, ready?",source:"@site/docs/the-basics/coding-with-practica.md",sourceDirName:"the-basics",slug:"/the-basics/coding-with-practica",permalink:"/the-basics/coding-with-practica",draft:!1,editUrl:"https://github.com/practicajs/practica/tree/main/docs/docs/the-basics/coding-with-practica.md",tags:[],version:"current",sidebarPosition:3,frontMatter:{sidebar_position:3},sidebar:"tutorialSidebar",previous:{title:"getting-started-quickly",permalink:"/the-basics/getting-started-quickly"},next:{title:"README",permalink:"/decisions/"}},l={},d=[{value:"Pre-requisites",id:"pre-requisites",level:2},{value:"What's inside that box?",id:"whats-inside-that-box",level:2},{value:"Running and testing the solution",id:"running-and-testing-the-solution",level:2},{value:"The 3 layers of a component",id:"the-3-layers-of-a-component",level:2},{value:"Let's code a flow from API to DB and in return",id:"lets-code-a-flow-from-api-to-db-and-in-return",level:2}],c={toc:d},p="wrapper";function u(e){let{components:t,...o}=e;return(0,n.yg)(p,(0,r.A)({},c,o,{components:t,mdxType:"MDXLayout"}),(0,n.yg)("h1",{id:"coding-with-practica"},"Coding with Practica"),(0,n.yg)("p",null,"Now that you have Practice installed (if not, ",(0,n.yg)("a",{parentName:"p",href:"/the-basics/getting-started-quickly"},"do this first"),"), it's time to code a great app using it and understand its unique power. This journey will inspire you with good patterns and practices. All the concepts in this guide are not our unique ideas, quite the opposite, they are all standard patterns or libraries that we just put together. In this tutorial we will implement a simple feature using Practica, ready?"),(0,n.yg)("h2",{id:"pre-requisites"},"Pre-requisites"),(0,n.yg)("p",null,"Just before you start coding, ensure you have ",(0,n.yg)("a",{parentName:"p",href:"https://www.docker.com/"},"Docker")," and ",(0,n.yg)("a",{parentName:"p",href:"https://github.com/nvm-sh/nvm#installing-and-updating"},"nvm")," (a utility that installs Node.js) installed. Both are common development tooling that are considered as a 'good practice'."),(0,n.yg)("h2",{id:"whats-inside-that-box"},"What's inside that box?"),(0,n.yg)("p",null,"You now have a folder with Practica code. What will you find inside this box? Practica created for you an example Node.js solution with a single component (API, Microservice) that is called 'order-service'. Of course you'll change its name to something that represents your solution. Inside, it packs a lot of thoughtful and standard optimizations that will save you countless hours doing what others have done before."),(0,n.yg)("p",null,"Besides this component, there are also a bunch of reusable libraries like logger, error-handler and more. All sit together under a single root folder in a single Git repository - this popular structure is called a 'Monorepo'."),(0,n.yg)("p",null,(0,n.yg)("img",{alt:"Monorepos",src:a(6642).A,width:"2996",height:"1729"}),"\n",(0,n.yg)("em",{parentName:"p"},"A typical Monorepo structure")),(0,n.yg)("p",null,"The code inside is coded with Node.js, TypeScript, express and Postgresql. Later version of Practica.js will support more frameworks."),(0,n.yg)("h2",{id:"running-and-testing-the-solution"},"Running and testing the solution"),(0,n.yg)("p",null,"A minute before we start coding, let's ensure the solution starts and the tests pass. 
This will give us confidence to add more and more code knowing that we have a valid checkpoint (and tests to watch our back)."),(0,n.yg)("p",null,"Just run the following standard commands:"),(0,n.yg)("ol",null,(0,n.yg)("li",{parentName:"ol"},"CD into the solution folder")),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-bash"},"cd {your-solution-folder}\n")),(0,n.yg)("ol",{start:2},(0,n.yg)("li",{parentName:"ol"},"Install the right Node.js version")),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-bash"},"nvm use\n")),(0,n.yg)("ol",{start:3},(0,n.yg)("li",{parentName:"ol"},"Install dependencies")),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-bash"},"npm install\n")),(0,n.yg)("ol",{start:4},(0,n.yg)("li",{parentName:"ol"},"Run the tests")),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-bash"},"npm test\n")),(0,n.yg)("p",null,"Tests pass? Great! \ud83e\udd73\u2705 "),(0,n.yg)("p",null,"They fail? oppss, this does not happen too often. Please approach our ",(0,n.yg)("a",{parentName:"p",href:"https://discord.com/invite/SrM68BJPqR"},"discord")," or open an issue in ",(0,n.yg)("a",{parentName:"p",href:"https://github.com/practicajs/practica/issues"},"Github"),"? We will try to assist shortly"),(0,n.yg)("ol",{start:5},(0,n.yg)("li",{parentName:"ol"},"Optional: Start the app and check with Postman")),(0,n.yg)("p",null,"Some rely on testing only, others like also to invoke routes using POSTMAN and test manually. We're good with both approach and recommend down the road to rely more and more on testing. Practica includes testing templates that are easy to write"),(0,n.yg)("p",null,"Start the process first by navigating to the example component (order-service):"),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-bash"},"cd services/order-service\n")),(0,n.yg)("p",null,"Start the DB using Docker and install tables (migration):"),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-bash"},"docker-compose -f ./test/docker-compose.yml up\n")),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-bash"},"npm run db:migrate\n")),(0,n.yg)("p",null,"This step is not necessary for running tests as it will happen automatically"),(0,n.yg)("p",null,"Then start the app:"),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-bash"},"npm start\n")),(0,n.yg)("p",null,"Now visit our ",(0,n.yg)("a",{parentName:"p",href:"https://documenter.getpostman.com/view/190644/VUqmxKok"},"online POSTMAN collection"),", explore the routes, invoke and make yourself familiar with the app"),(0,n.yg)("p",null,(0,n.yg)("strong",{parentName:"p"},"Note:")," The API routes authorize requests, a valid token must be provided. You may generate one yourself (",(0,n.yg)("a",{parentName:"p",href:"/questions"},"see here how"),"), or just use the default ",(0,n.yg)("em",{parentName:"p"},"development")," token that we generated for you \ud83d\udc47. Put it inside an 'Authorization' header:"),(0,n.yg)("p",null,(0,n.yg)("inlineCode",{parentName:"p"},"Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE3MzM4NTIyMTk5NzEsImRhdGEiOnsidXNlciI6ImpvZSIsInJvbGVzIjoiYWRtaW4ifSwiaWF0IjoxNzEyMjUyMjE5fQ.kUS7AnwtGum40biJYt0oyOH_le1KfVD2EOrs-ozclY0")),(0,n.yg)("p",null,"We have the ground ready \ud83d\udc25. 
Let's code now, just remember to run the tests (or POSTMAN) once in a while to ensure nothing breaks"),(0,n.yg)("h2",{id:"the-3-layers-of-a-component"},"The 3 layers of a component"),(0,n.yg)("p",null,"A typical component (e.g., Microservice) contains 3 main layers. This is a known and powerful pattern that is called ",(0,n.yg)("a",{parentName:"p",href:"https://www.techopedia.com/definition/24649/three-tier-architecture"},'"3-Tiers"'),". It's an architectural structure that strikes a great balance between simplicity and robustness. Unlike other fancy architectures (e.g. hexagonal architecture, etc), this style is more likely to keep things simple and organized. The three layers represent the physical flow of a request with no abstractions:"),(0,n.yg)("p",null,(0,n.yg)("img",{alt:"Monorepos",src:a(4964).A,width:"1452",height:"637"}),"\n",(0,n.yg)("em",{parentName:"p"},"A typical Monorepo structure")),(0,n.yg)("p",null,(0,n.yg)("strong",{parentName:"p"},"- Layer 1: Entry points -")," This is the door to the application where flows start and requests come-in. Our example component has a REST API (i.e., API controllers), this is one kind of an entry-point. There might be other entry-points like a scheduled job, CLI, message queue and more. Whatever entry-point you're dealing with, the responsibility of this layer is minimal - receive requests, perform authentication, pass the request to be handled by the internal code and handle errors. For example, a controller gets an API request then it does nothing more than authenticating the user, extract the payload and call a domain layer function \ud83d\udc47"),(0,n.yg)("p",null,(0,n.yg)("strong",{parentName:"p"},"- Domain -")," A folder containing the heart of the app where the flows, logic and data-structure are defined. Its functions can serve any type of entry-points - whether it's being called from API or message queue, the domain layer is agnostic to the source of the caller. Code here may call other services via HTTP/queue. It's likely also to fetch from and save information in a DB, for this it will call the data-access layer \ud83d\udc47"),(0,n.yg)("p",null,(0,n.yg)("strong",{parentName:"p"},"- Data-access -")," Your entire DB interaction functionality and configuration is kept in this folder. For now, Practica.js uses ORM to interact with the DB - we're still debating on this decision"),(0,n.yg)("p",null,"Now that you understand the structure of the example component, it's much easier to code over it \ud83d\udc47"),(0,n.yg)("h2",{id:"lets-code-a-flow-from-api-to-db-and-in-return"},"Let's code a flow from API to DB and in return"),(0,n.yg)("p",null,"We're about to implement a simple feature to make you familiar with the major code areas. After reading/coding this section, you should be able to add routes, logic and DB objects to your system easily. The example app deals with an imaginary e-commerce app. It has functionality for adding and querying for Orders. Goes without words that you'll change this to the entities and columns that represent your app."),(0,n.yg)("blockquote",null,(0,n.yg)("p",{parentName:"blockquote"},(0,n.yg)("strong",{parentName:"p"},"\ud83d\udddd Key insight:")," Practica has no hidden abstractions, you have to become familiar with the (popular) chosen libraries. 
This minimizes future scenarios where you get stuck when an abstraction is not suitable to your need or you don't understand how things work.")),(0,n.yg)("p",null,(0,n.yg)("strong",{parentName:"p"},"Requirements -")," - Our missions is to code the following: Allow ",(0,n.yg)("em",{parentName:"p"},"updating")," an order through the API. Orders should also have a new field: Status. When trying to edit an existing order, if the field order.'paymentTermsInDays' is 0 (i.e., the payment due date is now) or the order.status is 'delivered' - no changes are allowed and the code should return HTTP status 400 (bad request). Otherwise, we should update the DB with new order information"),(0,n.yg)("p",null,(0,n.yg)("strong",{parentName:"p"},"1. Change the example component/service name")),(0,n.yg)("p",null,"Obviously your solution, has a different context and name. You probably want to rename the example service name from 'order-service' to {your-component-name}. Change both the folder name ('order-service') and the package.json name field:"),(0,n.yg)("p",null,(0,n.yg)("em",{parentName:"p"},"./services/order-service/package.json")),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-javascript"},'{\n "name": "your-name-here",\n "version": "0.0.2",\n "description": "An example Node.js app that is packed with best practices",\n}\n\n')),(0,n.yg)("p",null,"If you're just experimenting with Practica, you may leave the name as-is for now."),(0,n.yg)("p",null,(0,n.yg)("strong",{parentName:"p"},"2. Add a new 'Edit' route")),(0,n.yg)("p",null,"The express API routes are located in the entry-points layer, in the file 'routes.ts': ",(0,n.yg)("em",{parentName:"p"},"[root]","/services/order-service/entry-points/api/routes.ts")),(0,n.yg)("p",null,"This is a very typical express code, if you're familiar with express you'll be productive right away. This is a core principle of Practica - it uses battle tested technologies as-is. Let's just add a new route in this file:"),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-javascript"},"// A new route to edit order\nrouter.put('/:id', async (req, res, next) => {\n try {\n logger.info(`Order API was called to edit order ${req.params.id}`);\n // Later on we will call the main code in the domain layer\n // Fow now let's put hard coded values\n res.json({id:1, userId: 1, productId: 2, countryId: 1,\n deliveryAddress: '123 Main St, New York',\n paymentTermsInDays: 30}).status(200).end();\n } catch (err) {\n next(err);\n }\n });\n")),(0,n.yg)("blockquote",null,(0,n.yg)("p",{parentName:"blockquote"},(0,n.yg)("strong",{parentName:"p"},"\u2705Best practice:")," The API entry-point (controller) should stay thin and focus on forwarding the request to the domain layer.")),(0,n.yg)("p",null,"Looks highly familiar, right? If not, it means you should learn first how to code first with your preferred framework - in this case it's Express. That's the thing with Practica - We don't replace neither abstract your reputable framework, we only augment it."),(0,n.yg)("p",null,(0,n.yg)("strong",{parentName:"p"},"3. Test your first route")),(0,n.yg)("p",null,"Commonly, once we have a first code skeleton, it's time to start testing it. 
In Practica we recommend writing 'component tests' against the API and including all the layers (no mocking), we have great examples for this under ","[root]","/services/order-service/test"),(0,n.yg)("p",null,"You may visit the file: ","[root]","/services/order-service/test/add-order.test.ts, read one of the test and you're likely to get the intent shortly. Our testing guide will be released shortly."),(0,n.yg)("blockquote",null,(0,n.yg)("p",{parentName:"blockquote"},(0,n.yg)("strong",{parentName:"p"},"\ud83d\udddd Key insight:")," Practica's testing strategy is based on 'component tests' that include all the layers including the DB using docker-compose. We include rich testing patterns that mitigate various real-world risks like testing error handling, integrations and other things beyond the basics. Thanks to thoughtful setup, we're able to run 50 tests with DB in ~6 seconds. This is considered as a modern and highly-efficient strategy for testing Microservices")),(0,n.yg)("p",null,"In this guide though, we're more focused on features craft - it's OK for now to test with POSTMAN or any other API explorer tool."),(0,n.yg)("p",null,(0,n.yg)("strong",{parentName:"p"},"4. Create a DTO and a validation function")),(0,n.yg)("p",null,"We're about to receive a payload from the caller, the edited order JSON. We obviously want to declare a strong schema/type so we can validate the incoming payloads and work with strong TypeScript types"),(0,n.yg)("blockquote",null,(0,n.yg)("p",{parentName:"blockquote"},(0,n.yg)("strong",{parentName:"p"},"\u2705Best practice:")," Validate incoming request and fail early. Both in run-time and development time")),(0,n.yg)("p",null,"To meet these goals, we use two popular and powerful libraries: ",(0,n.yg)("a",{parentName:"p",href:"https://github.com/sinclairzx81/typebox"},"typebox")," and ",(0,n.yg)("a",{parentName:"p",href:"https://github.com/ajv-validator/ajv"},"ajv"),". The first library, Typebox allows defining a schema with two outputs: TypeScript type and also JSON Schema. This is a standard and popular format that can be reused in many other places (e.g., to define OpenAPI spec). Based on this, the second library, ajv, will validate the requests."),(0,n.yg)("p",null,"Open the file ","[root]","/services/order-service/domain/order-schema.ts"),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-javascript"},"// Declare the basic order schema\nimport { Static, Type } from '@sinclair/typebox';\nexport const orderSchema = Type.Object({\n deliveryAddress: Type.String(),\n paymentTermsInDays: Type.Number(),\n productId: Type.Integer(),\n userId: Type.Integer(),\n status: Type.Optional(Type.String()), // \ud83d\udc48 Add this field\n});\n")),(0,n.yg)("p",null,"This is Typebox's syntax for defines the basic order schema. Based on this we can get both JSON Schema and TypeScript type (!), this allows both run-time and development time protection. Add the status field to it and the following line to get a TypeScript type:"),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-javascript"},"// This is a standard TypeScript type - we can use it now in the code and get intellisense + Typescript build-time validation\nexport type editOrderDTO = Static;\n")),(0,n.yg)("p",null,"We have now strong development types to work with, it's time to configure our runtime validator. 
The library ",(0,n.yg)("a",{parentName:"p",href:"https://github.com/ajv-validator/ajv"},"ajv")," gets JSON Schema, and validates the payload against it."),(0,n.yg)("p",null,"In the same file, let's define a validation function for edited orders:"),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-javascript"},"// [root]/services/order-service/domain/order-schema\nimport { ajv } from '@practica/validation';\nexport function editOrderValidator() {\n // For performance reason we cache the compiled validator function\n const validator = ajv.getSchema('edit-order');\n if (!validator) {\n ajv.addSchema(editOrderSchema, 'edit-order');\n }\n\n return ajv.getSchema('edit-order')!;\n}\n")),(0,n.yg)("p",null,"We now have a TypeScript type and a function that can validate it on run-time. Knowing that we have safe types, it's time for the 'main thing' - coding the flow and logic"),(0,n.yg)("p",null,(0,n.yg)("strong",{parentName:"p"},"5. Create a use case (what the heck is 'use case'?)")),(0,n.yg)("p",null,"Let's code our logic, but where? Obviously not in the controller/route which merely forwards request to our business logic layer. This should be done inside our domain folder, where the logic lives. Let's create a special type of code object - a use case."),(0,n.yg)("p",null,"A use-case is a plain JavaScript object/class which is created for every flow/feature. It summarizes the flow in a business and simple language without delving into the technical and small details. It mostly orchestrates other small services that hold all the implementation details. With use cases, the reader can grasp the high-level flow easily and avoid exposure to ",(0,n.yg)("em",{parentName:"p"},"unnecessary")," complexity."),(0,n.yg)("p",null,"Let's add a new file inside the domain layer: edit-order-use-case.ts, and code the requirements:"),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-javascript"},"// [root]/services/order-service/domain/edit-order-use-case.ts\nimport * as orderRepository from '../data-access/repositories/order-repository';\n\nexport default async function editOrder(orderId: number, updatedOrder: editOrderDTO) {\n // Note how we use \ud83d\udc46 the editOrderDTO that was defined in the previous step\n assertOrderIsValid(updatedOrder);\n assertEditingIsAllowed(updatedOrder.status, updatedOrder.paymentTermsInDays);\n // Call the DB layer here \ud83d\udc47 - to be explained soon\n return await orderRepository.editOrder(orderId, updatedOrder);\n}\n")),(0,n.yg)("p",null,"Note how reading this function above easily tells the flow without messing with too much details. This is where use cases shine - by summarizing long details."),(0,n.yg)("blockquote",null,(0,n.yg)("p",{parentName:"blockquote"},(0,n.yg)("strong",{parentName:"p"},"\u2705Best practice:")," Describe every feature/flow with a 'use case' object that summarizes the flow for better readability"),(0,n.yg)("p",{parentName:"blockquote"}," Now we need to implement the functions that the use case calls. Since this is just a simple demo, we can put everything inside the use case. 
Consider a real-world scenario with heavier logic, calls to 3rd parties and DB work - in this case you'll need to spread this code across multiple services.")),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-javascript"},"// [root]/services/order-service/domain/edit-order-use-case.ts\nimport { AppError } from '@practica/error-handling';\nimport { ajv } from '@practica/validation';\nimport { editOrderDTO, editOrderSchema } from './order-schema';\n\nfunction assertOrderIsValid(updatedOrder: editOrderDTO) {\n const isValid = ajv.validate(editOrderSchema, updatedOrder);\n if (isValid === false) {\n throw new AppError('invalid-order', `Validation failed`, 400, true);\n }\n}\n\nfunction assertEditingIsAllowed(status: string | undefined, paymentTermsInDays: number) {\n if (status === 'delivered' || paymentTermsInDays === 0) {\n throw new AppError(\n 'changes-not-allowed',\n `It's not allowed to edit delivered or paid orders`,\n 400, true);\n }\n}\n\n")),(0,n.yg)("blockquote",null,(0,n.yg)("p",{parentName:"blockquote"},(0,n.yg)("strong",{parentName:"p"},"\ud83d\udddd Key insight:")," Note how everything we did thus far is mostly coding ",(0,n.yg)("em",{parentName:"p"},"functions"),". No fancy constructs, no abstractions, not even classes - we try to keep things as simple as possible. You may of course use other language features ",(0,n.yg)("strong",{parentName:"p"},"when the need arises"),". We suggest sticking to plain functions by default and using other constructs when a strong need is identified.")),(0,n.yg)("p",null,(0,n.yg)("strong",{parentName:"p"},"6. Add the data access code")),(0,n.yg)("p",null,"We're tasked with saving the edited order in the database. Any DB-related code is located within the folder: ","[root]","/services/order-service/data-access."),(0,n.yg)("p",null,"Practica supports two popular ORMs, ",(0,n.yg)("a",{parentName:"p",href:"https://github.com/sequelize/sequelize"},"Sequelize")," (default) and ",(0,n.yg)("a",{parentName:"p",href:"https://www.prisma.io/"},"Prisma"),". Whichever you choose, both are battle-tested and reputable options that will serve you well as long as the DB complexity is not overwhelming. "),(0,n.yg)("p",null,"Before discussing the ORM side, we wrap the entire DB layer with a thin repository that externalizes all the DB functions to the domain layer. This is the ",(0,n.yg)("a",{parentName:"p",href:"https://martinfowler.com/eaaCatalog/repository.html"},"repository pattern")," which advocates decoupling the DB narratives from the code that implements business logic. Inside ","[root]","/services/order-service/data-access/repositories, you'll find a file 'order-repository', open it and add a new function:"),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-javascript"},"// [root]/services/order-service/data-access/order-repository.ts\nimport { getOrderModel } from './models/order-model';// \ud83d\udc48 This is the ORM code which will get explained soon \n\nexport async function editOrder(orderId: number, orderDetails): Promise<OrderRecord> {\n const orderEditingResponse = await getOrderModel().update(orderDetails, {\n where: { id: orderId },\n });\n\n return orderEditingResponse;\n}\n")),(0,n.yg)("p",null,"Note that this file contains a type - OrderRecord. This is a plain JS object (POJO) that is used to interact with the data access layer. 
This approach prevents leaking DB/ORM narratives to the domain layer (e.g., ActiveRecord style)"),(0,n.yg)("blockquote",null,(0,n.yg)("p",{parentName:"blockquote"},(0,n.yg)("strong",{parentName:"p"},"\u2705Best practice:")," Externalize any DB data as a response that contains plain JavaScript objects (the repository pattern)")),(0,n.yg)("p",null,"Add the new Status field to this type:"),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-javascript"},"type OrderRecord = {\n id: number;\n // ... other existing fields\n status: string;// \ud83d\udc48 Add this field per our requirements\n};\n")),(0,n.yg)("p",null,"Let's configure the ORM now and define the Order model - a mapper between a JavaScript object and a database table (a common ORM notion). Open the file ","[root]","/services/order-service/data-access/models/order-model.ts:"),(0,n.yg)("pre",null,(0,n.yg)("code",{parentName:"pre",className:"language-javascript"},"import { DataTypes } from 'sequelize';\nimport getDbConnection from '../db-connection';\n\nexport default function getOrderModel() {\n // getDbConnection returns a singleton Sequelize (ORM) object - This is necessary to avoid multiple DB connection pools\n return getDbConnection().define('Order', {\n id: {\n type: DataTypes.INTEGER,\n primaryKey: true,\n autoIncrement: true,\n },\n deliveryAddress: {\n type: DataTypes.STRING,\n },\n //some other fields here\n status: {\n type: DataTypes.STRING,// \ud83d\udc48 Add this field per our requirements\n allowNull: true\n }\n });\n}\n\n")),(0,n.yg)("p",null,"This file defines the mapping between our received and returned JavaScript objects and the database. Given this definition, the ORM can now expose functions to interact with the data."),(0,n.yg)("p",null,(0,n.yg)("strong",{parentName:"p"},"7. \ud83e\udd73 You have a robust working flow now")),(0,n.yg)("p",null,"You should now be able to run the automated tests or POSTMAN and see the full flow working. It might feel like overkill to create multiple layers and objects - naturally, this level of modularization pays off when things get more complicated in real-world scenarios. Follow these layers and principles to write great code. Once you become familiar with these techniques, it will feel quick and natural "),(0,n.yg)("blockquote",null,(0,n.yg)("p",{parentName:"blockquote"},(0,n.yg)("strong",{parentName:"p"},"\ud83d\udddd Key insight:")," Nothing we went through in this article is unique to Practica.js; rather, these are ubiquitous backend concepts. Practica.js brings no overhead beyond the common best practices. This knowledge will serve you in any other scenario, regardless of Practica.js")),(0,n.yg)("p",null,"We will be grateful if you share with us how to make this guide better"),(0,n.yg)("ul",null,(0,n.yg)("li",{parentName:"ul"},"Ideas for future iterations: How to work with the Monorepo commands, Focus on a single component or run commands from the root, DB migration")))}u.isMDXComponent=!0},4964:(e,t,a)=>{a.d(t,{A:()=>r});const r=a.p+"assets/images/3-tiers-fb96effa6ad8f8f08b594f3455628305.png"},6642:(e,t,a)=>{a.d(t,{A:()=>r});const r=a.p+"assets/images/monorepo-structure-d3796dd4b9597a4f74c8c13fcb055511.png"}}]);
\ No newline at end of file
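
The tutorial embedded in the bundle above leaves the edit route returning hard-coded values ("Later on we will call the main code in the domain layer"). Below is a minimal sketch of that final wiring, reusing only names the guide shows (`router`, `logger`, `editOrder`); the import path and response shape are assumptions derived from the file locations mentioned in the guide, not confirmed code:

```javascript
// [root]/services/order-service/entry-points/api/routes.ts - hypothetical final wiring
import editOrder from '../../domain/edit-order-use-case';

router.put('/:id', async (req, res, next) => {
  try {
    logger.info(`Order API was called to edit order ${req.params.id}`);
    // The controller stays thin: extract the payload and delegate to the domain layer
    const updatedOrder = await editOrder(Number(req.params.id), req.body);
    res.status(200).json(updatedOrder);
  } catch (err) {
    next(err); // The centralized error handler maps AppError to the right HTTP status
  }
});
```
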
diff --git a/assets/js/710c3838.fd76c10b.js b/assets/js/710c3838.fd76c10b.js
new file mode 100644
index 00000000..1bcf470c
--- /dev/null
+++ b/assets/js/710c3838.fd76c10b.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[7908],{5680:(e,t,n)=>{n.d(t,{xA:()=>c,yg:()=>h});var o=n(6540);function r(e,t,n){return t in e?Object.defineProperty(e,t,{value:n,enumerable:!0,configurable:!0,writable:!0}):e[t]=n,e}function a(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);t&&(o=o.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,o)}return n}function i(e){for(var t=1;t=0||(r[n]=e[n]);return r}(e,t);if(Object.getOwnPropertySymbols){var a=Object.getOwnPropertySymbols(e);for(o=0;o=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(r[n]=e[n])}return r}var s=o.createContext({}),u=function(e){var t=o.useContext(s),n=t;return e&&(n="function"==typeof e?e(t):i(i({},t),e)),n},c=function(e){var t=u(e.components);return o.createElement(s.Provider,{value:t},e.children)},p="mdxType",d={inlineCode:"code",wrapper:function(e){var t=e.children;return o.createElement(o.Fragment,{},t)}},g=o.forwardRef((function(e,t){var n=e.components,r=e.mdxType,a=e.originalType,s=e.parentName,c=l(e,["components","mdxType","originalType","parentName"]),p=u(n),g=r,h=p["".concat(s,".").concat(g)]||p[g]||d[g]||a;return n?o.createElement(h,i(i({ref:t},c),{},{components:n})):o.createElement(h,i({ref:t},c))}));function h(e,t){var n=arguments,r=t&&t.mdxType;if("string"==typeof e||r){var a=n.length,i=new Array(a);i[0]=g;var l={};for(var s in t)hasOwnProperty.call(t,s)&&(l[s]=t[s]);l.originalType=e,l[p]="string"==typeof e?e:r,i[1]=l;for(var u=2;u{n.r(t),n.d(t,{assets:()=>s,contentTitle:()=>i,default:()=>d,frontMatter:()=>a,metadata:()=>l,toc:()=>u});var o=n(8168),r=(n(6540),n(5680));const a={sidebar_position:1,sidebar_label:"Short guide"},i="Contributing to Practica.js - The short guide",l={unversionedId:"contribution/contribution-short-guide",id:"contribution/contribution-short-guide",title:"Contributing to Practica.js - The short guide",description:"You belong with us",source:"@site/docs/contribution/contribution-short-guide.md",sourceDirName:"contribution",slug:"/contribution/contribution-short-guide",permalink:"/contribution/contribution-short-guide",draft:!1,editUrl:"https://github.com/practicajs/practica/tree/main/docs/docs/contribution/contribution-short-guide.md",tags:[],version:"current",sidebarPosition:1,frontMatter:{sidebar_position:1,sidebar_label:"Short guide"},sidebar:"tutorialSidebar",previous:{title:"Common questions",permalink:"/questions"},next:{title:"Long guide",permalink:"/contribution/contribution-long-guide"}},s={},u=[{value:"You belong with us",id:"you-belong-with-us",level:2},{value:"2 things to consider",id:"2-things-to-consider",level:2},{value:"The main internals tiers (in a nutshell)",id:"the-main-internals-tiers-in-a-nutshell",level:2},{value:"Option 1 - External or configuration change",id:"option-1---external-or-configuration-change",level:3},{value:"Option 2 - The code generator",id:"option-2---the-code-generator",level:3},{value:"Option 3 - The code templates",id:"option-3---the-code-templates",level:3},{value:"Workflow",id:"workflow",level:2},{value:"Development machine setup",id:"development-machine-setup",level:2}],c={toc:u},p="wrapper";function d(e){let{components:t,...n}=e;return(0,r.yg)(p,(0,o.A)({},c,n,{components:t,mdxType:"MDXLayout"}),(0,r.yg)("h1",{id:"contributing-to-practicajs---the-short-guide"},"Contributing to Practica.js - The short guide"),(0,r.yg)("h2",{id:"you-belong-with-us"},"You belong with us"),(0,r.yg)("p",null,"We are in an 
ever-going quest for better software practices. If you reached down to this page, you probably belong with us \ud83d\udc9c."),(0,r.yg)("p",null,"Note: This is a shortened guide that suits those who are willing to contribute quickly. Once you deepen your relationship with Practica.js, it's a good idea to read the ",(0,r.yg)("a",{parentName:"p",href:"https://github.com/practicajs/practica/blob/main/CONTRIBUTING.md"},"full guide")),(0,r.yg)("h2",{id:"2-things-to-consider"},"2 things to consider"),(0,r.yg)("ul",null,(0,r.yg)("li",{parentName:"ul"},"Our philosophy is all about minimalism and simplicity - we strive to write less code, rely on existing and reputable libraries, stick to Node/JS standards and avoid adding our own abstractions"),(0,r.yg)("li",{parentName:"ul"},"Popular vendors only - Each technology and vendor that we introduce must be super popular and reliable. For example, a library must be one of the top 5 most starred and downloaded in its category. See the ",(0,r.yg)("a",{parentName:"li",href:"/contribution/vendor-pick-guidelines"},"full vendor selection guidelines here"))),(0,r.yg)("h2",{id:"the-main-internals-tiers-in-a-nutshell"},"The main internals tiers (in a nutshell)"),(0,r.yg)("p",null,"For a quick start, you don't necessarily need to understand the entire codebase. Typically, your contribution will fall under one of these three categories:"),(0,r.yg)("h3",{id:"option-1---external-or-configuration-change"},"Option 1 - External or configuration change"),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"High-level changes")),(0,r.yg)("p",null,"If you simply mean to edit things beyond the code - there is no need to delve into the internals. For example, when changing documentation, CI/bots, and the like - one can simply perform the task without delving into the code"),(0,r.yg)("h3",{id:"option-2---the-code-generator"},"Option 2 - The code generator"),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"Code and CLI to get the user preferences and copy the right code to her computer")),(0,r.yg)("p",null,"Here you will find CLI, UI, and logic to generate the right code. We run our own custom code to go through the code-template folder and filter out parts/files based on the user preferences. For example, should she ask NOT to get a GitHub Actions file - the generator will remove this file from the output"),(0,r.yg)("p",null,"How to work with it?"),(0,r.yg)("ol",null,(0,r.yg)("li",{parentName:"ol"},"If all you need is to alter the logic, you may just code in the ~/code-generator/generation-logic folder and run the tests (located in the same folder)"),(0,r.yg)("li",{parentName:"ol"},"If you wish to modify the CLI UI, then you'll need to build the code before running (because the CLI can't run TypeScript directly). Open two terminals:")),(0,r.yg)("ul",null,(0,r.yg)("li",{parentName:"ul"},"Open one terminal to compile the code:")),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-bash"},"npm run build:watch\n")),(0,r.yg)("ul",null,(0,r.yg)("li",{parentName:"ul"},"Open a second terminal to run the CLI UI:")),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-bash"},"npm run start:cli\n")),(0,r.yg)("h3",{id:"option-3---the-code-templates"},"Option 3 - The code templates"),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"The output of our program: An example Microservice and libraries")),(0,r.yg)("p",null,"Here you will find the generated code that we selectively copy to the user's computer, located under {root}/src/code-templates. 
It's preferable to work on this code outside the main repository in some side folder. To achieve this, simply generate the code using the CLI, code, run the tests, then finally copy the changes back to the main repository"),(0,r.yg)("ol",null,(0,r.yg)("li",{parentName:"ol"},"Install dependencies")),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-bash"},"nvm use && npm i\n")),(0,r.yg)("ol",{start:2},(0,r.yg)("li",{parentName:"ol"},"Build the code")),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-bash"},"npm run build\n")),(0,r.yg)("ol",{start:3},(0,r.yg)("li",{parentName:"ol"},"Bind the CLI command to our code")),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-bash"},"cd .dist && npm link\n")),(0,r.yg)("ol",{start:4},(0,r.yg)("li",{parentName:"ol"},"Generate the code to your preferred working folder")),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-bash"},"cd {some folder like $HOME}\ncreate-node-app immediate --install-dependencies\n")),(0,r.yg)("ol",{start:5},(0,r.yg)("li",{parentName:"ol"},(0,r.yg)("p",{parentName:"li"},"Now you can work on the generated code. Later on, once your tests pass and you're happy - copy the changes back to ",(0,r.yg)("inlineCode",{parentName:"p"},"~/practica/src/code-templates"))),(0,r.yg)("li",{parentName:"ol"},(0,r.yg)("p",{parentName:"li"},"Run the tests while you code"))),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-bash"},"# From the folder where you generated the code. You might need to 'git init'\ncd default-app-name/services/order-service\nnpm run test:dev\n")),(0,r.yg)("h2",{id:"workflow"},"Workflow"),(0,r.yg)("ol",null,(0,r.yg)("li",{parentName:"ol"},"Idea - Claim an existing issue or open a new one"),(0,r.yg)("li",{parentName:"ol"},"Optional: Design - If you're doing something that is not straightforward, share your high-level approach within the issue"),(0,r.yg)("li",{parentName:"ol"},"PR - Once you're done, run the tests locally, then PR to main. Ensure all checks pass. If you introduced a new feature - update the docs")),(0,r.yg)("h2",{id:"development-machine-setup"},"Development machine setup"),(0,r.yg)("p",null,"\u2705 Ensure Node, Docker and ",(0,r.yg)("a",{parentName:"p",href:"https://github.com/nvm-sh/nvm#installing-and-updating"},"NVM")," are installed"),(0,r.yg)("p",null,"\u2705 Configure GitHub and npm 2FA!"),(0,r.yg)("p",null,"\u2705 Clone the repo if you are a maintainer, or fork it if you have no collaborator permissions"),(0,r.yg)("p",null,"\u2705 With your terminal, ensure the right Node version is installed:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-bash"},"nvm use\n")),(0,r.yg)("p",null,"\u2705 Install dependencies:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-bash"},"npm i\n")),(0,r.yg)("p",null,"\u2705 Ensure all tests pass:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-bash"},"npm t\n")),(0,r.yg)("p",null,"\u2705 You can safely start now: code, run the tests, and repeat"))}d.isMDXComponent=!0}}]);
\ No newline at end of file
diff --git a/assets/js/7302b0ae.2d80561e.js b/assets/js/7302b0ae.2d80561e.js
new file mode 100644
index 00000000..384d1181
--- /dev/null
+++ b/assets/js/7302b0ae.2d80561e.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[4511],{3965:t=>{t.exports=JSON.parse('{"permalink":"/blog/tags/component-test","page":1,"postsPerPage":10,"totalPages":1,"totalCount":1,"blogDescription":"Blog","blogTitle":"Blog"}')}}]);
\ No newline at end of file
diff --git a/assets/js/74aae855.fc486571.js b/assets/js/74aae855.fc486571.js
new file mode 100644
index 00000000..858b47ee
--- /dev/null
+++ b/assets/js/74aae855.fc486571.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[4617],{4926:a=>{a.exports=JSON.parse('{"label":"unit","permalink":"/blog/tags/unit","allTagsPath":"/blog/tags","count":1}')}}]);
\ No newline at end of file
diff --git a/assets/js/785487f7.236828d4.js b/assets/js/785487f7.236828d4.js
new file mode 100644
index 00000000..c6669e04
--- /dev/null
+++ b/assets/js/785487f7.236828d4.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[8363],{6700:a=>{a.exports=JSON.parse('{"label":"passport","permalink":"/blog/tags/passport","allTagsPath":"/blog/tags","count":2}')}}]);
\ No newline at end of file
diff --git a/assets/js/79d3ae8c.40524e51.js b/assets/js/79d3ae8c.40524e51.js
new file mode 100644
index 00000000..2920099e
--- /dev/null
+++ b/assets/js/79d3ae8c.40524e51.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[8269],{5779:a=>{a.exports=JSON.parse('{"permalink":"/blog/tags/fastify","page":1,"postsPerPage":10,"totalPages":1,"totalCount":4,"blogDescription":"Blog","blogTitle":"Blog"}')}}]);
\ No newline at end of file
diff --git a/assets/js/7abf8f9a.f3f0998b.js b/assets/js/7abf8f9a.f3f0998b.js
new file mode 100644
index 00000000..14bef751
--- /dev/null
+++ b/assets/js/7abf8f9a.f3f0998b.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[347],{5680:(n,e,l)=>{l.d(e,{xA:()=>u,yg:()=>d});var t=l(6540);function r(n,e,l){return e in n?Object.defineProperty(n,e,{value:l,enumerable:!0,configurable:!0,writable:!0}):n[e]=l,n}function o(n,e){var l=Object.keys(n);if(Object.getOwnPropertySymbols){var t=Object.getOwnPropertySymbols(n);e&&(t=t.filter((function(e){return Object.getOwnPropertyDescriptor(n,e).enumerable}))),l.push.apply(l,t)}return l}function i(n){for(var e=1;e=0||(r[l]=n[l]);return r}(n,e);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(n);for(t=0;t=0||Object.prototype.propertyIsEnumerable.call(n,l)&&(r[l]=n[l])}return r}var s=t.createContext({}),a=function(n){var e=t.useContext(s),l=e;return n&&(l="function"==typeof n?n(e):i(i({},e),n)),l},u=function(n){var e=a(n.components);return t.createElement(s.Provider,{value:e},n.children)},c="mdxType",y={inlineCode:"code",wrapper:function(n){var e=n.children;return t.createElement(t.Fragment,{},e)}},p=t.forwardRef((function(n,e){var l=n.components,r=n.mdxType,o=n.originalType,s=n.parentName,u=g(n,["components","mdxType","originalType","parentName"]),c=a(l),p=r,d=c["".concat(s,".").concat(p)]||c[p]||y[p]||o;return l?t.createElement(d,i(i({ref:e},u),{},{components:l})):t.createElement(d,i({ref:e},u))}));function d(n,e){var l=arguments,r=e&&e.mdxType;if("string"==typeof n||r){var o=l.length,i=new Array(o);i[0]=p;var g={};for(var s in e)hasOwnProperty.call(e,s)&&(g[s]=e[s]);g.originalType=n,g[c]="string"==typeof n?n:r,i[1]=g;for(var a=2;a{l.r(e),l.d(e,{assets:()=>s,contentTitle:()=>i,default:()=>y,frontMatter:()=>o,metadata:()=>g,toc:()=>a});var t=l(8168),r=(l(6540),l(5680));const o={sidebar_position:3,sidebar_label:"OpenAPI"},i="Decision: Choosing **_OpenAPI** generator tooling",g={unversionedId:"decisions/openapi",id:"decisions/openapi",title:"Decision: Choosing **_OpenAPI** generator tooling",description:"\ud83d\udcd4 What is it - A decision data and discussion about the right OpenAPI tools and approach",source:"@site/docs/decisions/openapi.md",sourceDirName:"decisions",slug:"/decisions/openapi",permalink:"/decisions/openapi",draft:!1,editUrl:"https://github.com/practicajs/practica/tree/main/docs/docs/decisions/openapi.md",tags:[],version:"current",sidebarPosition:3,frontMatter:{sidebar_position:3,sidebar_label:"OpenAPI"},sidebar:"tutorialSidebar",previous:{title:"Monorepo",permalink:"/decisions/monorepo"},next:{title:"Docker base image",permalink:"/decisions/docker-base-image"}},s={},a=[],u={toc:a},c="wrapper";function y(n){let{components:e,...l}=n;return(0,r.yg)(c,(0,t.A)({},u,l,{components:e,mdxType:"MDXLayout"}),(0,r.yg)("h1",{id:"decision-choosing-_openapi-generator-tooling"},"Decision: Choosing ",(0,r.yg)("strong",{parentName:"h1"},"_OpenAPI")," generator tooling"),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"\ud83d\udcd4 What is it")," - A decision data and discussion about the right OpenAPI tools and approach"),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"\u23f0 Status")," - Open, closed in June 1st 2022"),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"\ud83d\udcc1 Corresponding discussion")," - ",(0,r.yg)("a",{parentName:"p",href:"https://github.com/practicajs/practica/issues/67"},"Here")),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"\ud83c\udfafBottom-line: our recommendation")," - TBD"),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"\ud83d\udcca Detailed comparison 
table")),(0,r.yg)("table",{width:"80%",valign:"top"},(0,r.yg)("tr",null,(0,r.yg)("td",null),(0,r.yg)("td",null,(0,r.yg)("h1",null,"tsoa")),(0,r.yg)("td",null,(0,r.yg)("h1",null,"JSON Schema")),(0,r.yg)("td",null,(0,r.yg)("h1",null,"Other option 1")),(0,r.yg)("td",null,(0,r.yg)("h1",null,"Other option 2"))),(0,r.yg)("tr",null,(0,r.yg)("td",{colspan:"5",align:"center"},(0,r.yg)("h2",null,"Executive Summary"))),(0,r.yg)("tr",{valign:"top"},(0,r.yg)("td",null,"Some dimension"),(0,r.yg)("td",null,(0,r.yg)("img",{src:"/img/docs/decisions/full.png"}),(0,r.yg)("br",null),(0,r.yg)("br",null),"1ms"),(0,r.yg)("td",null,(0,r.yg)("img",{src:"/img/docs/decisions/almost-full.png"}),(0,r.yg)("br",null),(0,r.yg)("br",null),"5ms"),(0,r.yg)("td",null,(0,r.yg)("img",{src:"/img/docs/decisions/almost-full.png"}),(0,r.yg)("br",null),(0,r.yg)("br",null),"4ms"),(0,r.yg)("td",null,(0,r.yg)("img",{src:"/img/docs/decisions/almost-full.png"}),(0,r.yg)("br",null),(0,r.yg)("br",null),"5ms")),(0,r.yg)("tr",{valign:"top"},(0,r.yg)("td",null,"Some dimension"),(0,r.yg)("td",null,(0,r.yg)("img",{src:"/img/docs/decisions/full.png"}),(0,r.yg)("br",null),(0,r.yg)("br",null),"Superior"),(0,r.yg)("td",null,(0,r.yg)("img",{src:"/img/docs/decisions/partial.png"}),(0,r.yg)("br",null),(0,r.yg)("br",null),"Less popular than competitors"),(0,r.yg)("td",null,(0,r.yg)("img",{src:"/img/docs/decisions/almost-full.png"}),(0,r.yg)("br",null),(0,r.yg)("br",null),"Highly popular"),(0,r.yg)("td",null,(0,r.yg)("img",{src:"/img/docs/decisions/almost-full.png"}),(0,r.yg)("br",null),(0,r.yg)("br",null),"Highly popular")),(0,r.yg)("tr",{valign:"top"},(0,r.yg)("td",null,"\u2757 Important factor"),(0,r.yg)("td",null,(0,r.yg)("img",{src:"/img/docs/decisions/almost-full.png"}),(0,r.yg)("br",null),(0,r.yg)("br",null),"No"),(0,r.yg)("td",null,(0,r.yg)("img",{src:"/img/docs/decisions/full.png"}),(0,r.yg)("br",null),(0,r.yg)("br",null),"Yes"),(0,r.yg)("td",null,(0,r.yg)("img",{src:"/img/docs/decisions/partial.png"}),(0,r.yg)("br",null),(0,r.yg)("br",null),"No"),(0,r.yg)("td",null,(0,r.yg)("img",{src:"/img/docs/decisions/partial.png"}),(0,r.yg)("br",null),(0,r.yg)("br",null),"No")),(0,r.yg)("tr",null,(0,r.yg)("td",{class:"tg-ho3n",colspan:"5",align:"center"},(0,r.yg)("h2",null,"More details: Community & Popularity - March 2022"))),(0,r.yg)("tr",null,(0,r.yg)("td",null,"Stars"),(0,r.yg)("td",null,(0,r.yg)("br",null),"4200 \u2728"),(0,r.yg)("td",null,(0,r.yg)("br",null),"2500 \u2728"),(0,r.yg)("td",null,(0,r.yg)("br",null),"2500 \u2728"),(0,r.yg)("td",null,(0,r.yg)("br",null),"1000 \u2728")),(0,r.yg)("tr",null,(0,r.yg)("td",null,"Downloads/Week"),(0,r.yg)("td",null,(0,r.yg)("br",null),"12,900,223 \ud83d\udcc1"),(0,r.yg)("td",null,(0,r.yg)("br",null),"4,000,000 \ud83d\udcc1"),(0,r.yg)("td",null,(0,r.yg)("br",null),"6,000,000 \ud83d\udcc1"),(0,r.yg)("td",null,(0,r.yg)("br",null),"5,000,000 \ud83d\udcc1")),(0,r.yg)("tr",null,(0,r.yg)("td",null,"Dependents"),(0,r.yg)("td",null,(0,r.yg)("br",null),"26,000 \ud83d\udc69\u200d\ud83d\udc67"),(0,r.yg)("td",null,(0,r.yg)("br",null),"600 \ud83d\udc67"),(0,r.yg)("td",null,(0,r.yg)("br",null),"800 \ud83d\udc67"),(0,r.yg)("td",null,(0,r.yg)("br",null),"1000 \ud83d\udc67"))))}y.isMDXComponent=!0}}]);
\ No newline at end of file
diff --git a/assets/js/7d794bdc.899ed657.js b/assets/js/7d794bdc.899ed657.js
new file mode 100644
index 00000000..8052a3ac
--- /dev/null
+++ b/assets/js/7d794bdc.899ed657.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[7435],{951:e=>{e.exports=JSON.parse('{"permalink":"/blog/tags/clean-architecture","page":1,"postsPerPage":10,"totalPages":1,"totalCount":1,"blogDescription":"Blog","blogTitle":"Blog"}')}}]);
\ No newline at end of file
diff --git a/assets/js/7fe44762.e63e55de.js b/assets/js/7fe44762.e63e55de.js
new file mode 100644
index 00000000..e5252434
--- /dev/null
+++ b/assets/js/7fe44762.e63e55de.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[5816],{6771:s=>{s.exports=JSON.parse('{"permalink":"/blog/tags/passport","page":1,"postsPerPage":10,"totalPages":1,"totalCount":2,"blogDescription":"Blog","blogTitle":"Blog"}')}}]);
\ No newline at end of file
diff --git a/assets/js/814f3328.b188be05.js b/assets/js/814f3328.b188be05.js
new file mode 100644
index 00000000..697e92c4
--- /dev/null
+++ b/assets/js/814f3328.b188be05.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[7472],{5513:t=>{t.exports=JSON.parse('{"title":"Recent posts","items":[{"title":"About the sweet and powerful \'use case\' code pattern","permalink":"/blog/about-the-sweet-and-powerful-use-case-code-pattern"},{"title":"A compilation of outstanding testing articles (with JavaScript)","permalink":"/blog/a-compilation-of-outstanding-testing-articles-with-javaScript"},{"title":"Testing the dark scenarios of your Node.js application","permalink":"/blog/testing-the-dark-scenarios-of-your-nodejs-application"},{"title":"Practica v0.0.6 is alive","permalink":"/blog/practica-v0.0.6-is-alive"},{"title":"Is Prisma better than your \'traditional\' ORM?","permalink":"/blog/is-prisma-better-than-your-traditional-orm"},{"title":"Which Monorepo is right for a Node.js BACKEND\xa0now?","permalink":"/blog/monorepo-backend"},{"title":"Popular Node.js patterns and tools to re-consider","permalink":"/blog/popular-nodejs-pattern-and-tools-to-reconsider"},{"title":"Practica.js v0.0.1 is alive","permalink":"/blog/practica-is-alive"}]}')}}]);
\ No newline at end of file
diff --git a/assets/js/8382.27e51a91.js b/assets/js/8382.27e51a91.js
new file mode 100644
index 00000000..8cd9d6ad
--- /dev/null
+++ b/assets/js/8382.27e51a91.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[8382],{6669:(e,t,a)=>{a.d(t,{A:()=>h});var l=a(6540),r=a(53),n=a(9408),o=a(4581),s=a(5489),i=a(1312);const m={sidebar:"sidebar_re4s",sidebarItemTitle:"sidebarItemTitle_pO2u",sidebarItemList:"sidebarItemList_Yudw",sidebarItem:"sidebarItem__DBe",sidebarItemLink:"sidebarItemLink_mo7H",sidebarItemLinkActive:"sidebarItemLinkActive_I1ZP"};function c(e){let{sidebar:t}=e;return l.createElement("aside",{className:"col col--3"},l.createElement("nav",{className:(0,r.A)(m.sidebar,"thin-scrollbar"),"aria-label":(0,i.T)({id:"theme.blog.sidebar.navAriaLabel",message:"Blog recent posts navigation",description:"The ARIA label for recent posts in the blog sidebar"})},l.createElement("div",{className:(0,r.A)(m.sidebarItemTitle,"margin-bottom--md")},t.title),l.createElement("ul",{className:(0,r.A)(m.sidebarItemList,"clean-list")},t.items.map((e=>l.createElement("li",{key:e.permalink,className:m.sidebarItem},l.createElement(s.A,{isNavLink:!0,to:e.permalink,className:m.sidebarItemLink,activeClassName:m.sidebarItemLinkActive},e.title)))))))}var u=a(5600);function d(e){let{sidebar:t}=e;return l.createElement("ul",{className:"menu__list"},t.items.map((e=>l.createElement("li",{key:e.permalink,className:"menu__list-item"},l.createElement(s.A,{isNavLink:!0,to:e.permalink,className:"menu__link",activeClassName:"menu__link--active"},e.title)))))}function g(e){return l.createElement(u.GX,{component:d,props:e})}function p(e){let{sidebar:t}=e;const a=(0,o.l)();return t?.items.length?"mobile"===a?l.createElement(g,{sidebar:t}):l.createElement(c,{sidebar:t}):null}function h(e){const{sidebar:t,toc:a,children:o,...s}=e,i=t&&t.items.length>0;return l.createElement(n.A,s,l.createElement("div",{className:"container margin-vert--lg"},l.createElement("div",{className:"row"},l.createElement(p,{sidebar:t}),l.createElement("main",{className:(0,r.A)("col",{"col--7":i,"col--9 col--offset-1":!i}),itemScope:!0,itemType:"http://schema.org/Blog"},o),a&&l.createElement("div",{className:"col col--2"},a))))}},8258:(e,t,a)=>{a.d(t,{A:()=>M});var l=a(6540),r=a(53),n=a(7131),o=a(6025);function s(e){let{children:t,className:a}=e;const{frontMatter:r,assets:s,metadata:{description:i}}=(0,n.e)(),{withBaseUrl:m}=(0,o.h)(),c=s.image??r.image,u=r.keywords??[];return l.createElement("article",{className:a,itemProp:"blogPost",itemScope:!0,itemType:"http://schema.org/BlogPosting"},i&&l.createElement("meta",{itemProp:"description",content:i}),c&&l.createElement("link",{itemProp:"image",href:m(c,{absolute:!0})}),u.length>0&&l.createElement("meta",{itemProp:"keywords",content:u.join(",")}),t)}var i=a(5489);const m={title:"title_f1Hy"};function c(e){let{className:t}=e;const{metadata:a,isBlogPostPage:o}=(0,n.e)(),{permalink:s,title:c}=a,u=o?"h1":"h2";return l.createElement(u,{className:(0,r.A)(m.title,t),itemProp:"headline"},o?c:l.createElement(i.A,{itemProp:"url",to:s},c))}var u=a(1312),d=a(5846);const g={container:"container_mt6G"};function p(e){let{readingTime:t}=e;const a=function(){const{selectMessage:e}=(0,d.W)();return t=>{const a=Math.ceil(t);return e(a,(0,u.T)({id:"theme.blog.post.readingTime.plurals",description:'Pluralized label for "{readingTime} min read". 
Use as much plural forms (separated by "|") as your language support (see https://www.unicode.org/cldr/cldr-aux/charts/34/supplemental/language_plural_rules.html)',message:"One min read|{readingTime} min read"},{readingTime:a}))}}();return l.createElement(l.Fragment,null,a(t))}function h(e){let{date:t,formattedDate:a}=e;return l.createElement("time",{dateTime:t,itemProp:"datePublished"},a)}function E(){return l.createElement(l.Fragment,null," \xb7 ")}function b(e){let{className:t}=e;const{metadata:a}=(0,n.e)(),{date:o,formattedDate:s,readingTime:i}=a;return l.createElement("div",{className:(0,r.A)(g.container,"margin-vert--md",t)},l.createElement(h,{date:o,formattedDate:s}),void 0!==i&&l.createElement(l.Fragment,null,l.createElement(E,null),l.createElement(p,{readingTime:i})))}function f(e){return e.href?l.createElement(i.A,e):l.createElement(l.Fragment,null,e.children)}function v(e){let{author:t,className:a}=e;const{name:n,title:o,url:s,imageURL:i,email:m}=t,c=s||m&&`mailto:${m}`||void 0;return l.createElement("div",{className:(0,r.A)("avatar margin-bottom--sm",a)},i&&l.createElement(f,{href:c,className:"avatar__photo-link"},l.createElement("img",{className:"avatar__photo",src:i,alt:n,itemProp:"image"})),n&&l.createElement("div",{className:"avatar__intro",itemProp:"author",itemScope:!0,itemType:"https://schema.org/Person"},l.createElement("div",{className:"avatar__name"},l.createElement(f,{href:c,itemProp:"url"},l.createElement("span",{itemProp:"name"},n))),o&&l.createElement("small",{className:"avatar__subtitle",itemProp:"description"},o)))}const P={authorCol:"authorCol_Hf19",imageOnlyAuthorRow:"imageOnlyAuthorRow_pa_O",imageOnlyAuthorCol:"imageOnlyAuthorCol_G86a"};function A(e){let{className:t}=e;const{metadata:{authors:a},assets:o}=(0,n.e)();if(0===a.length)return null;const s=a.every((e=>{let{name:t}=e;return!t}));return l.createElement("div",{className:(0,r.A)("margin-top--md margin-bottom--sm",s?P.imageOnlyAuthorRow:"row",t)},a.map(((e,t)=>l.createElement("div",{className:(0,r.A)(!s&&"col col--6",s?P.imageOnlyAuthorCol:P.authorCol),key:t},l.createElement(v,{author:{...e,imageURL:o.authorsImageUrls[t]??e.imageURL}})))))}function N(){return l.createElement("header",null,l.createElement(c,null),l.createElement(b,null),l.createElement(A,null))}var _=a(440),k=a(7780);function T(e){let{children:t,className:a}=e;const{isBlogPostPage:o}=(0,n.e)();return l.createElement("div",{id:o?_.blogPostContainerID:void 0,className:(0,r.A)("markdown",a),itemProp:"articleBody"},l.createElement(k.A,null,t))}var w=a(1943),I=a(2053),y=a(8168);function F(){return l.createElement("b",null,l.createElement(u.A,{id:"theme.blog.post.readMore",description:"The label used in blog post item excerpts to link to full blog posts"},"Read More"))}function L(e){const{blogPostTitle:t,...a}=e;return l.createElement(i.A,(0,y.A)({"aria-label":(0,u.T)({message:"Read more about {title}",id:"theme.blog.post.readMoreLabel",description:"The ARIA label for the link to full blog posts from excerpts"},{title:t})},a),l.createElement(F,null))}const B={blogPostFooterDetailsFull:"blogPostFooterDetailsFull_mRVl"};function C(){const{metadata:e,isBlogPostPage:t}=(0,n.e)(),{tags:a,title:o,editUrl:s,hasTruncateMarker:i}=e,m=!t&&i,c=a.length>0;return c||m||s?l.createElement("footer",{className:(0,r.A)("row docusaurus-mt-lg",t&&B.blogPostFooterDetailsFull)},c&&l.createElement("div",{className:(0,r.A)("col",{"col--9":m})},l.createElement(I.A,{tags:a})),t&&s&&l.createElement("div",{className:"col 
margin-top--sm"},l.createElement(w.A,{editUrl:s})),m&&l.createElement("div",{className:(0,r.A)("col text--right",{"col--3":c})},l.createElement(L,{blogPostTitle:o,to:e.permalink}))):null}function M(e){let{children:t,className:a}=e;const o=function(){const{isBlogPostPage:e}=(0,n.e)();return e?void 0:"margin-bottom--xl"}();return l.createElement(s,{className:(0,r.A)(o,a)},l.createElement(N,null),l.createElement(T,null,t),l.createElement(C,null))}},7131:(e,t,a)=>{a.d(t,{e:()=>s,i:()=>o});var l=a(6540),r=a(9532);const n=l.createContext(null);function o(e){let{children:t,content:a,isBlogPostPage:r=!1}=e;const o=function(e){let{content:t,isBlogPostPage:a}=e;return(0,l.useMemo)((()=>({metadata:t.metadata,frontMatter:t.frontMatter,assets:t.assets,toc:t.toc,isBlogPostPage:a})),[t,a])}({content:a,isBlogPostPage:r});return l.createElement(n.Provider,{value:o},t)}function s(){const e=(0,l.useContext)(n);if(null===e)throw new r.dV("BlogPostProvider");return e}},5846:(e,t,a)=>{a.d(t,{W:()=>m});var l=a(6540),r=a(4586);const n=["zero","one","two","few","many","other"];function o(e){return n.filter((t=>e.includes(t)))}const s={locale:"en",pluralForms:o(["one","other"]),select:e=>1===e?"one":"other"};function i(){const{i18n:{currentLocale:e}}=(0,r.A)();return(0,l.useMemo)((()=>{try{return function(e){const t=new Intl.PluralRules(e);return{locale:e,pluralForms:o(t.resolvedOptions().pluralCategories),select:e=>t.select(e)}}(e)}catch(t){return console.error(`Failed to use Intl.PluralRules for locale "${e}".\nDocusaurus will fallback to the default (English) implementation.\nError: ${t.message}\n`),s}}),[e])}function m(){const e=i();return{selectMessage:(t,a)=>function(e,t,a){const l=e.split("|");if(1===l.length)return l[0];l.length>a.pluralForms.length&&console.error(`For locale=${a.locale}, a maximum of ${a.pluralForms.length} plural forms are expected (${a.pluralForms.join(",")}), but the message contains ${l.length}: ${e}`);const r=a.select(t),n=a.pluralForms.indexOf(r);return l[Math.min(n,l.length-1)]}(a,t,e)}}}}]);
\ No newline at end of file
diff --git a/assets/js/85510b4d.90255dce.js b/assets/js/85510b4d.90255dce.js
new file mode 100644
index 00000000..63ce6867
--- /dev/null
+++ b/assets/js/85510b4d.90255dce.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[122],{5680:(e,t,a)=>{a.d(t,{xA:()=>d,yg:()=>g});var n=a(6540);function r(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function s(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);t&&(n=n.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,n)}return a}function i(e){for(var t=1;t=0||(r[a]=e[a]);return r}(e,t);if(Object.getOwnPropertySymbols){var s=Object.getOwnPropertySymbols(e);for(n=0;n=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(r[a]=e[a])}return r}var l=n.createContext({}),c=function(e){var t=n.useContext(l),a=t;return e&&(a="function"==typeof e?e(t):i(i({},t),e)),a},d=function(e){var t=c(e.components);return n.createElement(l.Provider,{value:t},e.children)},h="mdxType",u={inlineCode:"code",wrapper:function(e){var t=e.children;return n.createElement(n.Fragment,{},t)}},p=n.forwardRef((function(e,t){var a=e.components,r=e.mdxType,s=e.originalType,l=e.parentName,d=o(e,["components","mdxType","originalType","parentName"]),h=c(a),p=r,g=h["".concat(l,".").concat(p)]||h[p]||u[p]||s;return a?n.createElement(g,i(i({ref:t},d),{},{components:a})):n.createElement(g,i({ref:t},d))}));function g(e,t){var a=arguments,r=t&&t.mdxType;if("string"==typeof e||r){var s=a.length,i=new Array(s);i[0]=p;var o={};for(var l in t)hasOwnProperty.call(t,l)&&(o[l]=t[l]);o.originalType=e,o[h]="string"==typeof e?e:r,i[1]=o;for(var c=2;c{a.r(t),a.d(t,{assets:()=>l,contentTitle:()=>i,default:()=>u,frontMatter:()=>s,metadata:()=>o,toc:()=>c});var n=a(8168),r=(a(6540),a(5680));const s={slug:"about-the-sweet-and-powerful-use-case-code-pattern",date:"2025-03-05T10:00",hide_table_of_contents:!0,title:"About the sweet and powerful 'use case' code pattern",authors:["goldbergyoni"],tags:["node.js","use-case","clean-architecture","javascript","tdd","workflow","domain","tdd"]},i=void 0,o={permalink:"/blog/about-the-sweet-and-powerful-use-case-code-pattern",editUrl:"https://github.com/practicajs/practica/tree/main/docs/blog/use-case/index.md",source:"@site/blog/use-case/index.md",title:"About the sweet and powerful 'use case' code pattern",description:"Intro: A sweet pattern that got lost in time",date:"2025-03-05T10:00:00.000Z",formattedDate:"March 5, 2025",tags:[{label:"node.js",permalink:"/blog/tags/node-js"},{label:"use-case",permalink:"/blog/tags/use-case"},{label:"clean-architecture",permalink:"/blog/tags/clean-architecture"},{label:"javascript",permalink:"/blog/tags/javascript"},{label:"tdd",permalink:"/blog/tags/tdd"},{label:"workflow",permalink:"/blog/tags/workflow"},{label:"domain",permalink:"/blog/tags/domain"}],readingTime:17.875,hasTruncateMarker:!1,authors:[{name:"Yoni Goldberg",title:"Practica.js core maintainer",url:"https://github.com/goldbergyoni",imageURL:"https://github.com/goldbergyoni.png",key:"goldbergyoni"}],frontMatter:{slug:"about-the-sweet-and-powerful-use-case-code-pattern",date:"2025-03-05T10:00",hide_table_of_contents:!0,title:"About the sweet and powerful 'use case' code pattern",authors:["goldbergyoni"],tags:["node.js","use-case","clean-architecture","javascript","tdd","workflow","domain","tdd"]},nextItem:{title:"A compilation of outstanding testing articles (with JavaScript)",permalink:"/blog/a-compilation-of-outstanding-testing-articles-with-javaScript"}},l={authorsImageUrls:[void 0]},c=[{value:"Intro: A sweet pattern that got lost in 
time",id:"intro-a-sweet-pattern-that-got-lost-in-time",level:2},{value:"The problem: too many details, too soon",id:"the-problem-too-many-details-too-soon",level:2},{value:"The use-case pattern",id:"the-use-case-pattern",level:2},{value:"The merits",id:"the-merits",level:2},{value:"1. A navigation index",id:"1-a-navigation-index",level:3},{value:"2. Deferred and spread complexity",id:"2-deferred-and-spread-complexity",level:3},{value:"3. A practical workflow that promotes efficiency",id:"3-a-practical-workflow-that-promotes-efficiency",level:3},{value:"4. The optimal design viewpoint",id:"4-the-optimal-design-viewpoint",level:3},{value:"5. Better coverage reports",id:"5-better-coverage-reports",level:3},{value:"6. Practical domain-driven code",id:"6-practical-domain-driven-code",level:3},{value:"7. Consistent observability",id:"7-consistent-observability",level:3},{value:"Implementation best practices",id:"implementation-best-practices",level:2},{value:"1. Dead-simple 'no code'",id:"1-dead-simple-no-code",level:3},{value:"2. Find the right level of specificity",id:"2-find-the-right-level-of-specificity",level:3},{value:"3. When have no choice, control the DB transaction from the use-case",id:"3-when-have-no-choice-control-the-db-transaction-from-the-use-case",level:3},{value:"4. Aggregate small use-cases in a single file",id:"4-aggregate-small-use-cases-in-a-single-file",level:3},{value:"Closing: Easy to start, use everywhere",id:"closing-easy-to-start-use-everywhere",level:2}],d={toc:c},h="wrapper";function u(e){let{components:t,...s}=e;return(0,r.yg)(h,(0,n.A)({},d,s,{components:t,mdxType:"MDXLayout"}),(0,r.yg)("h2",{id:"intro-a-sweet-pattern-that-got-lost-in-time"},"Intro: A sweet pattern that got lost in time"),(0,r.yg)("p",null,"When was the last time you introduced a new pattern to your code? The use-case pattern is a great candidate: it's powerful, sweet, easy to implement, and can strategically elevate your backend code quality in a short time. "),(0,r.yg)("p",null,"The term 'use case' means many different things in our industry. It's being used by product folks to describe a user journey, mentioned by various famous architecture books to describe vague high-level concepts. this article focuses on its practical application at the ",(0,r.yg)("em",{parentName:"p"},"code level")," by emphasizing its surprising merits how to implement it correctly."),(0,r.yg)("p",null,"Technically, the use-case pattern code belongs between the controller (e.g., API routes) and the business logic services (like those calculating or saving data). The use-case code is called by the controller and tells in high-level words the flow that is about to happen in a simple manner. Doing so increases the code readability, navigability, pushes complexity toward the edges, improves observability and 3 other merits that are shown below with examples."),(0,r.yg)("p",null,"But before we delve into its mechanics, let's first touch on a common problem it aims to address and see some code that calls for trouble."),(0,r.yg)("p",null,(0,r.yg)("em",{parentName:"p"},"Prefer a 10 min video? 
Watch here, or keep reading below")),(0,r.yg)("iframe",{width:"1024",height:"768",src:"https://www.youtube.com/embed/y4mBg920UZA?si=A_ZTVzG0AjVhzQcd",title:"About the use-case code pattern",frameborder:"0",allow:"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture",allowfullscreen:!0}),(0,r.yg)("h2",{id:"the-problem-too-many-details-too-soon"},"The problem: too many details, too soon"),(0,r.yg)("p",null,"Imagine a developer, returning to a codebase she hasn't touched in months, tasked with fixing a bug in the 'new orders flow'\u2014specifically, an issue with price calculation in an electronic shop app."),(0,r.yg)("p",null,"Her journey begins promisingly smooth:"),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"- \ud83e\udd17 Testing -")," She starts her journey off the automated tests to learn about the flow from an outside-in approach. The testing code is short and standard, as should be:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},'test("When adding an order with 100$ product, then the price charge should be 100$ ", async () => {\n // ....\n})\n')),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"- \ud83e\udd17 Controller -")," She moves to skim through the implementation and starts from the API routes. Unsurprisingly, the Controller code is straightforward:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},'app.post("/api/order", async (req: Request, res: Response) => {\n const newOrder = req.body;\n await orderService.addOrder(newOrder); // \ud83d\udc48 This is where the real-work is done\n res.status(200).json({ message: "Order created successfully" });\n});\n')),(0,r.yg)("p",null,"Smooth sailing thus far, almost zero complexity. Typically, the controller would now hand off to a Service where the real implementation begins, she navigates into the order service to find where and how to fix that pricing bug."),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"- \ud83d\ude32 The service -")," Suddenly! She is thrown into hundred lins of code (at best) with tons of details. She encounters classes with intricate states, inheritance hierarchies, a dependency injection framework that wire all the dependent services, and other boilerplate code. Here is a sneak peak from a real-world service, already simplified for brevity. Read it, feel it:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},'let DBRepository;\n\nexport class OrderService : ServiceBase {\n async addOrder(orderRequest: OrderRequest): Promise {\n try {\n ensureDBRepositoryInitialized();\n const { openTelemetry, monitoring, secretManager, priceService, userService } =\n dependencyInjection.getVariousServices();\n logger.info("Add order flow starts now", orderRequest);\n openTelemetry.sendEvent("new order", orderRequest);\n\n const validationRules = await getFromConfigSystem("order-validation-rules");\n const validatedOrder = validateOrder(orderRequest, validationRules);\n if (!validatedOrder) {\n throw new Error("Invalid order");\n }\n this.base.startTransaction();\n const user = await userService.getUserInfo(validatedOrder.customerId);\n if (!user) {\n const savedOrder = await tryAddUserWithLegacySystem(validatedOrder);\n return savedOrder;\n }\n // And it goes on and on until the pricing module is mentioned\n}\n')),(0,r.yg)("p",null,"So many details and things to learn upfront, which of them is crucial for her to learn now before dealing with her task? 
How can she find where is that pricing module?"),(0,r.yg)("p",null,"She is not happy. Right off the bat, she must make herself acquaintance with a handful of product and technical narratives. She just fell off the complexity cliff: from a zero-complexity controller straight into a 1000-piece puzzle. Many of them are unrelated to her task."),(0,r.yg)("h2",{id:"the-use-case-pattern"},"The use-case pattern"),(0,r.yg)("p",null,"In a perfect world, she would love first to get a high-level brief of the involved steps so she can understand the whole flow, and from this comfort standpoint choose where to deepen her journey. This is what this pattern is all about."),(0,r.yg)("p",null,"The use-case is a file with a single function that is being called by the API controller to orchestrate the various implementation services. It's merely a simple function that enumerates and calls the code that does the actual job:"),(0,r.yg)("p",null,(0,r.yg)("img",{alt:"A use-case code example",src:a(132).A,width:"1321",height:"444"})),(0,r.yg)("p",null,"Each interaction with the system\u2014whether it's posting a new comment, requesting user deletion, or any other action\u2014is managed by a dedicated use-case function. Each use-case constitutes multiple 'steps' - function calls that fulfill the desired flow."),(0,r.yg)("p",null,"By design, it's short, flat, no If/else, no try-catch, no algorithms, just plain calls to functions. This way, it tells the story in the simplest manner. Note how it doesn't share too much details, but tells enough for one to understand 'WHAT' is happening here and 'WHO' is doing that, but not 'HOW'."),(0,r.yg)("p",null,"But why is this minimalistic approach so crucial?"),(0,r.yg)("h2",{id:"the-merits"},"The merits"),(0,r.yg)("h3",{id:"1-a-navigation-index"},"1. A navigation index"),(0,r.yg)("p",null,"When seeking a specific book in the local library, the visitor doesn't have to skim through all the shelves to find a specific topic of interest. A Library, like any other information system, uses a navigational system, wayfinding signage, to highlight the path to a specific information area."),(0,r.yg)("p",null,(0,r.yg)("img",{alt:"Library catalog",src:a(4186).A,width:"1792",height:"1024"}),"\n",(0,r.yg)("em",{parentName:"p"},"The library catalog redirects the reader to the area of interest")),(0,r.yg)("p",null,"Similarly, in software development, when a developer needs to address a particular issue\u2014such as fixing a bug in pricing calculations\u2014the 'use case' acts like a navigational tool within the application. It serves as a hitchhiker's guide, or the yellow pages, pinpointing exactly where to find the necessary piece of code. While other organizational strategies like modularization and folder structures offer ways to manage code, the 'use case' approach provides a more focused and precise index. it shows only the relevant areas (and not 50 unrelated modules), it tells ",(0,r.yg)("em",{parentName:"p"},"when precisely")," this module is used, what is the ",(0,r.yg)("em",{parentName:"p"},"specific")," entry point and which ",(0,r.yg)("em",{parentName:"p"},"exact")," parameters are passed."),(0,r.yg)("h3",{id:"2-deferred-and-spread-complexity"},"2. Deferred and spread complexity"),(0,r.yg)("p",null,"When a developer begins inspecting a codebase at the level of implementation services, she is immediately bombarded with intricate details. This immersion thrusts her into the depths of both product and technical complexities. 
Typically, she must navigate through a dependency injection system to instantiate classes, manage null states, and retrieve settings from a distributed configuration system"),(0,r.yg)("p",null,"When the code reader's journey starts at the level of implementation-services, she is immediately bombarded with intricate details. This immersion exposes her to both product and technical complexities right from the start. Typically, like in our example case, the code first use a dependency injection system to factor some classes, check for nulls in the state and get some values from the distributed config system - all before even starting on the primary task. This is called ",(0,r.yg)("em",{parentName:"p"},"accidental complexity"),". Tackling complexity is one of the finest art of app design, as the code planner you can't just eliminate complexity, but you may at least reduce the chances of someone meeting it."),(0,r.yg)("p",null,"Imagine your application as a tree where branches represent functions and the fruits are pockets of embedded complexity, some of which are poisoned (i.e., unnecessary complexities). Your objective is to structure this tree so that navigating through it exposes the visitor to as few poisoned fruits as possible:"),(0,r.yg)("p",null,(0,r.yg)("img",{alt:"The blocking-complexity tree",src:a(7951).A,width:"792",height:"760"}),"\n",(0,r.yg)("em",{parentName:"p"},"The accidental-complexity tree: A visitor aiming to reach a specific leaf must navigate through all the intervening poisoned fruits.")),(0,r.yg)("p",null,"This is where the 'Use Case' approach shines: by prioritizing high-level product steps and minimal technical details at the outset\u2014a navigation system that simplifies access to various parts of the application. With this navigation tool, she can easily ignore steps that are unrelated with her work, and avoid poisoned fruits. A true strategic design win."),(0,r.yg)("p",null,(0,r.yg)("img",{alt:"The spread-complexity tree",src:a(9635).A,width:"792",height:"760"}),"\n",(0,r.yg)("em",{parentName:"p"},"The spread-complexity tree: Complexity is pushed to the periphery, allowing the reader to navigate directly to the essential fruits only.")),(0,r.yg)("h3",{id:"3-a-practical-workflow-that-promotes-efficiency"},"3. A practical workflow that promotes efficiency"),(0,r.yg)("p",null,"When embarking on a new coding flow, where do you start? After digesting the requirements and setting up some initial API routes and high-level component tests, the next logical step might be less obvious. Here's a strategy: begin with a use-case. This approach promotes an outside-in workflow that not only streamlines development but also exposes potential risks early on."),(0,r.yg)("p",null,"While drafting a new use-case, you essentially map out the various steps of the process. Each step is a call to some service or repository functions, sometimes before they even exist. Effortlessly and spontaneously, these steps become your TODO list, a live document that tells not only what should be implemented rather also where risky gotchas hide. 
Take, for instance, this straightforward use-case for adding an order:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},"export async function addOrderUseCase(orderRequest: OrderRequest) {\n const orderWithPricing = calculateOrderPricing(validatedOrder);\n const purchasingCustomer = await assertCustomerExists(orderWithPricing.customerId);\n const savedOrder = await insertOrder(orderWithPricing);\n await sendSuccessEmailToCustomer(savedOrder, purchasingCustomer.email);\n}\n")),(0,r.yg)("p",null,"This structured approach allows you to preemptively tackle potential implementation hurdles:"),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"- sendSuccessEmailToCustomer -")," What if you lack a necessary email service token from the Ops team? Sometimes, this demands approval and might last more than a week (believe me, I know). Acting ",(0,r.yg)("em",{parentName:"p"},"now"),", before spending 3 days on coding, can make a big difference."),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"- calculateOrderPricing -")," Reminds you to confirm pricing details with the product team\u2014ideally before they're out of office, avoiding delays that could impact your delivery timeline."),(0,r.yg)("p",null,(0,r.yg)("strong",{parentName:"p"},"- assertCustomerExists -")," This call goes to an external Microservice which belongs to the User Management team. Did they already provide an OpenAPI specification of their routes? Check your Slack now, if they didn't yet, asking too late can prevent it from becoming a roadblock later."),(0,r.yg)("p",null,"Not only does this high-level thinking highlight your tasks and risks, it's also an optimal spot to start the design from:"),(0,r.yg)("h3",{id:"4-the-optimal-design-viewpoint"},"4. The optimal design viewpoint"),(0,r.yg)("p",null,"Early on when initiating a use-case, the developers define the various types, functions signature, and their initial skeleton return data. This process naturally evolves into an effective design drill where the overall flow is decomposed into small units that actually fit. This sketch-out results in discovering early when puzzle pieces don't fit while considering the underlying technologies. Here is an example, once I sketched a use-case and initially came up with these steps:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},"await sendSuccessEmailToCustomer(savedOrder, purchasingCustomer.email, orderId);\nconst savedOrder = await insertOrder(orderWithPricing);\n")),(0,r.yg)("p",null,"Going with my initial use-case above, an email is sent before the the order is saved. Soon enough the compiler yelled at me: The email function signature is not satisfied, an 'Order Id' parameter is needed but to obtain one the order must be saved to DB first. I tried to change the order, unfortunately it turned out that my ORM is not returning the ID of saved entities. I'm stuck, my design struggles, at least this is realized before spending days on details. Unlike designing with papers and UML, designing with use-case brings no overhead. Moreover, unlike high-level diagrams detached from implementation realities, use-case design is grounded in the actual constraints of the technology being used."),(0,r.yg)("h3",{id:"5-better-coverage-reports"},"5. Better coverage reports"),(0,r.yg)("p",null,"Say you have 82.35% testing code coverage, are you happy and feeling confident to deploy? 
I'd suggest that anyone having below 100% must clarify first which code ",(0,r.yg)("em",{parentName:"p"},"exactly")," is not covered with testing. Is this some nitty-gritty niche code or actually critical business operations that are not fully tested? Typically, answering this requires scrutinizing all the app file coverage, a daunting task."),(0,r.yg)("p",null,"Use-cases simplifies the coverage coverage digest: when looking directly into the use-cases folder, one gets ",(0,r.yg)("em",{parentName:"p"},"'features coverage'"),", a unique look into which user features and steps lack testing:"),(0,r.yg)("p",null,(0,r.yg)("img",{alt:"Use case coverage",src:a(2899).A,width:"1327",height:"713"}),"\n",(0,r.yg)("em",{parentName:"p"},"The use-cases folder test coverage report, some use-cases are only partially tested")),(0,r.yg)("p",null,"See how the code above has an excellent overall coverage, 82.35%. But what about the remaining 17.65% code? Looking at the report triggers a red flag: the unusual 'payment-use-case' is not tested. This flow is where revenues are generated, a critical financial process which as turns out has a very low test coverage. This significant observation calls for immediate actions. Use-case coverage thus not only helps in understanding what parts of your application are tested but also prioritizes testing efforts based on business criticality rather than mere technical functionality."),(0,r.yg)("h3",{id:"6-practical-domain-driven-code"},"6. Practical domain-driven code"),(0,r.yg)("p",null,'The influential book "Domain-Driven Design" advocates for "committing the team to relentlessly exercise the domain language in all communications within the team and in the code." This principle asserts that aligning code closely with product narratives fosters a common language among diverse stakeholders (e.g., product, team-leads, frontend, backend). While this sounds sensible, this advice is also a little vague - how and where should this happen?'),(0,r.yg)("p",null,"Use-cases bring this idea down to earth: the use-case files are named after user journeys in the system (e.g., purchase-new-goods), the use-case code itself naturally describes the flow in a product language. For instance, if employees commonly use the term 'cut' at the water cooler to refer to a price reduction, the corresponding use-case should employ a function named 'calculatePriceCut'. This naming convention not only reinforces the domain language but also enhances mutual understanding across the team."),(0,r.yg)("h3",{id:"7-consistent-observability"},"7. Consistent observability"),(0,r.yg)("p",null,"I bet you encountered the situation when you turn the log level to 'Debug' (or any other verbose mode) and gets gazillion, overwhelming, and unbearable amount of log statements. Great chances that you also met the opposite when setting the logger level to 'Info' but there are also almost zero logging for that specific route that you're looking into. It's hard to formalize among team members when exactly each type of logging should be invoked, the result is a typical inconsistent and lacking observability."),(0,r.yg)("p",null,"Use-cases can drive trustworthy and consistent monitoring by taking advantage of the produced use-case steps. Since the precious work of breaking-down the flow into meaningful steps was already done (e.g., send-email, charge-credit-card), each step can produce the desired level of logging. 
For example, one team's approach might be to emit logger.info on a use-case start and use-case end, and then each step will emit logger.debug. Whatever the chosen specific level is, use-case steps bring consistency and automation. Put aside logging, the same can be applied with any other observability technique like OpenTelemetry to produce custom spans for every flow step."),(0,r.yg)("p",null,"The implementation though demands some thinking, cluttering every step with a log statement is both verbose and depends on human manual work:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},'// \u2757\ufe0fVerbose use case\nexport async function addOrderUseCase(orderRequest: OrderRequest): Promise {\n logger.info("Add order use case - Adding order starts now", orderRequest);\n const validatedOrder = validateAndCoerceOrder(orderRequest);\n logger.debug("Add order use case - The order was validated", validatedOrder);\n const orderWithPricing = calculateOrderPricing(validatedOrder);\n logger.debug("Add order use case - The order pricing was decided", validatedOrder);\n const purchasingCustomer = await assertCustomerHasEnoughBalance(orderWithPricing);\n logger.debug("Add order use case - Verified the user balance already", purchasingCustomer);\n const returnOrder = mapFromRepositoryToDto(purchasingCustomer as unknown as OrderRecord);\n logger.info("Add order use case - About to return result", returnOrder);\n return returnOrder;\n}\n')),(0,r.yg)("p",null,"One way around this is creating a step wrapper function that makes it observable. This wrapper function will get called for each step:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},'import { openTelemetry } from "@opentelemetry";\nasync function runUseCaseStep(stepName, stepFunction) {\n logger.debug(`Use case step ${stepName} starts now`);\n // Create Open Telemetry custom span\n openTelemetry.startSpan(stepName);\n return await stepFunction();\n}\n')),(0,r.yg)("p",null,"Now the use-case gets automated and consistent transparency:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},'export async function addOrderUseCase(orderRequest: OrderRequest) {\n // \ud83d\uddbc This is a use case - the story of the flow. Only simple, flat and high-level code is allowed\n const validatedOrder = await runUseCaseStep("Validation", validateAndCoerceOrder.bind(null, orderRequest));\n const orderWithPricing = await runUseCaseStep("Calculate price", calculateOrderPricing.bind(null, validatedOrder));\n await runUseCaseStep("Send email", sendSuccessEmailToCustomer.bind(null, orderWithPricing));\n}\n')),(0,r.yg)("p",null,"The code is a little simplified, in real-world wrapper you'll have to put try-catch and cover other corner cases, but it makes the point: each step is a meaningful milestone in the user's journey that gets ",(0,r.yg)("em",{parentName:"p"},"automated and consistent")," observability."),(0,r.yg)("h2",{id:"implementation-best-practices"},"Implementation best practices"),(0,r.yg)("h3",{id:"1-dead-simple-no-code"},"1. Dead-simple 'no code'"),(0,r.yg)("p",null,"Since use-cases are mostly about zero complexity, use no code constructs but flat calls to functions. No If/Else, no switch, no try/catch, nothing, only a simple list of steps. 
While ago I decided to put ",(0,r.yg)("em",{parentName:"p"},"only one")," If/Else in a use-case: "),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},"export async function addOrderUseCase(orderRequest: OrderRequest) {\n const validatedOrder = validateAndCoerceOrder(orderRequest);\n const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);\n if (purchasingCustomer.isPremium) {//\u2757\ufe0f\n sendEmailToPremiumCustomer(purchasingCustomer);\n // This easily will grow with time to multiple if/else\n }\n}\n")),(0,r.yg)("p",null,"A month later when I visited the code above there were already three nested If/elses. Year from now the function above will host a typical imperative code with many nested branches. Avoid this slippery road by putting a very strict border, put the conditions within the step functions:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},"export async function addOrderUseCase(orderRequest: OrderRequest) {\n const validatedOrder = validateAndCoerceOrder(orderRequest);\n const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);\n await sendEmailIfPremiumCustomer(purchasingCustomer); //\ud83d\ude42\n}\n")),(0,r.yg)("h3",{id:"2-find-the-right-level-of-specificity"},"2. Find the right level of specificity"),(0,r.yg)("p",null,"The finest art of a great use case is finding the right level of details. At this early stage, the reader is like a traveler who uses the map to get some sense of the area, or find a specific road. Definitely not learn about every road in the country. On the other hand, a good map doesn't show only the main highway and nothing else. For example, the following use-case is too short and vague:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},"export async function addOrderUseCase(orderRequest: OrderRequest) {\n const validatedOrder = validateAndCoerceOrder(orderRequest);\n const finalOrderToSave = await applyAllBusinessLogic(validatedOrder);//\ud83e\udd14\n await insertOrder(finalOrderToSave);\n}\n")),(0,r.yg)("p",null,"The code above doesn't tell a story, neither eliminate some paths from the journey. Conversely, the following code is doing better in telling the story brief:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},"export async function addOrderUseCase(orderRequest: OrderRequest) {\n const validatedOrder = validateAndCoerceOrder(orderRequest);\n const pricedOrder = await calculatePrice(validatedOrder);\n const purchasingCustomer = await assertCustomerHasEnoughBalance(orderWithPricing);\n const orderWithShippingInstructions = await addShippingInfo(pricedOrder, purchasingCustomer);\n await insertOrder(orderWithShippingInstructions);\n}\n")),(0,r.yg)("p",null,"Things get a little more challenging when dealing with long flows. What if there a handful of important steps, say 20? what if multiple use-case have a lot of repetition and shared step? Consider the case where 'admin approval' is a multi-step process which is invoked by a handful of different use-cases? When facing this, consider breaking-down into multiple use-cases where one is allowed to call the other."),(0,r.yg)("h3",{id:"3-when-have-no-choice-control-the-db-transaction-from-the-use-case"},"3. When have no choice, control the DB transaction from the use-case"),(0,r.yg)("p",null,"What if step 2 and step 5 both deal with data and must be atomic (fail or succeed together)? 
Typically you'll handle this with DB transactions, but since each step is discrete, how can a transaction be shared among the coupled steps?"),(0,r.yg)("p",null,"If the steps take place one after the other, it makes sense to let the downstream service/repository handle them together and abstract the transaction from the use-case. What if the atomic steps are not consecutive? In this case, though not ideal, there is no escape from making the use-case acquaintance with a transaction object:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},"export async function addOrderUseCase(orderRequest: OrderRequest) {\n // \ud83d\uddbc This is a use case - the story of the flow. Only simple, flat and high-level code is allowed\n const transaction = Repository.startTransaction();\n const purchasingCustomer = await assertCustomerHasEnoughBalance(orderRequest, transaction);\n const orderWithPricing = calculateOrderPricing(purchasingCustomer);\n const savedOrder = await insertOrder(orderWithPricing, transaction);\n const returnOrder = mapFromRepositoryToDto(savedOrder);\n Repository.commitTransaction(transaction);\n return returnOrder;\n}\n")),(0,r.yg)("h3",{id:"4-aggregate-small-use-cases-in-a-single-file"},"4. Aggregate small use-cases in a single file"),(0,r.yg)("p",null,"A use-case file is created per user-flow that is triggered from an API route. This model make sense for significant flows, how about small operations like getting an order by id? A 'get-order-by-id' use case is likely to have 1 line of code, seems like an unnecessary overhead to create a use-case file for every small request. In this case, consider aggregating multiple operations under a single conceptual use-case file. Here below for example, all the order queries co-live under the query-orders use-case file:"),(0,r.yg)("pre",null,(0,r.yg)("code",{parentName:"pre",className:"language-javascript"},"// query-orders-use-cases.ts\nexport async function getOrder(id) {\n // \ud83d\uddbc This is a use case - the story of the flow. Only simple, flat and high-level code is allowed\n const result = await orderRepository.getOrderByID(id);\n return result;\n}\n\nexport async function getAllOrders(criteria) {\n // \ud83d\uddbc This is a use case - the story of the flow. Only simple, flat and high-level code is allowed\n const result = await orderRepository.queryOrders(criteria);\n return result;\n}\n")),(0,r.yg)("h2",{id:"closing-easy-to-start-use-everywhere"},"Closing: Easy to start, use everywhere"),(0,r.yg)("p",null,"If you find it valuable, you'll also get great return for your modest investment: No fancy tooling is needed, the learning time is close to zero (in fact, you just read one of the longest article on this matter...). There is also no need to refactor a whole system rather gradually implement per-feature."),(0,r.yg)("p",null,"Once you become accustomed to using it, you'll find that this technique extends well beyond API routes. It's equally beneficial for managing message queues subscriptions and scheduled jobs. Backend-aside, use it as the facade of every module or library - the code that is being called by the entry file and orchestrates the internals. The same idea can be applied in Frontend as well: declare the core actors at the component top level. 
Without implementation details, just put the reference to the component's event handlers and hooks - now the reader knows about the key events that will drive this component."),(0,r.yg)("p",null,"You might think this all sounds remarkably straightforward\u2014and it is. My apologies, this article wasn't about cutting-edge technologies. Neither did it cover shiny new dev toolings or AI-based rocket-science. In a land where complexity is the key enemy, simple ideas can be more impactful than sophisticated tooling and the Use-case is a powerful and sweet pattern that meant to live in every piece of software."))}u.isMDXComponent=!0},7951:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/blocking-complexity-tree-dd1cde956e00160fe4fadf67d6dd3649.jpg"},9635:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/deferred-complexity-tree-3407b9e6f355d2e32aacfc0bd7216de4.jpg"},4186:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/library-catalog-37d0f18aa61b71ed77ae72a945f3c1de.webp"},2899:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/use-case-coverage-3f223674f7783dfc904109647ad99304.png"},132:(e,t,a)=>{a.d(t,{A:()=>n});const n=a.p+"assets/images/use-code-example-6d6c34330ad8a86f7c511123d4d5f654.png"}}]);
\ No newline at end of file
diff --git a/assets/js/8809.dabd488c.js b/assets/js/8809.dabd488c.js
new file mode 100644
index 00000000..78a8183a
--- /dev/null
+++ b/assets/js/8809.dabd488c.js
@@ -0,0 +1 @@
+(self.webpackChunkpractica_docs=self.webpackChunkpractica_docs||[]).push([[8809],{5680:(e,t,n)=>{"use strict";n.d(t,{xA:()=>u,yg:()=>f});var o=n(6540);function a(e,t,n){return t in e?Object.defineProperty(e,t,{value:n,enumerable:!0,configurable:!0,writable:!0}):e[t]=n,e}function r(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);t&&(o=o.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,o)}return n}function c(e){for(var t=1;t=0||(a[n]=e[n]);return a}(e,t);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);for(o=0;o=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(a[n]=e[n])}return a}var i=o.createContext({}),s=function(e){var t=o.useContext(i),n=t;return e&&(n="function"==typeof e?e(t):c(c({},t),e)),n},u=function(e){var t=s(e.components);return o.createElement(i.Provider,{value:t},e.children)},m="mdxType",d={inlineCode:"code",wrapper:function(e){var t=e.children;return o.createElement(o.Fragment,{},t)}},p=o.forwardRef((function(e,t){var n=e.components,a=e.mdxType,r=e.originalType,i=e.parentName,u=l(e,["components","mdxType","originalType","parentName"]),m=s(n),p=a,f=m["".concat(i,".").concat(p)]||m[p]||d[p]||r;return n?o.createElement(f,c(c({ref:t},u),{},{components:n})):o.createElement(f,c({ref:t},u))}));function f(e,t){var n=arguments,a=t&&t.mdxType;if("string"==typeof e||a){var r=n.length,c=new Array(r);c[0]=p;var l={};for(var i in t)hasOwnProperty.call(t,i)&&(l[i]=t[i]);l.originalType=e,l[m]="string"==typeof e?e:a,c[1]=l;for(var s=2;s{"use strict";n.d(t,{A:()=>u});var o=n(6540),a=n(1312),r=n(7559),c=n(8168),l=n(53);const i={iconEdit:"iconEdit_Z9Sw"};function s(e){let{className:t,...n}=e;return o.createElement("svg",(0,c.A)({fill:"currentColor",height:"20",width:"20",viewBox:"0 0 40 40",className:(0,l.A)(i.iconEdit,t),"aria-hidden":"true"},n),o.createElement("g",null,o.createElement("path",{d:"m34.5 11.7l-3 3.1-6.3-6.3 3.1-3q0.5-0.5 1.2-0.5t1.1 0.5l3.9 3.9q0.5 0.4 0.5 1.1t-0.5 1.2z m-29.5 17.1l18.4-18.5 6.3 6.3-18.4 18.4h-6.3v-6.2z"})))}function u(e){let{editUrl:t}=e;return o.createElement("a",{href:t,target:"_blank",rel:"noreferrer noopener",className:r.G.common.editThisPage},o.createElement(s,null),o.createElement(a.A,{id:"theme.common.editThisPage",description:"The link label to edit the current page"},"Edit this page"))}},1107:(e,t,n)=>{"use strict";n.d(t,{A:()=>u});var o=n(8168),a=n(6540),r=n(53),c=n(1312),l=n(6342),i=n(5489);const s={anchorWithStickyNavbar:"anchorWithStickyNavbar_LWe7",anchorWithHideOnScrollNavbar:"anchorWithHideOnScrollNavbar_WYt5"};function u(e){let{as:t,id:n,...u}=e;const{navbar:{hideOnScroll:m}}=(0,l.p)();if("h1"===t||!n)return a.createElement(t,(0,o.A)({},u,{id:void 0}));const d=(0,c.T)({id:"theme.common.headingLinkTitle",message:"Direct link to {heading}",description:"Title for link to heading"},{heading:"string"==typeof u.children?u.children:n});return a.createElement(t,(0,o.A)({},u,{className:(0,r.A)("anchor",m?s.anchorWithHideOnScrollNavbar:s.anchorWithStickyNavbar,u.className),id:n}),u.children,a.createElement(i.A,{className:"hash-link",to:`#${n}`,"aria-label":d,title:d},"\u200b"))}},7780:(e,t,n)=>{"use strict";n.d(t,{A:()=>ye});var o=n(6540),a=n(5680),r=n(8168),c=n(5260);var l=n(2303),i=n(53),s=n(5293),u=n(6342);function m(){const{prism:e}=(0,u.p)(),{colorMode:t}=(0,s.G)(),n=e.theme,o=e.darkTheme||n;return"dark"===t?o:n}var d=n(7559),p=n(8426),f=n.n(p);const 
What's special about this article?

As a testing consultant, I have read tons of testing articles over the years. The majority are nice-to-read, casual pieces of content that are not always worth your precious time. Once in a while, not very often, I land on an article that is shockingly good and can genuinely improve your test-writing skills. I've cherry-picked these outstanding articles for you and added my abstract alongside each. Half of these articles relate directly to JavaScript/Node.js; the other half covers ubiquitous testing concepts that are applicable in every language
Why did I find these articles to be outstanding? First, the writing quality is excellent. Second, they deal with the 'new world of testing', not the commonly known 'TDD-ish' stuff but rather modern concepts and tooling
Too busy to read them all? Look for the articles decorated with a medal 🏅; these are true masterpieces that you don't want to miss
Before we start: If you haven't heard, I launched my comprehensive Node.js testing course a week ago (curriculum here). There are fewer than 48 hours left for the 🎁 special launch deal
Here they are, 10 outstanding testing articles:
📄 1. 'Selective Unit Testing – Costs and Benefits'
✍️ Author: Steve Sanderson
🔖 Abstract: We all found ourselves at least once in the ongoing and flammable discussion about 'units' vs 'integration'. This article delves into a greater level of specificity and discusses WHEN unit tests shine, by considering the costs of writing these tests under various scenarios. Many treat their testing strategy as a static model, a testing technique they always apply regardless of the context. "Always write unit tests against functions" and "Write mostly integration tests" are the types of arguments often heard. Conversely, this article suggests that the attractiveness of unit tests should be evaluated based on the costs and benefits per module. The article classifies multiple scenarios where the net value of unit tests is high or low, for example:
If your code is basically obvious – so at a glance you can see exactly what it does – then additional design and verification (e.g., through unit testing) yields extremely minimal benefit, if any
The author also offers a 2x2 model to visualize when the attractiveness of unit tests is high or low
Side note, not part of the article: Personally, I (Yoni) always start with component tests, outside-in, covering the high-level user flows first (a.k.a. the testing diamond). Later, once I have functions, I add unit tests based on their net value. This article helped me a lot in classifying and evaluating the benefits of units in various scenarios
📄 2. 'Testing Implementation Details'
✍️ Author: Kent C. Dodds
🔖 Abstract: The author outlines, with a code example, the unavoidable tragic fate of a tester who asserts on implementation details. Put aside the effort of testing so many details; going this route always ends with 'false positives' and 'false negatives' that cloud the tests' reliability. The article illustrates this with a frontend code example, but the takeaway is ubiquitous to any kind of testing
"There are two distinct reasons that it's important to avoid testing implementation details. Tests which test implementation details:
Can break when you refactor application code. False negatives
May not fail when you break application code. False positives"
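To make this concrete, here is a minimal jest-style sketch; the ShoppingCart module and its internal field are hypothetical, not taken from the article:

```typescript
import { ShoppingCart } from "./shopping-cart"; // hypothetical module

// ❌ Implementation detail: breaks if the private 'items' field is renamed (false negative),
// and may stay green even when the user-visible total is broken (false positive)
test("the internal items array holds the added product", () => {
  const cart = new ShoppingCart();
  cart.addProduct({ name: "candy", price: 100 });
  expect((cart as any).items).toHaveLength(1);
});

// ✅ Observable behavior: survives refactoring, fails only when the outcome breaks
test("When adding a 100$ product, then the total is 100$", () => {
  const cart = new ShoppingCart();
  cart.addProduct({ name: "candy", price: 100 });
  expect(cart.getTotal()).toBe(100);
});
```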
📄 3. 'Testing Microservices, the sane way'
✍️ Author: Cindy Sridharan
🔖 Abstract: This one is the entire Microservices and distributed modern testing bible packed into a single long article that is also super engaging. I remember when I came across it four years ago, wintertime; I spent an hour every day under my blanket before sleep with a smile spread over my face. I clicked on every link and paused after every paragraph to think - a whole new world was opening in front of me. In fact, it was so fascinating that it made me want to specialize in this domain. Fast forward, years later, this is a major part of my work and I enjoy every moment
This paper starts by explaining why E2E tests, unit tests, and exploratory QA will fall short in a distributed environment. Not only this, it explains why any kind of coded test won't be enough and why a rich toolbox of techniques is needed. It goes through a handful of modern testing techniques that are unfamiliar to most developers. One of its key parts deals with what should be the canonical developer's testing technique: the author advocates for "big unit tests" (i.e., component tests) as they strike a great balance between developer comfort and realism
I coined the term “step-up testing”, the general idea being to test at one layer above what’s generally advocated for. Under this model, unit tests would look more like integration tests (by treating I/O as a part of the unit under test within a bounded context), integration testing would look more like testing against real production, and testing in production looks more like, well, monitoring and exploration. The restructured test pyramid (test funnel?) for distributed systems would look like the following:
Beyond its main scope, whatever type of system you are dealing with, this article will broaden your perspective on testing and expose you to many new ideas that are highly applicable
👓 Read time: > 2 hours (10,500 words with many links)
📄 4. 'How to Unit Test with Node.js?' (JavaScript examples, for beginners)
✍️ Author: Ryan Jones
🔖 Abstract: One single recommendation for beginners: every other article on this list covers advanced testing. This article, and only this one, is meant for testing newbies who are looking to take their first practical steps in this world
This tutorial was chosen from a handful of alternatives because it's well-written and relatively comprehensive. It covers the first-steps 'kata' that a beginner should learn first: the test anatomy syntax, the test runner's CLI, assertions, and asynchronous tests. It goes without saying that this knowledge won't be sufficient for covering a real-world app with tests, but it gets you safely to the next phase. My personal advice: after reading this one, your next step is learning about test doubles (mocking)
📄 5. 'Unit Test Fetish'
✍️ Author: Martin Sústrik
🔖 Abstract: The article opens with 'I hear that people feel an uncontrollable urge to write unit tests nowadays. If you are one of those affected, spare few minutes and consider these reasons for NOT writing unit tests'. Despite these words, the article is not against unit tests as a principle; rather, it highlights when and where unit tests fall short. In those cases, other techniques should be considered. Here is an example: unit tests inherently have a lower return on investment, and the author comes with a sound analogy for this: 'If you are painting a house, you want to start with a biggest brush at hand and spare the tiny brush for the end to deal with fine details. If you begin your QA work with unit tests, you are essentially trying to paint entire house using the finest chinese calligraphy brush...'
📄 6. 'Mocking is a Code Smell' (JavaScript examples)
✍️ Author: Eric Elliott
🔖 Abstract: Most of the articles here belong to the 'modern wave of testing'; here is something more 'classic', appealing to TDD lovers or just anyone who needs to write unit tests. This article is about HOW to reduce the amount of mocking (test doubles) in your tests - not only because mocks are an overhead in test writing, but also because they hint that something might be wrong. In other words, mocking is not always wrong and in need of an immediate fix, but a lot of mocking is a sign of something not ideal. Consider a module that inherits from many others, or a chatty one that collaborates with a handful of other modules to do its job - testing and changing this structure is a burden:
"Mocking is required when our decomposition strategy has failed"
The author goes through a variety of techniques to design more autonomous units, like using pure functions by isolating side effects from the rest of the program logic, using pub/sub, isolating I/O, composing units with patterns like monadic composition, and some more
The overall article tone is balanced. In some parts, it encourages functional programming and techniques that are far from the mainstream - consider reading these few parts with a grain of salt
📄 7. 'Why Good Developers Write Bad Unit Tests'
✍️ Author: Michael Lynch
🔖 Abstract: I love this one so much. The author exemplifies how, unexpectedly, it is sometimes the good developers with their great intentions who write bad tests:
Too often, software developers approach unit testing with the same flawed thinking... They mechanically apply all the “rules” they learned in production code without examining whether they’re appropriate for tests. As a result, they build skyscrapers at the beach
Concrete code examples show how test readability deteriorates once we apply 'skyscraper' thinking, and how to keep it simple. In one part, he demonstrates how thoughtfully violating the DRY principle allows the reader to stay within the test while still keeping the code maintainable (see the tiny sketch below). This article alone, in 11 minutes, can greatly improve the tests of developers who tend to write sophisticated tests. If you have someone like this on your team, you now know what to do
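A jest-style illustration of that DRY point; the function names here are hypothetical, not taken from the article:

```typescript
// ❌ 'Skyscraper' style: DRY, but understanding the test requires visiting two helpers
test("When adding a product, then the right amount is charged", async () => {
  const order = buildOrderFixture(); // 🤔 which product? which price?
  const receipt = await addOrder(order);
  expect(receipt).toMatchObject(buildExpectedReceipt()); // 🤔 what exactly is expected?
});

// ✅ Thoughtfully repetitive: every value needed to understand the test is right here
test("When adding an order with a 100$ product, then 100$ is charged", async () => {
  const order = { productName: "candy", price: 100 };
  const receipt = await addOrder(order);
  expect(receipt.chargedAmount).toBe(100);
});
```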
📄 8. 'An Overview of JavaScript Testing in 2022' (JavaScript examples)
✍️ Author: Vitali Zaidman
🔖 Abstract: This paper is unique here, as it doesn't cover a single topic but is rather a rundown of (almost) all JavaScript testing tools. This allows you to enrich the toolbox in your mind and have more screwdrivers for more types of screws. For example, knowing that there are IDE extensions that show coverage information right within the code might help you boost test adoption in the team, if needed. Knowing that there are solid, free, and open-source visual regression tools might encourage you to dip your toes in that water, to name a few examples.
"We reviewed the most trending testing strategies and tools in the web development community and hopefully made it easier for you to test your sites. In the end, the best decisions regarding application architecture today are made by understanding general patterns that are trending in the very active community of developers, and combining them with your own experience and the characteristics of your application."
The author was also kind enough to list pros/cons next to most tools, so the reader can quickly get a sense of how the various options stack up against each other. The article covers categories like assertion libraries, test runners, code coverage tools, visual regression tools, E2E suites, and more
📄 9. 'Testing in Production, the safe way'
✍️ Author: Cindy Sridharan
🔖 Abstract: 'Testing in production' is a provocative term that sounds like a risky and careless approach of testing over production instead of verifying the delivery beforehand (yet another case of bad testing terminology). In practice, testing in production doesn't replace coding-time testing; it just adds an additional layer of confidence by safely testing in 3 more phases: deployment, release, and post-release. This comprehensive article covers dozens of techniques, some unusual, like traffic shadowing, tap compare, and more. More than anything else, it illustrates a holistic testing workflow that builds confidence cumulatively, from the developer machine until the new version is serving users in production
I’m more and more convinced that staging environments are like mocks - at best a pale imitation of the genuine article and the worst form of confirmation bias.
It’s still better than having nothing - but “works in staging” is only one step better than “works on my machine”.
📄 10. 'Please don't mock me' (JavaScript examples, from JSConf)
🏅 This is a masterpiece
✍️ Author: Justin Searls
🔖 Abstract: This fantastic YouTube talk deals with the Achilles heel of testing: where exactly to mock. The dilemma of where to end the test scope, what should be mocked and what shouldn't, is presumably the most strategic test design decision. Consider, for example, having module A which interacts with module B. If you isolate A by mocking B, A's tests will always pass, even when B's interface has changed and A's code didn't follow. This makes A's tests highly stable but... production will fail in hours. In his talk, Justin says:
"A test that never fails is a bad test because it doesn't tell you anything. Design tests to fail"
Then he goes on to tackle many other interesting mocking crossroads, with beautiful visuals and tons of insights. Please don't miss this one
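To make the module A/B example above tangible, here is a jest-style sketch (the modules and functions are hypothetical) of a test that can never fail:

```typescript
import { addOrder } from "./order-service"; // module A
import * as userService from "./user-service"; // module B
jest.mock("./user-service"); // B is fully mocked away

test("Module A always passes, no matter what happens to module B", async () => {
  // ❗️ If the real user-service renames or reshapes getUser tomorrow, this mock
  // still answers happily - the test stays green while production fails in hours
  (userService.getUser as jest.Mock).mockResolvedValue({ id: 1, balance: 500 });
  const receipt = await addOrder({ userId: 1, price: 100 });
  expect(receipt.approved).toBe(true);
});
```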
Here are a few articles that I wrote. Obviously, I don't 'recommend' my own craft; I'm just checking modestly whether they appeal to you. Together, these articles gained 25,000 GitHub stars - maybe you'll find one of them useful?
About the sweet and powerful 'use case' code pattern

Intro: A sweet pattern that got lost in time

When was the last time you introduced a new pattern to your code? The use-case pattern is a great candidate: it's powerful, sweet, easy to implement, and can strategically elevate your backend code quality in a short time.
The term 'use case' means many different things in our industry. It's used by product folks to describe a user journey and mentioned by various famous architecture books to describe vague high-level concepts. This article focuses on its practical application at the code level, emphasizing its surprising merits and how to implement it correctly.
Technically, the use-case pattern code belongs between the controller (e.g., API routes) and the business-logic services (like those calculating or saving data). The use-case code is called by the controller and tells, in high-level words and in a simple manner, the flow that is about to happen. Doing so increases the code's readability and navigability, pushes complexity toward the edges, improves observability, and brings 3 other merits that are shown below with examples.
But before we delve into its mechanics, let's first touch on a common problem it aims to address and see some code that calls for trouble.
Prefer a 10 min video? Watch here, or keep reading below
Imagine a developer, returning to a codebase she hasn't touched in months, tasked with fixing a bug in the 'new orders flow'—specifically, an issue with price calculation in an electronic shop app.
Her journey begins smoothly:
- 🤗 Testing - She starts her journey from the automated tests, to learn about the flow with an outside-in approach. The testing code is short and standard, as it should be:
test("When adding an order with 100$ product, then the price charge should be 100$ ",async()=>{ // .... })
- 🤗 Controller - She moves to skim through the implementation and starts from the API routes. Unsurprisingly, the Controller code is straightforward:
app.post("/api/order",async(req:Request,res:Response)=>{ const newOrder = req.body; await orderService.addOrder(newOrder);// 👈 This is where the real-work is done res.status(200).json({message:"Order created successfully"}); });
Smooth sailing thus far, almost zero complexity. Typically, the controller would now hand off to a Service where the real implementation begins, so she navigates into the order service to find where and how to fix that pricing bug.
- 😲 The service - Suddenly! She is thrown into hundreds of lines of code (at best) with tons of details. She encounters classes with intricate state, inheritance hierarchies, a dependency injection framework that wires all the dependent services, and other boilerplate code. Here is a sneak peek from a real-world service, already simplified for brevity. Read it, feel it:
```typescript
let DBRepository;
export class OrderService extends ServiceBase<OrderDto> {
  async addOrder(orderRequest: OrderRequest): Promise<Order> {
    try {
      ensureDBRepositoryInitialized();
      const { openTelemetry, monitoring, secretManager, priceService, userService } =
        dependencyInjection.getVariousServices();
      logger.info("Add order flow starts now", orderRequest);
      openTelemetry.sendEvent("new order", orderRequest);
      const validationRules = await getFromConfigSystem("order-validation-rules");
      const validatedOrder = validateOrder(orderRequest, validationRules);
      if (!validatedOrder) {
        throw new Error("Invalid order");
      }
      this.base.startTransaction();
      const user = await userService.getUserInfo(validatedOrder.customerId);
      if (!user) {
        const savedOrder = await tryAddUserWithLegacySystem(validatedOrder);
        return savedOrder;
      }
      // And it goes on and on until the pricing module is mentioned
    }
```
So many details and things to learn upfront; which of them is crucial for her right now, before dealing with her task? How can she find where that pricing module is?
She is not happy. Right off the bat, she must acquaint herself with a handful of product and technical narratives. She just fell off the complexity cliff: from a zero-complexity controller straight into a 1000-piece puzzle, many of whose pieces are unrelated to her task.
In a perfect world, she would first love to get a high-level brief of the involved steps, so she can understand the whole flow and, from this comfortable standpoint, choose where to deepen her journey. This is what this pattern is all about.
The use-case is a file with a single function that is being called by the API controller to orchestrate the various implementation services. It's merely a simple function that enumerates and calls the code that does the actual job:
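For illustration, a minimal use-case might look like this; a sketch assembled from the order-flow step names used throughout this article:

```typescript
// add-order-use-case.ts
// 🖼 The story of the flow: WHAT happens and WHO does it, but not HOW
export async function addOrderUseCase(orderRequest: OrderRequest): Promise<Order> {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderWithPricing);
  const savedOrder = await insertOrder(orderWithPricing);
  await sendSuccessEmailToCustomer(savedOrder);
  return mapFromRepositoryToDto(savedOrder);
}
```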
Each interaction with the system—whether it's posting a new comment, requesting user deletion, or any other action—is managed by a dedicated use-case function. Each use-case constitutes multiple 'steps' - function calls that fulfill the desired flow.
By design, it's short and flat: no if/else, no try-catch, no algorithms, just plain calls to functions. This way, it tells the story in the simplest manner. Note how it doesn't share too many details, but tells enough for one to understand 'WHAT' is happening here and 'WHO' is doing it, but not 'HOW'.
When seeking a specific book in the local library, the visitor doesn't have to skim through all the shelves to find a specific topic of interest. A library, like any other information system, uses a navigational system, wayfinding signage, to highlight the path to a specific information area.
The library catalog redirects the reader to the area of interest
Similarly, in software development, when a developer needs to address a particular issue—such as fixing a bug in pricing calculations—the 'use case' acts like a navigational tool within the application. It serves as a hitchhiker's guide, or the yellow pages, pinpointing exactly where to find the necessary piece of code. While other organizational strategies like modularization and folder structures offer ways to manage code, the 'use case' approach provides a more focused and precise index: it shows only the relevant areas (and not 50 unrelated modules), it tells precisely when this module is used, what the specific entry point is, and which exact parameters are passed.
When the code reader's journey starts at the level of implementation services, she is immediately bombarded with intricate details and exposed to both product and technical complexities right from the start. Typically, like in our example, the code first uses a dependency injection system to instantiate some classes, checks for nulls in the state, and gets some values from the distributed config system - all before even starting on the primary task. This is called accidental complexity. Tackling complexity is one of the finest arts of app design: as the code planner, you can't just eliminate complexity, but you can at least reduce the chances of someone meeting it.
Imagine your application as a tree where branches represent functions and the fruits are pockets of embedded complexity, some of which are poisoned (i.e., unnecessary complexities). Your objective is to structure this tree so that navigating through it exposes the visitor to as few poisoned fruits as possible:
The accidental-complexity tree: A visitor aiming to reach a specific leaf must navigate through all the intervening poisoned fruits.
This is where the 'Use Case' approach shines: by prioritizing high-level product steps and minimal technical details at the outset, it acts as a navigation system that simplifies access to various parts of the application. With this navigation tool, she can easily ignore steps that are unrelated to her work and avoid the poisoned fruits. A true strategic design win.
The spread-complexity tree: Complexity is pushed to the periphery, allowing the reader to navigate directly to the essential fruits only.
When embarking on a new coding flow, where do you start? After digesting the requirements and setting up some initial API routes and high-level component tests, the next logical step might be less obvious. Here's a strategy: begin with a use-case. This approach promotes an outside-in workflow that not only streamlines development but also exposes potential risks early on.
While drafting a new use-case, you essentially map out the various steps of the process. Each step is a call to some service or repository function, sometimes before it even exists. Effortlessly and spontaneously, these steps become your TODO list, a live document that tells not only what should be implemented but also where risky gotchas hide. Take, for instance, this straightforward use-case for adding an order:
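(A sketch consistent with the steps discussed right below; the exact step list is illustrative.)

```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  await assertCustomerExists(validatedOrder.customerId); // 👈 calls an external Microservice
  const orderWithPricing = calculateOrderPricing(validatedOrder); // 👈 pricing rules still unconfirmed
  const savedOrder = await insertOrder(orderWithPricing);
  await sendSuccessEmailToCustomer(savedOrder); // 👈 requires an email service token
  return savedOrder;
}
```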
This structured approach allows you to preemptively tackle potential implementation hurdles:
- sendSuccessEmailToCustomer - What if you lack a necessary email service token from the Ops team? Sometimes, this demands approval and might take more than a week (believe me, I know). Acting now, before spending 3 days on coding, can make a big difference.
- calculateOrderPricing - Reminds you to confirm pricing details with the product team—ideally before they're out of office, avoiding delays that could impact your delivery timeline.
- assertCustomerExists - This call goes to an external Microservice owned by the User Management team. Did they already provide an OpenAPI specification of their routes? Check your Slack now; if they haven't yet, asking early can prevent this from becoming a roadblock later.
Not only does this high-level thinking highlight your tasks and risks, it's also an optimal spot to start the design from:
Early on when initiating a use-case, the developers define the various types, function signatures, and their initial skeleton return data. This process naturally evolves into an effective design drill where the overall flow is decomposed into small units that actually fit. This sketch-out discovers early, while considering the underlying technologies, when puzzle pieces don't fit. Here is an example; once I sketched a use-case and initially came up with these steps:
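(Reconstructed from the description that follows; the flawed ordering is the point.)

```typescript
// ❗️ My initial, flawed sketch: the email step comes before the order is saved
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  await sendSuccessEmailToCustomer(orderWithPricing); // 👈 needs an 'Order Id'...
  const savedOrder = await insertOrder(orderWithPricing); // ...which exists only after this step
  return savedOrder;
}
```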
Going with my initial use-case above, an email is sent before the order is saved. Soon enough the compiler yelled at me: the email function signature is not satisfied, an 'Order Id' parameter is needed, but to obtain one the order must be saved to the DB first. I tried to change the order of the steps; unfortunately, it turned out that my ORM does not return the ID of saved entities. I'm stuck, my design struggles, but at least this was realized before spending days on details. Unlike designing with papers and UML, designing with a use-case brings no overhead. Moreover, unlike high-level diagrams detached from implementation realities, use-case design is grounded in the actual constraints of the technology being used.
Say you have 82.35% test code coverage; are you happy and confident enough to deploy? I'd suggest that anyone below 100% must first clarify which code exactly is not covered by tests. Is it some nitty-gritty niche code, or actually critical business operations that are not fully tested? Typically, answering this requires scrutinizing the coverage of every app file, a daunting task.
Use-cases simplify the coverage digest: when looking directly at the use-cases folder, one gets 'feature coverage', a unique look into which user features and steps lack testing:
The use-cases folder test coverage report, some use-cases are only partially tested
See how the code above has excellent overall coverage, 82.35%. But what about the remaining 17.65%? Looking at the report triggers a red flag: the unusual 'payment-use-case' is not tested. This flow is where revenues are generated, a critical financial process which, as it turns out, has very low test coverage. This significant observation calls for immediate action. Use-case coverage thus not only helps in understanding which parts of your application are tested, but also prioritizes testing efforts based on business criticality rather than mere technical functionality.
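Producing such a focused report is cheap. A sketch, assuming Jest and a use-cases folder convention (the glob path is illustrative):

```typescript
// jest.config.ts - a focused coverage view over the use-cases folder only
export default {
  collectCoverage: true,
  collectCoverageFrom: ["src/**/use-cases/**/*.ts"],
  coverageReporters: ["text", "html"],
};
```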
The influential book "Domain-Driven Design" advocates for "committing the team to relentlessly exercise the domain language in all communications within the team and in the code." This principle asserts that aligning code closely with product narratives fosters a common language among diverse stakeholders (e.g., product, team-leads, frontend, backend). While this sounds sensible, this advice is also a little vague - how and where should this happen?
Use-cases bring this idea down to earth: the use-case files are named after user journeys in the system (e.g., purchase-new-goods), the use-case code itself naturally describes the flow in a product language. For instance, if employees commonly use the term 'cut' at the water cooler to refer to a price reduction, the corresponding use-case should employ a function named 'calculatePriceCut'. This naming convention not only reinforces the domain language but also enhances mutual understanding across the team.
I bet you've encountered the situation where you turn the log level to 'Debug' (or any other verbose mode) and get a gazillion, overwhelming, unbearable amount of log statements. Great chances that you've also met the opposite: setting the logger level to 'Info' only to find almost zero logging for the specific route you're looking into. It's hard to formalize among team members exactly when each type of logging should be invoked; the result is typically inconsistent and lacking observability.
Use-cases can drive trustworthy and consistent monitoring by taking advantage of the already-produced use-case steps. Since the precious work of breaking down the flow into meaningful steps was already done (e.g., send-email, charge-credit-card), each step can produce the desired level of logging. For example, one team's approach might be to emit logger.info on use-case start and end, while each step emits logger.debug. Whatever the chosen level is, use-case steps bring consistency and automation. Logging aside, the same can be applied with any other observability technique, like OpenTelemetry, to produce custom spans for every flow step.
The implementation, though, demands some thinking: cluttering every step with a log statement is both verbose and depends on manual human work:
```typescript
// ❗️ Verbose use case
export async function addOrderUseCase(orderRequest: OrderRequest): Promise<Order> {
  logger.info("Add order use case - Adding order starts now", orderRequest);
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  logger.debug("Add order use case - The order was validated", validatedOrder);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  logger.debug("Add order use case - The order pricing was decided", orderWithPricing);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderWithPricing);
  logger.debug("Add order use case - Verified the user balance already", purchasingCustomer);
  const returnOrder = mapFromRepositoryToDto(purchasingCustomer as unknown as OrderRecord);
  logger.info("Add order use case - About to return result", returnOrder);
  return returnOrder;
}
```
One way around this is creating a step wrapper function that makes it observable. This wrapper function will get called for each step:
```typescript
import { openTelemetry } from "@opentelemetry";

async function runUseCaseStep(stepName, stepFunction) {
  logger.debug(`Use case step ${stepName} starts now`);
  // Create an Open Telemetry custom span
  openTelemetry.startSpan(stepName);
  return await stepFunction();
}
```
Now the use-case gets automated and consistent transparency:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const validatedOrder = await runUseCaseStep("Validation", validateAndCoerceOrder.bind(null, orderRequest));
  const orderWithPricing = await runUseCaseStep("Calculate price", calculateOrderPricing.bind(null, validatedOrder));
  await runUseCaseStep("Send email", sendSuccessEmailToCustomer.bind(null, orderWithPricing));
}
```
The code is a little simplified; in a real-world wrapper you'll have to add a try-catch and cover other corner cases, but it makes the point: each step is a meaningful milestone in the user's journey that gets automated and consistent observability.
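For illustration, a slightly more defensive wrapper might look like this. It's a sketch: it assumes the span object returned by startSpan exposes an end() method, and the error policy is up to your team:

```typescript
async function runUseCaseStep<T>(stepName: string, stepFunction: () => Promise<T>): Promise<T> {
  logger.debug(`Use case step ${stepName} starts now`);
  const span = openTelemetry.startSpan(stepName); // assumed to return a span with end()
  try {
    return await stepFunction();
  } catch (error) {
    logger.error(`Use case step ${stepName} failed`, error);
    throw error; // rethrow - the use-case caller decides how to respond
  } finally {
    span.end(); // close the span whether the step succeeded or failed
  }
}
```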
Since use-cases are mostly about zero complexity, use no code constructs other than flat calls to functions: no if/else, no switch, no try/catch, nothing but a simple list of steps. A while ago I decided to allow just one if/else in a use-case:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  if (purchasingCustomer.isPremium) { // ❗️
    sendEmailToPremiumCustomer(purchasingCustomer); // This easily will grow with time to multiple if/else
  }
}
```
A month later, when I revisited the code above, there were already three nested if/elses. A year from now, the function will host typical imperative code with many nested branches. Avoid this slippery road by setting a very strict border: put the conditions within the step functions, as sketched below:
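Here, the premium-email condition above moves into a step function, keeping the use-case flat (sendEmailIfPremiumCustomer is an illustrative name):

```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 The use-case stays flat - the branching now lives inside the step function
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  await sendEmailIfPremiumCustomer(purchasingCustomer);
}

async function sendEmailIfPremiumCustomer(customer: Customer) {
  if (customer.isPremium) {
    await sendEmailToPremiumCustomer(customer);
  }
}
```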
The finest art of a great use-case is finding the right level of detail. At this early stage, the reader is like a traveler who uses a map to get some sense of the area or find a specific road, definitely not to learn about every road in the country. On the other hand, a good map doesn't show only the main highway and nothing else. For example, the following use-case is too short and vague:
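(A representative sketch of such an overly vague use-case; names are illustrative:)

```typescript
// ❗️ Too vague: one opaque call, no story, no navigation value
export async function deleteUserUseCase(userId: string) {
  await userService.deleteUser(userId);
}
```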
The code above doesn't tell a story, nor does it eliminate any paths from the journey. Conversely, the following code does a better job of telling the story in brief:
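(Again a representative sketch; the step names are illustrative:)

```typescript
// ✅ Tells the story in brief: the key milestones, no implementation details
export async function deleteUserUseCase(userId: string) {
  const user = await assertUserExists(userId);
  await cancelUserActiveOrders(user);
  await anonymizeUserData(user);
  await sendGoodbyeEmail(user);
}
```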
Things get a little more challenging when dealing with long flows. What if there are a handful of important steps, say 20? What if multiple use-cases have a lot of repetition and shared steps? Consider the case where 'admin approval' is a multi-step process that is invoked by a handful of different use-cases. When facing this, consider breaking down into multiple use-cases, where one is allowed to call the other, as sketched below.
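A sketch of that decomposition (the admin-approval step names are illustrative):

```typescript
// admin-approval-use-case.ts - a multi-step sub-flow reused by several use-cases
export async function adminApprovalUseCase(order: Order) {
  const approver = await findAvailableAdmin();
  await requestApproval(approver, order);
  await recordApprovalDecision(order);
}

// add-order-use-case.ts - one use-case is allowed to call another
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const savedOrder = await insertOrder(validatedOrder);
  await adminApprovalUseCase(savedOrder); // 👈 delegating to the shared sub-use-case
  return savedOrder;
}
```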
3. When you have no choice, control the DB transaction from the use-case
What if step 2 and step 5 both deal with data and must be atomic (fail or succeed together)? Typically you'll handle this with DB transactions, but since each step is discrete, how can a transaction be shared among the coupled steps?
If the steps take place one after the other, it makes sense to let the downstream service/repository handle them together and abstract the transaction away from the use-case. What if the atomic steps are not consecutive? In this case, though not ideal, there is no escape from making the use-case acquainted with a transaction object:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const transaction = Repository.startTransaction();
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderRequest, transaction);
  const orderWithPricing = calculateOrderPricing(purchasingCustomer);
  const savedOrder = await insertOrder(orderWithPricing, transaction);
  const returnOrder = mapFromRepositoryToDto(savedOrder);
  Repository.commitTransaction(transaction);
  return returnOrder;
}
```
A use-case file is created per user flow that is triggered from an API route. This model makes sense for significant flows, but what about small operations like getting an order by id? A 'get-order-by-id' use-case is likely to have one line of code; it seems like unnecessary overhead to create a use-case file for every small request. In this case, consider aggregating multiple operations under a single conceptual use-case file. Below, for example, all the order queries co-live under the query-orders use-case file:
```typescript
// query-orders-use-cases.ts
export async function getOrder(id) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const result = await orderRepository.getOrderByID(id);
  return result;
}

export async function getAllOrders(criteria) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const result = await orderRepository.queryOrders(criteria);
  return result;
}
```
If you find it valuable, you'll also get a great return on your modest investment: no fancy tooling is needed, and the learning time is close to zero (in fact, you just read one of the longest articles on this matter...). There is also no need to refactor a whole system; rather, implement it gradually, per feature.
Once you become accustomed to using it, you'll find that this technique extends well beyond API routes. It's equally beneficial for managing message queue subscriptions and scheduled jobs. Backend aside, use it as the facade of every module or library: the code that is called by the entry file and orchestrates the internals. The same idea can be applied in the frontend as well: declare the core actors at the component's top level. Without implementation details, just put the references to the component's event handlers and hooks; now the reader knows about the key events that will drive this component.
You might think this all sounds remarkably straightforward—and it is. My apologies, this article wasn't about cutting-edge technologies; neither did it cover shiny new dev tooling or AI-based rocket science. In a land where complexity is the key enemy, simple ideas can be more impactful than sophisticated tooling, and the use-case is a powerful and sweet pattern that is meant to live in every piece of software.
+
+
+
+
\ No newline at end of file
diff --git a/blog/archive/index.html b/blog/archive/index.html
new file mode 100644
index 00000000..d31abcfe
--- /dev/null
+++ b/blog/archive/index.html
@@ -0,0 +1,21 @@
+
+
+
+
+
+Archive | Practica.js
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/blog/atom.xml b/blog/atom.xml
new file mode 100644
index 00000000..6f44093f
--- /dev/null
+++ b/blog/atom.xml
@@ -0,0 +1,181 @@
+
+
+ https://practica.dev/blog
+ Practica.js Blog
+ 2025-03-05T10:00:00.000Z
+ https://github.com/jpmonette/feed
+
+ Practica.js Blog
+ https://practica.dev/img/favicon.ico
+
+
+ https://practica.dev/blog/about-the-sweet-and-powerful-use-case-code-pattern
+
+ 2025-03-05T10:00:00.000Z
+
+ Intro: A sweet pattern that got lost in time
When was the last time you introduced a new pattern to your code? The use-case pattern is a great candidate: it's powerful, sweet, easy to implement, and can strategically elevate your backend code quality in a short time.
The term 'use case' means many different things in our industry. It's being used by product folks to describe a user journey, mentioned by various famous architecture books to describe vague high-level concepts. this article focuses on its practical application at the code level by emphasizing its surprising merits how to implement it correctly.
Technically, the use-case pattern code belongs between the controller (e.g., API routes) and the business logic services (like those calculating or saving data). The use-case code is called by the controller and tells in high-level words the flow that is about to happen in a simple manner. Doing so increases the code readability, navigability, pushes complexity toward the edges, improves observability and 3 other merits that are shown below with examples.
But before we delve into its mechanics, let's first touch on a common problem it aims to address and see some code that calls for trouble.
Prefer a 10 min video? Watch here, or keep reading below
Imagine a developer, returning to a codebase she hasn't touched in months, tasked with fixing a bug in the 'new orders flow'—specifically, an issue with price calculation in an electronic shop app.
Her journey begins promisingly smooth:
- 🤗 Testing - She starts her journey off the automated tests to learn about the flow from an outside-in approach. The testing code is short and standard, as should be:
test("When adding an order with 100$ product, then the price charge should be 100$ ",async()=>{ // .... })
- 🤗 Controller - She moves to skim through the implementation and starts from the API routes. Unsurprisingly, the Controller code is straightforward:
app.post("/api/order",async(req:Request,res:Response)=>{ const newOrder = req.body; await orderService.addOrder(newOrder);// 👈 This is where the real-work is done res.status(200).json({message:"Order created successfully"}); });
Smooth sailing thus far, almost zero complexity. Typically, the controller would now hand off to a Service where the real implementation begins, she navigates into the order service to find where and how to fix that pricing bug.
- 😲 The service - Suddenly! She is thrown into hundred lins of code (at best) with tons of details. She encounters classes with intricate states, inheritance hierarchies, a dependency injection framework that wire all the dependent services, and other boilerplate code. Here is a sneak peak from a real-world service, already simplified for brevity. Read it, feel it:
letDBRepository; exportclassOrderService:ServiceBase<OrderDto>{ asyncaddOrder(orderRequest:OrderRequest):Promise<Order>{ try{ ensureDBRepositoryInitialized(); const{ openTelemetry, monitoring, secretManager, priceService, userService }= dependencyInjection.getVariousServices(); logger.info("Add order flow starts now", orderRequest); openTelemetry.sendEvent("new order", orderRequest); const validationRules =awaitgetFromConfigSystem("order-validation-rules"); const validatedOrder =validateOrder(orderRequest, validationRules); if(!validatedOrder){ thrownewError("Invalid order"); } this.base.startTransaction(); const user =await userService.getUserInfo(validatedOrder.customerId); if(!user){ const savedOrder =awaittryAddUserWithLegacySystem(validatedOrder); return savedOrder; } // And it goes on and on until the pricing module is mentioned }
So many details and things to learn upfront, which of them is crucial for her to learn now before dealing with her task? How can she find where is that pricing module?
She is not happy. Right off the bat, she must make herself acquaintance with a handful of product and technical narratives. She just fell off the complexity cliff: from a zero-complexity controller straight into a 1000-piece puzzle. Many of them are unrelated to her task.
In a perfect world, she would love first to get a high-level brief of the involved steps so she can understand the whole flow, and from this comfort standpoint choose where to deepen her journey. This is what this pattern is all about.
The use-case is a file with a single function that is being called by the API controller to orchestrate the various implementation services. It's merely a simple function that enumerates and calls the code that does the actual job:
Each interaction with the system—whether it's posting a new comment, requesting user deletion, or any other action—is managed by a dedicated use-case function. Each use-case constitutes multiple 'steps' - function calls that fulfill the desired flow.
By design, it's short, flat, no If/else, no try-catch, no algorithms, just plain calls to functions. This way, it tells the story in the simplest manner. Note how it doesn't share too much details, but tells enough for one to understand 'WHAT' is happening here and 'WHO' is doing that, but not 'HOW'.
When seeking a specific book in the local library, the visitor doesn't have to skim through all the shelves to find a specific topic of interest. A Library, like any other information system, uses a navigational system, wayfinding signage, to highlight the path to a specific information area.
+The library catalog redirects the reader to the area of interest
Similarly, in software development, when a developer needs to address a particular issue—such as fixing a bug in pricing calculations—the 'use case' acts like a navigational tool within the application. It serves as a hitchhiker's guide, or the yellow pages, pinpointing exactly where to find the necessary piece of code. While other organizational strategies like modularization and folder structures offer ways to manage code, the 'use case' approach provides a more focused and precise index. it shows only the relevant areas (and not 50 unrelated modules), it tells when precisely this module is used, what is the specific entry point and which exact parameters are passed.
When a developer begins inspecting a codebase at the level of implementation services, she is immediately bombarded with intricate details. This immersion thrusts her into the depths of both product and technical complexities. Typically, she must navigate through a dependency injection system to instantiate classes, manage null states, and retrieve settings from a distributed configuration system
When the code reader's journey starts at the level of implementation-services, she is immediately bombarded with intricate details. This immersion exposes her to both product and technical complexities right from the start. Typically, like in our example case, the code first use a dependency injection system to factor some classes, check for nulls in the state and get some values from the distributed config system - all before even starting on the primary task. This is called accidental complexity. Tackling complexity is one of the finest art of app design, as the code planner you can't just eliminate complexity, but you may at least reduce the chances of someone meeting it.
Imagine your application as a tree where branches represent functions and the fruits are pockets of embedded complexity, some of which are poisoned (i.e., unnecessary complexities). Your objective is to structure this tree so that navigating through it exposes the visitor to as few poisoned fruits as possible:
+The accidental-complexity tree: A visitor aiming to reach a specific leaf must navigate through all the intervening poisoned fruits.
This is where the 'Use Case' approach shines: by prioritizing high-level product steps and minimal technical details at the outset—a navigation system that simplifies access to various parts of the application. With this navigation tool, she can easily ignore steps that are unrelated with her work, and avoid poisoned fruits. A true strategic design win.
+The spread-complexity tree: Complexity is pushed to the periphery, allowing the reader to navigate directly to the essential fruits only.
When embarking on a new coding flow, where do you start? After digesting the requirements and setting up some initial API routes and high-level component tests, the next logical step might be less obvious. Here's a strategy: begin with a use-case. This approach promotes an outside-in workflow that not only streamlines development but also exposes potential risks early on.
While drafting a new use-case, you essentially map out the various steps of the process. Each step is a call to some service or repository functions, sometimes before they even exist. Effortlessly and spontaneously, these steps become your TODO list, a live document that tells not only what should be implemented rather also where risky gotchas hide. Take, for instance, this straightforward use-case for adding an order:
This structured approach allows you to preemptively tackle potential implementation hurdles:
- sendSuccessEmailToCustomer - What if you lack a necessary email service token from the Ops team? Sometimes, this demands approval and might last more than a week (believe me, I know). Acting now, before spending 3 days on coding, can make a big difference.
- calculateOrderPricing - Reminds you to confirm pricing details with the product team—ideally before they're out of office, avoiding delays that could impact your delivery timeline.
- assertCustomerExists - This call goes to an external Microservice which belongs to the User Management team. Did they already provide an OpenAPI specification of their routes? Check your Slack now, if they didn't yet, asking too late can prevent it from becoming a roadblock later.
Not only does this high-level thinking highlight your tasks and risks, it's also an optimal spot to start the design from:
Early on when initiating a use-case, the developers define the various types, functions signature, and their initial skeleton return data. This process naturally evolves into an effective design drill where the overall flow is decomposed into small units that actually fit. This sketch-out results in discovering early when puzzle pieces don't fit while considering the underlying technologies. Here is an example, once I sketched a use-case and initially came up with these steps:
Going with my initial use-case above, an email is sent before the the order is saved. Soon enough the compiler yelled at me: The email function signature is not satisfied, an 'Order Id' parameter is needed but to obtain one the order must be saved to DB first. I tried to change the order, unfortunately it turned out that my ORM is not returning the ID of saved entities. I'm stuck, my design struggles, at least this is realized before spending days on details. Unlike designing with papers and UML, designing with use-case brings no overhead. Moreover, unlike high-level diagrams detached from implementation realities, use-case design is grounded in the actual constraints of the technology being used.
Say you have 82.35% testing code coverage, are you happy and feeling confident to deploy? I'd suggest that anyone having below 100% must clarify first which code exactly is not covered with testing. Is this some nitty-gritty niche code or actually critical business operations that are not fully tested? Typically, answering this requires scrutinizing all the app file coverage, a daunting task.
Use-cases simplifies the coverage coverage digest: when looking directly into the use-cases folder, one gets 'features coverage', a unique look into which user features and steps lack testing:
+The use-cases folder test coverage report, some use-cases are only partially tested
See how the code above has an excellent overall coverage, 82.35%. But what about the remaining 17.65% code? Looking at the report triggers a red flag: the unusual 'payment-use-case' is not tested. This flow is where revenues are generated, a critical financial process which as turns out has a very low test coverage. This significant observation calls for immediate actions. Use-case coverage thus not only helps in understanding what parts of your application are tested but also prioritizes testing efforts based on business criticality rather than mere technical functionality.
The influential book "Domain-Driven Design" advocates for "committing the team to relentlessly exercise the domain language in all communications within the team and in the code." This principle asserts that aligning code closely with product narratives fosters a common language among diverse stakeholders (e.g., product, team-leads, frontend, backend). While this sounds sensible, this advice is also a little vague - how and where should this happen?
Use-cases bring this idea down to earth: the use-case files are named after user journeys in the system (e.g., purchase-new-goods), the use-case code itself naturally describes the flow in a product language. For instance, if employees commonly use the term 'cut' at the water cooler to refer to a price reduction, the corresponding use-case should employ a function named 'calculatePriceCut'. This naming convention not only reinforces the domain language but also enhances mutual understanding across the team.
I bet you encountered the situation when you turn the log level to 'Debug' (or any other verbose mode) and gets gazillion, overwhelming, and unbearable amount of log statements. Great chances that you also met the opposite when setting the logger level to 'Info' but there are also almost zero logging for that specific route that you're looking into. It's hard to formalize among team members when exactly each type of logging should be invoked, the result is a typical inconsistent and lacking observability.
Use-cases can drive trustworthy and consistent monitoring by taking advantage of the produced use-case steps. Since the precious work of breaking-down the flow into meaningful steps was already done (e.g., send-email, charge-credit-card), each step can produce the desired level of logging. For example, one team's approach might be to emit logger.info on a use-case start and use-case end, and then each step will emit logger.debug. Whatever the chosen specific level is, use-case steps bring consistency and automation. Put aside logging, the same can be applied with any other observability technique like OpenTelemetry to produce custom spans for every flow step.
The implementation though demands some thinking, cluttering every step with a log statement is both verbose and depends on human manual work:
// ❗️Verbose use case exportasyncfunctionaddOrderUseCase(orderRequest:OrderRequest):Promise<Order>{ logger.info("Add order use case - Adding order starts now", orderRequest); const validatedOrder =validateAndCoerceOrder(orderRequest); logger.debug("Add order use case - The order was validated", validatedOrder); const orderWithPricing =calculateOrderPricing(validatedOrder); logger.debug("Add order use case - The order pricing was decided", validatedOrder); const purchasingCustomer =awaitassertCustomerHasEnoughBalance(orderWithPricing); logger.debug("Add order use case - Verified the user balance already", purchasingCustomer); const returnOrder =mapFromRepositoryToDto(purchasingCustomer as unknown asOrderRecord); logger.info("Add order use case - About to return result", returnOrder); return returnOrder; }
One way around this is creating a step wrapper function that makes it observable. This wrapper function will get called for each step:
import{ openTelemetry }from"@opentelemetry"; asyncfunctionrunUseCaseStep(stepName, stepFunction){ logger.debug(`Use case step ${stepName} starts now`); // Create Open Telemetry custom span openTelemetry.startSpan(stepName); returnawaitstepFunction(); }
Now the use-case gets automated and consistent transparency:
exportasyncfunctionaddOrderUseCase(orderRequest:OrderRequest){ // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed const validatedOrder =awaitrunUseCaseStep("Validation", validateAndCoerceOrder.bind(null, orderRequest)); const orderWithPricing =awaitrunUseCaseStep("Calculate price", calculateOrderPricing.bind(null, validatedOrder)); awaitrunUseCaseStep("Send email", sendSuccessEmailToCustomer.bind(null, orderWithPricing)); }
The code is a little simplified, in real-world wrapper you'll have to put try-catch and cover other corner cases, but it makes the point: each step is a meaningful milestone in the user's journey that gets automated and consistent observability.
Since use-cases are mostly about zero complexity, use no code constructs but flat calls to functions. No If/Else, no switch, no try/catch, nothing, only a simple list of steps. While ago I decided to put only one If/Else in a use-case:
exportasyncfunctionaddOrderUseCase(orderRequest:OrderRequest){ const validatedOrder =validateAndCoerceOrder(orderRequest); const purchasingCustomer =awaitassertCustomerHasEnoughBalance(validatedOrder); if(purchasingCustomer.isPremium){//❗️ sendEmailToPremiumCustomer(purchasingCustomer); // This easily will grow with time to multiple if/else } }
A month later when I visited the code above there were already three nested If/elses. Year from now the function above will host a typical imperative code with many nested branches. Avoid this slippery road by putting a very strict border, put the conditions within the step functions:
The finest art of a great use case is finding the right level of details. At this early stage, the reader is like a traveler who uses the map to get some sense of the area, or find a specific road. Definitely not learn about every road in the country. On the other hand, a good map doesn't show only the main highway and nothing else. For example, the following use-case is too short and vague:
The code above doesn't tell a story, neither eliminate some paths from the journey. Conversely, the following code is doing better in telling the story brief:
Things get a little more challenging when dealing with long flows. What if there a handful of important steps, say 20? what if multiple use-case have a lot of repetition and shared step? Consider the case where 'admin approval' is a multi-step process which is invoked by a handful of different use-cases? When facing this, consider breaking-down into multiple use-cases where one is allowed to call the other.
3. When have no choice, control the DB transaction from the use-case
What if step 2 and step 5 both deal with data and must be atomic (fail or succeed together)? Typically you'll handle this with DB transactions, but since each step is discrete, how can a transaction be shared among the coupled steps?
If the steps take place one after the other, it makes sense to let the downstream service/repository handle them together and abstract the transaction from the use-case. What if the atomic steps are not consecutive? In this case, though not ideal, there is no escape from making the use-case acquaintance with a transaction object:
exportasyncfunctionaddOrderUseCase(orderRequest:OrderRequest){ // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed const transaction =Repository.startTransaction(); const purchasingCustomer =awaitassertCustomerHasEnoughBalance(orderRequest, transaction); const orderWithPricing =calculateOrderPricing(purchasingCustomer); const savedOrder =awaitinsertOrder(orderWithPricing, transaction); const returnOrder =mapFromRepositoryToDto(savedOrder); Repository.commitTransaction(transaction); return returnOrder; }
A use-case file is created per user-flow that is triggered from an API route. This model make sense for significant flows, how about small operations like getting an order by id? A 'get-order-by-id' use case is likely to have 1 line of code, seems like an unnecessary overhead to create a use-case file for every small request. In this case, consider aggregating multiple operations under a single conceptual use-case file. Here below for example, all the order queries co-live under the query-orders use-case file:
// query-orders-use-cases.ts exportasyncfunctiongetOrder(id){ // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed const result =await orderRepository.getOrderByID(id); return result; } exportasyncfunctiongetAllOrders(criteria){ // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed const result =await orderRepository.queryOrders(criteria); return result; }
If you find it valuable, you'll also get great return for your modest investment: No fancy tooling is needed, the learning time is close to zero (in fact, you just read one of the longest article on this matter...). There is also no need to refactor a whole system rather gradually implement per-feature.
Once you become accustomed to using it, you'll find that this technique extends well beyond API routes. It's equally beneficial for managing message queues subscriptions and scheduled jobs. Backend-aside, use it as the facade of every module or library - the code that is being called by the entry file and orchestrates the internals. The same idea can be applied in Frontend as well: declare the core actors at the component top level. Without implementation details, just put the reference to the component's event handlers and hooks - now the reader knows about the key events that will drive this component.
You might think this all sounds remarkably straightforward—and it is. My apologies, this article wasn't about cutting-edge technologies. Neither did it cover shiny new dev toolings or AI-based rocket-science. In a land where complexity is the key enemy, simple ideas can be more impactful than sophisticated tooling and the Use-case is a powerful and sweet pattern that meant to live in every piece of software.
Yoni Goldberg - https://github.com/goldbergyoni

https://practica.dev/blog/a-compilation-of-outstanding-testing-articles-with-javaScript
2023-08-06T10:00:00.000Z
+ What's special about this article?
As a testing consultant, I have read tons of testing articles throughout the years. The majority are nice-to-read, casual pieces of content that are not always worth your precious time. Once in a while, not very often, I landed on an article that was shockingly good and could genuinely improve your test-writing skills. I've cherry-picked these outstanding articles for you and added my abstract nearby. Half of these articles relate directly to JavaScript/Node.js; the other half covers ubiquitous testing concepts that are applicable in every language
Why did I find these articles to be outstanding? First, the writing quality is excellent. Second, they deal with the 'new world of testing': not the commonly known 'TDD-ish' stuff but rather modern concepts and tooling
Too busy to read them all? Search for the articles that are decorated with a medal 🏅; these are true masterpieces that you don't want to miss
Before we start: If you haven't heard, I launched my comprehensive Node.js testing course a week ago (curriculum here). There are less than 48 hours left for the 🎁 special launch deal
Here they are, 10 outstanding testing articles:
📄 1. 'Selective Unit Testing – Costs and Benefits'
✍️ Author: Steve Sanderson
🔖 Abstract: We all found ourselves at least once in the ongoing and flammable discussion about 'units' vs 'integration'. This article delves into a greater level of specificity and discusses WHEN unit tests shine, by considering the costs of writing these tests under various scenarios. Many treat their testing strategy as a static model - a testing technique they always apply regardless of the context. "Always write unit tests against functions" and "Write mostly integration tests" are the types of arguments often heard. Conversely, this article suggests that the attractiveness of unit tests should be evaluated based on the costs and benefits per module. The article classifies multiple scenarios where the net value of unit tests is high or low, for example:
If your code is basically obvious – so at a glance you can see exactly what it does – then additional design and verification (e.g., through unit testing) yields extremely minimal benefit, if any
The author also presents a 2x2 model to visualize when the attractiveness of unit tests is high or low
Side note, not part of the article: Personally, I (Yoni) always start with component tests, outside-in, covering the high-level user flows first (a.k.a. the testing diamond). Then, later, once I have functions, I add unit tests based on their net value. This article helped me a lot in classifying and evaluating the benefits of units in various scenarios
📄 2. 'Testing Implementation Details'
✍️ Author: Kent C. Dodds
🔖 Abstract: The author outlines, with a code example, the unavoidable tragic fate of a tester who asserts on implementation details. Put aside the effort of testing so many details; going this route always ends with 'false positives' and 'false negatives' that cloud the tests' reliability. The article illustrates this with a frontend code example, but the takeaway is ubiquitous to any kind of testing
"There are two distinct reasons that it's important to avoid testing implementation details. Tests which test implementation details:
Can break when you refactor application code. False negatives
May not fail when you break application code. False positives"
📄 3. 'Testing Microservices, the sane way'
✍️ Author: Cindy Sridharan
🔖 Abstract: This one is the entire Microservices and distributed modern testing bible packed into a single long article that is also super engaging. I remember when I came across it four years ago, winter time: I spent an hour every day under my blanket before sleep, a smile spread over my face. I clicked on every link and paused after every paragraph to think - a whole new world was opening in front of me. In fact, it was so fascinating that it made me want to specialize in this domain. Fast forward, years later, this is a major part of my work, and I enjoy every moment
This paper starts by explaining why E2E tests, unit tests and exploratory QA will fall short in a distributed environment; not only this, but also why any kind of coded test won't be enough and a rich toolbox of techniques is needed. It goes through a handful of modern testing techniques that are unfamiliar to most developers. One of its key parts deals with what should be the canonical developer's testing technique: the author advocates for "big unit tests" (i.e., component tests) as they strike a great balance between developer comfort and realism
I coined the term “step-up testing”, the general idea being to test at one layer above what’s generally advocated for. Under this model, unit tests would look more like integration tests (by treating I/O as a part of the unit under test within a bounded context), integration testing would look more like testing against real production, and testing in production looks more like, well, monitoring and exploration. The restructured test pyramid (test funnel?) for distributed systems would look like the following:
Beyond its main scope, whatever type of system you are dealing with, this article will broaden your perspective on testing and expose you to many new ideas that are highly applicable
👓 Read time: > 2 hours (10,500 words with many links)
📄 4. 'How to Unit Test with Node.js?' (JavaScript examples, for beginners)
✍️ Author: Ryan Jones
🔖 Abstract: One single recommendation for beginners: every other article on this list covers advanced testing. This article, and only this one, is meant for testing newbies who are looking to take their first practical steps in this world
This tutorial was chosen from a handful of alternatives because it's well-written and also relatively comprehensive. It covers the first-steps 'kata' that a beginner should learn about first: the test anatomy syntax, the test runner's CLI, assertions and asynchronous tests. It goes without saying that this knowledge won't be sufficient for covering a real-world app with tests, but it gets you safely to the next phase. My personal advice: after reading this one, your next step is learning about test doubles (mocking)
🔖 Abstract: The article opens with 'I hear that people feel an uncontrollable urge to write unit tests nowadays. If you are one of those affected, spare a few minutes and consider these reasons for NOT writing unit tests'. Despite these words, the article is not against unit tests in principle; rather, it highlights when and where unit tests fall short. In these cases, other techniques should be considered. Here is an example: unit tests inherently have a lower return on investment, and the author offers a sound analogy for this: 'If you are painting a house, you want to start with the biggest brush at hand and spare the tiny brush for the end, to deal with fine details. If you begin your QA work with unit tests, you are essentially trying to paint the entire house using the finest Chinese calligraphy brush...'
📄 6. 'Mocking is a Code Smell' (JavaScript examples)
✍️ Author: Eric Elliott
🔖 Abstract: Most of the articles here belong to the 'modern wave of testing'; here is something more 'classic', appealing to TDD lovers or just anyone who needs to write unit tests. This article is about HOW to reduce the amount of mocking (test doubles) in your tests - not only because mocking is an overhead in test writing, but also because it hints that something might be wrong. In other words, mocking is not necessarily wrong and in need of an immediate fix, but a lot of mocking is a sign of something not ideal. Consider a module that inherits from many others, or a chatty one that collaborates with a handful of other modules to do its job - testing and changing this structure is a burden:
"Mocking is required when our decomposition strategy has failed"
The author goes through a variety of techniques to design more autonomous units, like using pure functions by isolating side effects from the rest of the program logic, using pub/sub, isolating I/O, composing units with patterns like monadic compositions, and more
The overall tone of the article is balanced; in some parts, it encourages functional programming and techniques that are far from the mainstream - take these few parts with a grain of salt
📄 7. 'Why Good Developers Write Bad Unit Tests'
✍️ Author: Michael Lynch
🔖 Abstract: I love this one so much. The author exemplifies how, unexpectedly, it is sometimes the good developers with their great intentions who write bad tests:
Too often, software developers approach unit testing with the same flawed thinking... They mechanically apply all the “rules” they learned in production code without examining whether they’re appropriate for tests. As a result, they build skyscrapers at the beach
Concrete code examples show how test readability deteriorates once we apply 'skyscraper' thinking, and how to keep it simple. In one part, he demonstrates how violating the DRY principle thoughtfully allows the reader to stay within the test while still keeping the code maintainable. This article alone, in 11 minutes, can greatly improve the tests of developers who tend to write sophisticated tests. If you have someone like this on your team, you now know what to do
📄 8. 'An Overview of JavaScript Testing in 2022' (JavaScript examples)
✍️ Author: Vitali Zaidman
🔖 Abstract: This paper is unique here as it doesn't cover a single topic but is rather a rundown of (almost) all JavaScript testing tools. This allows you to enrich the toolbox in your mind and have more screwdrivers for more types of screws. For example, knowing that there are IDE extensions that show coverage information right within the code might help you boost test adoption in the team, if needed. Knowing that there are solid, free, and open-source visual regression tools might encourage you to dip your toes in this water, to name a few examples.
"We reviewed the most trending testing strategies and tools in the web development community and hopefully made it easier for you to test your sites. In the end, the best decisions regarding application architecture today are made by understanding general patterns that are trending in the very active community of developers, and combining them with your own experience and the characteristics of your application."
The author was also kind enough to leave pros/cons nearby most tools, so the reader can quickly get a sense of how the various options stack up against each other. The article covers categories like assertion libraries, test runners, code coverage tools, visual regression tools, E2E suites and more
📄 9. 'Testing in Production, the safe way'
✍️ Author: Cindy Sridharan
🔖 Abstract: 'Testing in production' is a provocative term that sounds like a risky and careless approach of testing on production instead of verifying the delivery beforehand (yet another case of bad testing terminology). In practice, testing in production doesn't replace coding-time testing; it just adds an additional layer of confidence by safely testing in 3 more phases: deployment, release and post-release. This comprehensive article covers dozens of techniques, some unusual, like traffic shadowing, tap compare and more. More than anything else, it illustrates a holistic testing workflow, building confidence cumulatively from the developer machine until the new version is serving users in production
I’m more and more convinced that staging environments are like mocks - at best a pale imitation of the genuine article and the worst form of confirmation bias.
It’s still better than having nothing - but “works in staging” is only one step better than “works on my machine”.
📄 10. 'Please don't mock me' (JavaScript examples, from JSConf)
🏅 This is a masterpiece
✍️ Author: Justin Searls
🔖 Abstract: This fantastic YouTube talk deals with the Achilles heel of testing: where exactly to mock. The dilemma of where to end the test scope, what should be mocked and what shouldn't, is presumably the most strategic test design decision. Consider for example having module A which interacts with module B. If you isolate A by mocking B, A's tests will always pass, even when B's interface has changed and A's code didn't follow. This makes A's tests highly stable but... production will fail in hours. In his talk, Justin says:
"A test that never fails is a bad test because it doesn't tell you anything. Design tests to fail"
Then he goes on and tackles many other interesting mocking crossroads, with beautiful visuals and tons of insights. Please don't miss this one
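To make the A/B dilemma above concrete, here is a hedged sketch of my own (not from the talk); the module names and jest wiring are assumptions:

```javascript
// a.test.js - an illustrative sketch of module A tested with module B mocked out
jest.mock('./b'); // automock module B
const b = require('./b');
const { computeTotal } = require('./a'); // assume computeTotal internally calls b.getPrice()

test('A computes a total from B price', () => {
  b.getPrice.mockReturnValue(100); // 👈 the contract with B is frozen inside the mock
  expect(computeTotal(2)).toBe(200);
});
// If b.getPrice is later renamed or changes its return shape, this test still
// passes - the mock hides the breakage until production
```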
Here are a few articles that I wrote. Obviously I don't 'recommend' my own craft, just checking modestly whether they appeal to you. Together, these articles gained 25,000 GitHub stars; maybe you'll find one of them useful?
Yoni Goldberg - https://github.com/goldbergyoni

https://practica.dev/blog/testing-the-dark-scenarios-of-your-nodejs-application
2023-07-07T11:00:00.000Z
Where the dead bodies are covered
This post is about tests that are easy to write, 5-8 lines typically; they cover dark and dangerous corners of our applications, but are often overlooked
Some context first: how do we test a modern backend? With the testing diamond, of course, by putting the focus on component/integration tests that cover all the layers, including a real DB. With this approach, our tests 99% resemble production and the user flows, while the development experience is almost as good as with unit tests. Sweet. If this topic is of interest, we've also written a guide with 50 best practices for integration tests in Node.js
But there is a pitfall: most developers write only semi-happy test cases that focus on the core user flows - invalid inputs, CRUD operations, various application states, etc. This is indeed the bread and butter, a great start, but a whole area is left uncovered. For example, typical tests don't simulate an unhandled promise rejection that leads to a process crash, nor do they simulate the webserver bootstrap phase that might fail and leave the process idle, or HTTP calls to external services that often end with timeouts and retries. They typically don't cover the health and readiness routes, nor the integrity of the OpenAPI spec against the actual routes schema, to name just a few examples. There are many dead bodies covered beyond business logic - things that sometimes go beyond bugs and are rather concerned with application downtime
Here are a handful of examples that might open your mind to a whole new class of risks and tests
July 2023: My testing course was launched: I've just released a comprehensive testing course that I've been working on for two years. 🎁 It's now on sale, but only for the month of July. Check it out at testjavascript.com
👉What & so what? - In all of your tests, you assume that the app has already started successfully, lacking a test against the initialization flow. This is a pity because this phase hides some potentially catastrophic failures: First, initialization failures are frequent - many bad things can happen here, like a DB connection failure or a new version that crashes during deployment. For this reason, runtime platforms (like Kubernetes and others) encourage components to signal when they are ready (see readiness probe). Errors at this stage also have a dramatic effect on the app's health: if the initialization fails and the process stays alive, it becomes a 'zombie process'. In this scenario, the runtime platform won't realize that something went wrong, will keep forwarding traffic to it, and will avoid creating alternative instances. Besides exiting gracefully, you may want to consider logging, firing a metric, and adjusting your /readiness route. Does it work? Only a test can tell!
📝 Code
Code under test, api.js:
```javascript
// A common express server initialization
const startWebServer = () => {
  return new Promise((resolve, reject) => {
    try {
      // A typical Express setup
      expressApp = express();
      defineRoutes(expressApp); // a function that defines all routes
      expressApp.listen(process.env.WEB_SERVER_PORT);
    } catch (error) {
      // log here, fire a metric, maybe even retry and finally:
      process.exit();
    }
  });
};
```
The test:
```javascript
const api = require('./entry-points/api'); // our api starter that exposes the 'startWebServer' function
const sinon = require('sinon'); // a mocking library

test('When an error happens during the startup phase, then the process exits', async () => {
  // Arrange
  const processExitListener = sinon.stub(process, 'exit');
  // 👇 Choose a function that is part of the initialization phase and make it fail
  sinon
    .stub(routes, 'defineRoutes')
    .throws(new Error('Cant initialize connection'));
  // Act
  await api.startWebServer();
  // Assert
  expect(processExitListener.called).toBe(true);
});
```
👉What & why - For many, testing errors means checking the exception type or the API response. This leaves one of the most essential parts uncovered: making the error correctly observable. In plain words, ensuring that it's logged correctly and exposed to the monitoring system. It might sound like an internal thing, implementation testing, but actually, it goes directly to a user. Yes, not the end-user, but rather another important one - the ops user who is on-call. What are the expectations of this user? At the very basic level, when a production issue arises, she must see detailed log entries, including the stack trace, cause and other properties. This info can save the day when dealing with production incidents. On top of this, in many systems, monitoring is managed separately, concluding the overall system state using cumulative heuristics (e.g., an increase in the number of errors over the last 3 hours). To support these monitoring needs, the code must also fire error metrics. Even tests that do try to cover these needs take a naive approach by checking that the logger function was called - but hey, does it include the right data? Some write better tests that check the error type that was passed to the logger. Good enough? No! The ops user doesn't care about JavaScript class names but about the JSON data that is sent out. The following test focuses on the specific properties that are being made observable:
📝 Code
```javascript
test('When exception is thrown during request, Then logger reports the mandatory fields', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    status: 'approved',
  };
  const metricsExporterDouble = sinon.stub(metricsExporter, 'fireMetric');
  sinon
    .stub(OrderRepository.prototype, 'addOrder')
    .rejects(new AppError('saving-failed', 'Order could not be saved', 500));
  const loggerDouble = sinon.stub(logger, 'error');
  // Act
  await axiosAPIClient.post('/order', orderToAdd);
  // Assert
  expect(loggerDouble).toHaveBeenCalledWith({
    name: 'saving-failed',
    status: 500,
    stack: expect.any(String),
    message: expect.any(String),
  });
  expect(metricsExporterDouble).toHaveBeenCalledWith('error', {
    errorName: 'example-error',
  });
});
```
👽 The 'unexpected visitor' test - when an uncaught exception meets our code
👉What & why - A typical error flow test falsely assumes two conditions: a valid error object was thrown, and it was caught. Neither is guaranteed; let's focus on the 2nd assumption: it's common for certain errors to be left uncaught. The error might get thrown before your framework's error handler is ready, some npm libraries can throw surprisingly from different stacks using timer functions, or you simply forgot to set someEventEmitter.on('error', ...), to name a few examples. These errors will find their way to the global process.on('uncaughtException') handler - hopefully, your code subscribed to it. How do you simulate this scenario in a test? Naively, you may locate a code area that is not wrapped with try-catch and stub it to throw during the test. But here's a catch-22: if you are familiar with such an area, you are likely to fix it and ensure its errors are caught. What do we do then? We can use to our benefit the fact that JavaScript is 'borderless': if some object can emit an event, we as its subscribers can make it emit this event ourselves. Here's an example:
📝 Code
```javascript
test('When an unhandled exception is thrown, then process stays alive and the error is logged', async () => {
  // Arrange
  const loggerDouble = sinon.stub(logger, 'error');
  const processExitListener = sinon.stub(process, 'exit');
  const errorToThrow = new Error('An error that wont be caught 😳');
  // Act
  process.emit('uncaughtException', errorToThrow); // 👈 Where the magic is
  // Assert
  expect(processExitListener.called).toBe(false);
  expect(loggerDouble).toHaveBeenCalledWith(errorToThrow);
});
```
🕵🏼 The 'hidden effect' test - when the code should not mutate at all
👉What & so what - In common scenarios, the code under test should stop early, like when the incoming payload is invalid or a user doesn't have sufficient credits to perform an operation. In these cases, no DB records should be mutated. Most tests out there in the wild settle for testing the HTTP response only - got back HTTP 400? Great, the validation/authorization probably works. Or does it? The test trusts the code too much; a valid response doesn't guarantee that the code behind it behaved as designed. Maybe a new record was added although the user has no permissions? Clearly you need to test this, but how would you test that a record was NOT added? There are two options here: if the DB is purged before/after every test, then just try to perform an invalid operation and check that the DB is empty afterward. If you're not cleaning the DB often (like me, but that's another discussion), the payload must contain some unique and queryable value that you can query later, hoping to get no records. This is how it looks:
📝 Code
```javascript
it('When adding an invalid order, then it returns 400 and NOT retrievable', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    mode: 'draft',
    externalIdentifier: uuid(), // no existing record has this value
  };
  // Act
  const { status: addingHTTPStatus } = await axiosAPIClient.post('/order', orderToAdd);
  // Assert
  const { status: fetchingHTTPStatus } = await axiosAPIClient.get(
    `/order/externalIdentifier/${orderToAdd.externalIdentifier}`
  ); // Trying to get the order that should have failed
  expect({ addingHTTPStatus, fetchingHTTPStatus }).toMatchObject({
    addingHTTPStatus: 400,
    fetchingHTTPStatus: 404,
  });
  // 👆 Check that no such record exists
});
```
🧨 The 'overdoing' test - when the code should mutate but it's doing too much
👉What & why - This is how a typical data-oriented test looks: first you add some records, then approach the code under test, and finally assert what happens to these specific records. So far, so good. There is one caveat here, though: since the test narrows its focus to specific records, it ignores whether other records were unnecessarily affected. This can be really bad; here's a short real-life story that happened to my customer: some data access code changed and incorporated a bug that updates ALL the system users instead of just one. All tests passed since they focused on a specific record which was indeed updated - they just ignored the others. How would you test for and prevent this? Here is a nice trick I was taught by my friend Gil Tayar: in the first phase of the test, besides the main records, add one or more 'control' records that should not get mutated during the test. Then, run the code under test, and besides the main assertion, check also that the control records were not affected:
📝 Code
```javascript
test('When deleting an existing order, Then it should NOT be retrievable', async () => {
  // Arrange
  const orderToDelete = {
    userId: 1,
    productId: 2,
  };
  const deletedOrder = (await axiosAPIClient.post('/order', orderToDelete)).data.id; // We will delete this soon
  const orderNotToBeDeleted = orderToDelete;
  const notDeletedOrder = (await axiosAPIClient.post('/order', orderNotToBeDeleted)).data.id; // We will not delete this
  // Act
  await axiosAPIClient.delete(`/order/${deletedOrder}`);
  // Assert
  const { status: getDeletedOrderStatus } = await axiosAPIClient.get(`/order/${deletedOrder}`);
  const { status: getNotDeletedOrderStatus } = await axiosAPIClient.get(`/order/${notDeletedOrder}`);
  expect(getNotDeletedOrderStatus).toBe(200);
  expect(getDeletedOrderStatus).toBe(404);
});
```
🕰 The 'slow collaborator' test - when the other HTTP service times out
👉What & why - When your code approaches other services/microservices via HTTP, savvy testers minimize end-to-end tests because these tests lean toward happy paths (it's harder to simulate other scenarios). This mandates using some mocking tool to act like the remote service, for example, tools like nock or wiremock. These tools are great, but some use them naively and check mainly that calls outside were indeed made. What if the other service is not available in production? What if it is slower and times out occasionally (one of the biggest risks of Microservices)? While you can't wholly save this transaction, your code should do its best given the situation and retry, or at least log and return the right status to the caller. All the network mocking tools allow simulating delays, timeouts and other 'chaotic' scenarios. The question left is how to simulate a slow response without having slow tests? You may use fake timers and trick the system into believing a few seconds passed in a single tick. If you're using nock, it offers an interesting feature to simulate timeouts quickly: the .delay function simulates slow responses, and if the delay is higher than the HTTP client timeout, nock will realize this and throw a timeout event right away without actually waiting
📝 Code
```javascript
// In this example, our code accepts new Orders and while processing them approaches the Users Microservice
test('When users service times out, then return 503 (option 1 with fake timers)', async () => {
  // Arrange
  const clock = sinon.useFakeTimers();
  config.HTTPCallTimeout = 1000; // Set a timeout for outgoing HTTP calls
  nock(`${config.userServiceURL}/user/`)
    .get('/1', () => clock.tick(2000)) // 👆 Reply delay is bigger than the configured timeout
    .reply(200);
  const loggerDouble = sinon.stub(logger, 'error');
  const orderToAdd = {
    userId: 1,
    productId: 2,
    mode: 'approved',
  };
  // Act
  // 👇 try to add a new order, which should fail because the User service is not available
  const response = await axiosAPIClient.post('/order', orderToAdd);
  // Assert
  // 👇 At least our code does its best given this situation
  expect(response.status).toBe(503);
  expect(loggerDouble.lastCall.firstArg).toMatchObject({
    name: 'user-service-not-available',
    stack: expect.any(String),
    message: expect.any(String),
  });
});
```
💊 The 'poisoned message' test - when the message consumer gets an invalid payload that might put it in stagnation
👉What & so what - When testing flows that start or end at a queue, I bet you're bypassing the message queue layer - where the code and libraries consume the queue - and approaching the logic layer directly. Yes, it makes things easier, but it leaves a class of risks uncovered. For example, what if the logic part throws an error or the message schema is invalid, but the message queue consumer fails to translate this exception into a proper message queue action? For example, the consumer code might fail to reject the message or increment the number of attempts (depending on the type of queue you're using). When this happens, the message enters a loop where it is served again and again. Since this will apply to many messages, things can get really bad as the queue gets highly saturated. For this reason, this syndrome is called the 'poisoned message'. To mitigate this risk, the tests' scope must include all the layers, like you probably do when testing against APIs. Unfortunately, this is not as easy as testing with a DB, because message queues are flaky. Here is why
When testing with real queues, things get curiouser and curiouser: tests from different processes will steal messages from each other, purging queues is harder than you might think (e.g., SQS demands 60 seconds to purge queues), to name a few challenges that you won't find when dealing with a real DB
Here is a strategy that works for many teams and holds a small compromise - use a fake, in-memory message queue. By 'fake' I mean something simplistic that acts like a stub/spy and does nothing but tell when certain calls are made (e.g., consume, delete, publish). You might find reputable fakes/stubs for your own message queue, like this one for SQS, or you can easily code one yourself. No worries, I'm not in favour of maintaining testing infrastructure myself; this proposed component is extremely simple and unlikely to surpass 50 lines of code (see the example below). On top of this, whether using a real or fake queue, one more thing is needed: a convenient interface that tells the test when certain things happened, like when a message was acknowledged/deleted or a new message was published. Without this, the test never knows when certain events happened and leans toward quirky techniques like polling. With this setup, the test will be short and flat, and you can easily simulate common message queue scenarios like out-of-order messages, batch rejects, duplicated messages and, in our example, the poisoned message scenario (using RabbitMQ):
📝 Code
Create a fake message queue that does almost nothing but record calls - see the full example here
```javascript
class FakeMessageQueueProvider extends EventEmitter {
  // Implement here
  publish(message) {}
  consume(queueName, callback) {}
}
```
Make your message queue client accept either a real or a fake provider
```javascript
class MessageQueueClient extends EventEmitter {
  // Pass to it a fake or real message queue
  constructor(customMessageQueueProvider) {}
  publish(message) {}
  consume(queueName, callback) {}
  // Simple implementation can be found here:
  // https://github.com/testjavascript/nodejs-integration-tests-best-practices/blob/master/example-application/libraries/fake-message-queue-provider.js
}
```
Expose a convenient function that tells when certain calls were made
```javascript
const FakeMessageQueueProvider = require('./libs/fake-message-queue-provider');
const MessageQueueClient = require('./libs/message-queue-client');
const newOrderService = require('./domain/newOrderService');

test('When a poisoned message arrives, then it is being rejected back', async () => {
  // Arrange
  const messageWithInvalidSchema = { nonExistingProperty: 'invalid❌' };
  const messageQueueClient = new MessageQueueClient(new FakeMessageQueueProvider());
  // Subscribe to new messages and pass the handler function
  messageQueueClient.consume('orders.new', newOrderService.addOrder);
  // Act
  await messageQueueClient.publish('orders.new', messageWithInvalidSchema);
  // Now all the layers of the app will get stretched 👆, including logic and message queue libraries
  // Assert
  await messageQueueClient.waitFor('reject', { howManyTimes: 1 });
  // 👆 This tells us that eventually our code asked the message queue client to reject this poisoned message
});
```
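The waitFor helper used above lives in the linked repository; a minimal sketch of the idea, assuming the client extends EventEmitter and emits a 'reject' event, could look like this:

```javascript
// A hedged sketch of the waitFor helper - see the linked repo for the real code
class MessageQueueClient extends EventEmitter {
  waitFor(eventName, { howManyTimes }) {
    let receivedCount = 0;
    return new Promise((resolve) => {
      this.on(eventName, (eventData) => {
        receivedCount++;
        if (receivedCount >= howManyTimes) {
          resolve({ lastEventData: eventData }); // the test resumes only after the event fired enough times
        }
      });
    });
  }
}
```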
👉What & why - When publishing a library to npm, all your tests might easily pass BUT... the same functionality will fail on the end-user's computer. How come? Tests are executed against the local developer files, but the end-user is only exposed to artifacts that were built. See the mismatch here? After running the tests, the package files are transpiled (I'm looking at you, Babel users), zipped and packed. If a single file is excluded due to .npmignore, or a polyfill is not added correctly, the published code will lack mandatory files
📝 Code
Consider the following scenario, you're developing a library, and you wrote this code:
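A hedged reconstruction of such a setup - the exact file contents are illustrative, following the calculate.js example mentioned below:

```javascript
// index.js - the package entry point, it depends on calculate.js
const { calculate } = require('./calculate');
module.exports.fn1 = () => calculate();

// package.json (excerpt) - the 'files' allow-list omits calculate.js:
// {
//   "name": "my-package",
//   "files": ["index.js"]  👈 calculate.js won't be packed, yet local tests still pass
// }
```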
See, 100% coverage, all tests pass locally and in the CI ✅ - it just won't work in production 👹. Why? Because you forgot to include calculate.js in the package.json files array 👆
What can we do instead? We can test the library as its end-users do. How? Publish the package to a local registry like verdaccio, then let the tests install and approach the published code. Sounds troublesome? Judge for yourself 👇
📝 Code
```javascript
// global-setup.js
// 1. Setup the in-memory NPM registry, one function, that's it! 🔥
await setupVerdaccio();
// 2. Building our package
await exec('npm', ['run', 'build'], {
  cwd: packagePath,
});
// 3. Publish it to the in-memory registry
await exec('npm', ['publish', '--registry=http://localhost:4873'], {
  cwd: packagePath,
});
// 4. Installing it in the consumer directory
await exec('npm', ['install', 'my-package', '--registry=http://localhost:4873'], {
  cwd: consumerPath,
});

// Test file in the consumerPath
// 5. Test the package 🚀
test('should succeed', async () => {
  const { fn1 } = await import('my-package');
  expect(fn1()).toEqual(1);
});
```
Testing the different versions of a peer dependency you support - say your package supports React 16 to 18, you can now test that
You want to test ESM and CJS consumers
If you have a CLI application, you can test it like your users do
Making sure all the voodoo magic in that Babel file is working as expected
🗞 The 'broken contract' test - when the code is great but its corresponding OpenAPI docs lead to a production bug
👉What & so what - I'm quite confident that almost no team tests their OpenAPI correctness. "It's just documentation", "we generate it automatically based on code" are the typical beliefs behind this. Let me show you how this auto-generated documentation can be wrong and lead not only to frustration but also to a bug. In production.
Consider the following scenario: you're requested to return an HTTP error status code if an order is duplicated, but you forget to update the OpenAPI specification with this new HTTP status response. While some frameworks can update the docs with new fields, none can realize which errors your code throws; this labour is always manual. On the other side of the line, the API client is doing everything just right, going by the spec that you published, adding orders with some duplication because the docs don't forbid doing so. Then, BOOM, production bug -> the client crashes and shows an ugly unknown error message to the user. This type of failure is called the 'contract' problem: two parties interact, each has code that works perfectly, they just operate under different specs and assumptions. While there are fancy, sophisticated and exhaustive solutions to this challenge (e.g., PACT), there are also leaner approaches that get you covered easily and quickly (at the price of covering fewer risks).
The following sweet technique is based on libraries (for jest and mocha) that listen to all network responses, compare the payload against the OpenAPI document, and, if any deviation is found, make the test fail with a descriptive error. With this new weapon in your toolbox and almost zero effort, another risk is ticked. It's a pity that these libs can't also assert against the incoming requests, to tell you when your tests use the API wrongly. One small caveat and an elegant solution: these libraries dictate putting an assertion statement in every test - expect(response).toSatisfyApiSpec() - a bit tedious and reliant on human discipline. You can do better if your HTTP client supports plugins/hooks/interceptors, by putting this assertion in a single place that will apply to all the tests:
The OpenAPI document doesn't include HTTP status '409'; no framework knows to update the OpenAPI doc based on thrown exceptions
"responses":{ "200":{ "description":"successful", } , "400":{ "description":"Invalid ID", "content":{} },// No 409 in this list😲👈 }
The test code
```javascript
const jestOpenAPI = require('jest-openapi');
jestOpenAPI('../openapi.json');

test('When an order with a duplicated coupon is added, then a 409 error should get returned', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    couponId: uuid(),
  };
  await axiosAPIClient.post('/order', orderToAdd);

  // Act
  // We're adding the same coupon twice 👇
  const receivedResponse = await axios.post('/order', orderToAdd);

  // Assert
  expect(receivedResponse.status).toBe(409);
  expect(receivedResponse).toSatisfyApiSpec();
  // This 👆 will throw if the API response, body or status, is different from what is stated in the OpenAPI
});
```
Trick: if your HTTP client supports any kind of plugin/hook/interceptor, put the following code in 'beforeAll'. This covers all the tests against OpenAPI mismatches
```javascript
beforeAll(() => {
  axios.interceptors.response.use((response) => {
    expect(response).toSatisfyApiSpec();
    // With this 👆, add nothing to the tests - each will fail if the response deviates from the docs
    return response; // the interceptor must hand the response back to the caller
  });
});
```
The examples above were not meant to be only a checklist of 'don't forget' test cases, but rather a fresh mindset on what tests could cover for you. Modern tests are not just about functions or user flows, but about any risk that might visit your production. This is doable only with component/integration tests, never with unit or end-to-end tests. Why? Because unlike unit tests, you need all the parts to play together (e.g., the DB migration file, the DAL layer and the error handler, all together). Unlike E2E, you have the power to simulate in-process scenarios that demand some tweaking and mocking. Component tests allow you to include many production moving parts early, on your machine. I like calling this 'production-oriented development'
We work on two parallel paths: enriching the supported best practices to make the code more production-ready, and at the same time enhancing the existing code based on community feedback
Every request now has its own store of variables; you may assign information at the request level so that any code invoked during this specific request has access to these variables - for example, for storing the user permissions. One special variable that is stored is the 'request-id', a unique UUID per request (also called correlation-id). The logger will automatically emit this with every log entry. We use the built-in AsyncLocalStorage for this task
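As a rough sketch of this mechanism (AsyncLocalStorage is the real Node.js built-in; the middleware and logger wiring below are illustrative, not Practica's actual code):

```javascript
const express = require('express');
const { AsyncLocalStorage } = require('async_hooks');
const { randomUUID } = require('crypto');

const app = express();
const requestContext = new AsyncLocalStorage();

// Open a store per incoming request; everything called from this request sees it
app.use((req, res, next) => {
  const store = new Map([['requestId', req.headers['x-request-id'] ?? randomUUID()]]);
  requestContext.run(store, next);
});

// Deep inside any layer, the logger can enrich entries transparently
function logWithContext(message) {
  const store = requestContext.getStore();
  console.log(JSON.stringify({ message, requestId: store?.get('requestId') }));
}
```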
Although a Dockerfile may contain just 10 lines, it's easy and common to make 20 mistakes in this short artifact. For example, npmrc secrets are commonly leaked, vulnerable base images get used, and other typical mistakes occur. Our Dockerfile follows the best practices from this article and already applies 90% of the guidelines
Prisma is an emerging ORM with great type-safety support and awesome DX. We will keep Sequelize as our default ORM, while Prisma will be an optional choice via the flag --orm=prisma
Why did we add it to our tools basket, and why is Sequelize still the default? We summarized all of our thoughts and data in this blog post
Yoni Goldberg - https://github.com/goldbergyoni
Raz Luvaton - https://github.com/rluvaton
Daniel Gluskin - https://github.com/DanielGluskin
Michael Salomon - https://github.com/mikicho

https://practica.dev/blog/is-prisma-better-than-your-traditional-orm
2022-12-07T11:00:00.000Z
Intro - Why discuss yet another ORM (or the man who had a stain on his fancy suit)?
Betteridge's law of headlines suggests that a 'headline that ends in a question mark can be answered by the word NO'. Will this article follow this rule?
Imagine an elegant businessman (or woman) walking into a building, wearing a fancy tuxedo and a luxury watch wrapped around his palm. He smiles and waves all over to say hello while people around are staring admiringly. You get a little closer, and then, shockingly, while standing nearby, it's hard to ignore a bold, dark stain on his white shirt. What a dissonance - suddenly all of that glamour is stained
Like this businessman, Node is highly capable and popular, and yet, in certain areas, its offering basket is stained with inferior offerings. One of these areas is the ORM space. "I wish we had something like (Java) Hibernate or (.NET) Entity Framework" are common words heard from Node developers. What about existing mature ORMs like TypeORM and Sequelize? We owe so much to these maintainers, and yet, the produced developer experience and the level of maintenance just don't feel delightful - some may even say mediocre. At least so I believed before writing this article...
From time to time, a shiny new ORM is launched, and there is hope. Then it is soon realized that these new emerging projects are more of the same, if they survive at all. Until one day, Prisma ORM arrived, surrounded with glamour: it's gaining tons of attention all over, producing fantastic content, being used by respectable frameworks and... it raised $40,000,000 (40 million) to build the next-generation ORM. Is it the 'Ferrari' ORM we've been waiting for? Is it a game changer? If you're the 'no ORM for me' type, will this one make you convert your religion?
In Practica.js (the Node.js starter based on Node.js best practices with 83,000 stars), we aim to make the best decisions for our users. The Prisma hype made us stop by for a second, evaluate its unique offering, and conclude whether we should upgrade our toolbox
This article is certainly not an 'ORM 101' but rather a spotlight on specific dimensions in which Prisma aims to shine or struggles. It's compared against the two most popular Node.js ORMs - TypeORM and Sequelize. Why not others? Why weren't other promising contenders like MikroORM covered? Just because they are not as popular yet, and maturity is a critical trait of ORMs
Ready to explore how good Prisma is and whether you should throw away your current tools?
Just before delving into the strategic differences, for the benefit of those unfamiliar with Prisma, here is a quick 'hello-world' workflow with Prisma ORM. If you're already familiar with it, skipping to the next section sounds sensible. Simply put, Prisma dictates 3 key steps to get our ORM code working:
A. Define a model - Unlike almost any other ORM, Prisma brings a unique language (DSL) for modeling the database-to-code mapping. This proprietary syntax aims to express the models with minimum clutter (i.e., without TypeScript generics and verbose code). Worried about losing intellisense and validation? A well-crafted vscode extension has you covered. In the following example, the prisma.schema file describes a DB where a Country table has a one-to-many relation with an Order table:
```prisma
// prisma.schema file
model Order {
  id                 Int     @id @default(autoincrement())
  userId             Int?
  paymentTermsInDays Int?
  deliveryAddress    String? @db.VarChar(255)
  country            Country @relation(fields: [countryId], references: [id])
  countryId          Int
}

model Country {
  id    Int     @id @default(autoincrement())
  name  String  @db.VarChar(255)
  Order Order[]
}
```
B. Generate the client code - Another unusual technique: to get the ORM code ready, one must invoke Prisma's CLI and ask for it:
npx prisma generate
Alternatively, if you wish to have your DB ready and the code generated with one command, just fire:
npx prisma migrate deploy
This will generate the migration files that you can execute later in production, and also the TypeScript ORM code based on the model. The generated code's location defaults to '[root]/node_modules/.prisma/client'. Every time the model changes, the code must get re-generated. While most ORMs name this code 'repository', 'entity' or 'active record', Prisma, interestingly, calls it a 'client'. This shows part of its unique philosophy, which we will explore later
C. All good, use the client to interact with the DB - The generated client has a rich set of functions and types for your DB interactions. Just import the ORM/client code and use it:
```typescript
import { PrismaClient } from '.prisma/client';

const prisma = new PrismaClient();

// A query example
await prisma.order.findMany({
  where: {
    paymentTermsInDays: 30,
  },
  orderBy: {
    id: 'asc',
  },
});
// Use the same client for insertion, deletion, updates, etc.
```
That's the nuts and bolts of Prisma. Is it different and better?
When comparing options, before outlining differences, it's useful to state what is actually similar among these products. Here is a partial list of features that TypeORM, Sequelize and Prisma all support
Casual queries with sorting, filtering, distinct, group by, 'upsert' (update or create), etc.
Raw queries
Full text search
Association/relations of any type (e.g., many to many, self-relation, etc)
Aggregation queries
Pagination
CLI
Transactions
Migration & seeding
Hooks/events (called middleware in Prisma)
Connection pool
Based on various community benchmarks, no dramatic performance differences
All have a huge number of stars and downloads
Overall, I found TypeORM and Sequelize to be a little more feature-rich. For example, the following features are missing in Prisma only: GIS queries, DB-level custom constraints, DB replication, soft delete, caching, exclude queries and some more
With that, shall we focus on what really sets them apart and makes a difference?
💁♂️ What is it about: ORMs' life has not been easy since the rise of TypeScript, to say the least. The need to support typed models/queries/etc. yields a lot of developer sweat. Sequelize, for example, struggles to stabilize a TypeScript interface and by now offers 3 different syntaxes + one external library (sequelize-typescript) that offers yet another style. Look at the syntax below; this feels like an afterthought - a library that was not built for TypeScript and now tries to squeeze it in somehow. Despite the major investment, both Sequelize and TypeORM offer only partial type safety. Simple queries do return typed objects, but other common corner cases like attributes/projections leave you with brittle strings. Here are a few examples:
```typescript
// Sequelize pesky TypeScript interface
type OrderAttributes = {
  id: number,
  price: number,
  // other attributes...
};
type OrderCreationAttributes = Optional<OrderAttributes, 'id'>;

//😯 Isn't this a weird syntax?
class Order extends Model<InferAttributes<Order>, InferCreationAttributes<Order>> {
  declare id: CreationOptional<number>;
  declare price: number;
}
```
```typescript
// Sequelize loose query types
await getOrderModel().findAll({
  where: { noneExistingField: 'noneExistingValue' }, //👍 TypeScript will warn here
  attributes: ['none-existing-field', 'another-imaginary-column'], // No errors here although these columns do not exist
  include: 'no-such-table', //😯 no errors here although this table doesn't exist
});
await getCountryModel().findByPk('price'); //😯 No errors here although the price column is not a primary key
```
```typescript
// TypeORM loose query
const ordersOnSales: Post[] = await orderRepository.find({
  where: { onSale: true }, //👍 TypeScript will warn here
  select: ['id', 'price'],
});
console.log(ordersOnSales[0].userId); //😯 No errors here although the 'userId' column is not part of the returned object
```
Isn't it ironic that a library called TypeORM bases its queries on strings?
🤔 How Prisma is different: It takes a totally different approach by generating per-project client code that is fully typed. This client embodies types for everything: every query, relations, sub-queries - everything (except migrations). While other ORMs struggle to infer types from discrete models (including associations that are declared in other files), Prisma's offline code generation is easier: it can look through the entire DB relations, use custom generation code and build an almost perfect TypeScript experience. Why 'almost' perfect? For some reason, Prisma advocates using plain SQL for migrations, which might result in a discrepancy between the code models and the DB schema. Other than that, this is how Prisma's client brings end-to-end type safety:
```typescript
await prisma.order.findMany({
  where: {
    noneExistingField: 1, //👍 TypeScript error here
  },
  select: {
    noneExistingRelation: { //👍 TypeScript error here
      select: { id: true },
    },
    noneExistingField: true, //👍 TypeScript error here
  },
});

await prisma.order.findUnique({
  where: { price: 50 }, //👍 TypeScript error here
});
```
📊 How important: TypeScript support across the board is valuable mostly for DX. Luckily, we have another safety net: the project's testing. Since tests are mandatory, having build-time type verification is important, but not a life saver
💁♂️ What is it about: Many avoid ORMs, preferring to interact with the DB using lower-level techniques. One of their arguments is the efficiency of ORMs: since the generated queries are not immediately visible to the developers, wasteful queries might get executed unknowingly. While all ORMs provide syntactic sugar over SQL, there are subtle differences in the level of abstraction. The more the ORM syntax resembles SQL, the more likely the developers are to understand their own actions
For example, TypeORM's query builder looks like SQL broken into convenient functions
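The code examples are missing here, so a hedged sketch of the contrast being described follows; the query shapes are standard TypeORM/Prisma API, while the entities and fields (borrowed from the schema above) are illustrative:

```typescript
// TypeORM query builder - the SQL is visible through the function names
const orders = await dataSource
  .getRepository(Order)
  .createQueryBuilder('order')
  .leftJoinAndSelect('order.country', 'country') // 👈 the join is explicit
  .where('order.paymentTermsInDays = :days', { days: 30 })
  .getMany();

// Prisma - the same fetch, with no join mentioned anywhere
const ordersWithCountry = await prisma.order.findMany({
  where: { paymentTermsInDays: 30 },
  include: { country: true },
});
```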
No join is mentioned here, although it fetches records from two related tables (order and country). Could you guess what SQL is being produced? How many queries? One, right, a simple join? Surprise - actually, two queries are made. Prisma fires one query per table here, as the join logic happens on the ORM client side (not inside the DB). But why?? In some cases, mostly where there is a lot of repetition in the DB cartesian join, querying each side of the relation is more efficient. But in other cases, it's not. Prisma arbitrarily chose what they believe will perform better in most cases. I checked; in my case, it's slower than doing a one-join query on the DB side. As a developer, I would miss this deficiency due to the high-level syntax (no join is mentioned). My point is, Prisma's sweet and simple syntax might be a blessing for developers who are brand new to databases and aim to achieve a working solution in a short time. For the longer term, having full awareness of the DB interactions is helpful; other ORMs encourage this awareness a little better
📊 How important: Any ORM will hide SQL details from its users - without the developer's awareness, no ORM will save the day
💁♂️ What is it about: Speak to an ORM antagonist and you'll hear a common, sensible argument: ORMs are much slower than a 'raw' approach. To an extent, this is a legit observation, as most comparisons will show non-negligible differences between raw/query-builder and ORM.
(Chart) Example: a direct insert against the PG driver is much shorter (Source)
It should also be noted that these benchmarks don't tell the entire story: on top of raw queries, every solution must build a mapper layer that maps the raw data to JS objects, nests the results, casts types, and more. This work is included within every ORM but not shown in benchmarks for the raw option. In reality, every team that doesn't use an ORM would have to build their own small "ORM", including a mapper, which will also impact performance
🤔 How Prisma is different: It was my hope to see some magic here - eating the ORM cake without counting the calories, seeing Prisma achieve an almost 'raw' query speed. I had some good and logical reasons for this hope: Prisma uses a DB client built with Rust. Theoretically, it could serialize and nest objects faster (in reality, this happens on the JS side). It was also built from the ground up and could build on the knowledge piled up in the ORM space for years. Also, since it returns POJOs only (see the bullet 'No Active Record here!'), no time should be spent on decorating objects with ORM fields
You already got it - this hope was not fulfilled. Going by every community benchmark (one, two, three), Prisma at best is not faster than the average ORM. What is the reason? I can't tell exactly, but it might be due to the complicated system that must support Go, future languages, MongoDB and other non-relational DBs
(Chart) Example: Prisma is not faster than others. It should be noted that in other benchmarks Prisma scores higher and shows an 'average' performance (Source)
📊 How important: ORM users are expected to live peacefully with inferior performance; for many systems, it won't matter a great deal. With that said, 10%-30% performance differences between various ORMs are not a key factor
💁♂️ What is it about: Node in its early days was heavily inspired by Ruby (e.g., the testing "describe"); many great patterns were embraced, but Active Record is not among the successful ones. What is this pattern about, in a nutshell? Say you deal with Orders in your system; with Active Record, an Order object/class will hold both the entity properties, possibly also some of the logic functions, and also CRUD functions. Many find this pattern awful. Why? Ideally, when coding some logic/flow, one should not keep her mind busy with side effects and DB narratives. It also might be that accessing some property unconsciously invokes a heavy DB call (i.e., lazy loading). If that's not enough, in the case of heavy logic, unit tests might be in order (i.e., read 'selective unit tests') - and it's going to be much harder to write unit tests against code that interacts with the DB. In fact, all of the respectable and popular architectures (e.g., DDD, clean, 3-tiers, etc.) advocate isolating the domain - separating the core/logic of the system from the surrounding technologies. With all of that said, both TypeORM and Sequelize support the Active Record pattern, which is displayed in many examples within their documentation. Both also support other, better patterns like the data mapper (see below), but they still open the door to doubtful patterns
```typescript
// TypeORM active records 😟
@Entity()
class Order extends BaseEntity {
  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  price: number;

  @ManyToOne(() => Product, (product) => product.order)
  products: Product[];

  // Other columns here
}

function updateOrder(orderToUpdate: Order) {
  if (orderToUpdate.price > 100) {
    // some logic here
    orderToUpdate.status = 'approval';
    orderToUpdate.save();
    orderToUpdate.products.forEach((products) => {});
    orderToUpdate.usedConnection = ?;
  }
}
```
🤔 How Prisma is different: The better alternative is the data mapper pattern. It acts as a bridge, an adapter, between simple object notations (domain objects with properties) and the DB language, typically SQL. Call it with a plain JS object, a POJO, and get it saved in the DB. Simple. It won't add functions to the result objects or do anything beyond returning pure data - no surprising side effects. In its purest sense, this is a DB-related utility, completely detached from the business logic. While both Sequelize and TypeORM support this, Prisma offers only this style - no room for mistakes.
```typescript
// Prisma approach with a data mapper 👍
// This was generated automatically by Prisma
type Order = {
  id: number;
  price: number;
  products: Product[];
  // Other columns here
};

function updateOrder(orderToUpdate: Order) {
  if (orderToUpdate.price > 100) {
    orderToUpdate.status = 'approval';
    prisma.order.update({ where: { id: orderToUpdate.id }, data: orderToUpdate });
    // Side effect 👆, but an explicit one. The thoughtful coder will move this to another function. Since it's happening outside, mocking is possible 👍
    products.forEach((products) => {
      // No lazy loading, the data is already here 👍
    });
  }
}
```
In Practica.js, we take it one step further and put the Prisma models within the "DAL" layer, wrapped with the repository pattern. You may glimpse into the code here; this is the business flow that calls the DAL layer
📊 How important: On the one hand, this is a key architectural principle to follow; on the other hand, most ORMs allow doing it right
💁♂️ What is it about: TypeORM's and Sequelize's documentation is mediocre, though TypeORM's is a little better. Based on my personal experience, they do get a little better over the years, but still, by no means do they deserve to be called "good" or "great". For example, if you seek to learn about 'raw queries', Sequelize offers a very short page on this matter, while TypeORM's info is spread across multiple other pages. Looking to learn about pagination? I couldn't find Sequelize documents; TypeORM has a short explanation, 150 words only
🤔 How Prisma is different: Prisma's documentation rocks! See their documents on similar topics: raw queries and pagination - thousands of words and dozens of code examples. The writing itself is also great; it feels like some professional writers were involved
The chart above shows how comprehensive the Prisma docs are (obviously, this by itself doesn't prove quality)
📊 How important: Great docs are a key to awareness and avoiding pitfalls
💁♂️ What is it about: Good chances are (say about 99.9%) that you'll find yourself diagnosing slow queries in production or other DB-related quirks. What can you expect from traditional ORMs in terms of observability? Mostly logging. Sequelize provides both logging of query duration and programmatic access to the connection pool state ({size, available, using, waiting}). TypeORM provides only logging of queries that surpass a predefined duration threshold. This is better than nothing, but assuming you don't read production logs 24/7, you'd probably need more than logging - an alert to fire when things seem faulty. To achieve this, it's your responsibility to bridge this info into your preferred monitoring system. Another logging downside here is verbosity - we need to emit tons of information to the logs when all we really care about is the average duration. Metrics can serve this purpose much better, as we're about to see soon with Prisma
What if you need to dig into which specific part of the query is slow? Unfortunately, there is no breakdown of the query phases' duration - it's left to you as a black box
Sequelize, for example, allows logging each query with its duration in order to realize trends and anomalies in the monitoring system, as the sketch below shows.
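A minimal sketch of what this looks like, assuming Sequelize v6 (the `benchmark`/`logging` options and the pool fields follow Sequelize's documented API; the `logger` object stands in for your own):

```javascript
const { Sequelize } = require("sequelize");

const sequelize = new Sequelize("postgres://localhost/shop", {
  benchmark: true, // measure each query's duration
  logging: (sql, durationMs) => {
    // Ship every query and its duration to the logger/monitoring system
    logger.info({ sql, durationMs });
  },
});

// The connection pool state is also accessible programmatically
const { size, available, using, waiting } = sequelize.connectionManager.pool;
```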
🤔 How Prisma is different: Since Prisma also targets enterprises, it must bring strong ops capabilities. Beautifully, it packs support for both metrics and OpenTelemetry tracing! For metrics, it generates custom JSON with metric keys and values so anyone can adapt this to any monitoring system (e.g., CloudWatch, StatsD). On top of this, it produces out-of-the-box metrics in Prometheus format (one of the most popular monitoring platforms). For example, the metric 'prisma_client_queries_duration_histogram_ms' provides the average query duration in the system over time. What is even more impressive is the tracing support - it feeds your OpenTelemetry collector with spans that describe the various phases of every query. For example, it might help realize what the bottleneck in the query pipeline is: Is it the DB connection, the query itself, or the serialization?
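A hedged sketch of pulling these metrics (assuming Prisma's 'metrics' preview feature is enabled in schema.prisma; the `$metrics` methods are per Prisma's docs):

```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function reportMetrics() {
  // JSON format - adapt the keys and values to any monitoring system
  const jsonMetrics = await prisma.$metrics.json();

  // Prometheus format - ready to be scraped as-is, includes
  // prisma_client_queries_duration_histogram_ms among others
  const prometheusMetrics = await prisma.$metrics.prometheus();
  console.log(jsonMetrics, prometheusMetrics);
}
```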
Prisma visualizes the various query phases' duration with OpenTelemetry
🏆 Is Prisma doing better?: Definitely
📊 How important: It goes without saying how impactful observability is; however, filling this gap in other ORMs will demand no more than a few days
7. Continuity - will it be here with us in 2024/2025
💁‍♂️ What is it about: We live quite peacefully with the risk of one of our dependencies disappearing. With an ORM though, this risk demands special attention because our buy-in is higher (i.e., harder to replace) and maintaining it was proven to be harder. Just look at a handful of successful ORMs of the past: objection.js, waterline, bookshelf - all of these respectable projects had 0 commits in the past month. The single maintainer of objection.js announced that he won't work on the project anymore. This high churn rate is not surprising given the huge amount of moving parts to maintain, the gazillion corner cases, and the modest 'budget' OSS projects live with. Looking at OpenCollective shows that Sequelize and TypeORM are funded with ~$1,500 a month on average. This is barely enough to cover a daily Starbucks cappuccino and croissant ($6.95 x 365) for 5 maintainers. Nothing contrasts this model more than a startup company that just raised its series B - Prisma is funded with $40,000,000 (40 million) and recruited 80 people! Shouldn't this inspire us with high confidence about their continuity? I'll surprisingly suggest that quite the opposite is true
See, an OSS ORM has to go over one huge hump, but a startup company must pass through TWO. The OSS project will struggle to achieve the critical mass of features, including some high technical barriers (e.g., TypeScript support, ESM). This typically lasts years, but once it does - the project can focus mostly on maintenance and step out of the danger zone. The good news for TypeORM and Sequelize is that they already did! Both struggled to keep their heads above the water - there were rumors in the past that TypeORM was not maintained anymore - but they managed to get through this hump. I counted: both projects had approximately ~2,000 PRs in the past 3 years! Going by repo-tracker, each sees multiple commits every week. They both have vibrant traction and the majority of features you would expect from an ORM. TypeORM even supports beyond-the-basics features like multiple data sources and caching. It's unlikely that now, once they have reached the promised land, they will fade away. It might happen, there is no guarantee in the OSS galaxy, but the risk is low
🤔 How Prisma is different: Prisma lags a little behind in terms of features, but with a budget of $40M - there are good reasons to believe they will pass the first hump, achieving a critical mass of features. I'm more concerned with the second hump - showing revenues in 2 years or saying goodbye. As a company backed by venture capital - the model is clear and cruel: in order to secure the next round, series B or C (depending on whether the seed is counted), there must be a viable and proven business model. How do you 'sell' an ORM? Prisma experiments with multiple products, none of which is mature or being paid for yet. How big is this risk? According to these startup success statistics, "About 65% of the Series A startups get series B, while 35% of the companies that get series A fail.". Since Prisma has already gained a lot of love and adoption from the community, its success chances are higher than the average round A/B company, but even a 10% or 20% chance of fading away is concerning
This is terrifying news - companies happily choose a young commercial OSS product without realizing that there is a 10-30% chance for this product to disappear
Some startup companies that seek a viable business model do not shut their doors but rather change the product, the license, or the free features. This is not my subjective business analysis; here are a few examples: MongoDB changed their license, which is why the majority now have to host their MongoDB with a single vendor. Redis did something similar. What are the chances of Prisma pivoting to another type of product? It actually already happened before: Prisma 1 was mostly about a GraphQL client and server, and it's now retired
It's only fair to mention the other potential path - most series B companies do manage to qualify for the next round; when this happens, even bigger money will be involved in building the 'Ferrari' of JavaScript ORMs. I'm surely crossing my fingers for these great people; at the same time, we have to be conscious about our choices
📊 How important: As important as having to rewrite the entire DB layer of a big system
Before proposing my key takeaway - which ORM is the primary pick - let's repeat the key learnings that were introduced here:
🥇 Prisma deserves a medal for its awesome DX, documentation, observability support and end-to-end TypeScript coverage
🤔 There are reasons to be concerned about Prisma's business continuity as a young startup without a viable business model. Also, Prisma's abstract client syntax might blind developers a little more than other ORMs
🎩 The contenders, TypeORM and Sequelize, have matured and are doing quite well: both have merged thousands of PRs in the past 3 years to become more stable, they keep introducing new releases (see repo-tracker), and for now hold more features than Prisma. Also, both show solid performance (for an ORM). Hats off to the maintainers!
Based on these observations, which should you pick? Which ORM will we use for Practica.js?
Prisma is an excellent addition to the Node.js ORM family, but not the hassle-free one tool to rule them all. It's a mixed bag of many delicious candies and a few gotchas. Will it grow to tick all the boxes? Maybe, but it's unlikely. Once built, it's too hard to dramatically change the syntax and engine performance. Then, while writing this and speaking with the community, including some Prisma enthusiasts, I realized that it doesn't aim to be the can-do-everything 'Ferrari'. Its positioning seems to resemble more a convenient family car with a solid engine and an awesome user experience. In other words, it probably aims for the enterprise space, where there is mostly demand for great DX, OK performance, and business-class support
At the end of this journey, I see no dominant, flawless 'Ferrari' ORM. I should probably change my perspective: building an ORM for the hectic modern JavaScript ecosystem is 10x harder than building a Java ORM back in 2001. There is no stain on the shirt; it's a cool piece of JavaScript swag. I learned to accept what we have: a rich set of features and tolerable performance, good enough for many systems. Need more? Don't use an ORM. Nothing is going to change dramatically; it's now as good as it can be
Surely use Prisma under these scenarios: if your data needs are rather simple; when time-to-market concerns take precedence over data-processing accuracy; when the DB is relatively small; if you're a mobile/frontend developer taking her first steps in the backend world; when there is a need for business-class support; AND when Prisma's long-term business continuity risk is a non-issue for you
I'd probably prefer other options under these conditions: if DB layer performance is a major concern; if you're a savvy backend developer with solid SQL capabilities; when there is a need for fine-grained control over the data layer. For all of these cases, Prisma might still work, but my primary choice would be knex/TypeORM/Sequelize with a data-mapper style
Consequently, we love Prisma and added it behind a flag (--orm=prisma) to Practica.js. At the same time, until some clouds disappear, Sequelize will remain our default ORM
Yoni Goldberg
https://github.com/goldbergyoni

https://practica.dev/blog/monorepo-backend
2022-11-07T11:00:00.000Z
+ As a Node.js starter, choosing the right libraries and frameworks for our users is the bread and butter of our work in Practica.js. In this post, we'd like to share our considerations in choosing our monorepo tooling
The Monorepo market is hot like fire. Weirdly, now when the demand for Monorepos is exploding, one of the leading libraries - Lerna - has just retired. When looking closely, it might not be just a coincidence - with so many disruptive and shiny features brought by new vendors, Lerna failed to keep up with the pace and stay relevant. This bloom of new tooling gets many confused - what is the right choice for my next project? What should I look at when choosing a Monorepo tool? This post is all about curating this information overload, covering the new tooling, emphasizing what is important, and finally sharing some recommendations. If you are here for tools and features, you're in the right place, although you might find yourself on a soul-searching journey toward your desired development workflow.
This post is concerned with backend-only and Node.js. It is also scoped to typical business solutions. If you're a Google/FB developer who is faced with 8,000 packages - sorry, you need special gear. Consequently, monster Monorepo tooling like Bazel is left out. We will cover here some of the most popular Monorepo tools, including Turborepo, Nx, PNPM, Yarn/npm workspaces, and Lerna (although it's not actually maintained anymore - it's a good baseline for comparison).
Let’s start? When human beings use the term Monorepo, they typically refer to one or more of the following 4 layers below. Each one of them can bring value to your project, each has different consequences, tooling, and features:
Layer 1: Plain old folders to stay on top of your code
With zero tooling and only by having all the Microservice and libraries together in the same root folder, a developer gets great management perks and tons of value: Navigation, search across components, deleting a library instantly, debugging, quickly adding new components. Consider the alternative with multi-repo approach — adding a new component for modularity demands opening and configuring a new GitHub repository. Not just a hassle but also greater chances of developers choosing the short path and including the new code in some semi-relevant existing package. In plain words, zero-tooling Monorepos can increase modularity.
This layer is often overlooked. If your codebase is not huge and the components are highly decoupled (more on this later)— it might be all you need. We’ve seen a handful of successful Monorepo solutions without any special tooling.
With that said, some of the newer tools augment this experience with interesting features:
Turborepo, Nx, and also Lerna provide a visual representation of the packages' dependencies
Nx allows 'visibility rules', which is about enforcing who can use what. Consider a 'checkout' library that should be approached only by the 'order Microservice' - deviating from this will result in failure during development (not runtime enforcement)
Nx dependencies graph
Nx workspace generators allow scaffolding out components. Whenever a team member needs to craft a new controller/library/class/Microservice, she just invokes a CLI command which produces code based on a community or organization template. This enforces consistency and best-practice sharing
Layer 2: Tasks and pipeline to build your code efficiently
Even in a world of autonomous components, there are management tasks that must be applied in a batch, like applying a security patch via npm update, running the tests of multiple components that were affected by a change, or publishing 3 related libraries, to name a few examples. All Monorepo tools support this basic functionality of invoking some command over a group of packages - Lerna, Nx, and Turborepo, for example, all do (see the sketch below).
Apply some commands over multiple packages
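With plain npm workspaces, for example, this capability is already built in (a minimal sketch, assuming npm v7+; Lerna, Nx, and Turborepo offer richer variants of the same idea):

```json
{
  "name": "my-monorepo",
  "private": true,
  "workspaces": ["packages/*"],
  "scripts": {
    "test:all": "npm run test --workspaces"
  }
}
```

The root package.json above runs every package's 'test' script in one shot.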
In some projects, invoking a cascading command is all you need - mostly if each package has an autonomous life cycle and the build process spans a single package (more on this later). In some other types of projects, where the workflow demands testing/running and publishing/deploying many packages together, this will end in a terribly slow experience. Consider a solution with hundreds of packages that are transpiled and bundled - one might wait minutes for a wide test to run. While it's not always a great practice to rely on wide/E2E tests, it's quite common in the wild. This is exactly where the new wave of Monorepo tooling shines - deeply optimizing the build process. I should say this out loud: these tools bring beautiful and innovative build optimizations:
Parallelization - if two commands or packages are orthogonal to each other, the commands will run in two different threads or processes. Typically your quality control involves testing, linting, license checking, CVE checking - why not parallelize?
Smart execution plan - beyond parallelization, the optimized task execution order is determined based on many factors. Consider a build that includes A, B, C, where A and C depend on B - naively, a build system would wait for B to build and only then run A & C. This can be optimized if we run A & C's isolated unit tests while building B and not afterward. By running tasks in parallel as early as possible, the overall execution time is improved - this has a remarkable impact mostly when hosting a high number of components. See below a visualization example of a pipeline improvement
A modern tool advantage over old Lerna. Taken from Turborepo website
Detect who is affected by a change - even in a system with high coupling between packages, it's usually not necessary to run all packages, rather only those affected by a change. What exactly is 'affected'? Packages/Microservices that depend upon another package that has changed. Some of the tools can ignore minor changes that are unlikely to break others. This is not just a great performance booster but also an amazing testing feature - developers can get quick feedback on whether any of their clients were broken. Both Nx and Turborepo support this feature. Lerna can tell only which of the Monorepo packages has changed
Sub-systems (i.e., projects) — Similarly to ‘affected’ above, modern tooling can realize portions of the graph that are inter-connected (a project or application) while others are not reachable by the component in context (another project) so they know to involve only packages of the relevant group
Caching - this is a serious speed booster: Nx and Turborepo cache the result/output of tasks and avoid running them again on consequent builds if unnecessary. For example, consider long-running tests of a Microservice; when commanding to re-build this Microservice, the tooling might realize that nothing has changed and the tests will get skipped. This is achieved by generating a hashmap of all the dependent resources - if none of these resources have changed, then the hashmap will be the same and the task will get skipped. They even cache the stdout of the command, so when you run a cached version it acts like the real thing - consider running 200 tests, seeing all the log statements of the tests, getting results over the terminal in 200 ms, everything acting like 'real testing' while in fact the tests did not run at all, rather the cache! A configuration sketch follows right after this list
Remote caching - similar to caching, but placing the tasks' hashmaps and results on a global server so further executions on other team members' computers will also skip unnecessary tasks. In huge Monorepo projects that rely on E2E tests and must build all packages for development, this can save a great deal of time
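To illustrate the caching bullets above, here is a minimal sketch of a Turborepo pipeline configuration (assuming Turborepo v1's turbo.json format):

```json
{
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    },
    "test": {
      "dependsOn": ["build"],
      "outputs": []
    }
  }
}
```

Each task's inputs get hashed; on a cache hit, the stored outputs and stdout are replayed instead of re-running the task, whether the cache lives locally or on a shared remote server.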
Layer 3: Hoist your dependencies to boost npm installation
The speed optimizations that were described above won't be of help if the bottleneck is the big ball of mud that is called 'npm install' (not to criticize, it's just hard by nature). Take a typical scenario as an example: given dozens of components that should be built, they could easily trigger the installation of thousands of sub-dependencies. Although they use quite similar dependencies (e.g., same logger, same ORM), if the dependency versions are not equal then npm will duplicate (the NPM doppelgangers problem) the installation of those packages, which might result in a long process.
This is where the workspace line of tools (e.g., Yarn workspace, npm workspaces, PNPM) kicks in and introduces some optimization — Instead of installing dependencies inside each component ‘NODE_MODULES’ folder, it will create one centralized folder and link all the dependencies over there. This can show a tremendous boost in install time for huge projects. On the other hand, if you always focus on one component at a time, installing the packages of a single Microservice/library should not be a concern.
Both Nx and Turborepo can rely on the package manager/workspace to provide this layer of optimizations. In other words, Nx and Turborepo are the layer above the package manager, which takes care of optimized dependency installation.
On top of this, Nx introduces one more non-standard, maybe even controversial, technique: There might be only ONE package.json at the root folder of the entire Monorepo. By default, when creating components using Nx, they will not have their own package.json! Instead, all will share the root package.json. Going this way, all the Microservice/libraries share their dependencies and the installation time is improved. Note: It’s possible to create ‘publishable’ components that do have a package.json, it’s just not the default.
I'm concerned here. Sharing dependencies among packages increases the coupling - what if Microservice1 wishes to bump dependency1's version but Microservice2 can't do this at the moment? Also, package.json is part of the Node.js runtime, and excluding it from the component root loses important features like the package.json main field or ESM exports (telling the clients which files are exposed). I ran some POC with Nx last week and found myself blocked - library B was added, I tried to import it from library A but couldn't get the 'import' statement to specify the right package name. The natural action was to open B's package.json and check the name, but there is no package.json... How do I determine its name? Nx docs are great; finally, I found the answer, but I had to spend time learning a new 'framework'.
Stop for a second: It’s all about your workflow
We deal with tooling and features, but it's actually meaningless to evaluate these options before determining whether your preferred workflow is synchronized or independent (we will discuss this in a few seconds). This upfront fundamental decision will change almost everything.
Consider the following example with 3 components: Library 1 is introducing some major and breaking changes, Microservice1 and Microservice2 depend upon Library1 and should react to those breaking changes. How?
Option A - the synchronized workflow: going with this development style, all three components will be developed and deployed in one chunk together. Practically, a developer will code the changes in Library1, test Library1, and also run wide integration/e2e tests that include Microservice1 and Microservice2. When they're ready, the versions of all components will get bumped. Finally, they will get deployed together.
Going with this approach, the developer has the chance of seeing the full flow from the clients' perspective (Microservice1 and 2); the tests cover not only the library but also go through the eyes of the clients who actually use it. On the flip side, it mandates updating all the dependent components (could be dozens); doing so increases the risk's blast radius as more units are affected and should be considered before deployment. Also, working on a large unit of work demands building and testing more things, which will slow the build.
Option B - the independent workflow: this style is about working unit by unit, one bite at a time, and deploying each component independently based on its personal business considerations and priority. This is how it goes: a developer makes the changes in Library1; they must be tested carefully in the scope of Library1. Once she is ready, the SemVer is bumped to a new major and the library is published to a package manager registry (e.g., npm). What about the client Microservices? Well, the team of Microservice2 is super-busy now with other priorities and skips this update for now (the same way we all delay many of our npm updates). However, Microservice1 is very much interested in this change - the team has to proactively update this dependency, grab the latest changes, run the tests, and when they are ready, today or next week - deploy it.
Going with the independent workflow, the library author can move much faster because she does not need to take into account 2 or 30 other components, some coded by different teams. This workflow also forces her to write efficient tests against the library - it's her only safety net - and is likely to end with autonomous components that have low coupling to others. On the other hand, testing in isolation without the clients' perspective loses some dimension of realism. Also, if a single developer has to update 5 units - publishing each individually to the registry and then updating all the dependents can be a little tedious.
Synchronized and independent workflows illustrated
On the illusion of synchronicity
In distributed systems, it's not feasible to achieve 100% synchronicity - believing otherwise can lead to design faults. Consider a breaking change in Microservice1; now its client, Microservice2, is adapted and ready for the change. These two Microservices are deployed together, but due to the nature of Microservices and distributed runtimes (e.g., Kubernetes), only the deployment of Microservice1 fails. Now, Microservice2's code is not aligned with the Microservice1 in production and we are faced with a production bug. This line of failures can be handled to an extent also with a synchronized workflow - the deployment should orchestrate the rollout of each unit so each one is deployed at a time. Although this approach is doable, it increases the chances of a large-scoped rollback and increases deployment fear.
This fundamental decision, synchronized or independent, will determine so many things — Whether performance is an issue or not at all (when working on a single unit), hoisting dependencies or leaving a dedicated node_modules in every package’s folder, and whether to create a local link between packages which is described in the next paragraph.
Layer 4: Link your packages for immediate feedback
When having a Monorepo, there is always the unavoidable dilemma of how to link between the components:
Option 1: Using npm - each library is a standard npm package and its clients install it via the standard npm commands. Given Microservice1 and Library1, this will end with two copies of Library1: the one inside Microservice1/node_modules (i.e., the local copy of the consuming Microservice), and the 2nd in the development folder where the team is coding Library1.
Option 2: Just a plain folder - with this, Library1 is nothing but a logical module inside a folder that Microservice1, 2, 3 just locally import. NPM is not involved here, it's just code in a dedicated folder. This is, for example, how Nest.js modules are represented.
With option 1, teams benefit from all the great merits of a package manager — SemVer(!), tooling, standards, etc. However, should one update Library1, the changes won’t get reflected in Microservice1 since it is grabbing its copy from the npm registry and the changes were not published yet. This is a fundamental pain with Monorepo and package managers — one can’t just code over multiple packages and test/run the changes.
With option 2, teams lose all the benefits of a package manager: Every change is propagated immediately to all of the consumers.
How do we bring the good from both worlds (presumably)? Using linking. Lerna, Nx, and the various package manager workspaces (Yarn, npm, etc.) allow using npm libraries and at the same time linking between the clients (e.g., Microservice1) and the library. Under the hood, they create a symbolic link. In development mode, changes are propagated immediately; at deployment time, the copy is grabbed from the registry.
Linking packages in a Monorepo
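With pnpm workspaces, for instance, the link is declared explicitly (a sketch; pnpm's 'workspace:' protocol resolves to a local symlink during development and is rewritten to a real SemVer range at publish time):

```json
{
  "name": "microservice1",
  "dependencies": {
    "library1": "workspace:^1.0.0"
  }
}
```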
If you’re doing the synchronized workflow, you’re all set. Only now any risky change that is introduced by Library3, must be handled NOW by the 10 Microservices that consume it.
If favoring the independent workflow, this is of course a big concern. Some may call this direct linking style a ‘monolith monorepo’, or maybe a ‘monolitho’. However, when not linking, it’s harder to debug a small issue between the Microservice and the npm library. What I typically do is temporarily link (with npm link) between the packages, debug, code, then finally remove the link.
Nx is taking a slightly more disruptive approach - it is using TypeScript paths to bind between the components. When Microservice1 is importing Library1, to avoid the full local path, it creates a TypeScript mapping between the library name and the full path. But wait a minute, there is no TypeScript in production, so how could it work? Well, at serving/bundling time it webpacks and stitches the components together. Not a very standard way of doing Node.js work.
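Roughly, that mapping looks like this (a sketch of an Nx-style tsconfig.base.json at the Monorepo root; the package and folder names are illustrative):

```json
{
  "compilerOptions": {
    "paths": {
      "@myorg/library1": ["libs/library1/src/index.ts"]
    }
  }
}
```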
Closing: What should you use?
It’s all about your workflow and architecture — a huge unseen cross-road stands in front of the Monorepo tooling decision.
Scenario A — If your architecture dictates a synchronized workflow where all packages are deployed together, or at least developed in collaboration — then there is a strong need for a rich tool to manage this coupling and boost the performance. In this case, Nx might be a great choice.
For example, if your Microservices must keep the same versioning, or if the team is really small and the same people are updating all the components, or if your modularization is not based on the package manager but rather on framework-own modules (e.g., Nest.js), if you're doing frontend where the components inherently are published together, or if your testing strategy relies mostly on E2E - for all of these cases and others, Nx is a tool that was built to enhance the experience of coding many relatively coupled components together. It is a great sugar coat over systems that are unavoidably big and linked.
If your system is not inherently big or meant to synchronize packages deployment, fancy Monorepo features might increase the coupling between components. The Monorepo pyramid above draws a line between basic features that provide value without coupling components while other layers come with an architectural price to consider. Sometimes climbing up toward the tip is worth the consequences, just make this decision consciously.
Scenario B - if you're into an independent workflow where each package is developed, tested, and deployed (almost) independently - then inherently there is no need for fancy tools to orchestrate hundreds of packages. Most of the time there is just one package in focus. This calls for picking a leaner and simpler tool - Turborepo. By going this route, Monorepo is not something that affects your architecture, but rather a scoped tool for faster build execution. One specific tool that encourages an independent workflow is Bilt by Gil Tayar; it's yet to gain enough popularity, but it might rise soon and is a great source to learn more about this philosophy of work.
In any scenario, consider workspaces - if you face performance issues that are caused by package installation, then the various workspace tools, Yarn/npm/PNPM, can greatly minimize this overhead with a low footprint. That said, if you're working in an autonomous workflow, the chances of facing such issues are smaller. Don't just use tools unless there is a pain.
We tried to show the beauty of each and where it shines. If we’re allowed to end this article with an opinionated choice: We greatly believe in an independent and autonomous workflow where the occasional developer of a package can code and deploy fearlessly without messing with dozens of other foreign packages. For this reason, Turborepo will be our favorite tool for the next season. We promise to tell you how it goes.
Bonus: Comparison table
See below a detailed comparison table of the various tools and features:
Preview only, the complete table can be found here
Yoni Goldberg
https://github.com/goldbergyoni

Michael Salomon
https://github.com/mikicho

https://practica.dev/blog/popular-nodejs-pattern-and-tools-to-reconsider
2022-08-02T10:00:00.000Z
+ Node.js is maturing. Many patterns and frameworks were embraced - it's my belief that developers' productivity dramatically increased in the past years. One downside of maturity is habits - we now reuse existing techniques more often. How is this a problem?
In his book 'Atomic Habits', the author James Clear states that:
"Mastery is created by habits. However, sometimes when we're on auto-pilot performing habits, we tend to slip up... Just because we are gaining experience through performing the habits does not mean that we are improving. We actually go backwards on the improvement scale with most habits that turn into auto-pilot". In other words, practice makes perfect, and bad practices make things worse
We copy-paste mentally and physically things that we are used to, but these things are not necessarily right anymore. Like animals who shed their shells or skin to adapt to a new reality, so the Node.js community should constantly gauge its existing patterns, discuss and change
Luckily, unlike other languages that are more committed to specific design paradigms (Java, Ruby) - Node is a house of many ideas. In this community, I feel safe to question some of our good-old tooling and patterns. The list below contains my personal beliefs, which are brought with reasoning and examples.
Are those disruptive thoughts surely correct? I'm not sure. There is one thing I'm sure about though - for Node.js to live longer, we need to encourage critique, focus our loyalty on innovation, and keep the discussion going. The outcome of this discussion is not "don't use this tool!" but rather becoming familiar with other techniques that, under some circumstances, might be a better fit
The True Crab's exoskeleton is hard and inflexible, he must shed his restrictive exoskeleton to grow and reveal the new roomier shell
1. Dotenv as your configuration source
💁‍♂️ What is it about: A super popular technique in which the app's configurable values (e.g., DB user name) are stored in a simple text file. Then, when the app loads, the dotenv library sets all the text file values as environment variables so the code can read them
```javascript
// .env file
USER_SERVICE_URL=https://users.myorg.com

// start.js
require('dotenv').config();

// blog-post-service.js
repository.savePost(post);
// update the user's number of posts, read the users service URL from an environment variable
await axios.put(`${process.env.USER_SERVICE_URL}/api/user/${post.userId}/incrementPosts`);
```
📊 How popular: 21,806,137 downloads/week!
🤔 Why it might be wrong: Dotenv is so easy and intuitive to start with that one might easily overlook fundamental features: for example, it's hard to infer the configuration schema and realize the meaning of each key and its typing. Consequently, there is no built-in way to fail fast when a mandatory key is missing - a flow might fail after starting and present some side effects (e.g., DB records were already mutated before the failure). In the example above, the blog post will be saved to the DB, and only then will the code realize that a mandatory key is missing - this leaves the app hanging in an invalid state. On top of this, in the presence of many keys, it's impossible to organize them hierarchically. If that's not enough, it encourages developers to commit this .env file, which might contain production values - this happens because there is no clear way to define development defaults. Teams usually work around this by committing a .env.example file and then asking whoever pulls the code to rename it manually. If they remember to, of course
☀️ Better alternative: Some configuration libraries provide an out-of-the-box solution to all of these needs. They encourage a clear schema and the possibility to validate early and fail if needed. See a comparison of options here. One of the better alternatives is 'convict'; down below is the same example, this time with Convict, hopefully it's better now:
```javascript
// config.js
export default {
  userService: {
    url: {
      // Hierarchical, documented and strongly typed 👇
      doc: "The URL of the user management service including a trailing slash",
      format: "url",
      default: "http://localhost:4001",
      nullable: false,
      env: "USER_SERVICE_URL",
    },
  },
  // more keys here
};

// start.js
import convict from "convict";
import configSchema from "./config";

const config = convict(configSchema);
// Fail fast!
config.validate();

// blog-post.js
repository.savePost(post);
// Will never arrive here if the URL is not set
await axios.put(
  `${config.get("userService.url")}/api/user/${post.userId}/incrementPosts`
);
```
2. Calling a 'fat' service from the API controller
💁♂️ What is it about: Consider a reader of our code who wishes to understand the entire high-level flow or delve into a very specific part. She first lands on the API controller, where requests start. Unlike what its name implies, this controller layer is just an adapter and kept really thin and straightforward. Great thus far. Then the controller calls a big 'service' with thousands of lines of code that represent the entire logic
```javascript
// user-controller.js
router.post("/", async (req, res, next) => {
  await userService.add(req.body);
  // Might have here try-catch or error response logic
});

// user-service.js
export function add(newUser) {
  // Want to understand quickly? Need to understand the entire user service, 1500 loc
  // It uses technical language and reuses narratives of other flows
  this.copyMoreFieldsToUser(newUser);
  const doesExist = this.updateIfAlreadyExists(newUser);
  if (!doesExist) {
    addToCache(newUser);
  }
  // 20 more lines that demand navigating to other functions in order to get the intent
}
```
📊 How popular: It's hard to pull solid numbers here, but I can confidently say that in most of the apps that I see, this is the case
🤔 Why it might be wrong: We're here to tame complexity. One of the useful techniques is deferring complexity to the latest stage possible. In this case though, the reader of the code (hopefully) starts her journey through the tests and the controller - things are simple in these areas. Then, as she lands on the big service, she gets tons of complexity and small details, although she is focused on understanding the overall flow or some specific logic. This is unnecessary complexity
☀️ Better alternative: The controller should call a particular type of service, a use-case, which is responsible for summarizing the flow in business and simple language. Each flow/feature is described using a use-case, each containing 4-10 lines of code that tell the story without technical details. It mostly orchestrates other small services, clients, and repositories that hold all the implementation details. With use-cases, the reader can grasp the high-level flow easily. She can now choose where she would like to focus. She is now exposed only to necessary complexity. This technique also encourages partitioning the code into the smaller objects that the use-case orchestrates. Bonus: by looking at coverage reports, one can tell which features are covered, not just files/functions
This idea, by the way, is formalized in the 'clean architecture' book - I'm not a big fan of 'fancy' architectures, but see - it's worth cherry-picking techniques from every source. You may walk through our Node.js best practices starter, Practica.js, and examine the use-cases code. A sketch of such a use-case follows below
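To make this concrete, here is a hypothetical use-case (all names are illustrative) - note how it tells the story in business language and delegates every detail:

```javascript
// add-user-use-case.js - a sketch, the orchestrated services are hypothetical
export async function addUserUseCase(newUser) {
  const validatedUser = validateNewUser(newUser);
  const savedUser = await userRepository.save(validatedUser);
  await paymentTermsService.createDefaultTerms(savedUser.id);
  await emailService.sendWelcome(savedUser.email);
  return savedUser;
}
```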
3. Nest.js: Wire everything with dependency injection
💁♂️ What is it about: If you're doing Nest.js, besides having a powerful framework in your hands, you probably use DI for everything and make every class injectable. Say you have a weather-service that depends upon humidity-service, and there is no requirement to swap the humidity-service with alternative providers. Nevertheless, you inject humidity-service into the weather-service. It becomes part of your development style, "why not" you think - I may need to stub it during testing or replace it in the future
```typescript
// humidity-service.ts - not customer facing
@Injectable()
export class GoogleHumidityService {
  async getHumidity(when: Datetime): Promise<number> {
    // Fetches from some specific cloud service
  }
}

// weather-service.ts - customer facing
import { GoogleHumidityService } from "./humidity-service.ts";

export type weatherInfo = { temperature: number; humidity: number };

export class WeatherService {
  constructor(private humidityService: GoogleHumidityService) {}

  async GetWeather(when: Datetime): Promise<weatherInfo> {
    // Fetch temperature from somewhere and then humidity from GoogleHumidityService
  }
}

// app.module.ts
@Module({
  providers: [GoogleHumidityService, WeatherService],
})
export class AppModule {}
```
📊 How popular: No numbers here, but I can confidently say that in all of the Nest.js apps that I've seen, this is the case. In the popular 'nestjs-realworld-example-app' all the services are 'injectable'
🤔 Why it might be wrong: Dependency injection is not a cost-free coding style but a pattern you should pull in at the right moment, like any other pattern. Why? Because any pattern has a price. What price, you ask? First, encapsulation is violated. Clients of the weather-service are now aware that other providers are being used internally. Some clients may get tempted to override providers, although it's not under their responsibility. Second, it's another layer of complexity to learn, to maintain, and one more way to shoot yourself in the foot. StackOverflow owes some of its revenues to Nest.js DI - plenty of discussions try to solve this puzzle (e.g., did you know that in case of circular dependencies the order of imports matters?). Third, there is the performance thing - Nest.js, for example, struggled to provide a decent start time for serverless environments and had to introduce lazy loaded modules. Don't get me wrong, in some cases there is a good case for DI: when a need arises to decouple a dependency from its caller, or to allow clients to inject custom implementations (e.g., the strategy pattern). In such a case, when there is value, you may consider whether that value of DI is worth its price. If you don't have this case, why pay for nothing?
I recommend reading the first paragraphs of the blog post 'Dependency Injection is EVIL' (though I absolutely don't agree with its bold words)
☀️ Better alternative: 'Lean-ify' your engineering approach - avoid using any tool unless it serves a real-world need immediately. Start simple: a dependent class should simply import its dependency and use it - yeah, using the plain Node.js module system ('require'). Facing a situation where dynamic object creation is needed? There are a handful of simple patterns, simpler than DI, that you should consider, like 'if/else', a factory function, and more. Are singletons requested? Consider techniques with lower costs like the module system with a factory function (see the sketch after the next code block). Need to stub/mock for testing? Monkey patching might be better than DI: better to clutter your test code a bit than your production code. Have a strong need to hide from an object where its dependencies are coming from? You sure? Use DI!
```typescript
// humidity-service.ts - not customer facing
export async function getHumidity(when: Datetime): Promise<number> {
  // Fetches from some specific cloud service
}

// weather-service.ts - customer facing
import { getHumidity } from "./humidity-service.ts";

// ✅ No wiring is happening externally, all is flat and explicit. Simple
export async function getWeather(when: Datetime): Promise<number> {
  // Fetch temperature from somewhere and then humidity from GoogleHumidityService
  // Nobody needs to know about it, it's an implementation detail
  await getHumidity(when);
}
```
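And when a singleton or lazy construction is genuinely needed, the module system plus a factory function can get you far before reaching for DI (a sketch; `createConnection` is a hypothetical helper):

```javascript
// db-connection.js - module caching makes this a de-facto singleton
let connection;

export function getDbConnection() {
  if (!connection) {
    connection = createConnection(process.env.DB_URL); // hypothetical helper
  }
  return connection;
}
```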
My name is Yoni Goldberg, I'm a Node.js developer and consultant. I wrote a few code-books like JavaScript testing best practices and Node.js best practices (100,000 stars ✨🥹). That said, my best guide is Node.js testing practices, which only a few have read 😞. I shall release an advanced Node.js testing course soon and also hold workshops for teams. I'm also a core maintainer of Practica.js, a Node.js starter that creates a production-ready example Node Monorepo solution based on standards and simplicity. It might be your primary option when starting a new Node.js solution
4. Passport.js for token authentication
💁‍♂️ What is it about: Commonly, you need to issue and/or authenticate JWT tokens. Similarly, you might need to allow login from one single social network like Google/Facebook. When faced with these kinds of needs, Node.js developers rush to the glorious library Passport.js like butterflies are attracted to light
📊 How popular: 1,389,720 weekly downloads
🤔 Why it might be wrong: When tasked with guarding your routes with a JWT token - you're just a few lines of code shy of ticking the goal. Instead of messing with a new framework, instead of introducing levels of indirection (you call Passport, then it calls you), instead of spending time learning new abstractions - use a JWT library directly. Libraries like jsonwebtoken or fast-jwt are simple and well maintained. Have concerns about the security hardening? Good point, your concerns are valid. But would you not get better hardening with a direct understanding of your configuration and flow? Will hiding things behind a framework help? Even if you prefer the hardening of a battle-tested framework, Passport doesn't handle a handful of security risks like secrets/token management, secured user management, DB protection, and more. My point: you probably need a fully-featured user and authentication management platform anyway. Various cloud services and OSS projects can tick all of those security concerns. Why then start in the first place with a framework that doesn't satisfy your security needs? It seems like many who opt for Passport.js are not fully aware of which needs are satisfied and which are left open. All of that said, Passport definitely shines when looking for a quick way to support many social login providers
☀️ Better alternative: Is token authentication in order? These few lines of code below might be all you need. You may also glimpse into Practica.js' wrapper around these libraries. A real-world project at scale typically needs more: supporting async JWT (JWKS) and securely managing and rotating the secrets, to name a few examples. In this case, OSS solutions like keycloak (https://github.com/keycloak/keycloak) or commercial options like Auth0 (https://github.com/auth0) are alternatives to consider
```javascript
// jwt-middleware.js, a simplified version - refer to Practica.js to see some more corner cases
const middleware = (req, res, next) => {
  if (!req.headers.authorization) {
    return res.sendStatus(401);
  }
  jwt.verify(req.headers.authorization, options.secret, (err, jwtContent) => {
    if (err) {
      return res.sendStatus(401);
    }
    req.user = jwtContent.data;
    next();
  });
};
```
5. Supertest for API/integration testing
💁‍♂️ What is it about: When testing against an API (i.e., component, integration, E2E tests), the library supertest provides a sweet syntax that can detect the web server's address, make HTTP calls, and also assert on the response. Three in one
test("When adding invalid user, then the response is 400",(done)=>{ const request =require("supertest"); const app =express(); // Arrange const userToAdd ={ name:undefined, }; // Act request(app) .post("/user") .send(userToAdd) .expect("Content-Type",/json/) .expect(400, done); // Assert // We already asserted above ☝🏻 as part of the request });
📊 How popular: 2,717,744 weekly downloads
🤔 Why it might be wrong: You already have your assertion library (Jest? Chai?), it has a great error highlighting and comparison - you trust it. Why code some tests using another assertion syntax? Not to mention, Supertest's assertion errors are not as descriptive as Jest and Chai. It's also cumbersome to mix HTTP client + assertion library instead of choosing the best for each mission. Speaking of the best, there are more standard, popular, and better-maintained HTTP clients (like fetch, axios and other friends). Need another reason? Supertest might encourage coupling the tests to Express as it offers a constructor that gets an Express object. This constructor infers the API address automatically (useful when using dynamic test ports). This couples the test to the implementation and won't work in the case where you wish to run the same tests against a remote process (the API doesn't live with the tests). My repository 'Node.js testing best practices' holds examples of how tests can infer the API port and address
☀️ Better alternative: A popular and standard HTTP client library like Node.js fetch or Axios. In Practica.js (a Node.js starter that packs many best practices) we use Axios. It allows us to configure an HTTP client that is shared among all the tests: we bake in a JWT token, headers, and a base URL. Another good pattern that we look at is making each Microservice generate an HTTP client library for its consumers. This brings a strongly-typed experience to the clients, synchronizes the provider-consumer versions, and as a bonus - the provider can test itself with the same library that its consumers are using
test("When adding invalid user, then the response is 400 and includes a reason",(done)=>{ const app =express(); // Arrange const userToAdd ={ name:undefined, }; // Act const receivedResponse = axios.post( `http://localhost:${apiPort}/user`, userToAdd ); // Assert // ✅ Assertion happens in a dedicated stage and a dedicated library expect(receivedResponse).toMatchObject({ status:400, data:{ reason:"no-name", }, }); });
6. Fastify decorate for non request/web utilities
💁‍♂️ What is it about: Fastify introduces great patterns. Personally, I highly appreciate how it preserves the simplicity of Express while bringing more batteries. One thing that got me wondering is the 'decorate' feature, which allows placing common utilities/services inside a widely accessible container object. I'm referring here specifically to the case where a cross-cutting-concern utility/service is being used. Here is an example:
```javascript
// An example of a utility that is a cross-cutting concern. Could be a logger or anything else
fastify.decorate("metricsService", {
  fireMetric: (name) => {
    // My code that sends metrics to the monitoring system
  },
});

fastify.get("/api/orders", async function (request, reply) {
  this.metricsService.fireMetric({ name: "new-request" });
  // Handle the request
});

// my-business-logic.js
export function calculateSomething() {
  // How to fire a metric? 🤔
}
```
It should be noted that 'decoration' is also used to place values (e.g., user) inside a request - this is a slightly different case and a sensible one
📊 How popular: Fastify has 696,122 weekly downloads and is growing rapidly. The decorator concept is part of the framework's core
🤔 Why it might be wrong: Some services and utilities serve cross-cutting-concern needs and should be accessible from other layers like the domain (i.e., business logic, DAL). When placing utilities inside this object, the Fastify object might not be accessible to these layers. You probably don't want to couple your web framework with your business logic: consider that some of your business logic and repositories might get invoked from non-REST clients like CRON jobs, MQ, and similar - in these cases, Fastify won't get involved at all, so better not to trust it to be your service locator
☀️ Better alternative: A good old Node.js module is a standard way to expose and consume functionality. Need a singleton? Use the module system caching. Need to instantiate a service in correlation with a Fastify life-cycle hook (e.g., DB connection on start)? Call it from that Fastify hook. In the rare case where a highly dynamic and complex instantiation of dependencies is needed - DI is also a (complex) option to consider
```javascript
// ✅ A simple usage of good old Node.js modules
// metrics-service.js
export async function fireMetric(name) {
  // My code that sends metrics to the monitoring system
}

// API route
import { fireMetric } from "./metrics-service.js";

fastify.get("/api/orders", async function (request, reply) {
  fireMetric({ name: "new-request" });
});

// my-business-logic.js
import { fireMetric } from "./metrics-service.js";

export function calculateSomething() {
  fireMetric({ name: "new-request" });
}
```
7. Logging from a catch clause
💁‍♂️ What is it about: You catch an error somewhere deep in the code (not on the route level), then call logger.error to make this error observable. Seems simple and necessary
📊 How popular: Hard to put my hands on numbers but it's quite popular, right?
🤔 Why it might be wrong: First, errors should get handled/logged in a central location. Error handling is a critical path. Various catch clauses are likely to behave differently without a centralized and unified behavior. For example, a requirement might arise to tag all errors with certain metadata, or on top of logging, to also fire a monitoring metric. Applying these requirements in ~100 locations is not a walk in the park. Second, catch clauses should be minimized to particular scenarios. By default, the natural flow of an error is bubbling down to the route/entry-point - from there, it will get forwarded to the error handler. Catch clauses are more verbose and error-prone - therefore they should serve two very specific needs: when one wishes to change the flow based on the error, or to enrich the error with more information (which is not the case in this example)
☀️ Better alternative: By default, let the error bubble down the layers and get caught by the entry-point global catch (e.g., Express error middleware). In cases when the error should trigger a different flow (e.g., retry) or there is value in enriching the error with more context - use a catch clause. In this case, ensure the .catch code also reports to the error handler
```javascript
// A case where we wish to retry upon failure
try {
  await axios.post("https://thatService.io/api/users");
} catch (error) {
  // ✅ A central location that handles errors
  errorHandler.handle(error, this, { operation: "addNewOrder" });
  callTheUserService(numOfRetries++);
}
```
8. Morgan logger for express request logging
💁‍♂️ What is it about: In many web apps, you are likely to find a pattern that has been copy-pasted for ages - using the Morgan logger to log request information:
```javascript
const express = require("express");
const morgan = require("morgan");

const app = express();

app.use(morgan("combined"));
```
📊 How popular: 2,901,574 downloads/week
🤔 Why it might be wrong: Wait a second, you already have your main logger, right? Is it Pino? Winston? Something else? Great. Why deal with and configure yet another logger? I do appreciate the HTTP domain-specific language (DSL) of Morgan. The syntax is sweet! But does it justify having two loggers?
☀️ Better alternative: Put your chosen logger in a middleware and log the desired request/response properties:
```javascript
// ✅ Use your preferred logger for all the tasks
const logger = require("pino")();

app.use((req, res, next) => {
  res.on("finish", () => {
    logger.info(`${req.url} ${res.statusCode}`); // Add other properties here
  });
  next();
});
```
9. Having conditional code based on NODE_ENV value
💁‍♂️ What is it about: To differentiate between development and production configuration, it's common to set the environment variable NODE_ENV to "production"/"test". Doing so allows the various tooling to act differently. For example, some templating engines will cache compiled templates only in production. Beyond tooling, custom applications use this to specify behaviors that are unique to the development or production environment:
```javascript
if (process.env.NODE_ENV === "production") {
  // This is unlikely to be tested since test runners usually set NODE_ENV=test
  setLogger({ stdout: true, prettyPrint: false });
  // If this code branch above exists, why not add more production-only configurations:
  collectMetrics();
} else {
  setLogger({ splunk: true, prettyPrint: true });
}
```
📊 How popular: 5,034,323 code results in GitHub when searching for "NODE_ENV". It doesn't seem like a rare pattern
🤔 Why it might be wrong: Anytime your code checks whether it's production or not, this branch won't get hit by default in some test runners (e.g., Jest sets NODE_ENV=test). In any test runner, the developer must remember to test for each possible value of this environment variable. In the example above, collectMetrics() will be tested for the first time in production. Sad smiley. Additionally, putting these conditions in opens the door to adding more differences between production and the developer machine - when this variable and these conditions exist, a developer gets tempted to put some logic in for production only. Theoretically, this can be tested: one can set NODE_ENV = "production" in testing and cover the production branches (if she remembers...). But then, if you can test with NODE_ENV='production', what's the point in separating? Just consider everything to be 'production' and avoid this error-prone mental load
☀️ Better alternative: Any code that was written by us must be tested. This implies avoiding any form of if(production)/else(development) conditions. Wouldn't the developer's machine anyway have different surrounding infrastructure than production (e.g., logging system)? They do, the environments are quite different, but we feel comfortable with it. These infrastructural things are battle-tested, extraneous, and not part of our code. To keep the same code between dev/prod and still use different infrastructure - we put different values in the configuration (not in the code). For example, a typical logger emits JSON in production but on a development machine it emits 'pretty-print' colorful lines. To meet this, we set an ENV VAR that tells what logging style we aim for:
```javascript
// package.json
"scripts": {
  "start": "LOG_PRETTY_PRINT=false node index.js",
  "test": "LOG_PRETTY_PRINT=true jest"
}

// index.js
// ✅ No condition, same code for all the environments. The variations are defined
// externally in config or deployment files. Note: env vars are strings, hence the comparison
setLogger({ prettyPrint: process.env.LOG_PRETTY_PRINT === "true" });
```
I hope that these thoughts, at least one of them, made you re-consider adding a new technique to your toolbox. In any case, let's keep our community vibrant, disruptive and kind. Respectful discussions are almost as important as the event loop. Almost.
Although Node.js has great frameworks 💚, they were never meant to be production ready immediately. Practica.js aims to bridge the gap. Based on your preferred framework, we generate some example code that demonstrates a full workflow, from API to DB, that is packed with good practices. For example, we include a hardened dockerfile, N-Tier folder structure, great testing templates, and more. This saves a great deal of time and can prevent painful mistakes. All decisions made are neatly and thoughtfully documented. We strive to keep things as simple and standard as possible and base our work off the popular guide: Node.js Best Practices.
Your developer experience would look as follows: generate our starter using the CLI and get an example Node.js solution. This solution is a typical Monorepo setup with an example Microservice and libraries. All is based on super-popular libraries that we merely stitch together. It also contains tons of optimizations - linters, libraries, Monorepo configuration, tests, and much more. Inside the example Microservice you'll find an example flow, from API to DB. Based on this, you can modify the entity and DB fields and build your app.
When was the last time you introduced a new pattern to your code? The use-case pattern is a great candidate: it's powerful, sweet, easy to implement, and can strategically elevate your backend code quality in a short time.
The term 'use case' means many different things in our industry. It's used by product folks to describe a user journey and mentioned by various famous architecture books to describe vague high-level concepts. This article focuses on its practical application at the code level, emphasizing its surprising merits and how to implement it correctly.
Technically, the use-case pattern code belongs between the controller (e.g., API routes) and the business logic services (like those calculating or saving data). The use-case code is called by the controller and tells, in high-level words and a simple manner, the flow that is about to happen. Doing so increases code readability and navigability, pushes complexity toward the edges, improves observability, and brings three other merits that are shown below with examples.
But before we delve into its mechanics, let's first touch on a common problem it aims to address and see some code that calls for trouble.
Prefer a 10 min video? Watch here, or keep reading below
Imagine a developer, returning to a codebase she hasn't touched in months, tasked with fixing a bug in the 'new orders flow'—specifically, an issue with price calculation in an electronic shop app.
Her journey begins promisingly smooth:
- 🤗 Testing - She starts her journey from the automated tests to learn about the flow with an outside-in approach. The testing code is short and standard, as it should be:
test("When adding an order with 100$ product, then the price charge should be 100$ ",async()=>{ // .... })
- 🤗 Controller - She moves to skim through the implementation and starts from the API routes. Unsurprisingly, the Controller code is straightforward:
app.post("/api/order",async(req:Request,res:Response)=>{ const newOrder = req.body; await orderService.addOrder(newOrder);// 👈 This is where the real-work is done res.status(200).json({message:"Order created successfully"}); });
Smooth sailing thus far, almost zero complexity. Typically, the controller would now hand off to a Service where the real implementation begins, so she navigates into the order service to find where and how to fix that pricing bug.
- 😲 The service - Suddenly! She is thrown into hundreds of lines of code (at best) with tons of details. She encounters classes with intricate states, inheritance hierarchies, a dependency injection framework that wires all the dependent services, and other boilerplate code. Here is a sneak peek from a real-world service, already simplified for brevity. Read it, feel it:
```typescript
let DBRepository;
export class OrderService extends ServiceBase<OrderDto> {
  async addOrder(orderRequest: OrderRequest): Promise<Order> {
    try {
      ensureDBRepositoryInitialized();
      const { openTelemetry, monitoring, secretManager, priceService, userService } =
        dependencyInjection.getVariousServices();
      logger.info("Add order flow starts now", orderRequest);
      openTelemetry.sendEvent("new order", orderRequest);
      const validationRules = await getFromConfigSystem("order-validation-rules");
      const validatedOrder = validateOrder(orderRequest, validationRules);
      if (!validatedOrder) {
        throw new Error("Invalid order");
      }
      this.base.startTransaction();
      const user = await userService.getUserInfo(validatedOrder.customerId);
      if (!user) {
        const savedOrder = await tryAddUserWithLegacySystem(validatedOrder);
        return savedOrder;
      }
      // And it goes on and on until the pricing module is mentioned
    }
  }
}
```
So many details and things to learn upfront; which of them is crucial for her to learn now, before dealing with her task? How can she find that pricing module?
She is not happy. Right off the bat, she must acquaint herself with a handful of product and technical narratives. She just fell off the complexity cliff: from a zero-complexity controller straight into a 1000-piece puzzle, many pieces of which are unrelated to her task.
In a perfect world, she would love first to get a high-level brief of the involved steps so she can understand the whole flow, and from this comfort standpoint choose where to deepen her journey. This is what this pattern is all about.
The use-case is a file with a single function that is called by the API controller to orchestrate the various implementation services. It's merely a simple function that enumerates and calls the code that does the actual job:
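To make this tangible, here is a minimal sketch of such a file; the step functions are illustrative (the same names reappear in this article's later examples):

```typescript
// add-order-use-case.ts - a minimal sketch, the step functions are illustrative
export async function addOrderUseCase(orderRequest: OrderRequest): Promise<Order> {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderWithPricing);
  return mapFromRepositoryToDto(purchasingCustomer as unknown as OrderRecord);
}
```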
Each interaction with the system—whether it's posting a new comment, requesting user deletion, or any other action—is managed by a dedicated use-case function. Each use-case constitutes multiple 'steps' - function calls that fulfill the desired flow.
By design, it's short, flat, no if/else, no try-catch, no algorithms, just plain calls to functions. This way, it tells the story in the simplest manner. Note how it doesn't share too many details, but tells enough for one to understand 'WHAT' is happening here and 'WHO' is doing it, but not 'HOW'.
When seeking a specific book in the local library, the visitor doesn't have to skim through all the shelves to find a specific topic of interest. A Library, like any other information system, uses a navigational system, wayfinding signage, to highlight the path to a specific information area.
*The library catalog redirects the reader to the area of interest*
Similarly, in software development, when a developer needs to address a particular issue—such as fixing a bug in pricing calculations—the 'use case' acts like a navigational tool within the application. It serves as a hitchhiker's guide, or the yellow pages, pinpointing exactly where to find the necessary piece of code. While other organizational strategies like modularization and folder structures offer ways to manage code, the 'use case' approach provides a more focused and precise index: it shows only the relevant areas (and not 50 unrelated modules), it tells when precisely each module is used, what the specific entry point is, and which exact parameters are passed.
When the code reader's journey starts at the level of implementation services, she is immediately bombarded with intricate details. This immersion exposes her to both product and technical complexities right from the start. Typically, like in our example, the code first uses a dependency injection system to instantiate some classes, checks for nulls in the state, and gets some values from the distributed config system - all before even starting on the primary task. This is called accidental complexity. Tackling complexity is one of the finest arts of app design: as the code planner, you can't just eliminate complexity, but you may at least reduce the chances of someone meeting it.
Imagine your application as a tree where branches represent functions and the fruits are pockets of embedded complexity, some of which are poisoned (i.e., unnecessary complexities). Your objective is to structure this tree so that navigating through it exposes the visitor to as few poisoned fruits as possible:
*The accidental-complexity tree: A visitor aiming to reach a specific leaf must navigate through all the intervening poisoned fruits*
This is where the 'Use Case' approach shines: by putting the high-level product steps first and keeping technical details to a minimum at the outset, it acts as a navigation system that simplifies access to the various parts of the application. With this navigation tool, she can easily ignore steps that are unrelated to her work and avoid the poisoned fruits. A true strategic design win.
*The spread-complexity tree: Complexity is pushed to the periphery, allowing the reader to navigate directly to the essential fruits only*
When embarking on a new coding flow, where do you start? After digesting the requirements and setting up some initial API routes and high-level component tests, the next logical step might be less obvious. Here's a strategy: begin with a use-case. This approach promotes an outside-in workflow that not only streamlines development but also exposes potential risks early on.
While drafting a new use-case, you essentially map out the various steps of the process. Each step is a call to some service or repository function, sometimes before it even exists. Effortlessly and spontaneously, these steps become your TODO list, a live document that tells not only what should be implemented but also where risky gotchas hide. Take, for instance, this straightforward use-case for adding an order:
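A sketch of such a freshly drafted use-case; the step functions are hypothetical and some may not even exist yet - each call is effectively a TODO item:

```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  await assertCustomerExists(validatedOrder.customerId); // Calls the User Management team's Microservice
  const orderWithPricing = calculateOrderPricing(validatedOrder); // Pricing rules still unconfirmed
  const savedOrder = await insertOrder(orderWithPricing);
  await sendSuccessEmailToCustomer(savedOrder); // Requires an email service token from Ops
  return savedOrder;
}
```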
This structured approach allows you to preemptively tackle potential implementation hurdles:
- sendSuccessEmailToCustomer - What if you lack a necessary email service token from the Ops team? Sometimes, this demands approval and might take more than a week (believe me, I know). Acting now, before spending 3 days on coding, can make a big difference.
- calculateOrderPricing - Reminds you to confirm pricing details with the product team—ideally before they're out of office, avoiding delays that could impact your delivery timeline.
- assertCustomerExists - This call goes to an external Microservice which belongs to the User Management team. Did they already provide an OpenAPI specification of their routes? Check your Slack now; if they haven't yet, asking early can prevent this from becoming a roadblock later.
Not only does this high-level thinking highlight your tasks and risks, it's also an optimal spot to start the design from:
Early on when initiating a use-case, the developers define the various types, function signatures, and their initial skeleton return data. This process naturally evolves into an effective design drill where the overall flow is decomposed into small units that actually fit. This sketch-out surfaces early on when puzzle pieces don't fit, while considering the underlying technologies. Here is an example: once I sketched a use-case and initially came up with these steps:
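Here is a reconstruction of that initial sketch (hypothetical code, shown to illustrate the flaw described next):

```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  // ❗️The email step needs an order id, but the order hasn't been saved yet
  await sendSuccessEmailToCustomer(validatedOrder);
  const savedOrder = await insertOrder(validatedOrder);
  return savedOrder;
}
```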
Going with my initial use-case above, an email is sent before the order is saved. Soon enough the compiler yelled at me: the email function signature is not satisfied; an 'Order Id' parameter is needed, but to obtain one the order must be saved to the DB first. I tried to reorder the steps; unfortunately, it turned out that my ORM does not return the ID of saved entities. I'm stuck, my design struggles - at least this is realized before spending days on details. Unlike designing with papers and UML, designing with use-cases brings no overhead. Moreover, unlike high-level diagrams detached from implementation realities, use-case design is grounded in the actual constraints of the technology being used.
Say you have 82.35% test code coverage - are you happy and confident to deploy? I'd suggest that anyone below 100% must first clarify which code exactly is not covered by tests. Is it some nitty-gritty niche code, or actually critical business operations that are not fully tested? Typically, answering this requires scrutinizing the coverage of every app file, a daunting task.
Use-cases simplify the coverage digest: when looking directly into the use-cases folder, one gets 'feature coverage', a unique look into which user features and steps lack testing:
*The use-cases folder test coverage report, some use-cases are only partially tested*
See how the code above has excellent overall coverage, 82.35%. But what about the remaining 17.65%? Looking at the report triggers a red flag: the 'payment-use-case' is barely tested. This flow is where revenues are generated, a critical financial process which, as it turns out, has very low test coverage. This significant observation calls for immediate action. Use-case coverage thus not only helps in understanding what parts of your application are tested, but also prioritizes testing efforts based on business criticality rather than mere technical functionality.
The influential book "Domain-Driven Design" advocates for "committing the team to relentlessly exercise the domain language in all communications within the team and in the code." This principle asserts that aligning code closely with product narratives fosters a common language among diverse stakeholders (e.g., product, team-leads, frontend, backend). While this sounds sensible, this advice is also a little vague - how and where should this happen?
Use-cases bring this idea down to earth: the use-case files are named after user journeys in the system (e.g., purchase-new-goods), the use-case code itself naturally describes the flow in a product language. For instance, if employees commonly use the term 'cut' at the water cooler to refer to a price reduction, the corresponding use-case should employ a function named 'calculatePriceCut'. This naming convention not only reinforces the domain language but also enhances mutual understanding across the team.
I bet you've encountered the situation where you turn the log level to 'debug' (or any other verbose mode) and get a gazillion, overwhelming, unbearable log statements. Great chances are you've also met the opposite: setting the logger level to 'info' and finding almost zero logging for the specific route that you're looking into. It's hard to formalize among team members when exactly each type of logging should be invoked; the result is typically inconsistent and lacking observability.
Use-cases can drive trustworthy and consistent monitoring by taking advantage of the already-produced use-case steps. Since the precious work of breaking down the flow into meaningful steps was already done (e.g., send-email, charge-credit-card), each step can produce the desired level of logging. For example, one team's approach might be to emit logger.info on use-case start and use-case end, and then have each step emit logger.debug. Whatever the chosen level is, use-case steps bring consistency and automation. Logging aside, the same can be applied to any other observability technique, like OpenTelemetry, to produce custom spans for every flow step.
The implementation, though, demands some thought: cluttering every step with a log statement is both verbose and depends on manual human work:
```typescript
// ❗️Verbose use case
export async function addOrderUseCase(orderRequest: OrderRequest): Promise<Order> {
  logger.info("Add order use case - Adding order starts now", orderRequest);
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  logger.debug("Add order use case - The order was validated", validatedOrder);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  logger.debug("Add order use case - The order pricing was decided", validatedOrder);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderWithPricing);
  logger.debug("Add order use case - Verified the user balance already", purchasingCustomer);
  const returnOrder = mapFromRepositoryToDto(purchasingCustomer as unknown as OrderRecord);
  logger.info("Add order use case - About to return result", returnOrder);
  return returnOrder;
}
```
One way around this is creating a step wrapper function that makes it observable. This wrapper function will get called for each step:
```typescript
import { openTelemetry } from "@opentelemetry";

async function runUseCaseStep(stepName, stepFunction) {
  logger.debug(`Use case step ${stepName} starts now`);
  // Create Open Telemetry custom span
  openTelemetry.startSpan(stepName);
  return await stepFunction();
}
```
Now the use-case gets automated and consistent transparency:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const validatedOrder = await runUseCaseStep("Validation", validateAndCoerceOrder.bind(null, orderRequest));
  const orderWithPricing = await runUseCaseStep("Calculate price", calculateOrderPricing.bind(null, validatedOrder));
  await runUseCaseStep("Send email", sendSuccessEmailToCustomer.bind(null, orderWithPricing));
}
```
The code is a little simplified; in a real-world wrapper you'll have to put a try-catch and cover other corner cases, but it makes the point: each step is a meaningful milestone in the user's journey that gets automated and consistent observability.
Since use-cases are mostly about zero complexity, use no code constructs but flat calls to functions. No if/else, no switch, no try/catch, nothing - only a simple list of steps. A while ago, I decided to put just one if/else in a use-case:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  if (purchasingCustomer.isPremium) { // ❗️
    sendEmailToPremiumCustomer(purchasingCustomer); // This easily will grow with time to multiple if/else
  }
}
```
A month later, when I visited the code above, there were already three nested if/elses. A year from now, the function above will host typical imperative code with many nested branches. Avoid this slippery road by drawing a very strict border: put the conditions within the step functions, as sketched below:
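A sketch of that strict border, assuming a hypothetical step function that hides the condition:

```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  // ✅ The premium check now lives inside the step function; the use-case stays flat
  await sendEmailIfPremiumCustomer(purchasingCustomer);
}
```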
The finest art of a great use-case is finding the right level of detail. At this early stage, the reader is like a traveler who uses a map to get some sense of the area or to find a specific road - definitely not to learn about every road in the country. On the other hand, a good map doesn't show only the main highway and nothing else. For example, the following use-case is too short and vague:
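For illustration, a hypothetical sketch of a use-case that is too vague:

```typescript
// ❗️One opaque step that hides the entire flow
export async function addOrderUseCase(orderRequest: OrderRequest) {
  return await orderService.addOrder(orderRequest);
}
```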
The code above doesn't tell a story, nor does it eliminate any paths from the journey. Conversely, the following code does a better job of telling the story in brief:
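A hypothetical sketch of the same flow with just enough detail:

```typescript
// ✅ Tells WHAT happens and WHO does it, but not HOW
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  await assertCustomerHasEnoughBalance(orderWithPricing);
  const savedOrder = await insertOrder(orderWithPricing);
  await sendSuccessEmailToCustomer(savedOrder);
  return mapFromRepositoryToDto(savedOrder);
}
```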
Things get a little more challenging when dealing with long flows. What if there are a handful of important steps, say 20? What if multiple use-cases have a lot of repetition and shared steps? Consider the case where 'admin approval' is a multi-step process that is invoked by a handful of different use-cases. When facing this, consider breaking down into multiple use-cases where one is allowed to call the other, as sketched below.
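A sketch of one use-case delegating to another (hypothetical names):

```typescript
export async function approveBigOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  // A multi-step flow that several use-cases share, extracted into a use-case of its own
  await adminApprovalUseCase(validatedOrder);
  await notifyCustomerOnApproval(validatedOrder);
}
```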
3. When you have no choice, control the DB transaction from the use-case
What if step 2 and step 5 both deal with data and must be atomic (fail or succeed together)? Typically you'll handle this with DB transactions, but since each step is discrete, how can a transaction be shared among the coupled steps?
If the steps take place one after the other, it makes sense to let the downstream service/repository handle them together and abstract the transaction from the use-case. What if the atomic steps are not consecutive? In this case, though not ideal, there is no escape from making the use-case acquaintance with a transaction object:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const transaction = Repository.startTransaction();
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderRequest, transaction);
  const orderWithPricing = calculateOrderPricing(purchasingCustomer);
  const savedOrder = await insertOrder(orderWithPricing, transaction);
  const returnOrder = mapFromRepositoryToDto(savedOrder);
  Repository.commitTransaction(transaction);
  return returnOrder;
}
```
A use-case file is created per user flow that is triggered from an API route. This model makes sense for significant flows, but how about small operations like getting an order by id? A 'get-order-by-id' use-case is likely to have 1 line of code; it seems like unnecessary overhead to create a use-case file for every small request. In this case, consider aggregating multiple operations under a single conceptual use-case file. Here below, for example, all the order queries co-live under the query-orders use-case file:
```typescript
// query-orders-use-cases.ts
export async function getOrder(id) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const result = await orderRepository.getOrderByID(id);
  return result;
}

export async function getAllOrders(criteria) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const result = await orderRepository.queryOrders(criteria);
  return result;
}
```
If you find it valuable, you'll also get a great return for your modest investment: no fancy tooling is needed, and the learning time is close to zero (in fact, you just read one of the longest articles on this matter...). There is also no need to refactor a whole system; rather, you can gradually implement it per feature.
Once you become accustomed to using it, you'll find that this technique extends well beyond API routes. It's equally beneficial for managing message queue subscriptions and scheduled jobs. Backend aside, use it as the facade of every module or library - the code that is called by the entry file and orchestrates the internals. The same idea can be applied in the frontend as well: declare the core actors at the component's top level. Without implementation details, just put the references to the component's event handlers and hooks - now the reader knows about the key events that will drive this component.
You might think this all sounds remarkably straightforward—and it is. My apologies, this article wasn't about cutting-edge technologies. Neither did it cover shiny new dev tooling or AI-based rocket science. In a land where complexity is the key enemy, simple ideas can be more impactful than sophisticated tooling, and the use-case is a powerful and sweet pattern that's meant to live in every piece of software.
As a testing consultant, I have read tons of testing articles throughout the years. The majority are nice-to-read, casual pieces of content that are not always worth your precious time. Once in a while, not very often, I land on an article that is shockingly good and can genuinely improve your test writing skills. I've cherry-picked these outstanding articles for you and added my abstract nearby. Half of these articles relate directly to JavaScript/Node.js; the second half covers ubiquitous testing concepts that are applicable in every language
Why did I find these articles to be outstanding? First, the writing quality is excellent. Second, they deal with the 'new world of testing', not the commonly known 'TDD-ish' stuff but rather modern concepts and tooling
Too busy to read them all? Search for articles that are decorated with a medal 🏅 - these are true masterpieces of content that you never want to miss
Before we start: If you haven't heard, I launched my comprehensive Node.js testing course a week ago (curriculum here). There are less than 48 hours left for the 🎁 special launch deal
Here they are, 10 outstanding testing articles:
📄 1. 'Selective Unit Testing – Costs and Benefits'
✍️ Author: Steve Sanderson
🔖 Abstract: We all found ourselves at least once in the ongoing and flammable discussion about 'units' vs 'integration'. This article delves into a greater level of specificity and discusses WHEN unit tests shine by considering the costs of writing these tests under various scenarios. Many treat their testing strategy as a static model - a testing technique they always apply regardless of the context. "Always write unit tests against functions", "Write mostly integration tests" are the types of arguments often heard. Conversely, this article suggests that the attractiveness of unit tests should be evaluated based on the costs and benefits per module. The article classifies multiple scenarios where the net value of unit tests is high or low, for example:
If your code is basically obvious – so at a glance you can see exactly what it does – then additional design and verification (e.g., through unit testing) yields extremely minimal benefit, if any
The author also puts a 2x2 model to visualize when the attractiveness of unit tests is high or low
Side note, not part of the article: Personally I (Yoni) always start with component tests, outside-in, cover first the high-level user flow details (a.k.a the testing diamond). Then later once I have functions, I add unit tests based on their net value. This article helped me a lot in classifying and evaluating the benefits of units in various scenarios
🔖 Abstract: The author outlines, with a code example, the unavoidable tragic fate of a tester who asserts on implementation details. Put aside the effort of testing so many details; going this route always ends with 'false positives' and 'false negatives' that cloud the tests' reliability. The article illustrates this with a frontend code example, but the takeaway lesson is ubiquitous to any kind of testing
"There are two distinct reasons that it's important to avoid testing implementation details. Tests which test implementation details:
Can break when you refactor application code. False negatives
May not fail when you break application code. False positives"
🔖 Abstract: This one is the entire Microservices and distributed modern testing bible packed into a single long article that is also super engaging. I remember when I came across it four years ago, winter time; I spent an hour every day under my blanket before sleep with a smile spread over my face. I clicked on every link, paused after every paragraph to think - a whole new world was opening in front of me. In fact, it was so fascinating that it made me want to specialize in this domain. Fast forward, years later, this is a major part of my work and I enjoy every moment
This paper starts by explaining why E2E tests, unit tests and exploratory QA will fall short in a distributed environment. Not only this, it explains why any kind of coded test won't be enough and a rich toolbox of techniques is needed. It goes through a handful of modern testing techniques that are unfamiliar to most developers. One of its key parts deals with what should be the canonical developer's testing technique: the author advocates for "big unit tests" (i.e., component tests) as they strike a great balance between developer comfort and realism
I coined the term “step-up testing”, the general idea being to test at one layer above what’s generally advocated for. Under this model, unit tests would look more like integration tests (by treating I/O as a part of the unit under test within a bounded context), integration testing would look more like testing against real production, and testing in production looks more like, well, monitoring and exploration. The restructured test pyramid (test funnel?) for distributed systems would look like the following:
Beyond its main scope, whatever type of system you are dealing with, this article will broaden your perspective on testing and expose you to many new ideas that are highly applicable
👓 Read time: > 2 hours (10,500 words with many links)
📄 4. 'How to Unit Test with Node.js?' (JavaScript examples, for beginners)
✍️ Author: Ryan Jones
🔖 Abstract: One single recommendation for beginners: every other article on this list covers advanced testing. This article, and only this one, is meant for testing newbies who are looking to take their first practical steps in this world
This tutorial was chosen from a handful of alternatives because it's well-written and also relatively comprehensive. It covers the first-steps 'kata' that a beginner should learn about first: the test anatomy syntax, test runner CLIs, assertions and asynchronous tests. It goes without saying that this knowledge won't be sufficient for covering a real-world app with testing, but it gets you safely to the next phase. My personal advice: after reading this one, your next step is learning about test doubles (mocking)
🔖 Abstract: The article opens with 'I hear that people feel an uncontrollable urge to write unit tests nowadays. If you are one of those affected, spare a few minutes and consider these reasons for NOT writing unit tests'. Despite these words, the article is not against unit tests as a principle; rather, it highlights when & where unit tests fall short. In these cases, other techniques should be considered. Here is an example: unit tests inherently have a lower return on investment, and the author comes with a resonant analogy for this: 'If you are painting a house, you want to start with the biggest brush at hand and spare the tiny brush for the end, to deal with fine details. If you begin your QA work with unit tests, you are essentially trying to paint the entire house using the finest Chinese calligraphy brush...'
📄 6. 'Mocking is a Code Smell' (JavaScript examples)
✍️ Author: Eric Elliott
🔖 Abstract: Most of the articles here belong more to the 'modern wave of testing'; here is something more 'classic', appealing to TDD lovers or just anyone who needs to write unit tests. This article is about HOW to reduce the amount of mocking (test doubles) in your tests. Not only because mocking is an overhead in test writing, but also because it hints that something might be wrong. In other words, mocking is not definitely wrong and something to fix right away, but a lot of mocking is a sign of something not ideal. Consider a module that inherits from many others, or a chatty one that collaborates with a handful of other modules to do its job - testing and changing this structure is a burden:
"Mocking is required when our decomposition strategy has failed"
The author goes through a variety of techniques to design more autonomous units, like using pure functions by isolating side-effects from the rest of the program logic, using pub/sub, isolating I/O, composing units with patterns like monadic compositions, and some more
The overall article tone is balanced. In some parts, it encourages functional programming and techniques that are far from the mainstream - consider reading these few parts with a grain of salt
🔖 Abstract: I love this one so much. The author exemplifies how, unexpectedly, it is sometimes the good developers with their great intentions who write bad tests:
Too often, software developers approach unit testing with the same flawed thinking... They mechanically apply all the “rules” they learned in production code without examining whether they’re appropriate for tests. As a result, they build skyscrapers at the beach
Concrete code examples show how test readability deteriorates once we apply 'skyscraper' thinking, and how to keep it simple. In one part, he demonstrates how violating the DRY principle thoughtfully allows the reader to stay within the test while still keeping the code maintainable. This article alone, in 11 minutes, can greatly improve the tests of developers who tend to write sophisticated tests. If you have someone like this on your team, you now know what to do
📄 8. 'An Overview of JavaScript Testing in 2022' (JavaScript examples)
✍️ Author: Vitali Zaidman
🔖 Abstract: This paper is unique here as it doesn't cover a single topic but is rather a rundown of (almost) all JavaScript testing tools. This allows you to enrich the toolbox in your mind and have more screwdrivers for more types of screws. For example, knowing that there are IDE extensions that show coverage information right within the code might help you boost test adoption in the team, if needed. Knowing that there are solid, free, and open source visual regression tools might encourage you to dip your toes in this water, to name a few examples.
"We reviewed the most trending testing strategies and tools in the web development community and hopefully made it easier for you to test your sites. In the end, the best decisions regarding application architecture today are made by understanding general patterns that are trending in the very active community of developers, and combining them with your own experience and the characteristics of your application."
The author was also kind enough to leave pros/cons nearby most tools so the reader can quickly get a sense of how the various options stack up against each other. The article covers categories like assertion libraries, test runners, code coverage tools, visual regression tools, E2E suites and more
🔖 Abstract: 'Testing in production' is a provocative term that sounds like a risky and careless approach of testing on production instead of verifying the delivery beforehand (yet another case of bad testing terminology). In practice, testing in production doesn't replace coding-time testing; it just adds an additional layer of confidence by safely testing in 3 more phases: deployment, release and post-release. This comprehensive article covers dozens of techniques, some unusual, like traffic shadowing, tap compare and more. More than anything else, it illustrates a holistic testing workflow that builds confidence cumulatively, from the developer machine until the new version is serving users in production
I’m more and more convinced that staging environments are like mocks - at best a pale imitation of the genuine article and the worst form of confirmation bias.
It’s still better than having nothing - but “works in staging” is only one step better than “works on my machine”.
📄 10. 'Please don't mock me' (JavaScript examples, from JSConf)
🏅 This is a masterpiece
✍️ Author: Justin Searls
🔖 Abstract: This fantastic YouTube talk deals with the Achilles heel of testing: where exactly to mock. The dilemma of where to end the test scope, what should be mocked and what shouldn't, is presumably the most strategic test design decision. Consider, for example, having module A which interacts with module B. If you isolate A by mocking B, A's tests will always pass, even when B's interface has changed and A's code didn't follow. This makes A's tests highly stable but... production will fail in hours. In his talk, Justin says:
"A test that never fails is a bad test because it doesn't tell you anything. Design tests to fail"
Then he goes on and tackles many other interesting mocking crossroads, with beautiful visuals and tons of insights. Please don't miss this one
Here are a few articles that I wrote. Obviously I don't 'recommend' my own craft, just checking modestly whether they appeal to you. Together, these articles gained 25,000 GitHub stars; maybe you'll find one of them useful?
This post is about tests that are easy to write, typically 5-8 lines, that cover dark and dangerous corners of our applications but are often overlooked
Some context first: How do we test a modern backend? With the testing diamond, of course, by putting the focus on component/integration tests that cover all the layers, including a real DB. With this approach, our tests closely resemble production and the real user flows, while the development experience is almost as good as with unit tests. Sweet. If this topic is of interest, we've also written a guide with 50 best practices for integration tests in Node.js
But there is a pitfall: most developers write only happy and semi-happy test cases that focus on the core user flows - invalid inputs, CRUD operations, various application states, etc. This is indeed the bread and butter, a great start, but a whole area is left uncovered. For example, typical tests don't simulate an unhandled promise rejection that leads to a process crash, nor do they simulate the webserver bootstrap phase that might fail and leave the process idle, or HTTP calls to external services that often end with timeouts and retries. They typically don't cover the health and readiness routes, nor the integrity of the OpenAPI spec against the actual routes schema, to name just a few examples. There are many dead bodies buried beyond business logic - things that sometimes go even beyond bugs and concern application downtime
Here are a handful of examples that might open your mind to a whole new class of risks and tests
July 2023: My testing course was launched: I've just released a comprehensive testing course that I've been working on for two years. 🎁 It's now on sale, but only for the month of July. Check it out at testjavascript.com
👉What & so what? - In all of your tests, you assume that the app has already started successfully, lacking a test against the initialization flow. This is a pity because this phase hides some potential catastrophic failures: First, initialization failures are frequent - many bad things can happen here, like a DB connection failure or a new version that crashes during deployment. For this reason, runtime platforms (like Kubernetes and others) encourage components to signal when they are ready (see readiness probe). Errors at this stage also have a dramatic effect over the app health - if the initialization fails and the process stays alive, it becomes a 'zombie process'. In this scenario, the runtime platform won't realize that something went bad, forward traffic to it and avoid creating alternative instances. Besides exiting gracefully, you may want to consider logging, firing a metric, and adjusting your /readiness route. Does it work? only test can tell!
📝 Code
Code under test, api.js:
```javascript
// A common express server initialization
const startWebServer = () => {
  return new Promise((resolve, reject) => {
    try {
      // A typical Express setup
      expressApp = express();
      defineRoutes(expressApp); // a function that defines all routes
      expressApp.listen(process.env.WEB_SERVER_PORT, () => resolve(expressApp));
    } catch (error) {
      // log here, fire a metric, maybe even retry and finally:
      process.exit();
    }
  });
};
```
The test:
```javascript
const api = require('./entry-points/api'); // our api starter that exposes 'startWebServer' function
const sinon = require('sinon'); // a mocking library
const routes = require('./entry-points/routes'); // the module that holds 'defineRoutes' (path is illustrative)

test('When an error happens during the startup phase, then the process exits', async () => {
  // Arrange
  const processExitListener = sinon.stub(process, 'exit');
  // 👇 Choose a function that is part of the initialization phase and make it fail
  sinon
    .stub(routes, 'defineRoutes')
    .throws(new Error('Cant initialize connection'));

  // Act
  await api.startWebServer();

  // Assert
  expect(processExitListener.called).toBe(true);
});
```
👉What & why - For many, testing error means checking the exception type or the API response. This leaves one of the most essential parts uncovered - making the error correctly observable. In plain words, ensuring that it's being logged correctly and exposed to the monitoring system. It might sound like an internal thing, implementation testing, but actually, it goes directly to a user. Yes, not the end-user, but rather another important one - the ops user who is on-call. What are the expectations of this user? At the very basic level, when a production issue arises, she must see detailed log entries, including stack trace, cause and other properties. This info can save the day when dealing with production incidents. On to of this, in many systems, monitoring is managed separately to conclude about the overall system state using cumulative heuristics (e.g., an increase in the number of errors over the last 3 hours). To support this monitoring needs, the code also must fire error metrics. Even tests that do try to cover these needs take a naive approach by checking that the logger function was called - but hey, does it include the right data? Some write better tests that check the error type that was passed to the logger, good enough? No! The ops user doesn't care about the JavaScript class names but the JSON data that is sent out. The following test focuses on the specific properties that are being made observable:
📝 Code
```javascript
test('When exception is thrown during request, Then logger reports the mandatory fields', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    status: 'approved',
  };
  const metricsExporterDouble = sinon.stub(metricsExporter, 'fireMetric');
  sinon
    .stub(OrderRepository.prototype, 'addOrder')
    .rejects(new AppError('saving-failed', 'Order could not be saved', 500));
  const loggerDouble = sinon.stub(logger, 'error');

  // Act
  await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  expect(loggerDouble).toHaveBeenCalledWith({
    name: 'saving-failed',
    status: 500,
    stack: expect.any(String),
    message: expect.any(String),
  });
  expect(metricsExporterDouble).toHaveBeenCalledWith('error', {
    errorName: 'example-error',
  });
});
```
👽 The 'unexpected visitor' test - when an uncaught exception meets our code
👉What & why - A typical error flow test falsely assumes two conditions: A valid error object was thrown, and it was caught. Neither is guaranteed, let's focus on the 2nd assumption: it's common for certain errors to left uncaught. The error might get thrown before your framework error handler is ready, some npm libraries can throw surprisingly from different stacks using timer functions, or you just forget to set someEventEmitter.on('error', ...). To name a few examples. These errors will find their way to the global process.on('uncaughtException') handler, hopefully if your code subscribed. How do you simulate this scenario in a test? naively you may locate a code area that is not wrapped with try-catch and stub it to throw during the test. But here's a catch22: if you are familiar with such area - you are likely to fix it and ensure its errors are caught. What do we do then? we can bring to our benefit the fact the JavaScript is 'borderless', if some object can emit an event, we as its subscribers can make it emit this event ourselves, here's an example:
📝 Code
```javascript
test('When an unhandled exception is thrown, then process stays alive and the error is logged', async () => {
  // Arrange
  const loggerDouble = sinon.stub(logger, 'error');
  const processExitListener = sinon.stub(process, 'exit');
  const errorToThrow = new Error('An error that wont be caught 😳');

  // Act
  process.emit('uncaughtException', errorToThrow); // 👈 Where the magic is

  // Assert
  expect(processExitListener.called).toBe(false);
  expect(loggerDouble).toHaveBeenCalledWith(errorToThrow);
});
```
🕵🏼 The 'hidden effect' test - when the code should not mutate at all
👉What & so what - In common scenarios, the code under test should stop early like when the incoming payload is invalid or a user doesn't have sufficient credits to perform an operation. In these cases, no DB records should be mutated. Most tests out there in the wild settle with testing the HTTP response only - got back HTTP 400? great, the validation/authorization probably work. Or does it? The test trusts the code too much, a valid response doesn't guarantee that the code behind behaved as design. Maybe a new record was added although the user has no permissions? Clearly you need to test this, but how would you test that a record was NOT added? There are two options here: If the DB is purged before/after every test, than just try to perform an invalid operation and check that the DB is empty afterward. If you're not cleaning the DB often (like me, but that's another discussion), the payload must contain some unique and queryable value that you can query later and hope to get no records. This is how it looks like:
📝 Code
```javascript
it('When adding an invalid order, then it returns 400 and NOT retrievable', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    mode: 'draft',
    externalIdentifier: uuid(), // no existing record has this value
  };

  // Act
  const { status: addingHTTPStatus } = await axiosAPIClient.post(
    '/order',
    orderToAdd
  );

  // Assert
  const { status: fetchingHTTPStatus } = await axiosAPIClient.get(
    `/order/externalIdentifier/${orderToAdd.externalIdentifier}`
  ); // Trying to get the order that should have failed
  expect({ addingHTTPStatus, fetchingHTTPStatus }).toMatchObject({
    addingHTTPStatus: 400,
    fetchingHTTPStatus: 404,
  });
  // 👆 Check that no such record exists
});
```
🧨 The 'overdoing' test - when the code should mutate but it's doing too much
👉What & why - This is how a typical data-oriented test looks like: first you add some records, then approach the code under test, and finally assert what happens to these specific records. So far, so good. There is one caveat here though: since the test narrows it focus to specific records, it ignores whether other record were unnecessarily affected. This can be really bad, here's a short real-life story that happened to my customer: Some data access code changed and incorporated a bug that updates ALL the system users instead of just one. All test pass since they focused on a specific record which positively updated, they just ignored the others. How would you test and prevent? here is a nice trick that I was taught by my friend Gil Tayar: in the first phase of the test, besides the main records, add one or more 'control' records that should not get mutated during the test. Then, run the code under test, and besides the main assertion, check also that the control records were not affected:
📝 Code
```javascript
test('When deleting an existing order, Then it should NOT be retrievable', async () => {
  // Arrange
  const orderToDelete = {
    userId: 1,
    productId: 2,
  };
  const deletedOrder = (await axiosAPIClient.post('/order', orderToDelete)).data
    .id; // We will delete this soon
  const orderNotToBeDeleted = orderToDelete;
  const notDeletedOrder = (
    await axiosAPIClient.post('/order', orderNotToBeDeleted)
  ).data.id; // We will not delete this

  // Act
  await axiosAPIClient.delete(`/order/${deletedOrder}`);

  // Assert
  const { status: getDeletedOrderStatus } = await axiosAPIClient.get(
    `/order/${deletedOrder}`
  );
  const { status: getNotDeletedOrderStatus } = await axiosAPIClient.get(
    `/order/${notDeletedOrder}`
  );
  expect(getNotDeletedOrderStatus).toBe(200);
  expect(getDeletedOrderStatus).toBe(404);
});
```
🕰 The 'slow collaborator' test - when the other HTTP service times out
👉What & why - When your code approaches other services/microservices via HTTP, savvy testers minimize end-to-end tests because these tests lean toward happy paths (it's harder to simulate scenarios). This mandates using some mocking tool to act like the remote service, for example, using tools like nock or wiremock. These tools are great, only some are using them naively and check mainly that calls outside were indeed made. What if the other service is not available in production, what if it is slower and times out occasionally (one of the biggest risks of Microservices)? While you can't wholly save this transaction, your code should do the best given the situation and retry, or at least log and return the right status to the caller. All the network mocking tools allow simulating delays, timeouts and other 'chaotic' scenarios. Question left is how to simulate slow response without having slow tests? You may use fake timers and trick the system into believing as few seconds passed in a single tick. If you're using nock, it offers an interesting feature to simulate timeouts quickly: the .delay function simulates slow responses, then nock will realize immediately if the delay is higher than the HTTP client timeout and throw a timeout event immediately without waiting
📝 Code
```javascript
// In this example, our code accepts new Orders and while processing them approaches the Users Microservice
test('When users service times out, then return 503 (option 1 with fake timers)', async () => {
  // Arrange
  const clock = sinon.useFakeTimers();
  config.HTTPCallTimeout = 1000; // Set a timeout for outgoing HTTP calls
  nock(`${config.userServiceURL}/user/`)
    .get('/1', () => clock.tick(2000)) // 👆 Reply delay is bigger than configured timeout
    .reply(200);
  const loggerDouble = sinon.stub(logger, 'error');
  const orderToAdd = {
    userId: 1,
    productId: 2,
    mode: 'approved',
  };

  // Act
  // 👇 try to add new order which should fail due to User service not available
  const response = await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  // 👇 At least our code does its best given this situation
  expect(response.status).toBe(503);
  expect(loggerDouble.lastCall.firstArg).toMatchObject({
    name: 'user-service-not-available',
    stack: expect.any(String),
    message: expect.any(String),
  });
});
```
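For completeness, here is a sketch of the second option that leans on nock's .delay feature, reusing the same setup and assuming the same 1000ms client timeout as above:

```javascript
test('When users service times out, then return 503 (option 2 with nock delay)', async () => {
  // Arrange
  config.HTTPCallTimeout = 1000; // Set a timeout for outgoing HTTP calls
  nock(`${config.userServiceURL}/user/`)
    .get('/1')
    .delay(2000) // 👈 Delay exceeds the client timeout, so nock emits a timeout error right away
    .reply(200);
  const orderToAdd = { userId: 1, productId: 2, mode: 'approved' };

  // Act
  const response = await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  expect(response.status).toBe(503);
});
```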
💊 The 'poisoned message' test - when the message consumer gets an invalid payload that might put it in stagnation
👉What & so what - When testing flows that start or end in a queue, I bet you're going to bypass the message queue layer, where the code and libraries consume a queue, and you approach the logic layer directly. Yes, it makes things easier but leaves a class of uncovered risks. For example, what if the logic part throws an error or the message schema is invalid but the message queue consumer fails to translate this exception into a proper message queue action? For example, the consumer code might fail to reject the message or increment the number of attempts (depends on the type of queue that you're using). When this happens, the message will enter a loop where it always served again and again. Since this will apply to many messages, things can get really bad as the queue gets highly saturated. For this reason this syndrome was called the 'poisoned message'. To mitigate this risk, the tests' scope must include all the layers like how you probably do when testing against APIs. Unfortunately, this is not as easy as testing with DB because message queues are flaky, here is why
When testing with real queues, things get curiouser and curiouser: tests from different processes will steal messages from each other, purging queues is harder than you might think (e.g., SQS demands 60 seconds to purge queues), to name a few challenges that you won't find when dealing with a real DB
Here is a strategy that works for many teams and holds a small compromise - use a fake in-memory message queue. By 'fake' I mean something simplistic that acts like a stub/spy and does nothing but tell when certain calls are made (e.g., consume, delete, publish). You might find reputable fakes/stubs for your own message queue, like this one for SQS, or you can easily code one yourself. No worries, I'm not in favor of maintaining testing infrastructure myself; this proposed component is extremely simple and unlikely to surpass 50 lines of code (see example below). On top of this, whether using a real or fake queue, one more thing is needed: create a convenient interface that tells the test when certain things happened, like when a message was acknowledged/deleted or a new message was published. Without this, the test never knows when certain events happened and leans toward quirky techniques like polling. Having this setup, the test will be short and flat, and you can easily simulate common message queue scenarios like out-of-order messages, batch reject, duplicated messages and, in our example, the poisoned message scenario (using RabbitMQ):
📝 Code
Create a fake message queue that does almost nothing but record calls, see full example here
```javascript
class FakeMessageQueueProvider extends EventEmitter {
  // Implement here
  publish(message) {}
  consume(queueName, callback) {}
}
```
Make your message queue client accept real or fake provider
```javascript
class MessageQueueClient extends EventEmitter {
  // Pass to it a fake or real message queue
  constructor(customMessageQueueProvider) {
    super();
  }
  publish(message) {}
  consume(queueName, callback) {}
  // Simple implementation can be found here:
  // https://github.com/testjavascript/nodejs-integration-tests-best-practices/blob/master/example-application/libraries/fake-message-queue-provider.js
}
```
Expose a convenient function that tells when certain calls were made
```javascript
const FakeMessageQueueProvider = require('./libs/fake-message-queue-provider');
const MessageQueueClient = require('./libs/message-queue-client');
const newOrderService = require('./domain/newOrderService');

test('When a poisoned message arrives, then it is being rejected back', async () => {
  // Arrange
  const messageWithInvalidSchema = { nonExistingProperty: 'invalid❌' };
  const messageQueueClient = new MessageQueueClient(
    new FakeMessageQueueProvider()
  );
  // Subscribe to new messages and pass the handler function
  messageQueueClient.consume('orders.new', newOrderService.addOrder);

  // Act
  await messageQueueClient.publish('orders.new', messageWithInvalidSchema);
  // Now all the layers of the app will get stretched 👆, including logic and message queue libraries

  // Assert
  await messageQueueClient.waitFor('reject', { howManyTimes: 1 });
  // 👆 This tells us that eventually our code asked the message queue client to reject this poisoned message
});
```
👉What & why - When publishing a library to npm, easily all your tests might pass BUT... the same functionality will fail over the end-user's computer. How come? tests are executed against the local developer files, but the end-user is only exposed to artifacts that were built. See the mismatch here? after running the tests, the package files are transpiled (I'm looking at you babel users), zipped and packed. If a single file is excluded due to .npmignore or a polyfill is not added correctly, the published code will lack mandatory files
📝 Code
Consider the following scenario: you're developing a library, and you wrote this code:
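A hypothetical sketch of such a library (file and package names are illustrative):

```javascript
// index.js - the package entry point
const { calculate } = require('./calculate'); // the core logic lives in a separate file

module.exports.fn1 = () => calculate();

/* package.json (excerpt) - calculate.js is accidentally missing from the 'files' array 👇
{
  "name": "my-package",
  "main": "index.js",
  "files": ["index.js"]
}
*/
```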
See, 100% coverage, all tests pass locally and in the CI ✅, it just won't work in production 👹. Why? Because you forgot to include calculate.js in the package.json files array 👆
What can we do instead? We can test the library as its end-users do. How? Publish the package to a local registry like verdaccio, then let the tests install and approach the published code. Sounds troublesome? Judge for yourself 👇
📝 Code
```javascript
// global-setup.js
// 1. Setup the in-memory NPM registry, one function that's it! 🔥
await setupVerdaccio();
// 2. Building our package
await exec('npm', ['run', 'build'], {
  cwd: packagePath,
});
// 3. Publish it to the in-memory registry
await exec('npm', ['publish', '--registry=http://localhost:4873'], {
  cwd: packagePath,
});
// 4. Installing it in the consumer directory
await exec('npm', ['install', 'my-package', '--registry=http://localhost:4873'], {
  cwd: consumerPath,
});

// Test file in the consumerPath
// 5. Test the package 🚀
test('should succeed', async () => {
  const { fn1 } = await import('my-package');
  expect(fn1()).toEqual(1);
});
```
- Testing different versions of a peer dependency you support - say your package supports React 16 to 18, you can now test that
- You want to test ESM and CJS consumers
- If you have a CLI application, you can test it like your users do
- Making sure all the voodoo magic in that babel file is working as expected
🗞 The 'broken contract' test - when the code is great but its corresponding OpenAPI docs leads to a production bug
👉What & so what - Quite confidently I'm sure that almost no team test their OpenAPI correctness. "It's just documentation", "we generate it automatically based on code" are typical belief found for this reason. Let me show you how this auto generated documentation can be wrong and lead not only to frustration but also to a bug. In production.
Consider the following scenario: you're requested to return an HTTP error status code if an order is duplicated, but you forget to update the OpenAPI specification with this new HTTP status response. While some frameworks can update the docs with new fields, none can realize which errors your code throws; this labour is always manual. On the other side of the line, the API client is doing everything just right, going by the spec that you published, adding orders with some duplication because the docs don't forbid doing so. Then, BOOM, production bug -> the client crashes and shows an ugly unknown error message to the user. This type of failure is called the 'contract' problem: two parties interact, each has code that works perfectly well, they just operate under different specs and assumptions. While there are fancy, sophisticated and exhaustive solutions to this challenge (e.g., PACT), there are also leaner approaches that get you covered easily and quickly (at the price of covering fewer risks).
The following sweet technique is based on libraries (for jest, mocha) that listen to all network responses, compare the payload against the OpenAPI document, and if any deviation is found, make the test fail with a descriptive error. With this new weapon in your toolbox and almost zero effort, another risk is ticked. It's a pity that these libs can't also assert against the incoming requests, to tell you that your tests use the API wrong. One small caveat and an elegant solution: these libraries dictate putting an assertion statement in every test - expect(response).toSatisfyApiSpec() - a bit tedious and reliant on human discipline. You can do better if your HTTP client supports plugins/hooks/interceptors, by putting this assertion in a single place that applies to all the tests:
The OpenAPI doesn't document HTTP status '409'; no framework knows to update the OpenAPI doc based on thrown exceptions
"responses":{ "200":{ "description":"successful", } , "400":{ "description":"Invalid ID", "content":{} },// No 409 in this list😲👈 }
The test code
```javascript
const jestOpenAPI = require('jest-openapi');
jestOpenAPI('../openapi.json');

test('When an order with duplicated coupon is added, then 409 error should get returned', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    couponId: uuid(),
  };
  await axiosAPIClient.post('/order', orderToAdd);

  // Act
  // We're adding the same coupon twice 👇
  const receivedResponse = await axios.post('/order', orderToAdd);

  // Assert
  expect(receivedResponse.status).toBe(409);
  expect(receivedResponse).toSatisfyApiSpec();
  // This 👆 will throw if the API response, body or status, differs from what is stated in the OpenAPI
});
```
Trick: If your HTTP client supports any kind of plugin/hook/interceptor, put the following code in 'beforeAll'. This covers all the tests against OpenAPI mismatches
beforeAll(()=>{ axios.interceptors.response.use((response)=>{ expect(response.toSatisfyApiSpec()); // With this 👆, add nothing to the tests - each will fail if the response deviates from the docs }); });
The examples above were not meant to be just a checklist of 'don't forget' test cases, but rather a fresh mindset on what tests could cover for you. Modern tests are not just about functions or user flows, but about any risk that might visit your production. This is doable only with component/integration tests, never with unit or end-to-end tests. Why? Because unlike unit tests, you need all the parts to play together (e.g., the DB migration file, the DAL layer and the error handler). Unlike E2E, you have the power to simulate in-process scenarios that demand some tweaking and mocking. Component tests allow you to include many production moving parts early, on your machine. I like calling this 'production-oriented development'
We work in two parallel paths: enriching the supported best practices to make the code more production-ready, and at the same time enhancing the existing code based on community feedback
Every request now has its own store of variables; you may assign information at the request level so any code invoked during that specific request has access to these variables - for example, for storing the user permissions. One special variable that is stored is 'request-id', a unique UUID per request (also called correlation-id). The logger will automatically append it to every log entry. We use the built-in AsyncLocalStorage for this task
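Below is a minimal sketch of this technique, assuming an Express-style middleware; the names are illustrative and not Practica's actual code:

```javascript
// Per-request store using Node's built-in AsyncLocalStorage
const { AsyncLocalStorage } = require('async_hooks');
const { randomUUID } = require('crypto');

const requestContext = new AsyncLocalStorage();

// Middleware: open a dedicated store for every incoming request
function contextMiddleware(req, res, next) {
  requestContext.run(new Map([['requestId', randomUUID()]]), () => next());
}

// Anywhere down the call chain, the logger can read the current request's id
function logWithRequestId(message) {
  const store = requestContext.getStore();
  console.log({ requestId: store?.get('requestId'), message });
}
```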
Although a Dockerfile may contain just 10 lines, it is easy and common to make 20 mistakes in this short artifact. For example, npmrc secrets are commonly leaked, vulnerable base images are used, and other typical mistakes are made. Our Dockerfile follows the best practices from this article and already applies 90% of the guidelines
Prisma is an emerging ORM with great type-safety support and awesome DX. We will keep Sequelize as our default ORM while Prisma will be an optional choice using the flag: --orm=prisma

Why did we add it to our tools basket, and why is Sequelize still the default? We summarized all of our thoughts and data in this blog post
Although Node.js has great frameworks 💚, they were never meant to be production ready immediately. Practica.js aims to bridge the gap. Based on your preferred framework, we generate some example code that demonstrates a full workflow, from API to DB, that is packed with good practices. For example, we include a hardened dockerfile, N-Tier folder structure, great testing templates, and more. This saves a great deal of time and can prevent painful mistakes. All decisions made are neatly and thoughtfully documented. We strive to keep things as simple and standard as possible and base our work off the popular guide: Node.js Best Practices.
Your developer experience would look as follows: generate our starter using the CLI and get an example Node.js solution. This solution is a typical Monorepo setup with an example Microservice and libraries. All is based on super-popular libraries that we merely stitch together. It also includes tons of optimizations - linters, libraries, Monorepo configuration, tests and much more. Inside the example Microservice you'll find an example flow, from API to DB. Based on this, you can modify the entity and DB fields and build your app.
Intro - Why discuss yet another ORM (or the man who had a stain on his fancy suite)?
Betteridge's law of headlines suggests that a 'headline that ends in a question mark can be answered by the word NO'. Will this article follow this rule?
Imagine an elegant businessman (or woman) walking into a building, wearing a fancy tuxedo and a luxury watch wrapped around his palm. He smiles and waves all over to say hello while people around are staring admirably. You get a little closer, then shockingly, while standing nearby, it's hard to ignore a bold, dark stain over his white shirt. What a dissonance - suddenly all of that glamour is stained

Like this businessman, Node is highly capable and popular, and yet, in certain areas, its offering basket is stained with inferior offerings. One of these areas is the ORM space: "I wish we had something like (Java) Hibernate or (.NET) Entity Framework" are common words heard from Node developers. What about existing mature ORMs like TypeORM and Sequelize? We owe so much to these maintainers, and yet, the produced developer experience, the level of maintenance - just don't feel delightful, some may say even mediocre. At least so I believed before writing this article...

From time to time, a shiny new ORM is launched, and there is hope. Then soon it's realized that these new emerging projects are more of the same, if they survive. Until one day, Prisma ORM arrived, surrounded with glamour: it's gaining tons of attention all over, producing fantastic content, being used by respectful frameworks and... raised $40,000,000 (40 million) to build the next-generation ORM - is it the 'Ferrari' ORM we've been waiting for? Is it a game changer? If you're the 'no ORM for me' type, will this one make you convert your religion?

In Practica.js (the Node.js starter based off Node.js best practices with 83,000 stars) we aim to make the best decisions for our users. The Prisma hype made us stop for a second, evaluate its unique offering and conclude whether we should upgrade our toolbox

This article is certainly not an 'ORM 101' but rather a spotlight on specific dimensions in which Prisma aims to shine or struggles. It's compared against the two most popular Node.js ORMs - TypeORM and Sequelize. Why weren't other promising contenders like MikroORM covered? Just because they are not as popular yet, and maturity is a critical trait of ORMs
Ready to explore how good Prisma is and whether you should throw away your current tools?
Just before delving into the strategic differences, for the benefit of those unfamiliar with Prisma - here is a quick 'hello-world' workflow with Prisma ORM. If you're already familiar with it - skipping to the next section sounds sensible. Simply put, Prisma dictates 3 key steps to get our ORM code working:
A. Define a model - Unlike almost any other ORM, Prisma brings a unique language (DSL) for modeling the database-to-code mapping. This proprietary syntax aims to express these models with minimum clutter (i.e., without TypeScript generics and verbose code). Worried about missing IntelliSense and validation? A well-crafted vscode extension has you covered. In the following example, the prisma.schema file describes a DB with an Order table that has a one-to-many relation with a Country table:
```prisma
// prisma.schema file
model Order {
  id                 Int      @id @default(autoincrement())
  userId             Int?
  paymentTermsInDays Int?
  deliveryAddress    String?  @db.VarChar(255)
  country            Country  @relation(fields: [countryId], references: [id])
  countryId          Int
}

model Country {
  id    Int     @id @default(autoincrement())
  name  String  @db.VarChar(255)
  Order Order[]
}
```
B. Generate the client code - Another unusual technique: to get the ORM code ready, one must invoke Prisma's CLI and ask for it:
npx prisma generate
Alternatively, if you wish to have your DB ready and the code generated with one command, just fire:
npx prisma migrate deploy
This will generate migration files that you can execute later in production, plus the TypeScript ORM code based on the model. The generated code location defaults to '[root]/node_modules/.prisma/client'. Every time the model changes, the code must be re-generated. While most ORMs name this code 'repository' or 'entity' or 'active record', interestingly, Prisma calls it a 'client'. This shows part of its unique philosophy, which we will explore later
C. All good, use the client to interact with the DB - The generated client has a rich set of functions and types for your DB interactions. Just import the ORM/client code and use it:
```javascript
import { PrismaClient } from '.prisma/client';

const prisma = new PrismaClient();

// A query example
await prisma.order.findMany({
  where: {
    paymentTermsInDays: 30,
  },
  orderBy: {
    id: 'asc',
  },
});
// Use the same client for insertion, deletion, updates, etc.
```
That's the nuts and bolts of Prisma. Is it different and better?
When comparing options, before outlining differences, it's useful to state what is actually similar among these products. Here is a partial list of features that TypeORM, Sequelize and Prisma all support

Casual queries with sorting, filtering, distinct, group by, 'upsert' (update or create), etc.
Raw queries
Full text search
Association/relations of any type (e.g., many to many, self-relation, etc)
Aggregation queries
Pagination
CLI
Transactions
Migration & seeding
Hooks/events (called middleware in Prisma)
Connection pool
Based on various community benchmarks, no dramatic performance differences
All have a huge amount of stars and downloads

Overall, I found TypeORM and Sequelize to be a little more feature-rich. For example, the following features are supported by them but not by Prisma: GIS queries, DB-level custom constraints, DB replication, soft delete, caching, exclude queries and some more

With that, shall we focus on what really sets them apart and makes a difference?
💁♂️ What is it about: ORMs' life has not become easier since the rise of TypeScript, to say the least. The need to support typed models/queries/etc. yields a lot of developer sweat. Sequelize, for example, struggles to stabilize a TypeScript interface and by now offers 3 different syntaxes + one external library (sequelize-typescript) that offers yet another style. Look at the syntax below; it feels like an afterthought - a library that was not built for TypeScript and now tries to squeeze it in somehow. Despite the major investment, both Sequelize and TypeORM offer only partial type safety. Simple queries do return typed objects, but other common corner cases like attributes/projections leave you with brittle strings. Here are a few examples:
```typescript
// Sequelize pesky TypeScript interface
type OrderAttributes = {
  id: number,
  price: number,
  // other attributes...
};

type OrderCreationAttributes = Optional<OrderAttributes, 'id'>;

// 😯 Isn't this a weird syntax?
class Order extends Model<InferAttributes<Order>, InferCreationAttributes<Order>> {
  declare id: CreationOptional<number>;
  declare price: number;
}
```
```typescript
// Sequelize loose query types
await getOrderModel().findAll({
  where: { noneExistingField: 'noneExistingValue' }, // 👍 TypeScript will warn here
  attributes: ['none-existing-field', 'another-imaginary-column'], // No errors here although these columns do not exist
  include: 'no-such-table', // 😯 no errors here although this table doesn't exist
});
await getCountryModel().findByPk('price'); // 😯 No errors here although the price column is not a primary key
```
```typescript
// TypeORM loose query
const ordersOnSales: Post[] = await orderRepository.find({
  where: { onSale: true }, // 👍 TypeScript will warn here
  select: ['id', 'price'],
});
console.log(ordersOnSales[0].userId); // 😯 No errors here although the 'userId' column is not part of the returned object
```
Isn't it ironic that a library called TypeORM bases its queries on strings?

🤔 How Prisma is different: It takes a totally different approach by generating per-project client code that is fully typed. This client embodies types for everything: every query, relations, sub-queries - everything (except migrations). While other ORMs struggle to infer types from discrete models (including associations that are declared in other files), Prisma's offline code generation has it easier: it can look through the entire DB relations, use custom generation code and build an almost perfect TypeScript experience. Why 'almost' perfect? For some reason, Prisma advocates using plain SQL for migrations, which might result in a discrepancy between the code models and the DB schema. Other than that, this is how Prisma's client brings end-to-end type safety:
```typescript
await prisma.order.findMany({
  where: {
    noneExistingField: 1, // 👍 TypeScript error here
  },
  select: {
    noneExistingRelation: { // 👍 TypeScript error here
      select: { id: true },
    },
    noneExistingField: true, // 👍 TypeScript error here
  },
});

await prisma.order.findUnique({
  where: { price: 50 }, // 👍 TypeScript error here
});
```
📊 How important: TypeScript support across the board is valuable mostly for DX. Luckily, we have another safety net: the project's testing. Since tests are mandatory, having build-time type verification is important but not a life saver
💁♂️ What is it about: Many avoid ORMs while preferring to interact with the DB using lower-level techniques. One of their arguments is against the efficiency of ORMs: Since the generated queries are not visible immediately to the developers, wasteful queries might get executed unknowingly. While all ORMs provide syntactic sugar over SQL, there are subtle differences in the level of abstraction. The more the ORM syntax resembles SQL, the more likely the developers will understand their own actions
For example, TypeORM's query builder looks like SQL broken into convenient functions:
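Here is a hedged sketch of what such a query-builder call might look like; the dataSource, entities and fields are illustrative, not taken from the original article:

```typescript
// TypeORM query builder - the SQL vocabulary stays visible: join, where, order by
const orders = await dataSource
  .getRepository(Order)
  .createQueryBuilder('order')
  .innerJoinAndSelect('order.country', 'country')
  .where('order.paymentTermsInDays = :terms', { terms: 30 })
  .orderBy('order.id', 'ASC')
  .getMany();
```

Prisma, in contrast, abstracts the join away entirely. A hedged sketch of an equivalent Prisma query that reads from two related tables:

```typescript
// Prisma - fetching orders together with their related country, no 'join' in sight
const orders = await prisma.order.findMany({
  where: { paymentTermsInDays: 30 },
  include: { country: true }, // Pulls the related Country records as well
});
```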
No join is mentioned here, although records are fetched from two related tables (Order and Country). Could you guess what SQL is being produced here? How many queries? One, right - a simple join? Surprise: actually, two queries are made. Prisma fires one query per table here, as the join logic happens on the ORM client side (not inside the DB). But why?? In some cases, mostly where there is a lot of repetition in the DB cartesian join, querying each side of the relation is more efficient. But in other cases, it's not. Prisma arbitrarily chose what they believe will perform better in most cases. I checked; in my case it's slower than doing a one-join query on the DB side. As a developer, I would miss this deficiency due to the high-level syntax (no join is mentioned). My point is: Prisma's sweet and simple syntax might be a blessing for developers who are brand new to databases and aim to achieve a working solution in a short time. For the longer term, having full awareness of the DB interactions is helpful - other ORMs encourage this awareness a little better
📊 How important: Any ORM will hide SQL details from its users - without the developer's awareness, no ORM will save the day
💁♂️ What is it about: Speak to an ORM antagonist and you'll hear a common, sensible argument: ORMs are much slower than a 'raw' approach. To an extent, this is a legit observation, as most comparisons will show non-negligible differences between raw/query-builder and ORM.
Example: a direct insert against the PG driver is much shorter (Source)
It should also be noted that these benchmarks don't tell the entire story - on top of raw queries, every solution must build a mapper layer that maps the raw data to JS objects, nest the results, cast types, and more. This work is included within every ORM but not shown in benchmarks for the raw option. In reality, every team which doesn't use ORM would have to build their own small "ORM", including a mapper, which will also impact performance
🤔 How Prisma is different: It was my hope to see magic here - eating the ORM cake without counting the calories, with Prisma achieving an almost 'raw' query speed. I had some good and logical reasons for this hope: Prisma uses a DB client built with Rust. Theoretically, it could serialize and nest objects faster (in reality, this happens on the JS side). It was also built from the ground up and could build on the knowledge piled up in the ORM space over the years. Also, since it returns POJOs only (see bullet 'No Active Record here!') - no time should be spent on decorating objects with ORM fields

You already got it - this hope was not fulfilled. Going by every community benchmark (one, two, three), Prisma at best is not faster than the average ORM. What is the reason? I can't tell exactly, but it might be due to the complicated system that must support Go, future languages, MongoDB and other non-relational DBs
Example: Prisma is not faster than others. It should be noted that in other benchmarks Prisma scores higher and shows an 'average' performance (Source)
📊 How important: ORM users are expected to live peacefully with inferior performance; for many systems it won't matter a great deal. With that, 10%-30% performance differences between various ORMs are not a key factor
💁♂️ What is it about: Node in its early days was heavily inspired by Ruby (e.g., testing "describe"); many great patterns were embraced, but Active Record is not among the successful ones. What is this pattern about, in a nutshell? Say you deal with Orders in your system; with Active Record, an Order object/class holds both the entity properties, possibly also some of the logic functions, and also CRUD functions. Many find this pattern to be awful. Why? Ideally, when coding some logic/flow, one should not keep her mind busy with side effects and DB narratives. It also might be that accessing some property unconsciously invokes a heavy DB call (i.e., lazy loading). If that's not enough, in case of heavy logic, unit tests might be in order (i.e., read 'selective unit tests') - and it's going to be much harder to write unit tests against code that interacts with the DB. In fact, all of the respectable and popular architectures (e.g., DDD, clean, 3-tiers, etc.) advocate 'isolating the domain' - separating the core/logic of the system from the surrounding technologies. With all of that said, both TypeORM and Sequelize support the Active Record pattern, which is displayed in many examples within their documentation. Both also support other, better patterns like the data mapper (see below), but they still open the door for doubtful patterns
```typescript
// TypeORM active records 😟
@Entity()
class Order extends BaseEntity {
  @PrimaryGeneratedColumn()
  id: number

  @Column()
  price: number

  @ManyToOne(() => Product, (product) => product.order)
  products: Product[]

  // Other columns here
}

function updateOrder(orderToUpdate: Order) {
  if (orderToUpdate.price > 100) {
    // some logic here
    orderToUpdate.status = 'approval';
    orderToUpdate.save();
    orderToUpdate.products.forEach((product) => {});
    orderToUpdate.usedConnection = ?
  }
}
```
🤔 How Prisma is different: The better alternative is the data mapper pattern. It acts as a bridge, an adapter, between simple object notations (domain objects with properties) to the DB language, typically SQL. Call it with a plain JS object, POJO, get it saved in the DB. Simple. It won't add functions to the result objects or do anything beyond returning pure data, no surprising side effects. In its purest sense, this is a DB-related utility and completely detached from the business logic. While both Sequelize and TypeORM support this, Prisma offers only this style - no room for mistakes.
```typescript
// Prisma approach with a data mapper 👍

// This was generated automatically by Prisma
type Order = {
  id: number;
  price: number;
  products: Product[];
  // Other columns here
};

function updateOrder(orderToUpdate: Order) {
  if (orderToUpdate.price > 100) {
    orderToUpdate.status = 'approval';
    // Side effect 👇, but an explicit one. The thoughtful coder will move this to another function. Since it's happening outside, mocking is possible 👍
    prisma.order.update({ where: { id: orderToUpdate.id }, data: orderToUpdate });
    // No lazy loading, the data is already here 👍
    orderToUpdate.products.forEach((product) => {});
  }
}
```
In Practica.js we take it one step further and put the Prisma models within the "DAL" layer, wrapped with the repository pattern - see the sketch below. You may glimpse into the code here; this is the business flow that calls the DAL layer
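A minimal sketch of this layering, with illustrative names rather than Practica's actual code:

```typescript
// order-repository.ts - the DAL layer wraps the generated Prisma client,
// so the business logic never touches the ORM directly
import { PrismaClient } from '.prisma/client';

const prisma = new PrismaClient();

export async function addOrder(orderToAdd: { userId: number; paymentTermsInDays: number }) {
  return prisma.order.create({ data: orderToAdd });
}

export async function getOrderById(id: number) {
  return prisma.order.findUnique({ where: { id } });
}

// order-use-case.ts - the business flow depends only on the repository's interface:
// const order = await getOrderById(1);
```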
📊 How important: On the one hand, this is a key architectural principle to follow; on the other hand, most ORMs allow doing it right
💁♂️ What is it about: TypeORM and Sequelize documentation is mediocre, though TypeORM's is a little better. Based on my personal experience, they do get a little better over the years, but still by no means do they deserve to be called "good" or "great". For example, if you seek to learn about 'raw queries' - Sequelize offers a very short page on this matter; TypeORM's info is spread across multiple other pages. Looking to learn about pagination? I couldn't find Sequelize documents on it; TypeORM has some short explanation, 150 words only

🤔 How Prisma is different: Prisma documentation rocks! See their documents on similar topics: raw queries and pagination - thousands of words and dozens of code examples. The writing itself is also great; it feels like some professional writers were involved

The chart above shows how comprehensive Prisma's docs are (obviously this by itself doesn't prove quality)
📊 How important: Great docs are a key to awareness and avoiding pitfalls
💁♂️ What is it about: Good chances are (say about 99.9%) that you'll find yourself diagnosing slow queries in production or other DB-related quirks. What can you expect from traditional ORMs in terms of observability? Mostly logging. Sequelize provides both logging of query duration and programmatic access to the connection pool state ({size, available, using, waiting}). TypeORM provides only logging of queries that surpass a pre-defined duration threshold. This is better than nothing, but assuming you don't read production logs 24/7, you'd probably need more than logging - an alert to fire when things seem faulty. To achieve this, it's your responsibility to bridge this info to your preferred monitoring system. Another logging downside here is verbosity - we need to emit tons of information to the logs when all we really care about is the average duration. Metrics can serve this purpose much better, as we're about to see soon with Prisma

What if you need to dig into which specific part of the query is slow? Unfortunately, there is no breakdown of the query phases' duration - it's left to you as a black box
Sequelize - logging various DB information:
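A minimal sketch, assuming Sequelize's 'benchmark'/'logging' options and the connection manager's pool object (verify the exact properties against your Sequelize version):

```javascript
const { Sequelize } = require('sequelize');

const sequelize = new Sequelize('my-db', 'user', 'password', {
  dialect: 'postgres',
  benchmark: true, // Pass each query's duration to the logging function
  logging: (sql, durationMs) => console.log({ sql, durationMs }),
});

// Programmatic access to the pool state - bridge these numbers to your monitoring system
const { size, available, using, waiting } = sequelize.connectionManager.pool;
console.log({ size, available, using, waiting });
```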
Logging each query in order to realize trends and anomalies in the monitoring system
🤔 How Prisma is different: Since Prisma also targets enterprises, it must bring strong ops capabilities. Beautifully, it packs support for both metrics and OpenTelemetry tracing! For metrics, it generates custom JSON with metric keys and values, so anyone can adapt this to any monitoring system (e.g., CloudWatch, statsD, etc.). On top of this, it produces out-of-the-box metrics in Prometheus format (one of the most popular monitoring platforms). For example, the metric 'prisma_client_queries_duration_histogram_ms' provides the average query duration in the system over time. What is even more impressive is the support for tracing - it feeds your OpenTelemetry collector with spans that describe the various phases of every query. For example, it might help realize what the bottleneck in the query pipeline is: Is it the DB connection, the query itself or the serialization?
Prisma visualizes the various query phases' duration with OpenTelemetry
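A short sketch of consuming these metrics; it assumes the 'metrics' preview feature is enabled in the Prisma schema, and the exact API may vary between versions:

```javascript
import { PrismaClient } from '.prisma/client';

const prisma = new PrismaClient();

// JSON format - custom keys/values, adaptable to any monitoring system
const jsonMetrics = await prisma.$metrics.json();

// Prometheus format - expose on a /metrics endpoint for scraping
const prometheusMetrics = await prisma.$metrics.prometheus();
console.log(prometheusMetrics); // e.g., prisma_client_queries_duration_histogram_ms ...
```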
🏆 Is Prisma doing better?: Definitely
📊 How important: It goes without saying how impactful observability is; however, filling the gap in the other ORMs would demand no more than a few days of work
7. Continuity - will it be here with us in 2024/2025?
💁♂️ What is it about: We live quite peacefully with the risk of one of our dependencies disappearing. With an ORM though, this risk demands special attention because our buy-in is higher (i.e., harder to replace) and maintaining one has proven to be harder. Just look at a handful of successful ORMs of the past: objection.js, waterline, bookshelf - all of these respectable projects had 0 commits in the past month. The single maintainer of objection.js announced that he won't work on the project anymore. This high churn rate is not surprising given the huge amount of moving parts to maintain, the gazillion corner cases and the modest 'budget' OSS projects live with. Looking at OpenCollective shows that Sequelize and TypeORM are funded with ~$1,500/month on average. This is barely enough to cover a daily Starbucks cappuccino and croissant ($6.95 x 365) for 5 maintainers. Nothing contrasts this model more than a startup company that just raised its series B - Prisma is funded with $40,000,000 (40 million) and recruited 80 people! Shouldn't this inspire us with high confidence about their continuity? I'll surprisingly suggest that quite the opposite is true
See, an OSS ORM has to get over one huge hump, but a startup company must pass through TWO. The OSS project will struggle to achieve the critical mass of features, including some high technical barriers (e.g., TypeScript support, ESM). This typically lasts years, but once it's done - a project can focus mostly on maintenance and step out of the danger zone. The good news for TypeORM and Sequelize is that they already did! Both struggled to keep their heads above the water - there were rumors in the past that TypeORM is not maintained anymore - but they managed to get through this hump. I counted: both projects had approximately ~2,000 PRs in the past 3 years! Going by repo-tracker, each sees multiple commits every week. They both have vibrant traction and the majority of features you would expect from an ORM. TypeORM even supports beyond-the-basics features like multi data source and caching. It's unlikely that now, once they have reached the promised land, they will fade away. It might happen - there are no guarantees in the OSS galaxy - but the risk is low
🤔 How Prisma is different: Prisma lags a little behind in terms of features, but with a budget of $40M there are good reasons to believe that they will pass the first hump, achieving a critical mass of features. I'm more concerned with the second hump - showing revenues in 2 years or saying goodbye. As a company that is backed by venture capital, the model is clear and cruel: in order to secure the next round, series B or C (depending on whether the seed is counted), there must be a viable and proven business model. How do you 'sell' an ORM? Prisma experiments with multiple products; none is mature yet or being paid for. How big is this risk? According to these startup success statistics, "About 65% of the Series A startups get series B, while 35% of the companies that get series A fail." Since Prisma has already gained a lot of love and adoption from the community, their success chances are higher than the average round A/B company, but even a 20% or 10% chance to fade away is concerning

This is terrifying news - companies happily choose a young commercial OSS product without realizing that there is a 10-30% chance for this product to disappear

Some startup companies that seek a viable business model do not shut their doors but rather change the product, the license or the free features. This is not my subjective business analysis; here are a few examples: MongoDB changed their license, which is why the majority now have to host their MongoDB with a single vendor. Redis did something similar. What are the chances of Prisma pivoting to another type of product? It actually already happened before - Prisma 1 was mostly about a GraphQL client and server, and it's now retired

It's only fair to mention the other potential path - most round B companies do manage to qualify for the next round; when this happens, even bigger money will be involved in building the 'Ferrari' of JavaScript ORMs. I'm surely crossing my fingers for these great people; at the same time, we have to be conscious about our choices
📊 How important: As important as having to re-code the entire DB layer in a big system

Before proposing my key takeaway - which is the primary ORM - let's repeat the key learnings that were introduced here:
🥇 Prisma deserves a medal for its awesome DX, documentation, observability support and end-to-end TypeScript coverage
🤔 There are reasons to be concerned about Prisma's business continuity as a young startup without a viable business model. Also Prisma's abstract client syntax might blind developers a little more than other ORMs
🎩 The contenders, TypeORM and Sequelize, have matured and are doing quite well: both have merged thousands of PRs in the past 3 years to become more stable, they keep introducing new releases (see repo-tracker), and for now hold more features than Prisma. Also, both show solid performance (for an ORM). Hats off to the maintainers!
Based on these observations, which should you pick? Which ORM will we use for Practica.js?
Prisma is an excellent addition to the Node.js ORM family, but not the hassle-free, one-tool-to-rule-them-all. It's a mixed bag of many delicious candies and a few gotchas. Will it grow to tick all the boxes? Maybe, but unlikely. Once built, it's too hard to dramatically change the syntax and the engine performance. Then, while writing and speaking with the community, including some Prisma enthusiasts, I realized that it doesn't aim to be the can-do-everything 'Ferrari'. Its positioning seems to resemble more a convenient family car with a solid engine and awesome user experience. In other words, it probably aims for the enterprise space, where there is mostly demand for great DX, OK performance, and business-class support

At the end of this journey I see no dominant, flawless 'Ferrari' ORM. I should probably change my perspective: building an ORM for the hectic modern JavaScript ecosystem is 10x harder than building a Java ORM back in 2001. There is no stain on the shirt - it's a cool JavaScript swag. I learned to accept what we have: a rich set of features, tolerable performance, good enough for many systems. Need more? Don't use an ORM. Nothing is going to change dramatically; it's now as good as it can be

Surely use Prisma under these scenarios - if your data needs are rather simple; when time-to-market concerns take precedence over data processing accuracy; when the DB is relatively small; if you're a mobile/frontend developer who is taking her first steps in the backend world; when there is a need for business-class support; AND when Prisma's long-term business continuity risk is a non-issue for you

I'd probably prefer other options under these conditions - if the DB layer performance is a major concern; if you're a savvy backend developer with solid SQL capabilities; when there is a need for fine-grained control over the data layer. For all of these cases, Prisma might still work, but my primary choices would be knex/TypeORM/Sequelize with a data-mapper style

Consequently, we love Prisma and added it behind a flag (--orm=prisma) to Practica.js. At the same time, until some clouds disappear, Sequelize will remain our default ORM
As a Node.js starter, choosing the right libraries and frameworks for our users is the bread and butter of our work in Practica.js. In this post, we'd like to share our considerations in choosing our monorepo tooling
The Monorepo market is hot like fire. Weirdly, now, when the demand for Monorepos is exploding, one of the leading libraries — Lerna — has just retired. When looking closely, it might not be just a coincidence — with so many disruptive and shiny features brought on by new vendors, Lerna failed to keep up with the pace and stay relevant. This bloom of new tooling gets many confused — what is the right choice for my next project? What should I look at when choosing a Monorepo tool? This post is all about curating this information overload, covering the new tooling, emphasizing what is important, and finally sharing some recommendations. If you are here for tools and features, you’re in the right place, although you might find yourself on a soul-searching journey toward your desired development workflow.

This post is concerned with backend-only and Node.js. It is also scoped to typical business solutions. If you’re a Google/FB developer who is faced with 8,000 packages — sorry, you need special gear. Consequently, monster Monorepo tooling like Bazel is left out. We will cover here some of the most popular Monorepo tools, including Turborepo, Nx, PNPM, Yarn/npm workspace, and Lerna (although it’s not actually maintained anymore — it’s a good baseline for comparison).
Let’s start? When human beings use the term Monorepo, they typically refer to one or more of the following 4 layers below. Each one of them can bring value to your project, each has different consequences, tooling, and features:
Layer 1: Plain old folders to stay on top of your code
With zero tooling and only by having all the Microservices and libraries together in the same root folder, a developer gets great management perks and tons of value: navigation, search across components, deleting a library instantly, debugging, quickly adding new components. Consider the alternative with a multi-repo approach — adding a new component for modularity demands opening and configuring a new GitHub repository. This is not just a hassle but also increases the chances of developers choosing the short path and including the new code in some semi-relevant existing package. In plain words, zero-tooling Monorepos can increase modularity.
This layer is often overlooked. If your codebase is not huge and the components are highly decoupled (more on this later)— it might be all you need. We’ve seen a handful of successful Monorepo solutions without any special tooling.
With that said, some of the newer tools augment this experience with interesting features:
Turborepo, Nx, and also Lerna provide a visual representation of the packages’ dependencies

Nx allows ‘visibility rules’, which is about enforcing who can use what. Consider a ‘checkout’ library that should be approached only by the ‘order Microservice’ — deviating from this will result in failure during development (not runtime enforcement)
Nx dependencies graph
Nx workspace generators allow scaffolding out components. Whenever a team member needs to craft a new controller/library/class/Microservice, she just invokes a CLI command which produces code based on a community or organization template. This enforces consistency and best-practice sharing
Layer 2: Tasks and pipeline to build your code efficiently
Even in a world of autonomous components, there are management tasks that must be applied in a batch: applying a security patch via npm update, running the tests of multiple components that were affected by a change, publishing 3 related libraries, to name a few examples. All Monorepo tools support this basic functionality of invoking some command over a group of packages. For example, Lerna, Nx, and Turborepo do.
Apply some commands over multiple packages
In some projects, invoking a cascading command is all you need - mostly if each package has an autonomous life cycle and the build process spans a single package (more on this later). In other types of projects, where the workflow demands testing/running and publishing/deploying many packages together, this will end in a terribly slow experience. Consider a solution with hundreds of packages that are transpiled and bundled — one might wait minutes for a wide test to run. While it’s not always a great practice to rely on wide/E2E tests, it’s quite common in the wild. This is exactly where the new wave of Monorepo tooling shines — deeply optimizing the build process. I should say this out loud: these tools bring beautiful and innovative build optimizations:

Parallelization — If two commands or packages are orthogonal to each other, the commands will run in two different threads or processes. Typically your quality control involves testing, linting, license checking, CVE checking — why not parallelize?

Smart execution plan — Beyond parallelization, the optimized task execution order is determined based on many factors. Consider a build that includes A, B, C, where A and C depend on B — naively, a build system would wait for B to build and only then run A & C. This can be optimized if we run A & C’s isolated unit tests while building B and not afterward. By running tasks in parallel as early as possible, the overall execution time is improved — this has a remarkable impact mostly when hosting a high number of components. See below a visualization example of a pipeline improvement
A modern tool advantage over old Lerna. Taken from Turborepo website
Detect who is affected by a change — Even in a system with high coupling between packages, it’s usually not necessary to run all packages, only those affected by a change. What exactly is ‘affected’? Packages/Microservices that depend upon another package that has changed. Some of the tools can ignore minor changes that are unlikely to break others. This is not only a great performance booster but also an amazing testing feature — developers can get quick feedback on whether any of their clients were broken. Both Nx and Turborepo support this feature. Lerna can tell only which of the Monorepo packages has changed
Sub-systems (i.e., projects) — Similarly to ‘affected’ above, modern tooling can realize portions of the graph that are inter-connected (a project or application) while others are not reachable by the component in context (another project) so they know to involve only packages of the relevant group
Caching — This is a serious speed booster: Nx and Turborepo cache the result/output of tasks and avoid running them again on subsequent builds if unnecessary. For example, consider long-running tests of a Microservice; when commanding to re-build this Microservice, the tooling might realize that nothing has changed and the tests will get skipped. This is achieved by generating a hashmap of all the dependent resources — if none of these resources have changed, then the hashmap will be the same and the task will get skipped. They even cache the stdout of the command, so when you run a cached version it acts like the real thing — consider running 200 tests, seeing all the log statements of the tests, getting results over the terminal in 200 ms; everything acts like ‘real testing’ while in fact the tests did not run at all - it’s the cache! (A conceptual sketch of this hashing technique appears after this list)

Remote caching — Similar to caching, only this time the task’s hashmaps and results are placed on a global server, so further executions on other team members’ computers will also skip unnecessary tasks. In huge Monorepo projects that rely on E2E tests and must build all packages for development, this can save a great deal of time
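To make the caching idea concrete, here is a conceptual sketch (not Nx’s or Turborepo’s actual code) of hash-based task skipping:

```javascript
// Hash all of a task's inputs; if the same hash was seen before, replay the
// recorded output instead of running the task again
const crypto = require('crypto');
const fs = require('fs');

const cache = new Map(); // hash -> recorded stdout

function computeTaskHash(inputFiles, command) {
  const hash = crypto.createHash('sha256').update(command);
  for (const file of inputFiles) {
    hash.update(fs.readFileSync(file)); // Any changed input yields a new hash
  }
  return hash.digest('hex');
}

function runTaskWithCache(inputFiles, command, execute) {
  const key = computeTaskHash(inputFiles, command);
  if (cache.has(key)) {
    process.stdout.write(cache.get(key)); // Replay stdout - feels like the real run
    return;
  }
  const output = execute(); // Run the real task and record its output
  cache.set(key, output);
  process.stdout.write(output);
}
```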
Layer 3: Hoist your dependencies to boost npm installation
The speed optimizations that were described above won’t be of help if the bottleneck is the big ball of mud that is called ‘npm install’ (not to criticize, it’s just hard by nature). Take a typical scenario as an example: given dozens of components that should be built, they could easily trigger the installation of thousands of sub-dependencies. Although they use quite similar dependencies (e.g., same logger, same ORM), if the dependency versions are not equal then npm will duplicate the installation of those packages (the npm doppelgangers problem), which might result in a long process.
This is where the workspace line of tools (e.g., Yarn workspace, npm workspaces, PNPM) kicks in and introduces some optimization — Instead of installing dependencies inside each component ‘NODE_MODULES’ folder, it will create one centralized folder and link all the dependencies over there. This can show a tremendous boost in install time for huge projects. On the other hand, if you always focus on one component at a time, installing the packages of a single Microservice/library should not be a concern.
Both Nx and Turborepo can rely on the package manager/workspace to provide this layer of optimization. In other words, Nx and Turborepo are the layer above the package manager, which takes care of optimized dependency installation.
On top of this, Nx introduces one more non-standard, maybe even controversial, technique: There might be only ONE package.json at the root folder of the entire Monorepo. By default, when creating components using Nx, they will not have their own package.json! Instead, all will share the root package.json. Going this way, all the Microservice/libraries share their dependencies and the installation time is improved. Note: It’s possible to create ‘publishable’ components that do have a package.json, it’s just not the default.
I’m concerned here. Sharing dependencies among packages increases the coupling - what if Microservice1 wishes to bump dependency1’s version but Microservice2 can’t do this at the moment? Also, package.json is part of the Node.js runtime, and excluding it from the component root loses important features like the package.json main field or ESM exports (telling the clients which files are exposed). I ran some POC with Nx last week and found myself blocked - library B was added, I tried to import it from library A but couldn’t get the ‘import’ statement to specify the right package name. The natural action was to open B’s package.json and check the name, but there is no package.json… How do I determine its name? Nx docs are great; finally, I found the answer, but I had to spend time learning a new ‘framework’.
Stop for a second: It’s all about your workflow
We deal with tooling and features, but it’s actually meaningless to evaluate these options before determining whether your preferred workflow is synchronized or independent (we will discuss this in a few seconds). This upfront, fundamental decision will change almost everything.
Consider the following example with 3 components: Library 1 is introducing some major and breaking changes, Microservice1 and Microservice2 depend upon Library1 and should react to those breaking changes. How?
Option A — The synchronized workflow - Going with this development style, all three components will be developed and deployed in one chunk together. Practically, a developer will code the changes in Library1, test Library1 and also run wide integration/e2e tests that include Microservice1 and Microservice2. When they're ready, the version of all components will get bumped. Finally, they will get deployed together.

Going with this approach, the developer has the chance of seeing the full flow from the clients' perspective (Microservice1 and 2); the tests cover not only the library but also go through the eyes of the clients who actually use it. On the flip side, it mandates updating all the depend-upon components (could be dozens); doing so increases the risk’s blast radius, as more units are affected and should be considered before deployment. Also, working on a large unit of work demands building and testing more things, which will slow the build.

Option B — The independent workflow - This style is about working unit by unit, one bite at a time, and deploying each component independently based on its own business considerations and priority. This is how it goes: a developer makes the changes in Library1; they must be tested carefully in the scope of Library1. Once she is ready, the SemVer is bumped to a new major and the library is published to a package manager registry (e.g., npm). What about the client Microservices? Well, the team of Microservice2 is super busy now with other priorities and skips this update for now (the same way we all delay many of our npm updates). However, Microservice1 is very much interested in this change - the team has to proactively update this dependency, grab the latest changes, run the tests and, when they are ready, today or next week, deploy it.
Going with the independent workflow, the library author can move much faster because she does not need to take into account 2 or 30 other components — some are coded by different teams. This workflow also forces her to write efficient tests against the library — it’s her only safety net and is likely to end with autonomous components that have low coupling to others. On the other hand, testing in isolation without the client’s perspective loses some dimension of realism. Also, if a single developer has to update 5 units — publishing each individually to the registry and then updating within all the dependencies can be a little tedious.
Synchronized and independent workflows illustrated
On the illusion of synchronicity
In distributed systems, it’s not feasible to achieve 100% synchronicity — believing otherwise can lead to design faults. Consider a breaking change in Microservice1; now its client, Microservice2, is adapted and ready for the change. These two Microservices are deployed together, but due to the nature of Microservices and distributed runtimes (e.g., Kubernetes), only the deployment of Microservice1 fails. Now, Microservice2’s code is not aligned with Microservice1 in production and we are faced with a production bug. This line of failures can be handled to an extent also with a synchronized workflow — the deployment should orchestrate the rollout of each unit so each one is deployed at a time. Although this approach is doable, it increases the chances of a large-scoped rollback and increases deployment fear.
This fundamental decision, synchronized or independent, will determine so many things — Whether performance is an issue or not at all (when working on a single unit), hoisting dependencies or leaving a dedicated node_modules in every package’s folder, and whether to create a local link between packages which is described in the next paragraph.
Layer 4: Link your packages for immediate feedback
When having a Monorepo, there is always the unavoidable dilemma of how to link between the components:
Option 1: Using npm — Each library is a standard npm package and its client installs it via the standard npm commands. Given Microservice1 and Library1, this will end with two copies of Library1: the one inside Microservice1/NODE_MODULES (i.e., the local copy of the consuming Microservice), and the 2nd is the development folder where the team is coding Library1.

Option 2: Just a plain folder — With this, Library1 is nothing but a logical module inside a folder that Microservice1, 2 and 3 just locally import. NPM is not involved here; it’s just code in a dedicated folder. This is, for example, how Nest.js modules are represented.
With option 1, teams benefit from all the great merits of a package manager — SemVer(!), tooling, standards, etc. However, should one update Library1, the changes won’t get reflected in Microservice1 since it is grabbing its copy from the npm registry and the changes were not published yet. This is a fundamental pain with Monorepo and package managers — one can’t just code over multiple packages and test/run the changes.
With option 2, teams lose all the benefits of a package manager; on the other hand, every change is propagated immediately to all of the consumers.

How do we bring the good from both worlds (presumably)? Using linking. Lerna, Nx, and the various package manager workspaces (Yarn, npm, etc.) allow using npm libraries and at the same time link between the clients (e.g., Microservice1) and the library. Under the hood, they create a symbolic link. In development, changes are propagated immediately; at deployment time, the copy is grabbed from the registry.
Linking packages in a Monorepo
If you’re doing the synchronized workflow, you’re all set. Only note that now, any risky change introduced by Library3 must be handled NOW by the 10 Microservices that consume it.
If favoring the independent workflow, this is of course a big concern. Some may call this direct linking style a ‘monolith monorepo’, or maybe a ‘monolitho’. However, when not linking, it’s harder to debug a small issue between the Microservice and the npm library. What I typically do is temporarily link (with npm link) between the packages, debug, code, then finally remove the link.
Nx is taking a slightly more disruptive approach — it is using TypeScript paths to bind between the components. When Microservice1 imports Library1, to avoid the full local path, it creates a TypeScript mapping between the library name and the full path. But wait a minute, there is no TypeScript in production, so how could it work? Well, at serving/bundling time it webpacks and stitches the components together. Not a very standard way of doing Node.js work. A small illustration of the technique follows.
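For illustration only — the mapping below uses TypeScript’s generic ‘paths’ feature with made-up names, not Nx’s exact configuration:

```typescript
// tsconfig.base.json (illustrative):
//   "compilerOptions": {
//     "paths": { "@myorg/library1": ["libs/library1/src/index.ts"] }
//   }
// With that mapping in place, a Microservice imports the library by its mapped
// name although no package.json or npm installation is involved:
import { someFunction } from '@myorg/library1';

someFunction();
```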
Closing: What should you use?
It’s all about your workflow and architecture — a huge unseen cross-road stands in front of the Monorepo tooling decision.
Scenario A — If your architecture dictates a synchronized workflow where all packages are deployed together, or at least developed in collaboration — then there is a strong need for a rich tool to manage this coupling and boost the performance. In this case, Nx might be a great choice.
For example, if your Microservices must keep the same versioning, or if the team is really small and the same people update all the components, or if your modularization is not based on a package manager but rather on framework-own modules (e.g., Nest.js), if you’re doing frontend where the components are inherently published together, or if your testing strategy relies mostly on E2E — for all of these cases and others, Nx is a tool that was built to enhance the experience of coding many relatively coupled components together. It is a great sugar coat over systems that are unavoidably big and linked.
If your system is not inherently big or meant to synchronize packages deployment, fancy Monorepo features might increase the coupling between components. The Monorepo pyramid above draws a line between basic features that provide value without coupling components while other layers come with an architectural price to consider. Sometimes climbing up toward the tip is worth the consequences, just make this decision consciously.
Scenario B — If you’re into an independent workflow, where each package is developed, tested, and deployed (almost) independently — then inherently there is no need for fancy tools to orchestrate hundreds of packages. Most of the time there is just one package in focus. This calls for picking a leaner and simpler tool — Turborepo. By going this route, Monorepo is not something that affects your architecture, but rather a scoped tool for faster build execution. One specific tool that encourages an independent workflow is Bilt by Gil Tayar; it’s yet to gain enough popularity, but it might rise soon and is a great source to learn more about this philosophy of work.

In any scenario, consider workspaces — if you face performance issues that are caused by package installation, then the various workspace tools (Yarn/npm/PNPM) can greatly minimize this overhead with a low footprint. That said, if you’re working in an autonomous workflow, the chances of facing such issues are smaller. Don’t just use tools unless there is a pain.
We tried to show the beauty of each and where it shines. If we’re allowed to end this article with an opinionated choice: We greatly believe in an independent and autonomous workflow where the occasional developer of a package can code and deploy fearlessly without messing with dozens of other foreign packages. For this reason, Turborepo will be our favorite tool for the next season. We promise to tell you how it goes.
Bonus: Comparison table
See below a detailed comparison table of the various tools and features:
Preview only, the complete table can be found here
Node.js is maturing. Many patterns and frameworks were embraced - it's my belief that developers' productivity dramatically increased in the past years. One downside of maturity is habits - we now reuse existing techniques more often. How is this a problem?
In his popular book 'Atomic Habits', the author James Clear states that:
"Mastery is created by habits. However, sometimes when we're on auto-pilot performing habits, we tend to slip up... Just because we are gaining experience through performing the habits does not mean that we are improving. We actually go backwards on the improvement scale with most habits that turn into auto-pilot". In other words, practice makes perfect, and bad practices make things worse
We copy-paste mentally and physically things that we are used to, but these things are not necessarily right anymore. Like animals who shed their shells or skin to adapt to a new reality, so the Node.js community should constantly gauge its existing patterns, discuss and change
Luckily, unlike other languages that are more committed to specific design paradigms (Java, Ruby) - Node is a house of many ideas. In this community, I feel safe to question some of our good-old tooling and patterns. The list below contains my personal beliefs, which are brought with reasoning and examples.
Are those disruptive thoughts surely correct? I'm not sure. There is one thing I'm sure about though - For Node.js to live longer, we need to encourage critics, focus our loyalty on innovation, and keep the discussion going. The outcome of this discussion is not "don't use this tool!" but rather becoming familiar with other techniques that, under some circumstances, might be a better fit
The True Crab's exoskeleton is hard and inflexible, he must shed his restrictive exoskeleton to grow and reveal the new roomier shell
1. Dotenv as your configuration source
💁♂️ What is it about: A super popular technique in which the app configurable values (e.g., DB user name) are stored in a simple text file. Then, when the app loads, the dotenv library sets all the text file values as environment variables so the code can read this
```javascript
// .env file
USER_SERVICE_URL=https://users.myorg.com

// start.js
require('dotenv').config();

// blog-post-service.js
repository.savePost(post);
// Update the user's number of posts; read the users service URL from an environment variable
await axios.put(`${process.env.USER_SERVICE_URL}/api/user/${post.userId}/incrementPosts`);
```
📊 How popular: 21,806,137 downloads/week!
🤔 Why it might be wrong: Dotenv is so easy and intuitive to start with that one might easily overlook fundamental features: for example, it's hard to infer the configuration schema and realize the meaning of each key and its typing. Consequently, there is no built-in way to fail fast when a mandatory key is missing - a flow might fail after starting and leave side effects behind (e.g., DB records were already mutated before the failure). In the example above, the blog post will be saved to DB, and only then will the code realize that a mandatory key is missing - this leaves the app hanging in an invalid state. On top of this, in the presence of many keys, it's impossible to organize them hierarchically. If that's not enough, it encourages developers to commit this .env file, which might contain production values - this happens because there is no clear way to define development defaults. Teams usually work around this by committing a .env.example file and then asking whoever pulls the code to rename it manually. If they remember to, of course
☀️ Better alternative: Some configuration libraries provide an out-of-the-box solution to all of these needs. They encourage a clear schema and the possibility to validate early and fail if needed. See a comparison of options here. One of the better alternatives is 'convict'; below is the same example, this time with Convict, hopefully it's better now:
```javascript
// config.js
export default {
  userService: {
    url: {
      // Hierarchical, documented and strongly typed 👇
      doc: "The URL of the user management service including a trailing slash",
      format: "url",
      default: "http://localhost:4001",
      nullable: false,
      env: "USER_SERVICE_URL",
    },
  },
  // more keys here
};

// start.js
import convict from "convict";
import configSchema from "./config";

const convictConfigurationProvider = convict(configSchema);
// Fail fast!
convictConfigurationProvider.validate();

// blog-post.js
repository.savePost(post);
// Will never arrive here if the URL is not set
await axios.put(
  `${convictConfigurationProvider.get("userService.url")}/api/user/${post.userId}/incrementPosts`
);
```
2. Calling a 'fat' service from the API controller
💁♂️ What is it about: Consider a reader of our code who wishes to understand the entire high-level flow or delve into a very specific part. She first lands on the API controller, where requests start. Unlike what its name implies, this controller layer is just an adapter that is kept really thin and straightforward. Great thus far. Then the controller calls a big 'service' with thousands of lines of code that represent the entire logic
```javascript
// user-controller.js
router.post('/', async (req, res, next) => {
  await userService.add(req.body);
  // Might have here try-catch or error response logic
});

// user-service.js
exports.add = function (newUser) {
  // Want to understand quickly? Need to understand the entire user service, 1500 loc
  // It uses technical language and reuses narratives of other flows
  this.copyMoreFieldsToUser(newUser);
  const doesExist = this.updateIfAlreadyExists(newUser);
  if (!doesExist) {
    addToCache(newUser);
  }
  // 20 more lines that demand navigating to other functions in order to get the intent
};
```
📊 How popular: It's hard to pull solid numbers here, but I could confidently say that in most of the apps that I see, this is the case
🤔 Why it might be wrong: We're here to tame complexities. One of the useful techniques is deferring complexity to the latest stage possible. In this case though, the reader of the code (hopefully) starts her journey through the tests and the controller - things are simple in these areas. Then, as she lands on the big service - she gets tons of complexity and small details, although she is focused on understanding the overall flow or some specific logic. This is unnecessary complexity
☀️ Better alternative: The controller should call a particular type of service, a use-case, which is responsible for summarizing the flow in a simple, business-oriented language. Each flow/feature is described using a use-case, each containing 4-10 lines of code that tell the story without technical details. It mostly orchestrates other small services, clients, and repositories that hold all the implementation details. With use-cases, the reader can grasp the high-level flow easily. She can now choose where she would like to focus. She is now exposed only to necessary complexity. This technique also encourages partitioning the code into the smaller objects that the use-case orchestrates. Bonus: By looking at coverage reports, one can tell which features are covered, not just files/functions
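To make this concrete, here is a minimal, hypothetical use-case sketch - the service and repository names are made up for illustration and assumed to exist elsewhere:

```typescript
// add-user-use-case.ts - a hypothetical example; every called function is an assumed small service
export async function addUserUseCase(newUser: NewUserRequest) {
  const validatedUser = validateUser(newUser);
  await assertEmailIsUnique(validatedUser.email);
  const savedUser = await userRepository.save(validatedUser);
  await sendWelcomeEmail(savedUser);
  return mapUserToDto(savedUser);
}
```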
This idea, by the way, is formalized in the 'clean architecture' book - I'm not a big fan of 'fancy' architectures, but see - it's worth cherry-picking techniques from every source. You may walk through our Node.js best practices starter, Practica.js, and examine the use-cases code
3. Nest.js: Wire everything with dependency injection
💁♂️ What is it about: If you're doing Nest.js, besides having a powerful framework in your hands, you probably use DI for everything and make every class injectable. Say you have a weather-service that depends upon humidity-service, and there is no requirement to swap the humidity-service with alternative providers. Nevertheless, you inject humidity-service into the weather-service. It becomes part of your development style, "why not" you think - I may need to stub it during testing or replace it in the future
```typescript
// humidity-service.ts - not customer facing
@Injectable()
export class GoogleHumidityService {
  async getHumidity(when: Datetime): Promise<number> {
    // Fetches from some specific cloud service
  }
}

// weather-service.ts - customer facing
import { GoogleHumidityService } from './humidity-service.ts';

export type weatherInfo = { temperature: number; humidity: number };

export class WeatherService {
  constructor(private humidityService: GoogleHumidityService) {}
  async GetWeather(when: Datetime): Promise<weatherInfo> {
    // Fetch temperature from somewhere and then humidity from GoogleHumidityService
  }
}

// app.module.ts
@Module({
  providers: [GoogleHumidityService, WeatherService],
})
export class AppModule {}
```
📊 How popular: No numbers here, but I could confidently say that in all of the Nest.js apps that I've seen, this is the case. In the popular 'nestjs-realworld-example-app' all the services are 'injectable'
🤔 Why it might be wrong: Dependency injection is not a cost-free coding style but a pattern you should pull in at the right moment, like any other pattern. Why? Because any pattern has a price. What price, you ask? First, encapsulation is violated. Clients of the weather-service are now aware that other providers are being used internally. Some clients may get tempted to override providers although it's not under their responsibility. Second, it's another layer of complexity to learn, maintain, and one more way to shoot yourself in the foot. StackOverflow owes some of its revenues to Nest.js DI - plenty of discussions try to solve this puzzle (e.g., did you know that in case of circular dependencies the order of imports matters?). Third, there is the performance thing - Nest.js, for example, struggled to provide a decent start time for serverless environments and had to introduce lazy loaded modules. Don't get me wrong, in some cases there is a good case for DI: when a need arises to decouple a dependency from its caller, or to allow clients to inject custom implementations (e.g., the strategy pattern). In such a case, when there is a value, you may consider whether the value of DI is worth its price. If you don't have this case, why pay for nothing?
I recommend reading the first paragraphs of the blog post 'Dependency Injection is EVIL' (though I absolutely don't agree with these bold words)
☀️ Better alternative: 'Lean-ify' your engineering approach - avoid using any tool unless it serves a real-world need immediately. Start simple: a dependent class should simply import its dependency and use it - yeah, using the plain Node.js module system ('require'). Facing a situation where there is a need to create objects dynamically? There are a handful of simple patterns, simpler than DI, that you should consider, like 'if/else', a factory function, and more. Are singletons requested? Consider techniques with lower costs like the module system with a factory function. Need to stub/mock for testing? Monkey patching might be better than DI: better to clutter your test code a bit than to clutter your production code. Have a strong need to hide from an object where its dependencies are coming from? You sure? Use DI!
```typescript
// humidity-service.ts - not customer facing
export async function getHumidity(when: Datetime): Promise<number> {
  // Fetches from some specific cloud service
}

// weather-service.ts - customer facing
import { getHumidity } from "./humidity-service.ts";

// ✅ No wiring is happening externally, all is flat and explicit. Simple
export async function getWeather(when: Datetime): Promise<number> {
  // Fetch temperature from somewhere and then humidity from GoogleHumidityService
  // Nobody needs to know about it, it's an implementation detail
  await getHumidity(when);
}
```
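When a test later needs to stub the humidity provider, monkey patching covers it without changing the production code. A minimal sketch, assuming Jest with CommonJS transpilation and the modules above (the asserted value is illustrative):

```typescript
// weather-service.test.ts - stubbing via monkey patching instead of DI
import * as humidityService from "./humidity-service";
import { getWeather } from "./weather-service";

test("When the humidity is 40, then the weather reflects it", async () => {
  // Patch the dependency for this test only - the production code stays flat and explicit
  jest.spyOn(humidityService, "getHumidity").mockResolvedValue(40);
  const result = await getWeather(new Date());
  expect(result).toBe(40); // Assuming getWeather returns the humidity in this simplified skeleton
});
```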
My name is Yoni Goldberg, I'm a Node.js developer and consultant. I wrote a few code-books like JavaScript testing best practices and Node.js best practices (100,000 stars ✨🥹). That said, my best guide is Node.js testing practices, which only a few read 😞. I shall release an advanced Node.js testing course soon and also hold workshops for teams. I'm also a core maintainer of Practica.js, which is a Node.js starter that creates a production-ready example Node Monorepo solution based on standards and simplicity. It might be your primary option when starting a new Node.js solution
4. Passport.js for token authentication
💁♂️ What is it about: Commonly, you need to issue and/or authenticate JWT tokens. Similarly, you might need to allow login from one single social network like Google/Facebook. When faced with these kinds of needs, Node.js developers rush to the glorious library Passport.js like butterflies are attracted to light
📊 How popular: 1,389,720 weekly downloads
🤔 Why it might be wrong: When tasked with guarding your routes with a JWT token - you're just a few lines of code shy from ticking the goal. Instead of messing with a new framework, instead of introducing levels of indirection (you call Passport, then it calls you), instead of spending time learning new abstractions - use a JWT library directly. Libraries like jsonwebtoken or fast-jwt are simple and well maintained. Have concerns with the security hardening? Good point, your concerns are valid. But would you not get better hardening with a direct understanding of your configuration and flow? Will hiding things behind a framework help? Even if you prefer the hardening of a battle-tested framework, Passport doesn't handle a handful of security risks like secrets/tokens, secured user management, DB protection, and more. My point: you probably need a fully-featured user and authentication management platform anyway. Various cloud services and OSS projects can tick all of those security concerns. Why then start in the first place with a framework that doesn't satisfy your security needs? It seems like many who opt for Passport.js are not fully aware of which needs are satisfied and which are left open. All of that said, Passport definitely shines when looking for a quick way to support many social login providers
☀️ Better alternative: Is token authentication in order? These few lines of code below might be all you need. You may also glimpse into Practica.js' wrapper around these libraries. A real-world project at scale typically needs more: supporting async JWT (JWKS) and securely managing and rotating the secrets, to name a few examples. In this case, an OSS solution like [Keycloak](https://github.com/keycloak/keycloak) or commercial options like [Auth0](https://github.com/auth0) are alternatives to consider
```javascript
// jwt-middleware.js, a simplified version - refer to Practica.js to see some more corner cases
const middleware = (req, res, next) => {
  if (!req.headers.authorization) {
    return res.sendStatus(401);
  }
  jwt.verify(req.headers.authorization, options.secret, (err, jwtContent) => {
    if (err) {
      return res.sendStatus(401);
    }
    req.user = jwtContent.data;
    next();
  });
};
```
5. Supertest for integration/API testing
💁♂️ What is it about: When testing against an API (i.e., component, integration, E2E tests), the library supertest provides a sweet syntax that can detect the web server address, make the HTTP call and also assert on the response. Three in one
test("When adding invalid user, then the response is 400",(done)=>{ const request =require("supertest"); const app =express(); // Arrange const userToAdd ={ name:undefined, }; // Act request(app) .post("/user") .send(userToAdd) .expect("Content-Type",/json/) .expect(400, done); // Assert // We already asserted above ☝🏻 as part of the request });
📊 How popular: 2,717,744 weekly downloads
🤔 Why it might be wrong: You already have your assertion library (Jest? Chai?), it has a great error highlighting and comparison - you trust it. Why code some tests using another assertion syntax? Not to mention, Supertest's assertion errors are not as descriptive as Jest and Chai. It's also cumbersome to mix HTTP client + assertion library instead of choosing the best for each mission. Speaking of the best, there are more standard, popular, and better-maintained HTTP clients (like fetch, axios and other friends). Need another reason? Supertest might encourage coupling the tests to Express as it offers a constructor that gets an Express object. This constructor infers the API address automatically (useful when using dynamic test ports). This couples the test to the implementation and won't work in the case where you wish to run the same tests against a remote process (the API doesn't live with the tests). My repository 'Node.js testing best practices' holds examples of how tests can infer the API port and address
☀️ Better alternative: A popular and standard HTTP client library like Node.js fetch or Axios. In Practica.js (a Node.js starter that packs many best practices) we use Axios. It allows us to configure an HTTP client that is shared among all the tests: we bake inside a JWT token, headers, and a base URL. Another good pattern that we look at is making each Microservice generate an HTTP client library for its consumers. This brings a strongly-typed experience to the clients, synchronizes the provider-consumer versions, and as a bonus - the provider can test itself with the same library that its consumers are using
test("When adding invalid user, then the response is 400 and includes a reason",(done)=>{ const app =express(); // Arrange const userToAdd ={ name:undefined, }; // Act const receivedResponse = axios.post( `http://localhost:${apiPort}/user`, userToAdd ); // Assert // ✅ Assertion happens in a dedicated stage and a dedicated library expect(receivedResponse).toMatchObject({ status:400, data:{ reason:"no-name", }, }); });
6. Fastify decorate for non request/web utilities
💁♂️ What is it about: Fastify introduces great patterns. Personally, I highly appreciate how it preserves the simplicity of Express while bringing more batteries. One thing that got me wondering is the 'decorate' feature, which allows placing common utilities/services inside a widely accessible container object. I'm referring here specifically to the case where a cross-cutting concern utility/service is being used. Here is an example:
```javascript
// An example of a utility that is a cross-cutting concern. Could be a logger or anything else
fastify.decorate("metricsService", {
  fireMetric: async ({ name }) => {
    // My code that sends metrics to the monitoring system
  },
});

fastify.get("/api/orders", async function (request, reply) {
  this.metricsService.fireMetric({ name: "new-request" });
  // Handle the request
});

// my-business-logic.js
exports.calculateSomething = function () {
  // How to fire a metric?
};
```
It should be noted that 'decoration' is also used to place values (e.g., user) inside a request - this is a slightly different case and a sensible one
📊 How popular: Fastify has 696,122 weekly downloads and is growing rapidly. The decorator concept is part of the framework's core
🤔 Why it might be wrong: Some services and utilities serve cross-cutting-concern needs and should be accessible from other layers like the domain (i.e., business logic, DAL). When placing utilities inside this object, the Fastify object might not be accessible to these layers. You probably don't want to couple your web framework with your business logic: consider that some of your business logic and repositories might get invoked from non-REST clients like CRON jobs, MQ, and similar - in these cases, Fastify won't get involved at all, so better not trust it to be your service locator
☀️ Better alternative: A good old Node.js module is a standard way to expose and consume functionality. Need a singleton? Use the module system caching. Need to instantiate a service in correlation with a Fastify life-cycle hook (e.g., DB connection on start)? Call it from that Fastify hook. In the rare case where a highly dynamic and complex instantiation of dependencies is needed - DI is also a (complex) option to consider
```javascript
// ✅ A simple usage of good old Node.js modules
// metrics-service.js
export async function fireMetric(name) {
  // My code that sends metrics to the monitoring system
}

// api-routes.js
import { fireMetric } from "./metrics-service.js";

fastify.get("/api/orders", async function (request, reply) {
  fireMetric({ name: "new-request" });
});

// my-business-logic.js
import { fireMetric } from "./metrics-service.js";

export function calculateSomething() {
  fireMetric({ name: "new-request" });
}
```
7. Logging from a catch clause
💁♂️ What is it about: You catch an error somewhere deep in the code (not on the route level), then call logger.error to make this error observable. Seems simple and necessary
📊 How popular: Hard to put my hands on numbers but it's quite popular, right?
🤔 Why it might be wrong: First, errors should get handled/logged in a central location. Error handling is a critical path. Various catch clauses are likely to behave differently without a centralized and unified behavior. For example, a request might arise to tag all errors with certain metadata, or, on top of logging, to also fire a monitoring metric. Applying these requirements in ~100 locations is not a walk in the park. Second, catch clauses should be minimized to particular scenarios. By default, the natural flow of an error is bubbling down to the route/entry-point - from there, it will get forwarded to the error handler. Catch clauses are more verbose and error-prone - therefore they should serve two very specific needs: when one wishes to change the flow based on the error, or to enrich the error with more information (which is not the case in this example)
☀️ Better alternative: By default, let the error bubble down the layers and get caught by the entry-point global catch (e.g., Express error middleware). In cases when the error should trigger a different flow (e.g., retry) or there is value in enriching the error with more context - use a catch clause. In this case, ensure the .catch code also reports to the error handler
```javascript
// A case where we wish to retry upon failure
try {
  await axios.post("https://thatService.io/api/users");
} catch (error) {
  // ✅ A central location that handles errors
  errorHandler.handle(error, this, { operation: "addNewOrder" });
  callTheUserService(numOfRetries++);
}
```
8. Morgan logger for request logging
💁♂️ What is it about: In many web apps, you are likely to find a pattern that has been copy-pasted for ages - using the Morgan logger to log request information:
```javascript
const express = require("express");
const morgan = require("morgan");

const app = express();
app.use(morgan("combined"));
```
📊 How popular: 2,901,574 downloads/week
🤔 Why it might be wrong: Wait a second, you already have your main logger, right? Is it Pino? Winston? Something else? Great. Why deal with and configure yet another logger? I do appreciate the HTTP domain-specific language (DSL) of Morgan. The syntax is sweet! But does it justify having two loggers?
☀️ Better alternative: Put your chosen logger in a middleware and log the desired request/response properties:
```javascript
// ✅ Use your preferred logger for all the tasks
const logger = require("pino")();

app.use((req, res, next) => {
  res.on("finish", () => {
    logger.info(`${req.url} ${res.statusCode}`); // Add other properties here
  });
  next();
});
```
9. Having conditional code based on NODE_ENV value
💁♂️ What is it about: To differentiate between development vs production configuration, it's common to set the environment variable NODE_ENV with "production|test". Doing so allows the various tooling to act differently. For example, some templating engines will cache compiled templates only in production. Beyond tooling, custom applications use this to specify behaviours that are unique to the development or production environment:
```javascript
if (process.env.NODE_ENV === "production") {
  // This is unlikely to be tested since test runners usually set NODE_ENV=test
  setLogger({ stdout: true, prettyPrint: false });
  // If this code branch above exists, why not add more production-only configurations:
  collectMetrics();
} else {
  setLogger({ splunk: true, prettyPrint: true });
}
```
📊 How popular: 5,034,323 code results in GitHub when searching for "NODE_ENV". It doesn't seem like a rare pattern
🤔 Why it might be wrong: Anytime your code checks whether it's production or not, this branch won't get hit by default by the test runner (e.g., Jest sets NODE_ENV=test). With any test runner, the developer must remember to test for each possible value of this environment variable. In the example above, collectMetrics() will be tested for the first time in production. Sad smiley. Additionally, putting these conditions opens the door to adding more differences between production and the developer machine - when this variable and these conditions exist, a developer gets tempted to put some logic for production only. Theoretically, this can be tested: one can set NODE_ENV = "production" in testing and cover the production branches (if she remembers to...). But then, if you can test with NODE_ENV='production', what's the point in separating? Just consider everything to be 'production' and avoid this error-prone mental load
☀️ Better alternative: Any code that was written by us must be tested. This implies avoiding any form of if(production)/else(development) conditions. Wouldn't the developer's machine anyway have different surrounding infrastructure than production (e.g., logging system)? It would, the environments are quite different, but we feel comfortable with it. These infrastructural things are battle-tested, extraneous, and not part of our code. To keep the same code between dev/prod and still use different infrastructure - we put different values in the configuration (not in the code). For example, a typical logger emits JSON in production but on a development machine it emits 'pretty-print' colorful lines. To meet this, we set an env var that tells which logging style we aim for:
```javascript
// package.json
"scripts": {
  "start": "LOG_PRETTY_PRINT=false node index.js",
  "test": "LOG_PRETTY_PRINT=true jest"
}

// index.js
// ✅ No condition, same code for all the environments. The variations are defined externally in config or deployment files
setLogger({ prettyPrint: process.env.LOG_PRETTY_PRINT === "true" });
```
I hope that these thoughts, at least one of them, made you re-consider adding a new technique to your toolbox. In any case, let's keep our community vibrant, disruptive and kind. Respectful discussions are almost as important as the event loop. Almost.
Although Node.js has great frameworks 💚, they were never meant to be production ready immediately. Practica.js aims to bridge the gap. Based on your preferred framework, we generate some example code that demonstrates a full workflow, from API to DB, that is packed with good practices. For example, we include a hardened dockerfile, N-Tier folder structure, great testing templates, and more. This saves a great deal of time and can prevent painful mistakes. All decisions made are neatly and thoughtfully documented. We strive to keep things as simple and standard as possible and base our work off the popular guide: Node.js Best Practices.
Your developer experience would look as follows: generate our starter using the CLI and get an example Node.js solution. This solution is a typical Monorepo setup with an example Microservice and libraries. All is based on super-popular libraries that we merely stitch together. It also contains tons of optimizations - linters, libraries, Monorepo configuration, tests and much more. Inside the example Microservice you'll find an example flow, from API to DB. Based on this, you can modify the entity and DB fields and build your app.
We work on two parallel paths: enriching the supported best practices to make the code more production-ready, and at the same time enhancing the existing code based on community feedback
Every request now has its own store of variables; you may assign information at the request level so any code that was called from this specific request has access to these variables. For example, for storing the user permissions. One special variable that is stored is 'request-id', which is a unique UUID per request (also called correlation-id). The logger will automatically emit this to every log entry. We use the built-in AsyncLocalStorage for this task
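A minimal sketch of this mechanism with Node.js' built-in AsyncLocalStorage (illustrative, Express-style middleware; Practica's actual wiring may differ):

```typescript
// request-context.ts - request-level variables via the built-in AsyncLocalStorage
import { AsyncLocalStorage } from "async_hooks";
import { randomUUID } from "crypto";

const requestContext = new AsyncLocalStorage<Map<string, string>>();

app.use((req, res, next) => {
  const store = new Map([["requestId", randomUUID()]]);
  requestContext.run(store, next); // Any code called from this request can reach the store
});

// Anywhere deeper in the flow, e.g., inside the logger:
export function getRequestId() {
  return requestContext.getStore()?.get("requestId");
}
```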
Although a Dockerfile may contain 10 lines, it's easy and common to include 20 mistakes in this short artifact. For example, commonly npmrc secrets are leaked, a vulnerable base image is used, and other typical mistakes. Our Dockerfile follows the best practices from this article and already applies 90% of the guidelines
Prisma is an emerging ORM with great type-safety support and awesome DX. We will keep Sequelize as our default ORM, while Prisma will be an optional choice using the flag: --orm=prisma
Why did we add it to our tools basket and why Sequelize is still the default? We summarized all of our thoughts and data in this blog post
diff --git a/blog/rss.xml b/blog/rss.xml
new file mode 100644
index 00000000..5c0a6139
--- /dev/null
+++ b/blog/rss.xml
@@ -0,0 +1,131 @@
+
+
+
+ Practica.js Blog
+ https://practica.dev/blog
+ Practica.js Blog
+ Wed, 05 Mar 2025 10:00:00 GMT
+ https://validator.w3.org/feed/docs/rss2.html
+ https://github.com/jpmonette/feed
+ en
+
+
+ https://practica.dev/blog/about-the-sweet-and-powerful-use-case-code-pattern
+ https://practica.dev/blog/about-the-sweet-and-powerful-use-case-code-pattern
+ Wed, 05 Mar 2025 10:00:00 GMT
+
Intro: A sweet pattern that got lost in time
When was the last time you introduced a new pattern to your code? The use-case pattern is a great candidate: it's powerful, sweet, easy to implement, and can strategically elevate your backend code quality in a short time.
The term 'use case' means many different things in our industry. It's used by product folks to describe a user journey, and mentioned by various famous architecture books to describe vague high-level concepts. This article focuses on its practical application at the code level, emphasizing its surprising merits and how to implement it correctly.
Technically, the use-case pattern code belongs between the controller (e.g., API routes) and the business logic services (like those calculating or saving data). The use-case code is called by the controller and tells, in high-level words and in a simple manner, the flow that is about to happen. Doing so increases the code's readability and navigability, pushes complexity toward the edges, improves observability, and brings three other merits that are shown below with examples.
But before we delve into its mechanics, let's first touch on a common problem it aims to address and see some code that calls for trouble.
Prefer a 10 min video? Watch here, or keep reading below
Imagine a developer, returning to a codebase she hasn't touched in months, tasked with fixing a bug in the 'new orders flow'—specifically, an issue with price calculation in an electronic shop app.
Her journey begins promisingly smooth:
- 🤗 Testing - She starts her journey from the automated tests to learn about the flow with an outside-in approach. The testing code is short and standard, as it should be:
test("When adding an order with 100$ product, then the price charge should be 100$ ",async()=>{ // .... })
- 🤗 Controller - She moves to skim through the implementation and starts from the API routes. Unsurprisingly, the Controller code is straightforward:
app.post("/api/order",async(req:Request,res:Response)=>{ const newOrder = req.body; await orderService.addOrder(newOrder);// 👈 This is where the real-work is done res.status(200).json({message:"Order created successfully"}); });
Smooth sailing thus far, almost zero complexity. Typically, the controller would now hand off to a Service where the real implementation begins, she navigates into the order service to find where and how to fix that pricing bug.
- 😲 The service - Suddenly! She is thrown into hundreds of lines of code (at best) with tons of details. She encounters classes with intricate states, inheritance hierarchies, a dependency injection framework that wires all the dependent services, and other boilerplate code. Here is a sneak peek from a real-world service, already simplified for brevity. Read it, feel it:
```typescript
let DBRepository;

export class OrderService extends ServiceBase<OrderDto> {
  async addOrder(orderRequest: OrderRequest): Promise<Order> {
    try {
      ensureDBRepositoryInitialized();
      const { openTelemetry, monitoring, secretManager, priceService, userService } =
        dependencyInjection.getVariousServices();
      logger.info("Add order flow starts now", orderRequest);
      openTelemetry.sendEvent("new order", orderRequest);
      const validationRules = await getFromConfigSystem("order-validation-rules");
      const validatedOrder = validateOrder(orderRequest, validationRules);
      if (!validatedOrder) {
        throw new Error("Invalid order");
      }
      this.base.startTransaction();
      const user = await userService.getUserInfo(validatedOrder.customerId);
      if (!user) {
        const savedOrder = await tryAddUserWithLegacySystem(validatedOrder);
        return savedOrder;
      }
      // And it goes on and on until the pricing module is mentioned
```
So many details and things to learn upfront - which of them is crucial for her to learn now, before dealing with her task? How can she find where that pricing module is?
She is not happy. Right off the bat, she must acquaint herself with a handful of product and technical narratives. She just fell off the complexity cliff: from a zero-complexity controller straight into a 1000-piece puzzle. Many of the pieces are unrelated to her task.
In a perfect world, she would love first to get a high-level brief of the involved steps so she can understand the whole flow, and from this comfort standpoint choose where to deepen her journey. This is what this pattern is all about.
The use-case is a file with a single function that is being called by the API controller to orchestrate the various implementation services. It's merely a simple function that enumerates and calls the code that does the actual job:
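A minimal sketch of such a function (the step names are illustrative, borrowed from the add-order flow used throughout this article):

```typescript
// add-order-use-case.ts - the story of the flow, told in flat, high-level steps
export async function addOrderUseCase(orderRequest: OrderRequest): Promise<Order> {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderWithPricing);
  const savedOrder = await insertOrder(orderWithPricing);
  await sendSuccessEmailToCustomer(savedOrder);
  return mapFromRepositoryToDto(savedOrder);
}
```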
Each interaction with the system—whether it's posting a new comment, requesting user deletion, or any other action—is managed by a dedicated use-case function. Each use-case constitutes multiple 'steps' - function calls that fulfill the desired flow.
By design, it's short, flat, no If/else, no try-catch, no algorithms, just plain calls to functions. This way, it tells the story in the simplest manner. Note how it doesn't share too much details, but tells enough for one to understand 'WHAT' is happening here and 'WHO' is doing that, but not 'HOW'.
When seeking a specific book in the local library, the visitor doesn't have to skim through all the shelves to find a specific topic of interest. A Library, like any other information system, uses a navigational system, wayfinding signage, to highlight the path to a specific information area.
The library catalog redirects the reader to the area of interest
Similarly, in software development, when a developer needs to address a particular issue—such as fixing a bug in pricing calculations—the 'use case' acts like a navigational tool within the application. It serves as a hitchhiker's guide, or the yellow pages, pinpointing exactly where to find the necessary piece of code. While other organizational strategies like modularization and folder structures offer ways to manage code, the 'use case' approach provides a more focused and precise index. It shows only the relevant areas (and not 50 unrelated modules), it tells when precisely a module is used, what the specific entry point is, and which exact parameters are passed.
When the code reader's journey starts at the level of implementation-services, she is immediately bombarded with intricate details. This immersion exposes her to both product and technical complexities right from the start. Typically, like in our example case, the code first uses a dependency injection system to instantiate some classes, checks for nulls in the state, and gets some values from the distributed config system - all before even starting on the primary task. This is called accidental complexity. Tackling complexity is one of the finest arts of app design; as the code planner you can't just eliminate complexity, but you may at least reduce the chances of someone meeting it.
Imagine your application as a tree where branches represent functions and the fruits are pockets of embedded complexity, some of which are poisoned (i.e., unnecessary complexities). Your objective is to structure this tree so that navigating through it exposes the visitor to as few poisoned fruits as possible:
The accidental-complexity tree: A visitor aiming to reach a specific leaf must navigate through all the intervening poisoned fruits.
This is where the 'Use Case' approach shines: it prioritizes high-level product steps and minimal technical details at the outset—a navigation system that simplifies access to various parts of the application. With this navigation tool, she can easily ignore steps that are unrelated to her work, and avoid the poisoned fruits. A true strategic design win.
The spread-complexity tree: Complexity is pushed to the periphery, allowing the reader to navigate directly to the essential fruits only.
When embarking on a new coding flow, where do you start? After digesting the requirements and setting up some initial API routes and high-level component tests, the next logical step might be less obvious. Here's a strategy: begin with a use-case. This approach promotes an outside-in workflow that not only streamlines development but also exposes potential risks early on.
While drafting a new use-case, you essentially map out the various steps of the process. Each step is a call to some service or repository function, sometimes before they even exist. Effortlessly and spontaneously, these steps become your TODO list - a live document that tells not only what should be implemented but also where risky gotchas hide. Take, for instance, this straightforward use-case for adding an order:
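A sketch of such a draft, using the step names discussed right below (some of these functions may not exist yet - that's the point):

```typescript
// add-order-use-case.ts - a draft that doubles as a TODO list
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  await assertCustomerExists(validatedOrder.customerId); // Depends on the User Management team's API
  const orderWithPricing = calculateOrderPricing(validatedOrder); // Pricing rules to confirm with the product team
  const savedOrder = await insertOrder(orderWithPricing);
  await sendSuccessEmailToCustomer(savedOrder); // Needs an email service token from the Ops team
  return savedOrder;
}
```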
This structured approach allows you to preemptively tackle potential implementation hurdles:
- sendSuccessEmailToCustomer - What if you lack a necessary email service token from the Ops team? Sometimes, this demands approval and might last more than a week (believe me, I know). Acting now, before spending 3 days on coding, can make a big difference.
- calculateOrderPricing - Reminds you to confirm pricing details with the product team—ideally before they're out of office, avoiding delays that could impact your delivery timeline.
- assertCustomerExists - This call goes to an external Microservice which belongs to the User Management team. Did they already provide an OpenAPI specification of their routes? Check your Slack now - if they didn't yet, asking early can prevent this from becoming a roadblock later.
Not only does this high-level thinking highlight your tasks and risks, it's also an optimal spot to start the design from:
Early on when initiating a use-case, the developers define the various types, function signatures, and their initial skeleton return data. This process naturally evolves into an effective design drill where the overall flow is decomposed into small units that actually fit. This sketching-out results in discovering early when puzzle pieces don't fit, while considering the underlying technologies. Here is an example: once I sketched a use-case and initially came up with these steps:
Going with my initial use-case above, an email is sent before the order is saved. Soon enough the compiler yelled at me: the email function's signature is not satisfied - an 'Order Id' parameter is needed, but to obtain one the order must be saved to the DB first. I tried to change the order of the steps; unfortunately, it turned out that my ORM does not return the ID of saved entities. I'm stuck, my design struggles - at least this is realized before spending days on details. Unlike designing with papers and UML, designing with a use-case brings no overhead. Moreover, unlike high-level diagrams detached from implementation realities, use-case design is grounded in the actual constraints of the technology being used.
Say you have 82.35% test code coverage - are you happy and feeling confident to deploy? I'd suggest that anyone below 100% must first clarify which code exactly is not covered with testing. Is it some nitty-gritty niche code, or actually critical business operations that are not fully tested? Typically, answering this requires scrutinizing the coverage of all the app's files - a daunting task.
Use-cases simplify the coverage digest: when looking directly into the use-cases folder, one gets 'feature coverage' - a unique look into which user features and steps lack testing:
The use-cases folder test coverage report, some use-cases are only partially tested
See how the code above has an excellent overall coverage, 82.35%. But what about the remaining 17.65%? Looking at the report triggers a red flag: the unusual 'payment-use-case' is not tested. This flow is where revenues are generated - a critical financial process which, as it turns out, has a very low test coverage. This significant observation calls for immediate action. Use-case coverage thus not only helps in understanding what parts of your application are tested, but also prioritizes testing efforts based on business criticality rather than mere technical functionality.
The influential book "Domain-Driven Design" advocates for "committing the team to relentlessly exercise the domain language in all communications within the team and in the code." This principle asserts that aligning code closely with product narratives fosters a common language among diverse stakeholders (e.g., product, team-leads, frontend, backend). While this sounds sensible, this advice is also a little vague - how and where should this happen?
Use-cases bring this idea down to earth: the use-case files are named after user journeys in the system (e.g., purchase-new-goods), the use-case code itself naturally describes the flow in a product language. For instance, if employees commonly use the term 'cut' at the water cooler to refer to a price reduction, the corresponding use-case should employ a function named 'calculatePriceCut'. This naming convention not only reinforces the domain language but also enhances mutual understanding across the team.
I bet you've encountered the situation when you turn the log level to 'Debug' (or any other verbose mode) and get a gazillion, overwhelming, and unbearable amount of log statements. Great chances that you also met the opposite: setting the logger level to 'Info' while there is almost zero logging for that specific route that you're looking into. It's hard to formalize among team members when exactly each type of logging should be invoked; the result is typically inconsistent and lacking observability.
Use-cases can drive trustworthy and consistent monitoring by taking advantage of the produced use-case steps. Since the precious work of breaking-down the flow into meaningful steps was already done (e.g., send-email, charge-credit-card), each step can produce the desired level of logging. For example, one team's approach might be to emit logger.info on a use-case start and use-case end, and then each step will emit logger.debug. Whatever the chosen specific level is, use-case steps bring consistency and automation. Put aside logging, the same can be applied with any other observability technique like OpenTelemetry to produce custom spans for every flow step.
The implementation though demands some thinking, cluttering every step with a log statement is both verbose and depends on human manual work:
```typescript
// ❗️Verbose use case
export async function addOrderUseCase(orderRequest: OrderRequest): Promise<Order> {
  logger.info("Add order use case - Adding order starts now", orderRequest);
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  logger.debug("Add order use case - The order was validated", validatedOrder);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  logger.debug("Add order use case - The order pricing was decided", validatedOrder);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderWithPricing);
  logger.debug("Add order use case - Verified the user balance already", purchasingCustomer);
  const returnOrder = mapFromRepositoryToDto(purchasingCustomer as unknown as OrderRecord);
  logger.info("Add order use case - About to return result", returnOrder);
  return returnOrder;
}
```
One way around this is creating a step wrapper function that makes it observable. This wrapper function will get called for each step:
```typescript
import { openTelemetry } from "@opentelemetry";

async function runUseCaseStep(stepName, stepFunction) {
  logger.debug(`Use case step ${stepName} starts now`);
  // Create an Open Telemetry custom span
  openTelemetry.startSpan(stepName);
  return await stepFunction();
}
```
Now the use-case gets automated and consistent transparency:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const validatedOrder = await runUseCaseStep("Validation", validateAndCoerceOrder.bind(null, orderRequest));
  const orderWithPricing = await runUseCaseStep("Calculate price", calculateOrderPricing.bind(null, validatedOrder));
  await runUseCaseStep("Send email", sendSuccessEmailToCustomer.bind(null, orderWithPricing));
}
```
The code is a little simplified; in a real-world wrapper you'll have to put a try-catch and cover other corner cases, but it makes the point: each step is a meaningful milestone in the user's journey that gets automated and consistent observability.
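A sketch of a slightly more defensive wrapper, assuming an OpenTelemetry-like span API (the span object's shape is an assumption for illustration):

```typescript
// A more defensive step wrapper - the span API shape is assumed, not the library's verified signature
async function runUseCaseStep<T>(stepName: string, stepFunction: () => Promise<T>): Promise<T> {
  const span = openTelemetry.startSpan(stepName);
  logger.debug(`Use case step ${stepName} starts now`);
  try {
    return await stepFunction();
  } catch (error) {
    logger.error(`Use case step ${stepName} failed`, error);
    throw error; // Keep bubbling to the central error handler
  } finally {
    span.end();
  }
}
```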
Since use-cases are mostly about zero complexity, use no code constructs but flat calls to functions. No if/else, no switch, no try/catch - nothing, only a simple list of steps. A while ago I decided to put just one if/else in a use-case:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  if (purchasingCustomer.isPremium) { // ❗️
    sendEmailToPremiumCustomer(purchasingCustomer); // This will easily grow with time to multiple if/else
  }
}
```
A month later when I visited the code above, there were already three nested if/elses. A year from now, the function above will host typical imperative code with many nested branches. Avoid this slippery road by putting a very strict border: put the conditions within the step functions:
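For example (sendEmailIfPremiumCustomer is a hypothetical step that now encapsulates the condition):

```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  // ✅ The premium check now lives inside the step function - the use-case stays flat
  await sendEmailIfPremiumCustomer(purchasingCustomer);
}
```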
The finest art of a great use-case is finding the right level of detail. At this early stage, the reader is like a traveler who uses a map to get some sense of the area, or to find a specific road - definitely not to learn about every road in the country. On the other hand, a good map doesn't show only the main highway and nothing else. For example, the following use-case is too short and vague:
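Something along these lines (an illustrative reconstruction of the 'too vague' example):

```typescript
// ❗️Too vague - no story is told
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const result = await orderService.handleNewOrder(orderRequest);
  return result;
}
```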
The code above doesn't tell a story, nor does it eliminate some paths from the journey. Conversely, the following code does better in telling the story in brief:
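Something like the sketch shown earlier - a handful of named steps that tell the story without the 'HOW' (illustrative reconstruction):

```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  const savedOrder = await insertOrder(validatedOrder);
  await sendSuccessEmailToCustomer(savedOrder);
  return savedOrder;
}
```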
Things get a little more challenging when dealing with long flows. What if there are a handful of important steps, say 20? What if multiple use-cases have a lot of repetition and shared steps? Consider the case where 'admin approval' is a multi-step process which is invoked by a handful of different use-cases. When facing this, consider breaking down into multiple use-cases where one is allowed to call the other.
3. When you have no choice, control the DB transaction from the use-case
What if step 2 and step 5 both deal with data and must be atomic (fail or succeed together)? Typically you'll handle this with DB transactions, but since each step is discrete, how can a transaction be shared among the coupled steps?
If the steps take place one after the other, it makes sense to let the downstream service/repository handle them together and abstract the transaction from the use-case. What if the atomic steps are not consecutive? In this case, though not ideal, there is no escape from making the use-case acquaintance with a transaction object:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const transaction = Repository.startTransaction();
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderRequest, transaction);
  const orderWithPricing = calculateOrderPricing(purchasingCustomer);
  const savedOrder = await insertOrder(orderWithPricing, transaction);
  const returnOrder = mapFromRepositoryToDto(savedOrder);
  Repository.commitTransaction(transaction);
  return returnOrder;
}
```
A use-case file is created per user-flow that is triggered from an API route. This model makes sense for significant flows, but how about small operations like getting an order by id? A 'get-order-by-id' use-case is likely to have one line of code; it seems like unnecessary overhead to create a use-case file for every small request. In this case, consider aggregating multiple operations under a single conceptual use-case file. Below, for example, all the order queries co-live under the query-orders use-case file:
```typescript
// query-orders-use-cases.ts
export async function getOrder(id) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const result = await orderRepository.getOrderByID(id);
  return result;
}

export async function getAllOrders(criteria) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const result = await orderRepository.queryOrders(criteria);
  return result;
}
```
If you find it valuable, you'll also get a great return for your modest investment: no fancy tooling is needed, and the learning time is close to zero (in fact, you just read one of the longest articles on this matter...). There is also no need to refactor a whole system - rather, gradually implement it per feature.
Once you become accustomed to using it, you'll find that this technique extends well beyond API routes. It's equally beneficial for managing message queue subscriptions and scheduled jobs. Backend aside, use it as the facade of every module or library - the code that is called by the entry file and orchestrates the internals. The same idea can be applied in frontend as well: declare the core actors at the component's top level. Without implementation details, just put the references to the component's event handlers and hooks - now the reader knows about the key events that will drive this component.
You might think this all sounds remarkably straightforward—and it is. My apologies, this article wasn't about cutting-edge technologies. Neither did it cover shiny new dev tooling or AI-based rocket science. In a land where complexity is the key enemy, simple ideas can be more impactful than sophisticated tooling, and the use-case is a powerful and sweet pattern that is meant to live in every piece of software.
]]>
+ node.js
+ use-case
+ clean-architecture
+ javascript
+ tdd
+ workflow
+ domain
+
+
+
+ https://practica.dev/blog/a-compilation-of-outstanding-testing-articles-with-javaScript
+ https://practica.dev/blog/a-compilation-of-outstanding-testing-articles-with-javaScript
+ Sun, 06 Aug 2023 10:00:00 GMT
+
+ What's special about this article?
As a testing consultant, I read tons of testing articles throughout the years. The majority are nice-to-read, casual pieces of content which are not always worth your precious time. Once in a while, not very often, I landed on an article that was shockingly good and could genuinely improve your test writing skills. I've cherry-picked these outstanding articles for you, and added my abstract nearby. Half of these articles are related directly to JavaScript/Node.js; the second half covers ubiquitous testing concepts that are applicable in every language
Why did I find these articles to be outstanding? First, the writing quality is excellent. Second, they deal with the 'new world of testing', not the commonly known 'TDD-ish' stuff but rather modern concepts and tooling
Too busy to read them all? Search for articles that are decorated with a medal 🏅 - these are true masterpieces of content that you never wanna miss
Before we start: If you haven't heard, I launched my comprehensive Node.js testing course a week ago (curriculum here). There are less than 48 hours left for the 🎁 special launch deal
Here they are, 10 outstanding testing articles:
📄 1. 'Selective Unit Testing – Costs and Benefits'
✍️ Author: Steve Sanderson
🔖 Abstract: We all found ourselves at least once in the ongoing and flammable discussion about 'units' vs 'integration'. This article delves into a greater level of specificity and discusses WHEN unit tests shine by considering the costs of writing these tests under various scenarios. Many treat their testing strategy as a static model - a testing technique they always apply regardless of the context. "Always write unit tests against functions", "Write mostly integration tests" are the types of arguments often heard. Conversely, this article suggests that the attractiveness of unit tests should be evaluated based on the costs and benefits per module. The article classifies multiple scenarios where the net value of unit tests is high or low, for example:
If your code is basically obvious – so at a glance you can see exactly what it does – then additional design and verification (e.g., through unit testing) yields extremely minimal benefit, if any
The author also puts a 2x2 model to visualize when the attractiveness of unit tests is high or low
Side note, not part of the article: Personally I (Yoni) always start with component tests, outside-in, cover first the high-level user flow details (a.k.a the testing diamond). Then later once I have functions, I add unit tests based on their net value. This article helped me a lot in classifying and evaluating the benefits of units in various scenarios
📄 2. 'Testing Implementation Details'
✍️ Author: Kent C. Dodds
🔖 Abstract: The author outlines, with a code example, the unavoidable tragic fate of a tester who asserts on implementation details. Put aside the effort of testing so many details - going this route always ends with 'false positives' and 'false negatives' that cloud the tests' reliability. The article illustrates this with a frontend code example, but the takeaway lesson is ubiquitous to any kind of testing
"There are two distinct reasons that it's important to avoid testing implementation details. Tests which test implementation details:
Can break when you refactor application code. False negatives
May not fail when you break application code. False positives"
📄 3. 'Testing Microservices, the sane way'
✍️ Author: Cindy Sridharan
🔖 Abstract: This one is the entire Microservices and distributed modern testing bible packed into a single long article that is also super engaging. I remember when I came across it four years ago, winter time - I spent an hour every day under my blanket before sleep with a smile spread over my face. I clicked on every link, paused after every paragraph to think - a whole new world was opening in front of me. In fact, it was so fascinating that it made me want to specialize in this domain. Fast forward, years later, this is a major part of my work and I enjoy every moment
This paper starts by explaining why E2E tests, unit tests and exploratory QA will fall short in a distributed environment. Not only this - why any kind of coded test won't be enough, and a rich toolbox of techniques is needed. It goes through a handful of modern testing techniques that are unfamiliar to most developers. One of its key parts deals with what should be the canonical developer's testing technique: the author advocates for "big unit tests" (i.e., component tests) as they strike a great balance between the developer's comfort and realism
I coined the term “step-up testing”, the general idea being to test at one layer above what’s generally advocated for. Under this model, unit tests would look more like integration tests (by treating I/O as a part of the unit under test within a bounded context), integration testing would look more like testing against real production, and testing in production looks more like, well, monitoring and exploration. The restructured test pyramid (test funnel?) for distributed systems would look like the following:
Beyond its main scope, whatever type of system you are dealing with - this article will broaden your perspective on testing and expose you to many new ideas that are highly applicable
👓 Read time: > 2 hours (10,500 words with many links)
📄 4. 'How to Unit Test with Node.js?' (JavaScript examples, for beginners)
✍️ Author: Ryan Jones
🔖 Abstract: One single recommendation for beginners: every other article on this list covers advanced testing. This article, and only this one, is meant for testing newbies who are looking to take their first practical steps in this world
This tutorial was chosen from a handful of other alternatives because it's well-written and also relatively comprehensive. It covers the first-steps 'kata' that a beginner should learn first: the test anatomy syntax, the test runner CLI, assertions and asynchronous tests. It goes without saying that this knowledge won't be sufficient for covering a real-world app with testing, but it gets you safely to the next phase. My personal advice: after reading this one, your next step is learning about test doubles (mocking)
🔖 Abstract: The article opens with 'I hear that people feel an uncontrollable urge to write unit tests nowadays. If you are one of those affected, spare a few minutes and consider these reasons for NOT writing unit tests'. Despite these words, the article is not against unit tests as a principle; rather, it highlights when & where unit tests fall short. In these cases, other techniques should be considered. Here is an example: unit tests inherently have a lower return on investment, and the author comes with a sound analogy for this: 'If you are painting a house, you want to start with the biggest brush at hand and spare the tiny brush for the end to deal with fine details. If you begin your QA work with unit tests, you are essentially trying to paint the entire house using the finest Chinese calligraphy brush...'
📄 6. 'Mocking is a Code Smell' (JavaScript examples)
✍️ Author: Eric Elliott
🔖 Abstract: Most of the articles here belong to the 'modern wave of testing'; here is something more 'classic', appealing to TDD lovers or just anyone who needs to write unit tests. This article is about HOW to reduce the amount of mocking (test doubles) in your tests. Not only because mocking is an overhead in test writing, but also because it hints that something might be wrong. In other words, mocking is not necessarily wrong and something to fix right away, but a lot of mocking is a sign of something not ideal. Consider a module that inherits from many others, or a chatty one that collaborates with a handful of other modules to do its job - testing and changing this structure is a burden:
"Mocking is required when our decomposition strategy has failed"
The author goes through a variety of techniques to design more autonomous units, like using pure functions by isolating side effects from the rest of the program logic, using pub/sub, isolating I/O, composing units with patterns like monadic composition, and some more
The overall article tone is balanced. In some parts, it encourages functional programming and techniques that are far from the mainstream - take these few parts with a grain of salt
🔖 Abstract: I love this one so much. The author exemplifies how, unexpectedly, it is sometimes the good developers with their great intentions who write bad tests:
Too often, software developers approach unit testing with the same flawed thinking... They mechanically apply all the “rules” they learned in production code without examining whether they’re appropriate for tests. As a result, they build skyscrapers at the beach
Concrete code examples show how test readability deteriorates once we apply 'skyscraper' thinking, and how to keep it simple. In one part, he demonstrates how thoughtfully violating the DRY principle allows the reader to stay within the test while still keeping the code maintainable. This article alone, in 11 minutes, can greatly improve the tests of developers who tend to write sophisticated tests. If you have someone like this in your team, you now know what to do
📄 8. 'An Overview of JavaScript Testing in 2022' (JavaScript examples)
✍️ Author: Vitali Zaidman
🔖 Abstract: This paper is unique here as it doesn't cover a single topic but is rather a rundown of (almost) all JavaScript testing tools. This allows you to enrich the toolbox in your mind and have more screwdrivers for more types of screws. For example, knowing that there are IDE extensions that show coverage information right within the code might help you boost test adoption in the team, if needed. Knowing that there are solid, free, and open source visual regression tools might encourage you to dip your toes in this water, to name a few examples.
"We reviewed the most trending testing strategies and tools in the web development community and hopefully made it easier for you to test your sites. In the end, the best decisions regarding application architecture today are made by understanding general patterns that are trending in the very active community of developers, and combining them with your own experience and the characteristics of your application."
The author was also kind enough to leave pros/cons next to most tools, so the reader can quickly get a sense of how the various options stack up against each other. The article covers categories like assertion libraries, test runners, code coverage tools, visual regression tools, E2E suites and more
🔖 Abstract: 'Testing in production' is a provocative term that sounds like a risky and careless approach of testing over production instead of verifying the delivery beforehand (yet another case of bad testing terminology). In practice, testing in production doesn't replace coding-time testing; it just adds an additional layer of confidence by safely testing in 3 more phases: deployment, release and post-release. This comprehensive article covers dozens of techniques, some unusual, like traffic shadowing, tap compare and more. More than anything else, it illustrates a holistic testing workflow, building confidence cumulatively from the developer machine until the new version is serving users in production
I’m more and more convinced that staging environments are like mocks - at best a pale imitation of the genuine article and the worst form of confirmation bias.
It’s still better than having nothing - but “works in staging” is only one step better than “works on my machine”.
📄 10. 'Please don't mock me' (JavaScript examples, from JSConf)
🏅 This is a masterpiece
✍️ Author: Justin Searls
🔖 Abstract: This fantastic YouTube talk deals with the Achilles heel of testing: where exactly to mock. The dilemma of where to end the test scope, what should be mocked and what shouldn't - is presumably the most strategic test design decision. Consider, for example, having module A which interacts with module B. If you isolate A by mocking B, A will always pass, even when B's interface has changed and A's code didn't follow. This makes A's tests highly stable but... production will fail in hours. In his talk Justin says:
"A test that never fails is a bad test because it doesn't tell you anything. Design tests to fail"
Then he goes and tackles many other interesting mocking crossroads, with beautiful visuals and tons of insights. Please don't miss this one
Here are a few articles that I wrote. Obviously, I don't 'recommend' my own craft, just checking modestly whether they appeal to you. Together, these articles gained 25,000 GitHub stars - maybe you'll find one of them useful?
+ node.js
+ testing
+ javascript
+ tdd
+ unit
+ integration
+
+
+
+ https://practica.dev/blog/testing-the-dark-scenarios-of-your-nodejs-application
+ Fri, 07 Jul 2023 11:00:00 GMT
+
+ Where the dead-bodies are covered
This post is about tests that are easy to write (5-8 lines typically), that cover dark and dangerous corners of our applications, but are often overlooked
Some context first: How do we test a modern backend? With the testing diamond, of course, by putting the focus on component/integration tests that cover all the layers, including a real DB. With this approach, our tests resemble 99% of production and the user flows, while the development experience is almost as good as with unit tests. Sweet. If this topic is of interest, we've also written a guide with 50 best practices for integration tests in Node.js
But there is a pitfall: most developers write only semi-happy test cases that are focused on the core user flows, like invalid inputs, CRUD operations and various application states. This is indeed the bread and butter, a great start, but a whole area is left uncovered. For example, typical tests don't simulate an unhandled promise rejection that leads to a process crash, nor do they simulate the webserver bootstrap phase that might fail and leave the process idle, or HTTP calls to external services that often end with timeouts and retries. They typically don't cover the health and readiness route, nor the integrity of the OpenAPI spec against the actual routes schema, to name just a few examples. There are many dead bodies covered beyond business logic - things that are sometimes even beyond bugs and rather concern application downtime
Here are a handful of examples that might open your mind to a whole new class of risks and tests
July 2023: My testing course was launched: I've just released a comprehensive testing course that I've been working on for two years. 🎁 It's now on sale, but only for the month of July. Check it out at testjavascript.com
🧟‍♀️ The 'zombie process' test - when the app fails to start but the process stays alive

👉What & so what? - In all of your tests, you assume that the app has already started successfully, lacking a test against the initialization flow. This is a pity because this phase hides some potentially catastrophic failures. First, initialization failures are frequent - many bad things can happen here, like a DB connection failure or a new version that crashes during deployment. For this reason, runtime platforms (like Kubernetes and others) encourage components to signal when they are ready (see readiness probe). Errors at this stage also have a dramatic effect on the app health: if the initialization fails and the process stays alive, it becomes a 'zombie process'. In this scenario, the runtime platform won't realize that something went bad, will keep forwarding traffic to it, and will avoid creating alternative instances. Besides exiting gracefully, you may want to consider logging, firing a metric, and adjusting your /readiness route. Does it work? Only a test can tell!
📝 Code
Code under test, api.js:
```javascript
// A common express server initialization
const startWebServer = () => {
  return new Promise((resolve, reject) => {
    try {
      // A typical Express setup
      expressApp = express();
      defineRoutes(expressApp); // a function that defines all routes
      expressApp.listen(process.env.WEB_SERVER_PORT);
    } catch (error) {
      // log here, fire a metric, maybe even retry and finally:
      process.exit();
    }
  });
};
```
The test:
```javascript
const api = require('./entry-points/api'); // our api starter that exposes the 'startWebServer' function
const sinon = require('sinon'); // a mocking library
const routes = require('./entry-points/routes'); // the module that holds defineRoutes (path assumed)

test('When an error happens during the startup phase, then the process exits', async () => {
  // Arrange
  const processExitListener = sinon.stub(process, 'exit');
  // 👇 Choose a function that is part of the initialization phase and make it fail
  sinon
    .stub(routes, 'defineRoutes')
    .throws(new Error('Cant initialize connection'));

  // Act
  await api.startWebServer();

  // Assert
  expect(processExitListener.called).toBe(true);
});
```
👀 The 'observable error' test - when the error must be logged and monitored correctly

👉What & why - For many, testing errors means checking the exception type or the API response. This leaves one of the most essential parts uncovered - making the error correctly observable. In plain words: ensuring that it's being logged correctly and exposed to the monitoring system. It might sound like an internal thing, implementation testing, but actually it goes directly to a user. Yes, not the end-user, but rather another important one - the ops user who is on-call. What are the expectations of this user? At the very basic level, when a production issue arises, she must see detailed log entries, including stack trace, cause and other properties. This info can save the day when dealing with production incidents. On top of this, in many systems, monitoring is managed separately, concluding about the overall system state using cumulative heuristics (e.g., an increase in the number of errors over the last 3 hours). To support these monitoring needs, the code must also fire error metrics. Even tests that do try to cover these needs take a naive approach by checking that the logger function was called - but hey, does it include the right data? Some write better tests that check the error type that was passed to the logger. Good enough? No! The ops user doesn't care about JavaScript class names but about the JSON data that is sent out. The following test focuses on the specific properties that are being made observable:
📝 Code
```javascript
test('When exception is thrown during request, Then logger reports the mandatory fields', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    status: 'approved',
  };
  const metricsExporterDouble = sinon.stub(metricsExporter, 'fireMetric');
  sinon
    .stub(OrderRepository.prototype, 'addOrder')
    .rejects(new AppError('saving-failed', 'Order could not be saved', 500));
  const loggerDouble = sinon.stub(logger, 'error');

  // Act
  await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  expect(loggerDouble).toHaveBeenCalledWith({
    name: 'saving-failed',
    status: 500,
    stack: expect.any(String),
    message: expect.any(String),
  });
  expect(metricsExporterDouble).toHaveBeenCalledWith('error', {
    errorName: 'example-error',
  });
});
```
👽 The 'unexpected visitor' test - when an uncaught exception meets our code
👉What & why - A typical error flow test falsely assumes two conditions: a valid error object was thrown, and it was caught. Neither is guaranteed; let's focus on the 2nd assumption: it's common for certain errors to be left uncaught. The error might get thrown before your framework error handler is ready, some npm libraries can throw surprisingly from different stacks using timer functions, or you just forgot to set someEventEmitter.on('error', ...), to name a few examples. These errors will find their way to the global process.on('uncaughtException') handler - hopefully, if your code subscribed to it. How do you simulate this scenario in a test? Naively, you may locate a code area that is not wrapped with try-catch and stub it to throw during the test. But here's a catch-22: if you are familiar with such an area - you are likely to fix it and ensure its errors are caught. What do we do then? We can bring to our benefit the fact that JavaScript is 'borderless': if some object can emit an event, we as its subscribers can make it emit this event ourselves. Here's an example:
📝 Code
```javascript
test('When an unhandled exception is thrown, then process stays alive and the error is logged', async () => {
  // Arrange
  const loggerDouble = sinon.stub(logger, 'error');
  const processExitListener = sinon.stub(process, 'exit');
  const errorToThrow = new Error('An error that wont be caught 😳');

  // Act
  process.emit('uncaughtException', errorToThrow); // 👈 Where the magic is

  // Assert
  expect(processExitListener.called).toBe(false);
  expect(loggerDouble).toHaveBeenCalledWith(errorToThrow);
});
```
🕵🏼 The 'hidden effect' test - when the code should not mutate at all
👉What & so what - In common scenarios, the code under test should stop early, like when the incoming payload is invalid or a user doesn't have sufficient credits to perform an operation. In these cases, no DB records should be mutated. Most tests out there in the wild settle with testing the HTTP response only - got back HTTP 400? Great, the validation/authorization probably works. Or does it? The test trusts the code too much; a valid response doesn't guarantee that the code behind behaved as designed. Maybe a new record was added although the user has no permissions? Clearly you need to test this, but how would you test that a record was NOT added? There are two options here: if the DB is purged before/after every test, then just try to perform an invalid operation and check that the DB is empty afterward. If you're not cleaning the DB often (like me, but that's another discussion), the payload must contain some unique and queryable value that you can query later, hoping to get no records. This is how it looks:
📝 Code
```javascript
it('When adding an invalid order, then it returns 400 and NOT retrievable', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    mode: 'draft',
    externalIdentifier: uuid(), // no existing record has this value
  };

  // Act
  const { status: addingHTTPStatus } = await axiosAPIClient.post(
    '/order',
    orderToAdd
  );

  // Assert
  const { status: fetchingHTTPStatus } = await axiosAPIClient.get(
    `/order/externalIdentifier/${orderToAdd.externalIdentifier}`
  ); // Trying to get the order that should have failed
  expect({ addingHTTPStatus, fetchingHTTPStatus }).toMatchObject({
    addingHTTPStatus: 400,
    fetchingHTTPStatus: 404,
  });
  // 👆 Check that no such record exists
});
```
🧨 The 'overdoing' test - when the code should mutate but it's doing too much
👉What & why - This is how a typical data-oriented test looks: first you add some records, then approach the code under test, and finally assert what happens to these specific records. So far, so good. There is one caveat here, though: since the test narrows its focus to specific records, it ignores whether other records were unnecessarily affected. This can be really bad. Here's a short real-life story that happened to my customer: some data access code changed and introduced a bug that updates ALL the system users instead of just one. All tests passed since they focused on a specific record, which was correctly updated - they just ignored the others. How would you test for and prevent this? Here is a nice trick that I was taught by my friend Gil Tayar: in the first phase of the test, besides the main records, add one or more 'control' records that should not get mutated during the test. Then, run the code under test and, besides the main assertion, check also that the control records were not affected:
📝 Code
```javascript
test('When deleting an existing order, Then it should NOT be retrievable', async () => {
  // Arrange
  const orderToDelete = {
    userId: 1,
    productId: 2,
  };
  const deletedOrder = (await axiosAPIClient.post('/order', orderToDelete)).data
    .id; // We will delete this soon
  const orderNotToBeDeleted = orderToDelete;
  const notDeletedOrder = (
    await axiosAPIClient.post('/order', orderNotToBeDeleted)
  ).data.id; // We will not delete this

  // Act
  await axiosAPIClient.delete(`/order/${deletedOrder}`);

  // Assert
  const { status: getDeletedOrderStatus } = await axiosAPIClient.get(
    `/order/${deletedOrder}`
  );
  const { status: getNotDeletedOrderStatus } = await axiosAPIClient.get(
    `/order/${notDeletedOrder}`
  );
  expect(getNotDeletedOrderStatus).toBe(200);
  expect(getDeletedOrderStatus).toBe(404);
});
```
🕰 The 'slow collaborator' test - when the other HTTP service times out
👉What & why - When your code approaches other services/microservices via HTTP, savvy testers minimize end-to-end tests because these tests lean toward happy paths (it's harder to simulate scenarios). This mandates using some mocking tool to act like the remote service, for example, tools like nock or wiremock. These tools are great, only some use them naively and check mainly that outbound calls were indeed made. What if the other service is not available in production, or is slower and times out occasionally (one of the biggest risks of Microservices)? While you can't wholly save this transaction, your code should do its best given the situation: retry, or at least log and return the right status to the caller. All the network mocking tools allow simulating delays, timeouts and other 'chaotic' scenarios. The question left is how to simulate a slow response without having slow tests. You may use fake timers and trick the system into believing that a few seconds passed in a single tick (option 1 below). If you're using nock, it offers an interesting feature to simulate timeouts quickly: the .delay function simulates slow responses; nock realizes immediately if the delay is higher than the HTTP client timeout and throws a timeout event right away without waiting (option 2, sketched after the first example)
📝 Code
```javascript
// In this example, our code accepts new Orders and while processing them approaches the Users Microservice
test('When users service times out, then return 503 (option 1 with fake timers)', async () => {
  // Arrange
  const clock = sinon.useFakeTimers();
  config.HTTPCallTimeout = 1000; // Set a timeout for outgoing HTTP calls
  nock(`${config.userServiceURL}/user/`)
    .get('/1', () => clock.tick(2000)) // Reply delay is bigger than the configured timeout 👆
    .reply(200);
  const loggerDouble = sinon.stub(logger, 'error');
  const orderToAdd = {
    userId: 1,
    productId: 2,
    mode: 'approved',
  };

  // Act
  // 👇 try to add a new order, which should fail due to the User service not being available
  const response = await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  // 👇 At least our code does its best given this situation
  expect(response.status).toBe(503);
  expect(loggerDouble.lastCall.firstArg).toMatchObject({
    name: 'user-service-not-available',
    stack: expect.any(String),
    message: expect.any(String),
  });
});
```
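For completeness, here is a sketch of the second option, leaning on nock's .delay (names like config.HTTPCallTimeout are reused from the example above; the exact setup is an assumption):

```javascript
// Option 2: nock simulates the slow response; since the delay exceeds the
// client timeout, nock emits the timeout error right away - no real waiting
test('When users service times out, then return 503 (option 2 with nock delay)', async () => {
  // Arrange
  config.HTTPCallTimeout = 1000; // Outgoing HTTP calls time out after 1 second
  nock(`${config.userServiceURL}/user/`)
    .get('/1')
    .delay(2000) // 👈 Longer than the configured client timeout
    .reply(200);
  const orderToAdd = { userId: 1, productId: 2, mode: 'approved' };

  // Act
  const response = await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  expect(response.status).toBe(503);
});
```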
💊 The 'poisoned message' test - when the message consumer gets an invalid payload that might put it in stagnation
👉What & so what - When testing flows that start or end in a queue, I bet you're going to bypass the message queue layer, where the code and libraries consume the queue, and approach the logic layer directly. Yes, it makes things easier, but it leaves a class of risks uncovered. For example, what if the logic part throws an error or the message schema is invalid, but the message queue consumer fails to translate this exception into a proper message queue action? For example, the consumer code might fail to reject the message or increment the number of attempts (depending on the type of queue that you're using). When this happens, the message will enter a loop where it is always served again and again. Since this will apply to many messages, things can get really bad as the queue gets highly saturated. For this reason, this syndrome is called the 'poisoned message'. To mitigate this risk, the tests' scope must include all the layers, like you probably do when testing against APIs. Unfortunately, this is not as easy as testing with a DB, because message queues are flaky. Here is why
When testing with real queues, things get curiouser and curiouser: tests from different processes will steal messages from each other, purging queues is harder than you might think (e.g., SQS demands 60 seconds to purge queues), to name a few challenges that you won't find when dealing with a real DB
Here is a strategy that works for many teams and holds a small compromise - use a fake in-memory message queue. By 'fake' I mean something simplistic that acts like a stub/spy and does nothing but tell when certain calls are made (e.g., consume, delete, publish). You might find reputable fakes/stubs for your own message queue, like this one for SQS, or you can easily code one yourself. No worries, I'm not in favour of maintaining testing infrastructure myself - this proposed component is extremely simple and unlikely to surpass 50 lines of code (see the example below). On top of this, whether using a real or fake queue, one more thing is needed: create a convenient interface that tells the test when certain things happened, like when a message was acknowledged/deleted or a new message was published. Without this, the test never knows when certain events happened and leans toward quirky techniques like polling. Having this setup, the test will be short and flat, and you can easily simulate common message queue scenarios like out-of-order messages, batch reject, duplicated messages and, in our example, the poisoned message scenario (using RabbitMQ):
📝 Code
Create a fake message queue that does almost nothing but record calls, see full example here
```javascript
class FakeMessageQueueProvider extends EventEmitter {
  // Implement here
  publish(message) {}
  consume(queueName, callback) {}
}
```
Make your message queue client accept a real or fake provider
```javascript
class MessageQueueClient extends EventEmitter {
  // Pass to it a fake or real message queue
  constructor(customMessageQueueProvider) {}
  publish(message) {}
  consume(queueName, callback) {}
  // Simple implementation can be found here:
  // https://github.com/testjavascript/nodejs-integration-tests-best-practices/blob/master/example-application/libraries/fake-message-queue-provider.js
}
```
Expose a convenient function that tells when certain calls were made
```javascript
const FakeMessageQueueProvider = require('./libs/fake-message-queue-provider');
const MessageQueueClient = require('./libs/message-queue-client');
const newOrderService = require('./domain/newOrderService');

test('When a poisoned message arrives, then it is being rejected back', async () => {
  // Arrange
  const messageWithInvalidSchema = { nonExistingProperty: 'invalid❌' };
  const messageQueueClient = new MessageQueueClient(
    new FakeMessageQueueProvider()
  );
  // Subscribe to new messages and pass the handler function
  messageQueueClient.consume('orders.new', newOrderService.addOrder);

  // Act
  await messageQueueClient.publish('orders.new', messageWithInvalidSchema);
  // Now all the layers of the app will get stretched 👆, including logic and message queue libraries

  // Assert
  await messageQueueClient.waitFor('reject', { howManyTimes: 1 });
  // 👆 This tells us that eventually our code asked the message queue client to reject this poisoned message
});
```
📦 The 'misleading package' test - when the tested code differs from the published artifact

👉What & why - When publishing a library to npm, all your tests might easily pass BUT... the same functionality will fail on the end-user's computer. How come? Tests are executed against the local developer files, but the end-user is only exposed to artifacts that were built. See the mismatch here? After running the tests, the package files are transpiled (I'm looking at you, babel users), zipped and packed. If a single file is excluded due to .npmignore or a polyfill is not added correctly, the published code will lack mandatory files
📝 Code
Consider the following scenario: you're developing a library, and you wrote this code:
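For illustration, a minimal sketch of how such a library might look (the exact file contents are an assumption; only calculate.js and the files array come from the story):

```javascript
// index.js - the package entry point, re-exporting from calculate.js
const { calculate } = require('./calculate');

module.exports = { calculate };
```

And a package.json whose "files" allowlist forgot to mention calculate.js:

```json
{
  "name": "my-package",
  "version": "1.0.0",
  "main": "index.js",
  "files": ["index.js"]
}
```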
See, 100% coverage, all tests pass locally and in the CI ✅ - it just won't work in production 👹. Why? Because you forgot to include calculate.js in the package.json files array 👆
What can we do instead? We can test the library as its end-users do. How? Publish the package to a local registry like verdaccio, and let the tests install and approach the published code. Sounds troublesome? Judge for yourself 👇
📝 Code
```javascript
// global-setup.js

// 1. Setup the in-memory NPM registry, one function, that's it! 🔥
await setupVerdaccio();

// 2. Building our package
await exec('npm', ['run', 'build'], {
  cwd: packagePath,
});

// 3. Publish it to the in-memory registry
await exec('npm', ['publish', '--registry=http://localhost:4873'], {
  cwd: packagePath,
});

// 4. Installing it in the consumer directory
await exec('npm', ['install', 'my-package', '--registry=http://localhost:4873'], {
  cwd: consumerPath,
});

// Test file in the consumerPath

// 5. Test the package 🚀
test('should succeed', async () => {
  const { fn1 } = await import('my-package');
  expect(fn1()).toEqual(1);
});
```
Testing different versions of a peer dependency you support - say your package supports React 16 to 18; you can now test that
You want to test ESM and CJS consumers
If you have a CLI application, you can test it like your users do
Making sure all the voodoo magic in that babel file is working as expected
🗞 The 'broken contract' test - when the code is great but its corresponding OpenAPI docs lead to a production bug
👉What & so what - Quite confidently, I'm sure that almost no team tests their OpenAPI correctness. "It's just documentation", "we generate it automatically based on code" are the typical beliefs behind this. Let me show you how this auto-generated documentation can be wrong and lead not only to frustration but also to a bug. In production.
Consider the following scenario: you're requested to return an HTTP error status code if an order is duplicated, but you forget to update the OpenAPI specification with this new HTTP status response. While some frameworks can update the docs with new fields, none can realize which errors your code throws; this labour is always manual. On the other side of the line, the API client is doing everything just right, going by the spec that you published, adding orders with some duplication because the docs don't forbid doing so. Then, BOOM, production bug -> the client crashes and shows an ugly unknown error message to the user. This type of failure is called the 'contract' problem: two parties interact, each has code that works perfectly, they just operate under different specs and assumptions. While there are fancy, sophisticated and exhaustive solutions to this challenge (e.g., PACT), there are also leaner approaches that get you covered easily and quickly (at the price of covering fewer risks).
The following sweet technique is based on libraries (jest, mocha) that listen to all network responses, compare the payload against the OpenAPI document, and if any deviation is found - make the test fail with a descriptive error. With this new weapon in your toolbox and almost zero effort, another risk is ticked off. It's a pity that these libs can't also assert against the incoming requests, to tell you that your tests use the API wrongly. One small caveat and an elegant solution: these libraries dictate putting an assertion statement in every test - expect(response).toSatisfyApiSpec() - a bit tedious and reliant on human discipline. You can do better if your HTTP client supports a plugin/hook/interceptor, by putting this assertion in a single place that will apply to all the tests:
The OpenAPI doc doesn't document HTTP status '409'; no framework knows to update the OpenAPI doc based on thrown exceptions
"responses":{ "200":{ "description":"successful", } , "400":{ "description":"Invalid ID", "content":{} },// No 409 in this list😲👈 }
The test code
```javascript
const jestOpenAPI = require('jest-openapi');
jestOpenAPI('../openapi.json');

test('When an order with a duplicated coupon is added, then 409 error should get returned', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    couponId: uuid(),
  };
  await axiosAPIClient.post('/order', orderToAdd);

  // Act
  // We're adding the same coupon twice 👇
  const receivedResponse = await axios.post('/order', orderToAdd);

  // Assert
  expect(receivedResponse.status).toBe(409);
  expect(receivedResponse).toSatisfyApiSpec();
  // This 👆 will throw if the API response, body or status, is different than what is stated in the OpenAPI
});
```
Trick: If your HTTP client supports any kind of plugin/hook/interceptor, put the following code in 'beforeAll'. This covers all the tests against OpenAPI mismatches
```javascript
beforeAll(() => {
  axios.interceptors.response.use((response) => {
    expect(response).toSatisfyApiSpec();
    // With this 👆, add nothing to the tests - each will fail if the response deviates from the docs
    return response;
  });
});
```
The examples above were not meant to be just a checklist of 'don't forget' test cases, but rather a fresh mindset on what tests could cover for you. Modern tests are not just about functions or user flows, but about any risk that might visit your production. This is doable only with component/integration tests, never with unit or end-to-end tests. Why? Because unlike unit tests, you need all the parts to play together (e.g., the DB migration file, the DAL layer and the error handler all together). Unlike E2E, you have the power to simulate in-process scenarios that demand some tweaking and mocking. Component tests allow you to include many production moving parts early, on your machine. I like calling this 'production-oriented development'
We work in two parallel paths: enriching the supported best practices to make the code more production-ready, and at the same time enhancing the existing code based on community feedback
Every request now has its own store of variables; you may assign information at the request level so that any code called from this specific request has access to these variables. For example, for storing the user permissions. One special variable that is stored is 'request-id', a unique UUID per request (also called correlation-id). The logger will automatically emit this with every log entry. We use the built-in AsyncLocalStorage for this task
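Here is a minimal sketch of the idea (the middleware and logging function below are illustrative, not Practica's actual code):

```javascript
const { AsyncLocalStorage } = require('node:async_hooks');
const { randomUUID } = require('node:crypto');

const requestContext = new AsyncLocalStorage();

// Express-style middleware: every piece of code called from this request
// shares the same store, with no need to pass it around explicitly
app.use((req, res, next) => {
  requestContext.run(new Map([['requestId', randomUUID()]]), next);
});

// Deeper in the call chain, e.g., inside the logger
function logInfo(message) {
  const requestId = requestContext.getStore()?.get('requestId');
  console.log(JSON.stringify({ requestId, message }));
}
```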
Although a Dockerfile may contain only 10 lines, it's easy and common to include 20 mistakes in this short artifact. For example, .npmrc secrets are commonly leaked, vulnerable base images are used, and other typical mistakes are made. Our Dockerfile follows the best practices from this article and already applies 90% of the guidelines
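To illustrate two of those guidelines (a sketch only, not Practica's actual Dockerfile): a multi-stage build that mounts .npmrc as a BuildKit secret so tokens never land in an image layer, and that drops root privileges:

```dockerfile
FROM node:18-slim AS build
WORKDIR /app
COPY package*.json ./
# The token is available during 'npm ci' but is never written to a layer
RUN --mount=type=secret,id=npmrc,target=/app/.npmrc npm ci --omit=dev
COPY . .

FROM node:18-slim
WORKDIR /app
COPY --from=build /app ./
# Run as the non-root 'node' user that the base image ships with
USER node
CMD ["node", "server.js"]
```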
Prisma is an emerging ORM with great type-safety support and awesome DX. We will keep Sequelize as our default ORM, while Prisma will be an optional choice using the flag: --orm=prisma
Why did we add it to our tools basket, and why is Sequelize still the default? We summarized all of our thoughts and data in this blog post
+ node.js
+ express
+ practica
+ prisma
+
+
+
+ https://practica.dev/blog/is-prisma-better-than-your-traditional-orm
+ Wed, 07 Dec 2022 11:00:00 GMT
+
+ Intro - Why discuss yet another ORM (or the man who had a stain on his fancy suit)?
Betteridge's law of headlines suggests that a 'headline that ends in a question mark can be answered by the word NO'. Will this article follow this rule?
Imagine an elegant businessman (or woman) walking into a building, wearing a fancy tuxedo and a luxury watch wrapped around his wrist. He smiles and waves all over to say hello while people around are staring admiringly. You get a little closer, then, shockingly, while standing nearby it's hard to ignore a bold dark stain on his white shirt. What a dissonance - suddenly all of that glamour is stained
Like this businessman, Node is highly capable and popular, and yet, in certain areas, its offering basket is stained with inferior offerings. One of these areas is the ORM space. "I wish we had something like (Java) Hibernate or (.NET) Entity Framework" are common words heard from Node developers. What about existing mature ORMs like TypeORM and Sequelize? We owe so much to these maintainers, and yet, the produced developer experience and the level of maintenance just don't feel delightful - some may even say mediocre. At least so I believed before writing this article...
From time to time, a shiny new ORM is launched, and there is hope. Then it's soon realized that these new emerging projects are more of the same, if they survive at all. Until one day, Prisma ORM arrived, surrounded with glamour: it's gaining tons of attention all over, producing fantastic content, being used by respectable frameworks and... raised $40,000,000 (40 million) to build the next-generation ORM. Is it the 'Ferrari' ORM we've been waiting for? Is it a game changer? If you're the 'no ORM for me' type, will this one make you convert your religion?
In Practica.js (the Node.js starter based on Node.js best practices with 83,000 stars) we aim to make the best decisions for our users. The Prisma hype made us stop for a second, evaluate its unique offering and conclude whether we should upgrade our toolbox
This article is certainly not an 'ORM 101' but rather a spotlight on specific dimensions in which Prisma aims to shine or struggles. It's compared against the two most popular Node.js ORMs - TypeORM and Sequelize. Why not others? Why weren't other promising contenders like MikroORM covered? Just because they are not as popular yet, and maturity is a critical trait of ORMs
Ready to explore how good Prisma is and whether you should throw away your current tools?
Just before delving into the strategic differences, for the benefit of those unfamiliar with Prisma - here is a quick 'hello-world' workflow with Prisma ORM. If you're already familiar with it - skipping to the next section sounds sensible. Simply put, Prisma dictates 3 key steps to get our ORM code working:
A. Define a model - Unlike almost any other ORM, Prisma brings a unique language (DSL) for modeling the database-to-code mapping. This proprietary syntax aims to express these models with minimum clutter (i.e., TypeScript generics and verbose code). Worried about having intellisense and validation? A well-crafted vscode extension gets you covered. In the following example, the prisma.schema file describes a DB with an Order table that has a one-to-many relation with a Country table:
```prisma
// prisma.schema file
model Order {
  id                 Int     @id @default(autoincrement())
  userId             Int?
  paymentTermsInDays Int?
  deliveryAddress    String? @db.VarChar(255)
  country            Country @relation(fields: [countryId], references: [id])
  countryId          Int
}

model Country {
  id    Int     @id @default(autoincrement())
  name  String  @db.VarChar(255)
  Order Order[]
}
```
B. Generate the client code - Another unusual technique: to get the ORM code ready, one must invoke Prisma's CLI and ask for it:
npx prisma generate
Alternatively, if you wish to have your DB ready and the code generated with one command, just fire:
npx prisma migrate dev
This will generate migration files that you can execute later in production, plus the TypeScript ORM code based on the model. The generated code is located by default under '[root]/node_modules/.prisma/client'. Every time the model changes, the code must be re-generated. While most ORMs name this code 'repository' or 'entity' or 'active record', interestingly, Prisma calls it a 'client'. This reflects part of its unique philosophy, which we will explore later
C. All good, use the client to interact with the DB - The generated client has a rich set of functions and types for your DB interactions. Just import the ORM/client code and use it:
```typescript
import { PrismaClient } from '.prisma/client';

const prisma = new PrismaClient();

// A query example
await prisma.order.findMany({
  where: {
    paymentTermsInDays: 30,
  },
  orderBy: {
    id: 'asc',
  },
});
// Use the same client for insertion, deletion, updates, etc.
```
That's the nuts and bolts of Prisma. Is it different and better?
When comparing options, before outlining differences, it's useful to state what is actually similar among these products. Here is a partial list of features that TypeORM, Sequelize and Prisma all support
Casual queries with sorting, filtering, distinct, group by, 'upsert' (update or create), etc.
Raw queries
Full text search
Association/relations of any type (e.g., many to many, self-relation, etc)
Aggregation queries
Pagination
CLI
Transactions
Migration & seeding
Hooks/events (called middleware in Prisma)
Connection pool
Based on various community benchmarks, no dramatic performance differences
All have a huge amount of stars and downloads
Overall, I found TypeORM and Sequelize to be a little more feature-rich. For example, the following features are missing only in Prisma: GIS queries, DB-level custom constraints, DB replication, soft delete, caching, exclude queries and some more
With that, shall we focus on what really sets them apart and makes a difference?
1. Type safety

💁♂️ What is it about: ORMs' life has not been easy since the rise of TypeScript, to say the least. The need to support typed models/queries/etc. yields a lot of developer sweat. Sequelize, for example, struggles to stabilize a TypeScript interface and by now offers 3 different syntaxes + one external library (sequelize-typescript) that offers yet another style. Look at the syntax below; this feels like an afterthought - a library that was not built for TypeScript and now tries to squeeze it in somehow. Despite the major investment, both Sequelize and TypeORM offer only partial type safety. Simple queries do return typed objects, but other common corner cases like attributes/projections leave you with brittle strings. Here are a few examples:
```typescript
// Sequelize pesky TypeScript interface
type OrderAttributes = {
  id: number;
  price: number;
  // other attributes...
};

type OrderCreationAttributes = Optional<OrderAttributes, 'id'>;

// 😯 Isn't this a weird syntax?
class Order extends Model<InferAttributes<Order>, InferCreationAttributes<Order>> {
  declare id: CreationOptional<number>;
  declare price: number;
}
```
```typescript
// Sequelize loose query types
await getOrderModel().findAll({
  where: { noneExistingField: 'noneExistingValue' }, // 👍 TypeScript will warn here
  attributes: ['none-existing-field', 'another-imaginary-column'], // No errors here although these columns do not exist
  include: 'no-such-table', // 😯 no errors here although this table doesn't exist
});
await getCountryModel().findByPk('price'); // 😯 No errors here although the price column is not a primary key
```
```typescript
// TypeORM loose query
const ordersOnSales: Post[] = await orderRepository.find({
  where: { onSale: true }, // 👍 TypeScript will warn here
  select: ['id', 'price'],
});
console.log(ordersOnSales[0].userId); // 😯 No errors here although the 'userId' column is not part of the returned object
```
Isn't it ironic that a library called TypeORM bases its queries on strings?
🤔 How Prisma is different: It takes a totally different approach by generating per-project client code that is fully typed. This client embodies types for everything: every query, relations, sub-queries - everything (except migrations). While other ORMs struggle to infer types from discrete models (including associations that are declared in other files), Prisma's offline code generation is easier: it can look through the entire DB schema and relations, use custom generation code and build an almost perfect TypeScript experience. Why 'almost' perfect? For some reason, Prisma advocates using plain SQL for migrations, which might result in a discrepancy between the code models and the DB schema. Other than that, this is how Prisma's client brings end-to-end type safety:
```typescript
await prisma.order.findMany({
  where: {
    noneExistingField: 1, // 👍 TypeScript error here
  },
  select: {
    noneExistingRelation: { // 👍 TypeScript error here
      select: { id: true },
    },
    noneExistingField: true, // 👍 TypeScript error here
  },
});

await prisma.order.findUnique({
  where: { price: 50 }, // 👍 TypeScript error here
});
```
📊 How important: TypeScript support across the board is valuable mostly for DX. Luckily, we have another safety net: the project's tests. Since tests are mandatory, having build-time type verification is important but not a life saver
2. Abstraction level - how aware is the developer of the underlying queries?

💁♂️ What is it about: Many avoid ORMs, preferring to interact with the DB using lower-level techniques. One of their arguments is against the efficiency of ORMs: since the generated queries are not immediately visible to the developers, wasteful queries might get executed unknowingly. While all ORMs provide syntactic sugar over SQL, there are subtle differences in the level of abstraction. The more the ORM syntax resembles SQL, the more likely the developers will understand their own actions
For example, TypeORM's query builder looks like SQL broken into convenient functions
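A sketch of the contrast (the fields reuse the Order/Country models from the schema above; the TypeORM data source and entity definitions are assumed):

```typescript
// TypeORM query builder - reads almost like the SQL it produces
const orders = await dataSource
  .getRepository(Order)
  .createQueryBuilder('order')
  .leftJoinAndSelect('order.country', 'country')
  .where('order.paymentTermsInDays = :days', { days: 30 })
  .getMany();
```

A roughly equivalent Prisma query hides the join entirely:

```typescript
// Prisma - two related tables are fetched, yet no join is in sight
const orders = await prisma.order.findMany({
  where: { paymentTermsInDays: 30 },
  include: { country: true },
});
```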
No join is mentioned in the Prisma sketch above, although it fetches records from two related tables (order and country). Could you guess what SQL is being produced here? How many queries? One, right - a simple join? Surprise: actually, two queries are made. Prisma fires one query per table here, as the join logic happens on the ORM client side (not inside the DB). But why?? In some cases, mostly where there is a lot of repetition in the DB cartesian join, querying each side of the relation is more efficient. But in other cases, it's not. Prisma arbitrarily chose what they believe will perform better in most cases. I checked: in my case, it's slower than doing a one-join query on the DB side. As a developer, I would miss this deficiency due to the high-level syntax (no join is mentioned). My point is, Prisma's sweet and simple syntax might be a blessing for developers who are brand new to databases and aim to achieve a working solution in a short time. For the longer term, having full awareness of the DB interactions is helpful; other ORMs encourage this awareness a little better
📊 How important: Any ORM will hide SQL details from its users - without developer awareness, no ORM will save the day
3. Performance

💁♂️ What is it about: Speak to an ORM antagonist and you'll hear a common, sensible argument: ORMs are much slower than a 'raw' approach. To an extent, this is a legit observation, as most comparisons will show non-negligible differences between the raw/query-builder and ORM approaches.
Example: a direct insert against the PG driver is much shorter (source)
It should also be noted that these benchmarks don't tell the entire story - on top of raw queries, every solution must build a mapper layer that maps the raw data to JS objects, nests the results, casts types, and more. This work is included within every ORM but not shown in benchmarks for the raw option. In reality, every team which doesn't use an ORM would have to build their own small "ORM", including a mapper, which will also impact performance (see the sketch below)
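To make this hidden cost concrete, a sketch (using the node-postgres driver and the Order/Country tables from the example above; column names are assumptions) of the hand-rolled mapping every 'no-ORM' team ends up writing:

```javascript
const { rows } = await pgClient.query(
  `SELECT o.id, o.payment_terms_in_days, c.id AS country_id, c.name AS country_name
   FROM "order" o JOIN country c ON c.id = o.country_id`
);

// Casting and nesting the flat rows into domain objects - work an ORM does for you
const orders = rows.map((row) => ({
  id: row.id,
  paymentTermsInDays: row.payment_terms_in_days,
  country: { id: row.country_id, name: row.country_name },
}));
```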
🤔 How Prisma is different: It was my hope to see some magic here - eating the ORM cake without counting the calories, seeing Prisma achieve an almost 'raw' query speed. I had some good and logical reasons for this hope: Prisma uses a DB client built with Rust. Theoretically, it could serialize and nest objects faster (in reality, this happens on the JS side). It was also built from the ground up and could build on the knowledge piled up in the ORM space over the years. Also, since it returns POJOs only (see bullet 'No Active Record here!'), no time should be spent on decorating objects with ORM fields
You already got it - this hope was not fulfilled. Going by every community benchmark (one, two, three), Prisma at best is not faster than the average ORM. What is the reason? I can't tell exactly, but it might be due to the complicated system that must support Go, future languages, MongoDB and other non-relational DBs
Example: Prisma is not faster than others. It should be noted that in other benchmarks Prisma scores higher and shows an 'average' performance (source)
📊 How important: ORM users are expected to live peacefully with inferior performance; for many systems it won't make a big difference. With that, 10%-30% performance differences between various ORMs are not a key factor
4. No Active Record here!

💁♂️ What is it about: Node in its early days was heavily inspired by Ruby (e.g., testing "describe"); many great patterns were embraced, but Active Record is not among the successful ones. What is this pattern about, in a nutshell? Say you deal with Orders in your system; with Active Record, an Order object/class will hold both the entity properties, possibly also some logic functions, and also CRUD functions. Many find this pattern awful. Why? Ideally, when coding some logic/flow, one should not keep her mind busy with side effects and DB narratives. It also might be that accessing some property unconsciously invokes a heavy DB call (i.e., lazy loading). If that's not enough, in case of heavy logic, unit tests might be in order (i.e., read 'selective unit tests') - and it's going to be much harder to write unit tests against code that interacts with the DB. In fact, all of the respectable and popular architectures (e.g., DDD, clean, 3-tiers, etc.) advocate 'isolating the domain' - separating the core/logic of the system from the surrounding technologies. With all of that said, both TypeORM and Sequelize support the Active Record pattern, which is displayed in many examples within their documentation. Both also support other, better patterns like the data mapper (see below), but they still open the door to doubtful patterns
```typescript
// TypeORM active records 😟
@Entity()
class Order extends BaseEntity {
  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  price: number;

  @ManyToOne(() => Product, (product) => product.order)
  products: Product[];

  // Other columns here
}

function updateOrder(orderToUpdate: Order) {
  if (orderToUpdate.price > 100) {
    // some logic here
    orderToUpdate.status = 'approval';
    orderToUpdate.save();
    orderToUpdate.products.forEach((product) => {
      // ...
    });
    orderToUpdate.usedConnection = ?
  }
}
```
🤔 How Prisma is different: The better alternative is the data mapper pattern. It acts as a bridge, an adapter, between simple object notations (domain objects with properties) to the DB language, typically SQL. Call it with a plain JS object, POJO, get it saved in the DB. Simple. It won't add functions to the result objects or do anything beyond returning pure data, no surprising side effects. In its purest sense, this is a DB-related utility and completely detached from the business logic. While both Sequelize and TypeORM support this, Prisma offers only this style - no room for mistakes.
```typescript
// Prisma approach with a data mapper 👍

// This was generated automatically by Prisma
type Order = {
  id: number;
  price: number;
  products: Product[];
  // Other columns here
};

function updateOrder(orderToUpdate: Order) {
  if (orderToUpdate.price > 100) {
    orderToUpdate.status = 'approval';
    prisma.order.update({ where: { id: orderToUpdate.id }, data: orderToUpdate });
    // Side effect 👆, but an explicit one. The thoughtful coder will move this
    // to another function. Since it's happening outside, mocking is possible 👍
    orderToUpdate.products.forEach((product) => {
      // No lazy loading, the data is already here 👍
    });
  }
}
```
In Practica.js we take it one step further and put the prisma models within the "DAL" layer and wrap it with the repository pattern. You may glimpse into the code here, this is the business flow that calls the DAL layer
📊 How important: On the one hand, this is a key architectural principle to follow, but on the other hand, most ORMs allow doing it right anyway
5. Documentation

💁♂️ What is it about: TypeORM and Sequelize documentation is mediocre, though TypeORM's is a little better. Based on my personal experience, they do get a little better over the years, but by no means do they deserve to be called "good" or "great". For example, if you seek to learn about 'raw queries' - Sequelize offers a very short page on this matter, while TypeORM's info is spread across multiple other pages. Looking to learn about pagination? I couldn't find Sequelize documentation; TypeORM has a short explanation, 150 words only
🤔 How Prisma is different: Prisma documentation rocks! See their documents on similar topics: raw queries and pagination - thousands of words and dozens of code examples. The writing itself is also great; it feels like professional writers were involved
The chart above shows how comprehensive Prisma's docs are (obviously this by itself doesn't prove quality)
📊 How important: Great docs are a key to awareness and avoiding pitfalls
6. Observability

💁♂️ What is it about: Good chances are (say about 99.9%) that you'll find yourself diagnosing slow queries in production or other DB-related quirks. What can you expect from traditional ORMs in terms of observability? Mostly logging. Sequelize provides both logging of query duration and programmatic access to the connection pool state ({size, available, using, waiting}). TypeORM provides only logging of queries that surpass a pre-defined duration threshold. This is better than nothing, but assuming you don't read production logs 24/7, you'd probably need more than logging - an alert to fire when things seem faulty. To achieve this, it's your responsibility to bridge this info to your preferred monitoring system. Another logging downside here is verbosity - we need to emit tons of information to the logs when all we really care about is the average duration. Metrics can serve this purpose much better, as we're about to see soon with Prisma
What if you need to dig into which specific part of the query is slow? Unfortunately, there is no breakdown of the query phases' duration - it's left to you as a black box
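A sketch of what this looks like in practice (under the assumption of Sequelize's documented logging and benchmark options; the logger is illustrative):

```javascript
// Sequelize - logging various DB information
const { Sequelize } = require('sequelize');

const sequelize = new Sequelize('db', 'user', 'password', {
  dialect: 'postgres',
  benchmark: true, // Pass each query's duration to the logging function
  logging: (sql, durationMs) => {
    // Bridging this info to your monitoring system is on you
    logger.info({ sql, durationMs });
  },
});

// Programmatic access to the connection pool state
const { size, available, using, waiting } = sequelize.connectionManager.pool;
```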
Logging each query in order to spot trends and anomalies in the monitoring system
🤔 How Prisma is different: Since Prisma also targets enterprises, it must bring strong ops capabilities. Beautifully, it packs support for both metrics and OpenTelemetry tracing! For metrics, it generates custom JSON with metric keys and values, so anyone can adapt this to any monitoring system (e.g., CloudWatch, statsD, etc.). On top of this, it produces out-of-the-box metrics in Prometheus format (one of the most popular monitoring platforms). For example, the metric 'prisma_client_queries_duration_histogram_ms' provides the average query length in the system over time. What is even more impressive is the tracing support - it feeds your OpenTelemetry collector with spans that describe the various phases of every query. For example, it might help you realize where the bottleneck in the query pipeline is: is it the DB connection, the query itself or the serialization?
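A sketch of consuming these metrics (this assumes the 'metrics' preview feature is enabled in the Prisma schema's generator block):

```javascript
const { PrismaClient } = require('@prisma/client');
const prisma = new PrismaClient();

// JSON format - adapt it to any monitoring system (CloudWatch, statsD, etc.)
const metrics = await prisma.$metrics.json();

// Or Prometheus format, ready to be scraped
const prometheusMetrics = await prisma.$metrics.prometheus();
```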
Prisma visualizes the various query phases' duration with OpenTelemetry
🏆 Is Prisma doing better?: Definitely
📊 How important: It goes without saying how impactful observability is; however, filling this gap in the other ORMs would demand no more than a few days of work
7. Continuity - will it be here with us in 2024/2025
💁♂️ What is it about: We live quite peacefully with the risk that one of our dependencies will disappear. With an ORM, though, this risk demands special attention because our buy-in is higher (i.e., it's harder to replace) and maintaining one has proven to be harder. Just look at a handful of once-successful ORMs: objection.js, waterline, bookshelf - all of these respectable projects have had 0 commits in the past month. The single maintainer of objection.js announced that he won't work on the project anymore. This high churn rate is not surprising given the huge amount of moving parts to maintain, the gazillion corner cases and the modest 'budget' OSS projects live with. Looking at OpenCollective shows that Sequelize and TypeORM are funded with ~$1,500 a month on average. This is barely enough to cover a daily Starbucks cappuccino and croissant ($6.95 x 365) for 5 maintainers. Nothing contrasts this model more than a startup company that just raised its series B - Prisma is funded with $40,000,000 (40 million) and recruited 80 people! Shouldn't this inspire us with high confidence about their continuity? I'll surprisingly suggest that quite the opposite is true
See, an OSS ORM has to get over one huge hump, but a startup company must pass through TWO. The OSS project will struggle to achieve the critical mass of features, including some high technical barriers (e.g., TypeScript support, ESM). This typically lasts years, but once it does - the project can focus mostly on maintenance and step out of the danger zone. The good news for TypeORM and Sequelize is that they already did! Both struggled to keep their heads above the water - there were rumors in the past that TypeORM was not maintained anymore - but they managed to get through this hump. I counted: both projects had approximately ~2,000 PRs in the past 3 years! Going by repo-tracker, each sees multiple commits every week. They both have vibrant traction, and the majority of features you would expect from an ORM. TypeORM even supports beyond-the-basics features like multiple data sources and caching. It's unlikely that now, once they have reached the promised land, they will fade away. It might happen - there are no guarantees in the OSS galaxy - but the risk is low
🤔 How Prisma is different: Prisma lags a little behind in terms of features, but with a budget of $40M there are good reasons to believe that they will pass the first hump, achieving a critical mass of features. I'm more concerned with the second hump - showing revenues in 2 years or saying goodbye. As a company backed by venture capital, the model is clear and cruel: in order to secure their next round, series B or C (depending on whether the seed round is counted), there must be a viable and proven business model. How do you 'sell' an ORM? Prisma experiments with multiple products; none is mature yet or being paid for. How big is this risk? According to these startup success statistics, "About 65% of the Series A startups get series B, while 35% of the companies that get series A fail." Since Prisma has already gained a lot of love and adoption from the community, their success chances are higher than those of the average round A/B company, but even a 10%-20% chance of fading away is concerning
This is terrifying news - companies happily choose a young commercial OSS product without realizing that there is a 10-30% chance for this product to disappear
Some startup companies that seek a viable business model do not shut their doors but rather change the product, the license or the free features. This is not my subjective business analysis; here are a few examples: MongoDB changed their license, which is why the majority now has to host their MongoDB with a single vendor. Redis did something similar. What are the chances of Prisma pivoting to another type of product? It actually already happened before: Prisma 1 was mostly a GraphQL client and server, and it's now retired
It's only fair to mention the other potential path - most round-B companies do qualify for the next round, and when this happens, even bigger money will be involved in building the 'Ferrari' of JavaScript ORMs. I'm surely crossing my fingers for these great people; at the same time, we have to be conscious about our choices
📊 How important: As important as having to re-code the entire DB layer of a big system
Before proposing my key takeaway - which is the primary ORM - let's repeat the key learnings that were introduced here:
🥇 Prisma deserves a medal for its awesome DX, documentation, observability support and end-to-end TypeScript coverage
🤔 There are reasons to be concerned about Prisma's business continuity as a young startup without a viable business model. Also, Prisma's abstract client syntax might blind developers a little more than other ORMs
🎩 The contenders, TypeORM and Sequelize, have matured and are doing quite well: both have merged thousands of PRs in the past 3 years to become more stable, they keep introducing new releases (see repo-tracker), and for now hold more features than Prisma. Also, both show solid performance (for an ORM). Hats off to the maintainers!
Based on these observations, which should you pick? Which ORM will we use for practica.js?
Prisma is an excellent addition to the Node.js ORM family, but not the hassle-free one tool to rule them all. It's a mixed bag of many delicious candies and a few gotchas. Will it grow to tick all the boxes? Maybe, but it's unlikely. Once built, it's too hard to dramatically change the syntax and engine performance. Then, while writing this post and speaking with the community, including some Prisma enthusiasts, I realized that it doesn't aim to be the can-do-everything 'Ferrari'. Its positioning seems to resemble more a convenient family car with a solid engine and awesome user experience. In other words, it probably aims for the enterprise space, where there is mostly demand for great DX, OK performance, and business-class support
At the end of this journey I see no dominant, flawless 'Ferrari' ORM. I should probably change my perspective: building an ORM for the hectic modern JavaScript ecosystem is 10x harder than building a Java ORM back in 2001. There is no stain on the shirt - it's cool JavaScript swag. I learned to accept what we have: a rich set of features, tolerable performance, good enough for many systems. Need more? Don't use an ORM. Nothing is going to change dramatically - it's now as good as it can be
Surely use Prisma under these scenarios: if your data needs are rather simple; when time-to-market concerns take precedence over data-processing accuracy; when the DB is relatively small; if you're a mobile/frontend developer taking her first steps in the backend world; when there is a need for business-class support; AND when Prisma's long-term business continuity risk is a non-issue for you
I'd probably prefer other options under these conditions: if DB layer performance is a major concern; if you're a savvy backend developer with solid SQL capabilities; when there is a need for fine-grained control over the data layer. For all of these cases, Prisma might still work, but my primary choice would be knex/TypeORM/Sequelize with a data-mapper style
Consequently, we love Prisma and added it behind a flag (--orm=prisma) in Practica.js. At the same time, until some clouds disappear, Sequelize will remain our default ORM
]]>
+ node.js
+ express
+ nestjs
+ fastify
+ passport
+ dotenv
+ supertest
+ practica
+ testing
+
+
+
+ https://practica.dev/blog/monorepo-backend
+ https://practica.dev/blog/monorepo-backend
+ Mon, 07 Nov 2022 11:00:00 GMT
+
+ As a Node.js starter, choosing the right libraries and frameworks for our users is the bread and butter of our work in Practica.js. In this post, we'd like to share our considerations in choosing our monorepo tooling
The Monorepo market is hot like fire. Weirdly, now when the demand for Monorepos is exploding, one of the leading libraries - Lerna - has just retired. When looking closely, it might not be just a coincidence - with so many disruptive and shiny features brought by new vendors, Lerna failed to keep up with the pace and stay relevant. This bloom of new tooling gets many confused - what is the right choice for my next project? What should I look at when choosing a Monorepo tool? This post is all about curating this information overload, covering the new tooling, emphasizing what is important, and finally sharing some recommendations. If you are here for tools and features, you're in the right place, although you might find yourself on a soul-searching journey toward your desired development workflow.
This post is concerned with backend-only and Node.js. It is also scoped to typical business solutions. If you're a Google/FB developer who is faced with 8,000 packages - sorry, you need special gear. Consequently, monster Monorepo tooling like Bazel is left out. We will cover here some of the most popular Monorepo tools, including Turborepo, Nx, PNPM, Yarn/npm workspace, and Lerna (although it's not actually maintained anymore - it's a good baseline for comparison).
Let's start? When human beings use the term Monorepo, they typically refer to one or more of the following 4 layers. Each one of them can bring value to your project, each has different consequences, tooling, and features:
Layer 1: Plain old folders to stay on top of your code
With zero tooling and only by having all the Microservices and libraries together in the same root folder, a developer gets great management perks and tons of value: navigation, search across components, deleting a library instantly, debugging, quickly adding new components. Consider the alternative with a multi-repo approach - adding a new component for modularity demands opening and configuring a new GitHub repository. Not just a hassle but also greater chances of developers choosing the short path and including the new code in some semi-relevant existing package. In plain words, zero-tooling Monorepos can increase modularity.
This layer is often overlooked. If your codebase is not huge and the components are highly decoupled (more on this later)— it might be all you need. We’ve seen a handful of successful Monorepo solutions without any special tooling.
With that said, some of the newer tools augment this experience with interesting features:
Turborepo, Nx, and also Lerna provide a visual representation of the packages' dependencies
Nx allows 'visibility rules', which are about enforcing who can use what. Consider a 'checkout' library that should be approached only by the 'order Microservice' - deviating from this will result in failure during development (not runtime enforcement)
Nx dependencies graph
Nx workspace generators allow scaffolding out components. Whenever a team member needs to craft a new controller/library/class/Microservice, she just invokes a CLI command which produces code based on a community or organization template. This enforces consistency and best-practice sharing
Layer 2: Tasks and pipeline to build your code efficiently
Even in a world of autonomous components, there are management tasks that must be applied in a batch like applying a security patch via npm update, running the tests of multiple components that were affected by a change, publish 3 related libraries to name a few examples. All Monorepo tools support this basic functionality of invoking some command over a group of packages. For example, Lerna, Nx, and Turborepo do.
Apply some commands over multiple packages
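For a concrete flavor, this is roughly how such a batch invocation looks in each tool; treat the exact flags as illustrative since they vary across versions:

# Lerna: run the 'test' script in every package
npx lerna run test
# Nx: run a target across all projects
npx nx run-many --target=test --all
# Turborepo: run the pipeline task over the whole workspace
npx turbo run test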
In some projects, invoking a cascading command is all you need - mostly if each package has an autonomous life cycle and the build process spans a single package (more on this later). In some other types of projects, where the workflow demands testing/running and publishing/deploying many packages together, this will end in a terribly slow experience. Consider a solution with hundreds of packages that are transpiled and bundled - one might wait minutes for a wide test to run. While it's not always a great practice to rely on wide/E2E tests, it's quite common in the wild. This is exactly where the new wave of Monorepo tooling shines - deeply optimizing the build process. I should say this out loud: these tools bring beautiful and innovative build optimizations:
Parallelization - If two commands or packages are orthogonal to each other, the commands will run in two different threads or processes. Typically your quality control involves testing, linting, license checking, CVE checking - why not parallelize?
Smart execution plan - Beyond parallelization, the optimized task execution order is determined based on many factors. Consider a build that includes A, B, C where A and C depend on B - naively, a build system would wait for B to build and only then run A & C. This can be optimized if we run A & C's isolated unit tests while building B and not afterward. By running tasks in parallel as early as possible, the overall execution time is improved - this has a remarkable impact mostly when hosting a high number of components. See below a visualization example of a pipeline improvement
A modern tool advantage over old Lerna. Taken from Turborepo website
Detect who is affected by a change - Even in a system with high coupling between packages, it's usually not necessary to run all the packages, only those affected by a change. What exactly is 'affected'? Packages/Microservices that depend upon another package that has changed. Some of the tools can ignore minor changes that are unlikely to break others. This is not only a great performance booster but also an amazing testing feature - developers can get quick feedback on whether any of their clients were broken. Both Nx and Turborepo support this feature. Lerna can tell only which of the Monorepo packages has changed
Sub-systems (i.e., projects) — Similarly to ‘affected’ above, modern tooling can realize portions of the graph that are inter-connected (a project or application) while others are not reachable by the component in context (another project) so they know to involve only packages of the relevant group
Caching - This is a serious speed booster: Nx and Turborepo cache the result/output of tasks and avoid running them again on subsequent builds if unnecessary. For example, consider long-running tests of a Microservice; when commanding to re-build this Microservice, the tooling might realize that nothing has changed and the tests will get skipped. This is achieved by generating a hashmap of all the dependent resources - if none of these resources has changed, then the hashmap will be the same and the task will get skipped. They even cache the stdout of the command, so when you run a cached version it acts like the real thing - consider running 200 tests, seeing all the log statements of the tests, getting results over the terminal in 200 ms; everything acts like 'real' testing while in fact the tests did not run at all - it's the cache! (See a toy sketch of this idea right after this list)
Remote caching - Same as caching, only the tasks' hashmaps and results are placed on a shared server, so executions on other team members' computers will also skip unnecessary tasks. In huge Monorepo projects that rely on E2E tests and must build all packages for development, this can save a great deal of time
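To make the caching mechanics concrete, here is a toy sketch of hash-based task skipping. This is an illustration only, not how Nx/Turborepo are actually implemented; the .cache folder and function names are made up:

// cache-sketch.js - skip a task when its inputs' hash was already seen
const crypto = require("crypto");
const fs = require("fs");

function hashInputs(files) {
  const hash = crypto.createHash("sha256");
  for (const file of files) {
    hash.update(fs.readFileSync(file)); // any byte change yields a different hash
  }
  return hash.digest("hex");
}

function runTaskWithCache(taskName, inputFiles, task) {
  const cacheFile = `.cache/${taskName}-${hashInputs(inputFiles)}.log`;
  if (fs.existsSync(cacheFile)) {
    // Cache hit: replay the recorded stdout instead of running the task
    console.log(fs.readFileSync(cacheFile, "utf8"));
    return;
  }
  const output = task(); // Cache miss: actually run the task
  fs.mkdirSync(".cache", { recursive: true });
  fs.writeFileSync(cacheFile, output);
  console.log(output);
}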
Layer 3: Hoist your dependencies to boost npm installation
The speed optimizations described above won't help if the bottleneck is the big ball of mud that is called 'npm install' (not to criticize, it's just hard by nature). Take a typical scenario as an example: given dozens of components that should be built, they could easily trigger the installation of thousands of sub-dependencies. Although they use quite similar dependencies (e.g., same logger, same ORM), if the dependency versions are not equal then npm will duplicate the installation of those packages (the npm doppelgangers problem), which might result in a long process.
This is where the workspace line of tools (e.g., Yarn workspace, npm workspaces, PNPM) kicks in and introduces some optimization - instead of installing dependencies inside each component's 'NODE_MODULES' folder, it will create one centralized folder and link all the dependencies over there. This can show a tremendous boost in install time for huge projects. On the other hand, if you always focus on one component at a time, installing the packages of a single Microservice/library should not be a concern.
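For reference, enabling a workspace with npm or Yarn is just a declaration in the root package.json (the folder globs below are illustrative); the package manager then hoists shared dependencies into a single root node_modules:

// package.json at the repo root
{
  "name": "my-monorepo",
  "private": true,
  "workspaces": ["packages/*", "apps/*"]
}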
Both Nx and Turborepo can rely on the package manager/workspace to provide this layer of optimization. In other words, Nx and Turborepo are a layer above the package manager, which takes care of the optimized dependency installation.
On top of this, Nx introduces one more non-standard, maybe even controversial, technique: there might be only ONE package.json at the root folder of the entire Monorepo. By default, when creating components using Nx, they will not have their own package.json! Instead, all will share the root package.json. Going this way, all the Microservices/libraries share their dependencies and the installation time is improved. Note: it's possible to create 'publishable' components that do have a package.json, it's just not the default.
I'm concerned here. Sharing dependencies among packages increases coupling - what if Microservice1 wishes to bump dependency1's version but Microservice2 can't do this at the moment? Also, package.json is part of the Node.js runtime, and excluding it from the component root loses important features like the package.json main field or ESM exports (telling the clients which files are exposed). I ran some POC with Nx last week and found myself blocked - library B was added, I tried to import it from library A but couldn't get the 'import' statement to specify the right package name. The natural action was to open B's package.json and check the name, but there is no package.json... How do I determine its name? Nx docs are great; finally, I found the answer, but I had to spend time learning a new 'framework'.
Stop for a second: It’s all about your workflow
We deal with tooling and features, but it's actually meaningless to evaluate these options before determining whether your preferred workflow is synchronized or independent (we will discuss this in a few seconds). This upfront fundamental decision will change almost everything.
Consider the following example with 3 components: Library 1 is introducing some major and breaking changes, Microservice1 and Microservice2 depend upon Library1 and should react to those breaking changes. How?
Option A - The synchronized workflow: Going with this development style, all three components will be developed and deployed in one chunk together. Practically, a developer will code the changes in Library1, test Library1, and also run wide integration/E2E tests that include Microservice1 and Microservice2. When they're ready, the versions of all components will get bumped. Finally, they will get deployed together.
Going with this approach, the developer has the chance of seeing the full flow from the clients' perspective (Microservice1 and 2); the tests cover not only the library but also work through the eyes of the clients who actually use it. On the flip side, it mandates updating all the dependent components (could be dozens); doing so increases the risk's blast radius, as more units are affected and should be considered before deployment. Also, working on a large unit of work demands building and testing more things, which will slow the build.
Option B - The independent workflow: This style is about working unit by unit, one bite at a time, and deploying each component independently based on its own business considerations and priority. This is how it goes: a developer makes the changes in Library1; they must be tested carefully in the scope of Library1. Once she is ready, the SemVer is bumped to a new major and the library is published to a package manager registry (e.g., npm). What about the client Microservices? Well, the team of Microservice2 is super-busy now with other priorities and skips this update for now (the same way we all delay many of our npm updates). However, Microservice1 is very much interested in this change - the team has to pro-actively update this dependency, grab the latest changes, run the tests, and when they are ready, today or next week - deploy it.
Going with the independent workflow, the library author can move much faster because she does not need to take into account 2 or 30 other components, some coded by different teams. This workflow also forces her to write efficient tests against the library - it's her only safety net - and is likely to end with autonomous components that have low coupling to others. On the other hand, testing in isolation without the clients' perspective loses some dimension of realism. Also, if a single developer has to update 5 units - publishing each individually to the registry and then updating it within all the dependents can be a little tedious.
Synchronized and independent workflows illustrated
On the illusion of synchronicity
In distributed systems, it's not feasible to achieve 100% synchronicity - believing otherwise can lead to design faults. Consider a breaking change in Microservice1; now its client, Microservice2, has adapted and is ready for the change. These two Microservices are deployed together, but due to the nature of Microservices and distributed runtimes (e.g., Kubernetes), only the deployment of Microservice1 fails. Now, Microservice2's code is not aligned with Microservice1 in production and we are faced with a production bug. This line of failures can be handled to an extent also with a synchronized workflow - the deployment should orchestrate the rollout of each unit so each one is deployed at a time. Although this approach is doable, it increases the chances of a large-scoped rollback and increases deployment fear.
This fundamental decision, synchronized or independent, will determine so many things — Whether performance is an issue or not at all (when working on a single unit), hoisting dependencies or leaving a dedicated node_modules in every package’s folder, and whether to create a local link between packages which is described in the next paragraph.
Layer 4: Link your packages for immediate feedback
When having a Monorepo, there is always the unavoidable dilemma of how to link between the components:
Option 1: Using npm - Each library is a standard npm package and its client installs it via the standard npm commands. Given Microservice1 and Library1, this will end with two copies of Library1: the one inside Microservice1/node_modules (i.e., the local copy of the consuming Microservice), and the 2nd in the development folder where the team is coding Library1.
Option 2: Just a plain folder - With this, Library1 is nothing but a logical module inside a folder that Microservice1, 2, 3 just locally import. npm is not involved here, it's just code in a dedicated folder. This is, for example, how Nest.js modules are represented.
With option 1, teams benefit from all the great merits of a package manager — SemVer(!), tooling, standards, etc. However, should one update Library1, the changes won’t get reflected in Microservice1 since it is grabbing its copy from the npm registry and the changes were not published yet. This is a fundamental pain with Monorepo and package managers — one can’t just code over multiple packages and test/run the changes.
With option 2, teams lose all the benefits of a package manager: every change is propagated immediately to all of the consumers, with no SemVer to shield them or let them opt in on their own schedule.
How do we bring the good from both worlds (presumably)? Using linking. Lerna, Nx, and the various package manager workspaces (Yarn, npm, etc.) allow using npm libraries and at the same time link between the clients (e.g., Microservice1) and the library. Under the hood, they create a symbolic link. In development mode, changes are propagated immediately; at deployment time, the copy is grabbed from the registry.
Linking packages in a Monorepo
If you're doing the synchronized workflow, you're all set. But now any risky change that is introduced by Library3 must be handled NOW by the 10 Microservices that consume it.
If favoring the independent workflow, this is of course a big concern. Some may call this direct linking style a ‘monolith monorepo’, or maybe a ‘monolitho’. However, when not linking, it’s harder to debug a small issue between the Microservice and the npm library. What I typically do is temporarily link (with npm link) between the packages, debug, code, then finally remove the link.
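In practice, that temporary linking dance looks roughly like this (paths and package names are illustrative):

# Inside the library folder: register it globally as a symlink
cd libs/library1 && npm link
# Inside the consuming Microservice: point node_modules at the local library
cd ../../apps/microservice1 && npm link library1
# ...debug and code...
# When done, remove the link and restore the registry copy
npm unlink library1 && npm install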
Nx takes a slightly more disruptive approach - it uses TypeScript paths to bind between the components. When Microservice1 imports Library1, to avoid the full local path, it creates a TypeScript mapping between the library name and the full path. But wait a minute, there is no TypeScript in production, so how could it work? Well, at serving/bundling time it webpacks and stitches the components together. Not a very standard way of doing Node.js work.
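Such a mapping typically lives in the root TypeScript config; the names below are illustrative:

// tsconfig.base.json at the repo root
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@myorg/library1": ["libs/library1/src/index.ts"]
    }
  }
}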
Closing: What should you use?
It’s all about your workflow and architecture — a huge unseen cross-road stands in front of the Monorepo tooling decision.
Scenario A — If your architecture dictates a synchronized workflow where all packages are deployed together, or at least developed in collaboration — then there is a strong need for a rich tool to manage this coupling and boost the performance. In this case, Nx might be a great choice.
For example, if your Microservices must keep the same versioning, or if the team is really small and the same people are updating all the components, or if your modularization is not based on the package manager but rather on framework-owned modules (e.g., Nest.js), or if you're doing frontend where the components are inherently published together, or if your testing strategy relies mostly on E2E - for all of these cases and others, Nx is a tool that was built to enhance the experience of coding many relatively coupled components together. It is a great sugar coat over systems that are unavoidably big and linked.
If your system is not inherently big or meant to synchronize packages deployment, fancy Monorepo features might increase the coupling between components. The Monorepo pyramid above draws a line between basic features that provide value without coupling components while other layers come with an architectural price to consider. Sometimes climbing up toward the tip is worth the consequences, just make this decision consciously.
Scenario B - If you're into an independent workflow where each package is developed, tested, and deployed (almost) independently - then inherently there is no need for fancy tools to orchestrate hundreds of packages. Most of the time there is just one package in focus. This calls for picking a leaner and simpler tool - Turborepo. By going this route, Monorepo is not something that affects your architecture, but rather a scoped tool for faster build execution. One specific tool that encourages an independent workflow is Bilt by Gil Tayar; it's yet to gain enough popularity, but it might rise soon and is a great source to learn more about this philosophy of work.
In any scenario, consider workspaces - if you face performance issues that are caused by package installation, then the various workspace tools (Yarn/npm/PNPM) can greatly minimize this overhead with a low footprint. That said, if you're working in an autonomous workflow, the chances of facing such issues are smaller. Don't just use tools unless there is a pain.
We tried to show the beauty of each and where it shines. If we’re allowed to end this article with an opinionated choice: We greatly believe in an independent and autonomous workflow where the occasional developer of a package can code and deploy fearlessly without messing with dozens of other foreign packages. For this reason, Turborepo will be our favorite tool for the next season. We promise to tell you how it goes.
Bonus: Comparison table
See below a detailed comparison table of the various tools and features:
Preview only, the complete table can be found here
]]>
+ monorepo
+ decisions
+
+
+
+ https://practica.dev/blog/popular-nodejs-pattern-and-tools-to-reconsider
+ https://practica.dev/blog/popular-nodejs-pattern-and-tools-to-reconsider
+ Tue, 02 Aug 2022 10:00:00 GMT
+
+ Node.js is maturing. Many patterns and frameworks were embraced - it's my belief that developers' productivity dramatically increased in the past years. One downside of maturity is habits - we now reuse existing techniques more often. How is this a problem?
In his book 'Atomic Habits', the author James Clear states that:
"Mastery is created by habits. However, sometimes when we're on auto-pilot performing habits, we tend to slip up... Just because we are gaining experience through performing the habits does not mean that we are improving. We actually go backwards on the improvement scale with most habits that turn into auto-pilot". In other words, practice makes perfect, and bad practices make things worse
We copy-paste mentally and physically things that we are used to, but these things are not necessarily right anymore. Like animals that shed their shells or skin to adapt to a new reality, the Node.js community should constantly gauge its existing patterns, discuss, and change
Luckily, unlike other languages that are more committed to specific design paradigms (Java, Ruby) - Node is a house of many ideas. In this community, I feel safe to question some of our good-old tooling and patterns. The list below contains my personal beliefs, which are brought with reasoning and examples.
Are those disruptive thoughts surely correct? I'm not sure. There is one thing I'm sure about though - for Node.js to live longer, we need to encourage critique, focus our loyalty on innovation, and keep the discussion going. The outcome of this discussion is not "don't use this tool!" but rather becoming familiar with other techniques that, under some circumstances, might be a better fit
The True Crab's exoskeleton is hard and inflexible, he must shed his restrictive exoskeleton to grow and reveal the new roomier shell
1. Dotenv as your configuration source

💁‍♂️ What is it about: A super popular technique in which the app's configurable values (e.g., DB user name) are stored in a simple text file. Then, when the app loads, the dotenv library sets all the text file values as environment variables so the code can read them
// .env file
USER_SERVICE_URL=https://users.myorg.com

// start.js
require('dotenv').config();

// blog-post-service.js
repository.savePost(post);
// Update the user's number of posts, read the users service URL from an environment variable
await axios.put(`${process.env.USER_SERVICE_URL}/api/user/${post.userId}/incrementPosts`);
📊 How popular: 21,806,137 downloads/week!
🤔 Why it might be wrong: Dotenv is so easy and intuitive to start with that one might easily overlook fundamental features: for example, it's hard to infer the configuration schema and realize the meaning of each key and its typing. Consequently, there is no built-in way to fail fast when a mandatory key is missing - a flow might fail after starting and having produced some side effects (e.g., DB records were already mutated before the failure). In the example above, the blog post will be saved to the DB, and only then will the code realize that a mandatory key is missing - this leaves the app hanging in an invalid state. On top of this, in the presence of many keys, it's impossible to organize them hierarchically. If that's not enough, it encourages developers to commit this .env file, which might contain production values - this happens because there is no clear way to define development defaults. Teams usually work around this by committing a .env.example file and then asking whoever pulls the code to rename this file manually. If they remember to, of course
☀️ Better alternative: Some configuration libraries provide an out-of-the-box solution to all of these needs. They encourage a clear schema and the possibility to validate early and fail if needed. See a comparison of options here. One of the better alternatives is 'convict'; down below is the same example, this time with Convict - hopefully it's better now:
// config.js
export default {
  userService: {
    url: {
      // Hierarchical, documented and strongly typed 👇
      doc: "The URL of the user management service including a trailing slash",
      format: "url",
      default: "http://localhost:4001",
      nullable: false,
      env: "USER_SERVICE_URL",
    },
  },
  // more keys here
};

// start.js
import convict from "convict";
import configSchema from "./config";
const convictConfigurationProvider = convict(configSchema);
// Fail fast!
convictConfigurationProvider.validate();

// blog-post.js
repository.savePost(post);
// Will never arrive here if the URL is not set
await axios.put(
  `${convictConfigurationProvider.get("userService.url")}/api/user/${post.userId}/incrementPosts`
);
2. Calling a 'fat' service from the API controller
💁‍♂️ What is it about: Consider a reader of our code who wishes to understand the entire high-level flow or delve into a very specific part. She first lands on the API controller, where requests start. Unlike what its name implies, this controller layer is just an adapter and is kept really thin and straightforward. Great thus far. Then the controller calls a big 'service' with thousands of lines of code that represents the entire logic
// user-controller.js
router.post('/', async (req, res, next) => {
  await userService.add(req.body);
  // Might have here try-catch or error response logic
});

// user-service.js
export function add(newUser) {
  // Want to understand quickly? Need to understand the entire user service, 1500 loc
  // It uses technical language and reuses narratives of other flows
  this.copyMoreFieldsToUser(newUser);
  const doesExist = this.updateIfAlreadyExists(newUser);
  if (!doesExist) {
    addToCache(newUser);
  }
  // 20 more lines that demand navigating to other functions in order to get the intent
}
📊 How popular: It's hard to pull solid numbers here, but I can confidently say that in most of the apps that I see, this is the case
🤔 Why it might be wrong: We're here to tame complexity. One of the useful techniques is deferring complexity to the latest stage possible. In this case though, the reader of the code (hopefully) starts her journey through the tests and the controller - things are simple in these areas. Then, as she lands on the big service, she gets tons of complexity and small details, although she is focused on understanding the overall flow or some specific logic. This is unnecessary complexity
☀️ Better alternative: The controller should call a particular type of service, a use-case, which is responsible for summarizing the flow in business-oriented and simple language. Each flow/feature is described using a use-case, each containing 4-10 lines of code, that tells the story without technical details. It mostly orchestrates other small services, clients, and repositories that hold all the implementation details. With use-cases, the reader can grasp the high-level flow easily (see the sketch below). She can now choose where she would like to focus. She is now exposed only to necessary complexity. This technique also encourages partitioning the code into the smaller objects that the use-case orchestrates. Bonus: by looking at coverage reports, one can tell which features are covered, not just files/functions
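Here is a minimal sketch of such a use-case; the step functions are hypothetical, shown only to illustrate the shape:

// add-user-use-case.js - tells the story in business language, no technical details
export async function addUserUseCase(newUser) {
  const validatedUser = validateUserFields(newUser);
  await assertEmailIsNotTaken(validatedUser.email);
  const savedUser = await userRepository.saveUser(validatedUser);
  await sendWelcomeEmail(savedUser);
  return savedUser;
}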
This idea, by the way, is formalized in the 'clean architecture' book - I'm not a big fan of 'fancy' architectures, but see - it's worth cherry-picking techniques from every source. You may walk through our Node.js best practices starter, practica.js, and examine the use-cases code
3. Nest.js: Wire everything with dependency injection
💁‍♂️ What is it about: If you're doing Nest.js, besides having a powerful framework in your hands, you probably use DI for everything and make every class injectable. Say you have a weather-service that depends upon a humidity-service, and there is no requirement to swap the humidity-service with alternative providers. Nevertheless, you inject humidity-service into the weather-service. It becomes part of your development style - "why not," you think - I may need to stub it during testing or replace it in the future
// humidity-service.ts - not customer facing
@Injectable()
export class GoogleHumidityService {
  async getHumidity(when: Datetime): Promise<number> {
    // Fetches from some specific cloud service
  }
}

// weather-service.ts - customer facing
import { GoogleHumidityService } from './humidity-service.ts';

export type weatherInfo = { temperature: number; humidity: number };

export class WeatherService {
  constructor(private humidityService: GoogleHumidityService) {}
  async GetWeather(when: Datetime): Promise<weatherInfo> {
    // Fetch temperature from somewhere and then humidity from GoogleHumidityService
  }
}

// app.module.ts
@Module({
  providers: [GoogleHumidityService, WeatherService],
})
export class AppModule {}
📊 How popular: No numbers here, but I can confidently say that in all of the Nest.js apps that I've seen, this is the case. In the popular 'nestjs-realworld-example-app', all the services are 'injectable'
🤔 Why it might be wrong: Dependency injection is not a priceless coding style but a pattern you should pull in at the right moment, like any other pattern. Why? Because any pattern has a price. What price, you ask? First, encapsulation is violated. Clients of the weather-service are now aware that other providers are being used internally. Some clients may get tempted to override providers, although it's not under their responsibility. Second, it's another layer of complexity to learn and maintain, and one more way to shoot yourself in the foot. StackOverflow owes some of its revenue to Nest.js DI - plenty of discussions try to solve this puzzle (e.g., did you know that in the case of circular dependencies the order of imports matters?). Third, there is the performance thing - Nest.js, for example, struggled to provide a decent start time for serverless environments and had to introduce lazy-loaded modules. Don't get me wrong, in some cases there is a good case for DI: when a need arises to decouple a dependency from its caller, or to allow clients to inject custom implementations (e.g., the strategy pattern). In such a case, when there is value, you may consider whether the value of DI is worth its price. If you don't have this case, why pay for nothing?
I recommend reading the first paragraphs of the blog post 'Dependency Injection is EVIL' (though I absolutely don't agree with its bold words)
☀️ Better alternative: 'Lean-ify' your engineering approach - avoid using any tool unless it serves a real-world need immediately. Start simple, a dependent class should simply import its dependency and use it - Yeah, using the plain Node.js module system ('require'). Facing a situation when there is a need to factor dynamic objects? There are a handful of simple patterns, simpler than DI, that you should consider, like 'if/else', factory function, and more. Are singletons requested? Consider techniques with lower costs like the module system with factory function. Need to stub/mock for testing? Monkey patching might be better than DI: better clutter your test code a bit than clutter your production code. Have a strong need to hide from an object where its dependencies are coming from? You sure? Use DI!
// humidity-service.ts - not customer facing
export async function getHumidity(when: Datetime): Promise<number> {
  // Fetches from some specific cloud service
}

// weather-service.ts - customer facing
import { getHumidity } from "./humidity-service.ts";

// ✅ No wiring is happening externally, all is flat and explicit. Simple
export async function getWeather(when: Datetime): Promise<number> {
  // Fetch temperature from somewhere and then humidity from the humidity-service
  // Nobody needs to know about it, it's an implementation detail
  await getHumidity(when);
}
My name is Yoni Goldberg, I'm a Node.js developer and consultant. I wrote a few code-books like JavaScript testing best practices and Node.js best practices (100,000 stars ✨🥹). That said, my best guide is Node.js testing practices, which only a few have read 😞. I shall release an advanced Node.js testing course soon and also hold workshops for teams. I'm also a core maintainer of Practica.js, a Node.js starter that creates a production-ready example Node Monorepo solution based on standards and simplicity. It might be your primary option when starting a new Node.js solution
4. Passport.js for token authentication

💁‍♂️ What is it about: Commonly, you need to issue and/or authenticate JWT tokens. Similarly, you might need to allow login from one single social network like Google/Facebook. When faced with these kinds of needs, Node.js developers rush to the glorious library Passport.js like butterflies to light
📊 How popular: 1,389,720 weekly downloads
🤔 Why it might be wrong: When tasked with guarding your routes with JWT tokens - you're just a few lines of code shy of ticking the goal. Instead of messing with a new framework, instead of introducing levels of indirection (you call Passport, then it calls you), instead of spending time learning new abstractions - use a JWT library directly. Libraries like jsonwebtoken or fast-jwt are simple and well maintained. Have concerns about security hardening? Good point, your concerns are valid. But would you not get better hardening with a direct understanding of your configuration and flow? Will hiding things behind a framework help? Even if you prefer the hardening of a battle-tested framework, Passport doesn't handle a handful of security risks like secrets/tokens, secured user management, DB protection, and more. My point: you probably need fully-featured user and authentication management platforms anyway. Various cloud services and OSS projects can tick all of those security concerns. Why then start in the first place with a framework that doesn't satisfy your security needs? It seems like many who opt for Passport.js are not fully aware of which needs are satisfied and which are left open. All of that said, Passport definitely shines when looking for a quick way to support many social login providers
☀️ Better alternative: Is token authentication in order? These few lines of code below might be all you need. You may also glimpse into Practica.js' wrapper around these libraries. A real-world project at scale typically needs more: supporting async JWT (JWKS), and securely managing and rotating the secrets, to name a few examples. In this case, an OSS solution like keycloak (https://github.com/keycloak/keycloak) or a commercial option like Auth0 (https://github.com/auth0) are alternatives to consider
// jwt-middleware.js, a simplified version - Refer to Practica.js to see some more corner cases
const middleware = (req, res, next) => {
  if (!req.headers.authorization) {
    return res.sendStatus(401);
  }
  jwt.verify(req.headers.authorization, options.secret, (err, jwtContent) => {
    if (err) {
      return res.sendStatus(401);
    }
    req.user = jwtContent.data;
    next();
  });
};
5. Supertest for integration/API testing

💁‍♂️ What is it about: When testing against an API (i.e., component, integration, E2E tests), the library supertest provides a sweet syntax that can detect the web server address, make the HTTP call, and also assert on the response. Three in one
test("When adding invalid user, then the response is 400",(done)=>{ const request =require("supertest"); const app =express(); // Arrange const userToAdd ={ name:undefined, }; // Act request(app) .post("/user") .send(userToAdd) .expect("Content-Type",/json/) .expect(400, done); // Assert // We already asserted above ☝🏻 as part of the request });
📊 How popular: 2,717,744 weekly downloads
🤔 Why it might be wrong: You already have your assertion library (Jest? Chai?); it has great error highlighting and comparison - you trust it. Why code some tests using another assertion syntax? Not to mention, Supertest's assertion errors are not as descriptive as Jest's and Chai's. It's also cumbersome to mix an HTTP client + assertion library instead of choosing the best tool for each mission. Speaking of the best, there are more standard, popular, and better-maintained HTTP clients (like fetch, axios and other friends). Need another reason? Supertest might encourage coupling the tests to Express, as it offers a constructor that gets an Express object. This constructor infers the API address automatically (useful when using dynamic test ports). This couples the test to the implementation and won't work in the case where you wish to run the same tests against a remote process (when the API doesn't live with the tests). My repository 'Node.js testing best practices' holds examples of how tests can infer the API port and address
☀️ Better alternative: A popular and standard HTTP client library like Node.js fetch or Axios. In Practica.js (a Node.js starter that packs many best practices) we use Axios. It allows us to configure an HTTP client that is shared among all the tests: we bake in a JWT token, headers, and a base URL. Another good pattern that we look at is making each Microservice generate an HTTP client library for its consumers. This brings a strong-type experience to the clients, synchronizes the provider-consumer versions, and as a bonus - the provider can test itself with the same library that its consumers are using
test("When adding invalid user, then the response is 400 and includes a reason",(done)=>{ const app =express(); // Arrange const userToAdd ={ name:undefined, }; // Act const receivedResponse = axios.post( `http://localhost:${apiPort}/user`, userToAdd ); // Assert // ✅ Assertion happens in a dedicated stage and a dedicated library expect(receivedResponse).toMatchObject({ status:400, data:{ reason:"no-name", }, }); });
6. Fastify decorate for non request/web utilities
💁‍♂️ What is it about: Fastify introduces great patterns. Personally, I highly appreciate how it preserves the simplicity of Express while bringing more batteries. One thing that got me wondering is the 'decorate' feature, which allows placing common utilities/services inside a widely accessible container object. I'm referring here specifically to the case where a cross-cutting-concern utility/service is being used. Here is an example:
// An example of a utility that is a cross-cutting concern. Could be a logger or anything else
fastify.decorate('metricsService', {
  fireMetric: (name) => {
    // My code that sends metrics to the monitoring system
  },
});

fastify.get('/api/orders', async function (request, reply) {
  this.metricsService.fireMetric({ name: 'new-request' });
  // Handle the request
});

// my-business-logic.js
export function calculateSomething() {
  // How to fire a metric?
}
It should be noted that 'decoration' is also used to place values (e.g., user) inside a request - this is a slightly different case and a sensible one
📊 How popular: Fastify has 696,122 weekly downloads and is growing rapidly. The decorator concept is part of the framework's core
🤔 Why it might be wrong: Some services and utilities serve cross-cutting-concern needs and should be accessible from other layers like the domain (i.e., business logic, DAL). When placing utilities inside this object, the Fastify object might not be accessible to these layers. You probably don't want to couple your web framework with your business logic: consider that some of your business logic and repositories might get invoked from non-REST clients like CRON jobs, MQ, and similar - in these cases, Fastify won't get involved at all, so better not to trust it to be your service locator
☀️ Better alternative: A good old Node.js module is a standard way to expose and consume functionality. Need a singleton? Use the module system caching. Need to instantiate a service in correlation with a Fastify life-cycle hook (e.g., DB connection on start)? Call it from that Fastify hook. In the rare case where a highly dynamic and complex instantiation of dependencies is needed - DI is also a (complex) option to consider
// ✅ A simple usage of good old Node.js modules
// metrics-service.js
export async function fireMetric(name) {
  // My code that sends metrics to the monitoring system
}

// api-routes.js
import * as metricsService from './metrics-service.js';

fastify.get('/api/orders', async function (request, reply) {
  metricsService.fireMetric({ name: 'new-request' });
});

// my-business-logic.js
import * as metricsService from './metrics-service.js';

export function calculateSomething() {
  metricsService.fireMetric({ name: 'new-request' });
}
7. Logging from a catch clause

💁‍♂️ What is it about: You catch an error somewhere deep in the code (not on the route level), then call logger.error to make this error observable. Seems simple and necessary
📊 How popular: Hard to put my hands on numbers but it's quite popular, right?
🤔 Why it might be wrong: First, errors should get handled/logged in a central location. Error handling is a critical path, and various catch clauses are likely to behave differently without a centralized and unified behavior. For example, a requirement might arise to tag all errors with certain metadata, or, on top of logging, to also fire a monitoring metric. Applying these requirements in ~100 locations is not a walk in the park. Second, catch clauses should be minimized to particular scenarios. By default, the natural flow of an error is bubbling down to the route/entry-point - from there, it will get forwarded to the error handler. Catch clauses are more verbose and error-prone - therefore they should serve two very specific needs: when one wishes to change the flow based on the error, or to enrich the error with more information (which is not the case in this example)
☀️ Better alternative: By default, let the error bubble down the layers and get caught by the entry-point global catch (e.g., Express error middleware). In cases when the error should trigger a different flow (e.g., retry) or there is value in enriching the error with more context - use a catch clause. In this case, ensure the .catch code also reports to the error handler
// A case where we wish to retry upon failure
try {
  await axios.post('https://thatService.io/api/users');
} catch (error) {
  // ✅ A central location that handles errors
  errorHandler.handle(error, this, { operation: addNewOrder });
  callTheUserService(numOfRetries++);
}
8. Use Morgan logger for express request logging

💁‍♂️ What is it about: In many web apps, you are likely to find a pattern that has been copy-pasted for ages - using the Morgan logger to log request information:
const express =require("express"); const morgan =require("morgan"); const app =express(); app.use(morgan("combined"));
📊 How popular: 2,901,574 downloads/week
🤔 Why it might be wrong: Wait a second, you already have your main logger, right? Is it Pino? Winston? Something else? Great. Why deal with and configure yet another logger? I do appreciate the HTTP domain-specific language (DSL) of Morgan. The syntax is sweet! But does it justify having two loggers?
☀️ Better alternative: Put your chosen logger in a middleware and log the desired request/response properties:
// ✅ Use your preferred logger for all the tasks
const logger = require("pino")();
app.use((req, res, next) => {
  res.on("finish", () => {
    logger.info(`${req.url} ${res.statusCode}`); // Add other properties here
  });
  next();
});
9. Having conditional code based on NODE_ENV value
💁♂️ What is it about: To differentiate between development vs production configuration, it's common to set the environment variable NODE_ENV with "production|test". Doing so allows the various tooling to act differently. For example, some templating engines will cache compiled templates only in production. Beyond tooling, custom applications use this to specify behaviours that are unique to the development or production environment:
if (process.env.NODE_ENV === "production") {
  // This is unlikely to be tested since the test runner usually sets NODE_ENV=test
  setLogger({ stdout: true, prettyPrint: false });
  // If this code branch above exists, why not add more production-only configurations:
  collectMetrics();
} else {
  setLogger({ splunk: true, prettyPrint: true });
}
📊 How popular: 5,034,323 code results in GitHub when searching for "NODE_ENV". It doesn't seem like a rare pattern
🤔 Why it might be wrong: Anytime your code checks whether it's production or not, this branch won't get hit by default by the test runner (e.g., Jest sets NODE_ENV=test). In any test runner, the developer must remember to test for each possible value of this environment variable. In the example above, collectMetrics() will be tested for the first time in production. Sad smiley. Additionally, putting these conditions opens the door to adding more differences between production and the developer machine - when this variable and conditions exist, a developer gets tempted to put some logic in for production only. Theoretically, this can be tested: one can set NODE_ENV = "production" in testing and cover the production branches (if she remembers...). But then, if you can test with NODE_ENV='production', what's the point in separating? Just consider everything to be 'production' and avoid this error-prone mental load
☀️ Better alternative: Any code that was written by us must be tested. This implies avoiding any form of if(production)/else(development) conditions. Wouldn't developers' machines anyway have different surrounding infrastructure than production (e.g., a logging system)? They do, the environments are quite different, but we feel comfortable with it. These infrastructural things are battle-tested, extraneous, and not part of our code. To keep the same code between dev/prod and still use different infrastructure - we put different values in the configuration (not in the code). For example, a typical logger emits JSON in production but in a development machine it emits 'pretty-print' colorful lines. To meet this, we set an env var that tells which logging style we aim for:
//package.json "scripts":{ "start":"LOG_PRETTY_PRINT=false index.js", "test":"LOG_PRETTY_PRINT=true jest" } //index.js //✅ No condition, same code for all the environments. The variations are defined externally in config or deployment files setLogger({prettyPrint: process.env.LOG_PRETTY_PRINT})
I hope that these thoughts, at least one of them, made you re-consider adding a new technique to your toolbox. In any case, let's keep our community vibrant, disruptive and kind. Respectful discussions are almost as important as the event loop. Almost.
Although Node.js has great frameworks 💚, they were never meant to be production-ready immediately. Practica.js aims to bridge the gap. Based on your preferred framework, we generate some example code that demonstrates a full workflow, from API to DB, that is packed with good practices. For example, we include a hardened dockerfile, an N-Tier folder structure, great testing templates, and more. This saves a great deal of time and can prevent painful mistakes. All decisions made are neatly and thoughtfully documented. We strive to keep things as simple and standard as possible and base our work on the popular guide: Node.js Best Practices.
Your developer experience would look as follows: Generate our starter using the CLI and get an example Node.js solution. This solution is a typical Monorepo setup with an example Microservice and libraries. All is based on super-popular libraries that we merely stitch together. It also contains tons of optimizations - linters, libraries, Monorepo configuration, tests, and much more. Inside the example Microservice you'll find an example flow, from API to DB. Based on this, you can modify the entity and DB fields and build your app.
When was the last time you introduced a new pattern to your code? The use-case pattern is a great candidate: it's powerful, sweet, easy to implement, and can strategically elevate your backend code quality in a short time.
The term 'use case' means many different things in our industry. It's used by product folks to describe a user journey and mentioned by various famous architecture books to describe vague high-level concepts. This article focuses on its practical application at the code level, emphasizing its surprising merits and how to implement it correctly.
Technically, the use-case pattern code sits between the controller (e.g., API routes) and the business-logic services (like those calculating or saving data). The use-case code is called by the controller and tells, in high-level words and a simple manner, the flow that is about to happen. Doing so increases code readability and navigability, pushes complexity toward the edges, improves observability, and brings 3 other merits that are shown below with examples.
But before we delve into its mechanics, let's first touch on a common problem it aims to address and see some code that calls for trouble.
Prefer a 10 min video? Watch here, or keep reading below
Imagine a developer, returning to a codebase she hasn't touched in months, tasked with fixing a bug in the 'new orders flow'—specifically, an issue with price calculation in an electronic shop app.
Her journey begins promisingly smooth:
- 🤗 Testing - She starts her journey from the automated tests to learn about the flow with an outside-in approach. The testing code is short and standard, as it should be:
test("When adding an order with 100$ product, then the price charge should be 100$ ",async()=>{ // .... })
- 🤗 Controller - She moves on to skim through the implementation, starting from the API routes. Unsurprisingly, the controller code is straightforward:
app.post("/api/order",async(req:Request,res:Response)=>{ const newOrder = req.body; await orderService.addOrder(newOrder);// 👈 This is where the real-work is done res.status(200).json({message:"Order created successfully"}); });
Smooth sailing thus far, almost zero complexity. Typically, the controller would now hand off to a service where the real implementation begins, so she navigates into the order service to find where and how to fix that pricing bug.
- 😲 The service - Suddenly! She is thrown into hundreds of lines of code (at best) with tons of details. She encounters classes with intricate states, inheritance hierarchies, a dependency injection framework that wires all the dependent services, and other boilerplate code. Here is a sneak peek from a real-world service, already simplified for brevity. Read it, feel it:
let DBRepository;

export class OrderService extends ServiceBase<OrderDto> {
  async addOrder(orderRequest: OrderRequest): Promise<Order> {
    try {
      ensureDBRepositoryInitialized();
      const { openTelemetry, monitoring, secretManager, priceService, userService } =
        dependencyInjection.getVariousServices();
      logger.info("Add order flow starts now", orderRequest);
      openTelemetry.sendEvent("new order", orderRequest);
      const validationRules = await getFromConfigSystem("order-validation-rules");
      const validatedOrder = validateOrder(orderRequest, validationRules);
      if (!validatedOrder) {
        throw new Error("Invalid order");
      }
      this.base.startTransaction();
      const user = await userService.getUserInfo(validatedOrder.customerId);
      if (!user) {
        const savedOrder = await tryAddUserWithLegacySystem(validatedOrder);
        return savedOrder;
      }
      // And it goes on and on until the pricing module is mentioned
}
So many details and things to learn upfront - which of them is crucial for her to learn now, before dealing with her task? How can she find where that pricing module is?
She is not happy. Right off the bat, she must acquaint herself with a handful of product and technical narratives. She just fell off the complexity cliff: from a zero-complexity controller straight into a 1000-piece puzzle, many of the pieces unrelated to her task.
In a perfect world, she would love first to get a high-level brief of the involved steps so she can understand the whole flow, and from this comfort standpoint choose where to deepen her journey. This is what this pattern is all about.
The use-case is a file with a single function that is called by the API controller to orchestrate the various implementation services. It's merely a simple function that enumerates and calls the code that does the actual job, as sketched below:
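A minimal sketch of such a use-case for adding an order; the step functions are hypothetical and only illustrate the shape (the same step names reappear in the examples below):

// add-order-use-case.js - merely enumerates and calls the real workers
export async function addOrderUseCase(newOrder) {
  const validatedOrder = validateNewOrder(newOrder);
  await assertCustomerExists(validatedOrder.customerId);
  const pricedOrder = await calculateOrderPricing(validatedOrder);
  const savedOrder = await orderRepository.saveOrder(pricedOrder);
  await sendSuccessEmailToCustomer(savedOrder);
  return savedOrder;
}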
Each interaction with the system—whether it's posting a new comment, requesting user deletion, or any other action—is managed by a dedicated use-case function. Each use-case constitutes multiple 'steps' - function calls that fulfill the desired flow.
By design, it's short, flat, no if/else, no try-catch, no algorithms - just plain calls to functions. This way, it tells the story in the simplest manner. Note how it doesn't share too many details, but tells enough for one to understand 'WHAT' is happening here and 'WHO' is doing it, but not 'HOW'.
When seeking a specific book in the local library, the visitor doesn't have to skim through all the shelves to find a specific topic of interest. A Library, like any other information system, uses a navigational system, wayfinding signage, to highlight the path to a specific information area.
+The library catalog redirects the reader to the area of interest
Similarly, in software development, when a developer needs to address a particular issue - such as fixing a bug in pricing calculations - the use-case acts like a navigational tool within the application. It serves as a hitchhiker's guide, or the yellow pages, pinpointing exactly where to find the necessary piece of code. While other organizational strategies like modularization and folder structures offer ways to manage code, the use-case approach provides a more focused and precise index: it shows only the relevant areas (and not 50 unrelated modules), it tells when precisely each module is used, what the specific entry point is, and which exact parameters are passed.
When the code reader's journey starts at the level of implementation services, she is immediately bombarded with intricate details. This immersion exposes her to both product and technical complexities right from the start. Typically, like in our example, the code first uses a dependency injection system to factor some classes, checks for nulls in the state, and gets some values from the distributed config system - all before even starting on the primary task. This is called accidental complexity. Tackling complexity is one of the finest arts of app design; as the code planner, you can't just eliminate complexity, but you may at least reduce the chances of someone meeting it.
Imagine your application as a tree where branches represent functions and the fruits are pockets of embedded complexity, some of which are poisoned (i.e., unnecessary complexities). Your objective is to structure this tree so that navigating through it exposes the visitor to as few poisoned fruits as possible:
The accidental-complexity tree: A visitor aiming to reach a specific leaf must navigate through all the intervening poisoned fruits.
This is where the 'Use Case' approach shines: it prioritizes high-level product steps and minimal technical details at the outset, acting as a navigation system that simplifies access to various parts of the application. With this navigation tool, she can easily ignore steps that are unrelated to her work and avoid the poisoned fruits. A true strategic design win.
The spread-complexity tree: Complexity is pushed to the periphery, allowing the reader to navigate directly to the essential fruits only.
When embarking on a new coding flow, where do you start? After digesting the requirements and setting up some initial API routes and high-level component tests, the next logical step might be less obvious. Here's a strategy: begin with a use-case. This approach promotes an outside-in workflow that not only streamlines development but also exposes potential risks early on.
While drafting a new use-case, you essentially map out the various steps of the process. Each step is a call to some service or repository function, sometimes before it even exists. Effortlessly and spontaneously, these steps become your TODO list, a live document that tells not only what should be implemented but also where risky gotchas hide. Take, for instance, this straightforward use-case for adding an order:
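A sketch for illustration; the step functions below are the ones discussed in the list that follows:

```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  await assertCustomerExists(validatedOrder.customerId); // a call to an external Microservice
  const orderWithPricing = calculateOrderPricing(validatedOrder); // pricing rules come from the product team
  const savedOrder = await insertOrder(orderWithPricing);
  await sendSuccessEmailToCustomer(savedOrder); // needs an email service token from Ops
  return savedOrder;
}
```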
This structured approach allows you to preemptively tackle potential implementation hurdles:
- sendSuccessEmailToCustomer - What if you lack a necessary email service token from the Ops team? Sometimes this demands approval and might take more than a week (believe me, I know). Acting now, before spending 3 days on coding, can make a big difference.
- calculateOrderPricing - Reminds you to confirm pricing details with the product team—ideally before they're out of office, avoiding delays that could impact your delivery timeline.
- assertCustomerExists - This call goes to an external Microservice that belongs to the User Management team. Did they already provide an OpenAPI specification of their routes? Check your Slack now; if they haven't yet, asking early prevents this from becoming a roadblock later.
Not only does this high-level thinking highlight your tasks and risks, it's also an optimal spot to start the design from:
Early on, when initiating a use-case, the developer defines the various types, function signatures, and their initial skeleton return data. This process naturally evolves into an effective design drill where the overall flow is decomposed into small units that actually fit together. This sketch-out surfaces early when puzzle pieces don't fit, while considering the underlying technologies. Here is an example: once I sketched a use-case and initially came up with these steps:
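For illustration, a sketch of that initial (and, as it turns out below, flawed) step order:

```typescript
// ❗️ The initial sketch: the email step precedes the save step
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  await sendSuccessEmailToCustomer(orderWithPricing); // ❗️ demands an 'Order Id' that doesn't exist yet
  const savedOrder = await insertOrder(orderWithPricing);
  return savedOrder;
}
```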
Going with my initial use-case above, an email is sent before the order is saved. Soon enough the compiler yelled at me: the email function signature is not satisfied; an 'Order Id' parameter is needed, but to obtain one the order must be saved to the DB first. I tried to change the step order; unfortunately, it turned out that my ORM does not return the ID of saved entities. I'm stuck, my design struggles, but at least this is realized before spending days on details. Unlike designing with papers and UML, designing with use-cases brings no overhead. Moreover, unlike high-level diagrams detached from implementation realities, use-case design is grounded in the actual constraints of the technology being used.
Say you have 82.35% test code coverage, are you happy and confident to deploy? I'd suggest that anyone below 100% must first clarify which code exactly is not covered with testing. Is this some nitty-gritty niche code, or actually critical business operations that are not fully tested? Typically, answering this requires scrutinizing the coverage of all the app files, a daunting task.
Use-cases simplify the coverage digest: when looking directly into the use-cases folder, one gets 'features coverage', a unique look into which user features and steps lack testing:
The use-cases folder test coverage report, some use-cases are only partially tested
See how the code above has an excellent overall coverage, 82.35%. But what about the remaining 17.65%? Looking at the report triggers a red flag: the 'payment-use-case' is not tested. This flow is where revenues are generated, a critical financial process which, as it turns out, has very low test coverage. This significant observation calls for immediate action. Use-case coverage thus not only helps in understanding which parts of your application are tested but also prioritizes testing efforts based on business criticality rather than mere technical functionality.
The influential book "Domain-Driven Design" advocates for "committing the team to relentlessly exercise the domain language in all communications within the team and in the code." This principle asserts that aligning code closely with product narratives fosters a common language among diverse stakeholders (e.g., product, team-leads, frontend, backend). While this sounds sensible, this advice is also a little vague - how and where should this happen?
Use-cases bring this idea down to earth: the use-case files are named after user journeys in the system (e.g., purchase-new-goods), the use-case code itself naturally describes the flow in a product language. For instance, if employees commonly use the term 'cut' at the water cooler to refer to a price reduction, the corresponding use-case should employ a function named 'calculatePriceCut'. This naming convention not only reinforces the domain language but also enhances mutual understanding across the team.
I bet you have encountered the situation where you turn the log level to 'Debug' (or any other verbose mode) and get a gazillion, an overwhelming and unbearable amount of log statements. Great chances that you've also met the opposite: the logger level is set to 'Info', yet there is almost zero logging for the specific route that you're looking into. It's hard to formalize among team members exactly when each type of logging should be invoked; the result is typically inconsistent and lacking observability.
Use-cases can drive trustworthy and consistent monitoring by taking advantage of the produced use-case steps. Since the precious work of breaking down the flow into meaningful steps was already done (e.g., send-email, charge-credit-card), each step can produce the desired level of logging. For example, one team's approach might be to emit logger.info on a use-case start and use-case end, and then each step will emit logger.debug. Whatever the chosen specific level is, use-case steps bring consistency and automation. Logging aside, the same can be applied to any other observability technique, like OpenTelemetry, to produce custom spans for every flow step.
The implementation, though, demands some thought: cluttering every step with a log statement is both verbose and depends on manual human work:
```typescript
// ❗️Verbose use case
export async function addOrderUseCase(orderRequest: OrderRequest): Promise<Order> {
  logger.info("Add order use case - Adding order starts now", orderRequest);
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  logger.debug("Add order use case - The order was validated", validatedOrder);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  logger.debug("Add order use case - The order pricing was decided", validatedOrder);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderWithPricing);
  logger.debug("Add order use case - Verified the user balance already", purchasingCustomer);
  const returnOrder = mapFromRepositoryToDto(purchasingCustomer as unknown as OrderRecord);
  logger.info("Add order use case - About to return result", returnOrder);
  return returnOrder;
}
```
One way around this is to create a step wrapper function that makes each step observable. This wrapper function gets called for each step:
```typescript
import { openTelemetry } from "@opentelemetry";

async function runUseCaseStep(stepName, stepFunction) {
  logger.debug(`Use case step ${stepName} starts now`);
  // Create Open Telemetry custom span
  openTelemetry.startSpan(stepName);
  return await stepFunction();
}
```
Now the use-case gets automated and consistent transparency:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const validatedOrder = await runUseCaseStep("Validation", validateAndCoerceOrder.bind(null, orderRequest));
  const orderWithPricing = await runUseCaseStep("Calculate price", calculateOrderPricing.bind(null, validatedOrder));
  await runUseCaseStep("Send email", sendSuccessEmailToCustomer.bind(null, orderWithPricing));
}
```
The code is a little simplified; in a real-world wrapper you'll have to add try-catch and cover other corner cases, but it makes the point: each step is a meaningful milestone in the user's journey that gets automated and consistent observability.
Since use-cases are mostly about zero complexity, use no code constructs, just flat calls to functions. No if/else, no switch, no try/catch, nothing, only a simple list of steps. A while ago I decided to allow just one if/else in a use-case:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  if (purchasingCustomer.isPremium) { // ❗️
    sendEmailToPremiumCustomer(purchasingCustomer); // This easily will grow with time to multiple if/else
  }
}
```
A month later, when I revisited the code above, there were already three nested if/elses. A year from now, the function above will host typical imperative code with many nested branches. Avoid this slippery road by setting a very strict border: put the conditions within the step functions:
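A sketch of the same flow with the border enforced; sendEmailIfPremiumCustomer is a hypothetical step function that hosts the condition internally:

```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  await sendEmailIfPremiumCustomer(purchasingCustomer); // ✅ the If now lives inside the step function
}
```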
The finest art of a great use-case is finding the right level of detail. At this early stage, the reader is like a traveler who uses a map to get some sense of the area or find a specific road, definitely not to learn about every road in the country. On the other hand, a good map doesn't show only the main highway and nothing else. For example, the following use-case is too short and vague:
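An illustrative sketch of such an overly vague use-case:

```typescript
// ❗️ Too vague: a single opaque call, the flow's story is untold
export async function addOrderUseCase(orderRequest: OrderRequest) {
  return await orderService.addOrder(orderRequest);
}
```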
The code above doesn't tell a story, nor does it eliminate any paths from the journey. Conversely, the following code does better at telling the story in brief:
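A sketch of a better-balanced version, reusing the step names from this article:

```typescript
// ✅ A brief story: each meaningful step is named, no implementation details leak
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  const savedOrder = await insertOrder(orderWithPricing);
  await sendSuccessEmailToCustomer(savedOrder);
  return savedOrder;
}
```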
Things get a little more challenging when dealing with long flows. What if there are a handful of important steps, say 20? What if multiple use-cases share a lot of repetition and common steps? Consider the case where 'admin approval' is a multi-step process that is invoked by a handful of different use-cases. When facing this, consider breaking the flow down into multiple use-cases, where one is allowed to call the other, as sketched below.
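For illustration, a sketch of one use-case delegating to another; all names here are hypothetical:

```typescript
// approve-admin-use-case.ts (hypothetical): a multi-step flow reused by other use-cases
export async function approveAdminUseCase(approvalRequest: ApprovalRequest) {
  const validatedRequest = validateApprovalRequest(approvalRequest);
  await notifyAdmins(validatedRequest);
  return await waitForApprovalQuorum(validatedRequest);
}

// Another use-case is allowed to call it as one of its steps
export async function refundOrderUseCase(refundRequest: RefundRequest) {
  const validatedRefund = validateRefundRequest(refundRequest);
  await approveAdminUseCase({ subject: validatedRefund }); // 👈 a use-case invoking another use-case
  await creditCustomer(validatedRefund);
}
```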
3. When you have no choice, control the DB transaction from the use-case
What if step 2 and step 5 both deal with data and must be atomic (fail or succeed together)? Typically you'll handle this with DB transactions, but since each step is discrete, how can a transaction be shared among the coupled steps?
If the steps take place one after the other, it makes sense to let the downstream service/repository handle them together and abstract the transaction away from the use-case. But what if the atomic steps are not consecutive? In this case, though not ideal, there is no escape from acquainting the use-case with a transaction object:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const transaction = Repository.startTransaction();
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderRequest, transaction);
  const orderWithPricing = calculateOrderPricing(purchasingCustomer);
  const savedOrder = await insertOrder(orderWithPricing, transaction);
  const returnOrder = mapFromRepositoryToDto(savedOrder);
  Repository.commitTransaction(transaction);
  return returnOrder;
}
```
A use-case file is created per user flow that is triggered from an API route. This model makes sense for significant flows, but what about small operations like getting an order by id? A 'get-order-by-id' use-case is likely to have one line of code, so creating a use-case file for every small request seems like unnecessary overhead. In this case, consider aggregating multiple operations under a single conceptual use-case file. In the example below, all the order queries co-live under the query-orders use-case file:
```typescript
// query-orders-use-cases.ts
export async function getOrder(id) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const result = await orderRepository.getOrderByID(id);
  return result;
}

export async function getAllOrders(criteria) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const result = await orderRepository.queryOrders(criteria);
  return result;
}
```
If you find it valuable, you'll also get a great return on a modest investment: no fancy tooling is needed, and the learning time is close to zero (in fact, you just read one of the longest articles on this matter...). There is also no need to refactor a whole system; rather, implement it gradually, per feature.
Once you become accustomed to using it, you'll find that this technique extends well beyond API routes. It's equally beneficial for managing message queue subscriptions and scheduled jobs. Backend aside, use it as the facade of every module or library: the code that is called by the entry file and orchestrates the internals. The same idea can be applied in the Frontend as well: declare the core actors at the component top level. Without implementation details, just put the references to the component's event handlers and hooks; now the reader knows about the key events that will drive this component.
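For illustration, a hypothetical React sketch of this idea; useOrderDraft, OrderFormView, recalculatePricing, and submitOrder are made-up names, the point is that the top level only declares the actors that drive the component:

```tsx
import { useCallback } from 'react';

// order-form.tsx (hypothetical): the top level tells WHICH events drive the component, not HOW
export function OrderForm() {
  const { order, setOrder } = useOrderDraft(); // the component's state hook, declared up-front
  const onPriceRecalculate = useCallback(() => recalculatePricing(order), [order]);
  const onSubmit = useCallback(() => submitOrder(order), [order]);

  return (
    <OrderFormView
      order={order}
      onChange={setOrder}
      onSubmit={onSubmit}
      onPriceRecalculate={onPriceRecalculate}
    />
  );
}
```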
You might think this all sounds remarkably straightforward, and it is. My apologies, this article wasn't about cutting-edge technologies; neither did it cover shiny new dev tooling or AI-based rocket science. In a land where complexity is the key enemy, simple ideas can be more impactful than sophisticated tooling, and the use-case is a powerful and sweet pattern that is meant to live in every piece of software.
This post is about tests that are easy to write, typically 5-8 lines, that cover dark and dangerous corners of our applications, but are often overlooked
Some context first: how do we test a modern backend? With the testing diamond, of course, by putting the focus on component/integration tests that cover all the layers, including a real DB. With this approach, our tests 99% resemble production and the user flows, while the development experience is almost as good as with unit tests. Sweet. If this topic is of interest, we've also written a guide with 50 best practices for integration tests in Node.js
But there is a pitfall: most developers write only semi-happy test cases that are focused on the core user flows, like invalid inputs, CRUD operations, and various application states. This is indeed the bread and butter, a great start, but a whole area is left uncovered. For example, typical tests don't simulate an unhandled promise rejection that leads to a process crash, nor do they simulate the webserver bootstrap phase that might fail and leave the process idle, or HTTP calls to external services that often end with timeouts and retries. They typically don't cover the health and readiness routes, nor the integrity of the OpenAPI spec against the actual route schemas, to name just a few examples. There are many dead bodies buried beyond business logic, issues that sometimes go beyond bugs and concern application downtime
Here are a handful of examples that might open your mind to a whole new class of risks and tests
July 2023: My testing course was launched: I've just released a comprehensive testing course that I've been working on for two years. 🎁 It's now on sale, but only for the month of July. Check it out at testjavascript.com
👉What & so what? - In all of your tests, you assume that the app has already started successfully, lacking a test against the initialization flow. This is a pity because this phase hides some potentially catastrophic failures. First, initialization failures are frequent: many bad things can happen here, like a DB connection failure or a new version that crashes during deployment. For this reason, runtime platforms (like Kubernetes and others) encourage components to signal when they are ready (see readiness probe). Errors at this stage also have a dramatic effect on the app health: if the initialization fails and the process stays alive, it becomes a 'zombie process'. In this scenario, the runtime platform won't realize that something went bad, will keep forwarding traffic to it, and will avoid creating alternative instances. Besides exiting gracefully, you may want to consider logging, firing a metric, and adjusting your /readiness route. Does it work? Only a test can tell!
📝 Code
Code under test, api.js:
```javascript
// A common express server initialization
const startWebServer = () => {
  return new Promise((resolve, reject) => {
    try {
      // A typical Express setup
      expressApp = express();
      defineRoutes(expressApp); // a function that defines all routes
      expressApp.listen(process.env.WEB_SERVER_PORT);
      resolve(expressApp); // assumed: signal readiness; the extracted snippet never resolved the promise
    } catch (error) {
      // log here, fire a metric, maybe even retry and finally:
      process.exit();
    }
  });
};
```
The test:
```javascript
const api = require('./entry-points/api'); // our api starter that exposes 'startWebServer' function
const sinon = require('sinon'); // a mocking library
const routes = require('./entry-points/routes'); // assumed path: the module that exposes 'defineRoutes'

test('When an error happens during the startup phase, then the process exits', async () => {
  // Arrange
  const processExitListener = sinon.stub(process, 'exit');
  // 👇 Choose a function that is part of the initialization phase and make it fail
  sinon
    .stub(routes, 'defineRoutes')
    .throws(new Error('Cant initialize connection'));

  // Act
  await api.startWebServer();

  // Assert
  expect(processExitListener.called).toBe(true);
});
```
👉What & why - For many, testing errors means checking the exception type or the API response. This leaves one of the most essential parts uncovered: making the error correctly observable. In plain words, ensuring that it's being logged correctly and exposed to the monitoring system. It might sound like an internal thing, implementation testing, but actually, it goes directly to a user. Yes, not the end-user, but rather another important one: the ops user who is on-call. What are the expectations of this user? At the very basic level, when a production issue arises, she must see detailed log entries, including stack trace, cause, and other properties. This info can save the day when dealing with production incidents. On top of this, in many systems, monitoring is managed separately, drawing conclusions about the overall system state using cumulative heuristics (e.g., an increase in the number of errors over the last 3 hours). To support these monitoring needs, the code must also fire error metrics. Even tests that do try to cover these needs take a naive approach by checking that the logger function was called - but hey, does it include the right data? Some write better tests that check the error type that was passed to the logger. Good enough? No! The ops user doesn't care about JavaScript class names but about the JSON data that is sent out. The following test focuses on the specific properties that are being made observable:
📝 Code
```javascript
test('When exception is thrown during request, Then logger reports the mandatory fields', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    status: 'approved',
  };
  const metricsExporterDouble = sinon.stub(metricsExporter, 'fireMetric');
  sinon
    .stub(OrderRepository.prototype, 'addOrder')
    .rejects(new AppError('saving-failed', 'Order could not be saved', 500));
  const loggerDouble = sinon.stub(logger, 'error');

  // Act
  await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  expect(loggerDouble).toHaveBeenCalledWith({
    name: 'saving-failed',
    status: 500,
    stack: expect.any(String),
    message: expect.any(String),
  });
  expect(metricsExporterDouble).toHaveBeenCalledWith('error', {
    errorName: 'saving-failed', // matches the error name thrown above
  });
});
```
👽 The 'unexpected visitor' test - when an uncaught exception meets our code
👉What & why - A typical error flow test falsely assumes two conditions: a valid error object was thrown, and it was caught. Neither is guaranteed; let's focus on the 2nd assumption: it's common for certain errors to be left uncaught. The error might get thrown before your framework error handler is ready, some npm libraries can throw surprisingly from different stacks using timer functions, or you simply forgot to set someEventEmitter.on('error', ...), to name a few examples. These errors will find their way to the global process.on('uncaughtException') handler, hopefully, if your code subscribed to it. How do you simulate this scenario in a test? Naively, you may locate a code area that is not wrapped with try-catch and stub it to throw during the test. But here's a catch-22: if you are familiar with such an area, you are likely to fix it and ensure its errors are caught. What do we do then? We can turn to our benefit the fact that JavaScript is 'borderless': if some object can emit an event, we as its subscribers can make it emit this event ourselves. Here's an example:
📝 Code
```javascript
test('When an unhandled exception is thrown, then process stays alive and the error is logged', async () => {
  // Arrange
  const loggerDouble = sinon.stub(logger, 'error');
  const processExitListener = sinon.stub(process, 'exit');
  const errorToThrow = new Error('An error that wont be caught 😳');

  // Act
  process.emit('uncaughtException', errorToThrow); // 👈 Where the magic is

  // Assert
  expect(processExitListener.called).toBe(false);
  expect(loggerDouble).toHaveBeenCalledWith(errorToThrow);
});
```
🕵🏼 The 'hidden effect' test - when the code should not mutate at all
👉What & so what - In common scenarios, the code under test should stop early, like when the incoming payload is invalid or a user doesn't have sufficient credits to perform an operation. In these cases, no DB records should be mutated. Most tests out there in the wild settle for testing the HTTP response only: got back HTTP 400? Great, the validation/authorization probably works. Or does it? The test trusts the code too much; a valid response doesn't guarantee that the code behind behaved as designed. Maybe a new record was added although the user has no permissions? Clearly you need to test this, but how would you test that a record was NOT added? There are two options here: if the DB is purged before/after every test, then just try to perform an invalid operation and check that the DB is empty afterward. If you're not cleaning the DB often (like me, but that's another discussion), the payload must contain some unique and queryable value that you can query later, hoping to get no records. This is how it looks:
📝 Code
```javascript
it('When adding an invalid order, then it returns 400 and NOT retrievable', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    mode: 'draft',
    externalIdentifier: uuid(), // no existing record has this value
  };

  // Act
  const { status: addingHTTPStatus } = await axiosAPIClient.post(
    '/order',
    orderToAdd
  );

  // Assert
  const { status: fetchingHTTPStatus } = await axiosAPIClient.get(
    `/order/externalIdentifier/${orderToAdd.externalIdentifier}`
  ); // Trying to get the order that should have failed
  expect({ addingHTTPStatus, fetchingHTTPStatus }).toMatchObject({
    addingHTTPStatus: 400,
    fetchingHTTPStatus: 404, // 👈 Check that no such record exists
  });
});
```
🧨 The 'overdoing' test - when the code should mutate but it's doing too much
👉What & why - This is how a typical data-oriented test looks: first you add some records, then approach the code under test, and finally assert what happens to these specific records. So far, so good. There is one caveat here though: since the test narrows its focus to specific records, it ignores whether other records were unnecessarily affected. This can be really bad. Here's a short real-life story that happened to one of my customers: some data access code changed and incorporated a bug that updates ALL the system users instead of just one. All tests passed since they focused on a specific record, which was correctly updated; they just ignored the others. How would you test and prevent this? Here is a nice trick that I was taught by my friend Gil Tayar: in the first phase of the test, besides the main records, add one or more 'control' records that should not get mutated during the test. Then, run the code under test, and besides the main assertion, check also that the control records were not affected:
📝 Code
```javascript
test('When deleting an existing order, Then it should NOT be retrievable', async () => {
  // Arrange
  const orderToDelete = {
    userId: 1,
    productId: 2,
  };
  const deletedOrder = (await axiosAPIClient.post('/order', orderToDelete)).data
    .id; // We will delete this soon
  const orderNotToBeDeleted = orderToDelete;
  const notDeletedOrder = (
    await axiosAPIClient.post('/order', orderNotToBeDeleted)
  ).data.id; // We will not delete this

  // Act
  await axiosAPIClient.delete(`/order/${deletedOrder}`);

  // Assert
  const { status: getDeletedOrderStatus } = await axiosAPIClient.get(
    `/order/${deletedOrder}`
  );
  const { status: getNotDeletedOrderStatus } = await axiosAPIClient.get(
    `/order/${notDeletedOrder}`
  );
  expect(getNotDeletedOrderStatus).toBe(200);
  expect(getDeletedOrderStatus).toBe(404);
});
```
🕰 The 'slow collaborator' test - when the other HTTP service times out
👉What & why - When your code approaches other services/microservices via HTTP, savvy testers minimize end-to-end tests because these tests lean toward happy paths (it's harder to simulate other scenarios). This mandates using some mocking tool to act like the remote service, for example, tools like nock or wiremock. These tools are great, but some use them naively and check mainly that outgoing calls were indeed made. What if the other service is not available in production, or is slower and times out occasionally (one of the biggest risks of Microservices)? While you can't wholly save such a transaction, your code should do its best given the situation: retry, or at least log and return the right status to the caller. All the network mocking tools allow simulating delays, timeouts, and other 'chaotic' scenarios. The question left is how to simulate a slow response without having slow tests. You may use fake timers and trick the system into believing a few seconds passed in a single tick. If you're using nock, it offers an interesting feature to simulate timeouts quickly: the .delay function simulates slow responses; nock will realize if the delay is higher than the HTTP client timeout and throw a timeout event immediately, without waiting
📝 Code
```javascript
// In this example, our code accepts new Orders and while processing them approaches the Users Microservice
test('When users service times out, then return 503 (option 1 with fake timers)', async () => {
  // Arrange
  const clock = sinon.useFakeTimers();
  config.HTTPCallTimeout = 1000; // Set a timeout for outgoing HTTP calls
  nock(`${config.userServiceURL}/user/`)
    .get('/1', () => clock.tick(2000)) // 👈 Reply delay is bigger than the configured timeout
    .reply(200);
  const loggerDouble = sinon.stub(logger, 'error');
  const orderToAdd = {
    userId: 1,
    productId: 2,
    mode: 'approved',
  };

  // Act
  // 👇 try to add a new order which should fail due to the User service not being available
  const response = await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  // 👇 At least our code does its best given this situation
  expect(response.status).toBe(503);
  expect(loggerDouble.lastCall.firstArg).toMatchObject({
    name: 'user-service-not-available',
    stack: expect.any(String),
    message: expect.any(String),
  });
});
```
💊 The 'poisoned message' test - when the message consumer gets an invalid payload that might put it in stagnation
👉What & so what - When testing flows that start or end in a queue, I bet you tend to bypass the message queue layer, where the code and libraries consume a queue, and approach the logic layer directly. Yes, it makes things easier, but it leaves a class of risks uncovered. For example, what if the logic part throws an error or the message schema is invalid, but the message queue consumer fails to translate this exception into a proper message queue action? For example, the consumer code might fail to reject the message or increment the number of attempts (depending on the type of queue that you're using). When this happens, the message will enter a loop where it is served again and again. Since this will apply to many messages, things can get really bad as the queue gets highly saturated. For this reason, this syndrome is called the 'poisoned message'. To mitigate this risk, the tests' scope must include all the layers, like you probably do when testing against APIs. Unfortunately, this is not as easy as testing with a DB, because message queues are flaky. Here is why
When testing with real queues, things get curiouser and curiouser: tests from different processes will steal messages from each other, and purging queues is harder than you might think (e.g., SQS demands 60 seconds to purge queues), to name a few challenges that you won't find when dealing with a real DB
Here is a strategy that works for many teams and holds a small compromise: use a fake, in-memory message queue. By 'fake' I mean something simplistic that acts like a stub/spy and does nothing but tell when certain calls are made (e.g., consume, delete, publish). You might find reputable fakes/stubs for your own message queue, like this one for SQS, or you can easily code one yourself. No worries, I'm not in favor of maintaining testing infrastructure myself; this proposed component is extremely simple and unlikely to surpass 50 lines of code (see example below). On top of this, whether using a real or fake queue, one more thing is needed: create a convenient interface that tells the test when certain things happened, like when a message was acknowledged/deleted or a new message was published. Without this, the test never knows when certain events happened and leans toward quirky techniques like polling. With this setup, the test will be short and flat, and you can easily simulate common message queue scenarios like out-of-order messages, batch rejects, duplicated messages, and, in our example, the poisoned message scenario (using RabbitMQ):
📝 Code
Create a fake message queue that does almost nothing but record calls; see the full example here:
```javascript
class FakeMessageQueueProvider extends EventEmitter {
  // Implement here
  publish(message) {}
  consume(queueName, callback) {}
}
```
Make your message queue client accept either a real or a fake provider:
```javascript
class MessageQueueClient extends EventEmitter {
  // Pass to it a fake or real message queue
  constructor(customMessageQueueProvider) {}
  publish(message) {}
  consume(queueName, callback) {}
  // Simple implementation can be found here:
  // https://github.com/testjavascript/nodejs-integration-tests-best-practices/blob/master/example-application/libraries/fake-message-queue-provider.js
}
```
Expose a convenient function that tells when certain calls were made:
```javascript
const FakeMessageQueueProvider = require('./libs/fake-message-queue-provider');
const MessageQueueClient = require('./libs/message-queue-client');
const newOrderService = require('./domain/newOrderService');

test('When a poisoned message arrives, then it is being rejected back', async () => {
  // Arrange
  const messageWithInvalidSchema = { nonExistingProperty: 'invalid❌' };
  const messageQueueClient = new MessageQueueClient(
    new FakeMessageQueueProvider()
  );
  // Subscribe to new messages, passing the handler function
  messageQueueClient.consume('orders.new', newOrderService.addOrder);

  // Act
  await messageQueueClient.publish('orders.new', messageWithInvalidSchema);
  // Now all the layers of the app will get stretched 👆, including logic and message queue libraries

  // Assert
  await messageQueueClient.waitFor('reject', { howManyTimes: 1 });
  // 👆 This tells us that eventually our code asked the message queue client to reject this poisoned message
});
```
👉What & why - When publishing a library to npm, all your tests might easily pass BUT... the same functionality will fail on the end-user's machine. How come? Tests are executed against the local developer files, but the end-user is only exposed to the artifacts that were built. See the mismatch here? After running the tests, the package files are transpiled (I'm looking at you, babel users), zipped, and packed. If a single file is excluded due to .npmignore or a polyfill is not added correctly, the published code will lack mandatory files
📝 Code
Consider the following scenario: you're developing a library, and you wrote this code:
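A hypothetical sketch of such a library (file contents are made up for illustration; the file names come from the scenario below): the entry point re-exports the actual logic from a second file:

```javascript
// index.js - the package entry point, it only re-exports the real logic
module.exports = require('./calculate');

// calculate.js - the actual implementation, fully covered by tests
module.exports.calculate = () => 1;
```

And the package manifest that seals the bug, listing only the entry file:

```json
{
  "name": "my-package",
  "main": "index.js",
  "files": ["index.js"]
}
```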
See, 100% coverage, all tests pass locally and in the CI ✅, it just won't work in production 👹. Why? Because you forgot to include calculate.js in the package.json files array 👆
What can we do instead? We can test the library as its end-users do. How? Publish the package to a local registry like verdaccio, and let the tests install and approach the published code. Sounds troublesome? Judge for yourself 👇
📝 Code
```javascript
// global-setup.js
// 1. Setup the in-memory NPM registry, one function, that's it! 🔥
await setupVerdaccio();

// 2. Building our package
await exec('npm', ['run', 'build'], {
  cwd: packagePath,
});

// 3. Publish it to the in-memory registry
await exec('npm', ['publish', '--registry=http://localhost:4873'], {
  cwd: packagePath,
});

// 4. Installing it in the consumer directory
await exec('npm', ['install', 'my-package', '--registry=http://localhost:4873'], {
  cwd: consumerPath,
});

// Test file in the consumerPath
// 5. Test the package 🚀
test('should succeed', async () => {
  const { fn1 } = await import('my-package');
  expect(fn1()).toEqual(1);
});
```
- Testing the different versions of a peer dependency you support - say your package supports react 16 to 18, you can now test that
- Testing ESM and CJS consumers
- If you have a CLI application, you can test it like your users do
- Making sure all the voodoo magic in that babel file is working as expected
🗞 The 'broken contract' test - when the code is great but its corresponding OpenAPI docs lead to a production bug
👉What & so what - Quite confidently, I'm sure that almost no team tests their OpenAPI correctness. "It's just documentation", "we generate it automatically based on code" are the typical beliefs behind this. Let me show you how this auto-generated documentation can be wrong and lead not only to frustration but also to a bug. In production.
Consider the following scenario: you're requested to return an HTTP error status code if an order is duplicated, but you forget to update the OpenAPI specification with this new HTTP status response. While some frameworks can update the docs with new fields, none can realize which errors your code throws; this labour is always manual. On the other side of the line, the API client is doing everything just right, going by the spec that you published, adding orders with some duplication because the docs don't forbid doing so. Then, BOOM, production bug -> the client crashes and shows an ugly unknown error message to the user. This type of failure is called the 'contract' problem: two parties interact, each has code that works perfectly, they just operate under different specs and assumptions. While there are fancy, sophisticated, and exhaustive solutions to this challenge (e.g., PACT), there are also leaner approaches that get you covered easily and quickly (at the price of covering fewer risks).
The following sweet technique is based on libraries (available for jest and mocha) that listen to all network responses, compare the payload against the OpenAPI document, and if any deviation is found, make the test fail with a descriptive error. With this new weapon in your toolbox and almost zero effort, another risk is ticked off. It's a pity that these libs can't also assert against the incoming requests, to tell you that your tests use the API wrong. One small caveat and an elegant solution: these libraries dictate putting an assertion statement in every test - expect(response).toSatisfyApiSpec() - a bit tedious and reliant on human discipline. You can do better if your HTTP client supports a plugin/hook/interceptor: put this assertion in a single place, and it will apply to all the tests:
The OpenAPI doesn't document HTTP status '409'; no framework knows to update the OpenAPI doc based on thrown exceptions:
"responses":{ "200":{ "description":"successful", } , "400":{ "description":"Invalid ID", "content":{} },// No 409 in this list😲👈 }
The test code
```javascript
const jestOpenAPI = require('jest-openapi');
jestOpenAPI('../openapi.json');

test('When an order with duplicated coupon is added, then 409 error should get returned', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    couponId: uuid(),
  };
  await axiosAPIClient.post('/order', orderToAdd);

  // Act
  // We're adding the same coupon twice 👇
  const receivedResponse = await axios.post('/order', orderToAdd);

  // Assert
  expect(receivedResponse.status).toBe(409);
  expect(receivedResponse).toSatisfyApiSpec();
  // This 👆 will throw if the API response, body or status, differs from what is stated in the OpenAPI
});
```
Trick: if your HTTP client supports any kind of plugin/hook/interceptor, put the following code in 'beforeAll'. This covers all the tests against OpenAPI mismatches:
```javascript
beforeAll(() => {
  axios.interceptors.response.use((response) => {
    expect(response).toSatisfyApiSpec();
    // With this 👆, add nothing to the tests - each will fail if the response deviates from the docs
    return response; // interceptors must return the response
  });
});
```
The examples above were not meant to be merely a checklist of 'don't forget' test cases, but rather a fresh mindset on what tests could cover for you. Modern tests are not just about functions or user flows, but about any risk that might visit your production. This is doable only with component/integration tests, never with unit or end-to-end tests. Why? Because, unlike unit tests, you need all the parts to play together (e.g., the DB migration file, with the DAL layer and the error handler, all together). And unlike E2E, you have the power to simulate in-process scenarios that demand some tweaking and mocking. Component tests allow you to include many production moving parts early, on your machine. I like calling this 'production-oriented development'
As a Node.js starter, choosing the right libraries and frameworks for our users is the bread and butter of our work in Practica.js. In this post, we'd like to share our considerations in choosing our monorepo tooling.
The Monorepo market is hot like fire. Weirdly, right when the demand for Monorepos is exploding, one of the leading libraries, Lerna, has just retired. When looking closely, it might not be just a coincidence: with so many disruptive and shiny features brought on by new vendors, Lerna failed to keep up with the pace and stay relevant. This bloom of new tooling gets many confused: what is the right choice for my next project? What should I look at when choosing a Monorepo tool? This post is all about curating this information overload, covering the new tooling, emphasizing what is important, and finally sharing some recommendations. If you are here for tools and features, you're in the right place, although you might find yourself on a soul-searching journey toward your desired development workflow.
This post is concerned with backend-only Node.js solutions. It is also scoped to typical business solutions. If you're a Google/FB developer who faces 8,000 packages, sorry, you need special gear; consequently, monster Monorepo tooling like Bazel is left out. We will cover here some of the most popular Monorepo tools, including Turborepo, Nx, PNPM, Yarn/npm workspaces, and Lerna (although it's no longer maintained, it's a good baseline for comparison).
Let's start. When people use the term Monorepo, they typically refer to one or more of the following 4 layers. Each of them can bring value to your project, each has different consequences, tooling, and features:
When was the last time you introduced a new pattern to your code? The use-case pattern is a great candidate: it's powerful, sweet, easy to implement, and can strategically elevate your backend code quality in a short time.
The term 'use case' means many different things in our industry. It's being used by product folks to describe a user journey, mentioned by various famous architecture books to describe vague high-level concepts. this article focuses on its practical application at the code level by emphasizing its surprising merits how to implement it correctly.
Technically, the use-case pattern code belongs between the controller (e.g., API routes) and the business logic services (like those calculating or saving data). The use-case code is called by the controller and tells in high-level words the flow that is about to happen in a simple manner. Doing so increases the code readability, navigability, pushes complexity toward the edges, improves observability and 3 other merits that are shown below with examples.
But before we delve into its mechanics, let's first touch on a common problem it aims to address and see some code that calls for trouble.
Prefer a 10 min video? Watch here, or keep reading below
Imagine a developer, returning to a codebase she hasn't touched in months, tasked with fixing a bug in the 'new orders flow'—specifically, an issue with price calculation in an electronic shop app.
Her journey begins promisingly smooth:
- 🤗 Testing - She starts her journey off the automated tests to learn about the flow from an outside-in approach. The testing code is short and standard, as should be:
test("When adding an order with 100$ product, then the price charge should be 100$ ",async()=>{ // .... })
- 🤗 Controller - She moves to skim through the implementation and starts from the API routes. Unsurprisingly, the Controller code is straightforward:
app.post("/api/order",async(req:Request,res:Response)=>{ const newOrder = req.body; await orderService.addOrder(newOrder);// 👈 This is where the real-work is done res.status(200).json({message:"Order created successfully"}); });
Smooth sailing thus far, almost zero complexity. Typically, the controller would now hand off to a Service where the real implementation begins, she navigates into the order service to find where and how to fix that pricing bug.
- 😲 The service - Suddenly! She is thrown into hundred lins of code (at best) with tons of details. She encounters classes with intricate states, inheritance hierarchies, a dependency injection framework that wire all the dependent services, and other boilerplate code. Here is a sneak peak from a real-world service, already simplified for brevity. Read it, feel it:
letDBRepository; exportclassOrderService:ServiceBase<OrderDto>{ asyncaddOrder(orderRequest:OrderRequest):Promise<Order>{ try{ ensureDBRepositoryInitialized(); const{ openTelemetry, monitoring, secretManager, priceService, userService }= dependencyInjection.getVariousServices(); logger.info("Add order flow starts now", orderRequest); openTelemetry.sendEvent("new order", orderRequest); const validationRules =awaitgetFromConfigSystem("order-validation-rules"); const validatedOrder =validateOrder(orderRequest, validationRules); if(!validatedOrder){ thrownewError("Invalid order"); } this.base.startTransaction(); const user =await userService.getUserInfo(validatedOrder.customerId); if(!user){ const savedOrder =awaittryAddUserWithLegacySystem(validatedOrder); return savedOrder; } // And it goes on and on until the pricing module is mentioned }
So many details and things to learn upfront, which of them is crucial for her to learn now before dealing with her task? How can she find where is that pricing module?
She is not happy. Right off the bat, she must make herself acquaintance with a handful of product and technical narratives. She just fell off the complexity cliff: from a zero-complexity controller straight into a 1000-piece puzzle. Many of them are unrelated to her task.
In a perfect world, she would love first to get a high-level brief of the involved steps so she can understand the whole flow, and from this comfort standpoint choose where to deepen her journey. This is what this pattern is all about.
The use-case is a file with a single function that is being called by the API controller to orchestrate the various implementation services. It's merely a simple function that enumerates and calls the code that does the actual job:
Each interaction with the system—whether it's posting a new comment, requesting user deletion, or any other action—is managed by a dedicated use-case function. Each use-case constitutes multiple 'steps' - function calls that fulfill the desired flow.
By design, it's short, flat, no If/else, no try-catch, no algorithms, just plain calls to functions. This way, it tells the story in the simplest manner. Note how it doesn't share too much details, but tells enough for one to understand 'WHAT' is happening here and 'WHO' is doing that, but not 'HOW'.
When seeking a specific book in the local library, the visitor doesn't have to skim through all the shelves to find a specific topic of interest. A Library, like any other information system, uses a navigational system, wayfinding signage, to highlight the path to a specific information area.
+The library catalog redirects the reader to the area of interest
Similarly, in software development, when a developer needs to address a particular issue—such as fixing a bug in pricing calculations—the 'use case' acts like a navigational tool within the application. It serves as a hitchhiker's guide, or the yellow pages, pinpointing exactly where to find the necessary piece of code. While other organizational strategies like modularization and folder structures offer ways to manage code, the 'use case' approach provides a more focused and precise index. it shows only the relevant areas (and not 50 unrelated modules), it tells when precisely this module is used, what is the specific entry point and which exact parameters are passed.
When a developer begins inspecting a codebase at the level of implementation services, she is immediately bombarded with intricate details. This immersion thrusts her into the depths of both product and technical complexities. Typically, she must navigate through a dependency injection system to instantiate classes, manage null states, and retrieve settings from a distributed configuration system
When the code reader's journey starts at the level of implementation-services, she is immediately bombarded with intricate details. This immersion exposes her to both product and technical complexities right from the start. Typically, like in our example case, the code first use a dependency injection system to factor some classes, check for nulls in the state and get some values from the distributed config system - all before even starting on the primary task. This is called accidental complexity. Tackling complexity is one of the finest art of app design, as the code planner you can't just eliminate complexity, but you may at least reduce the chances of someone meeting it.
Imagine your application as a tree where branches represent functions and the fruits are pockets of embedded complexity, some of which are poisoned (i.e., unnecessary complexities). Your objective is to structure this tree so that navigating through it exposes the visitor to as few poisoned fruits as possible:
+The accidental-complexity tree: A visitor aiming to reach a specific leaf must navigate through all the intervening poisoned fruits.
This is where the 'Use Case' approach shines: by prioritizing high-level product steps and minimal technical details at the outset—a navigation system that simplifies access to various parts of the application. With this navigation tool, she can easily ignore steps that are unrelated with her work, and avoid poisoned fruits. A true strategic design win.
+The spread-complexity tree: Complexity is pushed to the periphery, allowing the reader to navigate directly to the essential fruits only.
When embarking on a new coding flow, where do you start? After digesting the requirements and setting up some initial API routes and high-level component tests, the next logical step might be less obvious. Here's a strategy: begin with a use-case. This approach promotes an outside-in workflow that not only streamlines development but also exposes potential risks early on.
While drafting a new use-case, you essentially map out the various steps of the process. Each step is a call to some service or repository functions, sometimes before they even exist. Effortlessly and spontaneously, these steps become your TODO list, a live document that tells not only what should be implemented rather also where risky gotchas hide. Take, for instance, this straightforward use-case for adding an order:
This structured approach allows you to preemptively tackle potential implementation hurdles:
- sendSuccessEmailToCustomer - What if you lack a necessary email service token from the Ops team? Sometimes, this demands approval and might last more than a week (believe me, I know). Acting now, before spending 3 days on coding, can make a big difference.
- calculateOrderPricing - Reminds you to confirm pricing details with the product team—ideally before they're out of office, avoiding delays that could impact your delivery timeline.
- assertCustomerExists - This call goes to an external Microservice which belongs to the User Management team. Did they already provide an OpenAPI specification of their routes? Check your Slack now, if they didn't yet, asking too late can prevent it from becoming a roadblock later.
Not only does this high-level thinking highlight your tasks and risks, it's also an optimal spot to start the design from:
Early on when initiating a use-case, the developers define the various types, functions signature, and their initial skeleton return data. This process naturally evolves into an effective design drill where the overall flow is decomposed into small units that actually fit. This sketch-out results in discovering early when puzzle pieces don't fit while considering the underlying technologies. Here is an example, once I sketched a use-case and initially came up with these steps:
Going with my initial use-case above, an email is sent before the the order is saved. Soon enough the compiler yelled at me: The email function signature is not satisfied, an 'Order Id' parameter is needed but to obtain one the order must be saved to DB first. I tried to change the order, unfortunately it turned out that my ORM is not returning the ID of saved entities. I'm stuck, my design struggles, at least this is realized before spending days on details. Unlike designing with papers and UML, designing with use-case brings no overhead. Moreover, unlike high-level diagrams detached from implementation realities, use-case design is grounded in the actual constraints of the technology being used.
Say you have 82.35% testing code coverage, are you happy and feeling confident to deploy? I'd suggest that anyone having below 100% must clarify first which code exactly is not covered with testing. Is this some nitty-gritty niche code or actually critical business operations that are not fully tested? Typically, answering this requires scrutinizing all the app file coverage, a daunting task.
Use-cases simplifies the coverage coverage digest: when looking directly into the use-cases folder, one gets 'features coverage', a unique look into which user features and steps lack testing:
+The use-cases folder test coverage report, some use-cases are only partially tested
See how the code above has an excellent overall coverage, 82.35%. But what about the remaining 17.65% code? Looking at the report triggers a red flag: the unusual 'payment-use-case' is not tested. This flow is where revenues are generated, a critical financial process which as turns out has a very low test coverage. This significant observation calls for immediate actions. Use-case coverage thus not only helps in understanding what parts of your application are tested but also prioritizes testing efforts based on business criticality rather than mere technical functionality.
The influential book "Domain-Driven Design" advocates for "committing the team to relentlessly exercise the domain language in all communications within the team and in the code." This principle asserts that aligning code closely with product narratives fosters a common language among diverse stakeholders (e.g., product, team-leads, frontend, backend). While this sounds sensible, this advice is also a little vague - how and where should this happen?
Use-cases bring this idea down to earth: the use-case files are named after user journeys in the system (e.g., purchase-new-goods), the use-case code itself naturally describes the flow in a product language. For instance, if employees commonly use the term 'cut' at the water cooler to refer to a price reduction, the corresponding use-case should employ a function named 'calculatePriceCut'. This naming convention not only reinforces the domain language but also enhances mutual understanding across the team.
I bet you encountered the situation when you turn the log level to 'Debug' (or any other verbose mode) and gets gazillion, overwhelming, and unbearable amount of log statements. Great chances that you also met the opposite when setting the logger level to 'Info' but there are also almost zero logging for that specific route that you're looking into. It's hard to formalize among team members when exactly each type of logging should be invoked, the result is a typical inconsistent and lacking observability.
Use-cases can drive trustworthy and consistent monitoring by taking advantage of the produced use-case steps. Since the precious work of breaking-down the flow into meaningful steps was already done (e.g., send-email, charge-credit-card), each step can produce the desired level of logging. For example, one team's approach might be to emit logger.info on a use-case start and use-case end, and then each step will emit logger.debug. Whatever the chosen specific level is, use-case steps bring consistency and automation. Put aside logging, the same can be applied with any other observability technique like OpenTelemetry to produce custom spans for every flow step.
The implementation though demands some thinking, cluttering every step with a log statement is both verbose and depends on human manual work:
// ❗️Verbose use case exportasyncfunctionaddOrderUseCase(orderRequest:OrderRequest):Promise<Order>{ logger.info("Add order use case - Adding order starts now", orderRequest); const validatedOrder =validateAndCoerceOrder(orderRequest); logger.debug("Add order use case - The order was validated", validatedOrder); const orderWithPricing =calculateOrderPricing(validatedOrder); logger.debug("Add order use case - The order pricing was decided", validatedOrder); const purchasingCustomer =awaitassertCustomerHasEnoughBalance(orderWithPricing); logger.debug("Add order use case - Verified the user balance already", purchasingCustomer); const returnOrder =mapFromRepositoryToDto(purchasingCustomer as unknown asOrderRecord); logger.info("Add order use case - About to return result", returnOrder); return returnOrder; }
One way around this is creating a step wrapper function that makes it observable. This wrapper function will get called for each step:
import{ openTelemetry }from"@opentelemetry"; asyncfunctionrunUseCaseStep(stepName, stepFunction){ logger.debug(`Use case step ${stepName} starts now`); // Create Open Telemetry custom span openTelemetry.startSpan(stepName); returnawaitstepFunction(); }
Now the use-case gets automated and consistent transparency:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const validatedOrder = await runUseCaseStep("Validation", validateAndCoerceOrder.bind(null, orderRequest));
  const orderWithPricing = await runUseCaseStep("Calculate price", calculateOrderPricing.bind(null, validatedOrder));
  await runUseCaseStep("Send email", sendSuccessEmailToCustomer.bind(null, orderWithPricing));
}
```
The code is a little simplified; a real-world wrapper will need a try/catch and coverage of other corner cases, but it makes the point: each step is a meaningful milestone in the user's journey that gets automatic and consistent observability.
Since use-cases are mostly about zero complexity, use no code constructs other than flat calls to functions. No if/else, no switch, no try/catch - nothing, only a simple list of steps. A while ago I decided to allow just one if/else in a use-case:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  if (purchasingCustomer.isPremium) { // ❗️
    sendEmailToPremiumCustomer(purchasingCustomer); // This will easily grow with time into multiple if/else
  }
}
```
A month later, when I revisited the code above, there were already three nested if/elses. A year from now, the function will host typical imperative code with many nested branches. Avoid this slippery road by drawing a very strict border: put the conditions inside the step functions:
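For illustration, here is a sketch of the same flow with the condition pushed down into a step (the step name sendEmailToCustomerIfPremium is hypothetical):

```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  // ✅ The use-case stays flat - the premium check now lives inside the step function
  await sendEmailToCustomerIfPremium(purchasingCustomer);
}
```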
The finest art of a great use-case is finding the right level of detail. The reader here is like a traveler who uses a map to get a sense of the area or find a specific road - definitely not to learn about every road in the country. On the other hand, a good map doesn't show only the main highway and nothing else. For example, the following use-case is too short and vague:
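Something along these lines, where a single opaque call hides the whole journey (a sketch; the orderService call is illustrative):

```typescript
// ❗️Too vague - one black-box call, no story
export async function addOrderUseCase(orderRequest: OrderRequest) {
  return await orderService.addOrder(orderRequest);
}
```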
The code above doesn't tell a story, nor does it eliminate any paths from the journey. Conversely, the following code does a better job of telling the story in brief:
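A sketch at a better altitude, reusing the steps from the earlier examples:

```typescript
// ✅ The story in brief - every line is a meaningful milestone
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  const savedOrder = await insertOrder(orderWithPricing);
  await sendSuccessEmailToCustomer(savedOrder);
  return mapFromRepositoryToDto(savedOrder);
}
```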
Things get a little more challenging when dealing with long flows. What if there are many important steps, say 20? What if multiple use-cases share a lot of repetition and common steps? Consider, for example, 'admin approval' - a multi-step process invoked by a handful of different use-cases. When facing this, consider breaking the flow down into multiple use-cases, where one is allowed to call the other, as sketched below.
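A minimal sketch of this composition, assuming a hypothetical approveOrderByAdminUseCase that bundles the shared multi-step approval:

```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  // One use-case may call another to reuse a shared multi-step flow
  const approvedOrder = await approveOrderByAdminUseCase(validatedOrder);
  return await insertOrder(approvedOrder);
}
```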
3. When there is no choice, control the DB transaction from the use-case
What if step 2 and step 5 both deal with data and must be atomic (fail or succeed together)? Typically you'll handle this with DB transactions, but since each step is discrete, how can a transaction be shared among the coupled steps?
If the steps take place one after the other, it makes sense to let the downstream service/repository handle them together and abstract the transaction away from the use-case. What if the atomic steps are not consecutive? In this case, though not ideal, there is no escape from making the use-case acquainted with a transaction object:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const transaction = Repository.startTransaction();
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderRequest, transaction);
  const orderWithPricing = calculateOrderPricing(purchasingCustomer);
  const savedOrder = await insertOrder(orderWithPricing, transaction);
  const returnOrder = mapFromRepositoryToDto(savedOrder);
  Repository.commitTransaction(transaction);
  return returnOrder;
}
```
A use-case file is created per user-flow that is triggered from an API route. This model makes sense for significant flows, but what about small operations like getting an order by id? A 'get-order-by-id' use-case is likely to have one line of code, so creating a use-case file for every small request seems like unnecessary overhead. In this case, consider aggregating multiple operations under a single conceptual use-case file. In the example below, all the order queries co-live in the query-orders use-case file:
```typescript
// query-orders-use-cases.ts
export async function getOrder(id) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const result = await orderRepository.getOrderByID(id);
  return result;
}

export async function getAllOrders(criteria) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const result = await orderRepository.queryOrders(criteria);
  return result;
}
```
If you find it valuable, you'll also get a great return on a modest investment: no fancy tooling is needed, and the learning time is close to zero (in fact, you just read one of the longest articles on this matter...). There is also no need to refactor a whole system; rather, implement it gradually, per feature.
Once you become accustomed to using it, you'll find that this technique extends well beyond API routes. It's equally beneficial for managing message queue subscriptions and scheduled jobs. Beyond the backend, use it as the facade of every module or library - the code that is called by the entry file and orchestrates the internals. The same idea can be applied in the frontend as well: declare the core actors at the component's top level. Without implementation details, just put the references to the component's event handlers and hooks - now the reader knows about the key events that will drive this component.
You might think this all sounds remarkably straightforward - and it is. My apologies, this article wasn't about cutting-edge technologies, nor did it cover shiny new dev tooling or AI-based rocket science. In a land where complexity is the key enemy, simple ideas can be more impactful than sophisticated tooling, and the use-case is a powerful and sweet pattern that is meant to live in every piece of software.
Intro - Why discuss yet another ORM (or the man who had a stain on his fancy suit)?
Betteridge's law of headlines suggests that a 'headline that ends in a question mark can be answered by the word NO'. Will this article follow this rule?
Imagine an elegant businessman (or woman) walking into a building, wearing a fancy tuxedo and a luxury watch wrapped around his palm. He smiles and waves all over to say hello while people around are staring admiringly. You get a little closer, and then, shockingly, while standing nearby it's hard to ignore a bold, dark stain on his white shirt. What a dissonance; suddenly all of that glamour is stained
Like this businessman, Node is highly capable and popular, and yet, in certain areas, its offering basket is stained with inferior offerings. One of these areas is the ORM space: "I wish we had something like (Java) Hibernate or (.NET) Entity Framework" are common words heard from Node developers. What about existing mature ORMs like TypeORM and Sequelize? We owe so much to these maintainers, and yet, the produced developer experience and the level of maintenance just don't feel delightful - some may say even mediocre. At least so I believed before writing this article...
From time to time, a shiny new ORM is launched, and there is hope. Then it is soon realized that these new emerging projects are more of the same, if they survive at all. Until one day, Prisma ORM arrived surrounded with glamour: it's gaining tons of attention all over, producing fantastic content, being used by respectable frameworks and... raised $40,000,000 (40 million) to build the next-generation ORM. Is it the 'Ferrari' of ORMs we've been waiting for? Is it a game changer? If you're the 'no ORM for me' type, will this one make you change your religion?
In Practica.js (the Node.js starter based on Node.js best practices, with 83,000 stars) we aim to make the best decisions for our users. The Prisma hype made us stop for a second, evaluate its unique offering, and conclude whether we should upgrade our toolbox.
This article is certainly not an 'ORM 101' but rather a spotlight on the specific dimensions in which Prisma shines or struggles. It's compared against the two most popular Node.js ORMs - TypeORM and Sequelize. Why not others? Why weren't other promising contenders like MikroORM covered? Simply because they are not as popular yet, and maturity is a critical trait of ORMs
Ready to explore how good Prisma is and whether you should throw away your current tools?
Node.js is maturing. Many patterns and frameworks were embraced - it's my belief that developers' productivity dramatically increased in the past years. One downside of maturity is habits - we now reuse existing techniques more often. How is this a problem?
In his book 'Atomic Habits', the author James Clear states that:
"Mastery is created by habits. However, sometimes when we're on auto-pilot performing habits, we tend to slip up... Just being we are gaining experience through performing the habits does not mean that we are improving. We actually go backwards on the improvement scale with most habits that turn into auto-pilot". In other words, practice makes perfect, and bad practices make things worst
We copy-paste mentally and physically things that we are used to, but these things are not necessarily right anymore. Like animals who shed their shells or skin to adapt to a new reality, so the Node.js community should constantly gauge its existing patterns, discuss and change
Luckily, unlike other languages that are more committed to specific design paradigms (Java, Ruby) - Node is a house of many ideas. In this community, I feel safe to question some of our good-old tooling and patterns. The list below contains my personal beliefs, which are brought with reasoning and examples.
Are those disruptive thoughts surely correct? I'm not sure. There is one thing I'm sure about, though - for Node.js to live longer, we need to encourage critics, focus our loyalty on innovation, and keep the discussion going. The outcome of this discussion is not "don't use this tool!" but rather becoming familiar with other techniques that, under some circumstances, might be a better fit
The True Crab's exoskeleton is hard and inflexible, he must shed his restrictive exoskeleton to grow and reveal the new roomier shell
We work in two parallel paths: enriching the supported best practices to make the code more production-ready, and at the same time enhancing the existing code based on community feedback
Every request now has its own store of variables; you may assign information at the request level so any code called from this specific request has access to these variables. For example, for storing the user permissions. One special variable that is stored is 'request-id', a unique UUID per request (also called correlation-id). The logger will automatically emit this to every log entry. We use Node's built-in AsyncLocalStorage for this task
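A minimal sketch of the idea with Express and Node's built-in AsyncLocalStorage (the middleware and logger shapes here are simplified illustrations, not the exact Practica code):

```javascript
const express = require('express');
const { AsyncLocalStorage } = require('async_hooks');
const { randomUUID } = require('crypto');

const app = express();
const requestContext = new AsyncLocalStorage();

// Open a per-request store as early as possible in the middleware chain
app.use((req, res, next) => {
  requestContext.run({ requestId: randomUUID() }, next);
});

// Anywhere down the call chain, e.g., inside the logger implementation
function logWithRequestId(message) {
  const store = requestContext.getStore() ?? {};
  console.log(JSON.stringify({ requestId: store.requestId, message }));
}
```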
Although a Dockerfile may contain only 10 lines, it's easy and common to include 20 mistakes in this short artifact. For example, .npmrc secrets are commonly leaked, vulnerable base images are used, and other typical mistakes occur. Our Dockerfile follows the best practices from this article and already applies 90% of the guidelines
Prisma is an emerging ORM with great type-safety support and awesome DX. We will keep Sequelize as our default ORM, while Prisma will be an optional choice using the flag: --orm=prisma
Why did we add it to our tool basket, and why is Sequelize still the default? We summarized all of our thoughts and data in this blog post
Although Node.js has great frameworks 💚, they were never meant to be production ready immediately. Practica.js aims to bridge the gap. Based on your preferred framework, we generate some example code that demonstrates a full workflow, from API to DB, that is packed with good practices. For example, we include a hardened Dockerfile, N-Tier folder structure, great testing templates, and more. This saves a great deal of time and can prevent painful mistakes. All decisions made are neatly and thoughtfully documented. We strive to keep things as simple and standard as possible and base our work on the popular guide: Node.js Best Practices.
Your developer experience would look as follows: generate our starter using the CLI and get an example Node.js solution. This solution is a typical Monorepo setup with an example Microservice and libraries. All is based on super-popular libraries that we merely stitch together. It also includes tons of optimizations - linters, libraries, Monorepo configuration, tests and much more. Inside the example Microservice you'll find an example flow, from API to DB. Based on this, you can modify the entity and DB fields and build your app.
This post is about tests that are easy to write - typically 5-8 lines - that cover dark and dangerous corners of our applications, yet are often overlooked
Some context first: How do we test a modern backend? With the testing diamond, of course, by putting the focus on component/integration tests that cover all the layers, including a real DB. With this approach, our tests resemble production and the real user flows 99% of the way, while the development experience is almost as good as with unit tests. Sweet. If this topic is of interest, we've also written a guide with 50 best practices for integration tests in Node.js
But there is a pitfall: most developers write only semi-happy test cases that are focused on the core user flows - invalid inputs, CRUD operations, various application states, etc. This is indeed the bread and butter, a great start, but a whole area is left uncovered. For example, typical tests don't simulate an unhandled promise rejection that leads to a process crash, nor do they simulate the webserver bootstrap phase that might fail and leave the process idle, or HTTP calls to external services that often end with timeouts and retries. They typically don't cover the health and readiness route, nor the integrity of the OpenAPI spec against the actual routes schema, to name just a few examples. There are many dead bodies buried beyond business logic - things that sometimes go beyond bugs and rather concern application downtime
Here are a handful of examples that might open your mind to a whole new class of risks and tests
July 2023: My testing course was launched: I've just released a comprehensive testing course that I've been working on for two years. 🎁 It's now on sale, but only for the month of July. Check it out at testjavascript.com
👉What & so what? - In all of your tests, you assume that the app has already started successfully, lacking a test against the initialization flow. This is a pity because this phase hides some potential catastrophic failures: First, initialization failures are frequent - many bad things can happen here, like a DB connection failure or a new version that crashes during deployment. For this reason, runtime platforms (like Kubernetes and others) encourage components to signal when they are ready (see readiness probe). Errors at this stage also have a dramatic effect over the app health - if the initialization fails and the process stays alive, it becomes a 'zombie process'. In this scenario, the runtime platform won't realize that something went bad, forward traffic to it and avoid creating alternative instances. Besides exiting gracefully, you may want to consider logging, firing a metric, and adjusting your /readiness route. Does it work? only test can tell!
📝 Code
Code under test, api.js:
```javascript
// A common express server initialization
const startWebServer = () => {
  return new Promise((resolve, reject) => {
    try {
      // A typical Express setup
      expressApp = express();
      defineRoutes(expressApp); // a function that defines all routes
      expressApp.listen(process.env.WEB_SERVER_PORT, () => resolve());
    } catch (error) {
      // log here, fire a metric, maybe even retry and finally:
      process.exit();
    }
  });
};
```
The test:
```javascript
const api = require('./entry-points/api'); // our api starter that exposes the 'startWebServer' function
const sinon = require('sinon'); // a mocking library

test('When an error happens during the startup phase, then the process exits', async () => {
  // Arrange
  const processExitListener = sinon.stub(process, 'exit');
  // 👇 Choose a function that is part of the initialization phase and make it fail
  sinon
    .stub(routes, 'defineRoutes')
    .throws(new Error('Cant initialize connection'));

  // Act
  await api.startWebServer();

  // Assert
  expect(processExitListener.called).toBe(true);
});
```
👉What & why - For many, testing error means checking the exception type or the API response. This leaves one of the most essential parts uncovered - making the error correctly observable. In plain words, ensuring that it's being logged correctly and exposed to the monitoring system. It might sound like an internal thing, implementation testing, but actually, it goes directly to a user. Yes, not the end-user, but rather another important one - the ops user who is on-call. What are the expectations of this user? At the very basic level, when a production issue arises, she must see detailed log entries, including stack trace, cause and other properties. This info can save the day when dealing with production incidents. On to of this, in many systems, monitoring is managed separately to conclude about the overall system state using cumulative heuristics (e.g., an increase in the number of errors over the last 3 hours). To support this monitoring needs, the code also must fire error metrics. Even tests that do try to cover these needs take a naive approach by checking that the logger function was called - but hey, does it include the right data? Some write better tests that check the error type that was passed to the logger, good enough? No! The ops user doesn't care about the JavaScript class names but the JSON data that is sent out. The following test focuses on the specific properties that are being made observable:
📝 Code
```javascript
test('When exception is thrown during request, Then logger reports the mandatory fields', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    status: 'approved',
  };
  const metricsExporterDouble = sinon.stub(metricsExporter, 'fireMetric');
  sinon
    .stub(OrderRepository.prototype, 'addOrder')
    .rejects(new AppError('saving-failed', 'Order could not be saved', 500));
  const loggerDouble = sinon.stub(logger, 'error');

  // Act
  await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  expect(loggerDouble).toHaveBeenCalledWith({
    name: 'saving-failed',
    status: 500,
    stack: expect.any(String),
    message: expect.any(String),
  });
  expect(metricsExporterDouble).toHaveBeenCalledWith('error', {
    errorName: 'example-error',
  });
});
```
👽 The 'unexpected visitor' test - when an uncaught exception meets our code
👉What & why - A typical error flow test falsely assumes two conditions: A valid error object was thrown, and it was caught. Neither is guaranteed, let's focus on the 2nd assumption: it's common for certain errors to left uncaught. The error might get thrown before your framework error handler is ready, some npm libraries can throw surprisingly from different stacks using timer functions, or you just forget to set someEventEmitter.on('error', ...). To name a few examples. These errors will find their way to the global process.on('uncaughtException') handler, hopefully if your code subscribed. How do you simulate this scenario in a test? naively you may locate a code area that is not wrapped with try-catch and stub it to throw during the test. But here's a catch22: if you are familiar with such area - you are likely to fix it and ensure its errors are caught. What do we do then? we can bring to our benefit the fact the JavaScript is 'borderless', if some object can emit an event, we as its subscribers can make it emit this event ourselves, here's an example:
📝 Code
```javascript
test('When an unhandled exception is thrown, then process stays alive and the error is logged', async () => {
  // Arrange
  const loggerDouble = sinon.stub(logger, 'error');
  const processExitListener = sinon.stub(process, 'exit');
  const errorToThrow = new Error('An error that wont be caught 😳');

  // Act
  process.emit('uncaughtException', errorToThrow); // 👈 Where the magic is

  // Assert
  expect(processExitListener.called).toBe(false);
  expect(loggerDouble).toHaveBeenCalledWith(errorToThrow);
});
```
🕵🏼 The 'hidden effect' test - when the code should not mutate at all
👉What & so what - In common scenarios, the code under test should stop early like when the incoming payload is invalid or a user doesn't have sufficient credits to perform an operation. In these cases, no DB records should be mutated. Most tests out there in the wild settle with testing the HTTP response only - got back HTTP 400? great, the validation/authorization probably work. Or does it? The test trusts the code too much, a valid response doesn't guarantee that the code behind behaved as design. Maybe a new record was added although the user has no permissions? Clearly you need to test this, but how would you test that a record was NOT added? There are two options here: If the DB is purged before/after every test, than just try to perform an invalid operation and check that the DB is empty afterward. If you're not cleaning the DB often (like me, but that's another discussion), the payload must contain some unique and queryable value that you can query later and hope to get no records. This is how it looks like:
📝 Code
```javascript
it('When adding an invalid order, then it returns 400 and NOT retrievable', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    mode: 'draft',
    externalIdentifier: uuid(), // no existing record has this value
  };

  // Act
  const { status: addingHTTPStatus } = await axiosAPIClient.post(
    '/order',
    orderToAdd
  );

  // Assert
  const { status: fetchingHTTPStatus } = await axiosAPIClient.get(
    `/order/externalIdentifier/${orderToAdd.externalIdentifier}`
  ); // Trying to get the order that should have failed
  expect({ addingHTTPStatus, fetchingHTTPStatus }).toMatchObject({
    addingHTTPStatus: 400,
    fetchingHTTPStatus: 404,
  });
  // 👆 Check that no such record exists
});
```
🧨 The 'overdoing' test - when the code should mutate but it's doing too much
👉What & why - This is how a typical data-oriented test looks like: first you add some records, then approach the code under test, and finally assert what happens to these specific records. So far, so good. There is one caveat here though: since the test narrows it focus to specific records, it ignores whether other record were unnecessarily affected. This can be really bad, here's a short real-life story that happened to my customer: Some data access code changed and incorporated a bug that updates ALL the system users instead of just one. All test pass since they focused on a specific record which positively updated, they just ignored the others. How would you test and prevent? here is a nice trick that I was taught by my friend Gil Tayar: in the first phase of the test, besides the main records, add one or more 'control' records that should not get mutated during the test. Then, run the code under test, and besides the main assertion, check also that the control records were not affected:
📝 Code
```javascript
test('When deleting an existing order, Then it should NOT be retrievable', async () => {
  // Arrange
  const orderToDelete = {
    userId: 1,
    productId: 2,
  };
  const deletedOrder = (await axiosAPIClient.post('/order', orderToDelete)).data
    .id; // We will delete this soon
  const orderNotToBeDeleted = orderToDelete;
  const notDeletedOrder = (
    await axiosAPIClient.post('/order', orderNotToBeDeleted)
  ).data.id; // We will not delete this

  // Act
  await axiosAPIClient.delete(`/order/${deletedOrder}`);

  // Assert
  const { status: getDeletedOrderStatus } = await axiosAPIClient.get(
    `/order/${deletedOrder}`
  );
  const { status: getNotDeletedOrderStatus } = await axiosAPIClient.get(
    `/order/${notDeletedOrder}`
  );
  expect(getNotDeletedOrderStatus).toBe(200);
  expect(getDeletedOrderStatus).toBe(404);
});
```
🕰 The 'slow collaborator' test - when the other HTTP service times out
👉What & why - When your code approaches other services/microservices via HTTP, savvy testers minimize end-to-end tests because these tests lean toward happy paths (it's harder to simulate scenarios). This mandates using some mocking tool to act like the remote service, for example, using tools like nock or wiremock. These tools are great, only some are using them naively and check mainly that calls outside were indeed made. What if the other service is not available in production, what if it is slower and times out occasionally (one of the biggest risks of Microservices)? While you can't wholly save this transaction, your code should do the best given the situation and retry, or at least log and return the right status to the caller. All the network mocking tools allow simulating delays, timeouts and other 'chaotic' scenarios. Question left is how to simulate slow response without having slow tests? You may use fake timers and trick the system into believing as few seconds passed in a single tick. If you're using nock, it offers an interesting feature to simulate timeouts quickly: the .delay function simulates slow responses, then nock will realize immediately if the delay is higher than the HTTP client timeout and throw a timeout event immediately without waiting
📝 Code
```javascript
// In this example, our code accepts new Orders and while processing them approaches the Users Microservice
test('When users service times out, then return 503 (option 1 with fake timers)', async () => {
  // Arrange
  const clock = sinon.useFakeTimers();
  config.HTTPCallTimeout = 1000; // Set a timeout for outgoing HTTP calls
  nock(`${config.userServiceURL}/user/`)
    .get('/1', () => clock.tick(2000)) // Reply delay is bigger than the configured timeout 👆
    .reply(200);
  const loggerDouble = sinon.stub(logger, 'error');
  const orderToAdd = {
    userId: 1,
    productId: 2,
    mode: 'approved',
  };

  // Act
  // 👇 try to add a new order which should fail due to the User service not being available
  const response = await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  // 👇 At least our code does its best given this situation
  expect(response.status).toBe(503);
  expect(loggerDouble.lastCall.firstArg).toMatchObject({
    name: 'user-service-not-available',
    stack: expect.any(String),
    message: expect.any(String),
  });
});
```
💊 The 'poisoned message' test - when the message consumer gets an invalid payload that might put it in stagnation
👉What & so what - When testing flows that start or end in a queue, I bet you're going to bypass the message queue layer, where the code and libraries consume a queue, and you approach the logic layer directly. Yes, it makes things easier but leaves a class of uncovered risks. For example, what if the logic part throws an error or the message schema is invalid but the message queue consumer fails to translate this exception into a proper message queue action? For example, the consumer code might fail to reject the message or increment the number of attempts (depends on the type of queue that you're using). When this happens, the message will enter a loop where it always served again and again. Since this will apply to many messages, things can get really bad as the queue gets highly saturated. For this reason this syndrome was called the 'poisoned message'. To mitigate this risk, the tests' scope must include all the layers like how you probably do when testing against APIs. Unfortunately, this is not as easy as testing with DB because message queues are flaky, here is why
When testing with real queues, things get curiouser and curiouser: tests from different processes will steal messages from each other, and purging queues is harder than you might think (e.g., SQS demands 60 seconds to purge a queue), to name a few challenges that you won't find when dealing with a real DB
Here is a strategy that works for many teams and holds a small compromise - use a fake in-memory message queue. By 'fake' I mean something simplistic that acts like a stub/spy and does nothing but tell when certain calls are made (e.g., consume, delete, publish). You might find reputable fakes/stubs for your own message queue, like this one for SQS, or you can code one easily yourself. No worries, I'm not in favor of maintaining testing infrastructure myself; this proposed component is extremely simple and unlikely to surpass 50 lines of code (see the example below). On top of this, whether using a real or fake queue, one more thing is needed: create a convenient interface that tells the test when certain things happened, like when a message was acknowledged/deleted or a new message was published. Without this, the test never knows when certain events happened and leans toward quirky techniques like polling. With this setup, the test will be short and flat, and you can easily simulate common message queue scenarios like out-of-order messages, batch reject, duplicated messages and, in our example, the poisoned message scenario (using RabbitMQ):
📝 Code
Create a fake message queue that does almost nothing but record calls, see full example here
```javascript
const { EventEmitter } = require('events');

class FakeMessageQueueProvider extends EventEmitter {
  // Implement here
  publish(message) {}
  consume(queueName, callback) {}
}
```
Make your message queue client accept a real or fake provider
```javascript
class MessageQueueClient extends EventEmitter {
  // Pass to it a fake or real message queue provider
  constructor(customMessageQueueProvider) {
    super();
    this.provider = customMessageQueueProvider;
  }
  publish(message) {}
  consume(queueName, callback) {}
  // Simple implementation can be found here:
  // https://github.com/testjavascript/nodejs-integration-tests-best-practices/blob/master/example-application/libraries/fake-message-queue-provider.js
}
```
Expose a convenient function that tells when certain calls were made
```javascript
const FakeMessageQueueProvider = require('./libs/fake-message-queue-provider');
const MessageQueueClient = require('./libs/message-queue-client');
const newOrderService = require('./domain/newOrderService');

test('When a poisoned message arrives, then it is being rejected back', async () => {
  // Arrange
  const messageWithInvalidSchema = { nonExistingProperty: 'invalid❌' };
  const messageQueueClient = new MessageQueueClient(
    new FakeMessageQueueProvider()
  );
  // Subscribe to new messages and pass the handler function
  messageQueueClient.consume('orders.new', newOrderService.addOrder);

  // Act
  await messageQueueClient.publish('orders.new', messageWithInvalidSchema);
  // Now all the layers of the app will get stretched 👆, including logic and message queue libraries

  // Assert
  await messageQueueClient.waitFor('reject', { howManyTimes: 1 });
  // 👆 This tells us that eventually our code asked the message queue client to reject this poisoned message
});
```
👉What & why - When publishing a library to npm, easily all your tests might pass BUT... the same functionality will fail over the end-user's computer. How come? tests are executed against the local developer files, but the end-user is only exposed to artifacts that were built. See the mismatch here? after running the tests, the package files are transpiled (I'm looking at you babel users), zipped and packed. If a single file is excluded due to .npmignore or a polyfill is not added correctly, the published code will lack mandatory files
📝 Code
Consider the following scenario: you're developing a library, and you wrote this code:
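For illustration, a minimal two-file sketch that is consistent with the consumer test further below: the entry point depends on a helper, but the package.json 'files' allowlist ships only the entry point:

```javascript
// calculate.js - a helper the entry point depends on
module.exports.calculate = () => 1;

// index.js - the package entry point
const { calculate } = require('./calculate.js');
module.exports.fn1 = () => calculate();

// package.json - ❗️calculate.js is missing from the shipped files
// {
//   "name": "my-package",
//   "main": "index.js",
//   "files": ["index.js"]
// }
```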
See, 100% coverage, all tests pass locally and in the CI ✅ - it just won't work in production 👹. Why? Because you forgot to include calculate.js in the package.json files array 👆
What can we do instead? We can test the library as its end-users do. How? Publish the package to a local registry like verdaccio, then let the tests install and approach the published code. Sounds troublesome? Judge for yourself 👇
📝 Code
```javascript
// global-setup.js

// 1. Setup the in-memory NPM registry, one function, that's it! 🔥
await setupVerdaccio();

// 2. Building our package
await exec('npm', ['run', 'build'], {
  cwd: packagePath,
});

// 3. Publish it to the in-memory registry
await exec('npm', ['publish', '--registry=http://localhost:4873'], {
  cwd: packagePath,
});

// 4. Installing it in the consumer directory
await exec('npm', ['install', 'my-package', '--registry=http://localhost:4873'], {
  cwd: consumerPath,
});

// Test file in the consumerPath

// 5. Test the package 🚀
test('should succeed', async () => {
  const { fn1 } = await import('my-package');
  expect(fn1()).toEqual(1);
});
```
Testing the different versions of a peer dependency you support - let's say your package supports React 16 to 18; you can now test that
You want to test ESM and CJS consumers
If you have a CLI application, you can test it like your users do
Making sure all the voodoo magic in that babel file is working as expected
🗞 The 'broken contract' test - when the code is great but its corresponding OpenAPI docs lead to a production bug
👉What & so what - Quite confidently I'm sure that almost no team test their OpenAPI correctness. "It's just documentation", "we generate it automatically based on code" are typical belief found for this reason. Let me show you how this auto generated documentation can be wrong and lead not only to frustration but also to a bug. In production.
Consider the following scenario: you're requested to return an HTTP error status code if an order is duplicated, but you forget to update the OpenAPI specification with this new HTTP status response. While some frameworks can update the docs with new fields, none can realize which errors your code throws; this labour is always manual. On the other side of the line, the API client is doing everything just right, going by the spec that you published, adding orders with some duplication because the docs don't forbid doing so. Then, BOOM, production bug -> the client crashes and shows an ugly unknown error message to the user. This type of failure is called the 'contract' problem: two parties interact, each has code that works perfectly, they just operate under different specs and assumptions. While there are sophisticated and exhaustive solutions to this challenge (e.g., PACT), there are also leaner approaches that get you covered easily and quickly (at the price of covering fewer risks).
The following sweet technique is based on libraries (available for Jest and Mocha) that listen to all network responses, compare the payload against the OpenAPI document, and, if any deviation is found, make the test fail with a descriptive error. With this new weapon in your toolbox and almost zero effort, another risk is ticked. It's a pity that these libs can't also assert against the incoming requests to tell you that your tests use the API wrong. One small caveat and an elegant solution: these libraries dictate putting an assertion statement in every test - expect(response).toSatisfyApiSpec() - which is a bit tedious and reliant on human discipline. You can do better if your HTTP client supports a plugin/hook/interceptor, by putting this assertion in a single place that will apply to all the tests:
The OpenAPI doesn't document HTTP status '409'; no framework knows to update the OpenAPI doc based on thrown exceptions
"responses":{ "200":{ "description":"successful", } , "400":{ "description":"Invalid ID", "content":{} },// No 409 in this list😲👈 }
The test code
```javascript
const jestOpenAPI = require('jest-openapi');
jestOpenAPI('../openapi.json');

test('When an order with duplicated coupon is added, then 409 error should get returned', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    couponId: uuid(),
  };
  await axiosAPIClient.post('/order', orderToAdd);

  // Act
  // We're adding the same coupon twice 👇
  const receivedResponse = await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  expect(receivedResponse.status).toBe(409);
  expect(receivedResponse).toSatisfyApiSpec();
  // This 👆 will throw if the API response, body or status, is different than what is stated in the OpenAPI
});
```
Trick: If your HTTP client supports any kind of plugin/hook/interceptor, put the following code in 'beforeAll'. This covers all the tests against OpenAPI mismatches
```javascript
beforeAll(() => {
  axios.interceptors.response.use((response) => {
    expect(response).toSatisfyApiSpec();
    // With this 👆, add nothing to the tests - each will fail if the response deviates from the docs
    return response;
  });
});
```
The examples above were not meant only to be a checklist of 'don't forget' test cases, but rather a fresh mindset on what tests could cover for you. Modern tests are not just about functions or user flows, but about any risk that might visit your production. This is doable only with component/integration tests, never with unit or end-to-end tests. Why? Because unlike unit tests, you need all the parts to play together (e.g., the DB migration file, the DAL layer and the error handler, all together). Unlike E2E, you have the power to simulate in-process scenarios that demand some tweaking and mocking. Component tests allow you to include many production moving parts early, on your machine. I like calling this 'production-oriented development'
As a testing consultant, I have read tons of testing articles throughout the years. The majority are nice-to-read, casual pieces of content that are not always worth your precious time. Once in a while, not very often, I landed on an article that was shockingly good and could genuinely improve your test-writing skills. I've cherry-picked these outstanding articles for you and added my abstract nearby. Half of these articles relate directly to JavaScript/Node.js; the other half covers ubiquitous testing concepts that are applicable in every language
Why did I find these articles to be outstanding? First, the writing quality is excellent. Second, they deal with the 'new world of testing', not the commonly known 'TDD-ish' stuff but rather modern concepts and tooling
Too busy to read them all? Search for the articles that are decorated with a medal 🏅 - these are true masterpieces that you don't want to miss
Before we start: If you haven't heard, I launched my comprehensive Node.js testing course a week ago (curriculum here). There are less than 48 hours left for the 🎁 special launch deal
Here they are, 10 outstanding testing articles:
📄 1. 'Selective Unit Testing – Costs and Benefits'
✍️ Author: Steve Sanderson
🔖 Abstract: We all found ourselves at least once in the ongoing and flammable discussion about 'units' vs 'integration'. This article delves into a greater level of specificity and discusses WHEN unit tests shine, by considering the costs of writing these tests under various scenarios. Many treat their testing strategy as a static model - a testing technique they always apply regardless of context. "Always write unit tests against functions", "Write mostly integration tests" are the types of arguments often heard. Conversely, this article suggests that the attractiveness of unit tests should be evaluated based on the costs and benefits per module. The article classifies multiple scenarios where the net value of unit tests is high or low, for example:
If your code is basically obvious – so at a glance you can see exactly what it does – then additional design and verification (e.g., through unit testing) yields extremely minimal benefit, if any
The author also offers a 2x2 model to visualize when the attractiveness of unit tests is high or low
Side note, not part of the article: Personally, I (Yoni) always start with component tests, outside-in, covering the high-level user flow details first (a.k.a. the testing diamond). Later, once I have functions, I add unit tests based on their net value. This article helped me a lot in classifying and evaluating the benefits of units in various scenarios
🔖 Abstract: The author outlines, with a code example, the unavoidable tragic fate of a tester who asserts on implementation details. Put aside the effort of testing so many details, going this route always ends with 'false positives' and 'false negatives' that cloud the tests' reliability. The article illustrates this with a frontend code example, but the lesson takeaway is ubiquitous to any kind of testing
"There are two distinct reasons that it's important to avoid testing implementation details. Tests which test implementation details:
Can break when you refactor application code. False negatives
May not fail when you break application code. False positives"
🔖 Abstract: This one is the entire Microservices and distributed modern testing bible packed into a single long article that is also super engaging. I remember when I came across it four years ago, wintertime; I spent an hour every day under my blanket before sleep with a smile spread over my face. I clicked on every link, paused after every paragraph to think - a whole new world was opening in front of me. In fact, it was so fascinating that it made me want to specialize in this domain. Fast-forward years later, this is a major part of my work, and I enjoy every moment
This paper starts by explaining why E2E, unit tests and exploratory QA will fall short in a distributed environment. Not only this - why any kind of coded test won't be enough and a rich toolbox of techniques is needed. It goes through a handful of modern testing techniques that are unfamiliar to most developers. One of its key parts deals with what should be the canonical developer's testing technique: the author advocates for "big unit tests" (i.e., component tests) as they strike a great balance between developer comfort and realism
I coined the term “step-up testing”, the general idea being to test at one layer above what’s generally advocated for. Under this model, unit tests would look more like integration tests (by treating I/O as a part of the unit under test within a bounded context), integration testing would look more like testing against real production, and testing in production looks more like, well, monitoring and exploration. The restructured test pyramid (test funnel?) for distributed systems would look like the following:
Beyond its main scope, whatever type of system you are dealing with, this article will broaden your perspective on testing and expose you to many new ideas that are highly applicable
👓 Read time: > 2 hours (10,500 words with many links)
📄 4. 'How to Unit Test with Node.js?' (JavaScript examples, for beginners)
✍️ Author: Ryan Jones
🔖 Abstract: One single recommendation for beginners: every other article on this list covers advanced testing. This article, and only this one, is meant for testing newbies who are looking to take their first practical steps in this world
This tutorial was chosen from a handful of other alternatives because it's well-written and also relatively comprehensive. It covers the first-steps 'kata' that a beginner should learn: the test anatomy syntax, the test runner's CLI, assertions, and asynchronous tests. It goes without saying, this knowledge won't be sufficient for covering a real-world app with tests, but it gets you safely to the next phase. My personal advice: after reading this one, your next step is learning about test doubles (mocking)
🔖 Abstract: The article opens with 'I hear that people feel an uncontrollable urge to write unit tests nowadays. If you are one of those affected, spare a few minutes and consider these reasons for NOT writing unit tests'. Despite these words, the article is not against unit tests as a principle, but rather highlights when and where unit tests fall short. In these cases, other techniques should be considered. Here is an example: unit tests inherently have a lower return on investment, and the author offers a sound analogy for this: 'If you are painting a house, you want to start with the biggest brush at hand and spare the tiny brush for the end to deal with fine details. If you begin your QA work with unit tests, you are essentially trying to paint the entire house using the finest Chinese calligraphy brush...'
📄 6. 'Mocking is a Code Smell' (JavaScript examples)
✍️ Author: Eric Elliott
🔖 Abstract: Most of the articles here belong to the 'modern wave of testing'; here is something more 'classic', appealing to TDD lovers or anyone with a need to write unit tests. This article is about HOW to reduce the number of mocks (test doubles) in your tests - not only because mocking is an overhead in test writing, but also because mocks hint that something might be wrong. In other words, a mock is not necessarily wrong and in need of an immediate fix, but many mocks are a sign of something not ideal. Consider a module that inherits from many others, or a chatty one that collaborates with a handful of other modules to do its job - testing and changing this structure is a burden:
"Mocking is required when our decomposition strategy has failed"
The author goes through a variety of techniques to design more autonomous units, like using pure functions by isolating side-effects from the rest of the program logic, using pub/sub, isolating I/O, composing units with patterns like monadic composition, and some more
The overall article tone is balanced. In some parts, it encourages functional programming and techniques that are far from the mainstream - consider reading these few parts with a grain of salt
🔖 Abstract: I love this one so much. The author exemplifies how, unexpectedly, it is sometimes the good developers with great intentions who write bad tests:
Too often, software developers approach unit testing with the same flawed thinking... They mechanically apply all the “rules” they learned in production code without examining whether they’re appropriate for tests. As a result, they build skyscrapers at the beach
Concrete code examples show how test readability deteriorates once we apply 'skyscraper' thinking, and how to keep it simple. In one part, he demonstrates how violating the DRY principle thoughtfully allows the reader to stay within the test while still keeping the code maintainable. This article alone, in 11 minutes, can greatly improve the tests of developers who tend to write sophisticated tests. If you have someone like this in your team, you now know what to do
📄 8. 'An Overview of JavaScript Testing in 2022' (JavaScript examples)
✍️ Author: Vitali Zaidman
🔖 Abstract: This paper is unique here as it doesn't cover a single topic but rather is a rundown of (almost) all JavaScript testing tools. This allows you to enrich the toolbox in your mind and have more screwdrivers for more types of screws. For example, knowing that there are IDE extensions that show coverage information right within the code might help you boost test adoption in the team, if needed. Knowing that there are solid, free, and open-source visual regression tools might encourage you to dip your toes in this water, to name a few examples.
"We reviewed the most trending testing strategies and tools in the web development community and hopefully made it easier for you to test your sites. In the end, the best decisions regarding application architecture today are made by understanding general patterns that are trending in the very active community of developers, and combining them with your own experience and the characteristics of your application."
The author was also kind enough to leave pros/cons nearby most tools so the reader can quickly get a sense of how the various options stack with each other. The article covers categories like assertion libraries, test runners, code coverage tools, visual regression tools, E2E suits and more
🔖 Abstract: 'Testing in production' is a provocative term that sounds like a risky and careless approach of testing over production instead of verifying the delivery beforehand (yet another case of bad testing terminology). In practice, testing in production doesn't replace coding-time testing, it just add additional layer of confidence by safely testing in 3 more phases: deployment, release and post-release. This comprehensive article covers dozens of techniques, some are unusual like traffic shadowing, tap compare and more. More than anything else, it illustrates an holistic testing workflow, build confidence cumulatively from developer machine until the new version is serving users in production
I’m more and more convinced that staging environments are like mocks - at best a pale imitation of the genuine article and the worst form of confirmation bias.
It’s still better than having nothing - but “works in staging” is only one step better than “works on my machine”.
📄 10. 'Please don't mock me' (JavaScript examples, from JSConf)
🏅 This is a masterpiece
✍️ Author: Justin Searls
🔖 Abstract: This fantastic YouTube deals with the Achilles heel of testing: where exactly to mock. The dilemma where to end the test scope, what should be mocked and what's not - is presumably the most strategic test design decision. Consider for example having module A which interacts with module B. If you isolate A by mocking B, A will always pass, even when B's interface has changed and A's code didn't follow. This makes A's tests highly stable but... production will fail in hours. In his talk Justin says:
"A test that never fails is a bad test because it doesn't tell you anything. Design tests to fail"
Then he goes and tackle many other interesting mocking crossroads, with beautiful visuals, tons of insights. Please don't miss this one
Here are a few articles that I wrote, obviously I don't 'recommend' my own craft, just checking modestly whether they appeal to you. Together, these articles gained 25,000 GitHub stars, maybe you'll find one of them them useful?
This post is about tests that are easy to write, typically 5-8 lines; they cover dark and dangerous corners of our applications but are often overlooked
Some context first: How do we test a modern backend? With the testing diamond, of course, by putting the focus on component/integration tests that cover all the layers, including a real DB. With this approach, our tests are 99% similar to production and the real user flows, while the development experience is almost as good as with unit tests. Sweet. If this topic is of interest, we've also written a guide with 50 best practices for integration tests in Node.js
But there is a pitfall: most developers write only semi-happy test cases that are focused on the core user flows, like invalid inputs, CRUD operations, and various application states. This is indeed the bread and butter, a great start, but a whole area is left uncovered. For example, typical tests don't simulate an unhandled promise rejection that leads to a process crash, nor do they simulate the webserver bootstrap phase that might fail and leave the process idle, or HTTP calls to external services that often end with timeouts and retries. They typically don't cover the health and readiness routes, nor the integrity of the OpenAPI spec against the actual route schemas, to name just a few examples. There are many dead bodies buried beyond the business logic, things that sometimes go beyond bugs and concern application downtime
Here are a handful of examples that might open your mind to a whole new class of risks and tests
July 2023: My testing course was launched: I've just released a comprehensive testing course that I've been working on for two years. 🎁 It's now on sale, but only for the month of July. Check it out at testjavascript.com
👉What & so what? - In all of your tests, you assume that the app has already started successfully, lacking a test against the initialization flow. This is a pity because this phase hides some potentially catastrophic failures: First, initialization failures are frequent - many bad things can happen here, like a DB connection failure or a new version that crashes during deployment. For this reason, runtime platforms (like Kubernetes and others) encourage components to signal when they are ready (see readiness probe). Errors at this stage also have a dramatic effect on the app health - if the initialization fails and the process stays alive, it becomes a 'zombie process'. In this scenario, the runtime platform won't realize that something went wrong, will keep forwarding traffic to it, and won't create alternative instances. Besides exiting gracefully, you may want to consider logging, firing a metric, and adjusting your /readiness route. Does it work? Only a test can tell!
📝 Code
Code under test, api.js:
```javascript
// A common express server initialization
const startWebServer = () => {
  return new Promise((resolve, reject) => {
    try {
      // A typical Express setup
      expressApp = express();
      defineRoutes(expressApp); // a function that defines all routes
      expressApp.listen(process.env.WEB_SERVER_PORT);
    } catch (error) {
      // log here, fire a metric, maybe even retry and finally:
      process.exit();
    }
  });
};
```
The test:
```javascript
const api = require('./entry-points/api'); // our api starter that exposes the 'startWebServer' function
const routes = require('./entry-points/routes'); // (assumed path) the module that exposes 'defineRoutes'
const sinon = require('sinon'); // a mocking library

test('When an error happens during the startup phase, then the process exits', async () => {
  // Arrange
  const processExitListener = sinon.stub(process, 'exit');
  // 👇 Choose a function that is part of the initialization phase and make it fail
  sinon
    .stub(routes, 'defineRoutes')
    .throws(new Error('Cant initialize connection'));
  // Act
  await api.startWebServer();
  // Assert
  expect(processExitListener.called).toBe(true);
});
```
👉What & why - For many, testing errors means checking the exception type or the API response. This leaves one of the most essential parts uncovered - making the error correctly observable. In plain words, ensuring that it's being logged correctly and exposed to the monitoring system. It might sound like an internal thing, implementation testing, but actually, it goes directly to a user. Yes, not the end-user, but rather another important one - the ops user who is on-call. What are the expectations of this user? At the very basic level, when a production issue arises, she must see detailed log entries, including stack trace, cause and other properties. This info can save the day when dealing with production incidents. On top of this, in many systems, monitoring is managed separately, concluding the overall system state using cumulative heuristics (e.g., an increase in the number of errors over the last 3 hours). To support these monitoring needs, the code must also fire error metrics. Even tests that do try to cover these needs take a naive approach by checking that the logger function was called - but hey, does it include the right data? Some write better tests that check the error type that was passed to the logger. Good enough? No! The ops user doesn't care about JavaScript class names but about the JSON data that is sent out. The following test focuses on the specific properties that are being made observable:
📝 Code
```javascript
test('When exception is thrown during request, Then logger reports the mandatory fields', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    status: 'approved',
  };
  const metricsExporterDouble = sinon.stub(metricsExporter, 'fireMetric');
  sinon
    .stub(OrderRepository.prototype, 'addOrder')
    .rejects(new AppError('saving-failed', 'Order could not be saved', 500));
  const loggerDouble = sinon.stub(logger, 'error');
  // Act
  await axiosAPIClient.post('/order', orderToAdd);
  // Assert
  expect(loggerDouble).toHaveBeenCalledWith({
    name: 'saving-failed',
    status: 500,
    stack: expect.any(String),
    message: expect.any(String),
  });
  expect(metricsExporterDouble).toHaveBeenCalledWith('error', {
    errorName: 'example-error',
  });
});
```
👽 The 'unexpected visitor' test - when an uncaught exception meets our code
👉What & why - A typical error flow test falsely assumes two conditions: a valid error object was thrown, and it was caught. Neither is guaranteed; let's focus on the 2nd assumption: it's common for certain errors to be left uncaught. The error might get thrown before your framework error handler is ready, some npm libraries can surprisingly throw from different stacks using timer functions, or you just forgot to set someEventEmitter.on('error', ...), to name a few examples. These errors will find their way to the global process.on('uncaughtException') handler - hopefully your code subscribed to it. How do you simulate this scenario in a test? Naively, you may locate a code area that is not wrapped with try-catch and stub it to throw during the test. But here's a catch-22: if you are familiar with such an area, you are likely to fix it and ensure its errors are caught. What do we do then? We can bring to our benefit the fact that JavaScript is 'borderless': if some object can emit an event, we as its subscribers can make it emit this event ourselves. Here's an example:
📝 Code
```javascript
test('When an unhandled exception is thrown, then process stays alive and the error is logged', async () => {
  // Arrange
  const loggerDouble = sinon.stub(logger, 'error');
  const processExitListener = sinon.stub(process, 'exit');
  const errorToThrow = new Error('An error that wont be caught 😳');
  // Act
  process.emit('uncaughtException', errorToThrow); // 👈 Where the magic is
  // Assert
  expect(processExitListener.called).toBe(false);
  expect(loggerDouble).toHaveBeenCalledWith(errorToThrow);
});
```
🕵🏼 The 'hidden effect' test - when the code should not mutate at all
👉What & so what - In common scenarios, the code under test should stop early, like when the incoming payload is invalid or a user doesn't have sufficient credits to perform an operation. In these cases, no DB records should be mutated. Most tests out there in the wild settle for testing the HTTP response only - got back HTTP 400? Great, the validation/authorization probably works. Or does it? The test trusts the code too much; a valid response doesn't guarantee that the code behind it behaved as designed. Maybe a new record was added although the user has no permissions? Clearly you need to test this, but how would you test that a record was NOT added? There are two options here: If the DB is purged before/after every test, then just try to perform an invalid operation and check that the DB is empty afterward. If you're not cleaning the DB often (like me, but that's another discussion), the payload must contain some unique and queryable value that you can query later, hoping to get no records. This is how it looks:
📝 Code
```javascript
it('When adding an invalid order, then it returns 400 and NOT retrievable', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    mode: 'draft',
    externalIdentifier: uuid(), // no existing record has this value
  };
  // Act
  const { status: addingHTTPStatus } = await axiosAPIClient.post(
    '/order',
    orderToAdd
  );
  // Assert
  const { status: fetchingHTTPStatus } = await axiosAPIClient.get(
    `/order/externalIdentifier/${orderToAdd.externalIdentifier}`
  ); // Trying to get the order that should have failed
  expect({ addingHTTPStatus, fetchingHTTPStatus }).toMatchObject({
    addingHTTPStatus: 400,
    fetchingHTTPStatus: 404,
  });
  // 👆 Check that no such record exists
});
```
🧨 The 'overdoing' test - when the code should mutate but it's doing too much
👉What & why - This is how a typical data-oriented test looks: first you add some records, then approach the code under test, and finally assert what happens to these specific records. So far, so good. There is one caveat here though: since the test narrows its focus to specific records, it ignores whether other records were unnecessarily affected. This can be really bad. Here's a short real-life story that happened to my customer: some data access code changed and incorporated a bug that updates ALL the system users instead of just one. All tests passed since they focused on a specific record, which was indeed updated - they just ignored the others. How would you test for this and prevent it? Here is a nice trick that I was taught by my friend Gil Tayar: in the first phase of the test, besides the main records, add one or more 'control' records that should not get mutated during the test. Then, run the code under test, and besides the main assertion, check also that the control records were not affected:
📝 Code
```javascript
test('When deleting an existing order, Then it should NOT be retrievable', async () => {
  // Arrange
  const orderToDelete = {
    userId: 1,
    productId: 2,
  };
  const deletedOrder = (await axiosAPIClient.post('/order', orderToDelete)).data
    .id; // We will delete this soon
  const orderNotToBeDeleted = orderToDelete;
  const notDeletedOrder = (
    await axiosAPIClient.post('/order', orderNotToBeDeleted)
  ).data.id; // We will not delete this
  // Act
  await axiosAPIClient.delete(`/order/${deletedOrder}`);
  // Assert
  const { status: getDeletedOrderStatus } = await axiosAPIClient.get(
    `/order/${deletedOrder}`
  );
  const { status: getNotDeletedOrderStatus } = await axiosAPIClient.get(
    `/order/${notDeletedOrder}`
  );
  expect(getNotDeletedOrderStatus).toBe(200);
  expect(getDeletedOrderStatus).toBe(404);
});
```
🕰 The 'slow collaborator' test - when the other HTTP service times out
👉What & why - When your code approaches other services/microservices via HTTP, savvy testers minimize end-to-end tests because these tests lean toward happy paths (it's harder to simulate scenarios). This mandates using some mocking tool to act like the remote service, for example, tools like nock or wiremock. These tools are great, but some use them naively and mainly check that calls outside were indeed made. What if the other service is not available in production, or is slower and times out occasionally (one of the biggest risks of Microservices)? While you can't wholly save such a transaction, your code should do its best given the situation: retry, or at least log and return the right status to the caller. All the network mocking tools allow simulating delays, timeouts and other 'chaotic' scenarios. The question left is how to simulate a slow response without having slow tests. You may use fake timers and trick the system into believing that a few seconds passed in a single tick. If you're using nock, it offers an interesting feature to simulate timeouts quickly: the .delay function simulates slow responses, and nock will realize immediately if the delay is higher than the HTTP client timeout and throw a timeout event right away without waiting
📝 Code
```javascript
// In this example, our code accepts new Orders and while processing them approaches the Users Microservice
test('When users service times out, then return 503 (option 1 with fake timers)', async () => {
  // Arrange
  const clock = sinon.useFakeTimers();
  config.HTTPCallTimeout = 1000; // Set a timeout for outgoing HTTP calls
  nock(`${config.userServiceURL}/user/`)
    .get('/1', () => clock.tick(2000)) // Reply delay is bigger than the configured timeout 👆
    .reply(200);
  const loggerDouble = sinon.stub(logger, 'error');
  const orderToAdd = {
    userId: 1,
    productId: 2,
    mode: 'approved',
  };
  // Act
  // 👇 Try to add a new order, which should fail since the User service is not available
  const response = await axiosAPIClient.post('/order', orderToAdd);
  // Assert
  // 👇 At least our code does its best given this situation
  expect(response.status).toBe(503);
  expect(loggerDouble.lastCall.firstArg).toMatchObject({
    name: 'user-service-not-available',
    stack: expect.any(String),
    message: expect.any(String),
  });
});
```
💊 The 'poisoned message' test - when the message consumer gets an invalid payload that might put it in stagnation
👉What & so what - When testing flows that start or end in a queue, I bet you're going to bypass the message queue layer, where the code and libraries consume a queue, and approach the logic layer directly. Yes, it makes things easier but leaves a class of risks uncovered. For example, what if the logic part throws an error or the message schema is invalid, but the message queue consumer fails to translate this exception into a proper message queue action? For example, the consumer code might fail to reject the message or increment the number of attempts (depending on the type of queue that you're using). When this happens, the message enters a loop where it is served again and again. Since this will apply to many messages, things can get really bad as the queue gets highly saturated. For this reason this syndrome is called the 'poisoned message'. To mitigate this risk, the tests' scope must include all the layers, like you probably do when testing against APIs. Unfortunately, this is not as easy as testing with a DB because message queues are flaky. Here is why:

When testing with real queues, things get curiouser and curiouser: tests from different processes will steal messages from each other, purging queues is harder than you might think (e.g., SQS demands 60 seconds to purge a queue), to name a few challenges that you won't find when dealing with a real DB

Here is a strategy that works for many teams and holds a small compromise - use a fake in-memory message queue. By 'fake' I mean something simplistic that acts like a stub/spy and does nothing but tell when certain calls are made (e.g., consume, delete, publish). You might find reputable fakes/stubs for your own message queue, like this one for SQS, or you can easily code one yourself. No worries, I'm not in favor of maintaining testing infrastructure myself; this proposed component is extremely simple and unlikely to surpass 50 lines of code (see example below). On top of this, whether using a real or fake queue, one more thing is needed: create a convenient interface that tells the test when certain things happened, like when a message was acknowledged/deleted or a new message was published. Without this, the test never knows when certain events happened and leans toward quirky techniques like polling. With this setup, the test will be short and flat, and you can easily simulate common message queue scenarios like out-of-order messages, batch reject, duplicated messages and, in our example, the poisoned message scenario (using RabbitMQ):
📝 Code
Create a fake message queue that does almost nothing but record calls, see full example here
```javascript
class FakeMessageQueueProvider extends EventEmitter {
  // Implement here
  publish(message) {}
  consume(queueName, callback) {}
}
```
Make your message queue client accept a real or fake provider
```javascript
class MessageQueueClient extends EventEmitter {
  // Pass to it a fake or real message queue
  constructor(customMessageQueueProvider) {}
  publish(message) {}
  consume(queueName, callback) {}
  // Simple implementation can be found here:
  // https://github.com/testjavascript/nodejs-integration-tests-best-practices/blob/master/example-application/libraries/fake-message-queue-provider.js
}
```
Expose a convenient function that tells when certain calls were made
```javascript
const FakeMessageQueueProvider = require('./libs/fake-message-queue-provider');
const MessageQueueClient = require('./libs/message-queue-client');
const newOrderService = require('./domain/newOrderService');

test('When a poisoned message arrives, then it is being rejected back', async () => {
  // Arrange
  const messageWithInvalidSchema = { nonExistingProperty: 'invalid❌' };
  const messageQueueClient = new MessageQueueClient(
    new FakeMessageQueueProvider()
  );
  // Subscribe to new messages and pass the handler function
  messageQueueClient.consume('orders.new', newOrderService.addOrder);
  // Act
  await messageQueueClient.publish('orders.new', messageWithInvalidSchema);
  // Now all the layers of the app will get stretched 👆, including logic and message queue libraries
  // Assert
  await messageQueueClient.waitFor('reject', { howManyTimes: 1 });
  // 👆 This tells us that eventually our code asked the message queue client to reject this poisoned message
});
```
👉What & why - When publishing a library to npm, all your tests might easily pass BUT... the same functionality will fail on the end-user's machine. How come? Tests are executed against the local developer files, but the end-user is only exposed to artifacts that were built. See the mismatch here? After running the tests, the package files are transpiled (I'm looking at you, babel users), zipped and packed. If a single file is excluded due to .npmignore or a polyfill is not added correctly, the published code will lack mandatory files
📝 Code
Consider the following scenario: you're developing a library, and you wrote this code:
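A minimal sketch of how this can look (the file contents here are assumed, aligned with the test further below that imports fn1 from 'my-package'):

```javascript
// calculate.js - the actual implementation
module.exports.calculate = () => 1;
```

```javascript
// index.js - the package entry point, re-exporting from a sibling file
const { calculate } = require('./calculate.js');
module.exports.fn1 = () => calculate();
```

And the package.json, where the 'files' allowlist forgets calculate.js:

```json
{
  "name": "my-package",
  "main": "index.js",
  "files": ["index.js"]
}
```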
See, 100% coverage, all tests pass locally and in the CI ✅ - it just won't work in production 👹. Why? Because you forgot to include calculate.js in the package.json files array 👆
What can we do instead? We can test the library as its end-users do. How? Publish the package to a local registry like verdaccio, then let the tests install it and approach the published code. Sounds troublesome? Judge for yourself 👇
📝 Code
```javascript
// global-setup.js
// 1. Setup the in-memory NPM registry, one function that's it! 🔥
await setupVerdaccio();
// 2. Building our package
await exec('npm', ['run', 'build'], {
  cwd: packagePath,
});
// 3. Publish it to the in-memory registry
await exec('npm', ['publish', '--registry=http://localhost:4873'], {
  cwd: packagePath,
});
// 4. Installing it in the consumer directory
await exec('npm', ['install', 'my-package', '--registry=http://localhost:4873'], {
  cwd: consumerPath,
});

// Test file in the consumerPath
// 5. Test the package 🚀
test('should succeed', async () => {
  const { fn1 } = await import('my-package');
  expect(fn1()).toEqual(1);
});
```
With this setup in place, more scenarios get covered:

- Testing different versions of a peer dependency you support - say your package supports React 16 to 18, you can now test that
- Testing both ESM and CJS consumers
- If you have a CLI application, you can test it like your users do
- Making sure all the voodoo magic in that babel file is working as expected
🗞 The 'broken contract' test - when the code is great but its corresponding OpenAPI docs lead to a production bug
👉What & so what - Quite confidently, I'm sure that almost no team tests their OpenAPI correctness. "It's just documentation", "we generate it automatically based on code" are the typical beliefs behind this. Let me show you how this auto-generated documentation can be wrong and lead not only to frustration but also to a bug. In production.
Consider the following scenario: you're requested to return an HTTP error status code if an order is duplicated, but you forget to update the OpenAPI specification with this new HTTP status response. While some frameworks can update the docs with new fields, none can realize which errors your code throws; this labour is always manual. On the other side of the line, the API client is doing everything just right, going by the spec that you published, adding orders with some duplication because the docs don't forbid doing so. Then, BOOM, production bug -> the client crashes and shows an ugly unknown error message to the user. This type of failure is called the 'contract' problem: two parties interact, each has code that works perfectly, they just operate under different specs and assumptions. While there are fancy, sophisticated and exhaustive solutions to this challenge (e.g., PACT), there are also leaner approaches that get you covered easily and quickly (at the price of covering fewer risks).
The following sweet technique is based on libraries (available for jest and mocha) that listen to all network responses, compare the payload against the OpenAPI document, and if any deviation is found, make the test fail with a descriptive error. With this new weapon in your toolbox and almost zero effort, another risk is ticked off. It's a pity that these libs can't also assert against the incoming requests to tell you when your tests use the API wrongly. One small caveat and an elegant solution: these libraries dictate putting an assertion statement in every test - expect(response).toSatisfyApiSpec() - which is a bit tedious and relies on human discipline. You can do better if your HTTP client supports a plugin/hook/interceptor by putting this assertion in a single place that will apply to all the tests:
The OpenAPI doesn't document HTTP status '409', no framework knows to update the OpenAPI doc based on thrown exceptions
"responses":{ "200":{ "description":"successful", } , "400":{ "description":"Invalid ID", "content":{} },// No 409 in this list😲👈 }
The test code
```javascript
const jestOpenAPI = require('jest-openapi');
jestOpenAPI('../openapi.json');

test('When an order with a duplicated coupon is added, then a 409 error should get returned', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    couponId: uuid(),
  };
  await axiosAPIClient.post('/order', orderToAdd);
  // Act
  // We're adding the same coupon twice 👇
  const receivedResponse = await axiosAPIClient.post('/order', orderToAdd);
  // Assert
  expect(receivedResponse.status).toBe(409);
  expect(receivedResponse).toSatisfyApiSpec();
  // This 👆 will throw if the API response, body or status, differs from what is stated in the OpenAPI
});
```
Trick: If your HTTP client supports any kind of plugin/hook/interceptor, put the following code in 'beforeAll'. This covers all the tests against OpenAPI mismatches
```javascript
beforeAll(() => {
  axios.interceptors.response.use((response) => {
    expect(response).toSatisfyApiSpec();
    // With this 👆, add nothing to the tests - each will fail if the response deviates from the docs
    return response;
  });
});
```
The examples above were not meant only to be a checklist of 'don't forget' test cases, but rather a fresh mindset on what tests could cover for you. Modern tests are not just about functions or user flows, but about any risk that might visit your production. This is doable only with component/integration tests, never with unit or end-to-end tests. Why? Because unlike unit tests, you need all the parts to play together (e.g., the DB migration file, the DAL layer and the error handler, all together). Unlike E2E, you have the power to simulate in-process scenarios that demand some tweaking and mocking. Component tests allow you to include many production moving parts early, on your machine. I like calling this 'production-oriented development'
When was the last time you introduced a new pattern to your code? The use-case pattern is a great candidate: it's powerful, sweet, easy to implement, and can strategically elevate your backend code quality in a short time.
The term 'use case' means many different things in our industry. It's used by product folks to describe a user journey and mentioned by various famous architecture books to describe vague high-level concepts. This article focuses on its practical application at the code level, emphasizing its surprising merits and how to implement it correctly.
Technically, the use-case pattern code belongs between the controller (e.g., API routes) and the business logic services (like those calculating or saving data). The use-case code is called by the controller and describes, in high-level words and a simple manner, the flow that is about to happen. Doing so increases code readability and navigability, pushes complexity toward the edges, improves observability, and brings 3 other merits that are shown below with examples.
But before we delve into its mechanics, let's first touch on a common problem it aims to address and see some code that calls for trouble.
Prefer a 10 min video? Watch here, or keep reading below
Imagine a developer, returning to a codebase she hasn't touched in months, tasked with fixing a bug in the 'new orders flow'—specifically, an issue with price calculation in an electronic shop app.
Her journey begins promisingly smooth:
- 🤗 Testing - She starts her journey from the automated tests to learn about the flow with an outside-in approach. The testing code is short and standard, as it should be:
test("When adding an order with 100$ product, then the price charge should be 100$ ",async()=>{ // .... })
- 🤗 Controller - She moves to skim through the implementation and starts from the API routes. Unsurprisingly, the Controller code is straightforward:
app.post("/api/order",async(req:Request,res:Response)=>{ const newOrder = req.body; await orderService.addOrder(newOrder);// 👈 This is where the real-work is done res.status(200).json({message:"Order created successfully"}); });
Smooth sailing thus far, almost zero complexity. Typically, the controller would now hand off to a Service where the real implementation begins, she navigates into the order service to find where and how to fix that pricing bug.
- 😲 The service - Suddenly! She is thrown into hundreds of lines of code (at best) with tons of details. She encounters classes with intricate states, inheritance hierarchies, a dependency injection framework that wires all the dependent services, and other boilerplate code. Here is a sneak peek from a real-world service, already simplified for brevity. Read it, feel it:
```typescript
let DBRepository;
export class OrderService extends ServiceBase<OrderDto> {
  async addOrder(orderRequest: OrderRequest): Promise<Order> {
    try {
      ensureDBRepositoryInitialized();
      const { openTelemetry, monitoring, secretManager, priceService, userService } =
        dependencyInjection.getVariousServices();
      logger.info("Add order flow starts now", orderRequest);
      openTelemetry.sendEvent("new order", orderRequest);
      const validationRules = await getFromConfigSystem("order-validation-rules");
      const validatedOrder = validateOrder(orderRequest, validationRules);
      if (!validatedOrder) {
        throw new Error("Invalid order");
      }
      this.base.startTransaction();
      const user = await userService.getUserInfo(validatedOrder.customerId);
      if (!user) {
        const savedOrder = await tryAddUserWithLegacySystem(validatedOrder);
        return savedOrder;
      }
      // And it goes on and on until the pricing module is mentioned
```
So many details and things to learn upfront - which of them is crucial for her to learn now, before dealing with her task? How can she find where that pricing module is?
She is not happy. Right off the bat, she must acquaint herself with a handful of product and technical narratives. She just fell off the complexity cliff: from a zero-complexity controller straight into a 1000-piece puzzle, where many of the pieces are unrelated to her task.
In a perfect world, she would love first to get a high-level brief of the involved steps so she can understand the whole flow, and from this comfort standpoint choose where to deepen her journey. This is what this pattern is all about.
The use-case is a file with a single function that is called by the API controller to orchestrate the various implementation services. It's merely a simple function that enumerates and calls the code that does the actual job:
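A minimal sketch of such a use-case (the step function names are illustrative, borrowed from the examples later in this article):

```typescript
// add-order-use-case.ts
export async function addOrderUseCase(orderRequest: OrderRequest): Promise<Order> {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  const savedOrder = await insertOrder(orderWithPricing);
  await sendSuccessEmailToCustomer(savedOrder);
  return mapFromRepositoryToDto(savedOrder);
}
```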
Each interaction with the system—whether it's posting a new comment, requesting user deletion, or any other action—is managed by a dedicated use-case function. Each use-case constitutes multiple 'steps' - function calls that fulfill the desired flow.
By design, it's short, flat, no If/else, no try-catch, no algorithms, just plain calls to functions. This way, it tells the story in the simplest manner. Note how it doesn't share too many details, but tells enough for one to understand 'WHAT' is happening here and 'WHO' is doing it, but not 'HOW'.
When seeking a specific book in the local library, the visitor doesn't have to skim through all the shelves to find a specific topic of interest. A library, like any other information system, uses a navigational system, wayfinding signage, to highlight the path to a specific information area.
The library catalog redirects the reader to the area of interest
Similarly, in software development, when a developer needs to address a particular issue - such as fixing a bug in pricing calculations - the 'use case' acts like a navigational tool within the application. It serves as a hitchhiker's guide, or the yellow pages, pinpointing exactly where to find the necessary piece of code. While other organizational strategies like modularization and folder structures offer ways to manage code, the 'use case' approach provides a more focused and precise index: it shows only the relevant areas (and not 50 unrelated modules), it tells precisely when each module is used, what the specific entry point is, and which exact parameters are passed.
When the code reader's journey starts at the level of implementation services, she is immediately bombarded with intricate details. This immersion exposes her to both product and technical complexities right from the start. Typically, like in our example case, the code first uses a dependency injection system to instantiate some classes, checks for nulls in the state, and retrieves some values from the distributed config system - all before even starting on the primary task. This is called accidental complexity. Tackling complexity is one of the finest arts of app design: as the code planner, you can't just eliminate complexity, but you may at least reduce the chances of someone meeting it.
Imagine your application as a tree where branches represent functions and the fruits are pockets of embedded complexity, some of which are poisoned (i.e., unnecessary complexities). Your objective is to structure this tree so that navigating through it exposes the visitor to as few poisoned fruits as possible:
The accidental-complexity tree: A visitor aiming to reach a specific leaf must navigate through all the intervening poisoned fruits.
This is where the 'Use Case' approach shines: it puts high-level product steps and minimal technical details at the outset - a navigation system that simplifies access to the various parts of the application. With this navigation tool, she can easily ignore steps that are unrelated to her work and avoid poisoned fruits. A true strategic design win.
The spread-complexity tree: Complexity is pushed to the periphery, allowing the reader to navigate directly to the essential fruits only.
When embarking on a new coding flow, where do you start? After digesting the requirements and setting up some initial API routes and high-level component tests, the next logical step might be less obvious. Here's a strategy: begin with a use-case. This approach promotes an outside-in workflow that not only streamlines development but also exposes potential risks early on.
While drafting a new use-case, you essentially map out the various steps of the process. Each step is a call to some service or repository function, sometimes before it even exists. Effortlessly and spontaneously, these steps become your TODO list - a live document that tells not only what should be implemented but also where risky gotchas hide. Take, for instance, this straightforward use-case for adding an order:
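A sketch of what this might look like (the step functions are hypothetical and match the items discussed next):

```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  await assertCustomerExists(validatedOrder.customerId); // 👈 Calls the User Management team's Microservice
  const orderWithPricing = calculateOrderPricing(validatedOrder); // 👈 Pricing details to confirm with product
  const savedOrder = await insertOrder(orderWithPricing);
  await sendSuccessEmailToCustomer(savedOrder); // 👈 Needs an email service token from Ops
}
```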
This structured approach allows you to preemptively tackle potential implementation hurdles:
- sendSuccessEmailToCustomer - What if you lack a necessary email service token from the Ops team? Sometimes, this demands approval and might last more than a week (believe me, I know). Acting now, before spending 3 days on coding, can make a big difference.
- calculateOrderPricing - Reminds you to confirm pricing details with the product team—ideally before they're out of office, avoiding delays that could impact your delivery timeline.
- assertCustomerExists - This call goes to an external Microservice owned by the User Management team. Did they already provide an OpenAPI specification of their routes? Check your Slack now; if they haven't yet, asking early can prevent this from becoming a roadblock later.
Not only does this high-level thinking highlight your tasks and risks, it's also an optimal spot to start the design from:
Early on when initiating a use-case, the developers define the various types, function signatures, and their initial skeleton return data. This process naturally evolves into an effective design drill where the overall flow is decomposed into small units that actually fit. This sketch-out results in discovering early when puzzle pieces don't fit, while considering the underlying technologies. Here is an example: once I sketched a use-case and initially came up with these steps:
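For illustration, a sketch of that initial (flawed) ordering, with hypothetical signatures:

```typescript
// ❗️My initial sketch - the step order is flawed
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  await sendSuccessEmailToCustomer(orderWithPricing.id); // ❗️Needs an 'Order Id', but the order is not saved yet
  const savedOrder = await insertOrder(orderWithPricing);
}
```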
Going with my initial use-case above, an email is sent before the order is saved. Soon enough the compiler yelled at me: the email function signature is not satisfied - an 'Order Id' parameter is needed, but to obtain one the order must be saved to the DB first. I tried to change the order of the steps; unfortunately, it turned out that my ORM does not return the ID of saved entities. I'm stuck, my design struggles - at least this was realized before spending days on details. Unlike designing with papers and UML, designing with a use-case brings no overhead. Moreover, unlike high-level diagrams detached from implementation realities, use-case design is grounded in the actual constraints of the technology being used.
Say you have 82.35% test code coverage - are you happy and confident to deploy? I'd suggest that anyone below 100% must first clarify which code exactly is not covered with testing. Is it some nitty-gritty niche code, or actually critical business operations that are not fully tested? Typically, answering this requires scrutinizing the coverage of all the app files, a daunting task.
Use-cases simplify the coverage digest: when looking directly into the use-cases folder, one gets 'feature coverage', a unique look into which user features and steps lack testing:
The use-cases folder test coverage report, some use-cases are only partially tested
See how the code above has excellent overall coverage, 82.35%. But what about the remaining 17.65%? Looking at the report triggers a red flag: the 'payment-use-case' is not tested. This flow is where revenues are generated, a critical financial process which, as it turns out, has very low test coverage. This significant observation calls for immediate action. Use-case coverage thus not only helps in understanding which parts of your application are tested but also prioritizes testing efforts based on business criticality rather than mere technical functionality.
The influential book "Domain-Driven Design" advocates for "committing the team to relentlessly exercise the domain language in all communications within the team and in the code." This principle asserts that aligning code closely with product narratives fosters a common language among diverse stakeholders (e.g., product, team-leads, frontend, backend). While this sounds sensible, this advice is also a little vague - how and where should this happen?
Use-cases bring this idea down to earth: the use-case files are named after user journeys in the system (e.g., purchase-new-goods), the use-case code itself naturally describes the flow in a product language. For instance, if employees commonly use the term 'cut' at the water cooler to refer to a price reduction, the corresponding use-case should employ a function named 'calculatePriceCut'. This naming convention not only reinforces the domain language but also enhances mutual understanding across the team.
I bet you've encountered the situation when you turn the log level to 'Debug' (or any other verbose mode) and get a gazillion, overwhelming, unbearable amount of log statements. Great chances that you also met the opposite: the logger level is set to 'Info' but there is almost zero logging for the specific route you're looking into. It's hard to formalize among team members when exactly each type of logging should be invoked; the result is typically inconsistent and lacking observability.
Use-cases can drive trustworthy and consistent monitoring by taking advantage of the already-produced use-case steps. Since the precious work of breaking down the flow into meaningful steps was already done (e.g., send-email, charge-credit-card), each step can produce the desired level of logging. For example, one team's approach might be to emit logger.info on use-case start and end, while each step emits logger.debug. Whatever the chosen level is, use-case steps bring consistency and automation. Logging aside, the same can be applied to any other observability technique, like OpenTelemetry custom spans for every flow step.
The implementation, though, demands some thinking: cluttering every step with a log statement is both verbose and depends on manual human work:
```typescript
// ❗️Verbose use case
export async function addOrderUseCase(orderRequest: OrderRequest): Promise<Order> {
  logger.info("Add order use case - Adding order starts now", orderRequest);
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  logger.debug("Add order use case - The order was validated", validatedOrder);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  logger.debug("Add order use case - The order pricing was decided", validatedOrder);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderWithPricing);
  logger.debug("Add order use case - Verified the user balance already", purchasingCustomer);
  const returnOrder = mapFromRepositoryToDto(purchasingCustomer as unknown as OrderRecord);
  logger.info("Add order use case - About to return result", returnOrder);
  return returnOrder;
}
```
One way around this is creating a step wrapper function that makes it observable. This wrapper function will get called for each step:
```typescript
import { openTelemetry } from "@opentelemetry";

async function runUseCaseStep(stepName, stepFunction) {
  logger.debug(`Use case step ${stepName} starts now`);
  // Create Open Telemetry custom span
  openTelemetry.startSpan(stepName);
  return await stepFunction();
}
```
Now the use-case gets automated and consistent transparency:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const validatedOrder = await runUseCaseStep("Validation", validateAndCoerceOrder.bind(null, orderRequest));
  const orderWithPricing = await runUseCaseStep("Calculate price", calculateOrderPricing.bind(null, validatedOrder));
  await runUseCaseStep("Send email", sendSuccessEmailToCustomer.bind(null, orderWithPricing));
}
```
The code is a little simplified; in a real-world wrapper you'll have to add a try-catch and cover other corner cases, but it makes the point: each step is a meaningful milestone in the user's journey that gets automated and consistent observability.
Since use-cases are mostly about zero complexity, use no code constructs, just flat calls to functions. No If/Else, no switch, no try/catch, nothing, only a simple list of steps. A while ago I decided to allow just one If/Else in a use-case:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  if (purchasingCustomer.isPremium) { // ❗️
    sendEmailToPremiumCustomer(purchasingCustomer);
    // This easily will grow with time to multiple if/else
  }
}
```
A month later, when I visited the code above, there were already three nested If/Elses. A year from now, the function above will host typical imperative code with many nested branches. Avoid this slippery road by setting a very strict border: put the conditions within the step functions:
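A sketch of the same flow with the condition pushed down (the step name here is illustrative):

```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  // ✅ The premium/regular branching now lives inside the step function
  await sendEmailToCustomer(purchasingCustomer);
}
```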
The finest art of a great use case is finding the right level of detail. At this early stage, the reader is like a traveler who uses a map to get some sense of the area or to find a specific road - definitely not to learn about every road in the country. On the other hand, a good map doesn't show only the main highway and nothing else. For example, the following use-case is too short and vague:
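Something like this, for example (a sketch):

```typescript
// ❗️Too vague - one opaque call that hides the entire story
export async function addOrderUseCase(orderRequest: OrderRequest) {
  return await orderService.addOrder(orderRequest);
}
```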
The code above doesn't tell a story, nor does it eliminate any paths from the journey. Conversely, the following code does better at telling the story in brief:
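A sketch with the main milestones spelled out (the step names are illustrative):

```typescript
// ✅ The main milestones are visible; the 'HOW' stays inside the step functions
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  const orderWithPricing = calculateOrderPricing(purchasingCustomer);
  const savedOrder = await insertOrder(orderWithPricing);
  return mapFromRepositoryToDto(savedOrder);
}
```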
Things get a little more challenging when dealing with long flows. What if there are a handful of important steps, say 20? What if multiple use-cases have a lot of repetition and shared steps? Consider the case where 'admin approval' is a multi-step process invoked by a handful of different use-cases. When facing this, consider breaking down into multiple use-cases, where one is allowed to call the other, as sketched below.
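A sketch of one use-case delegating to another (all the names here are hypothetical):

```typescript
// approve-order-use-case.ts - a shared, multi-step sub-flow
export async function approveOrderUseCase(orderId: number) {
  await assertApproverHasPermissions(orderId);
  await markOrderAsApproved(orderId);
  await notifyApprovalParties(orderId);
}

// add-order-use-case.ts
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const savedOrder = await insertOrder(validatedOrder);
  await approveOrderUseCase(savedOrder.id); // 👈 One use-case calling another
}
```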
3. When you have no choice, control the DB transaction from the use-case
What if step 2 and step 5 both deal with data and must be atomic (fail or succeed together)? Typically you'll handle this with DB transactions, but since each step is discrete, how can a transaction be shared among the coupled steps?
If the steps take place one after the other, it makes sense to let the downstream service/repository handle them together and abstract the transaction away from the use-case. What if the atomic steps are not consecutive? In this case, though not ideal, there is no escape from making the use-case acquainted with a transaction object:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const transaction = Repository.startTransaction();
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderRequest, transaction);
  const orderWithPricing = calculateOrderPricing(purchasingCustomer);
  const savedOrder = await insertOrder(orderWithPricing, transaction);
  const returnOrder = mapFromRepositoryToDto(savedOrder);
  Repository.commitTransaction(transaction);
  return returnOrder;
}
```
A use-case file is created per user flow that is triggered from an API route. This model makes sense for significant flows, but what about small operations like getting an order by id? A 'get-order-by-id' use case is likely to have 1 line of code, which seems like unnecessary overhead for every small request. In this case, consider aggregating multiple operations under a single conceptual use-case file. In the example below, all the order queries co-live under the query-orders use-case file:
```typescript
// query-orders-use-cases.ts
export async function getOrder(id) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const result = await orderRepository.getOrderByID(id);
  return result;
}

export async function getAllOrders(criteria) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const result = await orderRepository.queryOrders(criteria);
  return result;
}
```
If you find it valuable, you'll also get a great return for your modest investment: no fancy tooling is needed, and the learning time is close to zero (in fact, you just read one of the longest articles on this matter...). There is also no need to refactor a whole system; rather, implement it gradually, per feature.
Once you become accustomed to using it, you'll find that this technique extends well beyond API routes. It's equally beneficial for managing message queue subscriptions and scheduled jobs. Backend aside, use it as the facade of every module or library - the code that is called by the entry file and orchestrates the internals. The same idea can be applied in the Frontend as well: declare the core actors at the component's top level. Without implementation details, just put the references to the component's event handlers and hooks - now the reader knows about the key events that will drive this component.
You might think this all sounds remarkably straightforward, and it is. My apologies - this article wasn't about cutting-edge technologies, nor did it cover shiny new dev tooling or AI-based rocket science. In a land where complexity is the key enemy, simple ideas can be more impactful than sophisticated tooling, and the use-case is a powerful and sweet pattern that is meant to live in every piece of software.
As a testing consultant, I have read tons of testing articles throughout the years. The majority are nice-to-read, casual pieces of content that are not always worth your precious time. Once in a while, not very often, I landed on an article that was shockingly good and could genuinely improve your test writing skills. I've cherry-picked these outstanding articles for you and added my abstract nearby. Half of these articles relate directly to JavaScript/Node.js; the other half covers ubiquitous testing concepts that are applicable in every language
Why did I find these articles to be outstanding? First, the writing quality is excellent. Second, they deal with the 'new world of testing', not the commonly known 'TDD-ish' stuff but rather modern concepts and tooling
Too busy to read them all? Search for articles that are decorated with a medal 🏅 - these are true masterpieces that you don't want to miss
Before we start: If you haven't heard, I launched my comprehensive Node.js testing course a week ago (curriculum here). There are less than 48 hours left for the 🎁 special launch deal
Here they are, 10 outstanding testing articles:
📄 1. 'Selective Unit Testing – Costs and Benefits'
✍️ Author: Steve Sanderson
🔖 Abstract: We all found ourselves at least once in the ongoing and flammable discussion about 'units' vs 'integration'. This article delves into a greater level of specificity and discusses WHEN unit tests shine, by considering the costs of writing these tests under various scenarios. Many treat their testing strategy as a static model - a technique they always apply regardless of the context. "Always write unit tests against functions", "Write mostly integration tests" are the types of arguments often heard. Conversely, this article suggests that the attractiveness of unit tests should be evaluated based on the costs and benefits per module. The article classifies multiple scenarios where the net value of unit tests is high or low, for example:
If your code is basically obvious – so at a glance you can see exactly what it does – then additional design and verification (e.g., through unit testing) yields extremely minimal benefit, if any
The author also offers a 2x2 model to visualize when the attractiveness of unit tests is high or low
Side note, not part of the article: Personally, I (Yoni) always start with component tests, outside-in, covering the high-level user flows first (a.k.a. the testing diamond). Then later, once I have functions, I add unit tests based on their net value. This article helped me a lot in classifying and evaluating the benefits of units in various scenarios
🔖 Abstract: The author outlines, with a code example, the unavoidable tragic fate of a tester who asserts on implementation details. Put aside the effort of testing so many details - going this route always ends with 'false positives' and 'false negatives' that cloud the tests' reliability. The article illustrates this with a frontend code example, but the takeaway is ubiquitous to any kind of testing
"There are two distinct reasons that it's important to avoid testing implementation details. Tests which test implementation details:
Can break when you refactor application code. False negatives
May not fail when you break application code. False positives"
🔖 Abstract: This one is the entire Microservices and distributed modern testing bible packed into a single long article that is also super engaging. I remember when I came across it four years ago, winter time; I spent an hour every day under my blanket before sleep with a smile spread over my face. I clicked on every link and paused after every paragraph to think - a whole new world was opening in front of me. In fact, it was so fascinating that it made me want to specialize in this domain. Fast forward, years later, this is a major part of my work and I enjoy every moment
This paper starts by explaining why E2E, unit tests and exploratory QA will fall short in a distributed environment. Not only this, it explains why any kind of coded test won't be enough and why a rich toolbox of techniques is needed. It goes through a handful of modern testing techniques that are unfamiliar to most developers. One of its key parts deals with what should be the canonical developer's testing technique: the author advocates for "big unit tests" (i.e., component tests) as they strike a great balance between developer comfort and realism
I coined the term “step-up testing”, the general idea being to test at one layer above what’s generally advocated for. Under this model, unit tests would look more like integration tests (by treating I/O as a part of the unit under test within a bounded context), integration testing would look more like testing against real production, and testing in production looks more like, well, monitoring and exploration. The restructured test pyramid (test funnel?) for distributed systems would look like the following:
Beyond its main scope, whatever type of system you are dealing with, this article will broaden your perspective on testing and expose you to many new ideas that are highly applicable
👓 Read time: > 2 hours (10,500 words with many links)
📄 4. 'How to Unit Test with Node.js?' (JavaScript examples, for beginners)
✍️ Author: Ryan Jones
🔖 Abstract: One single recommendation for beginners: every other article on this list covers advanced testing. This article, and only this one, is meant for testing newbies who are looking to take their first practical steps in this world
This tutorial was chosen from a handful of alternatives because it's well-written and relatively comprehensive. It covers the first-steps 'kata' that a beginner should learn: the test anatomy syntax, test runner CLI, assertions and asynchronous tests. It goes without saying that this knowledge won't be sufficient for covering a real-world app with tests, but it gets you safely to the next phase. My personal advice: after reading this one, your next step is learning about test doubles (mocking)
🔖 Abstract: The article opens with 'I hear that people feel an uncontrollable urge to write unit tests nowadays. If you are one of those affected, spare a few minutes and consider these reasons for NOT writing unit tests'. Despite these words, the article is not against unit tests in principle; rather, it highlights when & where unit tests fall short. In these cases, other techniques should be considered. Here is an example: unit tests inherently have a lower return on investment, and the author offers a resounding analogy for this: 'If you are painting a house, you want to start with the biggest brush at hand and spare the tiny brush for the end to deal with fine details. If you begin your QA work with unit tests, you are essentially trying to paint the entire house using the finest Chinese calligraphy brush...'
📄 6. 'Mocking is a Code Smell' (JavaScript examples)
✍️ Author: Eric Elliott
🔖 Abstract: Most of the articles here belong to the 'modern wave of testing'; here is something more 'classic', appealing to TDD lovers or just anyone who needs to write unit tests. This article is about HOW to reduce the amount of mocking (test doubles) in your tests - not only because mocking adds overhead to test writing, but also because it hints that something might be wrong. In other words, mocking is not necessarily wrong and in need of an immediate fix, but a lot of mocking is a sign of something non-ideal. Consider a module that inherits from many others, or a chatty one that collaborates with a handful of other modules to do its job - testing and changing this structure is a burden:
"Mocking is required when our decomposition strategy has failed"
The author goes through a variety of techniques to design more autonomous units, like using pure functions by isolating side-effects from the rest of the program logic, using pub/sub, isolating I/O, composing units with patterns like monadic composition, and more
The overall article tone is balanced, though in some parts it encourages functional programming and techniques that are far from the mainstream - take these few parts with a grain of salt
🔖 Abstract: I love this one so much. The author exemplifies how, unexpectedly, it is sometimes the good developers with great intentions who write bad tests:
Too often, software developers approach unit testing with the same flawed thinking... They mechanically apply all the “rules” they learned in production code without examining whether they’re appropriate for tests. As a result, they build skyscrapers at the beach
Concrete code examples show how test readability deteriorates once we apply 'skyscraper' thinking, and how to keep it simple. In one part, he demonstrates how thoughtfully violating the DRY principle allows the reader to stay within the test while still keeping the code maintainable. This article alone, in 11 minutes, can greatly improve the tests of developers who tend to write sophisticated tests. If you have someone like this on your team, you now know what to do
📄 8. 'An Overview of JavaScript Testing in 2022' (JavaScript examples)
✍️ Author: Vitali Zaidman
🔖 Abstract: This paper is unique here as it doesn't cover a single topic; rather, it is a rundown of (almost) all JavaScript testing tools. This allows you to enrich the toolbox in your mind and have more screwdrivers for more types of screws. For example, knowing that there are IDE extensions that show coverage information right within the code might help you boost test adoption in the team, if needed. Knowing that there are solid, free, and open source visual regression tools might encourage you to dip your toes in this water, to name a few examples.
"We reviewed the most trending testing strategies and tools in the web development community and hopefully made it easier for you to test your sites. In the end, the best decisions regarding application architecture today are made by understanding general patterns that are trending in the very active community of developers, and combining them with your own experience and the characteristics of your application."
The author was also kind enough to leave pros/cons nearby most tools so the reader can quickly get a sense of how the various options stack up against each other. The article covers categories like assertion libraries, test runners, code coverage tools, visual regression tools, E2E suites and more
🔖 Abstract: 'Testing in production' is a provocative term that sounds like a risky and careless approach of testing on production instead of verifying the delivery beforehand (yet another case of bad testing terminology). In practice, testing in production doesn't replace coding-time testing, it just adds an additional layer of confidence by safely testing in 3 more phases: deployment, release and post-release. This comprehensive article covers dozens of techniques, some unusual like traffic shadowing, tap compare and more. More than anything else, it illustrates a holistic testing workflow that builds confidence cumulatively from the developer machine until the new version is serving users in production
I’m more and more convinced that staging environments are like mocks - at best a pale imitation of the genuine article and the worst form of confirmation bias.
It’s still better than having nothing - but “works in staging” is only one step better than “works on my machine”.
📄 10. 'Please don't mock me' (JavaScript examples, from JSConf)
🏅 This is a masterpiece
✍️ Author: Justin Searls
🔖 Abstract: This fantastic YouTube talk deals with the Achilles heel of testing: where exactly to mock. The dilemma of where to end the test scope, what should be mocked and what not - is presumably the most strategic test design decision. Consider for example having module A which interacts with module B. If you isolate A by mocking B, A's tests will always pass, even when B's interface has changed and A's code didn't follow. This makes A's tests highly stable but... production will fail in hours. In his talk Justin says:
"A test that never fails is a bad test because it doesn't tell you anything. Design tests to fail"
Then he goes on to tackle many other interesting mocking crossroads, with beautiful visuals and tons of insights. Please don't miss this one
Here are a few articles that I wrote. Obviously I don't 'recommend' my own craft, just checking modestly whether they appeal to you. Together, these articles gained 25,000 GitHub stars, maybe you'll find one of them useful?
As a Node.js starter, choosing the right libraries and frameworks for our users is the bread and butter of our work in Practica.js. In this post, we'd like to share our considerations in choosing our monorepo tooling
The Monorepo market is hot like fire. Weirdly, now when the demand for Monorepos is exploding, one of the leading libraries - Lerna - has just retired. When looking closely, it might not be just a coincidence - with so many disruptive and shiny features brought by new vendors, Lerna failed to keep up with the pace and stay relevant. This bloom of new tooling gets many confused: what is the right choice for my next project? What should I look at when choosing a Monorepo tool? This post is all about curating this information overload, covering the new tooling, emphasizing what is important, and finally sharing some recommendations. If you are here for tools and features, you're in the right place, although you might find yourself on a soul-searching journey toward your desired development workflow.
This post is concerned with backend-only and Node.js. It is also scoped to typical business solutions. If you're a Google/FB developer who is faced with 8,000 packages - sorry, you need special gear. Consequently, monster Monorepo tooling like Bazel is left out. We will cover here some of the most popular Monorepo tools including Turborepo, Nx, PNPM, Yarn/npm workspace, and Lerna (although it's not actually maintained anymore - it's a good baseline for comparison).
Let's start? When human beings use the term Monorepo, they typically refer to one or more of the following 4 layers. Each one of them can bring value to your project, each has different consequences, tooling, and features:
Intro - Why discuss yet another ORM (or the man who had a stain on his fancy suit)?
Betteridge's law of headlines suggests that a 'headline that ends in a question mark can be answered by the word NO'. Will this article follow this rule?
Imagine an elegant businessman (or woman) walking into a building, wearing a fancy tuxedo and a luxury watch wrapped around his palm. He smiles and waves all over to say hello while people around are staring admirably. You get a little closer, then shockingly, while standing nearby it's hard to ignore a bold, dark stain over his white shirt. What a dissonance, suddenly all of that glamour is stained
Like this businessman, Node is highly capable and popular, and yet, in certain areas, its offering basket is stained with inferior offerings. One of these areas is the ORM space. "I wish we had something like (Java) Hibernate or (.NET) Entity Framework" are words commonly heard from Node developers. What about existing mature ORMs like TypeORM and Sequelize? We owe so much to these maintainers, and yet, the produced developer experience and the level of maintenance just don't feel delightful, some may say even mediocre. At least so I believed before writing this article...
From time to time, a shiny new ORM is launched, and there is hope. Then soon it's realized that these new emerging projects are more of the same, if they survive. Until one day, Prisma ORM arrived surrounded with glamour: it's gaining tons of attention all over, producing fantastic content, being used by respectful frameworks and... raised $40,000,000 (40 million) to build the next generation ORM. Is it the 'Ferrari' ORM we've been waiting for? Is it a game changer? If you're the 'no ORM for me' type, will this one make you convert your religion?
In Practica.js (the Node.js starter based off Node.js best practices with 83,000 stars) we aim to make the best decisions for our users. The Prisma hype made us stop by for a second, evaluate its unique offering and conclude whether we should upgrade our toolbox
This article is certainly not an 'ORM 101' but rather a spotlight on specific dimensions in which Prisma aims to shine or struggles. It's compared against the two most popular Node.js ORMs - TypeORM and Sequelize. Why weren't other promising contenders like MikroORM covered? Simply because they are not as popular yet, and maturity is a critical trait of ORMs
Ready to explore how good Prisma is and whether you should throw away your current tools?
Node.js is maturing. Many patterns and frameworks were embraced - it's my belief that developers' productivity dramatically increased in the past years. One downside of maturity is habits - we now reuse existing techniques more often. How is this a problem?
In his book 'Atomic Habits', the author James Clear states that:
"Mastery is created by habits. However, sometimes when we're on auto-pilot performing habits, we tend to slip up... Just because we are gaining experience through performing the habits does not mean that we are improving. We actually go backwards on the improvement scale with most habits that turn into auto-pilot". In other words, practice makes perfect, and bad practices make things worse
We copy-paste mentally and physically things that we are used to, but these things are not necessarily right anymore. Like animals who shed their shells or skin to adapt to a new reality, so the Node.js community should constantly gauge its existing patterns, discuss and change
Luckily, unlike other languages that are more committed to specific design paradigms (Java, Ruby) - Node is a house of many ideas. In this community, I feel safe to question some of our good-old tooling and patterns. The list below contains my personal beliefs, which are brought with reasoning and examples.
Are those disruptive thoughts surely correct? I'm not sure. There is one thing I'm sure about though - for Node.js to live longer, we need to encourage critics, focus our loyalty on innovation, and keep the discussion going. The outcome of this discussion is not "don't use this tool!" but rather becoming familiar with other techniques that, under some circumstances, might be a better fit
The True Crab's exoskeleton is hard and inflexible, he must shed his restrictive exoskeleton to grow and reveal the new roomier shell
This post is about tests that are easy to write - typically 5-8 lines - that cover dark and dangerous corners of our applications, yet are often overlooked
Some context first: How do we test a modern backend? With the testing diamond, of course, by putting the focus on component/integration tests that cover all the layers, including a real DB. With this approach, our tests 99% resemble the production and the user flows, while the development experience is almost as good as with unit tests. Sweet. If this topic is of interest, we've also written a guide with 50 best practices for integration tests in Node.js
But there is a pitfall: most developers write only semi-happy test cases that are focused on the core user flows, like invalid inputs, CRUD operations, various application states, etc. This is indeed the bread and butter, a great start, but a whole area is left uncovered. For example, typical tests don't simulate an unhandled promise rejection that leads to a process crash, nor do they simulate the webserver bootstrap phase that might fail and leave the process idle, or HTTP calls to external services that often end with timeouts and retries. They typically don't cover the health and readiness route, nor the integrity of the OpenAPI against the actual routes schema, to name just a few examples. There are many dead bodies buried beyond business logic, things that sometimes go even beyond bugs and are rather concerned with application downtime
Here are a handful of examples that might open your mind to a whole new class of risks and tests
July 2023: My testing course was launched: I've just released a comprehensive testing course that I've been working on for two years. 🎁 It's now on sale, but only for the month of July. Check it out at testjavascript.com
👉What & so what? - In all of your tests, you assume that the app has already started successfully, lacking a test against the initialization flow. This is a pity because this phase hides some potential catastrophic failures: First, initialization failures are frequent - many bad things can happen here, like a DB connection failure or a new version that crashes during deployment. For this reason, runtime platforms (like Kubernetes and others) encourage components to signal when they are ready (see readiness probe). Errors at this stage also have a dramatic effect on the app's health - if the initialization fails and the process stays alive, it becomes a 'zombie process'. In this scenario, the runtime platform won't realize that something went bad, will keep forwarding traffic to it and avoid creating alternative instances. Besides exiting gracefully, you may want to consider logging, firing a metric, and adjusting your /readiness route. Does it work? Only a test can tell!
📝 Code
Code under test, api.js:
```javascript
// A common express server initialization
const startWebServer = () => {
  return new Promise((resolve, reject) => {
    try {
      // A typical Express setup
      expressApp = express();
      defineRoutes(expressApp); // a function that defines all routes
      expressApp.listen(process.env.WEB_SERVER_PORT);
    } catch (error) {
      // log here, fire a metric, maybe even retry and finally:
      process.exit();
    }
  });
};
```
The test:
```javascript
const api = require('./entry-points/api'); // our api starter that exposes 'startWebServer' function
const sinon = require('sinon'); // a mocking library

test('When an error happens during the startup phase, then the process exits', async () => {
  // Arrange
  const processExitListener = sinon.stub(process, 'exit');
  // 👇 Choose a function that is part of the initialization phase and make it fail
  sinon
    .stub(routes, 'defineRoutes')
    .throws(new Error('Cant initialize connection'));

  // Act
  await api.startWebServer();

  // Assert
  expect(processExitListener.called).toBe(true);
});
```
👉What & why - For many, testing errors means checking the exception type or the API response. This leaves one of the most essential parts uncovered - making the error correctly observable. In plain words, ensuring that it's being logged correctly and exposed to the monitoring system. It might sound like an internal thing, implementation testing, but actually, it goes directly to a user. Yes, not the end-user, but rather another important one - the ops user who is on-call. What are the expectations of this user? At the very basic level, when a production issue arises, she must see detailed log entries, including stack trace, cause and other properties. This info can save the day when dealing with production incidents. On top of this, in many systems, monitoring is managed separately to conclude about the overall system state using cumulative heuristics (e.g., an increase in the number of errors over the last 3 hours). To support these monitoring needs, the code also must fire error metrics. Even tests that do try to cover these needs take a naive approach by checking that the logger function was called - but hey, does it include the right data? Some write better tests that check the error type that was passed to the logger. Good enough? No! The ops user doesn't care about JavaScript class names but about the JSON data that is sent out. The following test focuses on the specific properties that are being made observable:
📝 Code
```javascript
test('When exception is thrown during request, Then logger reports the mandatory fields', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    status: 'approved',
  };
  const metricsExporterDouble = sinon.stub(metricsExporter, 'fireMetric');
  sinon
    .stub(OrderRepository.prototype, 'addOrder')
    .rejects(new AppError('saving-failed', 'Order could not be saved', 500));
  const loggerDouble = sinon.stub(logger, 'error');

  // Act
  await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  expect(loggerDouble).toHaveBeenCalledWith({
    name: 'saving-failed',
    status: 500,
    stack: expect.any(String),
    message: expect.any(String),
  });
  expect(metricsExporterDouble).toHaveBeenCalledWith('error', {
    errorName: 'saving-failed', // 👈 should match the error name thrown above
  });
});
```
👽 The 'unexpected visitor' test - when an uncaught exception meets our code
👉What & why - A typical error flow test falsely assumes two conditions: a valid error object was thrown, and it was caught. Neither is guaranteed. Let's focus on the 2nd assumption: it's common for certain errors to be left uncaught. The error might get thrown before your framework error handler is ready, some npm libraries can throw surprisingly from different stacks using timer functions, or you just forgot to set someEventEmitter.on('error', ...), to name a few examples. These errors will find their way to the global process.on('uncaughtException') handler, hopefully if your code subscribed to it. How do you simulate this scenario in a test? Naively, you may locate a code area that is not wrapped with try-catch and stub it to throw during the test. But here's a catch-22: if you are familiar with such an area - you are likely to fix it and ensure its errors are caught. What do we do then? We can bring to our benefit the fact that JavaScript is 'borderless': if some object can emit an event, we as its subscribers can make it emit this event ourselves. Here's an example:
📝 Code
```javascript
test('When an unhandled exception is thrown, then process stays alive and the error is logged', async () => {
  // Arrange
  const loggerDouble = sinon.stub(logger, 'error');
  const processExitListener = sinon.stub(process, 'exit');
  const errorToThrow = new Error('An error that wont be caught 😳');

  // Act
  process.emit('uncaughtException', errorToThrow); // 👈 Where the magic is

  // Assert
  expect(processExitListener.called).toBe(false);
  expect(loggerDouble).toHaveBeenCalledWith(errorToThrow);
});
```
🕵🏼 The 'hidden effect' test - when the code should not mutate at all
👉What & so what - In common scenarios, the code under test should stop early, like when the incoming payload is invalid or a user doesn't have sufficient credits to perform an operation. In these cases, no DB records should be mutated. Most tests out there in the wild settle with testing the HTTP response only - got back HTTP 400? Great, the validation/authorization probably work. Or does it? The test trusts the code too much; a valid response doesn't guarantee that the code behind behaved as designed. Maybe a new record was added although the user has no permissions? Clearly you need to test this, but how would you test that a record was NOT added? There are two options here: if the DB is purged before/after every test, then just try to perform an invalid operation and check that the DB is empty afterward. If you're not cleaning the DB often (like me, but that's another discussion), the payload must contain some unique and queryable value that you can query later, hoping to get no records. This is how it looks:
📝 Code
```javascript
it('When adding an invalid order, then it returns 400 and NOT retrievable', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    mode: 'draft',
    externalIdentifier: uuid(), // no existing record has this value
  };

  // Act
  const { status: addingHTTPStatus } = await axiosAPIClient.post(
    '/order',
    orderToAdd
  );

  // Assert
  const { status: fetchingHTTPStatus } = await axiosAPIClient.get(
    `/order/externalIdentifier/${orderToAdd.externalIdentifier}`
  ); // Trying to get the order that should have failed
  expect({ addingHTTPStatus, fetchingHTTPStatus }).toMatchObject({
    addingHTTPStatus: 400,
    fetchingHTTPStatus: 404,
  });
  // 👆 Check that no such record exists
});
```
🧨 The 'overdoing' test - when the code should mutate but it's doing too much
👉What & why - This is how a typical data-oriented test looks: first you add some records, then approach the code under test, and finally assert what happens to these specific records. So far, so good. There is one caveat here though: since the test narrows its focus to specific records, it ignores whether other records were unnecessarily affected. This can be really bad. Here's a short real-life story that happened to one of my customers: some data access code changed and incorporated a bug that updates ALL the system users instead of just one. All tests passed since they focused on a specific record which was updated correctly; they just ignored the others. How would you test and prevent this? Here is a nice trick that I was taught by my friend Gil Tayar: in the first phase of the test, besides the main records, add one or more 'control' records that should not get mutated during the test. Then, run the code under test, and besides the main assertion, check also that the control records were not affected:
📝 Code
```javascript
test('When deleting an existing order, Then it should NOT be retrievable', async () => {
  // Arrange
  const orderToDelete = {
    userId: 1,
    productId: 2,
  };
  const deletedOrder = (await axiosAPIClient.post('/order', orderToDelete)).data
    .id; // We will delete this soon
  const orderNotToBeDeleted = orderToDelete;
  const notDeletedOrder = (
    await axiosAPIClient.post('/order', orderNotToBeDeleted)
  ).data.id; // We will not delete this

  // Act
  await axiosAPIClient.delete(`/order/${deletedOrder}`);

  // Assert
  const { status: getDeletedOrderStatus } = await axiosAPIClient.get(
    `/order/${deletedOrder}`
  );
  const { status: getNotDeletedOrderStatus } = await axiosAPIClient.get(
    `/order/${notDeletedOrder}`
  );
  expect(getNotDeletedOrderStatus).toBe(200);
  expect(getDeletedOrderStatus).toBe(404);
});
```
🕰 The 'slow collaborator' test - when the other HTTP service times out
👉What & why - When your code approaches other services/microservices via HTTP, savvy testers minimize end-to-end tests because these tests lean toward happy paths (it's harder to simulate scenarios). This mandates using some mocking tool to act like the remote service, for example, tools like nock or wiremock. These tools are great, only some use them naively and check mainly that calls outside were indeed made. What if the other service is not available in production, what if it is slower and times out occasionally (one of the biggest risks of Microservices)? While you can't wholly save this transaction, your code should do its best given the situation and retry, or at least log and return the right status to the caller. All the network mocking tools allow simulating delays, timeouts and other 'chaotic' scenarios. The question left is how to simulate a slow response without having slow tests? You may use fake timers and trick the system into believing a few seconds passed in a single tick. If you're using nock, it offers an interesting feature to simulate timeouts quickly: the .delay function simulates slow responses, and nock will realize immediately if the delay is higher than the HTTP client timeout and throw a timeout event right away without waiting
📝 Code
```javascript
// In this example, our code accepts new Orders and while processing them approaches the Users Microservice
test('When users service times out, then return 503 (option 1 with fake timers)', async () => {
  // Arrange
  const clock = sinon.useFakeTimers();
  config.HTTPCallTimeout = 1000; // Set a timeout for outgoing HTTP calls
  nock(`${config.userServiceURL}/user/`)
    .get('/1', () => clock.tick(2000)) // Reply delay is bigger than the configured timeout 👆
    .reply(200);
  const loggerDouble = sinon.stub(logger, 'error');
  const orderToAdd = {
    userId: 1,
    productId: 2,
    mode: 'approved',
  };

  // Act
  // 👇 try to add a new order which should fail due to the User service not being available
  const response = await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  // 👇 At least our code does its best given this situation
  expect(response.status).toBe(503);
  expect(loggerDouble.lastCall.firstArg).toMatchObject({
    name: 'user-service-not-available',
    stack: expect.any(String),
    message: expect.any(String),
  });
});
```
💊 The 'poisoned message' test - when the message consumer gets an invalid payload that might put it in stagnation
👉What & so what - When testing flows that start or end in a queue, I bet you're going to bypass the message queue layer, where the code and libraries consume a queue, and approach the logic layer directly. Yes, it makes things easier but leaves a class of risks uncovered. For example, what if the logic part throws an error or the message schema is invalid, but the message queue consumer fails to translate this exception into a proper message queue action? For example, the consumer code might fail to reject the message or increment the number of attempts (depending on the type of queue that you're using). When this happens, the message will enter a loop where it is served again and again. Since this will apply to many messages, things can get really bad as the queue gets highly saturated. For this reason this syndrome is called the 'poisoned message'. To mitigate this risk, the tests' scope must include all the layers, like you probably do when testing against APIs. Unfortunately, this is not as easy as testing with a DB because message queues are flaky. Here is why
When testing with real queues things get curiouser and curiouser: tests from different processes will steal messages from each other, purging queues is harder than you might think (e.g., SQS demands 60 seconds to purge queues), to name a few challenges that you won't find when dealing with a real DB
Here is a strategy that works for many teams and holds a small compromise - use a fake in-memory message queue. By 'fake' I mean something simplistic that acts like a stub/spy and does nothing but tell when certain calls are made (e.g., consume, delete, publish). You might find reputable fakes/stubs for your own message queue, like this one for SQS, or you can code one easily yourself. No worries, I'm not in favor of maintaining testing infrastructure myself either; this proposed component is extremely simple and unlikely to surpass 50 lines of code (see example below). On top of this, whether using a real or fake queue, one more thing is needed: create a convenient interface that tells the test when certain things happened, like when a message was acknowledged/deleted or a new message was published. Without this, the test never knows when certain events happened and leans toward quirky techniques like polling. Having this setup, the test will be short and flat, and you can easily simulate common message queue scenarios like out-of-order messages, batch reject, duplicated messages and, in our example, the poisoned message scenario (using RabbitMQ):
📝 Code
Create a fake message queue that does almost nothing but record calls, see full example here
```javascript
class FakeMessageQueueProvider extends EventEmitter {
  // Implement here
  publish(message) {}
  consume(queueName, callback) {}
}
```
Make your message queue client accept a real or fake provider
```javascript
class MessageQueueClient extends EventEmitter {
  // Pass to it a fake or real message queue
  constructor(customMessageQueueProvider) {}
  publish(message) {}
  consume(queueName, callback) {}
  // Simple implementation can be found here:
  // https://github.com/testjavascript/nodejs-integration-tests-best-practices/blob/master/example-application/libraries/fake-message-queue-provider.js
}
```
Expose a convenient function that tells when certain calls were made
```javascript
const FakeMessageQueueProvider = require('./libs/fake-message-queue-provider');
const MessageQueueClient = require('./libs/message-queue-client');
const newOrderService = require('./domain/newOrderService');

test('When a poisoned message arrives, then it is being rejected back', async () => {
  // Arrange
  const messageWithInvalidSchema = { nonExistingProperty: 'invalid❌' };
  const messageQueueClient = new MessageQueueClient(
    new FakeMessageQueueProvider()
  );
  // Subscribe to new messages and pass the handler function
  messageQueueClient.consume('orders.new', newOrderService.addOrder);

  // Act
  await messageQueueClient.publish('orders.new', messageWithInvalidSchema);
  // Now all the layers of the app will get stretched 👆, including logic and message queue libraries

  // Assert
  await messageQueueClient.waitFor('reject', { howManyTimes: 1 });
  // 👆 This tells us that eventually our code asked the message queue client to reject this poisoned message
});
```
👉What & why - When publishing a library to npm, all your tests might easily pass BUT... the same functionality will fail on the end-user's computer. How come? Tests are executed against the local developer files, but the end-user is only exposed to artifacts that were built. See the mismatch here? After running the tests, the package files are transpiled (I'm looking at you, Babel users), zipped and packed. If a single file is excluded due to .npmignore or a polyfill is not added correctly, the published code will lack mandatory files
📝 Code
Consider the following scenario, you're developing a library, and you wrote this code:
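A minimal sketch of such a library - the file names and the tiny function are illustrative, with the entry point depending on a sibling calculate.js file:

```javascript
// calculate.js - where the actual logic lives
module.exports.calculate = (n) => n;

// index.js - the package entry point, it requires the sibling file
const { calculate } = require('./calculate');
module.exports.fn1 = () => calculate(1);

// package.json - note that 'calculate.js' is missing from the 'files' allowlist,
// so it won't be part of the published tarball:
// {
//   "name": "my-package",
//   "version": "1.0.0",
//   "files": ["index.js"]
// }
```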
See, 100% coverage, all tests pass locally and in the CI ✅, it just won't work in production 👹. Why? because you forgot to include the calculate.js in the package.json files array 👆
What can we do instead? We can test the library as its end-users do. How? Publish the package to a local registry like verdaccio, then let the tests install and approach the published code. Sounds troublesome? Judge for yourself 👇
📝 Code
```javascript
// global-setup.js

// 1. Setup the in-memory NPM registry, one function that's it! 🔥
await setupVerdaccio();

// 2. Building our package
await exec('npm', ['run', 'build'], {
  cwd: packagePath,
});

// 3. Publish it to the in-memory registry
await exec('npm', ['publish', '--registry=http://localhost:4873'], {
  cwd: packagePath,
});

// 4. Installing it in the consumer directory
await exec('npm', ['install', 'my-package', '--registry=http://localhost:4873'], {
  cwd: consumerPath,
});

// Test file in the consumerPath
// 5. Test the package 🚀
test('should succeed', async () => {
  const { fn1 } = await import('my-package');
  expect(fn1()).toEqual(1);
});
```
With this setup in place, more scenarios become easily testable:
- Testing different versions of the peer dependencies you support - let's say your package supports React 16 to 18, you can now test that
- Testing both ESM and CJS consumers
- If you have a CLI application, you can test it like your users do
- Making sure all the voodoo magic in that Babel file is working as expected
🗞 The 'broken contract' test - when the code is great but its corresponding OpenAPI docs lead to a production bug
👉What & so what - I'm quite confident that almost no team tests their OpenAPI correctness. "It's just documentation", "we generate it automatically based on code" are typical beliefs behind this. Let me show you how this auto-generated documentation can be wrong and lead not only to frustration but also to a bug. In production.
Consider the following scenario: you're requested to return an HTTP error status code if an order is duplicated, but forget to update the OpenAPI specification with this new HTTP status response. While some frameworks can update the docs with new fields, none can realize which errors your code throws; this labour is always manual. On the other side of the line, the API client is doing everything just right, going by the spec that you published, adding orders with some duplication because the docs don't forbid doing so. Then, BOOM, production bug -> the client crashes and shows an ugly unknown error message to the user. This type of failure is called the 'contract' problem: two parties interact, each has code that works perfectly, they just operate under different specs and assumptions. While there are fancy, sophisticated and exhaustive solutions to this challenge (e.g., PACT), there are also leaner approaches that get you covered easily and quickly (at the price of covering fewer risks).
The following sweet technique is based on libraries (jest, mocha) that listen to all network responses, compare the payload against the OpenAPI document, and if any deviation is found - make the test fail with a descriptive error. With this new weapon in your toolbox and almost zero effort, another risk is ticked. It's a pity that these libs can't also assert against the incoming requests, to tell you that your tests use the API wrong. One small caveat and an elegant solution: these libraries dictate putting an assertion statement in every test - expect(response).toSatisfyApiSpec() - a bit tedious and reliant on human discipline. You can do better if your HTTP client supports a plugin/hook/interceptor, by putting this assertion in a single place that will apply to all the tests:
The OpenAPI doesn't document HTTP status '409', no framework knows to update the OpenAPI doc based on thrown exceptions
"responses":{ "200":{ "description":"successful", } , "400":{ "description":"Invalid ID", "content":{} },// No 409 in this list😲👈 }
The test code
```javascript
const jestOpenAPI = require('jest-openapi');
jestOpenAPI('../openapi.json');

test('When an order with duplicated coupon is added, then 409 error should get returned', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    couponId: uuid(),
  };
  await axiosAPIClient.post('/order', orderToAdd);

  // Act
  // We're adding the same coupon twice 👇
  const receivedResponse = await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  expect(receivedResponse.status).toBe(409);
  expect(receivedResponse).toSatisfyApiSpec();
  // This 👆 will throw if the API response, body or status, differs from what is stated in the OpenAPI
});
```
Trick: If your HTTP client supports any kind of plugin/hook/interceptor, put the following code in 'beforeAll'. This covers all the tests against OpenAPI mismatches
```javascript
beforeAll(() => {
  axios.interceptors.response.use((response) => {
    expect(response).toSatisfyApiSpec();
    // With this 👆, add nothing to the tests - each will fail if the response deviates from the docs
    return response;
  });
});
```
The examples above were not meant to be merely a checklist of 'don't forget' test cases, but rather a fresh mindset on what tests could cover for you. Modern tests are not just about functions or user flows, but about any risk that might visit your production. This is doable only with component/integration tests, never with unit or end-to-end tests. Why? Because unlike unit tests, you need all the parts to play together (e.g., the DB migration file, with the DAL layer and the error handler all together). Unlike E2E, you have the power to simulate in-process scenarios that demand some tweaking and mocking. Component tests allow you to include many production moving parts early on your machine. I like calling this 'production-oriented development'
When was the last time you introduced a new pattern to your code? The use-case pattern is a great candidate: it's powerful, sweet, easy to implement, and can strategically elevate your backend code quality in a short time.
The term 'use case' means many different things in our industry. It's used by product folks to describe a user journey and mentioned by various famous architecture books to describe vague high-level concepts. This article focuses on its practical application at the code level, emphasizing its surprising merits and how to implement it correctly.
Technically, the use-case pattern code belongs between the controller (e.g., API routes) and the business logic services (like those calculating or saving data). The use-case code is called by the controller and describes, in high-level words and a simple manner, the flow that is about to happen. Doing so increases the code's readability and navigability, pushes complexity toward the edges, improves observability, and brings 3 other merits that are shown below with examples.
But before we delve into its mechanics, let's first touch on a common problem it aims to address and see some code that calls for trouble.
Prefer a 10 min video? Watch here, or keep reading below
Imagine a developer, returning to a codebase she hasn't touched in months, tasked with fixing a bug in the 'new orders flow'—specifically, an issue with price calculation in an electronic shop app.
Her journey begins promisingly smooth:
- 🤗 Testing - She starts her journey from the automated tests to learn about the flow with an outside-in approach. The testing code is short and standard, as it should be:
test("When adding an order with 100$ product, then the price charge should be 100$ ",async()=>{ // .... })
- 🤗 Controller - She moves to skim through the implementation and starts from the API routes. Unsurprisingly, the Controller code is straightforward:
app.post("/api/order",async(req:Request,res:Response)=>{ const newOrder = req.body; await orderService.addOrder(newOrder);// 👈 This is where the real-work is done res.status(200).json({message:"Order created successfully"}); });
Smooth sailing thus far, almost zero complexity. Typically, the controller would now hand off to a Service where the real implementation begins, she navigates into the order service to find where and how to fix that pricing bug.
- 😲 The service - Suddenly! She is thrown into hundreds of lines of code (at best) with tons of details. She encounters classes with intricate states, inheritance hierarchies, a dependency injection framework that wires all the dependent services, and other boilerplate code. Here is a sneak peek from a real-world service, already simplified for brevity. Read it, feel it:
```typescript
let DBRepository;
export class OrderService extends ServiceBase<OrderDto> {
  async addOrder(orderRequest: OrderRequest): Promise<Order> {
    try {
      ensureDBRepositoryInitialized();
      const { openTelemetry, monitoring, secretManager, priceService, userService } =
        dependencyInjection.getVariousServices();
      logger.info("Add order flow starts now", orderRequest);
      openTelemetry.sendEvent("new order", orderRequest);
      const validationRules = await getFromConfigSystem("order-validation-rules");
      const validatedOrder = validateOrder(orderRequest, validationRules);
      if (!validatedOrder) {
        throw new Error("Invalid order");
      }
      this.base.startTransaction();
      const user = await userService.getUserInfo(validatedOrder.customerId);
      if (!user) {
        const savedOrder = await tryAddUserWithLegacySystem(validatedOrder);
        return savedOrder;
      }
      // And it goes on and on until the pricing module is mentioned
    }
  }
}
```
So many details and things to learn upfront - which of them is crucial for her to learn now before dealing with her task? How can she find where that pricing module is?
She is not happy. Right off the bat, she must acquaint herself with a handful of product and technical narratives. She just fell off the complexity cliff: from a zero-complexity controller straight into a 1000-piece puzzle, many pieces of which are unrelated to her task.
In a perfect world, she would love first to get a high-level brief of the involved steps so she can understand the whole flow, and from this comfort standpoint choose where to deepen her journey. This is what this pattern is all about.
The use-case is a file with a single function that is being called by the API controller to orchestrate the various implementation services. It's merely a simple function that enumerates and calls the code that does the actual job:
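A minimal sketch, with illustrative step names taken from the order flow that this post uses throughout:

```typescript
// add-order-use-case.ts
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 Only the story of the flow - every step is a plain function call
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  const savedOrder = await insertOrder(orderWithPricing);
  await sendSuccessEmailToCustomer(savedOrder);
  return mapFromRepositoryToDto(savedOrder);
}
```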
Each interaction with the system - whether it's posting a new comment, requesting user deletion, or any other action - is managed by a dedicated use-case function. Each use-case comprises multiple 'steps': function calls that fulfill the desired flow.
By design, it's short, flat, no If/else, no try-catch, no algorithms, just plain calls to functions. This way, it tells the story in the simplest manner. Note how it doesn't share too many details, but tells enough for one to understand 'WHAT' is happening here and 'WHO' is doing it, but not 'HOW'.
When seeking a specific book in the local library, the visitor doesn't have to skim through all the shelves to find a specific topic of interest. A Library, like any other information system, uses a navigational system, wayfinding signage, to highlight the path to a specific information area.
The library catalog redirects the reader to the area of interest
Similarly, in software development, when a developer needs to address a particular issue - such as fixing a bug in pricing calculations - the 'use case' acts like a navigational tool within the application. It serves as a hitchhiker's guide, or the yellow pages, pinpointing exactly where to find the necessary piece of code. While other organizational strategies like modularization and folder structures offer ways to manage code, the 'use case' approach provides a more focused and precise index: it shows only the relevant areas (and not 50 unrelated modules), it tells when precisely each module is used, what the specific entry point is and which exact parameters are passed.
When the code reader's journey starts at the level of implementation services, she is immediately bombarded with intricate details. This immersion exposes her to both product and technical complexities right from the start. Typically, like in our example case, the code first uses a dependency injection system to instantiate some classes, checks for nulls in the state and gets some values from the distributed config system - all before even starting on the primary task. This is called accidental complexity. Tackling complexity is one of the finest arts of app design; as the code planner you can't just eliminate complexity, but you may at least reduce the chances of someone meeting it.
Imagine your application as a tree where branches represent functions and the fruits are pockets of embedded complexity, some of which are poisoned (i.e., unnecessary complexities). Your objective is to structure this tree so that navigating through it exposes the visitor to as few poisoned fruits as possible:
The accidental-complexity tree: A visitor aiming to reach a specific leaf must navigate through all the intervening poisoned fruits.
This is where the 'Use Case' approach shines: by prioritizing high-level product steps and minimal technical details at the outset, it acts as a navigation system that simplifies access to the various parts of the application. With this navigation tool, she can easily ignore steps that are unrelated to her work, and avoid poisoned fruits. A true strategic design win.
The spread-complexity tree: Complexity is pushed to the periphery, allowing the reader to navigate directly to the essential fruits only.
When embarking on a new coding flow, where do you start? After digesting the requirements and setting up some initial API routes and high-level component tests, the next logical step might be less obvious. Here's a strategy: begin with a use-case. This approach promotes an outside-in workflow that not only streamlines development but also exposes potential risks early on.
While drafting a new use-case, you essentially map out the various steps of the process. Each step is a call to some service or repository function, sometimes before it even exists. Effortlessly and spontaneously, these steps become your TODO list, a live document that tells not only what should be implemented but also where risky gotchas hide. Take, for instance, this straightforward use-case for adding an order:
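A sketch of such a draft - the step functions are hypothetical, and some don't exist yet; they are the TODO list:

```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  await assertCustomerExists(validatedOrder.customerId); // 👈 calls an external Microservice
  const orderWithPricing = calculateOrderPricing(validatedOrder); // 👈 pricing rules to confirm
  const savedOrder = await insertOrder(orderWithPricing);
  await sendSuccessEmailToCustomer(savedOrder); // 👈 needs an email service token
  return savedOrder;
}
```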
This structured approach allows you to preemptively tackle potential implementation hurdles:
- sendSuccessEmailToCustomer - What if you lack a necessary email service token from the Ops team? Sometimes, this demands approval and might take more than a week (believe me, I know). Acting now, before spending 3 days on coding, can make a big difference.
- calculateOrderPricing - Reminds you to confirm pricing details with the product team - ideally before they're out of office, avoiding delays that could impact your delivery timeline.
- assertCustomerExists - This call goes to an external Microservice which belongs to the User Management team. Did they already provide an OpenAPI specification of their routes? Check your Slack now; if they didn't yet, asking early can prevent this from becoming a roadblock later.
Not only does this high-level thinking highlight your tasks and risks, it's also an optimal spot to start the design from:
Early on when initiating a use-case, the developers define the various types, function signatures, and their initial skeleton return data. This process naturally evolves into an effective design drill where the overall flow is decomposed into small units that actually fit. This sketch-out results in discovering early when puzzle pieces don't fit, while considering the underlying technologies. Here is an example: once I sketched a use-case and initially came up with these steps:
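A hypothetical reconstruction of that first sketch, with the email step placed before the order is saved:

```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  await sendSuccessEmailToCustomer(orderWithPricing); // ❗️needs an 'Order Id'...
  const savedOrder = await insertOrder(orderWithPricing); // ...which exists only after this step
  return savedOrder;
}
```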
Going with my initial use-case above, an email is sent before the order is saved. Soon enough the compiler yelled at me: the email function's signature is not satisfied, an 'Order Id' parameter is needed, but to obtain one the order must be saved to the DB first. I tried to reorder the steps; unfortunately, it turned out that my ORM does not return the ID of saved entities. I'm stuck, my design struggles - at least this is realized before spending days on details. Unlike designing with papers and UML, designing with use-cases brings no overhead. Moreover, unlike high-level diagrams detached from implementation realities, use-case design is grounded in the actual constraints of the technology being used.
Say you have 82.35% test code coverage - are you happy and confident to deploy? I'd suggest that anyone below 100% must first clarify which code exactly is not covered with testing. Is it some nitty-gritty niche code, or actually critical business operations that are not fully tested? Typically, answering this requires scrutinizing the coverage of all the app's files, a daunting task.
Use-cases simplify the coverage digest: when looking directly into the use-cases folder, one gets 'feature coverage', a unique look into which user features and steps lack testing:
The use-cases folder test coverage report, some use-cases are only partially tested
See how the code above has excellent overall coverage, 82.35%. But what about the remaining 17.65%? Looking at the report triggers a red flag: the unusual 'payment-use-case' is not tested. This flow is where revenues are generated, a critical financial process which, as it turns out, has very low test coverage. This significant observation calls for immediate action. Use-case coverage thus not only helps in understanding what parts of your application are tested but also prioritizes testing efforts based on business criticality rather than mere technical functionality.
The influential book "Domain-Driven Design" advocates for "committing the team to relentlessly exercise the domain language in all communications within the team and in the code." This principle asserts that aligning code closely with product narratives fosters a common language among diverse stakeholders (e.g., product, team-leads, frontend, backend). While this sounds sensible, this advice is also a little vague - how and where should this happen?
Use-cases bring this idea down to earth: the use-case files are named after user journeys in the system (e.g., purchase-new-goods), the use-case code itself naturally describes the flow in a product language. For instance, if employees commonly use the term 'cut' at the water cooler to refer to a price reduction, the corresponding use-case should employ a function named 'calculatePriceCut'. This naming convention not only reinforces the domain language but also enhances mutual understanding across the team.
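A small sketch of how this might look in code, assuming the hypothetical 'cut' terminology above:

```typescript
// purchase-new-goods-use-case.ts - the file is named after the user journey
export async function purchaseNewGoodsUseCase(purchaseRequest: PurchaseRequest) {
  const validatedPurchase = validatePurchase(purchaseRequest);
  // 'cut' is the word the business folks use for a price reduction
  const priceAfterCut = calculatePriceCut(validatedPurchase);
  return await savePurchase(priceAfterCut);
}
```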
I bet you've encountered the situation where you turn the log level to 'Debug' (or any other verbose mode) and get a gazillion, overwhelming, unbearable amount of log statements. Chances are you've also met the opposite: setting the logger level to 'Info' and finding almost zero logging for the specific route that you're looking into. It's hard to formalize among team members when exactly each type of logging should be invoked; the result is typically inconsistent and lacking observability.
Use-cases can drive trustworthy and consistent monitoring by taking advantage of the produced use-case steps. Since the precious work of breaking-down the flow into meaningful steps was already done (e.g., send-email, charge-credit-card), each step can produce the desired level of logging. For example, one team's approach might be to emit logger.info on a use-case start and use-case end, and then each step will emit logger.debug. Whatever the chosen specific level is, use-case steps bring consistency and automation. Put aside logging, the same can be applied with any other observability technique like OpenTelemetry to produce custom spans for every flow step.
The implementation, though, demands some thinking: cluttering every step with a log statement is both verbose and depends on manual human work:
```typescript
// ❗️Verbose use case
export async function addOrderUseCase(orderRequest: OrderRequest): Promise<Order> {
  logger.info("Add order use case - Adding order starts now", orderRequest);
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  logger.debug("Add order use case - The order was validated", validatedOrder);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  logger.debug("Add order use case - The order pricing was decided", validatedOrder);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderWithPricing);
  logger.debug("Add order use case - Verified the user balance already", purchasingCustomer);
  const returnOrder = mapFromRepositoryToDto(purchasingCustomer as unknown as OrderRecord);
  logger.info("Add order use case - About to return result", returnOrder);
  return returnOrder;
}
```
One way around this is creating a step wrapper function that makes it observable. This wrapper function will get called for each step:
```typescript
import { openTelemetry } from "@opentelemetry";

async function runUseCaseStep(stepName, stepFunction) {
  logger.debug(`Use case step ${stepName} starts now`);
  // Create Open Telemetry custom span
  openTelemetry.startSpan(stepName);
  return await stepFunction();
}
```
Now the use-case gets automated and consistent transparency:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const validatedOrder = await runUseCaseStep("Validation", validateAndCoerceOrder.bind(null, orderRequest));
  const orderWithPricing = await runUseCaseStep("Calculate price", calculateOrderPricing.bind(null, validatedOrder));
  await runUseCaseStep("Send email", sendSuccessEmailToCustomer.bind(null, orderWithPricing));
}
```
The code is a little simplified; in a real-world wrapper you'll have to put a try-catch and cover other corner cases, but it makes the point: each step is a meaningful milestone in the user's journey that gets automated and consistent observability.
Since use-cases are mostly about zero complexity, they use no code constructs, only flat calls to functions. No If/Else, no switch, no try/catch - nothing but a simple list of steps. A while ago I decided to put just one If/Else in a use-case:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  if (purchasingCustomer.isPremium) { // ❗️
    sendEmailToPremiumCustomer(purchasingCustomer);
    // This easily will grow with time to multiple if/else
  }
}
```
A month later when I visited the code above there were already three nested If/Elses. A year from now, the function above will host typical imperative code with many nested branches. Avoid this slippery road by drawing a very strict border: put the conditions within the step functions, as sketched below.
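A sketch of the same flow with the border enforced - the hypothetical step function below hides the premium condition:

```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  // The premium If/Else now lives inside this step function 👇
  await sendEmailToCustomerIfEligible(purchasingCustomer);
}
```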
The finest art of a great use case is finding the right level of detail. At this early stage, the reader is like a traveler who uses a map to get some sense of the area, or to find a specific road - definitely not to learn about every road in the country. On the other hand, a good map doesn't show only the main highway and nothing else. For example, the following use-case is too short and vague:
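A sketch of such an over-abbreviated use-case (illustrative names):

```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // ❗️Too vague - the whole journey hides behind a single call
  return await orderService.handleNewOrder(orderRequest);
}
```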
The code above doesn't tell a story, nor does it eliminate some paths from the journey. Conversely, the following code does a better job of telling the story in brief:
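A sketch of a more balanced level of detail, again with illustrative step names:

```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // Enough steps to tell the story, still no implementation details
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  await chargeCreditCard(orderWithPricing);
  const savedOrder = await insertOrder(orderWithPricing);
  await sendSuccessEmailToCustomer(savedOrder);
  return savedOrder;
}
```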
Things get a little more challenging when dealing with long flows. What if there are a handful of important steps, say 20? What if multiple use-cases have a lot of repetition and shared steps? Consider the case where 'admin approval' is a multi-step process which is invoked by a handful of different use-cases. When facing this, consider breaking down into multiple use-cases where one is allowed to call the other, as sketched below.
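A sketch of this composition, with a hypothetical shared approval use-case that several flows can call:

```typescript
// approve-large-order-use-case.ts
export async function approveLargeOrderUseCase(orderId: string) {
  const order = await getOrderUseCase(orderId);
  // A use-case may call another use-case 👇 - the shared multi-step approval flow
  await adminApprovalUseCase(order);
  await markOrderAsApproved(order);
}
```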
3. When you have no choice, control the DB transaction from the use-case
What if step 2 and step 5 both deal with data and must be atomic (fail or succeed together)? Typically you'll handle this with DB transactions, but since each step is discrete, how can a transaction be shared among the coupled steps?
If the steps take place one after the other, it makes sense to let the downstream service/repository handle them together and abstract the transaction away from the use-case. What if the atomic steps are not consecutive? In this case, though not ideal, there is no escape from making the use-case acquainted with a transaction object:
```typescript
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const transaction = Repository.startTransaction();
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderRequest, transaction);
  const orderWithPricing = calculateOrderPricing(purchasingCustomer);
  const savedOrder = await insertOrder(orderWithPricing, transaction);
  const returnOrder = mapFromRepositoryToDto(savedOrder);
  Repository.commitTransaction(transaction);
  return returnOrder;
}
```
A use-case file is created per user flow that is triggered from an API route. This model makes sense for significant flows, but what about small operations like getting an order by id? A 'get-order-by-id' use case is likely to have 1 line of code - it seems like unnecessary overhead to create a use-case file for every small request. In this case, consider aggregating multiple operations under a single conceptual use-case file. Below, for example, all the order queries co-live under the query-orders use-case file:
```typescript
// query-orders-use-cases.ts
export async function getOrder(id) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const result = await orderRepository.getOrderByID(id);
  return result;
}

export async function getAllOrders(criteria) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const result = await orderRepository.queryOrders(criteria);
  return result;
}
```
If you find it valuable, you'll also get a great return for your modest investment: no fancy tooling is needed, and the learning time is close to zero (in fact, you just read one of the longest articles on this matter...). There is also no need to refactor a whole system; rather, implement it gradually, per feature.
Once you become accustomed to using it, you'll find that this technique extends well beyond API routes. It's equally beneficial for managing message queue subscriptions and scheduled jobs. Backend aside, use it as the facade of every module or library - the code that is called by the entry file and orchestrates the internals. The same idea can be applied in the frontend as well: declare the core actors at the component's top level. Without implementation details, just put references to the component's event handlers and hooks - now the reader knows about the key events that will drive this component.
You might think this all sounds remarkably straightforward - and it is. My apologies, this article wasn't about cutting-edge technologies. Neither did it cover shiny new dev tooling or AI-based rocket science. In a land where complexity is the key enemy, simple ideas can be more impactful than sophisticated tooling, and the use-case is a powerful and sweet pattern that is meant to live in every piece of software.
As a testing consultant, I have read tons of testing articles throughout the years. The majority are nice-to-read, casual pieces of content that are not always worth your precious time. Once in a while, not very often, I landed on an article that was shockingly good and could genuinely improve your test writing skills. I've cherry-picked these outstanding articles for you and added my abstract nearby. Half of these articles relate directly to JavaScript/Node.js; the second half covers ubiquitous testing concepts that are applicable in every language
Why did I find these articles to be outstanding? First, the writing quality is excellent. Second, they deal with the 'new world of testing', not the commonly known 'TDD-ish' stuff but rather modern concepts and tooling
Too busy to read them all? Search for articles that are decorated with a medal 🏅 - these are true masterpieces that you never want to miss
Before we start: If you haven't heard, I launched my comprehensive Node.js testing course a week ago (curriculum here). There are less than 48 hours left for the 🎁 special launch deal
Here they are, 10 outstanding testing articles:
📄 1. 'Selective Unit Testing – Costs and Benefits'
✍️ Author: Steve Sanderson
🔖 Abstract: We all found ourselves at least once in the ongoing and flammable discussion about 'units' vs 'integration'. This article delves into a greater level of specificity and discusses WHEN unit tests shine by considering the costs of writing these tests under various scenarios. Many treat their testing strategy as a static model - a testing technique they always apply regardless of the context. "Always write unit tests against functions" and "Write mostly integration tests" are the types of arguments often heard. Conversely, this article suggests that the attractiveness of unit tests should be evaluated based on the costs and benefits per module. The article classifies multiple scenarios where the net value of unit tests is high or low, for example:
If your code is basically obvious – so at a glance you can see exactly what it does – then additional design and verification (e.g., through unit testing) yields extremely minimal benefit, if any
The author also presents a 2x2 model to visualize when the attractiveness of unit tests is high or low
Side note, not part of the article: Personally, I (Yoni) always start with component tests, outside-in, covering the high-level user flows first (a.k.a. the testing diamond). Then, later, once I have functions, I add unit tests based on their net value. This article helped me a lot in classifying and evaluating the benefits of units in various scenarios
🔖 Abstract: The author outlines, with a code example, the unavoidable tragic fate of a tester who asserts on implementation details. Put aside the effort of testing so many details - going this route always ends with 'false positives' and 'false negatives' that cloud the tests' reliability. The article illustrates this with a frontend code example, but the takeaway is ubiquitous to any kind of testing
"There are two distinct reasons that it's important to avoid testing implementation details. Tests which test implementation details:
Can break when you refactor application code. False negatives
May not fail when you break application code. False positives"
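To make the quote concrete, here is a small, hypothetical sketch (the Counter class is illustrative, not from the article): the behavior-focused test below survives refactoring, while a test that peeks at the internal counter field would break on a rename (false negative) and could keep passing even if display() broke (false positive):

```javascript
// Code under test: a tiny counter with a private implementation detail
class Counter {
  #count = 0; // internal detail - asserting on this couples the test to the implementation
  increment() {
    this.#count += 1;
  }
  display() {
    return `Count: ${this.#count}`; // the behavior users actually observe
  }
}

// ✅ Behavior test - fails only when the user-visible output breaks
test('When incrementing once, then the display shows 1', () => {
  const counter = new Counter();
  counter.increment();
  expect(counter.display()).toBe('Count: 1');
});
```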
🔖 Abstract: This one is the entire Microservices and distributed modern testing bible packed in a single long article that is also super engaging. I remember when I came across it four years ago, wintertime; I spent an hour every day under my blanket before sleep with a smile spread over my face. I clicked on every link and paused after every paragraph to think - a whole new world was opening in front of me. In fact, it was so fascinating that it made me want to specialize in this domain. Fast forward, years later, this is a major part of my work and I enjoy every moment
This paper starts by explaining why E2E tests, unit tests and exploratory QA will fall short in a distributed environment - and not only this, why any kind of coded test won't be enough and a rich toolbox of techniques is needed. It goes through a handful of modern testing techniques that are unfamiliar to most developers. One of its key parts deals with what should be the canonical developer's testing technique: the author advocates for "big unit tests" (i.e., component tests) as they strike a great balance between developer comfort and realism
I coined the term “step-up testing”, the general idea being to test at one layer above what’s generally advocated for. Under this model, unit tests would look more like integration tests (by treating I/O as a part of the unit under test within a bounded context), integration testing would look more like testing against real production, and testing in production looks more like, well, monitoring and exploration. The restructured test pyramid (test funnel?) for distributed systems would look like the following:
Beyond its main scope, whatever type of system you are dealing with - this article will broaden your perspective on testing and expose you to many new ideas that are highly applicable
👓 Read time: > 2 hours (10,500 words with many links)
📄 4. 'How to Unit Test with Node.js?' (JavaScript examples, for beginners)
✍️ Author: Ryan Jones
🔖 Abstract: One single recommendation for beginners: every other article on this list covers advanced testing. This article, and only this one, is meant for testing newbies who are looking to take their first practical steps in this world
This tutorial was chosen from a handful of alternatives because it's well-written and also relatively comprehensive. It covers the first-steps 'kata' that a beginner should learn first: the test anatomy syntax, the test runner's CLI, assertions and asynchronous tests. It goes without saying that this knowledge won't be sufficient for covering a real-world app with tests, but it gets you safely to the next phase. My personal advice: after reading this one, your next step is learning about test doubles (mocking)
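For a taste of what such a tutorial covers, here is a minimal, hypothetical sketch of the basic test anatomy in Jest-style syntax (calculateTotal and fetchPrice are illustrative functions, not from the article):

```javascript
describe('Order calculator', () => {
  // A synchronous test following the arrange-act-assert anatomy
  test('When summing two products, then the total includes both prices', () => {
    // Arrange
    const products = [{ price: 10 }, { price: 20 }];
    // Act
    const total = calculateTotal(products);
    // Assert
    expect(total).toBe(30);
  });

  // An asynchronous test - the runner waits for the returned promise
  test('When fetching a remote price, then a positive number is resolved', async () => {
    const price = await fetchPrice('some-product-id');
    expect(price).toBeGreaterThan(0);
  });
});
```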
🔖 Abstract: The article opens with: 'I hear that people feel an uncontrollable urge to write unit tests nowadays. If you are one of those affected, spare a few minutes and consider these reasons for NOT writing unit tests'. Despite these words, the article is not against unit tests in principle; rather it highlights when & where unit tests fall short. In these cases, other techniques should be considered. Here is an example: unit tests inherently have a lower return on investment, and the author offers a sound analogy for this: 'If you are painting a house, you want to start with the biggest brush at hand and spare the tiny brush for the end to deal with fine details. If you begin your QA work with unit tests, you are essentially trying to paint the entire house using the finest Chinese calligraphy brush...'
📄 6. 'Mocking is a Code Smell' (JavaScript examples)
✍️ Author: Eric Elliott
🔖 Abstract: Most of the articles here belong to the 'modern wave of testing'; here is something more 'classic', appealing to TDD lovers or just anyone who needs to write unit tests. This article is about HOW to reduce the amount of mocking (test doubles) in your tests. Not only because mocking is an overhead in test writing, but also because mocks hint that something might be wrong. In other words, mocking is not definitely wrong and something to fix right away, but a lot of mocking is a sign of something not ideal. Consider a module that inherits from many others, or a chatty one that collaborates with a handful of other modules to do its job - testing and changing this structure is a burden:
"Mocking is required when our decomposition strategy has failed"
The author goes through a variety of techniques to design more autonomous units, like using pure functions by isolating side-effects from the rest of the program logic, using pub/sub, isolating I/O, composing units with patterns like monadic composition, and some more
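As a quick, hypothetical illustration of the first technique (the function and repository names below are made up): once the side-effects are pushed to a thin outer shell, the core logic becomes a pure function that can be tested with plain input/output assertions and zero mocks:

```javascript
// Pure core - no I/O, no mocks needed, just feed inputs and assert outputs
function calculateDiscount(order, customerHistory) {
  return customerHistory.totalPurchases > 1000 ? order.total * 0.9 : order.total;
}

// Thin impure shell - the only place that touches I/O
async function applyDiscount(orderId) {
  const order = await orderRepository.getById(orderId); // I/O isolated here
  const history = await customerRepository.getHistory(order.customerId);
  return calculateDiscount(order, history); // the interesting logic stays pure
}
```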
The overall article tone is balanced. In some parts, it encourages functional programming and techniques that are far from the mainstream - consider taking these few parts with a grain of salt
🔖 Abstract: I love this one so much. The author exemplifies how, unexpectedly, it is sometimes the good developers with their great intentions who write bad tests:
Too often, software developers approach unit testing with the same flawed thinking... They mechanically apply all the “rules” they learned in production code without examining whether they’re appropriate for tests. As a result, they build skyscrapers at the beach
Concrete code examples show how test readability deteriorates once we apply 'skyscraper' thinking, and how to keep it simple. In one part, he demonstrates how thoughtfully violating the DRY principle allows the reader to stay within the test while still keeping the code maintainable. This article alone, in 11 minutes, can greatly improve the tests of developers who tend to write sophisticated tests. If you have someone like this in your team, you now know what to do
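To sketch the idea with a hypothetical example (the checkout function is illustrative): repeating a few setup lines inside the test keeps every relevant detail in front of the reader, instead of sending them to chase shared factory helpers:

```javascript
// Deliberately a bit 'wet' - the expired coupon, the price, everything is visible right here
test('When ordering with an expired coupon, then the full price is charged', async () => {
  const order = {
    productId: 1,
    price: 100,
    coupon: { code: 'SAVE10', expired: true },
  };

  const receipt = await checkout(order);

  expect(receipt.chargedAmount).toBe(100);
});
```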
📄 8. 'An Overview of JavaScript Testing in 2022' (JavaScript examples)
✍️ Author: Vitali Zaidman
🔖 Abstract: This paper is unique here as it doesn't cover a single topic but is rather a rundown of (almost) all JavaScript testing tools. This allows you to enrich the toolbox in your mind and have more screwdrivers for more types of screws. For example, knowing that there are IDE extensions that show coverage information right within the code might help you boost test adoption in the team, if needed. Knowing that there are solid, free, and open-source visual regression tools might encourage you to dip your toes in this water, to name a few examples.
"We reviewed the most trending testing strategies and tools in the web development community and hopefully made it easier for you to test your sites. In the end, the best decisions regarding application architecture today are made by understanding general patterns that are trending in the very active community of developers, and combining them with your own experience and the characteristics of your application."
The author was also kind enough to leave pros/cons alongside most tools so the reader can quickly get a sense of how the various options stack up against each other. The article covers categories like assertion libraries, test runners, code coverage tools, visual regression tools, E2E suites and more
🔖 Abstract: 'Testing in production' is a provocative term that sounds like a risky and careless approach of testing on production instead of verifying the delivery beforehand (yet another case of bad testing terminology). In practice, testing in production doesn't replace coding-time testing; it just adds additional layers of confidence by safely testing in 3 more phases: deployment, release and post-release. This comprehensive article covers dozens of techniques, some unusual, like traffic shadowing, tap compare and more. More than anything else, it illustrates a holistic testing workflow that builds confidence cumulatively, from the developer machine until the new version is serving users in production
I’m more and more convinced that staging environments are like mocks - at best a pale imitation of the genuine article and the worst form of confirmation bias.
It’s still better than having nothing - but “works in staging” is only one step better than “works on my machine”.
📄 10. 'Please don't mock me' (JavaScript examples, from JSConf)
🏅 This is a masterpiece
✍️ Author: Justin Searls
🔖 Abstract: This fantastic YouTube talk deals with the Achilles heel of testing: where exactly to mock. The dilemma of where to end the test scope - what should be mocked and what shouldn't - is presumably the most strategic test design decision. Consider, for example, having module A which interacts with module B. If you isolate A by mocking B, A will always pass, even when B's interface has changed and A's code didn't follow. This makes A's tests highly stable but... production will fail in hours. In his talk, Justin says:
"A test that never fails is a bad test because it doesn't tell you anything. Design tests to fail"
Then he goes on to tackle many other interesting mocking crossroads, with beautiful visuals and tons of insights. Please don't miss this one
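Here is a small, hypothetical sketch of the trap described above (a.js and b.js are made-up modules, using Jest-style mocking): the mock freezes B's old interface, so A's test keeps passing even after the real B has changed:

```javascript
// b.js once exported getUser(id), which was later renamed to fetchUser(id) -
// but this mock still serves the old interface, so nothing ever fails here
jest.mock('./b', () => ({ getUser: () => ({ name: 'Joe' }) }));
const { greetUser } = require('./a'); // a.js still calls b.getUser internally

test('When greeting a user, then the name is included', () => {
  expect(greetUser(1)).toBe('Hello Joe'); // ✅ Forever green... 💥 while production breaks
});
```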
Here are a few articles that I wrote. Obviously, I don't 'recommend' my own craft; I'm just modestly checking whether they appeal to you. Together, these articles gained 25,000 GitHub stars - maybe you'll find one of them useful?
This post is about tests that are easy to write, typically 5-8 lines, that cover dark and dangerous corners of our applications but are often overlooked
Some context first: How do we test a modern backend? With the testing diamond, of course, by putting the focus on component/integration tests that cover all the layers, including a real DB. With this approach, our tests closely resemble production and the user flows, while the development experience is almost as good as with unit tests. Sweet. If this topic is of interest, we've also written a guide with 50 best practices for integration tests in Node.js
But there is a pitfall: most developers write only semi-happy test cases that focus on the core user flows - like invalid inputs, CRUD operations, various application states, etc. This is indeed the bread and butter, a great start, but a whole area is left uncovered. For example, typical tests don't simulate an unhandled promise rejection that leads to a process crash, nor do they simulate a webserver bootstrap phase that might fail and leave the process idle, or HTTP calls to external services that often end with timeouts and retries. They typically don't cover the health and readiness routes, nor the integrity of the OpenAPI spec against the actual routes' schemas, to name just a few examples. There are many dead bodies buried beyond the business logic - things that sometimes go even beyond bugs and concern application downtime
Here are a handful of examples that might open your mind to a whole new class of risks and tests
July 2023: My testing course was launched: I've just released a comprehensive testing course that I've been working on for two years. 🎁 It's now on sale, but only for the month of July. Check it out at testjavascript.com
👉What & so what? - In all of your tests, you assume that the app has already started successfully, lacking a test against the initialization flow. This is a pity because this phase hides some potentially catastrophic failures: First, initialization failures are frequent - many bad things can happen here, like a DB connection failure or a new version that crashes during deployment. For this reason, runtime platforms (like Kubernetes and others) encourage components to signal when they are ready (see readiness probe). Errors at this stage also have a dramatic effect on the app's health - if the initialization fails and the process stays alive, it becomes a 'zombie process'. In this scenario, the runtime platform won't realize that something went bad, will keep forwarding traffic to it and won't create alternative instances. Besides exiting gracefully, you may want to consider logging, firing a metric, and adjusting your /readiness route. Does it work? Only a test can tell!
📝 Code
Code under test, api.js:
```javascript
// A common express server initialization
const startWebServer = () => {
  return new Promise((resolve, reject) => {
    try {
      // A typical Express setup
      expressApp = express();
      defineRoutes(expressApp); // a function that defines all routes
      expressApp.listen(process.env.WEB_SERVER_PORT);
    } catch (error) {
      // log here, fire a metric, maybe even retry and finally:
      process.exit();
    }
  });
};
```
The test:
```javascript
const api = require('./entry-points/api'); // our api starter that exposes 'startWebServer' function
const sinon = require('sinon'); // a mocking library

test('When an error happens during the startup phase, then the process exits', async () => {
  // Arrange
  const processExitListener = sinon.stub(process, 'exit');
  // 👇 Choose a function that is part of the initialization phase and make it fail
  sinon
    .stub(routes, 'defineRoutes')
    .throws(new Error('Cant initialize connection'));

  // Act
  await api.startWebServer();

  // Assert
  expect(processExitListener.called).toBe(true);
});
```
👉What & why - For many, testing errors means checking the exception type or the API response. This leaves one of the most essential parts uncovered - making the error correctly observable. In plain words, ensuring that it's logged correctly and exposed to the monitoring system. It might sound like an internal thing, implementation testing, but actually, it goes directly to a user. Yes, not the end-user, but rather another important one - the ops user who is on-call. What are the expectations of this user? At the very basic level, when a production issue arises, she must see detailed log entries, including the stack trace, cause and other properties. This info can save the day when dealing with production incidents. On top of this, in many systems, monitoring is managed separately to conclude about the overall system state using cumulative heuristics (e.g., an increase in the number of errors over the last 3 hours). To support these monitoring needs, the code must also fire error metrics. Even tests that do try to cover these needs take a naive approach by checking that the logger function was called - but hey, does it include the right data? Some write better tests that check the error type that was passed to the logger. Good enough? No! The ops user doesn't care about JavaScript class names but about the JSON data that is sent out. The following test focuses on the specific properties that are being made observable:
📝 Code
```javascript
test('When exception is thrown during request, Then logger reports the mandatory fields', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    status: 'approved',
  };
  const metricsExporterDouble = sinon.stub(metricsExporter, 'fireMetric');
  sinon
    .stub(OrderRepository.prototype, 'addOrder')
    .rejects(new AppError('saving-failed', 'Order could not be saved', 500));
  const loggerDouble = sinon.stub(logger, 'error');

  // Act
  await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  expect(loggerDouble).toHaveBeenCalledWith({
    name: 'saving-failed',
    status: 500,
    stack: expect.any(String),
    message: expect.any(String),
  });
  expect(metricsExporterDouble).toHaveBeenCalledWith('error', {
    errorName: 'example-error',
  });
});
```
👽 The 'unexpected visitor' test - when an uncaught exception meets our code
👉What & why - A typical error flow test falsely assumes two conditions: a valid error object was thrown, and it was caught. Neither is guaranteed; let's focus on the 2nd assumption: it's common for certain errors to be left uncaught. The error might get thrown before your framework error handler is ready, some npm libraries can throw surprisingly from different stacks using timer functions, or you simply forgot to set someEventEmitter.on('error', ...) - to name a few examples. These errors will find their way to the global process.on('uncaughtException') handler, hopefully, if your code subscribed to it. How do you simulate this scenario in a test? Naively, you may locate a code area that is not wrapped with try-catch and stub it to throw during the test. But here's a catch-22: if you are familiar with such an area, you are likely to fix it and ensure its errors are caught. What do we do then? We can bring to our benefit the fact that JavaScript is 'borderless': if some object can emit an event, we as its subscribers can make it emit this event ourselves. Here's an example:
📝 Code
```javascript
test('When an unhandled exception is thrown, then process stays alive and the error is logged', async () => {
  // Arrange
  const loggerDouble = sinon.stub(logger, 'error');
  const processExitListener = sinon.stub(process, 'exit');
  const errorToThrow = new Error('An error that wont be caught 😳');

  // Act
  process.emit('uncaughtException', errorToThrow); // 👈 Where the magic is

  // Assert
  expect(processExitListener.called).toBe(false);
  expect(loggerDouble).toHaveBeenCalledWith(errorToThrow);
});
```
🕵🏼 The 'hidden effect' test - when the code should not mutate at all
👉What & so what - In common scenarios, the code under test should stop early, like when the incoming payload is invalid or a user doesn't have sufficient credits to perform an operation. In these cases, no DB records should be mutated. Most tests out there in the wild settle for testing the HTTP response only - got back HTTP 400? Great, the validation/authorization probably works. Or does it? Such a test trusts the code too much; a valid response doesn't guarantee that the code behind it behaved as designed. Maybe a new record was added although the user has no permissions? Clearly you need to test this, but how would you test that a record was NOT added? There are two options here: if the DB is purged before/after every test, then just try to perform an invalid operation and check that the DB is empty afterward. If you're not cleaning the DB often (like me, but that's another discussion), the payload must contain some unique and queryable value that you can query later, hoping to get no records. This is how it looks:
📝 Code
```javascript
it('When adding an invalid order, then it returns 400 and NOT retrievable', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    mode: 'draft',
    externalIdentifier: uuid(), // no existing record has this value
  };

  // Act
  const { status: addingHTTPStatus } = await axiosAPIClient.post(
    '/order',
    orderToAdd
  );

  // Assert
  const { status: fetchingHTTPStatus } = await axiosAPIClient.get(
    `/order/externalIdentifier/${orderToAdd.externalIdentifier}`
  ); // Trying to get the order that should have failed
  expect({ addingHTTPStatus, fetchingHTTPStatus }).toMatchObject({
    addingHTTPStatus: 400,
    fetchingHTTPStatus: 404,
  }); // 👆 Check that no such record exists
});
```
🧨 The 'overdoing' test - when the code should mutate but it's doing too much
👉What & why - This is how a typical data-oriented test looks: first you add some records, then approach the code under test, and finally assert what happens to these specific records. So far, so good. There is one caveat here though: since the test narrows its focus to specific records, it ignores whether other records were unnecessarily affected. This can be really bad. Here's a short real-life story that happened to my customer: some data access code changed and introduced a bug that updates ALL the system users instead of just one. All tests passed since they focused on a specific record, which was positively updated - they just ignored the others. How would you test for and prevent this? Here is a nice trick that I was taught by my friend Gil Tayar: in the first phase of the test, besides the main records, add one or more 'control' records that should not get mutated during the test. Then, run the code under test, and besides the main assertion, check also that the control records were not affected:
📝 Code
```javascript
test('When deleting an existing order, Then it should NOT be retrievable', async () => {
  // Arrange
  const orderToDelete = {
    userId: 1,
    productId: 2,
  };
  const deletedOrder = (await axiosAPIClient.post('/order', orderToDelete)).data
    .id; // We will delete this soon
  const orderNotToBeDeleted = orderToDelete;
  const notDeletedOrder = (
    await axiosAPIClient.post('/order', orderNotToBeDeleted)
  ).data.id; // We will not delete this

  // Act
  await axiosAPIClient.delete(`/order/${deletedOrder}`);

  // Assert
  const { status: getDeletedOrderStatus } = await axiosAPIClient.get(
    `/order/${deletedOrder}`
  );
  const { status: getNotDeletedOrderStatus } = await axiosAPIClient.get(
    `/order/${notDeletedOrder}`
  );
  expect(getNotDeletedOrderStatus).toBe(200);
  expect(getDeletedOrderStatus).toBe(404);
});
```
🕰 The 'slow collaborator' test - when the other HTTP service times out
👉What & why - When your code approaches other services/microservices via HTTP, savvy testers minimize end-to-end tests because these tests lean toward happy paths (it's harder to simulate other scenarios). This mandates using some mocking tool to act like the remote service, for example, tools like nock or wiremock. These tools are great, but some use them naively and check mainly that calls outside were indeed made. What if the other service is not available in production, or if it is slower and times out occasionally (one of the biggest risks of Microservices)? While you can't wholly save this transaction, your code should do its best given the situation: retry, or at least log and return the right status to the caller. All the network mocking tools allow simulating delays, timeouts and other 'chaotic' scenarios. The question left is how to simulate a slow response without having slow tests? You may use fake timers and trick the system into believing that a few seconds passed in a single tick. If you're using nock, it offers an interesting feature to simulate timeouts quickly: the .delay function simulates slow responses, and nock will throw a timeout event immediately, without waiting, if the delay is higher than the HTTP client timeout
📝 Code
```javascript
// In this example, our code accepts new Orders and while processing them approaches the Users Microservice
test('When users service times out, then return 503 (option 1 with fake timers)', async () => {
  // Arrange
  const clock = sinon.useFakeTimers();
  config.HTTPCallTimeout = 1000; // Set a timeout for outgoing HTTP calls
  nock(`${config.userServiceURL}/user/`)
    .get('/1', () => clock.tick(2000)) // Reply delay is bigger than the configured timeout 👆
    .reply(200);
  const loggerDouble = sinon.stub(logger, 'error');
  const orderToAdd = {
    userId: 1,
    productId: 2,
    mode: 'approved',
  };

  // Act
  // 👇 try to add a new order, which should fail because the User service is not available
  const response = await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  // 👇 At least our code does its best given this situation
  expect(response.status).toBe(503);
  expect(loggerDouble.lastCall.firstArg).toMatchObject({
    name: 'user-service-not-available',
    stack: expect.any(String),
    message: expect.any(String),
  });
});
```
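For comparison, here is a sketch of the second option mentioned above, relying on nock's .delay (assuming the same setup as the previous test, where the HTTP client timeout is configured to 1000ms):

```javascript
test('When users service times out, then return 503 (option 2 with nock delay)', async () => {
  // Arrange
  nock(`${config.userServiceURL}/user/`)
    .get('/1')
    .delay(2000) // 👈 Higher than the client's 1000ms timeout, so a timeout event fires right away
    .reply(200);
  const orderToAdd = { userId: 1, productId: 2, mode: 'approved' };

  // Act
  const response = await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  expect(response.status).toBe(503);
});
```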
💊 The 'poisoned message' test - when the message consumer gets an invalid payload that might put it in stagnation
👉What & so what - When testing flows that start or end in a queue, I bet you're going to bypass the message queue layer, where the code and libraries consume a queue, and approach the logic layer directly. Yes, it makes things easier but leaves a class of risks uncovered. For example, what if the logic part throws an error or the message schema is invalid, but the message queue consumer fails to translate this exception into a proper message queue action? For example, the consumer code might fail to reject the message or increment the number of attempts (depending on the type of queue that you're using). When this happens, the message will enter a loop where it is always served again and again. Since this will apply to many messages, things can get really bad as the queue becomes highly saturated. For this reason, this syndrome is called the 'poisoned message'. To mitigate this risk, the tests' scope must include all the layers, like you probably do when testing against APIs. Unfortunately, this is not as easy as testing with a DB because message queues are flaky. Here is why
When testing with real queues, things get curiouser and curiouser: tests from different processes will steal messages from each other, and purging queues is harder than you might think (e.g., SQS demands 60 seconds to purge queues) - to name a few challenges that you won't find when dealing with a real DB
Here is a strategy that works for many teams and holds a small compromise - use a fake in-memory message queue. By 'fake' I mean something simplistic that acts like a stub/spy and does nothing but tell when certain calls are made (e.g., consume, delete, publish). You might find reputable fakes/stubs for your own message queue, like this one for SQS, or you can easily code one yourself. No worries, I'm not in favor of maintaining testing infrastructure myself; this proposed component is extremely simple and unlikely to surpass 50 lines of code (see example below). On top of this, whether using a real or fake queue, one more thing is needed: create a convenient interface that tells the test when certain things happened, like when a message was acknowledged/deleted or a new message was published. Without this, the test never knows when certain events happened and leans toward quirky techniques like polling. Having this setup, the test will be short and flat, and you can easily simulate common message queue scenarios like out-of-order messages, batch rejects, duplicated messages and, in our example, the poisoned message scenario (using RabbitMQ):
📝 Code
Create a fake message queue that does almost nothing but record calls, see full example here
```javascript
class FakeMessageQueueProvider extends EventEmitter {
  // Implement here
  publish(message) {}
  consume(queueName, callback) {}
}
```
Make your message queue client accept either a real or a fake provider
```javascript
class MessageQueueClient extends EventEmitter {
  // Pass to it a fake or real message queue
  constructor(customMessageQueueProvider) {}
  publish(message) {}
  consume(queueName, callback) {}
  // Simple implementation can be found here:
  // https://github.com/testjavascript/nodejs-integration-tests-best-practices/blob/master/example-application/libraries/fake-message-queue-provider.js
}
```
Expose a convenient function that tells when certain calls were made
```javascript
const FakeMessageQueueProvider = require('./libs/fake-message-queue-provider');
const MessageQueueClient = require('./libs/message-queue-client');
const newOrderService = require('./domain/newOrderService');

test('When a poisoned message arrives, then it is being rejected back', async () => {
  // Arrange
  const messageWithInvalidSchema = { nonExistingProperty: 'invalid❌' };
  const messageQueueClient = new MessageQueueClient(
    new FakeMessageQueueProvider()
  );
  // Subscribe to new messages and pass the handler function
  messageQueueClient.consume('orders.new', newOrderService.addOrder);

  // Act
  await messageQueueClient.publish('orders.new', messageWithInvalidSchema);
  // Now all the layers of the app will get stretched 👆, including logic and message queue libraries

  // Assert
  await messageQueueClient.waitFor('reject', { howManyTimes: 1 });
  // 👆 This tells us that eventually our code asked the message queue client to reject this poisoned message
});
```
👉What & why - When publishing a library to npm, easily all your tests might pass BUT... the same functionality will fail on the end-user's computer. How come? Tests are executed against the local developer files, but the end-user is only exposed to artifacts that were built. See the mismatch here? After running the tests, the package files are transpiled (I'm looking at you, Babel users), zipped and packed. If a single file is excluded due to .npmignore or a polyfill is not added correctly, the published code will lack mandatory files
📝 Code
Consider the following scenario: you're developing a library, and you wrote this code:
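The original snippet isn't preserved here, so below is a hedged reconstruction of the kind of code being described (file and function names are illustrative, except calculate.js and the files array, which the surrounding text mentions):

```javascript
// index.js - the package entry point (illustrative)
const { calculate } = require('./calculate'); // 👈 depends on a sibling file

module.exports.fn1 = () => calculate(1);
```

```json
// package.json - note that 'calculate.js' is absent from the published files list
{
  "name": "my-package",
  "main": "index.js",
  "files": ["index.js"]
}
```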
See, 100% coverage, all tests pass locally and in the CI ✅ - it just won't work in production 👹. Why? Because you forgot to include calculate.js in the package.json files array 👆
What can we do instead? We can test the library as its end-users do. How? Publish the package to a local registry like verdaccio, and let the tests install and approach the published code. Sounds troublesome? Judge for yourself 👇
📝 Code
```javascript
// global-setup.js
// 1. Setup the in-memory NPM registry, one function, that's it! 🔥
await setupVerdaccio();

// 2. Building our package
await exec('npm', ['run', 'build'], {
  cwd: packagePath,
});

// 3. Publish it to the in-memory registry
await exec('npm', ['publish', '--registry=http://localhost:4873'], {
  cwd: packagePath,
});

// 4. Installing it in the consumer directory
await exec('npm', ['install', 'my-package', '--registry=http://localhost:4873'], {
  cwd: consumerPath,
});

// Test file in the consumerPath
// 5. Test the package 🚀
test('should succeed', async () => {
  const { fn1 } = await import('my-package');
  expect(fn1()).toEqual(1);
});
```
- Testing different versions of a peer dependency you support - let's say your package supports React 16 to 18, you can now test that
- Testing ESM and CJS consumers
- If you have a CLI application, testing it like your users do
- Making sure all the voodoo magic in that Babel file is working as expected
🗞 The 'broken contract' test - when the code is great but its corresponding OpenAPI docs lead to a production bug
👉What & so what - Quite confidently, I'm sure that almost no team tests their OpenAPI correctness. "It's just documentation", "we generate it automatically based on code" are typical beliefs behind this. Let me show you how this auto-generated documentation can be wrong and lead not only to frustration but also to a bug. In production.
Consider the following scenario: you're requested to return an HTTP error status code if an order is duplicated, but you forget to update the OpenAPI specification with this new HTTP status response. While some frameworks can update the docs with new fields, none can realize which errors your code throws; this labour is always manual. On the other side of the line, the API client is doing everything just right, going by the spec that you published, adding orders with some duplication because the docs don't forbid doing so. Then, BOOM, a production bug -> the client crashes and shows an ugly unknown error message to the user. This type of failure is called the 'contract' problem: two parties interact, each has code that works perfectly, they just operate under different specs and assumptions. While there are fancy, sophisticated and exhaustive solutions to this challenge (e.g., PACT), there are also leaner approaches that get you covered easily and quickly (at the price of covering fewer risks).
The following sweet technique is based on libraries (for jest, mocha) that listen to all network responses, compare the payload against the OpenAPI document, and if any deviation is found - make the test fail with a descriptive error. With this new weapon in your toolbox and almost zero effort, another risk is ticked. It's a pity that these libs can't also assert against the incoming requests to tell you that your tests use the API wrongly. One small caveat and an elegant solution: these libraries dictate putting an assertion statement in every test - expect(response).toSatisfyApiSpec() - a bit tedious and reliant on human discipline. You can do better if your HTTP client supports plugins/hooks/interceptors by putting this assertion in a single place that will apply to all the tests:
The OpenAPI below doesn't document HTTP status '409' - no framework knows to update the OpenAPI doc based on thrown exceptions
"responses":{ "200":{ "description":"successful", } , "400":{ "description":"Invalid ID", "content":{} },// No 409 in this list😲👈 }
The test code
```javascript
const jestOpenAPI = require('jest-openapi');
jestOpenAPI('../openapi.json');

test('When an order with duplicated coupon is added, then 409 error should get returned', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    couponId: uuid(),
  };
  await axiosAPIClient.post('/order', orderToAdd);

  // Act
  // We're adding the same coupon twice 👇
  const receivedResponse = await axios.post('/order', orderToAdd);

  // Assert
  expect(receivedResponse.status).toBe(409);
  expect(receivedResponse).toSatisfyApiSpec();
  // This 👆 will throw if the API response, body or status, differs from what is stated in the OpenAPI
});
```
Trick: If your HTTP client supports any kind of plugin/hook/interceptor, put the following code in 'beforeAll'. This covers all the tests against OpenAPI mismatches
```javascript
beforeAll(() => {
  axios.interceptors.response.use((response) => {
    expect(response).toSatisfyApiSpec();
    // With this 👆, add nothing to the tests - each will fail if the response deviates from the docs
    return response;
  });
});
```
The examples above were not meant to be only a checklist of 'don't forget' test cases, but rather a fresh mindset on what tests could cover for you. Modern tests are not just about functions or user flows, but about any risk that might visit your production. This is doable only with component/integration tests, never with unit or end-to-end tests. Why? Because unlike unit tests, you need all the parts to play together (e.g., the DB migration file, the DAL layer and the error handler, all together). Unlike E2E, you have the power to simulate in-process scenarios that demand some tweaking and mocking. Component tests allow you to include many production moving parts early on your machine. I like calling this 'production-oriented development'
We work in two parallel paths: enriching the supported best practices to make the code more production-ready, and at the same time enhancing the existing code based on community feedback
Every request now has its own store of variables; you may assign information at the request level so every piece of code called from this specific request has access to these variables - for example, for storing the user permissions. One special variable that is stored is 'request-id', which is a unique UUID per request (also called correlation-id). The logger will automatically emit this to every log entry. We use the built-in AsyncLocalStorage for this task
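For illustration, here is a minimal sketch of that mechanism using Node's built-in AsyncLocalStorage (the middleware and function names are illustrative, not Practica's actual code):

```javascript
const { AsyncLocalStorage } = require('async_hooks');
const { randomUUID } = require('crypto');

const requestContext = new AsyncLocalStorage();

// Express-style middleware: open a store per incoming request
function requestContextMiddleware(req, res, next) {
  requestContext.run(new Map([['requestId', randomUUID()]]), next);
}

// Anywhere down the call chain, the logger can pull the request-id from the store
function logWithRequestId(message) {
  const store = requestContext.getStore();
  console.log({ requestId: store && store.get('requestId'), message });
}
```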
Although a Dockerfile may contain just 10 lines, it's easy and common to include 20 mistakes in this short artifact. For example, .npmrc secrets are commonly leaked, vulnerable base images are used, and other typical mistakes are made. Our Dockerfile follows the best practices from this article and already applies 90% of the guidelines
Prisma is an emerging ORM with great type-safety support and awesome DX. We will keep Sequelize as our default ORM, while Prisma will be an optional choice using the flag: --orm=prisma
Why did we add it to our tools basket and why Sequelize is still the default? We summarized all of our thoughts and data in this blog post
Intro - Why discuss yet another ORM (or the man who had a stain on his fancy suit)?
Betteridge's law of headlines suggests that a 'headline that ends in a question mark can be answered by the word NO'. Will this article follow this rule?
Imagine an elegant businessman (or woman) walking into a building, wearing a fancy tuxedo and a luxury watch wrapped around his wrist. He smiles and waves all over to say hello while people around are staring admiringly. You get a little closer, and then, shockingly, while standing nearby it's hard to ignore a bold, dark stain on his white shirt. What a dissonance - suddenly all of that glamour is stained
Like this businessman, Node is highly capable and popular, and yet, in certain areas, its offering basket is stained with inferior offerings. One of these areas is the ORM space. "I wish we had something like (Java) Hibernate or (.NET) Entity Framework" are common words heard from Node developers. What about existing mature ORMs like TypeORM and Sequelize? We owe so much to these maintainers, and yet, the developer experience they produce and the level of maintenance just don't feel delightful - some may even say mediocre. At least so I believed before writing this article...
From time to time, a shiny new ORM is launched, and there is hope. Then it's soon realized that these new emerging projects are more of the same, if they survive at all. Until one day, Prisma ORM arrived, surrounded with glamour: it's gaining tons of attention all over, producing fantastic content, being used by respectable frameworks and... it raised $40,000,000 (40 million) to build the next-generation ORM. Is it the 'Ferrari' ORM we've been waiting for? Is it a game changer? If you're the 'no ORM for me' type, will this one make you convert your religion?
In Practica.js (the Node.js starter based on Node.js best practices with 83,000 stars), we aim to make the best decisions for our users. The Prisma hype made us stop for a second, evaluate its unique offering and conclude whether we should upgrade our toolbox
This article is certainly not an 'ORM 101' but rather a spotlight on specific dimensions in which Prisma aims to shine or struggles. It's compared against the two most popular Node.js ORMs - TypeORM and Sequelize. Why not others? Why weren't other promising contenders like MikroORM covered? Simply because they are not as popular yet, and maturity is a critical trait of ORMs
Ready to explore how good Prisma is and whether you should throw away your current tools?
Node.js is maturing. Many patterns and frameworks were embraced - it's my belief that developers' productivity dramatically increased in the past years. One downside of maturity is habits - we now reuse existing techniques more often. How is this a problem?
In his book 'Atomic Habits', the author James Clear states that:
"Mastery is created by habits. However, sometimes when we're on auto-pilot performing habits, we tend to slip up... Just being we are gaining experience through performing the habits does not mean that we are improving. We actually go backwards on the improvement scale with most habits that turn into auto-pilot". In other words, practice makes perfect, and bad practices make things worst
We copy-paste mentally and physically things that we are used to, but these things are not necessarily right anymore. Like animals who shed their shells or skin to adapt to a new reality, so the Node.js community should constantly gauge its existing patterns, discuss and change
Luckily, unlike other languages that are more committed to specific design paradigms (Java, Ruby) - Node is a house of many ideas. In this community, I feel safe to question some of our good-old tooling and patterns. The list below contains my personal beliefs, which are brought with reasoning and examples.
Are those disruptive thoughts surely correct? I'm not sure. There is one thing I'm sure about though: for Node.js to live longer, we need to encourage critique, focus our loyalty on innovation, and keep the discussion going. The outcome of this discussion is not "don't use this tool!" but rather becoming familiar with other techniques that, under some circumstances, might be a better fit
The True Crab's exoskeleton is hard and inflexible, he must shed his restrictive exoskeleton to grow and reveal the new roomier shell
Although Node.js has great frameworks 💚, they were never meant to be production ready immediately. Practica.js aims to bridge the gap. Based on your preferred framework, we generate some example code that demonstrates a full workflow, from API to DB, that is packed with good practices. For example, we include a hardened dockerfile, N-Tier folder structure, great testing templates, and more. This saves a great deal of time and can prevent painful mistakes. All decisions made are neatly and thoughtfully documented. We strive to keep things as simple and standard as possible and base our work off the popular guide: Node.js Best Practices.
Your developer experience would look as follows: generate our starter using the CLI and get an example Node.js solution. This solution is a typical Monorepo setup with an example Microservice and libraries. All is based on super-popular libraries that we merely stitch together. It also includes tons of optimizations - linters, libraries, Monorepo configuration, tests and much more. Inside the example Microservice you'll find an example flow, from API to DB. Based on this, you can modify the entity and DB fields and build your app.
Intro - Why discuss yet another ORM (or the man who had a stain on his fancy suite)?
Betteridge's law of headlines suggests that a 'headline that ends in a question mark can be answered by the word NO'. Will this article follow this rule?
Imagine an elegant businessman (or woman) walking into a building, wearing a fancy tuxedo and a luxury watch wrapped around his palm. He smiles and waves all over to say hello while people around are starring admirably. You get a little closer, then shockingly, while standing nearby it's hard ignore a bold a dark stain over his white shirt. What a dissonance, suddenly all of that glamour is stained
Like this businessman, Node is highly capable and popular, and yet, in certain areas, its offering basket is stained with inferior offerings. One of these areas is the ORM space, "I wish we had something like (Java) hibernate or (.NET) Entity Framework" are common words being heard by Node developers. What about existing mature ORMs like TypeORM and Sequelize? We owe so much to these maintainers, and yet, the produced developer experience, the level of maintenance - just don't feel delightful, some may say even mediocre. At least so I believed before writing this article...
From time to time, a shiny new ORM is launched, and there is hope. Then soon it's realized that these new emerging projects are more of the same, if they survive. Until one day, Prisma ORM arrived surrounded with glamour: It's gaining tons of attention all over, producing fantastic content, being used by respectful frameworks and... raised 40,000,000$ (40 million) to build the next generation ORM - Is it the 'Ferrari' ORM we've been waiting for? Is it a game changer? If you're are the 'no ORM for me' type, will this one make you convert your religion?
In Practica.js (the Node.js starter based off Node.js best practices with 83,000 stars) we aim to make the best decisions for our users, the Prisma hype made us stop by for a second, evaluate its unique offering and conclude whether we should upgrade our toolbox?
This article is certainly not an 'ORM 101' but rather a spotlight on specific dimensions in which Prisma aims to shine or struggle. It's compared against the two most popular Node.js ORM - TypeORM and Sequelize. Why not others? Why other promising contenders like MikroORM weren't covered? Just because they are not as popular yet ana maturity is a critical trait of ORMs
Ready to explore how good Prisma is and whether you should throw away your current tools?
Node.js is maturing. Many patterns and frameworks were embraced - it's my belief that developers' productivity dramatically increased in the past years. One downside of maturity is habits - we now reuse existing techniques more often. How is this a problem?
In his novel book 'Atomic Habits' the author James Clear states that:
"Mastery is created by habits. However, sometimes when we're on auto-pilot performing habits, we tend to slip up... Just being we are gaining experience through performing the habits does not mean that we are improving. We actually go backwards on the improvement scale with most habits that turn into auto-pilot". In other words, practice makes perfect, and bad practices make things worst
We copy-paste mentally and physically things that we are used to, but these things are not necessarily right anymore. Like animals who shed their shells or skin to adapt to a new reality, so the Node.js community should constantly gauge its existing patterns, discuss and change
Luckily, unlike other languages that are more committed to specific design paradigms (Java, Ruby) - Node is a house of many ideas. In this community, I feel safe to question some of our good-old tooling and patterns. The list below contains my personal beliefs, which are brought with reasoning and examples.
Are those disruptive thoughts surely correct? I'm not sure. There is one things I'm sure about though - For Node.js to live longer, we need to encourage critics, focus our loyalty on innovation, and keep the discussion going. The outcome of this discussion is not "don't use this tool!" but rather becoming familiar with other techniques that, under some circumstances might be a better fit
The True Crab's exoskeleton is hard and inflexible, he must shed his restrictive exoskeleton to grow and reveal the new roomier shell
We work in two parallel paths: enriching the supported best practices to make the code more production ready and at the same time enhance the existing code based off the community feedback
Every request now has its own store of variables, you may assign information on the request-level so every code which was called from this specific request has access to these variables. For example, for storing the user permissions. One special variable that is stored is 'request-id' which is a unique UUID per request (also called correlation-id). The logger automatically will emit this to every log entry. We use the built-in AsyncLocal for this task
Although a Dockerfile may contain 10 lines, it easy and common to include 20 mistakes in these short artifact. For example, commonly npmrc secrets are leaked, usage of vulnerable base image and other typical mistakes. Our .Dockerfile follows the best practices from this article and already apply 90% of the guidelines
Prisma is an emerging ORM with great type safe support and awesome DX. We will keep Sequelize as our default ORM while Prisma will be an optional choice using the flag: --orm=prisma
Why did we add it to our tools basket and why Sequelize is still the default? We summarized all of our thoughts and data in this blog post
Intro - Why discuss yet another ORM (or the man who had a stain on his fancy suite)?
Betteridge's law of headlines suggests that a 'headline that ends in a question mark can be answered by the word NO'. Will this article follow this rule?
Imagine an elegant businessman (or woman) walking into a building, wearing a fancy tuxedo and a luxury watch wrapped around his palm. He smiles and waves all over to say hello while people around are starring admirably. You get a little closer, then shockingly, while standing nearby it's hard ignore a bold a dark stain over his white shirt. What a dissonance, suddenly all of that glamour is stained
Like this businessman, Node is highly capable and popular, and yet, in certain areas, its offering basket is stained with inferior offerings. One of these areas is the ORM space, "I wish we had something like (Java) hibernate or (.NET) Entity Framework" are common words being heard by Node developers. What about existing mature ORMs like TypeORM and Sequelize? We owe so much to these maintainers, and yet, the produced developer experience, the level of maintenance - just don't feel delightful, some may say even mediocre. At least so I believed before writing this article...
From time to time, a shiny new ORM is launched, and there is hope. Then soon it's realized that these new emerging projects are more of the same, if they survive. Until one day, Prisma ORM arrived surrounded with glamour: It's gaining tons of attention all over, producing fantastic content, being used by respectful frameworks and... raised 40,000,000$ (40 million) to build the next generation ORM - Is it the 'Ferrari' ORM we've been waiting for? Is it a game changer? If you're are the 'no ORM for me' type, will this one make you convert your religion?
In Practica.js (the Node.js starter based off Node.js best practices with 83,000 stars) we aim to make the best decisions for our users, the Prisma hype made us stop by for a second, evaluate its unique offering and conclude whether we should upgrade our toolbox?
This article is certainly not an 'ORM 101' but rather a spotlight on specific dimensions in which Prisma aims to shine or struggle. It's compared against the two most popular Node.js ORM - TypeORM and Sequelize. Why not others? Why other promising contenders like MikroORM weren't covered? Just because they are not as popular yet ana maturity is a critical trait of ORMs
Ready to explore how good Prisma is and whether you should throw away your current tools?
Node.js is maturing. Many patterns and frameworks were embraced - it's my belief that developers' productivity dramatically increased in the past years. One downside of maturity is habits - we now reuse existing techniques more often. How is this a problem?
In his novel book 'Atomic Habits' the author James Clear states that:
"Mastery is created by habits. However, sometimes when we're on auto-pilot performing habits, we tend to slip up... Just being we are gaining experience through performing the habits does not mean that we are improving. We actually go backwards on the improvement scale with most habits that turn into auto-pilot". In other words, practice makes perfect, and bad practices make things worst
We copy-paste mentally and physically things that we are used to, but these things are not necessarily right anymore. Like animals who shed their shells or skin to adapt to a new reality, so the Node.js community should constantly gauge its existing patterns, discuss and change
Luckily, unlike other languages that are more committed to specific design paradigms (Java, Ruby) - Node is a house of many ideas. In this community, I feel safe to question some of our good-old tooling and patterns. The list below contains my personal beliefs, which are brought with reasoning and examples.
Are those disruptive thoughts surely correct? I'm not sure. There is one things I'm sure about though - For Node.js to live longer, we need to encourage critics, focus our loyalty on innovation, and keep the discussion going. The outcome of this discussion is not "don't use this tool!" but rather becoming familiar with other techniques that, under some circumstances might be a better fit
The True Crab's exoskeleton is hard and inflexible, he must shed his restrictive exoskeleton to grow and reveal the new roomier shell
We work in two parallel paths: enriching the supported best practices to make the code more production ready and at the same time enhance the existing code based off the community feedback
Every request now has its own store of variables, you may assign information on the request-level so every code which was called from this specific request has access to these variables. For example, for storing the user permissions. One special variable that is stored is 'request-id' which is a unique UUID per request (also called correlation-id). The logger automatically will emit this to every log entry. We use the built-in AsyncLocal for this task
Although a Dockerfile may contain 10 lines, it easy and common to include 20 mistakes in these short artifact. For example, commonly npmrc secrets are leaked, usage of vulnerable base image and other typical mistakes. Our .Dockerfile follows the best practices from this article and already apply 90% of the guidelines
Prisma is an emerging ORM with great type safe support and awesome DX. We will keep Sequelize as our default ORM while Prisma will be an optional choice using the flag: --orm=prisma
Why did we add it to our tools basket and why Sequelize is still the default? We summarized all of our thoughts and data in this blog post
Intro - Why discuss yet another ORM (or the man who had a stain on his fancy suite)?
Betteridge's law of headlines suggests that a 'headline that ends in a question mark can be answered by the word NO'. Will this article follow this rule?
Imagine an elegant businessman (or woman) walking into a building, wearing a fancy tuxedo and a luxury watch wrapped around his palm. He smiles and waves all over to say hello while people around are starring admirably. You get a little closer, then shockingly, while standing nearby it's hard ignore a bold a dark stain over his white shirt. What a dissonance, suddenly all of that glamour is stained
Like this businessman, Node is highly capable and popular, and yet, in certain areas, its offering basket is stained with inferior offerings. One of these areas is the ORM space, "I wish we had something like (Java) hibernate or (.NET) Entity Framework" are common words being heard by Node developers. What about existing mature ORMs like TypeORM and Sequelize? We owe so much to these maintainers, and yet, the produced developer experience, the level of maintenance - just don't feel delightful, some may say even mediocre. At least so I believed before writing this article...
From time to time, a shiny new ORM is launched, and there is hope. Then soon it's realized that these new emerging projects are more of the same, if they survive. Until one day, Prisma ORM arrived surrounded with glamour: It's gaining tons of attention all over, producing fantastic content, being used by respectful frameworks and... raised 40,000,000$ (40 million) to build the next generation ORM - Is it the 'Ferrari' ORM we've been waiting for? Is it a game changer? If you're are the 'no ORM for me' type, will this one make you convert your religion?
In Practica.js (the Node.js starter based off Node.js best practices with 83,000 stars) we aim to make the best decisions for our users, the Prisma hype made us stop by for a second, evaluate its unique offering and conclude whether we should upgrade our toolbox?
This article is certainly not an 'ORM 101' but rather a spotlight on specific dimensions in which Prisma aims to shine or struggle. It's compared against the two most popular Node.js ORM - TypeORM and Sequelize. Why not others? Why other promising contenders like MikroORM weren't covered? Just because they are not as popular yet ana maturity is a critical trait of ORMs
Ready to explore how good Prisma is and whether you should throw away your current tools?
Node.js is maturing. Many patterns and frameworks were embraced - it's my belief that developers' productivity dramatically increased in the past years. One downside of maturity is habits - we now reuse existing techniques more often. How is this a problem?
In his novel book 'Atomic Habits' the author James Clear states that:
"Mastery is created by habits. However, sometimes when we're on auto-pilot performing habits, we tend to slip up... Just being we are gaining experience through performing the habits does not mean that we are improving. We actually go backwards on the improvement scale with most habits that turn into auto-pilot". In other words, practice makes perfect, and bad practices make things worst
We copy-paste mentally and physically things that we are used to, but these things are not necessarily right anymore. Like animals who shed their shells or skin to adapt to a new reality, so the Node.js community should constantly gauge its existing patterns, discuss and change
Luckily, unlike other languages that are more committed to specific design paradigms (Java, Ruby) - Node is a house of many ideas. In this community, I feel safe to question some of our good-old tooling and patterns. The list below contains my personal beliefs, which are brought with reasoning and examples.
Are those disruptive thoughts surely correct? I'm not sure. There is one thing I'm sure about though - for Node.js to live longer, we need to encourage critique, focus our loyalty on innovation, and keep the discussion going. The outcome of this discussion is not "don't use this tool!" but rather becoming familiar with other techniques that, under some circumstances, might be a better fit
The true crab's exoskeleton is hard and inflexible; it must shed its restrictive exoskeleton to grow and reveal the new, roomier shell
When was the last time you introduced a new pattern to your code? The use-case pattern is a great candidate: it's powerful, sweet, easy to implement, and can strategically elevate your backend code quality in a short time.
The term 'use case' means many different things in our industry. It's used by product folks to describe a user journey and mentioned by various famous architecture books to describe vague high-level concepts. This article focuses on its practical application at the code level, emphasizing its surprising merits and how to implement it correctly.
Technically, the use-case pattern code belongs between the controller (e.g., API routes) and the business logic services (like those calculating or saving data). The use-case code is called by the controller and tells, in high-level words, the flow that is about to happen in a simple manner. Doing so increases code readability and navigability, pushes complexity toward the edges, improves observability, and brings three other merits that are shown below with examples.
But before we delve into its mechanics, let's first touch on a common problem it aims to address and see some code that calls for trouble.
Prefer a 10 min video? Watch here, or keep reading below
Imagine a developer, returning to a codebase she hasn't touched in months, tasked with fixing a bug in the 'new orders flow'—specifically, an issue with price calculation in an electronic shop app.
Her journey begins promisingly smooth:
- 🤗 Testing - She starts her journey with the automated tests to learn about the flow from an outside-in approach. The testing code is short and standard, as it should be:
test("When adding an order with 100$ product, then the price charge should be 100$ ",async()=>{ // .... })
- 🤗 Controller - She moves to skim through the implementation and starts from the API routes. Unsurprisingly, the Controller code is straightforward:
app.post("/api/order",async(req:Request,res:Response)=>{ const newOrder = req.body; await orderService.addOrder(newOrder);// 👈 This is where the real-work is done res.status(200).json({message:"Order created successfully"}); });
Smooth sailing thus far, almost zero complexity. Typically, the controller would now hand off to a Service where the real implementation begins, so she navigates into the order service to find where and how to fix that pricing bug.
- 😲 The service - Suddenly! She is thrown into hundreds of lines of code (at best) with tons of details. She encounters classes with intricate states, inheritance hierarchies, a dependency injection framework that wires all the dependent services, and other boilerplate code. Here is a sneak peek from a real-world service, already simplified for brevity. Read it, feel it:
let DBRepository;

export class OrderService extends ServiceBase<OrderDto> {
  async addOrder(orderRequest: OrderRequest): Promise<Order> {
    try {
      ensureDBRepositoryInitialized();
      const { openTelemetry, monitoring, secretManager, priceService, userService } =
        dependencyInjection.getVariousServices();
      logger.info("Add order flow starts now", orderRequest);
      openTelemetry.sendEvent("new order", orderRequest);
      const validationRules = await getFromConfigSystem("order-validation-rules");
      const validatedOrder = validateOrder(orderRequest, validationRules);
      if (!validatedOrder) {
        throw new Error("Invalid order");
      }
      this.base.startTransaction();
      const user = await userService.getUserInfo(validatedOrder.customerId);
      if (!user) {
        const savedOrder = await tryAddUserWithLegacySystem(validatedOrder);
        return savedOrder;
      }
      // And it goes on and on until the pricing module is mentioned
    } catch (error) {
      // ... more error-handling boilerplate
    }
  }
}
So many details and things to learn upfront - which of them is crucial for her to learn now, before dealing with her task? How can she find where that pricing module is?
She is not happy. Right off the bat, she must acquaint herself with a handful of product and technical narratives. She just fell off the complexity cliff: from a zero-complexity controller straight into a 1000-piece puzzle, much of it unrelated to her task.
In a perfect world, she would love first to get a high-level brief of the involved steps so she can understand the whole flow, and from this comfort standpoint choose where to deepen her journey. This is what this pattern is all about.
The use-case is a file with a single function that is called by the API controller to orchestrate the various implementation services. It's merely a simple function that enumerates and calls the code that does the actual job:
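For illustration, here is a minimal sketch of what such a file might contain; the step names are hypothetical and reappear in the examples below:

// add-order-use-case.ts
export async function addOrderUseCase(orderRequest: OrderRequest): Promise<Order> {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  await assertCustomerHasEnoughBalance(validatedOrder);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  const savedOrder = await insertOrder(orderWithPricing);
  await sendSuccessEmailToCustomer(savedOrder);
  return mapFromRepositoryToDto(savedOrder);
}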
Each interaction with the system—whether it's posting a new comment, requesting user deletion, or any other action—is managed by a dedicated use-case function. Each use-case constitutes multiple 'steps' - function calls that fulfill the desired flow.
By design, it's short, flat, with no if/else, no try-catch, no algorithms - just plain calls to functions. This way, it tells the story in the simplest manner. Note how it doesn't share too many details, but tells enough for one to understand 'WHAT' is happening here and 'WHO' is doing it, but not 'HOW'.
When seeking a specific book in the local library, the visitor doesn't have to skim through all the shelves to find a specific topic of interest. A Library, like any other information system, uses a navigational system, wayfinding signage, to highlight the path to a specific information area.
The library catalog redirects the reader to the area of interest
Similarly, in software development, when a developer needs to address a particular issue—such as fixing a bug in pricing calculations—the 'use case' acts like a navigational tool within the application. It serves as a hitchhiker's guide, or the yellow pages, pinpointing exactly where to find the necessary piece of code. While other organizational strategies like modularization and folder structures offer ways to manage code, the 'use case' approach provides a more focused and precise index: it shows only the relevant areas (and not 50 unrelated modules), it tells when precisely each module is used, what the specific entry point is, and which exact parameters are passed.
When the code reader's journey starts at the level of implementation services, she is immediately bombarded with intricate details. This immersion exposes her to both product and technical complexities right from the start. Typically, like in our example, the code first uses a dependency injection system to instantiate some classes, checks for nulls in the state, and gets some values from the distributed config system - all before even starting on the primary task. This is called accidental complexity. Tackling complexity is one of the finest arts of app design: as the code planner, you can't just eliminate complexity, but you can at least reduce the chances of someone meeting it.
Imagine your application as a tree where branches represent functions and the fruits are pockets of embedded complexity, some of which are poisoned (i.e., unnecessary complexities). Your objective is to structure this tree so that navigating through it exposes the visitor to as few poisoned fruits as possible:
The accidental-complexity tree: A visitor aiming to reach a specific leaf must navigate through all the intervening poisoned fruits.
This is where the 'Use Case' approach shines: it prioritizes high-level product steps and minimal technical details at the outset - a navigation system that simplifies access to various parts of the application. With this navigation tool, she can easily ignore steps that are unrelated to her work, and avoid poisoned fruits. A true strategic design win.
The spread-complexity tree: Complexity is pushed to the periphery, allowing the reader to navigate directly to the essential fruits only.
When embarking on a new coding flow, where do you start? After digesting the requirements and setting up some initial API routes and high-level component tests, the next logical step might be less obvious. Here's a strategy: begin with a use-case. This approach promotes an outside-in workflow that not only streamlines development but also exposes potential risks early on.
While drafting a new use-case, you essentially map out the various steps of the process. Each step is a call to some service or repository function, sometimes before it even exists. Effortlessly and spontaneously, these steps become your TODO list, a live document that tells not only what should be implemented but also where risky gotchas hide. Take, for instance, this straightforward use-case for adding an order:
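A hypothetical sketch of such an early draft; the step names are illustrative and correspond to the risks discussed below:

// add-order-use-case.ts - an early draft whose steps double as a TODO list
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  await assertCustomerExists(validatedOrder.customerId); // Calls an external Microservice
  const orderWithPricing = calculateOrderPricing(validatedOrder); // Pricing rules still unconfirmed
  const savedOrder = await insertOrder(orderWithPricing);
  await sendSuccessEmailToCustomer(savedOrder); // Needs an email service token from Ops
  return savedOrder;
}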
This structured approach allows you to preemptively tackle potential implementation hurdles:
- sendSuccessEmailToCustomer - What if you lack a necessary email service token from the Ops team? Sometimes, this demands approval and might take more than a week (believe me, I know). Acting now, before spending 3 days on coding, can make a big difference.
- calculateOrderPricing - Reminds you to confirm pricing details with the product team—ideally before they're out of office, avoiding delays that could impact your delivery timeline.
- assertCustomerExists - This call goes to an external Microservice which belongs to the User Management team. Did they already provide an OpenAPI specification of their routes? Check your Slack now - if they didn't yet, asking early can prevent this from becoming a roadblock later.
Not only does this high-level thinking highlight your tasks and risks, it's also an optimal spot to start the design from:
Early on when initiating a use-case, the developers define the various types, function signatures, and their initial skeleton return data. This process naturally evolves into an effective design drill where the overall flow is decomposed into small units that actually fit. This sketch-out surfaces, early on and while considering the underlying technologies, the places where puzzle pieces don't fit. Here is an example: once I sketched a use-case and initially came up with these steps:
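A reconstruction of that first draft, with hypothetical names - note how the email step precedes saving the order:

// Hypothetical first draft of the flow
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  await sendSuccessEmailToCustomer(orderWithPricing); // ❗️Needs an 'Order Id' that doesn't exist yet...
  const savedOrder = await insertOrder(orderWithPricing); // ...because the order is only saved here
  return savedOrder;
}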
Going with my initial use-case above, an email is sent before the order is saved. Soon enough the compiler yelled at me: the email function signature is not satisfied - an 'Order Id' parameter is needed, but to obtain one the order must be saved to the DB first. I tried to reorder the steps; unfortunately, it turned out that my ORM does not return the ID of saved entities. I'm stuck, my design struggles - at least this is realized before spending days on details. Unlike designing with papers and UML, designing with use-cases brings no overhead. Moreover, unlike high-level diagrams detached from implementation realities, use-case design is grounded in the actual constraints of the technology being used.
Say you have 82.35% test coverage: are you happy and confident enough to deploy? I'd suggest that anyone below 100% must first clarify which code exactly is not covered by tests. Is it some nitty-gritty niche code, or actually critical business operations that are not fully tested? Typically, answering this requires scrutinizing the coverage of every app file, a daunting task.
Use-cases simplify the coverage digest: when looking directly into the use-cases folder, one gets 'features coverage', a unique look into which user features and steps lack testing:
The use-cases folder test coverage report, some use-cases are only partially tested
See how the code above has excellent overall coverage, 82.35%. But what about the remaining 17.65%? Looking at the report triggers a red flag: the unusual 'payment-use-case' is not tested. This flow is where revenues are generated, a critical financial process which, as it turns out, has very low test coverage. This significant observation calls for immediate action. Use-case coverage thus not only helps in understanding which parts of your application are tested, but also prioritizes testing efforts based on business criticality rather than mere technical functionality.
The influential book "Domain-Driven Design" advocates for "committing the team to relentlessly exercise the domain language in all communications within the team and in the code." This principle asserts that aligning code closely with product narratives fosters a common language among diverse stakeholders (e.g., product, team-leads, frontend, backend). While this sounds sensible, this advice is also a little vague - how and where should this happen?
Use-cases bring this idea down to earth: the use-case files are named after user journeys in the system (e.g., purchase-new-goods), the use-case code itself naturally describes the flow in a product language. For instance, if employees commonly use the term 'cut' at the water cooler to refer to a price reduction, the corresponding use-case should employ a function named 'calculatePriceCut'. This naming convention not only reinforces the domain language but also enhances mutual understanding across the team.
I bet you've encountered the situation where you turn the log level to 'Debug' (or any other verbose mode) and get a gazillion, overwhelming, unbearable amount of log statements. Great chances you've also met the opposite: the logger level is set to 'Info' but there is almost zero logging for the specific route you're looking into. It's hard to formalize among team members when exactly each type of logging should be invoked; the result is typically inconsistent and lacking observability.
Use-cases can drive trustworthy and consistent monitoring by taking advantage of the already-produced use-case steps. Since the precious work of breaking down the flow into meaningful steps was already done (e.g., send-email, charge-credit-card), each step can produce the desired level of logging. For example, one team's approach might be to emit logger.info on use-case start and end, while each step emits logger.debug. Whatever the chosen level is, use-case steps bring consistency and automation. Logging aside, the same can be applied with any other observability technique, like OpenTelemetry, to produce custom spans for every flow step.
The implementation, though, demands some thinking: cluttering every step with a log statement is both verbose and depends on manual human work:
// ❗️Verbose use case
export async function addOrderUseCase(orderRequest: OrderRequest): Promise<Order> {
  logger.info("Add order use case - Adding order starts now", orderRequest);
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  logger.debug("Add order use case - The order was validated", validatedOrder);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  logger.debug("Add order use case - The order pricing was decided", orderWithPricing);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderWithPricing);
  logger.debug("Add order use case - Verified the user balance already", purchasingCustomer);
  const returnOrder = mapFromRepositoryToDto(purchasingCustomer as unknown as OrderRecord);
  logger.info("Add order use case - About to return result", returnOrder);
  return returnOrder;
}
One way around this is creating a step wrapper function that makes it observable. This wrapper function will get called for each step:
import { openTelemetry } from "@opentelemetry";

async function runUseCaseStep(stepName, stepFunction) {
  logger.debug(`Use case step ${stepName} starts now`);
  // Create Open Telemetry custom span
  openTelemetry.startSpan(stepName);
  return await stepFunction();
}
Now the use-case gets automated and consistent transparency:
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const validatedOrder = await runUseCaseStep("Validation", validateAndCoerceOrder.bind(null, orderRequest));
  const orderWithPricing = await runUseCaseStep("Calculate price", calculateOrderPricing.bind(null, validatedOrder));
  await runUseCaseStep("Send email", sendSuccessEmailToCustomer.bind(null, orderWithPricing));
}
The code is a little simplified; in a real-world wrapper you'll have to put try-catch and cover other corner cases, but it makes the point: each step is a meaningful milestone in the user's journey that gets automated and consistent observability. A sketch of a more defensive wrapper follows below.
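For illustration, here is a minimal sketch of a more defensive variant of the wrapper above; the span API details (a span object exposing end()) are assumptions:

// A hypothetical, more defensive step wrapper
async function runUseCaseStep(stepName, stepFunction) {
  const span = openTelemetry.startSpan(stepName); // assumption: returns a span with end()
  logger.debug(`Use case step ${stepName} starts now`);
  try {
    return await stepFunction();
  } catch (error) {
    logger.error(`Use case step ${stepName} failed`, error);
    throw error; // let the central error handler decide what happens next
  } finally {
    span.end(); // close the span whether the step succeeded or failed
  }
}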
Since use-cases are mostly about zero complexity, they use no code constructs, only flat calls to functions. No if/else, no switch, no try/catch - nothing but a simple list of steps. A while ago I decided to allow just one if/else in a use-case:
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  if (purchasingCustomer.isPremium) { // ❗️
    sendEmailToPremiumCustomer(purchasingCustomer); // This easily will grow with time to multiple if/else
  }
}
A month later, when I visited the code above, there were already three nested if/elses. A year from now, the function above will host typical imperative code with many nested branches. Avoid this slippery road by setting a very strict border: put the conditions within the step functions, as sketched below.
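A hypothetical fix - the condition moves inside a step function so the use-case stays flat:

export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  await sendEmailToCustomerIfEligible(purchasingCustomer); // 👈 the if/else now lives inside the step
}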
The finest art of a great use-case is finding the right level of detail. At this early stage, the reader is like a traveler who uses a map to get some sense of the area or find a specific road - definitely not to learn about every road in the country. On the other hand, a good map doesn't show only the main highway and nothing else. For example, the following use-case is too short and vague:
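For illustration, a hypothetical version of such an overly vague use-case:

// ❗️Too vague - a single opaque step that hides the whole journey
export async function addOrderUseCase(orderRequest: OrderRequest) {
  return await orderService.addOrder(orderRequest);
}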
The code above doesn't tell a story, nor does it eliminate any paths from the journey. Conversely, the following code does a better job of telling the story in brief:
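An illustrative sketch of that better balance - named steps that tell 'WHAT' and 'WHO' without the 'HOW':

export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest); // WHO: the validator
  await assertCustomerHasEnoughBalance(validatedOrder); // WHO: the customer service
  const orderWithPricing = calculateOrderPricing(validatedOrder); // WHO: the pricing module
  const savedOrder = await insertOrder(orderWithPricing); // WHO: the order repository
  await sendSuccessEmailToCustomer(savedOrder); // WHO: the email service
  return mapFromRepositoryToDto(savedOrder);
}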
Things get a little more challenging when dealing with long flows. What if there are a handful of important steps, say 20? What if multiple use-cases have a lot of repetition and shared steps? Consider the case where 'admin approval' is a multi-step process invoked by a handful of different use-cases. When facing this, consider breaking down into multiple use-cases where one is allowed to call the other, as sketched below.
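For illustration, a hypothetical composition where a big flow delegates a shared, multi-step portion to a smaller use-case:

// A big flow calls a smaller, shared use-case
export async function approveBigOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  await adminApprovalUseCase(validatedOrder); // 👈 a multi-step flow shared by several use-cases
  const savedOrder = await insertOrder(validatedOrder);
  return savedOrder;
}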
3. When there is no choice, control the DB transaction from the use-case
What if step 2 and step 5 both deal with data and must be atomic (fail or succeed together)? Typically you'll handle this with DB transactions, but since each step is discrete, how can a transaction be shared among the coupled steps?
If the steps take place one after the other, it makes sense to let the downstream service/repository handle them together and abstract the transaction away from the use-case. But what if the atomic steps are not consecutive? In this case, though not ideal, there is no escape from making the use-case acquainted with a transaction object:
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const transaction = Repository.startTransaction();
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderRequest, transaction);
  const orderWithPricing = calculateOrderPricing(purchasingCustomer);
  const savedOrder = await insertOrder(orderWithPricing, transaction);
  const returnOrder = mapFromRepositoryToDto(savedOrder);
  Repository.commitTransaction(transaction);
  return returnOrder;
}
A use-case file is created per user flow that is triggered from an API route. This model makes sense for significant flows, but what about small operations, like getting an order by id? A 'get-order-by-id' use-case is likely to have one line of code; it seems like unnecessary overhead to create a use-case file for every small request. In this case, consider aggregating multiple operations under a single conceptual use-case file. Below, for example, all the order queries co-live under the query-orders use-case file:
// query-orders-use-cases.ts
export async function getOrder(id) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const result = await orderRepository.getOrderByID(id);
  return result;
}

export async function getAllOrders(criteria) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const result = await orderRepository.queryOrders(criteria);
  return result;
}
If you find it valuable, you'll also get a great return for a modest investment: no fancy tooling is needed, and the learning time is close to zero (in fact, you just read one of the longest articles on this matter...). There is also no need to refactor a whole system - you can gradually implement it per feature.
Once you become accustomed to using it, you'll find that this technique extends well beyond API routes. It's equally beneficial for managing message queue subscriptions and scheduled jobs. Backend aside, use it as the facade of every module or library - the code that is called by the entry file and orchestrates the internals. The same idea can be applied in the frontend as well: declare the core actors at the component's top level. Without implementation details, just put references to the component's event handlers and hooks - now the reader knows about the key events that will drive this component.
You might think this all sounds remarkably straightforward - and it is. My apologies, this article wasn't about cutting-edge technologies, nor did it cover shiny new dev tooling or AI-based rocket science. In a land where complexity is the key enemy, simple ideas can be more impactful than sophisticated tooling, and the use-case is a powerful and sweet pattern that is meant to live in every piece of software.
As a testing consultant, I read tons of testing articles throughout the years. The majority are nice-to-read, casual pieces of content that are not always worth your precious time. Once in a while, not very often, I land on an article that is shockingly good and could genuinely improve your test writing skills. I've cherry-picked these outstanding articles for you, and added my abstract nearby. Half of these articles relate directly to JavaScript/Node.js; the other half covers ubiquitous testing concepts that are applicable in every language
Why did I find these articles to be outstanding? First, the writing quality is excellent. Second, they deal with the 'new world of testing', not the commonly known 'TDD-ish' stuff but rather modern concepts and tooling
Too busy to read them all? Look for the articles decorated with a medal 🏅 - these are true masterpieces that you never want to miss
Before we start: If you haven't heard, I launched my comprehensive Node.js testing course a week ago (curriculum here). There are less than 48 hours left for the 🎁 special launch deal
Here they are, 10 outstanding testing articles:
📄 1. 'Selective Unit Testing – Costs and Benefits'
✍️ Author: Steve Sanderson
🔖 Abstract: We all found ourselves at least once in the ongoing and flammable discussion about 'units' vs 'integration'. This article delves into a greater level of specificity and discusses WHEN unit tests shine by considering the costs of writing these tests under various scenarios. Many treat their testing strategy as a static model - a testing technique they always apply regardless of the context. "Always write unit tests against functions", "Write mostly integration tests" are the types of arguments often heard. Conversely, this article suggests that the attractiveness of unit tests should be evaluated based on the costs and benefits per module. The article classifies multiple scenarios where the net value of unit tests is high or low, for example:
If your code is basically obvious – so at a glance you can see exactly what it does – then additional design and verification (e.g., through unit testing) yields extremely minimal benefit, if any
The author also puts a 2x2 model to visualize when the attractiveness of unit tests is high or low
Side note, not part of the article: Personally I (Yoni) always start with component tests, outside-in, cover first the high-level user flow details (a.k.a the testing diamond). Then later once I have functions, I add unit tests based on their net value. This article helped me a lot in classifying and evaluating the benefits of units in various scenarios
🔖 Abstract: The author outlines, with a code example, the unavoidable tragic fate of a tester who asserts on implementation details. Put aside the effort in testing so many details - going this route always ends with 'false positives' and 'false negatives' that cloud the tests' reliability. The article illustrates this with a frontend code example, but the takeaway lesson is ubiquitous to any kind of testing
"There are two distinct reasons that it's important to avoid testing implementation details. Tests which test implementation details:
Can break when you refactor application code. False negatives
May not fail when you break application code. False positives"
🔖 Abstract: This one is the entire Microservices and distributed modern-testing bible packed in a single long article that is also super engaging. I remember when I came across it four years ago, winter time; I spent an hour every day under my blanket before sleep with a smile spread over my face. I clicked on every link, paused after every paragraph to think - a whole new world was opening in front of me. In fact, it was so fascinating that it made me want to specialize in this domain. Fast forward, years later, this is a major part of my work and I enjoy every moment
This paper starts by explaining why E2E tests, unit tests, and exploratory QA will fall short in a distributed environment - and not only this, why any kind of coded test won't be enough and a rich toolbox of techniques is needed. It goes through a handful of modern testing techniques that are unfamiliar to most developers. One of its key parts deals with what should be the canonical developer's testing technique: the author advocates for "big unit tests" (i.e., component tests) as they strike a great balance between developer comfort and realism
I coined the term “step-up testing”, the general idea being to test at one layer above what’s generally advocated for. Under this model, unit tests would look more like integration tests (by treating I/O as a part of the unit under test within a bounded context), integration testing would look more like testing against real production, and testing in production looks more like, well, monitoring and exploration. The restructured test pyramid (test funnel?) for distributed systems would look like the following:
Beyond its main scope, whatever type of system you are dealing with - this article will broaden your perspective on testing and expose you to many new, highly applicable ideas
👓 Read time: > 2 hours (10,500 words with many links)
📄 4. 'How to Unit Test with Node.js?' (JavaScript examples, for beginners)
✍️ Author: Ryan Jones
🔖 Abstract: One single recommendation for beginners: every other article on this list covers advanced testing. This article, and only this one, is meant for testing newbies who are looking to take their first practical steps in this world
This tutorial was chosen from a handful of alternatives because it's well-written and relatively comprehensive. It covers the first-steps 'kata' that a beginner should learn first: the test anatomy syntax, test runner CLI, assertions, and asynchronous tests. It goes without saying that this knowledge won't be sufficient for covering a real-world app with testing, but it gets you safely to the next phase. My personal advice: after reading this one, your next step is learning about test doubles (mocking)
🔖 Abstract: The article opens with 'I hear that people feel an uncontrollable urge to write unit tests nowadays. If you are one of those affected, spare a few minutes and consider these reasons for NOT writing unit tests'. Despite these words, the article is not against unit tests as a principle but rather highlights when & where unit tests fall short. In these cases, other techniques should be considered. Here is an example: unit tests inherently have a lower return on investment, and the author comes with a sound analogy for this: 'If you are painting a house, you want to start with the biggest brush at hand and spare the tiny brush for the end to deal with fine details. If you begin your QA work with unit tests, you are essentially trying to paint the entire house using the finest Chinese calligraphy brush...'
📄 6. 'Mocking is a Code Smell' (JavaScript examples)
✍️ Author: Eric Elliott
🔖 Abstract: Most of the articles here belong to the 'modern wave of testing'; here is something more 'classic', appealing to TDD lovers or just anyone who needs to write unit tests. This article is about HOW to reduce the amount of mocking (test doubles) in your tests - not only because mocks are an overhead in test writing, but also because they hint that something might be wrong. In other words, mocking is not definitely wrong and something to fix right away, but a lot of mocking is a sign of something not ideal. Consider a module that inherits from many others, or a chatty one that collaborates with a handful of other modules to do its job - testing and changing this structure is a burden:
"Mocking is required when our decomposition strategy has failed"
The author goes through a variety of techniques to design more autonomous units, like using pure functions by isolating side-effects from the rest of the program logic, using pub/sub, isolating I/O, composing units with patterns like monadic composition, and more
The overall article tone is balanced. In some parts, it encourages functional programming and techniques that are far from the mainstream - consider reading these few parts with a grain of salt
🔖 Abstract: I love this one so much. The author exemplifies how, unexpectedly, it is sometimes the good developers with their great intentions who write bad tests:
Too often, software developers approach unit testing with the same flawed thinking... They mechanically apply all the “rules” they learned in production code without examining whether they’re appropriate for tests. As a result, they build skyscrapers at the beach
Concrete code examples show how test readability deteriorates once we apply 'skyscraper' thinking, and how to keep it simple. In one part, he demonstrates how thoughtfully violating the DRY principle allows the reader to stay within the test while still keeping the code maintainable. This article alone, in 11 minutes, can greatly improve the tests of developers who tend to write sophisticated tests. If you have someone like this on your team, you now know what to do
📄 8. 'An Overview of JavaScript Testing in 2022' (JavaScript examples)
✍️ Author: Vitali Zaidman
🔖 Abstract: This paper is unique here as it doesn't cover a single topic but is rather a rundown of (almost) all JavaScript testing tools. It allows you to enrich the toolbox in your mind and have more screwdrivers for more types of screws. For example, knowing that there are IDE extensions that show coverage information right within the code might help you boost test adoption in the team, if needed. Knowing that there are solid, free, open-source visual regression tools might encourage you to dip your toes in this water, to name a few examples.
"We reviewed the most trending testing strategies and tools in the web development community and hopefully made it easier for you to test your sites. In the end, the best decisions regarding application architecture today are made by understanding general patterns that are trending in the very active community of developers, and combining them with your own experience and the characteristics of your application."
The author was also kind enough to leave pros/cons nearby most tools, so the reader can quickly get a sense of how the various options stack up against each other. The article covers categories like assertion libraries, test runners, code coverage tools, visual regression tools, E2E suites, and more
🔖 Abstract: 'Testing in production' is a provocative term that sounds like a risky and careless approach - testing in production instead of verifying the delivery beforehand (yet another case of bad testing terminology). In practice, testing in production doesn't replace coding-time testing; it adds an additional layer of confidence by safely testing in three more phases: deployment, release, and post-release. This comprehensive article covers dozens of techniques, some unusual, like traffic shadowing, tap compare, and more. More than anything else, it illustrates a holistic testing workflow, building confidence cumulatively from the developer machine until the new version is serving users in production
I’m more and more convinced that staging environments are like mocks - at best a pale imitation of the genuine article and the worst form of confirmation bias.
It’s still better than having nothing - but “works in staging” is only one step better than “works on my machine”.
📄 10. 'Please don't mock me' (JavaScript examples, from JSConf)
🏅 This is a masterpiece
✍️ Author: Justin Searls
🔖 Abstract: This fantastic YouTube talk deals with the Achilles heel of testing: where exactly to mock. The dilemma of where to end the test scope - what should be mocked and what shouldn't - is presumably the most strategic test design decision. Consider, for example, module A which interacts with module B. If you isolate A by mocking B, A's tests will always pass, even when B's interface changes and A's code doesn't follow. This makes A's tests highly stable, but... production will fail in hours. In his talk, Justin says:
"A test that never fails is a bad test because it doesn't tell you anything. Design tests to fail"
Then he goes on to tackle many other interesting mocking crossroads, with beautiful visuals and tons of insights. Please don't miss this one
Here are a few articles that I wrote. Obviously I don't 'recommend' my own craft, just modestly checking whether they appeal to you. Together, these articles gained 25,000 GitHub stars - maybe you'll find one of them useful?
This post is about tests that are easy to write - 5-8 lines typically - yet cover dark and dangerous corners of our applications that are often overlooked
Some context first: How do we test a modern backend? With the testing diamond, of course, by putting the focus on component/integration tests that cover all the layers, including a real DB. With this approach, our tests 99% resemble the production and the user flows, while the development experience is almost as good as with unit tests. Sweet. If this topic is of interest, we've also written a guide with 50 best practices for integration tests in Node.js
But there is a pitfall: most developers write only semi-happy test cases that focus on the core user flows - invalid inputs, CRUD operations, various application states, etc. This is indeed the bread and butter, a great start, but a whole area is left uncovered. For example, typical tests don't simulate an unhandled promise rejection that leads to a process crash, nor do they simulate the webserver bootstrap phase that might fail and leave the process idle, or HTTP calls to external services that often end with timeouts and retries. They typically don't cover the health and readiness routes, nor the integrity of the OpenAPI spec against the actual route schemas, to name just a few examples. There are many dead bodies buried beyond the business logic - things that are sometimes not even bugs, but rather concern application downtime
Here are a handful of examples that might open your mind to a whole new class of risks and tests
July 2023: My testing course was launched: I've just released a comprehensive testing course that I've been working on for two years. 🎁 It's now on sale, but only for the month of July. Check it out at testjavascript.com
👉What & so what? - In all of your tests, you assume that the app has already started successfully, lacking a test against the initialization flow. This is a pity because this phase hides some potential catastrophic failures: First, initialization failures are frequent - many bad things can happen here, like a DB connection failure or a new version that crashes during deployment. For this reason, runtime platforms (like Kubernetes and others) encourage components to signal when they are ready (see readiness probe). Errors at this stage also have a dramatic effect over the app health - if the initialization fails and the process stays alive, it becomes a 'zombie process'. In this scenario, the runtime platform won't realize that something went bad, forward traffic to it and avoid creating alternative instances. Besides exiting gracefully, you may want to consider logging, firing a metric, and adjusting your /readiness route. Does it work? only test can tell!
📝 Code
Code under test, api.js:
// A common express server initialization
const startWebServer = () => {
  return new Promise((resolve, reject) => {
    try {
      // A typical Express setup
      expressApp = express();
      defineRoutes(expressApp); // a function that defines all routes
      expressApp.listen(process.env.WEB_SERVER_PORT, () => resolve()); // resolve once listening
    } catch (error) {
      // log here, fire a metric, maybe even retry and finally:
      process.exit();
    }
  });
};
The test:
const api = require('./entry-points/api'); // our api starter that exposes 'startWebServer' function
const routes = require('./entry-points/routes'); // the module holding defineRoutes (path assumed)
const sinon = require('sinon'); // a mocking library

test('When an error happens during the startup phase, then the process exits', async () => {
  // Arrange
  const processExitListener = sinon.stub(process, 'exit');
  // 👇 Choose a function that is part of the initialization phase and make it fail
  sinon
    .stub(routes, 'defineRoutes')
    .throws(new Error('Cant initialize connection'));

  // Act
  await api.startWebServer();

  // Assert
  expect(processExitListener.called).toBe(true);
});
👉What & why - For many, testing error means checking the exception type or the API response. This leaves one of the most essential parts uncovered - making the error correctly observable. In plain words, ensuring that it's being logged correctly and exposed to the monitoring system. It might sound like an internal thing, implementation testing, but actually, it goes directly to a user. Yes, not the end-user, but rather another important one - the ops user who is on-call. What are the expectations of this user? At the very basic level, when a production issue arises, she must see detailed log entries, including stack trace, cause and other properties. This info can save the day when dealing with production incidents. On to of this, in many systems, monitoring is managed separately to conclude about the overall system state using cumulative heuristics (e.g., an increase in the number of errors over the last 3 hours). To support this monitoring needs, the code also must fire error metrics. Even tests that do try to cover these needs take a naive approach by checking that the logger function was called - but hey, does it include the right data? Some write better tests that check the error type that was passed to the logger, good enough? No! The ops user doesn't care about the JavaScript class names but the JSON data that is sent out. The following test focuses on the specific properties that are being made observable:
📝 Code
test('When exception is thrown during request, Then logger reports the mandatory fields', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    status: 'approved',
  };
  const metricsExporterDouble = sinon.stub(metricsExporter, 'fireMetric');
  sinon
    .stub(OrderRepository.prototype, 'addOrder')
    .rejects(new AppError('saving-failed', 'Order could not be saved', 500));
  const loggerDouble = sinon.stub(logger, 'error');

  // Act
  await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  expect(loggerDouble).toHaveBeenCalledWith({
    name: 'saving-failed',
    status: 500,
    stack: expect.any(String),
    message: expect.any(String),
  });
  expect(metricsExporterDouble).toHaveBeenCalledWith('error', {
    errorName: 'saving-failed', // should match the error thrown above
  });
});
👽 The 'unexpected visitor' test - when an uncaught exception meets our code
👉What & why - A typical error flow test falsely assumes two conditions: A valid error object was thrown, and it was caught. Neither is guaranteed, let's focus on the 2nd assumption: it's common for certain errors to left uncaught. The error might get thrown before your framework error handler is ready, some npm libraries can throw surprisingly from different stacks using timer functions, or you just forget to set someEventEmitter.on('error', ...). To name a few examples. These errors will find their way to the global process.on('uncaughtException') handler, hopefully if your code subscribed. How do you simulate this scenario in a test? naively you may locate a code area that is not wrapped with try-catch and stub it to throw during the test. But here's a catch22: if you are familiar with such area - you are likely to fix it and ensure its errors are caught. What do we do then? we can bring to our benefit the fact the JavaScript is 'borderless', if some object can emit an event, we as its subscribers can make it emit this event ourselves, here's an example:
📝 Code
test('When an unhandled exception is thrown, then process stays alive and the error is logged', async () => {
  // Arrange
  const loggerDouble = sinon.stub(logger, 'error');
  const processExitListener = sinon.stub(process, 'exit');
  const errorToThrow = new Error('An error that wont be caught 😳');

  // Act
  process.emit('uncaughtException', errorToThrow); // 👈 Where the magic is

  // Assert
  expect(processExitListener.called).toBe(false);
  expect(loggerDouble).toHaveBeenCalledWith(errorToThrow);
});
🕵🏼 The 'hidden effect' test - when the code should not mutate at all
👉What & so what - In common scenarios, the code under test should stop early like when the incoming payload is invalid or a user doesn't have sufficient credits to perform an operation. In these cases, no DB records should be mutated. Most tests out there in the wild settle with testing the HTTP response only - got back HTTP 400? great, the validation/authorization probably work. Or does it? The test trusts the code too much, a valid response doesn't guarantee that the code behind behaved as design. Maybe a new record was added although the user has no permissions? Clearly you need to test this, but how would you test that a record was NOT added? There are two options here: If the DB is purged before/after every test, than just try to perform an invalid operation and check that the DB is empty afterward. If you're not cleaning the DB often (like me, but that's another discussion), the payload must contain some unique and queryable value that you can query later and hope to get no records. This is how it looks like:
📝 Code
it('When adding an invalid order, then it returns 400 and NOT retrievable', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    mode: 'draft',
    externalIdentifier: uuid(), // no existing record has this value
  };

  // Act
  const { status: addingHTTPStatus } = await axiosAPIClient.post(
    '/order',
    orderToAdd
  );

  // Assert
  const { status: fetchingHTTPStatus } = await axiosAPIClient.get(
    `/order/externalIdentifier/${orderToAdd.externalIdentifier}`
  ); // Trying to get the order that should have failed
  expect({ addingHTTPStatus, fetchingHTTPStatus }).toMatchObject({
    addingHTTPStatus: 400,
    fetchingHTTPStatus: 404,
  }); // 👆 Check that no such record exists
});
🧨 The 'overdoing' test - when the code should mutate but it's doing too much
👉What & why - This is how a typical data-oriented test looks like: first you add some records, then approach the code under test, and finally assert what happens to these specific records. So far, so good. There is one caveat here though: since the test narrows it focus to specific records, it ignores whether other record were unnecessarily affected. This can be really bad, here's a short real-life story that happened to my customer: Some data access code changed and incorporated a bug that updates ALL the system users instead of just one. All test pass since they focused on a specific record which positively updated, they just ignored the others. How would you test and prevent? here is a nice trick that I was taught by my friend Gil Tayar: in the first phase of the test, besides the main records, add one or more 'control' records that should not get mutated during the test. Then, run the code under test, and besides the main assertion, check also that the control records were not affected:
📝 Code
test('When deleting an existing order, Then it should NOT be retrievable',async()=>{ // Arrange const orderToDelete ={ userId:1, productId:2, }; const deletedOrder =(await axiosAPIClient.post('/order', orderToDelete)).data .id;// We will delete this soon const orderNotToBeDeleted = orderToDelete; const notDeletedOrder =( await axiosAPIClient.post('/order', orderNotToBeDeleted) ).data.id;// We will not delete this // Act await axiosAPIClient.delete(`/order/${deletedOrder}`); // Assert const{status: getDeletedOrderStatus }=await axiosAPIClient.get( `/order/${deletedOrder}` ); const{status: getNotDeletedOrderStatus }=await axiosAPIClient.get( `/order/${notDeletedOrder}` ); expect(getNotDeletedOrderStatus).toBe(200); expect(getDeletedOrderStatus).toBe(404); });
🕰 The 'slow collaborator' test - when the other HTTP service times out
👉What & why - When your code approaches other services/microservices via HTTP, savvy testers minimize end-to-end tests because these tests lean toward happy paths (it's harder to simulate scenarios). This mandates using some mocking tool to act like the remote service, for example, using tools like nock or wiremock. These tools are great, only some are using them naively and check mainly that calls outside were indeed made. What if the other service is not available in production, what if it is slower and times out occasionally (one of the biggest risks of Microservices)? While you can't wholly save this transaction, your code should do the best given the situation and retry, or at least log and return the right status to the caller. All the network mocking tools allow simulating delays, timeouts and other 'chaotic' scenarios. Question left is how to simulate slow response without having slow tests? You may use fake timers and trick the system into believing as few seconds passed in a single tick. If you're using nock, it offers an interesting feature to simulate timeouts quickly: the .delay function simulates slow responses, then nock will realize immediately if the delay is higher than the HTTP client timeout and throw a timeout event immediately without waiting
📝 Code
// In this example, our code accepts new Orders and while processing them approaches the Users Microservice test('When users service times out, then return 503 (option 1 with fake timers)',async()=>{ //Arrange const clock = sinon.useFakeTimers(); config.HTTPCallTimeout=1000;// Set a timeout for outgoing HTTP calls nock(`${config.userServiceURL}/user/`) .get('/1',()=> clock.tick(2000))// Reply delay is bigger than configured timeout 👆 .reply(200); const loggerDouble = sinon.stub(logger,'error'); const orderToAdd ={ userId:1, productId:2, mode:'approved', }; //Act // 👇try to add new order which should fail due to User service not available const response =await axiosAPIClient.post('/order', orderToAdd); //Assert // 👇At least our code does its best given this situation expect(response.status).toBe(503); expect(loggerDouble.lastCall.firstArg).toMatchObject({ name:'user-service-not-available', stack: expect.any(String), message: expect.any(String), }); });
💊 The 'poisoned message' test - when the message consumer gets an invalid payload that might put it in stagnation
👉What & so what - When testing flows that start or end in a queue, I bet you're going to bypass the message queue layer, where the code and libraries consume a queue, and you approach the logic layer directly. Yes, it makes things easier but leaves a class of uncovered risks. For example, what if the logic part throws an error or the message schema is invalid but the message queue consumer fails to translate this exception into a proper message queue action? For example, the consumer code might fail to reject the message or increment the number of attempts (depends on the type of queue that you're using). When this happens, the message will enter a loop where it always served again and again. Since this will apply to many messages, things can get really bad as the queue gets highly saturated. For this reason this syndrome was called the 'poisoned message'. To mitigate this risk, the tests' scope must include all the layers like how you probably do when testing against APIs. Unfortunately, this is not as easy as testing with DB because message queues are flaky, here is why
When testing with real queues things get curios and curiouser: tests from different process will steal messages from each other, purging queues is harder that you might think (e.g. SQS demand 60 seconds to purge queues), to name a few challenges that you won't find when dealing with real DB
Here is a strategy that works for many teams and holds a small compromise - use a fake in-memory message queue. By 'fake' I mean something simplistic that acts like a stub/spy and do nothing but telling when certain calls are made (e.g., consume, delete, publish). You might find reputable fakes/stubs for your own message queue like this one for SQS and you can code one easily yourself. No worries, I'm not a favour of maintaining myself testing infrastructure, this proposed component is extremely simply and unlikely to surpass 50 lines of code (see example below). On top of this, whether using a real or fake queue, one more thing is needed: create a convenient interface that tells to the test when certain things happened like when a message was acknowledged/deleted or a new message was published. Without this, the test never knows when certain events happened and lean toward quirky techniques like polling. Having this setup, the test will be short, flat and you can easily simulate common message queue scenarios like out of order messages, batch reject, duplicated messages and in our example - the poisoned message scenario (using RabbitMQ):
📝 Code
Create a fake message queue that does almost nothing but record calls, see full example here
classFakeMessageQueueProviderextendsEventEmitter{ // Implement here publish(message){} consume(queueName, callback){} }
Make your message queue client accept real or fake provider
classMessageQueueClientextendsEventEmitter{ // Pass to it a fake or real message queue constructor(customMessageQueueProvider){} publish(message){} consume(queueName, callback){} // Simple implementation can be found here: // https://github.com/testjavascript/nodejs-integration-tests-best-practices/blob/master/example-application/libraries/fake-message-queue-provider.js }
Expose a convenient function that tells when certain calls where made
constFakeMessageQueueProvider=require('./libs/fake-message-queue-provider'); constMessageQueueClient=require('./libs/message-queue-client'); const newOrderService =require('./domain/newOrderService'); test('When a poisoned message arrives, then it is being rejected back',async()=>{ // Arrange const messageWithInvalidSchema ={nonExistingProperty:'invalid❌'}; const messageQueueClient =newMessageQueueClient( newFakeMessageQueueProvider() ); // Subscribe to new messages and passing the handler function messageQueueClient.consume('orders.new', newOrderService.addOrder); // Act await messageQueueClient.publish('orders.new', messageWithInvalidSchema); // Now all the layers of the app will get stretched 👆, including logic and message queue libraries // Assert await messageQueueClient.waitFor('reject',{howManyTimes:1}); // 👆 This tells us that eventually our code asked the message queue client to reject this poisoned message });
👉What & why - When publishing a library to npm, easily all your tests might pass BUT... the same functionality will fail over the end-user's computer. How come? tests are executed against the local developer files, but the end-user is only exposed to artifacts that were built. See the mismatch here? after running the tests, the package files are transpiled (I'm looking at you babel users), zipped and packed. If a single file is excluded due to .npmignore or a polyfill is not added correctly, the published code will lack mandatory files
📝 Code
Consider the following scenario, you're developing a library, and you wrote this code:
See, 100% coverage, all tests pass locally and in the CI ✅, it just won't work in production 👹. Why? because you forgot to include the calculate.js in the package.json files array 👆
What can we do instead? we can test the library as its end-users. How? publish the package to a local registry like verdaccio, let the tests install and approach the published code. Sounds troublesome? judge yourself 👇
📝 Code
// global-setup.js // 1. Setup the in-memory NPM registry, one function that's it! 🔥 awaitsetupVerdaccio(); // 2. Building our package awaitexec('npm',['run','build'],{ cwd: packagePath, }); // 3. Publish it to the in-memory registry awaitexec('npm',['publish','--registry=http://localhost:4873'],{ cwd: packagePath, }); // 4. Installing it in the consumer directory awaitexec('npm',['install','my-package','--registry=http://localhost:4873'],{ cwd: consumerPath, }); // Test file in the consumerPath // 5. Test the package 🚀 test("should succeed",async()=>{ const{ fn1 }=awaitimport('my-package'); expect(fn1()).toEqual(1); });
Testing different version of peer dependency you support - let's say your package support react 16 to 18, you can now test that
You want to test ESM and CJS consumers
If you have CLI application you can test it like your users
Making sure all the voodoo magic in that babel file is working as expected
🗞 The 'broken contract' test - when the code is great but its corresponding OpenAPI docs leads to a production bug
👉What & so what - Quite confidently I'm sure that almost no team test their OpenAPI correctness. "It's just documentation", "we generate it automatically based on code" are typical belief found for this reason. Let me show you how this auto generated documentation can be wrong and lead not only to frustration but also to a bug. In production.
Consider the following scenario, you're requested to return HTTP error status code if an order is duplicated but forget to update the OpenAPI specification with this new HTTP status response. While some framework can update the docs with new fields, none can realize which errors your code throws, this labour is always manual. On the other side of the line, the API client is doing everything just right, going by the spec that you published, adding orders with some duplication because the docs don't forbid doing so. Then, BOOM, production bug -> the client crashes and shows an ugly unknown error message to the user. This type of failure is called the 'contract' problem when two parties interact, each has a code that works perfect, they just operate under different spec and assumptions. While there are fancy sophisticated and exhaustive solution to this challenge (e.g., PACT), there are also leaner approaches that gets you covered easily and quickly (at the price of covering less risks).
The following sweet technique is based on libraries (jest, mocha) that listen to all network responses, compare the payload against the OpenAPI document, and if any deviation is found - make the test fail with a descriptive error. With this new weapon in your toolbox and almost zero effort, another risk is ticked. It's a pity that these libs can't assert also against the incoming requests to tell you that your tests use the API wrong. One small caveat and an elegant solution: These libraries dictate putting an assertion statement in every test - expect(response).toSatisfyApiSpec(), a bit tedious and relies on human discipline. You can do better if your HTTP client supports plugin/hook/interceptor by putting this assertion in a single place that will apply in all the tests:
The OpenAPI doesn't document HTTP status '409', no framework knows to update the OpenAPI doc based on thrown exceptions
"responses":{ "200":{ "description":"successful", } , "400":{ "description":"Invalid ID", "content":{} },// No 409 in this list😲👈 }
The test code
const jestOpenAPI =require('jest-openapi'); jestOpenAPI('../openapi.json'); test('When an order with duplicated coupon is added , then 409 error should get returned',async()=>{ // Arrange const orderToAdd ={ userId:1, productId:2, couponId:uuid(), }; await axiosAPIClient.post('/order', orderToAdd); // Act // We're adding the same coupon twice 👇 const receivedResponse =await axios.post('/order', orderToAdd); // Assert; expect(receivedResponse.status).toBe(409); expect(res).toSatisfyApiSpec(); // This 👆 will throw if the API response, body or status, is different that was it stated in the OpenAPI });
Trick: If your HTTP client supports any kind of plugin/hook/interceptor, put the following code in 'beforeAll'. This covers all the tests against OpenAPI mismatches
beforeAll(()=>{ axios.interceptors.response.use((response)=>{ expect(response.toSatisfyApiSpec()); // With this 👆, add nothing to the tests - each will fail if the response deviates from the docs }); });
The examples above were not meant only to be a checklist of 'don't forget' test cases, but rather a fresh mindset on what tests could cover for you. Modern tests are not just about functions, or user flows, but any risk that might visit your production. This is doable only with component/integration tests but never with unit or end-to-end tests. Why? Because unlike unit you need all the parts to play together (e.g., the DB migration file, with the DAL layer and the error handler all together). Unlike E2E, you have the power to simulate in-process scenarios that demand some tweaking and mocking. Component tests allow you to include many production moving parts early on your machine. I like calling this 'production-oriented development'
Intro - Why discuss yet another ORM (or the man who had a stain on his fancy suit)?
Betteridge's law of headlines suggests that a 'headline that ends in a question mark can be answered by the word NO'. Will this article follow this rule?
Imagine an elegant businessman (or woman) walking into a building, wearing a fancy tuxedo and a luxury watch wrapped around his wrist. He smiles and waves all over to say hello while people around are staring admirably. You get a little closer, then shockingly, while standing nearby it's hard to ignore a bold dark stain over his white shirt. What a dissonance, suddenly all of that glamour is stained
Like this businessman, Node is highly capable and popular, and yet, in certain areas, its offering basket is stained with inferior offerings. One of these areas is the ORM space; "I wish we had something like (Java) Hibernate or (.NET) Entity Framework" are words commonly heard from Node developers. What about existing mature ORMs like TypeORM and Sequelize? We owe so much to these maintainers, and yet, the produced developer experience and the level of maintenance just don't feel delightful - some may say even mediocre. At least so I believed before writing this article...
From time to time, a shiny new ORM is launched, and there is hope. Then soon it's realized that these new emerging projects are more of the same, if they survive at all. Until one day, Prisma ORM arrived, surrounded with glamour: it's gaining tons of attention all over, producing fantastic content, being used by respectful frameworks and... it raised $40,000,000 (40 million) to build the next-generation ORM. Is it the 'Ferrari' ORM we've been waiting for? Is it a game changer? If you're the 'no ORM for me' type, will this one make you convert your religion?
In Practica.js (the Node.js starter based off Node.js best practices with 83,000 stars) we aim to make the best decisions for our users. The Prisma hype made us stop for a second, evaluate its unique offering, and conclude whether we should upgrade our toolbox.
This article is certainly not an 'ORM 101' but rather a spotlight on specific dimensions in which Prisma aims to shine or struggles. It's compared against the two most popular Node.js ORMs - TypeORM and Sequelize. Why not others? Why weren't other promising contenders like MikroORM covered? Just because they are not as popular yet, and maturity is a critical trait of ORMs
Ready to explore how good Prisma is and whether you should throw away your current tools?
Node.js is maturing. Many patterns and frameworks were embraced - it's my belief that developers' productivity dramatically increased in the past years. One downside of maturity is habits - we now reuse existing techniques more often. How is this a problem?
In his popular book 'Atomic Habits', the author James Clear states that:
"Mastery is created by habits. However, sometimes when we're on auto-pilot performing habits, we tend to slip up... Just being we are gaining experience through performing the habits does not mean that we are improving. We actually go backwards on the improvement scale with most habits that turn into auto-pilot". In other words, practice makes perfect, and bad practices make things worst
We copy-paste, mentally and physically, things that we are used to, but these things are not necessarily right anymore. Like animals who shed their shells or skin to adapt to a new reality, so should the Node.js community constantly gauge its existing patterns, discuss and change
Luckily, unlike other languages that are more committed to specific design paradigms (Java, Ruby) - Node is a house of many ideas. In this community, I feel safe to question some of our good-old tooling and patterns. The list below contains my personal beliefs, which are brought with reasoning and examples.
Are those disruptive thoughts surely correct? I'm not sure. There is one thing I'm sure about though - for Node.js to live longer, we need to encourage critics, focus our loyalty on innovation, and keep the discussion going. The outcome of this discussion is not "don't use this tool!" but rather becoming familiar with other techniques that, under some circumstances, might be a better fit
The true crab's exoskeleton is hard and inflexible; it must shed its restrictive exoskeleton to grow and reveal the new, roomier shell
This post is about tests that are easy to write, 5-8 lines typically; they cover dark and dangerous corners of our applications, but are often overlooked
Some context first: How do we test a modern backend? With the testing diamond, of course, by putting the focus on component/integration tests that cover all the layers, including a real DB. With this approach, our tests resemble 99% of the production setup and the user flows, while the development experience is almost as good as with unit tests. Sweet. If this topic is of interest, we've also written a guide with 50 best practices for integration tests in Node.js
But there is a pitfall: most developers write only semi-happy test cases that are focused on the core user flows, like invalid inputs, CRUD operations and various application states. This is indeed the bread and butter, a great start, but a whole area is left uncovered. For example, typical tests don't simulate an unhandled promise rejection that leads to a process crash, nor do they simulate the webserver bootstrap phase that might fail and leave the process idle, or HTTP calls to external services that often end with timeouts and retries. They typically don't cover the health and readiness route, nor the integrity of the OpenAPI spec against the actual routes schema, to name just a few examples. There are many dead bodies buried beyond business logic, things that sometimes go even beyond bugs and are concerned with application downtime
Here are a handful of examples that might open your mind to a whole new class of risks and tests
July 2023: My testing course was launched: I've just released a comprehensive testing course that I've been working on for two years. 🎁 It's now on sale, but only for the month of July. Check it out at testjavascript.com
🧟‍♀️ The 'zombie process' test - when the app should exit but stays alive
👉What & so what? - In all of your tests, you assume that the app has already started successfully, lacking a test against the initialization flow. This is a pity because this phase hides some potentially catastrophic failures: First, initialization failures are frequent - many bad things can happen here, like a DB connection failure or a new version that crashes during deployment. For this reason, runtime platforms (like Kubernetes and others) encourage components to signal when they are ready (see readiness probe). Errors at this stage also have a dramatic effect on the app health - if the initialization fails and the process stays alive, it becomes a 'zombie process'. In this scenario, the runtime platform won't realize that something went bad, will keep forwarding traffic to it, and will avoid creating alternative instances. Besides exiting gracefully, you may want to consider logging, firing a metric, and adjusting your /readiness route. Does it work? Only a test can tell!
📝 Code
Code under test, api.js:
```javascript
// A common express server initialization
const startWebServer = () => {
  return new Promise((resolve, reject) => {
    try {
      // A typical Express setup
      expressApp = express();
      defineRoutes(expressApp); // a function that defines all routes
      expressApp.listen(process.env.WEB_SERVER_PORT);
      resolve(expressApp); // signal to the caller that the app is ready
    } catch (error) {
      // log here, fire a metric, maybe even retry and finally:
      process.exit();
    }
  });
};
```
The test:
```javascript
const api = require('./entry-points/api'); // our api starter that exposes 'startWebServer' function
const routes = require('./entry-points/routes'); // the module that exposes 'defineRoutes' (path assumed)
const sinon = require('sinon'); // a mocking library

test('When an error happens during the startup phase, then the process exits', async () => {
  // Arrange
  const processExitListener = sinon.stub(process, 'exit');
  // 👇 Choose a function that is part of the initialization phase and make it fail
  sinon
    .stub(routes, 'defineRoutes')
    .throws(new Error('Cant initialize connection'));

  // Act
  await api.startWebServer();

  // Assert
  expect(processExitListener.called).toBe(true);
});
```
👀 The 'observability' test - when the error is correct but not observable
👉What & why - For many, testing errors means checking the exception type or the API response. This leaves one of the most essential parts uncovered - making the error correctly observable. In plain words, ensuring that it's being logged correctly and exposed to the monitoring system. It might sound like an internal thing, implementation testing, but actually, it goes directly to a user. Yes, not the end-user, but rather another important one - the ops user who is on-call. What are the expectations of this user? At the very basic level, when a production issue arises, she must see detailed log entries, including stack trace, cause and other properties. This info can save the day when dealing with production incidents. On top of this, in many systems, monitoring is managed separately, concluding about the overall system state using cumulative heuristics (e.g., an increase in the number of errors over the last 3 hours). To support these monitoring needs, the code must also fire error metrics. Even tests that do try to cover these needs take a naive approach by checking that the logger function was called - but hey, does it include the right data? Some write better tests that check the error type that was passed to the logger. Good enough? No! The ops user doesn't care about the JavaScript class names but about the JSON data that is sent out. The following test focuses on the specific properties that are being made observable:
📝 Code
```javascript
test('When an exception is thrown during a request, Then the logger reports the mandatory fields', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    status: 'approved',
  };
  const metricsExporterDouble = sinon.stub(metricsExporter, 'fireMetric');
  sinon
    .stub(OrderRepository.prototype, 'addOrder')
    .rejects(new AppError('saving-failed', 'Order could not be saved', 500));
  const loggerDouble = sinon.stub(logger, 'error');

  // Act
  await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  expect(loggerDouble).toHaveBeenCalledWith({
    name: 'saving-failed',
    status: 500,
    stack: expect.any(String),
    message: expect.any(String),
  });
  expect(metricsExporterDouble).toHaveBeenCalledWith('error', {
    errorName: 'saving-failed', // matches the AppError name thrown above
  });
});
```
👽 The 'unexpected visitor' test - when an uncaught exception meets our code
👉What & why - A typical error flow test falsely assumes two conditions: a valid error object was thrown, and it was caught. Neither is guaranteed; let's focus on the 2nd assumption: it's common for certain errors to be left uncaught. The error might get thrown before your framework error handler is ready, some npm libraries can throw surprisingly from different stacks using timer functions, or you may just forget to set someEventEmitter.on('error', ...), to name a few examples. These errors will find their way to the global process.on('uncaughtException') handler - hopefully, if your code subscribed to it. How do you simulate this scenario in a test? Naively, you may locate a code area that is not wrapped with try-catch and stub it to throw during the test. But here's a catch-22: if you are familiar with such an area, you are likely to fix it and ensure its errors are caught. What do we do then? We can bring to our benefit the fact that JavaScript is 'borderless': if some object can emit an event, we as its subscribers can make it emit this event ourselves. Here's an example:
📝 Code
```javascript
test('When an unhandled exception is thrown, then process stays alive and the error is logged', async () => {
  // Arrange
  const loggerDouble = sinon.stub(logger, 'error');
  const processExitListener = sinon.stub(process, 'exit');
  const errorToThrow = new Error('An error that wont be caught 😳');

  // Act
  process.emit('uncaughtException', errorToThrow); // 👈 Where the magic is

  // Assert
  expect(processExitListener.called).toBe(false);
  expect(loggerDouble).toHaveBeenCalledWith(errorToThrow);
});
```
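For this test to pass, the app must have registered a global handler beforehand. Here is a minimal sketch of what such a handler might look like; the errorHandler helper and its 'trusted error' heuristic are hypothetical, not part of the original article:

```javascript
// A hypothetical global handler - log first, then decide whether to crash
process.on('uncaughtException', (error) => {
  logger.error(error);
  // Exit only for unknown, non-trusted errors; otherwise stay alive (assumed policy)
  if (!errorHandler.isTrustedError(error)) {
    process.exit(1);
  }
});
```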
🕵🏼 The 'hidden effect' test - when the code should not mutate at all
👉What & so what - In common scenarios, the code under test should stop early, like when the incoming payload is invalid or a user doesn't have sufficient credits to perform an operation. In these cases, no DB records should be mutated. Most tests out there in the wild settle with testing the HTTP response only - got back HTTP 400? Great, the validation/authorization probably works. Or does it? The test trusts the code too much; a valid response doesn't guarantee that the code behind behaved as designed. Maybe a new record was added although the user has no permissions? Clearly you need to test this, but how would you test that a record was NOT added? There are two options here: if the DB is purged before/after every test, then just try to perform an invalid operation and check that the DB is empty afterward. If you're not cleaning the DB often (like me, but that's another discussion), the payload must contain some unique and queryable value that you can query later, hoping to get no records. This is how it looks:
📝 Code
```javascript
it('When adding an invalid order, then it returns 400 and NOT retrievable', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    mode: 'draft',
    externalIdentifier: uuid(), // no existing record has this value
  };

  // Act
  const { status: addingHTTPStatus } = await axiosAPIClient.post(
    '/order',
    orderToAdd
  );

  // Assert
  const { status: fetchingHTTPStatus } = await axiosAPIClient.get(
    `/order/externalIdentifier/${orderToAdd.externalIdentifier}`
  ); // Trying to get the order that should have failed
  expect({ addingHTTPStatus, fetchingHTTPStatus }).toMatchObject({
    addingHTTPStatus: 400,
    fetchingHTTPStatus: 404,
  });
  // 👆 Check that no such record exists
});
```
🧨 The 'overdoing' test - when the code should mutate but it's doing too much
👉What & why - This is how a typical data-oriented test looks: first you add some records, then approach the code under test, and finally assert what happens to these specific records. So far, so good. There is one caveat here though: since the test narrows its focus to specific records, it ignores whether other records were unnecessarily affected. This can be really bad; here's a short real-life story that happened to a customer of mine: some data access code changed and incorporated a bug that updates ALL the system users instead of just one. All tests passed since they focused on a specific record which was indeed updated; they just ignored the others. How would you test for and prevent this? Here is a nice trick that I was taught by my friend Gil Tayar: in the first phase of the test, besides the main records, add one or more 'control' records that should not get mutated during the test. Then, run the code under test, and besides the main assertion, check also that the control records were not affected:
📝 Code
```javascript
test('When deleting an existing order, Then it should NOT be retrievable', async () => {
  // Arrange
  const orderToDelete = {
    userId: 1,
    productId: 2,
  };
  const deletedOrder = (await axiosAPIClient.post('/order', orderToDelete)).data
    .id; // We will delete this soon
  const orderNotToBeDeleted = orderToDelete;
  const notDeletedOrder = (
    await axiosAPIClient.post('/order', orderNotToBeDeleted)
  ).data.id; // We will not delete this

  // Act
  await axiosAPIClient.delete(`/order/${deletedOrder}`);

  // Assert
  const { status: getDeletedOrderStatus } = await axiosAPIClient.get(
    `/order/${deletedOrder}`
  );
  const { status: getNotDeletedOrderStatus } = await axiosAPIClient.get(
    `/order/${notDeletedOrder}`
  );
  expect(getNotDeletedOrderStatus).toBe(200);
  expect(getDeletedOrderStatus).toBe(404);
});
```
🕰 The 'slow collaborator' test - when the other HTTP service times out
👉What & why - When your code approaches other services/microservices via HTTP, savvy testers minimize end-to-end tests because these tests lean toward happy paths (it's harder to simulate scenarios). This mandates using some mocking tool to act like the remote service, for example, tools like nock or wiremock. These tools are great, only some use them naively and check mainly that outgoing calls were indeed made. What if the other service is not available in production, or what if it is slower and times out occasionally (one of the biggest risks of Microservices)? While you can't wholly save such a transaction, your code should do its best given the situation: retry, or at least log and return the right status to the caller. All the network mocking tools allow simulating delays, timeouts and other 'chaotic' scenarios. The question left is how to simulate a slow response without having slow tests? You may use fake timers and trick the system into believing that a few seconds passed in a single tick. If you're using nock, it offers an interesting feature to simulate timeouts quickly: the .delay function simulates slow responses; nock will realize immediately if the delay is higher than the HTTP client timeout and throw a timeout event right away without waiting
📝 Code
```javascript
// In this example, our code accepts new Orders and while processing them approaches the Users Microservice
test('When users service times out, then return 503 (option 1 with fake timers)', async () => {
  // Arrange
  const clock = sinon.useFakeTimers();
  config.HTTPCallTimeout = 1000; // Set a timeout for outgoing HTTP calls
  nock(`${config.userServiceURL}/user/`)
    .get('/1', () => clock.tick(2000)) // Reply delay is bigger than the configured timeout 👆
    .reply(200);
  const loggerDouble = sinon.stub(logger, 'error');
  const orderToAdd = {
    userId: 1,
    productId: 2,
    mode: 'approved',
  };

  // Act
  // 👇 try to add a new order which should fail due to the User service not being available
  const response = await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  // 👇 At least our code does its best given this situation
  expect(response.status).toBe(503);
  expect(loggerDouble.lastCall.firstArg).toMatchObject({
    name: 'user-service-not-available',
    stack: expect.any(String),
    message: expect.any(String),
  });
});
```
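As an alternative to fake timers, here is a hedged sketch of option 2, leaning on nock's built-in .delay feature mentioned above; the rest of the test stays the same:

```javascript
// Option 2: let nock simulate the slow response. Since the delay exceeds the
// client's 1000ms timeout, the HTTP client receives a timeout error right away
nock(`${config.userServiceURL}/user/`)
  .get('/1')
  .delay(2000) // longer than config.HTTPCallTimeout
  .reply(200);
```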
💊 The 'poisoned message' test - when the message consumer gets an invalid payload that might put it in stagnation
👉What & so what - When testing flows that start or end in a queue, I bet you're going to bypass the message queue layer, where the code and libraries consume a queue, and approach the logic layer directly. Yes, it makes things easier, but it leaves a class of risks uncovered. For example, what if the logic part throws an error or the message schema is invalid, but the message queue consumer fails to translate this exception into a proper message queue action? For example, the consumer code might fail to reject the message or increment the number of attempts (depending on the type of queue that you're using). When this happens, the message will enter a loop where it is served again and again. Since this will apply to many messages, things can get really bad as the queue gets highly saturated. For this reason, this syndrome is called the 'poisoned message'. To mitigate this risk, the tests' scope must include all the layers, just like you probably do when testing against APIs. Unfortunately, this is not as easy as testing with a DB because message queues are flaky; here is why
When testing with real queues, things get curiouser and curiouser: tests from different processes will steal messages from each other, purging queues is harder than you might think (e.g., SQS demands 60 seconds to purge queues), to name a few challenges that you won't find when dealing with a real DB
Here is a strategy that works for many teams and holds a small compromise - use a fake, in-memory message queue. By 'fake' I mean something simplistic that acts like a stub/spy and does nothing but tell when certain calls are made (e.g., consume, delete, publish). You might find reputable fakes/stubs for your own message queue, like this one for SQS, or you can code one easily yourself. No worries, I'm not in favour of maintaining testing infrastructure myself either; this proposed component is extremely simple and unlikely to surpass 50 lines of code (see example below). On top of this, whether using a real or fake queue, one more thing is needed: create a convenient interface that tells the test when certain things happened, like when a message was acknowledged/deleted or a new message was published. Without this, the test never knows when certain events happened and leans toward quirky techniques like polling. Having this setup, the test will be short and flat, and you can easily simulate common message queue scenarios like out-of-order messages, batch reject, duplicated messages and, in our example, the poisoned message scenario (using RabbitMQ):
📝 Code
Create a fake message queue that does almost nothing but record calls, see full example here
```javascript
class FakeMessageQueueProvider extends EventEmitter {
  // Implement here
  publish(message) {}
  consume(queueName, callback) {}
}
```
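If you prefer to see it fleshed out, here is a hedged sketch of a minimal implementation (the real one is in the repository linked below); it simply pipes published messages straight to the registered consumer:

```javascript
// A minimal fake - event names and signatures are illustrative, not the repo's exact code
class FakeMessageQueueProvider extends EventEmitter {
  async publish(queueName, message) {
    // Hand the message straight to whoever consumes this queue
    this.emit(`message-${queueName}`, message);
  }

  async consume(queueName, callback) {
    this.on(`message-${queueName}`, callback);
  }
}
```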
Make your message queue client accept real or fake provider
```javascript
class MessageQueueClient extends EventEmitter {
  // Pass to it a fake or real message queue
  constructor(customMessageQueueProvider) {}
  publish(message) {}
  consume(queueName, callback) {}
  // Simple implementation can be found here:
  // https://github.com/testjavascript/nodejs-integration-tests-best-practices/blob/master/example-application/libraries/fake-message-queue-provider.js
}
```
Expose a convenient function that tells when certain calls were made
```javascript
const FakeMessageQueueProvider = require('./libs/fake-message-queue-provider');
const MessageQueueClient = require('./libs/message-queue-client');
const newOrderService = require('./domain/newOrderService');

test('When a poisoned message arrives, then it is being rejected back', async () => {
  // Arrange
  const messageWithInvalidSchema = { nonExistingProperty: 'invalid❌' };
  const messageQueueClient = new MessageQueueClient(
    new FakeMessageQueueProvider()
  );
  // Subscribe to new messages and pass the handler function
  messageQueueClient.consume('orders.new', newOrderService.addOrder);

  // Act
  await messageQueueClient.publish('orders.new', messageWithInvalidSchema);
  // Now all the layers of the app will get stretched 👆, including logic and message queue libraries

  // Assert
  await messageQueueClient.waitFor('reject', { howManyTimes: 1 });
  // 👆 This tells us that eventually our code asked the message queue client to reject this poisoned message
});
```
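The test above relies on a waitFor helper that isn't shown in the snippets. Here is one possible shape for it - a hedged sketch, not the exact code from the linked repository. It assumes MessageQueueClient emits an event (e.g., 'reject', 'ack', 'publish') whenever the underlying provider is called:

```javascript
class MessageQueueClient extends EventEmitter {
  // ...the rest of the client...

  // Resolves once the given event was emitted 'howManyTimes' times
  waitFor(eventName, { howManyTimes }) {
    let handledCount = 0;
    return new Promise((resolve) => {
      this.on(eventName, (eventData) => {
        handledCount += 1;
        if (handledCount >= howManyTimes) {
          resolve({ lastEventData: eventData, handledCount });
        }
      });
    });
  }
}
```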
📦 The 'broken artifact' test - when all tests pass but the published npm package fails
👉What & why - When publishing a library to npm, all your tests might easily pass BUT... the same functionality will fail on the end-user's computer. How come? Tests are executed against the local developer files, but the end-user is only exposed to artifacts that were built. See the mismatch here? After running the tests, the package files are transpiled (I'm looking at you, babel users), zipped and packed. If a single file is excluded due to .npmignore or a polyfill is not added correctly, the published code will lack mandatory files
📝 Code
Consider the following scenario, you're developing a library, and you wrote this code:
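The original snippet isn't included here, so the following is a hypothetical sketch of such a library; the file and package names are illustrative:

```javascript
// index.js - the public entry point
const { calculate } = require('./calculate'); // the core logic lives in calculate.js

module.exports = { calculate };
```

```json
{
  "name": "my-package",
  "main": "index.js",
  "files": ["index.js"]
}
```

Note how the files allowlist ships index.js but forgets calculate.js - exactly the bug the next paragraph describes.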
See, 100% coverage, all tests pass locally and in the CI ✅, it just won't work in production 👹. Why? Because you forgot to include calculate.js in the package.json files array 👆
What can we do instead? We can test the library as its end-users do. How? Publish the package to a local registry like verdaccio, let the tests install it and approach the published code. Sounds troublesome? Judge for yourself 👇
📝 Code
```javascript
// global-setup.js
// 1. Setup the in-memory NPM registry, one function and that's it! 🔥
await setupVerdaccio();
// 2. Build our package
await exec('npm', ['run', 'build'], {
  cwd: packagePath,
});
// 3. Publish it to the in-memory registry
await exec('npm', ['publish', '--registry=http://localhost:4873'], {
  cwd: packagePath,
});
// 4. Install it in the consumer directory
await exec('npm', ['install', 'my-package', '--registry=http://localhost:4873'], {
  cwd: consumerPath,
});

// Test file in the consumerPath
// 5. Test the package 🚀
test('should succeed', async () => {
  const { fn1 } = await import('my-package');
  expect(fn1()).toEqual(1);
});
```
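The setupVerdaccio call above hides the registry bootstrap. Here is a rough sketch of it, assuming verdaccio's programmatic runServer API; verify the exact signature against the verdaccio docs for your version:

```javascript
// A hedged sketch of setupVerdaccio
const { runServer } = require('verdaccio');

async function setupVerdaccio() {
  // Start verdaccio with its default configuration
  const app = await runServer();
  // Expose it on the port that the publish/install commands above expect
  await new Promise((resolve) => {
    app.listen(4873, () => resolve());
  });
}
```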
This technique shines in additional scenarios too (see the sketch below for the first one):
- Testing the different versions of a peer dependency you support - let's say your package supports React 16 to 18, you can now test that
- Testing both ESM and CJS consumers
- If you have a CLI application, you can test it just like your users do
- Making sure all the voodoo magic in that babel file is working as expected
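For instance, here is a hedged sketch of the peer dependency scenario on top of the verdaccio setup above; the exec helper and the paths are the same assumed ones as before:

```javascript
// Install the package against every supported peer dependency version,
// then run the consumer's test suite against that installation
for (const reactVersion of ['16', '17', '18']) {
  await exec(
    'npm',
    ['install', `react@${reactVersion}`, 'my-package', '--registry=http://localhost:4873'],
    { cwd: consumerPath }
  );
  await exec('npm', ['test'], { cwd: consumerPath });
}
```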
🗞 The 'broken contract' test - when the code is great but its corresponding OpenAPI docs lead to a production bug
👉What & so what - Quite confidently, I'm sure that almost no team tests their OpenAPI correctness. "It's just documentation", "we generate it automatically based on code" are the typical beliefs behind this. Let me show you how this auto-generated documentation can be wrong and lead not only to frustration but also to a bug. In production.
Consider the following scenario: you're requested to return an HTTP error status code if an order is duplicated, but you forget to update the OpenAPI specification with this new HTTP status response. While some frameworks can update the docs with new fields, none can realize which errors your code throws; this labour is always manual. On the other side of the line, the API client is doing everything just right, going by the spec that you published, adding orders with some duplication because the docs don't forbid doing so. Then, BOOM, production bug -> the client crashes and shows an ugly unknown error message to the user. This type of failure is called the 'contract' problem: two parties interact, each has code that works perfectly, they just operate under different specs and assumptions. While there are fancy, sophisticated and exhaustive solutions to this challenge (e.g., PACT), there are also leaner approaches that get you covered easily and quickly (at the price of covering fewer risks).
The following sweet technique is based on libraries (jest, mocha) that listen to all network responses, compare the payload against the OpenAPI document, and if any deviation is found - make the test fail with a descriptive error. With this new weapon in your toolbox and almost zero effort, another risk is ticked. It's a pity that these libs can't also assert against the incoming requests, to tell you that your tests use the API wrongly. One small caveat and an elegant solution: these libraries dictate putting an assertion statement in every test - expect(response).toSatisfyApiSpec() - a bit tedious and reliant on human discipline. You can do better if your HTTP client supports a plugin/hook/interceptor, by putting this assertion in a single place that will apply to all the tests:
The OpenAPI doesn't document HTTP status '409', no framework knows to update the OpenAPI doc based on thrown exceptions
"responses":{ "200":{ "description":"successful", } , "400":{ "description":"Invalid ID", "content":{} },// No 409 in this list😲👈 }
The test code
```javascript
const jestOpenAPI = require('jest-openapi');
jestOpenAPI('../openapi.json');

test('When an order with a duplicated coupon is added, then a 409 error should get returned', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    couponId: uuid(),
  };
  await axiosAPIClient.post('/order', orderToAdd);

  // Act
  // We're adding the same coupon twice 👇
  const receivedResponse = await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  expect(receivedResponse.status).toBe(409);
  expect(receivedResponse).toSatisfyApiSpec();
  // This 👆 will throw if the API response, body or status, differs from what the OpenAPI document states
});
```
Trick: If your HTTP client supports any kind of plugin/hook/interceptor, put the following code in 'beforeAll'. This covers all the tests against OpenAPI mismatches
```javascript
beforeAll(() => {
  axios.interceptors.response.use((response) => {
    expect(response).toSatisfyApiSpec();
    // With this 👆, add nothing to the tests - each will fail if the response deviates from the docs
    return response; // keep the interceptor chain intact
  });
});
```
The examples above were not meant to be merely a checklist of 'don't forget' test cases, but rather a fresh mindset on what tests could cover for you. Modern tests are not just about functions or user flows, but about any risk that might visit your production. This is doable only with component/integration tests, never with unit or end-to-end tests. Why? Because unlike unit tests, you need all the parts to play together (e.g., the DB migration file, the DAL layer and the error handler, all together). Unlike E2E, you have the power to simulate in-process scenarios that demand some tweaking and mocking. Component tests allow you to include many production moving parts early on your machine. I like calling this 'production-oriented development'
As a testing consultant, I've read tons of testing articles throughout the years. The majority are nice-to-read, casual pieces of content which are not always worth your precious time. Once in a while, not very often, I landed on an article that was shockingly good and could genuinely improve your test-writing skills. I've cherry-picked these outstanding articles for you and added my abstract nearby. Half of these articles relate directly to JavaScript/Node.js; the second half covers ubiquitous testing concepts that are applicable in every language
Why did I find these articles to be outstanding? First, the writing quality is excellent. Second, they deal with the 'new world of testing', not the commonly known 'TDD-ish' stuff but rather modern concepts and tooling
Too busy to read them all? Search for articles that are decorated with a medal 🏅 - these are true masterpieces that you never want to miss
Before we start: If you haven't heard, I launched my comprehensive Node.js testing course a week ago (curriculum here). There are less than 48 hours left for the 🎁 special launch deal
Here they are, 10 outstanding testing articles:
📄 1. 'Selective Unit Testing – Costs and Benefits'
✍️ Author: Steve Sanderson
🔖 Abstract: We all found ourselves at least once in the ongoing and flammable discussion about 'units' vs 'integration'. This article delves into a greater level of specificity and discusses WHEN unit tests shine, by considering the costs of writing these tests under various scenarios. Many treat their testing strategy as a static model - a single technique they always apply regardless of the context. "Always write unit tests against functions", "Write mostly integration tests" are the type of arguments often heard. Conversely, this article suggests that the attractiveness of unit tests should be evaluated based on the costs and benefits per module. The article classifies multiple scenarios where the net value of unit tests is high or low, for example:
If your code is basically obvious – so at a glance you can see exactly what it does – then additional design and verification (e.g., through unit testing) yields extremely minimal benefit, if any
The author also offers a 2x2 model to visualize when the attractiveness of unit tests is high or low
Side note, not part of the article: Personally, I (Yoni) always start with component tests, outside-in, covering first the high-level user flows (a.k.a the testing diamond). Then, later, once I have functions, I add unit tests based on their net value. This article helped me a lot in classifying and evaluating the benefits of units in various scenarios
📄 2. 'Testing Implementation Details' (JavaScript examples)
✍️ Author: Kent C. Dodds
🔖 Abstract: The author outlines, with a code example, the unavoidable tragic fate of a tester who asserts on implementation details. Put aside the effort of testing so many details - going this route always ends with 'false positives' and 'false negatives' that cloud the tests' reliability. The article illustrates this with a frontend code example, but the takeaway lesson is ubiquitous to any kind of testing
"There are two distinct reasons that it's important to avoid testing implementation details. Tests which test implementation details:
Can break when you refactor application code. False negatives
May not fail when you break application code. False positives"
📄 3. 'Testing Microservices, the sane way'
✍️ Author: Cindy Sridharan
🔖 Abstract: This one is the entire Microservices and distributed modern testing bible packed in a single long article that is also super engaging. I remember when I came across it four years ago, winter time; I spent an hour every day under my blanket before sleep with a smile spread over my face. I clicked on every link, paused after every paragraph to think - a whole new world was opening in front of me. In fact, it was so fascinating that it made me want to specialize in this domain. Fast forward, years later, this is a major part of my work and I enjoy every moment
This paper starts by explaining why E2E tests, unit tests and exploratory QA will fall short in a distributed environment. Not only this, it explains why any kind of coded test won't be enough and a rich toolbox of techniques is needed. It goes through a handful of modern testing techniques that are unfamiliar to most developers. One of its key parts deals with what should be the canonical developer's testing technique: the author advocates for "big unit tests" (i.e., component tests) as they strike a great balance between developer comfort and realism
I coined the term “step-up testing”, the general idea being to test at one layer above what’s generally advocated for. Under this model, unit tests would look more like integration tests (by treating I/O as a part of the unit under test within a bounded context), integration testing would look more like testing against real production, and testing in production looks more like, well, monitoring and exploration. The restructured test pyramid (test funnel?) for distributed systems would look like the following:
Beyond its main scope, whatever type of system you are dealing with - this article will broaden your perspective on testing and expose you to many new ideas that are highly applicable
👓 Read time: > 2 hours (10,500 words with many links)
📄 4. 'How to Unit Test with Node.js?' (JavaScript examples, for beginners)
✍️ Author: Ryan Jones
🔖 Abstract: One single recommendation for beginners: every other article on this list covers advanced testing. This article, and only this one, is meant for testing newbies who are looking to take their first practical steps in this world
This tutorial was chosen from a handful of alternatives because it's well-written and also relatively comprehensive. It covers the first-steps 'kata' that a beginner should learn first: the test anatomy syntax, test runner CLIs, assertions and asynchronous tests. It goes without saying that this knowledge won't be sufficient for covering a real-world app with tests, but it gets you safely to the next phase. My personal advice: after reading this one, your next step is learning about test doubles (mocking)
🔖 Abstract: The article opens with 'I hear that people feel an uncontrollable urge to write unit tests nowadays. If you are one of those affected, spare a few minutes and consider these reasons for NOT writing unit tests'. Despite these words, the article is not against unit tests in principle; rather, it highlights when & where unit tests fall short. In these cases, other techniques should be considered. Here is an example: unit tests inherently have a lower return on investment, and the author comes with a sound analogy for this: 'If you are painting a house, you want to start with the biggest brush at hand and spare the tiny brush for the end to deal with fine details. If you begin your QA work with unit tests, you are essentially trying to paint the entire house using the finest chinese calligraphy brush...'
📄 6. 'Mocking is a Code Smell' (JavaScript examples)
✍️ Author: Eric Elliott
🔖 Abstract: Most of the articles here belong to the 'modern wave of testing'; here is something more 'classic', appealing to TDD lovers or just anyone with a need to write unit tests. This article is about HOW to reduce the amount of mocking (test doubles) in your tests. Not only because mocking is an overhead in test writing, but also because it hints that something might be wrong. In other words, mocking is not necessarily wrong and something that must be fixed right away, but a lot of mocking is a sign of something not ideal. Consider a module that inherits from many others, or a chatty one that collaborates with a handful of other modules to do its job - testing and changing this structure is a burden:
"Mocking is required when our decomposition strategy has failed"
The author goes through a variety of techniques to design more autonomous units, like using pure functions by isolating side-effects from the rest of the program logic, using pub/sub, isolating I/O, composing units with patterns like monadic compositions, and more
The overall article tone is balanced. In some parts, it encourages functional programming and techniques that are far from the mainstream - consider reading these few parts with a grain of salt
📄 7. 'Why Good Developers Write Bad Unit Tests'
✍️ Author: Michael Lynch
🔖 Abstract: I love this one so much. The author exemplifies how, unexpectedly, it is sometimes the good developers with their great intentions who write bad tests:
Too often, software developers approach unit testing with the same flawed thinking... They mechanically apply all the “rules” they learned in production code without examining whether they’re appropriate for tests. As a result, they build skyscrapers at the beach
Concrete code examples show how the test readability deteriorates once we apply 'skyscraper' thinking, and how to keep it simple. In one part, he demonstrates how violating the DRY principle thoughtfully allows the reader to stay within the test while still keeping the code maintainable. This article alone, in 11 minutes, can greatly improve the tests of developers who tend to write sophisticated tests. If you have someone like this on your team, you now know what to do
📄 8. 'An Overview of JavaScript Testing in 2022' (JavaScript examples)
✍️ Author: Vitali Zaidman
🔖 Abstract: This paper is unique here as it doesn't cover a single topic; rather, it is a rundown of (almost) all JavaScript testing tools. This allows you to enrich the toolbox in your mind and have more screwdrivers for more types of screws. For example, knowing that there are IDE extensions that show coverage information right within the code might help you boost test adoption in the team, if needed. Knowing that there are solid, free, and open-source visual regression tools might encourage you to dip your toes in this water, to name a few examples.
"We reviewed the most trending testing strategies and tools in the web development community and hopefully made it easier for you to test your sites. In the end, the best decisions regarding application architecture today are made by understanding general patterns that are trending in the very active community of developers, and combining them with your own experience and the characteristics of your application."
The author was also kind enough to leave pros/cons nearby most tools so the reader can quickly get a sense of how the various options stack up against each other. The article covers categories like assertion libraries, test runners, code coverage tools, visual regression tools, E2E suites and more
📄 9. 'Testing in Production, the safe way'
✍️ Author: Cindy Sridharan
🔖 Abstract: 'Testing in production' is a provocative term that sounds like a risky and careless approach of testing over production instead of verifying the delivery beforehand (yet another case of bad testing terminology). In practice, testing in production doesn't replace coding-time testing; it just adds an additional layer of confidence by safely testing in 3 more phases: deployment, release and post-release. This comprehensive article covers dozens of techniques, some unusual, like traffic shadowing, tap compare and more. More than anything else, it illustrates a holistic testing workflow, building confidence cumulatively from the developer machine until the new version is serving users in production
I’m more and more convinced that staging environments are like mocks - at best a pale imitation of the genuine article and the worst form of confirmation bias.
It’s still better than having nothing - but “works in staging” is only one step better than “works on my machine”.
📄 10. 'Please don't mock me' (JavaScript examples, from JSConf)
🏅 This is a masterpiece
✍️ Author: Justin Searls
🔖 Abstract: This fantastic YouTube talk deals with the Achilles' heel of testing: where exactly to mock. The dilemma of where to end the test scope, what should be mocked and what shouldn't - this is presumably the most strategic test design decision. Consider, for example, having module A which interacts with module B. If you isolate A by mocking B, A will always pass, even when B's interface has changed and A's code didn't follow. This makes A's tests highly stable but... production will fail in hours. In his talk, Justin says:
"A test that never fails is a bad test because it doesn't tell you anything. Design tests to fail"
Then he goes and tackles many other interesting mocking crossroads, with beautiful visuals and tons of insights. Please don't miss this one
Here are a few articles that I wrote. Obviously I don't 'recommend' my own craft, just checking modestly whether they appeal to you. Together, these articles gained 25,000 GitHub stars; maybe you'll find one of them useful?
When was the last time you introduced a new pattern to your code? The use-case pattern is a great candidate: it's powerful, sweet, easy to implement, and can strategically elevate your backend code quality in a short time.
The term 'use case' means many different things in our industry. It's used by product folks to describe a user journey, and mentioned by various famous architecture books to describe vague high-level concepts. This article focuses on its practical application at the code level by emphasizing its surprising merits and how to implement it correctly.
Technically, the use-case pattern code belongs between the controller (e.g., API routes) and the business logic services (like those calculating or saving data). The use-case code is called by the controller and tells, in high-level words, the flow that is about to happen, in a simple manner. Doing so increases the code readability and navigability, pushes complexity toward the edges, improves observability, and brings 3 other merits that are shown below with examples.
But before we delve into its mechanics, let's first touch on a common problem it aims to address and see some code that calls for trouble.
Prefer a 10 min video? Watch here, or keep reading below
Imagine a developer, returning to a codebase she hasn't touched in months, tasked with fixing a bug in the 'new orders flow'—specifically, an issue with price calculation in an electronic shop app.
Her journey begins promisingly smooth:
- 🤗 Testing - She starts her journey with the automated tests to learn about the flow from an outside-in approach. The testing code is short and standard, as it should be:
test("When adding an order with 100$ product, then the price charge should be 100$ ",async()=>{ // .... })
- 🤗 Controller - She moves to skim through the implementation and starts from the API routes. Unsurprisingly, the Controller code is straightforward:
app.post("/api/order",async(req:Request,res:Response)=>{ const newOrder = req.body; await orderService.addOrder(newOrder);// 👈 This is where the real-work is done res.status(200).json({message:"Order created successfully"}); });
Smooth sailing thus far, almost zero complexity. Typically, the controller would now hand off to a Service where the real implementation begins, she navigates into the order service to find where and how to fix that pricing bug.
- 😲 The service - Suddenly! She is thrown into hundreds of lines of code (at best) with tons of details. She encounters classes with intricate states, inheritance hierarchies, a dependency injection framework that wires all the dependent services, and other boilerplate code. Here is a sneak peek from a real-world service, already simplified for brevity. Read it, feel it:
let DBRepository;
export class OrderService extends ServiceBase<OrderDto> {
  async addOrder(orderRequest: OrderRequest): Promise<Order> {
    try {
      ensureDBRepositoryInitialized();
      const { openTelemetry, monitoring, secretManager, priceService, userService } =
        dependencyInjection.getVariousServices();
      logger.info("Add order flow starts now", orderRequest);
      openTelemetry.sendEvent("new order", orderRequest);
      const validationRules = await getFromConfigSystem("order-validation-rules");
      const validatedOrder = validateOrder(orderRequest, validationRules);
      if (!validatedOrder) {
        throw new Error("Invalid order");
      }
      this.base.startTransaction();
      const user = await userService.getUserInfo(validatedOrder.customerId);
      if (!user) {
        const savedOrder = await tryAddUserWithLegacySystem(validatedOrder);
        return savedOrder;
      }
      // And it goes on and on until the pricing module is mentioned
    }
So many details and things to learn upfront; which of them is crucial for her to learn now, before dealing with her task? How can she find where that pricing module is?
She is not happy. Right off the bat, she must acquaint herself with a handful of product and technical narratives. She just fell off the complexity cliff: from a zero-complexity controller straight into a 1000-piece puzzle, many of whose pieces are unrelated to her task.
In a perfect world, she would first get a high-level brief of the involved steps so she can understand the whole flow, and from this comfortable standpoint choose where to deepen her journey. This is what this pattern is all about.
The use-case is a file with a single function that is called by the API controller to orchestrate the various implementation services. It's merely a simple function that enumerates and calls the code that does the actual job.
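For illustration, a minimal sketch of such a use-case, assuming illustrative step functions in the spirit of the examples later in this article:

// add-order-use-case.ts
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 The story of the flow: simple, flat, high-level calls only
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  const savedOrder = await insertOrder(orderWithPricing);
  await sendSuccessEmailToCustomer(savedOrder);
  return mapFromRepositoryToDto(savedOrder);
}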
Each interaction with the system, whether it's posting a new comment, requesting user deletion, or any other action, is managed by a dedicated use-case function. Each use-case consists of multiple 'steps': function calls that fulfill the desired flow.
By design, it's short, flat, with no if/else, no try-catch, no algorithms, just plain calls to functions. This way, it tells the story in the simplest manner. Note how it doesn't share too many details, but tells enough for one to understand 'WHAT' is happening here and 'WHO' is doing it, but not 'HOW'.
When seeking a specific book in the local library, the visitor doesn't have to skim through all the shelves to find a specific topic of interest. A library, like any other information system, uses a navigational system, wayfinding signage, to highlight the path to a specific information area.
The library catalog redirects the reader to the area of interest
Similarly, in software development, when a developer needs to address a particular issue, such as fixing a bug in pricing calculations, the 'use case' acts like a navigational tool within the application. It serves as a hitchhiker's guide, or the yellow pages, pinpointing exactly where to find the necessary piece of code. While other organizational strategies like modularization and folder structures offer ways to manage code, the 'use case' approach provides a more focused and precise index: it shows only the relevant areas (and not 50 unrelated modules), it tells when precisely each module is used, what the specific entry point is, and which exact parameters are passed.
When the code reader's journey starts at the level of implementation services, she is immediately bombarded with intricate details. This immersion exposes her to both product and technical complexities right from the start. Typically, as in our example, the code first uses a dependency injection system to construct some classes, checks for nulls in the state, and gets some values from the distributed config system, all before even starting on the primary task. This is called accidental complexity. Tackling complexity is one of the finest arts of app design: as the code planner, you can't just eliminate complexity, but you can at least reduce the chances of someone meeting it.
Imagine your application as a tree where branches represent functions and the fruits are pockets of embedded complexity, some of which are poisoned (i.e., unnecessary complexities). Your objective is to structure this tree so that navigating through it exposes the visitor to as few poisoned fruits as possible:
The accidental-complexity tree: A visitor aiming to reach a specific leaf must navigate through all the intervening poisoned fruits.
This is where the 'Use Case' approach shines: it puts high-level product steps first and keeps technical details minimal at the outset, a navigation system that simplifies access to various parts of the application. With this navigation tool, she can easily ignore steps that are unrelated to her work and avoid poisoned fruits. A true strategic design win.
The spread-complexity tree: Complexity is pushed to the periphery, allowing the reader to navigate directly to the essential fruits only.
When embarking on a new coding flow, where do you start? After digesting the requirements and setting up some initial API routes and high-level component tests, the next logical step might be less obvious. Here's a strategy: begin with a use-case. This approach promotes an outside-in workflow that not only streamlines development but also exposes potential risks early on.
While drafting a new use-case, you essentially map out the various steps of the process. Each step is a call to some service or repository function, sometimes before it even exists. Effortlessly and spontaneously, these steps become your TODO list, a live document that tells not only what should be implemented but also where risky gotchas hide. Take, for instance, this straightforward use-case for adding an order.
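A minimal sketch of it might look like this (the step functions, discussed one by one below, are assumed to exist or to be written next):

export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  await assertCustomerExists(validatedOrder.customerId);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  const savedOrder = await insertOrder(orderWithPricing);
  await sendSuccessEmailToCustomer(savedOrder);
  return savedOrder;
}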
This structured approach allows you to preemptively tackle potential implementation hurdles:
- sendSuccessEmailToCustomer - What if you lack a necessary email service token from the Ops team? Sometimes this demands approval and might take more than a week (believe me, I know). Acting now, before spending 3 days on coding, can make a big difference.
- calculateOrderPricing - Reminds you to confirm pricing details with the product team—ideally before they're out of office, avoiding delays that could impact your delivery timeline.
- assertCustomerExists - This call goes to an external microservice which belongs to the User Management team. Did they already provide an OpenAPI specification of their routes? Check your Slack now; asking early can prevent this from becoming a roadblock later.
Not only does this high-level thinking highlight your tasks and risks, it's also an optimal spot to start the design from:
Early on, when initiating a use-case, the developers define the various types, function signatures, and their initial skeleton return data. This process naturally evolves into an effective design drill where the overall flow is decomposed into small units that actually fit, and it surfaces early the points where the puzzle pieces don't fit, given the underlying technologies. Here is an example: I once sketched a use-case and initially came up with these steps.
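A rough reconstruction of that initial sketch (hypothetical step functions; note the email step placed before the order is saved):

export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  await sendSuccessEmailToCustomer(orderWithPricing.id); // ❗️Demands an order id, but nothing was saved yet
  const savedOrder = await insertOrder(orderWithPricing);
  return savedOrder;
}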
Going with my initial use-case above, an email is sent before the order is saved. Soon enough the compiler yelled at me: the email function signature is not satisfied; an 'Order Id' parameter is needed, but to obtain one the order must be saved to the DB first. I tried to change the order of the steps; unfortunately, it turned out that my ORM does not return the ID of saved entities. I'm stuck and my design struggles, but at least this is realized before spending days on details. Unlike designing with papers and UML, designing with a use-case brings no overhead. Moreover, unlike high-level diagrams detached from implementation realities, use-case design is grounded in the actual constraints of the technology being used.
Say you have 82.35% test code coverage; are you happy and confident to deploy? I'd suggest that anyone below 100% must first clarify which code exactly is not covered by testing. Is it some nitty-gritty niche code, or actually critical business operations that are not fully tested? Typically, answering this requires scrutinizing the coverage of every app file, a daunting task.
Use-cases simplify the coverage digest: when looking directly into the use-cases folder, one gets 'feature coverage', a unique look into which user features and steps lack testing:
The use-cases folder test coverage report: some use-cases are only partially tested
See how the code above has an excellent overall coverage, 82.35%. But what about the remaining 17.65%? Looking at the report triggers a red flag: the unusual 'payment-use-case' is not tested. This flow is where revenues are generated, a critical financial process which, as it turns out, has very low test coverage. This significant observation calls for immediate action. Use-case coverage thus not only helps in understanding which parts of your application are tested, but also prioritizes testing efforts based on business criticality rather than mere technical functionality.
The influential book "Domain-Driven Design" advocates for "committing the team to relentlessly exercise the domain language in all communications within the team and in the code." This principle asserts that aligning code closely with product narratives fosters a common language among diverse stakeholders (e.g., product, team leads, frontend, backend). While this sounds sensible, the advice is also a little vague: how and where exactly should this happen?
Use-cases bring this idea down to earth: the use-case files are named after user journeys in the system (e.g., purchase-new-goods), the use-case code itself naturally describes the flow in a product language. For instance, if employees commonly use the term 'cut' at the water cooler to refer to a price reduction, the corresponding use-case should employ a function named 'calculatePriceCut'. This naming convention not only reinforces the domain language but also enhances mutual understanding across the team.
I bet you've encountered the situation where you turn the log level to 'Debug' (or any other verbose mode) and get a gazillion overwhelming, unbearable log statements. Great chances that you've also met the opposite: the logger level is set to 'Info', yet there is almost zero logging for the specific route you're looking into. It's hard to formalize among team members when exactly each type of logging should be invoked; the result is typically inconsistent and lacking observability.
Use-cases can drive trustworthy and consistent monitoring by taking advantage of the already-defined use-case steps. Since the precious work of breaking down the flow into meaningful steps was already done (e.g., send-email, charge-credit-card), each step can produce the desired level of logging. For example, one team's approach might be to emit logger.info on use-case start and end, and have each step emit logger.debug. Whatever the chosen level is, use-case steps bring consistency and automation. Logging aside, the same can be applied to any other observability technique, like OpenTelemetry, to produce custom spans for every flow step.
The implementation, though, demands some thinking: cluttering every step with a log statement is both verbose and depends on manual human work:
// ❗️Verbose use case
export async function addOrderUseCase(orderRequest: OrderRequest): Promise<Order> {
  logger.info("Add order use case - Adding order starts now", orderRequest);
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  logger.debug("Add order use case - The order was validated", validatedOrder);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  logger.debug("Add order use case - The order pricing was decided", orderWithPricing);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderWithPricing);
  logger.debug("Add order use case - Verified the user balance already", purchasingCustomer);
  const returnOrder = mapFromRepositoryToDto(purchasingCustomer as unknown as OrderRecord);
  logger.info("Add order use case - About to return result", returnOrder);
  return returnOrder;
}
One way around this is to create a step wrapper function that makes each step observable. This wrapper function gets called for every step:
import { openTelemetry } from "@opentelemetry";

async function runUseCaseStep(stepName, stepFunction) {
  logger.debug(`Use case step ${stepName} starts now`);
  // Create an Open Telemetry custom span
  openTelemetry.startSpan(stepName);
  return await stepFunction();
}
Now the use-case gets automated and consistent transparency:
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const validatedOrder = await runUseCaseStep("Validation", validateAndCoerceOrder.bind(null, orderRequest));
  const orderWithPricing = await runUseCaseStep("Calculate price", calculateOrderPricing.bind(null, validatedOrder));
  await runUseCaseStep("Send email", sendSuccessEmailToCustomer.bind(null, orderWithPricing));
}
The code is a little simplified; in a real-world wrapper you'll have to add a try-catch and cover other corner cases, but it makes the point: each step is a meaningful milestone in the user's journey that gets automated and consistent observability.
Since use-cases are mostly about zero complexity, use no code constructs, just flat calls to functions. No if/else, no switch, no try/catch, nothing, only a simple list of steps. A while ago, I decided to allow just one if/else in a use-case:
export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  if (purchasingCustomer.isPremium) { // ❗️
    sendEmailToPremiumCustomer(purchasingCustomer); // This will easily grow with time into multiple if/elses
  }
}
A month later, when I revisited the code above, there were already three nested if/elses. A year from now, the function above will host typical imperative code with many nested branches. Avoid this slippery road by drawing a very strict border: put the conditions within the step functions.
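For example, a minimal sketch where the condition moves into a dedicated step function and the use-case stays flat (names are illustrative):

// The step function hosts the condition
async function sendEmailToCustomerIfPremium(purchasingCustomer: Customer) {
  if (!purchasingCustomer.isPremium) {
    return;
  }
  await sendEmailToPremiumCustomer(purchasingCustomer);
}

export async function addOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
  await sendEmailToCustomerIfPremium(purchasingCustomer); // 👈 No branching inside the use-case
}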
The finest art of a great use-case is finding the right level of detail. At this early stage, the reader is like a traveler who uses a map to get some sense of the area or to find a specific road, definitely not to learn about every road in the country. On the other hand, a good map doesn't show only the main highway and nothing else. For example, a use-case like the following is too short and vague.
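Something along these lines, where a single catch-all call hides the whole story (a sketch; the flow and service names are illustrative):

// ❗️Too vague - one opaque step, no story
export async function deactivateUserUseCase(userId: string) {
  return await userService.deactivateUser(userId);
}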
The code above doesn't tell a story, nor does it eliminate some paths from the journey. Conversely, the following code does better at telling the story in brief.
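For instance, this sketch of the same flow tells the story with a handful of meaningful, named steps (hypothetical step functions):

export async function deactivateUserUseCase(userId: string) {
  const user = await assertUserExists(userId);
  await cancelUserOpenOrders(user);
  await revokeUserSessions(user);
  await markUserAsDeactivated(user);
  await sendGoodbyeEmail(user);
}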
Things get a little more challenging when dealing with long flows. What if there are a handful of important steps, say 20? What if multiple use-cases have a lot of repetition and shared steps? Consider the case where 'admin approval' is a multi-step process which is invoked by a handful of different use-cases. When facing this, consider breaking down into multiple use-cases where one is allowed to call the other.
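For instance, a hypothetical sketch where a shared, multi-step 'admin approval' flow lives in its own use-case and is invoked by another one:

// admin-approval-use-case.ts - a shared multi-step flow as its own use-case
export async function approveOrderByAdminUseCase(orderId: string) {
  const order = await getOrderOrThrow(orderId);
  await notifyAdminOnPendingApproval(order);
  const decision = await fetchAdminDecision(order);
  await applyApprovalDecision(order, decision);
}

// add-order-use-case.ts - one of a handful of callers
export async function addHighValueOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const savedOrder = await insertOrder(validatedOrder);
  await approveOrderByAdminUseCase(savedOrder.id); // 👈 One use-case calling another
}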
3. When you have no choice, control the DB transaction from the use-case
What if step 2 and step 5 both deal with data and must be atomic (fail or succeed together)? Typically you'll handle this with DB transactions, but since each step is discrete, how can a transaction be shared among the coupled steps?
If the steps take place one after the other, it makes sense to let the downstream service/repository handle them together and abstract the transaction away from the use-case. What if the atomic steps are not consecutive? In this case, though not ideal, there is no escape from making the use-case acquainted with a transaction object:
export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const transaction = Repository.startTransaction();
  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderRequest, transaction);
  const orderWithPricing = calculateOrderPricing(purchasingCustomer);
  const savedOrder = await insertOrder(orderWithPricing, transaction);
  const returnOrder = mapFromRepositoryToDto(savedOrder);
  Repository.commitTransaction(transaction);
  return returnOrder;
}
A use-case file is created per user flow that is triggered from an API route. This model makes sense for significant flows, but how about small operations like getting an order by id? A 'get-order-by-id' use-case is likely to have one line of code, so it seems like unnecessary overhead to create a use-case file for every small request. In this case, consider aggregating multiple operations under a single conceptual use-case file. Here, for example, all the order queries co-live under the query-orders use-case file:
// query-orders-use-cases.ts
export async function getOrder(id) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const result = await orderRepository.getOrderByID(id);
  return result;
}

export async function getAllOrders(criteria) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const result = await orderRepository.queryOrders(criteria);
  return result;
}
If you find it valuable, you'll also get a great return on your modest investment: no fancy tooling is needed, the learning time is close to zero (in fact, you just read one of the longest articles on this matter...), and there is no need to refactor a whole system; rather, implement it gradually, per feature.
Once you become accustomed to using it, you'll find that this technique extends well beyond API routes. It's equally beneficial for managing message queue subscriptions and scheduled jobs. Backend aside, use it as the facade of every module or library: the code that is called by the entry file and orchestrates the internals. The same idea can be applied in the frontend as well: declare the core actors at the component's top level. Without implementation details, just put the references to the component's event handlers and hooks; now the reader knows about the key events that will drive this component.
You might think this all sounds remarkably straightforward, and it is. My apologies, this article wasn't about cutting-edge technologies; neither did it cover shiny new dev tooling or AI-based rocket science. In a land where complexity is the key enemy, simple ideas can be more impactful than sophisticated tooling, and the use-case is a powerful and sweet pattern that is meant to live in every piece of software.
diff --git a/blog/tags/workflow/index.html b/blog/tags/workflow/index.html
new file mode 100644
index 00000000..cc4dd736
--- /dev/null
+++ b/blog/tags/workflow/index.html
@@ -0,0 +1,25 @@
One post tagged with "workflow" | Practica.js
diff --git a/blog/testing-the-dark-scenarios-of-your-nodejs-application/index.html b/blog/testing-the-dark-scenarios-of-your-nodejs-application/index.html
new file mode 100644
index 00000000..3a136fd8
--- /dev/null
+++ b/blog/testing-the-dark-scenarios-of-your-nodejs-application/index.html
@@ -0,0 +1,21 @@
Testing the dark scenarios of your Node.js application | Practica.js
This post is about tests that are easy to write, typically 5-8 lines, that cover dark and dangerous corners of our applications, but are often overlooked
Some context first: how do we test a modern backend? With the testing diamond, of course, by putting the focus on component/integration tests that cover all the layers, including a real DB. With this approach, our tests 99% resemble production and the user flows, while the development experience is almost as good as with unit tests. Sweet. If this topic is of interest, we've also written a guide with 50 best practices for integration tests in Node.js
But there is a pitfall: most developers write only semi-happy test cases that are focused on the core user flows, like invalid inputs, CRUD operations, various application states, etc. This is indeed the bread and butter, a great start, but a whole area is left uncovered. For example, typical tests don't simulate an unhandled promise rejection that leads to a process crash, nor do they simulate the webserver bootstrap phase that might fail and leave the process idle, or HTTP calls to external services that often end with timeouts and retries. They typically don't cover the health and readiness routes, nor the integrity of the OpenAPI spec against the actual route schemas, to name just a few examples. There are many dead bodies buried beyond business logic, things that sometimes go beyond bugs and concern application downtime
Here are a handful of examples that might open your mind to a whole new class of risks and tests
July 2023: My testing course was launched! I've just released a comprehensive testing course that I've been working on for two years. 🎁 It's now on sale, but only for the month of July. Check it out at testjavascript.com
👉What & so what? - In all of your tests, you assume that the app has already started successfully, lacking a test against the initialization flow. This is a pity because this phase hides some potentially catastrophic failures: First, initialization failures are frequent - many bad things can happen here, like a DB connection failure or a new version that crashes during deployment. For this reason, runtime platforms (like Kubernetes and others) encourage components to signal when they are ready (see readiness probe). Errors at this stage also have a dramatic effect on the app's health - if the initialization fails and the process stays alive, it becomes a 'zombie process'. In this scenario, the runtime platform won't realize that something went bad, will keep forwarding traffic to it, and won't create alternative instances. Besides exiting gracefully, you may want to consider logging, firing a metric, and adjusting your /readiness route. Does it all work? Only a test can tell!
📝 Code
Code under test, api.js:
// A common express server initialization
const startWebServer = () => {
  return new Promise((resolve, reject) => {
    try {
      // A typical Express setup
      expressApp = express();
      defineRoutes(expressApp); // a function that defines all routes
      expressApp.listen(process.env.WEB_SERVER_PORT);
    } catch (error) {
      // log here, fire a metric, maybe even retry and finally:
      process.exit();
    }
  });
};
The test:
const api = require('./entry-points/api'); // our api starter that exposes the 'startWebServer' function
const sinon = require('sinon'); // a mocking library
const routes = require('./entry-points/routes'); // the module that exposes 'defineRoutes' (path is illustrative)

test('When an error happens during the startup phase, then the process exits', async () => {
  // Arrange
  const processExitListener = sinon.stub(process, 'exit');
  // 👇 Choose a function that is part of the initialization phase and make it fail
  sinon
    .stub(routes, 'defineRoutes')
    .throws(new Error('Cant initialize connection'));

  // Act
  await api.startWebServer();

  // Assert
  expect(processExitListener.called).toBe(true);
});
👉What & why - For many, testing an error means checking the exception type or the API response. This leaves one of the most essential parts uncovered: making the error correctly observable. In plain words, ensuring that it's being logged correctly and exposed to the monitoring system. It might sound like an internal thing, implementation testing, but actually it goes directly to a user. Yes, not the end-user, but rather another important one - the ops user who is on-call. What are the expectations of this user? At the very basic level, when a production issue arises, she must see detailed log entries, including stack trace, cause, and other properties. This info can save the day when dealing with production incidents. On top of this, in many systems, monitoring is managed separately, concluding about the overall system state using cumulative heuristics (e.g., an increase in the number of errors over the last 3 hours). To support these monitoring needs, the code must also fire error metrics. Even tests that do try to cover these needs take a naive approach by checking that the logger function was called - but hey, does it include the right data? Some write better tests that check the error type that was passed to the logger. Good enough? No! The ops user doesn't care about JavaScript class names but about the JSON data that is sent out. The following test focuses on the specific properties that are made observable:
📝 Code
test('When exception is thrown during request, Then logger reports the mandatory fields', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    status: 'approved',
  };
  const metricsExporterDouble = sinon.stub(metricsExporter, 'fireMetric');
  sinon
    .stub(OrderRepository.prototype, 'addOrder')
    .rejects(new AppError('saving-failed', 'Order could not be saved', 500));
  const loggerDouble = sinon.stub(logger, 'error');

  // Act
  await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  expect(loggerDouble).toHaveBeenCalledWith({
    name: 'saving-failed',
    status: 500,
    stack: expect.any(String),
    message: expect.any(String),
  });
  expect(metricsExporterDouble).toHaveBeenCalledWith('error', {
    errorName: 'saving-failed',
  });
});
👽 The 'unexpected visitor' test - when an uncaught exception meets our code
👉What & why - A typical error flow test falsely assumes two conditions: a valid error object was thrown, and it was caught. Neither is guaranteed; let's focus on the 2nd assumption: it's common for certain errors to be left uncaught. The error might get thrown before your framework error handler is ready, some npm libraries can throw surprisingly from different stacks using timer functions, or you just forgot to set someEventEmitter.on('error', ...), to name a few examples. These errors will find their way to the global process.on('uncaughtException') handler, hopefully if your code subscribed to it. How do you simulate this scenario in a test? Naively, you may locate a code area that is not wrapped with try-catch and stub it to throw during the test. But here's a catch-22: if you are familiar with such an area, you are likely to fix it and ensure its errors are caught. What do we do then? We can bring to our benefit the fact that JavaScript is 'borderless': if some object can emit an event, we as its subscribers can make it emit this event ourselves. Here's an example:
📝 Code
test('When an unhandled exception is thrown, then process stays alive and the error is logged', async () => {
  // Arrange
  const loggerDouble = sinon.stub(logger, 'error');
  const processExitListener = sinon.stub(process, 'exit');
  const errorToThrow = new Error('An error that wont be caught 😳');

  // Act
  process.emit('uncaughtException', errorToThrow); // 👈 Where the magic is

  // Assert
  expect(processExitListener.called).toBe(false);
  expect(loggerDouble).toHaveBeenCalledWith(errorToThrow);
});
🕵🏼 The 'hidden effect' test - when the code should not mutate at all
👉What & so what - In common scenarios, the code under test should stop early, like when the incoming payload is invalid or a user doesn't have sufficient credits to perform an operation. In these cases, no DB records should be mutated. Most tests out there in the wild settle for testing the HTTP response only - got back HTTP 400? Great, the validation/authorization probably works. Or does it? The test trusts the code too much; a valid response doesn't guarantee that the code behind behaved as designed. Maybe a new record was added although the user has no permissions? Clearly you need to test this, but how would you test that a record was NOT added? There are two options here: if the DB is purged before/after every test, then just try to perform an invalid operation and check that the DB is empty afterward. If you're not cleaning the DB often (like me, but that's another discussion), the payload must contain some unique and queryable value that you can query later, hoping to get no records. This is how it looks:
📝 Code
it('When adding an invalid order, then it returns 400 and NOT retrievable', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    mode: 'draft',
    externalIdentifier: uuid(), // no existing record has this value
  };

  // Act
  const { status: addingHTTPStatus } = await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  const { status: fetchingHTTPStatus } = await axiosAPIClient.get(
    `/order/externalIdentifier/${orderToAdd.externalIdentifier}`
  ); // Trying to get the order that should have failed
  expect({ addingHTTPStatus, fetchingHTTPStatus }).toMatchObject({
    addingHTTPStatus: 400,
    fetchingHTTPStatus: 404,
  });
  // 👆 Check that no such record exists
});
🧨 The 'overdoing' test - when the code should mutate but it's doing too much
👉What & why - This is how a typical data-oriented test looks: first you add some records, then approach the code under test, and finally assert what happens to these specific records. So far, so good. There is one caveat here though: since the test narrows its focus to specific records, it ignores whether other records were unnecessarily affected. This can be really bad; here's a short real-life story that happened to one of my customers: some data access code changed and incorporated a bug that updates ALL the system users instead of just one. All tests passed since they focused on a specific record which was indeed updated; they just ignored the others. How would you test for and prevent this? Here is a nice trick that I was taught by my friend Gil Tayar: in the first phase of the test, besides the main records, add one or more 'control' records that should not get mutated during the test. Then, run the code under test, and besides the main assertion, check also that the control records were not affected:
📝 Code
test('When deleting an existing order, Then it should NOT be retrievable', async () => {
  // Arrange
  const orderToDelete = {
    userId: 1,
    productId: 2,
  };
  const deletedOrder = (await axiosAPIClient.post('/order', orderToDelete)).data.id; // We will delete this soon
  const orderNotToBeDeleted = orderToDelete;
  const notDeletedOrder = (await axiosAPIClient.post('/order', orderNotToBeDeleted)).data.id; // We will not delete this

  // Act
  await axiosAPIClient.delete(`/order/${deletedOrder}`);

  // Assert
  const { status: getDeletedOrderStatus } = await axiosAPIClient.get(`/order/${deletedOrder}`);
  const { status: getNotDeletedOrderStatus } = await axiosAPIClient.get(`/order/${notDeletedOrder}`);
  expect(getNotDeletedOrderStatus).toBe(200);
  expect(getDeletedOrderStatus).toBe(404);
});
🕰 The 'slow collaborator' test - when the other HTTP service times out
👉What & why - When your code approaches other services/microservices via HTTP, savvy testers minimize end-to-end tests because these tests lean toward happy paths (it's harder to simulate failure scenarios). This mandates using some mocking tool to act like the remote service, for example, tools like nock or wiremock. These tools are great, only some use them naively and mainly check that outgoing calls were indeed made. What if the other service is not available in production, or is slower and times out occasionally (one of the biggest risks of Microservices)? While you can't wholly save such a transaction, your code should do its best given the situation: retry, or at least log and return the right status to the caller. All the network mocking tools allow simulating delays, timeouts, and other 'chaotic' scenarios. The question left is how to simulate a slow response without having slow tests. You may use fake timers and trick the system into believing a few seconds passed in a single tick. If you're using nock, it offers an interesting feature to simulate timeouts quickly: the .delay function simulates slow responses; nock will then realize immediately if the delay is higher than the HTTP client timeout and throw a timeout event right away without waiting
📝 Code
// In this example, our code accepts new Orders and while processing them approaches the Users Microservice
test('When users service times out, then return 503 (option 1 with fake timers)', async () => {
  // Arrange
  const clock = sinon.useFakeTimers();
  config.HTTPCallTimeout = 1000; // Set a timeout for outgoing HTTP calls
  nock(`${config.userServiceURL}/user/`)
    .get('/1', () => clock.tick(2000)) // 👆 Reply delay is bigger than the configured timeout
    .reply(200);
  const loggerDouble = sinon.stub(logger, 'error');
  const orderToAdd = {
    userId: 1,
    productId: 2,
    mode: 'approved',
  };

  // Act
  // 👇 try to add a new order, which should fail due to the User service not being available
  const response = await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  // 👇 At least our code does its best given this situation
  expect(response.status).toBe(503);
  expect(loggerDouble.lastCall.firstArg).toMatchObject({
    name: 'user-service-not-available',
    stack: expect.any(String),
    message: expect.any(String),
  });
});
💊 The 'poisoned message' test - when the message consumer gets an invalid payload that might put it in stagnation
👉What & so what - When testing flows that start or end in a queue, I bet you're going to bypass the message queue layer, where the code and libraries consume the queue, and approach the logic layer directly. Yes, it makes things easier but leaves a class of risks uncovered. For example, what if the logic part throws an error or the message schema is invalid, but the message queue consumer fails to translate this exception into a proper message queue action? For example, the consumer code might fail to reject the message or increment the number of attempts (depending on the type of queue that you're using). When this happens, the message will enter a loop where it is always served again and again. Since this will apply to many messages, things can get really bad as the queue becomes highly saturated. For this reason this syndrome is called the 'poisoned message'. To mitigate this risk, the tests' scope must include all the layers, like you probably do when testing against APIs. Unfortunately, this is not as easy as testing with a DB because message queues are flaky. Here is why:
When testing with real queues, things get curiouser and curiouser: tests from different processes will steal messages from each other, purging queues is harder than you might think (e.g., SQS demands 60 seconds to purge queues), to name a few challenges that you won't find when dealing with a real DB
Here is a strategy that works for many teams and holds a small compromise: use a fake in-memory message queue. By 'fake' I mean something simplistic that acts like a stub/spy and does nothing but tell when certain calls are made (e.g., consume, delete, publish). You might find reputable fakes/stubs for your own message queue, like this one for SQS, or you can easily code one yourself. No worries, I'm not in favor of maintaining testing infrastructure myself; this proposed component is extremely simple and unlikely to surpass 50 lines of code (see the example below). On top of this, whether using a real or fake queue, one more thing is needed: a convenient interface that tells the test when certain things happened, like when a message was acknowledged/deleted or a new message was published. Without this, the test never knows when certain events happened and leans toward quirky techniques like polling. With this setup, the test will be short and flat, and you can easily simulate common message queue scenarios like out-of-order messages, batch rejects, duplicated messages and, in our example, the poisoned message scenario (using RabbitMQ):
📝 Code
Create a fake message queue that does almost nothing but record calls, see full example here
```javascript
class FakeMessageQueueProvider extends EventEmitter {
  // Implement here

  publish(message) {}

  consume(queueName, callback) {}
}
```
Make your message queue client accept a real or fake provider
```javascript
class MessageQueueClient extends EventEmitter {
  // Pass to it a fake or real message queue
  constructor(customMessageQueueProvider) {}

  publish(message) {}

  consume(queueName, callback) {}

  // Simple implementation can be found here:
  // https://github.com/testjavascript/nodejs-integration-tests-best-practices/blob/master/example-application/libraries/fake-message-queue-provider.js
}
```
Expose a convenient function that tells when certain calls were made
```javascript
const FakeMessageQueueProvider = require('./libs/fake-message-queue-provider');
const MessageQueueClient = require('./libs/message-queue-client');
const newOrderService = require('./domain/newOrderService');

test('When a poisoned message arrives, then it is being rejected back', async () => {
  // Arrange
  const messageWithInvalidSchema = { nonExistingProperty: 'invalid❌' };
  const messageQueueClient = new MessageQueueClient(
    new FakeMessageQueueProvider()
  );
  // Subscribe to new messages and pass the handler function
  messageQueueClient.consume('orders.new', newOrderService.addOrder);

  // Act
  await messageQueueClient.publish('orders.new', messageWithInvalidSchema);
  // Now all the layers of the app will get stretched 👆, including logic and message queue libraries

  // Assert
  await messageQueueClient.waitFor('reject', { howManyTimes: 1 });
  // 👆 This tells us that eventually our code asked the message queue client to reject this poisoned message
});
```
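How might that waitFor convenience look? Here is a minimal sketch, assuming the client extends EventEmitter and emits a 'reject' event whenever the provider is asked to reject a message - the real helper in the repository linked above may differ:

```javascript
// A minimal sketch of a 'waitFor' helper over an EventEmitter-based queue client.
// Assumption: the client emits an event (e.g., 'reject') on every such provider call
function waitFor(emitter, eventName, { howManyTimes = 1 } = {}) {
  return new Promise((resolve) => {
    let occurrences = 0;
    const eventsData = [];
    emitter.on(eventName, (eventData) => {
      occurrences++;
      eventsData.push(eventData);
      if (occurrences >= howManyTimes) {
        resolve(eventsData); // The test can now assert on what was rejected/published
      }
    });
  });
}
```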
👉What & why - When publishing a library to npm, all your tests might easily pass BUT... the same functionality will fail on the end-user's computer. How come? Tests are executed against the local developer files, but the end-user is only exposed to artifacts that were built. See the mismatch here? After running the tests, the package files are transpiled (I'm looking at you, Babel users), zipped and packed. If a single file is excluded due to .npmignore or a polyfill is not added correctly, the published code will lack mandatory files
📝 Code
Consider the following scenario: you're developing a library, and you wrote this code:
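A minimal sketch of such a library, assuming an index.js entry point that depends on a sibling calculate.js (the exact file contents are illustrative; the package.json is shown as a comment):

```javascript
// index.js - the package's entry point (illustrative sketch)
const { calculate } = require('./calculate'); // 👈 depends on a sibling file

module.exports.fn1 = () => calculate();

// calculate.js (illustrative):
// module.exports.calculate = () => 1;

// package.json (as a comment for brevity):
// {
//   "name": "my-package",
//   "main": "index.js",
//   "files": ["index.js"]   👈 'calculate.js' is not listed, so it won't get packed
// }
```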
See, 100% coverage, all tests pass locally and in the CI ✅, it just won't work in production 👹. Why? Because you forgot to include calculate.js in the package.json files array 👆
What can we do instead? We can test the library as its end-users do. How? Publish the package to a local registry like verdaccio, then let the tests install and approach the published code. Sounds troublesome? Judge for yourself 👇
📝 Code
```javascript
// global-setup.js

// 1. Setup the in-memory NPM registry, one function that's it! 🔥
await setupVerdaccio();

// 2. Building our package
await exec('npm', ['run', 'build'], {
  cwd: packagePath,
});

// 3. Publish it to the in-memory registry
await exec('npm', ['publish', '--registry=http://localhost:4873'], {
  cwd: packagePath,
});

// 4. Installing it in the consumer directory
await exec('npm', ['install', 'my-package', '--registry=http://localhost:4873'], {
  cwd: consumerPath,
});

// Test file in the consumerPath

// 5. Test the package 🚀
test('should succeed', async () => {
  const { fn1 } = await import('my-package');
  expect(fn1()).toEqual(1);
});
```
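If you're using Jest, a natural home for steps 1-4 is the globalSetup hook so they run once before the whole suite - a minimal sketch, assuming global-setup.js wraps those steps in an exported async function:

```javascript
// jest.config.js - a sketch; assumes global-setup.js exports an async function that runs steps 1-4
module.exports = {
  globalSetup: './global-setup.js',
};
```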
This technique also opens the door to covering more risks:

- Testing different versions of a peer dependency you support - let's say your package supports React 16 to 18, you can now test that
- You want to test ESM and CJS consumers
- If you have a CLI application, you can test it like your users do
- Making sure all the voodoo magic in that Babel file is working as expected
🗞 The 'broken contract' test - when the code is great but its corresponding OpenAPI docs lead to a production bug
👉What & so what - Quite confidently, I can say that almost no team tests their OpenAPI correctness. "It's just documentation", "we generate it automatically based on code" are the typical beliefs behind this. Let me show you how this auto-generated documentation can be wrong and lead not only to frustration but also to a bug. In production.
Consider the following scenario: you're requested to return an HTTP error status code if an order is duplicated, but forget to update the OpenAPI specification with this new HTTP status response. While some frameworks can update the docs with new fields, none can realize which errors your code throws; this labour is always manual. On the other side of the line, the API client is doing everything just right, going by the spec that you published, adding orders with some duplication because the docs don't forbid doing so. Then, BOOM, production bug -> the client crashes and shows an ugly unknown error message to the user. This type of failure is called the 'contract' problem: two parties interact, each has code that works perfectly, they just operate under different specs and assumptions. While there are fancy, sophisticated and exhaustive solutions to this challenge (e.g., PACT), there are also leaner approaches that get you covered easily and quickly (at the price of covering fewer risks).
The following sweet technique is based on libraries (jest, mocha) that listen to all network responses, compare the payload against the OpenAPI document, and if any deviation is found - make the test fail with a descriptive error. With this new weapon in your toolbox and almost zero effort, another risk is ticked. It's a pity that these libs can't also assert against the incoming requests to tell you that your tests use the API wrongly. One small caveat and an elegant solution: these libraries dictate putting an assertion statement in every test - expect(response).toSatisfyApiSpec() - which is a bit tedious and relies on human discipline. You can do better if your HTTP client supports a plugin/hook/interceptor, by putting this assertion in a single place that will apply in all the tests:
The OpenAPI doesn't document HTTP status '409', no framework knows to update the OpenAPI doc based on thrown exceptions
"responses":{ "200":{ "description":"successful", } , "400":{ "description":"Invalid ID", "content":{} },// No 409 in this list😲👈 }
The test code
```javascript
const jestOpenAPI = require('jest-openapi');
jestOpenAPI('../openapi.json');

test('When an order with duplicated coupon is added, then 409 error should get returned', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    couponId: uuid(),
  };
  await axiosAPIClient.post('/order', orderToAdd);

  // Act
  // We're adding the same coupon twice 👇
  const receivedResponse = await axios.post('/order', orderToAdd);

  // Assert
  expect(receivedResponse.status).toBe(409);
  expect(receivedResponse).toSatisfyApiSpec();
  // This 👆 will throw if the API response, body or status, differs from what is stated in the OpenAPI
});
```
Trick: If your HTTP client supports any kind of plugin/hook/interceptor, put the following code in 'beforeAll'. This covers all the tests against OpenAPI mismatches
```javascript
beforeAll(() => {
  axios.interceptors.response.use((response) => {
    expect(response).toSatisfyApiSpec();
    // With this 👆, add nothing to the tests - each will fail if the response deviates from the docs
    return response; // Interceptors must return the response to keep the chain intact
  });
});
```
The examples above were not meant only to be a checklist of 'don't forget' test cases, but rather a fresh mindset on what tests could cover for you. Modern tests are not just about functions or user flows, but about any risk that might visit your production. This is doable only with component/integration tests, never with unit or end-to-end tests. Why? Because unlike unit tests, you need all the parts to play together (e.g., the DB migration file, the DAL layer and the error handler all together). Unlike E2E, you have the power to simulate in-process scenarios that demand some tweaking and mocking. Component tests allow you to include many production moving parts early on your machine. I like calling this 'production-oriented development'
Every small change can make this repo much better. If you intend to contribute a relatively small change like a documentation change, linting rules, look&feel fixes, fixing typos, comments or anything that is small and obvious - just fork to your machine, code, ensure all tests pass (e.g., npm test), PR with a meaningful title, and get 1 approver before merging. That's it.
Need to change the code itself? Here is a typical workflow
| Stage | When | What |
| --- | --- | --- |
| ➡️ Idea | Got an idea how to improve? Want to handle an existing issue? | 1. Create an issue (if one doesn't exist) 2. Label the issue with its type (e.g., question, bug) and the area of improvement (e.g., area-generator, area-express) 3. Comment and specify your intent to handle this issue |
| ➡ Design decisions | When the change implies some major decisions, those should be discussed in advance | 1. Within the issue, specify your overall approach/design. Or just open a discussion 2. If choosing a 3rd party library, ensure to follow our standard decision and comparison template. An example can be found here |
| ➡ Code | When you got confirmation from a core maintainer that the design decisions are sensible | 1. Do it with passion 💜 2. Follow our coding guide. Keep it simple. Stay loyal to our philosophy 3. Run all the quality measures frequently (testing, linting) |
| ➡️ Merge | When you have accomplished a short iteration. If the whole change is small, PR in the end | 1. Share your progress early by submitting a work-in-progress PR 2. Ensure all CI checks pass (e.g., testing) 3. Get at least one approval before merging |
Typically, the two main sections are the Microservice (apps) and cross-cutting-concern libraries:
```mermaid
%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#99BF2C','secondaryColor':'#C2DF84','lineColor':'#ABCA64','fontWeight': 'bold', 'fontFamily': 'comfortaa, Roboto'}}}%%
graph
  A[Code Templates] -->|The example Microservice/app| B(Services)
  B -->|Where the API, logic and data lives| D(Example Microservice)
  A -->|Cross Microservice concerns| C(Libraries)
  C -->|Explained in a dedicated section| K(*Multiple libraries like logger)
  style D stroke:#333,stroke-width:4px
```
The Microservice structure
The entry-point of the generated code is an example Microservice that exposes an API and has the traditional layers of a component:
```mermaid
%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#99BF2C','secondaryColor':'#C2DF84','lineColor':'#ABCA64','fontWeight': 'bold', 'fontFamily': 'comfortaa, Roboto'}}}%%
graph
  A[Services] -->|Where the API, logic and data lives| D(Example Microservice)
  A -->|Almost empty, used to exemplify<br/> Microservice communication| E(Collaborator Microservice)
  D -->|The web layer with REST/Graph| G(Web/API layer)
  N -->|Docker-compose based DB, MQ and Cache| F(Infrastructure)
  D -->|Where the business lives| M(Domain layer)
  D -->|Anything related with database| N(Data-access layer)
  D -->|Component-wide testing| S(Testing)
  style D stroke:#333,stroke-width:4px
```
Libraries
All libraries are independent npm packages that can be tested in isolation
This solution is built around independent domains that share almost nothing with others. It is recommended to start with understanding a single and small domain (package), then expand and get acquainted with more. This is also an opportunity to master a specific topic that you're passionate about. Following is our packages list; choose where you wish to contribute first
| Package | What | Status | Chosen libs | Quick links |
| --- | --- | --- | --- | --- |
| microservice/express | A web layer of an example Microservice based on expressjs | | | |
We are in an ever-going quest for better software practices. If you reached down to this page, you probably belong with us 💜.
Note: This is a shortened guide that suits those who are willing to contribute quickly. Once you deepen your relations with Practica.js - it's a good idea to read the full guide
Our philosophy is all about minimalism and simplicity - We strive to write less code, rely on existing and reputable libraries, stick to Node/JS standards and avoid adding our own abstractions
Popular vendors only - Each technology and vendor that we introduce must be super popular and reliable. For example, a library must be one of the top 5 most starred and downloaded in its category. See the full vendor selection instructions here
For a quick start, you don't necessarily need to understand the entire codebase. Typically, your contribution will fall under one of these three categories:
If you simply mean to edit things beyond the code - there is no need to delve into the internals. For example, when changing documentation, CI/bots, and the like - one can simply perform the task without delving into the code
Code and CLI to get the user preferences and copy the right code to her computer
Here you will find CLI, UI, and logic to generate the right code. We run our own custom code to go through the code-template folder and filter out parts/files based on the user preferences (a sketch of this idea follows below). For example, should she ask NOT to get a GitHub Actions file - the generator will remove this file from the output
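To illustrate the filtering idea - a hypothetical sketch, where the names (filesToSkipPerPreference, shouldCopyFile, the preference keys) are illustrative and not the actual generation-logic API:

```javascript
// Hypothetical sketch - the real logic lives in ~/code-generator/generation-logic
const filesToSkipPerPreference = {
  gitHubActions: ['.github/workflows/ci.yml'],
};

function shouldCopyFile(relativeFilePath, userPreferences) {
  // Skip any file that belongs to a feature the user opted out of
  return !Object.entries(filesToSkipPerPreference).some(
    ([preferenceName, ownedFiles]) =>
      userPreferences[preferenceName] === false &&
      ownedFiles.includes(relativeFilePath)
  );
}
```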
How to work with it?
If all you need is to alter the logic, you may just code in the ~/code-generator/generation-logic folder and run the tests (located in the same folder)
If you wish to modify the CLI UI, then you'll need to build the code before running it (because the CLI can't execute TypeScript directly). Open two terminals:
The output of our program: An example Microservice and libraries
Here you will find the generated code that we will selectively copy to the user's computer, which is located under {root}/src/code-templates. It's preferable to work on this code outside the main repository in some side folder. To achieve this, simply generate the code using the CLI, code, run the tests, then finally copy the changes to the main repository
Install dependencies
```bash
nvm use && npm i
```
Build the code
```bash
npm run build
```
Bind the CLI command to our code
```bash
cd .dist && npm link
```
Generate the code to your preferred working folder
```bash
cd {some folder like $HOME}
create-node-app immediate --install-dependencies
```
Now you can work on the generated code. Later on, once your tests pass and you're happy - copy the changes back to ~/practica/src/code-templates
Run the tests while you code
```bash
# From the folder where you generated the code to. You might need to 'git init'
cd default-app-name/services/order-service
npm run test:dev
```
🎯 Bottom line - our recommendation: ✨convict✨ ticks all the boxes by providing a strict schema, a fail-fast option, per-entry documentation and a hierarchical structure (see the sketch after the tables below)
📊 Detailed comparison table
| | dotenv | Convict | nconf | config |
| --- | --- | --- | --- | --- |
| **Executive Summary** | | | | |
| Performance (load time for 100 keys) | 1ms | 5ms | 4ms | 5ms |
| Popularity | Superior | Less popular than competitors | Highly popular | Highly popular |
| ❗ Fail fast & strict schema | No | Yes | No | No |
| Items documentation | No | Yes | No | No |
| Hierarchical configuration schema | No | Yes | Yes | No |
More details: Community & Popularity - March 2022
| | dotenv | Convict | nconf | config |
| --- | --- | --- | --- | --- |
| Stars | 4,200 ✨ | 2,500 ✨ | 2,500 ✨ | 1,000 ✨ |
| Downloads/Week | 12,900,223 📁 | 4,000,000 📁 | 6,000,000 📁 | 5,000,000 📁 |
| Dependents | 26,000 👩‍👧 | 600 👧 | 800 👧 | 1,000 👧 |
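To make the verdict tangible, here is a minimal, illustrative convict sketch (not taken from the Practica codebase; the keys and values are assumptions) showing how a single schema covers all four boxes:

```javascript
const convict = require('convict');

const config = convict({
  port: {
    doc: 'The port that the web server listens on', // 👈 items documentation
    format: 'port', // 👈 strict schema - a non-port value is rejected
    default: 3000,
    env: 'PORT',
  },
  logger: {
    // 👈 hierarchical structure - nested keys are first-class
    level: {
      doc: 'The minimum level to log',
      format: ['debug', 'info', 'warn', 'error'],
      default: 'info',
      env: 'LOGGER_LEVEL',
    },
  },
});

config.validate({ allowed: 'strict' }); // 👈 fail fast - throws on startup if anything is invalid

module.exports = config; // Usage: config.get('logger.level')
```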
diff --git a/decisions/docker-base-image/index.html b/decisions/docker-base-image/index.html
new file mode 100644
index 00000000..700ac3ab
--- /dev/null
+++ b/decisions/docker-base-image/index.html
@@ -0,0 +1,21 @@
+Decision: Choosing a **Docker base image** | Practica.js
📔 What is it - The Dockerfile that is included inherits from a base Node.js image. There are various considerations when choosing the right option, which are listed below
Making our decisions transparent and collaborative is at the heart of Practica. In this folder, all decisions should be documented using our decision template
*For some lacking features there is a community package that bridges the gap; For workspace, we evaluated whether most of them support a specific feature
| | nx | Turborepo | Lerna | workspace (npm, yarn, pnpm) |
| --- | --- | --- | --- | --- |
| **Executive Summary** | | | | |
| Community and maintenance | Huge eco-system and commercial-grade maintenance | Trending, commercial-grade maintenance | Not maintained anymore | Solid |
| ❗ Encourage component autonomy | Packages are highly coupled | Workflow is coupled | npm link bypasses the SemVer | Minor concern: shared NODE_MODULES on the root |
| Build speed | Smart inference and execution plan, shared dependencies, cache | Smart inference and execution plan, shared dependencies, cache | Parallel tasks execution, copied dependencies | Shared dependencies |
| Standardization | Non-standard Node.js stuff: one single root package.json by default, TS paths for linking | An external build layer | An external build layer | An external package centralizer |
| **Tasks and build pipeline** | | | | |
| Run recursive commands (affect a group of packages) | Yes | Yes | Yes | Yes |
| ❗️ Parallel task execution | Yes | Yes | No | Yes* (Yarn & Pnpm) |
| ❗️ Realize which packages changed | Yes | Yes | Yes | No |
| ❗️ Realize packages that are affected by a change | Yes, both through package.json and code | Yes, through package.json | None | None |
| Ignore missing commands/scripts | No | Yes | Yes | Yes |
| ❗️ In-project cache - skip tasks if a local result exists | Yes | Yes | No | No |
| Remote cache - skip tasks if a remote result exists | Yes | Yes | No | No |
| Visual dependency graph | Yes | Yes | Partially, via plugin | No |
| ❗️ Smart waterfall pipeline - schedule unrelated tasks in parallel, not topologically | Yes | Yes | No | No |
| Distributed task execution - spread tasks across machines | Yes | No | No | No |
| **Locally linking packages** | | | | |
| ❗️ Is supported | Partially, achieved through TS paths | No, relies on workspaces | Yes | Yes |
| How | ❗️ Via TypeScript paths and webpack | Relies on workspaces | Symlink | Symlink |
| ❗️ Can opt-out? | Yes, by default local packages are linked | - | No | Partially: Pnpm allows preferring remote packages, Yarn has a [focused package](https://classic.yarnpkg.com/blog/2018/05/18/focused-workspaces/) option which only works for a single package |
| Link a range - only specific versions will be symlinked | No | - | No | Some: Yarn and Pnpm allow workspace versioning |
| **Optimizing dependencies installation speed** | | | | |
| Supported | Yes, via a single root package.json and NODE_MODULES | Yes, via caching | No, can be used on top of yarn workspace | Yes, via a single node_modules folder |
| Retain origin file path (some modules refer to relative paths) | Partially: NODE_MODULES is on the root, not per package | Yes | Not relevant | Partially: Pnpm uses hard links instead of symlinks |
| Keep single NODE_MODULES per machine (faster, less disc space) | No | No | No | Partially: Pnpm supports this |
| **Other features and considerations** | | | | |
| Community plugins | Yes | No | Yes | Yes |
| Scaffold new component from a gallery | Yes | None | None | None |
| Create a new package in the repo | Built-in code generation with useful templates | None, a 3rd party code generator can be used | None, a 3rd party code generator can be used | None, a 3rd party code generator can be used |
| Adapt changes in the monorepo tool | Supported via nx migrate | Supported via codemod | None | None |
| Incremental builds | Supported | Supported | None | None |
| Cross-package modifications | Supported via nx generate | None | None | None |
Ideas for next iteration:

- Separate command execution and pipeline section
- Stars and popularity
- Features summary
- Polyrepo support
diff --git a/decisions/openapi/index.html b/decisions/openapi/index.html
new file mode 100644
index 00000000..6f761c30
--- /dev/null
+++ b/decisions/openapi/index.html
@@ -0,0 +1,21 @@
+Decision: Choosing **OpenAPI** generator tooling | Practica.js
diff --git a/docs/.gitignore b/docs/.gitignore
deleted file mode 100644
index b2d6de30..00000000
--- a/docs/.gitignore
+++ /dev/null
@@ -1,20 +0,0 @@
-# Dependencies
-/node_modules
-
-# Production
-/build
-
-# Generated files
-.docusaurus
-.cache-loader
-
-# Misc
-.DS_Store
-.env.local
-.env.development.local
-.env.test.local
-.env.production.local
-
-npm-debug.log*
-yarn-debug.log*
-yarn-error.log*
diff --git a/docs/README.md b/docs/README.md
deleted file mode 100644
index 37c728d8..00000000
--- a/docs/README.md
+++ /dev/null
@@ -1,41 +0,0 @@
-# Website
-
-This website is built using [Docusaurus 2](https://docusaurus.io/), a modern static website generator.
-
-### Installation
-
-```
-$ npm i
-```
-
-### Local Development
-
-```
-$ npm start
-```
-
-This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
-
-### Build
-
-```
-$ npm run build
-```
-
-This command generates static content into the `build` directory and can be served using any static contents hosting service.
-
-### Deployment
-
-Using SSH:
-
-```
-$ USE_SSH=true yarn deploy
-```
-
-Not using SSH:
-
-```
-$ GIT_USER= yarn deploy
-```
-
-If you are using GitHub pages for hosting, this command is a convenient way to build the website and push to the `gh-pages` branch.
diff --git a/docs/babel.config.js b/docs/babel.config.js
deleted file mode 100644
index e00595da..00000000
--- a/docs/babel.config.js
+++ /dev/null
@@ -1,3 +0,0 @@
-module.exports = {
- presets: [require.resolve('@docusaurus/core/lib/babel/preset')],
-};
diff --git a/docs/blog/10-masterpiece-articles/index.md b/docs/blog/10-masterpiece-articles/index.md
deleted file mode 100644
index 4b58627a..00000000
--- a/docs/blog/10-masterpiece-articles/index.md
+++ /dev/null
@@ -1,233 +0,0 @@
----
-slug: a-compilation-of-outstanding-testing-articles-with-javaScript
-date: 2023-08-06T10:00
-hide_table_of_contents: true
-title: A compilation of outstanding testing articles (with JavaScript)
-authors: [goldbergyoni]
-tags:
- [
- node.js,
- testing,
- javascript,
- tdd,
- unit,
- integration
- ]
----
-
-## What's special about this article?
-
-As a testing consultant, I read tons of testing articles throughout the years. The majority are nice-to-read, casual pieces of content that are not always worth your precious time. Once in a while, not very often, I landed on an article that was _shockingly good_ and could genuinely improve your test writing skills. I've cherry-picked these outstanding articles for you, and added my abstract nearby. Half of these articles are related directly to JavaScript/Node.js, the second half covers ubiquitous testing concepts that are applicable in every language
-
-Why did I find these articles to be outstanding? First, the writing quality is excellent. Second, they deal with the 'new world of testing', not the commonly known 'TDD-ish' stuff but rather modern concepts and tooling
-
-Too busy to read them all? Search for articles that are decorated with a medal 🏅 - these are true masterpieces that you never wanna miss
-
-**Before we start:** If you haven't heard, I launched my comprehensive Node.js testing course a week ago ([curriculum here](https://testjavascript.com/curriculum2/)). There are less than 48 hours left for the [🎁 special launch deal](https://courses.testjavascript.com/p/node-js-javascript-testing-from-a-to-z)
-
-Here they are, 10 outstanding testing articles:
-
-
-
-## 📄 1. 'Selective Unit Testing – Costs and Benefits'
-
-**✍️ Author:** Steve Sanderson
-
-**🔖 Abstract:** We all found ourselves at least once in the ongoing and flammable discussion about 'units' vs 'integration'. This article delves into a greater level of specificity and discusses WHEN unit tests shine by considering the costs of writing these tests under *various scenarios*. Many treat their testing strategy as a static model - a testing technique they always use regardless of the context. "Always write unit tests against functions", "Write mostly integration tests" are the types of arguments often heard. Conversely, this article suggests that the attractiveness of unit tests should be evaluated based on the *costs and benefits per module*. The article classifies multiple scenarios where the net value of unit tests is high or low, for example:
-
-> If your code is basically obvious – so at a glance you can see exactly what it does – then additional design and verification (e.g., through unit testing) yields extremely minimal benefit, if any
-
-The author also puts a 2x2 model to visualize when the attractiveness of unit tests is high or low
-
-
-
-Side note, not part of the article: Personally I (Yoni) always start with component tests, outside-in, cover first the high-level user flow details (a.k.a [the testing diamond](https://www.crispy-engineering.com/p/why-test-diamond-model-makes-sense)). Then later once I have functions, I add unit tests based on their net value. This article helped me a lot in classifying and evaluating the benefits of units in various scenarios
-
-
-**👓 Read time:** 9 min (1850 words)
-
-**🔗 Link:** [https://blog.stevensanderson.com/2009/11/04/selective-unit-testing-costs-and-benefits/](https://blog.stevensanderson.com/2009/11/04/selective-unit-testing-costs-and-benefits/)
-
-
-
-## 📄 2. 'Testing implementation details' (JavaScript example)
-
-**✍️ Author:** Kent C Dodds
-
-**🔖 Abstract:** The author outlines, with a code example, the unavoidable tragic fate of a tester who asserts on implementation details. Put aside the effort of testing so many details - going this route always ends with 'false positives' and 'false negatives' that cloud the tests' reliability. The article illustrates this with a frontend code example, but the takeaway is ubiquitous to any kind of testing
-
-> "There are two distinct reasons that it's important to avoid testing implementation details. Tests which test implementation details:
-> 1. Can break when you refactor application code. *False negatives*
-> 2. May not fail when you break application code. *False positives*"
-
-
-p.s. This author has another outstanding post about a modern testing strategy, checkout this one as well - ['Write tests. Not too many. Mostly integration'](https://kentcdodds.com/blog/write-tests)
-
-
-**👓 Read time:** 13 min (2600 words)
-
-**🔗 Link:** [https://kentcdodds.com/blog/testing-implementation-details](https://kentcdodds.com/blog/testing-implementation-details)
-
-
-
-## 📄 3. 'Testing Microservices, the sane way'
-
-🏅 This is a masterpiece
-
-**✍️ Author:** Cindy Sridharan
-
-**🔖 Abstract:** This one is the entire Microservices and distributed modern testing bible packed in a single long article that is also super engaging. I remember when I came across it four years ago, winter time - I spent an hour every day under my blanket before sleep with a smile spread over my face. I clicked on every link, paused after every paragraph to think - a whole new world was opening in front of me. In fact, it was so fascinating that it made me want to specialize in this domain. Fast forward, years later, this is a major part of my work and I enjoy every moment
-
-This paper starts by explaining why E2E, unit tests and exploratory QA will fall short in a distributed environment. Not only this, but also why any kind of coded test won't be enough and a rich toolbox of techniques is needed. It goes through a handful of modern testing techniques that are unfamiliar to most developers. One of its key parts deals with what should be the canonical developer's testing technique: the author advocates for "big unit tests" (i.e., component tests) as they strike a great balance between developer comfort and realism
-
-> I coined the term “step-up testing”, the general idea being to test at one layer above what’s generally advocated for. Under this model, unit tests would look more like integration tests (by treating I/O as a part of the unit under test within a bounded context), integration testing would look more like testing against real production, and testing in production looks more like, well, monitoring and exploration. The restructured test pyramid (test funnel?) for distributed systems would look like the following:
-
-
-
-Beyond its main scope, whatever type of system you are dealing with - this article will broaden your perspective on testing and expose you to many new ideas that are highly applicable
-
-
-**👓 Read time:** > 2 hours (10,500 words with many links)
-
-**🔗 Link:** [https://copyconstruct.medium.com/testing-microservices-the-sane-way-9bb31d158c16](https://copyconstruct.medium.com/testing-microservices-the-sane-way-9bb31d158c16)
-
-
-
-## 📄 4. 'How to Unit Test with Node.js?' (JavaScript examples, for beginners)
-
-**✍️ Author:** Ryan Jones
-
-**🔖 Abstract:** *One single recommendation for beginners:* Every other article on this list covers advanced testing. This article, and only this one, is meant for testing newbies who are looking to take their first practical steps in this world
-
-This tutorial was chosen from a handful of other alternatives because it's well-written and also relatively comprehensive. It covers the first-steps 'kata' that a beginner should learn first: the test anatomy syntax, test runner CLI, assertions and asynchronous tests. It goes without saying that this knowledge won't be sufficient for covering a real-world app with testing, but it gets you safely to the next phase. My personal advice: after reading this one, your next step is learning about [test doubles (mocking)](https://www.testim.io/blog/sinon-js-tutorial/)
-
-**👓 Read time:** 16 min (3000 words)
-
-**🔗 Link:** [https://medium.com/serverlessguru/how-to-unit-test-with-nodejs-76967019ba56](https://medium.com/serverlessguru/how-to-unit-test-with-nodejs-76967019ba56)
-
-
-
-## 📄 5. 'Unit test fetish'
-
-**✍️ Author:** Martin Sústrik
-
-**🔖 Abstract:** The article opens with 'I hear that people feel an uncontrollable urge to write unit tests nowadays. If you are one of those affected, spare a few minutes and consider these reasons for NOT writing unit tests'. Despite these words, the article is not against unit tests as a principle; rather it highlights when & where unit tests fall short. In these cases, other techniques should be considered. Here is an example: unit tests inherently have a lower return on investment, and the author comes with a sound analogy for this: 'If you are painting a house, you want to start with a biggest brush at hand and spare the tiny brush for the end to deal with fine details. If you begin your QA work with unit tests, you are essentially trying to paint entire house using the finest chinese calligraphy brush...'
-
-**👓 Read time:** 5 min (1000 words)
-
-**🔗 Link:** [https://250bpm.com/blog:40/](https://250bpm.com/blog:40/)
-
-
-
-## 📄 6. 'Mocking is a Code Smell' (JavaScript examples)
-
-**✍️ Author:** Eric Elliott
-
-**🔖 Abstract:** Most of the articles here belong more to the 'modern wave of testing'; here is something more 'classic' that appeals to TDD lovers or just anyone with a need to write unit tests. This article is about HOW to reduce the amount of mocking (test doubles) in your tests. Not only because mocking is an overhead in test writing, but also because mocks hint that something might be wrong. In other words, mocking is not definitely wrong and something that must be fixed right away, but *many* mocks are a sign of something not ideal. Consider a module that inherits from many others, or a chatty one that collaborates with a handful of other modules to do its job - testing and changing this structure is a burden:
-
-
-> "Mocking is required when our decomposition strategy has failed"
-
-The author goes through a variety of techniques to design more autonomous units, like using pure functions by isolating side-effects from the rest of the program logic, using pub/sub, isolating I/O, composing units with patterns like monadic compositions, and some more
-
-The overall article tone is balanced. In some parts, it encourages functional programming and techniques that are far from the mainstream - consider reading these few parts with a grain of salt
-
-**👓 Read time:** 32 min (6,300 words)
-
-**🔗 Link:** [https://medium.com/javascript-scene/mocking-is-a-code-smell-944a70c90a6a](https://medium.com/javascript-scene/mocking-is-a-code-smell-944a70c90a6a)
-
-
-
-## 📄 7. 'Why Good Developers Write Bad Unit Tests'
-
-🏅 This is a masterpiece
-
-**✍️ Author:** Michael Lynch
-
-**🔖 Abstract:** I love this one so much. The author exemplifies how *unexpectedly* it is sometimes the good developers with their great intentions who write bad tests:
-
-> Too often, software developers approach unit testing with the same flawed thinking... They mechanically apply all the “rules” they learned in production code without examining whether they’re appropriate for tests. As a result, they build skyscrapers at the beach
-
-Concrete code examples show how test readability deteriorates once we apply 'skyscraper' thinking, and how to keep it simple. In one part, he demonstrates how violating the DRY principle thoughtfully allows the reader to stay within the test while still keeping the code maintainable. This article alone, in 11 minutes, can greatly improve the tests of developers who tend to write sophisticated tests. If you have someone like this in your team, you now know what to do
-
-**👓 Read time:** 11 min (2,200 words)
-
-**🔗 Link:** [https://mtlynch.io/good-developers-bad-tests/](https://mtlynch.io/good-developers-bad-tests/)
-
-
-
-## 📄 8. 'An Overview of JavaScript Testing in 2022' (JavaScript examples)
-
-**✍️ Author:** Vitali Zaidman
-
-**🔖 Abstract:** This paper is unique here as it doesn't cover a single topic; rather it is a rundown of (almost) all JavaScript testing tools. This allows you to enrich the toolbox in your mind, and have more screwdrivers for more types of screws. For example, knowing that there are IDE extensions that show coverage information right within the code might help you boost the tests adoption in the team, if needed. Knowing that there are solid, free, and open source visual regression tools might encourage you to dip your toes in this water, to name a few examples.
-
-> "We reviewed the most trending testing strategies and tools in the web development community and hopefully made it easier for you to test your sites. In the end, the best decisions regarding application architecture today are made by understanding general patterns that are trending in the very active community of developers, and combining them with your own experience and the characteristics of your application."
-
- The author was also kind enough to leave pros/cons near most tools so the reader can quickly get a sense of how the various options stack up against each other. The article covers categories like assertion libraries, test runners, code coverage tools, visual regression tools, E2E suites and more
-
-**👓 Read time:** 37 min (7,400 words)
-
-**🔗 Link:** [https://medium.com/welldone-software/an-overview-of-javascript-testing-7ce7298b9870](https://medium.com/welldone-software/an-overview-of-javascript-testing-7ce7298b9870)
-
-
-
-## 📄 9. Testing in Production, the safe way
-
-**✍️ Author:** Cindy Sridharan
-
-**🔖 Abstract:** 'Testing in production' is a provocative term that sounds like a risky and careless approach of testing in production instead of verifying the delivery beforehand (yet another case of bad testing terminology). In practice, testing in production doesn't replace coding-time testing, it just adds an _additional_ layer of confidence by _safely_ testing in 3 more phases: deployment, release and post-release. This comprehensive article covers dozens of techniques, some unusual, like traffic shadowing, tap compare and more. More than anything else, it illustrates a holistic testing workflow that builds confidence cumulatively from the developer machine until the new version is serving users in production
-
-> I’m more and more convinced that staging environments are like mocks - at best a pale imitation of the genuine article and the worst form of confirmation bias.
-
-> It’s still better than having nothing - but “works in staging” is only one step better than “works on my machine”.
-
-
-
-**👓 Read time:** 54 min (10,725 words)
-
-**🔗 Link:** [https://copyconstruct.medium.com/testing-in-production-the-safe-way-18ca102d0ef1](https://copyconstruct.medium.com/testing-in-production-the-safe-way-18ca102d0ef1)
-
-
-
-## 📄 10. 'Please don't mock me' (JavaScript examples, from JSConf)
-
-🏅 This is a masterpiece
-
-**✍️ Author:** Justin Searls
-
-**🔖 Abstract:** This fantastic YouTube talk deals with the Achilles heel of testing: where exactly to mock. The dilemma of where to end the test scope, what should be mocked and what shouldn't - is presumably the most strategic test design decision. Consider for example having module A which interacts with module B. If you isolate A by mocking B, A's tests will always pass, even when B's interface has changed and A's code didn't follow. This makes A's tests highly stable but... production will fail in hours. In his talk Justin says:
-
-> "A test that never fails is a bad test because it doesn't tell you anything. Design tests to fail"
-
-Then he goes on to tackle many other interesting mocking crossroads, with beautiful visuals and tons of insights. Please don't miss this one
-
-**👓 Read time:** 39 min
-
-**🔗 Link:** [https://www.youtube.com/watch?v=x8sKpJwq6lY&list=PL1CRgzydk3vzk5nMZNLTODfMartQQzInE&index=148](https://www.youtube.com/watch?v=x8sKpJwq6lY&list=PL1CRgzydk3vzk5nMZNLTODfMartQQzInE&index=148)
-
-
-
-### 📄 Shameless plug: my articles
-
-Here are a few articles that I wrote; obviously I don't 'recommend' my own craft, just checking modestly whether they appeal to you. Together, these articles gained 25,000 GitHub stars - maybe you'll find one of them useful?
-
-* [Node.js testing - beyond the basics](https://github.com/testjavascript/nodejs-integration-tests-best-practices)
-* [50+ JavaScript testing best practices](https://github.com/goldbergyoni/javascript-testing-best-practices)
-* [Writing clean JavaScript tests](https://yonigoldberg.medium.com/fighting-javascript-tests-complexity-with-the-basic-principles-87b7622eac9a)
-
-### 🎁 Bonus: Some other great testing content
-
-These articles are also great, some are highly popular:
-
-* [Property-Based Testing for everyone](https://www.youtube.com/watch?v=5pwv3cuo3Qk)
-* [METAMORPHIC TESTING](https://www.hillelwayne.com/post/metamorphic-testing/)
-* [Lean Testing or Why Unit Tests are Worse than You Think](https://medium.com/@eugenkiss/lean-testing-or-why-unit-tests-are-worse-than-you-think-b6500139a009)
-* [Testing Strategies in a Microservice Architecture](https://martinfowler.com/articles/microservice-testing/?utm_source=pocket_saves)
-* [Test Desiderata](https://kentbeck.github.io/TestDesiderata/)
-* [TDD is dead. Long live testing](https://dhh.dk/2014/tdd-is-dead-long-live-testing.html)
-* [Test-induced-design-damage](https://dhh.dk/2014/test-induced-design-damage.html)
-* [testing-without-mocks](https://www.jamesshore.com/v2/projects/nullables/testing-without-mocks)
-* [Testing Node.js error handling](https://blog.developer.adobe.com/testing-error-handling-in-node-js-567323397114)
-
-p.s. Last reminder, less than 48 hours left for my [online course 🎁 special launch offer](https://courses.testjavascript.com/p/node-js-javascript-testing-from-a-to-z)
\ No newline at end of file
diff --git a/docs/blog/authors.yml b/docs/blog/authors.yml
deleted file mode 100644
index c6333d3b..00000000
--- a/docs/blog/authors.yml
+++ /dev/null
@@ -1,21 +0,0 @@
-goldbergyoni:
- name: Yoni Goldberg
- title: Practica.js core maintainer
- url: https://github.com/goldbergyoni
- image_url: https://github.com/goldbergyoni.png
-michaelsalomon:
- name: Michael Salomon
- title: Practica.js core maintainer
- url: https://github.com/mikicho
- image_url: https://avatars.githubusercontent.com/u/11459632?v=4
-razluvaton:
- name: Raz Luvaton
- title: Practica.js core maintainer
- url: https://github.com/rluvaton
- image_url: https://avatars.githubusercontent.com/u/16746759?v=4
-danielgluskin:
- name: Daniel Gluskin
- title: Practica.js core maintainer
- url: https://github.com/DanielGluskin
- image_url: https://avatars.githubusercontent.com/u/17989958?v=4
-
diff --git a/docs/blog/crucial-tests/index.md b/docs/blog/crucial-tests/index.md
deleted file mode 100644
index 6cc0ebe6..00000000
--- a/docs/blog/crucial-tests/index.md
+++ /dev/null
@@ -1,497 +0,0 @@
----
-slug: testing-the-dark-scenarios-of-your-nodejs-application
-date: 2023-07-07T11:00
-hide_table_of_contents: true
-title: Testing the dark scenarios of your Node.js application
-authors: [goldbergyoni, razluvaton]
-tags:
- [
- node.js,
- testing,
- component-test,
- fastify,
- unit-test,
- integration,
- nock,
- ]
----
-
-## Where the dead-bodies are covered
-
-This post is about tests that are easy to write, 5-8 lines typically; they cover dark and dangerous corners of our applications, but are often overlooked
-
-Some context first: How do we test a modern backend? With [the testing diamond](https://ritesh-kapoor.medium.com/testing-automation-what-are-pyramids-and-diamonds-67494fec7c55), of course, by putting the focus on component/integration tests that cover all the layers, including a real DB. With this approach, our tests 99% resemble the production and the user flows, while the development experience is almost as good as with unit tests. Sweet. If this topic is of interest, we've also written [a guide with 50 best practices for integration tests in Node.js](https://github.com/testjavascript/nodejs-integration-tests-best-practices)
-
-But there is a pitfall: most developers write _only_ semi-happy test cases that are focused on the core user flows. Like invalid inputs, CRUD operations, various application states, etc. This is indeed the bread and butter, a great start, but a whole area is left uncovered. For example, typical tests don't simulate an unhandled promise rejection that leads to process crash, nor do they simulate the webserver bootstrap phase that might fail and leave the process idle, or HTTP calls to external services that often end with timeouts and retries. They typically don't cover the health and readiness route, nor the integrity of the OpenAPI spec against the actual routes schema, to name just a few examples. There are many dead bodies covered beyond business logic, things that sometimes are even beyond bugs but rather are concerned with application downtime
-
-
-
-Here are a handful of examples that might open your mind to a whole new class of risks and tests
-
-**July 2023: My testing course was launched: I've just released a comprehensive testing course that I've been working on for two years. 🎁 It's now on sale, but only for the month of July. Check it out at [testjavascript.com](https://testjavascript.com/)**
-
-## **Test Examples**
-
-## 🧟♀️ The zombie process test
-
-**👉What & so what? -** In all of your tests, you assume that the app has already started successfully, lacking a test against the initialization flow. This is a pity because this phase hides some potential catastrophic failures: First, initialization failures are frequent - many bad things can happen here, like a DB connection failure or a new version that crashes during deployment. For this reason, runtime platforms (like Kubernetes and others) encourage components to signal when they are ready (see [readiness probe](https://komodor.com/learn/kubernetes-readiness-probes-a-practical-guide/#:~:text=A%20readiness%20probe%20allows%20Kubernetes,on%20deletion%20of%20a%20pod.)). Errors at this stage also have a dramatic effect on the app health - if the initialization fails and the process stays alive, it becomes a 'zombie process'. In this scenario, the runtime platform won't realize that something went bad, will keep forwarding traffic to it and avoid creating alternative instances. Besides exiting gracefully, you may want to consider logging, firing a metric, and adjusting your /readiness route. Does it work? Only a test can tell!
-
-**📝 Code**
-
-**Code under test, api.js:**
-
-```javascript
-// A common express server initialization
-const startWebServer = () => {
- return new Promise((resolve, reject) => {
- try {
- // A typical Express setup
- expressApp = express();
- defineRoutes(expressApp); // a function that defines all routes
-      expressApp.listen(process.env.WEB_SERVER_PORT, () => {
-        resolve(expressApp); // Resolve only once the server is actually listening
-      });
- } catch (error) {
- //log here, fire a metric, maybe even retry and finally:
- process.exit();
- }
- });
-};
-```
-
-**The test:**
-
-```javascript
-const api = require('./entry-points/api'); // our api starter that exposes 'startWebServer' function
-const sinon = require('sinon'); // a mocking library
-
-test('When an error happens during the startup phase, then the process exits', async () => {
- // Arrange
- const processExitListener = sinon.stub(process, 'exit');
- // 👇 Choose a function that is part of the initialization phase and make it fail
- sinon
- .stub(routes, 'defineRoutes')
- .throws(new Error('Cant initialize connection'));
-
- // Act
- await api.startWebServer();
-
- // Assert
- expect(processExitListener.called).toBe(true);
-});
-```
-
-## 👀 The observability test
-
-**👉What & why -** For many, testing error means checking the exception type or the API response. This leaves one of the most essential parts uncovered - making the error **correctly observable**. In plain words, ensuring that it's being logged correctly and exposed to the monitoring system. It might sound like an internal thing, implementation testing, but actually, it goes directly to a user. Yes, not the end-user, but rather another important one - the ops user who is on-call. What are the expectations of this user? At the very basic level, when a production issue arises, she must see detailed log entries, _including stack trace_, cause and other properties. This info can save the day when dealing with production incidents. On top of this, in many systems, monitoring is managed separately to conclude about the overall system state using cumulative heuristics (e.g., an increase in the number of errors over the last 3 hours). To support these monitoring needs, the code also must fire error metrics. Even tests that do try to cover these needs take a naive approach by checking that the logger function was called - but hey, does it include the right data? Some write better tests that check the error type that was passed to the logger. Good enough? No! The ops user doesn't care about the JavaScript class names but about the JSON data that is sent out. The following test focuses on the specific properties that are being made observable:
-
-**📝 Code**
-
-```javascript
-test('When exception is throw during request, Then logger reports the mandatory fields', async () => {
- //Arrange
- const orderToAdd = {
- userId: 1,
- productId: 2,
- status: 'approved',
- };
- const metricsExporterDouble = sinon.stub(metricsExporter, 'fireMetric');
- sinon
- .stub(OrderRepository.prototype, 'addOrder')
- .rejects(new AppError('saving-failed', 'Order could not be saved', 500));
- const loggerDouble = sinon.stub(logger, 'error');
-
- //Act
- await axiosAPIClient.post('/order', orderToAdd);
-
- //Assert
- expect(loggerDouble).toHaveBeenCalledWith({
- name: 'saving-failed',
- status: 500,
- stack: expect.any(String),
- message: expect.any(String),
- });
- expect(
- metricsExporterDouble).toHaveBeenCalledWith('error', {
-      errorName: 'saving-failed', // Must match the error name that the code under test fired
- })
-});
-```
-
-## 👽 The 'unexpected visitor' test - when an uncaught exception meets our code
-
-**👉What & why -** A typical error flow test falsely assumes two conditions: A valid error object was thrown, and it was caught. Neither is guaranteed; let's focus on the 2nd assumption: it's common for certain errors to be left uncaught. The error might get thrown before your framework error handler is ready, some npm libraries can throw surprisingly from different stacks using timer functions, or you just forgot to set someEventEmitter.on('error', ...). To name a few examples. These errors will find their way to the global process.on('uncaughtException') handler, **hopefully if your code subscribed**. How do you simulate this scenario in a test? Naively you may locate a code area that is not wrapped with try-catch and stub it to throw during the test. But here's a catch-22: if you are familiar with such an area - you are likely to fix it and ensure its errors are caught. What do we do then? We can bring to our benefit the fact that JavaScript is 'borderless': if some object can emit an event, we as its subscribers can make it emit this event ourselves. Here's an example:
-
-
-**📝 Code**
-
-```javascript
-test('When an unhandled exception is thrown, then process stays alive and the error is logged', async () => {
- //Arrange
- const loggerDouble = sinon.stub(logger, 'error');
- const processExitListener = sinon.stub(process, 'exit');
- const errorToThrow = new Error('An error that wont be caught 😳');
-
- //Act
- process.emit('uncaughtException', errorToThrow); //👈 Where the magic is
-
- // Assert
- expect(processExitListener.called).toBe(false);
- expect(loggerDouble).toHaveBeenCalledWith(errorToThrow);
-});
-```
-
-## 🕵🏼 The 'hidden effect' test - when the code should not mutate at all
-
-**👉What & so what -** In common scenarios, the code under test should stop early, like when the incoming payload is invalid or a user doesn't have sufficient credits to perform an operation. In these cases, no DB records should be mutated. Most tests out there in the wild settle with testing the HTTP response only - got back HTTP 400? Great, the validation/authorization probably work. Or do they? The test trusts the code too much; a valid response doesn't guarantee that the code behind behaved as designed. Maybe a new record was added although the user has no permissions? Clearly you need to test this, but how would you test that a record was NOT added? There are two options here: If the DB is purged before/after every test, then just try to perform an invalid operation and check that the DB is empty afterward. If you're not cleaning the DB often (like me, but that's another discussion), the payload must contain some unique and queryable value that you can query later and hope to get no records. This is how it looks:
-
-**📝 Code**
-
-```javascript
-it('When adding an invalid order, then it returns 400 and NOT retrievable', async () => {
- //Arrange
- const orderToAdd = {
- userId: 1,
- mode: 'draft',
- externalIdentifier: uuid(), //no existing record has this value
- };
-
- //Act
- const { status: addingHTTPStatus } = await axiosAPIClient.post(
- '/order',
- orderToAdd
- );
-
- //Assert
- const { status: fetchingHTTPStatus } = await axiosAPIClient.get(
- `/order/externalIdentifier/${orderToAdd.externalIdentifier}`
- ); // Trying to get the order that should have failed
- expect({ addingHTTPStatus, fetchingHTTPStatus }).toMatchObject({
- addingHTTPStatus: 400,
- fetchingHTTPStatus: 404,
- });
- // 👆 Check that no such record exists
-});
-```
-
-## 🧨 The 'overdoing' test - when the code should mutate but it's doing too much
-
-**👉What & why -** This is how a typical data-oriented test looks: first you add some records, then approach the code under test, and finally assert what happens to these specific records. So far, so good. There is one caveat here though: since the test narrows its focus to specific records, it ignores whether other records were unnecessarily affected. This can be really bad. Here's a short real-life story that happened to my customer: some data access code changed and incorporated a bug that updates ALL the system users instead of just one. All tests passed since they focused on a specific record which was positively updated; they just ignored the others. How would you test and prevent this? Here is a nice trick that I was taught by my friend Gil Tayar: in the first phase of the test, besides the main records, add one or more 'control' records that should not get mutated during the test. Then, run the code under test, and besides the main assertion, check also that the control records were not affected:
-
-**📝 Code**
-
-```javascript
-test('When deleting an existing order, Then it should NOT be retrievable', async () => {
- // Arrange
- const orderToDelete = {
- userId: 1,
- productId: 2,
- };
- const deletedOrder = (await axiosAPIClient.post('/order', orderToDelete)).data
- .id; // We will delete this soon
- const orderNotToBeDeleted = orderToDelete;
- const notDeletedOrder = (
- await axiosAPIClient.post('/order', orderNotToBeDeleted)
- ).data.id; // We will not delete this
-
- // Act
- await axiosAPIClient.delete(`/order/${deletedOrder}`);
-
- // Assert
- const { status: getDeletedOrderStatus } = await axiosAPIClient.get(
- `/order/${deletedOrder}`
- );
- const { status: getNotDeletedOrderStatus } = await axiosAPIClient.get(
- `/order/${notDeletedOrder}`
- );
- expect(getNotDeletedOrderStatus).toBe(200);
- expect(getDeletedOrderStatus).toBe(404);
-});
-```
-
-## 🕰 The 'slow collaborator' test - when the other HTTP service times out
-
-**👉What & why -** When your code approaches other services/microservices via HTTP, savvy testers minimize end-to-end tests because these tests lean toward happy paths (it's harder to simulate scenarios). This mandates using some mocking tool to act like the remote service, for example, using tools like [nock](https://github.com/nock/nock) or [wiremock](https://wiremock.org/). These tools are great, only some use them naively and mainly check that outside calls were indeed made. What if the other service is not available **in production**, what if it is slower and times out occasionally (one of the biggest risks of Microservices)? While you can't wholly save this transaction, your code should do the best given the situation and retry, or at least log and return the right status to the caller. All the network mocking tools allow simulating delays, timeouts and other 'chaotic' scenarios. The question left is how to simulate a slow response without having slow tests? You may use [fake timers](https://sinonjs.org/releases/latest/fake-timers/) and trick the system into believing a few seconds passed in a single tick. If you're using [nock](https://github.com/nock/nock), it offers an interesting feature to simulate timeouts **quickly**: the .delay function simulates slow responses, then nock will realize immediately if the delay is higher than the HTTP client timeout and throw a timeout event immediately without waiting
-
-**📝 Code**
-
-```javascript
-// In this example, our code accepts new Orders and while processing them approaches the Users Microservice
-test('When users service times out, then return 503 (option 1 with fake timers)', async () => {
- //Arrange
- const clock = sinon.useFakeTimers();
- config.HTTPCallTimeout = 1000; // Set a timeout for outgoing HTTP calls
- nock(`${config.userServiceURL}/user/`)
- .get('/1', () => clock.tick(2000)) // Reply delay is bigger than configured timeout 👆
- .reply(200);
- const loggerDouble = sinon.stub(logger, 'error');
- const orderToAdd = {
- userId: 1,
- productId: 2,
- mode: 'approved',
- };
-
- //Act
- // 👇try to add new order which should fail due to User service not available
- const response = await axiosAPIClient.post('/order', orderToAdd);
-
- //Assert
- // 👇At least our code does its best given this situation
- expect(response.status).toBe(503);
- expect(loggerDouble.lastCall.firstArg).toMatchObject({
- name: 'user-service-not-available',
- stack: expect.any(String),
- message: expect.any(String),
- });
-});
-```
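-
-For completeness, here is what option 2 might look like. This is a minimal sketch that reuses the same `config` and `axiosAPIClient` setup from the test above and relies on nock's .delay function:
-
-```javascript
-test('When users service times out, then return 503 (option 2 with nock delay)', async () => {
-  // Arrange
-  config.HTTPCallTimeout = 1000; // Outgoing HTTP calls time out after 1 second
-  nock(`${config.userServiceURL}/user/`)
-    .get('/1')
-    .delay(2000) // 👈 Longer than the client timeout - nock emits a timeout error immediately
-    .reply(200);
-  const orderToAdd = { userId: 1, productId: 2, mode: 'approved' };
-
-  // Act
-  const response = await axiosAPIClient.post('/order', orderToAdd);
-
-  // Assert
-  expect(response.status).toBe(503);
-});
-```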
-
-## 💊 The 'poisoned message' test - when the message consumer gets an invalid payload that might put it in stagnation
-
-**👉What & so what -** When testing flows that start or end in a queue, I bet you're going to bypass the message queue layer - where the code and libraries consume a queue - and approach the logic layer directly. Yes, it makes things easier but leaves a class of risks uncovered. For example, what if the logic part throws an error or the message schema is invalid, but the message queue consumer fails to translate this exception into a proper message queue action? For example, the consumer code might fail to reject the message or increment the number of attempts (depending on the type of queue that you're using). When this happens, the message will enter a loop where it is served again and again. Since this will apply to many messages, things can get really bad as the queue becomes highly saturated. For this reason, this syndrome is called the 'poisoned message'. To mitigate this risk, the tests' scope must include all the layers, like you probably do when testing against APIs. Unfortunately, this is not as easy as testing with a DB because message queues are flaky; here is why
-
-When testing with real queues things get curiouser and curiouser: tests from different processes will steal messages from each other, purging queues is harder than you might think (e.g., [SQS demands 60 seconds](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-using-purge-queue.html) to purge queues), to name a few challenges that you won't find when dealing with a real DB
-
-Here is a strategy that works for many teams and holds a small compromise - use a fake in-memory message queue. By 'fake' I mean something simplistic that acts like a stub/spy and does nothing but tell when certain calls are made (e.g., consume, delete, publish). You might find reputable fakes/stubs for your own message queue like [this one for SQS](https://github.com/m-radzikowski/aws-sdk-client-mock), or you can code one **easily** yourself. No worries, I'm not in favour of maintaining testing infrastructure myself - this proposed component is extremely simple and unlikely to surpass 50 lines of code (see the example below). On top of this, whether using a real or fake queue, one more thing is needed: create a convenient interface that tells the test when certain things happened, like when a message was acknowledged/deleted or a new message was published. Without this, the test never knows when certain events happened and leans toward quirky techniques like polling. Having this setup, the test will be short and flat, and you can easily simulate common message queue scenarios like out-of-order messages, batch reject, duplicated messages and, in our example, the poisoned message scenario (using RabbitMQ):
-
-**📝 Code**
-
-1. Create a fake message queue that does almost nothing but record calls (see the full example linked below)
-
-```javascript
-const { EventEmitter } = require('events');
-
-// A simplistic sketch - just enough to record calls and deliver messages
-class FakeMessageQueueProvider extends EventEmitter {
-  publish(queueName, message) {
-    this.emit('publish', { queueName, message }); // Record the call for the test
-    if (this.consumerCallback) this.consumerCallback(message); // Deliver to the consumer
-  }
-
-  consume(queueName, callback) {
-    this.consumerCallback = callback; // Store the handler for incoming messages
-    this.emit('consume', { queueName });
-  }
-}
-```
-
-2. Make your message queue client accept a real or fake provider
-
-```javascript
-class MessageQueueClient extends EventEmitter {
-  // Pass to it a fake or real message queue provider
-  constructor(customMessageQueueProvider) {}
-
-  publish(queueName, message) {}
-
-  consume(queueName, callback) {}
-
-  // Simple implementation can be found here:
-  // https://github.com/testjavascript/nodejs-integration-tests-best-practices/blob/master/example-application/libraries/fake-message-queue-provider.js
-}
-```
-
-3. Expose a convenient function that tells when certain calls were made
-
-```javascript
-class MessageQueueClient extends EventEmitter {
-  publish(queueName, message) {}
-
-  consume(queueName, callback) {}
-
-  // 👇
-  waitForEvent(eventName: 'publish' | 'consume' | 'acknowledge' | 'reject', howManyTimes: number): Promise<void>
-}
-```
-```
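-
-In case you wonder how such a function might work internally, here is one possible sketch (an illustration, not the actual library code) - it counts event emissions and resolves a promise once the expected count is reached:
-
-```javascript
-class MessageQueueClient extends EventEmitter {
-  // A hypothetical implementation sketch of waitForEvent
-  waitForEvent(eventName, howManyTimes = 1) {
-    let seenSoFar = 0;
-    return new Promise((resolve) => {
-      this.on(eventName, (eventData) => {
-        seenSoFar++;
-        if (seenSoFar === howManyTimes) {
-          resolve(eventData); // The test can now safely assert
-        }
-      });
-    });
-  }
-}
-```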
-
-4. The test is now short, flat and expressive 👇
-
-```javascript
-const FakeMessageQueueProvider = require('./libs/fake-message-queue-provider');
-const MessageQueueClient = require('./libs/message-queue-client');
-const newOrderService = require('./domain/newOrderService');
-
-test('When a poisoned message arrives, then it is rejected back', async () => {
-  // Arrange
-  const messageWithInvalidSchema = { nonExistingProperty: 'invalid❌' };
-  const messageQueueClient = new MessageQueueClient(
-    new FakeMessageQueueProvider()
-  );
-  // Subscribe to new messages and pass the handler function
-  messageQueueClient.consume('orders.new', newOrderService.addOrder);
-
-  // Act
-  await messageQueueClient.publish('orders.new', messageWithInvalidSchema);
-  // Now all the layers of the app will get stretched 👆, including logic and message queue libraries
-
-  // Assert
-  await messageQueueClient.waitForEvent('reject', 1);
-  // 👆 This tells us that eventually our code asked the message queue client to reject this poisoned message
-});
-```
-
-**📝Full code example -** [is here](https://github.com/testjavascript/nodejs-integration-tests-best-practices/blob/master/recipes/message-queue/fake-message-queue.test.js)
-
-## 📦 Test the package as a consumer
-
-**👉What & why -** When publishing a library to npm, all your tests might easily pass BUT... the same functionality will fail on the end-user's computer. How come? Tests are executed against the local developer files, but the end-user is only exposed to artifacts _that were built_. See the mismatch here? _After_ running the tests, the package files are transpiled (I'm looking at you, Babel users), zipped and packed. If a single file is excluded due to .npmignore or a polyfill is not added correctly, the published code will lack mandatory files
-
-**📝 Code**
-
-Consider the following scenario: you're developing a library, and you wrote this code:
-```js
-// index.js
-export * from './calculate.js';
-
-// calculate.js 👈
-export function calculate() {
- return 1;
-}
-```
-
-Then some tests:
-```js
-import { calculate } from './index.js';
-
-test('should return 1', () => {
- expect(calculate()).toBe(1);
-})
-
-// ✅ All tests pass 🎊
-```
-
-Finally, configure the package.json:
-```json5
-{
- // ....
- "files": [
- "index.js"
- ]
-}
-```
-
-See, 100% coverage, all tests pass locally and in the CI ✅ - it just won't work in production 👹. Why? Because you forgot to include `calculate.js` in the package.json `files` array 👆
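-
-The immediate fix is obvious once spotted - list every published file, as shown below for this example. The deeper question is how to catch this class of bugs automatically:
-
-```json5
-{
-  // ....
-  "files": [
-    "index.js",
-    "calculate.js" // 👈 The file that was missing
-  ]
-}
-```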
-
-
-What can we do instead? We can test the library as _its end-users_ do. How? Publish the package to a local registry like [verdaccio](https://verdaccio.org/), then let the tests install and approach the *published* code. Sounds troublesome? Judge for yourself 👇
-
-**📝 Code**
-
-```js
-// global-setup.js
-// Note: setupVerdaccio and exec are helpers from the full example linked below,
-// not built-in APIs - exec spawns a child process, setupVerdaccio boots the registry
-
-// 1. Setup the in-memory NPM registry, one function, that's it! 🔥
-await setupVerdaccio();
-
-// 2. Building our package
-await exec('npm', ['run', 'build'], {
- cwd: packagePath,
-});
-
-// 3. Publish it to the in-memory registry
-await exec('npm', ['publish', '--registry=http://localhost:4873'], {
- cwd: packagePath,
-});
-
-// 4. Installing it in the consumer directory
-await exec('npm', ['install', 'my-package', '--registry=http://localhost:4873'], {
- cwd: consumerPath,
-});
-
-// Test file in the consumerPath
-
-// 5. Test the package 🚀
-test("should succeed", async () => {
- const { fn1 } = await import('my-package');
-
- expect(fn1()).toEqual(1);
-});
-```
-
-**📝Full code example -** [is here](https://github.com/rluvaton/e2e-verdaccio-example)
-
-What else can this technique be useful for?
-
-- Testing the different peer dependency versions you support - say your package supports React 16 to 18, you can now test that
-- Testing both ESM and CJS consumers
-- If you have a CLI application, you can test it just like your users do
-- Making sure all the voodoo magic in that Babel file is working as expected
-
-## 🗞 The 'broken contract' test - when the code is great but its corresponding OpenAPI docs lead to a production bug
-
-**👉What & so what -** Quite confidently, I'm sure that almost no team tests their OpenAPI correctness. "It's just documentation", "we generate it automatically based on code" are typical beliefs behind this. Let me show you how this auto-generated documentation can be wrong and lead not only to frustration but also to a bug. In production.
-
-Consider the following scenario: you're requested to return an HTTP error status code if an order is duplicated, but you forget to update the OpenAPI specification with this new HTTP status response. While some frameworks can update the docs with new fields, none can realize which errors your code throws - this labour is always manual. On the other side of the line, the API client is doing everything just right, going by the spec that you published, adding orders with some duplication because the docs don't forbid doing so. Then, BOOM, a production bug -> the client crashes and shows an ugly unknown error message to the user. This type of failure is called the 'contract' problem: two parties interact, each has code that works perfectly, they just operate under different specs and assumptions. While there are fancy, sophisticated and exhaustive solutions to this challenge (e.g., [PACT](https://pact.io)), there are also leaner approaches that get you covered _easily and quickly_ (at the price of covering fewer risks).
-
-The following sweet technique is based on libraries (for jest, mocha) that listen to all network responses, compare the payload against the OpenAPI document, and if any deviation is found - make the test fail with a descriptive error. With this new weapon in your toolbox and almost zero effort, another risk is ticked. It's a pity that these libs can't also assert against the incoming requests to tell you that your tests use the API wrongly. One small caveat and an elegant solution: these libraries dictate putting an assertion statement in every test - expect(response).toSatisfyApiSpec() - a bit tedious and reliant on human discipline. You can do better if your HTTP client supports plugins/hooks/interceptors by putting this assertion in a single place that will apply to all the tests:
-
-**📝 Code**
-
-**Code under test, an API throws a new error status**
-
-```javascript
-if (doesOrderCouponAlreadyExist) {
- throw new AppError('duplicated-coupon', { httpStatus: 409 });
-}
-```
-
-The OpenAPI doesn't document HTTP status '409' - no framework knows to update the OpenAPI doc based on thrown exceptions
-
-```json
-"responses": {
-  "200": {
-    "description": "successful"
-  },
-  "400": {
-    "description": "Invalid ID",
-    "content": {}
-  } // No 409 in this list😲👈
-}
-```
-
-**The test code**
-
-```javascript
-const jestOpenAPI = require('jest-openapi');
-jestOpenAPI('../openapi.json');
-
-test('When an order with a duplicated coupon is added, then a 409 error should get returned', async () => {
-  // Arrange
-  const orderToAdd = {
-    userId: 1,
-    productId: 2,
-    couponId: uuid(),
-  };
-  await axiosAPIClient.post('/order', orderToAdd);
-
-  // Act
-  // We're adding the same coupon twice 👇
-  const receivedResponse = await axiosAPIClient.post('/order', orderToAdd);
-
-  // Assert
-  expect(receivedResponse.status).toBe(409);
-  expect(receivedResponse).toSatisfyApiSpec();
-  // This 👆 will throw if the API response, body or status, differs from what is stated in the OpenAPI document
-});
-```
-
-Trick: If your HTTP client supports any kind of plugin/hook/interceptor, put the following code in 'beforeAll'. This covers all the tests against OpenAPI mismatches
-
-```javascript
-beforeAll(() => {
-  axios.interceptors.response.use((response) => {
-    expect(response).toSatisfyApiSpec();
-    // With this 👆, add nothing to the tests - each will fail if the response deviates from the docs
-    return response; // An interceptor must return the response for the chain to continue
-  });
-});
-```
-
-## Even more ideas
-
-- Test readiness and health routes
-- Test message queue connection failures
-- Test JWT and JWKS failures
-- Test security-related things like CSRF tokens
-- Test your HTTP client retry mechanism (very easy with nock - see the sketch below)
-- Test that the DB migration succeeds and the new code can work with the old records format
-- Test DB connection disconnects
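-
-For instance, here is a minimal sketch of the retry test with nock, assuming the same `config` and `axiosAPIClient` from the examples above and an HTTP client that retries once on a 500 response:
-
-```javascript
-test('When the first attempt fails, then the HTTP client retries and succeeds', async () => {
-  // Arrange - the first call fails, the second one succeeds
-  nock(`${config.userServiceURL}/user/`)
-    .get('/1')
-    .reply(500)
-    .get('/1')
-    .reply(200, { id: 1, name: 'John' });
-
-  // Act
-  const response = await axiosAPIClient.post('/order', { userId: 1, productId: 2 });
-
-  // Assert - the retry kicked in, so the order was added successfully
-  expect(response.status).toBe(200);
-});
-```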
-
-## It's not just ideas, it's a whole new mindset
-
-The examples above were not meant to be merely a checklist of 'don't forget' test cases, but rather a fresh mindset on what tests could cover for you. Modern tests are not just about functions or user flows, but about any risk that might visit your production. This is doable only with component/integration tests, never with unit or end-to-end tests. Why? Unlike unit tests, you need all the parts to play together (e.g., the DB migration file, the DAL layer and the error handler, all together). Unlike E2E, you have the power to simulate in-process scenarios that demand some tweaking and mocking. Component tests allow you to include many production moving parts early, on your machine. I like calling this 'production-oriented development'
-
-**My new online testing course -** If you're intrigued with beyond the basics testing patterns, [consider my online course which was just launched and is 🎁 on sale for 30 days (July 2023)](https://testjavascript.com)
diff --git a/docs/blog/is-prisma-better/commits-comparison.png b/docs/blog/is-prisma-better/commits-comparison.png
deleted file mode 100644
index 05e1c3f5..00000000
Binary files a/docs/blog/is-prisma-better/commits-comparison.png and /dev/null differ
diff --git a/docs/blog/is-prisma-better/high1-importance-slider.png b/docs/blog/is-prisma-better/high1-importance-slider.png
deleted file mode 100644
index 2b60f2b0..00000000
Binary files a/docs/blog/is-prisma-better/high1-importance-slider.png and /dev/null differ
diff --git a/docs/blog/is-prisma-better/high2-importance-slider.png b/docs/blog/is-prisma-better/high2-importance-slider.png
deleted file mode 100644
index 5d5ad39d..00000000
Binary files a/docs/blog/is-prisma-better/high2-importance-slider.png and /dev/null differ
diff --git a/docs/blog/is-prisma-better/index.md b/docs/blog/is-prisma-better/index.md
deleted file mode 100644
index 591c2897..00000000
--- a/docs/blog/is-prisma-better/index.md
+++ /dev/null
@@ -1,450 +0,0 @@
----
-slug: is-prisma-better-than-your-traditional-orm
-date: 2022-12-07T11:00
-hide_table_of_contents: true
-title: Is Prisma better than your 'traditional' ORM?
-authors: [goldbergyoni]
-tags:
- [
- node.js,
- express,
- nestjs,
- fastify,
- passport,
- dotenv,
- supertest,
- practica,
- testing,
- ]
----
-
-## Intro - Why discuss yet another ORM (or the man who had a stain on his fancy suit)?
-
-*Betteridge's law of headlines suggests that a 'headline that ends in a question mark can be answered by the word NO'. Will this article follow this rule?*
-
-Imagine an elegant businessman (or woman) walking into a building, wearing a fancy tuxedo and a luxury watch wrapped around his palm. He smiles and waves all over to say hello while people around are staring admirably. You get a little closer, then, shockingly, while standing nearby it's hard to ignore a bold, dark stain on his white shirt. What a dissonance - suddenly all of that glamour is stained
-
-
-
-Like this businessman, Node is highly capable and popular, and yet, in certain areas, its offering basket is stained with inferior offerings. One of these areas is the ORM space: "I wish we had something like (Java) Hibernate or (.NET) Entity Framework" are words commonly heard from Node developers. What about existing mature ORMs like TypeORM and Sequelize? We owe so much to these maintainers, and yet, the produced developer experience and the level of maintenance just don't feel delightful - some may say even mediocre. At least so I believed *before* writing this article...
-
-From time to time, a shiny new ORM is launched, and there is hope. Then soon it's realized that these new emerging projects are more of the same, if they survive at all. Until one day, Prisma ORM arrived surrounded with glamour: it's gaining tons of attention all over, producing fantastic content, being used by respectable frameworks and... raised $40,000,000 (40 million) to build the next generation ORM - is it the 'Ferrari' ORM we've been waiting for? Is it a game changer? If you're the 'no ORM for me' type, will this one make you convert your religion?
-
-In [Practica.js](https://github.com/practicajs/practica) (the Node.js starter based on [Node.js best practices with 83,000 stars](https://github.com/goldbergyoni/nodebestpractices)) we aim to make the best decisions for our users. The Prisma hype made us stop for a second, evaluate its unique offering and conclude whether we should upgrade our toolbox
-
-This article is certainly not an 'ORM 101' but rather a spotlight on specific dimensions in which Prisma aims to shine or struggles. It's compared against the two most popular Node.js ORMs - TypeORM and Sequelize. Why weren't other promising contenders like MikroORM covered? Simply because they are not as popular yet, and maturity is a critical trait of ORMs
-
-Ready to explore how good Prisma is and whether you should throw away your current tools?
-
-
-
-## TOC
-
-1. Prisma basics in 3 minutes
-2. Things that are mostly the same
-3. Differentiation
-4. Closing
-
-## Prisma basics in 3 minutes
-
-Just before delving into the strategic differences, for the benefit of those unfamiliar with Prisma - here is a quick 'hello-world' workflow with Prisma ORM. If you're already familiar with it - skipping to the next section sounds sensible. Simply put, Prisma dictates 3 key steps to get our ORM code working:
-
-**A. Define a model -** Unlike almost any other ORM, Prisma brings a unique language (DSL) for modeling the database-to-code mapping. This proprietary syntax aims to express these models with minimum clutter (i.e., without TypeScript generics and verbose code). Worried about intellisense and validation? A well-crafted vscode extension has you covered. In the following example, the prisma.schema file describes a DB where a Country table has a one-to-many relation with an Order table:
-
-```prisma
-// prisma.schema file
-model Order {
- id Int @id @default(autoincrement())
- userId Int?
- paymentTermsInDays Int?
- deliveryAddress String? @db.VarChar(255)
- country Country @relation(fields: [countryId], references: [id])
- countryId Int
-}
-
-model Country {
- id Int @id @default(autoincrement())
- name String @db.VarChar(255)
- Order Order[]
-}
-```
-
-**B. Generate the client code -** Another unusual technique: to get the ORM code ready, one must invoke Prisma's CLI and ask for it:
-
-```bash
-npx prisma generate
-```
-
-Alternatively, if you wish to have your DB ready and the code generated with one command, just fire:
-
-```bash
-npx prisma migrate dev
-```
-
-This will generate migration files that you can execute later in production, and also the TypeScript ORM client code based on the model. The generated code lives by default under '[root]/node_modules/.prisma/client'. Every time the model changes, the code must be re-generated. While most ORMs name this code 'repository' or 'entity' or 'active record', interestingly, Prisma calls it a 'client'. This shows part of its unique philosophy, which we will explore later
-
-**C. All good, use the client to interact with the DB -** The generated client has a rich set of functions and types for your DB interactions. Just import the ORM/client code and use it:
-
-```javascript
-import { PrismaClient } from '.prisma/client';
-
-const prisma = new PrismaClient();
-// A query example
-await prisma.order.findMany({
- where: {
- paymentTermsInDays: 30,
- },
- orderBy: {
- id: 'asc',
- },
- });
-// Use the same client for insertion, deletion, updates, etc
-```
-
-That's the nuts and bolts of Prisma. Is it different and better?
-
-## What is the same?
-
-When comparing options, before outlining differences, it's useful to state what is actually similar among these products. Here is a partial list of features that TypeORM, Sequelize and Prisma all support
-
-- Casual queries with sorting, filtering, distinct, group by, 'upsert' (update or create), etc
-- Raw queries
-- Full text search
-- Association/relations of any type (e.g., many to many, self-relation, etc)
-- Aggregation queries
-- Pagination
-- CLI
-- Transactions
-- Migration & seeding
-- Hooks/events (called middleware in Prisma)
-- Connection pool
-- Based on various community benchmarks, no dramatic performance differences
-- All have a huge amount of stars and downloads
-
-Overall, I found TypeORM and Sequelize to be a little more feature-rich. For example, the following features are missing in Prisma only: GIS queries, DB-level custom constraints, DB replication, soft delete, caching, exclude queries and some more
-
-With that, shall we focus on what really sets them apart and makes a difference?
-
-## What is fundamentally different?
-
-## 1. Type safety across the board
-
-**💁‍♂️ What is it about:** An ORM's life has not gotten easier since the rise of TypeScript, to say the least. The need to support typed models/queries/etc. yields a lot of developer sweat. Sequelize, for example, has struggled to stabilize a TypeScript interface and by now offers 3 different syntaxes plus one external library ([sequelize-typescript](https://github.com/sequelize/sequelize-typescript)) that offers yet another style. Look at the syntax below - this feels like an afterthought: a library that was not built for TypeScript and now tries to squeeze it in somehow. Despite the major investment, both Sequelize and TypeORM offer only partial type safety. Simple queries do return typed objects, but other common corner cases like attributes/projections leave you with brittle strings. Here are a few examples:
-
-
-```javascript
-// Sequelize pesky TypeScript interface
-type OrderAttributes = {
-  id: number,
-  price: number,
-  // other attributes...
-};
-
-type OrderCreationAttributes = Optional<OrderAttributes, 'id'>;
-
-//😯 Isn't this a weird syntax?
-class Order extends Model<InferAttributes<Order>, InferCreationAttributes<Order>> {
-  declare id: CreationOptional<number>;
-  declare price: number;
-}
-```
-
-```javascript
-// Sequelize loose query types
-await getOrderModel().findAll({
-  where: { noneExistingField: 'noneExistingValue' }, //👍 TypeScript will warn here
-  attributes: ['none-existing-field', 'another-imaginary-column'], //😯 No errors here although these columns do not exist
-  include: 'no-such-table', //😯 No errors here although this table doesn't exist
-});
-await getCountryModel().findByPk('price'); //😯 No errors here although the price column is not a primary key
-```
-
-```javascript
-// TypeORM loose query
-const ordersOnSales: Order[] = await orderRepository.find({
-  where: { onSale: true }, //👍 TypeScript will warn here
-  select: ['id', 'price'],
-});
-console.log(ordersOnSales[0].userId); //😯 No errors here although the 'userId' column is not part of the returned object
-```
-
-Isn't it ironic that a library called **Type**ORM bases its queries on strings?
-
-
-**🤔 How Prisma is different:** It takes a totally different approach by generating per-project client code that is fully typed. This client embodies types for everything: every query, relation, sub-query - everything (except migrations). While other ORMs struggle to infer types from discrete models (including associations that are declared in other files), Prisma's offline code generation is easier: it can look through the entire DB relations, use custom generation code and build an almost perfect TypeScript experience. Why 'almost' perfect? For some reason, Prisma advocates using plain SQL for migrations, which might result in a discrepancy between the code models and the DB schema. Other than that, this is how Prisma's client brings end-to-end type safety:
-
-```javascript
-await prisma.order.findMany({
- where: {
- noneExistingField: 1, //👍 TypeScript error here
- },
- select: {
- noneExistingRelation: { //👍 TypeScript error here
- select: { id: true },
- },
- noneExistingField: true, //👍 TypeScript error here
- },
- });
-
- await prisma.order.findUnique({
- where: { price: 50 }, //👍 TypeScript error here
- });
-```
-
-**📊 How important:** TypeScript support across the board is valuable mostly for DX. Luckily, we have another safety net: the project's testing. Since tests are mandatory, having build-time type verification is important but not a life saver
-
-
-
-**🏆 Is Prisma doing better?:** Definitely
-
-## 2. Make you forget SQL
-
-**💁‍♂️ What is it about:** Many avoid ORMs, preferring to interact with the DB using lower-level techniques. One of their arguments is the efficiency of ORMs: since the generated queries are not immediately visible to the developers, wasteful queries might get executed unknowingly. While all ORMs provide syntactic sugar over SQL, there are subtle differences in the level of abstraction. The more the ORM syntax resembles SQL, the more likely the developers are to understand their own actions
-
-For example, TypeORM's query builder looks like SQL broken into convenient functions
-
-```javascript
-await createQueryBuilder('order')
- .leftJoinAndSelect(
- 'order.userId',
- 'order.productId',
- 'country.name',
- 'country.id'
- )
- .getMany();
-```
-
-A developer who reads this code 👆 is likely to infer that a *join* query between two tables will get executed
-
-
-**🤔 How Prisma is different:** Prisma's mission statement is to simplify DB work; the following statement is taken from their homepage:
-
-"We designed its API to be intuitive, both for SQL veterans and *developers brand new to databases*"
-
-Being ambitious to appeal also to database laymen, Prisma builds a syntax with a slightly higher level of abstraction, for example:
-
-```javascript
-await prisma.order.findMany({
- select: {
- userId: true,
- productId: true,
- country: {
- select: { name: true, id: true },
- },
- },
-});
-
-```
-
-No join is mentioned here although it fetches records from two related tables (order and country). Could you guess what SQL is being produced here? How many queries? One, right - a simple join? Surprise: actually, two queries are made. Prisma fires one query per table here, as the join logic happens on the ORM client side (not inside the DB). But why? In some cases, mostly where the cartesian join contains a lot of repetition, querying each side of the relation is more efficient. But in other cases, it's not. Prisma arbitrarily chose what they believe will perform better in *most* cases. I checked - in my case it's *slower* than doing a one-join query on the DB side. As a developer, I would miss this deficiency due to the high-level syntax (no join is mentioned). My point is, Prisma's sweet and simple syntax might be a blessing for developers who are brand new to databases and aim to achieve a working solution in a short time. For the longer term, having full awareness of the DB interactions is helpful - other ORMs encourage this awareness a little better
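-
-If you want to verify this on your own system, Prisma's log option prints every executed SQL statement - a quick way to regain the awareness that the high-level syntax hides (a minimal sketch):
-
-```javascript
-import { PrismaClient } from '.prisma/client';
-
-// Every SQL statement that Prisma executes will be printed to stdout
-const prisma = new PrismaClient({ log: ['query'] });
-// Running the findMany above now reveals the two separate SELECT statements
-```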
-
-**📊 How important:** Any ORM will hide SQL details from its users - without developer awareness, no ORM will save the day
-
-
-
-**🏆 Is Prisma doing better?:** Not necessarily
-
-## 3. Performance
-
-**💁‍♂️ What is it about:** Speak to an ORM antagonist and you'll hear a common, sensible argument: ORMs are much slower than a 'raw' approach. To an extent, this is a legit observation, as [most comparisons](https://welingtonfidelis.medium.com/pg-driver-vs-knex-js-vs-sequelize-vs-typeorm-f9ed53e9f802) will show non-negligible differences between raw/query-builder and ORM.
-
-
-*Example: a direct insert against the PG driver is much faster. [Source](https://welingtonfidelis.medium.com/pg-driver-vs-knex-js-vs-sequelize-vs-typeorm-f9ed53e9f802)*
-
-It should also be noted that these benchmarks don't tell the entire story - on top of raw queries, every solution must build a mapper layer that maps the raw data to JS objects, nests the results, casts types, and more. This work is included within every ORM but not shown in benchmarks for the raw option. In reality, every team that doesn't use an ORM would have to build its own small "ORM", including a mapper, which will also impact performance
-
-
-**🤔 How Prisma is different:** It was my hope to see some magic here - eating the ORM cake without counting the calories, seeing Prisma achieve an almost 'raw' query speed. I had some good and logical reasons for this hope: Prisma uses a DB client built with Rust. Theoretically, it could serialize and nest objects faster (in reality, this happens on the JS side). It was also built from the ground up and could build on the knowledge piled up in the ORM space over the years. Also, since it returns POJOs only (see the bullet 'No active records here!') - no time should be spent on decorating objects with ORM fields
-
-You already got it - this hope was not fulfilled. Going by every community benchmark ([one](https://dev.to/josethz00/benchmark-prisma-vs-typeorm-3873), [two](https://github.com/edgedb/imdbench), [three](https://deepkit.io/library)), Prisma at best is not faster than the average ORM. What is the reason? I can't tell exactly, but it might be due to the complicated system that must support Go, future languages, MongoDB and other non-relational DBs
-
-
-*Example: Prisma is not faster than others. It should be noted that in other benchmarks Prisma scores higher and shows an 'average' performance [Source](https://github.com/edgedb/imdbench)*
-
-**📊 How important:** ORM users are expected to live peacefully with inferior performance - for many systems it won't make a great deal of difference. With that said, 10%-30% performance differences between various ORMs are not a key factor
-
-
-
-**🏆 Is Prisma doing better?:** No
-
-## 4. No active records here!
-
-**💁‍♂️ What is it about:** Node in its early days was heavily inspired by Ruby (e.g., testing "describe"); many great patterns were embraced, but [Active Record](https://en.wikipedia.org/wiki/Active_record_pattern) is not among the successful ones. What is this pattern about, in a nutshell? Say you deal with Orders in your system; with Active Record, an Order object/class will hold both the entity properties, possibly also some logic functions, and also CRUD functions. Many find this pattern to be awful. Why? Ideally, when coding some logic/flow, one should not keep her mind busy with side effects and DB narratives. It also might be that accessing some property unconsciously invokes a heavy DB call (i.e., lazy loading). If that's not enough, in case of heavy logic, unit tests might be in order (i.e., read ['selective unit tests'](https://blog.stevensanderson.com/2009/11/04/selective-unit-testing-costs-and-benefits/)) - and it's going to be much harder to write unit tests against code that interacts with the DB. In fact, all of the respectable and popular architectures (e.g., DDD, clean, 3-tiers, etc.) advocate 'isolating the domain' - separating the core/logic of the system from the surrounding technologies. With all of that said, both TypeORM and Sequelize support the Active Record pattern, which is displayed in many examples within their documentation. Both also support other, better patterns like the data mapper (see below), but they still open the door to doubtful patterns
-
-
-```javascript
-// TypeORM active records 😟
-
-@Entity()
-class Order extends BaseEntity {
-  @PrimaryGeneratedColumn()
-  id: number
-
-  @Column()
-  price: number
-
-  @ManyToOne(() => Product, (product) => product.order)
-  products: Product[]
-
-  // Other columns here
-}
-
-function updateOrder(orderToUpdate: Order) {
-  if (orderToUpdate.price > 100) {
-    // some logic here
-    orderToUpdate.status = 'approval';
-    orderToUpdate.save(); // 😟 A DB call from within the business logic
-    orderToUpdate.products.forEach((product) => {
-      // 😟 Touching a relation might trigger a lazy-loading DB call
-    });
-    // 😟 The entity even drags DB concerns, like its connection, into the logic
-  }
-}
-```
-
-**🤔 How Prisma is different:** The better alternative is the data mapper pattern. It acts as a bridge, an adapter, between simple object notations (domain objects with properties) and the DB language, typically SQL. Call it with a plain JS object, a POJO, and get it saved in the DB. Simple. It won't add functions to the result objects or do anything beyond returning pure data - no surprising side effects. In its purest sense, this is a DB-related utility, completely detached from the business logic. While both Sequelize and TypeORM support this, Prisma offers *only* this style - no room for mistakes.
-
-
-```javascript
-// Prisma approach with a data mapper 👍
-
-// This was generated automatically by Prisma
-type Order = {
-  id: number
-
-  price: number
-
-  products: Product[]
-
-  // Other columns here
-}
-
-function updateOrder(orderToUpdate: Order) {
-  if (orderToUpdate.price > 100) {
-    orderToUpdate.status = 'approval';
-    prisma.order.update({ where: { id: orderToUpdate.id }, data: orderToUpdate });
-    // Side effect 👆, but an explicit one. The thoughtful coder will move this to another function. Since it's happening outside, mocking is possible 👍
-    orderToUpdate.products.forEach((product) => {
-      // No lazy loading, the data is already here 👍
-    });
-  }
-}
-```
-
-In [Practica.js](https://github.com/practicajs/practica) we take it one step further and put the Prisma models within the "DAL" layer, wrapped with the [repository pattern](https://learn.microsoft.com/en-us/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-design). You may glimpse [into the code here](https://github.com/practicajs/practica/blob/21ff12ba19cceed9a3735c09d48184b5beb5c410/src/code-templates/services/order-service/domain/new-order-use-case.ts#L21) - this is the business flow that calls the DAL layer
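-
-A minimal sketch of this wrapping might look like the following (hypothetical names) - the business layer only sees the repository's interface, never Prisma itself:
-
-```javascript
-// order-repository.js - the only file that knows about Prisma
-import { PrismaClient } from '.prisma/client';
-
-const prisma = new PrismaClient();
-
-export async function addOrder(orderToAdd) {
-  // The caller passes and receives plain JS objects only
-  return await prisma.order.create({ data: orderToAdd });
-}
-```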
-
-
-**📊 How important:** On the one hand, this is a key architectural principle to follow; on the other hand, most ORMs *allow* doing it right anyway
-
-
-
-**🏆 Is Prisma doing better?:** Yes!
-
-## 5. Documentation and developer-experience
-
-
-**💁‍♂️ What is it about:** TypeORM's and Sequelize's documentation is mediocre, though TypeORM's is a little better. Based on my personal experience, they do get a little better over the years, but still, by no means do they deserve to be called "good" or "great". For example, if you seek to learn about 'raw queries' - Sequelize offers [a very short page](https://sequelize.org/docs/v6/core-concepts/raw-queries/) on this matter; TypeORM's info is spread across multiple other pages. Looking to learn about pagination? I couldn't find Sequelize documents; TypeORM has [some short explanation](https://typeorm.io/select-query-builder#using-pagination), 150 words only
-
-
-**🤔 How Prisma is different:** Prisma's documentation rocks! See their documents on similar topics: [raw queries](https://www.prisma.io/docs/concepts/components/prisma-client/raw-database-access) and [pagination](https://www.prisma.io/docs/concepts/components/prisma-client/pagination) - thousands of words and dozens of code examples. The writing itself is also great; it feels like some professional writers were involved
-
-
-
-The chart above shows how comprehensive Prisma's docs are (obviously, this by itself doesn't prove quality)
-
-**📊 How important:** Great docs are a key to awareness and avoiding pitfalls
-
-
-
-
-**🏆 Is Prisma doing better?:** You bet
-
-## 6. Observability, metrics, and tracing
-
-**💁‍♂️ What is it about:** Good chances are (say about 99.9%) that you'll find yourself diagnosing slow queries in production or other DB-related quirks. What can you expect from traditional ORMs in terms of observability? Mostly logging. [Sequelize provides both logging](https://sequelize.org/api/v7/interfaces/queryoptions#benchmark) of query duration and programmatic access to the connection pool state ({size, available, using, waiting}). [TypeORM provides only logging](https://orkhan.gitbook.io/typeorm/docs/logging) of queries that surpass a pre-defined duration threshold. This is better than nothing, but assuming you don't read production logs 24/7, you'd probably need more than logging - an alert to fire when things seem faulty. To achieve this, it's your responsibility to bridge this info into your preferred monitoring system. Another logging downside here is verbosity - we need to emit tons of information to the logs when all we really care about is the average duration. Metrics can serve this purpose much better, as we're about to see soon with Prisma
-
-What if you need to dig into which specific part of the query is slow? Unfortunately, there is no breakdown of the query phases' durations - it's left to you as a black box
-
-```javascript
-// Sequelize - logging each query's duration (a minimal sketch; 'logger' is assumed)
-const { Sequelize } = require('sequelize');
-
-const sequelize = new Sequelize(connectionString, {
-  benchmark: true, // Pass the elapsed time to the logging function
-  logging: (sql, durationMs) => logger.info({ sql, durationMs }),
-});
-```
-
-
-Logging each query in order to realize trends and anomalies in the monitoring system
-
-
-**🤔 How Prisma is different:** Since Prisma also targets enterprises, it must bring strong ops capabilities. Beautifully, it packs support for both [metrics](https://www.prisma.io/docs/concepts/components/prisma-client/metrics) and [OpenTelemetry tracing](https://www.prisma.io/docs/concepts/components/prisma-client/opentelemetry-tracing)! For metrics, it generates custom JSON with metric keys and values, so anyone can adapt this to any monitoring system (e.g., CloudWatch, StatsD, etc.). On top of this, it produces out-of-the-box metrics in [Prometheus](https://prometheus.io/) format (one of the most popular monitoring platforms). For example, the metric 'prisma_client_queries_duration_histogram_ms' provides the average query duration in the system over time. What is even more impressive is the support for OpenTelemetry tracing - it feeds your OpenTelemetry collector with spans that describe the various phases of every query. For example, it might help you realize where the bottleneck in the query pipeline is: is it the DB connection, the query itself or the serialization?
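-
-To make this concrete, here is a minimal sketch of pulling these metrics programmatically (this assumes the 'metrics' preview feature is enabled in the Prisma schema):
-
-```javascript
-import { PrismaClient } from '.prisma/client';
-
-const prisma = new PrismaClient();
-
-// JSON format - adapt the keys and values to any monitoring system
-const metricsAsJson = await prisma.$metrics.json();
-
-// Prometheus format - typically exposed on a /metrics endpoint for scraping
-const metricsForPrometheus = await prisma.$metrics.prometheus();
-```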
-
-
-Prisma visualizes the various query phases' durations with OpenTelemetry
-
-**🏆 Is Prisma doing better?:** Definitely
-
-
-**📊 How important:** It goes without saying how impactful observability is; however, filling this gap in other ORMs will demand no more than a few days of work
-
-
-
-## 7. Continuity - will it be here with us in 2024/2025
-
-**💁‍♂️ What is it about:** We live quite peacefully with the risk that one of our dependencies will disappear. With ORMs though, this risk demands special attention because our buy-in is higher (i.e., harder to replace) and maintaining one has proven to be harder. Just look at a handful of once-successful ORMs: objection.js, waterline, bookshelf - all of these respectable projects had 0 commits in the past month. The single maintainer of objection.js [announced that he won't work on the project anymore](https://github.com/Vincit/objection.js/issues/2335). This high churn rate is not surprising given the huge amount of moving parts to maintain, the gazillion corner cases and the modest 'budget' OSS projects live on. Looking at OpenCollective shows that [Sequelize](https://opencollective.com/sequelize#category-BUDGET) and [TypeORM](https://opencollective.com/typeorm) are funded with ~$1,500/month on average. This is barely enough to cover a daily Starbucks cappuccino and croissant ($6.95 x 365) for 5 maintainers. Nothing contrasts this model more than a startup company that just raised its series B - Prisma is [funded with $40,000,000 (40 million)](https://www.prisma.io/blog/series-b-announcement-v8t12ksi6x#:~:text=We%20are%20excited%20to%20announce,teams%20%26%20organizations%20in%20this%20article.) and has recruited 80 people! Shouldn't this inspire us with high confidence about their continuity? Surprisingly, I'll suggest that quite the opposite is true
-
-See, an OSS ORM has to get over one huge hump, but a startup company must pass through TWO. The OSS project will struggle to achieve the critical mass of features, including some high technical barriers (e.g., TypeScript support, ESM). This typically lasts years, but once it's done - the project can focus mostly on maintenance and step out of the danger zone. The good news for TypeORM and Sequelize is that they already did! Both struggled to keep their heads above the water - there were rumors in the past that [TypeORM is not maintained anymore](https://github.com/typeorm/typeorm/issues/3267) - but they managed to get over this hump. I counted: both projects had approximately ~2000 PRs in the past 3 years! Going by [repo-tracker](https://repo-tracker.com/r/gh/sequelize/sequelize), each sees multiple commits every week. They both have vibrant traction and the majority of features you would expect from an ORM. TypeORM even supports beyond-the-basics features like multiple data sources and caching. It's unlikely that now, once they have reached the promised land, they will fade away. It might happen - there are no guarantees in the OSS galaxy - but the risk is low
-
-
-
-
-**🤔 How Prisma is different:** Prisma lags a little behind in terms of features, but with a budget of $40M there are good reasons to believe that they will pass the first hump, achieving a critical mass of features. I'm more concerned with the second hump - showing revenues in 2 years or saying goodbye. As a company that is backed by venture capital, the model is clear and cruel: in order to secure their next round, series B or C (depending on whether the seed is counted), there must be a viable and proven business model. How do you 'sell' an ORM? Prisma experiments with multiple products; none is mature yet or being paid for. How big is this risk? According to [these startup success statistics](https://spdload.com/blog/startup-success-rate/), "About 65% of the Series A startups get series B, while 35% of the companies that get series A fail." Since Prisma has already gained a lot of love and adoption from the community, their success chances are higher than the average round A/B company, but even 20% or 10% chances of fading away are concerning
-
-> This is terrifying news - companies happily choose a young commercial OSS product without realizing that there is a 10-30% chance that this product will disappear
-
-
-
-
-Some startup companies that seek a viable business model do not shut their doors but rather change the product, the license or the free features. This is not my subjective business analysis; here are a few examples: [MongoDB changed their license](https://techcrunch.com/2018/10/16/mongodb-switches-up-its-open-source-license/), which is why the majority had to host their MongoDB with a single vendor. [Redis did something similar](https://techcrunch.com/2019/02/21/redis-labs-changes-its-open-source-license-again/). What are the chances of Prisma pivoting to another type of product? It actually already happened before: Prisma 1 was mostly about a GraphQL client and server, and [it's now retired](https://github.com/prisma/prisma1)
-
-It's only fair to mention the other potential path - most round B companies do succeed in qualifying for the next round, and when this happens, even bigger money will be involved in building the 'Ferrari' of JavaScript ORMs. I'm surely crossing my fingers for these great people; at the same time, we have to be conscious about our choices
-
-**📊 How important:** As important as having to re-code the entire DB layer of a big system
-
-
-
-
-**🏆 Is Prisma doing better?:** Quite the opposite
-
-## Closing - what should you use now?
-
-Before proposing my key takeaway - which is the primary ORM - let's repeat the key learnings that were introduced here:
-
-1. 🥇 Prisma deserves a medal for its awesome DX, documentation, observability support and end-to-end TypeScript coverage
-2. 🤔 There are reasons to be concerned about Prisma's business continuity as a young startup without a viable business model. Also, Prisma's abstract client syntax might blind developers a little more than other ORMs do
-3. 🎩 The contenders, TypeORM and Sequelize, have matured and are doing quite well: both have merged thousands of PRs in the past 3 years to become more stable, they keep introducing new releases (see [repo-tracker](https://repo-tracker.com/r/gh/sequelize/sequelize)), and for now hold more features than Prisma. Also, both show solid performance (for an ORM). Hats off to the maintainers!
-
-Based on these observations, which should you pick? Which ORM will we use for [practica.js](https://github.com/practicajs/practica)?
-
-Prisma is an excellent addition to the Node.js ORM family, but not the hassle-free, one-tool-to-rule-them-all option. It's a mixed bag of many delicious candies and a few gotchas. Would it grow to tick all the boxes? Maybe, but it's unlikely. Once built, it's too hard to dramatically change the syntax and engine performance. Then, during the writing and while speaking with the community, including some Prisma enthusiasts, I realized that it doesn't aim to be the can-do-everything 'Ferrari'. Its positioning seems to resemble more a convenient family car with a solid engine and awesome user experience. In other words, it probably aims for the enterprise space where there is mostly demand for great DX, OK performance, and business-class support
-
-At the end of this journey I see no dominant, flawless 'Ferrari' ORM. I should probably change my perspective: building an ORM for the hectic modern JavaScript ecosystem is 10x harder than building a Java ORM back in 2001. There is no stain on the shirt - it's cool JavaScript swag. I learned to accept what we have: a rich set of features, tolerable performance, good enough for many systems. Need more? Don't use an ORM. Nothing is going to change dramatically; it's now as good as it can be
-
-### When will it shine?
-
-**Surely use Prisma under these scenarios -** If your data needs are rather simple; when time-to-market concerns take precedence over data processing accuracy; when the DB is relatively small; if you're a mobile/frontend developer taking her first steps in the backend world; when there is a need for business-class support; AND when Prisma's long-term business continuity risk is a non-issue for you
-
-**I'd probably prefer other options under these conditions -** If the DB layer performance is a major concern; if you're a savvy backend developer with solid SQL capabilities; when there is a need for fine-grained control over the data layer. For all of these cases, Prisma might still work, but my primary choices would be using knex/TypeORM/Sequelize with a data-mapper style
-
-Consequently, we love Prisma and added it behind a flag (--orm=prisma) to Practica.js. At the same time, until some clouds disappear, Sequelize will remain our default ORM
-
-## Some of my other articles
-
-- [Book: Node.js testing best practices](https://github.com/testjavascript/nodejs-integration-tests-best-practices)
-- [Book: JavaScript testing best practices](https://github.com/goldbergyoni/javascript-testing-best-practices)
-- [Popular Node.js patterns and tools to re-consider](https://practica.dev/blog/popular-nodejs-pattern-and-tools-to-reconsider)
-- [Practica.js - A Node.js starter](https://github.com/practicajs/practica)
-- [Node.js best practices](https://github.com/goldbergyoni/nodebestpractices)
diff --git a/docs/blog/is-prisma-better/mapper.png b/docs/blog/is-prisma-better/mapper.png
deleted file mode 100644
index ee09372f..00000000
Binary files a/docs/blog/is-prisma-better/mapper.png and /dev/null differ
diff --git a/docs/blog/is-prisma-better/medium-importance-slider.png b/docs/blog/is-prisma-better/medium-importance-slider.png
deleted file mode 100644
index b23f3661..00000000
Binary files a/docs/blog/is-prisma-better/medium-importance-slider.png and /dev/null differ
diff --git a/docs/blog/is-prisma-better/medium-importance1.png b/docs/blog/is-prisma-better/medium-importance1.png
deleted file mode 100644
index 1f52e6c4..00000000
Binary files a/docs/blog/is-prisma-better/medium-importance1.png and /dev/null differ
diff --git a/docs/blog/is-prisma-better/medium1-importance-slider.png b/docs/blog/is-prisma-better/medium1-importance-slider.png
deleted file mode 100644
index b23f3661..00000000
Binary files a/docs/blog/is-prisma-better/medium1-importance-slider.png and /dev/null differ
diff --git a/docs/blog/is-prisma-better/medium2-importance-slider.png b/docs/blog/is-prisma-better/medium2-importance-slider.png
deleted file mode 100644
index 7fdc291a..00000000
Binary files a/docs/blog/is-prisma-better/medium2-importance-slider.png and /dev/null differ
diff --git a/docs/blog/is-prisma-better/practica-banner.png b/docs/blog/is-prisma-better/practica-banner.png
deleted file mode 100644
index 5311a59e..00000000
Binary files a/docs/blog/is-prisma-better/practica-banner.png and /dev/null differ
diff --git a/docs/blog/is-prisma-better/trace-diagram.png b/docs/blog/is-prisma-better/trace-diagram.png
deleted file mode 100644
index 8a8f0cdc..00000000
Binary files a/docs/blog/is-prisma-better/trace-diagram.png and /dev/null differ
diff --git a/docs/blog/is-prisma-better/typeorm-is-dead.png b/docs/blog/is-prisma-better/typeorm-is-dead.png
deleted file mode 100644
index 1f6504e7..00000000
Binary files a/docs/blog/is-prisma-better/typeorm-is-dead.png and /dev/null differ
diff --git a/docs/blog/pattern-to-reconsider/dragonfly.jpeg b/docs/blog/pattern-to-reconsider/dragonfly.jpeg
deleted file mode 100644
index 86392b95..00000000
Binary files a/docs/blog/pattern-to-reconsider/dragonfly.jpeg and /dev/null differ
diff --git a/docs/blog/pattern-to-reconsider/index.md b/docs/blog/pattern-to-reconsider/index.md
deleted file mode 100644
index 66c65a0d..00000000
--- a/docs/blog/pattern-to-reconsider/index.md
+++ /dev/null
@@ -1,469 +0,0 @@
----
-slug: popular-nodejs-pattern-and-tools-to-reconsider
-date: 2022-08-02T10:00
-hide_table_of_contents: true
-title: Popular Node.js patterns and tools to re-consider
-authors: [goldbergyoni]
-tags:
- [
- node.js,
- express,
- nestjs,
- fastify,
- passport,
- dotenv,
- supertest,
- practica,
- testing,
- ]
----
-
-# Popular Node.js tools and patterns to re-consider
-
-Node.js is maturing. Many patterns and frameworks were embraced - it's my belief that developers' productivity has dramatically increased in the past years. One downside of maturity is habits: we now reuse existing techniques more often. How is this a problem?
-
-In his famous book 'Atomic Habits', the author James Clear states that:
-
-> "Mastery is created by habits. However, sometimes when we're on auto-pilot performing habits, we tend to slip up... Just because we are gaining experience through performing the habits does not mean that we are improving. We actually go backwards on the improvement scale with most habits that turn into auto-pilot". In other words, practice makes perfect, and bad practices make things worse
-
-We copy-paste, mentally and physically, things that we are used to, but these things are not necessarily right anymore. Like animals who shed their shells or skin to adapt to a new reality, so should the Node.js community constantly gauge its existing patterns, discuss and change
-
-Luckily, unlike other languages that are more committed to specific design paradigms (Java, Ruby) - Node is a house of many ideas. In this community, I feel safe to question some of our good-old tooling and patterns. The list below contains my personal beliefs, which are brought with reasoning and examples.
-
-Are those disruptive thoughts surely correct? I'm not sure. There is one thing I'm sure about though - for Node.js to live long, we need to encourage critique, focus our loyalty on innovation, and keep the discussion going. The outcome of this discussion is not "don't use this tool!" but rather becoming familiar with other techniques that, _under some circumstances_, might be a better fit
-
-
-
-_The True Crab's exoskeleton is hard and inflexible, he must shed his restrictive exoskeleton to grow and reveal the new roomier shell_
-
-
-## TOC - Patterns to reconsider
-
-1. Dotenv
-2. Calling a service from a controller
-3. Nest.js dependency injection for all classes
-4. Passport.js
-5. Supertest
-6. Fastify utility decoration
-7. Logging from a catch clause
-8. Morgan logger
-9. NODE_ENV
-
-
-## 1. Dotenv as your configuration source
-
-**💁‍♂️ What is it about:** A super popular technique in which the app's configurable values (e.g., DB user name) are stored in a simple text file. Then, when the app loads, the dotenv library sets all the text file values as environment variables so the code can read them
-
-```javascript
-// .env file
-USER_SERVICE_URL=https://users.myorg.com
-
-//start.js
-require('dotenv').config();
-
-//blog-post-service.js
-repository.savePost(post);
-//update the user number of posts, read the users service URL from an environment variable
-await axios.put(`${process.env.USER_SERVICE_URL}/api/user/${post.userId}/incrementPosts`)
-
-```
-
-**📊 How popular:** 21,806,137 downloads/week!
-
-**🤔 Why it might be wrong:** Dotenv is so easy and intuitive to start with that one might easily overlook its fundamental gaps: for example, it's hard to infer the configuration schema and realize the meaning of each key and its typing. Consequently, there is no built-in way to fail fast when a mandatory key is missing - a flow might fail after starting and having caused some side effects (e.g., DB records were already mutated before the failure). In the example above, the blog post will be saved to the DB, and only then will the code realize that a mandatory key is missing - this leaves the app hanging in an invalid state. On top of this, in the presence of many keys, it's impossible to organize them hierarchically. As if this isn't enough, it encourages developers to commit this .env file, which might contain production values - this happens because there is no clear way to define development defaults. Teams usually work around this by committing a .env.example file and then asking whoever pulls the code to rename this file manually. If they remember to, of course
-
-**☀️ Better alternative:** Some configuration libraries provide an out-of-the-box solution to all of these needs. They encourage a clear schema and the possibility to validate early and fail if needed. See a [comparison of options here](https://practica.dev/decisions/configuration-library). One of the better alternatives is ['convict'](https://github.com/mozilla/node-convict); below is the same example, this time with convict - hopefully it's better now:
-
-```javascript
-// config.js
-export default {
- userService: {
- url: {
- // Hierarchical, documented and strongly typed 👇
- doc: "The URL of the user management service including a trailing slash",
- format: "url",
- default: "http://localhost:4001",
- nullable: false,
- env: "USER_SERVICE_URL",
- },
- },
- //more keys here
-};
-
-//start.js
-import convict from "convict";
-import configSchema from "./config";
-const config = convict(configSchema);
-// Fail fast!
-config.validate();
-
-//blog-post.js
-repository.savePost(post);
-// Will never arrive here if the URL is not set
-await axios.put(
-  `${config.get('userService.url')}/api/user/${post.userId}/incrementPosts`
-);
-```
-
-## 2. Calling a 'fat' service from the API controller
-
-**💁‍♂️ What is it about:** Consider a reader of our code who wishes to understand the entire _high-level_ flow or delve into a very _specific_ part. She first lands on the API controller, where requests start. Unlike what its name implies, this controller layer is just an adapter and is kept really thin and straightforward. Great thus far. Then the controller calls a big 'service' with thousands of lines of code that represents the entire logic
-
-```javascript
-// user-controller
-router.post('/', async (req, res, next) => {
-  await userService.add(req.body);
-  // Might have here try-catch or error response logic
-});
-
-// user-service
-export function add(newUser) {
-  // Want to understand quickly? You'll need to read the entire user service, 1500 lines of code
-  // It uses technical language and reuses narratives of other flows
-  this.copyMoreFieldsToUser(newUser);
-  const doesExist = this.updateIfAlreadyExists(newUser);
-  if (!doesExist) {
-    addToCache(newUser);
-  }
-  // 20 more lines that demand navigating to other functions in order to get the intent
-}
-```
-
-**📊 How popular:** It's hard to pull solid numbers here; I can confidently say that in _most_ of the apps that I see, this is the case
-
-**🤔 Why it might be wrong:** We're here to tame complexity. One useful technique is deferring complexity to the latest stage possible. In this case though, the reader of the code (hopefully) starts her journey through the tests and the controller - things are simple in these areas. Then, as she lands on the big service, she gets tons of complexity and small details, although she is focused on understanding the overall flow or some specific logic. This is **unnecessary** complexity
-
-**☀️ Better alternative:** The controller should call a particular type of service, a **use-case**, which is responsible for _summarizing_ the flow in business-oriented and simple language. Each flow/feature is described by a use-case, each containing 4-10 lines of code that tell the story without technical details. It mostly orchestrates other small services, clients, and repositories that hold all the implementation details. With use-cases, the reader can grasp the high-level flow easily. She can now **choose** where she would like to focus. She is now exposed only to **necessary** complexity. This technique also encourages partitioning the code into the smaller objects that the use-case orchestrates. Bonus: by looking at coverage reports, one can tell which features are covered, not just files/functions
-
-This idea, by the way, is formalized in the ['clean architecture' book](https://www.bookdepository.com/Clean-Architecture-Robert-Martin/9780134494166?redirected=true&utm_medium=Google&utm_campaign=Base1&utm_source=IL&utm_content=Clean-Architecture&selectCurrency=ILS&w=AFF9AU99ZB4MTDA8VTRQ&gclid=Cj0KCQjw3eeXBhD7ARIsAHjssr92kqLn60dnfQCLjbkaqttdgvhRV5dqKtnY680GCNDvKp-16HtZp24aAg6GEALw_wcB) - I'm not a big fan of 'fancy' architectures, but see - it's worth cherry-picking techniques from every source. You may walk through our [Node.js best practices starter, practica.js](https://github.com/practicajs/practica), and examine the use-cases code
-
-```javascript
-// add-order-use-case.js
-export async function addOrder(newOrder: addOrderDTO) {
- orderValidation.assertOrderIsValid(newOrder);
- const userWhoOrdered = await userServiceClient.getUserWhoOrdered(
- newOrder.userId
- );
- paymentTermsService.assertPaymentTerms(
- newOrder.paymentTermsInDays,
- userWhoOrdered.terms
- );
-
- const response = await orderRepository.addOrder(newOrder);
-
- return response;
-}
-```
-
-## 3. Nest.js: Wire _everything_ with dependency injection
-
-**💁♂️ What is it about:** If you're doing Nest.js, besides having a powerful framework in your hands, you probably use DI for _everything_ and make every class injectable. Say you have a weather-service that depends upon humidity-service, and **there is no requirement to swap** the humidity-service with alternative providers. Nevertheless, you inject humidity-service into the weather-service. It becomes part of your development style, "why not" you think - I may need to stub it during testing or replace it in the future
-
-```typescript
-// humidity-service.ts - not customer facing
-@Injectable()
-export class GoogleHumidityService {
-  async getHumidity(when: Date): Promise<number> {
-    // Fetches from some specific cloud service
-  }
-}
-
-// weather-service.ts - customer facing
-import { GoogleHumidityService } from './humidity-service';
-
-export type WeatherInfo = {
-  temperature: number;
-  humidity: number;
-};
-
-@Injectable()
-export class WeatherService {
-  constructor(private humidityService: GoogleHumidityService) {}
-
-  async getWeather(when: Date): Promise<WeatherInfo> {
-    // Fetch temperature from somewhere and then humidity from GoogleHumidityService
-  }
-}
-
-// app.module.ts
-@Module({
-  providers: [GoogleHumidityService, WeatherService],
-})
-export class AppModule {}
-```
-
-**📊 How popular:** No numbers here, but I can confidently say that in _all_ of the Nest.js apps that I've seen, this is the case. In the popular ['nestjs-realworld-example-app'](https://github.com/lujakob/nestjs-realworld-example-app), all the services are 'injectable'
-
-**🤔 Why it might be wrong:** Dependency injection is not a free coding style but a pattern you should pull in at the right moment, like any other pattern. Why? Because any pattern has a price. What price, you ask? First, encapsulation is violated. Clients of the weather-service are now aware that other providers are being used _internally_. Some clients may get tempted to override providers although it's not under their responsibility. Second, it's another layer of complexity to learn and maintain, and one more way to shoot yourself in the foot. StackOverflow owes some of its revenues to Nest.js DI - plenty of discussions try to solve this puzzle (e.g., did you know that in the case of circular dependencies the order of imports matters?). Third, there is the performance thing - Nest.js, for example, struggled to provide a decent start time for serverless environments and had to introduce [lazy loaded modules](https://docs.nestjs.com/fundamentals/lazy-loading-modules). Don't get me wrong, **in some cases** there is a good case for DI: when a need arises to decouple a dependency from its caller, or to allow clients to inject custom implementations (e.g., the strategy pattern). **In such a case**, when there is value, you may consider whether the _value of DI is worth its price_. If you don't have this case, why pay for nothing?
-
-I recommend reading the first paragraphs of the blog post ['Dependency Injection is EVIL'](https://www.tonymarston.net/php-mysql/dependency-injection-is-evil.html) (though I absolutely don't agree with its bold words)
-
-**☀️ Better alternative:** 'Lean-ify' your engineering approach - avoid using any tool unless it serves a real-world need immediately. Start simple: a dependent class should simply import its dependency and use it - yeah, using the plain Node.js module system ('require'). Facing a situation where objects must be instantiated dynamically? There are a handful of simple patterns, simpler than DI, that you should consider first, like 'if/else', a factory function, and more. Need a singleton? Consider techniques with lower costs, like the module system with a factory function. Need to stub/mock for testing? Monkey patching might be better than DI: better to clutter your test code a bit than to clutter your production code. Have a strong need to hide from an object where its dependencies are coming from? You sure? Use DI!
-
-```typescript
-// humidity-service.ts - not customer facing
-export async function getHumidity(when: Date): Promise<number> {
-  // Fetches from some specific cloud service
-}
-
-// weather-service.ts - customer facing
-import { getHumidity } from "./humidity-service";
-
-// ✅ No wiring is happening externally, all is flat and explicit. Simple
-export async function getWeather(when: Date): Promise<WeatherInfo> {
-  // Fetch temperature from somewhere and then humidity from the humidity service
-  // Nobody needs to know about it, it's an implementation detail
-  await getHumidity(when);
-}
-```
-
-___
-
-## 1 min pause: A word or two about me, the author
-
-My name is Yoni Goldberg, I'm a Node.js developer and consultant. I wrote a few code-books like [JavaScript testing best practices](https://github.com/goldbergyoni/javascript-testing-best-practices) and [Node.js best practices](https://github.com/goldbergyoni/nodebestpractices) (100,000 stars ✨🥹). That said, my best guide is [Node.js testing practices](https://github.com/testjavascript/nodejs-integration-tests-best-practices), which only a few have read 😞. I shall release [an advanced Node.js testing course soon](https://testjavascript.com/) and also hold workshops for teams. I'm also a core maintainer of [Practica.js](https://github.com/practicajs/practica), a Node.js starter that creates a production-ready example Node Monorepo solution based on standards and simplicity. It might be your primary option when starting a new Node.js solution
-
-___
-
-## 4. Passport.js for token authentication
-
-**💁♂️ What is it about:** Commonly, you need to issue and/or authenticate JWT tokens. Similarly, you might need to allow login from _one_ single social network like Google/Facebook. When faced with these kinds of needs, Node.js developers rush to the glorious library [Passport.js](https://www.passportjs.org/) like butterflies to light
-
-**📊 How popular:** 1,389,720 weekly downloads
-
-**🤔 Why it might be wrong:** When tasked with guarding your routes with JWT tokens - you're just a few lines of code shy of ticking the goal. Instead of messing with a new framework, instead of introducing levels of indirection (you call Passport, then it calls you), instead of spending time learning new abstractions - use a JWT library directly. Libraries like [jsonwebtoken](https://github.com/auth0/node-jsonwebtoken) or [fast-jwt](https://github.com/nearform/fast-jwt) are simple and well maintained. Have concerns about the security hardening? Good point, your concerns are valid. But wouldn't you get better hardening with a direct understanding of your configuration and flow? Will hiding things behind a framework help? Even if you prefer the hardening of a battle-tested framework, Passport doesn't handle a handful of security risks like secrets/token management, secured user management, DB protection, and more. My point: you probably need a fully-featured user and authentication management platform anyway. Various cloud services and OSS projects can tick all of those security concerns. Why then start in the first place with a framework that doesn't satisfy your security needs? It seems like many who opt for Passport.js are not fully aware of which needs are satisfied and which are left open. All of that said, Passport definitely shines when looking for a quick way to support _many_ social login providers
-
-**☀️ Better alternative:** Is token authentication in order? These few lines of code below might be all you need. You may also glimpse into [Practica.js' wrapper around these libraries](https://github.com/practicajs/practica/tree/main/src/code-templates/libraries/jwt-token-verifier). A real-world project at scale typically needs more: supporting async JWT [(JWKS)](https://auth0.com/docs/secure/tokens/json-web-tokens/json-web-key-sets) and securely managing and rotating the secrets, to name a few examples. In that case, OSS solutions like [Keycloak](https://github.com/keycloak/keycloak) or commercial options like [Auth0](https://github.com/auth0) are alternatives to consider
-
-```javascript
-// jwt-middleware.js, a simplified version - Refer to Practica.js to see some more corner cases
-const jwt = require('jsonwebtoken');
-
-const middleware = (req, res, next) => {
-  if (!req.headers.authorization) {
-    return res.sendStatus(401);
-  }
-
-  jwt.verify(req.headers.authorization, options.secret, (err, jwtContent) => {
-    if (err) {
-      return res.sendStatus(401);
-    }
-
-    req.user = jwtContent.data;
-    next();
-  });
-};
-```
-
-## 5. Supertest for integration/API testing
-
-**💁♂️ What is it about:** When testing against an API (i.e., component, integration, E2E tests), the library [supertest](https://www.npmjs.com/package/supertest) provides a sweet syntax that can detect the web server address, make HTTP calls, and also assert on the response. Three in one
-
-```javascript
-test("When adding invalid user, then the response is 400", (done) => {
- const request = require("supertest");
- const app = express();
- // Arrange
- const userToAdd = {
- name: undefined,
- };
-
- // Act
- request(app)
- .post("/user")
- .send(userToAdd)
- .expect("Content-Type", /json/)
- .expect(400, done);
-
- // Assert
- // We already asserted above ☝🏻 as part of the request
-});
-```
-
-**📊 How popular:** 2,717,744 weekly downloads
-
-**🤔 Why it might be wrong:** You already have your assertion library (Jest? Chai?), it has great error highlighting and comparison - you trust it. Why code some tests using another assertion syntax? Not to mention, Supertest's assertion errors are not as descriptive as Jest's and Chai's. It's also cumbersome to mix an HTTP client with an assertion library instead of choosing the best tool for each mission. Speaking of the best, there are more standard, popular, and better-maintained HTTP clients (like fetch, axios, and other friends). Need another reason? Supertest might encourage coupling the tests to Express as it offers a constructor that gets an Express object. This constructor infers the API address automatically (useful when using dynamic test ports), but it couples the test to the implementation and won't work when you wish to run the same tests against a remote process (when the API doesn't live with the tests). My repository ['Node.js testing best practices'](https://github.com/testjavascript/nodejs-integration-tests-best-practices) holds examples of how tests can infer the API port and address, as sketched below
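-
-For example, here is a minimal sketch (names are illustrative) of how a test setup can start the API on an ephemeral port and share the address with the tests:
-
-```javascript
-// global-setup.js - 'app' is your Express/Fastify instance
-const server = app.listen(0); // Port 0 = the OS picks any available port
-const apiPort = server.address().port; // Share with the tests, e.g., via an env var
-```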
-
-**☀️ Better alternative:** A popular and standard HTTP client library like Node.js fetch or Axios. In [Practica.js](https://github.com/practicajs/practica) (a Node.js starter that packs many best practices) we use Axios. It allows us to configure an HTTP client that is shared among all the tests: we bake in a JWT token, headers, and a base URL. Another good pattern that we look at is making each Microservice generate an HTTP client library for its consumers. This brings a strongly-typed experience to the clients, synchronizes the provider-consumer versions, and as a bonus - the provider can test itself with the same library that its consumers are using
-
-```javascript
-test("When adding invalid user, then the response is 400 and includes a reason", (done) => {
- const app = express();
- // Arrange
- const userToAdd = {
- name: undefined,
- };
-
- // Act
- const receivedResponse = axios.post(
- `http://localhost:${apiPort}/user`,
- userToAdd
- );
-
- // Assert
- // ✅ Assertion happens in a dedicated stage and a dedicated library
- expect(receivedResponse).toMatchObject({
- status: 400,
- data: {
- reason: "no-name",
- },
- });
-});
-```
-
-## 6. Fastify decorate for non request/web utilities
-
-**💁♂️ What is it about:** [Fastify](https://github.com/fastify/fastify) introduces great patterns. Personally, I highly appreciate how it preserves the simplicity of Express while bringing more batteries. One thing that got me wondering is the 'decorate' feature which allows placing common utilities/services inside a widely accessible container object. I'm referring here specifically to the case where a cross-cutting concern utility/service is being used. Here is an example:
-
-```javascript
-// An example of a utility that is a cross-cutting concern. Could be a logger or anything else
-fastify.decorate('metricsService', {
-  fireMetric: async ({ name }) => {
-    // My code that sends metrics to the monitoring system
-  },
-});
-
-fastify.get('/api/orders', async function (request, reply) {
-  this.metricsService.fireMetric({ name: 'new-request' });
-  // Handle the request
-});
-
-// my-business-logic.js
-export function calculateSomething() {
-  // How to fire a metric? The Fastify instance is not accessible here 🤔
-}
-```
-
-It should be noted that 'decoration' is also used to place values (e.g., user) inside a request - this is a slightly different case and a sensible one
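-
-For completeness, here is a minimal sketch of that sensible request-level decoration (the 'authenticate' function is a hypothetical stand-in):
-
-```javascript
-// ✅ A sensible use of decoration: request-scoped values
-fastify.decorateRequest('user', null);
-
-fastify.addHook('preHandler', async (request, reply) => {
-  request.user = await authenticate(request.headers.authorization);
-});
-```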
-
-**📊 How popular:** Fastify has 696,122 weekly downloads and is growing rapidly. The decorator concept is part of the framework's core
-
-**🤔 Why it might be wrong:** Some services and utilities serve cross-cutting-concern needs and should be accessible from other layers like the domain (i.e., business logic, DAL). When utilities are placed inside the Fastify container, the Fastify object might not be accessible to these layers. You probably don't want to couple your web framework with your business logic: consider that some of your business logic and repositories might get invoked from non-REST clients like CRON jobs, MQ, and similar - in these cases, Fastify won't get involved at all, so better not to trust it to be your service locator
-
-**☀️ Better alternative:** A good old Node.js module is a standard way to expose and consume functionality. Need a singleton? Use the module system caching. Need to instantiate a service in correlation with a Fastify life-cycle hook (e.g., DB connection on start)? Call it from that Fastify hook. In the rare case where a highly dynamic and complex instantiation of dependencies is needed - DI is also a (complex) option to consider
-
-```javascript
-// ✅ A simple usage of good old Node.js modules
-// metrics-service.js
-export async function fireMetric({ name }) {
-  // My code that sends metrics to the monitoring system
-}
-
-// order-api.js
-import * as metricsService from './metrics-service.js';
-
-fastify.get('/api/orders', async function (request, reply) {
-  metricsService.fireMetric({ name: 'new-request' });
-});
-
-// my-business-logic.js
-import * as metricsService from './metrics-service.js';
-
-export function calculateSomething() {
-  metricsService.fireMetric({ name: 'new-request' });
-}
-```
-
-## 7. Logging from a catch clause
-
-**💁♂️ What is it about:** You catch an error somewhere deep in the code (not on the route level), then call logger.error to make this error observable. Seems simple and necessary
-
-```javascript
-try {
-  await axios.post('https://thatService.io/api/users');
-} catch (error) {
-  logger.error(error, this, { operation: 'addNewOrder' });
-}
-```
-
-**📊 How popular:** Hard to put my hands on numbers, but it's quite popular, right?
-
-**🤔 Why it might be wrong:** First, errors should get handled/logged in a central location. Error handling is a critical path. Various catch clauses are likely to behave differently without a centralized and unified behavior. For example, a requirement might arise to tag all errors with certain metadata, or, on top of logging, to also fire a monitoring metric. Applying these requirements in ~100 locations is not a walk in the park. Second, catch clauses should be minimized to particular scenarios. By default, the natural flow of an error is bubbling down to the route/entry-point - from there, it will get forwarded to the error handler. Catch clauses are more verbose and error-prone - therefore they should serve only two very specific needs: when one wishes to change the flow based on the error, or to enrich the error with more information (which is not the case in this example)
-
-**☀️ Better alternative:** By default, let the error bubble down the layers and get caught by the entry-point global catch (e.g., Express error middleware). In cases when the error should trigger a different flow (e.g., retry) or there is value in enriching the error with more context - use a catch clause. In this case, ensure the .catch code also reports to the error handler
-
-```javascript
-// A case where we wish to retry upon failure
-try {
-  await axios.post('https://thatService.io/api/users');
-} catch (error) {
-  // ✅ A central location that handles errors
-  errorHandler.handle(error, this, { operation: 'addNewOrder' });
-  callTheUserService(numOfRetries + 1);
-}
-```
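-
-And the entry-point global catch that all other errors bubble into might look like this minimal sketch (the errorHandler object is an assumed custom module):
-
-```javascript
-// app.js - ✅ One central Express error middleware, the default destination of errors
-app.use((error, req, res, next) => {
-  errorHandler.handle(error); // Tag, log, fire metrics - all in one place
-  res.status(500).end();
-});
-```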
-
-## 8. Use Morgan logger for express web requests
-
-**💁♂️ What is it about:** In many web apps, you are likely to find a pattern that has been copy-pasted for ages - using the Morgan logger to log request information:
-
-```javascript
-const express = require("express");
-const morgan = require("morgan");
-
-const app = express();
-
-app.use(morgan("combined"));
-```
-
-**📊 How popular:** 2,901,574 downloads/week
-
-**🤔 Why it might be wrong:** Wait a second, you already have your main logger, right? Is it Pino? Winston? Something else? Great. Why deal with and configure yet another logger? I do appreciate the HTTP domain-specific language (DSL) of Morgan. The syntax is sweet! But does it justify having two loggers?
-
-**☀️ Better alternative:** Put your chosen logger in a middleware and log the desired request/response properties:
-
-```javascript
-// ✅ Use your preferred logger for all the tasks
-const logger = require("pino")();
-app.use((req, res, next) => {
- res.on("finish", () => {
- logger.info(`${req.url} ${res.statusCode}`); // Add other properties here
- });
- next();
-});
-```
-
-## 9. Having conditional code based on `NODE_ENV` value
-
-**💁♂️ What is it about:** To differentiate between development and production configuration, it's common to set the environment variable NODE_ENV to "production", "test", etc. Doing so allows the various tooling to act differently. For example, some templating engines will cache compiled templates only in production. Beyond tooling, custom applications use this to specify behaviours that are unique to the development or production environment:
-
-```javascript
-if (process.env.NODE_ENV === "production") {
- // This is unlikely to be tested since test runner usually set NODE_ENV=test
- setLogger({ stdout: true, prettyPrint: false });
- // If this code branch above exists, why not add more production-only configurations:
- collectMetrics();
-} else {
- setLogger({ splunk: true, prettyPrint: true });
-}
-```
-
-**📊 How popular:** 5,034,323 code results in GitHub when searching for "NODE_ENV". It doesn't seem like a rare pattern
-
-**🤔 Why it might be wrong:** Anytime your code checks whether it's production or not, this branch won't get hit by default in some test runners (e.g., Jest sets `NODE_ENV=test`). In _any_ test runner, the developer must remember to test each possible value of this environment variable. In the example above, `collectMetrics()` will be tested for the first time in production. Sad smiley. Additionally, these conditions open the door to adding more differences between production and the developer machine - when this variable and its conditions exist, a developer gets tempted to put some logic for production only. Theoretically, this can be tested: one can set `NODE_ENV = "production"` in testing and cover the production branches (if she remembers to...). But then, if you can test with `NODE_ENV='production'`, what's the point in separating? Just consider everything to be 'production' and avoid this error-prone mental load
-
-**☀️ Better alternative:** Any code that was written by us must be tested. This implies avoiding any form of if(production)/else(development) conditions. Wouldn't the developer's machine anyway have different surrounding infrastructure than production (e.g., logging system)? It would - the environments are quite different, but we feel comfortable with it. These infrastructural things are battle-tested, external, and not part of our code. To keep the same code between dev/prod and still use different infrastructure, we put different values in the configuration (not in the code). For example, a typical logger emits JSON in production but on a development machine it emits 'pretty-print' colorful lines. To meet this, we set an env var that tells which logging style we aim for:
-
-```javascript
-//package.json
-"scripts": {
-  "start": "LOG_PRETTY_PRINT=false node index.js",
-  "test": "LOG_PRETTY_PRINT=true jest"
-}
-
-//index.js
-//✅ No condition, same code for all the environments. The variations are defined externally in config or deployment files
-setLogger({ prettyPrint: process.env.LOG_PRETTY_PRINT === "true" })
-```
-
-## Closing
-
-I hope that these thoughts, at least one of them, made you re-consider adding a new technique to your toolbox. In any case, let's keep our community vibrant, disruptive and kind. Respectful discussions are almost as important as the event loop. Almost.
-
-## Some of my other articles
-
-- [Book: Node.js testing best practices](https://github.com/testjavascript/nodejs-integration-tests-best-practices)
-- [Book: JavaScript testing best practices](https://github.com/goldbergyoni/javascript-testing-best-practices)
-- [How to be a better Node.js developer in 2020](https://yonigoldberg.medium.com/20-ways-to-become-a-better-node-js-developer-in-2020-d6bd73fcf424). The 2023 version is coming soon
-- [Practica.js - A Node.js starter](https://github.com/practicajs/practica)
-- [Node.js best practices](https://github.com/goldbergyoni/nodebestpractices)
diff --git a/docs/blog/pattern-to-reconsider/practica-banner.png b/docs/blog/pattern-to-reconsider/practica-banner.png
deleted file mode 100644
index 5311a59e..00000000
Binary files a/docs/blog/pattern-to-reconsider/practica-banner.png and /dev/null differ
diff --git a/docs/blog/practica-is-alive/index.md b/docs/blog/practica-is-alive/index.md
deleted file mode 100644
index 6bffb403..00000000
--- a/docs/blog/practica-is-alive/index.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-slug: practica-is-alive
-date: 2022-07-15T10:00
-hide_table_of_contents: true
-title: Practica.js v0.0.1 is alive
-authors: [goldbergyoni]
-tags:
- [
- node.js,
- express,
- fastify
- ]
-
----
-
-# Practica.js v0.0.1 is alive
-
-🥳 We're thrilled to launch the very first version of Practica.js.
-
-## What is Practica in one paragraph
-
-Although Node.js has great frameworks 💚, they were never meant to be production ready immediately. Practica.js aims to bridge the gap. Based on your preferred framework, we generate some example code that demonstrates a full workflow, from API to DB, that is packed with good practices. For example, we include a hardened dockerfile, N-Tier folder structure, great testing templates, and more. This saves a great deal of time and can prevent painful mistakes. All decisions made are [neatly and thoughtfully documented](./decisions/index). We strive to keep things as simple and standard as possible and base our work off the popular guide: [Node.js Best Practices](https://github.com/goldbergyoni/nodebestpractices).
-
-Your developer experience would look as follows: Generate our starter using the CLI and get an example Node.js solution. This solution is a typical Monorepo setup with an example Microservice and libraries. All is based on super-popular libraries that we merely stitch together. It also includes tons of optimizations - linters, libraries, Monorepo configuration, tests and much more. Inside the example Microservice you'll find an example flow, from API to DB. Based on this, you can modify the entity and DB fields and build your app.
-
-## 90-second video
-
-
-
-## How to get started
-
-To get up to speed quickly, read our [getting started guide](https://practica.dev/the-basics/getting-started-quickly).
\ No newline at end of file
diff --git a/docs/blog/use-case/index.md b/docs/blog/use-case/index.md
deleted file mode 100644
index 8ba62c56..00000000
--- a/docs/blog/use-case/index.md
+++ /dev/null
@@ -1,339 +0,0 @@
----
-slug: about-the-sweet-and-powerful-use-case-code-pattern
-date: 2025-03-05T10:00
-hide_table_of_contents: true
-title: About the sweet and powerful 'use case' code pattern
-authors: [goldbergyoni]
-tags:
- [
- node.js,
- use-case,
- clean-architecture,
- javascript,
- tdd,
- workflow,
-    domain
- ]
----
-
-## Intro: A sweet pattern that got lost in time
-
-When was the last time you introduced a new pattern to your code? The use-case pattern is a great candidate: it's powerful, sweet, easy to implement, and can strategically elevate your backend code quality in a short time.
-
-The term 'use case' means many different things in our industry. It's used by product folks to describe a user journey and mentioned by various famous architecture books to describe vague high-level concepts. This article focuses on its practical application at the *code level*, emphasizing its surprising merits and how to implement it correctly.
-
-Technically, the use-case pattern code belongs between the controller (e.g., API routes) and the business logic services (like those calculating or saving data). The use-case code is called by the controller and describes, in high-level words and in a simple manner, the flow that is about to happen. Doing so increases code readability and navigability, pushes complexity toward the edges, improves observability, and brings three other merits that are shown below with examples.
-
-But before we delve into its mechanics, let's first touch on a common problem it aims to address and see some code that calls for trouble.
-
-_Prefer a 10 min video? Watch here, or keep reading below_
-
-
-
-## The problem: too many details, too soon
-
-Imagine a developer, returning to a codebase she hasn't touched in months, tasked with fixing a bug in the 'new orders flow'—specifically, an issue with price calculation in an electronic shop app.
-
-Her journey begins promisingly smooth:
-
-**- 🤗 Testing -** She starts her journey from the automated tests to learn about the flow with an outside-in approach. The testing code is short and standard, as it should be:
-
-```javascript
-test("When adding an order with 100$ product, then the price charge should be 100$ ", async () => {
- // ....
-})
-```
-
-**- 🤗 Controller -** She moves to skim through the implementation and starts from the API routes. Unsurprisingly, the Controller code is straightforward:
-
-```javascript
-app.post("/api/order", async (req: Request, res: Response) => {
- const newOrder = req.body;
- await orderService.addOrder(newOrder); // 👈 This is where the real-work is done
- res.status(200).json({ message: "Order created successfully" });
-});
-```
-
-Smooth sailing thus far, almost zero complexity. Typically, the controller would now hand off to a service where the real implementation begins, so she navigates into the order service to find where and how to fix that pricing bug.
-
-**- 😲 The service -** Suddenly! She is thrown into hundreds of lines of code (at best) with tons of details. She encounters classes with intricate state, inheritance hierarchies, a dependency injection framework that wires all the dependent services, and other boilerplate code. Here is a sneak peek from a real-world service, already simplified for brevity. Read it, feel it:
-
-```javascript
-let DBRepository;
-
-export class OrderService extends ServiceBase {
-  async addOrder(orderRequest: OrderRequest) {
- try {
- ensureDBRepositoryInitialized();
- const { openTelemetry, monitoring, secretManager, priceService, userService } =
- dependencyInjection.getVariousServices();
- logger.info("Add order flow starts now", orderRequest);
- openTelemetry.sendEvent("new order", orderRequest);
-
- const validationRules = await getFromConfigSystem("order-validation-rules");
- const validatedOrder = validateOrder(orderRequest, validationRules);
- if (!validatedOrder) {
- throw new Error("Invalid order");
- }
- this.base.startTransaction();
- const user = await userService.getUserInfo(validatedOrder.customerId);
- if (!user) {
- const savedOrder = await tryAddUserWithLegacySystem(validatedOrder);
- return savedOrder;
- }
- // And it goes on and on until the pricing module is mentioned
-}
-```
-
-So many details and things to learn upfront; which of them are crucial for her to learn now, before dealing with her task? How can she find where that pricing module is?
-
-She is not happy. Right off the bat, she must acquaint herself with a handful of product and technical narratives. She just fell off the complexity cliff: from a zero-complexity controller straight into a 1000-piece puzzle, many of whose pieces are unrelated to her task.
-
-## The use-case pattern
-
-In a perfect world, she would love first to get a high-level brief of the involved steps so she can understand the whole flow, and from this comfort standpoint choose where to deepen her journey. This is what this pattern is all about.
-
-The use-case is a file with a single function that is called by the API controller to orchestrate the various implementation services. It's merely a simple function that enumerates and calls the code that does the actual job:
-
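-For illustration, here is a minimal sketch (all step names are illustrative):
-
-```javascript
-// add-order-use-case.js - the story of the flow, told in a few flat steps
-export async function addOrderUseCase(orderRequest) {
-  const validatedOrder = validateAndCoerceOrder(orderRequest);
-  const pricedOrder = calculateOrderPricing(validatedOrder);
-  const savedOrder = await insertOrder(pricedOrder);
-  await sendSuccessEmailToCustomer(savedOrder);
-}
-```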
-
-
-Each interaction with the system—whether it's posting a new comment, requesting user deletion, or any other action—is managed by a dedicated use-case function. Each use-case constitutes multiple 'steps' - function calls that fulfill the desired flow.
-
-By design, it's short, flat, with no if/else, no try-catch, no algorithms - just plain calls to functions. This way, it tells the story in the simplest manner. Note how it doesn't share too many details, but tells enough for one to understand 'WHAT' is happening and 'WHO' is doing it, though not 'HOW'.
-
-But why is this minimalistic approach so crucial?
-
-## The merits
-
-### 1. A navigation index
-
-When seeking a specific book in the local library, the visitor doesn't have to skim through all the shelves to find a specific topic of interest. A Library, like any other information system, uses a navigational system, wayfinding signage, to highlight the path to a specific information area.
-
-
-
-*The library catalog redirects the reader to the area of interest*
-
-Similarly, in software development, when a developer needs to address a particular issue—such as fixing a bug in pricing calculations—the 'use case' acts like a navigational tool within the application. It serves as a hitchhiker's guide, or the yellow pages, pinpointing exactly where to find the necessary piece of code. While other organizational strategies like modularization and folder structures offer ways to manage code, the 'use case' approach provides a more focused and precise index. It shows only the relevant areas (and not 50 unrelated modules), it tells *when precisely* a module is used, what the *specific* entry point is, and which *exact* parameters are passed.
-
-
-### 2. Deferred and spread complexity
-
-When the code reader's journey starts at the level of implementation-services, she is immediately bombarded with intricate details. This immersion exposes her to both product and technical complexities right from the start. Typically, like in our example case, the code first uses a dependency injection system to instantiate some classes, checks for nulls in the state, and gets some values from the distributed config system - all before even starting on the primary task. This is called *accidental complexity*. Tackling complexity is one of the finest arts of app design; as the code planner, you can't just eliminate complexity, but you may at least reduce the chances of someone meeting it.
-
-Imagine your application as a tree where branches represent functions and the fruits are pockets of embedded complexity, some of which are poisoned (i.e., unnecessary complexities). Your objective is to structure this tree so that navigating through it exposes the visitor to as few poisoned fruits as possible:
-
-
-*The accidental-complexity tree: A visitor aiming to reach a specific leaf must navigate through all the intervening poisoned fruits.*
-
-This is where the 'Use Case' approach shines: by prioritizing high-level product steps and minimal technical details at the outset, it acts as a navigation system that simplifies access to various parts of the application. With this navigation tool, she can easily ignore steps that are unrelated to her work and avoid the poisoned fruits. A true strategic design win.
-
-
-*The spread-complexity tree: Complexity is pushed to the periphery, allowing the reader to navigate directly to the essential fruits only.*
-
-
-### 3. A practical workflow that promotes efficiency
-
-When embarking on a new coding flow, where do you start? After digesting the requirements and setting up some initial API routes and high-level component tests, the next logical step might be less obvious. Here's a strategy: begin with a use-case. This approach promotes an outside-in workflow that not only streamlines development but also exposes potential risks early on.
-
-While drafting a new use-case, you essentially map out the various steps of the process. Each step is a call to some service or repository function, sometimes before it even exists. Effortlessly and spontaneously, these steps become your TODO list, a live document that tells not only what should be implemented but also where risky gotchas hide. Take, for instance, this straightforward use-case for adding an order:
-
-
-```javascript
-export async function addOrderUseCase(orderRequest: OrderRequest) {
-  const validatedOrder = validateAndCoerceOrder(orderRequest);
-  const orderWithPricing = calculateOrderPricing(validatedOrder);
-  const purchasingCustomer = await assertCustomerExists(orderWithPricing.customerId);
-  const savedOrder = await insertOrder(orderWithPricing);
-  await sendSuccessEmailToCustomer(savedOrder, purchasingCustomer.email);
-}
-```
-
-This structured approach allows you to preemptively tackle potential implementation hurdles:
-
-**- sendSuccessEmailToCustomer -** What if you lack a necessary email service token from the Ops team? Sometimes this demands approval and might take more than a week (believe me, I know). Acting *now*, before spending 3 days on coding, can make a big difference.
-
-**- calculateOrderPricing -** Reminds you to confirm pricing details with the product team—ideally before they're out of office, avoiding delays that could impact your delivery timeline.
-
-**- assertCustomerExists -** This call goes to an external Microservice which belongs to the User Management team. Did they already provide an OpenAPI specification of their routes? Check your Slack now; if they didn't yet, asking now rather than too late can prevent this from becoming a roadblock.
-
-Not only does this high-level thinking highlight your tasks and risks, it's also an optimal spot to start the design from:
-
-### 4. The optimal design viewpoint
-
-Early on, when initiating a use-case, the developers define the various types, function signatures, and their initial skeleton return data. This process naturally evolves into an effective design drill where the overall flow is decomposed into small units that actually fit together. This sketching results in discovering early when puzzle pieces don't fit, while considering the underlying technologies. Here is an example: once I sketched a use-case and initially came up with these steps:
-
-```javascript
-await sendSuccessEmailToCustomer(savedOrder, purchasingCustomer.email, orderId);
-const savedOrder = await insertOrder(orderWithPricing);
-```
-
-Going with my initial use-case above, an email is sent before the order is saved. Soon enough the compiler yelled at me: the email function signature is not satisfied - an 'Order Id' parameter is needed, but to obtain one the order must first be saved to the DB. I tried to swap the steps; unfortunately, it turned out that my ORM does not return the ID of saved entities. I'm stuck, my design struggles - at least this was realized before spending days on details. Unlike designing with papers and UML, designing with a use-case brings no overhead. Moreover, unlike high-level diagrams detached from implementation realities, use-case design is grounded in the actual constraints of the technology being used.
-
-
-### 5. Better coverage reports
-
-Say you have 82.35% test coverage - are you happy and confident to deploy? I'd suggest that anyone below 100% must first clarify which code *exactly* is not covered by tests. Is it some nitty-gritty niche code, or actually critical business operations that are not fully tested? Typically, answering this requires scrutinizing the coverage of all the app's files, a daunting task.
-
-Use-cases simplify the coverage digest: when looking directly into the use-cases folder, one gets *'features coverage'*, a unique look into which user features and steps lack testing:
-
-
-*The use-cases folder test coverage report, some use-cases are only partially tested*
-
-See how the code above has excellent overall coverage, 82.35%. But what about the remaining 17.65%? Looking at the report triggers a red flag, an unusual finding: the 'payment-use-case' is not tested. This flow is where revenues are generated, a critical financial process which, as it turns out, has very low test coverage. This significant observation calls for immediate action. Use-case coverage thus not only helps in understanding which parts of your application are tested but also prioritizes testing efforts based on business criticality rather than mere technical functionality.
-
-### 6. Practical domain-driven code
-
-The influential book "Domain-Driven Design" advocates for "committing the team to relentlessly exercise the domain language in all communications within the team and in the code." This principle asserts that aligning code closely with product narratives fosters a common language among diverse stakeholders (e.g., product, team-leads, frontend, backend). While this sounds sensible, this advice is also a little vague - how and where should this happen?
-
-Use-cases bring this idea down to earth: the use-case files are named after user journeys in the system (e.g., purchase-new-goods), and the use-case code itself naturally describes the flow in product language. For instance, if employees commonly use the term 'cut' at the water cooler to refer to a price reduction, the corresponding use-case should employ a function named 'calculatePriceCut', as sketched below. This naming convention not only reinforces the domain language but also enhances mutual understanding across the team.
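-
-A tiny, hypothetical sketch of this naming alignment (both names are illustrative):
-
-```javascript
-// purchase-new-goods-use-case.js
-// ✅ The team says 'cut' at the water cooler, so the code says 'cut' too
-const orderWithFinalPrice = calculatePriceCut(validatedOrder);
-```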
-
-### 7. Consistent observability
-
-I bet you've encountered the situation when you turn the log level to 'Debug' (or any other verbose mode) and get a gazillion, overwhelming, unbearable amount of log statements. Great chances you've also met the opposite: setting the logger level to 'Info' while there is almost zero logging for the specific route that you're looking into. It's hard to formalize among team members when exactly each type of logging should be invoked; the result is typically inconsistent and lacking observability.
-
-Use-cases can drive trustworthy and consistent monitoring by taking advantage of the already-produced use-case steps. Since the precious work of breaking down the flow into meaningful steps was already done (e.g., send-email, charge-credit-card), each step can produce the desired level of logging. For example, one team's approach might be to emit logger.info on a use-case start and end, while each step emits logger.debug. Whatever the chosen level is, use-case steps bring consistency and automation. Logging aside, the same can be applied with any other observability technique, like OpenTelemetry, to produce custom spans for every flow step.
-
-The implementation, though, demands some thinking: cluttering every step with a log statement is both verbose and dependent on manual human work:
-
-```javascript
-// ❗️Verbose use case
-export async function addOrderUseCase(orderRequest: OrderRequest) {
- logger.info("Add order use case - Adding order starts now", orderRequest);
- const validatedOrder = validateAndCoerceOrder(orderRequest);
- logger.debug("Add order use case - The order was validated", validatedOrder);
- const orderWithPricing = calculateOrderPricing(validatedOrder);
- logger.debug("Add order use case - The order pricing was decided", validatedOrder);
- const purchasingCustomer = await assertCustomerHasEnoughBalance(orderWithPricing);
- logger.debug("Add order use case - Verified the user balance already", purchasingCustomer);
- const returnOrder = mapFromRepositoryToDto(purchasingCustomer as unknown as OrderRecord);
- logger.info("Add order use case - About to return result", returnOrder);
- return returnOrder;
-}
-```
-
-One way around this is creating a step wrapper function that makes it observable. This wrapper function will get called for each step:
-
-```javascript
-import { openTelemetry } from "@opentelemetry";
-async function runUseCaseStep(stepName, stepFunction) {
- logger.debug(`Use case step ${stepName} starts now`);
- // Create Open Telemetry custom span
- openTelemetry.startSpan(stepName);
- return await stepFunction();
-}
-```
-
-Now the use-case gets automated and consistent transparency:
-
-```javascript
-export async function addOrderUseCase(orderRequest: OrderRequest) {
- // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
- const validatedOrder = await runUseCaseStep("Validation", validateAndCoerceOrder.bind(null, orderRequest));
- const orderWithPricing = await runUseCaseStep("Calculate price", calculateOrderPricing.bind(null, validatedOrder));
- await runUseCaseStep("Send email", sendSuccessEmailToCustomer.bind(null, orderWithPricing));
-}
-```
-
-The code is a little simplified; in a real-world wrapper you'll have to put a try-catch and cover other corner cases, but it makes the point: each step is a meaningful milestone in the user's journey that gets *automated and consistent* observability.
-
-## Implementation best practices
-
-### 1. Dead-simple 'no code'
-
-Since use-cases are mostly about zero complexity, use no code constructs, only flat calls to functions. No if/else, no switch, no try/catch, nothing - just a simple list of steps. A while ago I decided to put *only one* if/else in a use-case:
-
-```javascript
-export async function addOrderUseCase(orderRequest: OrderRequest) {
- const validatedOrder = validateAndCoerceOrder(orderRequest);
- const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
- if (purchasingCustomer.isPremium) {//❗️
- sendEmailToPremiumCustomer(purchasingCustomer);
- // This easily will grow with time to multiple if/else
- }
-}
-```
-
-A month later, when I visited the code above, there were already three nested if/elses. A year from now, the function above will host typical imperative code with many nested branches. Avoid this slippery road by drawing a very strict border: put the conditions within the step functions:
-
-
-```javascript
-export async function addOrderUseCase(orderRequest: OrderRequest) {
- const validatedOrder = validateAndCoerceOrder(orderRequest);
- const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
- await sendEmailIfPremiumCustomer(purchasingCustomer); //🙂
-}
-```
-
-### 2. Find the right level of specificity
-
-The finest art of a great use-case is finding the right level of detail. At this early stage, the reader is like a traveler who uses a map to get some sense of the area or find a specific road - definitely not to learn about every road in the country. On the other hand, a good map doesn't show only the main highway and nothing else. For example, the following use-case is too short and vague:
-
-```javascript
-export async function addOrderUseCase(orderRequest: OrderRequest) {
- const validatedOrder = validateAndCoerceOrder(orderRequest);
- const finalOrderToSave = await applyAllBusinessLogic(validatedOrder);//🤔
- await insertOrder(finalOrderToSave);
-}
-```
-
-The code above doesn't tell a story, nor does it eliminate paths from the journey. Conversely, the following code does a better job of telling the story briefly:
-
-```javascript
-export async function addOrderUseCase(orderRequest: OrderRequest) {
- const validatedOrder = validateAndCoerceOrder(orderRequest);
- const pricedOrder = await calculatePrice(validatedOrder);
-  const purchasingCustomer = await assertCustomerHasEnoughBalance(pricedOrder);
- const orderWithShippingInstructions = await addShippingInfo(pricedOrder, purchasingCustomer);
- await insertOrder(orderWithShippingInstructions);
-}
-```
-
-Things get a little more challenging when dealing with long flows. What if there are many important steps, say 20? What if multiple use-cases have a lot of repetition and shared steps? Consider the case where 'admin approval' is a multi-step process that is invoked by a handful of different use-cases. When facing this, consider breaking down into multiple use-cases where one is allowed to call the other, as sketched below.
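-
-A hypothetical sketch of one use-case calling another (all names are illustrative):
-
-```javascript
-// approve-large-order-use-case.js
-export async function approveLargeOrderUseCase(orderRequest) {
-  const validatedOrder = validateAndCoerceOrder(orderRequest);
-  // ✅ The shared multi-step 'admin approval' flow lives in its own use-case
-  await adminApprovalUseCase(validatedOrder);
-  await insertOrder(validatedOrder);
-}
-```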
-
-### 3. When there is no choice, control the DB transaction from the use-case
-
-What if steps 2 and 5 both deal with data and must be atomic (fail or succeed together)? Typically you'd handle this with DB transactions, but since each step is discrete, how can a transaction be shared among the coupled steps?
-
-If the steps take place one after the other, it makes sense to let the downstream service/repository handle them together and abstract the transaction away from the use-case. What if the atomic steps are not consecutive? In this case, though not ideal, there is no escape from making the use-case acquainted with a transaction object:
-
-```javascript
-export async function addOrderUseCase(orderRequest: OrderRequest) {
- // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
-  const transaction = await Repository.startTransaction();
-  const purchasingCustomer = await assertCustomerHasEnoughBalance(orderRequest, transaction);
-  const orderWithPricing = calculateOrderPricing(orderRequest, purchasingCustomer);
-  const savedOrder = await insertOrder(orderWithPricing, transaction);
-  const returnOrder = mapFromRepositoryToDto(savedOrder);
-  await Repository.commitTransaction(transaction);
- return returnOrder;
-}
-```
-
-### 4. Aggregate small use-cases in a single file
-
-A use-case file is created per user flow that is triggered from an API route. This model makes sense for significant flows, but how about small operations like getting an order by id? A 'get-order-by-id' use-case is likely to have one line of code, so it seems like unnecessary overhead to create a use-case file for every small request. In this case, consider aggregating multiple operations under a single conceptual use-case file. In the example below, all the order queries co-live under the query-orders use-case file:
-
-```javascript
-// query-orders-use-cases.ts
-export async function getOrder(id) {
- // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
- const result = await orderRepository.getOrderByID(id);
- return result;
-}
-
-export async function getAllOrders(criteria) {
- // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
- const result = await orderRepository.queryOrders(criteria);
- return result;
-}
-```
-
-## Closing: Easy to start, use everywhere
-
-If you find it valuable, you'll also get a great return for your modest investment: no fancy tooling is needed, and the learning time is close to zero (in fact, you just read one of the longest articles on this matter...). There is also no need to refactor a whole system; rather, implement it gradually, per feature.
-
-Once you become accustomed to using it, you'll find that this technique extends well beyond API routes. It's equally beneficial for managing message queue subscriptions and scheduled jobs. Backend aside, use it as the facade of every module or library - the code that is called by the entry file and orchestrates the internals. The same idea can be applied in the frontend as well: declare the core actors at the component's top level. Without implementation details, just put references to the component's event handlers and hooks - now the reader knows about the key events that will drive this component.
-
-You might think this all sounds remarkably straightforward—and it is. My apologies, this article wasn't about cutting-edge technologies; neither did it cover shiny new dev tooling or AI-based rocket science. In a land where complexity is the key enemy, simple ideas can be more impactful than sophisticated tooling, and the use-case is a powerful and sweet pattern that is meant to live in every piece of software.
diff --git a/docs/blog/v0.6-is-alive/index.md b/docs/blog/v0.6-is-alive/index.md
deleted file mode 100644
index 40835e04..00000000
--- a/docs/blog/v0.6-is-alive/index.md
+++ /dev/null
@@ -1,42 +0,0 @@
----
-slug: practica-v0.0.6-is-alive
-date: 2022-12-10T10:00
-hide_table_of_contents: true
-title: Practica v0.0.6 is alive
-authors: [goldbergyoni, razluvaton, danielgluskin, michaelsalomon]
-tags:
- [
- node.js,
- express,
- practica,
- prisma,
- ]
----
-
-## Where is our focus now?
-
-We are working on two parallel paths: enriching the supported best practices to make the code more production-ready, and enhancing the existing code based on community feedback
-
-## What's new?
-
-### Request-level store
-
-Every request now has its own store of variables; you may assign information at the request level so that every piece of code invoked from this specific request has access to these variables - for example, for storing the user's permissions. One special variable that is stored is 'request-id', a unique UUID per request (also called correlation-id). The logger will automatically emit this with every log entry. We use the built-in [AsyncLocalStorage](https://nodejs.org/api/async_context.html) for this task
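-
-A minimal sketch of the underlying Node.js mechanism (simplified, not Practica's actual code):
-
-```javascript
-const { AsyncLocalStorage } = require('async_hooks');
-const crypto = require('crypto');
-const requestContext = new AsyncLocalStorage();
-
-app.use((req, res, next) => {
-  // ✅ Every code path invoked from this request reads the same store
-  requestContext.run({ requestId: crypto.randomUUID() }, next);
-});
-
-// Anywhere down the call chain, including the logger:
-const { requestId } = requestContext.getStore();
-```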
-
-### Hardened Dockerfile
-
-Although a Dockerfile may contain just 10 lines, it's easy and common to include 20 mistakes in this short artifact. For example, npmrc secrets are commonly leaked, a vulnerable base image is used, and other typical mistakes are made. Our Dockerfile follows the best practices from [this article](https://snyk.io/blog/10-best-practices-to-containerize-nodejs-web-applications-with-docker/) and already applies 90% of the guidelines
-
-### Additional ORM option: Prisma
-
-Prisma is an emerging ORM with great type-safety support and awesome DX. We will keep Sequelize as our default ORM, while Prisma will be an optional choice using the flag: --orm=prisma
-
-Why did we add it to our tools basket, and why is Sequelize still the default? We summarized all of our thoughts and data in this [blog post](https://practica.dev/blog/is-prisma-better-than-your-traditional-orm/)
-
-### Many small enhancements
-
-More than 10 PRs were merged with CLI experience improvements, bug fixes, code pattern enhancements, and more
-
-## Where do I start?
-
-Definitely follow the [getting started guide](https://practica.dev/the-basics/getting-started-quickly) first, and then read the guide [coding with practica](https://practica.dev/the-basics/coding-with-practica) to realize its full power and genuine value. We would be thankful to receive your feedback
\ No newline at end of file
diff --git a/docs/blog/which-monorepo/index.md b/docs/blog/which-monorepo/index.md
deleted file mode 100644
index c9282a93..00000000
--- a/docs/blog/which-monorepo/index.md
+++ /dev/null
@@ -1,152 +0,0 @@
----
-slug: monorepo-backend
-date: 2022-11-07T11:00
-title: Which Monorepo is right for a Node.js BACKEND now?
-authors: [goldbergyoni, michaelsalomon]
-tags: [monorepo, decisions]
----
-
-# Which Monorepo is right for a Node.js BACKEND now?
-
-As a Node.js starter, choosing the right libraries and frameworks for our users is the bread and butter of our work in [Practica.js](https://github.com/practicajs/practica). In this post, we'd like to share our considerations in choosing our monorepo tooling
-
-
-
-## What are we looking at
-
-
-The Monorepo market is hot like fire. Weirdly, now, when the demand for Monorepos is exploding, one of the leading libraries — [Lerna — has just retired.](https://github.com/lerna/lerna/issues/2703) When looking closely, it might not be just a coincidence — with so many disruptive and shiny features brought by new vendors, Lerna failed to keep up with the pace and stay relevant. This bloom of new tooling gets many confused — what is the right choice for my next project? What should I look at when choosing a Monorepo tool? This post is all about curating this information overload, covering the new tooling, emphasizing what is important, and finally sharing some recommendations. If you are here for tools and features, you're in the right place, although you might find yourself on a soul-searching journey to what is your desired development workflow.
-
-This post is concerned with backend-only Node.js. It is also scoped to _typical_ business solutions. If you're a Google/FB developer who is faced with 8,000 packages — sorry, you need special gear. Consequently, monster Monorepo tooling like [Bazel](https://github.com/thundergolfer/example-bazel-monorepo) is left out. We will cover here some of the most popular Monorepo tools, including Turborepo, Nx, PNPM, Yarn/npm workspaces, and Lerna (although it's not actually maintained anymore — it's a good baseline for comparison).
-
-Let’s start? When human beings use the term Monorepo, they typically refer to one or more of the following _4 layers below._ Each one of them can bring value to your project, each has different consequences, tooling, and features:
-
-
-
-
-# Layer 1: Plain old folders to stay on top of your code
-
-With zero tooling and only by having all the Microservices and libraries together in the same root folder, a developer gets great management perks and tons of value: navigation, search across components, deleting a library instantly, debugging, and _quickly_ adding new components. Consider the alternative with a multi-repo approach — adding a new component for modularity demands opening and configuring a new GitHub repository. This is not just a hassle but also raises the chances of developers choosing the short path and including the new code in some semi-relevant existing package. In plain words, zero-tooling Monorepos can increase modularity.
-
-This layer is often overlooked. If your codebase is not huge and the components are highly decoupled (more on this later) — it might be all you need. We’ve seen a handful of successful Monorepo solutions without any special tooling.
-
-With that said, some of the newer tools augment this experience with interesting features:
-
-- Both [Turborepo](https://turborepo.org/) and [Nx](https://nx.dev/structure/dependency-graph) and also [Lerna](https://www.npmjs.com/package/lerna-dependency-graph) provide a visual representation of the packages’ dependencies
-- [Nx allows ‘visibility rules’](https://nx.dev/structure/monorepo-tags) which are about enforcing who can use what. Consider a ‘checkout’ library that should be approached only by the ‘order Microservice’ — deviating from this will result in failure during development (not runtime enforcement)
-
-
-
-Nx dependencies graph
-
-- [Nx workspace generator](https://nx.dev/generators/workspace-generators) allows scaffolding out components. Whenever a team member needs to craft a new controller/library/class/Microservice, she just invokes a CLI command which produces code based on a community or organization template. This enforces consistency and the sharing of best practices
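-
-For illustration, a sketch of how such scaffolding is typically invoked (the `@nrwl/node` plugin name matches the Nx versions of that time; the custom generator name `controller` is hypothetical):
-
-```bash
-# scaffold a new library from a built-in template
-nx generate @nrwl/node:library my-new-lib
-
-# run an organization-specific workspace generator (Nx <= v15 syntax)
-nx workspace-generator controller
-```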
-
-# Layer 2: Tasks and pipeline to build your code efficiently
-
-Even in a world of autonomous components, there are management tasks that must be applied in a batch: applying a security patch via npm update, running the tests of _multiple_ components that were affected by a change, or publishing 3 related libraries, to name a few examples. All Monorepo tools support this basic functionality of invoking some command over a group of packages — Lerna, Nx, and Turborepo all do. A rough illustration follows below.
-
-
-
-Apply some commands over multiple packages
-
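-As a rough illustration, this is how the same ‘test everything’ intent looks across these tools (assuming each package defines a `test` script):
-
-```bash
-lerna run test                    # Lerna
-nx run-many --target=test --all   # Nx
-turbo run test                    # Turborepo
-npm run test --workspaces         # npm 7+ workspaces
-```
-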
-In some projects, invoking a cascading command is all you need — mostly if each package has an autonomous life cycle and the build process spans a single package (more on this later). In some other types of projects, where the workflow demands testing/running and publishing/deploying many packages together, this will end in a terribly slow experience. Consider a solution with hundreds of packages that are transpiled and bundled — one might wait minutes for a wide test to run. While it’s not always a great practice to rely on wide/E2E tests, it’s quite common in the wild. This is exactly where the new wave of Monorepo tooling shines — _deeply_ optimizing the build process. I should say this out loud: these tools bring beautiful and innovative build optimizations:
-
-- **Parallelization —** If two commands or packages are orthogonal to each other, the commands will run in two different threads or processes. Typically your quality control involves testing, linting, license checking, CVE checking — why not parallelize?
-- **Smart execution plan —** Beyond parallelization, the optimized task execution order is determined based on many factors. Consider a build that includes A, B, C where A and C depend on B — naively, a build system would wait for B to build and only then run A & C. This can be optimized if we run A & C’s _isolated_ unit tests _while_ building B and not afterward. By running tasks in parallel as early as possible, the overall execution time is improved — this has a remarkable impact mostly when hosting a high number of components. See below a visualization example of a pipeline improvement
-
-
-
-A modern tool advantage over old Lerna. Taken from Turborepo website
-
-- **Detect who is affected by a change —** Even on a system with high coupling between packages, it’s usually not necessary to run _all_ packages, only those that are affected by a change. What exactly is ‘affected’? Packages/Microservices that depend upon another package that has changed. Some of the toolings can ignore minor changes that are unlikely to break others. This is not only a great performance booster but also an amazing testing feature — developers can get quick feedback on whether any of their clients were broken. Both Nx and Turborepo support this feature (see the CLI sketch after this list). Lerna can tell only which of the Monorepo packages have changed
-- **Sub-systems (i.e., projects) —** Similarly to ‘affected’ above, modern tooling can realize portions of the graph that are inter-connected (a project or application) while others are not reachable by the component in context (another project), so they know to involve only packages of the relevant group
-- **Caching —** This is a serious speed booster: Nx and Turborepo cache the result/output of tasks and avoid running them again on consequent builds if unnecessary. For example, consider long-running tests of a Microservice; when commanding to re-build this Microservice, the tooling might realize that nothing has changed and the tests will get skipped. This is achieved by generating a hashmap of all the dependent resources — if none of these resources have changed, the hashmap will be the same and the task will get skipped. They even cache the stdout of the command, so when you run a cached version it acts like the real thing — consider running 200 tests, seeing all the log statements of the tests, getting results over the terminal in 200 ms; everything acts like ‘real testing’ while in fact the tests did not run at all, the results came from the cache!
-- **Remote caching —** Similarly to caching, but placing the task’s hashmaps and results on a global server, so further executions on other team members’ computers will also skip unnecessary tasks. In huge Monorepo projects that rely on E2E tests and must build all packages for development, this can save a great deal of time
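-
-To make ‘affected’ and caching concrete, here is a sketch of the relevant CLI invocations (flag syntax follows the 2022-era versions of these tools; the `main` base branch is an assumption):
-
-```bash
-# Nx: run tests only for projects affected since the base branch
-nx affected --target=test --base=main
-
-# Turborepo: filter to packages changed since main (and their dependents)
-turbo run test --filter=...[main]
-
-# A second, unchanged run is served from cache, replaying the stored stdout
-turbo run test
-```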
-
-# Layer 3: Hoist your dependencies to boost npm installation
-
-The speed optimizations that were described above won’t be of help if the bottleneck is the big ball of mud that is called ‘npm install’ (not to criticize, it’s just hard by nature). Take a typical scenario as an example: given dozens of components that should be built, they could easily trigger the installation of thousands of sub-dependencies. Although they use quite similar dependencies (e.g., same logger, same ORM), if the dependency versions are not equal then npm will duplicate ([the NPM doppelgangers problem](https://rushjs.io/pages/advanced/npm_doppelgangers/)) the installation of those packages, which might result in a long process.
-
-This is where the workspace line of tools (e.g., Yarn workspace, npm workspaces, PNPM) kicks in and introduces some optimization — instead of installing dependencies inside each component’s ‘NODE_MODULES’ folder, it will create one centralized folder and link all the dependencies over there. This can yield a tremendous boost in install time for huge projects. On the other hand, if you always focus on one component at a time, installing the packages of a single Microservice/library should not be a concern.
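-
-A minimal sketch of enabling this with npm workspaces (folder names are illustrative): one root package.json declares where the packages live, and a single `npm install` at the root hoists the shared dependencies:
-
-```json
-{
-  "name": "my-monorepo",
-  "private": true,
-  "workspaces": ["services/*", "libraries/*"]
-}
-```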
-
-Both Nx and Turborepo can rely on the package manager/workspace to provide this layer of optimization. In other words, Nx and Turborepo are the layer above the package manager, which takes care of the optimized dependency installation.
-
-
-
-On top of this, Nx introduces one more non-standard, maybe even controversial, technique: there might be only ONE package.json at the root folder of the entire Monorepo. By default, when creating components using Nx, they will not have their own package.json! Instead, all will share the root package.json. Going this way, all the Microservices/libraries share their dependencies and the installation time is improved. Note: it’s possible to create ‘publishable’ components that do have a package.json, it’s just not the default.
-
-I’m concerned here. Sharing dependencies among packages increases the coupling; what if Microservice1 wishes to bump dependency1’s version but Microservice2 can’t do this at the moment? Also, package.json is part of the Node.js _runtime_, and excluding it from the component root loses important features like the package.json main field or ESM exports (telling the clients which files are exposed). I ran some POC with Nx last week and found myself blocked — library B was added, I tried to import it from Library A but couldn’t get the ‘import’ statement to specify the right package name. The natural action was to open B’s package.json and check the name, but there is no package.json… How do I determine its name? Nx docs are great; finally I found the answer, but I had to spend time learning a new ‘framework’.
-
-# Stop for a second: It’s all about your workflow
-
-We deal with tooling and features, but it’s actually meaningless to evaluate these options before determining whether your preferred workflow is _synchronized or independent_ (we will discuss this in a few seconds). This upfront _fundamental_ decision will change almost everything.
-
-Consider the following example with 3 components: Library 1 is introducing some major and breaking changes, Microservice1 and Microservice2 depend upon Library1 and should react to those breaking changes. How?
-
-**Option A — The synchronized workflow —** Going with this development style, all three components will be developed and deployed in one chunk _together_. Practically, a developer will code the changes in Library1, test Library1 and also run wide integration/e2e tests that include Microservice1 and Microservice2. When they're ready, the versions of all components will get bumped. Finally, they will get deployed _together_.
-
-Going with this approach, the developer has the chance of seeing the full flow from the clients' perspective (Microservice1 and 2); the tests cover not only the library but also the eyes of the clients who actually use it. On the flip side, it mandates updating all the dependent components (could be dozens); doing so increases the risk’s blast radius as more units are affected and should be considered before deployment. Also, working on a large unit of work demands building and testing more things, which will slow the build.
-
-**Option B — Independent workflow —** This style is about working unit by unit, one bite at a time, and deploying each component independently based on its own business considerations and priority. This is how it goes: a developer makes the changes in Library1, and they must be tested carefully in the scope of Library1. Once she is ready, the SemVer is bumped to a new major and the library is published to a package manager registry (e.g., npm). What about the client Microservices? Well, the team of Microservice2 is super-busy now with other priorities and skips this update for now (the same way we all delay many of our npm updates). However, Microservice1 is very much interested in this change — the team has to proactively update this dependency, grab the latest changes, run the tests and, when they are ready, today or next week — deploy it.
-
-Going with the independent workflow, the library author can move much faster because she does not need to take into account 2 or 30 other components — some coded by different teams. This workflow also _forces her_ to write efficient tests against the library — it’s her only safety net and is likely to end with autonomous components that have low coupling to others. On the other hand, testing in isolation without the client’s perspective loses some dimension of realism. Also, if a single developer has to update 5 units — publishing each individually to the registry and then updating all the dependents can be a little tedious.
-
-
-
-Synchronized and independent workflows illustrated
-
-**On the illusion of synchronicity**
-
-In distributed systems, it’s not feasible to achieve 100% synchronicity — believing otherwise can lead to design faults. Consider a breaking change in Microservice1; now its client, Microservice2, is adapted and ready for the change. These two Microservices are deployed together, but due to the nature of Microservices and distributed runtimes (e.g., Kubernetes), only the deployment of Microservice1 fails. Now, Microservice2’s code is not aligned with Microservice1 in production and we are faced with a production bug. This line of failures can be handled to an extent also with a synchronized workflow — the deployment should orchestrate the rollout of each unit so each one is deployed at a time. Although this approach is doable, it increases the chances of a large-scoped rollback and increases deployment fear.
-
-This fundamental decision, synchronized or independent, will determine so many things — Whether performance is an issue or not at all (when working on a single unit), hoisting dependencies or leaving a dedicated node_modules in every package’s folder, and whether to create a local link between packages which is described in the next paragraph.
-
-# Layer 4: Link your packages for immediate feedback
-
-When having a Monorepo, there is always the unavoidable dilemma of how to link between the components:
-
-**Option 1: Using npm —** Each library is a standard npm package and its client installs it via the standard npm commands. Given Microservice1 and Library1, this will end with two copies of Library1: the one inside Microservice1/NODE_MODULES (i.e., the local copy of the consuming Microservice), and the 2nd in the development folder where the team is coding Library1.
-
-**Option 2: Just a plain folder —** With this, Library1 is nothing but a logical module inside a folder that Microservice1, 2 and 3 just locally import. NPM is not involved here; it’s just code in a dedicated folder. This is, for example, how Nest.js modules are represented.
-
-With option 1, teams benefit from all the great merits of a package manager — SemVer(!), tooling, standards, etc. However, should one update Library1, the changes won’t get reflected in Microservice1 since it is grabbing its copy from the npm registry and the changes were not published yet. This is a fundamental pain with Monorepo and package managers — one can’t just code over multiple packages and test/run the changes.
-
-With option 2, teams lose all the benefits of a package manager: there is no SemVer to protect consumers, and every change is propagated immediately to all of them.
-
-How do we bring the good from both worlds (presumably)? Using linking. Lerna, Nx, and the various package manager workspaces (Yarn, npm, etc.) allow using npm libraries and at the same time linking between the clients (e.g., Microservice1) and the library. Under the hood, they create a symbolic link. In development mode, changes are propagated immediately; at deployment time — the copy is grabbed from the registry.
-
-
-
-Linking packages in a Monorepo
-
-If you’re doing the synchronized workflow, you’re all set. Only note that now any risky change introduced by Library3 must be handled NOW by the 10 Microservices that consume it.
-
-If favoring the independent workflow, this is of course a big concern. Some may call this direct linking style a ‘monolith monorepo’, or maybe a ‘monolitho’. However, when not linking, it’s harder to debug a small issue between the Microservice and the npm library. What I typically do is _temporarily link_ (with npm link) between the packages, debug, code, then finally remove the link.
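-
-A sketch of that temporary-link dance with plain npm (package and folder names are illustrative):
-
-```bash
-cd library1 && npm link                    # register a global symlink for the library
-cd ../microservice1 && npm link library1   # point the client at the local copy
-# ...debug and code...
-npm unlink --no-save library1 && npm install   # restore the registry version
-```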
-
-Nx is taking a slightly more disruptive approach — it is using [TypeScript paths](https://www.typescriptlang.org/tsconfig#paths) to bind between the components. When Microservice1 is importing Library1, to avoid the full local path, it creates a TypeScript mapping between the library name and the full path. But wait a minute, there is no TypeScript in production, so how could it work? Well, at serving/bundling time it webpacks and stitches the components together. Not a very standard way of doing Node.js work.
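-
-A sketch of what such a mapping looks like in the root tsconfig (the scope and paths are illustrative):
-
-```json
-{
-  "compilerOptions": {
-    "baseUrl": ".",
-    "paths": {
-      "@myorg/library1": ["libs/library1/src/index.ts"]
-    }
-  }
-}
-```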
-
-# Closing: What should you use?
-
-It’s all about your workflow and architecture — a huge unseen cross-road stands in front of the Monorepo tooling decision.
-
-**Scenario A —** If your architecture dictates a _synchronized workflow_ where all packages are deployed together, or at least developed in collaboration — then there is a strong need for a rich tool to manage this coupling and boost the performance. In this case, Nx might be a great choice.
-
-For example, if your Microservices must keep the same versioning, or if the team is really small and the same people are updating all the components, or if your modularization is not based on the package manager but rather on the framework's own modules (e.g., Nest.js), if you’re doing frontend where the components inherently are published together, or if your testing strategy relies mostly on E2E — for all of these cases and others, Nx is a tool that was built to enhance the experience of coding many _relatively_ coupled components together. It is a great sugar coat over systems that are unavoidably big and linked.
-
-If your system is not inherently big or meant to synchronize packages’ deployment, fancy Monorepo features might increase the coupling between components. The Monorepo pyramid above draws a line between basic features that provide value without coupling components, while other layers come with an architectural price to consider. Sometimes climbing up toward the tip is worth the consequences; just make this decision consciously.
-
-
-
-**Scenario B —** If you’re into an _independent workflow_ where each package is developed, tested, and deployed (almost) independently — then inherently there is no need for fancy tools to orchestrate hundreds of packages. Most of the time there is just one package in focus. This calls for picking a leaner and simpler tool — Turborepo. By going this route, the Monorepo is not something that affects your architecture, but rather a scoped tool for faster build execution. One specific tool that encourages an independent workflow is [Bilt](https://github.com/giltayar/bilt) by Gil Tayar; it’s yet to gain enough popularity, but it might rise soon and is a great source to learn more about this philosophy of work.
-
-**In any scenario, consider workspaces —** If you face performance issues that are caused by package installation, then the various workspace tools (Yarn/npm/PNPM) can greatly minimize this overhead with a low footprint. That said, if you’re working in an autonomous workflow, the chances of facing such issues are smaller. Don’t just use tools unless there is a pain.
-
-We tried to show the beauty of each tool and where it shines. If we’re allowed to end this article with an opinionated choice: we greatly believe in an independent and autonomous workflow where the occasional developer of a package can code and deploy fearlessly without messing with dozens of other foreign packages. For this reason, Turborepo will be our favorite tool for the next season. We promise to tell you how it goes.
-
-# Bonus: Comparison table
-
-See below a detailed comparison table of the various tools and features:
-
-
-
-Preview only, the complete table can be [found here](https://github.com/practicajs/practica/blob/main/docs/docs/decisions/monorepo.md)
diff --git a/docs/blog/which-monorepo/monorepo-commands.png b/docs/blog/which-monorepo/monorepo-commands.png
deleted file mode 100644
index 50a0b66c..00000000
Binary files a/docs/blog/which-monorepo/monorepo-commands.png and /dev/null differ
diff --git a/docs/blog/which-monorepo/monorepo-comparison.png b/docs/blog/which-monorepo/monorepo-comparison.png
deleted file mode 100644
index f59f2fed..00000000
Binary files a/docs/blog/which-monorepo/monorepo-comparison.png and /dev/null differ
diff --git a/docs/blog/which-monorepo/monorepo-global-modules.png b/docs/blog/which-monorepo/monorepo-global-modules.png
deleted file mode 100644
index 54032fae..00000000
Binary files a/docs/blog/which-monorepo/monorepo-global-modules.png and /dev/null differ
diff --git a/docs/blog/which-monorepo/monorepo-linking.png b/docs/blog/which-monorepo/monorepo-linking.png
deleted file mode 100644
index 8ce3e468..00000000
Binary files a/docs/blog/which-monorepo/monorepo-linking.png and /dev/null differ
diff --git a/docs/blog/which-monorepo/monorepo-pipeline.png b/docs/blog/which-monorepo/monorepo-pipeline.png
deleted file mode 100644
index 89879483..00000000
Binary files a/docs/blog/which-monorepo/monorepo-pipeline.png and /dev/null differ
diff --git a/docs/blog/which-monorepo/monorepo-pyramid.png b/docs/blog/which-monorepo/monorepo-pyramid.png
deleted file mode 100644
index e9d10a6f..00000000
Binary files a/docs/blog/which-monorepo/monorepo-pyramid.png and /dev/null differ
diff --git a/docs/blog/which-monorepo/monorepo-visual-components.jpeg b/docs/blog/which-monorepo/monorepo-visual-components.jpeg
deleted file mode 100644
index 0d9900e3..00000000
Binary files a/docs/blog/which-monorepo/monorepo-visual-components.jpeg and /dev/null differ
diff --git a/docs/blog/which-monorepo/monorepo-workflow.png b/docs/blog/which-monorepo/monorepo-workflow.png
deleted file mode 100644
index ff5707ca..00000000
Binary files a/docs/blog/which-monorepo/monorepo-workflow.png and /dev/null differ
diff --git a/docs/blog/which-monorepo/practica-banner.png b/docs/blog/which-monorepo/practica-banner.png
deleted file mode 100644
index 5311a59e..00000000
Binary files a/docs/blog/which-monorepo/practica-banner.png and /dev/null differ
diff --git a/docs/docs/contribution/_category_.json b/docs/docs/contribution/_category_.json
deleted file mode 100644
index 4bf61bd5..00000000
--- a/docs/docs/contribution/_category_.json
+++ /dev/null
@@ -1,4 +0,0 @@
-{
- "position": 7,
- "label": "Contribution"
-}
\ No newline at end of file
diff --git a/docs/docs/contribution/contribution-long-guide.md b/docs/docs/contribution/contribution-long-guide.md
deleted file mode 100644
index b45078f2..00000000
--- a/docs/docs/contribution/contribution-long-guide.md
+++ /dev/null
@@ -1,189 +0,0 @@
----
-sidebar_position: 2
-sidebar_label: Long guide
----
-
-# The comprehensive contribution guide
-
-## You belong with us
-
-If you reached down to this page, you probably belong with us 💜. We are in an ever-going quest for better software practices. This journey can bring two things to your benefit: a lot of learning and global impact on many people's craft. Does this sound attractive?
-
-## Consider the shortened guide first
-
-Every small change can make this repo much better. If you intend to contribute a relatively small change like documentation change, small code enhancement or anything that is small and obvious - start by reading the [shortened guide here](./contribution-short-guide.md). As you'll expand your engagement with this repo, it might be a good idea to visit this long guide again
-
-
-## Philosophy
-
-Our main selling point is our philosophy, and our philosophy is 'make it SIMPLE'. There is one really important holy grail in software - speed. The faster you move, the more features and value are created for the users. The faster you move, the more improvement cycles are deployed and the software/ops become better. [Research shows](https://puppet.com/resources/report/2020-state-of-devops-report) that faster teams produce software that is more reliable. Complexity is the enemy of speed - commonly apps are big, sophisticated, have a lot of internal abstractions and demand long training before one becomes productive. Our mission is to minimize complexity, get onboarded developers up to speed quickly, or in simple words - let the reader of the code understand it in a breeze. If you make simplicity a 1st principle - great things will come your way.
-
-
-
-Big words, how exactly? Here are few examples:
-
-**- Simple language -** We use TypeScript because we believe in types, but we minimize advanced features. This boils down to using functions only, sometimes also classes. No abstract classes, generics, complex types or anything that demands more CPU cycles from the reader.
-
-**- Less generic -** Yes, you read it right. If you can code a function that covers fewer scenarios but is shorter and simpler to understand - consider this option first. Sometimes one is forced to make things generic - that's fine, at least we minimized the number of complex code locations
-
-**- Simple tools -** Need to use some 3rd party for some task? Choose the library that does the minimal amount of work. For example, when seeking a library that parses JWT tokens - avoid picking a super-fancy framework that can solve any authorization path (e.g., Passport). Instead, opt for a library that does exactly this. This will result in code that is simpler to understand and a reduced bug surface
-
-**- Prefer Node/JavaScript built-in tooling -** Some new frameworks have abstractions over standard tooling. They have their own way of defining modules, libraries and more, which demands learning one more concept and being exposed to an unnecessary layer of bugs. Our preferred way is the vanilla way: if it's part of JavaScript/Node - we use it. For example, should we need to group a bunch of files as a logical module - we use ESM to export the relevant files and functions
-
-[Our full coding guide will come here soon](http://www/no-link-yet)
-
-
-
-## Workflow
-
-### Got a small change? Choose the fast lane
-
-Every small change can make this repo much better. If you intend to contribute a relatively small change like documentation changes, linting rules, look & feel fixes, fixing typos, comments or anything that is small and obvious - just fork to your machine, code, ensure all tests pass (e.g., `npm test`), open a PR with a meaningful title, and get **1** approval before merging. That's it.
-
-
-
-### Need to change the code itself? Here is a typical workflow
-
-| | **➡️ Idea** | **➡ Design decisions** | **➡ Code** | **➡️ Merge** |
-|----|----|----|----|----|
-| **When** | Got an idea how to improve? Want to handle an existing issue? | When the change implies some major decisions, those should be discussed in advance | When you got confirmation from a core maintainer that the design decisions are sensible | When you have accomplished a *short iteration*. If the whole change is small, PR at the end |
-| **What** | **1.** Create an issue (if one doesn't exist) **2.** Label the issue with its type (e.g., question, bug) and the area of improvement (e.g., area-generator, area-express) **3.** Comment and specify your intent to handle this issue | **1.** Within the issue, specify your overall approach/design. Or just open a discussion **2.** If choosing a 3rd party library, ensure to follow our standard decision and comparison template. [Example can be found here](../decisions/configuration-library.md) | **1.** Do it with passion 💜 **2.** Follow our coding guide. Keep it simple. Stay loyal to our philosophy **3.** Run all the quality measures frequently (testing, linting) | **1.** Share your progress early by submitting a [work in progress PR](https://github.blog/2019-02-14-introducing-draft-pull-requests/) **2.** Ensure all CI checks pass (e.g., testing) **3.** Get at least one approval before merging |
-
-## Roles
-
-
-## Project structure
-
-### High-level sections
-
-The repo has 3 root folders that represent what we do:
-
-- **docs** - Anything we write to make this project super easy to work with
-- **code-generator** - A tool with great DX to choose and generate the right app for the user
-- **code-templates** - The code that we generate with the right patterns and practices
-
-```mermaid
-%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#99BF2C','secondaryColor':'#C2DF84','lineColor':'#ABCA64','fontWeight': 'bold', 'fontFamily': 'comfortaa, Roboto'}}}%%
-graph
- A[Practica] -->|How we create apps| B(Code Generators)
- A -->|The code that we generate!| C(Code Templates)
- A -->|How we explain ourself| D(Docs)
-
-
-```
-
-### The code templates
-
-Typically, the two main sections are the Microservices (apps) and the cross-cutting-concern libraries:
-
-```mermaid
-%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#99BF2C','secondaryColor':'#C2DF84','lineColor':'#ABCA64','fontWeight': 'bold', 'fontFamily': 'comfortaa, Roboto'}}}%%
-graph
- A[Code Templates] -->|The example Microservice/app| B(Services)
- B -->|Where the API, logic and data lives| D(Example Microservice)
- A -->|Cross Microservice concerns| C(Libraries)
- C -->|Explained in a dedicated section| K(*Multiple libraries like logger)
- style D stroke:#333,stroke-width:4px
-
-
-```
-
-**The Microservice structure**
-
-
-The entry-point of the generated code is an example Microservice that exposes an API and has the traditional layers of a component:
-
-```mermaid
-%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#99BF2C','secondaryColor':'#C2DF84','lineColor':'#ABCA64','fontWeight': 'bold', 'fontFamily': 'comfortaa, Roboto'}}}%%
-graph
- A[Services] -->|Where the API, logic and data lives| D(Example Microservice)
- A -->|Almost empty, used to exemplify Microservice communication| E(Collaborator Microservice)
- D -->|The web layer with REST/Graph| G(Web/API layer)
- N -->|Docker-compose based DB, MQ and Cache| F(Infrastructure)
- D -->|Where the business lives| M(Domain layer)
- D -->|Anything related with database| N(Data-access layer)
- D -->|Component-wide testing| S(Testing)
- style D stroke:#333,stroke-width:4px
-```
-
-**Libraries**
-
-All libraries are independent npm packages that can be tested in isolation
-
-```mermaid
-%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#99BF2C','secondaryColor':'#C2DF84','lineColor':'#ABCA64','fontWeight': 'bold', 'fontFamily': 'comfortaa, Roboto'}}}%%
-graph
- A[Libraries] --> B(Logger)
- A[Libraries] --> |Token-based auth| C(Authorization)
- A[Libraries] --> |Retrieve and validate the configuration| D(Configuration)
- A[Libraries] --> E(Error handler)
- A[Libraries] --> F(MetricsService)
- A[Libraries] --> Z(More to come...)
- style Z stroke:#333,stroke-width:4px
-```
-
-### The code generator structure
-
-## Packages (domains)
-
-This solution is built around independent domains that share _almost_ nothing with others. It is recommended to start by understanding a single and small domain (package), then expand and get acquainted with more. This is also an opportunity to master a specific topic that you're passionate about. Following is our packages list; choose where you wish to contribute first
-
-
-
-
-
-| **Package** | **What** | **Status** | **Chosen libs** | **Quick links** |
-|----|----|----|----|----|
-| microservice/express | A web layer of an example Microservice based on expressjs | 🧓🏽 Stable | - | - [Code & readme](http://not-exist-yet) - [Issues & ideas](http://not-exist-yet) |
-| microservice/fastify | A web layer of an example Microservice based on Fastify | 🐣 Not started (take the wheel, open an issue) | - | - [Code & readme](http://not-exist-yet) - [Issues & ideas](http://not-exist-yet) |
-| microservice/dal/prisma | A DAL layer of an example Microservice based on Prisma.js | 🐥 Beta/skeleton | - | - [Code & readme](http://not-exist-yet) - [Issues & ideas](http://not-exist-yet) |
-| library/logger | A logging library wrapper | 🐥 Beta/skeleton | Why: [Decision here](https://github.com/bestpractices/practica/blob/main/docs/decisions/configuration-library.md) | - [Code & readme](http://not-exist-yet) - [Issues & ideas](http://not-exist-yet) |
-| library/jwt-based-authentication | A library that authenticates requests with JWT token | 🧓🏽 Stable | [jsonwebtoken](https://www.npmjs.com/package/jsonwebtoken) Why: [Decision here](https://github.com/bestpractices/practica/blob/main/docs/decisions/configuration-library.md) | - [Code & readme](http://not-exist-yet) - [Issues & ideas](http://not-exist-yet) |
-
-
-
-## Development machine setup
-
-✅ Ensure Node, Docker and [NVM](https://github.com/nvm-sh/nvm#installing-and-updating) are installed
-
-✅ Configure GitHub and npm 2FA!
-
-✅ Clone the repo if you are a maintainer, or fork it if you have no collaborator permissions
-
-✅ With your terminal, ensure the right Node version is installed:
-
-```bash
-nvm use
-```
-
-✅ Install dependencies:
-
-
-```bash
-npm i
-```
-
-✅ Ensure all tests pass:
-
-```bash
-npm t
-```
-
-✅ Code. Run the tests. And vice versa
-
-
-## Areas to focus on
-
-
-
-
-## Supported Node.js version
-
-- The generated code should be compatible with Node.js versions >14.0.0.
-- It's fair to demand an LTS Node.js version from the repository maintainers (the generator code). See the package.json sketch below
-
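-A minimal sketch of declaring such an expectation in package.json (the `engines` field is standard npm; the range mirrors the bullet above):
-
-```json
-{
-  "engines": {
-    "node": ">=14.0.0"
-  }
-}
-```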
-
-## Code structure
-
-Soon
diff --git a/docs/docs/contribution/contribution-short-guide.md b/docs/docs/contribution/contribution-short-guide.md
deleted file mode 100644
index 0b64e130..00000000
--- a/docs/docs/contribution/contribution-short-guide.md
+++ /dev/null
@@ -1,127 +0,0 @@
----
-sidebar_position: 1
-sidebar_label: Short guide
----
-
-# Contributing to Practica.js - The short guide
-
-## You belong with us
-
-We are in an ever-going quest for better software practices. If you reached down to this page, you probably belong with us 💜.
-
-Note: This is a shortened guide that suits those who are willing to quickly contribute. Once you deepen your relations with Practica.js - it's a good idea to read the [full guide](https://github.com/practicajs/practica/blob/main/CONTRIBUTING.md)
-
-## 2 things to consider
-
-- Our philosophy is all about minimalism and simplicity - we strive to write less code, rely on existing and reputable libraries, stick to Node/JS standards and avoid adding our own abstractions
-- Popular vendors only - each technology and vendor that we introduce must be super popular and reliable. For example, a library must be one of the top 5 most starred and downloaded in its category. See [full vendor choosing instructions here](./vendor-pick-guidelines.md)
-
-## The main internals tiers (in a nutshell)
-
-For a quick start, you don't necessarily need to understand the entire codebase. Typically, your contribution will fall under one of these three categories:
-
-### Option 1 - External or configuration change
-
-**High-level changes**
-
-If you simply mean to edit things beyond the code - there is no need to delve into the internals. For example, when changing documentation, CI/bots, and alike - one can simply perform the task right away
-
-### Option 2 - The code generator
-
-**Code and CLI to get the user preferences and copy the right code to her computer**
-
-Here you will find the CLI, UI, and logic to generate the right code. We run our own custom code that goes through the code-templates folder and filters out parts/files based on the user preferences. For example, should she ask NOT to get a GitHub Actions file - the generator will remove this file from the output
-
-How to work with it?
-
-1. If all you need is to alter the logic, you may just code in the ~/code-generator/generation-logic folder and run the tests (located in the same folder)
-2. If you wish to modify the CLI UI, then you'll need to build the code before running it (because TypeScript can't be run directly in the CLI). Open two terminals:
-
-- Open one terminal to compile the code:
-
-```bash
-npm run build:watch
-```
-
-- Open second terminal to run the CLI UI:
-
-```bash
-npm run start:cli
-```
-
-### Option 3 - The code templates
-
-**The output of our program: An example Microservice and libraries**
-
-Here you will find the generated code that we selectively copy to the user's computer; it is located under {root}/src/code-templates. It's preferable to work on this code outside the main repository in some side folder. To achieve this, simply generate the code using the CLI, code, run the tests, then finally copy back to the main repository
-
-1. Install dependencies
-
-```bash
-nvm use && npm i
-```
-
-2. Build the code
-
-```bash
-npm run build
-```
-
-3. Bind the CLI command to our code
-
-```bash
-cd .dist && npm link
-```
-
-4. Generate the code to your preferred working folder
-
-```bash
-cd {some folder like $HOME}
-create-node-app immediate --install-dependencies
-```
-
-5. Now you can work on the generated code. Later on, once your tests pass and you're happy - copy the changes back to `~/practica/src/code-templates`
-
-6. Run the tests while you code
-
-```bash
-# From the folder where you generated the code. You might need to 'git init'
-cd default-app-name/services/order-service
-npm run test:dev
-```
-
-
-## Workflow
-
-1. Idea - Claim an existing issue or open a new one
-2. Optional: Design - If you're doing something that is not straightforward, share your high-level approach to this within the issue
-3. PR - Once you're done, run the tests locally, then open a PR to main. Ensure all checks pass. If you introduced a new feature - update the docs
-
-## Development machine setup
-
-✅ Ensure Node, Docker and [NVM](https://github.com/nvm-sh/nvm#installing-and-updating) are installed
-
-✅ Configure GitHub and npm 2FA!
-
-✅ Clone the repo if you are a maintainer, or fork it if you have no collaborator permissions
-
-✅ With your terminal, ensure the right Node version is installed:
-
-```bash
-nvm use
-```
-
-✅ Install dependencies:
-
-
-```bash
-npm i
-```
-
-✅ Ensure all tests pass:
-
-```bash
-npm t
-```
-
-✅ You can safely start now: code, run the tests and vice versa
diff --git a/docs/docs/contribution/questions-and-answers.MD b/docs/docs/contribution/questions-and-answers.MD
deleted file mode 100644
index b468119a..00000000
--- a/docs/docs/contribution/questions-and-answers.MD
+++ /dev/null
@@ -1,30 +0,0 @@
----
-sidebar_position: 4
-sidebar_label: Questions and answers
----
-
-# Questions and answers for contributors
-
-## Code generator
-
-**1. Question -** When running the local shell command 'create-node-app', a wrong version of the code runs, or it does not run at all
-
-**Answer -** Consider re-linking the local command to the code and cleaning the .dist folder:
-
-a. Locate your bin folder by typing `which node` and copy the path
-
-b. CD to the path that was copied
-
-c. Remove the stale command: `rm create-node-app`
-
-d. Navigate to the code folder and build the code: `npm run build`
-
-e. Delete the [root]/.dist folder
-
-f. Build the project again with `npm run build`
-
-g. Navigate to `.dist` and link the command again:
-
-`npm link`
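-
-The same steps as one shell sketch (the clone location `~/code/practica` is hypothetical):
-
-```bash
-cd "$(dirname "$(which node)")"   # a+b: locate and enter the bin folder
-rm create-node-app                # c: remove the stale link
-cd ~/code/practica                # back to the code folder
-npm run build                     # d: build
-rm -rf .dist                      # e: delete the old output
-npm run build                     # f: build again
-cd .dist && npm link              # g: re-link the fresh build
-```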
diff --git a/docs/docs/contribution/release-checklist.md b/docs/docs/contribution/release-checklist.md
deleted file mode 100644
index 9c8743e8..00000000
--- a/docs/docs/contribution/release-checklist.md
+++ /dev/null
@@ -1,19 +0,0 @@
----
-sidebar_position: 8
-sidebar_label: Release checklist
----
-
-# A checklist for releasing a new Practica version
-
-✅ Bump package.json of both root and example Microservice
-
-✅ Ensure you're on the master branch
-
-✅ Publish from the root
-
-```npm run publish:build```
-
-✅ Test manually by cleaning the local .bin and running the getting started guide
-
-
-
diff --git a/docs/docs/contribution/vendor-pick-guidelines.md b/docs/docs/contribution/vendor-pick-guidelines.md
deleted file mode 100644
index 4af37089..00000000
--- a/docs/docs/contribution/vendor-pick-guidelines.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-sidebar_position: 6
-sidebar_label: Library picking guidelines
----
-
-# Choosing npm package dependency thoughtfully
-
-✅ The decision must follow a comparison table of options using [this template](https://github.com/practicajs/practica/blob/main/docs/docs/decisions/configuration-library.md)
-
-✅ Usage stats must be captured, including weekly downloads, GitHub stars and dependents (see the example below this checklist). Only the top 5 most popular vendors can be evaluated
-
-✅ The evaluated libs must have been updated at least once in the last 6 months
-
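-For example, weekly downloads can be captured from npm's public downloads endpoint (shown here for an arbitrary package):
-
-```bash
-curl https://api.npmjs.org/downloads/point/last-week/convict
-```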
diff --git a/docs/docs/decisions/_category_.json b/docs/docs/decisions/_category_.json
deleted file mode 100644
index 764c10fa..00000000
--- a/docs/docs/decisions/_category_.json
+++ /dev/null
@@ -1,4 +0,0 @@
-{
- "position": 4,
- "label": "Decisions"
-}
\ No newline at end of file
diff --git a/docs/docs/decisions/configuration-library.md b/docs/docs/decisions/configuration-library.md
deleted file mode 100644
index d9acd54a..00000000
--- a/docs/docs/decisions/configuration-library.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-sidebar_position: 1
-sidebar_label: Configuration Library
----
-
-# Decision: Choosing a **_configuration_** library
-
-**📔 What is it** - A decision data and discussion about the right configuration library
-
-**⏰ Status** - Closed on April 1st, 2022
-
-**📁 Corresponding discussion** - [Here](https://github.com/practicajs/practica/issues/10)
-
-**🎯Bottom-line: our recommendation** - **✨convict✨** ticks all the boxes by providing a strict schema, a fail-fast option, per-entry documentation and a hierarchical structure
-
-**📊 Detailed comparison table**
-
-| | dotenv | Convict | nconf | config |
-| --- | --- | --- | --- | --- |
-| **Executive Summary** | | | | |
-| Performance (load time for 100 keys) |  1ms |  5ms |  4ms |  5ms |
-| Popularity |  Superior |  Less popular than competitors |  Highly popular |  Highly popular |
-| ❗ Fail fast & strict schema |  No |  Yes |  No |  No |
-| Items documentation |  No |  Yes |  No |  No |
-| Hierarchical configuration schema |  No |  Yes |  Yes |  No |
-| **More details: Community & Popularity - March 2022** | | | | |
-| Stars | 4200 ✨ | 2500 ✨ | 2500 ✨ | 1000 ✨ |
-| Downloads/Week | 12,900,223 📁 | 4,000,000 📁 | 6,000,000 📁 | 5,000,000 📁 |
-| Dependents | 26,000 👩👧 | 600 👧 | 800 👧 | 1000 👧 |
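-
-To illustrate why convict ticks those boxes, a minimal sketch (the configuration keys are made up for the example):
-
-```typescript
-import convict from 'convict';
-
-const config = convict({
-  port: {
-    doc: 'The port the API listens on', // per-entry documentation
-    format: 'port',                     // strict schema
-    default: 3000,
-    env: 'PORT',
-  },
-});
-
-// fail fast: throws on startup if a value violates the schema
-config.validate({ allowed: 'strict' });
-```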
diff --git a/docs/docs/decisions/docker-base-image.md b/docs/docs/decisions/docker-base-image.md
deleted file mode 100644
index 4a92bef0..00000000
--- a/docs/docs/decisions/docker-base-image.md
+++ /dev/null
@@ -1,52 +0,0 @@
----
-sidebar_position: 5
-sidebar_label: Docker base image
----
-
-# Decision: Choosing a **Docker base image**
-
-**📔 What is it** - The included Dockerfile inherits from a base Node.js image. There are various considerations when choosing the right option, which are listed below
-
-**⏰ Status** - Open for discussions
-
-**📁 Corresponding discussion** - [Here](https://github.com/practicajs/practica/issues/229)
-
-**🎯Bottom-line: our recommendation** - TBD
-
-**📊 Detailed comparison table**
-
-
-
-| | full-blown | bullseye-slim | alpine |
-| --- | --- | --- | --- |
-| **Key dimensions** | | | |
-| Officially supported | Yes | Yes | No? Looking for sources |
-| CVEs (Medium severity and above) | ❗️Trivy: 521, Snyk: TBD | Trivy: 11 high, Snyk: TBD | Trivy: 0 high, Snyk: TBD |
-| Size in MB | 950 MB | 150 MB | 90 MB |
-| Native modules installation (packages that run a native code installer, e.g., with node-gyp) | Standard C compiler (glibc) | Standard C compiler (glibc) | A less standard compiler (musl) - might break under some circumstances |
-
-Other important dimensions to consider?
diff --git a/docs/docs/decisions/index.md b/docs/docs/decisions/index.md
deleted file mode 100644
index 956261f8..00000000
--- a/docs/docs/decisions/index.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-sidebar_position: 1
-sidebar_label: README
----
-# Decision making documentation
-
-Making our decisions transparent and collaborative is at the heart of Practica. In this folder, all decisions should be documented using our decision template
-
-## Index
-- [Configuration Library](./configuration-library.md)
-- [Monorepo](./monorepo.md)
-- [OpenAPI](./openapi.md)
-- More will come soon
\ No newline at end of file
diff --git a/docs/docs/decisions/monorepo.md b/docs/docs/decisions/monorepo.md
deleted file mode 100644
index f2a8580b..00000000
--- a/docs/docs/decisions/monorepo.md
+++ /dev/null
@@ -1,243 +0,0 @@
----
-sidebar_position: 2
-sidebar_label: Monorepo
----
-
-# Decision: Choosing **Monorepo** approach and tooling
-
-**📔 What is it** - Choosing the right Monorepo tool and features for the boilerplate
-
-**⏰ Status** - Open for discussions
-
-**📁 Corresponding discussion** - [Here](https://github.com/practicajs/practica/issues/80)
-
-**🎯Bottom-line: our recommendation** - TBD
-
-**📊 Detailed comparison table**
-
-*For some lacking features there is a community package that bridges the gap; for workspaces, we evaluated whether most of them support a specific feature
-
-
-| | nx | Turborepo | Lerna | workspace (npm, yarn, pnpm) |
-| --- | --- | --- | --- | --- |
-| **Executive Summary** | | | | |
-| Community and maintenance | Huge eco-system and commercial-grade maintenance | Trending, commercial-grade maintenance | Not maintained anymore | Solid |
-| ❗Encourage component autonomy | Packages are highly coupled | Workflow is coupled | npm link bypasses the SemVer | Minor concern: shared NODE_MODULES on the root |
-| Build speed | Smart inference and execution plan, shared dependencies, cache | Smart inference and execution plan, shared dependencies, cache | Parallel tasks execution, copied dependencies | Shared dependencies |
-| Standardization | Non-standard Node.js stuff: one single root package.json by default, TS paths for linking | An external build layer | An external build layer | An external package centralizer |
-| **Tasks and build pipeline** | | | | |
-| Run recursive commands (affect a group of packages) | Yes | Yes | Yes | Yes |
-| ❗️Parallel task execution | Yes | Yes | No | Yes* (Yarn & Pnpm) |
-| ❗️Realize which packages changed | Yes | Yes | Yes | No |
-| ❗️Realize packages that are affected by a change | Yes, both through package.json and code | Yes, through package.json | None | None |
-| Ignore missing commands/scripts | No | Yes | Yes | Yes |
-| ❗️In-project cache - skip tasks if a local result exists | Yes | Yes | No | No |
-| Remote cache - skip tasks if a remote result exists | Yes | Yes | No | No |
-| Visual dependency graph | Yes | Yes | Partially, via plugin | No |
-| ❗️Smart waterfall pipeline - schedule unrelated tasks in parallel, not topologically | Yes | Yes | No | No |
-| Distributed task execution - spread tasks across machines | Yes | No | No | No |
-| **Locally linking packages** | | | | |
-| ❗️Is supported | Partially, achieved through TS paths | No, relies on workspaces | Yes | Yes |
-| How | ❗️Via TypeScript paths and webpack | Relies on workspaces | Symlink | Symlink |
-| ❗️Can opt-out? | Yes, by default local packages are linked | | No | Partially: Pnpm allows preferring remote packages, Yarn has a [focused package](https://classic.yarnpkg.com/blog/2018/05/18/focused-workspaces/) option which only works per a single package |
-| Link a range - only specific versions will be symlinked | No | | No | Some: Yarn and Pnpm allow workspace versioning |
-| **Optimizing dependencies installation speed** | | | | |
-| Supported | Yes, via a single root package.json and NODE_MODULES | Yes, via caching | No, can be used on top of yarn workspace | Yes, via a single node_modules folder |
-| Retain origin file path (some modules refer to relative paths) | Partially, NODE_MODULES is on the root, not per package | Yes | Not relevant | Partially, Pnpm uses hard links instead of symlinks |
-| Keep single NODE_MODULES per machine (faster, less disc space) | No | No | No | Partially, Pnpm supports this |
-| **Other features and considerations** | | | | |
-| Community plugins | Yes | No | Yes | Yes |
-| Scaffold new component from a gallery | Yes | None | None | None |
-| Create a new package in the repo | Built-in code generation with useful templates | None, 3rd party code generator can be used | None, 3rd party code generator can be used | None, 3rd party code generator can be used |
-| Adapt changes in the monorepo tool | Supported via nx migrate | Supported via codemod | None | None |
-| Incremental builds | Supported | Supported | None | None |
-| Cross-package modifications | Supported via nx generate | None | None | None |
-
-Ideas for next iteration:
-- Separate command execution and pipeline section
-- Stars and popularity
-- Features summary
-- Polyrepo support
-
diff --git a/docs/docs/decisions/openapi.md b/docs/docs/decisions/openapi.md
deleted file mode 100644
index a9e096b2..00000000
--- a/docs/docs/decisions/openapi.md
+++ /dev/null
@@ -1,78 +0,0 @@
----
-sidebar_position: 3
-sidebar_label: OpenAPI
----
-
-# Decision: Choosing **OpenAPI** generator tooling
-
-**📔 What is it** - A decision data and discussion about the right OpenAPI tools and approach
-
-**⏰ Status** - Open, closes on June 1st, 2022
-
-**📁 Corresponding discussion** - [Here](https://github.com/practicajs/practica/issues/67)
-
-**🎯Bottom-line: our recommendation** - TBD
-
-**📊 Detailed comparison table**
-
-
+
+
+
+
\ No newline at end of file
diff --git a/img/3-tiers.png b/img/3-tiers.png
new file mode 100644
index 00000000..72777254
Binary files /dev/null and b/img/3-tiers.png differ
diff --git a/static/images/abstractions-vs-simplicity.png b/img/abstractions-vs-simplicity.png
similarity index 100%
rename from static/images/abstractions-vs-simplicity.png
rename to img/abstractions-vs-simplicity.png
diff --git a/docs/static/img/almost-full.png b/img/almost-full.png
similarity index 100%
rename from docs/static/img/almost-full.png
rename to img/almost-full.png
diff --git a/docs/static/img/discord-logo.png b/img/discord-logo.png
similarity index 100%
rename from docs/static/img/discord-logo.png
rename to img/discord-logo.png
diff --git a/static/images/balance.png b/img/docs/balance.png
similarity index 100%
rename from static/images/balance.png
rename to img/docs/balance.png
diff --git a/docs/static/img/docs/decisions/almost-full.png b/img/docs/decisions/almost-full.png
similarity index 100%
rename from docs/static/img/docs/decisions/almost-full.png
rename to img/docs/decisions/almost-full.png
diff --git a/docs/static/img/docs/decisions/full.png b/img/docs/decisions/full.png
similarity index 100%
rename from docs/static/img/docs/decisions/full.png
rename to img/docs/decisions/full.png
diff --git a/docs/static/img/docs/decisions/partial.png b/img/docs/decisions/partial.png
similarity index 100%
rename from docs/static/img/docs/decisions/partial.png
rename to img/docs/decisions/partial.png
diff --git a/docs/static/img/favicon-32x32.png b/img/favicon-32x32.png
similarity index 100%
rename from docs/static/img/favicon-32x32.png
rename to img/favicon-32x32.png
diff --git a/docs/static/img/favicon.ico b/img/favicon.ico
similarity index 100%
rename from docs/static/img/favicon.ico
rename to img/favicon.ico
diff --git a/docs/static/img/full.png b/img/full.png
similarity index 100%
rename from docs/static/img/full.png
rename to img/full.png
diff --git a/docs/static/img/logo.svg b/img/logo.svg
similarity index 100%
rename from docs/static/img/logo.svg
rename to img/logo.svg
diff --git a/img/monorepo-structure.png b/img/monorepo-structure.png
new file mode 100644
index 00000000..2d1e058a
Binary files /dev/null and b/img/monorepo-structure.png differ
diff --git a/docs/static/img/monorepo-theme-1.png b/img/monorepo-theme-1.png
similarity index 100%
rename from docs/static/img/monorepo-theme-1.png
rename to img/monorepo-theme-1.png
diff --git a/static/images/on-top-of-frameworks.png b/img/on-top-of-frameworks.png
similarity index 100%
rename from static/images/on-top-of-frameworks.png
rename to img/on-top-of-frameworks.png
diff --git a/docs/static/img/partial.png b/img/partial.png
similarity index 100%
rename from docs/static/img/partial.png
rename to img/partial.png
diff --git a/static/images/practica-logo.png b/img/practica-logo.png
similarity index 100%
rename from static/images/practica-logo.png
rename to img/practica-logo.png
diff --git a/docs/static/img/practica.png b/img/practica.png
similarity index 100%
rename from docs/static/img/practica.png
rename to img/practica.png
diff --git a/docs/static/img/site-icon.png b/img/site-icon.png
similarity index 100%
rename from docs/static/img/site-icon.png
rename to img/site-icon.png
diff --git a/static/images/tech-stack.png b/img/tech-stack.png
similarity index 100%
rename from static/images/tech-stack.png
rename to img/tech-stack.png
diff --git a/docs/static/img/twitter-icon.png b/img/twitter-icon.png
similarity index 100%
rename from docs/static/img/twitter-icon.png
rename to img/twitter-icon.png
diff --git a/index.html b/index.html
new file mode 100644
index 00000000..ca271120
--- /dev/null
+++ b/index.html
@@ -0,0 +1,21 @@
+
+
+
+
+
+home | Practica.js
+
+
+
+
+
+
+
+
+
+
Although Node.js has great frameworks 💚, they were never meant to be production ready immediately. Practica.js aims to bridge the gap. Based on your preferred framework, we generate some example code that demonstrates a full workflow, from API to DB, that is packed with good practices. For example, we include a hardened dockerfile, N-Tier folder structure, great testing templates, and more. This saves a great deal of time and can prevent painful mistakes. All decisions made are neatly and thoughtfully documented. We strive to keep things as simple and standard as possible and base our work off the popular guide: Node.js Best Practices
1 min video 👇
Our Philosophies and Unique Values
1. Best Practices on top of known Node.js frameworks
We don't re-invent the wheel. Rather, we use your favorite framework and empower it with structure and real examples. With a single command you can get an Express/Fastify-based codebase with ~100 examples of best practices inside.
Keeping it simple, flat and based on native Node/JS capabilities is part of this project's DNA. We believe that too many abstractions, high complexity or fancy language features can quickly become a stumbling block for the team.
To name a few examples: our code flow is flat with almost no levels of indirection; although we use TypeScript, almost no features are used besides types; for modularization we simply use Node.js modules.
Good practices and simplicity are the name of the game with Practica. There is no need to narrow your code to a specific framework or database. We aim to support the majority of popular Node.js frameworks and databases.
Practices and Features
We apply more than 100 practices and optimizations. You can opt in or out of most of these features using option flags on our CLI. The following table shows just a few examples of the features we provide. To see the full list of features, please visit our website here.
| Feature | Explanation | Flag | Docs |
| --- | --- | --- | --- |
| Monorepo setup | Generates two components (e.g., Microservices) in a single repository with interactions between the two | | |
| Output escaping | Clean-out outgoing responses from potential HTML security risks like XSS | --oe, --output-escape | Docs coming soon |
| Integration (component) testing | Generates a full-blown component/integration tests setup including DB | --t, --tests | Docs coming soon |
| Unique request ID (Correlation ID) | Generates a module that creates a unique correlation/request ID for every incoming request. This is available for any other object during the request life-span. Internally it uses Node's built-in AsyncLocalStorage | --coi, --correlation-id | Docs coming soon |
| Dockerfile | Generates a dockerfile that embodies 20+ best practices | --df, --docker-file | Docs coming soon |
| Strong-schema configuration | A configuration module that dynamically loads run-time configuration keys and includes a strong schema so it can fail fast | | |
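For instance, an illustrative way to combine a few of the flags above in one CLI invocation (assuming these flags compose as documented):

create-node-app immediate --docker-file --tests --correlation-id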
Q: How to obtain a valid token to manually invoke the route (e.g., via POSTMAN)?
Answer: By default, Practica routes are guarded from unauthorized requests. The automated testing already embeds valid tokens. Should you wish to invoke the routes manually, a token must be signed.
Option 1 - Visit an online JWT token signing tool like jwt builder, change the key (bottom part of the form) to the key that is specified under ./services/order-service/config.ts/jwtTokenSecret/default. If you never changed it, the default secret is: just-a-default-secret. Click the submit button and copy the generated token.
Given the signed token, add a new header to your request with the name 'Authorization' and the value 'Bearer {put the token here}'
Option 2 - We already generated this token for you 👇, it should work with the default configuration in a development environment. Obviously, before going to production - the JWT secret must be changed:
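For reference, a minimal sketch of signing such a development token yourself with the jsonwebtoken package (the payload fields are illustrative):

import jwt from 'jsonwebtoken';
const token = jwt.sign({ user: 'test-user' }, 'just-a-default-secret', { expiresIn: '1h' });
console.log(`Authorization: Bearer ${token}`);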
Now that you have Practica installed (if not, do this first), it's time to code a great app using it and understand its unique power. This journey will inspire you with good patterns and practices. None of the concepts in this guide are our unique ideas; quite the opposite, they are all standard patterns or libraries that we just put together. In this tutorial we will implement a simple feature using Practica, ready?
Just before you start coding, ensure you have Docker and nvm (a utility that installs Node.js) installed. Both are common development tooling that are considered as a 'good practice'.
You now have a folder with Practica code. What will you find inside this box? Practica created for you an example Node.js solution with a single component (API, Microservice) that is called 'order-service'. Of course you'll change its name to something that represents your solution. Inside, it packs a lot of thoughtful and standard optimizations that will save you countless hours doing what others have done before.
Besides this component, there are also a bunch of reusable libraries like logger, error-handler and more. All sit together under a single root folder in a single Git repository - this popular structure is called a 'Monorepo'.
*A typical Monorepo structure*
The code inside is written with Node.js, TypeScript, Express, and PostgreSQL. Later versions of Practica.js will support more frameworks.
A minute before we start coding, let's ensure the solution starts and the tests pass. This will give us confidence to add more and more code knowing that we have a valid checkpoint (and tests to watch our back).
Just run the following standard commands:
```bash
# CD into the solution folder
cd {your-solution-folder}

# Install the right Node.js version
nvm use

# Install dependencies
npm install

# Run the tests
npm test
```
Tests pass? Great! 🥳✅
They fail? Oops, this does not happen too often. Please reach out on our Discord or open an issue on GitHub - we will try to assist shortly.
### Optional: Start the app and check with POSTMAN
Some rely on testing only; others also like to invoke routes using POSTMAN and test manually. We're good with both approaches, and recommend relying more and more on testing down the road. Practica includes testing templates that are easy to write.
Start the process by first navigating to the example component (order-service):

```bash
cd services/order-service
```

Start the DB using Docker and install the tables (migration):

```bash
docker-compose -f ./test/docker-compose.yml up
npm run db:migrate
```

(This step is not necessary for running the tests, as it happens automatically.)

Then start the app:

```bash
npm start
```
Now visit our online POSTMAN collection, explore the routes, invoke them, and make yourself familiar with the app.

Note: The API routes authorize requests - a valid token must be provided. You may generate one yourself (see here how), or just use the default development token that we generated for you 👇. Put it inside an 'Authorization' header:
A typical component (e.g., Microservice) contains 3 main layers. This is a known and powerful pattern called "3-Tiers". It's an architectural structure that strikes a great balance between simplicity and robustness. Unlike fancier architectures (e.g., hexagonal architecture), this style is more likely to keep things simple and organized. The three layers represent the physical flow of a request with no abstractions:
*A typical Monorepo structure*
- Layer 1: Entry points - This is the door to the application, where flows start and requests come in. Our example component has a REST API (i.e., API controllers) - this is one kind of entry point. There might be other entry points like a scheduled job, CLI, message queue, and more. Whatever entry point you're dealing with, the responsibility of this layer is minimal: receive requests, perform authentication, pass the request to be handled by the internal code, and handle errors. For example, a controller gets an API request and then does nothing more than authenticating the user, extracting the payload, and calling a domain-layer function 👇
- Layer 2: Domain - A folder containing the heart of the app, where the flows, logic, and data structures are defined. Its functions can serve any type of entry point - whether called from the API or a message queue, the domain layer is agnostic to the source of the caller. Code here may call other services via HTTP/queue. It's also likely to fetch from and save information in a DB; for this, it calls the data-access layer 👇
- Layer 3: Data-access - Your entire DB interaction functionality and configuration are kept in this folder. For now, Practica.js uses an ORM to interact with the DB - we're still debating this decision
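Putting it together, the example component's folders map onto the three layers roughly like this (an illustrative sketch based on the paths used in the steps below; exact folders may differ):

```
services/order-service
├── entry-points
│   └── api            # layer 1 - controllers, routes
├── domain             # layer 2 - use cases, schemas, logic
├── data-access        # layer 3 - repositories, ORM models
└── test               # component tests
```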
Now that you understand the structure of the example component, it's much easier to code over it 👇
We're about to implement a simple feature so you become familiar with the major code areas. After reading/coding this section, you should be able to easily add routes, logic, and DB objects to your system. The example app deals with an imaginary e-commerce domain: it has functionality for adding and querying orders. It goes without saying that you'll change this to the entities and columns that represent your app.

🗝 Key insight: Practica has no hidden abstractions; you have to become familiar with the (popular) chosen libraries. This minimizes future scenarios where you get stuck because an abstraction is not suitable to your needs or you don't understand how things work.

Requirements - Our mission is to code the following: allow updating an order through the API. Orders should also have a new field: status. When trying to edit an existing order, if the field order.paymentTermsInDays is 0 (i.e., the payment due date is now) or order.status is 'delivered', no changes are allowed and the code should return HTTP status 400 (Bad Request). Otherwise, we should update the DB with the new order information.
### 1. Change the example component/service name
Obviously, your solution has a different context and name. You probably want to rename the example service from 'order-service' to {your-component-name}. Change both the folder name ('order-service') and the name field in package.json:
./services/order-service/package.json

```json
{
  "name": "your-name-here",
  "version": "0.0.2",
  "description": "An example Node.js app that is packed with best practices"
}
```
If you're just experimenting with Practica, you may leave the name as-is for now.
### 2. Add a new 'Edit' route
The Express API routes are located in the entry-points layer, in the file 'routes.ts': [root]/services/order-service/entry-points/api/routes.ts

This is very typical Express code; if you're familiar with Express, you'll be productive right away. This is a core principle of Practica - it uses battle-tested technologies as-is. Let's just add a new route in this file:
```typescript
// A new route to edit an order
router.put('/:id', async (req, res, next) => {
  try {
    logger.info(`Order API was called to edit order ${req.params.id}`);
    // Later on we will call the main code in the domain layer
    // For now, let's return hard-coded values
    res.status(200).json({
      id: 1,
      userId: 1,
      productId: 2,
      countryId: 1,
      deliveryAddress: '123 Main St, New York',
      paymentTermsInDays: 30,
    });
  } catch (err) {
    next(err);
  }
});
```
✅ Best practice: The API entry point (controller) should stay thin and focus on forwarding the request to the domain layer.
Looks highly familiar, right? If not, it means you should first learn how to code with your preferred framework - in this case, Express. That's the thing with Practica - we neither replace nor abstract away your reputable framework; we only augment it.
### 3. Test your first route
Commonly, once we have a first code skeleton, it's time to start testing it. In Practica we recommend writing 'component tests' that hit the API and include all the layers (no mocking) - we have great examples of this under [root]/services/order-service/test

You may visit the file [root]/services/order-service/test/add-order.test.ts and read one of the tests - you're likely to grasp the intent quickly. Our testing guide will be released shortly.

🗝 Key insight: Practica's testing strategy is based on 'component tests' that include all the layers, including the DB, using docker-compose. We include rich testing patterns that mitigate various real-world risks, like testing error handling, integrations, and other things beyond the basics. Thanks to a thoughtful setup, we're able to run 50 tests with a DB in ~6 seconds. This is considered a modern and highly efficient strategy for testing Microservices.
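For orientation only, a component test in this spirit might look roughly like the sketch below. The bootstrap helpers and import paths here are hypothetical - the real templates under [root]/services/order-service/test are the source of truth:

```typescript
import axios from 'axios';
// Hypothetical bootstrap helpers - check the real test setup for the actual names
import { startWebServer, stopWebServer } from '../entry-points/api/server';

let apiAddress: string;

beforeAll(async () => {
  // Boot the real app with all its layers; the DB runs via docker-compose
  const { port } = await startWebServer();
  apiAddress = `http://127.0.0.1:${port}`;
});

afterAll(async () => {
  await stopWebServer();
});

test('When adding a valid order, then a 200 response is returned', async () => {
  const orderToAdd = {
    userId: 1,
    productId: 2,
    deliveryAddress: '123 Main St, New York',
    paymentTermsInDays: 30,
  };
  // The real templates also attach a valid Authorization header here
  const response = await axios.post(`${apiAddress}/order`, orderToAdd);
  expect(response.status).toBe(200);
});
```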
In this guide though, we're more focused on crafting the feature - it's OK for now to test with POSTMAN or any other API explorer tool.
### 4. Create a DTO and a validation function
We're about to receive a payload from the caller - the edited order JSON. We obviously want to declare a strong schema/type, so we can validate incoming payloads and work with strong TypeScript types.

✅ Best practice: Validate incoming requests and fail early, both at run-time and at development time.

To meet these goals, we use two popular and powerful libraries: typebox and ajv. The first, Typebox, allows defining a schema with two outputs: a TypeScript type and a JSON Schema. The latter is a standard and popular format that can be reused in many other places (e.g., to define an OpenAPI spec). Based on this schema, the second library, ajv, will validate the requests.
Open the file [root]/services/order-service/domain/order-schema.ts
```typescript
// Declare the basic order schema
import { Static, Type } from '@sinclair/typebox';

export const orderSchema = Type.Object({
  deliveryAddress: Type.String(),
  paymentTermsInDays: Type.Number(),
  productId: Type.Integer(),
  userId: Type.Integer(),
  status: Type.Optional(Type.String()), // 👈 Add this field
});
```
This is Typebox's syntax for defining the basic order schema. Based on it, we can get both a JSON Schema and a TypeScript type (!), which enables both run-time and development-time protection. Add the status field to it, then add the following line to get a TypeScript type:
```typescript
// This is a standard TypeScript type - we can use it in the code and get
// IntelliSense + TypeScript build-time validation
export type editOrderDTO = Static<typeof orderSchema>;
```
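With this type in place, a malformed payload is caught at build time. For example (the field values here are illustrative):

```typescript
import { editOrderDTO } from './order-schema';

// TypeScript now guards the payload shape during development:
const order: editOrderDTO = {
  deliveryAddress: '123 Main St, New York',
  paymentTermsInDays: 30,
  productId: 2,
  userId: 1,
  status: 'approved', // optional per the schema; 'approved' is an illustrative value
};
```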
We now have strong development-time types to work with; it's time to configure our runtime validator. The ajv library takes a JSON Schema and validates payloads against it.

In the same file, let's define a validation function for edited orders:
```typescript
// [root]/services/order-service/domain/order-schema.ts
import { ajv } from '@practica/validation';

export function editOrderValidator() {
  // For performance reasons, we cache the compiled validator function
  const validator = ajv.getSchema<editOrderDTO>('edit-order');
  if (!validator) {
    ajv.addSchema(orderSchema, 'edit-order');
  }
  return ajv.getSchema<editOrderDTO>('edit-order')!;
}
```
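As a quick sanity check, the compiled validator can be invoked directly (a sketch; when validation fails, ajv attaches the failure details to the function's errors property):

```typescript
import { editOrderValidator } from './order-schema';

const validate = editOrderValidator();
const isValid = validate({
  deliveryAddress: '123 Main St, New York',
  paymentTermsInDays: 30,
  productId: 2,
  userId: 1,
});

if (!isValid) {
  console.log(validate.errors); // ajv reports what failed and where
}
```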
We now have a TypeScript type and a function that validates payloads at run-time. Knowing that we have safe types, it's time for the 'main thing' - coding the flow and logic.
### 5. Create a use case (what the heck is a 'use case'?)
Let's code our logic - but where? Obviously not in the controller/route, which merely forwards requests to our business-logic layer. This should be done inside our domain folder, where the logic lives. Let's create a special type of code object: a use case.

A use case is a plain JavaScript object/class that is created for every flow/feature. It summarizes the flow in simple, business-oriented language without delving into technical details. It mostly orchestrates small services that hold the implementation details. With use cases, the reader can grasp the high-level flow easily and avoid exposure to unnecessary complexity.
Let's add a new file inside the domain layer: edit-order-use-case.ts, and code the requirements:
```typescript
// [root]/services/order-service/domain/edit-order-use-case.ts
import * as orderRepository from '../data-access/repositories/order-repository';
import { editOrderDTO } from './order-schema';

export default async function editOrder(orderId: number, updatedOrder: editOrderDTO) {
  // Note how we use 👆 the editOrderDTO that was defined in the previous step
  assertOrderIsValid(updatedOrder);
  assertEditingIsAllowed(updatedOrder.status, updatedOrder.paymentTermsInDays);
  // Call the DB layer here 👇 - to be explained soon
  return await orderRepository.editOrder(orderId, updatedOrder);
}
```
Note how reading this function easily tells the flow without drowning in details. This is where use cases shine - they summarize long flows.
✅ Best practice: Describe every feature/flow with a 'use case' object that summarizes the flow for better readability.
Now we need to implement the functions that the use case calls. Since this is just a simple demo, we can put everything inside the use case. In a real-world scenario with heavier logic, calls to 3rd parties, and DB work, you would spread this code across multiple services. A hedged sketch of those two assertions is shown below.
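For completeness, here is a sketch of what those two assertion helpers could look like. The helper names come from the use case above, but the bodies and the error type are assumptions - in a real Practica codebase you would likely throw the project's own error object so the centralized error handler maps it to HTTP 400:

```typescript
// Hypothetical helpers inside edit-order-use-case.ts - an illustrative sketch
import { editOrderDTO, editOrderValidator } from './order-schema';

function assertOrderIsValid(updatedOrder: editOrderDTO) {
  // Reuse the cached ajv validator from step 4
  const isValid = editOrderValidator()(updatedOrder);
  if (!isValid) {
    // Assumption: a plain Error; map it to HTTP 400 in your error handler
    throw new Error('Invalid order payload');
  }
}

function assertEditingIsAllowed(status: string | undefined, paymentTermsInDays: number) {
  // Per the requirements: delivered orders and orders due now are immutable
  if (status === 'delivered' || paymentTermsInDays === 0) {
    throw new Error('Editing is not allowed'); // Assumption: mapped to HTTP 400
  }
}
```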
🗝 Key insight: Note how everything we did thus far is mostly coding functions. No fancy constructs, no abstractions, not even classes - we try to keep things as simple as possible. You may of course use other language features when the need arises; we suggest sticking to plain functions by default and reaching for other constructs only when a strong need is identified.
### 6. Add the data access code
We're tasked with saving the edited order in the database. Any DB-related code is located within the folder: [root]/services/order-service/data-access.
Practica supports two popular ORMs: Sequelize (the default) and Prisma. Whichever you choose, both are battle-tested and reputable options that will serve you well as long as the DB complexity is not overwhelming.
Before discussing the ORM side, we wrap the entire DB layer with a simple class that exposes all the DB functions to the domain layer. This is the repository pattern, which advocates decoupling DB concerns from the business-logic code. Inside [root]/services/order-service/data-access/repositories you'll find a file 'order-repository'; open it and add a new function:
```typescript
// [root]/services/order-service/data-access/repositories/order-repository.ts
import { getOrderModel } from './models/order-model'; // 👈 This is the ORM code, explained soon

export async function editOrder(orderId: number, orderDetails): Promise<OrderRecord> {
  const orderEditingResponse = await getOrderModel().update(orderDetails, {
    where: { id: orderId },
  });
  return orderEditingResponse;
}
```
Note that this file contains a type - OrderRecord. This is a plain JavaScript object (POJO) used to interact with the data-access layer. This approach prevents leaking DB/ORM narratives into the domain layer (e.g., ActiveRecord style).
✅ Best practice: Have the data-access layer return plain JavaScript objects to its callers (the repository pattern).
Add the new Status field to this type:
```typescript
type OrderRecord = {
  id: number;
  // ... other existing fields
  status: string; // 👈 Add this field per our requirements
};
```
Let's configure the ORM now and define the Order model - a mapper between a JavaScript object and a database table (a common ORM notion). Open the file [root]/services/order-service/data-access/models/order-model.ts:
```typescript
import { DataTypes } from 'sequelize';
import getDbConnection from '../db-connection';

export default function getOrderModel() {
  // getDbConnection returns a singleton Sequelize (ORM) object -
  // this is necessary to avoid multiple DB connection pools
  return getDbConnection().define('Order', {
    id: {
      type: DataTypes.INTEGER,
      primaryKey: true,
      autoIncrement: true,
    },
    deliveryAddress: {
      type: DataTypes.STRING,
    },
    // ...some other fields here
    status: {
      type: DataTypes.STRING, // 👈 Add this field per our requirements
      allowNull: true,
    },
  });
}
```
This file defines the mapping between the JavaScript objects we receive and return and the database table. Given this definition, the ORM can expose functions to interact with the data.
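To close the loop, the route from step 2 can now call the use case instead of returning hard-coded values. A sketch - the exact wiring in the generated code may differ:

```typescript
// [root]/services/order-service/entry-points/api/routes.ts
import editOrder from '../../domain/edit-order-use-case';

router.put('/:id', async (req, res, next) => {
  try {
    const updatedOrder = await editOrder(Number(req.params.id), req.body);
    res.status(200).json(updatedOrder);
  } catch (err) {
    next(err); // Delegate failures to the centralized error handler
  }
});
```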
### 7. 🥳 You have a robust working flow now
You should now be able to run the automated tests, or POSTMAN, and see the full flow working. It might feel like overkill to create multiple layers and objects - naturally, this level of modularization pays off when things get more complicated in real-world scenarios. Follow these layers and principles to write great code; in a short time, once you become familiar with these techniques, it will feel quick and natural.

🗝 Key insight: Nothing we went through in this article is unique to Practica.js - these are ubiquitous backend concepts. Practica.js brings no overhead beyond common best practices, and this knowledge will serve you in any other scenario, regardless of Practica.js.
We will be grateful if you share with us how to make this guide better
Ideas for future iterations: how to work with the Monorepo commands, focusing on a single component vs. running commands from the root, DB migration.
You just got a Node.js Monorepo solution with one example component/Microservice and multiple libraries. Based on this hardened solution you can build a robust application. The example component/Microservice is located under {your chosen folder name}/services/order-service. This is where you'll find the API and a good spot to start your journey from.

Although Node.js has great frameworks 💚, they were never meant to be production-ready immediately. Practica.js aims to bridge that gap. Based on your preferred framework, we generate example code that demonstrates a full workflow, from API to DB, packed with good practices. For example, we include a hardened Dockerfile, an N-Tier folder structure, great testing templates, and more. This saves a great deal of time and can prevent painful mistakes. All decisions made are neatly and thoughtfully documented. We strive to keep things as simple and standard as possible, and base our work on the popular guide: Node.js Best Practices.
diff --git a/tsconfig.json b/tsconfig.json
deleted file mode 100644
index bcfe75ec..00000000
--- a/tsconfig.json
+++ /dev/null
@@ -1,102 +0,0 @@
-{
- "compilerOptions": {
- /* Visit https://aka.ms/tsconfig.json to read more about this file */
-
- /* Projects */
- // "incremental": true, /* Enable incremental compilation */
- // "composite": true, /* Enable constraints that allow a TypeScript project to be used with project references. */
- // "tsBuildInfoFile": "./", /* Specify the folder for .tsbuildinfo incremental compilation files. */
- // "disableSourceOfProjectReferenceRedirect": true, /* Disable preferring source files instead of declaration files when referencing composite projects */
- // "disableSolutionSearching": true, /* Opt a project out of multi-project reference checking when editing. */
- // "disableReferencedProjectLoad": true, /* Reduce the number of projects loaded automatically by TypeScript. */
-
- /* Language and Environment */
- "target": "es2020" /* Set the JavaScript language version for emitted JavaScript and include compatible library declarations. */,
- // "lib": [], /* Specify a set of bundled library declaration files that describe the target runtime environment. */
- "jsx": "react" /* Specify what JSX code is generated. */,
- // "experimentalDecorators": true, /* Enable experimental support for TC39 stage 2 draft decorators. */
- // "emitDecoratorMetadata": true, /* Emit design-type metadata for decorated declarations in source files. */
- // "jsxFactory": "", /* Specify the JSX factory function used when targeting React JSX emit, e.g. 'React.createElement' or 'h' */
- // "jsxFragmentFactory": "", /* Specify the JSX Fragment reference used for fragments when targeting React JSX emit e.g. 'React.Fragment' or 'Fragment'. */
- // "jsxImportSource": "", /* Specify module specifier used to import the JSX factory functions when using `jsx: react-jsx*`.` */
- // "reactNamespace": "", /* Specify the object invoked for `createElement`. This only applies when targeting `react` JSX emit. */
- // "noLib": true, /* Disable including any library files, including the default lib.d.ts. */
- // "useDefineForClassFields": true, /* Emit ECMAScript-standard-compliant class fields. */
-
- /* Modules */
- "module": "commonjs" /* Specify what module code is generated. */,
- // "rootDir": "./", /* Specify the root folder within your source files. */
- // "moduleResolution": "node", /* Specify how TypeScript looks up a file from a given module specifier. */
- // "baseUrl": "./", /* Specify the base directory to resolve non-relative module names. */
- // "paths": {}, /* Specify a set of entries that re-map imports to additional lookup locations. */
- // "rootDirs": [], /* Allow multiple folders to be treated as one when resolving modules. */
- // "typeRoots": [], /* Specify multiple folders that act like `./node_modules/@types`. */
- // "types": [], /* Specify type package names to be included without being referenced in a source file. */
- // "allowUmdGlobalAccess": true, /* Allow accessing UMD globals from modules. */
- "resolveJsonModule": true /* Enable importing .json files */,
- // "noResolve": true, /* Disallow `import`s, `require`s or ``s from expanding the number of files TypeScript should add to a project. */
-
- /* JavaScript Support */
- "allowJs": true /* Allow JavaScript files to be a part of your program. Use the `checkJS` option to get errors from these files. */,
- // "checkJs": true, /* Enable error reporting in type-checked JavaScript files. */
- // "maxNodeModuleJsDepth": 1, /* Specify the maximum folder depth used for checking JavaScript files from `node_modules`. Only applicable with `allowJs`. */
-
- /* Emit */
- // "declaration": true, /* Generate .d.ts files from TypeScript and JavaScript files in your project. */
- // "declarationMap": true, /* Create sourcemaps for d.ts files. */
- // "emitDeclarationOnly": true, /* Only output d.ts files and not JavaScript files. */
- // "sourceMap": true, /* Create source map files for emitted JavaScript files. */
- // "outFile": "./", /* Specify a file that bundles all outputs into one JavaScript file. If `declaration` is true, also designates a file that bundles all .d.ts output. */
- "outDir": ".dist/" /* Specify an output folder for all emitted files. */,
- // "removeComments": true, /* Disable emitting comments. */
- // "noEmit": true, /* Disable emitting files from a compilation. */
- // "importHelpers": true, /* Allow importing helper functions from tslib once per project, instead of including them per-file. */
- // "importsNotUsedAsValues": "remove", /* Specify emit/checking behavior for imports that are only used for types */
- // "downlevelIteration": true, /* Emit more compliant, but verbose and less performant JavaScript for iteration. */
- // "sourceRoot": "", /* Specify the root path for debuggers to find the reference source code. */
- // "mapRoot": "", /* Specify the location where debugger should locate map files instead of generated locations. */
- // "inlineSourceMap": true, /* Include sourcemap files inside the emitted JavaScript. */
- // "inlineSources": true, /* Include source code in the sourcemaps inside the emitted JavaScript. */
- // "emitBOM": true, /* Emit a UTF-8 Byte Order Mark (BOM) in the beginning of output files. */
- // "newLine": "crlf", /* Set the newline character for emitting files. */
- // "stripInternal": true, /* Disable emitting declarations that have `@internal` in their JSDoc comments. */
- // "noEmitHelpers": true, /* Disable generating custom helper functions like `__extends` in compiled output. */
- // "noEmitOnError": true, /* Disable emitting files if any type checking errors are reported. */
- // "preserveConstEnums": true, /* Disable erasing `const enum` declarations in generated code. */
- // "declarationDir": "./", /* Specify the output directory for generated declaration files. */
-
- /* Interop Constraints */
- // "isolatedModules": true, /* Ensure that each file can be safely transpiled without relying on other imports. */
- // "allowSyntheticDefaultImports": true, /* Allow 'import x from y' when a module doesn't have a default export. */
- "esModuleInterop": true /* Emit additional JavaScript to ease support for importing CommonJS modules. This enables `allowSyntheticDefaultImports` for type compatibility. */,
- // "preserveSymlinks": true, /* Disable resolving symlinks to their realpath. This correlates to the same flag in node. */
- "forceConsistentCasingInFileNames": true /* Ensure that casing is correct in imports. */,
-
- /* Type Checking */
- "strict": true /* Enable all strict type-checking options. */,
- "noImplicitAny": false /* Enable error reporting for expressions and declarations with an implied `any` type.. */,
- // "strictNullChecks": true, /* When type checking, take into account `null` and `undefined`. */
- // "strictFunctionTypes": true, /* When assigning functions, check to ensure parameters and the return values are subtype-compatible. */
- // "strictBindCallApply": true, /* Check that the arguments for `bind`, `call`, and `apply` methods match the original function. */
- // "strictPropertyInitialization": true, /* Check for class properties that are declared but not set in the constructor. */
- // "noImplicitThis": true, /* Enable error reporting when `this` is given the type `any`. */
- // "useUnknownInCatchVariables": true, /* Type catch clause variables as 'unknown' instead of 'any'. */
- // "alwaysStrict": true, /* Ensure 'use strict' is always emitted. */
- // "noUnusedLocals": true, /* Enable error reporting when a local variables aren't read. */
- // "noUnusedParameters": true, /* Raise an error when a function parameter isn't read */
- // "exactOptionalPropertyTypes": true, /* Interpret optional property types as written, rather than adding 'undefined'. */
- // "noImplicitReturns": true, /* Enable error reporting for codepaths that do not explicitly return in a function. */
- // "noFallthroughCasesInSwitch": true, /* Enable error reporting for fallthrough cases in switch statements. */
- // "noUncheckedIndexedAccess": true, /* Include 'undefined' in index signature results */
- // "noImplicitOverride": true, /* Ensure overriding members in derived classes are marked with an override modifier. */
- // "noPropertyAccessFromIndexSignature": true, /* Enforces using indexed accessors for keys declared using an indexed type */
- // "allowUnusedLabels": true, /* Disable error reporting for unused labels. */
- // "allowUnreachableCode": true, /* Disable error reporting for unreachable code. */
-
- /* Completeness */
- // "skipDefaultLibCheck": true, /* Skip type checking .d.ts files that are included with TypeScript. */
- "skipLibCheck": true /* Skip type checking all .d.ts files. */
- },
- "includes": ["./package.json"],
- "exclude": ["node_modules", "**/code-templates", "**/output-folders-for-testing", "**/docs"]
-}