<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[EETechy]]></title><description><![CDATA[Thoughts, stories and ideas.]]></description><link>https://eetechy.com/</link><image><url>https://eetechy.com/favicon.png</url><title>EETechy</title><link>https://eetechy.com/</link></image><generator>Ghost 4.12</generator><lastBuildDate>Thu, 01 Jan 2026 05:42:03 GMT</lastBuildDate><atom:link href="https://eetechy.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Develop performant REST API with Node, Fastify and Objection.js]]></title><description><![CDATA[<p>After developing REST APIs with <a href="https://expressjs.com/">Express</a> and <a href="https://sequelize.org/">Sequelize ORM</a> for some years, I felt it was time to try something new. While researching other NodeJS web frameworks I came across <a href="https://www.fastify.io/">Fastify</a>. 
Among other features, one selling point that caught my eye was their official <a href="https://www.fastify.io/benchmarks/">benchmark</a> which shows</p>]]></description><link>https://eetechy.com/develop-rest-api-with-node-fastify-and-objection-js/</link><guid isPermaLink="false">61829e3e7edf2d4284896d8f</guid><category><![CDATA[NodeJS]]></category><category><![CDATA[Node]]></category><dc:creator><![CDATA[Eduard Hasanaj]]></dc:creator><pubDate>Thu, 04 Nov 2021 15:33:23 GMT</pubDate><media:content url="https://eetechy.com/content/images/2021/11/tutorial3.png" medium="image"/><content:encoded><![CDATA[<img src="https://eetechy.com/content/images/2021/11/tutorial3.png" alt="Develop performant REST API with Node, Fastify and Objection.js"><p>After developing REST APIs with <a href="https://expressjs.com/">Express</a> and <a href="https://sequelize.org/">Sequelize ORM</a> for some years, I felt it was time to try something new. While researching other NodeJS web frameworks I came across <a href="https://www.fastify.io/">Fastify</a>. Among other features, one selling point that caught my eye was their official <a href="https://www.fastify.io/benchmarks/">benchmark</a>, which shows the framework being roughly 4x faster than Express. I was curious what the developer experience with this framework would be like, so I developed a simple task-management REST API whose development stages are detailed throughout this article.</p><p>Another focus of this article is ORM selection. It is very important to avoid any performance bottleneck in the data layer that a particular ORM may introduce. As a developer I find that Sequelize provides a very friendly interface, but at what cost? Performance, of course!</p><h3 id="first-things-first">First things first</h3><p>The first step in this journey is to create a Fastify project. We can use the <a href="https://github.com/fastify/fastify-cli">fastify-cli</a>. 
For this project I decided to go with strong type safety by using TypeScript. The command to create a project with TypeScript support is as follows:</p><pre><code class="language-bash">npx fastify-cli generate &lt;app-name&gt; --lang=ts</code></pre><p>A database and a table will be required for this tutorial. The following SQL snippet defines the required tasks table:</p><pre><code class="language-sql">CREATE TYPE task_status AS ENUM(&apos;backlog&apos;, &apos;pending&apos;, &apos;failed&apos;, &apos;done&apos;);

CREATE TABLE tasks(
    id SERIAL PRIMARY KEY,
    title VARCHAR(30) NOT NULL,
    &quot;description&quot; VARCHAR(256) NOT NULL,
    &quot;status&quot; task_status NOT NULL,
    start_time TIMESTAMP NOT NULL,
    end_time TIMESTAMP NOT NULL,
    deleted BOOLEAN DEFAULT FALSE
);</code></pre><h3 id="picking-up-an-orm">Picking up an ORM </h3><p>One of the most critical parts of a backend app is the data layer, and this layer is mostly represented by the ORM. With a poor choice, the ORM can negatively affect the app&apos;s performance. </p><p>As a Go developer I used to work with the <a href="https://github.com/volatiletech/sqlboiler">Sqlboiler ORM</a>. I was amazed at how it provided its functionality while avoiding reflection as much as possible. The ORM itself is schema-first, which means that the models are generated from the database schema. This philosophy has the following benefits:</p><ul><li>Work with existing databases: Don&apos;t be the tool to define the schema, that&apos;s better left to other tools.</li><li>ActiveRecord-like productivity: Eliminate all SQL boilerplate, have relationships as a first-class concept.</li><li>Optimize hot paths by generating specific code for each schema model.</li></ul><p>The biggest performance gain comes from the fact that specific, hand-rolled-style queries are generated for each model. This allows for hot-path optimization.</p><p>Unfortunately, in the NodeJS environment all ORMs seem to be code-first. They are more focused on being user friendly than performant. In my opinion a balance between user-friendliness and performance must be established. However, during my research I came across <a href="https://www.npmjs.com/package/objection">Objection.js</a>, which is built on top of a highly performant query builder called <a href="https://www.npmjs.com/package/knex">Knex</a>. This convinced me to some degree so I picked it. 
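</p><p>To give a quick flavour of the API (a sketch of mine, not code from the original project): an Objection model query is a thin wrapper over Knex&apos;s builder, and the SQL it generates can be inspected directly:</p><pre><code class="language-ts">import Knex = require(&apos;knex&apos;)
import { Model } from &apos;objection&apos;

// bind a pg-flavoured Knex instance; no connection is opened until a query runs
Model.knex(Knex({ client: &apos;pg&apos; }))

class Task extends Model {
    static tableName = &apos;tasks&apos;
}

// toKnexQuery() exposes the underlying Knex query builder
const sql = Task.query()
    .where(&apos;deleted&apos;, false)
    .limit(10)
    .toKnexQuery()
    .toString()

console.log(sql) // a plain SELECT over the tasks table</code></pre><p>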
</p><p>Unfortunately I did not run benchmarks of my own to confirm the choice; instead I relied on an external <a href="https://github.com/emanuelcasco/typescript-orm-benchmark">benchmark</a>. On close inspection, that benchmark also includes the network overhead of the HTTP requests and the database connection. For a fairer experiment, both should be factored out, for example by mocking the database driver. Even so, Objection was more performant than Sequelize.</p><h3 id="setting-up-data-layer">Setting up Data Layer</h3><p>At first I was trying to reinvent the wheel by introducing some infrastructure services to be shared among different APIs during the app lifecycle. After reading this <a href="https://daily.dev/blog/how-to-build-blazing-fast-apis-with-fastify-and-typescript">article</a> I realized that I was doing it the wrong way. Fastify lets us declare plugins to handle such services. More specifically, the README of the plugins folder contains the following note:</p><blockquote>Plugins define behavior that is common to all the routes in your application. Authentication, caching, templates, and all the other crosscutting concerns should be handled by plugins placed in this folder.</blockquote><p>With that in mind, I created a database plugin with the following code:</p><pre><code class="language-ts">// src/plugins/database.ts
import Knex = require(&apos;knex&apos;)
import { Model } from &apos;objection&apos;;
import config from &apos;../config&apos;
import fp from &apos;fastify-plugin&apos;

export interface DBConfig {
    // plugin options (none needed yet)
}

export default fp&lt;DBConfig&gt;(async (fastify, opts) =&gt; {
    const knex = Knex(config.development.database)
    Model.knex(knex);

    await checkHeartbeat(knex);
    
    fastify.decorate(&apos;knex&apos;, knex)
});
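
// Addition (not in the original article): declare the new decoration so that
// `fastify.knex` also type-checks in route and service files
declare module &apos;fastify&apos; {
    interface FastifyInstance {
        knex: Knex&lt;any, unknown[]&gt;
    }
}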

async function checkHeartbeat(knex: Knex&lt;any,unknown[]&gt;) {
    await knex.raw(&apos;SELECT 1&apos;)
}</code></pre><p>Creating the Knex instance does not by itself verify that the given configuration is correct. For that we issue a trivial query by calling <strong>checkHeartbeat</strong>.</p><p>Next we need to define a model for our task entity, in a folder called models under the src folder.</p><pre><code class="language-ts">// src/models/task.ts
import { Model } from &apos;objection&apos;

export default class Task extends Model {
    id!: number
    title!: string
    description!: string
    status!: string
    startTime!: Date
    endTime!: Date
    deleted!: boolean
    
    static tableName = &apos;tasks&apos;

    // NOTE: the tasks table uses snake_case columns (start_time, end_time)
    // while this model uses camelCase properties; passing Objection&apos;s
    // knexSnakeCaseMappers() into the Knex config keeps the two in sync
}</code></pre><h3 id="fastify-philosophy-on-creating-rest-apis">Fastify philosophy on creating REST APIs</h3><p>Before diving straight into API development, it is best to think a bit about the anatomy of a Fastify route declaration. This helps us see the big picture and plan a better organization of the route code, which must be clear and easy to maintain.</p><p>Traditionally, when we talk about the structure of an API endpoint we are referring to a path, which is a string, and the handler, which is the function executed when a request hits the endpoint&apos;s path. Usually, the handler contains the following logic blocks:</p><ol><li>parse request body/parameters</li><li>validate the obtained input</li><li>process business logic</li></ol><p>Fastify handlers are quite different. Due to the architecture enforced by the framework, a handler should not contain parsing or validation logic. When setting up a route, we configure the model into which the request input is deserialized, and a validation schema that must be satisfied for the request to be processed; otherwise a 400 Bad Request status code is sent to the client. So only the business logic is left to be processed in the handler. Let&apos;s see the example of creating a task:</p><pre><code class="language-ts">// src/routes/api/tasks/index.ts
fastify.post&lt;{Body: Task, Reply: Task | Error}&gt;(
  &apos;/&apos;,
  {
    schema: {
      body: TaskSchema
    }, 
  },
  async (req, res) =&gt; {
    try {
      // await here, so a rejected promise is caught by the catch block below
      return await TaskService.createTask(req.body)
    }
    catch(err: any) {
      return err;
    }
  }
)</code></pre><ol><li>fastify.post declares a route handling POST /api/tasks</li><li>generic types specify the type of the body being received and of the reply that should be sent; if the handler returns an object whose type is not Task, a type error is raised, which guarantees strong type safety.</li><li>the first argument, &apos;/&apos;, is the route path; please see <a href="https://github.com/fastify/fastify-autoload">fastify-autoload</a> for more context on how routes are registered with this plugin.</li><li>the schema option sets up validation; I have created a folder at src/schemas which contains all validation schemas.</li><li>the async function is the handler</li></ol><p>The task validation schema referenced by the route options:</p><pre><code class="language-ts">// src/schemas/task_schema.ts
import { Type } from &quot;@sinclair/typebox&quot;;

export const TaskSchema = Type.Object({
    id: Type.Optional(Type.Integer()),
    title: Type.String(),
    description: Type.String(),
    status: Type.String(),
    start_time: Type.Optional(Type.String()),
    end_time: Type.Optional(Type.String()),
    deleted: Type.Optional(Type.Boolean())
});</code></pre><p>In order to keep handlers as slim as possible I decided to extract all business logic into a separate layer called services. I went with static methods instead of creating a service object instance for each request. Here is the declaration of TaskService.createTask used in the handler above:</p><pre><code class="language-ts">// src/services/task_service.ts
static async createTask(task: Task): Promise&lt;Task&gt; {
    return await Task.query().insert(task);
}</code></pre><p>With this approach we get better encapsulation of the business logic and, at the same time, avoid ending up with large route files that are difficult to read and maintain.</p><h3 id="other-apis">Other APIs</h3><p>In this section I will cover the remaining APIs: listing tasks, and getting, updating and deleting a particular task.</p><p><strong>List Task API - [GET] /api/tasks</strong></p><p>Route declaration for getting the list of tasks:</p><pre><code class="language-ts">// src/routes/api/tasks/index.ts
fastify.get&lt;{Querystring: ListQueryOptions, Reply: Task[]}&gt;(
  &apos;/&apos;,
  {
    schema: {
      querystring: ListQueryOptionsSchema,
    },
  },
  async (req, res) =&gt; {
    return await TaskService.getTaskList(req.query)
  }
)</code></pre><p>This API accepts query parameters defined by ListQueryOptionsSchema. Here is the declaration of the type and the validation schema:</p><pre><code class="language-ts">// src/types/list_query_options.ts
import { Type } from &apos;@sinclair/typebox&apos;

export interface ListQueryOptions {
    page: number
    count: number
    query: string
}
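
// Addition (not in the original article): a stricter variant that supplies
// defaults and caps `count`, so a client cannot request arbitrarily large pages
export const BoundedListQueryOptionsSchema = Type.Object({
    page: Type.Integer({ minimum: 1, default: 1 }),
    count: Type.Integer({ minimum: 1, maximum: 100, default: 20 }),
    query: Type.Optional(Type.String())
})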

export const ListQueryOptionsSchema = Type.Object({
    page: Type.Integer(),
    count: Type.Integer(),
    query: Type.Optional(Type.String())
})</code></pre><p>Reading the ListQueryOptionsSchema definition we can see that both page and count are mandatory. This is done just for demonstration purposes; it is perfectly doable to make both parameters optional and fill them with default values.</p><p>However, it is really important to cap count (for example with Type.Integer({ minimum: 1, maximum: 100 })), because otherwise the API can be abused in a denial-of-service attack that drives resource consumption up.</p><p>The business logic for getTaskList:</p><pre><code class="language-ts">// src/services/task_service.ts
static async getTaskList(lso: ListQueryOptions): Promise&lt;Task[]&gt; {
    const offset = lso.count * (lso.page - 1)
    return await Task.query()
        .where(&apos;deleted&apos;, false)
        .limit(lso.count)
        .offset(offset);
}</code></pre><p>This is a good example of how the service layer keeps the route declaration code as small as possible.</p><p><strong>Get Task by Id - [GET] /api/tasks/:id</strong></p><p>Route declaration for getting a task by id:</p><pre><code class="language-ts">// src/routes/api/tasks/index.ts
fastify.get&lt;{ Params: PathIdParam, Reply: Task | Error}&gt;(
  &apos;/:id&apos;,
  {
    schema: {
      params: PathIdParamSchema,
    }
  },
  async (req, res) =&gt; {
    try {
      return await TaskService.getTask(req.params.id);
    }
    catch(err: any) {
      return err;
    }
  }
)</code></pre><p>The API requires a path parameter that specifies the resource id. PathIdParamSchema is the validation schema and PathIdParam is the type which declares the id parameter. Here is the relevant code:</p><pre><code class="language-ts">// src/types/path_id_param.ts
import { Type } from &quot;@sinclair/typebox&quot;;

export interface PathIdParam {
    id: number
}

export const PathIdParamSchema = Type.Object({
    id: Type.Integer()
})</code></pre><p>Relevant code of service method getTask:</p><pre><code class="language-ts">// src/services/task_service.ts
// httpErrors is assumed to come from the http-errors package,
// i.e. `import httpErrors from &apos;http-errors&apos;` at the top of the file
static async getTask(id: number): Promise&lt;Task&gt; {
    const task = await Task.query()
        .findById(id)
        .where(&apos;deleted&apos;, false);

    if (!task) {
        throw new httpErrors.NotFound()
    }

    return task;
}</code></pre><p><strong>Update Task by Id - [PUT] /api/tasks/:id</strong></p><p>Route declaration for updating a task:</p><pre><code class="language-ts">// src/routes/api/tasks/index.ts
fastify.put&lt;{Params: PathIdParam, Body: Task, Reply: Task | Error}&gt;(
  &apos;/:id&apos;,
  {
    schema: {
      params: PathIdParamSchema,
      body: TaskSchema
    },
  },
  async (req, res) =&gt; {
    req.body.id = req.params.id;
    try {
      return await TaskService.updateTask(req.body)
    }
    catch(err: any) {
      return err
    }
  }
)</code></pre><p>The route configuration is almost identical to the previous one. However, here we also accept a body, which is validated with the task schema.</p><p>The relevant code of service method updateTask:</p><pre><code class="language-ts">// src/services/task_service.ts
static async updateTask(task: Task): Promise&lt;Task&gt; {
    const oldTask = await this.getTask(task.id);

    return await oldTask.$query().updateAndFetch(task);
}</code></pre><p><strong>Delete Task by Id - [DELETE] /api/tasks/:id</strong></p><p>Route declaration for deleting a task:</p><pre><code class="language-ts">// src/routes/api/tasks/index.ts
fastify.delete&lt;{Params: PathIdParam, Reply: Task | Error}&gt;(
  &apos;/:id&apos;,
  {
    schema: {
      params: PathIdParamSchema,
    },
  },
  async (req, res) =&gt; {
    try {
      return await TaskService.deleteTask(req.params.id)
    }
    catch(err: any) {
      return err
    }
  }
)</code></pre><p>The route configuration is identical to that of the get-task-by-id route.</p><p>The relevant code of service method deleteTask:</p><pre><code class="language-ts">// src/services/task_service.ts
static async deleteTask(id: number): Promise&lt;Task&gt; {
    const t = await this.getTask(id);

    await t.$query().updateAndFetch({
        deleted: true
    });

    return t;
}</code></pre><h3 id="conclusion">Conclusion</h3><p>Creating blazing fast REST APIs with NodeJS has never been easier or more developer-friendly than it is now. With the Fastify framework we can take advantage of declarative programming to set up route validation for input and output formats. However, we need to be careful about the choice of ORM, as it can dramatically affect the performance of the application. In the NodeJS environment, ORMs are more focused on being user-friendly than performant; in my opinion a balance between the two must be established.</p><h3 id="the-source-code-can-be-found-here">The source code can be found <a href="https://github.com/eduardhasanaj/eetechy-blog-examples/tree/main/node-rest-api">here</a></h3>]]></content:encoded></item><item><title><![CDATA[Create a client library for an external REST Service]]></title><description><![CDATA[<p>When developing backend applications, in order to provide some functionality, we may need to interact with other REST services. Most popular services may have client libraries in different languages like C#, JS, Python. However, there are situations where a service provider does not offer such a library for the language that</p>]]></description><link>https://eetechy.com/create-a-client-library-for-an-external-http-service/</link><guid isPermaLink="false">617683067edf2d4284896aca</guid><category><![CDATA[go]]></category><dc:creator><![CDATA[Eduard Hasanaj]]></dc:creator><pubDate>Tue, 26 Oct 2021 10:54:06 GMT</pubDate><media:content url="https://eetechy.com/content/images/2021/10/rest-service.png" medium="image"/><content:encoded><![CDATA[<img src="https://eetechy.com/content/images/2021/10/rest-service.png" alt="Create a client library for an external REST Service"><p>When developing backend applications, in order to provide some functionality, we may need to interact with other REST services. Most popular services may have client libraries in different languages like C#, JS, Python. 
However, there are situations where a service provider does not offer such a library for the language that we are currently working with. What to do in such a case? Can&apos;t we just make HTTP requests directly in our business logic? What about authentication, and how do we handle tokens that expire? These and more questions will be answered in this article.</p><h3 id="first-things-first">First things first</h3><p>For the sake of this article, I picked the <a href="https://thecatapi.com/">Cat API</a> which, as stated on the official page, offers an API for retrieving cat information such as breeds, categories, etc.</p><blockquote>A public service API all about Cats, free to use when making your fancy new App, Website or Service.</blockquote><p>The Cat API does not offer a client library for consuming the service in Go, so we are going to create one.</p><h3 id="creating-the-client">Creating the client</h3><p>Go is not an object-oriented language in the classical sense, but it supports an OOP style to some degree through interfaces and structs. In our case we are going to use a struct for the client, which will hold all the information the client needs to operate.</p><pre><code class="language-go">type CatApiClient struct {
	host       url.URL
	accessKey  string
	version    string
	httpClient *http.Client
}</code></pre><ul><li>host: it is the URL of the service and in our case is: https://api.thecatapi.com.</li><li>accessKey: you can get one by signing up at <a href="https://thecatapi.com/signup">Cat API</a>.</li><li>version: it is the api version such as v1, v2 etc.</li><li>httpClient: it is the http client which carries out http requests for the client and is reused throughout the lifecycle of the client for efficiency. </li></ul><p>In Go specification there are no constructors and for struct initialization we use functions. To create and initialize a CatApiClient, the function constructor is declared as following:</p><pre><code class="language-go">func NewClient(host, accessKey, v string) (*CatApiClient, error) {
	u, err := url.Parse(host)
	if err != nil {
		return nil, ErrInvalidHost
	}

	apiClient := &amp;CatApiClient{host: *u, accessKey: accessKey, version: v}

	// create default http client which will execute requests.
	apiClient.httpClient = &amp;http.Client{Timeout: 10 * time.Second}

	return apiClient, nil
}</code></pre><p>Before continuing, we need struct entities where response serialized data will be deserialized into. Lets take for example /breeds API which returns a list of breeds. In order to get the response format we make a request in Postman at <a href="https://api.thecatapi.com/v1/breeds">breeds API</a> and we get a response like this:</p><pre><code class="language-JSON">[
    {
        &quot;weight&quot;: {
            &quot;imperial&quot;: &quot;7  -  10&quot;,
            &quot;metric&quot;: &quot;3 - 5&quot;
        },
        &quot;id&quot;: &quot;abys&quot;,
        &quot;name&quot;: &quot;Abyssinian&quot;,
        &quot;cfa_url&quot;: &quot;http://cfa.org/Breeds/BreedsAB/Abyssinian.aspx&quot;,
        &quot;vetstreet_url&quot;: &quot;http://www.vetstreet.com/cats/abyssinian&quot;,
        &quot;vcahospitals_url&quot;: &quot;https://vcahospitals.com/know-your-pet/cat-breeds/abyssinian&quot;,
        &quot;temperament&quot;: &quot;Active, Energetic, Independent, Intelligent, Gentle&quot;,
        &quot;origin&quot;: &quot;Egypt&quot;,
        &quot;country_codes&quot;: &quot;EG&quot;,
        &quot;country_code&quot;: &quot;EG&quot;,
        &quot;description&quot;: &quot;The Abyssinian is easy to care for, and a joy to have in your home. They&#x2019;re affectionate cats and love both people and other animals.&quot;,
        &quot;life_span&quot;: &quot;14 - 15&quot;,
        &quot;indoor&quot;: 0,
        &quot;lap&quot;: 1,
        &quot;alt_names&quot;: &quot;&quot;,
        &quot;adaptability&quot;: 5,
        &quot;affection_level&quot;: 5,
        &quot;child_friendly&quot;: 3,
        &quot;dog_friendly&quot;: 4,
        &quot;energy_level&quot;: 5,
        &quot;grooming&quot;: 1,
        &quot;health_issues&quot;: 2,
        &quot;intelligence&quot;: 5,
        &quot;shedding_level&quot;: 2,
        &quot;social_needs&quot;: 5,
        &quot;stranger_friendly&quot;: 5,
        &quot;vocalisation&quot;: 1,
        &quot;experimental&quot;: 0,
        &quot;hairless&quot;: 0,
        &quot;natural&quot;: 1,
        &quot;rare&quot;: 0,
        &quot;rex&quot;: 0,
        &quot;suppressed_tail&quot;: 0,
        &quot;short_legs&quot;: 0,
        &quot;wikipedia_url&quot;: &quot;https://en.wikipedia.org/wiki/Abyssinian_(cat)&quot;,
        &quot;hypoallergenic&quot;: 0,
        &quot;reference_image_id&quot;: &quot;0XYvRd7oD&quot;,
        &quot;image&quot;: {
            &quot;id&quot;: &quot;0XYvRd7oD&quot;,
            &quot;width&quot;: 1204,
            &quot;height&quot;: 1445,
            &quot;url&quot;: &quot;https://cdn2.thecatapi.com/images/0XYvRd7oD.jpg&quot;
        }
    },
    ...
]</code></pre><p>We need to convert the object inside the JSON array to a Go struct. Luckily for us there is an <a href="https://mholt.github.io/json-to-go/">online tool</a> which can create the required Go struct for that JSON object.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://eetechy.com/content/images/2021/10/json-to-go.png" class="kg-image" alt="Create a client library for an external REST Service" loading="lazy" width="2000" height="694" srcset="https://eetechy.com/content/images/size/w600/2021/10/json-to-go.png 600w, https://eetechy.com/content/images/size/w1000/2021/10/json-to-go.png 1000w, https://eetechy.com/content/images/size/w1600/2021/10/json-to-go.png 1600w, https://eetechy.com/content/images/size/w2400/2021/10/json-to-go.png 2400w" sizes="(min-width: 1200px) 1200px"><figcaption>JSON to Go struct</figcaption></figure><p>We change the struct name from AutoGenerated to Breed and save it in a file breed.go. You may also extract nested structs, such as the Image property, into a separate struct like ImageDescriptor (this is not required).</p><p>Now that we have the required data models for the breed API we can continue to the next topic.</p><h3 id="create-client-helper-methods">Create client helper methods</h3><p>For maximum code reusability, it is necessary to see the big picture before going straight into writing a certain functionality. By that I mean we need to think about authentication and request construction, both of which can be reused between different API calls. This makes the code easier to maintain. It would be a disaster if such logic were scattered among different API calls: finding a bug in one of them would mean reviewing and fixing every place the logic was copy-pasted, a maintenance nightmare.</p><p>For the Cat API I created a general-purpose method for performing a GET request. 
It takes three parameters as shown in the following code snippet:</p><ul><li>path: defines the path of the request without including host or version; both host and version are handled automatically inside this function.</li><li>query: it is basically a map with query parameters that will be appended to the request</li><li>output: it is the structure where the response will we deserialized into; for example for breed list we pass a pointer to a Breed slice.</li></ul><pre><code class="language-go">func (c *CatApiClient) execGet(path string, query url.Values, output interface{}) error {
	url := c.host
	url.Path = &quot;/&quot; + c.version + path

	req, err := http.NewRequest(http.MethodGet, url.String(), nil)
	if err != nil {
		return err
	}
    
	req.Header.Add(&quot;x-api-key&quot;, c.accessKey)

	req.URL.RawQuery = query.Encode()

	resp, err := c.httpClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
    
	if resp.StatusCode != 200 {
		errBytes, _ := io.ReadAll(resp.Body)
		return errors.New(string(errBytes))
	}

	dec := json.NewDecoder(resp.Body)
	return dec.Decode(output)
}</code></pre><p>The method is also responsible for taking care of authentication: as can be seen in the code, the x-api-key header is added to every request.</p><h3 id="get-breeds-api">Get Breeds API</h3><p>Finally, all the required logic is in place, so we can complete the first Cat API call with little effort. Because execGet abstracts most of the logic away, the GetBreeds method ends up as thin as the following snippet:</p><pre><code class="language-go">func (c *CatApiClient) GetBreeds(query url.Values) ([]*Breed, error) {
	var breeds []*Breed

	if err := c.execGet(&quot;/breeds&quot;, query, &amp;breeds); err != nil {
		return nil, err
	}

	return breeds, nil
}</code></pre><p>Basically we have nine lines of code for one API call!</p><h3 id="search-breeds-api">Search Breeds API</h3><p>Let&apos;s extend the Cat API client with search functionality for demo purposes. The response format does not change, so the Breed struct and all required data models are already in place.</p><pre><code class="language-go">func (c *CatApiClient) SearchBreeds(query url.Values) ([]*Breed, error) {
	var breeds []*Breed

	if err := c.execGet(&quot;/breeds/search&quot;, query, &amp;breeds); err != nil {
		return nil, err
	}

	return breeds, nil
}</code></pre><p>As expected, this API call also did not force us to write any extra logic for making a request to the Cat API.</p><h3 id="lazy-authentication">Lazy Authentication</h3><p>There is one more topic I wanted to discuss here, based on a specific scenario I encountered at work. What if our access key expires? In such a case we need to renew it, and it is best to handle this process as smoothly as possible. Unfortunately (for this demo), the Cat API has a non-expiring access key, so the code from now on is hypothetical and just for demonstration purposes.</p><p>First of all we need a new method to handle the token/access-key refresh procedure. This method will be called in low-level helper functions such as <strong>execGet </strong>for better code reusability.</p><pre><code class="language-go">func (c *CatApiClient) execGet(path string, query url.Values, output interface{}) error {
	if c.hasTokenExpired() {
		if err := c.refreshToken(); err != nil {
			return err
		}
	}

	url := c.host
	url.Path = &quot;/&quot; + c.version + path
    ...
}

// NOTE: this variant assumes two extra fields on CatApiClient:
//   refreshMutex sync.Mutex
//   tokenExp     time.Time
func (c *CatApiClient) refreshToken() error {
	c.refreshMutex.Lock()
	// defer guarantees the mutex is released on every return path; an explicit
	// Unlock at the end would leak the lock on the early return below
	defer c.refreshMutex.Unlock()

	// another caller may have refreshed the token while we waited for the lock
	if !c.hasTokenExpired() {
		return nil
	}

	accessKey, err := retrieveNewToken(&quot;token refresh string or credentials&quot;)
	if err != nil {
		return err
	}

	c.accessKey = accessKey
	// token lifetime as given by the auth service specification, minus a small
	// offset to avoid treating an almost-expired token as valid
	expiry := 120 * time.Second
	c.tokenExp = time.Now().Add(expiry)

	return nil
}
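
// Addition (hypothetical, not part of the original client): compute the stored
// expiry with a small safety skew, so the client refreshes the token slightly
// before the service actually rejects it
func expiryWithSkew(lifetime time.Duration) time.Time {
	const skew = 10 * time.Second
	return time.Now().Add(lifetime - skew)
}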

func (c *CatApiClient) hasTokenExpired() bool {
	t := time.Now()

	return t.After(c.tokenExp)
}</code></pre><p>The refresh token procedure starts with a mutex lock. You may ask why we need one. I did not want retrieveNewToken to be called multiple times, producing different tokens, because that would be a waste of resources. Instead, only the caller that acquires the lock can enter the critical region (the code between Lock and Unlock). </p><p>After the lock is acquired, a check for token expiration is done. If the token has expired, a new one is retrieved by calling <strong>retrieveNewToken</strong>. For the other callers, which were blocked on the mutex, this check will indicate a valid token, since it was just renewed by the first caller.</p><p>After the token is retrieved, the client struct properties are updated accordingly. Here it is important to mention the calculation of the token expiry. I would suggest setting the expiry 5-10s earlier than the lifetime given in the service&apos;s authentication specification, so we avoid a false positive in the token expiry check.</p><h3 id="conclusions">Conclusions</h3><p>Sometimes we need to interact with external REST API services which do not have a client library in the language our application is written in. In such a case the best approach is to abstract the interaction with the service into a dedicated entity, for better testing and maintenance. It is critical to provide a smooth authentication mechanism which supports token renewal under the hood. Good attention should be paid to code reusability, which is an investment in the long term. 
It is necessary to see the big picture before going straight into writing a certain functionality.</p><p><strong>The source code can be found <a href="https://github.com/eduardhasanaj/eetechy-blog-examples/tree/main/cat-api">here</a></strong>.</p>]]></content:encoded></item><item><title><![CDATA[NodeJS Internals: Event Loop]]></title><description><![CDATA[<p>When getting into NodeJS, the first thing that might be a surprise is its single-threaded model. Nowadays most operating systems support multi-core CPU architectures and threading, which enables developers to perform multiple operations at the same time. But does NodeJS take advantage of multi-threading or is it really single-threaded?</p>]]></description><link>https://eetechy.com/nodejs-internals-event-loop/</link><guid isPermaLink="false">613f63eb7edf2d42848964ab</guid><category><![CDATA[NodeJS]]></category><category><![CDATA[Node]]></category><dc:creator><![CDATA[Eduard Hasanaj]]></dc:creator><pubDate>Mon, 18 Oct 2021 15:54:35 GMT</pubDate><media:content url="https://eetechy.com/content/images/2021/10/Layer-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://eetechy.com/content/images/2021/10/Layer-1.png" alt="NodeJS Internals: Event Loop"><p>When getting into NodeJS, the first thing that might be a surprise is its single-threaded model. Nowadays most operating systems support multi-core CPU architectures and threading, which enables developers to perform multiple operations at the same time. But does NodeJS take advantage of multi-threading or is it really single-threaded? How does it perform operations tied to hardware, like I/O, or other heavy operations like DNS lookup? I will try to answer these questions throughout this article.</p><h2 id="the-nature-of-io-and-cpu-bound-operations">The Nature of I/O and CPU Bound Operations</h2><p>Peripheral devices require a way of communicating with the CPU to coordinate operations such as data transfer. 
The most common modes of achieving such communication are programmed I/O, interrupt-initiated I/O and direct memory access. All of these modes rely on some kind of interrupt signal to notify the CPU that it can continue. This means that a waiting mechanism is needed for the data to become ready so it can be processed chunk by chunk. But can we wait on the main thread? The answer is no: we have just one thread, and waiting on it would severely hurt performance and therefore scalability. Imagine that the application is serving 1000 clients and one of them calls an API which needs to perform some I/O by reading a file. The time needed to read that file would then add up to the response time of every connected client. The situation escalates quickly if several clients need to perform I/O operations at the same time.</p><p>CPU-bound operations are also demanding and may take considerable time to complete. During that time the thread is completely busy and cannot process other operations. In this category fall cryptographic algorithms such as PBKDF2, as well as DNS lookups, which are expensive to process.</p><p>NodeJS really needs to offload such operations from the main thread in order to deliver the promised scalability. It achieves that by taking advantage of Libuv.</p><h2 id="libuv">Libuv</h2><p>At the core of NodeJS stands Libuv, whose main job is to provide an engine for offloading heavy operations from the main thread. Here is a description from the <a href="https://github.com/libuv/libuv">libuv repository</a>:</p><blockquote>libuv is cross-platform support library which was originally written for <a href="https://nodejs.org/">Node.js</a>. 
It&#x2019;s designed around the event-driven asynchronous I/O model.</blockquote><blockquote>The library provides much more than a simple abstraction over different I/O polling mechanisms: &#x2018;handles&#x2019; and &#x2018;streams&#x2019; provide a high level abstraction for sockets and other entities; cross-platform file I/O and threading functionality is also provided, amongst other things.</blockquote><p>Libuv is a C library which implements the event loop queues and the underlying OS API calls. Let&apos;s take a closer look at libuv&apos;s internals.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://eetechy.com/content/images/2021/09/image.png" class="kg-image" alt="NodeJS Internals: Event Loop" loading="lazy" width="1020" height="493" srcset="https://eetechy.com/content/images/size/w600/2021/09/image.png 600w, https://eetechy.com/content/images/size/w1000/2021/09/image.png 1000w, https://eetechy.com/content/images/2021/09/image.png 1020w" sizes="(min-width: 720px) 720px"><figcaption>Source: http://docs.libuv.org/en/v1.x/_images/architecture.png</figcaption></figure><p>Network I/O is performed on the main thread. Libuv takes advantage of asynchronous sockets, which are polled using the best mechanism the OS provides: epoll on Linux, kqueue on macOS, event ports on SunOS and IOCP on Windows. During a loop iteration, the main thread blocks for a specific amount of time waiting for activity on the sockets that have been added to the poller, and the respective callbacks are fired to signal socket conditions (closed, open, ready to read / write).</p><p>On the other hand, functionality that is synchronous by nature, such as file I/O, is processed on a separate thread taken from a global thread pool. 
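</p><p>A quick way to see the thread pool in action (a standalone experiment, not code from the article) is to launch several CPU-heavy <code>crypto.pbkdf2</code> calls at once; with the default pool of four threads they complete almost simultaneously:</p><pre><code class="language-JS">const crypto = require('crypto');

// Each pbkdf2 call is CPU-heavy, yet none of them blocks the main
// thread: libuv hands the work to its thread pool (4 threads by
// default, tunable via UV_THREADPOOL_SIZE), so the first four calls
// finish at roughly the same time instead of one after another.
const start = Date.now();
let finished = 0;
for (let i = 1; i !== 5; i += 1) {
  crypto.pbkdf2('secret', 'salt', 100000, 64, 'sha512', (err) => {
    if (err) throw err;
    finished += 1;
    console.log('hash ' + i + ' done after ' + (Date.now() - start) + ' ms');
  });
}
console.log('main thread stays free while the hashes are computed');</code></pre><p>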
The same is done for some very expensive DNS-related operations such as getaddrinfo and getnameinfo.</p><h2 id="event-loop">Event Loop</h2><p>The Event Loop is an abstract concept used to model the event-driven architecture of NodeJS. It consists of six phases. Each phase has a dedicated FIFO queue of callbacks which is drained when the loop enters that phase.</p><p>As illustrated in the following diagram, a loop tick starts with timers, continues clockwise through all the phases and finishes with the Close Callbacks phase. At the end, the loop decides whether to run another tick or exit, depending on the loop running mode (default, once, nowait) and on whether there are active handles.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://eetechy.com/content/images/2021/10/event.gif" class="kg-image" alt="NodeJS Internals: Event Loop" loading="lazy" width="994" height="742"><figcaption>Event Loop Phases</figcaption></figure><blockquote>Note: As the execution of the loop progresses, more and more callbacks may be pushed into a given queue. This can block the event loop forever if, for example, we make a high number of calls that keep adding to the microtask queue, as happens when a Promise is scheduled recursively. This does not apply to timers: even with a 0 ms threshold, a timer&apos;s execution is deferred to the next loop tick, where it is scheduled again.</blockquote><h3 id="timers">Timers</h3><blockquote>A timer specifies the threshold in ms after which its callback will be fired.</blockquote><p>The loop starts with the timers phase. A timer&apos;s callback is executed as soon as its specified time threshold has passed. However, OS scheduling (process / thread priority) or the execution of callbacks in other phases may delay a timer. 
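</p><p>This drift is easy to observe with a small standalone sketch (not from the article): a timer armed with a 1 ms threshold cannot fire until the main thread becomes free.</p><pre><code class="language-JS">// The 1 ms timer below fires only after the synchronous busy-wait
// has finished, so the observed delay is far above the threshold.
const start = Date.now();
let observedDelay = -1;
setTimeout(() => {
  observedDelay = Date.now() - start;
  console.log('timer fired after ' + observedDelay + ' ms');
}, 1);
// Keep the main thread busy for about 100 ms.
while (!(Date.now() - start > 100)) {}</code></pre><p>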
Technically, the polling time in the poll phase adds to a timer's execution delay.</p><h3 id="pending-callback">Pending Callback</h3><p>After the Timers phase, the loop enters the Pending Callbacks phase. This phase is dedicated to operations which could not be completed during the previous run of the loop. Take a TCP socket <strong>ECONNREFUSED</strong> error as an example: most operating systems wait for a timeout before reporting such a problem, so the delivery of this error may be scheduled in the Pending Callbacks phase.</p><h3 id="idle-prepare">Idle, Prepare</h3><p>This phase is dedicated entirely to libuv's internal use. For example, libuv handles such as TCP/UDP handles are initialized in this phase. Some handles need time to become ready, like setting up a TCP connection, and this phase is ideal for running post-initialization work on them.</p><h3 id="poll">Poll</h3><p>This is one of the most important phases, as it takes care of processing I/O operations. It has two important tasks:</p><ol><li>calculate the timeout for I/O polling (it cannot wait forever because that would block the event loop)</li><li>drain the poll event queue which is populated during polling</li></ol><p>How is the polling timeout calculated? If we poll for too long, timers will fire late, which hurts their reliability. To understand this we need to dig into the NodeJS source code, at <a href="https://github.com/nodejs/node/blob/master/deps/uv/src/unix/core.c">core.c</a>.</p><pre><code class="language-cpp">int uv_run(uv_loop_t* loop, uv_run_mode mode) {
  ...
    timeout = 0;
    if ((mode == UV_RUN_ONCE &amp;&amp; !ran_pending) || mode == UV_RUN_DEFAULT)
      timeout = uv_backend_timeout(loop);

    uv__io_poll(loop, timeout);
    ...
}

int uv_backend_timeout(const uv_loop_t* loop) {
  if (loop-&gt;stop_flag != 0)
    return 0;

  if (!uv__has_active_handles(loop) &amp;&amp; !uv__has_active_reqs(loop))
    return 0;

  if (!QUEUE_EMPTY(&amp;loop-&gt;idle_handles))
    return 0;

  if (!QUEUE_EMPTY(&amp;loop-&gt;pending_queue))
    return 0;

  if (loop-&gt;closing_handles)
    return 0;

  return uv__next_timeout(loop);
}</code></pre><p>In the snippet above, <strong>uv_run</strong> is the main function which executes all the phases. Before jumping to the Poll phase, a timeout is calculated by calling <strong>uv_backend_timeout</strong>. If there is other pending work (idle handles, pending callbacks, closing handles) or nothing active at all, the timeout is 0, so the loop enters the Poll phase but returns early. As for timers, the polling timeout is calculated by <strong>uv__next_timeout</strong>, which returns the time until the nearest timer is due; this ensures that timers won&apos;t be starved by I/O polling.</p><p>When the watcher queue is empty there is no need to wait for the calculated polling timeout, since no I/O operations are registered, so the loop moves immediately to the Check phase.</p><h3 id="check-phase">Check Phase</h3><p>This phase processes the setImmediate callback queue. Based on my reading of the source code, after the first immediate callback is processed the microtask queue is drained implicitly through the call to runNextTicks, as can be seen in the following snippet:</p><pre><code class="language-JS">function processImmediate() {
    const queue = outstandingQueue.head !== null ?
      outstandingQueue : immediateQueue;
    let immediate = queue.head;

    // Clear the linked list early in case new `setImmediate()`
    // calls occur while immediate callbacks are executed
    if (queue !== outstandingQueue) {
      queue.head = queue.tail = null;
      immediateInfo[kHasOutstanding] = 1;
    }

    let prevImmediate;
    let ranAtLeastOneImmediate = false;
    while (immediate !== null) {
      if (ranAtLeastOneImmediate)
        runNextTicks();
      else
        ranAtLeastOneImmediate = true;

      ...
    }
}</code></pre><p>Here is the definition of <strong>runNextTicks</strong>, where <strong>runMicrotasks</strong> is invoked:</p><pre><code class="language-JS">function runNextTicks() {
  if (!hasTickScheduled() &amp;&amp; !hasRejectionToWarn())
    runMicrotasks();
  if (!hasTickScheduled() &amp;&amp; !hasRejectionToWarn())
    return;

  processTicksAndRejections();
}</code></pre><p>During my study of the NodeJS source code I had a hard time figuring out how the microtask queue is handled. It turns out microtasks are treated with high priority, running right after the nextTick callbacks.</p><h3 id="close-callbacks">Close Callbacks</h3><p>This phase is dedicated to callbacks that should fire after a resource is destroyed abruptly; for a socket, for example, the <strong>&apos;close&apos;</strong> event is emitted during this phase. In normal situations, however, such events are dispatched using nextTick.</p><h3 id="processnexttick">process.nextTick</h3><p>Technically, nextTick is part of the asynchronous NodeJS API. It has a dedicated callback queue called nextTickQueue. This queue is exhausted after the current operation completes, regardless of the current phase of the event loop.</p><p>This API is very powerful for executing logic at very high priority, even higher than microtasks. At the same time it should be used with caution, as it can starve the event loop, as stated in the <a href="https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick/">official docs</a>:</p><blockquote>Looking back at our diagram, any time you call <code>process.nextTick()</code> in a given phase, all callbacks passed to <code>process.nextTick()</code> will be resolved before the event loop continues. 
This can create some bad situations because <strong>it allows you to &quot;starve&quot; your I/O by making recursive <code>process.nextTick()</code> calls</strong>, which prevents the event loop from reaching the <strong>poll</strong> phase.</blockquote><h2 id="references">References</h2><ul><li><a href="https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick/">https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick/</a></li><li><a href="https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/13_IOSystems.html">https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/13_IOSystems.html</a></li><li><a href="https://www.geeksforgeeks.org/io-interface-interrupt-dma-mode/">https://www.geeksforgeeks.org/io-interface-interrupt-dma-mode/</a></li><li><a href="https://cfsamson.github.io/book-exploring-async-basics/6_epoll_kqueue_iocp.html">https://cfsamson.github.io/book-exploring-async-basics/6_epoll_kqueue_iocp.html</a></li></ul>]]></content:encoded></item></channel></rss>