Using Knex.js to Enhance the Development Experience with Node.js and YugabyteDB

Brett Hoyer

In the world of software development, the choice between query builders and ORMs is hotly contested. Some developers shelve both approaches entirely, building their own interfaces and writing raw database queries instead.

Node.js ORMs such as Sequelize, TypeORM and Prisma require developers to first model their data before executing queries. These models map between database tables and JavaScript objects. While tedious to set up, ORMs are helpful because they keep codebases DRY through frequent database migrations, among numerous other benefits.

Many find query builders to be a happy medium, providing some of the conveniences of ORMs without as much boilerplate and abstraction. The queries they execute are often far more performant than those generated by ORMs, and the syntax more closely mirrors the raw queries backend developers are already familiar with.

Knex.js is the most popular query builder in the Node.js ecosystem. It supports multiple SQL dialects, including PostgreSQL, out of the box. How will it fare with PostgreSQL-compliant distributed SQL databases like YugabyteDB? Let's find out by connecting to YugabyteDB and building a REST API.

Getting Connected

To begin using Knex.js, we first need to install the library, along with the accompanying database driver we want to use. YugabyteDB is a fully PostgreSQL-compliant database, so we’ll use the node-postgres driver.

// terminal
> npm install knex
> npm install pg

Next, we’ll install YugabyteDB locally and create a connection pool using Knex.
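
If you don't already have a cluster running, the yugabyted utility that ships with YugabyteDB can start a single-node cluster locally. A minimal sketch, assuming the distribution has been downloaded and extracted into the current directory (the YSQL API listens on 127.0.0.1:5433 by default):

// terminal
> ./bin/yugabyted start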

// index.js
const knex = require("knex")({
 client: "postgres",
 connection: {
   host: "127.0.0.1",
   port: 5433,
   user: "yugabyte",
   password: "yugabyte",
   database: "yugabyte",
 },
 pool: {
   min: 0,
   max: 5,
   afterCreate: (conn, done) => {
     console.log("Connection Established.");
     done();
   },
 },
});
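
Note that with min set to 0, the pool opens connections lazily, so the afterCreate hook won't fire until the first query runs. A quick way to sanity-check the configuration is to issue a trivial raw query; this is just a sketch, not part of the example app:

// index.js
...

knex
  .raw("select version()")
  .then((result) => console.log(result.rows[0].version))
  .catch((err) => console.error("Connection failed:", err));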

Creating Tables

We’re ready to start working with our database. Let’s build a basic items table with id, name, price and description columns using the Knex Schema Builder.

// index.js
...
 
async function createTable() {
 await knex.schema.dropTableIfExists("items");
 
 await knex.schema.createTable("items", function (table) {
   table.increments("id").primary();
   table.varchar("name", 100);
   table.decimal("price", 14, 2);
   table.text("description");
 });
 
 return knex;
}
 
createTable();

The beauty of this code is that it does exactly what we expect. Knex provides utility functions to drop a table if it exists, create a new table, and modify the table structure. In this case, we’ve dropped the items table if it exists and created a new one with an auto-incrementing id column set as the primary key, a character varying name column with a maximum length of 100, a decimal price column with a precision of 14 and a scale of 2, and a text description column.
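
Knex also lets us chain column modifiers when we need constraints or defaults. Here's a sketch of how the same table might look with a few modifiers added; it isn't part of the example app, just an illustration of the schema builder API:

// index.js (sketch)
await knex.schema.createTable("items", function (table) {
  table.increments("id").primary();
  table.varchar("name", 100).notNullable();   // reject NULL names
  table.decimal("price", 14, 2).defaultTo(0); // default price of 0
  table.text("description");
  table.timestamps(true, true);               // adds created_at/updated_at, defaulting to now()
});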

By using the ysqlsh command-line utility, we can verify that the underlying table created by Knex is correct.

// terminal
> ./bin/ysqlsh -h 127.0.0.1 -p 5433
 
yugabyte=# \d items;
                                     Table "public.items"
  Column    |          Type          | Collation | Nullable |              Default
-------------+------------------------+-----------+----------+-----------------------------------
id          | integer                |           | not null | nextval('items_id_seq'::regclass)
name        | character varying(255) |           |          |
price       | numeric(14,2)          |           |          |
description | text                   |           |          |
Indexes:
   "items_pkey" PRIMARY KEY, lsm (id HASH)

Querying the Database

Now that we have a table to work with, we can begin to perform CRUD operations against it. Let’s begin by inserting some items. Knex provides the flexibility to do this in a variety of ways.

Here’s how we can achieve a single insert.

// index.js
...
 
createTable().then(async (knex) => {
 await knex("items").insert({
   name: "iPhone 14 Pro",
   price: 999.00,
   description: "It's the new one.",
 });
});

Alternatively, we can insert multiple records in one insert command by supplying an array.

// index.js
...
 
createTable().then(async knex => {
 await knex("items").insert([
   {
     name: "iPhone 14 Pro",
     price: 999.00,
     description: "It's the new one.",
   },
   {
     name: "iPhone 13",
     price: 799.00,
     description: "It's not the new one.",
   },
   {
     name: "iPhone 4S",
     price: 199.00,
     description: "It's not what you want in 2023.",
   }
 ]);
});
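
Since YugabyteDB speaks the PostgreSQL dialect, insert can also hand back columns from the newly created rows by passing a second argument listing the columns to return. A small sketch with a hypothetical item:

// index.js
...

// Resolves to an array like [{ id: 4 }]
const inserted = await knex("items").insert(
  { name: "iPhone SE", price: 429.0, description: "The small one." },
  ["id"]
);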

This interface is extremely intuitive. We simply pass the name of the table we want to interact with and use the aptly named methods the client makes available.

How might we select the records we just inserted? You’ve probably already guessed.

// index.js
...
 
const items = await knex("items").select();

How about selecting a single record?

// index.js
...
 
const items = await knex("items").where({id: 1}).first();

In this case, we’ve introduced a where clause. A where query resolves to an array of matching records by default, but by chaining the first method we get back a single record instead.
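
The where builder also accepts a column, an operator and a value, and without first it resolves to an array of every matching row. A quick sketch, not part of the example app:

// index.js
...

// Resolves to an array of all items priced under $800
const affordableItems = await knex("items").where("price", "<", 800);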

Looks like the iPhone 13 is on sale. Let’s update its price.

// index.js
...
 
await knex("items").where({id: 2}).update({price: 699.99});

Lastly, we can delete the iPhone 4S from our items table. You’ll notice that Knex names this method del, because delete is a reserved word in JavaScript.

// index.js
...
 
await knex("items").where({id: 3}).del();

Putting It All Together

As promised, let’s put the pieces together to build out our REST API.

First, let’s install Express and include it in our project.

// terminal
> npm install express

Now, we can glue everything together.

// index.js
const express = require("express");
const App = express();
 
// Express middleware for parsing JSON payloads
App.use(express.json());
 
// Establishing a connection pool to YugabyteDB
// Running on our local machine
const knex = require("knex")({
 client: "postgres",
 connection: {
   host: "127.0.0.1",
   port: 5433,
   user: "yugabyte",
   password: "yugabyte",
   database: "yugabyte",
 },
 pool: {
   min: 0,
   max: 5,
   afterCreate: (conn, done) => {
     console.log("Connection Established.");
     done();
   },
 },
});
 
// Creating a table
async function createTable() {
 console.log("Dropping 'items' table if it exists.");
 await knex.schema.dropTableIfExists("items");
 
 console.log("Creating 'items' table.");
 await knex.schema.createTable("items", function (table) {
   table.increments("id").primary();
   table.varchar("name", 100);
   table.decimal("price", 14, 2);
   table.text("description");
 });
 
 return knex;
}
 
createTable().then(async (k) => {
 // Seeding Table with 3 Items
 await k("items").insert([
   {
     name: "iPhone 14 Pro",
     price: 999.0,
     description: "It's the new one.",
   },
   {
     name: "iPhone 13",
     price: 799.0,
     description: "It's not the new one.",
   },
   {
     name: "iPhone 4S",
     price: 199.0,
     description: "It's not what you want in 2023.",
   },
 ]);
 
 // Starting server on port 3000
 App.listen(3000, () => {
   console.log("Running application on port 3000");
 });
});
 
// REST Endpoints
 
// CREATE
App.post("/items", async (req, res) => {
 try {
   let body = req.body;
   // Insert item, returning id and name fields
   // Pass "*" as the second param instead to return all columns
   const item = await knex("items").insert(body, ["id", "name"]);
   res.status(200).json({ data: item });
 } catch (e) {
   console.log("error in creating item", e);
   res.status(400).send(e);
 }
});
 
// READ
App.get("/items", async (req, res) => {
 try {
   // Select all items
   const items = await knex("items").select();
   res.status(200).json({ data: items });
 } catch (e) {
   console.log("error in fetching item", e);
   res.status(400).send(e);
 }
});
 
App.get("/items/:id", async (req, res) => {
 try {
   const { id } = req.params;
 
   // Select single item
   const item = await knex("items").where({ id: id }).first();
 
   res.status(200).json({ data: item });
 } catch (e) {
   console.log("error in fetching item", e);
   res.status(400).send(e);
 }
});
 
// UPDATE
App.patch("/items/:id", async (req, res) => {
 try {
   const { id } = req.params;
   const body = req.body;
 
   // Update item with the fields provided in the request body
   await knex("items").where({ id: id }).update(body);
   res.sendStatus(204);
 } catch (e) {
   console.log("error in updating item", e);
   res.status(400).send(e);
 }
});
 
// DELETE
App.delete("/items/:id", async (req, res) => {
 try {
   const { id } = req.params;
   await knex("items").where({ id: id }).del();
   res.status(200).json({ data: "Deleted successfully." });
 } catch (e) {
   console.log("error in deleting item", e);
   res.status(400).send(e);
 }
});
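
With the server listening on port 3000, we can exercise the endpoints from a second terminal. A few sample requests (the request bodies and the id 4 are illustrative, assuming the POST below creates the fourth item):

// terminal
> curl http://localhost:3000/items
> curl http://localhost:3000/items/1
> curl -X POST -H "Content-Type: application/json" \
    -d '{"name": "iPhone 14", "price": 899.00, "description": "The other new one."}' \
    http://localhost:3000/items
> curl -X PATCH -H "Content-Type: application/json" \
    -d '{"price": 849.00}' http://localhost:3000/items/4
> curl -X DELETE http://localhost:3000/items/4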

Pros and Cons

Query builders like Knex don’t come with all the bells and whistles of many of the ORMs on the market, but that might be a good thing. The further we sit from our data layer, the less we may understand it.

The interface provided by Knex feels just like writing raw SQL queries, but without the mess of concatenating strings. This intuitive design increases developer productivity and reduces errors. It can also improve application performance, as the generated queries are often faster than those produced by ORMs.

I’ve confirmed that Knex works out-of-the-box with YugabyteDB, a distributed SQL database built on PostgreSQL. This means that users looking to switch from vanilla PostgreSQL to YugabyteDB will be able to distribute their data layer to scale reads and writes across a cluster of database nodes.

I suggest you give Knex and YugabyteDB Managed a try in your next project to reduce setup time and get straight to coding!
