Isaac's Blog

Developing a Modern Full-Stack Application: Choosing the Right Tech Stack

I recently had the chance to develop a full-stack application from scratch, and I chose a very modern stack for it. I wanted to prioritize developer experience above all, in order to maximize my time spent on shipping, talking to users and iterating on the product.

I chose the following stack:

- TypeScript, end to end
- PostgreSQL, with Prisma as the ORM
- Vercel for hosting
- Auth.js for auth

I'll describe some of my motivations for those choices, and share my learnings for anyone else who is interested in using the same stack.

TypeScript

I was very excited about TypeScript on the backend. Having first worked at a late-stage startup with a very mature JVM stack (mostly Java and Kotlin) and then at a scrappy early-stage startup with a Python backend, the latter experience fully sold me on the need for static typing. I was intrigued by having a single language for the whole stack, as I didn't want to ramp up on two different languages after spending the last 5 years as a PM, crying in Jira. In addition, using a single language allowed me to share types and functions across the stack, bringing me to the promised land of end-to-end type safety.

That naturally led me to TypeScript, given that JavaScript on the client side is basically a must. I loved that TypeScript is a multi-paradigm language: Java's Stream API had converted me to the joys of functional programming for data-intensive tasks, and TypeScript seemed to give me the same affordances without the verbosity of invoking stream() and collecting to a Collections.unmodifiableList every time.
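For instance, the kind of pipeline that needs Stream boilerplate in Java is just method chaining on arrays in TypeScript (a trivial sketch; the data is made up):

```typescript
// Filter and transform in one chain; the result is a new array, no collector needed.
const orderTotals = [12.5, 99.0, 3.2, 45.0];
const largeOrdersRounded = orderTotals
  .filter((total) => total > 10)
  .map((total) => Math.round(total));

console.log(largeOrdersRounded); // [13, 99, 45]
```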

Finally, in terms of performance, Node.js offers adequate single-threaded performance, helping me avoid the carbon footprint of Python. I wanted to deploy my backend as serverless functions to keep my cloud costs low, as traffic was low and sporadic, and Node.js offers very good cold start times.

TypeScript itself was a joy to use. I did, however, find it more verbose than I was expecting due to the named-parameter syntax for functions:

function divide({ dividend, divisor }: { dividend: number, divisor: number }) {  
  return dividend / divisor;  
}
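One way to trim this down is to name the parameter shape once; the DivideArgs alias below is just an illustration, not anything TypeScript requires:

```typescript
// Declare the named-parameter shape once and reuse it.
type DivideArgs = { dividend: number; divisor: number };

function divide({ dividend, divisor }: DivideArgs): number {
  return dividend / divisor;
}

console.log(divide({ dividend: 10, divisor: 4 })); // 2.5
```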

There was also one very frustrating bug that took me hours to figure out, where I was missing an await when calling an async function. ESLint is supposed to catch this one, so I can't explain how it slipped into my codebase. Otherwise, I have no complaints about TypeScript as a language. In the rest of the article I'll discuss some learnings and gotchas with the rest of my stack.
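For reference, this class of bug looks like the sketch below (the function names are made up); the @typescript-eslint/no-floating-promises rule is the usual safeguard, so it's worth double-checking that it's actually enabled in your ESLint config:

```typescript
// A made-up async function standing in for a real DB call.
async function saveUser(email: string): Promise<string> {
  return `saved:${email}`;
}

async function main() {
  const fireAndForget = saveUser("a@example.com"); // Bug: no await, this is a Promise
  const saved = await saveUser("a@example.com");   // Correct: the resolved string

  console.log(fireAndForget instanceof Promise); // true
  console.log(saved);                            // saved:a@example.com
}

main();
```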

Postgres + Prisma

I'm an advocate for domain-driven design, so the next consideration was how to model and persist data. PostgreSQL was a no-brainer for the DB, and I evaluated libraries that would give me type-safe queries and simple migrations. Prisma kept coming up, and I decided to try it.

I loved how easy it was to manage my DB schema, but there was a gotcha.

Here's how I originally defined my user in the Prisma schema:

model User {
	id Int @id @default(autoincrement())
	email String @unique
	password String
	ipAddress String? @db.Inet
	createdAt DateTime @default(now())
	updatedAt DateTime @updatedAt
}

This generates a migration that looks something like this:

-- CreateTable
CREATE TABLE "User" (
    "id" SERIAL NOT NULL,
    "email" TEXT NOT NULL,
    "password" TEXT NOT NULL,
    "ipAddress" INET,
    "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updatedAt" TIMESTAMP(3) NOT NULL,

    CONSTRAINT "User_pkey" PRIMARY KEY ("id")
);

Note that the names are a case-sensitive match to the schema definition, leading to a table name in TitleCase and column names in camelCase. This doesn't play nicely if you ever want to query the DB directly outside of Prisma, as you'll have to enclose the names in quotation marks:

select id, "createdAt" from "User"

To deal with this, add a @map annotation to every camelCase field, and a @@map to rename the model itself:

model User {
	id Int @id @default(autoincrement())
	email String @unique
	password String
	createdAt DateTime @default(now()) @map("created_at")
	updatedAt DateTime @updatedAt @map("updated_at")
	ipAddress String? @db.Inet @map("ip_address")

	@@map("users")
}

Make sure you haven't run the first migration yet, otherwise the change to the schema above will end up dropping and recreating columns! Given the level of magic with Prisma, I needed to be really careful and review each migration file to ensure there weren't any unintended consequences.
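To illustrate, if the rename lands after the initial migration has already been applied, the generated migration looks roughly like the following (an illustrative reconstruction, not exact Prisma output), which silently discards the data in the old column:

```sql
-- The rename is expressed as a drop + add, so existing values are lost
ALTER TABLE "User" DROP COLUMN "createdAt",
ADD COLUMN "created_at" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP;
```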

One surprising bug caused a lot of difficulty, and I couldn't run any migrations safely until the Prisma team resolved it. The root cause was an interaction with a Vercel environment variable, which is just bad. It really made me question the JavaScript-on-the-backend idea as a whole, as I lost a lot of trust in my critical dependencies.

That wasn't the last of my database issues. After using Vercel Postgres for only a few days, with just a couple thousand records, I hit the "Written Data Limit". I contacted customer support, linking to some GitHub issues that I believed were related. It took nearly a month for support to get back to me, and they just provided an explanation of how the Written Data Limit is calculated, with no indication of a fix. In that time, I had confirmed the problem: every time you generate a migration with prisma migrate dev, a second, temporary database, a "shadow database", is created and deleted, and those writes count against the limit. To mitigate this, I spun up a local dockerized Postgres instance and ran prisma migrate dev against it.
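If you go this route, Prisma also lets you point just the shadow database elsewhere via shadowDatabaseUrl in the datasource block, so migrations still target the hosted DB while the throwaway database lives locally (the env variable names below are illustrative):

```prisma
datasource db {
  provider          = "postgresql"
  url               = env("DATABASE_URL")
  // The temporary database used by `prisma migrate dev` is created here
  // instead of on the hosted provider, avoiding its write limits.
  shadowDatabaseUrl = env("SHADOW_DATABASE_URL")
}
```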

Vercel

As I hinted above, I used Vercel for both my back-end and front-end deployment, which seems like the path of least resistance for Next.js projects. I'll discuss my front-end choices in the next article.

I was impressed by how quickly and seamlessly I deployed my project with Vercel. I was constantly delighted by little features such as clicking on any request in my logs and seeing its response latency profiled.

I don't want to complain too much about Vercel, as overall I think it's an outstanding platform for front-end development, as well as quick-and-dirty full-stack proofs of concept, but I just can't recommend its backend hosting for serious projects yet. Aside from my issue with Vercel Postgres, I also struggled with their Cron Jobs.

According to Vercel, my cron jobs were running on schedule and returning a 200 response. However, when I peered into my database, I could see that my business logic simply wasn't executing reliably. It turned out that the requests that trigger the cron job were being cached, so my logic executed only when the cache was invalidated by each deploy. Caching a cron job is not a sensible default, and there was no mention of this in their documentation.
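For context, the cron itself is just a path and a schedule declared in vercel.json, so the caching behaviour of that path is entirely in Vercel's hands (the path below is illustrative):

```json
{
  "crons": [
    {
      "path": "/api/cron/daily-digest",
      "schedule": "0 5 * * *"
    }
  ]
}
```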

Given that the crons run on Vercel Serverless Functions, I found the caching documented under "Edge Cache", which you would expect to be relevant only to Vercel Edge Functions. I tried their recommendation of adding Cache-Control headers, but interestingly enough, it didn't resolve the problem.

I eventually managed to disable the cache in a roundabout way, by adding authorization to my endpoints: any request with an Authorization header isn't cached by Vercel. I realize it's just a couple of lines of code, and I should have added authorization anyway, but this didn't inspire much confidence.

import { NextRequest } from "next/server";

export async function GET(request: NextRequest) {
	const cronSecretPresent =
		request.headers.get("authorization") === `Bearer ${process.env.CRON_SECRET}`;
	if (!cronSecretPresent) {
		return new Response("Unauthorized", { status: 401 });
	}
	// ... cron business logic
}

Authorization

My last major dependency was auth, for which I used Auth.js. Overall I don't have a strong opinion on it; it seemed to do the job. One thing I found unexpectedly tricky was accessing the user's id in the session. I assumed this would be covered in the documentation or in other devs' examples, as it seemed like a very common use case. Here's how I accomplished it; I hope it saves other developers time if they're looking to achieve something similar.

// File path: /app/api/auth/[...nextauth]/route.ts

import NextAuth, { DefaultSession } from "next-auth";
import { authOptions } from "./auth-options";

declare module "next-auth" {
	interface Session {
		user: {
			id: string;
		} & DefaultSession["user"];
	}
}

const handler = NextAuth(authOptions);

export { handler as GET, handler as POST };

// File path: /app/api/auth/[...nextauth]/auth-options.ts

import { NextAuthOptions } from "next-auth";
import CredentialsProvider from "next-auth/providers/credentials";
import { db } from "@/lib/prisma";
import { compare } from "bcrypt";

export const authOptions: NextAuthOptions = {
    providers: [
      CredentialsProvider({
        credentials: {
          email: { label: "Email", type: "email" },
          password: { label: "Password", type: "password" }
        },
        async authorize(credentials) {
          const { email, password } = credentials ?? {}
          if (!email || !password) {
            throw new Error("Missing username or password");
          }
          const user = await db.user.findUnique({
            where: {
              email,
            },
          });
          // if user doesn't exist or password doesn't match
          if (!user || !(await compare(password, user.password))) {
            throw new Error("Invalid username or password");
          }
          // Return only non-sensitive fields; never put the password hash in the token.
          return {
            id: user.id.toString(),
            email: user.email,
          }
        },
      }),
    ],
    callbacks: {
      session: async ({ session, token }) => {
        if (session?.user) {
          // token.sub is populated by NextAuth with the id returned from authorize()
          session.user.id = token.sub!;
        }
        return session;
      },
    },
    session: {
      strategy: 'jwt',
    },
  };
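With the module augmentation in place, the id is then typed and available wherever the session is read; for example, in a server-side route handler (a sketch assuming the file layout above and next-auth v4's getServerSession):

```typescript
import { getServerSession } from "next-auth";
import { authOptions } from "@/app/api/auth/[...nextauth]/auth-options";

export async function GET() {
	const session = await getServerSession(authOptions);
	if (!session) {
		return new Response("Unauthorized", { status: 401 });
	}
	// Typed as string thanks to the Session augmentation in route.ts
	const userId = session.user.id;
	return Response.json({ userId });
}
```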

I've also seen Lucia recommended as a lightweight alternative auth library. And there's always the option of a full-blown auth provider, like AWS Cognito or Firebase.

In Conclusion

In many ways, I think I picked a great stack to move quickly and ship things. However, considering all the time I spent debugging issues that stemmed from using more cutting-edge infra providers, next time I'll more carefully consider setting up parts of my infra with a more traditional solution like AWS.

Thank you for reading this post. If you enjoyed it, please check out my next post, where I walk through the front-end of the stack.