Using Notion as a CMS

Unlike most note-taking applications, Notion offers a full suite of productivity tools, from task tracking to journaling, all in one place. If you're an experienced user, you may have used some of its advanced features, like tables as databases, custom templates, and table functions.

On May 13, 2021, Notion released a beta version of their API client, opening up a new range of possibilities for the app. Users of the client can query databases, create pages, update blocks, perform searches, and much more.

We'll look at how easy it is to get started with the client today.

Note: For this simple demonstration, we won't implement user authentication, but it's highly recommended no matter what type of app you decide to create.

Project Setup

We have a few options for building a scalable blog site. For this stack, we'll use Next.js because it's relatively lightweight compared to most frameworks and ships with features a standard React app would otherwise need to add, such as routing and static generation. Remix would work just as well.

In a folder of your choice, create the Next.js app with:

npx create-next-app@latest

or

yarn create next-app

Note: add --typescript at the end to generate a TypeScript project.

I'm using the TypeScript project for this demo.

NextJS Welcome

If you're unfamiliar with Next.js, I recommend their Getting Started guide. This new project folder contains everything we need to run the app.

Install the necessary dependencies:

npm install -S @notionhq/client

Create the Notion Database

For this step and the next, you'll need a Notion account to create databases, as well as an integration key.

Now, create a table for your blog posts in Notion. This will become the database for referencing posts for the site.

Notion Table Creation

I'm making a generic developer blog database, but feel free to tailor your database to a specific audience or subject.

Notion Create Database Example

Notion Setup and Integration

Before using Notion's API to query any data, we need access via an integration key. Head over to Notion's authorization guide and follow the instructions to create your key. I'm using the most basic setup, with read access to my database.

Notion integration Key Setup

Continue with the integration key guide up to Step 2, which explains how to grant the integration access to the database.

With the integration key and database ID handy, let's configure the app to use them.

Querying the database

In your favourite code editor, create a .env.local file with the following:

NOTION_API_KEY=secret_xxxxxxxxxxxxxxxxxxxxxx
NOTION_DATABASE_ID=xxxxxxxxxxxxxxxxxxxxxx

Note: it won't matter whether you wrap your values in quotation marks.

Next.js reads local environment variables like these automatically, and its default .gitignore already excludes .env.local, so we don't have to. When publishing an app to production, configure these values as environment variables with your hosting provider instead.

Next, create a src/api/client.ts file in the project. This will contain an easily referenced version of our client for querying Notion.

import { Client } from '@notionhq/client';

const NOTION_API_KEY = process.env.NOTION_API_KEY ?? '';

export const notion = new Client({ auth: NOTION_API_KEY });

Follow that up with a src/api/query-database.ts file:

import { notion } from './client';

export const queryDatabase = async () =>
  notion.databases.query({
    database_id: process.env.NOTION_DATABASE_ID ?? '',
  });
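The query endpoint accepts more than just the database ID; it also takes sorts, filter, and pagination options. Here is a minimal sketch of an options object — note that the `Created` property name is a hypothetical column, so substitute one from your own table:

```typescript
// Extra options accepted by notion.databases.query, passed alongside database_id.
// "Created" is a hypothetical property name; replace it with a column from your table.
const queryOptions = {
  database_id: process.env.NOTION_DATABASE_ID ?? '',
  sorts: [{ property: 'Created', direction: 'descending' as const }],
  page_size: 10, // Notion paginates results; 100 is the maximum per request
};

// Usage inside an async function: await notion.databases.query(queryOptions);
console.log(queryOptions.sorts[0].direction); // prints "descending"
```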

Parse the Query

Because the returned query object is complex, we need to parse it into something our components can consume. More robust approaches exist, such as narrowing the SDK's union types, but basic mapping will suffice here.

In a new src/utils/parse-properties.ts file:

import { QueryDatabaseResponse } from '@notionhq/client/build/src/api-endpoints';

export type Post = {
  id: string;
  title: string;
};

export const parseProperties = (database: QueryDatabaseResponse): Post[] =>
  database.results.map((row) => {
    const id = row.id;
    // "Title" must match the name of the title column in your Notion table.
    // The cast is needed because the SDK's result type is a union that would
    // otherwise require narrowing before properties can be accessed.
    const titleCell = (row as any).properties?.Title?.title;
    // Fall back to an empty string when the title cell is empty.
    const title = titleCell?.[0]?.plain_text ?? '';
    return { id, title };
  });
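To see the mapping in isolation, here's a small self-contained sketch with mocked rows. The real QueryDatabaseResponse carries many more fields; the mock shape below is an assumption for illustration only.

```typescript
// Minimal mock of the shape parseProperties reads from each result row.
type MockRow = {
  id: string;
  properties: { Title: { title: { plain_text: string }[] } };
};

const parsePosts = (results: MockRow[]) =>
  results.map((row) => ({
    id: row.id,
    // Fall back to an empty string when the title cell is empty.
    title: row.properties.Title.title[0]?.plain_text ?? '',
  }));

const posts = parsePosts([
  { id: 'abc123', properties: { Title: { title: [{ plain_text: 'Hello Notion' }] } } },
  { id: 'def456', properties: { Title: { title: [] } } },
]);

console.log(posts);
// [ { id: 'abc123', title: 'Hello Notion' }, { id: 'def456', title: '' } ]
```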

Now we can leverage Next.js's static generation via getStaticProps to prefetch the posts and render the Home page. In pages/index.tsx:

import type { NextPage } from 'next';
import Head from 'next/head';
import { queryDatabase } from '../src/api/query-database';
import styles from '../styles/Home.module.css';
import { parseProperties, Post } from '../src/utils/parse-properties';

type HomeProps = {
  posts: Post[];
};

const Home: NextPage<HomeProps> = ({ posts }) => {
  return (
    <div className={styles.container}>
      <Head>
        <title>My Notion Blog</title>
        <meta
          name="description"
          content="Notion blog generated from Notion API"
        />
        <link rel="icon" href="/favicon.ico" />
      </Head>

      <main className={styles.main}>
        <h1>My Notion Blog</h1>
        <ul>
          {posts.map(({ id, title }) => (
            <li key={id}>{title}</li>
          ))}
        </ul>
      </main>
    </div>
  );
};

export async function getStaticProps() {
  const database = await queryDatabase();
  const posts = parseProperties(database);

  return {
    props: {
      posts,
    },
  };
}

export default Home;

We should now see our post titles loaded on the Home page when we run npm run dev.

NextJS + Notion Integration
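Because posts live in Notion and can change after the build, you may also want Incremental Static Regeneration so the page refreshes without a full rebuild. Here is a sketch, with queryDatabase and parseProperties stubbed so the snippet stands alone; in the app, use the real imports from src/api and src/utils as before:

```typescript
type Post = { id: string; title: string };

// Stubs standing in for the real src/api and src/utils modules.
const queryDatabase = async () => ({ results: [{ id: 'abc123' }] });
const parseProperties = (db: { results: { id: string }[] }): Post[] =>
  db.results.map((row) => ({ id: row.id, title: 'Untitled' }));

export async function getStaticProps() {
  const database = await queryDatabase();
  const posts = parseProperties(database);
  return {
    props: { posts },
    // Ask Next.js to re-run getStaticProps at most once every 60 seconds,
    // so new Notion posts appear without redeploying.
    revalidate: 60,
  };
}
```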

Finishing Up

With a few setup steps, it's easy to set up a basic blog using Notion's API, and there are many more possibilities and use cases for Notion as a CMS. Keep in mind that it may not be the best database for a production environment, but playing around with one of my favourite tools opens up some new possibilities for non-Notion-tailored experiences.

Full Code Example Here
