
Hey Deno! Where is my package.json?


Disclaimer: This blog was written for Deno versions prior to 1.31. Since 1.31, Deno can handle projects that contain a package.json (to help facilitate migration from Node.js), but it is still recommended that you manage your dependencies as discussed in this article.

Introduction

Where is my package.json?

That was one of my first questions when I started learning Deno. Coming from a Node.js background, I was used to having a package manager (such as npm) for managing dependencies, scripts, and other configuration, and to using the package.json file to declare my dependencies and their versions.

Deno has no concept of a package manager, as external modules are imported directly into local modules (e.g., import { bold } from 'https://deno.land/std@v0.32.0/fmt/colors.ts'). At first, this seems very convenient, but it got me wondering how I would be able to manage and update dependencies when they are imported into several different modules across a large project. And what about running scripts? Node allows you to define arbitrary scripts in the package.json that can be executed using npm run. How do we define and manage scripts in Deno?

In this article, we will discuss the various ways to manage dependencies in Deno, and also how to manage all scripts needed for our project.

Managing Dependencies

Using deps.ts

The standard practice for managing dependencies in Deno is to create a deps.ts file that imports and then immediately re-exports all third-party code.

/**
 * deps.ts
 * Exports all project dependencies from the file.
 */
export * as log from 'https://deno.land/std@0.167.0/log/mod.ts';
export { Application, Router, Context } from 'https://deno.land/x/oak@v11.1.0/mod.ts';
export type { Middleware } from 'https://deno.land/x/oak@v11.1.0/mod.ts';
export { DataTypes, Database, Model, PostgresConnector } from 'https://deno.land/x/denodb@v1.1.0/mod.ts';
export { oakCors } from 'https://deno.land/x/cors@v1.2.2/mod.ts';
export { applyGraphQL, gql, GQLError } from 'https://deno.land/x/oak_graphql@0.6.4/mod.ts';

In your local modules, these functions, classes, and types can then be referenced from the deps.ts file.

import { Application, applyGraphQL, oakCors, Router } from '../deps.ts';

You may be familiar with the concept of dev dependencies in NPM. You can define dev dependencies in Deno using a separate dev_deps.ts file, allowing for a clean separation between dev-only and production dependencies.
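As an illustration, a minimal dev_deps.ts might re-export only the tooling used for tests (the exact modules are up to your project; the std testing asserts below are just one example):

/**
 * dev_deps.ts
 * Re-exports development-only dependencies (testing helpers in this example).
 */
export { assert, assertEquals } from 'https://deno.land/std@0.167.0/testing/asserts.ts';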

With this approach, managing dependencies in Deno becomes much simpler. For example, to upgrade a dependency, you make the change once in the deps.ts file, and it propagates automatically to every module that references it.

When using this approach, you should also consider integrity checking and lock files. This is Deno's solution for avoiding production issues if the content behind a remote URL (e.g., https://some.url/a.ts) changes, which could otherwise leave production running different dependency code than what you developed against locally.

Just like package-lock.json in Node.js, Deno can store and check subresource integrity for modules using a small JSON file. To have one generated automatically, create a deno.json file at the root of your project, and Deno will create a deno.lock file alongside it.
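The lock file maps each remote module URL to a hash of its contents, so a generated deno.lock looks roughly like this (hashes truncated for illustration):

{
  "version": "2",
  "remote": {
    "https://deno.land/std@0.167.0/fmt/colors.ts": "03ad95e5...",
    "https://deno.land/x/oak@v11.1.0/mod.ts": "bb51165f..."
  }
}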

You can also choose a different file name by updating the deno.json file like so:

{
  "lock": "./lock.json"
}

You can also disable automatic creation and validation of a lock file by specifying:

{
  "lock": false
}

We can manually create or update the lock file using the deno cache command with the --lock and --lock-write flags, like so:

deno cache --lock=deno.lock --lock-write deps.ts

Then a new collaborator can clone the project on their machine and run:

deno cache --reload --lock=deno.lock deps.ts

Using import_map.json

Another way to manage dependencies in Deno is by using the import_map.json file.

This method is useful if you want to use "bare specifiers" (specifiers without an absolute or relative path, e.g., import react from 'react').

This file lets you map an alias to a specific import URL or file path, which is handy when you want a short, custom name for a dependency.

To use the import_map.json file, first create it in the root directory of your project. The file should contain a JSON object with an "imports" key, which maps import aliases to fully qualified module URLs or file paths. You can use the import_map.json file to map aliases to remote dependencies and even npm specifiers.

You can also use the import_map.json file to map aliases to local file paths. For example, if you have a local module in your project at ./src/lib/my_module.ts, you can map the import path "my_module" to this file.

Here's an example of an import_map.json file:

{
  "imports": {
    "lodash": "npm:lodash@^4.17",
    "react": "https://cdn.skypack.dev/react",
    "my_module": "./src/lib/my_module.ts"
  }
}

With this import_map.json file in place, you can now import the libraries using their aliases:

import lodash from 'lodash';
import react from 'react';
import { myFunction } from 'my_module';

console.log(lodash.defaults({ 'a': 1 }, { 'a': 3, 'b': 2 }))

Using the import_map.json file can be a convenient way to manage dependencies in your Deno projects, especially if you want to use custom aliases for your imports. Just be sure to include the --import-map flag when running your Deno application, like so:

deno run --import-map=./import_map.json main.ts

This will ensure that Deno uses the import map specified in the import_map.json file when resolving dependencies.
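If you would rather not pass the flag on every invocation, recent Deno versions also let you point to the import map from your deno.json configuration (newer releases can even embed the "imports" object there directly). A minimal sketch:

{
  "importMap": "./import_map.json"
}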

Managing Command Scripts

Like npm run, the Deno CLI has a run command that executes a script file. Depending on the permissions needed or the type of operation, certain flags must be passed to the run command. For example, if you want to run a web server that reads configuration from an env file, your command will look like this:

deno run --allow-net --allow-env --allow-read main.ts

If we are writing to a file, we need to add the --allow-write flag, and if we have an API that needs information about the user's operating system, we also need --allow-sys. This can go on and on. We could use --allow-all to grant every permission, but that is not advisable.
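Note that most permission flags can also be scoped to specific hosts or paths, which keeps each grant as narrow as possible. For example (the host and paths here are illustrative):

deno run --allow-net=api.example.com --allow-read=./config --allow-env main.ts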

The good news is that we can manage all our command scripts without having to retype them every time. We can add these scripts to the deno.json file.

The deno.json file is a configuration file for customizing built-in Deno CLI commands like fmt, lint, and test. It can also be used to define tasks using the "tasks" field, which maps task names to the commands they run. For example:

{
 "tasks": {
    "start-web": "deno run --watch --allow-net --allow-env --allow-read ./src/main.ts",
    "generate-type-definition": "deno run --allow-net --allow-env --allow-read --allow-write --allow-sys ./tools/generate_type_definition.ts"
 }
}

You can then run these tasks using the deno task command, specifying the task name, like this:

deno task start-web
deno task generate-type-definition

Conclusion

In this article, we saw how to manage dependencies and command scripts in a Deno project. If you’re coming from a Node background and were confused about where the package.json file was, we hope this clarified how to accomplish some of the same things using the deps.ts, dev_deps.ts, import_map.json, and deno.json files.

We hope this article was helpful, and that you now feel more comfortable using Deno for your projects. If you want to learn more about Deno, check out deno.framework.dev for a list of libraries and resources. If you are looking to start a new Deno project, check out our starter kit resources at starter.dev.

