
How to Set Up Environment Variables using JSON files with Rollup and TypeScript


In previous posts, I explained How to setup a TypeScript project using Rollup.js from scratch. I also covered How to Serve a Single Page Application (SPA) using Rollup.js and Web Dev Server step by step, and then How to Build a LitElement Application with Rollup.js and TypeScript.

In this article, I'll take the result of those tutorials as a starting point to set up environment variables using JSON files.

Environment Variables

Reading environment variables is very common in software development. They help address security concerns by keeping secrets out of source code, and they make it convenient to set configuration parameters for your application.

For example, in the Node.js world, you may need to set the port number used to serve your application:

// Create a minimal HTTP server and read the port from the environment
const app = require('http').createServer((req, res) => res.end('Hello world'));
const PORT = process.env.PORT || 3000;

app.listen(PORT, () => {
  console.log(`Server is ready to listen on port ${PORT}`);
});

In the previous code, the app reads the value of PORT through process.env.PORT. If the value isn't set, it falls back to 3000.
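
For instance, assuming the snippet above is saved as server.js (a hypothetical file name for this example), you can override the fallback by setting the variable when launching the process:

PORT=8080 node server.js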

JSON Modules

In today's JavaScript world, it's possible to import JSON Modules as follows:

import config from '../../config.json'; 

Also, recent versions of Node.js require the --experimental-json-modules flag for this kind of import to work.
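
For instance, running a module that contains the import above on those Node.js versions would look like this (main.mjs is a hypothetical entry file for illustration):

node --experimental-json-modules main.mjs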

Let's explore a practical way to add the same support to the current single-page application, which is based on TypeScript, LitElement, Rollup.js, and Web Dev Server.

Project Setup

Base Project

Let's clone or download the project seed before adding the new configuration and tools:

git clone https://github.com/luixaviles/typescript-rollup.git
cd typescript-rollup/
git checkout tags/03-build-litelement -b 04-env-config-json

The previous commands will download the project and create a new branch 04-env-config-json to get started.

Source Code Files

The project seed already contains the files and configuration we'll build on to support JSON Modules. Open it with your favorite code editor and take a look at the project structure:

|- typescript-rollup
    |- src/
        |- math/
            |- math.ts
            |- index.ts
        |- string/
            |- string.ts
            |- index.ts
        |- main.ts
    |- index.html
    |- package.json
    |- rollup.config.js
    |- tsconfig.json
    |- web-dev-server.config.js

Installing Rollup Plugins

Since the single-page application uses Rollup.js as its module bundler, we'll need to install a couple of node modules first:

npm install --save-dev @rollup/plugin-json @web/dev-server-rollup

  • @rollup/plugin-json is in charge of converting .json files into ES6 modules.
  • @web/dev-server-rollup is an adapter for using Rollup plugins in Web Dev Server, which is used to "serve" the web application in development mode.

The package.json file should have the new dependencies listed as follows:

{
  ...
  "devDependencies": {
    ...
    "@rollup/plugin-json": "^4.1.0",
    "@web/dev-server-rollup": "^0.3.2",
    ...
  },
}

Reading the JSON Configuration

Let's add the following env-config.json file into the root of the project:

{
  "environment": "production",
  "host": {
    "protocol": "http",
    "hostname": "localhost",
    "port": 8080
  }
}

Of course, you can define your variables according to your requirements: access keys, platform settings, etc.

Since this file may contain sensitive data, it should never be committed to version control. It's good practice to add it to the .gitignore file:

# Environment variables
env-config.json

Also, you can provide an env-config.json.example file as a template so that developers, DevOps engineers, or any other member of the team can create their own configuration file from it:

{
  "environment": "development",
  "host": {
    "protocol": "http",
    "hostname": "localhost",
    "port": 8080
  }
}

Creating the TypeScript Model

It's time to define the TypeScript model for our configuration. First, let's create a src/environment/environment-model.ts file:

// environment-model.ts
export interface Host {
  protocol: string;
  hostname: string;
  port: number | string;
}

export interface EnvConfig {
  environment: string;
  host: Host;
}

Next, create a src/environment/environment.ts file with the following content:

import { EnvConfig } from './environment-model';
import envConfig from '../../env-config.json'; 

export const env = envConfig as EnvConfig;

From now on, the configuration will be available through the env variable.
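
For example, any module in the application can now import env and read the typed values; a quick sketch using the env-config.json shown above:

import { env } from './environment/environment';

console.log(env.environment); // "production"
console.log(env.host.port);   // 8080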

TypeScript Configuration

You may see a couple of TypeScript compilation errors after adding the environment.ts file. To fix them, add two new compiler options to the tsconfig.json file:

{
  "compilerOptions": {
    ...
    "resolveJsonModule": true,
    "allowSyntheticDefaultImports": true
  }
}

Why are these changes needed?

  • The resolveJsonModule flag allows the compiler to include modules imported with a .json extension.
  • The allowSyntheticDefaultImports flag allows default imports from modules that have no explicit default export. As the documentation notes, this only affects type checking; it doesn't change the emitted code.
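
As a point of comparison, if allowSyntheticDefaultImports were left disabled, the default import in environment.ts would be rejected by the compiler. A hedged sketch of the alternative namespace form you'd need instead (assuming CommonJS output):

// Alternative to the default import when allowSyntheticDefaultImports is off
import * as envConfig from '../../env-config.json';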

Reading Configuration Values

Let's read the configuration values and render them in an existing web component. Update the main.ts file with the following content:

// main.ts
import { LitElement, html, customElement, css, property } from 'lit-element';
import { env } from './environment/environment';

@customElement('comp-main')
export class CompMain extends LitElement {

    static styles = css`
    :host {
        display: flex;
    }
    `;

    @property({ type: String }) message: string = 'Welcome to LitElement';

    render() {
        return html`
        <div>
            <h1>${this.message}</h1>
            <span>This App uses:</span>
            <ul>
                <li>TypeScript</li>
                <li>Rollup.js</li>
                <li>es-dev-server</li>
            </ul>
            <span>Running environment: ${env.environment}</span>
            <ul>
                <li>Protocol: ${env.host.protocol}</li>
                <li>Hostname: ${env.host.hostname}</li>
                <li>Port: ${env.host.port}</li>
            </ul>
        </div>
        `;
    }
}

Pay attention to the import line: import { env } from './environment/environment'. The EnvConfig object (the TypeScript model) is then available as the env variable.
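
One practical use is deriving values from the configuration. For example, a hypothetical helper (not part of the project seed) that builds a base URL from the host settings:

// base-url.ts (hypothetical helper for illustration)
import { env } from './environment/environment';

// Produces "http://localhost:8080" with the env-config.json shown above
export const baseUrl = `${env.host.protocol}://${env.host.hostname}:${env.host.port}`;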

Rollup Configuration

As stated above, the project is already configured to use Rollup as the module bundler. Let's update rollup.config.js so the build can read the JSON content:

// rollup.config.js
import merge from 'deepmerge';
import { createSpaConfig } from '@open-wc/building-rollup';
import json from '@rollup/plugin-json';

const baseConfig = createSpaConfig({
  developmentMode: process.env.ROLLUP_WATCH === 'true',
  injectServiceWorker: false
});

export default merge(baseConfig, {
  // any <script type="module"> inside will be bundled by rollup
  input: './index.html',
  plugins: [
    json()
  ]
});

On every build, the json() plugin will take care of processing the .json files. You can add any other Rollup plugin to the plugins array in the same way.
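
Assuming the project keeps the build script from the previous tutorials, you can check that the bundle still compiles by running:

npm run build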

Web Dev Server Configuration

If you try to serve the application at this point, you'll find a blank page and some errors in the browser's console:

Failed to load module script: The server responded with a non-JavaScript MIME type of "application/json". Strict MIME type checking is enforced for module scripts per HTML spec.

That's because we configured the json plugin for Rollup's build process only. For serving the app in development mode, we're using Web Dev Server, which needs its own configuration.

Let's apply a couple of changes in the web-dev-server.config.js file to fix it:

// web-dev-server.config.js
const { rollupAdapter } = require('@web/dev-server-rollup');
const json = require('@rollup/plugin-json');

module.exports = {
  port: 8000,
  nodeResolve: true,
  open: true,
  watch: true,
  appIndex: 'index.html',
  mimeTypes: {
    // serve all json files as js
    '**/*.json': 'js'
  },
  plugins: [rollupAdapter(json())],
};

Let's explain what's happening in that code:

  • The @web/dev-server-rollup package allows using Rollup plugins in Web Dev Server.
  • Instead of using the json() plugin directly, it needs to be wrapped with the rollupAdapter function first; additional plugins would be wrapped the same way, as shown in the sketch below.
  • The mimeTypes option tells the dev server to serve .json files with a JavaScript MIME type, which resolves the strict MIME type error shown above.

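If the project later needs more Rollup plugins during development, each one goes through the adapter. A minimal sketch, assuming a second plugin such as @rollup/plugin-replace were installed:

// web-dev-server.config.js (hypothetical extension with a second plugin)
const { rollupAdapter } = require('@web/dev-server-rollup');
const json = require('@rollup/plugin-json');
const replace = require('@rollup/plugin-replace');

module.exports = {
  // ...same options as above
  plugins: [
    rollupAdapter(json()),
    rollupAdapter(replace({ preventAssignment: true })),
  ],
};
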
When you're done with those changes, run npm run start again to see the final result.

(Screenshot: Running the single-page app with JSON environment variables)

Source Code of the Project

Find the complete project in this GitHub repository: typescript-rollup. Do not forget to give it a star ⭐️ and play around with the code.

Feel free to reach out on Twitter if you have any questions. Follow me on GitHub to see more about my work.
