Performance Analysis with Chrome DevTools

When it comes to performance, developers often use Lighthouse, PerfBuddy, or similar performance analysis tools. But when the target site is protected against bots, getting information is not that simple. In this blog post, we are going to focus on where to look for signs of performance bottlenecks using Chrome DevTools.

Preparations

Even when access to automated performance analysis tools is restricted, the Network, Performance, and Performance Insights tabs of Chrome DevTools can still be leveraged. A few preparations will make the results more reliable.

When starting our analysis, I recommend opening the page we want to analyse in incognito mode. This separates the analysis from our regular browsing habits, cookies, and any browser extensions. When we load the page for the first time, let's make sure caching is disabled in the Network tab, so resources are always fetched fresh on every reload.

Location of the "disable cache" checkbox in Chrome DevTools' Network tab

Some pages rely heavily on client-side storage mechanisms such as IndexedDB, localStorage, and sessionStorage. Cookies can also interfere. Therefore, it's good to use the Application tab's Clear site data button to make sure lingering data won't skew your results.
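If you prefer scripting that cleanup, most of the same storages can be cleared from the DevTools console. Below is a minimal sketch, assuming a Chromium-based browser (indexedDB.databases() is not available everywhere); note that HttpOnly cookies can only be removed via the Application tab or server responses:

```ts
// Approximate the "Clear site data" button from the DevTools console.
async function clearClientStorage(): Promise<void> {
  localStorage.clear();
  sessionStorage.clear();
  // indexedDB.databases() is supported in Chromium-based browsers.
  const dbs = await indexedDB.databases();
  for (const db of dbs) {
    if (db.name) {
      indexedDB.deleteDatabase(db.name);
    }
  }
  console.log('Client-side storage cleared');
}

clearClientStorage();
```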

Some antivirus software with network traffic filtering can also interfere with your requests. They can block, slow down, or even intercept and modify certain network requests, which can greatly affect loading time and the accuracy of your results. If the site under analysis is safe, we recommend disabling network traffic filtering temporarily.

We strongly suggest simply looking at the page and reloading it a few times to get a feel for its performance. Many issues cannot be detected by the human eye, but you can watch for flickers and content shifts on the page. These symptoms can be good entry points for your investigation.
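If you want to quantify those shifts rather than eyeball them, a PerformanceObserver can log every layout-shift entry, the same entries that feed the CLS metric in Chromium browsers. A minimal sketch to paste into the console:

```ts
// Log unexpected layout shifts while reloading and browsing the page.
interface LayoutShiftEntry extends PerformanceEntry {
  value: number; // this shift's contribution to CLS
  hadRecentInput: boolean; // shifts right after input are excluded from CLS
}

new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as LayoutShiftEntry[]) {
    if (!entry.hadRecentInput) {
      console.log('Layout shift score:', entry.value.toFixed(4), entry);
    }
  }
}).observe({ type: 'layout-shift', buffered: true });
```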

Common bottlenecks: resources

Let's start in the Network tab, where we can identify resources that are not optimised. After reloading the page, we can use the filters in the Network tab to focus on image resources. We can then inspect each request, including the size of the image, how long it took to load, and any errors that occurred. The waterfall chart is also useful: it shows the timing of each image resource as it loads.

We should look for evidence that the image resources are served from a CDN with proper compression. We can check the resources one by one and see whether they contain Content-Encoding: gzip or Content-Encoding: br headers. If these headers are missing, we have found a bottleneck that can be fixed by enabling gzip or Brotli compression on the server.
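As an alternative to clicking through requests one by one, the Resource Timing API can flag uncompressed responses in bulk: when encodedBodySize equals decodedBodySize, no Content-Encoding was applied on the wire. A minimal sketch for the console (the 50 kB threshold is an arbitrary assumption, and cross-origin resources without a Timing-Allow-Origin header report their sizes as 0):

```ts
// Run in the DevTools console after a reload with caching disabled.
const resources = performance.getEntriesByType(
  'resource'
) as PerformanceResourceTiming[];

for (const r of resources) {
  const uncompressed =
    r.decodedBodySize > 0 && r.encodedBodySize === r.decodedBodySize;
  if (uncompressed && r.decodedBodySize > 50_000) {
    console.log('Possibly uncompressed:', r.name, `${r.decodedBodySize} bytes`);
  }
}
```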

Response headers can reveal other signs of errors. Images may be served from a CDN such as Fastly, but if fastly-io-error headers are present on the resources, it can mean that something is misconfigured.

We also need to check the dimensions of the images. If an image is larger than the space it's displayed in, it may be unnecessarily slowing down the page. If we find such a bottleneck, we can resize the images to match the actual display dimensions and improve loading time.
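To find such images quickly, you can compare each image's natural dimensions against its rendered box from the console. A minimal sketch; the 2x factor is an assumption to allow for high-DPI displays:

```ts
// Flag images decoded at a much larger size than they are displayed.
for (const img of Array.from(document.querySelectorAll('img'))) {
  const { naturalWidth, naturalHeight } = img;
  const rect = img.getBoundingClientRect();
  if (
    rect.width > 0 &&
    naturalWidth > rect.width * 2 &&
    naturalHeight > rect.height * 2
  ) {
    console.log('Oversized image:', img.currentSrc, {
      natural: `${naturalWidth}x${naturalHeight}`,
      displayed: `${Math.round(rect.width)}x${Math.round(rect.height)}`,
    });
  }
}
```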

Server-side rendering can improve your SEO, but it is worth checking the size of the index.html file, because it can sometimes be counterproductive. It is recommended to keep HTML files under 100 kB so the TTFB (Time To First Byte) metric stays under 1 second.
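Both TTFB and the HTML transfer size can be read from the Navigation Timing API; a minimal sketch for the console:

```ts
// Run in the DevTools console after the page has loaded.
const [nav] = performance.getEntriesByType(
  'navigation'
) as PerformanceNavigationTiming[];

if (nav) {
  // startTime is 0 for the navigation entry, so responseStart is the TTFB.
  console.log(`TTFB: ${Math.round(nav.responseStart)} ms`);
  console.log(`HTML transfer size: ${nav.transferSize} bytes`);
}
```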

If the page uses polyfills, it's worth checking which polyfills are in use. IE11 is no longer supported, and loading unnecessary polyfills for that browser slows down the page load.
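One common fix is to load polyfills conditionally, so modern browsers skip the download entirely. A minimal sketch; the package names are illustrative examples of real npm polyfills, not a recommendation:

```ts
// Only download a polyfill when the feature is actually missing.
async function loadPolyfills(): Promise<void> {
  if (!('IntersectionObserver' in window)) {
    // This package patches the global on import.
    await import('intersection-observer');
  }
  if (!('ResizeObserver' in window)) {
    // This package exports the polyfill class; install it manually.
    const { default: ResizeObserverPolyfill } = await import(
      'resize-observer-polyfill'
    );
    (globalThis as { ResizeObserver?: unknown }).ResizeObserver =
      ResizeObserverPolyfill;
  }
}

loadPolyfills();
```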

Performance Insights Tab

The Performance Insights tab in Chrome DevTools allows users to measure the page load of a website. It runs a performance analysis on the site and provides metrics on various aspects of the load process, such as the time it takes for the page to be displayed, for network resources to load, and for the page to become interactive.

The performance analysis is run by simulating a user visiting the website and interacting with it, which allows the tool to accurately measure the performance of the page under real-world conditions. This information can then be used to identify areas of the website that may be causing slowdowns and to optimize the performance of the page.

Follow these steps to run an analysis:

  1. Open Chrome DevTools
  2. Select the "Performance insights" tab
  3. Click the Measure page load button
A screenshot of the Performance Insights results of a poorly performing page

The analysis provides us with a detailed waterfall representation of requests, color-coded by request type. It can help you identify requests that block or slow down page rendering, and expensive function calls that block the main thread. It also provides important performance metrics, such as DCL (DOM Content Loaded), FCP (First Contentful Paint), LCP (Largest Contentful Paint), and TTI (Time To Interactive). You can also simulate network or CPU throttling, or enable the cache if your use case requires it.

- DCL (DOM Content Loaded): the time it takes for the initial HTML document to be parsed and the DOM to be constructed.
- FCP (First Contentful Paint): the time it takes for the page to display the first contentful element, such as an image or text.
- LCP (Largest Contentful Paint): measures the loading speed of the largest element on the page, such as an image or a block of text. A fast LCP helps ensure that users see the page's main content as soon as possible, which improves the overall user experience.
- TTI (Time To Interactive): the time it takes for the page to become fully interactive, meaning that all necessary resources have been loaded and the page responds to user input.
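For quick spot checks, some of these paint metrics can also be read programmatically while the page loads. A minimal sketch for FCP and LCP, assuming a Chromium browser:

```ts
// First Paint / First Contentful Paint entries.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry.name, `${Math.round(entry.startTime)} ms`);
  }
}).observe({ type: 'paint', buffered: true });

// Largest Contentful Paint: the latest entry is the current LCP candidate.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const latest = entries[entries.length - 1];
  console.log('LCP candidate:', `${Math.round(latest.startTime)} ms`, latest);
}).observe({ type: 'largest-contentful-paint', buffered: true });
```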

Performance Tab

The "start profiling and reload page" button in the Performance tab of Chrome DevTools allows users to run a performance analysis on a website and view detailed information about how the page is loading and rendering. By clicking this button, the tool will simulate a user visiting the website and interacting with it, and will then provide metrics and other information about the page load process.

Follow these steps to run an analysis:

  1. Open Chrome DevTools
  2. Select the "Performance" tab
  3. Click the button with the "refresh" icon
Location of the reload and record button

A very useful part of this view is the detailed information provided about the main thread. We can interact with call stacks and find functions that run too long, blocking the main thread and delaying the TTI (Time To Interactive) metric. Selecting a function shows how long it ran and which other functions it called, and you can also open it directly in the Sources tab.

A screenshot of the main thread of a poorly performing page (page URL redacted)
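The flame chart is the richest view, but the same blocking work can also be surfaced programmatically: the Long Tasks API reports every main-thread task that runs longer than 50 ms. A minimal sketch, assuming a Chromium browser:

```ts
// Warn about every main-thread task longer than 50 ms.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.warn(`Long task: ${Math.round(entry.duration)} ms`, entry);
  }
}).observe({ type: 'longtask', buffered: true });
```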

Identifying long-running, blocking functions is crucial to finding performance bottlenecks. One way to mitigate them is to move the heavy work into worker threads.
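Here is a minimal sketch of that mitigation, assuming a bundler that supports the new URL(..., import.meta.url) worker pattern; the file name, dataset, and message shape are made up for the example:

```ts
// main.ts — hand the heavy computation to a worker so the main
// thread stays free to render and respond to input.
const worker = new Worker(new URL('./heavy-work.ts', import.meta.url), {
  type: 'module',
});

const largeDataset = Array.from({ length: 1_000_000 }, (_, i) => i);
worker.postMessage(largeDataset);
worker.onmessage = (event: MessageEvent<number>) => {
  console.log('Result from worker:', event.data);
};
```

```ts
// heavy-work.ts — the expensive loop now runs off the main thread.
/// <reference lib="webworker" />
declare const self: DedicatedWorkerGlobalScope;

self.onmessage = (event: MessageEvent<number[]>) => {
  const result = event.data.reduce((sum, n) => sum + Math.sqrt(n), 0);
  self.postMessage(result);
};
```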


Chrome DevTools is a powerful tool for analyzing the performance of web applications. Using the Network tab, you can identify resource issues that slow down page load. With the Performance Insights and Performance tabs, you can identify issues that cause the website to load slowly, and take steps to optimize the code for better performance. Whether you're a beginner or an experienced developer, Chrome DevTools is an essential tool for analyzing and improving the performance of web applications.
