Getting Authenticated Images in Angular

Let's imagine the following situation: We are working on an application which handles contracts between clients and sales representatives from a particular company.

For example, an insurance company could require us to develop a secure application where damage reports can be uploaded, and where the insurance agent who has access rights to deal with such reports can check the uploaded photos. These can be photos of damaged cars with visible license plates, or photos of the car's registration papers, so they can contain sensitive data.

So the application is almost ready. One of the last features is to display these uploaded photos to everybody who has access to a particular insurance report. The application is set up so that we can download the images from a specific endpoint: /api/reports/{reportId}/{imageId}. We have set up our layout, generated URLs for all the images we need to request, and put them into img tags, as the snippet below shows.
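
A minimal sketch of this naive approach (the reportId and imageId component properties are assumptions for illustration):

  <img [src]="'/api/reports/' + reportId + '/' + imageId" />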

However, the images don't show up. The browser fetches the src of an img tag itself, so these requests never go through Angular's HttpClient, and they never hit our HttpInterceptor that sets the Authorization header on the request.
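
For context, a typical Authorization interceptor looks something like this minimal sketch (the AuthService and its getToken() method are assumptions for illustration):

import { Injectable } from '@angular/core';
import { HttpEvent, HttpHandler, HttpInterceptor, HttpRequest } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable()
export class AuthInterceptor implements HttpInterceptor {
  constructor(private authService: AuthService) {}

  intercept(req: HttpRequest<unknown>, next: HttpHandler): Observable<HttpEvent<unknown>> {
    // clone the request and attach the Authorization header
    // (the token retrieval via a hypothetical AuthService is for illustration)
    const authorizedRequest = req.clone({
      setHeaders: { Authorization: `Bearer ${this.authService.getToken()}` },
    });
    return next.handle(authorizedRequest);
  }
}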

Introducing the @this-dot/ng-utils UseHttpImageSourcePipe

We came up with the idea of using a pipe to solve the above problem. We decided to include it as the first element in a collection of useful utilities for Angular. We called it @this-dot/ng-utils.

How to solve the problem?

If we send our request using Angular's HttpClient, it will hit our HttpInterceptor, which will, in turn, attach the Authorization header to the request. To avoid putting this logic into our services and/or components, we are going to implement a pipe.

import { ChangeDetectorRef, Pipe, PipeTransform } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { DomSanitizer, SafeUrl } from '@angular/platform-browser';

@Pipe({
  name: 'useHttpImgSrc',
  pure: false,
})
export class UseHttpImageSourcePipe implements PipeTransform {
  constructor(
    private httpClient: HttpClient,
    private domSanitizer: DomSanitizer,
    private cdr: ChangeDetectorRef
  ) {}

  transform(imagePath: string): string | SafeUrl {
    // our logic will come here
    return imagePath;
  }
}

We immediately know that we are going to need the HttpClient to fetch the image, and the DomSanitizer so we can use its bypassSecurityTrustUrl method to allow the returned blob to be displayed.

We know that the endpoints we call return safe images, which is why we trust the returned values. We also need the ChangeDetectorRef, because this process is asynchronous and we are going to trigger change detection manually. The pipe itself is not pure, because the returned value will change eventually, so the transform method must be called on every change detection cycle.

We also need to keep in mind that whenever the input value changes, a new request needs to be sent out. That is why we are going to use a BehaviorSubject as the base of our async subscription. Let's also use the ngOnDestroy lifecycle hook to tear down the subscription when the component that hosts our pipe instance is destroyed.

@Pipe({
  name: 'useHttpImgSrc',
  pure: false,
})
export class UseHttpImageSourcePipe implements PipeTransform, OnDestroy {
  private subscription = new Subscription();
  private transformValue = new BehaviorSubject<string>('');

  private latestValue!: string | SafeUrl;

  constructor(private httpClient: HttpClient,
              private domSanitizer: DomSanitizer,
              private cdr: ChangeDetectorRef) {
    // every pipe instance sets up its own subscription
    this.setUpSubscription();
  }

  // ...

  transform(imagePath: string): string | SafeUrl {
    // we emit a new value
    this.transformValue.next(imagePath);

    // we always return the latest value
    return this.latestValue;
  }

  ngOnDestroy(): void {
    this.subscription.unsubscribe();
  }

  private setUpSubscription(): void {
    const transformSubscription = this.transformValue
      .asObservable()
      .pipe(
        // we filter out empty strings and falsy values
        filter((v): v is string => !!v),
        // we don't emit if the input hasn't changed
        distinctUntilChanged(),
        // our HttpClient logic will come here
        tap((imagePath: string | SafeUrl) => {
          // we set the latestValue property of the pipe
          this.latestValue = imagePath;
          // and we mark the view for check, so the pipe's transform method is called again
          this.cdr.markForCheck();
        })
      )
      .subscribe();
    this.subscription.add(transformSubscription);
  }
}

Let's walk through what happens. When the pipe is initialised, it sets up the subscription to our transformValue BehaviorSubject. This subscription is only initialised once per pipe instance, and we unsubscribe from it in the ngOnDestroy() lifecycle hook.

Most of the logic lives inside our setUpSubscription() method. We filter out falsy values, and the distinctUntilChanged() operator makes sure a new value is only emitted when it differs from the previous one. Finally, the transform method returns the latestValue, which is stored on the pipe.

We are going to implement our HttpClient logic there. Right now it just sets the latestValue property and triggers change detection. Let's get our images using the HttpClient.

@Pipe({
  name: 'useHttpImgSrc',
  pure: false,
})
export class UseHttpImageSourcePipe implements PipeTransform, OnDestroy {
  // ...
  private setUpSubscription(): void {
    const transformSubscription = this.transformValue
      .asObservable()
      .pipe(
        filter((v): v is string => !!v),
        distinctUntilChanged(),
        // we use switchMap, so the previous subscription gets torn down 
        switchMap((imagePath: string) => this.httpClient
          // we get the imagePath, observing the response and getting it as a 'blob'
          .get(imagePath, { observe: 'response', responseType: 'blob' })
          .pipe(
            // we map our blob into an ObjectURL
            map((response: HttpResponse<Blob>) => URL.createObjectURL(response.body)),
            // we bypass Angular's security mechanisms
            map((unsafeBlobUrl: string) => this.domSanitizer.bypassSecurityTrustUrl(unsafeBlobUrl)),
            // we trigger it only when there is a change in the result
            filter((blobUrl) => blobUrl !== this.latestValue),
          )
        ),
        tap((imagePath: string | SafeUrl) => {
          this.latestValue = imagePath;
          this.cdr.markForCheck();
        })
      )
      .subscribe();
    this.subscription.add(transformSubscription);
  }
}

We subscribe to our httpClient.get() method inside a switchMap operator, so we unsubscribe from the previous subscription if the image path passed into our transform method changes. We set up our get request to return a blob which we then convert into an ObjectURL, and we bypass the safety mechanisms of Angular. When the change detection cycle is triggered, this sanitised blob will be returned as the latestValue, and the image gets displayed. And when we check the network tab of the dev tools, we can see that our Authorization header is set on the image request.

Our pipe in our template:

  <img width="200px" [src]="'assets/images/success.png' | useHttpImgSrc" />
(Screenshot: the image request in the network tab, with the Authorization header attached)

Some user experience improvements

Although our images now display on the page, we got a requirement to make them stateful. A loading image should be displayed before the actual image is loaded, and if the request has an error, another image should be displayed. Let's set up the loading image logic in our pipe, using an optional parameter in the transform() method.

@Pipe({
  name: 'useHttpImgSrc',
  pure: false,
})
export class UseHttpImageSourcePipe implements PipeTransform, OnDestroy {
  private subscription = new Subscription();
  private loadingImagePath!: string;
  private latestValue!: string | SafeUrl;
  private transformValue = new BehaviorSubject<string>('');

  // ...

  transform(
    imagePath: string,
    loadingImagePath?: string
  ): string | SafeUrl {
    this.setLoadingImagePath(loadingImagePath);
    // ...

    this.transformValue.next(imagePath);
    // return the loading image while there is no value present
    return this.latestValue || this.loadingImagePath;
  }

  // ...

  private setLoadingImagePath(
    loadingImagePath?: string
  ): void {
    // if it is already set we do nothing
    if (this.loadingImagePath) {
      return;
    }
    this.loadingImagePath = loadingImagePath;
  }

}

The latestValue property is only falsy before the actual image arrives. We created the setLoadingImagePath() method which, when a path is provided, stores it as the loading placeholder image. Let's set up our template with two images: one request will surely fail, the other will load, but both will display the loading image first.

  <img width="200px" [src]="'assets/images/success.png' | useHttpImgSrc:'assets/images/loading.png'" />
  // ...
  <img width="200px" [src]="'assets/images/notfound.png' | useHttpImgSrc:'assets/images/loading.png'" />
(Screenshot: both images displaying the loading placeholder)

Let's make the error image work as well. We are going to pass it as a second optional pipe parameter. Let's update our template first.

  <img width="200px" [src]="'assets/images/success.png' | useHttpImgSrc:'assets/images/loading.png':'assets/images/error.png'" />
  // ...
  <img width="200px" [src]="'assets/images/notfound.png' | useHttpImgSrc:'assets/images/loading.png':'assets/images/error.png'" />

Let's update our pipe's implementation as well.

@Pipe({
  name: 'useHttpImgSrc',
  pure: false,
})
export class UseHttpImageSourcePipe implements PipeTransform, OnDestroy {
  private subscription = new Subscription();
  private loadingImagePath!: string;
  private errorImagePath!: string;
  private latestValue!: string | SafeUrl;
  private transformValue = new BehaviorSubject<string>('');

  // ...

  transform(
    imagePath: string,
    loadingImagePath?: string,
    errorImagePath?: string
  ): string | SafeUrl {
    this.setLoadingAndErrorImagePaths(loadingImagePath, errorImagePath);
    if (!imagePath) {
      return this.errorImagePath;
    }

    this.transformValue.next(imagePath);
    return this.latestValue || this.loadingImagePath;
  }

  // ...

  private setUpSubscription(): void {
    const transformSubscription = this.transformValue
      .asObservable()
      .pipe(
        filter((v): v is string => !!v),
        distinctUntilChanged(),
        switchMap((imagePath: string) => this.httpClient
          .get(imagePath, { observe: 'response', responseType: 'blob' })
          .pipe(
            map((response: HttpResponse<Blob>) => URL.createObjectURL(response.body)),
            map((unsafeBlobUrl: string) => this.domSanitizer.bypassSecurityTrustUrl(unsafeBlobUrl)),
            filter((blobUrl) => blobUrl !== this.latestValue),
            // if the request errors out we return the error image's path value
            catchError(() => of(this.errorImagePath))
          )
        ),
        tap((imagePath: string | SafeUrl) => {
          this.latestValue = imagePath;
          this.cdr.markForCheck();
        })
      )
      .subscribe();
    this.subscription.add(transformSubscription);
  }

  // ...

  private setLoadingAndErrorImagePaths(
    loadingImagePath?: string,
    errorImagePath?: string
  ): void {
    if (this.loadingImagePath && this.errorImagePath) {
      return;
    }
    this.loadingImagePath = loadingImagePath;
    this.errorImagePath = errorImagePath;
  }

}

Finally, when an image fails to load, our error image is displayed.

(Screenshot: the successfully loaded image next to the error placeholder for the failed request)

But what about developer experience?

Adding the loading and error image paths becomes tedious when you need to set them up in more than one place. At the same time, we would like to keep the ability to override the defaults on another page if we ever need to. Let's set up our pipe's container module so that we can configure default values in the app's root module.

First, we create the injection tokens:

import { InjectionToken } from '@angular/core';

export const THIS_DOT_LOADING_IMAGE_PATH = new InjectionToken<string>('THIS_DOT_LOADING_IMAGE_PATH');
export const THIS_DOT_ERROR_IMAGE_PATH = new InjectionToken<string>('THIS_DOT_ERROR_IMAGE_PATH');

Then, we create our forRoot method:

@NgModule({
  imports: [CommonModule],
  declarations: [UseHttpImageSourcePipe],
  exports: [UseHttpImageSourcePipe],
})
export class UseHttpImageSourcePipeModule {
  static forRoot(
    config: { loadingImagePath?: string; errorImagePath?: string } = {}
  ): ModuleWithProviders<UseHttpImageSourcePipeModule> {
    return {
      ngModule: UseHttpImageSourcePipeModule,
      providers: [
        // set up the providers
        {
          provide: THIS_DOT_LOADING_IMAGE_PATH,
          useValue: config.loadingImagePath || null,
        },
        {
          provide: THIS_DOT_ERROR_IMAGE_PATH,
          useValue: config.errorImagePath || null,
        },
      ],
    };
  }
}

And in our app.module.ts file:

@NgModule({
  // ...
  imports: [
    BrowserModule,
    HttpClientModule,
    UseHttpImageSourcePipeModule.forRoot({
      loadingImagePath: 'assets/images/loading.png',
      errorImagePath: 'assets/images/error.png',
    }),
    // ...
  ],
  // ...
})
export class AppModule {}

With this setup, we can inject THIS_DOT_LOADING_IMAGE_PATH and THIS_DOT_ERROR_IMAGE_PATH into our pipe, and update the logic to fall back to those values.

@Pipe({
  name: 'useHttpImgSrc',
  pure: false
})
export class UseHttpImageSourcePipe implements PipeTransform, OnDestroy {
  // ...
  constructor(
    private httpClient: HttpClient,
    private domSanitizer: DomSanitizer,
    private cdr: ChangeDetectorRef,
    @Inject(THIS_DOT_LOADING_IMAGE_PATH) private defaultLoadingImagePath: string,
    @Inject(THIS_DOT_ERROR_IMAGE_PATH) private defaultErrorImagePath: string
  ) {
  }

  transform(
    imagePath: string,
    loadingImagePath?: string,
    errorImagePath?: string
  ): string | SafeUrl {
    this.setLoadingAndErrorImagePaths(loadingImagePath, errorImagePath);
    // ...
  }

  // ...

  private setLoadingAndErrorImagePaths(
    loadingImagePath: string = this.defaultLoadingImagePath,
    errorImagePath: string = this.defaultErrorImagePath
  ): void {
    if (this.loadingImagePath && this.errorImagePath) {
      return;
    }
    this.loadingImagePath = loadingImagePath;
    this.errorImagePath = errorImagePath;
  }
}

With that, we can update our templates.

  <img width="200px" [src]="'assets/images/success.png' | useHttpImgSrc" />
  // ...
  <img width="200px" [src]="'assets/images/notfound.png' | useHttpImgSrc" />

And everything works as before. We can still override the loading and error images by passing parameters to the pipe, but with Angular's dependency injection, relying on default values became much simpler. Using the injection tokens, we can also override the values at the component level if we ever need that, as the sketch below shows.
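
Overriding the defaults for a single page could look like this minimal sketch (the ReportsPageComponent and the image paths are assumptions for illustration):

@Component({
  selector: 'app-reports-page',
  templateUrl: './reports-page.component.html',
  providers: [
    // component-level providers shadow the values provided through forRoot
    { provide: THIS_DOT_LOADING_IMAGE_PATH, useValue: 'assets/images/reports-loading.png' },
    { provide: THIS_DOT_ERROR_IMAGE_PATH, useValue: 'assets/images/reports-error.png' },
  ],
})
export class ReportsPageComponent {}

Since a pipe resolves its dependencies from the injector of the component whose template uses it, every useHttpImgSrc instance in this template would pick up these values instead of the root ones.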
