
This Dot Blog

This Dot provides teams with technical leaders who bring deep knowledge of the web platform. We help teams set new standards, and deliver results predictably.

Tags: AWS

How to set up local cloud environment with LocalStack

Many applications are powered by AWS because it offers a rich catalog of purpose-built services. However, testing an AWS application during development without maintaining a dedicated dev account is hard....


How to configure and optimize a new Serverless Framework project with TypeScript

Elevate your Serverless Framework project with TypeScript integration. Learn to configure TypeScript, enable offline mode, and optimize deployments to AWS with tips on AWS profiles, function packaging, memory settings, and more....


Deploying apps and services to AWS using AWS Copilot CLI

Learn how to leverage AWS Copilot CLI, a tool that abstracts the complexities of infrastructure management, making the deployment and management of containerized applications on AWS an easy process...


How to automatically deploy your full-stack JavaScript app with AWS CodePipeline

In our previous blog post, we set up a horizontally scalable deployment for our full-stack JavaScript app. In this article, we would like to show you how to set up AWS CodePipeline to automatically deploy changes to the application....


How to host a full-stack app with AWS CloudFront and Elastic Beanstalk

You have an SPA with a NestJS back-end. What if your app is a hit and you need to be prepared to serve thousands of users? You might need to scale your API horizontally, which means you need more instances running behind a load balancer....


Utilizing AWS Cognito for Authentication

AWS Cognito, one of the most popular Amazon Web Services offerings, is at the heart of many web and mobile applications, providing numerous useful user identity and data security features. It is designed to simplify the process of user authentication and authorization, and many developers decide to use it instead of developing their own solution. "Never roll your own authentication" is a common phrase you'll hear in the development community, and not without reason. Building an authentication system from scratch can be time-consuming and error-prone, with a high risk of introducing security vulnerabilities. Existing solutions like AWS Cognito have been built by expert teams, extensively tested, and are constantly updated to fix bugs and meet evolving security standards. Here at This Dot, we've used AWS Cognito together with Amplify in many of our projects, including Let's Chat With, an application that we recently open-sourced. In this blog post, we'll show you how we accomplished that, and how we used various Cognito configuration options to tailor the experience to our app.

Setting Up Cognito

Setting up Cognito is relatively straightforward, but requires several steps. In Let's Chat With, we set it up as follows:

1. Sign in to the AWS Console, then open Cognito.
2. Click "Create user pool" to create a user pool. User Pools are essentially user directories that provide sign-up and sign-in options, including multi-factor authentication and user-profile functionality.
3. In the first step, select "Email" as the sign-in option, and click "Next".
4. Choose "Cognito defaults" as the password policy and "No MFA" for multi-factor authentication. Leave everything else at the default, and click "Next".
5. In the "Configure sign-up experience" step, leave everything at the default settings.
6. In the "Configure message delivery" step, select "Send email with Cognito".
7. In the "Integrate your app" step, just enter names for your user pool and app client. For example, the user pool might be named "YourAppUserPool_Dev", while the app client could be named "YourAppFrontend_Dev".
8. In the last step, review your settings and create the user pool.

After the user pool is created, make note of its user pool ID, as well as the client ID of the app client created under the user pool. These two values will be passed to the configuration of the Cognito API.

Using the Cognito API

Let's Chat With is built on top of Amplify, AWS's collection of services that make development of web and mobile apps easy. Cognito is one of the services that powers Amplify, and Amplify's SDK offers some helper methods to interact with the Cognito API. In an Angular application like Let's Chat With, the initial configuration of Cognito is typically done in the main.ts file as shown below: ` How the user pool ID and user pool web client ID are injected depends on your deployment option. In our case, we used Amplify and defined the environment variables for injection into the built app using Webpack. Once Cognito is configured, you can utilize its authentication methods from the Auth class in the @aws-amplify/auth package. For example, to sign in after the user has submitted the form containing the username and password, you can use the Auth.signIn(email, password) method as shown below: ` The logged-in user object is then translated to an instance of CoreUser, which is the app's internal representation of the logged-in user.
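To make the above more concrete, here is a minimal sketch of what that main.ts configuration and the sign-in call might look like. This is not the actual Let's Chat With code: the region, pool IDs, and the shape of the returned user are placeholders, and it assumes the Amplify v5-style Auth API.

```typescript
// main.ts — hypothetical values; use the user pool ID and app client ID noted earlier
import { Amplify } from 'aws-amplify';
import { Auth } from '@aws-amplify/auth';

Amplify.configure({
  Auth: {
    region: 'us-east-1',
    userPoolId: 'us-east-1_XXXXXXXXX',
    userPoolWebClientId: 'XXXXXXXXXXXXXXXXXXXXXXXXXX',
  },
});

// e.g. inside an AuthService: sign in with the credentials submitted from the login form
export async function signIn(email: string, password: string) {
  const cognitoUser = await Auth.signIn(email, password);
  // Translate the Cognito user into the app's own user representation (CoreUser in the article)
  return {
    id: cognitoUser.attributes?.sub,
    email: cognitoUser.attributes?.email,
  };
}
```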
The AuthService class contains many other methods that act as a facade over the Amplify SDK methods. This service is used in authentication effects, since Let's Chat With is based on NgRx and implements many core functionalities through NgRx effects: ` The login component triggers a SignInActions.userSignInAttempted action, which is processed by the above effect. Depending on the outcome of the signInAdmin call in the AuthService class, the action is translated to either AuthAPIActions.userLoginSuccess or AuthAPIActions.userSignInFailed. The remaining user flows are implemented similarly:

- Clicking signup triggers the Auth.signUp method for user registration.
- Signing out is done using Auth.signOut.

Reacting to Cognito Events

How can you implement additional logic when a signup occurs, such as saving the user to the database? While you can use an NgRx effect to call a backend service for that purpose, it requires additional effort and may introduce a security vulnerability, since the endpoint needs to be open to the public Internet. In Let's Chat With, we used Cognito triggers to perform this logic within Cognito, without the need for extra API endpoints. Cognito triggers are a powerful feature that allows developers to run AWS Lambda functions in response to specific actions in the authentication and authorization flow. Triggers are configured in the "User pool properties" section of user pools in the AWS Console. We have a dedicated Lambda function that runs on post-authentication or post-confirmation events. The Lambda function first checks if the user already exists. If not, it inserts a new user object associated with the Cognito user into a DynamoDB table. The Cognito user ID is read from the event.request.userAttributes.sub property. `

Customizing Cognito Emails

Another Cognito trigger that we found useful for Let's Chat With is the "Custom message" trigger. This trigger allows you to customize the content of verification emails or messages for your app. When a user attempts to register or perform an action that requires a verification message, the trigger is activated, and your Lambda function is invoked. Our Lambda function reads the verification code and the email from the event, and creates a custom-designed email message using the template() function. The template reads the HTML template embedded in the Lambda. `

Conclusion

Cognito has proven to be reliable and easy to use while developing Let's Chat With. By handling the intricacies of user authentication, it allowed us to focus on developing other features of the application. The next time you create a new app and user authentication becomes a pressing concern, remember that you don't need to build it from scratch. Give Cognito (or a similar service) a try. Your future self, your users, and probably your sanity will thank you. If you're interested in the source code for Let's Chat With, check out its GitHub repository. Contributions are always welcome!...
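For reference, a post-confirmation trigger like the one described above could look roughly like the following. This is a hedged sketch, not the Let's Chat With implementation: the table name, environment variable, and stored attributes are assumptions.

```typescript
// A hypothetical post-confirmation trigger; USERS_TABLE and the item shape are assumptions
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, GetCommand, PutCommand } from '@aws-sdk/lib-dynamodb';
import type { PostConfirmationTriggerEvent } from 'aws-lambda';

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const USERS_TABLE = process.env.USERS_TABLE ?? 'Users';

export const handler = async (event: PostConfirmationTriggerEvent) => {
  // The Cognito user ID comes from the trigger event
  const userId = event.request.userAttributes.sub;

  // Insert the user only if it doesn't exist yet
  const existing = await ddb.send(
    new GetCommand({ TableName: USERS_TABLE, Key: { id: userId } })
  );
  if (!existing.Item) {
    await ddb.send(
      new PutCommand({
        TableName: USERS_TABLE,
        Item: { id: userId, email: event.request.userAttributes.email },
      })
    );
  }

  // Cognito triggers must return the event to let the flow continue
  return event;
};
```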


Avoiding Null and Undefined with NonNullable<T> in TypeScript

Use Cases

Use Case 1: Adding Two Numbers

Scenario: A function that adds two numbers and returns their sum. But if one of the numbers is undefined or null, it returns the other number as-is. ` In this code snippet, the addNumbers() function takes two parameters, a and b. a is a required parameter of type number, while b is an optional parameter of type number or null. The function uses the NonNullable<T> conditional type to ensure that the return value is not null or undefined. If b is null or undefined, the function returns the value of a. Otherwise, it adds a and b together and returns the sum. To handle the case where b is null or undefined, the code uses the nullish coalescing operator, ??, which returns the value on its left-hand side if it is not null or undefined, and the value on its right-hand side otherwise.

Use Case 2: Checking Contact Information

Scenario: A class representing a person, but with optional email and phone properties. The contact() function logs the email and phone numbers if they are defined and not null. Otherwise, it logs a message saying that no contact information is available. ` Explanation: In this code snippet, the Person class has a name property and optional email and phone properties. The contact() function checks if both email and phone are not undefined and not null before logging the details. Otherwise, it logs a message saying that no contact information is available. To initialize the properties with null, the constructor uses the nullish coalescing operator, ??. When creating a new Person, you can pass null or undefined as arguments, and the class will interpret them as null.

Conclusion

Understanding and correctly implementing conditional types like NonNullable<T> in TypeScript is crucial to reduce potential code pitfalls. By reviewing examples of numerical operations and contact information handling, we've seen how this conditional type helps reinforce our code against null or undefined values. This highlights the utility of TypeScript's conditional types not only for enhancing code stability, but also for easing our coding journey. So keep experimenting and finding the best ways to deploy these tools for creating robust, secure, and efficient programs!...
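Since the original snippets are not reproduced here, the following is a minimal sketch of the two use cases described above; the exact signatures and output in the original article may differ.

```typescript
// Use case 1: add two numbers; if b is null or undefined, the result is just a
function addNumbers(a: number, b?: number | null): NonNullable<number> {
  return a + (b ?? 0); // ?? falls back to 0 when b is null or undefined
}

// Use case 2: a Person with optional contact details, normalized to null via ??
class Person {
  name: string;
  email: string | null;
  phone: string | null;

  constructor(name: string, email?: string | null, phone?: string | null) {
    this.name = name;
    this.email = email ?? null;
    this.phone = phone ?? null;
  }

  contact(): void {
    if (this.email != null && this.phone != null) {
      console.log(`${this.name}: ${this.email}, ${this.phone}`);
    } else {
      console.log(`No contact information available for ${this.name}`);
    }
  }
}

console.log(addNumbers(2));     // 2
console.log(addNumbers(2, 3));  // 5
new Person('Ada', 'ada@example.com').contact(); // phone is missing, so no details are logged
```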


How to Setup Your Own Infrastructure Using the AWS Toolkit and CDK v2

Suppose you want to set up your infrastructure on AWS, but avoid going through the manual steps, or you want reproducible results. In that case, CDK might be the thing for you. CDK stands for Cloud Development Kit; it allows you to program your hosting setup using either TypeScript, JavaScript, Python, Java, C#, or Go. CDK does require you to be familiar with AWS terminology. This series will explain the services used, but it might be a good idea to read up on what AWS offers, or read one of our earlier articles on AWS. CDK is imperative, which means you can code your infrastructure. There is a point to be made, however, that it behaves more like a declarative tool. All the code one writes ends up in a stack definition. This definition is sent to AWS to set up the desired services, or alter an already running stack. The imperative approach allows one to do easy conditional statements or loops without learning a new language.

AWS Toolkit

To make things easier for us, AWS offers the AWS Toolkit for VS Code. The installation of the plugin in VS Code is straightforward. We had some issues with the authentication, and recommend using the "Edit credentials" route over the "Add a new connection" option. When on the account start page, select the profile you'd like to use. Open the accordion so it shows the authentication options. Pick "Command line or programmatic access" to open a dialog with the required values. Click the text underneath the heading "Option 2: Add a profile to your AWS credentials file". This will automatically copy the values for you. Next, go back to VS Code, and paste these values into your credentials file. Feel free to change the name between the square brackets to something more human-readable. You can now pick this profile when connecting to AWS in VS Code.

First stack

With our handy toolkit ready, let's deploy our first stack to AWS using CDK. For this, the CDK needs to make a CloudFormation stack. In your terminal, create a new empty directory (the default name of the app will be the same as your directory's name) and navigate into it. Scaffold a new project with ` This will create all the required files to create your stack in AWS. From here on, we can bootstrap our AWS environment for use with CDK. Run the bootstrap command with the profile you configured earlier. For example, I pasted my credentials, and named the profile 'sandbox'. ` CDK will now create the required AWS resources to deploy our stack. With all our prerequisites met, let's create a lambda to test if our stack is working properly. Create a new JavaScript file lambda/Hello.js containing this handler ` And add our lambda to our stack in the constructor in lib/-stack.ts ` That's all we need to deploy our lambda to our stack. We can now run the deploy command, which will compare our new local configuration with what is already deployed. Before any changes are pushed, this diff will be displayed in your terminal, and you will be asked for confirmation. This is a good moment to evaluate whether what you've written has correctly translated to the desired infrastructure. ` This same command will push updates. Note that you will only see the diff and confirmation prompt when CDK is about to create new resources. When updating the contents of your Lambda function, it simply pushes the code changes. Now in VS Code, within your AWS view, you'll find a new CloudFormation stack, Lambda, and S3 bucket in the explorer view. Right-click your Lambda and select "Invoke on AWS". This opens a new window for that specific Lambda.
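Since the snippets themselves aren't shown above, a minimal sketch of the handler and the stack wiring described in this section might look like the following; the construct ID, stack class name, and Node.js runtime version are assumptions.

```typescript
// lambda/Hello.js would export a simple handler, e.g.
//   exports.handler = async () => ({ statusCode: 200, body: JSON.stringify({ message: 'Hello!' }) });

// lib/<app-name>-stack.ts — register that handler as a Lambda function in the stack
import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class HelloStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Bundles the lambda/ directory and points at the handler export in Hello.js
    const helloLambda = new lambda.Function(this, 'HelloLambda', {
      runtime: lambda.Runtime.NODEJS_18_X,
      code: lambda.Code.fromAsset('lambda'),
      handler: 'Hello.handler',
    });
  }
}
```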
In the right-hand corner, click "Invoke". The output window will open, and you should see the returned payload, including the message we set in our handler. This is not very practical yet. We're still missing an endpoint to call from our client or browser. This can be done by adding a FunctionURL. Simply add the following line in your stack definition. Authentication is disabled for now, but this makes it possible to make a GET request to the lambda and see its result. This might not be the desired situation, and AWS offers options to secure your endpoints. ` After redeploying this change, right-click your Lambda in VS Code and copy the URL. Paste it in your browser, and you should see the result of your Lambda! Our first stack is deployed and working.

Cleanup

By following this article, you should remain within the free tier of AWS and not incur any costs. To keep costs low, it's a good practice to clean up your stacks that are no longer in use. ` The CDK destroy command will remove your stack, but leaves the CDK bootstrapped for future deployments. If you want to fully remove all resources created by following this article, also remove the CloudFormation stack and S3 bucket. This can be done through VS Code by right-clicking your CloudFormation stack and selecting "Delete CloudFormation Stack", and simply "Delete" for the associated S3 bucket. This brings you back to a completely clean slate, and future use of the CDK will need to be bootstrapped again.

Round up

You should now be able to bootstrap CDK, create a stack, and run a Lambda function within the stack which is accessible through a FunctionURL. You can grow your stack by adding more Lambda functions, augmenting the logic of those functions, or adding other AWS resources not covered in this article. The setup created can be torn down and recreated in the exact same way over and over, making it easy to share with your team. Changes are incremental, and can be rolled back if need be. This should offer confidence in managing your infrastructure, compared to manually creating it through the AWS console. Have fun building your own infrastructure!...
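Continuing the sketch from the previous section (inside the same stack constructor, with helloLambda and the cdk/lambda imports in scope), adding a Function URL could look like this. It is an assumption-level sketch, not the article's exact line.

```typescript
// Inside the HelloStack constructor, after creating helloLambda
const fnUrl = helloLambda.addFunctionUrl({
  authType: lambda.FunctionUrlAuthType.NONE, // no auth for now; secure this for real workloads
});

// Surface the generated URL in the deploy output so it can be pasted into the browser
new cdk.CfnOutput(this, 'HelloLambdaUrl', { value: fnUrl.url });
```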


Building Your First Application with AWS Amplify

AWS (Amazon Web Services) is a popular cloud provider, with data centers in regions across the globe. In this article, we will be looking at a particular AWS platform for frontend developers: AWS Amplify. AWS Amplify is a set of tools and features that let web and mobile developers quickly and easily build full-stack applications on AWS. This article is a summary of JavaScript Marathon: AWS for Frontend Developers with Michael Liendo. If you want a more detailed explanation of building and deploying frontend apps with AWS Amplify, I recommend you go and check out the video!

Application User Flow

Most applications need certain key features to be created for users. Let's explore a few of them.

- User Login: This can be created by spinning up an ExpressJS application with authentication, and handling things like password hashing, password policy, and forgot-password flows.
- API integration: This is another common need, as we typically need to handle user data with a backend application.
- Database: Most applications store user information. This would be key in creating an interactive user experience in an application.

Bringing these services together can be a lot for many developers. Developers will also have to consider application scalability as the user base increases.

AWS Amplify

AWS Amplify is built specifically to handle scale for frontend developers, and also provides the opportunity for an application to scale as the application and its users grow. With scalability handled, developers can focus on providing value for their users instead of worrying about scalability at every stage.

AWS Amplify Tools

AWS Amplify tools for building and deploying frontend applications include:

- CLI: To connect the frontend with AWS backend cloud resources.
- UI Components: The AWS UI components library is an open-source design system with cloud-connected components and primitives that simplify building accessible, responsive, and beautiful applications.
- Hosting Solution: For deploying frontend applications, static sites, and server-side apps, with a CI/CD pipeline.
- Amplify Studio: A GUI that lets you plug in a Figma component and automatically convert it into a ReactJS component.

Walking back to how AWS will help manage the user journey we listed above and make developers' lives easier, here are some of the services provided by AWS that help spin up applications with ease:

- User Login: For user login, we can use Amazon Cognito, AWS's user directory service, to handle user authentication, password policies, forgot password, and more.
- API: For API access, we can use AWS AppSync, a serverless GraphQL and Pub/Sub API service.
- Database: For the database, we can use Amazon's DynamoDB, which is a fully managed, serverless, key-value NoSQL database.
- Storage: For asset storage, we can use Amazon Simple Storage Service (Amazon S3).

Building a Project & Project Setup

Now that you're familiar with a few of the services we can use to build an application easily, let's get started and build one together! Before we start, let's install the AWS Amplify CLI. Run: ` This will give us access to Amplify's commands for our application.

The Application

We will be building a Next.js application. This application will be a collection of pictures of dogs. To scaffold a Next application, run: ` Now cd into the application directory. ` Install the packages and dependencies we will be using from AWS: ` Now, open the project in your code editor. We will be using VS Code.
First, we will wrap the root component in an AmplifyProvider component. Open _app.js and replace the code: ` This is to make the application aware of Amplify. We also imported the style library from the Amplify React library. We will be using the installed Amplify CLI tool to initialize the Amplify configuration in our project. To do this, run: ` You can modify the properties as well, but for this demo, we will leave them at the defaults, and when it asks Initialize the project with the above configuration? we will choose NO. This is because we will replace the src directory with the . directory, and the build directory with the .next directory. If you don't already have AWS credentials set up, Amplify will walk you through setting up new credentials. For this demo, we will be accepting the default credential settings provided, but we recommend you fill in the required information for your project. Check out the documentation to learn more. AWS will add a few cloud functions and create a configuration file in the project directory, aws-exports.js. You can add services to your Amplify project by running the amplify add command. For example, to add the authentication service (AWS Cognito), run: ` This will ask for the type of security configuration you want for the project. Next, it asks how you want users to authenticate. This will add authentication to your application. To test it out, let's edit the index.js file and replace the content: ` Now, run the application in the dev environment: ` Navigate to the dev localhost URL in the browser, http://localhost:3000/. The landing URL is now authenticated, and requires a username and password to log in. The application now has full authentication with the ability to sign in. There is a registration function with user detail fields, and there is also a forgotten-password function that emails the user a code to reset the password, all from just a few lines of code. This is a fully functioning application with authentication included locally. To use the full authentication, we will need to push the application to the AWS service. To do that, run: ` This will list the services created in the application and prompt you to confirm whether you want to continue with the command. Upon accepting, it will push the application to the cloud and update the aws-exports.js configuration file with the cloud configuration and AWS services that we enabled in our application. Now, let's modify _app.js to apply the Amplify configuration. Add the Amplify and config imports as follows: ` The authentication configuration handles form validation out-of-the-box, including password policy and email or phone number verification, depending on what you choose. You can view aws-exports.js to confirm the configuration options available for the project. Now, to add an API to the application, run: ` For this demo, we will choose GraphQL for the API service, API key for authentication, and Amazon Cognito. Everything else will be the default. Amplify will auto-generate the GraphQL schema for the project, which you can modify to fit your use case. Push the API updates to AWS: ` Amplify will offer to generate code for your GraphQL API. We suggest you accept the default options.

Storage

We'll add a storage service to our project to allow users to upload favorite dog images. Run: ` You can apply the default settings, or modify them to fit your use case.

Building a Demo app

Now that we have prepared the project, let's modify index.js to implement file upload for authenticated users.
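The referenced _app.js and index.js snippets aren't included above, so here is a hedged sketch of what that wiring typically looks like with Amplify's React packages. The component names, the aws-exports path, and the props passed by withAuthenticator are assumptions based on standard Amplify setups (shown with TypeScript annotations), not the exact code from the stream.

```tsx
// pages/_app.tsx — configure Amplify and wrap the app in AmplifyProvider
import { Amplify } from 'aws-amplify';
import { AmplifyProvider } from '@aws-amplify/ui-react';
import '@aws-amplify/ui-react/styles.css';
import type { AppProps } from 'next/app';
import awsExports from '../aws-exports';

Amplify.configure(awsExports);

export default function MyApp({ Component, pageProps }: AppProps) {
  return (
    <AmplifyProvider>
      <Component {...pageProps} />
    </AmplifyProvider>
  );
}
```

```tsx
// pages/index.tsx — gate the page behind Cognito authentication
import { withAuthenticator } from '@aws-amplify/ui-react';

function Home({ signOut, user }: any) {
  return <button onClick={signOut}>Sign out {user?.username}</button>;
}

export default withAuthenticator(Home);
```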
` Walk Through

First, we created a state to hold a list of dogs' data from the API. We then declared an async function to handle the form submission. Using the AWS component library, we loop through the dogItems, rendering each item to display the uploaded image and details of the dog. We imported the Storage module from amplify, passed dogPicFile to dogPicName for upload, and set the level to protected to give the user access only to read and update the data. Then, we imported the API module from amplify, and destructured the data property. Using the GraphQL code generated for us by amplify when we ran amplify add api, we imported createDogs from mutations so we can post form data to the database with GraphQL. We set a new state with the returned data from the database. With React's useEffect, we declared an async function to fetch data from the database with a GraphQL query, set the state with the returned data, and call fetchDogData. To test the application, run: `

Conclusion

In this article, we learned how to use AWS Amplify to implement authentication, integrate an API with your frontend application, connect with a database, and store files in AWS storage. This can all be accomplished within a short time, using very few lines of code. If you want a more detailed explanation of the content covered in this write-up, I recommend you watch the video by JavaScript Marathon: AWS for Frontend Developers with Michael Liendo on This Dot's YouTube Channel. What are you planning on building with AWS Amplify?...
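As a rough illustration of the flow walked through above, a hook combining the Storage upload with the GraphQL mutation and query might look like the sketch below. The field names (name, picKey), the generated import paths, and the listDogs query are assumptions, not the exact demo code.

```typescript
// A hypothetical hook tying together Storage.put, the createDogs mutation, and a list query
import { useEffect, useState } from 'react';
import { API, Storage, graphqlOperation } from 'aws-amplify';
import { createDogs } from '../graphql/mutations'; // path depends on your codegen settings
import { listDogs } from '../graphql/queries';

export function useDogs() {
  const [dogItems, setDogItems] = useState<any[]>([]);

  // Fetch existing dogs once on mount
  useEffect(() => {
    const fetchDogData = async () => {
      const result: any = await API.graphql(graphqlOperation(listDogs));
      setDogItems(result.data.listDogs.items);
    };
    fetchDogData();
  }, []);

  // Upload the picture to S3, then store the record through the GraphQL API
  const addDog = async (name: string, dogPicFile: File) => {
    const dogPicName = `${Date.now()}-${dogPicFile.name}`;
    await Storage.put(dogPicName, dogPicFile, { level: 'protected' });

    const result: any = await API.graphql(
      graphqlOperation(createDogs, { input: { name, picKey: dogPicName } })
    );
    setDogItems((items) => [...items, result.data.createDogs]);
  };

  return { dogItems, addDog };
}
```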


Remix Deployment with Architecture

Intro

Today's article will give a brief overview of the Architect framework and how to deploy a Remix app. I'll cover a few different topics, such as what Architect is, why it's good to use, and the issues I ran into while using it. It is a straightforward process, and I recommend using it with the Grunge Stack offered by Remix. So let's jump on in and start talking about Architect.

Prerequisites

There are a few prerequisites, and also some basic understanding that is expected going into this. The first is to have a GitHub account, then an AWS account, and finally some basic understanding of how to deploy. I also recommend checking out the *Grunge Stack* here if you run into any issues when we progress further.

What is Architect?

First off, Architect is a simple framework for Functional Web Apps (FWAs) on AWS. Now you might be wondering, "why Architect?" It offers the best developer experience, works locally, has infrastructure as code, is secured to least privilege by default, and is open-source. Architect prioritizes speed, smart configurable defaults, and flexible infrastructure. It also allows users to test things and debug locally. It defines a high-level manifest, and turns a complex cloud infrastructure into a build artifact. It compiles the manifest into AWS CloudFormation and deploys it. Since it's open-source, Architect prioritizes a regular release schedule and backwards compatibility.

Remix deployment

Deploying Remix with Architect is rather straightforward, and I recommend using the Grunge Stack to do it. First, we're going to head on over to the terminal and run *npx create-remix --template remix-run/grunge-stack*. That will get you a Remix template with almost everything you need. *Generating remix template* For the next couple of steps, you need to add your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to your repo's secrets. You'll need to create the secrets on your AWS account, and then add them to your repo. *AWS secrets* *GitHub secrets* The last steps before deploying include giving CloudFormation a SESSION_SECRET of its own for both staging and production environments. Then you need to add an ARC_APP_SECRET. *Adding env variables* ` With all of that out of the way, you need to run *npx arc deploy*, which will deploy your build to the staging environment. You could also do *npx arc deploy --dry-run* beforehand to verify everything is fine.

Issues I Had in My Own Project

Now let's cover some issues I had, and my solutions, in my own project. So, I had a project far into development, and while looking at the Grunge Stack, I was scratching my head trying to figure out what was necessary to bring over to my existing project. I had several instances of trial and error as I plugged in the Architect-related items. For instance: the arc file, which has the app name, HTTP routes, static assets for S3 buckets, and the AWS configuration. I also had to change things like the *remix.config* file, add a *server.ts* file, and add Architect-related dependencies in the package.json. During this process, I would get errors about missing files or missing certain function dirs(), and I dumped a good chunk of time into it. So while continuing to get those errors, I concluded that I would follow the Grunge Stack and their instructions for a new project. Then I would merge my old project with the new project, and that resolved my issues. While not the best way, it did get me going, and did not waste any more time.
That resolved my immediate issue of getting my Remix app deployed to AWS, but then I had to figure out what got pushed. That was easy, since it created a Lambda, an API Gateway, and a bucket based on the items in my arc file. Then I realized, while testing it out live, that my environment variables hadn't carried over, so I had to tweak those on AWS. My project also used a GitHub OAuth app, so that needed to be updated with the new URL. Then it was up and running.

Conclusion

Today's article covered a brief overview of what Architect is, why it's good to use, how to deploy a Remix app, and the issues you might have. I hope it was useful and will give others a good starting point....


Migrating an Amplify Backend on Serverless Framework - Part 2

This is Part Two of a three-part series on Migrating an Amplify Backend on Serverless Framework. You can find Part One here and Part Three here. Welcome to the second part of our "Migrating an Amplify Backend to Serverless Framework" series, where I will give you a step-by-step guide on how to migrate Amplify-based services so they can be deployed using the Serverless Framework. In the first part, we scaffolded our example application and explained how to deploy Cognito user pools. In this blog post, we'll focus on the core of the application, which is the GraphQL API powered by AppSync.

How AppSync Works

Before we go deep into the code, it's worth understanding how AppSync actually works. AppSync is a powerful service by AWS that allows you to deploy GraphQL APIs with ease, as well as connect those APIs with data sources like AWS DynamoDB, Lambda, and more. It can work without Amplify, but when you pair Amplify with it, you get some extra benefits. Deployment, for example, is way easier. All you need is a GraphQL schema, and Amplify will generate all the required AppSync components for it to work end-to-end. Some of these components are:

* Data sources, which are interfaces to other AWS resources such as lambdas and DynamoDB tables.
* Resolvers, which connect your GraphQL types and fields to data sources. A resolver specifies how data is mapped between your GraphQL API and your data sources. This mapping is done using mapping templates, and mapping templates are written using the Apache Velocity Template Language (VTL). A mapping template contains one template for the request, and one for the response.
* DynamoDB tables, usually one for each GraphQL type.

Below, you can see the flow of data when a GraphQL request is made that interacts with a DynamoDB table. A standard GraphQL schema is not enough to generate everything that AppSync needs, though. Amplify needs some additional metadata in the form of Amplify-specific directives which you can place in your GraphQL schema, and Amplify will use the information from the directives to set up everything correctly. Then, during deployment, Amplify will *transform* your schema so that all Amplify-specific directives are stripped, and additional GraphQL types are generated if needed. This is especially important for the @connection directive, which allows you to define 1:1, 1:M, or N:M relationships between models (which are basically GraphQL types annotated with the @model directive). Having additional GraphQL types is necessary in order for GraphQL API clients to properly read those relationships. For example, if the Amplify GraphQL schema contains the following type definition: ` Then, this will be transformed to the following in the standard GraphQL schema: ` It's not quite the same, is it? Yet, only the latter form can be recognized by AppSync. Hence, a simplified sequence of steps executed during deployment is:

1. Transform your GraphQL schema enriched with Amplify directives to a standard GraphQL schema, which is then uploaded to AppSync.
2. Create/update/delete DynamoDB tables based on GraphQL types annotated with @model.
3. Create/update/delete data sources based on directives such as @model and @function.
4. Create/update/delete resolvers and connect them to the GraphQL schema's fields on one side, as well as to data sources on the other side. Resolvers are the "glue" that connects GraphQL fields to data sources.

In our Serverless deployment, we'll need to replicate the above deployment steps. Let's get to it.
Transforming GraphQL Schema

As part of our Amplified ToDo application, we provided a GraphQL schema that will be used for our API. Now, if we tried to deploy the schema to AppSync, with or without Serverless Framework, it would fail, because the schema contains Amplify-specific directives. We need to run it through a *transformer* first - the same one that Amplify runs when the application is deployed using Amplify. Fortunately, the Amplify CLI is open-source, and the transformer is part of it, meaning that we can freely use it. Guided by an excellent article by Ronny Roeller, we wrote a script that takes an Amplify GraphQL schema and produces an AppSync-compatible schema. Not only that, it also generates VTL templates along with it, just like Amplify! The script assumes that the Amplify GraphQL schema is located in the amplify-schema.graphql file (we copied it from amplify/backend/api/amplifiedtodo/schema.graphql). In the script, we first create an instance of GraphQLTransform and pass it different transformer instances, each covering one specific Amplify directive. For example, the ModelConnectionTransformer instance will process @connection directives, and the KeyTransformer will process @key directives. Then, we call the GraphQLTransform.transform() method, which will output the transformed schema to appsync-schema.graphql. Finally, we go through the generated types and fields to generate the mapping templates for the resolvers. Mapping templates are stored in the serverless.mapping-templates.yml file. There are two types of mapping templates in the file: "UNIT" (which is the default) and "PIPELINE". They correspond to unit and pipeline resolvers, respectively. Unit resolvers are used for mapping to a single data source, while pipeline resolvers allow you to invoke a series of functions in a serial manner. In our case, pipeline resolvers are used for invoking lambda-based data sources. Invoking the script is a manual step, and it's best to invoke it before deployment. We made an npm task called deploy that runs the transformer, and then runs sls deploy. This task should be used for deploying from now on, as it makes sure that no changes to the GraphQL schema are left out by accident.

Generating GraphQL TypeScript Types

Another thing that Amplify does is generate TypeScript files that correspond to GraphQL types. Unfortunately, this is hard to do without the Amplify CLI, as the code generation part is deep in the Amplify CLI and cannot be extracted like the transformer. If you use TypeScript, you'll need to keep the Amplify CLI to perform code generation using the amplify codegen command. You can add this command to the deploy command as well. `

Configuring the AppSync Plugin

The final step is configuring the serverless-appsync-plugin so we can deploy our schema to AppSync. Before we do that, we should create the lambdas and DynamoDB tables that our GraphQL API will use.
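The transformer script itself isn't reproduced above; the sketch below illustrates the general shape such a script can take using the legacy Amplify v1 transformer packages. The package names, output paths, and the exact shape of the transform result are assumptions and should be verified against the versions you have installed.

```typescript
// transform-schema.ts — hypothetical sketch of running the Amplify schema through the transformer
import { readFileSync, writeFileSync } from 'fs';
import { GraphQLTransform } from 'graphql-transformer-core';
import { DynamoDBModelTransformer } from 'graphql-dynamodb-transformer';
import { ModelConnectionTransformer } from 'graphql-connection-transformer';
import { KeyTransformer } from 'graphql-key-transformer';
import { FunctionTransformer } from 'graphql-function-transformer';

const amplifySchema = readFileSync('amplify-schema.graphql', 'utf8');

const transformer = new GraphQLTransform({
  transformers: [
    new DynamoDBModelTransformer(),   // handles @model
    new ModelConnectionTransformer(), // handles @connection
    new KeyTransformer(),             // handles @key
    new FunctionTransformer(),        // handles @function
  ],
});

// Produces the AppSync-compatible schema plus generated VTL resolver templates
const result = transformer.transform(amplifySchema);
writeFileSync('appsync-schema.graphql', result.schema);

// result.resolvers maps template file names to VTL contents; a real script would also
// emit the serverless.mapping-templates.yml entries described in the article
for (const [fileName, vtl] of Object.entries(result.resolvers)) {
  writeFileSync(`mapping-templates/${fileName}`, vtl);
}
```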
If you recall from the previous blog post, we've defined one Lambda called processQueue under the Mutation type: ` Let's create a placeholder for that Lambda by copying handlers/insert-user-preference to handlers/process-queue and adding it to our Serverless config: ` We will also need to create one DynamoDB table for each GraphQL type, namely:

* List
* Item
* UserPreference
* NotificationQueue

Let's add the table names to the environment so that they can be passed into our Lambda functions: ` Now, let's create the actual tables under the Resources section: ` Note how we did not need to define all fields that are defined in the schema. DynamoDB is schemaless, meaning that you can provision arbitrary columns to table rows. We only need to define the columns that are primary key columns, as well as those covered by indexes. Indexes are not created automatically by the serverless-appsync-plugin - you need to define them manually in the Resources section. Next, we need to create data sources. Create a file called serverless.data-sources.yml with the following contents: ` The above file defines data sources for our lambdas and DynamoDB tables. The names of the data sources are not arbitrary. They need to match the names that were generated by the transformer script in the serverless.mapping-templates.yml file. Also, each data source references a table by using the Ref intrinsic function to find the table name by resource name (defined under the Resources section of the main Serverless config). The final piece is the function configs, a part of the serverless-appsync-plugin configuration that is required only for pipeline resolvers. In the serverless.mapping-templates.yml file, each mapping template for a pipeline resolver refers to a function config. For example, the pipeline resolver for the processQueue lambda references a function config named InvokeProcessQueueLambdaDataSource. The function config provides a request/response mapping, also written in VTL templates, for the lambda in question. Create a file named serverless.appsync-function-configs.yml, with the following contents: ` Create default-invoke-lambda.req.vtl inside the mapping-templates directory, with the following contents: ` Likewise, create default-invoke-lambda.res.vtl with the following contents: ` As you can see, the VTL request/response templates for pipeline resolvers are not complicated, and they are the same for every pipeline resolver if there are more of them. For that reason, all pipeline resolvers can reference the same files: default-invoke-lambda.req.vtl for the request, and default-invoke-lambda.res.vtl for the response. Finally, to bring all the pieces together, this is the final configuration of the AppSync plugin: ` Looking above, that's quite a bit of configuration work. It's a smooth ride once you set it up, though. To help you understand which configuration references which, here's a diagram with all the pieces.

AppSync Configuration Is Complete

With the above configuration in place, the AppSync configuration is complete. You can view the entire code on GitHub. This was by far the most complex part of the migration. In the next and final blog post, we'll be covering some easier topics, like setting up DynamoDB triggers and configuring S3 buckets....
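For illustration, the processQueue placeholder handler mentioned above could be as small as the following sketch. The file name and return shape are assumptions; the event it receives is whatever the default-invoke-lambda.req.vtl template forwards.

```typescript
// handlers/process-queue.ts — hypothetical placeholder for the processQueue pipeline-resolver Lambda
import type { Handler } from 'aws-lambda';

export const handler: Handler = async (event) => {
  console.log('processQueue invoked with:', JSON.stringify(event));

  // A real implementation would read pending items from the NotificationQueue table
  // (its name is exposed through the environment variables set in the Serverless config)
  // and process them before returning a result to the resolver.
  return { processed: 0 };
};
```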


How to Use Custom Domain with Serverless: The Perfbuddy API Use Case

Recently, Perfbuddy transitioned its backend stack to use Serverless for managing the AWS Amplify backend, instead of using pure Amplify. I was brought on near the end of that process to lend a hand, after previously wrapping up the migration of another project's backend over to the Serverless Framework. Of the tasks I helped with, one of the bigger challenges came from setting up custom domains for our API endpoints that the frontend would point to. That is, instead of a difficult-to-keep-track-of URL generated by AWS, we can use a custom domain such as api.perfbuddy.com that points to an API Gateway domain where the API routes are set up. Much more developer-friendly to remember, _especially_ if you have multiple environments set up for various stages. Fortunately, Serverless has a plugin called Domain Manager that can handle all of that for us. The potential problem? The site had already been deployed in a production state inside another AWS account, and its domain was registered to that account, separately from where the Serverless backend was being set up. There is a process to migrate that, but having never done it before, and with the site needing to stay live, I decided to see if I could make it work as it was. Fortunately, I was able to.

Prerequisite

Before we take a deeper look: if you're following along looking to solve a similar problem and you haven't yet started your Serverless migration, we've discussed that process previously in detail, and you'll definitely want to have the basics already set up for your Serverless deployment instead of doing this all in one go. If you're already deploying with Serverless, you probably also have an IAM role created and set up with the permissions you'll need. Just to be sure, these are the permissions you'll be needing: `

The Problem

As I stated previously, the Perfbuddy domain was registered in a separate AWS account from the one our new Serverless deployment was in under our organization account. Fortunately, I had access to both accounts, so either a) I could make this work, or b) I would have to migrate the domain over and be a nervous wreck the whole time that I'd break something. My first step was just to naively follow the excellent Serverless Domain Manager plugin guide, which got me most of the way there. I added the plugin to the project: ` Then, I added the plugin to the serverless.yml: ` Note: You'll want to make sure you pay attention to the order of your plugins, since for some plugins the order matters greatly. If you've been paying attention to the order of your plugins already, you probably know whether or not it's safe to list it first, last, etc. For my purposes, I added it to the end of the list without issue. Next, I needed to add the plugin configuration under the custom field: ` The documentation specifies multiple ways to set up multiple domains in one configuration, but for my first attempt, I decided to try to get just one domain set up to see if it worked (more on setting up multiple domains later). I left out a few of the parameters that were listed in the guide, and only added a few that I knew I needed to specify. Most parameters have a default value if not specified, which you can find on the plugin's GitHub. After running the create command: ` I hit my first real snag: there was no hosted zone set up for the custom domain.

The Solution

At this step, I feared I would actually have to migrate the domain over.
However, when reading the migration documentation, I noticed a specific line in the second optional step that clued me in on exactly what I needed to know. It reads as follows:

> If you're using Route 53 as the DNS service for the domain, Route 53 doesn't transfer the hosted zone when you transfer a domain to a different AWS account. If domain registration is associated with one account and the corresponding hosted zone is associated with another account, neither domain registration nor DNS functionality is affected. The only effect is that you'll need to sign into the Route 53 console using one account to see the domain, and sign in using the other account to see the hosted zone.

So, since I had access to both accounts under the same organization, it read to me that AWS would handle the DNS automatically without transferring anything. A hosted zone can live in one account while the main domain registration lives in the other. Perfect. Navigating to the Route 53 console in the account where the Serverless backend was being deployed, I created a hosted zone under the exact domain name I wanted to deploy to (I also created hosted zones for the other specific domains I would need). At this point, the Serverless CLI may have worked with the configuration we specified as is, but I decided to add an additional parameter using the hosted zone ID from the one I just created: ` Now, running create_domain worked, and navigating to the API Gateway console showed the custom domain we just created. Additionally, navigating back to the hosted zone we created in the Route 53 console shows two additional records the plugin created for us. Finally, running a deploy command: ` successfully deployed the backend, now with the Domain Manager plugin info appearing in the output: `

Deploying Multiple Domains

Of course, it wasn't just a single API backend I needed to set up; I needed to set up multiple. The problem I saw with using the plugin's built-in capability for multiple domains was that it would try to deploy to all of them at the same time. We needed to deploy to different stages, at different points of development, and release independently between them. So, utilizing the Serverless stage parameters, I was able to use the same single custom domain structure for the Domain Manager plugin configuration. You can even specify whole arrays as a parameter, so I did exactly that (with the other stages and parameters added as needed, of course): ` Note: Be sure to check that you are using the correct YAML formatting and structure for your use case. At one point, I was using the multiple-domains YAML structure, but incorrectly, so I believe my deployments weren't actually going through as a result. I spent far more time resolving that than I needed to. Then, back under the actual plugin configuration, I changed it as follows: ` Now, running any of the commands: ` deployed to all needed environments separately, and pointing my locally running frontend to api.perfbuddy.com worked like a charm.

Conclusion

Working in Serverless truly has been a smooth process, and we're constantly blown away by its power, which can only succinctly be described as magic. And, of course, if you haven't yet checked out Perfbuddy, be sure to do so! It's completely free and was made to help all developers and their teams improve their products....