This Dot Blog
This Dot provides teams with technical leaders who bring deep knowledge of the web platform. We help teams set new standards, and deliver results predictably.
How to configure and optimize a new Serverless Framework project with TypeScript
Elevate your Serverless Framework project with TypeScript integration. Learn to configure TypeScript, enable offline mode, and optimize deployments to AWS with tips on AWS profiles, function packaging, memory settings, and more....
Jan 26, 2024
Deploying apps and services to AWS using AWS Copilot CLI
Learn how to leverage AWS Copilot CLI, a tool that abstracts the complexities of infrastructure management, making the deployment and management of containerized applications on AWS an easy process...
Jan 3, 2024
How to host a full-stack app with AWS CloudFront and Elastic Beanstalk
You have an SPA with a NestJS back-end. What if your app is a hit and you need to be prepared to serve thousands of users? You might need to scale your API horizontally, which means you need to have more instances running behind a load balancer....
Sep 11, 2023
Utilizing AWS Cognito for Authentication
AWS Cognito, one of the most popular services in Amazon Web Services, is at the heart of many web and mobile applications, providing numerous useful user identity and data security features. It is designed to simplify the process of user authentication and authorization, and many developers decide to use it instead of developing their own solution. "Never roll your own authentication" is a common phrase you'll hear in the development community, and not without reason. Building an authentication system from scratch can be time-consuming and error-prone, with a high risk of introducing security vulnerabilities. Existing solutions like AWS Cognito have been built by expert teams, extensively tested, and are constantly updated to fix bugs and meet evolving security standards. Here at This Dot, we've used AWS Cognito together with Amplify in many of our projects, including Let's Chat With, an application that we recently open-sourced. In this blog post, we'll show you how we accomplished that, and how we used various Cognito configuration options to tailor the experience to our app.

Setting Up Cognito

Setting up Cognito is relatively straightforward, but requires several steps. In Let's Chat With, we set it up as follows:

1. Sign in to the AWS Console, then open Cognito.
2. Click "Create user pool". User pools are essentially user directories that provide sign-up and sign-in options, including multi-factor authentication and user-profile functionality.
3. In the first step, select "Email" as the sign-in option, and click "Next".
4. Choose "Cognito defaults" as the password policy, and "No MFA" for multi-factor authentication. Leave everything else at the default, and click "Next".
5. In the "Configure sign-up experience" step, leave everything at the default settings.
6. In the "Configure message delivery" step, select "Send email with Cognito".
7. In the "Integrate your app" step, enter names for your user pool and app client. For example, the user pool might be named "YourAppUserPool_Dev", while the app client could be named "YourAppFrontend_Dev".
8. In the last step, review your settings and create the user pool.

After the user pool is created, make note of its user pool ID, as well as the client ID of the app client created under the user pool. These two values will be passed to the configuration of the Cognito API.

Using the Cognito API

Let's Chat With is built on top of Amplify, AWS's collection of services that make development of web and mobile apps easy. Cognito is one of the services that powers Amplify, and Amplify's SDK offers some helper methods to interact with the Cognito API. In an Angular application like Let's Chat With, the initial configuration of Cognito is typically done in the main.ts file as shown below: ` How the user pool ID and user pool web client ID are injected depends on your deployment option. In our case, we used Amplify and defined the environment variables for injection into the built app using Webpack. Once Cognito is configured, you can use its authentication methods from the Auth class in the @aws-amplify/auth package. For example, to sign in after the user has submitted the form containing the username and password, you can use the Auth.signIn(email, password) method as shown below: ` The logged-in user object is then translated to an instance of CoreUser, the internal representation of the logged-in user. The AuthService class contains many other methods that act as a facade over the Amplify SDK methods. This service is used in authentication effects, since Let's Chat With is based on NgRx and implements many core functionalities through NgRx effects: ` The login component triggers a SignInActions.userSignInAttempted action, which is processed by the above effect.
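Stepping back to the configuration side: the actual snippets are elided from this excerpt, but the Cognito configuration boils down to the two values noted from the console (user pool ID and app client ID) plus a region. Here is a hedged sketch of that shape; all names and IDs below are hypothetical placeholders, not the project's actual code.

```typescript
// Minimal sketch of the values Cognito needs at configuration time. The IDs
// below are hypothetical placeholders; in Let's Chat With, equivalent values
// are injected as environment variables into the built app using Webpack.
interface CognitoAuthConfig {
  region: string;
  userPoolId: string;
  userPoolWebClientId: string;
}

function buildAuthConfig(env: {
  AWS_REGION: string;
  USER_POOL_ID: string;
  USER_POOL_CLIENT_ID: string;
}): CognitoAuthConfig {
  return {
    region: env.AWS_REGION,
    userPoolId: env.USER_POOL_ID, // e.g. "us-east-1_XXXXXXXXX"
    userPoolWebClientId: env.USER_POOL_CLIENT_ID,
  };
}

// In the real app, an object of this shape is passed to Amplify's configure
// call during bootstrap (e.g. in main.ts).
const config = buildAuthConfig({
  AWS_REGION: "us-east-1",
  USER_POOL_ID: "us-east-1_example",
  USER_POOL_CLIENT_ID: "exampleclientid",
});
```

The point is simply that these two console values travel, via environment variables, into the client-side Amplify configuration.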
Depending on the outcome of the signInAdmin call in the AuthService class, the action is translated to either AuthAPIActions.userLoginSuccess or AuthAPIActions.userSignInFailed. The remaining user flows are implemented similarly:

- Clicking signup triggers the Auth.signUp method for user registration.
- Signing out is done using Auth.signOut.

Reacting to Cognito Events

How can you implement additional logic when a signup occurs, such as saving the user to the database? While you can use an NgRx effect to call a backend service for that purpose, it requires additional effort and may introduce a security vulnerability, since the endpoint needs to be open to the public Internet. In Let's Chat With, we used Cognito triggers to perform this logic within Cognito, without the need for extra API endpoints. Cognito triggers are a powerful feature that allows developers to run AWS Lambda functions in response to specific actions in the authentication and authorization flow. Triggers are configured in the "User pool properties" section of user pools in the AWS Console. We have a dedicated Lambda function that runs on post-authentication or post-confirmation events. The Lambda function first checks if the user already exists. If not, it inserts a new user object associated with the Cognito user into a DynamoDB table. The Cognito user ID is read from the event.request.userAttributes.sub property. `

Customizing Cognito Emails

Another Cognito trigger that we found useful for Let's Chat With is the "Custom message" trigger. This trigger allows you to customize the content of verification emails or messages for your app. When a user attempts to register or perform an action that requires a verification message, the trigger is activated, and your Lambda function is invoked. Our Lambda function reads the verification code and the email from the event, and creates a custom-designed email message using the template() function. The template reads the HTML template embedded in the Lambda.
`

Conclusion

Cognito has proven to be reliable and easy to use while developing Let's Chat With. By handling the intricacies of user authentication, it allowed us to focus on developing other features of the application. The next time you create a new app and user authentication becomes a pressing concern, remember that you don't need to build it from scratch. Give Cognito (or a similar service) a try. Your future self, your users, and probably your sanity will thank you. If you're interested in the source code for Let's Chat With, check out its GitHub repository. Contributions are always welcome!...
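The trigger Lambda itself is elided from this excerpt. As an illustration of the logic described in "Reacting to Cognito Events" above, here is a hedged sketch with the DynamoDB calls abstracted into injected callbacks; all names apart from event.request.userAttributes.sub are illustrative, not the project's actual code.

```typescript
// Sketch of a post-confirmation/post-authentication Cognito trigger.
// The DynamoDB client is abstracted behind callbacks so the core logic
// (check existence, insert if missing, return the event) stays visible.
interface CognitoTriggerEvent {
  request: { userAttributes: { sub: string; email?: string } };
}

async function handlePostConfirmation(
  event: CognitoTriggerEvent,
  userExists: (id: string) => Promise<boolean>,
  putUser: (user: { id: string; email?: string }) => Promise<void>
): Promise<CognitoTriggerEvent> {
  // The Cognito user ID lives in event.request.userAttributes.sub.
  const id = event.request.userAttributes.sub;
  if (!(await userExists(id))) {
    await putUser({ id, email: event.request.userAttributes.email });
  }
  // Cognito triggers are expected to return the event object.
  return event;
}
```

In a real Lambda, userExists and putUser would wrap a DynamoDB GetItem/PutItem call on the users table.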
Jul 19, 2023
Avoiding Null and Undefined with NonNullable<T> in TypeScript
Use Cases

Use Case 1: Adding Two Numbers

Scenario: A function that adds two numbers and returns their sum. But if one of the numbers is undefined or null, it returns the other number as-is. `

In this code snippet, the addNumbers() function takes two parameters, a and b. a is a required parameter of type number, while b is an optional parameter of type number or null. The function uses the NonNullable<T> conditional type to ensure that the return value is not null or undefined. If b is null or undefined, the function returns the value of a. Otherwise, it adds a and b together and returns the sum. To handle the case where b is null or undefined, the code uses the nullish coalescing operator, ??, which returns the value on its left-hand side if it is not null or undefined, and the value on its right-hand side otherwise.

Use Case 2: Checking Contact Information

Scenario: A class representing a person, but with optional email and phone properties. The contact() function logs the email and phone numbers if they are defined and not null. Otherwise, it logs a message saying that no contact information is available. `

Explanation: In this code snippet, the Person class has a name property and optional email and phone properties. The contact() function checks if both email and phone are not undefined and not null before logging the details. Otherwise, it logs a message saying that no contact information is available. To initialize the properties with null, the constructor uses the nullish coalescing operator, ??. When creating a new Person, you can pass null or undefined as arguments, and the class will interpret them as null.

Conclusion

Understanding and correctly implementing conditional types like NonNullable<T> in TypeScript is crucial to reduce potential code pitfalls. By reviewing examples of numerical operations and contact information handling, we've seen how this conditional type helps reinforce our code against null or undefined values.
This highlights the utility of TypeScript's conditional types not only for enhancing code stability, but also for easing our coding journey. So keep experimenting and finding the best ways to deploy these tools for creating robust, secure, and efficient programs!...
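The code snippets for both use cases are elided from this excerpt. Here is a hedged reconstruction consistent with the descriptions above; apart from addNumbers, Person, and contact(), the exact names and shapes are assumptions, and contact() returns its message rather than logging it so the behavior is easy to check.

```typescript
// Use Case 1: if b is null or undefined, return a as-is; otherwise the sum.
// NonNullable<number> resolves to number, guaranteeing a non-nullish result.
function addNumbers(a: number, b?: number | null): NonNullable<number> {
  return a + (b ?? 0);
}

// Use Case 2: optional email/phone, normalized to null via ?? in the constructor.
class Person {
  email: string | null;
  phone: string | null;

  constructor(public name: string, email?: string | null, phone?: string | null) {
    this.email = email ?? null;
    this.phone = phone ?? null;
  }

  // Report contact details only when both are present and non-null.
  contact(): string {
    if (this.email !== null && this.phone !== null) {
      return `${this.email} / ${this.phone}`;
    }
    return "No contact information available";
  }
}
```

Note how the narrowing checks (`!== null`) and the nullish coalescing operator do the runtime work, while NonNullable<T> expresses the guarantee at the type level.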
Jun 9, 2023
How to Setup Your Own Infrastructure Using the AWS Toolkit and CDK v2
Mar 8, 2023
Building Your First Application with AWS Amplify
Sep 19, 2022
Remix Deployment with Architecture
Intro

Today's article will give a brief overview of the Architect framework and how to deploy a Remix app with it. I'll cover a few different topics, such as what Architect is, why it's good to use, and the issues I ran into while using it. It is a straightforward process, and I recommend using it with the Grunge Stack offered by Remix. So let's jump on in and start talking about Architect.

Prerequisites

There are a few prerequisites, and also some basic understanding that is expected going into this. The first is to have a GitHub account, then an AWS account, and finally some basic understanding of how to deploy. I also recommend checking out the *Grunge Stack* here if you run into any issues as we progress further.

What is Architect?

First off, Architect is a simple framework for Functional Web Apps (FWAs) on AWS. Now you might be wondering, "why Architect?" It offers the best developer experience, works locally, has infrastructure as code, is secured to the least privilege by default, and is open-source. Architect prioritizes speed, smart configurable defaults, and flexible infrastructure. It also allows users to test things and debug locally. It defines a high-level manifest, and turns a complex cloud infrastructure into a build artifact. It compiles the manifest into AWS CloudFormation and deploys it. Since it's open-source, Architect prioritizes a regular release schedule and backwards compatibility.

Remix deployment

Deploying Remix with Architect is rather straightforward, and I recommend using the Grunge Stack to do it. First, we're going to head on over to the terminal and run *npx create-remix --template remix-run/grunge-stack*. That will get you a Remix template with almost everything you need. *Generating remix template* For the next couple of steps, you need to add your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to your repo's secrets. You'll need to create the secrets on your AWS account, and then add them to your repo.
*AWS secrets* *GitHub secrets*

The last steps before deploying include giving CloudFormation a SESSION_SECRET of its own for both staging and production environments. Then you need to add an ARC_APP_SECRET. *Adding env variables* ` With all of that out of the way, you need to run *npx arc deploy*, which will deploy your build to the staging environment. You could also do *npx arc deploy --dry-run* beforehand to verify everything is fine.

Issues I Had in My Own Project

Now let's cover some issues I had, and how I solved them in my own project. I had a project far into development, and while looking at the Grunge Stack, I was scratching my head trying to figure out what was necessary to bring over to my existing project. I had several instances of trial and error as I plugged in the Architect-related items: for instance, the arc file, which has the app name, HTTP routes, static assets for S3 buckets, and the AWS configuration. I also had to change things like the *remix.config* file, add a *server.ts* file, and add Architect-related dependencies to the package.json. During this process, I kept getting errors about missing files or missing function directories, and I dumped a good chunk of time into it. So, while continuing to get those errors, I concluded that I would follow the Grunge Stack and its instructions for a new project, and then merge my old project with the new one; that resolved my issues. While not the most elegant way, it got me going without wasting any more time. That resolved my immediate issue of getting my Remix app deployed to AWS, but then I had to figure out what got pushed. That was easy, since it created a Lambda, an API gateway, and a bucket based on the items in my arc file. Then I realized, while testing it out live, that my environment variables hadn't carried over, so I had to tweak those on AWS. My project also used a GitHub OAuth app, so that needed to be tweaked with the new URL. Then it was up and running.
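For reference, the arc manifest mentioned above (app name, HTTP routes, static assets, AWS configuration) conventionally looks something like the following sketch; this is not the Grunge Stack's exact file, and the app name and region are placeholders.

```arc
@app
my-remix-app

@http
/*
  method any
  src server

@static

@aws
region us-east-1
```

Architect compiles this high-level manifest into CloudFormation, which is why a Lambda, an API gateway, and a bucket appear after deployment.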
Conclusion

Today's article covered a brief overview of what Architect is, why it's good to use, how to deploy a Remix app with it, and the issues you might run into. I hope it was useful, and will give others a good starting point....
Aug 30, 2022
Migrating an Amplify Backend on Serverless Framework - Part 2
This is Part Two of a three-part series on Migrating an Amplify Backend on Serverless Framework. You can find Part One here and Part Three here.

Welcome to the second part of our "Migrating an Amplify Backend to Serverless Framework" series, where I will give you a step-by-step guide on how to migrate Amplify-based services so they can be deployed using the Serverless Framework. In the first part, we scaffolded our example application and explained how to deploy Cognito user pools. In this blog post, we'll focus on the core of the application, which is the GraphQL API powered by AppSync.

How AppSync Works

Before we go deep into the code, it's worth understanding how AppSync actually works. AppSync is a powerful AWS service that allows you to deploy GraphQL APIs with ease, as well as connect those APIs with data sources like AWS DynamoDB, Lambda, and more. It can work without Amplify, but when you pair Amplify with it, you get some extra benefits. Deployment, for example, is way easier. All you need is a GraphQL schema, and Amplify will generate all the required AppSync components for it to work end-to-end. Some of these components are:

* Data sources, which are interfaces to other AWS resources such as Lambdas and DynamoDB tables.
* Resolvers, which connect your GraphQL types and fields to data sources. A resolver specifies how data is mapped between your GraphQL API and your data sources. This mapping is done using mapping templates, which are written in the Apache Velocity Template Language (VTL). A mapping template contains one template for the request, and one for the response.
* DynamoDB tables, usually one for each GraphQL type.

Below, you can see the flow of data when a GraphQL request is made that interacts with a DynamoDB table: A standard GraphQL schema is not enough to generate everything that AppSync needs, though.
Amplify needs some additional metadata in the form of Amplify-specific directives, which you can place in your GraphQL schema; Amplify will use the information from the directives to set up everything correctly. Then, during deployment, Amplify will *transform* your schema so that all Amplify-specific directives are stripped, and additional GraphQL types are generated if needed. This is especially important for the @connection directive, which allows you to define 1:1, 1:M, or N:M relationships between models (which are basically GraphQL types annotated with the @model directive). Having the additional GraphQL types is necessary for GraphQL API clients to properly read those relationships. For example, if the Amplify GraphQL schema contains the following type definition: ` Then, this will be transformed to the following in the standard GraphQL schema: ` It's not quite the same, is it? Yet, only the latter form can be recognized by AppSync. Hence, a simplified sequence of steps executed during deployment is:

1. Transform your GraphQL schema, enriched with Amplify directives, to a standard GraphQL schema, which is then uploaded to AppSync.
2. Create/update/delete DynamoDB tables based on GraphQL types annotated with @model.
3. Create/update/delete data sources based on directives such as @model and @function.
4. Create/update/delete resolvers, and connect them to the GraphQL schema's fields on one side, and to data sources on the other. Resolvers are the "glue" that connects GraphQL fields to data sources.

In our Serverless deployment, we'll need to replicate the above deployment steps. Let's get to it.

Transforming the GraphQL Schema

As part of our Amplified ToDo application, we provided a GraphQL schema that will be used for our API. Now, if we tried to deploy the schema to AppSync, with or without Serverless Framework, it would fail because the schema contains Amplify-specific directives.
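The schema snippets above are elided from this excerpt. As a hypothetical illustration of the kind of rewriting the @connection directive triggers (type and field names here are invented, not the project's schema):

```graphql
# Amplify schema: a model with a @connection to its child items
type List @model {
  id: ID!
  items: [Item] @connection
}

# After transformation, roughly what AppSync receives: the connection field
# gains pagination arguments and a generated connection type
type List {
  id: ID!
  items(filter: ModelItemFilterInput, limit: Int, nextToken: String): ModelItemConnection
}

type ModelItemConnection {
  items: [Item]
  nextToken: String
}
```

The generated arguments and ModelItemConnection wrapper are what make paginated relationship queries possible for API clients.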
We need to run it through a *transformer* first, the same one that Amplify runs when the application is deployed using Amplify. Fortunately, the Amplify CLI is open-source, and the transformer is part of it, meaning that we can freely use it. Guided by an excellent article by Ronny Roeller, we wrote a script that takes an Amplify GraphQL schema and produces an AppSync-compatible schema. Not only that, it also generates VTL templates along with it, just like Amplify! The script assumes that the Amplify GraphQL schema is located in the amplify-schema.graphql file (we copied it from amplify/backend/api/amplifiedtodo/schema.graphql). In the script, we first create an instance of GraphQLTransform and pass it different transformer instances, each covering one specific Amplify directive. For example, the ModelConnectionTransformer instance processes @connection directives, and the KeyTransformer processes @key directives. Then, we call the GraphQLTransform.transform() method, which outputs the transformed schema to appsync-schema.graphql. Finally, we go through the generated types and fields to generate the mapping templates for the resolvers.

Mapping templates are stored in the serverless.mapping-templates.yml file. There are two types of mapping templates in the file: "UNIT" (the default) and "PIPELINE". They correspond to unit and pipeline resolvers, respectively. Unit resolvers are used for mapping to a single data source, while pipeline resolvers allow you to invoke a series of functions in a serial manner. In our case, pipeline resolvers are used for invoking lambda-based data sources. Invoking the script is a manual step, and it's best to invoke it before deployment. We made an npm task called deploy that runs the transformer and then runs sls deploy. This task should be used for deploying from now on, as it makes sure that no changes to the GraphQL schema are left out by accident.
Generating GraphQL TypeScript Types

Another thing Amplify can do is generate TypeScript files that correspond to your GraphQL types. Unfortunately, this is hard to do without the Amplify CLI, as the code-generation part sits deep inside the Amplify CLI and cannot be extracted like the transformer. If you use TypeScript, you'll need to keep the Amplify CLI to perform code generation using the amplify codegen command. You can add this command to the deploy command as well. `

Configuring the AppSync Plugin

The final step is configuring the serverless-appsync-plugin so we can deploy our schema to AppSync. Before we do that, we should create the lambdas and DynamoDB tables that our GraphQL API will use. If you recall from the previous blog post, we've defined one Lambda, called processQueue, under the Mutation type: ` Let's create a placeholder for that Lambda by copying handlers/insert-user-preference to handlers/process-queue and adding it to our Serverless config: ` We will also need to create one DynamoDB table for each GraphQL type, namely:

* List
* Item
* UserPreference
* NotificationQueue

Let's add the table names to the environment so that they can be passed into our Lambda functions: ` Now, let's create the actual tables under the Resources section: ` Note how we did not need to define all the fields that are defined in the schema. DynamoDB is schemaless, meaning that you can provision arbitrary columns on table rows. We only need to define the primary key columns, as well as those covered by indexes. Indexes are not created automatically by the serverless-appsync-plugin; you need to define them manually in the Resources section. Next, we need to create the data sources. Create a file called serverless.data-sources.yml with the following contents: ` The above file defines data sources for our lambdas and DynamoDB tables. The names of the data sources are not arbitrary.
They need to match the names generated by the transformer script in the serverless.mapping-templates.yml file. Also, each data source references a table by using the Ref intrinsic function to find the table name by resource name (defined under the Resources section of the main Serverless config). The final piece is the function configs, a part of the serverless-appsync-plugin configuration that is required only for pipeline resolvers. In the serverless.mapping-templates.yml file, each mapping template for a pipeline resolver refers to a function config. For example, the pipeline resolver for the processQueue lambda references a function config named InvokeProcessQueueLambdaDataSource. The function config provides a request/response mapping, also written as VTL templates, for the lambda in question. Create a file named serverless.appsync-function-configs.yml, with the following contents: ` Create default-invoke-lambda.req.vtl inside the mapping-templates directory, with the following contents: ` Likewise, create default-invoke-lambda.res.vtl with the following contents: ` As you can see, the VTL request/response templates for pipeline resolvers are not complicated, and they are the same for every pipeline resolver if there are more of them. For that reason, all pipeline resolvers can reference the same files: default-invoke-lambda.req.vtl for the request, and default-invoke-lambda.res.vtl for the response. Finally, to bring all the pieces together, this is the final configuration of the AppSync plugin: ` Looking at the above, that's quite a bit of configuration work. It's a smooth ride once you set it up, though. To help you understand which configuration references which, here's a diagram with all the pieces:

AppSync Configuration Is Complete

With the above configuration in place, the AppSync configuration is complete. You can view the entire code on GitHub. This was by far the most complex part of the migration.
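The VTL files themselves are elided from this excerpt. As a sketch of what Lambda-invoke pipeline templates conventionally look like in AppSync (not necessarily the project's exact files):

```vtl
## Sketch of default-invoke-lambda.req.vtl: forward the GraphQL arguments
## as the payload of a Lambda invocation
{
  "version": "2018-05-29",
  "operation": "Invoke",
  "payload": $util.toJson($ctx.args)
}

## Sketch of default-invoke-lambda.res.vtl: return the Lambda result as-is
$util.toJson($ctx.result)
```

Because the request simply forwards arguments and the response simply forwards the result, the same pair of files can serve every Lambda-backed pipeline resolver.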
In the next and final blog post, we'll be covering some easier topics, like setting up DynamoDB triggers and configuring S3 buckets....
May 20, 2022
How to Use Custom Domain with Serverless: The Perfbuddy API Use Case
Recently, Perfbuddy transitioned its backend stack to the Serverless Framework for managing its AWS Amplify backend, moving away from using pure Amplify. I was brought on near the end of that process to lend a hand, after previously wrapping up the migration of another project's backend to the Serverless Framework. Out of the tasks I helped with, one of the bigger challenges came from setting up custom domains for the API endpoints that the frontend would point to. That is, instead of a difficult-to-keep-track-of URL generated by AWS, we can use a custom domain such as api.perfbuddy.com that points to an API Gateway domain where the API routes are set up. Much more developer-friendly to remember, _especially_ if you have multiple environments set up for various stages. Fortunately, Serverless has a plugin called Domain Manager that can handle all of that for us. The potential problem? The site had already been deployed in a production state inside another AWS account, and its domain was registered to that account -- separately from where the Serverless backend was being set up. There is a process to migrate a domain, but having never done it before, and with the site needing to stay live, I decided to see if I could make it work as it was. Fortunately, I was able to.

Prerequisites

Before we take a deeper look: if you're following along looking to solve a similar problem and you haven't yet started your Serverless migration, we've discussed that process previously in detail, and you'll definitely want to have the basics already set up for your Serverless deployment instead of doing this all in one go. If you're already deploying with Serverless, you probably also have an IAM role created and set up with the permissions you'll need. Just to be sure, these are the permissions required: `

The Problem

As I stated previously, the Perfbuddy domain was registered in a separate AWS account from the one our new Serverless deployment was in under our organization account.
Fortunately, I had access to both accounts, so either a) I could make this work, or b) I would have to migrate the domain over and be a nervous wreck the whole time that I'd break something. My first step was to naively follow the excellent Serverless Domain Manager plugin guide, which got me most of the way there. I added the plugin to the project: ` Then, I added the plugin to the serverless.yml: ` Note: You'll want to pay attention to the order of your plugins, since for some plugins, the order matters greatly. If you've been paying attention to the order of your plugins already, you probably know whether it's safe to list it first, last, etc. For my purposes, I added it to the end of the list without issue. Next, I needed to add the plugin configuration under the custom field: ` The documentation specifies multiple ways to set up multiple domains in one configuration, but for my first attempt, I decided to get just one domain set up to see if it worked (more on setting up multiple domains later). I left out a few of the parameters listed in the guide, and only added the few that I knew I needed to specify. Most parameters have a default value if not specified, which you can find on the plugin's GitHub. After running the create command: ` I hit my first real snag: there was no hosted zone set up for the custom domain.

The Solution

At this step, I feared I might actually have to migrate the domain over. However, while reading the migration documentation, I noticed a specific line in the second optional step that clued me in on exactly what I needed to know. It reads as follows: > If you're using Route 53 as the DNS service for the domain, Route 53 doesn't transfer the hosted zone when you transfer a domain to a different AWS account.
If domain registration is associated with one account and the corresponding hosted zone is associated with another account, neither domain registration nor DNS functionality is affected. The only effect is that you'll need to sign into the Route 53 console using one account to see the domain, and sign in using the other account to see the hosted zone.

So, since I had access to both accounts under the same organization, this read to me that AWS would handle the DNS automatically without transferring anything. A hosted zone can live on one account while the main domain registration lives on the other. Perfect. Navigating to the Route 53 console of the account where the Serverless backend was being deployed, I created a hosted zone under the exact domain name I wanted to deploy to (I also created hosted zones for the other specific domains I would need). At this point, the Serverless CLI may have worked with the configuration we specified as-is, but I decided to add an additional parameter using the hosted zone ID from the one I just created: ` Now, running the create_domain command worked, and navigating to the API Gateway console showed the custom domain we had just created. Additionally, navigating back to the hosted zone in the Route 53 console showed two additional records the plugin created for us. Finally, running a deploy command: ` successfully deployed the backend, now with the domain manager plugin info appearing in the output: `

Deploying Multiple Domains

Of course, it wasn't just a single API backend I needed to set up; I needed to set up multiple. The problem I saw with using the plugin's built-in capability for multiple domains was that it would try to deploy to all of them at the same time. We needed to deploy to different stages, at different points of development, and release independently between them. So, utilizing Serverless stage parameters, I was able to use the same single-custom-domain structure for the domain manager plugin configuration.
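The configuration snippets are elided from this excerpt. A single-domain Domain Manager setup of the kind described might look roughly like the following sketch; the domain, certificate name, and hosted zone ID are placeholders, and the remaining parameters (with their defaults) are documented in the plugin's README.

```yaml
plugins:
  - serverless-domain-manager

custom:
  customDomain:
    domainName: api.perfbuddy.com
    basePath: ''
    stage: ${self:provider.stage}
    certificateName: api.perfbuddy.com
    createRoute53Record: true
    # ID of the hosted zone created manually in the deployment account
    hostedZoneId: Z0000000EXAMPLE
```

With this in place, serverless create_domain provisions the API Gateway custom domain, and subsequent deploys attach the API to it.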
You can even specify whole arrays as a parameter, so I did exactly that (with the other stages and parameters added as needed, of course): ` Note: Be sure you are using the correct YAML formatting and structure for your use case. At one point, I was using the multiple-domains YAML structure, but incorrectly, so my deployments weren't actually going through; I spent far more time resolving that than I needed to. Then, back under the actual plugin configuration, I changed it as follows: ` Now, running any of the commands: ` deployed to all the needed environments separately, and pointing my locally running frontend to api.perfbuddy.com worked like a charm.

Conclusion

Working in Serverless truly has been a smooth process, and we're constantly blown away by its power, which can only succinctly be described as magic. And, of course, if you haven't yet checked out Perfbuddy, be sure to do so! It's completely free, and was made to help all developers and their teams improve their products....
May 11, 2022
Migrating an Amplify Backend to Serverless Framework
This is Part One of a three-part series on Migrating an Amplify Backend on Serverless Framework. You can find Part Two here and Part Three here.

We've used Amplify extensively here at This Dot on various projects. It is a great service by AWS that lets you develop and deploy frontends and backends with ease. While Amplify frontends are relatively straightforward to work with, Amplify backends are a bit trickier. In one project, we had many challenges working on an Amplify backend in a large team. Amplify does quite a bit of code generation whenever you make a change in the backend, and it was challenging for multiple people to work on backend features in parallel. More often than not, we ran into nasty merge conflicts, and occasionally, we even had cases where developers lost entire Amplify environments due to bad merges. Since we have expertise in Serverless Framework, we decided to try migrating our Amplify backend to Serverless Framework so we could speed up development by leveraging Serverless Framework's offline capabilities, as well as generally faster deploys. Also, in Serverless Framework, there's usually only a handful of configuration files to edit, so resolving merge issues is pretty straightforward. This is the story of that migration. It will be a series of blog posts where we show, step by step, how to migrate an Amplify backend to Serverless Framework, so that you can use Serverless Framework alone to deploy your backend services. Note that we recommend having at least some knowledge of Amplify, AWS, and Serverless Framework.

Sample Application

To start off, we've scaffolded a to-do application that we named "Amplified ToDo". We've used Amplify's guide for scaffolding a GraphQL project, and the application itself doesn't have any business logic (nor is any needed for the purpose of this blog post).
We'll only use it as an example of how to model some of Amplify's core features, like the GraphQL API, Cognito user pools, lambdas, lambda triggers, etc. To do this, though, we'll need to make some assumptions about how the application will work. Since it will be a to-do application, we will definitely need a to-do list with to-do items. They can be modeled with the following types: ` There are several things to note here. To demonstrate the use of Cognito user pools, we're assuming that only Cognito-authenticated users will be able to use the app. Both the List and Item types have a cognitoUserId field, which prevents users from reading each other's data through Amplify's @auth directive. We're also assuming that users will want reminders for their tasks, hence the remindAt field in the Item type. Reminders will be sent only to those users who want them, though. We'll create another type, called UserPreference, where we will store reminder preferences for our users: ` A Cognito trigger will create a user preference instance on every new user registration. As for the notifications, there are dozens of ways we could process them, but to demonstrate the usage of DynamoDB triggers, we'll use a dedicated DynamoDB table for queuing notifications. Whenever an Item is updated, a DynamoDB trigger will read its remindAt field and insert a notification into the queue. Let's add a GraphQL type for that: ` Finally, to demonstrate invoking lambdas via the GraphQL API, we'll have a dedicated mutation for triggering the processing of the notification queue. Let's add GraphQL types for that as well: ` You can view the entire schema on GitHub.
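The type snippets above were elided; the full schema is on GitHub. As a rough sketch of what the described types might look like, using only the fields the text mentions (all other field names here are illustrative assumptions):

```graphql
# Hypothetical reconstruction of the core types described in the text.
# cognitoUserId and remindAt come from the article; other fields are assumed.
type List @model @auth(rules: [{ allow: owner, ownerField: "cognitoUserId" }]) {
  id: ID!
  cognitoUserId: String!
  title: String
}

type Item @model @auth(rules: [{ allow: owner, ownerField: "cognitoUserId" }]) {
  id: ID!
  cognitoUserId: String!
  listId: ID!
  name: String
  remindAt: AWSDateTime
}

type UserPreference @model {
  id: ID!
  cognitoUserId: String!
  remindersEnabled: Boolean!
}
```

The @auth rule with an owner field is what restricts each user to their own rows, which is the behavior the series later replicates outside of Amplify.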
With the above assumptions in mind, we can now proceed to model the following Amplify features using Serverless Framework:

* Deploying Cognito user pools
* Invoking lambdas via Cognito user triggers (creation of UserPreference items)
* Setting up a GraphQL API
* Using Amplify-specific directives such as @auth and @model
* Mapping GraphQL types to DynamoDB tables
* Invoking lambdas via GraphQL mutations (processing notification queues)
* Invoking lambdas via DynamoDB triggers (adding notifications to queues on every modification of Item rows)
* Deploying S3 buckets

As part of this blog post, we'll cover the first two points. The remaining points will come in future blog posts.

Building Our Serverless Framework Config

The first time you try to model anything close to an Amplify-powered GraphQL API, you come to appreciate just how much work Amplify does for you! Not only does it provide you with specialized directives for easily modeling your data types (such as @connection for parent-child relationships), but it takes care of the resolving part as well - you don't need to write *any* code to map your GraphQL invocations to a DynamoDB table, for example. Needless to say, Amplify is a powerful beast. Hence, porting an Amplify-based GraphQL API to Serverless Framework is not an easy task. Fortunately, the Amplify CLI, which does the bulk of the work, is open-source, so we can use parts of it to replicate Amplify's behavior. Also, Serverless Framework itself has some nice plugins for working with AppSync, the workhorse behind Amplify's GraphQL API. Joined together, they can help us accomplish what we need.
Initializing Serverless Config

Install the Serverless CLI using the official guide, and initialize your Serverless configuration file by calling serverless in the terminal: ` This will create the following directory structure under your project directory: ` handler.js is just a sample Lambda function, while serverless.yml is the core of the Serverless Framework config, and we'll be spending most of our time there. Now, go to the amplified-todo-api directory and install the serverless-appsync-plugin plugin, as well as the dependencies that we need for transforming the GraphQL schema. We'll need these later on when we configure AppSync. ` Now, add .env to your .gitignore file, and create a .env file that will hold your environment variables. Here, you can store various parameters that may differ from developer to developer, such as the name of the AWS profile that holds the AWS access key ID and secret access key you use for deployment. Let's create that environment variable in the .env file: ` todo is the profile that I use locally, but yours can be named differently. Now, we can start editing the Serverless config file. Let's make some modifications to the config that the Serverless CLI initialized, and try our first deploy: ` In the modified config, we've added support for dotenv files, and defined the profile that we will be using for deployment. We're reading the PROFILE environment variable using Serverless Framework's environment variable syntax. If you run sls deploy now, you will have your first service deployed: `

Deploying Cognito User Pools

Now that we have the basic config set up, let's replace the hello lambda with a lambda that will be invoked after a new app user is registered. The content of the lambda doesn't matter - it might insert a new UserPreference item into a table, for example. What we will show you, though, is how to create and deploy such a lambda, and have it triggered by Cognito.
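Stepping back to the config edits described above: the modified snippets were elided, but a minimal sketch of what the initial serverless.yml might look like could be the following (the service name, runtime, and handler path here are assumptions):

```yaml
# serverless.yml -- sketch of the initial config with dotenv support
# and a profile read from the PROFILE environment variable.
service: amplified-todo-api

useDotenv: true

provider:
  name: aws
  runtime: nodejs14.x
  profile: ${env:PROFILE}

functions:
  hello:
    handler: handler.hello
```

With `useDotenv: true`, Serverless Framework loads the .env file before resolving variables, so `${env:PROFILE}` picks up the profile name each developer sets locally.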
Cognito user pools allow you to hook into various events of a user's lifecycle, such as user authentication or user registration. When a user is registered and confirmed, Cognito triggers a PostConfirmation event. To create a lambda that listens for this event, create something like handlers/insert-user-preference/index.js in the amplified-todo-api directory, with the following content: ` Now, replace the hello function's lambda definition in the Serverless config file with the following: ` And run sls deploy. Here comes the tricky part: by defining that we are listening for PostConfirmation events, Serverless Framework will also implicitly create a Cognito user pool on AWS once you deploy! This is unlike many other resources, which have to be defined explicitly in the Serverless config. The user pool will have default properties, however. If you want to customize those properties, you need to override the generated user pool by creating its resource in the Resources section of the Serverless config file: ` The name of the Cognito user pool resource is important. It needs to be the same as the name of the resource that was created implicitly by Serverless Framework. Serverless Framework uses a special naming convention when creating resources, and you can look into its documentation to see how it forms the name of a Cognito user pool resource that's created implicitly by a lambda trigger. For Cognito user pools, it normalizes the name of the pool and prefixes it with CognitoUserPool, so that in our case, it becomes CognitoUserPoolAmplifiedToDo. Tip: if you are ever unsure what name Serverless Framework will use for an implicitly generated resource, dig into the generated CloudFormation template located in amplified-todo-api/.serverless/cloudformation-template-update-stack.json. In the case of the Cognito user pool, search for the name of the pool.
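The handler code above was elided. As a stand-in sketch of what a PostConfirmation lambda might look like (the real one would insert a UserPreference item; this hypothetical version only logs), note that a Cognito trigger must return the event object it received, or Cognito treats the trigger as failed:

```javascript
// Hypothetical stand-in for handlers/insert-user-preference/index.js.
// A PostConfirmation trigger receives the confirmed user's details and
// must return the event back to Cognito.
const handler = async (event) => {
  // In the real app, this is where a UserPreference item for
  // event.userName would be inserted into DynamoDB.
  console.log(`User confirmed: ${event.userName}`);
  return event;
};

module.exports = { handler };
```

In the Serverless config, a function like this would be wired to the pool with a `cognitoUserPool` event whose `trigger` is set to `PostConfirmation`, which is what causes the implicit pool creation described above.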
In our case, it was AmplifiedToDo, so you should be able to find the CognitoUserPoolAmplifiedToDo resource in the CloudFormation template. The CloudFormation template is big and perhaps intimidating to look at, but it's useful for troubleshooting deployment issues. Once you've figured out the name of the resource, modifying its remaining properties is relatively straightforward. For UserPoolName, we can use any name, but since this is the name that will be shown in the AWS Console, it makes sense to inject provider.stage into it so that we can differentiate user pools by stage. To use the user pool from an external application, such as our UI application, we also need to create a user pool *client*. This is the second resource shown in the previous snippet. The user pool client needs to reference the user pool it is created under, and for this we use the Ref intrinsic function to reference the user pool by resource name (CognitoUserPoolAmplifiedToDo). (AWS intrinsic functions are used in CloudFormation templates so that objects can reference each other.) Now, when you try to deploy, AWS may not allow you to deploy the changes. You might get an error like: ` This is because once you create a Cognito pool, you cannot modify its name, not even through the AWS Console. If this happens, it's best to remove the service using sls remove and re-deploy using sls deploy. For that reason, it's always a good idea to configure Cognito before doing anything else.

Going Forward

This is it for this blog post. In the following weeks, we will continue this project, create our first AppSync configuration, and deploy our GraphQL schema....
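The Resources snippet referenced above was elided. A sketch of what the override might look like, assuming a pool named AmplifiedToDo (the property values beyond the logical names are illustrative):

```yaml
resources:
  Resources:
    # The logical name must match the one Serverless Framework generated
    # implicitly: "CognitoUserPool" + the normalized pool name.
    CognitoUserPoolAmplifiedToDo:
      Type: AWS::Cognito::UserPool
      Properties:
        # Stage-qualified name shown in the AWS Console; cannot be
        # changed after the pool is created.
        UserPoolName: amplified-todo-${self:provider.stage}
        AutoVerifiedAttributes:
          - email
    # User pool client, so external apps (e.g. the UI) can authenticate.
    AmplifiedToDoUserPoolClient:
      Type: AWS::Cognito::UserPoolClient
      Properties:
        ClientName: amplified-todo-client-${self:provider.stage}
        UserPoolId:
          Ref: CognitoUserPoolAmplifiedToDo
```

The `Ref` on the client is the intrinsic-function reference described in the text: it resolves to the ID of the pool created under the CognitoUserPoolAmplifiedToDo logical name.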
Apr 13, 2022