
Angular 17: Continuing the Renaissance
Dive into the Angular Renaissance with Angular 17, emphasizing standalone components, enhanced control flow syntax, and a new lazy-loading paradigm. Discover server-side rendering improvements, hydration stability, and support for view transitions....
Nov 20, 2023
6 mins

The Renaissance of PWAs
What Are PWAs?

Progressive Web Apps, or PWAs, are not a new concept. In fact, they have been around for years, and have been adopted by companies such as Starbucks, Uber, Tinder, and Spotify. Here at This Dot, we have written numerous blog posts on PWAs.

PWAs are essentially web applications that utilize modern web technologies to deliver a user experience akin to native apps. They can work offline, send push notifications, and even be added to a user's home screen, thus blurring the boundaries between web and native apps.

PWAs are built using standard web technologies like HTML, CSS, and JavaScript. However, they leverage advanced web APIs to deliver enhanced capabilities. Most web apps can be transformed into PWAs by incorporating certain features and adhering to standards. The keystones of PWAs are the service worker and the web app manifest. Service workers enable offline operation and background syncing by acting as network proxies, managing requests programmatically. The web app manifest, on the other hand, gives the PWA a native-like presence on the user's device, specifying its appearance when installed.

Looking Back

The concept of PWAs was introduced by Google engineers Alex Russell and Frances Berriman in 2015, even though Steve Jobs had already discussed the idea of web apps that resembled and behaved like native apps as early as 2007. However, despite the widespread adoption of PWAs over the years, Apple's approach to PWAs drastically changed after Steve Jobs' 2007 presentation, distinguishing it from other tech giants such as Google and Microsoft.

As a leader in technological innovation, Apple was notably slower in adopting PWA technology, much of which can be attributed to its business model and the ecosystem it built around the App Store. For example, Safari, Apple's web browser, has historically been slow to adopt the latest web standards and APIs crucial for PWAs. Features such as push notifications, background sync, and access to certain hardware functionalities were unsupported or only partially supported for a long time. As a result, the PWA experience on iOS/iPadOS was not - and to some extent, still isn't - on par with that provided by Android.

Despite the varying degrees of support from different vendors, PWAs have seen a significant increase in adoption since 2015, both by businesses and users, due to their cross-platform nature, offline capabilities, and the enhanced user experience they offer. Major corporations like Twitter, Pinterest, and Alibaba have launched PWAs, leading to substantial increases in user engagement and session duration. For instance, according to a 2017 Pinterest case study, Pinterest's PWA led to a 60% increase in core engagements and a 44% increase in user-generated ad revenue. Google and Microsoft have also championed this technology, integrating more PWA support into their platforms. Google highlighted the importance of PWAs for the mobile web, while Microsoft sought to populate its Windows Store with PWAs.

Apple's Shift Towards PWAs

Despite slower adoption and limited support, Apple isn't completely dismissing PWAs. Recent updates have indicated some promising improvements in PWA capabilities on both iOS/iPadOS and MacOS. For instance, in iOS/iPadOS 16, released last year, Apple added notifications for Home Screen web apps, utilizing the Web Push standard with support for badging. They also included an API for iOS/iPadOS browsers to facilitate the 'Add to Home Screen' feature.
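To make those two keystones concrete, here is a minimal, illustrative sketch of how a page wires them up; the file names `manifest.webmanifest` and `sw.js` are assumptions rather than references to any specific project:

```typescript
// index.html would link the manifest: <link rel="manifest" href="/manifest.webmanifest">
// The manifest itself is a small JSON file declaring the name, icons, and e.g. "display": "standalone".

// Registering the service worker from the page's main script:
if ('serviceWorker' in navigator) {
  navigator.serviceWorker
    .register('/sw.js') // the service worker script that will proxy network requests
    .then((registration) => console.log('Service worker registered with scope:', registration.scope))
    .catch((error) => console.error('Service worker registration failed:', error));
}
```

Once registered, the service worker can intercept fetch events and serve cached responses, which is what enables the offline behavior described above.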
The forthcoming Safari 17 and MacOS Sonoma releases, announced at June's WWDC, promise even more significant changes, most notably:

Installing Web Apps on MacOS, iOS, and iPadOS

Any web app, not just PWAs, can now be added to the MacOS dock from the File menu. Once added, these web apps will open in their own window and integrate with operating system features such as the Stage Manager, Screen Time, Notifications, and Focus. They will also have their own isolated storage, and any cookies present in the Safari browser for that web app at the time of installation will be transferred to this isolated storage. This means many users will not need to re-authenticate to web apps after installing them locally.

PWAs, in particular, can control the appearance and behavior of the installed web app via the PWA manifest. For instance, if your web app already includes navigation controls, or if they're not necessary in the context of the app, you can manage whether the navigation buttons are displayed by setting the `display` configuration option in the manifest to `standalone`.

The `display` option will also be taken into consideration in iOS/iPadOS, where standalone web apps will become *Home Screen web apps*. These apps offer a standalone, app-like experience on iOS, complete with separate cookies and storage from the browser, and improved notification handling.

Improved Notifications

Apple initially added support for notifications in iOS/iPadOS 16, but Safari 17 and MacOS Sonoma take it a step further. If you've already implemented Web Push according to web standards, then push notifications should work for your web page as a web app on Mac without any additional effort. Moreover, the `silent` property is now taken into account, and there are several improvements to the Notifications API to enhance its reliability. These recent updates put the support for notifications on Mac on par with iOS/iPadOS, including support for badging, and seamless integration with the Focus mode.

Improved API

Over the past year, Apple has also introduced several other API-level improvements. The enhancements to the User Activation API aid in determining whether a function that depends on user activation, such as requesting permission to send notifications, is called. Safari 16 updated the un-prefixed Fullscreen API, and introduced preliminary support for the Screen Orientation API. Safari 17 in particular has improved support for the Storage API and added support for `ReadableStream`.

The Renaissance of PWAs?

With the new PWA-related features in the Apple ecosystem, it's hard not to wonder if we are witnessing a renaissance of PWAs. Initially praised for their ability to leverage web technology to create app-like experiences, PWAs went through a period of relative stagnation, especially on Apple's platforms. For a time, Apple was noticeably more conservative in their implementation of PWA features. However, their recent bolstering of PWA support in Safari signals a significant shift, aligning the browser with other major platforms such as Google Chrome and Microsoft Edge, both of which have long supported PWAs.

The implications of this shift are profound. This widespread and robust support for PWAs across all major platforms could effectively reduce the gap between web applications and native applications. PWAs, with their promise of a single, consistent experience across all devices, could become the preferred choice for businesses and developers.
The cost-effectiveness of developing and maintaining one PWA versus separate applications for multiple platforms is an undeniable benefit. The fact that all these platforms are now heavily supporting PWAs might suggest an industry-wide shift toward a more unified and simplified development paradigm, hinting that indeed, we could be on the verge of a PWA renaissance....
Aug 18, 2023
5 mins

Utilizing AWS Cognito for Authentication
AWS Cognito, one of the most popular services of Amazon Web Services, is at the heart of many web and mobile applications, providing numerous useful user identity and data security features. It is designed to simplify the process of user authentication and authorization, and many developers decide to use it instead of developing their own solution.

"Never roll out your own authentication" is a common phrase you'll hear in the development community, and not without a reason. Building an authentication system from scratch can be time-consuming and error-prone, with a high risk of introducing security vulnerabilities. Existing solutions like AWS Cognito have been built by expert teams, extensively tested, and are constantly updated to fix bugs and meet evolving security standards.

Here at This Dot, we've used AWS Cognito together with Amplify in many of our projects, including Let's Chat With, an application that we recently open-sourced. In this blog post, we'll show you how we accomplished that, and how we used various Cognito configuration options to tailor the experience to our app.

Setting Up Cognito

Setting up Cognito is relatively straightforward, but requires several steps. In Let's Chat With, we set it up as follows:

1. Sign in to the AWS Console, then open Cognito.
2. Click "Create user pool" to create a user pool. User pools are essentially user directories that provide sign-up and sign-in options, including multi-factor authentication and user-profile functionality.
3. In the first step, select "Email" as the sign-in option, and click "Next".
4. Choose "Cognito defaults" as the password policy and "No MFA" for multi-factor authentication. Leave everything else at the default, and click "Next".
5. In the "Configure sign-up experience" step, leave everything at the default settings.
6. In the "Configure message delivery" step, select "Send email with Cognito".
7. In the "Integrate your app" step, just enter names for your user pool and app client. For example, the user pool might be named "YourAppUserPoolDev", while the app client could be named "YourAppFrontend_Dev".
8. In the last step, review your settings and create the user pool.

After the user pool is created, make note of its user pool ID, as well as the client ID of the app client created under the user pool. These two values will be passed to the configuration of the Cognito API.

Using the Cognito API

Let's Chat With is built on top of Amplify, AWS's collection of various services that make development of web and mobile apps easy. Cognito is one of the services that powers Amplify, and Amplify's SDK offers some helper methods to interact with the Cognito API.

In an Angular application like Let's Chat With, the initial configuration of Cognito is typically done in the `main.ts` file as shown below:

```typescript
// apps/admin/src/main.ts
Amplify.configure({
  Auth: {
    userPoolId: process.env.USERPOOL_ID,
    userPoolWebClientId: process.env.USERPOOL_WEB_CLIENT_ID,
  },
});
```

How the user pool ID and user pool web client ID are injected depends on your deployment option. In our case, we used Amplify and defined the environment variables for injection into the built app using Webpack.

Once Cognito is configured, you can utilize its authentication methods from the `Auth` class in the `@aws-amplify/auth` package.
For example, to sign in after the user has submitted the form containing the username and password, you can use the `Auth.signIn(email, password)` method as shown below:

```typescript
// libs/core/src/lib/amplify/auth.service.ts
@Injectable({
  providedIn: 'root',
})
export class AuthService {
  constructor(private transloco: TranslocoService) {}

  signInAdmin(email: string, password: string): Observable<CoreUser> {
    return from(Auth.signIn(email, password)).pipe(
      switchMap(() => {
        return this.isAdmin().pipe(
          switchMap((isAdmin) => {
            if (isAdmin) {
              return this.getCurrentUser();
            }
            throw new Error(this.transloco.translate('userAuth.errors.notAdmin'));
          })
        );
      })
    );
  }

  getCurrentUser(): Observable<CoreUser> {
    return from(Auth.currentUserInfo()).pipe(
      filter((user) => !!user),
      map((user) => this.cognitoToCoreUser(user))
    );
  }

  cognitoToCoreUser(cognitoUser: AmplifyCognitoUser): CoreUser {
    return {
      cognitoId: cognitoUser.username,
      emailVerified: cognitoUser.attributes.email_verified,
    };
  }
}
```

The logged-in user object is then translated to an instance of `CoreUser`, which represents the internal representation of the logged-in user.

The `AuthService` class contains many other methods that act as a facade over the Amplify SDK methods. This service is used in authentication effects, since Let's Chat With is based on NgRx and implements many core functionalities through NgRx effects:

```typescript
@Injectable()
export class AuthEffects implements OnInitEffects {
  public signIn$ = createEffect(() =>
    this.actions$.pipe(
      ofType(SignInActions.userSignInAttempted),
      withLatestFrom(this.store.select(AuthSelectors.selectRedirectUrl)),
      exhaustMap(([{ email, password }, redirectUrl]) =>
        this.authService.signInAdmin(email, password).pipe(
          map((user) => AuthAPIActions.userLoginSuccess({ user })),
          tap(() => void this.router.navigateByUrl(redirectUrl || '/reports')),
          catchError((error) =>
            of(
              AuthAPIActions.userSignInFailed({
                errors: [error.message],
                email,
              })
            )
          )
        )
      )
    )
  );
}
```

The login component triggers a `SignInActions.userSignInAttempted` action, which is processed by the above effect. Depending on the outcome of the `signInAdmin` call in the `AuthService` class, the action is translated to either `AuthAPIActions.userLoginSuccess` or `AuthAPIActions.userSignInFailed`.

The remaining user flows are implemented similarly:

- Clicking signup triggers the `Auth.signUp` method for user registration.
- Signing out is done using `Auth.signOut`.
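Those two calls are not shown in the article's service, so here is a hedged sketch of what thin wrappers around them might look like; the function names and attribute map are assumptions, not code from the Let's Chat With source:

```typescript
// Hypothetical standalone helpers mirroring the flows above.
import { Auth } from '@aws-amplify/auth';
import { from, Observable } from 'rxjs';

export function signUp(email: string, password: string): Observable<unknown> {
  // Auth.signUp registers the user in the Cognito user pool.
  return from(
    Auth.signUp({
      username: email,
      password,
      attributes: { email }, // standard Cognito attribute; other attributes would go here
    })
  );
}

export function signOut(): Observable<unknown> {
  // Auth.signOut clears the local session; passing { global: true } would revoke all sessions.
  return from(Auth.signOut());
}
```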
Reacting to Cognito Events

How can you implement additional logic when a signup occurs, such as saving the user to the database? While you can use an NgRx effect to call a backend service for that purpose, it requires additional effort and may introduce a security vulnerability, since the endpoint needs to be open to the public Internet. In Let's Chat With, we used Cognito triggers to perform this logic within Cognito, without the need for extra API endpoints.

Cognito triggers are a powerful feature that allows developers to run AWS Lambda functions in response to specific actions in the authentication and authorization flow. Triggers are configured in the "User pool properties" section of user pools in the AWS Console.

We have a dedicated Lambda function that runs on post-authentication or post-confirmation events. The Lambda function first checks if the user already exists. If not, it inserts a new user object associated with the Cognito user into a DynamoDB table. The Cognito user ID is read from the `event.request.userAttributes.sub` property.

```javascript
async function handler(event, context) {
  const owner = event.request.userAttributes.sub;
  if (owner) {
    const user = await getUser({ owner });
    if (user == null) {
      await addUser({ owner, notificationConfig: DEFAULT_NOTIFICATION_CONFIG });
    }
    context.done(null, event);
  } else {
    context.done(null, event);
  }
}

async function getUser({ owner }) {
  const params = {
    ExpressionAttributeNames: { '#owner': 'owner' },
    ExpressionAttributeValues: { ':owner': owner },
    KeyConditionExpression: '#owner = :owner',
    IndexName: 'byOwner',
    TableName: process.env.USER_TABLE_NAME,
  };
  const { Items } = await documentClient().query(params).promise();
  return Items.length ? Items[0] : null;
}

async function addUser(user) {
  const { owner, notificationConfig } = user;
  const date = new Date();
  const params = {
    Item: {
      id: uuidv4(),
      typename: 'User',
      owner: owner,
      notificationConfig: notificationConfig,
      createdAt: date.toISOString(),
      updatedAt: date.toISOString(),
      termsAccepted: false,
    },
    TableName: process.env.USER_TABLE_NAME,
  };
  await documentClient().put(params).promise();
}
```

Customizing Cognito Emails

Another Cognito trigger that we found useful for Let's Chat With is the "Custom message" trigger. This trigger allows you to customize the content of verification emails or messages for your app. When a user attempts to register or perform an action that requires a verification message, the trigger is activated, and your Lambda function is invoked. Our Lambda function reads the verification code and the email from the event, and creates a custom-designed email message using the `template()` function. The template reads the HTML template embedded in the Lambda.

```javascript
exports.handler = async (event, context) => {
  try {
    if (event.triggerSource === 'CustomMessageSignUp') {
      const { codeParameter } = event.request;
      const { email } = event.request.userAttributes;
      const encodedEmail = encodeURIComponent(email);
      const link = `${process.env.REDIRECT_URL}email=${encodedEmail}&code=${codeParameter}`;
      const createdAt = new Date();
      const year = createdAt.getFullYear();
      event.response.emailSubject = 'Your verification code';
      event.response.emailMessage = template(email, codeParameter, link, year);
    }
    context.done(null, event);
    console.log(`Successfully sent custom message after signing up`);
  } catch (err) {
    context.done(null, event);
    console.error(
      `Error when sending custom message after signing up`,
      JSON.stringify(err, null, 2)
    );
  }
};

const template = (email, code, link, year) => ...;
```

Conclusion

Cognito has proven to be reliable and easy to use while developing Let's Chat With. By handling the intricacies of user authentication, it allowed us to focus on developing other features of the application. The next time you create a new app and user authentication becomes a pressing concern, remember that you don't need to build it from scratch. Give Cognito (or a similar service) a try. Your future self, your users, and probably your sanity will thank you.

If you're interested in the source code for Let's Chat With, check out its GitHub repository. Contributions are always welcome!...
Jul 19, 2023
6 mins

Implementing a Task Scheduler in Node Using Redis
Node.js and Redis are often used together to build scalable and high-performing applications. Although Redis has always been primarily an in-memory data store that allows for fast and efficient data access, over time it has gained many useful features, and nowadays it can be used for things like rate limiting, session management, or queuing. With its excellent support for sorted sets, another feature that can be added to that list is task scheduling.

Node.js doesn't have support for any kind of task scheduling other than the built-in `setInterval()` and `setTimeout()` functions, which are quite simple and don't have task queuing mechanisms. There are third-party packages like node-schedule and node-cron, of course. But what if you wanted to understand how this could work under the hood? This blog post will show you how to build your own scheduler from scratch.

Redis Sorted Sets

Redis has a structure called *sorted sets*, a powerful data structure that allows developers to store data that is both ordered and unique, which is useful in many different use cases such as ranking, scoring, and sorting data. Since their introduction in 2009, sorted sets have become one of the most widely used and powerful data structures in Redis.

To add some data to a sorted set, you would need to use the ZADD command, which accepts three parameters: the name of the sorted set, the name of the member, and the score to associate with that member. When there are multiple members, each with its own score, Redis will sort them by score. This is incredibly useful for implementing leaderboard-like lists.

In our case, if we use a timestamp as a score, this means that we can order sorted set members by date, effectively implementing a queue where members with the earliest timestamp are at the top of the list. If the member name is a task identifier, and the timestamp is the time at which we want the task to be executed, then implementing a scheduler would mean reading the sorted list, and just grabbing whatever task we find at the top!

The Algorithm

Now that we understand the capabilities of Redis sorted sets, we can draft out a rough algorithm that will be implemented by our Node scheduler.

Scheduling Tasks

The scheduling piece includes adding a task to the sorted set, and adding task data to the global set using the task identifier as the key. The steps, sketched with ioredis calls right after this list, are as follows:

1. Generate an identifier for the submitted task using the INCR command. This command will get the next integer sequence each time it's called.
2. Use the SET command to set task data in the global set. The SET command accepts a key and a string. The key must be unique, therefore it can be something like `task:${taskId}`, while the value can be a JSON representation of the task data.
3. Use the ZADD command to add the task identifier and the timestamp to the sorted set. The name of the sorted set can be something simple like `sortedTasks`, while the set member is the task identifier and the score is the timestamp.
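Here is a minimal sketch of those three steps using ioredis directly; the key names mirror the ones used later in the article, and the connection details are assumptions:

```typescript
import Redis from 'ioredis';

const redis = new Redis(6379, 'localhost'); // assumed local Redis instance

async function scheduleTask(data: unknown, runAt: number): Promise<number> {
  // 1. INCR gives us a unique, monotonically increasing task identifier.
  const taskId = await redis.incr('taskCounter');
  // 2. SET stores the task payload under a unique key derived from the identifier.
  await redis.set(`task:${taskId}`, JSON.stringify(data));
  // 3. ZADD queues the identifier in the sorted set, scored by its execution timestamp.
  await redis.zadd('sortedTasks', runAt, taskId);
  return taskId;
}
```

Calling `scheduleTask({ name: 'Test data' }, Date.now() + 10000)` would queue a task due ten seconds from now, which mirrors what the `schedule()` function does in the full implementation below.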
Processing Tasks

The processing part is an endless loop that checks if there are any tasks to process, otherwise it waits for a predefined interval before trying again. The algorithm can be as follows:

1. Check if we are still allowed to run. We need a way to stop the loop if we want to stop the scheduler. This can be a simple boolean flag in the code.
2. Use the ZRANGE command to get the first task in the list. ZRANGE accepts several useful arguments, such as the score range (the timestamp interval, in our case), and the offset/limit. If we provide it with the following arguments, we will get the next task we need to execute:
   - Minimal score: 0 (the beginning of time)
   - Maximum score: current timestamp
   - Offset: 0
   - Count: 1
3. If there is a task found:
   1. Get the task data by executing the GET command on the `task:${taskId}` key.
   2. Deserialize the data and call the task handler.
   3. Remove the task data using the DEL command on the `task:${taskId}` key.
   4. Remove the task identifier from the sorted set by calling ZREM on the `sortedTasks` key.
   5. Go back to point 2 to get the next task.
4. If there is no task found, wait for a predefined number of seconds before trying again.

The Code

Now to the code. We will have two objects to work with. The first one is `RedisApi`, and this is simply a façade over the Redis client. For the Redis client, we chose to use ioredis, a popular Redis library for Node.

```javascript
const Redis = require('ioredis');

function RedisApi(host, port) {
  const redis = new Redis(port, host, { maxRetriesPerRequest: 3 });

  return {
    getFirstInSortedSet: async (sortedSetKey) => {
      const results = await redis.zrange(
        sortedSetKey,
        0,
        new Date().getTime(),
        'BYSCORE',
        'LIMIT',
        0,
        1
      );
      return results?.length ? results[0] : null;
    },
    addToSortedSet: (sortedSetKey, member, score) => {
      return redis.zadd(sortedSetKey, score, member);
    },
    removeFromSortedSet: (sortedSetKey, member) => {
      return redis.zrem(sortedSetKey, member);
    },
    increaseCounter: (counterKey) => {
      return redis.incr(counterKey);
    },
    setString: (stringKey, value) => {
      return redis.set(stringKey, value, 'GET');
    },
    getString: (stringKey) => {
      return redis.get(stringKey);
    },
    removeString: (stringKey) => {
      return redis.del(stringKey);
    },
    isConnected: async () => {
      try {
        // Just get some dummy key to see if we are connected
        await redis.get('dummy');
        return true;
      } catch (e) {
        return false;
      }
    },
  };
}
```

The `RedisApi` function returns an object that has all the Redis operations that we mentioned previously, with the addition of `isConnected`, which we will use to check if the Redis connection is working.

The other object is the `Scheduler` object, and it has three functions:

- `start()` to start the task processing
- `stop()` to stop the task processing
- `schedule()` to submit new tasks

The `start()` and `schedule()` functions contain the bulk of the algorithm we wrote above. The `schedule()` function adds a new task to Redis, while the `start()` function creates a `findNextTask()` function internally, which it schedules recursively while the scheduler is running. When creating a new `Scheduler` object, you need to provide the Redis connection details, a polling interval, and a task handler function. The task handler function will be provided with task data.
```javascript
function Scheduler(
  pollingIntervalInSec,
  taskHandler,
  redisHost,
  redisPort = 6379
) {
  const redisApi = new RedisApi(redisHost, redisPort);

  let isRunning = false;

  return {
    schedule: async (data, timestamp) => {
      const taskId = await redisApi.increaseCounter('taskCounter');
      console.log(
        `Scheduled new task with ID ${taskId} and timestamp ${timestamp}`,
        data
      );
      await redisApi.setString(`task:${taskId}`, JSON.stringify(data));
      await redisApi.addToSortedSet('sortedTasks', taskId, timestamp);
    },
    start: async () => {
      console.log('Started scheduler');
      isRunning = true;

      const findNextTask = async () => {
        const isRedisConnected = await redisApi.isConnected();
        if (isRunning && isRedisConnected) {
          console.log('Polling for new tasks');
          let taskId;
          do {
            taskId = await redisApi.getFirstInSortedSet('sortedTasks');
            if (taskId) {
              console.log(`Found task ${taskId}`);
              const taskData = await redisApi.getString(`task:${taskId}`);
              try {
                console.log(`Passing data for task ${taskId}`, taskData);
                taskHandler(JSON.parse(taskData));
              } catch (err) {
                console.error(err);
              }
              redisApi.removeString(`task:${taskId}`);
              redisApi.removeFromSortedSet('sortedTasks', taskId);
            }
          } while (taskId);

          setTimeout(findNextTask, pollingIntervalInSec * 1000);
        }
      };

      findNextTask();
    },
    stop: () => {
      isRunning = false;
      console.log('Stopped scheduler');
    },
  };
}
```

That's it! Now, when you run the scheduler and submit a simple task, you should see an output like below:

```javascript
const scheduler = new Scheduler(
  5,
  (taskData) => {
    console.log('Handled task', taskData);
  },
  'localhost'
);

scheduler.start();

// Submit a task to execute 10 seconds later
scheduler.schedule({ name: 'Test data' }, new Date().getTime() + 10000);
```

```
Started scheduler
Scheduled new task with ID 4 and timestamp 1679677571675 { name: 'Test data' }
Polling for new tasks
Polling for new tasks
Polling for new tasks
Found task 4 with timestamp 1679677571675
Passing data for task 4 {"name":"Test data"}
Handled task { name: 'Test data' }
Polling for new tasks
```

Conclusion

Redis sorted sets are an amazing tool, and Redis provides you with some really useful commands to query or update sorted sets with ease. Hopefully, this blog post was an inspiration for you to consider using sorted sets in your applications. Feel free to use StackBlitz to view this project online and play with it some more....
May 3, 2023
5 mins

Next.js Authentication Using OAuth
Modern web apps have come a long way from their early days, and as a result, users have come to expect certain features. One such feature is being able to authenticate in the web app using external accounts owned by providers such as Facebook, Google, or GitHub. Not only is this way of authenticating more secure, but there is less effort required by the user. With only a few clicks, they can sign in to your web app.

Such authentication is done using the OAuth protocol. It's a powerful and very commonly used protocol that allows users to authenticate with third-party applications using their existing login credentials. These days, it has become an essential part of modern web applications. In this blog post, we will explore how to implement OAuth authentication in a Next.js application.

Why OAuth?

Implementing authentication using OAuth is useful for a number of reasons. First of all, it allows users to sign in to your web app using their existing credentials from a trusted provider, such as Facebook and Google. This eliminates the need to go through a tedious registration process, and most importantly, it eliminates the need to come up with a password for the web app. This has many benefits for both the web app owner and the user. Neither needs to store the password anywhere, as the password is handled by the trusted OAuth provider. This means that even if the web app gets hacked for some reason, the attacker will not gain access to the user password. For exactly that reason, you'll often hear experienced developers advise to "never roll your own authentication".

OAuth in Next.js

Next.js is the most popular React metaframework, and this gives you plenty of options and libraries for implementing authentication in your app. The most popular one, by far, is definitely Auth.js, formerly named NextAuth.js. With this library, you can get OAuth running in only a few simple steps, as we'll show in this blog post. We'll show you how to utilize the latest features in both Next.js and Auth.js to set up an OAuth integration using Facebook.

Implementing OAuth in Next.js 13 Using Auth.js

Creating Facebook App

Before starting with the project, let's create an app on Facebook. This is a prerequisite to using Facebook as an OAuth provider. To do this, you'll need to go to Meta for Developers and create an account there by clicking on "Get Started". Once this is done, you can view the apps dashboard and click "Create App" to create your app.

Since this is an app that will be used solely for Facebook login, we can choose "Consumer" as the type of the app, and you can pick any name for it. In our case, we used "Next.js OAuth Demo". After the app is created, it's immediately visible on the dashboard. Click the app, and then click Settings / Basic on the left menu to show both the app ID and the app secret - this will be used by our Next.js app.

Setting Up Next.js

For the purpose of this blog post, we'll create a new Next.js project from scratch so you can have a good reference project with minimum features and dependencies. In the shell, execute the `npx create-next-app@latest --typescript` command and follow the prompts:

```shell
✔ What is your project named? … nextjs-with-facebook-oauth
✔ Would you like to use ESLint with this project? … Yes
✔ Would you like to use src/ directory with this project? … No
✔ Would you like to use experimental app/ directory with this project? … Yes
✔ What import alias would you like configured? … @/

Creating a new Next.js app in /Users/dario/Projects/nextjs-with-facebook-oauth.
```
As you can see, we've also used this opportunity to play with the experimental `app` directory, which is still in beta, but is the way Next.js apps will be built in the future. In our project, we've also set up Tailwind just to design the login page more quickly.

Next, install the Auth.js library:

```shell
npm install @auth/core next-auth
```

Now we need to create a catch-all API route under `/api/auth` that will be handled by the Auth.js library. We'll do this by creating the following file:

```typescript
// pages/api/auth/[...nextauth].ts
import NextAuth, { NextAuthOptions } from "next-auth";
import FacebookProvider from "next-auth/providers/facebook";

export const authOptions: NextAuthOptions = {
  providers: [
    FacebookProvider({
      clientId: process.env.FACEBOOK_APP_ID as string,
      clientSecret: process.env.FACEBOOK_APP_SECRET as string,
    }),
  ],
};

export default NextAuth(authOptions);
```

Note that even though we will be utilizing Next.js 13's `app` directory, we need to place this route in the `pages` directory, as Auth.js doesn't yet support placing its API handler in the `app` directory. This is the only case where we will be using `pages`, though.

In your project root, create a `.env.local` file with the following contents:

```
FACEBOOK_APP_ID=[app ID from the Facebook apps dashboard]
FACEBOOK_APP_SECRET=[app secret from the Facebook apps dashboard]
NEXTAUTH_SECRET=[generate this value by going to https://generate-secret.vercel.app/32]
NEXTAUTH_URL=http://localhost:3000
```

All the above environment variables except `NEXTAUTH_URL` are considered secret, and you should avoid committing them in the repository.

Now, moving on to the React components, we'll need to have a few components that will perform the following functionality:

- Display the sign-in button if the user is not authenticated
- Otherwise, display the user name and a sign-out button

The `Home` component that was auto-generated by Next.js is a server component, and we can use `getServerSession()` from Auth.js to get the user's session. Based on that, we'll show either the sign-in component or the logged-in user information. The `authOptions` object provided to `getServerSession()` is the one defined in the API route.

```typescript
// /app/page.tsx
import "./globals.css";
import { getServerSession } from "next-auth/next";
import { authOptions } from "pages/api/auth/[...nextauth]";
import { UserInformation } from "@/app/UserInformation";
import { SignIn } from "@/app/SignIn";

export default async function Home() {
  const session = await getServerSession(authOptions);
  return (
    <main>
      {session?.user?.name ? (
        <UserInformation username={session.user.name} />
      ) : (
        <SignIn />
      )}
    </main>
  );
}
```

The `SignIn` component has the sign-in button. The sign-in button needs to open a URL on Facebook that will initiate the authentication process. Once the authentication process is completed, it will invoke a "callback" - a special URL on the app side that is handled by Auth.js.

```typescript
// app/SignIn.tsx
const loginUrl = `https://www.facebook.com/v16.0/dialog/oauth?client_id=${
  process.env.FACEBOOK_APP_ID
}&redirect_uri=${encodeURI(
  `${process.env.NEXTAUTH_URL}/api/auth/callback/facebook`
)}`;

export function SignIn() {
  return <a href={loginUrl}>Continue with Facebook</a>;
}
```
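As an aside, Auth.js also ships a client-side `signIn()` helper in `next-auth/react` that triggers the same flow without constructing the provider URL by hand. The component below is only a sketch of that alternative, not part of the demo project:

```typescript
"use client";

// Hypothetical alternative sign-in button using the next-auth/react helper.
import { signIn } from "next-auth/react";

export function SignInWithHelper() {
  // signIn("facebook") redirects the browser through the same /api/auth callback flow.
  return <button onClick={() => signIn("facebook")}>Continue with Facebook</button>;
}
```

Either approach ends up at the same `/api/auth/callback/facebook` callback handled by Auth.js.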
The `UserInformation` component, on the other hand, is displayed after the authentication process is completed. Unlike other components, this needs to be a client component to utilize the *signOut* method from Auth.js, which only works client-side.

```typescript
// app/UserInformation.tsx
"use client";

import { signOut } from "next-auth/react";

export interface UserInformationProps {
  username: string;
}

export function UserInformation({ username }: UserInformationProps) {
  return (
    <p>
      You are logged in as {username}.{" "}
      <button onClick={() => signOut({ redirect: true })}>Sign out</button>.
    </p>
  );
}
```

And that's it! Now run the project using `npm run dev` and you should be able to authenticate to Facebook.

Conclusion

In conclusion, implementing OAuth-based authentication in Next.js is relatively straightforward thanks to Auth.js. This library not only comes with built-in Facebook support, but it also comes with 60+ other popular services, such as Google, Auth0, and more. We hope this blog post was useful, and you can always refer to the CodeSandbox project if you want to view the full source code.

For other Next.js demo projects, be sure to check out starter.dev, where we already have a Next.js starter kit that can give you the best practices for integrating Next.js with other libraries....
Mar 27, 2023
6 mins

Splitting Work: Multi-Threaded Programming in Deno
Deno is a new runtime for JavaScript/TypeScript built on top of the V8 JavaScript engine. It was created as an alternative for Node.js, with a focus on security and modern language features. Here at This Dot, we've been working with Deno for a while, and we've even created a starter kit that you can use to scaffold your next backend Deno project. The starter kit uses many standard Deno modules, such as the Oak web server and the DenoDB ORM.

One issue you may encounter, when scaling an application from this starter kit, is how to handle expensive or long-running asynchronous tasks and operations in Deno without blocking your server from handling more requests. Deno, just like Node.js, uses an event loop in order to process asynchronous tasks. This event loop is responsible for managing the flow of Deno applications and handling the execution of asynchronous tasks. The event loop is executed in a single thread. Therefore, if there is some CPU-intensive or long-running logic that needs to be executed, it needs to be offloaded from the main thread. This is where Deno workers come into play.

Deno workers are built upon the Web Worker API specification, and provide a way to run JavaScript or TypeScript code in separate threads, allowing you to execute CPU-intensive or long-running tasks concurrently, without blocking the event loop. They communicate with the main process through a message-passing API.

In this blog post, we will show you how to expand on our starter kit using Deno workers. In our starter kit API, where we have CRUD operations for managing technologies, we'll modify the create endpoint to also read an image representing the technology and generate thumbnails for that image.

Generating thumbnails

Image processing is CPU-intensive. If the image being processed is large, it may require a significant amount of CPU resources to complete in a timely manner. When including image processing as part of an API, it's definitely a good idea to offload that processing to a separate thread if you want to keep your API responsive. Although there are many image processing libraries out there for the Node ecosystem, the Deno ecosystem does not have as many for now. Fortunately, for our use case, using a simple library like deno-image is good enough. With only a few lines of code, you can resize any image, as shown in the below example from deno-image's repository:

```typescript
import { resize } from "https://deno.land/x/deno_image/mod.ts";

const img = await resize(Deno.readFileSync("./demo/img.jpg"), {
  width: 100,
  height: 100,
});

Deno.writeFileSync("./demo/result.jpg", img);
```

Let's now create our own thumbnail generator. Create a new file called `generate_thumbnails.ts` in the `src/worker` folder of the starter kit:

```typescript
// src/worker/generate_thumbnails.ts
import { resize } from 'https://deno.land/x/deno_image@v0.0.2/index.ts';
import { createHash } from 'https://deno.land/std@0.104.0/hash/mod.ts';

export async function generateThumbnails(imageUrl: string): Promise<void> {
  const imageResponse = await fetch(imageUrl);
  const imageBufferArray = new Uint8Array(await imageResponse.arrayBuffer());

  for (const size of [75, 100, 125, 150, 200]) {
    const img = await resize(imageBufferArray, {
      width: size,
      height: size,
    });
    const imageUrlHash = createHash("sha1").update(imageUrl).toString();
    Deno.writeFileSync(`./public/images/${imageUrlHash}-${size}x${size}.png`, img);
  }
}
```

The function uses fetch to retrieve the image from a remote URL, and stores it in a local buffer.
Afterwards, it goes through a predefined list of thumbnail sizes, calling `resize()` for each, and then saving each image to the `public/images` folder, which is a public folder of the web server. Each image's filename is generated from the original image's URL, and appended with the thumbnail dimensions.

Calling the web worker

The web worker itself is a simple Deno module which defines an event handler for incoming messages from the main thread. Create `worker.ts` in the `src/worker` folder:

```typescript
// src/worker/worker.ts
import { generateThumbnails } from './generate_thumbnails.ts';

self.onmessage = async (event) => {
  console.log('Image processing worker received a new message', event.data);
  const { imageUrl } = event.data;
  await generateThumbnails(imageUrl);
  self.close();
};
```

The event's data property expects an object representing a message from the main thread. In our case, we only need an image URL to process an image, so `event.data.imageUrl` will contain the image URL to process. Then, we call the `generateThumbnails` function on that URL, and then we close the worker when done.

Now, before calling the web worker to resize our image, let's modify the `Technology` type from the GraphQL schema in the starter kit to accept an image URL. This way, when we execute the mutation to create a new technology, we can execute the logic to read the image, and resize it in the web worker.

```typescript
// src/graphql/schema/technology.ts
import { gql } from '../../../deps.ts';

export const technologyTypes = gql`
  type Technology {
    id: String!
    displayName: String!
    description: String!
    url: String!
    createdAt: String!
    updatedAt: String!
    imageUrl: String!
  }
  ...
`;
```

After calling `deno task generate-type-definition` to generate new TypeScript files from the modified schema, we can now use the `imageUrl` field in our mutation handler, which creates a new instance of the technology. At the top of the `mutation_handler.ts` module, let's define our worker:

```typescript
// src/graphql/resolvers/mutation_handler.ts
import { GraphqlContext } from '../interfaces/graphql_interfaces.ts';
// other imports

const thumbnailWorker = new Worker(new URL("../../worker/worker.ts", import.meta.url).href, {
  type: "module",
});
```

This is only done once, so that Deno loads the worker on module initialization. Afterwards, we can send messages to our worker on every call of the mutation handler using `postMessage`:

```typescript
// src/graphql/resolvers/mutation_handler.ts
export const createTechnology = async (
  parent: unknown,
  { input }: MutationCreateTechnologyArgs,
  { cache }: GraphqlContext,
): Promise<Technology> => {
  await cache.invalidateItem('getTechnologies');
  const technologyModel = await TechnologyRepository.create({
    ...input,
  });

  // Generate thumbnails asynchronously in a separate thread
  thumbnailWorker.postMessage({ imageUrl: input.imageUrl });

  return {
    id: technologyModel.id,
    displayName: technologyModel.displayName,
    description: technologyModel.description,
    url: technologyModel.url,
    createdAt: technologyModel.createdAt,
    updatedAt: technologyModel.updatedAt,
  } as Technology;
};
```

With this implementation, your API will remain responsive, because post-processing actions such as thumbnail generation are offloaded to a separate worker. The main thread and the worker thread communicate with a simple messaging system.
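The snippets above don't show what happens if thumbnail generation throws inside the worker. Since Deno workers follow the Web Worker API, one option is to attach error listeners next to the worker's definition; this is a hedged sketch and not part of the starter kit:

```typescript
// Hypothetical error handling for the thumbnail worker (not part of the starter kit).
thumbnailWorker.onerror = (event) => {
  // Keep the API running even if thumbnail generation fails; just log the problem.
  console.error('Thumbnail worker error:', event.message);
  event.preventDefault(); // stop the error from propagating to the main thread
};

thumbnailWorker.onmessageerror = (event) => {
  console.error('Thumbnail worker received a message it could not deserialize', event);
};
```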
Conclusion

Overall, Deno is a powerful and efficient runtime for building server-side applications, small and large. Its combination of performance and ease of use makes it an appealing choice for developers looking to build scalable and reliable systems. With its support for the Web Worker API spec, Deno is also well-suited for performing large-scale data processing tasks, as we've shown in this blog post.

If you want to learn more about Deno, check out deno.framework.dev for a curated list of libraries and resources. If you are looking to start a new Deno project, check out our Deno starter kit resources at starter.dev....
Mar 21, 2023
4 mins

Setting Up TypeORM Migrations in an Nx/NestJS Project
TypeORM is a powerful Object-Relational Mapping (ORM) library for TypeScript and JavaScript that serves as an easy-to-use interface between an application's business logic and a database, providing an abstraction layer that is not tied to a particular database vendor. TypeORM is the recommended ORM for NestJS as both are written in TypeScript, and TypeORM is one of the most mature ORM frameworks available for TypeScript and JavaScript.

One of the key features of any ORM is handling database migrations, and TypeORM is no exception. A database migration is a way to keep the database schema in sync with the application's codebase. Whenever you update your codebase's persistence layer, perhaps you'll want the database schema to be updated as well, and you want a reliable way for all developers in your team to do the same with their local development databases.

In this blog post, we'll take a look at how you could implement database migrations in your development workflow if you use a NestJS project. Furthermore, we'll give you some ideas of how nx can help you as well, if you use NestJS in an nx-powered monorepo.

Migrations Overview

In a nutshell, migrations in TypeORM are TypeScript classes that implement the `MigrationInterface` interface. This interface has two methods: `up` and `down`, where `up` is used to execute the migration, and `down` is used to rollback the migration.

Assuming that you have an entity (a class representing the table) as below:

```typescript
import { Entity, Column, PrimaryGeneratedColumn } from "typeorm"

@Entity()
export class Post {
  @PrimaryGeneratedColumn()
  id: number

  @Column()
  title: string

  @Column()
  text: string
}
```

If you generate a migration from this entity, it could look as follows:

```typescript
import { MigrationInterface, QueryRunner } from 'typeorm';

export class CreatePost1674827561606 implements MigrationInterface {
  name = 'CreatePost1674827561606';

  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(
      `CREATE TABLE "post" ("id" SERIAL NOT NULL, "title" character varying NOT NULL, "text" character varying NOT NULL, CONSTRAINT "PK_be5fda3aac270b134ff9c21cdee" PRIMARY KEY ("id"))`
    );
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`DROP TABLE "post"`);
  }
}
```

As can be seen by the SQL commands, the `up` method will create the `post` table, while the `down` method will drop it. How do we generate the migration file, though? The recommended way is through the TypeORM CLI.

TypeORM CLI and TypeScript

The CLI can be installed globally by using `npm i -g typeorm`. It can also be used without installation by utilizing the npx command: `npx typeorm`.

The TypeORM CLI comes with several scripts that you can use, depending on the project you have, and whether the entities are in JavaScript or TypeScript, with ESM or CommonJS modules:

- `typeorm`: for JavaScript entities
- `typeorm-ts-node-commonjs`: for TypeScript entities using CommonJS
- `typeorm-ts-node-esm`: for TypeScript entities using ESM

Many of the TypeORM CLI commands accept a data source file as a mandatory parameter. This file provides configuration for connecting to the database as well as other properties, such as the list of entities to process.
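Alongside the entities, the data source is also typically where the generated migration classes are registered, via the `migrations` option, which is what the `migration:run` command later relies on to find them. The snippet below is a hedged sketch of just that part; the glob paths are assumptions about the project layout:

```typescript
// Sketch of the migrations-related part of a data source file (paths are assumptions).
import { DataSource } from 'typeorm';

export default new DataSource({
  type: 'postgres',
  // ...connection options as shown in the next example...
  entities: ['src/models/*.entity.ts'],
  // migration:run and migration:revert will look at the classes matched here.
  migrations: ['src/migrations/*.ts'],
});
```

If the config is transpiled and run from `dist`, as shown later in this article, the glob would need to point at the transpiled files instead (for example, `dist/migrations/*.js`).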
The data source file should export an instance of `DataSource`, as shown in the below example:

```typescript
// typeorm.config.ts
import { DataSource } from 'typeorm';
import { Post } from './models/post.entity';

export default new DataSource({
  type: 'postgres',
  host: process.env.DATABASE_HOST,
  port: parseInt(process.env.DATABASE_PORT as string),
  username: process.env.DATABASE_USERNAME,
  password: process.env.DATABASE_PASSWORD,
  database: process.env.DATABASE_NAME,
  entities: [Post],
});
```

To use this data source, you would need to provide its path through the `-d` argument to the TypeORM CLI. In a NestJS project using ESM, this would be:

```shell
typeorm-ts-node-esm -d src/typeorm.config.ts migration:generate CreatePost
```

If the `DataSource` did not import the `Post` entity from another file, this would most likely succeed. However, in our case, we would get an error saying that we "cannot use import statement outside a module". The `typeorm-ts-node-esm` script expects our project to be a module -- and any importing files need to be modules as well. To turn the `Post` entity file into a module, it would need to be named `post.entity.mts` to be treated as a module.

This kind of approach is not always preferable in NestJS projects, so one alternative is to transform our `DataSource` configuration to JavaScript - just like NestJS is transpiled to JavaScript through Webpack. The first step is the transpilation step:

```shell
tsc src/typeorm.config.ts --outDir "./dist"
```

Once transpiled, you can then use the regular `typeorm` CLI to generate a migration:

```shell
typeorm -d dist/typeorm.config.js migration:generate CreatePost
```

Both commands can be combined together in a `package.json` script:

```json
// package.json
{
  "scripts": {
    "typeorm-generate-migrations": "tsc src/typeorm.config.ts --outDir ./dist && typeorm -d dist/typeorm.config.js migration:generate"
  }
}
```

After the migrations are generated, you can use the `migration:run` command to run the generated migrations. Let's upgrade our `package.json` with that command:

```json
// package.json
{
  "scripts": {
    "typeorm-build-config": "tsc src/typeorm.config.ts --outDir ./dist",
    "typeorm-generate-migrations": "npm run typeorm-build-config && typeorm -d dist/typeorm.config.js migration:generate",
    "typeorm-run-migrations": "npm run typeorm-build-config && typeorm -d dist/typeorm.config.js migration:run"
  }
}
```

Using Tasks in Nx

If your NestJS project is part of an nx monorepo, then you can utilize nx project tasks. The benefit of this is that nx will detect your `tsconfig.json` as well as inject any environment variables defined in the project. Assuming that your NestJS project is located in an app called `api`, the above npm scripts can be written as nx tasks as follows:

```json
// apps/api/project.json
{
  // ...
  "targets": {
    "build-migration-config": {
      "executor": "@nrwl/node:webpack",
      "outputs": ["{options.outputPath}"],
      "options": {
        "outputPath": "dist/apps/typeorm-migration",
        "main": "apps/api/src/app/typeorm.config.ts",
        "tsConfig": "apps/api/tsconfig.app.json"
      }
    },
    "typeorm-generate-migrations": {
      "executor": "@nrwl/workspace:run-commands",
      "outputs": ["{options.outputPath}"],
      "options": {
        "cwd": "apps/api",
        "commands": ["typeorm -d ../../dist/apps/typeorm-migration/main.js migration:generate"]
      },
      "dependsOn": ["build-migration-config"]
    },
    "typeorm-run-migrations": {
      "executor": "@nrwl/workspace:run-commands",
      "outputs": ["{options.outputPath}"],
      "options": {
        "cwd": "apps/api",
        "commands": ["typeorm -d ../../dist/apps/typeorm-migration/main.js migration:run"]
      },
      "dependsOn": ["build-migration-config"]
    }
  },
  "tags": []
}
```

The `typeorm-generate-migrations` and `typeorm-run-migrations` tasks depend on the `build-migration-config` task, meaning that they will always transpile the data source config first, before invoking the `typeorm` CLI.

For example, the previous `CreatePost` migration could be generated through the following command:

```shell
nx run api:typeorm-generate-migrations CreatePost
```

Conclusion

TypeORM is an amazing ORM framework, but there are a few things you should be aware of when running migrations within a big TypeScript project like NestJS. We hope we managed to give you some tips on how to best incorporate migrations in a NestJS project, with and without nx....
Feb 22, 2023
4 mins

Setting Up Reverse Proxy in Heroku Using Nginx
An overview of the various options you have when setting up nginx as a reverse proxy in Heroku....
Nov 4, 2022
5 mins

Combining Validators and Transformers in NestJS
When building a new API, it is imperative to validate that requests towards the API conform to a predefined specification or a contract. For example, the specification may state that an input field must be a valid e-mail string. Or, the specification may state that one field is optional, while another field is mandatory.

Although such validation can also be performed on the client side, we should never rely on it alone. There should always be a validation mechanism on the server side as well. After all, you never know who's acting on behalf of the client. Therefore, you can never fully trust the data you receive.

Popular backend frameworks usually have very good support for validation out of the box, and NestJS, which we will cover in this blog post, is no exception. In this blog post, we will be focusing on NestJS's validation using `ValidationPipe` - specifically on one lesser-known feature, which is the ability to not only validate input, but transform it beforehand as well, thereby combining transformation and validation of data in one go.

Using ValidationPipe

To test this out, let's build a `UsersController` that supports getting a list of users, with the option to filter by several conditions. After scaffolding our project using `nest new [project-name]`, let's define a class that will represent this collection of filters, and name it `GetUsersQuery`:

```typescript
class GetUsersQuery {
  userIds: string[];
  nameContains: string;
  pageSize: number;
}
```

Now, let's use it in the controller:

```typescript
class User {
  id: string;
  name: string;
  active = false;
}

@Controller('users')
export class UsersController {
  @Get()
  getUsers(@Query() query: GetUsersQuery): User[] {
    console.log(JSON.stringify(query));
    return [
      { id: '1', name: 'Zeus Carver', active: true },
      { id: '2', name: 'Holly Gennero', active: true },
    ];
  }
}
```

The problem with this approach is that there is no validation performed whatsoever. Although we've defined `userIds` as an array of strings, and `pageSize` as a number, this is just compile-time verification - there is no runtime validation. In fact, if you execute a GET request on `http://localhost:3000/users?userIds=1,2,3&pageSize=3`, the query object will actually contain only string fields:

```json
{
  "userIds": "1,2,3",
  "pageSize": "3"
}
```

There's a way to fix this in NestJS. First, let's install the dependencies needed for using data transformation and validation in NestJS:

```shell
npm i --save class-validator class-transformer
```

As their names would suggest, the `class-validator` package brings support for validating data, while the `class-transformer` package brings support for transforming data. Each package adds some decorators of their own to aid you in this. For example, the `class-validator` package has the `@IsNumber()` decorator to perform runtime validation that a field is a valid number, while the `class-transformer` package has the `@Type()` decorator to perform runtime transformation from one type to another.

Having that in mind, let's decorate our `GetUsersQuery` a bit:

```typescript
class GetUsersQuery {
  @IsArray()
  @IsOptional()
  @Transform(({ value }) => value.split(','))
  userIds: string[];

  @IsOptional()
  nameContains: string;

  @IsOptional()
  @IsNumber()
  @Type(() => Number)
  pageSize: number;
}
```

This is not enough, though. To utilize the `class-validator` decorators, we need to use the `ValidationPipe`.
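One common way to wire it up, shown here as a minimal sketch, is to register the pipe globally in `main.ts` so every route gets validated; the example below is illustrative, while this article uses the per-route `@UsePipes` decorator shown next:

```typescript
// main.ts (sketch) - registering ValidationPipe for the whole application
import { NestFactory } from '@nestjs/core';
import { ValidationPipe } from '@nestjs/common';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  // transform: true enables the class-transformer step described below
  app.useGlobalPipes(new ValidationPipe({ transform: true }));
  await app.listen(3000);
}
bootstrap();
```

With a global pipe in place, the `@UsePipes` decorator on individual routes becomes optional.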
Additionally, to utilize the `class-transformer` decorators, we need to use `ValidationPipe` with its `transform: true` flag:

```typescript
class User {
  id: string;
  name: string;
  active = false;
}

@Controller('users')
export class UsersController {
  @Get()
  @UsePipes(new ValidationPipe({ transform: true }))
  getUsers(@Query() query: GetUsersQuery): User[] {
    console.log(JSON.stringify(query));
    return [
      { id: '1', name: 'Zeus Carver', active: true },
      { id: '2', name: 'Holly Gennero', active: true },
    ];
  }
}
```

Here's what happens in the background. As said earlier, by default, every path parameter and query parameter comes over the network as a `string`. We _could_ convert these values to their JavaScript primitives in the controller (array of strings and a number, respectively), or we can use the `transform: true` property of the `ValidationPipe` to do this automatically.

NestJS does need some guidance on how to do it, though. That's where `class-transformer` decorators come in. Internally, NestJS will use Class Transformer's `plainToClass` method to convert the above object to an instance of the `GetUsersQuery` class, using the Class Transformer decorators to transform the data along the way. After this, our object becomes:

```json
{
  "userIds": ["1", "2", "3"],
  "pageSize": 3
}
```

Now, Class Validator comes in, using its annotations to validate that the data comes in as expected. Why is Class Validator needed if we already transformed the data beforehand? Well, Class Transformer will not throw any errors if it failed to transform the data. This means that, if you provided a string like "testPageSize" to the `pageSize` query parameter, our query object will actually come in as:

```json
{
  "userIds": ["1", "2", "3"],
  "pageSize": null
}
```

And this is where Class Validator will kick in and raise an error that `pageSize` is not a proper number:

```json
{
  "statusCode": 400,
  "message": [
    "pageSize must be a number conforming to the specified constraints"
  ],
  "error": "Bad Request"
}
```

Other transformation options

The `@Type` and `@Transform` decorators give us all kinds of options for transforming data. For example, strings can be converted to dates and then validated using the following combination of decorators:

```typescript
@IsOptional()
@Type(() => Date)
@IsDate()
registeredSince: Date;
```

We can do the same for booleans:

```typescript
@IsOptional()
@Type(() => Boolean)
@IsBoolean()
isActive: boolean;
```

If we wanted to define advanced transformation rules, we can do so through an anonymous function passed to the `@Transform` decorator. With the following transformation, we can also accept `isActive=1` in addition to `isActive=true`, and it will properly get converted to a boolean value:

```typescript
@IsOptional()
@Transform(({ value }) => value === '1' || value === 'true')
@IsBoolean()
isActive: boolean;
```

Conclusion

This was an overview of the various options you have at your disposal when validating and transforming data. As you can see, NestJS gives you many options to declaratively define your validation and transformation rules, which will be enforced by `ValidationPipe`. This allows you to focus on your business logic in controllers and services, while being assured that the controller inputs have been properly validated.

You'll find the source code for this blog post's project on our GitHub....
Oct 27, 2022
3 mins

Building Your First App Using Fresh Framework
The Fresh framework is the new kid on the block among frontend meta-frameworks, and this blog post will guide you through creating a currency conversion app using Fresh....
Oct 7, 2022
6 mins

NestJS API Versioning Strategies
Versioning is an important part of API design. It's also one of those project aspects that is not given enough thought upfront, and it often happens that it comes into play late in the game, when it's difficult to introduce breaking changes (and introducing versioning can sometimes be a breaking change). In this blog post, we will describe the various versioning strategies that you can implement in NestJS, with a special focus on the highest-matching version selection. This is a strategy that you might consider when you want to minimize the amount of changes needed to upgrade your API-level versions.

Types of versioning

In NestJS, there are four different types of versioning that can be implemented:

**URI versioning** The version will be passed within the URI of the request. For example, if a request comes in to `/api/v1/users`, then `v1` marks the version of the API. This is the default in NestJS.

**Custom header versioning** A custom request header will specify the version. For example, `X-API-Version: 1` in a request to `/api/users` will request the v1 version of the API.

**Media type versioning** Similar to custom header versioning, a header will specify the version. Only, this time, the standard media accept header is used. For example: `Accept: application/json;v=2`.

**Custom versioning** Any aspect of the request may be used to specify the version(s). A custom function is provided to extract said version(s). For example, you can implement query parameter versioning using this mechanism.

URI versioning and custom header versioning are the most common choices when implementing versioning.

Before deciding which type of versioning you want to use, it's also important to define the versioning strategy. Do you want to version on the API level? Or on the endpoint level? If you want to go with the endpoint-versioning approach, this gives you more fine-grained control over your endpoints, without needing to reversion the entire API. The downside of this approach is that it may get difficult to track endpoint versions. How would an API client know which version is the latest, or which endpoints are compatible with each other? There would need to be a discovery mechanism for this, or just very well-maintained documentation.

API-level versioning is more common, though. With API-level versioning, every time you introduce a breaking change, you deliver a new version of the entire API, even though internally, most of the code is unchanged. There are some strategies to mitigate this, and we will focus on one in particular in this blog post. But first, let's see how we can enable versioning on our API.

Applying versions to your endpoints

The first step is to enable versioning on the NestJS application:

```typescript
app.enableVersioning({
  type: VersioningType.URI,
});
```

With URI versioning enabled, to apply a version on an endpoint, you'd either provide the version on the `@Controller` decorator to apply the version to all endpoints under the controller, or you'd apply the version to a route in the controller with the `@Version` decorator.
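For completeness, applying a version at the controller level would look roughly like the sketch below; the controller and route are hypothetical:

```typescript
// Applies version '2' to every route in this hypothetical controller.
import { Controller, Get } from '@nestjs/common';

@Controller({ path: 'users', version: '2' })
export class UsersV2Controller {
  @Get()
  findAll() {
    // Reachable at /api/v2/users with URI versioning enabled.
    return 'findAll() v2';
  }
}
```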
```typescript
import { Controller, Get, Param, Version } from '@nestjs/common';

@Controller('users')
export class UsersController {
  @Get()
  @Version('1')
  findAll() {
    return 'findAll()';
  }

  @Get(':id')
  findOne(@Param('id') id: string) {
    return `findOne(${id})`;
  }
}
```

We can invoke `findAll()` using curl:

```shell
➜ nestjs-versioning-strategies git:(main) ✗ curl http://localhost:3000/api/v1/users
findAll()%
```

How can we invoke `findOne()`, though? Since only `findAll()` is versioned, invoking `findOne()` needs to be done without a version. When you request an endpoint without a version, NestJS will try to find so-called "version-neutral" endpoints, which are the endpoints that are not annotated with any version. In our case, this means the URI we use will not contain `v1` or any other version in the path:

```shell
➜ nestjs-versioning-strategies git:(main) ✗ curl http://localhost:3000/api/users/1
findOne(1)%
```

This happens because, implicitly, NestJS considers the "version-neutral" version to be the default version if no version is requested by the API client. The default version is the version that is applied to all controllers/routes that don't have a version specified via the decorators. The versioning configuration we wrote earlier could have just as easily been written as:

```typescript
app.enableVersioning({
  type: VersioningType.URI,
  defaultVersion: VERSION_NEUTRAL,
});
```

Meaning, any controllers/routes without a version (such as `findAll()` above) will be given the "version-neutral" version by default. If we don't want to use version-neutral endpoints, then we can specify some other version as the default version:

```typescript
app.enableVersioning({
  type: VersioningType.URI,
  defaultVersion: '1',
});
```

The `findOne()` endpoint will now return a 404 unless you call it with an explicit version. This is because we no longer have any "version-neutral" versions defined anywhere (neither on the controllers/routes nor in the `defaultVersion` property).

```shell
➜ nestjs-versioning-strategies git:(main) ✗ curl http://localhost:3000/api/users/1
{"statusCode":404,"message":"Cannot GET /api/users/1","error":"Not Found"}%
```

Multiple versions

Multiple versions can be applied to a controller/route by setting the version to an array.

```typescript
import { Controller, Get, Param, Version } from '@nestjs/common';

@Controller('users')
export class UsersController {
  @Get()
  @Version(['1', '2'])
  findAll() {
    return 'findAll()';
  }

  @Get(':id')
  findOne(@Param('id') id: string) {
    return `findOne(${id})`;
  }
}
```

Invoking `/api/v1/users` or `/api/v2/users` will both land on the same `findAll()` method in the controller. Multiple versions can also be set in the `defaultVersion` of the versioning configuration:

```typescript
app.enableVersioning({
  type: VersioningType.URI,
  defaultVersion: ['1', '2'],
});
```

This simply means that controllers/routes without a version decorator will be applied to both version 1 and version 2.

Selection of highest-matching version

Imagine the following scenario: you've decided to use API-level versioning, but you don't want to update all of your controllers/routes every time you increase a version of the API. You only want to do it on those that had breaking changes. Other controllers/routes should remain at whatever version they are currently. Currently, in NestJS, there is no way of accomplishing this with just a configuration option. Fortunately, though, the versioning config allows you to define a custom version extractor.
A version extractor is simply a function that tells NestJS which versions the client is requesting, in order of preference. For example, the version extractor might return an array such as `['3', '2', '1']`, meaning the client is requesting version 3, or version 2 if 3 is not available, or version 1 if neither 2 nor 3 is available. This kind of highest-matching version selection does have a caveat, though: it does not reliably work with the Express server, so we need to switch to the Fastify server instead. Fortunately, that is easy in NestJS. Install the Fastify adapter first:

```shell
npm i --save @nestjs/platform-fastify
```

Next, provide the `FastifyAdapter` to the `NestFactory`:

```typescript
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { VersioningType } from '@nestjs/common';
import { FastifyAdapter } from '@nestjs/platform-fastify';

async function bootstrap() {
  const app = await NestFactory.create(AppModule, new FastifyAdapter());
  app.setGlobalPrefix('api');
  app.enableVersioning({
    type: VersioningType.URI,
    defaultVersion: '1',
  });
  await app.listen(3000);
}
bootstrap();
```

And that's it. Now we can proceed to writing the version extractor:

```typescript
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { VersioningType } from '@nestjs/common';
import { FastifyAdapter } from '@nestjs/platform-fastify';
import { FastifyRequest } from 'fastify';

const DEFAULT_VERSION = '1';

const extractor = (request: FastifyRequest): string | string[] => {
  const requestedVersion =
    (request.headers['x-api-version'] as string) ?? DEFAULT_VERSION;
  // If requested version is N, then this generates an array like: ['N', 'N-1', 'N-2', ..., '1']
  return Array.from(
    { length: parseInt(requestedVersion) },
    (_, i) => `${i + 1}`,
  ).reverse();
};

async function bootstrap() {
  const app = await NestFactory.create(AppModule, new FastifyAdapter());
  app.setGlobalPrefix('api');
  app.enableVersioning({
    type: VersioningType.CUSTOM,
    extractor,
    defaultVersion: DEFAULT_VERSION,
  });
  await app.listen(3000);
}
bootstrap();
```

The version extractor uses the `x-api-version` header to extract the requested version, and then returns an array of all possible versions up to and including the requested one. The reason we chose header-based versioning in this example is that it would be too complex to implement URI-based versioning using a version extractor. First of all, the version extractor gets an instance of `FastifyRequest`. This instance does not provide any properties or methods for obtaining parts of the URL; you only get the URL path in the `request.url` property. You would need to parse this yourself if you wanted to extract a route token or a query parameter. Secondly, you would also need to handle the routing based on the requested version.
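As an aside, the query parameter versioning mentioned earlier can be implemented with the same mechanism. A minimal, hypothetical sketch follows; the `api-version` parameter name is an assumption, not something from the original project:

```typescript
import { FastifyRequest } from 'fastify';

// Hypothetical query-parameter-based extractor: reads /api/users?api-version=2
// and falls back to version '1' when the parameter is absent.
const queryParamExtractor = (request: FastifyRequest): string | string[] => {
  const query = request.query as Record<string, string | undefined>;
  return query['api-version'] ?? '1';
};
```

Only the place the version is read from changes; the rest of the versioning configuration stays the same.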
Now, if we add multiple versions to our controller, we will always get the highest supported version:

```typescript
import { Controller, Get, Param, Version } from '@nestjs/common';

@Controller('users')
export class UsersController {
  @Get()
  @Version('2')
  findAll2() {
    return 'findAll2()';
  }

  @Get()
  @Version('1')
  findAll1() {
    return 'findAll1()';
  }

  @Get(':id')
  findOne(@Param('id') id: string) {
    return `findOne(${id})`;
  }
}
```

Let's test this:

```shell
➜ curl http://localhost:3000/api/users/1 --header "X-Api-Version: 1"
findOne(1)%
➜ curl http://localhost:3000/api/users/1 --header "X-Api-Version: 2"
findOne(1)%
➜ curl http://localhost:3000/api/users --header "X-Api-Version: 2"
findAll2()%
➜ curl http://localhost:3000/api/users --header "X-Api-Version: 1"
findAll1()%
```

We have only one `findOne()` implementation, which doesn't have any explicit version applied. However, since the default version is 1 (as configured in the versioning config), version 1 applies to the `findOne()` endpoint. Now, if a client requests version 2 of our API, the version extractor tells NestJS to first try version 2 of the endpoint if it exists, or version 1 if it doesn't. Unlike `findOne()`, `findAll1()` and `findAll2()` have explicit versions applied: version 1 and version 2, respectively. That's why the third and fourth calls return the versions that were explicitly requested by the client.

Conclusion

This was an overview of the tools you have at your disposal for implementing various versioning strategies in NestJS, with a special focus on API-level versioning and highest-matching version selection. As you can see, NestJS provides a very robust way of implementing various strategies. But some come with caveats, so it is always good to know them upfront before deciding which versioning strategy to use in your project. The entire source code for this mini-project is available on GitHub, with the code related to the highest-matching version implementation being in the `highest-matching-version-selection` branch....
Aug 10, 2022
5 mins

Migrating an Amplify Backend on Serverless Framework - Part 3
This is Part Three of a three-part series on Migrating an Amplify Backend to Serverless Framework. You can find Part One here and Part Two here. This is the third and final part of our series, where we're showing the steps needed to migrate an Amplify backend to Serverless Framework. After scaffolding the project in the first part, and setting up the GraphQL API in the second part, what now remains is setting up the final touches, like DynamoDB triggers and S3 buckets. Let's get to it.

DynamoDB Triggers

A DynamoDB trigger allows you to invoke a lambda every time a DynamoDB table is updated, and the lambda will receive the updated row in the input event. In our application, we will be using this to add a new notification to the `NotificationQueue` table every time an `Item` row is created that has `remindAt` set. For this purpose, let's create that lambda, which will be just a placeholder since we're focusing mainly on the Serverless configuration. Copy the contents of `handlers/process-queue/index.js` to `handlers/add-to-queue/index.js`. This lambda has the following content:

```javascript
'use strict';

module.exports.handler = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'Go Serverless v3.0! Your function executed successfully!',
        input: event,
      },
      null,
      2
    ),
  };
};
```

Now we need to make a slight modification to our `Item` table resource to add a stream configuration. Having a stream configured on the DynamoDB table is a prerequisite for a trigger to be invoked on row modification. The stream is configured by adding a `StreamSpecification` property (with a `StreamViewType`) to the table configuration. The table configuration for the `Item` resource now becomes:

```yaml
Resources:
  # ...other resources
  ItemTableResource:
    Type: AWS::DynamoDB::Table
    Properties:
      StreamSpecification:
        StreamViewType: NEW_AND_OLD_IMAGES
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
        - AttributeName: listId
          AttributeType: S
      GlobalSecondaryIndexes:
        - IndexName: byList
          KeySchema:
            - AttributeName: listId
              KeyType: HASH
          Projection:
            ProjectionType: ALL
      BillingMode: PAY_PER_REQUEST
      TableName: ${self:provider.environment.ITEM_TABLE_NAME}
```

The only remaining part is to connect the lambda and the stream configuration together. This is done in the `functions` property of the Serverless configuration:

```yaml
functions:
  # ...other functions
  addToQueue:
    handler: handlers/add-to-queue/index.handler
    events:
      - stream:
          type: dynamodb
          arn: !GetAtt ItemTableResource.StreamArn
```

We have the standard lambda function definition, as well as an `events` property that hooks up the lambda to the stream of the `Item` table. Again, we use an intrinsic function, in this case `!GetAtt`, to fetch the ARN (Amazon Resource Name) of the stream. With this in place, the lambda is now hooked to the `Item` data stream and will begin listening to modification events. One such event might look like this:

```json
{
  "Records": [
    {
      "awsRegion": "us-east-1",
      "dynamodb": {
        "ApproximateCreationDateTime": 1632502576,
        "Keys": {
          "id": { "S": "..." }
        },
        "NewImage": {
          "typename": { "S": "Item" },
          "id": { "S": "..." },
          "remindAt": { "S": "2021-09-24T16:56:15.182Z" },
          "cognitoUserId": { "S": "..." },
          "listId": { "S": "..." },
          "title": { "S": "Some title" },
          "notes": { "S": "Item notes" }
        },
        "SequenceNumber": "853010500000000020163575159",
        "SizeBytes": 356,
        "StreamViewType": "NEW_AND_OLD_IMAGES"
      },
      "eventID": "...",
      "eventName": "INSERT",
      "eventSource": "aws:dynamodb",
      "eventSourceARN": "arn:aws:dynamodb:us-east-1:...",
      "eventVersion": "1.1"
    }
  ]
}
```

Setting up S3 Buckets

In case a user of our app would like to upload an image as part of a todo item's note, we could upload that image to an S3 bucket and then serve it from there when displaying the note in the UI. For this to work, we need to provision an S3 bucket through our Serverless configuration. An S3 bucket is a resource, just like the DynamoDB tables in the configuration. We need to give it a name, so let's configure that in the environment first:

```yaml
provider:
  # ...
  environment:
    # ...other environment variables
    S3_BUCKET_NAME: ${self:service}-${opt:stage, self:provider.stage}-images
```

The S3 bucket name is composed of the service name and stage, suffixed with the string "-images". In our case, for the dev environment, the bucket would be named `amplified-todo-api-dev-images`. Now we need to configure the resources for this S3 bucket. We can append the following configuration to the end of the `Resources` section:

```yaml
Resources:
  # ...other resources
  ImageBucketResource:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: ${self:provider.environment.S3_BUCKET_NAME}
  ImageBucketPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: PublicRead
            Effect: Allow
            Principal: '*'
            Action:
              - 's3:GetObject'
            Resource: !Join ['', ['arn:aws:s3:::', !Ref ImageBucketResource, '/*']]
      Bucket:
        Ref: ImageBucketResource
```

In the above configuration, we create a resource for the bucket, and a policy specifying public read permissions for that resource. Note how `ImageBucketPolicy` references `ImageBucketResource`. We're using intrinsic functions again to avoid hardcoding the image bucket resource name. If we wanted a lambda that uploads to this S3 bucket, we would need to add the permissions for it:

```yaml
provider:
  # ...
  environment:
    # ...other environment variables
    S3_BUCKET_NAME: ${self:service}-${opt:stage, self:provider.stage}-images
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - s3:PutObject
            - s3:GetObject
          Resource: 'arn:aws:s3:::${self:provider.environment.S3_BUCKET_NAME}/*'
```

Our S3 bucket is now set up.

Bonus: Lambda Bundling

The project in this state is relatively simple and should take less than a couple of minutes to deploy. With time, however, it will probably grow larger, and the lambdas may start to require some external dependencies. The deployment will become slower, and lambda deployments will contain more files than are really necessary. At that point, it's a good idea to introduce lambda bundling. serverless-esbuild is a plugin that utilizes esbuild to bundle and minify your lambda code. It's almost zero-config and works out of the box without the need to install any additional plugins. With it, you can use both TypeScript and JavaScript code. To start using it, install it first:

```shell
npm install --save-dev serverless-esbuild
```

Now add it to the plugins array:

```yaml
plugins:
  - serverless-esbuild
  - serverless-appsync-plugin
```

Finally, configure it to both bundle and minify your lambdas:

```yaml
custom:
  esbuild:
    bundle: true
    minify: true
  appSync:
    # ...appSync config
```

That's it. Your lambdas will now be bundled and minified on every deploy.
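As an aside, once the placeholder is replaced with real logic, the `add-to-queue` handler could look roughly like the sketch below, reacting to the stream event shown earlier. This is only an illustration of the flow described above; the `NOTIFICATION_QUEUE_TABLE_NAME` environment variable and the notification item's shape are assumptions, not part of the original project.

```typescript
import { DynamoDB } from 'aws-sdk';

const documentClient = new DynamoDB.DocumentClient();

// Minimal shape of the stream event we care about (see the example event above).
interface StreamEvent {
  Records: Array<{
    eventName?: string;
    dynamodb?: { NewImage?: DynamoDB.AttributeMap };
  }>;
}

export const handler = async (event: StreamEvent): Promise<void> => {
  for (const record of event.Records) {
    // Only react to newly inserted rows that carry a new image.
    if (record.eventName !== 'INSERT' || !record.dynamodb?.NewImage) {
      continue;
    }

    // Convert the DynamoDB-typed attributes into a plain object.
    const item = DynamoDB.Converter.unmarshall(record.dynamodb.NewImage);

    // Skip items that have no reminder set.
    if (!item.remindAt) {
      continue;
    }

    // Queue a notification for the item; the table name env variable and the
    // notification's shape are assumptions for illustration only.
    await documentClient
      .put({
        TableName: process.env.NOTIFICATION_QUEUE_TABLE_NAME!,
        Item: { id: item.id, remindAt: item.remindAt },
      })
      .promise();
  }
};
```

A handler along these lines would be wired up exactly as the `addToQueue` function in the `functions` configuration shown above.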
Conclusion

This is the end of our three-part series on migrating an Amplify backend to Serverless Framework. We hope you enjoyed the journey! Even if you're not migrating from Amplify, these guides should help you configure various services such as AppSync and DynamoDB in Serverless Framework. Don't forget that the entire source code for this project is up on GitHub. Should you need any help with either Amplify or Serverless Framework, though, please do not hesitate to drop us a line!...
Jun 21, 2022
4 mins