Dario Djuric

AUTHOR

Dario Djuric

Senior Software Engineer

Dario is a full-stack engineer who has spent most of his career doing enterprise Java projects. He has always had a hidden passion for frontend, though -- and he is now able to pursue that passion at This Dot. He spends most of his free time with his two sons, but occasionally, he manages to squeeze in some casual sports activities such as jogging, cycling, and soccer.


A Look at Playwright Parallelism

In this blog post, we are exploring Playwright’s parallelism capabilities to speed up test execution....


Deploying Next.js Applications to Fly.io

Fly.io has gained significant popularity in the developer community recently, particularly after being endorsed by Kent C. Dodds for hosting his Epic Web project. It's a go-to choice for hobby projects, thanks to its starting plans that are effectively free, making it highly accessible for individual developers. While Next.js applications are often deployed to Vercel, Fly.io has emerged as a perfectly viable alternative, offering robust hosting solutions and global deployment capabilities. In this blog post, we'll give an overview of how to deploy a Next.js app to Fly.io, mentioning any gotchas you should be aware of along the way.

The Project

Our project is a simple Next.js project using the latest version of Next.js at the time of writing (14). It uses the app directory and integrates with Spotify to get a list of episodes for our podcast, Modern Web. The bulk of the logic is in the page.tsx file, shown below, which represents the front page that is server-rendered after retrieving a list of episodes from Spotify.

`

The getEpisodes function is a custom wrapper around the Spotify API. It uses the Spotify client ID and secret (provided through the SPOTIFY_CLIENT_ID and SPOTIFY_CLIENT_SECRET environment variables, respectively) to get an access token from Spotify and invoke a REST endpoint that returns a list of episodes for the given show ID. As can be seen from the above code, Home is an async, server-rendered component.

Scaffolding of Fly.io Configuration

To get started with Fly.io and deploy a new project using flyctl, you need to go through a few simple steps: installing the flyctl CLI, logging into Fly.io, and using the flyctl launch command.

Installing the CLI

Installing flyctl differs depending on the operating system you use:
- If you're on Windows, the easiest way to install flyctl is by using scoop, a command-line installer. First, install scoop if you haven't already, then run scoop install flyctl in your command prompt or PowerShell.
- For macOS users, you can use Homebrew, a popular package manager. Simply open your terminal and run brew install superfly/tap/flyctl.
- Linux users can install flyctl by running the following script in the terminal: curl -L https://fly.io/install.sh | sh. This will download and install the latest version.

Logging In

After installing flyctl, the next step is to log in to your Fly.io account. Open your terminal or command prompt and enter flyctl auth login. This command will open a web browser prompting you to log in to Fly.io. If you don't have an account, you can create one at this step. Once you're logged in through the browser, the CLI will automatically be authenticated.

Scaffolding the Fly.io Configuration

The next step is to use fly launch to add all the necessary files for deployment, such as a Dockerfile and a fly.toml file, which is the main Fly.io configuration file. This command initiates a few actions:
- It detects your application type and proposes a configuration.
- It sets up your application on Fly.io, including provisioning a new app name if you don't specify one.
- It allows you to select a region to deploy to. There are many regions to choose from, so you can get quite picky here.

Once the process completes, flyctl will be ready to deploy the application. In our case, the process went like this:

`
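Before moving on to deployment, here is a minimal sketch of what a getEpisodes wrapper like the one described above could look like. This is not the project's actual code; the Spotify endpoint URLs, the response handling, and the EpisodeItem type are assumptions made for illustration:

```typescript
// Hypothetical sketch of a Spotify wrapper similar to the getEpisodes function
// described above. Endpoint URLs and the response shape are assumptions.
interface EpisodeItem {
  id: string;
  name: string;
  release_date: string;
}

export async function getEpisodes(showId: string): Promise<EpisodeItem[]> {
  const clientId = process.env.SPOTIFY_CLIENT_ID!;
  const clientSecret = process.env.SPOTIFY_CLIENT_SECRET!;

  // Exchange the client credentials for an access token.
  const tokenResponse = await fetch('https://accounts.spotify.com/api/token', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      Authorization: `Basic ${Buffer.from(`${clientId}:${clientSecret}`).toString('base64')}`,
    },
    body: 'grant_type=client_credentials',
  });
  const { access_token } = await tokenResponse.json();

  // Fetch the episodes for the given show.
  const episodesResponse = await fetch(
    `https://api.spotify.com/v1/shows/${showId}/episodes?limit=20`,
    { headers: { Authorization: `Bearer ${access_token}` } }
  );
  const { items } = await episodesResponse.json();
  return items as EpisodeItem[];
}
```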
Deploying

Now, if this were a simpler Next.js app without any environment variables, running flyctl deploy would build the Docker image in a specialized "builder" app container on Fly.io and deploy that image to the app container running the app. However, in our case, executing flyctl deploy will fail:

`

This is because our page is statically rendered, and the Next.js build process attempts to run Home, our server-rendered component, to cache its output. Before we can do this, we need to add our environment variables so that Fly.io is aware of them. This is a somewhat tricky subject, so let's explain why in the following section.

Handling of Secrets

Most complex web apps will need some secrets injected into the app via environment variables. Environment variables are a good way to inject sensitive information, such as API secret keys, into your web app without storing them in the repository, the file system, or any other unprotected place.

Unlike other providers such as the previously mentioned Vercel, Fly.io distinguishes between build-time and run-time secrets, which are then injected as environment variables. Build-time secrets are those that your app requires to build itself, while run-time secrets are needed while the app is running. In our case, because Next.js will attempt to cache our static pages upfront, the Spotify client ID and client secret are needed during both build time and run time (after the cache expires).

Build-Time Secrets

Our Next.js app is built while building the Docker image, so build-time secrets should be passed to the Docker context. The recommended, Docker-native way of doing this is through Docker's build-time secrets, which are added through a special --mount=type=secret flag to the RUN command that builds the site. This is a relatively new feature that allows you to securely pass secrets needed during the build process without including them in the final image or even in an intermediate layer. This means that, instead of having the following build command in our Dockerfile:

`

we will have:

`

Now, you can either modify the Dockerfile manually and add this, or you can use a helpful little utility called dockerfile:

`

If we were using docker build to build our image, we would pass the secret values like so:

`

However, in our case we use fly deploy, which is just a wrapper around docker build. To pass the secrets, we would use the following command:

`

And now, the app builds and deploys successfully in a few minutes. To summarize: if you have any secrets which are necessary at build time, they will need to be provided to the fly deploy command. This means that if you have a CI/CD pipeline, you will need to make sure that these secrets are available to your CI/CD platform. In the case of GitHub Actions, they would need to be stored in your repository's secrets.

Run-Time Secrets

Run-time secrets are handled in a different way - you need to provide them to Fly.io via the fly secrets set command:

`

Now, you might be wondering why fly deploy cannot use these secrets if they are already stored in Fly.io. The architecture of Fly.io is set up in such a way that reading these secrets through the API, once they are set, is not possible. Secrets are stored in an encrypted vault. When you set a secret using fly secrets set, it sends the secret value through the Fly.io API, which writes it to the vault for your specific Fly.io app. The API servers can only encrypt; they cannot decrypt secret values.
Therefore, the fly deploy process, which is, if you remember, just a wrapper around docker build, cannot access the decrypted secret values.

Other Things to Consider

Beware of .env Files

In Next.js, you can use .env as well as .env.local for storing environment variable values for local development. However, keep in mind that only .env.local files are ignored by the Docker build process via the .dockerignore file generated by Fly.io. This means that if you happen to be using an .env file, it could be bundled into your Docker image, which is potentially a security risk if it contains sensitive information. To prevent this from happening, be sure to add .env to your .dockerignore file as well.

Not Enough Memory?

For larger Next.js sites, you might run into situations where the memory of your instance is simply not enough to run the app, especially if you are on the hobby plan. If that happens, you have two options.

The first one does not incur any additional costs, and it involves increasing the swap size. This is not ideal, as more disk operations are involved, making the entire process slower, but it is good enough if you don't have any other options. To set the swap size to something like 512 MB, you need to add the following line to the fly.toml file near the top:

`

The second option is increasing the memory size of your instance. This does incur additional cost, however. If you decide to use this option, the command to use is:

`

For example, to increase the memory to 1024 MB, you would use the command:

`

After making the changes, you can try redeploying and see if the process is still crashing due to lack of memory.

Conclusion

In conclusion, deploying Next.js applications to Fly.io offers a flexible and robust solution for developers looking for alternatives to more commonly used platforms like Vercel. We hope this blog post has provided you with some useful insights on the things to consider when doing so. Be sure to also check out our Next starter templates on starter.dev if you'd like to integrate a few other frameworks into your Next.js project. The entire source code for this project is available on Stackblitz....


OAuth2 for JavaScript Developers

Using GitHub as an example, the post guides JavaScript developers through the OAuth2 process, emphasizing its importance for secure and efficient third-party integrations in web applications....


Angular 17: Continuing the Renaissance

Dive into the Angular Renaissance with Angular 17, emphasizing standalone components, enhanced control flow syntax, and a new lazy-loading paradigm. Discover server-side rendering improvements, hydration stability, and support for view transitions....


The Renaissance of PWAs

What Are PWAs?

Progressive Web Apps, or PWAs, are not a new concept. In fact, they have been around for years, and have been adopted by companies such as Starbucks, Uber, Tinder, and Spotify. Here at This Dot, we have written numerous blog posts on PWAs.

PWAs are essentially web applications that utilize modern web technologies to deliver a user experience akin to native apps. They can work offline, send push notifications, and even be added to a user's home screen, thus blurring the boundaries between web and native apps.

PWAs are built using standard web technologies like HTML, CSS, and JavaScript. However, they leverage advanced web APIs to deliver enhanced capabilities. Most web apps can be transformed into PWAs by incorporating certain features and adhering to standards. The keystones of PWAs are the service worker and the web app manifest. Service workers enable offline operation and background syncing by acting as network proxies, managing requests programmatically. The web app manifest, on the other hand, gives the PWA a native-like presence on the user's device, specifying its appearance when installed.

Looking Back

The concept of PWAs was introduced by Google engineers Alex Russell and Frances Berriman in 2015, even though Steve Jobs had already discussed the idea of web apps that resembled and behaved like native apps as early as 2007. However, despite the widespread adoption of PWAs over the years, Apple's approach to PWAs drastically changed after Steve Jobs' 2007 presentation, distinguishing it from other tech giants such as Google and Microsoft.

As a leader in technological innovation, Apple was notably slower in adopting PWA technology, much of which can be attributed to its business model and the ecosystem it built around the App Store. For example, Safari, Apple's web browser, has historically been slow to adopt the latest web standards and APIs crucial for PWAs. Features such as push notifications, background sync, and access to certain hardware functionalities were unsupported or only partially supported for a long time. As a result, the PWA experience on iOS/iPadOS was not - and to some extent, still isn't - on par with that provided by Android.

Despite the varying degrees of support from different vendors, PWAs have seen a significant increase in adoption since 2015, both by businesses and users, due to their cross-platform nature, offline capabilities, and the enhanced user experience they offer. Major corporations like Twitter, Pinterest, and Alibaba have launched PWAs, leading to substantial increases in user engagement and session duration. For instance, according to a 2017 Pinterest case study, Pinterest's PWA led to a 60% increase in core engagements and a 44% increase in user-generated ad revenue. Google and Microsoft have also championed this technology, integrating more PWA support into their platforms. Google highlighted the importance of PWAs for the mobile web, while Microsoft sought to populate its Windows Store with PWAs.

Apple's Shift Towards PWAs

Despite slower adoption and limited support, Apple isn't completely dismissing PWAs. Recent updates have indicated some promising improvements in PWA capabilities on both iOS/iPadOS and MacOS. For instance, in iOS/iPadOS 16, released last year, Apple added notifications for Home Screen web apps, utilizing the Web Push standard with support for badging. They also included an API for iOS/iPadOS browsers to facilitate the 'Add to Home Screen' feature.
The forthcoming Safari 17 and MacOS Sonoma releases, announced at June's WWDC, promise even more significant changes, most notably:

Installing Web Apps on MacOS, iOS, and iPadOS

Any web app, not just PWAs, can now be added to the MacOS dock from the File menu. Once added, these web apps will open in their own window and integrate with operating system features such as the Stage Manager, Screen Time, Notifications, and Focus. They will also have their own isolated storage, and any cookies present in the Safari browser for that web app at the time of installation will be transferred to this isolated storage. This means many users will not need to re-authenticate to web apps after installing them locally.

PWAs, in particular, can control the appearance and behavior of the installed web app via the PWA manifest. For instance, if your web app already includes navigation controls, or if they're not necessary in the context of the app, you can manage whether the navigation buttons are displayed by setting the display configuration option in the manifest to standalone. The display option will also be taken into consideration in iOS/iPadOS, where standalone web apps will become Home Screen web apps. These apps offer a standalone, app-like experience on iOS, complete with separate cookies and storage from the browser, and improved notification handling.

Improved Notifications

Apple initially added support for notifications in iOS/iPadOS 16, but Safari 17 and MacOS Sonoma take it a step further. If you've already implemented Web Push according to web standards, then push notifications should work for your web page as a web app on Mac without any additional effort. Moreover, the silent property is now taken into account, and there are several improvements to the Notifications API to enhance its reliability. These recent updates put the support for notifications on Mac on par with iOS/iPadOS, including support for badging and seamless integration with the Focus mode.

Improved API

Over the past year, Apple has also introduced several other API-level improvements. The enhancements to the User Activation API aid in determining whether a function that depends on user activation, such as requesting permission to send notifications, can be called. Safari 16 updated the unprefixed Fullscreen API and introduced preliminary support for the Screen Orientation API. Safari 17 in particular has improved support for the Storage API and added support for ReadableStream.

The Renaissance of PWAs?

With the new PWA-related features in the Apple ecosystem, it's hard not to wonder if we are witnessing a renaissance of PWAs. Initially praised for their ability to leverage web technology to create app-like experiences, PWAs went through a period of relative stagnation, especially on Apple's platforms. For a time, Apple was noticeably more conservative in their implementation of PWA features. However, their recent bolstering of PWA support in Safari signals a significant shift, aligning the browser with other major platforms such as Google Chrome and Microsoft Edge, both of which have long supported PWAs.

The implications of this shift are profound. This widespread and robust support for PWAs across all major platforms could effectively reduce the gap between web applications and native applications. PWAs, with their promise of a single, consistent experience across all devices, could become the preferred choice for businesses and developers.
The cost-effectiveness of developing and maintaining one PWA versus separate applications for multiple platforms is an undeniable benefit. The fact that all these platforms are now heavily supporting PWAs might suggest an industry-wide shift toward a more unified and simplified development paradigm, hinting that indeed, we could be on the verge of a PWA renaissance....
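To ground one of the two keystones mentioned at the start of this post, here is a minimal, hedged sketch of registering a service worker from a page script. The file name /sw.js is an assumption; the manifest itself is a separate JSON file linked from the page:

```typescript
// Minimal service worker registration sketch (illustrative only).
// Assumes a service worker script is served at /sw.js.
async function registerServiceWorker(): Promise<void> {
  if (!('serviceWorker' in navigator)) {
    console.log('Service workers are not supported in this browser.');
    return;
  }
  try {
    const registration = await navigator.serviceWorker.register('/sw.js');
    console.log('Service worker registered with scope:', registration.scope);
  } catch (error) {
    console.error('Service worker registration failed:', error);
  }
}

registerServiceWorker();
```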


Utilizing AWS Cognito for Authentication

Utilizing AWS Cognito for Authentication

AWS Cognito, one of the most popular Amazon Web Services offerings, is at the heart of many web and mobile applications, providing numerous useful user identity and data security features. It is designed to simplify the process of user authentication and authorization, and many developers decide to use it instead of developing their own solution.

"Never roll your own authentication" is a common phrase you'll hear in the development community, and not without reason. Building an authentication system from scratch can be time-consuming and error-prone, with a high risk of introducing security vulnerabilities. Existing solutions like AWS Cognito have been built by expert teams, extensively tested, and are constantly updated to fix bugs and meet evolving security standards.

Here at This Dot, we've used AWS Cognito together with Amplify in many of our projects, including Let's Chat With, an application that we recently open-sourced. In this blog post, we'll show you how we accomplished that, and how we used various Cognito configuration options to tailor the experience to our app.

Setting Up Cognito

Setting up Cognito is relatively straightforward, but requires several steps. In Let's Chat With, we set it up as follows:

1. Sign in to the AWS Console, then open Cognito.
2. Click "Create user pool" to create a user pool. User pools are essentially user directories that provide sign-up and sign-in options, including multi-factor authentication and user-profile functionality.
3. In the first step, select "Email" as the sign-in option, and click "Next".
4. Choose "Cognito defaults" as the password policy and "No MFA" for multi-factor authentication. Leave everything else at the default, and click "Next".
5. In the "Configure sign-up experience" step, leave everything at the default settings.
6. In the "Configure message delivery" step, select "Send email with Cognito".
7. In the "Integrate your app" step, just enter names for your user pool and app client. For example, the user pool might be named "YourAppUserPool_Dev", while the app client could be named "YourAppFrontend_Dev".
8. In the last step, review your settings and create the user pool.

After the user pool is created, make note of its user pool ID, as well as the client ID of the app client created under the user pool. These two values will be passed to the configuration of the Cognito API.

Using the Cognito API

Let's Chat With is built on top of Amplify, AWS's collection of services that make development of web and mobile apps easy. Cognito is one of the services that powers Amplify, and Amplify's SDK offers some helper methods to interact with the Cognito API.

In an Angular application like Let's Chat With, the initial configuration of Cognito is typically done in the main.ts file as shown below:

`

How the user pool ID and user pool web client ID are injected depends on your deployment option. In our case, we used Amplify and defined the environment variables for injection into the built app using Webpack.

Once Cognito is configured, you can utilize its authentication methods from the Auth class in the @aws-amplify/auth package. For example, to sign in after the user has submitted the form containing the username and password, you can use the Auth.signIn(email, password) method as shown below:

`

The logged-in user object is then translated to an instance of CoreUser, which represents the internal representation of the logged-in user.
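As an illustration of what such a thin wrapper over Amplify's Auth class might look like, here is a hedged sketch; the CoreUser shape and the mapping logic are assumptions, not the actual Let's Chat With code:

```typescript
// Hypothetical facade over the Amplify Auth API (sketch only).
import { Auth } from '@aws-amplify/auth';

// Assumed internal user representation; the real CoreUser may differ.
export interface CoreUser {
  username: string;
}

export async function signIn(email: string, password: string): Promise<CoreUser> {
  const cognitoUser = await Auth.signIn(email, password);
  // Map the Cognito user object to the app's internal representation.
  return { username: cognitoUser.getUsername() };
}

export async function signOut(): Promise<void> {
  await Auth.signOut();
}
```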
The AuthService class contains many other methods that act as a facade over the Amplify SDK methods. This service is used in authentication effects, since Let's Chat With is based on NgRx and implements many core functionalities through NgRx effects:

`

The login component triggers a SignInActions.userSignInAttempted action, which is processed by the above effect. Depending on the outcome of the signInAdmin call in the AuthService class, the action is translated to either AuthAPIActions.userLoginSuccess or AuthAPIActions.userSignInFailed.

The remaining user flows are implemented similarly:
- Clicking signup triggers the Auth.signUp method for user registration.
- Signing out is done using Auth.signOut.

Reacting to Cognito Events

How can you implement additional logic when a signup occurs, such as saving the user to the database? While you can use an NgRx effect to call a backend service for that purpose, it requires additional effort and may introduce a security vulnerability, since the endpoint needs to be open to the public Internet. In Let's Chat With, we used Cognito triggers to perform this logic within Cognito, without the need for extra API endpoints.

Cognito triggers are a powerful feature that allows developers to run AWS Lambda functions in response to specific actions in the authentication and authorization flow. Triggers are configured in the "User pool properties" section of user pools in the AWS Console.

We have a dedicated Lambda function that runs on post-authentication or post-confirmation events. The Lambda function first checks if the user already exists. If not, it inserts a new user object associated with the Cognito user into a DynamoDB table. The Cognito user ID is read from the event.request.userAttributes.sub property.

`

Customizing Cognito Emails

Another Cognito trigger that we found useful for Let's Chat With is the "Custom message" trigger. This trigger allows you to customize the content of verification emails or messages for your app. When a user attempts to register or perform an action that requires a verification message, the trigger is activated, and your Lambda function is invoked. Our Lambda function reads the verification code and the email from the event, and creates a custom-designed email message using the template() function. The template reads the HTML template embedded in the Lambda.

`

Conclusion

Cognito has proven to be reliable and easy to use while developing Let's Chat With. By handling the intricacies of user authentication, it allowed us to focus on developing other features of the application. The next time you create a new app and user authentication becomes a pressing concern, remember that you don't need to build it from scratch. Give Cognito (or a similar service) a try. Your future self, your users, and probably your sanity will thank you.

If you're interested in the source code for Let's Chat With, check out its GitHub repository. Contributions are always welcome!...
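For illustration, a post-confirmation trigger along the lines described above might look roughly like this; the table name, attribute names, and SDK wiring are assumptions rather than the actual Let's Chat With implementation:

```typescript
// Hypothetical post-confirmation trigger (sketch). Table and attribute names are assumptions.
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, GetCommand, PutCommand } from '@aws-sdk/lib-dynamodb';

const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

interface CognitoTriggerEvent {
  request: { userAttributes: { sub: string; email?: string } };
}

export const handler = async (event: CognitoTriggerEvent) => {
  const userId = event.request.userAttributes.sub;

  // Only create the user record if it doesn't exist yet.
  const existing = await docClient.send(
    new GetCommand({ TableName: 'users', Key: { id: userId } })
  );
  if (!existing.Item) {
    await docClient.send(
      new PutCommand({
        TableName: 'users',
        Item: { id: userId, email: event.request.userAttributes.email },
      })
    );
  }

  // Cognito expects the event to be returned so the flow can continue.
  return event;
};
```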


Implementing a Task Scheduler in Node Using Redis

Node.js and Redis are often used together to build scalable and high-performing applications. Although Redis has always been primarily an in-memory data store that allows for fast and efficient data access, over time it has gained many useful features, and nowadays it can be used for things like rate limiting, session management, or queuing. With its excellent support for sorted sets, task scheduling can be added to that list as well.

Node.js doesn't have built-in support for task scheduling beyond the setInterval() and setTimeout() functions, which are quite simple and don't have task queuing mechanisms. There are third-party packages like node-schedule and node-cron, of course. But what if you wanted to understand how this could work under the hood? This blog post will show you how to build your own scheduler from scratch.

Redis Sorted Sets

Redis has a structure called sorted sets, a powerful data structure that allows developers to store data that is both ordered and unique, which is useful in many different use cases such as ranking, scoring, and sorting data. Since their introduction in 2009, sorted sets have become one of the most widely used and powerful data structures in Redis.

To add some data to a sorted set, you use the ZADD command, which accepts three parameters: the name of the sorted set, the name of the member, and the score to associate with that member. When a sorted set has multiple members, each with its own score, Redis sorts them by score. This is incredibly useful for implementing leaderboard-like lists. In our case, if we use a timestamp as a score, we can order sorted set members by date, effectively implementing a queue where members with the earliest timestamp are at the top of the list. If the member name is a task identifier, and the timestamp is the time at which we want the task to be executed, then implementing a scheduler means reading the sorted list and grabbing whatever task we find at the top!

The Algorithm

Now that we understand the capabilities of Redis sorted sets, we can draft a rough algorithm to be implemented by our Node scheduler.

Scheduling Tasks

Scheduling a task involves adding the task to the sorted set, and adding the task data to the global keyspace using the task identifier as the key. The steps are as follows:

1. Generate an identifier for the submitted task using the INCR command. This command returns the next integer in a sequence each time it's called.
2. Use the SET command to store the task data. The SET command accepts a key and a string. The key must be unique, so it can be something like task:${taskId}, while the value can be a JSON representation of the task data.
3. Use the ZADD command to add the task identifier and the timestamp to the sorted set. The name of the sorted set can be something simple like sortedTasks, the set member is the task identifier, and the score is the timestamp.

Processing Tasks

The processing part is an endless loop that checks if there are any tasks to process, and otherwise waits for a predefined interval before trying again. The algorithm is as follows:

1. Check if we are still allowed to run. We need a way to stop the loop if we want to stop the scheduler. This can be a simple boolean flag in the code.
2. Use the ZRANGE command to get the first task in the list. ZRANGE accepts several useful arguments, such as the score range (the timestamp interval, in our case) and the offset/limit.
If we provide it with the following arguments, we will get the next task we need to execute:
- Minimum score: 0 (the beginning of time)
- Maximum score: current timestamp
- Offset: 0
- Count: 1
3. If a task is found:
   3.1 Get the task data by executing the GET command on the task:${taskId} key.
   3.2 Deserialize the data and call the task handler.
   3.3 Remove the task data using the DEL command on the task:${taskId} key.
   3.4 Remove the task identifier from the sorted set by calling ZREM on the sortedTasks key.
   3.5 Go back to point 2 to get the next task.
4. If no task is found, wait for a predefined number of seconds before trying again.

The Code

Now to the code. We will have two objects to work with. The first one is RedisApi, which is simply a façade over the Redis client. For the Redis client, we chose to use ioredis, a popular Redis library for Node.

`

The RedisApi function returns an object that has all the Redis operations that we mentioned previously, with the addition of isConnected, which we will use to check if the Redis connection is working.

The other object is the Scheduler object, and it has three functions:
- start() to start the task processing
- stop() to stop the task processing
- schedule() to submit new tasks

The start() and schedule() functions contain the bulk of the algorithm we wrote above. The schedule() function adds a new task to Redis, while the start() function creates a findNextTask() function internally, which it schedules recursively while the scheduler is running. When creating a new Scheduler object, you need to provide the Redis connection details, a polling interval, and a task handler function. The task handler function will be provided with the task data.

`

That's it! Now, when you run the scheduler and submit a simple task, you should see output like below:

`

`

Conclusion

Redis sorted sets are an amazing tool, and Redis provides you with some really useful commands to query or update sorted sets with ease. Hopefully, this blog post was an inspiration for you to consider using sorted sets in your applications. Feel free to use StackBlitz to view this project online and play with it some more....
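As a rough illustration of the scheduling and polling steps above, here is a hedged sketch using ioredis. The key names follow the article (task:${taskId}, sortedTasks), but the structure is deliberately simplified compared to the article's RedisApi and Scheduler objects:

```typescript
// Simplified sketch of the algorithm above using ioredis (illustrative only).
import Redis from 'ioredis';

const redis = new Redis(); // connects to localhost:6379 by default

// Scheduling: store the task data and add the task id to the sorted set.
async function schedule(taskData: unknown, executeAt: number): Promise<number> {
  const taskId = await redis.incr('taskCounter');
  await redis.set(`task:${taskId}`, JSON.stringify(taskData));
  await redis.zadd('sortedTasks', executeAt, String(taskId));
  return taskId;
}

// Processing: fetch the next due task, run the handler, and clean up.
async function processDueTasks(handler: (data: unknown) => Promise<void>): Promise<void> {
  // Get at most one task whose score (timestamp) is between 0 and now.
  const [taskId] = await redis.zrangebyscore('sortedTasks', 0, Date.now(), 'LIMIT', 0, 1);
  if (!taskId) return; // nothing due yet

  const rawData = await redis.get(`task:${taskId}`);
  if (rawData) {
    await handler(JSON.parse(rawData));
  }
  await redis.del(`task:${taskId}`);
  await redis.zrem('sortedTasks', taskId);

  // Immediately check for the next due task.
  await processDueTasks(handler);
}
```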


Next.js Authentication Using OAuth

Modern web apps have come a long way from their early days, and as a result, users have come to expect certain features. One such feature is being able to authenticate in the web app using external accounts owned by providers such as Facebook, Google, or GitHub. Not only is this way of authenticating more secure, but it also requires less effort from the user. With only a few clicks, they can sign in to your web app.

Such authentication is done using the OAuth protocol. It's a powerful and very commonly used protocol that allows users to authenticate with third-party applications using their existing login credentials. These days, it has become an essential part of modern web applications. In this blog post, we will explore how to implement OAuth authentication in a Next.js application.

Why OAuth?

Implementing authentication using OAuth is useful for a number of reasons. First of all, it allows users to sign in to your web app using their existing credentials from a trusted provider, such as Facebook or Google. This eliminates the need to go through a tedious registration process, and most importantly, it eliminates the need to come up with a password for the web app.

This has many benefits for both the web app owner and the user. Neither needs to store the password anywhere, as the password is handled by the trusted OAuth provider. This means that even if the web app gets hacked for some reason, the attacker will not gain access to the user's password. For exactly that reason, you'll often hear experienced developers advise to "never roll your own authentication".

OAuth in Next.js

Next.js is the most popular React metaframework, and this gives you plenty of options and libraries for implementing authentication in your app. The most popular one, by far, is definitely Auth.js, formerly named NextAuth.js. With this library, you can get OAuth running in only a few simple steps, as we'll show in this blog post. We'll show you how to utilize the latest features in both Next.js and Auth.js to set up an OAuth integration using Facebook.

Implementing OAuth in Next.js 13 Using Auth.js

Creating a Facebook App

Before starting with the project, let's create an app on Facebook. This is a prerequisite to using Facebook as an OAuth provider. To do this, you'll need to go to Meta for Developers and create an account there by clicking on "Get Started". Once this is done, you can view the apps dashboard and click "Create App" to create your app.

Since this is an app that will be used solely for Facebook login, we can choose "Consumer" as the type of the app, and you can pick any name for it. In our case, we used "Next.js OAuth Demo". After the app is created, it's immediately visible on the dashboard. Click the app, then click Settings / Basic in the left menu to show both the app ID and the app secret - these will be used by our Next.js app.

Setting Up Next.js

For the purpose of this blog post, we'll create a new Next.js project from scratch, so you can have a good reference project with minimal features and dependencies. In the shell, execute the npx create-next-app@latest --typescript command and follow the prompts:

`

As you can see, we've also used this opportunity to play with the experimental app directory, which is still in beta but is the way Next.js apps will be built in the future. In our project, we've also set up Tailwind just to design the login page more quickly.
Next, install the Auth.js library:

`

Now we need to create a catch-all API route under /api/auth that will be handled by the Auth.js library. We'll do this by creating the following file:

`

Note that even though we will be utilizing Next.js 13's app directory, we need to place this route in the pages directory, as Auth.js doesn't yet support placing its API handler in the app directory. This is the only case where we will be using pages, though.

In your project root, create a .env.local file with the following contents:

`

All the above environment variables except NEXTAUTH_URL are considered secret, and you should avoid committing them to the repository.

Now, moving on to the React components, we'll need a few components that perform the following functionality:
- Display the sign-in button if the user is not authenticated
- Otherwise, display the user's name and a sign-out button

The Home component that was auto-generated by Next.js is a server component, and we can use getServerSession() from Auth.js to get the user's session. Based on that, we'll show either the sign-in component or the logged-in user information. The authOptions object provided to getServerSession() is the one defined in the API route.

`

The SignIn component has the sign-in button. The sign-in button needs to open a URL on Facebook that initiates the authentication process. Once the authentication process is completed, it invokes a "callback" - a special URL on the app side that is handled by Auth.js.

`

The UserInformation component, on the other hand, is displayed after the authentication process is completed. Unlike the other components, this needs to be a client component to utilize the signOut method from Auth.js, which only works client-side.

`

And that's it! Now run the project using npm run dev and you should be able to authenticate to Facebook as shown below:

Conclusion

In conclusion, implementing OAuth-based authentication in Next.js is relatively straightforward thanks to Auth.js. This library not only comes with built-in Facebook support, but also with 60+ other popular services, such as Google, Auth0, and more. We hope this blog post was useful, and you can always refer to the CodeSandbox project if you want to view the full source code. For other Next.js demo projects, be sure to check out starter.dev, where we already have a Next.js starter kit that can give you the best practices in integrating Next.js with other libraries....
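For reference, a minimal sketch of what the catch-all API route described above could look like with the Facebook provider is shown below. This is an assumption-based outline, not the post's exact code; in particular, the environment variable names are assumptions:

```typescript
// pages/api/auth/[...nextauth].ts - hedged sketch of the Auth.js catch-all route.
import NextAuth, { NextAuthOptions } from 'next-auth';
import FacebookProvider from 'next-auth/providers/facebook';

export const authOptions: NextAuthOptions = {
  providers: [
    FacebookProvider({
      clientId: process.env.FACEBOOK_CLIENT_ID as string,
      clientSecret: process.env.FACEBOOK_CLIENT_SECRET as string,
    }),
  ],
};

export default NextAuth(authOptions);
```

On the client side, the sign-in button would typically call signIn('facebook') and the sign-out button signOut(), both imported from next-auth/react.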


Splitting Work: Multi-Threaded Programming in Deno

Deno is a new runtime for JavaScript/TypeScript built on top of the V8 JavaScript engine. It was created as an alternative to Node.js, with a focus on security and modern language features. Here at This Dot, we've been working with Deno for a while, and we've even created a starter kit that you can use to scaffold your next backend Deno project. The starter kit uses many standard Deno modules, such as the Oak web server and the DenoDB ORM.

One issue you may encounter when scaling an application from this starter kit is how to handle expensive or long-running asynchronous tasks and operations in Deno without blocking your server from handling more requests. Deno, just like Node.js, uses an event loop to process asynchronous tasks. This event loop is responsible for managing the flow of Deno applications and handling the execution of asynchronous tasks. The event loop is executed in a single thread; therefore, if there is some CPU-intensive or long-running logic that needs to be executed, it needs to be offloaded from the main thread. This is where Deno workers come into play.

Deno workers are built upon the Web Worker API specification and provide a way to run JavaScript or TypeScript code in separate threads, allowing you to execute CPU-intensive or long-running tasks concurrently, without blocking the event loop. They communicate with the main process through a message-passing API.

In this blog post, we will show you how to expand on our starter kit using Deno workers. In our starter kit API, where we have CRUD operations for managing technologies, we'll modify the create endpoint to also read an image representing the technology and generate thumbnails for that image.

Generating thumbnails

Image processing is CPU-intensive. If the image being processed is large, it may require a significant amount of CPU resources to complete in a timely manner. When including image processing as part of an API, it's definitely a good idea to offload that processing to a separate thread if you want to keep your API responsive. Although there are many image processing libraries out there for the Node ecosystem, the Deno ecosystem does not have as many for now. Fortunately, for our use case, using a simple library like deno-image is good enough. With only a few lines of code, you can resize any image, as shown in the example below from deno-image's repository:

`

Let's now create our own thumbnail generator. Create a new file called generate_thumbnails.ts in the src/worker folder of the starter kit:

`

The function uses fetch to retrieve the image from a remote URL and store it in a local buffer. Afterwards, it goes through a predefined list of thumbnail sizes, calling resize() for each, and saving each image to the public/images folder, which is a public folder of the web server. Each image's filename is generated from the original image's URL, and appended with the thumbnail dimensions.

Calling the web worker

The web worker itself is a simple Deno module which defines an event handler for incoming messages from the main thread. Create worker.ts in the src/worker folder:

`

The event's data property expects an object representing a message from the main thread. In our case, we only need an image URL to process an image, so event.data.imageUrl will contain the image URL to process. We then call the generateThumbnails function on that URL, and close the worker when done.
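As a hedged sketch (not the starter kit's actual worker.ts), the worker module described above might look roughly like this, assuming generateThumbnails is exported from generate_thumbnails.ts:

```typescript
// src/worker/worker.ts - illustrative sketch of the worker module described above.
import { generateThumbnails } from './generate_thumbnails.ts';

self.onmessage = async (event: MessageEvent<{ imageUrl: string }>) => {
  // The main thread sends an object containing the image URL to process.
  await generateThumbnails(event.data.imageUrl);

  // Shut the worker down once the work is finished.
  self.close();
};
```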
Now, before calling the web worker to resize our image, let's modify the Technology type from the GraphQL schema in the starter kit to accept an image URL. This way, when we execute the mutation to create a new technology, we can execute the logic to read the image and resize it in the web worker.

`

After calling deno task generate-type-definition to generate new TypeScript files from the modified schema, we can now use the imageUrl field in our mutation handler, which creates a new instance of the technology. At the top of the mutation_handler.ts module, let's define our worker:

`

This is only done once, so that Deno loads the worker on module initialization. Afterwards, we can send messages to our worker on every call of the mutation handler using postMessage:

`

With this implementation, your API will remain responsive, because post-processing actions such as thumbnail generation are offloaded to a separate worker. The main thread and the worker thread communicate through a simple messaging system.

Conclusion

Overall, Deno is a powerful and efficient runtime for building server-side applications, small and large. Its combination of performance and ease of use makes it an appealing choice for developers looking to build scalable and reliable systems. With its support for the Web Worker API spec, Deno is also well-suited for performing large-scale data processing tasks, as we've shown in this blog post. If you want to learn more about Deno, check out deno.framework.dev for a curated list of libraries and resources. If you are looking to start a new Deno project, check out our Deno starter kit resources at starter.dev....
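To round this off, here is a hedged sketch of the main-thread side described above: the worker is created once at module initialization and receives a message per mutation call. The file paths and the surrounding handler shape are assumptions, not the starter kit's actual code:

```typescript
// src/schema/mutation_handler.ts - illustrative sketch of the main-thread wiring.
// Create the worker once, when the module is first loaded.
const thumbnailWorker = new Worker(new URL('../worker/worker.ts', import.meta.url).href, {
  type: 'module',
});

export async function createTechnology(input: { name: string; imageUrl: string }) {
  // ... persist the new technology here (omitted in this sketch) ...

  // Offload thumbnail generation to the worker so the request can return quickly.
  thumbnailWorker.postMessage({ imageUrl: input.imageUrl });

  return input;
}
```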


Setting Up TypeORM Migrations in an Nx/NestJS Project

TypeORM is a powerful Object-Relational Mapping (ORM) library for TypeScript and JavaScript that serves as an easy-to-use interface between an application's business logic and a database, providing an abstraction layer that is not tied to a particular database vendor. TypeORM is the recommended ORM for NestJS, as both are written in TypeScript, and TypeORM is one of the most mature ORM frameworks available for TypeScript and JavaScript.

One of the key features of any ORM is handling database migrations, and TypeORM is no exception. A database migration is a way to keep the database schema in sync with the application's codebase. Whenever you update your codebase's persistence layer, you'll likely want the database schema to be updated as well, and you want a reliable way for all developers in your team to do the same with their local development databases. In this blog post, we'll take a look at how you could implement database migrations in your development workflow in a NestJS project. Furthermore, we'll give you some ideas of how nx can help as well, if you use NestJS in an nx-powered monorepo.

Migrations Overview

In a nutshell, migrations in TypeORM are TypeScript classes that implement the MigrationInterface interface. This interface has two methods: up and down, where up is used to execute the migration, and down is used to roll back the migration. Assuming that you have an entity (a class representing the table) as below:

`

If you generate a migration from this entity, it could look as follows:

`

As can be seen from the SQL commands, the up method will create the post table, while the down method will drop it. How do we generate the migration file, though? The recommended way is through the TypeORM CLI.

TypeORM CLI and TypeScript

The CLI can be installed globally by using npm i -g typeorm. It can also be used without installation by utilizing the npx command: npx typeorm.

The TypeORM CLI comes with several scripts that you can use, depending on the project you have, and whether the entities are in JavaScript or TypeScript, with ESM or CommonJS modules:
- typeorm: for JavaScript entities
- typeorm-ts-node-commonjs: for TypeScript entities using CommonJS
- typeorm-ts-node-esm: for TypeScript entities using ESM

Many of the TypeORM CLI commands accept a data source file as a mandatory parameter. This file provides the configuration for connecting to the database, as well as other properties such as the list of entities to process. The data source file should export an instance of DataSource, as shown in the example below:

`

To use this data source, you need to provide its path through the -d argument to the TypeORM CLI. In a NestJS project using ESM, this would be:

`

If the DataSource did not import the Post entity from another file, this would most likely succeed. However, in our case, we get an error saying that we "cannot use import statement outside a module". The typeorm-ts-node-esm script expects our project to be a module - and any importing files need to be modules as well. To turn the Post entity file into a module, it would need to be named post.entity.mts to be treated as a module. This kind of approach is not always preferable in NestJS projects, so one alternative is to transform our DataSource configuration to JavaScript - just like NestJS is transpiled to JavaScript through Webpack.
The first step is the transpilation step:

`

Once transpiled, you can then use the regular typeorm CLI to generate a migration:

`

Both commands can be combined together in a package.json script:

`

After the migrations are generated, you can use the migration:run command to run the generated migrations. Let's upgrade our package.json with that command:

`

Using Tasks in Nx

If your NestJS project is part of an nx monorepo, then you can utilize nx project tasks. The benefit of this is that nx will detect your tsconfig.json, as well as inject any environment variables defined in the project. Assuming that your NestJS project is located in an app called api, the above npm scripts can be written as nx tasks as follows:

`

The typeorm-generate-migration and typeorm-run-migrations tasks depend on the build-migration-config task, meaning that they will always transpile the data source config first, before invoking the typeorm CLI. For example, the previous CreatePost migration could be generated through the following command:

`

Conclusion

TypeORM is an amazing ORM framework, but there are a few things you should be aware of when running migrations within a big TypeScript project like NestJS. We hope we managed to give you some tips on how to best incorporate migrations in a NestJS project, with and without nx....
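For reference, a generated migration like the CreatePost one described in the overview typically has the following shape; the exact SQL, column definitions, and timestamp suffix below are assumptions for illustration:

```typescript
import { MigrationInterface, QueryRunner } from 'typeorm';

// Illustrative sketch of a generated migration; column definitions are assumed.
export class CreatePost1700000000000 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(
      `CREATE TABLE "post" ("id" SERIAL NOT NULL, "title" character varying NOT NULL, "content" text NOT NULL, CONSTRAINT "PK_post_id" PRIMARY KEY ("id"))`
    );
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`DROP TABLE "post"`);
  }
}
```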


Setting Up Reverse Proxy in Heroku Using Nginx

An overview of the various options you have when setting up nginx as a reverse proxy in Heroku....


Combining Validators and Transformers in NestJS

When building a new API, it is imperative to validate that requests towards the API conform to a predefined specification or contract. For example, the specification may state that an input field must be a valid e-mail string. Or, the specification may state that one field is optional, while another field is mandatory. Although such validation can also be performed on the client side, we should never rely on it alone. There should always be a validation mechanism on the server side as well. After all, you never know who's acting on behalf of the client, so you can never fully trust the data you receive.

Popular backend frameworks usually have very good support for validation out of the box, and NestJS, which we will cover in this blog post, is no exception. In this blog post, we will be focusing on NestJS's validation using ValidationPipe - specifically on one lesser-known feature, which is the ability to not only validate input, but transform it beforehand as well, thereby combining transformation and validation of data in one go.

Using ValidationPipe

To test this out, let's build a UsersController that supports getting a list of users, with the option to filter by several conditions. After scaffolding our project using nest new [project-name], let's define a class that will represent this collection of filters, and name it GetUsersQuery:

`

Now, let's use it in the controller:

`

The problem with this approach is that there is no validation performed whatsoever. Although we've defined userIds as an array of strings, and pageSize as a number, this is just compile-time verification - there is no runtime validation. In fact, if you execute a GET request on http://localhost:3000/users?userIds=1,2,3&pageSize=3, the query object will actually contain only string fields:

`

There's a way to fix this in NestJS. First, let's install the dependencies needed for using data transformation and validation in NestJS:

`

As their names would suggest, the class-validator package brings support for validating data, while the class-transformer package brings support for transforming data. Each package adds some decorators of its own to aid you in this. For example, the class-validator package has the @IsNumber() decorator to perform runtime validation that a field is a valid number, while the class-transformer package has the @Type() decorator to perform runtime transformation from one type to another. Having that in mind, let's decorate our GetUsersQuery a bit:

`

This is not enough, though. To utilize the class-validator decorators, we need to use the ValidationPipe. Additionally, to utilize the class-transformer decorators, we need to use ValidationPipe with its transform: true flag:

`

Here's what happens in the background. As said earlier, by default, every path parameter and query parameter comes over the network as a string. We could convert these values to their JavaScript primitives in the controller (an array of strings and a number, respectively), or we can use the transform: true property of the ValidationPipe to do this automatically. NestJS does need some guidance on how to do it, though. That's where the class-transformer decorators come in. Internally, NestJS will use Class Transformer's plainToClass method to convert the above object to an instance of the GetUsersQuery class, using the Class Transformer decorators to transform the data along the way. After this, our object becomes:

`

Now, Class Validator comes in, using its annotations to validate that the data comes in as expected.
Why is Class Validator needed if we already transformed the data beforehand? Well, Class Transformer will not throw any errors if it fails to transform the data. This means that if you provided a string like "testPageSize" to the pageSize query parameter, our query object would actually come in as:

`

And this is where Class Validator will kick in and raise an error that pageSize is not a proper number:

`

Other Transformation Options

The @Type and @Transform decorators give us all kinds of options for transforming data. For example, strings can be converted to dates and then validated using the following combination of decorators:

`

We can do the same for booleans:

`

If we want to define advanced transformation rules, we can do so through an anonymous function passed to the @Transform decorator. With the following transformation, we can also accept isActive=1 in addition to isActive=true, and it will properly get converted to a boolean value:

`

Conclusion

This was an overview of the various options you have at your disposal when validating and transforming data. As you can see, NestJS gives you many options to declaratively define your validation and transformation rules, which will be enforced by ValidationPipe. This allows you to focus on your business logic in controllers and services, while being assured that the controller inputs have been properly validated. You'll find the source code for this blog post's project on our GitHub....
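To illustrate the combination described in this post, a decorated query class could look roughly like the sketch below. The exact decorators used in the original project's code are not shown in this excerpt, so treat this as an assumption-based example rather than the post's actual GetUsersQuery:

```typescript
// get-users-query.ts - hedged sketch of a query DTO combining transformation and validation.
import { IsArray, IsInt, IsOptional, IsString, Min } from 'class-validator';
import { Transform, Type } from 'class-transformer';

export class GetUsersQuery {
  // Accept "userIds=1,2,3" and turn it into ['1', '2', '3'] before validating.
  @IsOptional()
  @Transform(({ value }) => (typeof value === 'string' ? value.split(',') : value))
  @IsArray()
  @IsString({ each: true })
  userIds?: string[];

  // Convert the "pageSize" query string into a number, then validate it.
  @IsOptional()
  @Type(() => Number)
  @IsInt()
  @Min(1)
  pageSize?: number;
}
```

The pipe itself is typically registered globally, for example with app.useGlobalPipes(new ValidationPipe({ transform: true })), so that every controller gets the same transformation and validation treatment.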