
Avoiding Null and Undefined with NonNullable<T> in TypeScript

Use Cases

Use Case 1: Adding Two Numbers

Scenario: A function that adds two numbers and returns their sum. But if one of the numbers is undefined or null, it returns the other number as-is.

function addNumbers(a: number, b?: number | null): NonNullable<number> {
  // If b is null or undefined, ?? substitutes 0, so the result is simply a.
  return a + (b ?? 0);
}

const sum1 = addNumbers(2, 3); // Returns 5
const sum2 = addNumbers(2, null); // Returns 2
const sum3 = addNumbers(2, undefined); // Returns 2

In this code snippet, the addNumbers() function takes two parameters: a, a required parameter of type number, and b, an optional parameter of type number or null. The return type NonNullable<number> resolves to plain number, making explicit that the function never returns null or undefined. To handle the case where b is null or undefined, the body uses the nullish coalescing operator, ??, which evaluates to its left-hand operand unless that operand is null or undefined, in which case it falls back to the right-hand operand. So when b is null or undefined, b ?? 0 yields 0 and the function effectively returns a; otherwise it returns the sum of a and b.
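
For reference, NonNullable<T> is one of TypeScript's built-in utility types. Conceptually it behaves like the conditional type sketched below (recent compiler versions actually define it as the equivalent T & {}); the MyNonNullable alias and the A, B, and C names are introduced here purely for illustration.

// Conceptually, NonNullable<T> strips null and undefined from a union type.
type MyNonNullable<T> = T extends null | undefined ? never : T;

type A = NonNullable<string | null>;      // string
type B = NonNullable<number | undefined>; // number
type C = NonNullable<number>;             // number (already non-nullable)

// In addNumbers(), NonNullable<number> therefore resolves to plain number,
// which documents that the function can never return null or undefined.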

Use Case 2: Checking Contact Information

Scenario: A class representing a person with optional email and phone properties. The contact() method logs the email address and phone number if both are defined and not null. Otherwise, it logs a message saying that no contact information is available.

class Person {
  name: string;
  email?: string | null;
  phone?: string | null;

  constructor(name: string, email?: string | null, phone?: string | null) {
    this.name = name;
    this.email = email ?? null;
    this.phone = phone ?? null;
  }

  contact() {
    if (this.email !== undefined && this.email !== null &&
        this.phone !== undefined && this.phone !== null) {
      console.log(`${this.name} can be reached at ${this.email} or ${this.phone}`);
    } else {
      console.log(`${this.name} has no contact information available`);
    }
  }
}

const john = new Person('John Doe', 'john.doe@example.com', '(123) 456-7890');
const jane = new Person('Jane Smith', null, '987-654-3210');

john.contact(); // logs 'John Doe can be reached at john.doe@example.com or (123) 456-7890'
jane.contact(); // logs 'Jane Smith has no contact information available'

In this code snippet, the Person class has a required name property and optional email and phone properties, each typed as string | null. The constructor uses the nullish coalescing operator, ??, to normalize missing or undefined arguments to null, so a caller can pass null or undefined and the class will store null either way. The contact() method checks that both email and phone are neither undefined nor null before logging them; inside that branch, TypeScript narrows both properties from string | null | undefined to string, which is exactly what NonNullable<string | null | undefined> resolves to. Otherwise, it logs a message saying that no contact information is available.
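
As a small variation on the same idea (a sketch, not part of the original class), the narrowing performed in contact() can be expressed once as a reusable type guard built on NonNullable<T>. The WithContactInfo type and hasFullContactInfo() helper below are hypothetical names introduced only for this example.

// Hypothetical narrowed type: email and phone are guaranteed to be strings.
type WithContactInfo = Person & {
  email: NonNullable<Person['email']>; // string
  phone: NonNullable<Person['phone']>; // string
};

// != null excludes both null and undefined in a single comparison.
function hasFullContactInfo(person: Person): person is WithContactInfo {
  return person.email != null && person.phone != null;
}

for (const person of [john, jane]) {
  if (hasFullContactInfo(person)) {
    // Inside this branch, person.email and person.phone are typed as string.
    console.log(`${person.name}: ${person.email}, ${person.phone}`);
  }
}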

Conclusion

Understanding and correctly applying conditional types like NonNullable<T> in TypeScript is crucial for avoiding common pitfalls. Through the examples of numeric operations and contact-information handling, we've seen how this utility type helps guard our code against null and undefined values. This highlights the value of TypeScript's conditional and utility types not only for improving code stability, but also for making everyday development easier. So keep experimenting and finding the best ways to use these tools to create robust, secure, and efficient programs!

This Dot Labs is a development consultancy that is trusted by top industry companies, including Stripe, Xero, Wikimedia, Docusign, and Twilio. This Dot takes a hands-on approach by providing tailored development strategies to help you approach your most pressing challenges with clarity and confidence. Whether it's bridging the gap between business and technology or modernizing legacy systems, you'll find the breadth of experience and knowledge you need. Check out how This Dot Labs can empower your tech journey.
