
Svelte Component Testing with Cypress + Vite

Cypress is a well-known e2e and integration testing framework, and since v7 it also ships a Component Test Runner that can render and test components in isolation. It's still in alpha, so things may change! In this blog post, we will set up our environment to test Svelte components while using Vite.

Starting a new project

First, we will start by creating a new project with Vite.

npm init vite@latest
// Project name: › cypress-svelte-testing
// Select a framework: › svelte
// Select a variant: › svelte-ts

cd cypress-svelte-testing
npm install

Note: check Vite's documentation to find out how to start a project with other package managers.

The project is now configured to use Svelte and TypeScript.

To start our server, we need to run the command npm run dev.

Svelte default site

We will be testing this application which consists of two components: App and Counter. App is the shell, or the main component wrapping the counter component: a button that updates its count when clicked.
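For reference, here is roughly what the scaffolded Counter component looks like — a sketch based on the Vite svelte-ts template; the exact markup may differ between template versions:

```svelte
<script lang="ts">
  // Local reactive state: reassigning `count` re-renders the button label.
  let count: number = 0

  const increment = () => {
    count += 1
  }
</script>

<button on:click={increment}>
  Clicks: {count}
</button>
```

The button text ("Clicks: 0", "Clicks: 1", …) is what our tests will query for later.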

Installing dependencies

To run our tests, we need to install a few dependencies:

  • cypress: the testing framework.
  • @cypress/vite-dev-server: responsible for launching and restarting the server when files change.
  • cypress-svelte-unit-test: a package to mount our components in the testing environment. Unfortunately, at the time of writing there is no support for our current environment, so we will use a fork and install it straight from its repo.
  • @testing-library/cypress: optional; adds selectors for our queries.

Let's go ahead and install all of these as dev-dependencies.

npm i --save-dev cypress @cypress/vite-dev-server @testing-library/cypress

npm i --save-dev https://github.com/flakolefluk/cypress-svelte-unit-test

Configuring the test environment

Now we will have to update or create a few files to configure our environment. We'll go through each of them.

  • <root>/cypress.json to let Cypress know where to find the component test files:
{
  "componentFolder": "src",
  "testFiles": "**/*spec.{js,jsx,ts,tsx}"
}
  • <root>/cypress/plugins/index.ts to configure the dev server:
import { startDevServer } from '@cypress/vite-dev-server';
import path from 'path';

module.exports = (on, config) => {
  on('dev-server:start', async (options) => {
    return startDevServer({
      options,
      viteConfig: {
        configFile: path.resolve(__dirname, '..', '..', 'vite.config.js'),
      },
    });
  });

  return config;
};
  • <root>/cypress/support/commands.js to add the testing-library commands:
import '@testing-library/cypress/add-commands';
  • <root>/tsconfig.json to add TypeScript support for the testing-library commands:
{
  // ...
  "compilerOptions": {
    // ...
    // add the following and preserve the rest of the config file
    "types": ["cypress", "@testing-library/cypress"]
  },
  // ...
}
  • <root>/package.json to add a few scripts that will run our tests
{
  // ...
  "scripts": {
    // ...
    "cy:open-ct": "cypress open-ct",
    "cy:run-ct": "cypress run-ct"
  },
  // ...
}

Our tests are ready to be written!

In our Cypress configuration file, we declared that the tests will live inside the src folder. This makes it possible to colocate our tests with our components.
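With this setting, the src folder will end up looking something like this once the spec files from the next section are in place (other template files, such as assets and vite-env.d.ts, are omitted for brevity):

```
src/
├── App.svelte
├── App.spec.ts
├── Counter.svelte
├── Counter.spec.ts
└── main.ts
```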

Writing tests

We will test both components that were created along with the project.

Create two files, one next to each component:

// <root>/src/App.spec.ts
import App from './App.svelte';
import { mount } from 'cypress-svelte-unit-test';
describe('App', function () {
  it('renders App with correct heading', () => {
    mount(App);
    cy.findByRole('heading', { level: 1, name: /hello typescript/i }).should(
      'exist'
    ).should('be.visible');
  });
});

// <root>/src/Counter.spec.ts
import Counter from './Counter.svelte';
import { mount } from 'cypress-svelte-unit-test';
describe('Counter', function () {
  it('Renders button and updates count when clicked', () => {
    mount(Counter);
    cy.findByRole('button', { name: 'Clicks: 0' }).should('exist');
    cy.findByRole('button', { name: /Click/i }).click();
    cy.findByRole('button', { name: 'Clicks: 1' }).should('exist');
  });
});

I wrote a couple of tests as a start. First, we check that the App component renders a level-1 heading with the expected content, and that the element is visible.

For our Counter component, we first check that it exists. Then we click it and verify that the text inside it is updated with the expected current count.

In both tests, I'm using the queries provided by the testing-library package.

Running our tests

We can run our tests in two ways: with the run-ct and open-ct commands. The first one will run the tests headlessly by default, while the second one will open the interactive test runner.

Let's try both.

  • npm run cy:run-ct Your tests will run in your terminal and you'll get a nice overview of them.
  • npm run cy:open-ct Your tests will open in a browser and you'll be able to go through every step of your tests, which will make debugging a lot easier as you can see what's happening.

Time to write more tests!

Final thoughts

I hope this tutorial helps you set up your environment for testing Svelte components with Vite and Cypress. I find that testing with the open-ct command is really useful when writing tests. Most of the configuration shown applies to other frameworks too (there are other packages for mounting components from other frameworks). Check the official documentation for more. You can find the code shown here in this repo.

This Dot Labs is a development consultancy that is trusted by top industry companies, including Stripe, Xero, Wikimedia, Docusign, and Twilio. This Dot takes a hands-on approach by providing tailored development strategies to help you approach your most pressing challenges with clarity and confidence. Whether it's bridging the gap between business and technology or modernizing legacy systems, you’ll find a breadth of experience and knowledge you need. Check out how This Dot Labs can empower your tech journey.

You might also like

Understanding Vue's Reactive Data cover image

Understanding Vue's Reactive Data

Introduction Web development has always been about creating dynamic experiences. One of the biggest challenges developers face is managing how data changes over time and reflecting these changes in the UI promptly and accurately. This is where Vue.js, one of the most popular JavaScript frameworks, excels with its powerful reactive data system. In this article, we dig into the heart of Vue's reactivity system. We unravel how it perfectly syncs your application UI with the underlying data state, allowing for a seamless user experience. Whether new to Vue or looking to deepen your understanding, this guide will provide a clear and concise overview of Vue's reactivity, empowering you to build more efficient and responsive Vue 3 applications. So, let’s kick off and embark on this journey to decode Vue's reactive data system. What is Vue's Reactive Data? What does it mean for data to be ”'reactive”? In essence, when data is reactive, it means that every time the data changes, all parts of the UI that rely on this data automatically update to reflect these changes. This ensures that the user is always looking at the most current state of the application. At its core, Vue's Reactive Data is like a superpower for your application data. Think of it like a mirror - whatever changes you make in your data, the user interface (UI) reflects these changes instantly, like a mirror reflecting your image. This automatic update feature is what we refer to as “reactivity”. To visualize this concept, let's use an example of a simple Vue application displaying a message on the screen: `javascript import { createApp, reactive } from 'vue'; const app = createApp({ setup() { const state = reactive({ message: 'Hello Vue!' }); return { state }; } }); app.mount('#app'); ` In this application, 'message' is a piece of data that says 'Hello Vue!'. Let's say you change this message to 'Goodbye Vue!' later in your code, like when a button is clicked. 
`javascript state.message = 'Goodbye Vue!'; ` With Vue's reactivity, when you change your data, the UI automatically updates to 'Goodbye Vue!' instead of 'Hello Vue!'. You don't have to write extra code to make this update happen - Vue's Reactive Data system takes care of it. How does it work? Let's keep the mirror example going. Vue's Reactive Data is the mirror that reflects your data changes in the UI. But how does this mirror know when and what to reflect? That's where Vue's underlying mechanism comes into play. Vue has a behind-the-scenes mechanism that helps it stay alerted to any changes in your data. When you create a reactive data object, Vue doesn't just leave it as it is. Instead, it sends this data object through a transformation process and wraps it up in a Proxy. Proxy objects are powerful and can detect when a property is changed, updated, or deleted. Let's use our previous example: `javascript import { createApp, reactive } from 'vue'; const app = createApp({ setup() { const state = reactive({ message: 'Hello Vue!' }); return { state }; } }); app.mount('#app'); ` Consider our “message” data as a book in a library. Vue places this book (our data) within a special book cover (the Proxy). This book cover is unique - it's embedded with a tracking device that notifies Vue every time someone reads the book (accesses the data) or annotates a page (changes the data). In our example, the reactive function creates a Proxy object that wraps around our state object. When you change the 'message': `javascript state.message = 'Goodbye Vue!'; ` The Proxy notices this (like a built-in alarm going off) and alerts Vue that something has changed. Vue then updates the UI to reflect this change. Let’s look deeper into what Vue is doing for us and how it transforms our object into a Proxy object. You don't have to worry about creating or managing the Proxy; Vue handles everything. `javascript const state = reactive({ message: 'Hello Vue!' 
}); // What vue is doing behind the scenes: function reactive(obj) { return new Proxy(obj, { // target = state and key = message get(target, key) { track(target, key) return target[key] }, set(target, key, value) { target[key] = value // Here Vue will trigger its reactivity system to update the DOM. trigger(target, key) } }) } ` In the example above, we encapsulate our object, in this case, “state”, converting it into a Proxy object. Note that within the second argument of the Proxy, we have two methods: a getter and a setter. The getter method is straightforward: it merely returns the value, which in this instance is “state.message” equating to 'Hello Vue!' Meanwhile, the setter method comes into play when a new value is assigned, as in the case of “state.message = ‘Hey young padawan!’”. Here, “value” becomes our new 'Hey young padawan!', prompting the property to update. This action, in turn, triggers the reactivity system, which subsequently updates the DOM. Venturing Further into the Depths If you have been paying attention to our examples above, you might have noticed that inside the Proxy` method, we call the functions `track` and `trigger` to run our reactivity. Let’s try to understand a bit more about them. You see, Vue 3 reactivity data is more about Proxy objects. Let’s create a new example: `vue import { reactive, watch, computed, effect } from "vue"; const state = reactive({ showSword: false, message: "Hey young padawn!", }); function changeMessage() { state.message = "It's dangerous to go alone! Take this."; } effect(() => { if (state.message === "It's dangerous to go alone! Take this.") { state.showSword = true; } }); {{ state.message }} Click! ` In this example, when you click on the button, the message's value changes. This change triggers the effect function to run, as it's actively listening for any changes in its dependencies__. How does the effect` property know when to be called? 
Vue 3 has three main functions to run our reactivity: effect`, `track`, and `trigger`. The effect` function is like our supervisor. It steps in and takes action when our data changes – similar to our effect method, we will dive in more later. Next, we have the track` function. It notes down all the important data we need to keep an eye on. In our case, this data would be `state.message`. Lastly, we've got the trigger` function. This one is like our alarm bell. It alerts the `effect` function whenever our important data (the stuff `track` is keeping an eye on) changes. In this way, trigger`, `track`, and `effect` work together to keep our Vue application reacting smoothly to changes in data. Let’s go back to them: `javascript function reactive(obj) { return new Proxy(obj, { get(target, key) { // target = state & key = message track(target, key) // keep an eye for this return target[key] }, set(target, key, value) { target[key] = value trigger(target, key) // trigger the effects! } }) } ` Tracking (Dependency Collection) Tracking is the process of registering dependencies between reactive objects and the effects that depend on them. When a reactive property is read, it's "tracked" as a dependency of the current running effect. When we execute track()`, we essentially store our effects in a Set object. But what exactly is an "effect"? If we revisit our previous example, we see that the effect method must be run whenever any property changes. This action — running the effect method in response to property changes — is what we refer to as an "Effect"! (computed property, watcher, etc.) > Note: We'll outline a basic, high-level overview of what might happen under the hood. Please note that the actual implementation is more complex and optimized, but this should give you an idea of how it works. Let’s see how it works! 
In our example, we have the following reactive object: `javascript const state = reactive({ showSword: false, message: "Hey young padawn!", }); // which is transformed under the hood to: function reactive(obj) { return new Proxy(obj, { get(target, key) { // target = state | key = message track(target, key) // keep an eye for this return target[key] }, set(target, key, value) { target[key] = value trigger(target, key) // trigger the effects! } }) } ` We need a way to reference the reactive object with its effects. For that, we use a WeakMap. Which type is going to look something like this: `typescript WeakMap>> ` We are using a WeakMap to set our object state as the target (or key). In the Vue code, they call this object `targetMap`. Within this targetMap` object, our value is an object named `depMap` of Map type. Here, the keys represent our properties (in our case, that would be `message` and `showSword`), and the values correspond to their effects – remember, they are stored in a Set that in Vue 3 we refer to as `dep`. Huh… It might seem a bit complex, right? Let's make it more straightforward with a visual example: With the above explained, let’s see what this Track` method kind of looks like and how it uses this `targetMap`. This method essentially is doing something like this: `javascript let activeEffect; // we will see more of this later function track(target, key) { if (activeEffect) { // depsMap` maps targets to their keys and dependent effects let depsMap = targetMap.get(target); // If we don't have a depsMap for this target in our targetMap`, create one. if (!depsMap) { depsMap = new Map(); targetMap.set(target, depsMap); } let dep = depsMap.get(key); if (!dep) { // If we don't have a set of effects for this key in our depsMap`, create one. dep = new Set(); depsMap.set(key, dep); } // Add the current effect as a dependency dep.add(activeEffect); } } ` At this point, you have to be wondering, how does Vue 3 know what activeEffect` should run? 
Vue 3 keeps track of the currently running effect by using a global variable. When an effect is executed, Vue temporarily stores a reference to it in this global variable, allowing the track` function to access the currently running effect and associate it with the accessed reactive property. This global variable is called inside Vue as `activeEffect`. Vue 3 knows which effect is assigned to this global variable by wrapping the effects functions in a method that invokes the effect whenever a dependency changes. And yes, you guessed, that method is our effect` method. `javascript effect(() => { if (state.message === "It's dangerous to go alone! Take this.") { state.showSword = true; } }); ` This method behind the scenes is doing something similar to this: `javascript function effect(update) { //the function we are passing in const effectMethod = () => { // Assign the effect as our activeEffect` activeEffect = effectMethod // Runs the actual method, also triggering the get` trap inside our proxy update(); // Clean the activeEffect after our Effect has finished activeEffect = null } effectMethod() } ` The handling of activeEffect` within Vue's reactivity system is a dance of careful timing, scoping, and context preservation. Let’s go step by step on how this is working all together. When we run our `Effect` method for the first time, we call the `get` trap of the Proxy. `javascript function effect(update) const effectMethod = () => { // Storing our active effect activeEffect = effectMethod // Running the effect update() ... } ... } effect(() => { // we call the the get` trap when getting our `state.message` if (state.message === "It's dangerous to go alone! Take this.") { state.showSword = true; } }); ` When running the get` trap, we have our `activeEffect` so we can store it as a dependency. 
`javascript function reactive(obj) { return new Proxy(obj, { // Gets called when our effect runs get(target, key) { track(target, key) // Saves the effect return target[key] }, // ... (other handlers) }) } function track(target, key) { if (activeEffect) { //... rest of the code // Add the current effect as a dependency dep.add(activeEffect); } } ` This coordination ensures that when a reactive property is accessed within an effect, the track function knows which effect is responsible for that access. Trigger Method Our last method makes this Reactive system to be complete. The trigger` method looks up the dependencies for the given target and key and re-runs all dependent effects. `javascript function trigger(target, key) { const depsMap = targetMap.get(target); if (!depsMap) return; // no dependencies, no effects, no need to do anything const dep = depsMap.get(key); if (!dep) return; // no dependencies for this key, no need to do anything // all dependent effects to be re-run dep.forEach(effect => { effect() }); } ` Conclusion Diving into Vue 3's reactivity system has been like unlocking a hidden superpower in my web development toolkit, and honestly, I've had a blast learning about it. From the rudimentary elements of reactive data and instantaneous UI updates to the intricate details involving Proxies, track and trigger functions, and effects, Vue 3's reactivity is an impressively robust framework for building dynamic and responsive applications. In our journey through Vue 3's reactivity, we've uncovered how this framework ensures real-time and precise updates to the UI. We've delved into the use of Proxies to intercept and monitor variable changes and dissected the roles of track and trigger functions, along with the 'effect' method, in facilitating seamless UI updates. 
Along the way, we've also discovered how Vue ingeniously manages data dependencies through sophisticated data structures like WeakMaps and Sets, offering us a glimpse into its efficient approach to change detection and UI rendering. Whether you're just starting with Vue 3 or an experienced developer looking to level up, understanding this reactivity system is a game-changer. It doesn't just streamline the development process; it enables you to create more interactive, scalable, and maintainable applications. I love Vue 3, and mastering its reactivity system has been enlightening and fun. Thanks for reading, and as always, happy coding!...

How to automatically deploy your full-stack JavaScript app with AWS CodePipeline cover image

How to automatically deploy your full-stack JavaScript app with AWS CodePipeline

How to automatically deploy your full-stack JavaScript app from an NX monorepo with AWS CodePipeline In our previous blog post (How to host a full-stack JavaScript app with AWS CloudFront and Elastic Beanstalk) we set up a horizontally scalable deployment for our full-stack javascript app. In this article, we would like to show you how to set up AWS CodePipeline to automatically deploy changes to the application. APP Structure Our application is a simple front-end with an API back-end set up in an NX monorepo. The production built API code is hosted in Elastic Beanstalk, while the front-end is stored in S3 and hosted through CloudFront. Whenever we are ready to make a new release, we want to be able to deploy the new API and front-end versions to the existing distribution. In this article, we will set up a CodePipeline to deploy changes to the main branch of our connected repository. CodePipeline CodeBuild and the buildspec file First and foremost, we should set up the build job that will run the deploy logic. For this, we are going to need to use CodeBuild. Let's go into our repository and set up a build-and-deploy.buildspec.yml` file. We put this file under the `tools/aws/` folder. `yaml version: 0.2 phases: install: runtime-versions: nodejs: 18 on-failure: ABORT commands: - npm ci build: on-failure: ABORT commands: # Build the front-end and the back-end - npm run build:$ENVIRONMENTTARGET # TODO: Push FE to S3 # TODO: Push API to Elastic beanstalk ` This buildspec file does not do much so far, we are going to extend it. In the installation phase, it will run npm ci` to install the dependencies and in the build phase, we are going to run the build command using the `ENVIRONMENT_TARGET` variable. This is useful, because if you have more environments, like `development` and `staging` you can have different configurations and builds for those and still use the same buildspec file. Let's go to the Codebuild page in our AWS console and create a build project. 
Add a descriptive name, such as your-appp-build-and-deploy`. Please provide a meaningful description for your future self. For this example, we are going to restrict the number of concurrent builds to 1. The next step is to set up the source for this job, so we can keep the buildspec file in the repository and make sure this job uses the steps declared in the yaml file. We use an access token that allows us to connect to GitHub. Here you can read more on setting up a GitHub connection with an access token. You can also connect with Oauth, or use an entirely different Git provider. We set our provider to GitHub and provided the repository URL. We also set the Git clone depth to 1, because that makes checking out the repo faster. In the Environment` section, we recommend using an AWS CodeBuild managed image. We use the Ubuntu Standard runtime with the `aws/codebuild/standard:7.0` version. This version uses Node 18. We want to always use the latest image version for this runtime and as the `Environment type` we are good with `Linux EC2`. We don't need elevated privileges, because we won't build docker images, but we do want to create a new service role. In the Buildspec` section select `Use a buildspec file` and give the path from your repository root as the `Buildspec name`. For our example, it is `tools/aws/build-and-deploy.buildspec.yml`. We leave the `Batch configuration` and the `Artifacts` sections as they are and in the `Logs` section we select how we want the logs to work. For this example, to reduce cost, we are going to use S3 logs and save the build logs in the `aws-codebuild-build-logs` bucket that we created for this purpose. We are finished, so let's create the build project. CodePipeline setup To set up automated deployment, we need to create a CodePipeline. Click on Create pipeline` and give it a name. We also want a new service role to be created for this pipeline. Next, we should set up the source stage. 
As the source provider, we need to use GitHub (version2)` and set up a connection. You can read about how to do it here. After the connection is set up, select your repository and the branch you want to deploy from. We also want to start the pipeline if the source code changes. For the sake of simplicity, we want to have the Output artefact format as CodePipeline default. At the Build stage, we select AWS CodeBuild` as the build provider and let's select the build that we created above. Remember that we have the `ENVIRONMENT_TARGET` as a variable used in our build, so let's add it to this stage with the `Plaintext` value `prod`. This way the build will run the `build:prod` command from our `package.json`. As the `Build type` we want `Single build`. We can skip the deployment stage because we are going to set up deployment in our build job. Review our build pipeline and create it. After it is created, it will run for the first time. At this time it will not deploy anything but it should run successfully. Deployment prerequisites To be able to deploy to S3 and Elastic Beanstalk, we need our CodeBuild job to be able to interact with those services. When we created the build, we created a service role for it. In this example, the service role is codebuild-aws-test-build-and-deploy-service-role`. Let's go to the IAM page in the console and open the `Roles` page. Search for our codebuild role and let's add permissions to it. Click the `Add permissions` button and select `Attach policies`. We need two AWS-managed policies to be added to this service role. The `AdministratorAccess-AWSElasticBeanstalk` will allow us to deploy the API and the `AmazonS3FullAccess` will allow us to deploy the front-end. The `CloudFrontFullAccess` will allow us to invalidate the caches so CloudFront will send the new front-end files after the deployment is ready. Deployment Upload the front-end to S3 Uploading the front-end should be pretty straightforward. 
We use an AWS CodeBuild managed image in our pipeline, therefore, we have access to the aws` command. Let's update our buildspec file with the following changes: `yaml phases: ... build: on-failure: ABORT commands: # Build the front-end and the back-end - npm run build:$ENVIRONMENTTARGET # Delete the current front-end and deploy the new version front-end - aws s3 sync dist/apps/frontend/ s3://$FRONTEND_BUCKET --delete # Invalidate cloudfront caches to immediately serve the new front-end files - aws cloudfront create-invalidation --distribution-id $CLOUDFRONTDISTRIBUTION_ID --paths "/index.html" # TODO: Push API to Elastic beanstalk ` First, we upload the fresh front-end build to the S3 bucket, and then we invalidate the caches for the index.html` file, so CloudFront will immediately serve the changes. If you have more static files in your app, you might need to invalidate caches for those as well. Before we push the above changes up, we need to update the environment variables in our CodePipeline. To do this open the pipeline and click on the Edit` button. This will then enable us to edit the `Build` stage. Edit the build step by clicking on the edit button. On this screen, we add the new environment variables. For this example, it is aws-hosting-prod` as `Plaintext` for the `FRONT_END_BUCKET` and `E3FV1Q1P98H4EZ` as `Plaintext` for the `CLOUDFRONT_DISTRIBUTION_ID` Now if we add changes to our index.html file, for example, change the button to HELLO 2`, commit it and push it. It gets deployed. Deploying the API to Elastic Beanstalk We are going to need some environment variables passed down to the build pipeline to be able to deploy to different environments, like staging or prod. We gathered these below: - COMMIT_ID`: `#{SourceVariables.CommitId}` - This will have the commit id from the checkout step. We include this, so we can always check what commit is deployed. 
- ELASTIC_BEANSTALK_APPLICATION_NAME`: `Test AWS App` - This is the Elastic Beanstalk app which has your environment associated. - ELASTIC_BEANSTALK_ENVIRONMENT_NAME`: `TestAWSApp-prod` - This is the Elastic Beanstalk environment you want to deploy to - API_VERSION_BUCKET`: `elasticbeanstalk-us-east-1-474671518642` - This is the S3 bucket that was created by Elastic Beanstalk With the above variables, we can make some new variables during the build time, so we can make sure that every API version is unique and gets deployed. We set this up in the install phase. `yaml ... phases: install: runtime-versions: nodejs: 18 on-failure: ABORT commands: - APPVERSION=`jq '.version' -j package.json` - APIVERSION=$APP_VERSION-build$CODEBUILD_BUILD_NUMBER - APIZIP_KEY=$COMMIT_ID-api.zip - 'APPVERSION_DESCRIPTION="$AP_VERSION: $COMMIT_ID"' - npm ci ... ` The APP_VERSION` variable is the version property from the `package.json` file. In a release process, the application's version is stored here. The `API_VERSION` variable will contain the `APP_VERSION` and as a suffix, we include the build number. We want to upload this API version by indicating the commit ID, so the `API_ZIP_KEY` will have this information. The `APP_VERSION_DESCRIPTION` will be the description of the deployed version in Elastic Beanstalk. Finally, we are going to update the buildspec file with the actual Elastic Beanstalk deployment steps. `yaml phases: ... build: on-failure: ABORT commands: # ... 
# ZIP the API - zip -r -j dist/apps/api.zip dist/apps/api # Upload the API bundle to S3 - aws s3 cp dist/apps/api.zip s3://$APIVERSION_BUCKET/$ENVIRONMENT_TARGET/$API_ZIP_KEY # Create new API version in Elastic Beanstalk - aws elasticbeanstalk create-application-version --application-name "$ELASTICBEANSTALK_APPLICATION_NAME" --version-label "$API_VERSION" --description "$APP_VERSION_DESCRIPTION" --source-bundle "S3Bucket=$API_VERSION_BUCKET,S3Key=$ENVIRONMENT_TARGET/$API_ZIP_KEY" # Deploy new API version - aws elasticbeanstalk update-environment --application-name "$ELASTICBEANSTALK_APPLICATION_NAME" --version-label "$API_VERSION" --environment-name "$ELASTIC_BEANSTALK_ENVIRONMENT_NAME" # Wait until the Elastic Beanstalk environment is stable - aws elasticbeanstalk wait environment-updated --application-name "$ELASTICBEANSTALK_APPLICATION_NAME" --environment-name "$ELASTIC_BEANSTALK_ENVIRONMENT_NAME" ` Let's make a change in the API, for example, the message sent back by the /api/hello` endpoint and push up the changes. --- Now every time a change is merged to the main` branch, it gets pushed to our production deployment. Using these guides, you can set up multiple environments, and you can configure separate CodePipeline instances to deploy from different branches. I hope this guide proved to be helpful to you....

Declarative Canvas with Svelte cover image

Declarative Canvas with Svelte

The ` element and the Canvas API let us draw graphics via JavaScript. However, its Imperative API can be converted into a Declarative one using Svelte. The technique to achieve this will require you to use what is sometimes called Renderless Components_. Renderless Components In Svelte, all the sections of a .svelte file are optional, including the template. This allows us to create a component that will not be rendered, but can contain some logic in the ` section. Let's create a new project. I'll be using Vite and Svelte for this tutorial. `bash npm init vite ✔ Project name: canvas-svelte ✔ Select a framework: › svelte ✔ Select a variant: › svelte-ts cd canvas-svelte npm i ` Now that our project is ready, let's create a new component. `html console.log("No template"); ` We will be printing a message to the console when the component is initialized. Let's see how it works by making some changes to the entry point of our application. `ts // src/main.ts // import App from './App.svelte' import Renderless from './lib/Renderless.svelte' const app = new Renderless({ target: document.getElementById('app') }) export default app ` If we start our server and open the developer tools in our browser, we will see the message printed. It's working. Note that this component, even if it doesn't have a template, still behaves as a regular component instance, and you will still have access to the component Lifecycle methods. Let's test it. `html import { onMount } from "svelte"; console.log("No template"); onMount(() => { console.log("Component mounted"); }); ` We added a second message to be shown after the component is mounted. Both messages are now printed in the expected order. This means that we can use our Renderless Component just as any other Svelte Component. Let's revert the changes to the main.ts` file, and "render" the component inside the App component. 
```ts
// src/main.ts
import App from './App.svelte'

const app = new App({
  target: document.getElementById('app')
})

export default app
```

```html
<!-- src/App.svelte -->
<script lang="ts">
  import { onMount } from "svelte";
  import Renderless from "./lib/Renderless.svelte";

  console.log("App: initialized");

  onMount(() => {
    console.log("App: mounted");
  });
</script>

<Renderless />
```

Finally, let's also modify our Renderless component to log more meaningful messages.

```html
<!-- src/lib/Renderless.svelte -->
<script lang="ts">
  import { onMount } from "svelte";

  console.log("Renderless: initialized");

  onMount(() => {
    console.log("Renderless: mounted");
  });
</script>
```

It's important to note the order of initialization and mounting of the components. This will be important when we create our Canvas and *renderless* components.

There's a third way to mount our component, and that's passing it as a child of another component. This is also called content projection, and the way we do this is by using *slots*. Let's create a container component that will render elements in a slot. I will also add more elements that will live alongside the slot.

```html
<!-- src/lib/Container.svelte -->
<script lang="ts">
  import { onMount } from "svelte";

  console.log("Container: initialized");

  onMount(() => {
    console.log("Container: mounted");
  });
</script>

<p>The container of things</p>
<slot>invisible things</slot>
```

Let's also add a *prop* to the Renderless component to add some kind of identifier to it.

```html
<!-- src/lib/Renderless.svelte -->
<script lang="ts">
  import { onMount } from "svelte";

  export let id: string = "NoId";

  console.log(`Renderless ${id}: initialized`);

  onMount(() => {
    console.log(`Renderless ${id}: mounted`);
  });
</script>
```

Finally, in our App, we update the template to use the container, and pass multiple instances of Renderless to it.

```html
<!-- src/App.svelte -->
<script lang="ts">
  import { onMount } from "svelte";
  import Container from "./lib/Container.svelte";
  import Renderless from "./lib/Renderless.svelte";

  console.log("App: initialized");

  onMount(() => {
    console.log("App: mounted");
  });
</script>

<Container>
  <Renderless id="First" />
  <Renderless id="Second" />
</Container>
```

Now, we can see the rendered Container and the *renderless* components logging when they are initialized and mounted.
Now that we've learned about renderless components, let's use them with the `<canvas>` element.

`<canvas>` and the Canvas API

The canvas element cannot contain any children, except for a fallback element to render. Anything that you may want to render inside the canvas must be drawn using its imperative API. Let's create a new Canvas component and render an empty canvas.

```html
<!-- src/lib/Canvas.svelte -->
<script lang="ts">
  import { onMount } from "svelte";

  console.log("Canvas: initialized");

  onMount(() => {
    console.log("Canvas: mounted");
  });
</script>

<canvas />
```

Update the App component to import and use Canvas.

```html
<!-- src/App.svelte -->
<script lang="ts">
  import { onMount } from "svelte";
  import Canvas from "./lib/Canvas.svelte";

  console.log("App: initialized");

  onMount(() => {
    console.log("App: mounted");
  });
</script>

<Canvas />
```

If we open the browser dev tools, we should see a canvas element rendered now.

Rendering elements inside canvas

As mentioned previously, we cannot add elements to draw inside our canvas. We have to use the API to do it. To get a reference to the element, we will use the `bind:this` directive. It's important to understand that, to use the API, we need the element to be available. This means that we will have to draw after the component is mounted.

```html
<!-- src/lib/Canvas.svelte -->
<script lang="ts">
  import { onMount } from "svelte";

  let canvasElement: HTMLCanvasElement;

  console.log("1", canvasElement); // undefined!!!
  console.log("Canvas: initialized");

  onMount(() => {
    console.log("2", canvasElement); // OK!!!
    console.log("Canvas: mounted");
  });
</script>

<canvas bind:this={canvasElement} />
```

Now let's draw a line (I'm removing all the logging from the component for clarity).

```html
<!-- src/lib/Canvas.svelte -->
<script lang="ts">
  import { onMount } from "svelte";

  let canvasElement: HTMLCanvasElement;

  onMount(() => {
    // get canvas context
    let ctx = canvasElement.getContext("2d");

    // draw line
    ctx.beginPath();
    ctx.moveTo(10, 20); // line will start here
    ctx.lineTo(150, 100); // line ends here
    ctx.stroke(); // draw it
  });
</script>

<canvas bind:this={canvasElement} />
```

To draw, we need the canvas context, so we must do it after mounting the component. Then, we can start drawing using the Canvas API.
If we want to add a second line, we have to add a new block of code.

```html
<!-- src/lib/Canvas.svelte -->
<script lang="ts">
  import { onMount } from "svelte";

  let canvasElement: HTMLCanvasElement;

  onMount(() => {
    // get canvas context
    let ctx = canvasElement.getContext("2d");

    // draw first line
    ctx.beginPath();
    ctx.moveTo(10, 20); // line will start here
    ctx.lineTo(150, 100); // line ends here
    ctx.stroke(); // draw it

    // draw second line
    ctx.beginPath();
    ctx.moveTo(10, 40); // line will start here
    ctx.lineTo(150, 120); // line ends here
    ctx.stroke(); // draw it
  });
</script>

<canvas bind:this={canvasElement} />
```

We can see that we are starting to add more and more code to our component just by drawing simple shapes. This can get out of hand quickly. We can create helper functions to draw the lines.

```html
<!-- src/lib/Canvas.svelte -->
<script lang="ts">
  import { onMount } from "svelte";

  let canvasElement: HTMLCanvasElement;

  onMount(() => {
    // get canvas context
    let ctx = canvasElement.getContext("2d");

    // draw first line
    drawLine(ctx, [10, 20], [150, 100]);
    // draw second line
    drawLine(ctx, [10, 40], [150, 120]);
  });

  type Point = [number, number];

  function drawLine(ctx: CanvasRenderingContext2D, start: Point, end: Point) {
    ctx.beginPath();
    ctx.moveTo(...start); // line will start here
    ctx.lineTo(...end); // line ends here
    ctx.stroke(); // draw it
  }
</script>

<canvas bind:this={canvasElement} />
```

The code becomes more readable, but we are still delegating all the responsibility to the Canvas component, which will translate into a very complex component. We can avoid this by using *renderless* components and the Context API.

We know a few things so far:
- We require the canvas context to draw.
- We can get the context after the component is mounted.
- Child components are mounted before the parent component.
- Parent components are initialized before child components.
- We can use slots to mount child components.

We want to split our component into multiple ones. For this example, we want the Line component to draw itself. Canvas and Line are coupled: a Line component cannot be drawn without a Canvas, and it needs the canvas context.
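Because the drawLine helper only touches the context it is given, it can be exercised outside the browser with a stub context that records calls. This is a minimal sketch; the stub and the `LineContext` interface are illustrative, not part of the article's code:

```typescript
// Minimal subset of CanvasRenderingContext2D used by the helper,
// so this sketch runs outside the browser.
interface LineContext {
  beginPath(): void;
  moveTo(x: number, y: number): void;
  lineTo(x: number, y: number): void;
  stroke(): void;
}

type Point = [number, number];

// Same logic as the drawLine helper in the component above.
function drawLine(ctx: LineContext, start: Point, end: Point) {
  ctx.beginPath();
  ctx.moveTo(...start);
  ctx.lineTo(...end);
  ctx.stroke();
}

// A stub context that records the calls made to it.
const calls: string[] = [];
const stubCtx: LineContext = {
  beginPath: () => { calls.push("beginPath"); },
  moveTo: (x, y) => { calls.push(`moveTo ${x},${y}`); },
  lineTo: (x, y) => { calls.push(`lineTo ${x},${y}`); },
  stroke: () => { calls.push("stroke"); },
};

drawLine(stubCtx, [10, 20], [150, 100]);
console.log(calls.join(" -> ")); // → beginPath -> moveTo 10,20 -> lineTo 150,100 -> stroke
```

Keeping the drawing logic in pure helpers like this is what makes the later refactor into separate shape components straightforward.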
The problem is that the context is not available when we mount the child component (Line is mounted before Canvas), so we need a different approach. Instead of passing the canvas context to each child so it can draw itself, we will let the parent component know that a child component needs to be drawn. We'll let the Canvas and Line components communicate using *Context*. Context is a way for two or more components to communicate, and it can only be set or retrieved during initialization, which is what we need in our case. Remember that Canvas is initialized before our Line component.

Let's start by moving the line rendering to its own component. I will also move some types to their own file to be shared across components.

```ts
// src/types.ts
export type Point = [number, number];
export type DrawFn = (ctx: CanvasRenderingContext2D) => void;
export type CanvasContext = {
  addDrawFn: (fn: DrawFn) => void;
  removeDrawFn: (fn: DrawFn) => void;
};
```

```html
<!-- src/lib/Line.svelte -->
<script lang="ts">
  import type { Point } from "./types";

  export let start: Point;
  export let end: Point;

  function draw(ctx: CanvasRenderingContext2D) {
    ctx.beginPath();
    ctx.moveTo(...start);
    ctx.lineTo(...end);
    ctx.stroke();
  }
</script>
```

This is very similar to what we had in our Canvas component, but abstracted into a reusable component. Now we need a way for the Canvas and Line components to communicate. Our Canvas will work as the orchestrator of all the rendering. It will initialize all the child components, gather the rendering functions, and draw them when required.
```html
<!-- src/lib/Canvas.svelte -->
<script lang="ts">
  import { onMount, setContext } from "svelte";
  import type { DrawFn } from "./types";

  let canvasElement: HTMLCanvasElement;
  let fnsToDraw = [] as DrawFn[];

  setContext("canvas", {
    addDrawFn: (fn: DrawFn) => {
      fnsToDraw.push(fn);
    },
    removeDrawFn: (fn: DrawFn) => {
      let index = fnsToDraw.indexOf(fn);
      if (index > -1) {
        fnsToDraw.splice(index, 1);
      }
    },
  });

  onMount(() => {
    // get canvas context
    let ctx = canvasElement.getContext("2d");
    draw(ctx);
  });

  function draw(ctx: CanvasRenderingContext2D) {
    fnsToDraw.forEach((fn) => fn(ctx));
  }
</script>

<canvas bind:this={canvasElement} />
<slot />
```

The first thing to note is that our template has changed, and now we have a `<slot>` element beside our canvas. It will be used to mount any children that we pass into our canvas; in our case, the Line components. These will not add any HTML element.

In the script section, we added an array to hold all the render functions to draw. We also set a new context. This has to be done during initialization. Our Canvas is initialized before Line, so we set two methods here: one to add a function to our array, and one to remove it. Any child component can then access this context and call its methods. That's exactly what we'll do next in the Line component.

```html
<!-- src/lib/Line.svelte -->
<script lang="ts">
  import { getContext, onDestroy, onMount } from "svelte";
  import type { Point, CanvasContext } from "./types";

  export let start: Point;
  export let end: Point;

  let canvasContext = getContext("canvas") as CanvasContext;

  onMount(() => {
    canvasContext.addDrawFn(draw);
  });

  onDestroy(() => {
    canvasContext.removeDrawFn(draw);
  });

  function draw(ctx: CanvasRenderingContext2D) {
    ctx.beginPath();
    ctx.moveTo(...start);
    ctx.lineTo(...end);
    ctx.stroke();
  }
</script>
```

We register the function using the context previously set by Canvas when we mount this component. We could do it on initialization too, because we know the context will be available anyway, but I prefer doing it after the component is mounted. When the element is destroyed, it removes itself from the list of rendering functions.
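Stripped of the Svelte specifics, the context value that Canvas sets is just a small registry of draw functions. As a standalone sketch of the same add/remove/draw pattern (`createRegistry` is an illustrative name, not from the article):

```typescript
type DrawFn = (ctx: unknown) => void;

// Minimal registry mirroring the object Canvas passes to setContext("canvas", ...).
function createRegistry() {
  const fns: DrawFn[] = [];
  return {
    addDrawFn: (fn: DrawFn) => {
      fns.push(fn);
    },
    removeDrawFn: (fn: DrawFn) => {
      const index = fns.indexOf(fn);
      if (index > -1) {
        fns.splice(index, 1);
      }
    },
    // What the canvas does on each frame: run every registered function.
    draw: (ctx: unknown) => {
      fns.forEach((fn) => fn(ctx));
    },
  };
}

// Usage: components register on mount and remove themselves on destroy.
const registry = createRegistry();
let drawCount = 0;
const fn: DrawFn = () => {
  drawCount++;
};

registry.addDrawFn(fn);
registry.draw(null); // fn runs once
registry.removeDrawFn(fn);
registry.draw(null); // removed functions are no longer called
console.log(drawCount); // → 1
```

This is why `onDestroy` matters in the Line component: without the `removeDrawFn` call, a destroyed line would keep being drawn every frame.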
Finally, let's update our App to use the new Canvas and Line components.

```html
<!-- src/App.svelte -->
<script lang="ts">
  import Canvas from "./lib/Canvas.svelte";
  import Line from "./lib/Line.svelte";
</script>

<Canvas>
  <Line start={[10, 20]} end={[150, 100]} />
  <Line start={[10, 40]} end={[150, 120]} />
</Canvas>
```

We've successfully updated our Canvas component to use a declarative approach. A few things are missing, though. We are only drawing once, when the Canvas component is mounted. We need to make the canvas render frequently so it updates itself when changes happen (unless you only want to render once). Note that we would have to do this with or without the approach we've taken, and it's a common way of updating the canvas contents.

```html
<!-- src/lib/Canvas.svelte -->
<script lang="ts">
  // NOTE: some code removed for readability
  // ...
  export let clearFrames: boolean = true;

  let frameId: number;
  // ...

  onMount(() => {
    // get canvas context
    let ctx = canvasElement.getContext("2d");
    frameId = requestAnimationFrame(() => draw(ctx));
  });

  onDestroy(() => {
    if (frameId) {
      cancelAnimationFrame(frameId);
    }
  });

  function draw(ctx: CanvasRenderingContext2D) {
    if (clearFrames) {
      ctx.clearRect(0, 0, canvasElement.width, canvasElement.height);
    }
    fnsToDraw.forEach((fn) => fn(ctx));
    frameId = requestAnimationFrame(() => draw(ctx));
  }
</script>
```

We achieve this re-rendering of the canvas using the `requestAnimationFrame` method. The callback passed in will run before the browser's next repaint. First, we create a new variable to hold the current `frameId` (required for canceling the animation). Then, when we mount the component, we invoke `requestAnimationFrame` and assign the returned id to our variable. So far, the end result is the same as before. The difference is now in our draw function, which requests a new animation frame each time after drawing. We also clear our canvas by default; otherwise, when animating, each frame would be drawn on top of the previous one (this might be the desired effect, in which case the `clearFrames` prop can be set to false). Our canvas will update each frame until we destroy the component and cancel any pending animation using the id previously stored.
Adding more features

The basic functionality for the components is working, but we may want to add more features. For this example, we will forward two events: `mousemove` and `mouseleave`. To do this, we need to add a few things to our Canvas component. In the template, change the canvas element to this:

```html
<canvas bind:this={canvasElement} on:mousemove on:mouseleave />
```

In Svelte, an `on:` directive without a handler forwards the event to the component's consumer. Now, the events can be handled in our App:

```html
<!-- src/App.svelte -->
<script lang="ts">
  import Canvas from "./lib/Canvas.svelte";
  import Line from "./lib/Line.svelte";
  import type { Point } from "./lib/types";

  function followMouse(e) {
    let rect = e.target.getBoundingClientRect();
    end = [e.clientX - rect.left, e.clientY - rect.top];
  }

  let start = [0, 0] as Point;
  let end = [0, 0] as Point;
</script>

<Canvas
  on:mousemove={(e) => followMouse(e)}
  on:mouseleave={() => {
    end = [0, 0];
  }}
>
  <Line {start} {end} />
</Canvas>
```

Svelte is responsible for updating the end position of the line, but our Canvas component is the one updating the canvas content (using `requestAnimationFrame`).

Wrapping up

I hope this tutorial helps you as an introduction to using canvas in Svelte, but also to understand how we can turn a library with an imperative API into a more declarative one. There are libraries that apply these ideas to more complex use cases with a similar approach, like svelte-cubed or svelte-leaflet. From the `svelte-cubed` docs, this...

```ts
import * as THREE from 'three';

function render(element) {
  const scene = new THREE.Scene();

  const camera = new THREE.PerspectiveCamera(
    45,
    element.clientWidth / element.clientHeight,
    0.1,
    2000
  );

  const renderer = new THREE.WebGLRenderer();
  renderer.setSize(element.clientWidth, element.clientHeight);
  element.appendChild(renderer.domElement);

  const geometry = new THREE.BoxGeometry();
  const material = new THREE.MeshNormalMaterial();
  const box = new THREE.Mesh(geometry, material);
  scene.add(box);

  camera.position.x = 2;
  camera.position.y = 2;
  camera.position.z = 5;
  camera.lookAt(new THREE.Vector3(0, 0, 0));

  renderer.render(scene, camera);
}
```

becomes...
```html
<script>
  import * as THREE from 'three';
  import * as SC from 'svelte-cubed';
</script>

<SC.Canvas>
  <SC.Mesh geometry={new THREE.BoxGeometry()} />
  <SC.PerspectiveCamera position={[1, 1, 3]} />
</SC.Canvas>
```

We just scratched the surface of the Canvas API, but you can extend it for your own needs, or even create a library for it!
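One way to extend the pattern: any new shape component only needs its own draw function plus the same addDrawFn/removeDrawFn registration that Line uses. A hypothetical rectangle helper, sketched in TypeScript with a call-recording stub so it runs outside the browser (`drawRect` and `RectContext` are illustrative names, not from the article):

```typescript
// Minimal subset of the canvas context used by the helper.
interface RectContext {
  beginPath(): void;
  rect(x: number, y: number, w: number, h: number): void;
  stroke(): void;
}

type Point = [number, number];

// Pure drawing helper: a hypothetical Rect.svelte component would call this
// from its registered draw function, just as Line does with its own drawing.
function drawRect(ctx: RectContext, topLeft: Point, width: number, height: number) {
  ctx.beginPath();
  ctx.rect(topLeft[0], topLeft[1], width, height);
  ctx.stroke();
}

// Stub context recording calls, since no real canvas exists here.
const calls: string[] = [];
drawRect(
  {
    beginPath: () => { calls.push("beginPath"); },
    rect: (x, y, w, h) => { calls.push(`rect ${x},${y},${w},${h}`); },
    stroke: () => { calls.push("stroke"); },
  },
  [10, 10],
  100,
  50
);
console.log(calls.join(" -> ")); // → beginPath -> rect 10,10,100,50 -> stroke
```

Each new shape stays a small, independently testable unit, while the Canvas orchestrator never needs to change.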


Being a CTO at Any Level: A Discussion with Kathy Keating, Co-Founder of CTO Levels

In this episode of the engineering leadership series, Kathy Keating, co-founder of CTO Levels and CTO Advisor, shares her insights on the role of a CTO and the challenges they face. She begins by discussing her own journey as a technologist and her experience in technology leadership roles, including founding companies and having a recent exit.

According to Kathy, the primary responsibility of a CTO is to deliver the technology that aligns with the company's business needs. However, she highlights a concerning statistic: 50% of CTOs have a tenure of less than two years, often due to a lack of understanding and mismatched expectations. She emphasizes the importance of building trust quickly in order to succeed in this role.

One of the main challenges CTOs face is transitioning from being a technologist to a leader. Kathy stresses the significance of developing effective communication habits to bridge this gap. She suggests that CTOs create a playbook of best practices to enhance their communication skills, and join communities of other CTOs to learn from their experiences.

Matching the right CTO to the stage of a company is another crucial aspect discussed in the episode. Kathy explains that different stages of a company require different types of CTOs, and it is essential to find the right fit.

To navigate these challenges, Kathy advises CTOs to build a support system of advisors and coaches who can provide guidance and help them overcome obstacles. Additionally, she encourages CTOs to be aware of their own preferences and strengths, as self-awareness can greatly contribute to their success.

In conclusion, this podcast episode sheds light on the technical aspects of being a CTO and the challenges they face. Kathy Keating's insights provide valuable guidance for CTOs to build trust, develop effective communication habits, match their skills to the company's stage, and create a support system for their professional growth.
By understanding these key technical aspects, CTOs can enhance their leadership skills and contribute to the success of their organizations.