
AUTHOR

Tom VanAntwerp

Senior Software Engineer


Upgrading from Astro 2 to Astro 4

Astro has released version 4 just a few months after launching version 3. Here are the most important new features to know about if you haven't upgraded from v2 yet....


JavaScript Errors: An Introductory Primer

JavaScript Errors are an integral part of the language and its runtime environment. They provide valuable feedback when something goes wrong during the execution of your script. And once you understand how to use and handle Errors, you'll find them a much better debugging tool than always reaching for console.log.

Why Use Errors?

When JavaScript throws errors, it's usually because there's a mistake in the code. For example, trying to access properties of null or undefined would throw a TypeError. Or trying to use a variable before it has been declared would throw a ReferenceError. But these can often be caught before execution by properly linting your code.

More often, you'll want to create your own errors in your programs to catch problems unique to what you're trying to build. Throwing your own errors can make it easier for you to interrupt the control flow of your code when necessary conditions aren't met.

Why would you want to use Error instead of just console.logging all sorts of things? Because an Error will force you to address it. JavaScript is optimistic. It will do its best to execute despite all sorts of issues in the code. Just logging some problem might not be enough to notice it. You could end up with subtle bugs in your program and not know! Using console.log won't stop your program from continuing to execute. An Error, however, interrupts your program. It tells JavaScript, "we can't proceed until we've fixed this problem". And then JavaScript happily passes the message on to you!

Using Errors in JavaScript

Here's an example of throwing an error:

`

When an Error is thrown, nothing after that throw in your scope will be executed. JavaScript will instead pass the Error to the nearest error handler higher up in the call stack. If no handler is found, the program terminates. Since you probably don't want your programs to crash, it's important to set up Error handling. This is where something like try / catch comes in.
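A minimal sketch of throwing an Error and handling it with try / catch (the setAge function is hypothetical, just for illustration):

```typescript
// Hypothetical validation function: throws to interrupt control flow
// when a necessary condition isn't met.
function setAge(age: number): number {
  if (!Number.isInteger(age) || age < 0) {
    throw new Error(`Invalid age: ${age}`);
  }
  return age;
}

let message = '';
try {
  setAge(-5); // throws; nothing after this line in the try runs
  message = 'this is never reached';
} catch (error) {
  // By convention the caught value is named `error` or `err`
  message = error instanceof Error ? error.message : String(error);
}
```

Because the throw interrupts execution, `message` ends up holding the Error's message rather than the unreachable string.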
Any code you write inside the try will attempt to execute. If an Error is thrown by anything inside the try, then the following catch block is where you can decide how to handle that Error.

`

In the catch block, you receive an error object (by convention, this is usually named error or err, but you could give it any name) which you can then handle as needed.

Asynchronous Code and Errors

There are two ways to write asynchronous JavaScript, and they each have their own way of writing Error handling. If you're using async / await, you can use the try / catch block as in the previous example. However, if you're using Promises, you'll want to chain a catch to the Promise like so:

`

Understanding the Error Object

The Error object is a built-in object that JavaScript provides to handle runtime errors. All types of errors inherit from it. Error has several useful properties.

- message: Probably the most useful of Error's properties, message is a human-readable description of the error. When creating a new Error, the string you pass will become the message.
- name: A string representing the error type. By default, this is Error. If you're using a built-in sub-class of Error like TypeError, it will be that instead. Otherwise, if you're creating a custom type of Error, you'll need to set this in the constructor.
- stack: While technically non-standard, stack is a widely supported property that gives a full stack trace of where the error was created.
- cause: This property allows you to give more specific data when throwing an error. For example, if you want to add a more detailed message to a caught error, you could throw a new Error with your message and pass the original Error as the cause. Or you could add structured data for easier error analysis.

Creating an Error object is quite straightforward:

`

In addition to the generic Error, JavaScript provides several built-in sub-classes of Error:

- EvalError: Thrown when a problem occurs with the eval() function.
This only exists for backwards compatibility, and will not be thrown by JavaScript. You shouldn't use eval() anyway.
- InternalError: Non-standard error thrown when something goes wrong in the internals of the JavaScript engine. Really only used in Firefox.
- RangeError: Thrown when a value is not within the expected range.
- ReferenceError: Thrown when a value doesn't exist yet / hasn't been initialized.
- SyntaxError: Thrown when a parsing error occurs.
- TypeError: Thrown when a variable is not of the expected type.
- URIError: Thrown when using a global URI handling function incorrectly.

Each of these error types inherits from the Error object and generally adds no additional properties or methods, but they do change the name property to reflect the error type.

Making Your Own Custom Errors

It's sometimes useful to extend the Error object yourself! This lets you add properties to particular Errors you throw, as well as easily check in catch blocks if an Error is of a particular type. To extend Error, use a class.

`

In this example, CustomError extends the Error class. It changes the name to CustomError and gives it the new property foo: 'bar'. You can then throw your CustomError, check if the error in your catch block is an instance of the CustomError, and access the properties associated with your CustomError. This gives you a lot more control over how Errors are structured and validated, which could greatly aid with debugging because your errors won't all just be Errors.

Common Confusions

There are many ways that using Errors can go subtly wrong. Here are some of the common issues to keep in mind when working with Error.

Failure to Catch

When an Error is thrown, the program will cease executing anything else in its scope and start working its way back up the call stack until it finds a catch block to deal with the Error. If it never finds a catch, the program will crash.
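To make the custom error idea concrete, here's a sketch based on the CustomError described above (name set to CustomError, with a foo property), also showing how a thrown Error travels up the call stack until a catch handles it:

```typescript
// Sketch of the CustomError described above: extends Error,
// renames itself, and adds a `foo` property.
class CustomError extends Error {
  foo = 'bar';
  constructor(message: string) {
    super(message);
    this.name = 'CustomError';
  }
}

function doWork(): void {
  // Thrown here, but handled further up the call stack
  throw new CustomError('something went wrong');
}

let result = '';
try {
  doWork();
} catch (error) {
  if (error instanceof CustomError) {
    // The instanceof check lets us safely access CustomError's properties
    result = `${error.name}: ${error.message} (foo=${error.foo})`;
  }
}
```

Had no try / catch existed anywhere up the stack, the thrown CustomError would have crashed the program instead.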
So it's important to make sure you actually catch your errors, or else you might terminate your program unintentionally for small and recoverable issues. It's especially helpful to think about catching errors when you're executing code from external libraries which you don't control. You may import a function to handle something for you, and it unexpectedly throws an Error. You should anticipate this possibility and ask yourself: "Should this code go inside of a try / catch block?" to prevent an error like this from crashing your code.

Network Requests Don't Throw on 400 and 500 Statuses

You might want to make a request to an API, and then handle an error if the request fails.

`

Maybe you made a bad request and got back a 400. Maybe you're not properly authenticated and got a 401 or 403. Maybe the endpoint is invalid and you get a 404. Or maybe the server is having a bad day and you get a 500. In none of those cases will you get an Error! From JavaScript's point of view, your request worked. You sent some data to a place, and the place sent you something back. Mission accomplished!

Except it's not. You need to deal with these HTTP error statuses. So if you want to handle responses that aren't OK, you need to do it explicitly. To fix the previous example:

`

You Can Throw Anything

You should throw new Errors. But you could throw 'literally anything'. There's nothing forcing you to only throw an Error. However, it's a lot harder to handle your errors if there's no consistency in what to expect in your catch blocks. It's a best practice to only throw an Error and not any other kind of JavaScript object.

This problem becomes especially clear in TypeScript, when the default type of an error in a catch block is not Error, but unknown. TypeScript has no way to know if an error passed into the catch is going to actually be an Error or not, which can make it more frustrating to write error handling code.
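Returning to the network example above, a minimal sketch (the getJson helper and URL are hypothetical) of explicitly converting non-OK HTTP statuses into Errors via response.ok:

```typescript
// fetch only rejects on network failure, so 400s and 500s must be
// turned into Errors explicitly by checking response.ok.
async function getJson(url: string): Promise<unknown> {
  const response = await fetch(url);
  if (!response.ok) {
    // Without this check, a 404 or 500 would pass through silently
    throw new Error(`HTTP ${response.status} for ${url}`);
  }
  return response.json();
}
```

Callers can then handle the thrown Error in a try / catch (with async / await) or a chained .catch (with Promises), just like any other Error.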
For this reason, it's often a good idea to check what exactly you've received before trying to handle it.

`

(Alas, you cannot throw 🥳. That's a SyntaxError. But throw new Error('🥳') is still perfectly valid!)

Conclusion

Wielding JavaScript Errors is a big upgrade from console.logging all the things and hoping for the best. And it's not very hard to do it well! By using Error, your apps will be much more explicit in how you expect things to work, and how you expect things might not work. And when something does go wrong, you'll be more likely to notice and better equipped to figure out the problem....


Linting, Formatting, and Type Checking Commits in an Nx Monorepo with Husky and lint-staged

One way to keep your codebase clean is to enforce linting, formatting, and type checking on every commit. This is made very easy with pre-commit hooks. Using Husky, you can run arbitrary commands before a commit is made. This can be combined with lint-staged, which allows you to run commands on only the files that have been staged for commit. This is useful because you don't want to run linting, formatting, and type checking on every file in your project, but only on the ones that have been changed.

But if you're using an Nx monorepo for your project, things can get a little more complicated. Rather than have you use eslint or prettier directly, Nx has its own scripts for linting and formatting. And type checking is complicated by the use of specific tsconfig.json files for each app or library. Setting up pre-commit hooks with Nx isn't as straightforward as in a simpler repository. This guide will show you how to set up pre-commit hooks to run linting, formatting, and type checking in an Nx monorepo.

Configure Formatting

Nx comes with a command, nx format:write, for applying formatting to affected files, which we can give directly to lint-staged. This command uses Prettier under the hood, so it will abide by whatever rules you have in your root-level .prettierrc file. Just install Prettier, and add your preferred configuration.

`

Then add a .prettierrc file to the root of your project with your preferred configuration. For example, if you want to use single quotes and trailing commas, you can add the following:

`

Configure Linting

Nx has its own plugin that uses ESLint to lint projects in your monorepo. It also has a plugin with sensible ESLint defaults for your linter commands to use, including ones specific to Nx.
To install them, run the following command:

`

Then, we can create a default .eslintrc.json file in the root of our project:

`

The above ESLint configuration will, by default, apply Nx's module boundary rules to any TypeScript or JavaScript files in your project. It also applies its recommended rules for JavaScript and TypeScript respectively, and gives you room to add your own. You can also have ESLint configurations specific to your apps and libraries. For example, if you have a React app, you can add a .eslintrc.json file to the root of your app directory with the following contents:

`

Set Up Type Checking

Type checking with tsc is normally a very straightforward process. You can just run tsc --noEmit to check your code for type errors. But things are more complicated in Nx with lint-staged. There are two tricky things about type checking with lint-staged in an Nx monorepo.

First, different apps and libraries can have their own tsconfig.json files. When type checking each app or library, we need to make sure we're using that specific configuration. The second wrinkle comes from the fact that lint-staged passes a list of staged files to commands it runs by default. And tsc will only accept either a specific tsconfig file, or a list of files to check. We do want to use the specific tsconfig.json files, and we also only want to run type checking against apps and libraries with changes.

To do this, we're going to create some Nx run commands within our apps and libraries and run those instead of calling tsc directly. Within each app or library you want type checked, open the project.json file, and add a new run command like this one:

`

Inside commands is our type-checking command, using the local tsconfig.json file for that specific Nx app. The cwd option tells Nx where to run the command from. The forwardAllArgs option tells Nx to ignore any arguments passed to the command.
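A sketch of what such a typecheck target might look like in project.json (the app path and tsconfig filename are assumptions for illustration):

```json
{
  "targets": {
    "typecheck": {
      "executor": "nx:run-commands",
      "options": {
        "commands": ["tsc -p tsconfig.app.json --noEmit"],
        "cwd": "apps/my-app",
        "forwardAllArgs": false
      }
    }
  }
}
```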
This is important because tsc will fail if you pass both a tsconfig.json and a list of files from lint-staged. Now if we ran nx affected --target=typecheck from the command line, we would be able to type check all affected apps and libraries that have a typecheck target in their project.json. Next we'll have lint-staged handle this for us.

Installing Husky and lint-staged

Finally, we'll install and configure Husky and lint-staged. These are the two packages that will allow us to run commands on staged files before a commit is made.

`

In your package.json file, add the prepare script to run Husky's install command:

`

Then, run your prepare script to set up git hooks in your repository. This will create a .husky directory in your project root with the necessary file system permissions.

`

The next step is to create our pre-commit hook. We can do this from the command line:

`

It's important to use Husky's CLI to create our hooks, because it handles file system permissions for us. Creating files manually could cause problems when we actually want to use the git hooks. After running the command, we will now have a file at .husky/pre-commit that looks like this:

`

Now whenever we try to commit, Husky will run the lint-staged command. We've given it some extra options. First, --concurrent false to make sure attempts to write fixes with formatting and linting don't conflict with simultaneous attempts at type checking. Second is --relative, because our Nx commands for formatting and linting expect a list of file paths relative to the repo root, but lint-staged would otherwise pass the full path by default.

We've got our pre-commit command ready, but we haven't actually configured lint-staged yet. Let's do that next.

Configuring lint-staged

In a simpler repository, it would be easy to add some lint-staged configuration to our package.json file. But because we're trying to check a complex monorepo in Nx, we need to add a separate configuration file.
We'll call it lint-staged.config.js and put it in the root of our project. Here is what our configuration file will look like:

`

Within our module.exports object, we've defined two globs: one that will match any TypeScript files in our apps, libraries, and tools directories, and another that also matches JavaScript and JSON files in those directories. We only need to run type checking for the TypeScript files, which is why that one is broken out and narrowed down to only those files.

These globs defining our directories can be passed a single command, or an array of commands. It's common with lint-staged to just pass a string like tsc --noEmit or eslint --fix. But we're going to pass a function instead to combine the list of files provided by lint-staged with the desired Nx commands.

The nx affected and nx format:write commands both accept a --files option. And remember that lint-staged always passes in a list of staged files. That array of file paths becomes the argument to our functions, and we concatenate our list of files from lint-staged into a comma-delimited string and interpolate that into the desired Nx command's --files option. This will override Nx's normal behavior to explicitly tell it to only run the commands on the files that have changed and any other files affected by those changes.

Testing It Out

Now that we've got everything set up, let's try it out. Make a change to a TypeScript file in one of your apps or libraries. Then try to commit that change. You should see the following in your terminal as lint-staged runs:

`

Now, whenever you try to commit changes to files that match the globs defined in lint-staged.config.js, the defined commands will run first, and verify that the files contain no type errors, linting errors, or formatting errors. If any of those commands fail, the commit will be aborted, and you'll have to fix the errors before you can commit.
Conclusion

We've now set up a monorepo with Nx and configured it to run type checking, linting, and formatting on staged files before a commit is made. This will help us catch errors before they make it into our codebase, and it will also help us keep our codebase consistent and readable. To see an example Nx monorepo with these configurations, check out this repo....


Creating Custom Types in TypeScript with Indexed Access Types, Const Assertions, and Satisfies

Frequently when writing TypeScript, you may need to create a new type from an existing type. For example, you may have a large type that you need to use in multiple places, and you want to create a new type that is a subset of the original type. Or you may have a large object full of data that you want to use to create types to maintain type safety. In this post, we'll cover how to create new types from existing types and data in TypeScript.

Accessing parts of a type with indexed access types

In JavaScript, you can access an object property's value with the string key of that property using someObject['someProperty']. You can use the same sort of syntax with TypeScript's types to get specific pieces out of a type. For example:

`

Using TypeName["someProperty"] allows you to extract that piece of the type. These are called indexed access types. If you needed to use a piece of a large, complex type, you could simply pull that piece out into its own type using indexed access types.

Why indexed access types?

But what good is this? Couldn't I just refactor? In the previous example, wouldn't it be better for the pizza's Toppings to be a type of its own before defining Pizza, and then passed in as toppings: Toppings? I'd say yes, it would be. (And we'll cover that later!) But what if you're working with a type that you don't have control over (e.g., from a third party library), but you need to use a piece of it in a different type? That's where indexed access types come in.

Why not just use Pick?

Wait, why not just use Pick instead of indexed access types? You would want to use the indexed access type when you want _specifically_ a piece of the type, and not a type with that single property. For example:

`

The index is a type!

It isn't obvious from looking at the examples, but when you index a type, you're doing so with another type! So if I wanted to access a piece of a type with a defined string, it would fail.
For example:

`

In this case, I would instead have to use Pizza[typeof key] to get the same result as I would from just passing the value directly as Pizza["toppings"]. Alternatively, changing const key into type key would work.

Because the index is a type, you can pass a type in as the index. This lets me do things like tell TypeScript: "I want to create a type that could be any one of the items in this array". You would do this by using the type number as your index access type. For example, if I wanted to create a single Topping type from our Pizza example, I could do the following:

`

Creating types with const assertions

Sometimes in TypeScript, you'll have some object full of data that you would like to use in a type-safe way. Let's return to our pizza example. Say we're building a web app to let people order our pizzas. Inside our order form, we have a list of toppings. This list of data could include a name, a description, and an extra price.

`

Since we've gone through the trouble of writing all of this out, we should use this data to inform the Pizza type about our toppings. If we don't, it's both a duplication of code (a time-waster) and an opportunity for this data to get out of sync with our Pizza type. For a first attempt, you might use the indexed access types we learned about earlier to get each of the topping names:

`

But that won't work! TypeScript has widened the type from those literal values to the broader string type. It doesn't assume that these values can't be changed later on. But it did notice that every name in TOPPINGS was a string, so it decided that the string type was the safest bet. Here, you can see how it would widely interpret the type of any entry in TOPPINGS:

`

This is a good default, but it's not what we want here. The fix to this problem is easy: const assertions. We can simply append as const at the end of our TOPPINGS declaration.
This tells TypeScript that we want to treat everything about this object as literal values that should not be widened. For example:

`

Now we've got a type with all of the literal values from TOPPINGS as readonly properties in our type! From here, we can use indexed access types to create our Topping type from the name property:

`

And we can use this type to inform our Pizza type:

`

Extra type safety with satisfies

Let's say we're factoring out the available crusts for making our Pizza. We could start with an array of strings, use a const assertion to use the literal values and avoid widening, and then again use our indexed access types to create a type from that array:

`

Well, almost there. Notice that we have an undefined type in there. That's because we have an extra comma in our array. This is effectively the same as saying ['thin', 'thick', undefined, 'stuffed']. You could detect the undefined with type annotations, but that can't be mixed with const assertions. The type cannot both be string[] and readonly ['thin', 'thick', 'stuffed'].

`

To avoid this issue, we can use satisfies to confirm that the value conforms to a certain intended shape. In our case, we want to confirm that the array is a tuple of strings. We don't need TypeScript to confirm which strings exactly, only that it matches the intended shape.

`

We can further combine satisfies with as const to get the literal values we want while verifying that the array is a tuple of strings:

`

With as const, we tell TypeScript that it should not widen the inferred type of CRUSTS and that we expect it to be the literal values given. And with satisfies readonly string[], we tell TypeScript that CRUSTS should satisfy the shape of an array of readonly strings. Now we can't accidentally add an extra comma or other value to the array, and we can still use the literal values from CRUSTS to create new types.
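Pulling the pieces above together in one runnable sketch (the specific topping and crust values are assumptions based on the pizza example):

```typescript
// Const assertion: keep the literal types instead of widening to string
const TOPPINGS = [
  { name: 'pepperoni', extraPrice: 1.5 },
  { name: 'mushrooms', extraPrice: 1.0 },
] as const;

// number as the index means "any one element of this array";
// Topping is 'pepperoni' | 'mushrooms'
type Topping = (typeof TOPPINGS)[number]['name'];

// satisfies checks the shape without widening the literal values
const CRUSTS = ['thin', 'thick', 'stuffed'] as const satisfies readonly string[];
type Crust = (typeof CRUSTS)[number]; // 'thin' | 'thick' | 'stuffed'

// The derived unions now constrain what an order may contain
const order: { topping: Topping; crust: Crust } = {
  topping: 'mushrooms',
  crust: 'thin',
};
```

An accidental extra comma in CRUSTS, or a typo like `topping: 'mushroom'`, would now fail to compile rather than slipping through at runtime.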
Conclusion

The combination of indexed access types, const assertions, and the satisfies operator gives us a lot of power to create types that are more specific and more accurate. You can use them to transform your data into useful types, rather than attempting to duplicate that information manually, and inevitably having the data and types fall out of sync. This can ultimately save you and your team a lot of time, effort, and headache. If you want to view the examples in this article in a runnable playground, you can find them at the TypeScript playground....


Web Scraping with Deno

TypeScript is a great choice for writing a web scraper, but using it and web APIs in Node is kind of a pain. But there's an alternative: web scraping with Deno!...


Starter.dev and Remix: How and Why

Starter.dev helps developers get started building web apps with a variety of frameworks, showcasing how various libraries can fit together to solve similar problems. To do that, This Dot Labs has built a series of showcase apps that recreate the experience of using GitHub.

Unique among these showcases is our Remix app. Remix, unlike many of the other frameworks, is meant to run on the server. It has a server-side component unlike the other starter kits, which primarily rely on front end API calls for data fetching. In Remix, data fetching happens in loaders that run on the server, and are used to generate the page. Remix presented several unique challenges because of its different approach to building web apps when compared to most of the other tools we showcased. Between its newness and its novel approach to JavaScript web apps, building a showcase for it wasn't as straightforward as it was for many other frameworks. This blog post details what we chose to include in our Remix GitHub clone, and how we integrated them.

Authentication

Authentication is done by creating an OAuth GitHub application. The user clicks the button to sign in with GitHub, and is then redirected to GitHub to authorize the app. After approval, they are then redirected back to the app with an access token. This token is then used to make requests to the GitHub API. Rather than implement the OAuth flow from scratch, the Remix starter used remix-auth-github. This package made it very easy to configure authentication services with Remix. For a more general purpose version, see remix-auth.

GraphQL

We chose to use GitHub's GraphQL API to fetch our data instead of their REST API. Because we were trying to copy the GitHub interface, we knew that there would be many different components showing data from different data models all at once. This is where GraphQL shines; it let us make nested queries to fetch the data we needed with fewer calls to GitHub. We used graphql-request to make our API calls.
We chose it because it's a minimal library for making GraphQL requests, and we didn't want this showcase to be overwhelmed by the choice of a larger and more feature-rich library. Our goal was to show off Remix, not GraphQL. GraphQL queries are made in Remix's loader functions inside route files. We use graphql-request to combine our data queries with our aforementioned auth token in order to fetch GitHub data for inclusion in our rendered page.

Tailwind

Several of the starter.dev kits also used Tailwind for styles, so we chose the same for Remix to avoid duplication of work. Remix has an approach to styling that lends itself to keeping styles separate from JavaScript, and Tailwind works well here. Our first starter.dev kit was a NextJS app using CSS Modules to @apply Tailwind styles, co-locating those style files with the components. Example CSS and Component from the Next app:

`

`

Remix didn't support CSS Modules and is generally not great at style and logic co-location. By default, Remix expects you to import a CSS file at the root of your application. Given these restrictions, the CSS Modules approach wasn't going to work in the Remix app. We wanted to do the following:

- Keep using the Tailwind styles from the Next app
- Leave the Tailwind classes out of the JSX to avoid clutter
- Not have all styles located in a styles directory, but instead co-locate them with the components

The best example of Tailwind in Remix available when we began was Kent C. Dodds' blog. But he inlines the Tailwind classes in his components, which creates the hard-to-read long lines of class names we wanted to avoid. To accomplish our goals, we instead chose a simple approach: we kept the lists of Tailwind styles as strings exported from a Component.classNames.ts file. Since all of the styles from the Next app were exclusively Tailwind class names, we didn't have to worry about losing anything by switching from CSS Modules to just typing strings of class names.
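A sketch of that pattern (the component and its class names are hypothetical): the Tailwind strings live in a sibling .classNames.ts module and are imported by the component, keeping the JSX free of long class lists.

```typescript
// Button.classNames.ts (hypothetical component): Tailwind class lists
// kept as exported strings, co-located with the component file.
export const button =
  'inline-flex items-center rounded-md px-4 py-2 font-semibold';
export const buttonPrimary = `${button} bg-blue-600 text-white`;

// Button.tsx would then use them like:
//   import * as styles from './Button.classNames';
//   <button className={styles.buttonPrimary}>{label}</button>
```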
`

`

To use these Tailwind styles, the Remix starter.dev app imports a single CSS file that is automatically generated at the application root. Concurrently is used in development to continuously run Remix and regenerate the Tailwind CSS file. This approach is very similar to the recommendations from Tailwind and Remix. It's also possible to create a CSS file where you can use @apply in custom classes using nearly the same methods, as documented by Remix. We didn't do this because we preferred to keep style information closer to the component files rather than siloed away in a styles directory.

Storybook

Like many of the other starter.dev kits, the Remix starter uses Storybook to interactively view and build components in isolation. It also uses Vite with Storybook instead of Webpack. In testing both Webpack and Vite, we found that Vite was faster to build and reload Storybook, so we chose to use it over Webpack. However, Storybook is not natively supported in Remix. (See here for discussion.)

This lack of support caused the greatest trouble with Remix's Link component. The Link component is used to navigate between pages in Remix, but Storybook doesn't know what to make of it. Fortunately, because we know that Link eventually renders as an a element, we were able to create a mocked version of Link just for Storybook that effectively swaps it with a. Then in the main.js configuration file, we aliased @remix-run/react to point to our mocks instead.

Architect for Deployment

Deploying Remix is a blog post unto itself. Remix includes a lot of support for pre-configured deployment types and hosts, but AWS Amplify, our target environment for the starter.dev showcases, was not initially one of them. To get it working as desired, we modified the Remix Grunge Stack and used Architect, an infrastructure-as-code tool for AWS, to deploy the Remix app.

Conclusion

When we built our Remix starter.dev showcase, the framework had not been open to the public for very long.
The structure of our Remix starter reflects our best judgement of how to use Remix in those early days, and has continued to be updated as we learn more and the ecosystem matures. We welcome everyone to take a look, and contribute back to the starter if you have any improvements!...


How to Log In to Third-Party Services in Stripe Apps with OAuth PKCE

One of the benefits of Stripe Apps is that they allow you to connect to third-party services directly from the Stripe Dashboard. There are many ways to implement the OAuth flows to authenticate with a third-party service, but the ideal one for Stripe Apps is PKCE. Unlike other OAuth flows, a Stripe app authenticating with a third-party using PKCE does not require any kind of backend. The entire process can take place in the user's browser.

What is OAuth PKCE

Proof Key for Code Exchange (PKCE, pronounced "pixie") is an extension of regular OAuth flows. It is designed for when you've got a client where it would not be possible to keep a secret key secure, such as a native app, or a single-page app. Because Stripe Apps are very restricted for security purposes, the OAuth PKCE flow is the only OAuth flow that works in Stripe Apps without requiring a separate backend. Not all third-party services support the PKCE authorization flow. One that does is Dropbox, and we will use that for our code examples.

Using createOAuthState and oauthContext to Get an Auth Token

To use the OAuth PKCE flow, you'll use createOAuthState from the Stripe UI Extension SDK to generate a state and code challenge. We will use these to request a code and verifier from Dropbox. Dropbox will then respond to a specific endpoint for our Stripe App with the code and verifier, which we'll have access to in the oauthContext. With these, we can finally get our access token. If you wish to follow along, you'll need to both create a Stripe App and a Dropbox App.

We'll start by creating state to save our oauthState and challenge, and then get a code and verifier if we don't have one already. If we do have a code and verifier, we'll try to get the token, and put it in tokenData state.

`

`

`

Fetch Dropbox User Data

To prove to ourselves that the token works, let's fetch Dropbox user data using the token. We'll create a new function to fetch this user data, and call it from within our Stripe App's view.
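As background on what the SDK's createOAuthState is doing under the hood (this is not the Stripe SDK's API): per RFC 7636, the code challenge is the base64url-encoded SHA-256 hash of a random verifier. A sketch using Node's crypto for brevity (in the browser, Web Crypto would be used instead):

```typescript
import { createHash, randomBytes } from 'crypto';

// RFC 7636: code_verifier is a high-entropy random string;
// code_challenge = BASE64URL(SHA256(code_verifier))
function createVerifier(): string {
  return randomBytes(32).toString('base64url');
}

function createChallenge(verifier: string): string {
  return createHash('sha256').update(verifier).digest('base64url');
}

const verifier = createVerifier();
const challenge = createChallenge(verifier);
// The challenge goes in the authorize request; the verifier is sent
// later in the token exchange so the server can confirm they match,
// proving both requests came from the same client.
```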
We'll store this user data in state.

Storing Tokens with the Secret Store

Currently, we're only persisting the retrieved token data in memory. As soon as we close the Stripe App, it will be forgotten, and the user would have to fetch it all over again. For security reasons, we can't save it as a cookie or to local storage. But Stripe has a solution: the secret store.

The secret store allows us to persist key-value data with Stripe itself. We can use this to save our token data and load it whenever a user opens our Stripe App. To make it easier to work with the secret store, we'll create a custom hook: useSecretStore.

Once we've got our custom hook ready, we can integrate it into our App.tsx view. We will rewrite the useEffect to check for a saved token in the secret store, and use it if it's valid. Only if no token is available do we create a new one, which will then be persisted to the secret store. We also add a Log Out button, which resets the tokenData and secret store values to null.

The Log Out button creates an issue. If we have oauthContext from logging in, and then we log out, the Stripe App still has the same oauthContext. If we tried logging in again without closing the app, we would get an error because we'd be reusing old credentials. To fix this, we also add a React ref to keep track of whether or not we've used our current oauthContext values.

We've done a lot to create our authorization flow using PKCE. To see this entire example all together, check out this code sample on GitHub....
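The "use it if it's valid" step above boils down to a small, pure check. Here's a minimal sketch of that logic, assuming a hypothetical TokenData shape with an absolute expiresAt timestamp (Dropbox's token response actually returns expires_in in seconds, which you would convert to an absolute time before persisting):

```typescript
// Hypothetical shape for persisted token data — not an SDK type.
interface TokenData {
  accessToken: string;
  expiresAt: number; // epoch milliseconds
}

// Returns true only if we have a token and it hasn't expired yet.
// `now` is injectable to keep the function easy to test.
function isTokenValid(token: TokenData | null, now: number = Date.now()): boolean {
  return token !== null && token.expiresAt > now;
}
```

In the useEffect, you'd call something like isTokenValid(savedToken) first, and only fall back to the full PKCE flow when it returns false.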

Web Scraping with TypeScript and Node.js cover image

Web Scraping with TypeScript and Node.js

For those times when you can't get web data from an API or as a CSV download, learn to write a web scraper with TypeScript and Node....

Creating Custom Scrollbars with React cover image

Creating Custom Scrollbars with React

Learn how to develop custom scrollbars in React by implementing your own using DOM elements, providing a smooth and interactive experience to your visitors....

Introduction to Vanilla Extract for CSS cover image

Introduction to Vanilla Extract for CSS

Vanilla Extract is a CSS framework that lets you create class styles with TypeScript. It combines the utility-class approach of something like Tailwind with the type safety of TypeScript, allowing you to create your own custom yet consistent styles. Styles generated by Vanilla Extract are locally scoped, and compile to a single stylesheet at build time.

This introduction will show you what it looks like to use Vanilla Extract in a React app, but it's a framework-agnostic library: anywhere you can include a class name, it should work just fine. We'll begin with very simple styles, and work our way through some of the more complex features until you've got a foundational understanding of how to use Vanilla Extract in your own projects.

Styles

To start with Vanilla Extract, you first create a style. The style function accepts either a style object or an array of style objects. In a simple React example, we create a style, then use it in our component by passing the variable name to className. When the app builds, a stylesheet is generated, our exampleStyle gets a hashed class name, and that class name is passed into the component as a CSS class in the static stylesheet.

Themes

If you want to make sure you're using consistent values across your styles, you can create a theme to make those values available throughout your app. Themes let you use known names for things like standardized spacing or color palette entries, allowing you to define them once upfront and get type safety when using them later. Because we're using TypeScript, you get type safety and IntelliSense auto-completion: if you make a typo when writing out a theme variable, your editor will warn you about it.

Variants

Sometimes you have styles that are nearly, but not quite, the same. In these cases, it's often best to define variants.
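As a rough sketch of what a variants stylesheet might look like, here's a hypothetical button.css.ts using the style and styleVariants APIs from @vanilla-extract/css. The class names and color values are illustrative, not taken from the original article. Note this is a build-time stylesheet definition, compiled to static CSS rather than executed at runtime.

```typescript
// button.css.ts — compiled to static CSS at build time
import { style, styleVariants } from "@vanilla-extract/css";

// Shared base styles applied to every button variant.
const base = style({
  padding: "0.5rem 1rem",
  borderRadius: "4px",
  border: "none",
  cursor: "pointer",
});

// Each variant composes the base style with its own colors.
// Color values here are placeholders, not real brand colors.
export const button = styleVariants({
  primary: [base, { background: "#635bff", color: "white" }],
  secondary: [base, { background: "#e0e0e0", color: "#333" }],
});
```

In a component, you'd then write something like `<button className={button.primary}>Save</button>`, and TypeScript would flag a typo such as `button.primry` at compile time.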
In this example, I've created styles for two button variants: one for the primary brand colors, and one for secondary colors.

Sprinkles

Sprinkles let you create easy-to-reuse utility classes with Vanilla Extract. You define conditions and the properties that apply under each condition, and Vanilla Extract generates all the utility classes necessary to satisfy those potential conditions. In this example, we use defineProperties to outline some conditions and acceptable property values around colors and spacing. Then we combine them using createSprinkles to give us a single way to use them.

Once built, the various sprinkles permutations are compiled as CSS utility classes in our stylesheet and applied to the relevant elements. You can see, in the class names, how our conditions and properties have combined to create unique utility classes for each possibility, such as padding_small_mobile and padding_xlarge_desktop. As with any utility-class approach, be careful not to define more conditions and properties than you will actually use; otherwise you could end up with a very large stylesheet of mostly unused CSS. If you want to combine Sprinkles utility classes with some styles unique to the element you're working on, you can combine them in an array passed to style.

Recipes

Recipes give you an easy way to combine base styles, multiple variants, and combinations of variants. In this example, let's revisit making different types of buttons. Once again, our recipe and defined variants get transformed into class names. All of the combinations necessary to make the different variants and compound variants work are created in the static CSS file.

This is a basic introduction to Vanilla Extract. It's a very powerful library for creating styles, and you'll find much more information in the documentation. You can also play with all of my examples on CodeSandbox....
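To make the Sprinkles section above concrete, here's a minimal sketch of a hypothetical sprinkles.css.ts using defineProperties and createSprinkles from @vanilla-extract/sprinkles. The breakpoints and spacing scale are my own placeholder values. Like the rest of Vanilla Extract, this file is evaluated at build time to produce static utility classes.

```typescript
// sprinkles.css.ts — utility classes generated at build time
import { defineProperties, createSprinkles } from "@vanilla-extract/sprinkles";

// Conditions (responsive breakpoints) and property values are
// illustrative; swap in your own scale and media queries.
const responsiveProperties = defineProperties({
  conditions: {
    mobile: {},
    desktop: { "@media": "screen and (min-width: 768px)" },
  },
  defaultCondition: "mobile",
  properties: {
    padding: { small: "4px", medium: "8px", xlarge: "32px" },
  },
});

export const sprinkles = createSprinkles(responsiveProperties);
```

A call like sprinkles({ padding: { mobile: "small", desktop: "xlarge" } }) then resolves to the generated utility classes — the padding_small_mobile and padding_xlarge_desktop names mentioned above.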

Understanding CSS Gradients cover image

Understanding CSS Gradients

The code for CSS gradients can be confusing. Building up your understanding of gradients one step at a time can give you confidence writing them yourself, without relying on gradient generator tools....

Using Lottie Animations for UI Components in React cover image

Using Lottie Animations for UI Components in React

Lottie can bring the power of Adobe After Effects's animations to the web. Learn how to integrate these animations into your user interface in a React app....