
AUTHOR

Jamie Kuppens

Software Engineer

I’m a software engineer with an interest in web development and some more esoteric things like emulator development.


How to Implement Soft Delete with Prisma using Partial Indexes

A guide on how to implement soft delete functionality while at the same time being able to utilize unique indexes in Prisma....


QR Code Scanning & Generation

QR codes provide a very valuable way of sharing information with users, and many applications rely on them for various purposes....


A Tale of Form Autofill, LitElement and the Shadow DOM

Many web applications utilize forms, be it for logging in, making payments, or editing a user profile. As a user of web applications, you have probably noticed that the browser is able to autofill certain fields when a form appears so that you don't have to do it yourself. If you've ever written an application in Lit though, you may have noticed that this doesn't always work as expected. The Problem I was working on a frontend project utilizing Lit and had to implement a login form. In essence these aren’t very complicated on the frontend side of life. You just need to define a form, put some input elements inside of it with the correct type attributes assigned to them, and then hook the form up to your backend, API, or whatever you need to call to authenticate by adding a submit handler. However, there was an issue: the autocomplete didn't appear to be working as expected. Only the username field was being filled, but not the password. When this happened, I made sure to check documentation sites such as MDN and looked at examples, but I couldn’t find any differences between theirs and mine. At some point, I prepared a minimal reproducible example without Lit, and I was able to get the form working fine, so it had to have something to do with my usage of Lit. After doing a little bit of research and some testing, I found out this happened because Lit relies very heavily on something known as the Shadow DOM. I don’t believe the Shadow DOM is necessarily supposed to break this functionality, but for most major browsers, it doesn’t play nice with autocomplete for the time being. I experienced slightly different behavior in all browsers, and the autocomplete even worked under Shadow DOM with Firefox in the Lit app I was working on. The solution I ended up settling on was ensuring the form was contained inside of the Light DOM instead of the Shadow DOM, whilst also allowing the Shadow DOM to continue to be used in places where autofillable forms are not present. In this article, I will show you how to implement this solution, and how to deal with any problems that might arise from it. Shadow DOM vs. Light DOM The Shadow DOM is a feature that provides a way to encapsulate your components and prevent unrelated code and components from affecting them in undesired ways. Specifically, it allows for a way to prevent outside CSS from affecting your components and vice versa by scoping them to a specific shadow root. When it comes to the Light DOM, even if you’ve never heard of the term, you’ve probably used it. If you’ve ever worked on any website before, and interacted with the standard DOM tree, that is the Light DOM. The Light DOM, and any Shadow DOMs under it for that matter, can contain further Shadow DOMs attached to the elements inside of them. When you add a Lit component to a page, a shadow root will get attached to it that will contain its subelements, and prevent CSS from outside of that DOM from affecting it. Using Light DOM with Certain Web Components By default, Lit attaches a shadow root to all custom elements that extend from LitElement. However, web components don’t actually require a shadow root to function. We can do away with the shadow root by overriding the createRenderRoot method, and returning the web component itself: ` Although we could just put this method in any element we want exposed into the Light DOM, we can also make a new component called LightElement that overrides this method, and extend from it instead of LitElement in our own components.
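A rough sketch of such a base class can be as small as this (a minimal version, not necessarily the exact code from the article or its linked examples):

```ts
import { LitElement } from 'lit';

// Base class for components that should render into the Light DOM.
export class LightElement extends LitElement {
  // Returning the element itself skips shadow root creation, so the
  // template renders directly into the Light DOM.
  protected createRenderRoot() {
    return this;
  }
}
```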
This will be useful later when we tackle another problem. Uh oh, where did my CSS styling and slots go? The issue with not using a shadow root is Lit has no way to encapsulate your component stylesheets anymore. As a result, your light components will now inherit styles from the root that they are contained in. For example, if your components are directly in the body of the page, then they will inherit all global styles on the page. Similarly when your light components are inside of a shadow root, they will inherit any styles attached to that shadow root. To resolve this issue, one could simply add style tags to the HTML template returned in the render() method, and accept that other stylesheets in the same root could affect your components. You can use naming conventions such as BEM for your CSS classes to mitigate this for the most part. Although this does work and is a very pragmatic solution, this solution does pollute the DOM with multiple duplicate stylesheets if more than one instance of your component is added to the DOM. Now, with the CSS problem solved, you can now have a functional Lit web component with form autofill for passwords and other autofillable data! You can view an example using this solution here. A Better Approach using Adopted Stylesheets For a login page where only one instance of the component is in the DOM tree at any given point, the aforementioned solution is not a problem at all. However, this can become a problem if whatever element you need to use the Light DOM with is used in lots of places or repeated many times on a page. An example of this would be a custom input element in a table that contains hundreds of rows. This can potentially cause performance issues, and also pollute the CSS inspector in your devtools resulting in a suboptimal experience both for users and yourself. The better, though still imperfect, way to work around this problem is to use the adopted stylesheets feature to attach stylesheets related to the web component to the root it is connected in, and reuse that same stylesheet across all instances of the node. Below is a function that tracks stylesheets using an id and injects them in the root node of the passed in element. Do note that, with this approach, it is still possible for your component’s styles to leak to other components within the same root. And like I advised earlier, you will need to take that into consideration when writing your styles. ` This solution works for most browsers, and a fallback is included for Safari as it doesn’t support adoptedStylesheets at the time of writing this article. For Safari we inject de-duplicated style elements at the root. This accomplishes the same result effectively. Let’s go over the evictDisconnectedRoots function that was called inside of the injection function. We need to ensure we clean up global state since the injection function relies on it to keep duplication to a minimum. Our global state holds references to document nodes and shadow roots that may no longer exist in the DOM. We want these to get cleaned up so as to not leak memory. Thankfully, this is easy to iterate through and check because of the isConnected property on nodes. ` Now we need to get our Lit component to use our new style injection function. This can be done by modifying our LightElement component, and having it iterate over its statically defined stylesheets and inject them. Since our injection function contains the de-duplication logic itself, we don’t need to concern ourselves with that here. 
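Here is a sketch of how those pieces could fit together: an injection helper that tracks stylesheets per root, the evictDisconnectedRoots cleanup mentioned above, and a LightElement that feeds its statically defined styles through the helper. Apart from evictDisconnectedRoots, the names are my own, the code assumes flat css-tagged styles, and the real implementation in the linked examples may differ.

```ts
import { CSSResult, LitElement } from 'lit';

// Which style ids have already been injected into which root.
const injectedRoots = new Map<Document | ShadowRoot, Set<string>>();

// Drop entries for roots that are no longer in the DOM so we don't leak memory.
function evictDisconnectedRoots() {
  for (const root of injectedRoots.keys()) {
    const node = root instanceof ShadowRoot ? root.host : root;
    if (!node.isConnected) {
      injectedRoots.delete(root);
    }
  }
}

// Inject a stylesheet into the root the element lives in, at most once per root.
function injectStyles(el: Element, id: string, cssText: string) {
  evictDisconnectedRoots();
  const root = el.getRootNode() as Document | ShadowRoot;
  const seen = injectedRoots.get(root) ?? new Set<string>();
  if (seen.has(id)) return;
  seen.add(id);
  injectedRoots.set(root, seen);

  if ('adoptedStyleSheets' in root) {
    const sheet = new CSSStyleSheet();
    sheet.replaceSync(cssText);
    root.adoptedStyleSheets = [...root.adoptedStyleSheets, sheet];
  } else {
    // Fallback for browsers without adoptedStyleSheets (e.g. older Safari).
    const style = document.createElement('style');
    style.textContent = cssText;
    (root instanceof Document ? root.head : root).appendChild(style);
  }
}

export class LightElement extends LitElement {
  protected createRenderRoot() {
    return this;
  }

  connectedCallback() {
    super.connectedCallback();
    // Gather the statically defined styles and inject them into our root.
    const ctor = this.constructor as typeof LitElement;
    const styles = Array.isArray(ctor.styles) ? ctor.styles : ctor.styles ? [ctor.styles] : [];
    const cssText = styles.map(s => (s as CSSResult).cssText).join('\n');
    if (cssText) {
      injectStyles(this, this.localName, cssText);
    }
  }
}
```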
` With all that, you should be able to get an autocompletable form just like in the previous example. The full example using the adopted stylesheets approach can be found here. Conclusion I hope this article was helpful in figuring out how to implement autofillable forms in Lit. Both examples can be viewed in our blog demos repository. The example using basic style tags can be found here, and the one using adopted stylesheets can be found here....


A Guide to Custom Angular Attribute Directives

Discover the power of Angular attribute directives and learn how to create your own custom directives with this interactive guide from This Dot Labs...


How to Add Continuous Benchmarking to Your Projects Using GitHub Actions

Over the lifetime of a project, performance issues may arise from time to time. A lot of the time, these issues don't get detected until they make it into production. Adding continuous benchmarking to your project and build pipeline can help you catch these issues before that happens. What is Continuous Benchmarking Benchmarking is the process of measuring the performance of an application. Continuous benchmarking builds on top of this by doing so either on a regular basis, or whenever new code is pushed, so that performance regressions can be identified as soon as they are introduced. Adding continuous benchmarking to your build pipeline can help you effectively catch performance issues before they ever make it to production. Much like with tests, you are still responsible for writing benchmark logic. But once that’s done, integrating it with your build pipeline can be done easily using the continuous-benchmark GitHub Action. github-action-benchmark github-action-benchmark allows you to easily integrate your existing benchmarks, written with your benchmark framework of choice, into your build pipeline, with a wide range of configuration options. This action allows you to track the performance of benchmarks against branches in your repository over the history of your project. You can also set thresholds on workflows in PRs, so performance regressions automatically prevent PRs from merging. Benchmark results can vary from framework to framework. This action supports a few different frameworks out of the box, and if yours is not supported, then it can be extended. For your benchmark results to be consumed, they must be kept in a file named output.txt, and formatted in a way that the action will understand. Each benchmark framework will have a different format. This action supports a few of the most popular ones. Example Benchmark in Rust Firstly, we need a benchmark to test with, and we’re going to use Rust. I am not going to detail everything needed to set up Rust projects in general, but a full example can be found here. In this case, there is just a simple fibonacci number generator. ` Then, a benchmark for this function can be written like so: ` In this case, we have two benchmarks that use the fib function with a different number of iterations. The more iterations that you execute, the more accurate your results will be. Finally, if your project is set up to compile with cargo already, running the benchmarks should be as simple as running cargo bench. Now that the benchmark itself is set up, it’s time to move on to the action. GitHub Action Setup The most basic use case of this action is setting it up against your main branch so it can collect performance data from every merge moving forward. GitHub Actions are configured using YAML files. Let’s go over an example configuration that will run benchmarks on a Rust project every time code gets pushed to main, starting with the event trigger. ` If you aren’t familiar with GitHub Actions already, the ‘on’ key allows us to specify the circumstances under which this workflow will run. In our case, we want it to trigger when pushes happen against the main branch. If we want to, we can add additional triggers and branches as well. But for this example, we’re only focusing on push for now. ` The jobs portion is relatively standard. The code gets checked out from source control, the tooling needed to build the Rust project is installed, the benchmarks are run, and then the results get pushed. For the results storing step, a GitHub API token is required.
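Assembled into a single file, a minimal version of the workflow being described might look something like this sketch (the step versions, the nightly toolchain install, and the alert options are assumptions on my part rather than the article's exact configuration):

```yaml
name: Benchmarks

on:
  push:
    branches:
      - main

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # The built-in #[bench] harness needs the nightly toolchain.
      - run: rustup toolchain install nightly
      - run: cargo +nightly bench | tee output.txt
      - uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: 'cargo'
          output-file-path: output.txt
          github-token: ${{ secrets.GITHUB_TOKEN }}
          # Store the results on the gh-pages branch.
          auto-push: true
          # Optionally fail the run when a benchmark regresses past a threshold.
          alert-threshold: '150%'
          fail-on-alert: true
```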
This is automatically generated when the workflow runs, and is not something that you need to add yourself. The results are then pushed to a special 'gh-pages' branch where the performance data is stored. This branch does need to exist already for this step to work. Considerations There are some performance considerations to be aware of when utilizing GitHub Actions to execute benchmarks. Although the specifications of machines used for different action executions are similar, the runtime performance may vary. GitHub Actions are executed in virtual machines that are hosted on servers. The workloads of other actions on the same servers can affect the runtime performance of your benchmarks. Usually, this is not an issue at all, and results in minimal deviations. This is just something to keep in mind if you expect the results of each of your runs to be extremely accurate. Running benchmarks with more iterations does help, but isn’t a magic bullet solution. Here are the hardware specifications currently being used by GitHub Actions at the time of writing this article. This information comes from the GitHub Actions Documentation.

Hardware specification for Windows and Linux virtual machines:
- 2-core CPU (x86_64)
- 7 GB of RAM
- 14 GB of SSD space

Hardware specification for macOS virtual machines:
- 3-core CPU (x86_64)
- 14 GB of RAM
- 14 GB of SSD space

If you need more consistent performance out of your runners, then you should use self-hosted runners. Setting these up is outside the scope of this article, and is deserving of its own article. Conclusion Continuous benchmarking can help detect performance issues before they cause problems in production, and with GitHub Actions, it is easier than ever to implement it. If you want to learn more about GitHub Actions, and even implementing your own, check out this JS Marathon video by Chris Trzesniewski....


Git Reflog: A Guide to Recovering Lost Commits

Losing data can be very frustrating. Sometimes data is lost because of hardware dying, but other times it’s done by mistake. Thankfully, Git has tools that can assist with the latter case at least. In this article, I will demonstrate how one can use the git-reflog tool to recover lost code and commits. What is Reflog? Whenever you add data to your local Git repository or perform destructive operations, Git keeps track of all these using reference logs, also known as reflogs. These log entries contain a SHA-1 hash of the commit associated with it and any references, or refs for short. Refs themselves are branch names, tags, and symbolic refs like HEAD, which is always pointing to the ref or commit id that’s currently checked out. These reflogs can prove very useful in assisting with data recovery against a Git repository if some code is lost in a destructive operation. Reflog records contain data such as the SHA-1 hash that HEAD was pointing to when an operation was performed, and a description of the operation that was performed as well. Here is an example of what a reflog might look like: ` The first part 956eb2f is the commit hash of the currently checked out commit when this entry was added to the reflog. If a ref currently exists in the repo that points to the commit id, such as the branch-prefix/v2-1-4 branch in this case, then those refs will be printed alongside the commit id in the reflog entry. It should be noted that the actual refs themselves are not always stored in the entry, but are instead inferred by Git from the commit id in the entry when dumping the reflog. This means that if we were to remove the branch named branch-prefix/v2-1-4, it would no longer appear in the reflog entry here. There’s also a HEAD part as well. This just tells us that HEAD is currently pointing to the commit id in the entry. If we were to navigate to a different branch such as main, then the HEAD -> section would disappear from that specific entry. The HEAD@{n} section is just an index that specifies where HEAD was n moves ago. In this example, it is zero, which means that is where HEAD currently is. Finally what follows is a text description of the operation that was performed. In this case, it was just a commit. Descriptions for supported operations include but are not limited to commit, pull, checkout, reset, rebase, and squash. Basic Usage Running git reflog with no other arguments or git reflog show will give you a list of records that show when the tips of branches and other references in the repository have been updated. It will also be in the order that the operations were done. The output for a fresh repository with an initial commit will look something like this. ` Now let’s create a new branch called feature with git switch -c feature and then commit some changes. Doing this will add a couple of entries to the reflog. One for the checkout of the branch, and one for committing some changes. ` This log will continue to grow as we perform more operations that write data to git. A Rebase Gone Wrong Let’s do something slightly more complex. We’re going to make some changes to main and then rebase our feature branch on top of it. This is the current history once a few more commits are added. ` And this is what main looks like: ` After doing a git rebase main while checked into the feature branch, let’s say some merge conflicts got resolved incorrectly and some code was accidentally lost. A Git log after doing such a rebase might look something like this. 
` Fun fact: if the contents of a commit are not used after a rebase between the tip of the branch and the merge base, Git will discard those commits from the active branch after the rebase is concluded. In this example, I entirely discarded the contents of two commits “by mistake”, and this resulted in Git discarding them from the current branch. Alright. So we lost some code from some commits, and in this case, even the commits themselves. So how do we get them back as they’re in neither the main branch nor the feature branch? Reflog to the Rescue Although our commits are inaccessible on all of our branches, Git did not actually delete them. If we look at the output of git reflog, we will see the following entries detailing all of the changes we’ve made to the repository up till this point: ` This can look like a bit much. But we can see that the latest commit on our feature branch before the rebase reads 138afbf HEAD@{6}: commit: here's some more. The SHA1 associated with this entry is still being stored in Git and we can get back to it by using git-reset. In this case, we can run git reset --hard 138afbf. However, git reset --hard ORIG_HEAD also works. The ORIG_HEAD in the latter command is a special variable that indicates the last place of the HEAD since the last drastic operation, which includes but is not limited to: merging and rebasing. So if we run either of those commands, we’ll get output saying HEAD is now at 138afbf here's some more and our git log for the feature branch should look like the following. ` Any code that was accidentally removed should now be accessible once again! Now the rebase can be attempted again. Reflog Pruning and Garbage Collection One thing to keep in mind is that the reflog is not permanent. It is subject to garbage collection by Git on occasion. In reality, this isn’t a big deal since most uses of reflog will be against records that were created recently. By default, reflog records are set to expire after 90 days. The duration of this can be controlled via the gc.reflogExpire key in your git config. Once reflog records are expired, they then become eligible for removal by git-gc. git gc can be invoked manually, but it usually isn’t. git pull, git merge, git rebase and git commit are all examples of commands that will trigger git gc to run behind the scenes. I will abstain from going into detail about git gc as that would be deserving of its own article, but it’s important to know about in the context of git reflog as it does have an effect on it. Conclusion git reflog is a very helpful tool that allows one to recover lost code and commits when used in conjunction with git reset. We learned how to use git reflog to view changes made to a repository since we’ve cloned it, and to undo a bad rebase to recover some lost commits....
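For quick reference, the recovery flow described above boils down to just a few commands (the reset target is whatever commit your own reflog shows as the pre-rebase branch tip):

```sh
git reflog                     # find the commit the branch pointed at before the rebase
git reset --hard ORIG_HEAD     # or: git reset --hard <sha-from-the-reflog>
git log --oneline              # confirm the lost commits are reachable again
```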


Git Bisect: the Time Traveling Bug Finder

I think it’s safe to say that most of us have been in a situation where we pull down some changes from main and something breaks unexpectedly, or a bug got introduced in a recent deployment. It might not take long to narrow down which commit caused the issue if there are only a couple of new commits, but if you’re a dozen or more commits behind, it can be a daunting task to determine which one caused it. But can’t I just check each commit until I find the culprit? You could just check each commit individually without any special tools until you find the one that caused the issue, but that can be a very slow process. This is not ideal and is analogous to the reason why linear search isn’t as effective as binary search. As the title suggests, there is a tool that Git provides called “bisect”. What this command does is check out various commit refs in the history of the branch you’re currently working in, and allow you to mark commits as “good”, “bad”, or “skip” (invalid / broken build). It does away with the need to check each commit individually, as it is able to infer whether commits are good or bad based on which other commits you have already marked. Git Bisect in Action Let’s imagine a hypothetical scenario where some bug was reported for the software we’re working on. Starting a git bisect session usually looks like the following example. ` In this case, the commit hash in the example comes from a commit that I already know works. In the case where you pull down changes, and only then does something break, you can use whatever commit you were at before you pulled them down. If it’s an older bug, then you could check an older tag or two to see if it exists there. Next is the part where we search for the offending commit. Every time you mark a commit, bisect will then navigate to another commit in between your good and bad starting points using a specialized binary search algorithm. ` This is the general workflow you will follow when bisecting for its most basic use case, and these commands will be repeated until there are no more revisions left to review. Bisect will try to predict how many steps are left, and let you know every time you mark a commit. Once you are done, you will be checked out at the commit that introduced the regression. This assumes that you marked everything accurately! After you are done bisecting, you can quickly return to where you started by running git bisect reset. How Git Bisect Works Firstly, bisect makes the reasonable assumption that any commits after a bad commit remain bad, and any commits before a good commit remain good. It then continues to narrow down which commit is the cause by asking you to check the middlemost commit, along with some added bias when navigating around invalid commits. Though, that’s not vitally important to understand as a user of the command. The following graphic shows how bisect moves throughout your branch’s history. Bisect becomes incredibly useful when dealing with repositories with a lot of history, or when tracking down the cause of a bug that’s been in a codebase for a long time. It makes it possible to mull over hundreds of commits in fewer than a dozen steps! That’s a lot better than going through commits one-by-one or at random. Limitations It is worth mentioning that bisect isn’t as useful in cases where commits are very large because they incorporate several different changes all bundled together (e.g. squash merges).
In an ideal world, each commit in the main branch’s history can be built, and they will implement or fix one thing and one thing only. But in reality, this isn’t always the case. The skip command is available to help with this scenario, but even with that, it’s possible that a change that caused the bug is in one of those skipped commits; therefore, relying solely on the diff of the determined commit to find the root cause of a bug may be misleading. Conclusion Git bisect is a very useful tool that can dramatically decrease the amount of time it takes to identify the cause of a regression. I would also recommend reading the official documentation on git bisect as it’s actually quite informative! There are a lot of good examples in here that demonstrate how you can use the command to its full potential....
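To recap, a typical bisect session like the one walked through above amounts to a handful of commands (the tag name here is only an example of a known-good ref):

```sh
git bisect start
git bisect bad                 # the currently checked-out commit is broken
git bisect good v1.2.0         # a commit or tag that is known to work
# Test the commit git checks out for you, then mark it:
git bisect good                # or: git bisect bad / git bisect skip
# Repeat until git prints the first bad commit, then clean up:
git bisect reset
```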


A Guide to Keeping Secrets out of Git Repositories

If you’ve been a developer for a while, then you hopefully know it is wise to keep secret information such as passwords and encryption keys outside of source control. If you didn’t know that, then surprise! Now you know. Sometimes slip-ups do happen: a password ends up in a default config file, or a new config file is not added to “.gitignore”, and then someone runs “git add .” and doesn’t even notice it got committed. There should be protections in place no matter how diligent your programmers are, since nobody is infallible, and the peace of mind is well worth it. How can software know something is a secret? It can’t know for sure, but it can make an educated guess. Secrets typically fit certain known patterns, or have higher entropy than other strings in your code and configuration files. A good scanner should check for strings that fit these patterns throughout your entire repository’s history, and raise anything suspicious to you. Checking for Secrets in CI and CD When it comes to automatically checking for secrets in your code, you have quite an array of options. To keep this article brief, I am just going to cover a few tools, and which tools you use may depend on your repository host. GitHub If you’re using GitHub for your project and your repository is either public or you use GitHub Enterprise Cloud, then GitHub will automatically scan the code you upload for secrets. GitHub’s solution is special because they have partnered with several different companies to allow for automatic revocation of secrets pushed to the repo. See the following excerpt from GitHub’s secret scanner documentation:

> When you make a repository public, or push changes to a public repository, GitHub always scans the code for secrets that match partner patterns. If secret scanning detects a potential secret, we notify the service provider who issued the secret. The service provider validates the string and then decides whether they should revoke the secret, issue a new secret, or contact you directly. Their action will depend on the associated risks to you or them.

Well, that’s nice now, isn’t it? If you’re curious whether the services you use are partnered with GitHub so that their secrets can be scanned for, you can view the full list here. Just keep in mind that this functionality is only available for public repositories and private repositories using GitHub Enterprise Cloud with a “GitHub Advanced Security” license. GitLab GitLab, like GitHub, has secret detection. GitLab uses Gitleaks for its secret detection. This is a well-documented tool whose source code is freely available. The capabilities of secret detection in GitLab do vary based on your tier, though. You will have to use GitLab Ultimate to view detected secrets in the pipeline and merge request sections, for example. You can still use the scanner in the Free and Premium tiers, but it isn’t nearly as integrated as it is in the Ultimate version. Gitleaks We mentioned that GitLab uses Gitleaks, but you aren’t just limited to using it with GitLab! Since Gitleaks is open source, that means you can use it with other providers such as GitHub, and even run it locally on your own system. It is also very easy to set up, either as a CI job or locally. Scanning for Secrets using a CI Job For GitHub, you can simply use this action made by the author of Gitleaks. In this case, Gitleaks is helpful if you’re using private repositories on GitHub without GitHub Enterprise Cloud.
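A workflow using that action can be as small as the following sketch (the version tags and the trigger list are assumptions; check the action's README for current usage):

```yaml
name: gitleaks

on: [push, pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Fetch full history so the entire repository history gets scanned.
          fetch-depth: 0
      - uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```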
The action is fully configurable, and allows you to specify a custom .gitleaks.toml file. This is optional, of course, and the default might work fine for you. Checking for Secrets in a Pre-Commit Hook There are a couple of ways to set up the hook. A pre-commit script is available on the Gitleaks GitHub that will run Gitleaks on your staged files before you commit. Your commit will be stopped if any secrets are detected. This script can simply be copied into your .git/hooks/ directory. It does require that Gitleaks is installed and in your $PATH, however. The other method involves using the pre-commit utility. It will install Gitleaks automatically for any developers that clone the repository, and it can also assist with installing the hooks for the first time. Using the pre-commit tool might make more sense if you want to ensure other linters and checkers run, and you don’t want to have developers juggle installing everything themselves. A Good Code Review Process Goes a Long Way Although automated tooling for identifying secrets in code works well, it’s still good to keep an eye out for them when reviewing code. Automated scanning tools, as I mentioned earlier in the article, work great and you should definitely use them. However, they aren’t perfect. These tools look for sets of patterns and strings with high entropy, but not all secrets fit these criteria. Knowing that even with the best scanning tools it’s still possible secrets could sneak through, it’s easier to understand why it is important to also have a good code review process to catch these issues. Also remember that committing secrets isn’t the only thing you should be worried about. You should have others review your work to help mitigate the chances that your changes could introduce new security vulnerabilities in the code as well! Oh no, there’s already a secret in my Git history! If you already have secrets in your repository, and they’re pushed to a main branch, not all hope is lost. Before we get into methods of removing the secrets, I have a _massive disclaimer_ I should get out of the way. The only way to truly remove secrets from your repository is to rewrite your git history. This is a destructive operation, and will require developers to re-pull branches and cherry-pick changes from their local branches if applicable. Did you read the disclaimer? Good, we can discuss methods then. Which method we use depends on how and when the secret made it into the repository. The Secret is in a Single Branch If a secret made it into a feature branch by mistake, then you could simply initiate an interactive rebase to remove it and force push that branch to the remote. It should be noted that this is only effective if no other branches are based off of your branch, and if it isn’t tagged. Let’s say, for example, you push your code to the remote and a CI job identifies a secret after your push. At this point, there would be nobody else using your branch, and it shouldn’t be tagged, so this is the perfect opportunity to just rewrite the commit that triggered the CI failure. If your commit hash is, let’s say, 09fac8dbfd27bd9b4d23a00eb648aa751789536d, then this is the first command you would have to execute to begin cleaning up your branch’s history: ` Note the caret at the end of the SHA-1 commit hash. _This is vitally important to include_, as we need to rebase onto the commit prior to the one introducing the secret.
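Putting the whole cleanup together, the sequence described in this section looks roughly like the following sketch, using the example hash from above (file names are placeholders):

```sh
# The trailing caret targets the parent of the commit that introduced the secret.
git rebase -i 09fac8dbfd27bd9b4d23a00eb648aa751789536d^

# In the editor, change "pick" to "edit" on the offending commit, save, and quit.
# Remove the secret from the affected files, then:
git add <affected-files>
git commit --amend
git rebase --continue

# Finally, rewrite the branch on the remote too:
git push -f
```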
The gist is that we’re going to go back to that point in time, and prevent the secret from ever being added in the first place. Git will now open your default command-line text editor and ask you how you want to execute your rebase. Find the line referencing the problematic commit and replace pick with edit on that line. If you save the file and quit, you should now find yourself at the commit where the secret was introduced. From here, you can remove the secret, stage the affected files, and then execute the following commands: ` And if that’s successful: ` After that, the history should be successfully rewritten locally to no longer include the secret. If you pushed your branch already, then you’ll need to get these changes pushed to the remote as well. You can do this with a force push using the -f flag like so: ` Now you should be set! At this point, you can get to fixing up your code to pull in the secret some other way that doesn’t involve config files or strings hard-coded inside of your codebase. The Secrets are in Main Already… If your secrets are present in many branches, like tagged versions or your main branch, then things get a little more complicated. There’s more than one way to handle this situation, but I am going to cover only one. Just revoke the secret. In this scenario, you should revoke the secret and issue a new one. If you do this, then it doesn’t matter that the old secret is still in the repository history because it will be entirely useless! How this is done, of course, depends on what service the secret was issued from. There is also the added benefit that anyone who has cloned your repo with the secret will be unable to use it. Simply rewriting history doesn’t help if an adversary already downloaded the secret before you deleted it. It’s important to note that this isn’t always great if, for example, the secret is used in multiple projects. You will need to ensure the secret is replaced _everywhere_ before you actually revoke it, or else you may experience downtime. Conclusion With scanning tools becoming more accessible, there are fewer and fewer reasons to not use them. Secret scanning is especially important for public repositories, but it is also useful for private repositories where a compromised developer account can access secrets and wreak havoc....


Debugging Strategies for Angular Applications

When programming, a lot of time is spent reading and debugging code. When coming to grips with a new framework or library, it is important to know how to debug it when things eventually go astray. Angular has a plethora of useful first-party and third-party tools that will aid you in debugging your application effectively. Angular Devtools First things first: if you don’t have the Angular Devtools already, then you should download them. The Angular Devtools do many things, such as allowing you to view the component tree live as your application runs, view the state of components in your application, and profile your application. Below, I have an Angular application loaded, and I can view the internal state and configuration of the various components present. There is also a “Profiler” tab that can be used for measuring the performance of your individual components. You may ask what performance profiling has to do with debugging. Performance issues are problematic like any bug, and can lead to a detrimental user experience. Being able to identify these issues will make it easier for you to fix them, and improve the application. The profiler gives us a breakdown of the resources used by each component. This is very useful for identifying performance bottlenecks in your application. One last thing I should mention is the ability to jump to the source code of the component you currently have selected. The button looks like a pair of angle brackets in the top-right corner. This is useful if you need to use the built-in debugger provided by the browser. I highly recommend reading up on the official guide for Angular Devtools, which goes more in depth on the various features. Using a JavaScript Debugger Do not forget that your browser has a built-in debugger. Angular application dev builds are very debuggable, as source maps are provided by webpack. If you’re using the Angular CLI for your development server, then you should already be good to go! You can access your source code through the “Sources” tab, and it will be available under the “webpack://” dropdown. However, it isn’t very convenient to navigate your source code like this. Instead, you can search for your TypeScript files using CTRL+P (or CMD+P on Mac), or use the Angular Devtools to jump to the source code of your components. Once you’re in the source file, you can place breakpoints directly in your TypeScript files! Do note that debugging the original TypeScript files is not available in production builds. You should try to use the debugger over “console.log” in most situations. It will be much easier to view the state of your application this way, and understand what is going on. Redux Devtools Lots of Angular applications are using NgRx for state management now. One important thing to keep in mind is that NgRx is compatible with the Redux devtools browser extension. Although NgRx is not based on Redux, that does not stop it from being compatible with its developer tools! Isn’t that neat? You can install the Redux devtools from your browser’s respective extension marketplace. Once it’s installed, you should see a new tab when you open devtools (F12). Here, you can see I have loaded up a demo application that uses Redux, and the tools appear to be working! Before we dig deeper, I should mention that the application has to do some extra setup to ensure that Redux Devtools works. Out of the box, your application will not work with Redux Devtools until you import the StoreDevtools module.
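For reference, a sketch of what that registration typically looks like (the reducers import and the maxAge value are placeholders, not the article's actual code):

```ts
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { StoreModule } from '@ngrx/store';
import { StoreDevtoolsModule } from '@ngrx/store-devtools';

import { AppComponent } from './app.component';
import { reducers } from './state/reducers';

@NgModule({
  declarations: [AppComponent],
  imports: [
    BrowserModule,
    StoreModule.forRoot(reducers),
    // Connects the NgRx store to the Redux Devtools browser extension.
    StoreDevtoolsModule.instrument({ maxAge: 25 }),
  ],
  bootstrap: [AppComponent],
})
export class AppModule {}
```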
That can easily be done by adding another import to your module.ts file. ` There are a few things to describe on this screen. Let’s talk about the left sidebar. It is here that you will find a list of actions that have been emitted throughout the lifetime of the application. The first 5 actions are emitted by NgRx after performing initialization of the store and the router. Everything after those comes from the application code for the particular example we’re using. Clicking on one of the actions with the “Diff” view active on the right-hand side will show what state was changed by it. In the TODO MVC example I used, there are two actions that happen upon initialization. The first just updates a loading flag to true. Then, the second one adds the TODO entries and flips the loading flag back to false. You can also view the contents of individual actions through “Action”, the overall state at that point in time with “State”, generate Jest tests that replay the creation of the state with “Test”, and use “Trace” to find the callstack of whatever triggered the action in the first place. The Trace tab in particular can be very helpful if an action is being emitted unexpectedly, and you want to figure out why. I did all of this analysis against the todomvc-angular-ngrx repository if you want to try using the Redux devtools for yourself. We have only scratched the surface of the Redux Devtools here, and I could write a whole article about this wonderful piece of software, but we already did that! If you want to learn more, please consider reading our more in-depth article, Developer Tools & Debugging in NgRx. Using ng.profiler for Identifying Change Detection Issues We’ve already gone over the Angular Devtools profiler, but there is another tool provided by Angular that we can use for identifying performance issues related to change detection. For the uninitiated, change detection is the mechanism by which Angular checks for changes to your application’s state, and re-renders views when they happen. Sometimes, change detection may cause performance issues in larger applications, and one easy way to find out whether it is hurting your application is to use ng.profiler. ng.profiler can be run from the browser’s console once it is enabled. To enable it, you must call “enableDebugTools” after your app module is finished initializing. I recommend only doing this for dev builds. ` With this code in “main.ts”, we can now use ng.profiler. Hurray, it works! As you can see, there’s no problem with change detection in our current view of the application. If there is a problem, however, there is an extra option that you can specify to get more fine-grained details. Pass the “record” option into ng.profiler like so. To view the results of the profiling, we must ensure that the JavaScript Profiler is enabled (note: this is Chrome-specific). You can enable this feature via the Devtools Hamburger Menu > More Tools > JavaScript Profiler. Once that’s enabled, you should have a new “JavaScript Profiler” tab that contains your results! This may look a little intimidating because you will see a lot of results for functions that you don’t have direct control over. Essentially, this is a breakdown of the amount of time it takes for functions in your project to execute. You can sort by total time spent to identify bottlenecks being caused by JavaScript execution as a result of change detection being triggered. Since the code in my example is fairly minimal, you’re only going to see function calls from the libraries we are using.
In a heavier real-world application, you will likely see references to functions in your own application if they’re doing a lot of work on the CPU. Conclusion When encountering difficult bugs, whether they’re performance related or otherwise, it helps to know what tools are at your disposal. I hope this information proves useful in your future bug hunts!...


Connecting to PokeAPI with Angular and Apollo Client

GraphQL is becoming more relevant with each passing day, and is becoming necessary for connecting to many APIs on the web. In this post, I'll show you how to connect to a publicly available GraphQL API with Angular and Apollo Client. You will need basic knowledge of JavaScript to follow along, and although some basic Angular knowledge is useful, it is not strictly necessary, as I'll explain enough so that anyone can follow along. GraphQL and Apollo GraphQL is a query language that is used to interact with APIs on the web. You may have used RESTful APIs in the past, and GraphQL aims to alleviate some of the issues they have, such as over-fetching and extra round trips to the API. This is done by allowing queries to specify only the fields they need, and by allowing batching of multiple queries in a single request. Apollo, on the other hand, is a popular set of tools used to create and interact with GraphQL APIs. For this demonstration, we're only concerned with the client, which in this case is a web browser since we're using Angular. We'll be using the unofficial Apollo Angular library. This uses Apollo Client under the hood, and exposes several useful features such as fetching, caching, state management, and more. Getting Started We are going to use the command-line tool ng to manage our new Angular project. ng makes it easy to bootstrap new Angular projects, and will create all the boilerplate and set us up with some default configuration that will allow us to build and serve our project. This tool can be installed globally with npm by running the following command: ` Now navigate to your projects directory on your system, and run the following: ` When prompted to add routing to the project, make sure to say "yes". For this demo, I chose basic CSS when prompted, though any of the CSS flavors offered will do. After you choose both of those options, ng should hopefully have set up the project's boilerplate. You should be able to serve the website now with the following: ` If successful, you should see a message that tells you to navigate to http://localhost:4200/ in your browser. If you see the following page, that means everything was successful! PokeAPI The PokeAPI is a publicly available API that can be used to query information about Pokémon. This API is accessible both over REST and GraphQL. It uses many GraphQL features and design patterns that you will see in the real world. The nice thing about GraphQL is you can run queries directly from your browser. Load up the PokeAPI console and try executing the following query to get a list of gen 3 Pokémon: ` Now that we know the PokeAPI is working as expected, we can go back to our Angular application and start working on integrating with it! Component Setup and Routing Let's create a few components and set up routing between them. We'll want to make a page to list Pokémon from the PokeAPI, and a page that allows us to view individual Pokémon. For starters, I would recommend emptying out app.component.html as it contains the starter page. We'll want to render our list of Pokémon here later, and it will be easier if we start from a clean slate. First, let's create a component that will render a list of Pokémon later: ` Add another component for rendering individual Pokémon: ` These commands should have both created our components and updated the declarations in app.module.ts so we can use them.
Now that both of those components are added, we should be able to add routes for them by modifying the routes variable in app-routing.module.ts: ` Here, I map /pokemon to the list and /pokemon/:id to the Pokémon detail component. I also add a redirect so the root page navigates the browser to /pokemon, though this is optional. If ng serve is still running, then you should be able to render both components by visiting their URLs. The Pokémon list component, for example, can be reached at http://localhost:4200/pokemon. Installing Apollo Client Apollo Client can be installed just as any other dependency, but there's also a bit of setup involved beyond just importing it to get it working in your project. The best way to add Apollo Client to your project is with Angular's own built-in ng add subcommand. Run the following to add it to your project: ` You will be prompted for a GraphQL URL while adding the dependency. Input https://beta.pokeapi.co/graphql/v1beta as the URL, which is the GraphQL endpoint we'll be making POST requests to later. Once you do that, you should get an output that looks roughly like the following: The results will inform you of which files have been created and changed. This is the benefit of using ng add over npm install when installing Apollo Client. ng add will automatically set up Apollo Client in our project for us in addition to installing it! The file we'll look at first is src/app/graphql.module.ts, which is a new file that was added after installation. ` This file exports a factory function, and an NgModule that adds an Apollo Client provider. Note how the factory function takes a parameter with type HttpLink. This is a class provided by Apollo Client that depends on Angular's built-in HttpClient, which we will need to import as a module. Thankfully, ng add is intelligent, and added both HttpClientModule and the new GraphQLModule to the updated app.module.ts file. ` Let's Fetch Some Data! Now that all the visual components are in place and the dependencies are installed, we are ready to start calling the PokeAPI. To start off, we're going to create a couple of services with our queries in them, and some typings for the data contained within. These services will contain classes that extend apollo-angular's Query class. schema.ts ` get-pokemon-list.service.ts ` get-pokemon.service.ts ` The query that will be called resides in the query property of the class. You may notice that we define an interface in both services. This is done so we can properly type everything in our responses. The code for the components is very simple at this point. We're going to create an observable for our Pokémon and render them in the template. We can populate the observable by injecting our GetPokemonListService from before, and then watch() it. pokemon-list.component.ts ` In watch(), we can pass query parameters, and in this case, we just hard-coded some pagination parameters. These parameters can be adjusted to get different "pages" of data if we so desire. After 'watch' is valueChanges, which is an observable that will emit query results. There's a data envelope on the resulting object, and we just want a list of species, so we use RxJS to map the results to the array contained within. Now the template can simply iterate over the observable using an async pipe: pokemon-list.component.html ` Then you should get something roughly like this: The template contains anchors that should link to the other page.
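To make that concrete, here is a sketch of the kind of service and component described above. The query shape, field names, and pagination values are assumptions about the PokeAPI schema rather than the article's exact code, and recent apollo-angular versions expect the query in a document property:

```ts
import { Component, Injectable } from '@angular/core';
import { Observable, map } from 'rxjs';
import { Query, gql } from 'apollo-angular';

export interface PokemonSpecies {
  id: number;
  name: string;
}

export interface PokemonListResponse {
  species: PokemonSpecies[];
}

@Injectable({ providedIn: 'root' })
export class GetPokemonListService extends Query<PokemonListResponse> {
  // Recent apollo-angular versions read the query from this `document` field.
  document = gql`
    query getPokemonList($limit: Int!, $offset: Int!) {
      species: pokemon_v2_pokemonspecies(limit: $limit, offset: $offset) {
        id
        name
      }
    }
  `;
}

@Component({
  selector: 'app-pokemon-list',
  template: `
    <ul>
      <li *ngFor="let pokemon of pokemon$ | async">
        <a [routerLink]="['/pokemon', pokemon.id]">{{ pokemon.name }}</a>
      </li>
    </ul>
  `,
})
export class PokemonListComponent {
  readonly pokemon$: Observable<PokemonSpecies[]>;

  constructor(getPokemonList: GetPokemonListService) {
    // watch() takes the query variables; valueChanges emits each result, and
    // map() strips the data envelope so the template gets the array itself.
    this.pokemon$ = getPokemonList
      .watch({ limit: 50, offset: 0 })
      .valueChanges.pipe(map(result => result.data.species));
  }
}
```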
The component for this page is very similar to the one we just wrote, so I'll be brief. pokemon.component.ts ` The primary difference here is we pull in the ActivatedRoute service so we can read the Pokémon's ID in the URL parameters. The GetPokemonService service takes the ID returned from the router as a parameter. Things get a little more interesting with the template. pokemon.component.html ` Here, we render a list of Pokémon in the same evolution chain with links to them, and some flavor text which should look something like this: At this point, you should have a basic application that can browse Pokémon! There's a lot of improvement that can be made though such as adding pagination, and adding some styles for example, though I feel that's outside the scope of this article. Conclusion We have only scratched the surface of the Apollo Angular library and the PokeAPI itself for that matter. Apollo Angular provides a lot of functionality that makes it work well with Angular. I highly recommend checking out the following documentation links to learn more! Feel free to check out the code on GitHub as well. Documentation * PokeAPI GraphQL Console * Apollo Angular * Angular...


Introduction to Testing in Node.js with Mocha and Chai

When writing software, it can be tedious to test features manually and ensure your entire program continues to work as it evolves over time. We can use automated testing to invoke our program automatically, and ensure it works as expected. The Purpose of Testing Simply put, writing good tests that verify your program works as expected will help you catch regressions, and improve the reliability of your software. Good tests will ensure the program meets the requirements it needs to meet, and that it will continue to do so as it evolves over time. Software testing is also integral to some software development processes like TDD and BDD, though these are outside the scope of this article. The Node.js ecosystem has a large variety of testing frameworks to choose from, and they serve many different purposes. We'll be focusing on using Mocha and Chai to demonstrate the basics of testing in the context of backend development. What are Mocha and Chai? Mocha is a JavaScript testing framework that runs on Node.js and in the browser. It is one of the most popular test frameworks in use in the Node.js community, and is very easy to learn and understand. Chai is an assertion library that helps extend test frameworks like Mocha to make writing certain kinds of tests easier. Most test frameworks like Mocha come with their own assertion functions as well, but we've opted to show off Chai since it makes it easier to write more complex tests. Installing Mocha and Friends Mocha and Chai can be installed into your project using the following: ` Also, note that all following examples are available on GitHub for your viewing pleasure. I won't be showing all of the code being tested in this article, as it would be a lot. You can then define a scripts section in your package.json file so you can run your tests with npm test. ` The Basics We will start by showing off one of the most basic tests that you can make with Mocha, and bear with me as I'll show off some more complex tests later. This is a basic unit test that calls a function that returns the number 42, and tests that the result comes back as expected. The following is defined in a file called index.js. ` And the test for this function can be written as follows in a file located at test/index.test.js (note the naming convention). ` Tests can be run by invoking npm test in your project root. I've opted to keep all tests in a directory called test under the project root, and have test files for each corresponding source file with an added test.js suffix at the end of the filename. There are a few layers to this test, so let's go through each one. Tests are defined by calling global Mocha functions that are defined when the mocha command is invoked. describe is used for organizing your tests. The way that I'm using it above is one level for what I'm testing in general (the index file), one for the meaningOfLife function, and then all tests that need to invoke that function. In this case, there's only one test since this is a very simple function. Tests are written inside of callbacks passed into the it function. One thing to note is how the test reads out like plain English. This is meant to make the purpose of the test clearer to the reader. In this example, we simply import and call the meaningOfLife function and check that it equals 42. We also use Node's built-in assert module to do this comparison, as it will raise an exception if the two expressions don't match.
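As a sketch of what that might look like, following the file layout described above (the author's actual code may differ slightly):

```js
// index.js
function meaningOfLife() {
  return 42;
}

module.exports = { meaningOfLife };

// test/index.test.js
const assert = require('assert');
const { meaningOfLife } = require('../index');

describe('index', () => {
  describe('meaningOfLife', () => {
    it('should return the meaning of life (42)', () => {
      assert.strictEqual(meaningOfLife(), 42);
    });
  });
});
```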
Let's test an API To demonstrate more practical tests, I've opted to create a basic API that returns data, along with tests that verify the data has the expected format. The API is a simple Express application that returns an array of movies as JSON when you call GET /movies against it. Testing an API isn't as simple as calling a function and checking a response, so we will use Chai and SuperTest to help us. The tests are as follows: ` Chai, as mentioned before, is an assertion library, and we use it here primarily for type checking the data coming back from the API. We use SuperTest to make API calls. We pass our app object to SuperTest, and it handles making sure the API is listening, and gives us access to some useful HTTP request builders that include assertion functions as well. We import the expect function from Chai, and use it instead of the assert calls we used before. The tests read out like plain English, and are very easy to understand. Chai actually has several interfaces for writing tests, such as "should" and "assert". These are detailed on their website. Hooks Imagine that the API we're testing has a persistent database of some kind. The data stored in this database will naturally persist between tests, and this can be bad if we want tests to be reproducible. To solve this problem, we can use hooks. Hooks allow code to be run before and after test blocks. In this case, we want to use them to initialize and destroy the database the API will be using for each test, to ensure the state is consistent every time a test is run and the result is always the same. Here is an example of the beforeEach and afterEach hooks in action. ` I've removed the contents of the actual tests since they're a bit long, but they're in the GitHub repository if you're curious. The details of the db and app modules are outside the scope of this article. The main takeaway here is the code inside of the beforeEach hook will tell Mocha to create the database before each test, and the code in the afterEach hook will tell Mocha to destroy the database after each test. I should also mention where it and describe fit in as well! Mocha's test functions and hooks:
- it: Defines a single test.
- describe: Defines a block of tests.
- before: Runs only once, before the first test in the describe block.
- after: Runs only once, after the last test in the describe block.
- beforeEach: Runs before every test in the describe block.
- afterEach: Runs after every test in the describe block.

Conclusion Effective use of Node.js testing libraries like Mocha and Chai can make it easy to write tests and make your application more reliable. The example projects used throughout this article can be found on my GitHub. I have only scratched the surface here when it comes to testing. There are also frontend testing frameworks such as Selenium and Cypress. We've actually written about Cypress on our blog as well, and you can find that here if you're curious....
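To make the hooks discussion above concrete, here is a sketch of what such a test file can look like. The db and app modules, the route, and the response shape are assumptions based on the description above rather than the repository's actual code:

```js
const { expect } = require('chai');
const request = require('supertest');

const app = require('../app');
const db = require('../db');

describe('GET /movies', () => {
  beforeEach(async () => {
    await db.init();    // recreate a clean database before every test
  });

  afterEach(async () => {
    await db.destroy(); // tear it back down so tests stay reproducible
  });

  it('responds with an array of movies', async () => {
    const res = await request(app)
      .get('/movies')
      .expect(200)
      .expect('Content-Type', /json/);

    expect(res.body).to.be.an('array');
  });
});
```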


Connecting to PostgreSQL using TypeORM

In my previous article, we learned how to connect to a PostgreSQL database using the node-postgres package, pg. This works fine, and will be perfect for many applications, but one may also choose to use an ORM instead. What is an ORM? ORM stands for object-relational mapping, and an ORM allows you to interact with your database using objects from your programming language's type system rather than hand-writing queries. This allows ORMs to simplify the logic that manipulates data in the database. In this article, we've opted to use TypeORM as it is widely used, and it supports many of the advanced features expected from an ORM framework. Setup We'll be demonstrating examples with both the Data Mapper and Active Record patterns. Both are supported by TypeORM, and we'll cover the former first. Data Mapper allows you to represent tables and other data as Entity classes. You can then define properties in these classes that map to the respective columns in the database, and even include custom methods in the entities that manipulate them. For starters, you'll want to have TypeScript and TypeORM installed. TypeORM uses TypeScript-exclusive features for defining your entities, and although TypeScript is not required to use TypeORM, it makes it much easier to do so. TypeScript is an extension to JavaScript that adds a strict type system. At the time of writing this article, TypeORM requires TypeScript v3.3 at a minimum, and this is subject to change. ` Now, TypeORM has to be installed! This can be done with the following npm command in your project root: ` Let's define a database and a simple schema with a people table like we did in our node-postgres article. Please check out that article if you need instructions on how to set up a PostgreSQL database to develop against. ` Data Mapper The Data Mapper pattern allows you to define Entities that represent your data types in the database, and repositories to store your query methods and other domain logic. Entities in Data Mapper are very simple. The following example is how an Entity for our people table is defined using the Data Mapper pattern. This is contained in a file called person.ts. ` Note that this Entity uses decorators that are experimental at the time of writing this article. Make sure you set experimentalDecorators and emitDecoratorMetadata in your tsconfig.json for this to work. You can generate a tsconfig.json by running tsc --init. Let's break down what these decorators and their options mean.
- @Column: Put above a property that maps to a column. The name of the property is assumed to be the name of the column in the table. If the name in the Entity doesn't match the database column name, you can specify a name in an options object in the decorator.
- @PrimaryGeneratedColumn(): This signals to TypeORM that this is a PRIMARY KEY column that uniquely represents the entity.

To write a query method, we'll need to create a repository. A repository is a class that contains query methods and other helpers. Domain logic that works with Entities is separated from the definition of the data types themselves. This will go in people-repository.ts. ` The following code can be used to set up a connection and interact with the database using our Entity and Repository. It's a better practice to change the fullname property on the class instance and use save() to update it in the database. However, I wanted to demonstrate how the query builder can be used for UPDATE queries as well. We put this in our main index.ts file.
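As a rough sketch of how these pieces can fit together (an entity plus an index-style usage file; the connection options, names, and the DataSource API usage are my own assumptions, and older TypeORM versions use createConnection and getRepository instead):

```ts
import 'reflect-metadata';
import { Column, DataSource, Entity, PrimaryGeneratedColumn } from 'typeorm';

// person.ts: an entity mapping to the "people" table.
@Entity({ name: 'people' })
export class Person {
  @PrimaryGeneratedColumn()
  id!: number;

  @Column()
  fullname!: string;
}

// A connection to the local database; credentials here are placeholders.
const dataSource = new DataSource({
  type: 'postgres',
  host: 'localhost',
  username: 'postgres',
  password: 'postgres',
  database: 'people',
  entities: [Person],
});

async function main() {
  await dataSource.initialize();
  const people = dataSource.getRepository(Person);

  // Create
  const person = people.create({ fullname: 'Ada Lovelace' });
  await people.save(person);

  // Read
  console.log(await people.findOneBy({ id: person.id }));

  // Update through the query builder, as mentioned above
  await dataSource
    .createQueryBuilder()
    .update(Person)
    .set({ fullname: 'Ada King' })
    .where('id = :id', { id: person.id })
    .execute();

  // Delete
  await people.remove(person);
  await dataSource.destroy();
}

main().catch(console.error);
```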
` Now, with all the code in place, it can be executed with tsc && node index.js. The above demos a basic CRUD flow just like we do in our previous article where we use raw SQL. All methods for interacting with the data exist in the Repository class, and this class has our own custom functions and pre-existing functions for saving and removing data that we use. Active Record The Active Record approach allows you to define query methods in the model itself, rather than doing it in a repository like we did with the Data Mapper pattern. You may have used the Active Record pattern before if you've ever used Ruby on Rails. For our example, there won't be very many changes. We can actually use the same Entity as we did in the Data Mapper class, however we will move our query methods into this class. We also make one of the queries static so we don't need an instance of the class to call the method. The name update method can remain an instance method as it can utilize the id property in an instance to simplify making calls to it. Here is the new person.ts. ` The code in index.ts is similar to the data mapper approach, but differs in how the query methods are called. Since we don't have a repository class with Active Record, we just call the methods directly on the Entity instance. ` Both methods are valid, and you should choose whatever you're more comfortable with, and whatever makes the most sense for your project. Conclusion TypeORM is a useful tool, and is capable of so much more than what is showcased here. We've only skimmed the surface of what TypeORM provides, and there is so much more. There is support for many more advanced features such as lazy relations, migrations, pagination, etc. The full documentation on their website has a lot of useful information. You can find all the source code in this GitHub repo!...