Maarten Bicknese


Senior Software Engineer

Software Engineer, Mentor, Sailor, and Game designer. Passionate about all aspects of web development, with a special love for React.


Challenges of SSR with SolidStart and TanStack Query v4

Coming from developing in React, a lot of us are big fans of TanStack Query. It adds the async data-fetching layer to React that we needed. So when shifting to a new framework, Solid, which has a signature familiar to React developers, we wanted to bring our beloved tools with us. During the development of our showcase, we came to realize that the combination of TanStack Query (v4; v5 seems to include positive changes) and SolidStart was not meant to be.

Understanding the differences

Different interface

Right out of the box, the experience between Solid and React differs. The first, very obvious, issue is that the documentation for Solid consists of a single page, whereas React gets a full book of documentation. But more important is the way one uses TanStack Query. React directly takes the tuple containing the query name and variables, whereas Solid, due to the way its reactivity works, needs a function returning the tuple. This way, Solid can bind an effect to the query to ensure it triggers when the dependencies change. It’s not a big difference, but it indicates that TanStack Query React and TanStack Query Solid are not the same.

` — TanStack Query Docs

Stores

What is not so apparent from the documentation are the changes under the hood. React triggers rerenders when state changes are pushed. These rerenders will, in turn, compare the new variables against dependencies to determine what to run. This does not require special treatment of the state: whatever data is passed to React will be used directly as is. Solid, on the other hand, requires Signals to function. To save you the hassle, TanStack will create stores from the returned data for you. With the dependency tuple as a function and the return value as a store, TanStack Query closes the reactivity loop. Whenever a signal changes, the query will be triggered and load new data. The new data gets written to the store, signalling all observers.

Why it doesn’t work

Solid comes prepackaged with Resources. These fill basically the same role as TanStack Query, although TanStack does offer more features in its React version. Resources are Signal wrappers around an async process, typically used for fetching data from a remote source. Although both Resources and TanStack Query do the same thing, the different signatures mean they are not interchangeable. Resources have loading, where TanStack uses isLoading.

SolidStart

SolidStart is an opinionated meta-framework built on top of SolidJS and Solid Router. One of the features it brings to the table is server-side rendering (SSR). This sends a fully rendered page to the client, as opposed to just sending the skeleton HTML and having the client build the page after the initial page load. With SSR, the server also sends additional information to the client for SolidJS to hydrate and pick up where the server left off. This prevents the client from re-rendering all the work the server had already done. In order for SSR to work, one needs to create pages. SolidStart offers a feature that allows developers to inject data into their pages. By doing so, one can set up a generic GUI for loading data when changing between pages. A very minimal example of this looks like:

`

When combining this setup with routing and createResource, there are some caveats that need to be taken into consideration. These are described in the official SolidStart docs.
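To give a feel for that data-injection setup, here is a hedged sketch of a minimal SolidStart page using routeData with createResource. The route name, the endpoint URL, and the User shape are made up for illustration and are not the article’s original snippet.

```tsx
// routes/users.tsx: hypothetical route; endpoint and data shape are illustrative.
import { For, createResource } from "solid-js";
import { useRouteData } from "solid-start";

type User = { id: number; name: string };

export function routeData() {
  // Wrap the async fetch in a resource so SolidStart can track it during SSR.
  const [users] = createResource<User[]>(async () => {
    const response = await fetch("https://example.com/api/users");
    return response.json();
  });
  return users;
}

export default function UsersPage() {
  const users = useRouteData<typeof routeData>();
  return (
    <ul>
      <For each={users()}>{(user) => <li>{user.name}</li>}</For>
    </ul>
  );
}
```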
In order to keep the routes maintainable, SolidStart offers createRouteData, which simplifies the setup and mitigates potential issues caused by misusing the system.

createRouteData, resources and TanStack Query

It is with createRouteData that we run into issues when combining SolidStart and TanStack Query. In order to use SSR, SolidStart needs developers to use createRouteData, which in turn expects to create a resource for the async operation that is required to load the page’s data. By relying on a resource being returned, SolidStart can take control of the flow. It knows when it’s rendering on the server, how to pass both the HTML and the data to the client, and finally how to pick up on the client.

As stated before, TanStack Query relies on stores, not on resources. Therefore we cannot simply swap createRouteData for createQuery, even though they both serve the same purpose. Our initial attempt was to wrap the data returned from createQuery to resemble the shape of a resource, but that started to throw errors as soon as we tried to load a page. Under the hood, both SolidStart and TanStack Query are doing their best to keep control over the data flow. Systems like caching, hydration strategies, and refetching logic are running while it seems like we’re just fetching data and passing it to the render engine. These systems conflict (they both try to do the same thing and get stuck in a tug-of-war over the data). This results in a situation where we can either satisfy TanStack Query or SolidStart.

We could probably make it work by creating an advanced adapter that awaits a query and pulls out its data, uses that data to create our own resource, and feeds that to createRouteData so SolidStart can do its thing. Our conclusion is that too much effort is needed to create and maintain such an adapter, especially when taking into consideration that we can simply move away from TanStack Query (for now) and use resources as SolidStart intends.
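To make the shape mismatch concrete, here is a rough, hedged sketch of the two return shapes side by side. The fetchUsers helper and both components are made up for illustration, and each would normally live inside a properly set-up route and QueryClientProvider.

```tsx
import { createRouteData } from "solid-start";
import { createQuery } from "@tanstack/solid-query";

// Hypothetical fetcher used by both examples.
const fetchUsers = async () =>
  (await fetch("https://example.com/api/users")).json();

export function RouteDataExample() {
  // createRouteData hands back a resource-like accessor with a `loading` flag...
  const users = createRouteData(fetchUsers);
  return <p>{users.loading ? "loading…" : JSON.stringify(users())}</p>;
}

export function QueryExample() {
  // ...while createQuery (v4) takes a function returning the key and hands back
  // a store with a different shape, including `isLoading`.
  const query = createQuery(() => ["users"], fetchUsers);
  return <p>{query.isLoading ? "loading…" : JSON.stringify(query.data)}</p>;
}
```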


Deep Dive Into How Signals Work In SolidJS

SolidJS and Qwik have shown the world the power of signals, and Angular is following suit. There’s no way around them, so let's see what they are, why one would use them, and how they work.

Signal basics

Signals are built using the observer pattern. In this pattern, a subject holds a list of observers who are subscribed to changes to the subject. Whenever the subject gets changed, all subscribers receive a notification of the update, typically through a registered callback method. Observers may push new changes to that subject or other subjects, triggering another set of updates throughout the observers.

> From the above, you might have guessed that infinite loops are a big caveat with using the observer pattern. The same holds true for signals.

The power of this pattern lies in the separation of concerns. Observers need to know little about the subject, except that it can change. Whichever actor is going to change the subject needs to know nothing about the observers. This makes it easy to build standalone services that know only about their own domain.

Signals in front-end frameworks

SolidJS brings the observer pattern to front-end frameworks through signals. Observers can be added to a signal through SolidJS’s concept of Effects (by using createEffect). Components within SolidJS can be seen as both an observer and a subject at the same time. The component subscribes to all signals that are used to render the component’s HTML. SolidJS’s rendering system, in turn, is subscribed to all components, where the components act as subjects and SolidJS as the observer. So whenever a signal changes in a component, the component reacts by changing its output, which then triggers SolidJS to put the new output on the screen.

Compare this to, for example, Vue or React. When a change to the state occurs, the new values are passed down the component hierarchy. Each component returns its new output, which can be either the same as the previous render or changed. The framework then compares this tree against what it already had, and determines which parts to update. This is more dependent on a single source of truth, which needs to know about all the components in the system. Changes are loosely related to each other, and only by diffing the results can the next step be determined. This differs from SolidJS’s setup, which makes hard connections between changes and results, making it more straightforward to see what will get updated when a signal changes.

Writing our own Signals

At first sight, signals seem magical, and one might be inclined to believe there are some compiler tricks going on. Yet it is all plain JavaScript, and in this article, we’ll demystify signals in order to use them to their full potential. We can create our own signals with pure JavaScript in less than 25 lines.

> Our simple version will not take objects or arrays as values, as these are references in JavaScript and require special attention.

Let's start with the interface. We want the signal creator, which is a function that returns a tuple with the first value being the getter and the second value the setter. The function accepts a value, which will be used as the initial value. This gives us:

`

> Note that, because we created a new variable within the closure of our createSignal, the variable outside of its scope will not change: original on the last line will still be “1”.
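As a hedged illustration of that interface (not necessarily the article’s exact snippet), a first version could look like this, including an original variable to show the closure note above:

```ts
// A first cut of createSignal: a getter/setter tuple around a value held in a
// closure. No observers yet.
function createSignal(value: number): [() => number, (next: number) => void] {
  const getter = () => value;
  const setter = (next: number) => {
    value = next;
  };
  return [getter, setter];
}

let original = 1;
const [count, setCount] = createSignal(original);

setCount(2);
console.log(count());   // 2
console.log(original);  // still 1: the closure holds its own copy of the value
```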
For simplicity, we’re going to leave objects and arrays out of the picture, as these are references instead of scalar values and need extra code to do the same thing. Now that we can read from and write to our signal (a.k.a. the subject), we’ll need to add subscribers to it. Whenever the getter is called (i.e., the value is read), we want the originator of the call to be registered as an observer. Then, when the setter is called, we are going to loop over all subscribed observers, and notify them of the new value. Consider this fully working signal creator. We’re almost there.

`

This snippet has a downside: it needs the observer to be passed as an argument to the getter. But we don’t want to deal with that. Our interface was to read signal(), and have some sort of magic register the observer for us. What comes next was an eye-opener for me. I always believed there was some closure trick, or a built-in JavaScript function to retrieve parent closures. That would have been a fantastic way to find out who called the getter function and register it as an observer. But JavaScript offers nothing to support us in this. Instead, a much simpler trick is used, and it is seemingly used in every major framework.

Frameworks, React and SolidJS among others, store the parent in a global variable. Because JavaScript is single-threaded, it needs to execute all operations in order. It does a lot under the hood to get stuff like async to work. Clever developers have relied on this single-threaded aspect by writing to a global variable, and reading from it in the next function. This gets a little abstract, so here’s a concrete example to demonstrate this setup.

`

We do not need to worry about current getting overwritten, as the code will always execute in order. It’s safe to assume that it will have the value we expect it to have when the body of second is executed. Note that we clear the value in the body of first, as we don’t want unwanted side effects by leaving the variable set.

The Fully Working Signal

Let’s add effects to our signals to complete the minimal signal framework. With what we have learned in the previous section, we can create effects by:

1. Registering the effect callback (i.e., the observer) to our global variable current.
2. Calling the observer for the first time. This will read all signals it depends upon, therefore adding current to the observer list.
3. Clearing current to prevent registering the observer to signals read in the future.

For this, we remove the current argument from the getter, as this is now globally readable. And we can add the createEffect function.

`

With this setup, we already have a working signal system. We can register effects and write to our signals to trigger them.

`

And there we have it! Working signals in just a couple of lines, no magic, no difficult JavaScript API. All just plain code. You can play around with the fully working example in this StackBlitz project. It has the signal setup as described, plus an example of stores. Run node index.js to see the result. This simple framework is only focused on showcasing signals. For demonstration purposes, it simply logs to the console. Frameworks like SolidJS have advanced effects and logic to get HTML rendering to work. If you’re interested in learning more about rendering, you can read Mark’s blog on how to create your own custom renderer in SolidJS. Or put your newly learned skills to use in a new SolidJS project created with Starter.dev’s SolidJS and Tailwind starter!
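For reference, here is a condensed, hedged sketch of the minimal framework described above. It is not the StackBlitz code itself; the names follow the article’s description of current and createEffect.

```ts
// Minimal signal framework sketch: a global `current` holds the effect that is
// currently being registered, the getter subscribes it, and the setter notifies
// every subscriber.
type Observer = () => void;

let current: Observer | undefined;

function createSignal(value: number): [() => number, (next: number) => void] {
  const observers = new Set<Observer>();

  const getter = () => {
    // Whoever reads the signal while an effect is registering becomes an observer.
    if (current) observers.add(current);
    return value;
  };

  const setter = (next: number) => {
    value = next;
    // Notify everyone who read this signal from inside an effect.
    observers.forEach((observer) => observer());
  };

  return [getter, setter];
}

function createEffect(effect: Observer) {
  current = effect;
  effect();            // first run reads its signals, registering the effect on each
  current = undefined; // clear to avoid registering it on unrelated future reads
}

// Usage: the effect reruns whenever the signal it reads changes.
const [count, setCount] = createSignal(1);
createEffect(() => console.log("count is now", count()));
setCount(2); // logs "count is now 2"
```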


How to Setup Your Own Infrastructure Using the AWS Toolkit and CDK v2

Suppose you want to set up your infrastructure on AWS, but avoid going over the manual steps, or you want reproducible results. In that case, CDK might be the thing for you. CDK stands for Cloud Development Kit; it allows you to program your hosting setup using either TypeScript, JavaScript, Python, Java, C#, or Go. CDK does require you to be familiar with AWS terminology. This series will explain the services used, but it might be a good idea to read up on what AWS offers, or read one of our earlier articles on AWS.

CDK is imperative, which means you can code your infrastructure. There is a point to be made, however, that it behaves more like a declarative tool. All the code one writes ends up in a stack definition. This definition is sent to AWS to set up the desired services, or to alter an already running stack. The imperative approach allows one to write simple conditional statements or loops without learning a new language.

AWS Toolkit

To make things easier for us, AWS offers the AWS Toolkit for VS Code. The installation of the plugin in VS Code is straightforward. We had some issues with the authentication, and recommend using the "Edit credentials" route over the "Add a new connection" option. When on the account start page, select the profile you'd like to use. Open the accordion so it shows the authentication options. Pick "Command line or programmatic access" to open a dialog with the required values. Click the text underneath the heading "Option 2: Add a profile to your AWS credentials file". This will automatically copy the values for you. Next, go back to VS Code, and paste these values into your credentials file. Feel free to change the name between the square brackets to something more human-readable. You can now pick this profile when connecting to AWS in VS Code.

First stack

With our handy toolkit ready, let's deploy our first stack to AWS using CDK. For this, the CDK needs to create a CloudFormation stack. In your terminal, create a new empty directory (the default name of the app will be the same as your directory's name) and navigate into it. Scaffold a new project with

`

This will create all the required files to create your stack in AWS. From here on, we can bootstrap our AWS environment for use with CDK. Run the bootstrap command with the profile you’ve configured earlier. For example, I pasted my credentials, and named the profile ‘sandbox’.

`

CDK will now create the required AWS resources to deploy our stack. Having all our prerequisites met, let’s create a Lambda to test whether our stack is working properly. Create a new JavaScript file lambda/Hello.js containing this handler:

`

And add our Lambda to our stack in the constructor in lib/<app-name>-stack.ts:

`

That’s all we need to deploy our Lambda to our stack. We can now run the deploy command, which will compare our new local configuration with what is already deployed. Before any changes are pushed, this diff will be displayed in your terminal, and you will be asked for confirmation. This is a good moment to evaluate whether what you’ve written has correctly translated to the desired infrastructure.

`

This same command will push updates. Note that you will only see the diff and confirmation prompt when CDK is about to create new resources. When updating the contents of your Lambda function, it simply pushes the code changes. Now in VS Code, within your AWS view, you’ll find a new CloudFormation stack, Lambda function, and S3 bucket in the explorer view. Right-click your Lambda and select “Invoke on AWS”. This opens a new window for that specific Lambda.
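Before invoking, here is roughly what the stack addition described above might look like. This is a hedged sketch: the construct id, runtime version, and class name are illustrative, and it assumes lambda/Hello.js exports an async handler function that returns some payload.

```ts
// lib/<app-name>-stack.ts: registering the handler as a Lambda function in the stack.
import { Stack, StackProps } from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Construct } from "constructs";

export class MyAppStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    new lambda.Function(this, "HelloLambda", {
      runtime: lambda.Runtime.NODEJS_18_X,
      code: lambda.Code.fromAsset("lambda"), // points at the lambda/ directory
      handler: "Hello.handler",              // lambda/Hello.js exports `handler`
    });
  }
}
```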
In the right-hand corner, click “Invoke”. The output window will open, and you should see the returned payload, including the message we set in our handler. This is not very practical yet; we’re still missing an endpoint to call from our client or browser. This can be done by adding a FunctionURL. Simply add the following line to your stack definition. The authentication is disabled for now, but this makes it possible to make a GET request to the Lambda and see its result. This might not be the desired situation, and AWS offers options to secure your endpoints.

`

After redeploying this change, right-click your Lambda in VS Code and copy the URL. Paste it in your browser, and you should see the result of your Lambda! Our first stack is deployed and working.

Cleanup

By following this article, you should remain within the free tier of AWS and not incur any costs. To keep costs low, it’s a good practice to clean up stacks that are no longer in use.

`

The CDK destroy command will remove your stack, but it leaves the CDK bootstrapped for future deployments. If you want to fully remove all resources created by following this article, also remove the CloudFormation stack and S3 bucket. This can be done through VS Code by right-clicking your CloudFormation stack and selecting “Delete CloudFormation Stack”, and simply “Delete” for the associated S3 bucket. This brings you back to a completely clean slate, and future use of the CDK will require bootstrapping again.

Round up

You should now be able to bootstrap CDK, create a stack, and run a Lambda function within that stack, accessible through a FunctionURL. You can grow your stack by adding more Lambda functions, augmenting the logic of those functions, or adding other AWS resources not covered in this article. The setup created can be torn down and recreated in exactly the same way over and over, making it easy to share with your team. Changes are incremental, and can be rolled back if need be. This should give you more confidence in managing your infrastructure than creating it manually through the AWS console. Have fun building your own infrastructure!
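For reference, the FunctionURL addition could look roughly like this inside the stack constructor. This is a hedged sketch that assumes fn is the lambda.Function created earlier; the CfnOutput is an optional extra, not part of the original setup, that prints the URL after deployment.

```ts
import { CfnOutput } from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";

// Inside the stack constructor, assuming `fn` is the lambda.Function from before.
const fnUrl = fn.addFunctionUrl({
  authType: lambda.FunctionUrlAuthType.NONE, // public endpoint; secure this for real workloads
});

// Optional: surface the generated URL as a stack output so it is easy to copy.
new CfnOutput(this, "HelloLambdaUrl", { value: fnUrl.url });
```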


Git Strategies For Teams, Another Take

Recently, we published an article about a pragmatic approach to using Git in teams. It outlines a strategy which is easy to implement while safeguarding against common issues when using Git. It is, however, in my opinion, a compromise: it allows some issues with edge cases as a trade-off for ease of use.

Status Quo

Before jumping into another strategy, let us establish the pros and cons of what we’re up against. This will give us a reference for determining whether we did better or not.

Pro: Squash Merging

One of the stronger arguments is the squash merge. It offers freedom to all developers within the team to develop as they see fit. One might prefer to do all the work first and push a big commit at the end. Others might like to play it safe and simply commit every 5 minutes, allowing them to roll back or back up changes. The only rule is that at the end of the work, all changes get squashed into a single commit that has to adhere to the team's standards.

Con: You Get A Single Commit

Each piece of work gets a single commit. To circumvent this, one could create multiple PRs and break up the work into bite-size chunks. This is, undeniably, a good practice. But it can be cumbersome and disruptive when trying to keep pace. In addition, you put the team at risk of getting Git conflicts. Say you’re in the middle of building your feature and uncover a bug which requires fixing. As a good scout, you implement the fix and create a separate PR to deliver the fix to the team. At the same time, you keep the fix in your branch, as you need it for your feature. At the point of squashing, your newly squashed commit conflicts with the stand-alone fix you’ve delivered to your team. This is nothing that Git can’t fix, but it is a nuisance nonetheless.

Con: Review Potential

A good PR shouldn’t have too many changes, making it easy to review. In the real world, however, things get messy. Commits can give us some insight into how the complete changeset came to be. This requires the team to write well-curated commits, though. This conflicts with the strength we’re getting from allowing freedom to commit as one sees fit.

The History Rewriting Controversy

It is good to know that what I’m about to suggest is considered blasphemy by many. Rewriting history is not without its dangers. Changes can go missing, and others who have based their work on now-changed history need to deal with conflicts. However, when applied prudently, rewriting history can yield benefits as well. In this context, some advanced Git knowledge is required.

The Alternative

There was a soft hint towards using conventional commits in Dustin’s article. Let's go ahead and fully endorse adopting it. The convention is simple enough, and the documentation is exhaustive. Now I hear you think, “but we just concluded that allowing us to commit as we like was a good thing”. And you are not wrong. This is where history rewriting comes into play. As you’re working, commit as you like. Then, when it’s time to put your changes up for review, start editing your branch to ensure each change is nicely wrapped and documented in a proper commit. Finally, after getting a thumbs-up on the PR, rebase your changes on top of the branch you’re merging into and do a normal merge. Most Git hosting services offer this workflow for you. While I endorse rewriting history in your own branch, refrain from altering shared branches like “main” or “develop”. By sticking to this small rule, you’ve already negated most disadvantages of rewriting history.
Shared Changes

If we look back at our scenario where both the main and our feature branch include a fix, we get to the same point where we want to merge. However, in this case, given that you’ve made the same commit in both branches, Git is clever enough to fix the flow of history and remove the fix from your new changes. The following flow... ...will look like this after merging:

Fixes On Your Own Features

Although this is part of the conventional commit strategy, I feel it deserves some special attention. If you have introduced a new feature in your branch and committed the changes, it can happen that you introduced a bug. Your first intuition might be to create a “fix” commit. Instead, consider going back to your feature commit and amending the fix to it. This has two advantages. First of all, the history will be less cluttered. Looking back at what changed, it's easy to see which features got introduced and what bugs we found along the way. On top of that, it will prevent confusion for your reviewers. Now, the code presented to others is fixed code. At no time in its history does it ever contain the bug. Your co-workers are not going to have any comments on it.

How To Rewrite Your Branch

Now that we know why to clean up, let's look at strategies to actually do the orchestration. The most obvious route is to keep the changeset you want to present in mind. Doing so, one prevents having to go back and rewrite everything from scratch. As an added bonus, I’ve found that it helps me better separate concerns.

Complete Wipe

If you like making periodic commits (or use some other strategy that results in you creating arbitrary commits), chances are you are going to completely wipe all commits (not the changes) in your branch. The simplest way to accomplish this is by doing a soft reset to where you forked from the main branch. This can be achieved by rebasing and resetting to main (given that main is where you want to merge into). This is a good approach, as it also prepares your branch for being merged back.

`

This can also be accomplished by counting the number of commits and stepping back that many commits from HEAD. For example, if you have made 4 commits in your branch:

`

And lastly, you can do this by knowing where you started off. One can find this by looking at the logs:

`

Using any of these methods will leave you with no commits in your branch, and all your changes in your workspace. From this point, you can start cherry-picking your changes, and making well-curated commits.

Interactive Rebase

If you already have somewhat of a structure, interactive rebasing might be a better solution for you. This will allow you to go over each commit and decide how to alter them. The most interesting options are:

s, squash - this will add the changes from this commit to its parent, followed by allowing you to change the commit message, and thus appending the message with the squashed changes.
e, edit - using this option, the rebase will stop right before the commit gets added to the branch, as if you went back in time and just did the development work. From this point, you can add files, split the commit into multiple different commits, change the commit message, or do whatever you’d like to do.
d, drop - on the rare occasion you simply don’t want this commit anymore.
r, reword - like edit, but you’re only offered the option to change the commit message.

To start an interactive rebase, simply run:

`

Conclusion

By embracing history rewriting and dropping squash merging, a team could produce an even cleaner Git history. This option might not be for everyone, as it requires a little work and Git knowledge. But if done well, it will circumvent some of the drawbacks of our pragmatic approach.