
Developer Insights

Join millions of readers! Every week, our engineers publish human-written articles that solve real-world problems. Enjoy fresh technical content, along with interviews about modern web advancements with industry leaders and open-source authors.

Tags: Express.js

SSR Finally a First-Class Citizen in Angular?

What’s the state of Angular SSR? And what’s happened to @angular/universal? This article explains!


Building a Multi-Response Streaming API with Node.js, Express, and React

Introduction

As web applications become increasingly complex and data-driven, efficient and effective data transfer methods become critically important. A streaming API that can send multiple responses to a single request is a powerful tool for handling large amounts of data or for delivering real-time updates. In this article, we will guide you through the process of creating such an API.

We will use video streaming as an illustrative example. With their large file sizes and the need for flexible, on-demand delivery, videos present a fitting scenario for showcasing the power of multi-response streaming APIs. The backend will be built with Node.js and Express, utilizing HTTP range requests to facilitate efficient data delivery in chunks. Next, we'll build a React front-end to interact with our streaming API. This front-end will handle both the display of the streamed video content and its download, offering users real-time progress updates. By the end of this walkthrough, you will have a working example of a multi-response streaming API, and you will be able to apply the principles learned to a wide array of use cases beyond video streaming. Let's jump right into it!

Hands-On: Implementing the Streaming API in Express

In this section, we will dive into the server-side implementation: our Node.js and Express application. We'll implement an API endpoint that delivers video content in a streaming fashion. Assuming you have already set up your Express server with TypeScript, we first need to define our video-serving route: a GET endpoint that, when hit, streams a video file back to the client. Make sure to install cors for handling cross-origin requests, dotenv for loading environment variables, and throttle for controlling the rate of data transfer. You can install these with the following command: `npm install cors dotenv throttle`

The endpoint implements a basic video streaming server that responds to HTTP range requests (a sketch of such an endpoint appears at the end of this section). Here's a brief overview of the key parts:

1. File and Range Setup: We start by determining the path to the video file and getting the file size. We also grab the range header from the request, which contains the range of bytes the client is requesting.
2. Range Request Handling: If a range is provided, we extract the start and end bytes from the range header, then create a read stream for that specific range. This allows us to stream a portion of the file rather than the entire thing.
3. Response Headers: We then set up our response headers. In the case of a range request, we send back a '206 Partial Content' status along with information about the byte range and the total file size. For non-range requests, we simply send back the total file size and the file type.
4. Data Streaming: Finally, we pipe the read stream directly to the response. This step is where the video data actually gets sent back to the client. The use of pipe() automatically handles backpressure, ensuring that data isn't read faster than it can be sent to the client.

With this setup in place, our streaming server is capable of efficiently delivering large video files to the client in small chunks, providing a smoother user experience.
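For reference, here is a minimal sketch of what such a range-aware streaming endpoint could look like. The article's original snippet didn't survive this excerpt, so this is a reconstruction under assumptions: the /video route, the VIDEO_PATH location, and the port are placeholders, not the original code.

```ts
// Hypothetical reconstruction of the streaming endpoint, not the article's original code.
import express from "express";
import cors from "cors";
import dotenv from "dotenv";
import fs from "fs";
import path from "path";

dotenv.config();

const app = express();
app.use(cors());

// Assumed location of the video file; adjust to your project layout.
const VIDEO_PATH = path.join(__dirname, "assets", "video.mp4");

app.get("/video", (req, res) => {
  const { size } = fs.statSync(VIDEO_PATH);
  const range = req.headers.range;

  if (range) {
    // A range header looks like "bytes=32324-" (the end byte is optional).
    const [startStr, endStr] = range.replace(/bytes=/, "").split("-");
    const start = parseInt(startStr, 10);
    const end = endStr ? parseInt(endStr, 10) : size - 1;

    res.writeHead(206, {
      "Content-Range": `bytes ${start}-${end}/${size}`,
      "Accept-Ranges": "bytes",
      "Content-Length": end - start + 1,
      "Content-Type": "video/mp4",
    });

    // Stream only the requested byte range; pipe() handles backpressure.
    fs.createReadStream(VIDEO_PATH, { start, end }).pipe(res);
  } else {
    // No range requested: send the entire file.
    res.writeHead(200, { "Content-Length": size, "Content-Type": "video/mp4" });
    fs.createReadStream(VIDEO_PATH).pipe(res);
  }
});

app.listen(Number(process.env.PORT ?? 3000));
```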
Implementing the Download API in Express

Now, let's add another endpoint to our Express application, one that provides more granular control over the data transfer process. We'll set up a GET endpoint for '/download', and within this endpoint, we'll handle streaming the video file to the client for download. This endpoint has a similar setup to the video streaming endpoint, but it comes with a few key differences:

1. Response Headers: Here, we include a 'Content-Disposition' header with an 'attachment' directive. This header tells the browser to present the file as a downloadable file named 'video.mp4'.
2. Throttling: We use the 'throttle' package to limit the data transfer rate. Throttling can be useful for simulating lower-speed connections during testing, or for preventing your server from being overwhelmed by data transfer operations.
3. Data Writing: Instead of directly piping the read stream to the response, we attach 'data' and 'end' event listeners to the throttled stream. On the 'data' event, we manually write each chunk of data to the response, and on the 'end' event, we close the response.

This implementation provides a more hands-on way to control the data transfer process. It allows for custom logic to handle events like pausing and resuming the transfer, applying custom transformations to the data stream, or handling errors during transfer.

Utilizing the APIs: A React Application

Now that we have a server-side setup for video streaming and downloading, let's put these APIs into action within a client-side React application. Note that we'll be using Tailwind CSS for quick, utility-based styling in our components. Our React application will consist of a video player that uses the video streaming API, a download button that triggers the download API, and a progress bar that shows the real-time download progress.

First, we define the VideoPlayer component that plays the streamed video. It uses an HTML5 video tag, with the src attribute of the source tag set to the video endpoint of our Express server. When this component is rendered, it sends a request to our video API and starts streaming the video in response to the range requests that the browser automatically makes.

Next, the DownloadButton component handles the video download and displays the download progress (a sketch of such a component appears after the summary below). When the download button is clicked, it sends a fetch request to our download API, then uses a loop to continually read chunks of data from the response as they arrive, updating the download progress until the download is complete. This is an example of more controlled handling of a multi-response API: instead of directly piping the data, we process it and manually assemble it into a downloadable file.

Bringing It All Together

Let's now integrate these components into our main application component. In this simple App component, we include the VideoPlayer and DownloadButton components, placing the video player and download button on the screen in a neat, centered layout thanks to Tailwind CSS.

Here is a summary of how our system operates:

- The video player makes a request to our Express server as soon as it is rendered in the React application. Our server handles this request, reading the video file and sending back the appropriate chunks as per the range requested by the browser. This results in the video being streamed in our player.
- When the download button is clicked, a fetch request is sent to our server's download API. This time, the server reads the file, but instead of just piping the data to the response, it controls the data sending process, sending chunks of data and logging them for monitoring purposes.
- The React application collects these chunks and concatenates them, displaying the download progress in real time. When all chunks are received, it compiles them into a Blob and triggers a download in the browser.
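Here is a minimal sketch of what such a DownloadButton could look like. The article's original component didn't survive this excerpt, so the endpoint URL, markup, and names below are assumptions (the Tailwind classes are also omitted for brevity):

```tsx
// Hypothetical reconstruction of the DownloadButton, not the article's original code.
import { useState } from "react";

export function DownloadButton() {
  const [progress, setProgress] = useState(0);

  const download = async () => {
    // The endpoint URL is an assumption; point it at your own server.
    const response = await fetch("http://localhost:3000/download");
    const total = Number(response.headers.get("Content-Length")) || 0;
    const reader = response.body!.getReader();

    const chunks: Uint8Array[] = [];
    let received = 0;

    // Read chunks as they arrive, updating the progress indicator.
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      chunks.push(value);
      received += value.length;
      if (total) setProgress(Math.round((received / total) * 100));
    }

    // Compile the chunks into a Blob and trigger a browser download.
    const blob = new Blob(chunks, { type: "video/mp4" });
    const url = URL.createObjectURL(blob);
    const link = document.createElement("a");
    link.href = url;
    link.download = "video.mp4";
    link.click();
    URL.revokeObjectURL(url);
  };

  return (
    <div>
      <button onClick={download}>Download video</button>
      <progress value={progress} max={100} />
    </div>
  );
}
```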
This setup allows us to build a full-featured video streaming and downloading application with fine-grained control over the data transmission process. To see this system in action, you can check out this video demo.

Conclusion

While the focus of this article was on video streaming and downloading, the principles we discussed extend beyond media files. The pattern of responding to HTTP range requests is common in various data-heavy applications, and understanding it can be a useful tool in your web development arsenal. Finally, remember that the code shown in this article is just a simple example to demonstrate the concepts. In a real-world application, you would want to add proper error handling, validation, and possibly some form of access control, depending on your use case.

I hope this article helps you in your journey as a developer. Building something yourself is the best way to learn, so don't hesitate to get your hands dirty and start coding!


BullMQ with ExpressJS

Node.js uses an event loop to process asynchronous tasks. The event loop is responsible for handling the execution of asynchronous tasks, but it does so in a single thread. If some CPU-intensive or long-running (blocking) logic needs to be executed, it should not run on the main thread. This is where BullMQ can help us. In this blog post, we are going to set up a basic queue using BullMQ. If you'd like to jump right into developing with a queue, check out our starter.dev kit for ExpressJS, where a fully functioning queue is already provided for you.

Why and when to use a queue?

BullMQ is a library that can be used to implement message queues in Node.js applications. A message queue allows different parts of an application, or different applications, to communicate with each other asynchronously by sending and receiving messages. This can be useful in a variety of situations, such as when one part of the application needs to perform a task that could take a long time, or when different parts of the application need to be decoupled from each other for flexibility and scalability. If you need to process large numbers of messages in a distributed environment, it can help you solve your problems.

One such scenario is handling webhooks. Webhook handler endpoints usually need to respond with a 2xx status quickly, so you cannot put long-running tasks inside the handler, but you still need to process the incoming data. A good way of mitigating this is to put the incoming data into the queue and respond quickly. The processing is taken care of by the BullMQ worker, and if it is set up correctly, it will run inside a child process without blocking the main thread.

Prerequisites

BullMQ uses Redis to handle its message queue. In development mode, we use docker-compose to start up a Redis instance, exposing the Redis container's 6379 port on the host machine. We also mount the /misc/data and /misc/conf folders to preserve data for our local development environments. We can start up our infrastructure with the docker-compose up -d command, and we can stop it with the docker-compose stop command. We also need to set up connection information for BullMQ, which we make configurable via environment variables.

To start working with BullMQ, we also need to install it in our Node project: `npm install bullmq`

Setting up a queue

Creating a queue is pretty straightforward: we pass the queue name as a string along with the connection information. We also create a function that can be used to add jobs to the queue, which gets called from an endpoint handler. I suggest setting up a rule that removes completed and failed jobs in a timely manner. In this example, we remove completed jobs from Redis after an hour, and we leave failed jobs there for a day. These values depend on what you want to achieve; it can very well happen that in a production app, you would keep the jobs for weeks.

Put the processing into a thread

The queue can now store jobs, but in order for us to be able to process those jobs, we need to set up a worker. To start with, we set up the worker with an inline async function. The worker needs the same name as the queue in order to start consuming jobs in that queue, and it needs the same connection information as the queue we set up before. After we create the worker and set up its event listeners, we call the setUpWorker() method in the queue.ts file after the Queue gets created (a combined sketch of the queue and worker setup follows below).
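The original snippets are missing from this excerpt, but a minimal sketch of the queue and worker setup could look like the following. The queue name, job name, file layout, and environment variable names are assumptions, not the article's original code:

```ts
// queue.ts -- hypothetical reconstruction, not the article's original code.
import { Queue, Worker } from "bullmq";

// Connection details are made configurable via environment variables.
const connection = {
  host: process.env.REDIS_HOST ?? "localhost",
  port: Number(process.env.REDIS_PORT ?? 6379),
};

// The queue name ("example-queue") is a placeholder.
export const exampleQueue = new Queue("example-queue", { connection });

// Called from an endpoint handler to enqueue work.
export function addJobToQueue(data: Record<string, unknown>) {
  return exampleQueue.add("example-job", data, {
    removeOnComplete: { age: 60 * 60 },  // drop completed jobs after an hour
    removeOnFail: { age: 24 * 60 * 60 }, // keep failed jobs for a day
  });
}

// A worker with the same queue name and connection consumes the jobs.
export function setUpWorker(): void {
  const worker = new Worker(
    "example-queue",
    async (job) => {
      // Long-running processing would happen here.
      console.log(`processing job ${job.id}`);
    },
    { connection }
  );

  worker.on("completed", (job) => console.log(`job ${job.id} completed`));
  worker.on("failed", (job, err) =>
    console.error(`job ${job?.id} failed: ${err.message}`)
  );
}

setUpWorker();
```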
Let's set up the job processor function. This example processor function doesn't do much, but if we had a long-running job, like complex database update operations or sending data towards a third-party API, this is where we would do it. Let's make sure our worker runs these jobs on a separate thread: if you provide a file path to the worker as the second parameter, BullMQ will run the function exported from that file in a separate thread. That way, the main thread is not used for the CPU-intensive work the processor does.

The above example works if you run the TypeScript compiler (tsc) on your back-end code, but if you prefer keeping your code in TypeScript and running the logic with ts-node, then you should use the TypeScript file as your processor path.

The problem with bundlers

Sometimes, your back-end code gets bundled with webpack. For example, if you use NX to keep your front-end and back-end code together in one repository, you will notice that your back-end code gets bundled with webpack. In order to be able to run your processors in a separate thread, you need to tweak your project.json configuration.

Conclusion

In an ExpressJS application, running CPU-intensive tasks on the main thread can cause your endpoints to become unresponsive and/or slow. Moving these tasks into a different thread can help alleviate performance issues, and BullMQ can help you out greatly. If you want to learn more about NodeJS, check out node.framework.dev for a curated list of libraries and resources. If you are looking to start a new ExpressJS project, check out our starter kit resources at starter.dev.


Introducing the express-typeorm-postgres Starter Kit

At This Dot, we've been working with ExpressJS APIs for a while, and we've created a starter.dev kit for ExpressJS that you can use to scaffold your next backend project.


Migrating a classic Express.js to Serverless Framework

Problem

Classic Express.js applications are great for building backends. However, their deployment can be a bit tricky. There are several solutions on the market for making deployment "easier", like Heroku, AWS Elastic Beanstalk, Qovery, and Vercel, but "easier" means special configurations or higher service costs. In our case, we were trying to deploy an Angular frontend served through CloudFront, and needed a separately deployed backend to manage an OAuth flow. We needed an easy-to-deploy solution that supported HTTPS and could be automated via CI.

Serverless Framework

The Serverless Framework is a framework for building and deploying applications onto AWS Lambda, and it allowed us to easily migrate and deploy our Express.js server at a low cost with long-term maintainability. It was so simple that it only took us an hour to migrate our existing API and get it deployed, so we could start using it in our production environment.

Serverless Init Script

To start this process, we used the Serverless CLI to initialize a new Serverless Express.js project. Here's a quick explanation of the choices we made for our application:

- What do you want to make? This prompt offers several possible scaffolding options. In our case, the Express API was the perfect solution, since that's what we were migrating.
- What do you want to call this project? You should put whatever you want here. It'll name the directory and define the naming schema for the resources you deploy to AWS.
- What org do you want to add this service to? This question assumes you are using the serverless.com dashboard for managing your deployments. We're choosing to use GitHub Actions and AWS tooling directly, though, so we opted out of this option.
- Do you want to deploy your project? This will attempt to deploy your application immediately after scaffolding. If you don't have your AWS credentials configured correctly, this will use your default profile. We needed a custom profile configuration, since we have several projects on different AWS accounts, so we opted out of the default deploy.

Serverless Init Output

The init script from above outputs the following:

- .gitignore
- handler.js
- package.json
- README.md
- serverless.yml

The key here is the serverless.yml and handler.js files. As you can see, this gives you a standard Express server that is ready to work out of the box (a sketch of the scaffolded handler.js appears at the end of this section). However, we needed to make some quality-of-life changes to help us migrate with confidence, and to allow us to use our API locally for development.

Quality of Life Improvements

There are several things that Serverless Framework doesn't provide out of the box that we needed to help our development process. Fortunately, there are great plugins we were able to install and configure quickly.

Environment Variables

We need per-environment variables, as our OAuth providers are specific per host domain. Serverless Framework supports .env files out of the box, but it does require you to install the dotenv package and turn on the useDotenv flag in serverless.yml.

Babel/TypeScript Support

The scaffolded handler.js uses CommonJS instead of modern JavaScript or TypeScript. To get those, you need webpack or some other bundler. serverless-webpack exists if you want full control over your ecosystem, but there is also serverless-bundle, which gives you a set of reasonable defaults on webpack 4 out of the box. We opted into this option to get us started quickly.
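For reference, the scaffolded handler.js looks roughly like the following. This is an approximation of the Express API template output (which relies on the serverless-http package), not the exact file the init script produced for us:

```js
// handler.js -- approximate shape of the scaffolded file.
const serverless = require("serverless-http");
const express = require("express");

const app = express();

app.get("/", (req, res) => {
  res.status(200).json({ message: "Hello from the scaffolded Express API!" });
});

// serverless-http wraps the Express app into a Lambda-compatible handler.
module.exports.handler = serverless(app);
```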
Offline Mode

With classic Express servers, you can use a simple Node script to get the server up and running for local testing. Serverless applications, however, are meant to run in the AWS ecosystem, which makes local development harder. Luckily for us, David Hérault has built, and continues to maintain, serverless-offline, allowing us to emulate our functions locally before we deploy.

Final Configuration

Given these changes, our serverless.yml now also declares the serverless-bundle and serverless-offline plugins and enables the useDotenv flag, alongside our provider configuration. Some important things to note:

- The order of serverless-bundle and serverless-offline in the plugins list is critically important.
- The custom port for serverless-offline can be any unused port. Keep in mind what port your frontend server is using when setting this value for local development.
- We set the profile and stage in our provider configuration. This allowed us to specify the environment settings and the AWS profile credentials to use for our deployment.

With all this set, we're now ready to deploy the basic API.

Deploying the new API

Serverless deployment is very simple. We can run the following command in the project directory: `serverless deploy`

This command deploys the API to AWS and creates the necessary resources, including the API Gateway and related Lambdas. The first deploy takes roughly 5 minutes, and each subsequent deploy only takes a minute or two! In its output, you'll receive a bunch of information about the deployment, including the deployed URL. You can now point your app at this API and start using it.

Next Steps

A few issues we still have to resolve, but which are easily fixed:

- New Lambdas are not deploying with their environment variables, which have to be set via the AWS console. We're just missing some minor configuration in our serverless.yml.
- Our deploys don't trigger on merges to main. For this, though, we can just use the official Serverless GitHub Action. Alternatively, we could purchase a license for the Serverless Dashboard, but that option is a bit more expensive, and we're not using all of its features on this project. However, we've used it on other client projects, and it really helped us manage and monitor our deployments.

Conclusion

Given all the above steps, we were able to get our API up and running in a few minutes. And because it was a one-to-one replacement for an existing Express server, we were able to port our existing implementation into this new Serverless implementation, deploy it to AWS, and start using it in just a couple of hours. This particular architecture is a great means for bootstrapping a new project, but it does come with some scaling issues for larger projects. As such, we do not recommend a pure Serverless-and-Express setup for monolithic projects; instead, we suggest utilizing the amazing capabilities of Serverless Framework with AWS to horizontally scale your application into smaller Lambda functions.

Let's innovate together!

We're ready to be your trusted technical partners in your digital innovation journey.

Whether it's modernization or custom software solutions, our team of experts can guide you through best practices and how to build scalable, performant software that lasts.

Prefer email? hi@thisdot.co