Web Scraping with Deno

Maybe you've had this problem before: you need data from a website, but there isn't a good way to download a CSV or hit an API to get it. In these situations, you may be forced to write a web scraper. A web scraper is a script that downloads the HTML from a website as though it were a normal user, and then it parses that HTML to extract information from it. JavaScript is a natural choice for writing your web scraper, as it's the language of the web and natively understands the DOM. And with TypeScript, it is even better, as the type system helps you navigate the trickiness of HTMLCollections and Element types.

You can write your TypeScript scraper as a Node script, but this has some limitations. TypeScript support isn't native to Node, and Node's support for web APIs is limited; fetch, for example, was only implemented recently. Making your scraper in Node is, to be frank, a bit of a pain. But there's an alternative way to write your scraper in TypeScript: Deno!

Deno is a newer JavaScript/TypeScript runtime, co-created by one of Node's creators, Ryan Dahl. Deno has native support for TypeScript, and it supports web APIs out of the box. With Deno, we can write a web scraper in TypeScript with far fewer dependencies and boilerplate than what you'd need in order to write the same thing in Node.

In this blog, we’re going to build a web scraper using Deno. Along the way, we’ll also explore some of the key advantages of Deno, as well as some differences to be aware of if you’re coming from writing code exclusively for Node.

Getting Started

First, you'll need to install Deno. There are several ways to install it locally, depending on your operating system.
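
For example, the official install script works on macOS and Linux, and package managers like Homebrew are also supported (check the Deno installation docs for the full list of options):

# macOS / Linux
curl -fsSL https://deno.land/install.sh | sh

# macOS, via Homebrew
brew install deno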

After installation, we'll set up our project. We'll keep things simple and start with an empty directory.

mkdir deno-scraper
cd deno-scraper

Now, let's add some configuration. I like to keep my code consistent with linters (e.g. ESLint) and formatters (e.g. Prettier). But Deno doesn't need ESLint or Prettier. It handles linting and formatting itself. You can set up your preferences for how to lint and format your code in a deno.json file. Check out this example:

{
  "compilerOptions": {
    "noUnusedLocals": true,
    "noUnusedParameters": true,
    "noUncheckedIndexAccess": true
  },
  "fmt": {
    "options": {
      "useTabs": true,
      "lineWidth": 80,
      "singleQuote": true
    }
  }
}

You can then lint and format your Deno project with deno lint and deno fmt, respectively.
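
For example, run these from the project root (the --check flag makes deno fmt report unformatted files without rewriting them, which is handy in CI):

deno lint
deno fmt
deno fmt --check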

If you're like me, you like your editor to handle linting and formatting on save. So once Deno is installed, and your linting/formatting options are configured, you'll want to set up your development environment too.

I use Visual Studio Code with the Deno extension. Once installed, I let VS Code know that I'm in a Deno project and that I'd like to auto-format by adding a .vscode/settings.json file with the following contents:

{
	"deno.enable": true,
	"editor.formatOnSave": true,
	"editor.defaultFormatter": "denoland.vscode-deno"
}

Writing a Web Scraper

With Deno installed and a project directory ready to go, let's write our web scraper! And what shall we scrape? Let's scrape the list of third-party Deno modules from the official deno.land/x registry. Our script will pull the top 60 modules listed, and output them to a JSON file.

For this example, we'll only need one script. I'll name it index.ts for simplicity's sake. Inside our script, we'll create three functions: sleep, getModules and getModule.

The first will give us a way to easily wait between fetches. This protects both our script and the owners of the website we're contacting, since it's unkind to flood a site with many requests in quick succession. That kind of automation can look like a Denial of Service (DoS) attack, and could get your IP address banned. The sleep function is fully implemented in the code block below. The second function (getModules) will scrape the first three pages of the Deno third-party modules list, and the third (getModule) will scrape the details for each individual module.

// This sleep function creates a Promise that
// resolves after a given number of milliseconds.
async function sleep(milliseconds: number) {
	return new Promise((resolve) => {
		setTimeout(resolve, milliseconds);
	});
}

async function getModules() {
	// ...
}

async function getModule(path: string | URL) {
	// ...
}

Scraping the Modules List

First, let's write our getModules function. This function will scrape the first three pages of the Deno third-party modules list. Deno supports the fetch API, so we can use that to make our requests. We'll also use the deno_dom module to parse the HTML response into a DOM tree that we can traverse.

Two things we'll want to do upfront: let's import the deno_dom parsing module, and create a type for the data we're trying to scrape.

import { DOMParser } from 'https://deno.land/x/deno_dom/deno-dom-wasm.ts';

interface Entry {
	name?: string;
	author?: string;
	repo?: string;
	href?: string;
	description?: string;
}

Next, we'll set up our initial fetch and data parsing:

async function getModules() {
	// Here, we define the base URL we want to use, the number
	// of pages we want to fetch, and we create an empty array
	// to store our scraped data.
	const BASE_URL = 'https://deno.land/x?page=';
	const MAX_PAGES = 3;
	const entries: Entry[] = [];

	// We'll loop for the number of pages we want to fetch,
	// and parse the contents once available
	for (let page = 1; page <= MAX_PAGES; page++) {
		const url = `${BASE_URL}${page}`;
		const pageContents = await fetch(url).then((res) => res.text());
		// Remember, be kind to the website and wait a second!
		await sleep(1000);

		// Use the deno_dom module to parse the HTML
		const document = new DOMParser().parseFromString(pageContents, 'text/html');

		if (document) {
			// We'll handle this in the next code block
		}
	}
}

Now that we're able to grab the contents of the first three pages of Deno modules, we need to parse the document to get the list of modules we want to collect information for. Then, once we've got the URLs of the modules, we'll want to scrape their individual module pages with getModule.

By passing the text of the page to deno_dom's DOMParser, you can extract information using the same APIs you'd use in the browser, like querySelector and getElementsByTagName. To figure out what to select, use your browser's developer tools to inspect the page and find selectors for the elements you're interested in. For example, in Chrome DevTools, you can right-click on an element and select "Copy > Copy selector" to get a selector for that element.

if (document) {
	// Conveniently, the modules are the only <li> elements
	// on the page. If you're scraping different data from a
	// different website, you'll want to use whatever selectors
	// make sense for the data you're trying to scrape.
	const modules = document.getElementsByTagName('li');

	for (const module of modules) {
		const entry: Entry = {};
		// Here we get the module's name and a short description.
		entry.name = module.querySelector(
			'.text-primary.font-semibold',
		)?.textContent;
		entry.description = module.querySelector('.col-span-2.text-gray-400')
			?.textContent;

		// Here, we get the path to this module's page.
		// The Deno site uses relative paths, so we'll
		// need to add the base URL to the path in getModule.
		const path =
			module.getElementsByTagName('a')[0].getAttribute('href')?.split(
				'?',
			)[0];
		entry.href = `https://deno.land${path}`;

		// We've got all the data we can from just the listing.
		// Time to fetch the individual module page and add
		// data from there.
		let moduleData;
		if (path) {
			moduleData = await getModule(path);
			await sleep(1000);
		}

		// Once we've got everything, push the data to our array.
		entries.push({ ...entry, ...moduleData });
	}
}

Scraping a Single Module

Next we'll write getModule. This function will scrape a single Deno module page, and give us information about it.

If you're following along so far, you might've noticed that the paths we got from the directory of modules look like this:

/x/deno_dom?pos=11&qid=6992af66a1951996c367e6c81c292b2f

But if you navigate to that page in the browser, the URL looks like this:

https://deno.land/x/deno_dom@v0.1.36-alpha

Deno uses redirects to send you to the latest version of a module. We'll need to follow those redirects to get the correct URL for the module page. We can do that with the redirect: 'follow' option in the fetch call. We'll also need to set the Accept header to text/html, or else we'll get a 404.

async function getModule(path: string | URL) {
	const modulePage = await fetch(new URL(`https://deno.land${path}`), {
		redirect: 'follow',
		headers: {
			'Accept':
				'text/html'
		},
	}).then(
		(res) => {
			return res.text();
		},
	);

	const moduleDocument = new DOMParser().parseFromString(
		modulePage,
		'text/html',
	);

	// Parsing will go here...
}

Now we'll parse the module data, just like we did with the directory of modules. Here we grab the link to the module's repository, and pull the author's username out of the GitHub URL (for a repo like https://github.com/hayd/deno-lambda, that would be hayd).

async function getModule(path: string | URL) {
	// ...
	const moduleData: Entry = {};

	const repo = moduleDocument
		?.querySelector('a.link.truncate')
		?.getAttribute('href');

	if (repo) {
		moduleData.repo = repo;
		moduleData.author =
			repo.match(/https?:\/\/(?:www\.)?github\.com\/(.*)\//)![1];
	}
	return moduleData;
}

Writing Our Data to a File

Finally, we'll write our data to a file. We'll use the Deno.writeTextFile API to write our data to a file called output.json.

async function getModules() {
	// ...

	await Deno.writeTextFile('./output.json', JSON.stringify(entries, null, 2));
}

Lastly, we just need to invoke our getModules function to start the process.

getModules();
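
Deno also supports top-level await, so you can await the call directly if you prefer:

await getModules();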

Running Our Script

Deno has security features built in that prevent it from doing things like accessing the file system or the network without explicit permission. We grant these permissions by passing the --allow-net and --allow-write flags when we run the script.

deno run --allow-net --allow-write index.ts
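
If you'd rather not retype the flags every time, recent versions of Deno also let you define a task in deno.json and run it with deno task (the task name scrape here is just an example):

{
  "tasks": {
    "scrape": "deno run --allow-net --allow-write index.ts"
  }
}

deno task scrape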

After we let our script run (which, if you've wisely set a small delay with each request, will take some time), we'll have a new output.json file with data like this:

[
	{
		"name": "flat",
		"description": "A collection of postprocessing utilities for flat",
		"href": "https://deno.land/x/flat",
		"repo": "https://github.com/githubocto/flat-postprocessing",
		"author": "githubocto"
	},
	{
		"name": "lambda",
		"description": "A deno runtime for AWS Lambda. Deploy deno via docker, SAM, serverless, or bundle it yourself.",
		"href": "https://deno.land/x/lambda",
		"repo": "https://github.com/hayd/deno-lambda",
		"author": "hayd"
	},
	// ...
]

Putting It All Together

Tada! A quick and easy way to get data from websites using Deno and our favorite JavaScript browser APIs. You can view the entire script in one piece on GitHub.

In this blog, you've gotten a basic intro to how to use Deno in lieu of Node for simple scripts, and have seen a few of the key differences. If you want to dive deeper into Deno, start with the official documentation. And when you're ready for more, check out the resources available from This Dot Labs at deno.framework.dev, and our Deno backend starter app at starter.dev!

