
Git Basics: Diff and Stash

Getting started with Git

Today’s article covers some of the basics of Git. It assumes that you have already created a GitHub repository or have access to one, and that you have a basic understanding of the command line.

If you haven’t opened a GitHub account yet, I recommend going to GitHub, creating an account, setting up a repository, and following this guide before continuing on.

Now, we’ll move on to a brief rundown of the Git commands that will be used in this article, and then follow it up with how to use each of them.

The Rundown

Git diff

  • This command is used to show changes between commits and the working tree.

Git stash

  • This command is used to stash, or remove, the changes made to your working directory (no worries, these haven’t gone up in smoke).

Git stash pop

  • This command is used to retrieve your most recent stash by popping it from your stash stack.

Git stash list

  • This command is used to display a list of your current stash entries.

Git stash apply

  • This command is used to reapply a git stash while also keeping it in your stash list.

Git Diff

Alright, now we’re going to move on to how to do a git diff. I’m going to open my console and head over to the blog repo I used last time. From here, I’m going to open up my README file with nano and edit it. After saving, I’ll run git status to verify that the changes are showing up. Now, we can see that the file has been edited, but say we don’t know or remember what was changed. In this instance, we can run git diff, and it’ll show us exactly what was changed.

Screenshot: git status

Screenshot: git diff
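
If you want to follow along in your own terminal, the session looks roughly like this (a sketch, assuming your repo has a README.md; the diff output will reflect whatever you changed):

```bash
# Edit the file, then ask Git what changed
nano README.md

git status    # README.md is listed as "modified"
git diff      # shows the edited lines: "-" for removed, "+" for added
```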

Git Stash

Say we decide we don’t want or need those README changes at the moment. We can use git stash. With that done, we’ll run git status, and we can see that those changes are gone. While the changes do appear to be gone, we can easily retrieve them by doing git stash pop. Once again, we’ll run git status and verify that the changes are back.

Screenshot: git stash

Screenshot: git status

Screenshot: git stash pop

Screenshot: git status
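
Here’s the same stash-and-restore round trip as plain commands (a sketch; the stash message you see will name your own branch and latest commit):

```bash
git stash        # working directory changes are saved away and reverted
git status       # "nothing to commit, working tree clean"

git stash pop    # re-applies the most recent stash and drops it from the stack
git status       # README.md shows as modified again
```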

Git stash list & apply

Alright, so we’re going to run git stash again to get rid of our current changes. Then we’re going to edit the README with nano again, and run another git status to verify the changes were made. After that, we’ll do another git stash to get rid of those changes too. Now, with a couple of changes stashed, we can run git stash list to see our list of stashed entries.

Screenshot: git stash

Screenshot: nano changes and git status

Screenshot: git stash (again)

Screenshot: git stash list
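
In command form, the whole sequence looks something like this (a sketch; your branch name and commit messages will differ from the illustrative output below):

```bash
git stash          # stash the README changes we just restored
nano README.md     # make a second, different edit
git status         # verify the new edit shows up
git stash          # stash that one too

git stash list
# stash@{0}: WIP on main: abc1234 latest commit   <- the newer edit
# stash@{1}: WIP on main: abc1234 latest commit   <- our original edit
```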

Now we want our initial changes back. Since they were stashed first, they sit deeper in the stack, at stash@{1}. To get them, we’ll use git stash apply 1 (shorthand for git stash apply stash@{1}). Unlike pop, apply keeps the entry in your stash list, which is useful if you want to apply the same changes to multiple branches.

Screenshot: git stash apply 1
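
And the equivalent commands (a sketch; note that apply leaves the stash list untouched, which you can confirm afterwards):

```bash
git stash apply 1   # same as: git stash apply stash@{1}
git status          # the original README changes are back
git stash list      # stash@{1} is still listed, unlike after a pop
```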

Conclusion

We made it to the end! I hope this article was helpful, and that you were able to learn about, and become more comfortable with, Git and GitHub. We covered some of the basics of Git to help people who are starting out, as well as those who might need a refresher.

This Dot Labs is a development consultancy that is trusted by top industry companies, including Stripe, Xero, Wikimedia, Docusign, and Twilio. This Dot takes a hands-on approach by providing tailored development strategies to help you approach your most pressing challenges with clarity and confidence. Whether it's bridging the gap between business and technology or modernizing legacy systems, you’ll find a breadth of experience and knowledge you need. Check out how This Dot Labs can empower your tech journey.

This strategy falls under the internal domain. These forums also provide synergistic opportunities for developers that are using a product to learn from each other. By working on similar problems, developers are able to bond and feel more ownership or excitement toward a product, increasing user retention. Danny Thompson, a developer influencer and mentor who has built a community of over a quarter million followers, says that he admires Appwrite’s DevRel program, helmed by Tessa Mero, Head of Developer Relations: > “The Appwrite DevRel team is great at answering questions. They are on Discord, jumping on calls with developers, answering questions, and doing office hours, all of which are super valuable in building that community. The main difference between Appwrite DevRel and other teams is, a lot of communities are run very passively and not always available or taking an active approach within community forums to help out.” - Danny Thompson on Appwrite. > “When we think about how to become successful as a company through DevRel, our first consideration is, what made us successful in the first place? Appwrite became an open-source company and a successful open-source project because of community, so we focus on a community-first approach. Contributors and developers that have supported us since before we were a company are what led us to where we are now. Every initiative, every planning, and everything we do on our team, we consider the community's feedback and perspective before we make any decisions.” - Tessa Mero at Appwrite. The Value-First Approach to Developer Relations Successful DevRel programs prioritize delivering value to cultivate credibility among developers and support product adoption free from reciprocal demands. External efforts involve engaging with existing technology communities, establishing credibility through various evangelistic measures, and delivering value to the community. On the other hand, internal programs build communities around their product, facilitating direct communication between developers and the company. These internal forums not only enhance user retention but also foster a space for developers to learn from each other, creating a sense of ownership and excitement around the product. And by diverting equity to these two programs, DevRel teams find new users, retain them, and receive invaluable feedback. Real-world examples, such as Doron Sherman's work at Cloudinary and Tessa Mero's leadership at Appwrite, showcase the effectiveness of DevRel in action, and highlight how DevRel programs contribute to the success and sustainability of developer-focused products. In the ever-evolving landscape of technology, DevRel emerges not only as a bridge between developers and organizations, but as a crucial driver of innovation, ensuring products remain relevant, adaptive, and deeply integrated into the communities they serve. If you’re thinking about building a successful DevRel program for the first time, the best place to start is to reflect on some of your favorite brands and how they connect with the developer community. Do they simply distribute discount codes and free swag, or are they reaching out to their users, and providing them a platform to learn, collaborate with others, and contribute? If they are, what methods do they use, and how do those methods coincide with your team’s existing strengths? And if you ever have any questions or want to connect with a DevRel specialist, do not hesitate to reach out!...