
Introducing the New Serverless, GraphQL, Apollo Server, and Contentful Starter kit

This article was written over 18 months ago and may contain information that is out of date. Some content may be relevant but please refer to the relevant official documentation or available resources for the latest information.


The team at This Dot Labs has released a brand new starter kit which includes the Serverless Framework, GraphQL, Apollo Server and Contentful configured and ready to go. This article will walk through how to set up the new kit, the key technologies used, and reasons why you would consider using this kit.


How to get started setting up the kit

Generate the project

In the command line, start the starter.dev CLI by running the npx @this-dot/create-starter command. Select the Serverless Framework, Apollo Server, and Contentful CMS kit and name your new project. Then cd into your new project directory and install the dependencies with the package manager of your choice (npm, yarn, or pnpm). Finally, run cp .env.example .env to copy the contents of the .env.example file into a new .env file.

Setup Contentful access

You will first need to create an account on Contentful, if you don't have one already. Once you are logged in, create a new space. From there, go to Settings -> API keys and click on the Content management tokens tab. Next, click on the Generate personal token button and give your token a name, then copy your new personal access token. Then, go to Settings -> General settings to find your space ID. The last step is to add those two values to the CONTENTFUL_CONTENT_MANAGEMENT_API_TOKEN and CONTENTFUL_SPACE_ID variables in your .env file.
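After these steps, your .env file should contain both values. The values below are placeholders, not real credentials:

```shell
# .env — placeholder values for illustration only
CONTENTFUL_CONTENT_MANAGEMENT_API_TOKEN=CFPAT-your-personal-access-token
CONTENTFUL_SPACE_ID=your-space-id
```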

Setting up Docker

You will first need to install Docker Desktop if you don't have it installed already. Once installed, you can start up the Docker container with the npm run infrastructure:up command.

Starting the local server

While the Docker container is running, open a new terminal tab and run npm run dev to start the development server. Then navigate to http://localhost:3000/dev/graphql in your browser to open the Apollo Server sandbox.

How to Create the Technology Model in Contentful

To get started with the example model, you will first need to create the model in Contentful.

  1. Log into your Contentful account
  2. Click on the Content Model tab
  3. Click on the Design your Content Model button if this is your first model
  4. Create a new model called Technology
  5. Add three new text fields called displayName, description, and url
  6. Save your new model

How to seed the database with demo data

This starter kit comes with a seeding script that pre-populates data for the Technology Content type.

In the command line, run npm run db:seed, which will add three new data entries to Contentful.
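A seeding script like this typically loops over a list of demo entries and creates each one via the Contentful Management API. The sketch below substitutes a stub client so it runs standalone; the real script would use the contentful-management SDK configured with your space ID and management token, and the entry data here is only a stand-in.

```typescript
// Hedged sketch of what a db:seed script might do. The `client` object is a
// stub standing in for the contentful-management SDK.
type TechnologyEntry = { displayName: string; description: string; url: string };

const demoEntries: TechnologyEntry[] = [
  { displayName: "GraphQL", description: "A query language for APIs.", url: "https://graphql.framework.dev/" },
  { displayName: "Node.js", description: "A JavaScript runtime environment.", url: "https://nodejs.framework.dev/" },
  { displayName: "Express", description: "A minimal Node.js web framework.", url: "https://www.npmjs.com/package/express" },
];

// Stub client: records what would be sent to Contentful.
const created: TechnologyEntry[] = [];
const client = {
  createEntry: async (contentType: string, fields: TechnologyEntry) => {
    created.push(fields);
    return { contentType, fields };
  },
};

export async function seed(): Promise<number> {
  for (const entry of demoEntries) {
    await client.createEntry("technology", entry);
  }
  return created.length;
}
```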

If you want to see the results from seeding the database, you can execute a small GraphQL query using Apollo server.

First, make sure Docker and the local server (npm run dev) are running, and then navigate to http://localhost:3000/dev/graphql.

Add the following query:

query TechnologyQuery {
  technologies {
    description
    displayName
    url
  }
}

When you run the query, you should see the following output.

{
  "data": {
    "technologies": [
      {
        "description": "GraphQL provides a strong-typing system to better understand and utilize our API to retrieve and interact with our data.",
        "displayName": "GraphQL",
        "url": "https://graphql.framework.dev/"
      },
      {
        "description": "Node.js® is an open-source, cross-platform JavaScript runtime environment.",
        "displayName": "Node.js",
        "url": "https://nodejs.framework.dev/"
      },
      {
        "description": "Express is a minimal and flexible Node.js web application framework.",
        "displayName": "Express",
        "url": "https://www.npmjs.com/package/express"
      }
    ]
  }
}

How to work with the migration scripts

Migrations are a way to make changes to your content models and entries. This starter kit comes with a couple of migration scripts that you can study and run to make changes to the demo Technology model. These migration scripts are located in the scripts/migrations directory.

To get started, you will need to first install the contentful-cli.

   npm i -g contentful-cli

You can then login to Contentful using the contentful-cli.

   contentful login

You will then need to choose the Contentful space where the Technology model is located.

   contentful space use

If you want to modify the existing demo content type, you can run the second migration script from the starter kit.

   contentful space migration scripts/migrations/02-edit-technology-contentType.js -y

If you want to build out more content models using the CLI, you can study the example code in the /scripts/migrations/01-create-technology-contentType.js file. From there, you can create a new migration file, and run the above contentful space migration command.

If you want to learn more about migrations in Contentful, then please check out the documentation.

Technologies included in this starter kit

Why use GraphQL?

GraphQL is a query language for your API that makes it easy to fetch all of the data you need in a single request. This starter kit uses GraphQL to query the data in your Contentful space.

Why use Contentful?

Contentful is a headless CMS that makes it easy to create and manage structured content. We have integrated Contentful into this starter kit so that you can create new entries in your space quickly.

Why use Amazon Simple Queue Service (SQS)?

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that lets you decouple your components and store and process messages at scale.

In this starter kit, an SQS message is sent by the APIGatewayProxyHandler using the sendMessage function and stored in a queue called DemoJobQueue. The sqs-handler function polls this queue and processes any messages it receives.

import { APIGatewayProxyHandler } from "aws-lambda";
import { sendMessage } from "../utils/sqs";

export const handler: APIGatewayProxyHandler = async (event) => {
  // Parse the request body, falling back to an empty object
  const body = JSON.parse(event.body || "{}");

  // Send the message to the queue with a random job id
  const resp = await sendMessage({
    id: Math.ceil(Math.random() * 100),
    message: body.message,
  });

  return {
    statusCode: resp.success ? 200 : 400,
    body: JSON.stringify(resp.data),
  };
};
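On the consuming side, the sqs-handler works roughly like the self-contained sketch below. The SQSEvent and SQSRecord shapes are declared inline for illustration (the real handler imports them from aws-lambda), and the job-processing logic is a stand-in, not the kit's actual code.

```typescript
// Hedged sketch of an SQS consumer like src/handlers/sqs-handler.ts.
// Minimal event shapes are declared inline so the sketch is self-contained.
interface SQSRecord {
  body: string;
}
interface SQSEvent {
  Records: SQSRecord[];
}

export const handler = async (event: SQSEvent): Promise<{ processed: number }> => {
  let processed = 0;
  for (const record of event.Records) {
    // Each record body is the JSON payload produced by sendMessage
    const job = JSON.parse(record.body) as { id: number; message: string };
    console.log(`Processing job ${job.id}: ${job.message}`);
    processed++;
  }
  return { processed };
};
```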

Why use Apollo Server?

Apollo Server is a production-ready GraphQL server that works with any GraphQL client and data source. When you run npm run dev and open your browser to http://localhost:3000/dev/graphql, you can start querying your Contentful data right away.
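Conceptually, the server boils down to a set of type definitions plus resolvers. The sketch below is simplified and self-contained: the hard-coded array stands in for the Contentful-backed model, and the kit's real definitions live under src/schema/technology.

```typescript
// Simplified sketch of the schema wiring; in the kit, the data comes from
// Contentful rather than this hard-coded array.
export const typeDefs = `
  type Technology {
    displayName: String!
    description: String!
    url: String!
  }

  type Query {
    technologies: [Technology!]!
  }
`;

export const resolvers = {
  Query: {
    // Stand-in resolver: the real one delegates to the Technology model
    technologies: () => [
      { displayName: "GraphQL", description: "A query language for APIs.", url: "https://graphql.framework.dev/" },
    ],
  },
};
```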

Why use the Serverless Framework?

The Serverless Framework helps your application auto-scale by deploying it as AWS Lambda functions. In the starter kit, you will find a serverless.yml file, which configures the Serverless CLI and allows you to deploy your code to your chosen provider.
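As a rough illustration only (this is not the kit's actual file; the service name, runtime, region, and handler path are assumptions), a serverless.yml for a setup like this might look like:

```yaml
# Illustrative sketch — values are assumptions, not copied from the kit
service: serverless-apollo-contentful-starter

provider:
  name: aws
  runtime: nodejs16.x
  region: us-east-1

functions:
  graphql:
    handler: src/handlers/graphql.handler
    events:
      - http:
          path: /graphql
          method: post
```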

This starter kit also ships with several Serverless plugins preconfigured.

Why use Redis?

Redis is an open-source, in-memory data store. This starter kit uses Redis to cache API responses, which reduces response times and supports rate limiting. Repeated requests are served from the Redis cache instead of hitting the API again.
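The caching strategy here is the classic cache-aside pattern. In the sketch below, a Map stands in for the Redis client so the example is self-contained; the real kit wraps an actual Redis connection in src/utils/redis, and the function name getCached is an invented example.

```typescript
// Cache-aside sketch: check the cache first, fall back to the upstream
// fetcher on a miss, and store the result for subsequent requests.
type Fetcher<T> = () => Promise<T>;

const cache = new Map<string, string>(); // stand-in for a Redis client

export async function getCached<T>(key: string, fetch: Fetcher<T>): Promise<T> {
  const hit = cache.get(key);
  if (hit !== undefined) {
    return JSON.parse(hit) as T; // cache hit: skip the upstream call
  }
  const value = await fetch(); // cache miss: fetch, then store
  cache.set(key, JSON.stringify(value));
  return value;
}
```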

Why use the Jest testing framework?

Jest is a popular testing framework that works well for creating unit tests.

You can find example test files under the src/schema/technology directory, and run all of the tests with the npm run test command.

Project structure

Inside the src directory, you will find the following structure:

.
├── generated
│   └── graphql.ts
├── handlers
│   ├── graphql.ts
│   ├── healthcheck.spec.ts
│   ├── healthcheck.ts
│   ├── sqs-generate-job.spec.ts
│   ├── sqs-generate-job.ts
│   ├── sqs-handler.spec.ts
│   └── sqs-handler.ts
├── models
│   └── Technology
│       ├── create.spec.ts
│       ├── create.ts
│       ├── getAll.spec.ts
│       ├── getAll.ts
│       ├── getById.spec.ts
│       ├── getById.ts
│       ├── index.ts
│       ├── TechnologyModel.spec.ts
│       └── TechnologyModel.ts
├── schema
│   ├── technology
│   │   ├── index.ts
│   │   ├── technology.resolver.spec.ts
│   │   ├── technology.resolvers.ts
│   │   └── technology.typedefs.ts
│   └── index.ts
└── utils
    ├── contentful
    │   ├── contentful-healthcheck.spec.ts
    │   ├── contentful-healthcheck.ts
    │   ├── contentful.spec.ts
    │   ├── contentful.ts
    │   └── index.ts
    ├── redis
    │   ├── index.ts
    │   ├── redis-healthcheck.spec.ts
    │   ├── redis-healthcheck.ts
    │   ├── redis.spec.ts
    │   └── redis.ts
    ├── sqs
    │   ├── client.spec.ts
    │   ├── client.ts
    │   ├── getQueueUrl.spec.ts
    │   ├── getQueueUrl.ts
    │   ├── index.ts
    │   ├── is-offline.spec.ts
    │   ├── is-offline.ts
    │   ├── sendMessage.spec.ts
    │   └── sendMessage.ts
    └── test
        └── mocks
            ├── contentful
            │   ├── entry.ts
            │   └── index.ts
            ├── aws-lambda-handler-context.ts
            ├── graphql.ts
            ├── index.ts
            └── sqs-record.ts

This structure makes it easy to find all of the code and tests related to a specific component. It also follows the single responsibility principle: each file has a single purpose.

How to deploy your application

The Serverless Framework needs access to your cloud provider account so that it can create and manage resources on your behalf. You can follow the guide to get started.

Steps to get started:

  1. Sign up for an AWS account
  2. Create an IAM User and Access Key
  3. Export your AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY credentials.
       export AWS_ACCESS_KEY_ID=<your-key-here>
       export AWS_SECRET_ACCESS_KEY=<your-secret-key-here>
    
  4. Deploy your application on AWS Lambda:
       npm run deploy
    
  5. To deploy a single function, run:
       npm run deploy function --function myFunction
    

To remove your deployed Serverless application and its resources, run:

   serverless remove

For more information on Serverless deployment, check out this article.

What can this starter kit be used for?

This starter kit is very versatile and can be paired with a front-end application in a variety of situations.

Here are some examples:

  • personal developer blog
  • small e-commerce application

Conclusion

In this article, we looked at how we can get started using the Serverless, GraphQL, Apollo Server, and Contentful Starter kit. We also looked at the different technologies used in the kit, and why they were chosen. Lastly, we looked at how to deploy our application using AWS.

I hope you enjoy working with our new starter kit!

