

Introducing the new Serverless, GraphQL, Apollo Server, and Contentful Starter kit

The team at This Dot Labs has released a brand new starter kit which includes the Serverless Framework, GraphQL, Apollo Server and Contentful configured and ready to go. This article will walk through how to set up the new kit, the key technologies used, and reasons why you would consider using this kit.


How to get started setting up the kit

Generate the project

In the command line, start the starter.dev CLI by running the npx @this-dot/create-starter command. Select the Serverless Framework, Apollo Server, and Contentful CMS kit, and name your new project. Then cd into your new project directory and install the dependencies using the tool of your choice (npm, yarn, or pnpm). Finally, run cp .env.example .env to copy the contents of the .env.example file into a new .env file.
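
Condensed, the setup looks like this (assuming npm as your package manager, and with my-project as a placeholder for the name you chose):

   npx @this-dot/create-starter
   cd my-project
   npm install
   cp .env.example .env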

Set up Contentful access

You will first need to create an account on Contentful, if you don't have one already. Once you are logged in, create a new space. From there, go to Settings -> API keys and click on the Content Management Tokens tab. Next, click on the Generate personal token button, give your token a name, and copy the new Personal Access Token. Then, go to Settings -> General settings to find your Space ID. The last step is to add the token and the Space ID to the CONTENTFUL_CONTENT_MANAGEMENT_API_TOKEN and CONTENTFUL_SPACE_ID variables in your .env file.
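
The relevant .env entries should end up looking like this (placeholder values shown):

   CONTENTFUL_CONTENT_MANAGEMENT_API_TOKEN=<your-personal-access-token>
   CONTENTFUL_SPACE_ID=<your-space-id>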

Setting up Docker

You will first need to install Docker Desktop if you don't have it installed already. Once installed, you can start up the Docker container with the npm run infrastructure:up command.

Starting the local server

While the Docker container is running, open up a new tab in the terminal and run npm run dev to start the development server. Open your browser to http://localhost:3000/dev/graphql to bring up Apollo Server.

How to Create the Technology Model in Contentful

To get started with the example model, you will first need to create the model in Contentful.

  1. Log into your Contentful account
  2. Click on the Content Model tab
  3. Click on the Design your Content Model button if this is your first model
  4. Create a new model called Technology
  5. Add three new text fields called displayName, description, and url
  6. Save your new model

How to seed the database with demo data

This starter kit comes with a seeding script that pre-populates data for the Technology Content type.

In the command line, run npm run db:seed, which will add three new data entries to Contentful.
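
Under the hood, a seeding script like this uses the contentful-management SDK. Here is a minimal sketch of the idea; the kit's actual script will differ in its details, and the content type id "technology" is an assumption:

import { createClient } from "contentful-management";

const client = createClient({
  accessToken: process.env.CONTENTFUL_CONTENT_MANAGEMENT_API_TOKEN!,
});

async function seed(): Promise<void> {
  const space = await client.getSpace(process.env.CONTENTFUL_SPACE_ID!);
  const environment = await space.getEnvironment("master");

  // Create and publish one demo entry (field values keyed by locale)
  const entry = await environment.createEntry("technology", {
    fields: {
      displayName: { "en-US": "GraphQL" },
      description: { "en-US": "A query language for your API." },
      url: { "en-US": "https://graphql.framework.dev/" },
    },
  });
  await entry.publish();
}

seed().catch(console.error);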

If you want to see the results from seeding the database, you can execute a small GraphQL query using Apollo Server.

First, make sure Docker and the local server (npm run dev) are running, and then navigate to http://localhost:3000/dev/graphql.

Add the following query:

query TechnologyQuery {
  technologies {
    description
    displayName
    url
  }
}

When you run the query, you should see the following output.

{
  "data": {
    "technologies": [
      {
        "description": "GraphQL provides a strong-typing system to better understand and utilize our API to retrieve and interact with our data.",
        "displayName": "GraphQL",
        "url": "https://graphql.framework.dev/"
      },
      {
        "description": "Node.jsยฎ is an open-source, cross-platform JavaScript runtime environment.",
        "displayName": "Node.js",
        "url": "https://nodejs.framework.dev/"
      },
      {
        "description": "Express is a minimal and flexible Node.js web application framework.",
        "displayName": "Express",
        "url": "https://www.npmjs.com/package/express"
      }
    ]
  }
}

How to work with the migration scripts

Migrations are a way to make changes to your content models and entries. This starter kit comes with a couple of migration scripts that you can study and run to make changes to the demo Technology model. These migration scripts are located in the scripts/migrations directory.

To get started, you will need to first install the contentful-cli.

   npm i -g contentful-cli

You can then log in to Contentful using the contentful-cli.

   contentful login

You will then need to choose the Contentful space where the Technology model is located.

   contentful space use

If you want to modify the existing demo content type, you can run the second migration script from the starter kit.

   contentful space migration scripts/migrations/02-edit-technology-contentType.js -y

If you want to build out more content models using the CLI, you can study the example code in the /scripts/migrations/01-create-technology-contentType.js file. From there, you can create a new migration file, and run the above contentful space migration command.
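
For reference, a minimal migration file might look roughly like the following. The content type id "technology" and the field options are assumptions here; the kit's 01-create-technology-contentType.js is the authoritative version:

module.exports = function (migration) {
  // Create the Technology content type
  const technology = migration.createContentType("technology", {
    name: "Technology",
  });

  // Add the three text fields used by the demo model
  technology.createField("displayName", {
    name: "Display Name",
    type: "Symbol",
    required: true,
  });
  technology.createField("description", {
    name: "Description",
    type: "Text",
  });
  technology.createField("url", {
    name: "URL",
    type: "Symbol",
  });
};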

If you want to learn more about migrations in Contentful, please check out the documentation.

Technologies included in this starter kit

Why use GraphQL?

GraphQL is a query language for your API that makes it easy to fetch all of the data you need in a single request. This starter kit uses GraphQL to query the data from your Contentful space.

Why use Contentful?

Contentful is a headless CMS that makes it easy to create and manage structured data. We have integrated Contentful into this starter kit to make it easy for you to create new entries in the database.

Why use Amazon Simple Queue Service (SQS)?

Amazon Simple Queue Service (SQS) is a queuing service that lets you decouple your components by storing and processing messages in a scalable way.

In this starter kit, the APIGatewayProxyHandler below sends an SQS message using the sendMessage function, and that message is stored in a queue called DemoJobQueue. The sqs-handler function then polls this queue and processes any messages it receives.

import { APIGatewayProxyHandler } from "aws-lambda";
import { sendMessage } from "../utils/sqs";

export const handler: APIGatewayProxyHandler = async (event) => {
  // Parse the incoming request body, defaulting to an empty object
  const body = JSON.parse(event.body || "{}");

  // Send the message to the queue with a random job id
  const resp = await sendMessage({
    id: Math.ceil(Math.random() * 100),
    message: body.message,
  });

  // Report success or failure back to the API caller
  return {
    statusCode: resp.success ? 200 : 400,
    body: JSON.stringify(resp.data),
  };
};
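
On the consuming side, sqs-handler is a standard Lambda SQS handler. As a rough sketch (the kit's actual implementation may differ), it looks something like this:

import { SQSHandler } from "aws-lambda";

export const handler: SQSHandler = async (event) => {
  // SQS delivers messages in batches; each record body is the JSON payload sent via sendMessage
  for (const record of event.Records) {
    const job = JSON.parse(record.body);
    console.log(`Processing job ${job.id}: ${job.message}`);
  }
};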

Why use Apollo Server?

Apollo Server is a production-ready GraphQL server that works with any GraphQL client and data source. When you run npm run dev and open your browser to http://localhost:3000/dev/graphql, you can start querying your Contentful data in no time.
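
Under the hood, the files in src/schema/technology map the schema to the Technology model. As a simplified sketch (assuming a getAll helper exported from models/Technology; check the kit's actual resolver files), the technologies query from earlier resolves roughly like this:

import { getAll } from "../../models/Technology";

export const resolvers = {
  Query: {
    // Resolve the `technologies` query by fetching all entries from Contentful
    technologies: () => getAll(),
  },
};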

Why use the Serverless Framework?

The Serverless Framework deploys your application as AWS Lambda functions, which scale automatically with demand. In the starter kit, you will find a serverless.yml file, which acts as the configuration for the CLI and allows you to deploy your code to your chosen provider.
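
To give a flavor of that configuration, here is a pared-down sketch of what a serverless.yml for this kind of setup can look like (the service name, runtime, and region are illustrative, not the kit's exact values):

service: serverless-apollo-contentful-starter

provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1

functions:
  graphql:
    handler: src/handlers/graphql.handler
    events:
      - http:
          path: graphql
          method: any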

This starter kit also includes a number of Serverless plugins; you can find the full list in the plugins section of serverless.yml.

Why use Redis?

Redis is an open-source, in-memory data store. This starter kit uses Redis to cache data in order to reduce API response times, and to support rate limiting. When you repeat a request, the response is served from the Redis cache instead of hitting Contentful again.
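
A common pattern for this is a cache-aside helper. The following is a sketch only, assuming an ioredis-style client exported from the kit's utils/redis directory; consult the actual redis.ts for the real implementation:

import { redis } from "../utils/redis";

// Return the cached value for `key` if present; otherwise fetch it,
// cache it with a TTL, and return it.
export async function getCached<T>(
  key: string,
  ttlSeconds: number,
  fetchFn: () => Promise<T>
): Promise<T> {
  const hit = await redis.get(key);
  if (hit) return JSON.parse(hit) as T;

  const value = await fetchFn();
  await redis.set(key, JSON.stringify(value), "EX", ttlSeconds);
  return value;
}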

Why use the Jest testing framework?

Jest is a popular testing framework that works well for creating unit tests.

You can see some example test files under the src/schema/technology directory. You can use the npm run test command to run all of the tests.
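
As a flavor of what those specs look like, here is a hypothetical example (not one of the kit's actual tests):

// technology.resolvers.spec.ts (hypothetical)
import { resolvers } from "./technology.resolvers";

describe("technology resolvers", () => {
  it("exposes a technologies query resolver", () => {
    expect(typeof resolvers.Query.technologies).toBe("function");
  });
});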

Project structure

Inside the src directory, you will find the following structure:

.
├── generated
│   └── graphql.ts
├── handlers
│   ├── graphql.ts
│   ├── healthcheck.spec.ts
│   ├── healthcheck.ts
│   ├── sqs-generate-job.spec.ts
│   ├── sqs-generate-job.ts
│   ├── sqs-handler.spec.ts
│   └── sqs-handler.ts
├── models
│   └── Technology
│       ├── create.spec.ts
│       ├── create.ts
│       ├── getAll.spec.ts
│       ├── getAll.ts
│       ├── getById.spec.ts
│       ├── getById.ts
│       ├── index.ts
│       ├── TechnologyModel.spec.ts
│       └── TechnologyModel.ts
├── schema
│   ├── technology
│   │   ├── index.ts
│   │   ├── technology.resolver.spec.ts
│   │   ├── technology.resolvers.ts
│   │   └── technology.typedefs.ts
│   └── index.ts
└── utils
    ├── contentful
    │   ├── contentful-healthcheck.spec.ts
    │   ├── contentful-healthcheck.ts
    │   ├── contentful.spec.ts
    │   ├── contentful.ts
    │   └── index.ts
    ├── redis
    │   ├── index.ts
    │   ├── redis-healthcheck.spec.ts
    │   ├── redis-healthcheck.ts
    │   ├── redis.spec.ts
    │   └── redis.ts
    ├── sqs
    │   ├── client.spec.ts
    │   ├── client.ts
    │   ├── getQueueUrl.spec.ts
    │   ├── getQueueUrl.ts
    │   ├── index.ts
    │   ├── is-offline.spec.ts
    │   ├── is-offline.ts
    │   ├── sendMessage.spec.ts
    │   └── sendMessage.ts
    └── test
        └── mocks
            ├── contentful
            │   ├── entry.ts
            │   └── index.ts
            ├── aws-lambda-handler-context.ts
            ├── graphql.ts
            ├── index.ts
            └── sqs-record.ts

This structure makes it easy to find all of the code and tests related to a specific component. It also follows the single responsibility principle, which means that each file has a single purpose.

How to deploy your application

The Serverless Framework needs access to your cloud provider account so that it can create and manage resources on your behalf. You can follow the guide to get started.

Steps to get started:

  1. Sign up for an AWS account
  2. Create an IAM User and Access Key
  3. Export your AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY credentials.
       export AWS_ACCESS_KEY_ID=<your-key-here>
       export AWS_SECRET_ACCESS_KEY=<your-secret-key-here>
    
  4. Deploy your application on AWS Lambda:
       npm run deploy
    
  5. To deploy a single function, run:
       npm run deploy function --function myFunction
    

To tear down your deployed Serverless application, run:

   serverless remove

For more information on Serverless deployment, check out this article.

What can this starter kit be used for?

This starter kit is very versatile, and can be paired with a front-end application in a variety of situations.

Here are some examples:

  • personal developer blog
  • small e-commerce application

Conclusion

In this article, we looked at how we can get started using the Serverless, GraphQL, Apollo Server, and Contentful Starter kit. We also looked at the different technologies used in the kit, and why they were chosen. Lastly, we looked at how to deploy our application using AWS.

I hope you enjoy working with our new starter kit!

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.

We love helping teams with projects that have missed their deadlines or helping keep your strategic digital initiatives on course. Check out our case studies and our clients that trust us with their engineering.
