
Tips for a Better Amplify Experience: Initial Setup Pitfalls

Learning about CI/CD can be tricky, especially if you're unfamiliar with the technologies available and how to use them. AWS, Amazon's suite of cloud services, can handle a lot of your deployment needs, but getting started comes with a steep learning curve. Whether you're shipping a front-end application or an app that needs a back-end, AWS can simplify some of the DevOps-related tasks for you.

This guide is the first in a series of helpful tips for making your deployments to AWS easier to manage.

In this installment, we'll walk through:

The difference between the AWS CLI and the Amplify CLI
Access and logging in via the Amplify CLI
Using AWS SSO for the Amplify CLI

Note: This guide assumes you've already created an account to utilize AWS features.

AWS CLI vs Amplify CLI

First, aws-cli is different from amplify-cli. aws-cli manages credentials for, and direct access to, AWS services. amplify-cli can also modify access, but it mainly controls deployments of your Amplify project, much like git controls versions of your code. Because both have some level of access control, which one to use, and why, can be confusing.
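
A rough comparison of typical commands from each CLI illustrates the split in responsibilities (the commands are real; the comments are just a sketch of their roles):

# aws-cli: credentials and direct service access
$ aws configure          # store an access key id and secret under a profile
$ aws s3 ls              # call an AWS service using those credentials

# amplify-cli: project and deployment workflow
$ amplify init           # create or connect an Amplify project
$ amplify pull           # pull the latest backend environment locally
$ amplify push           # deploy local backend changes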

Access and Logging in via Amplify CLI

The access type of the account (non-SSO or SSO) determines how you log in to perform amplify-cli commands. With amplify configure, you can set up an access key ID and secret access key. These credentials are stored under a profile name (usually default).

# on macOS and Linux
$ cat ~/.aws/credentials
[default]
aws_access_key_id = <a_key_id>
aws_secret_access_key = <a_secret>

Then, when performing amplify pull, the CLI asks which type of authentication to use:

AWS profile
AWS access keys

Normally, choosing the first option would get you going. That only works because Amplify expects those access credentials to exist behind the profile; without them, choosing a profile gets you nowhere. The profile here isn't just the name stored at ~/.aws/credentials: its credentials are bound to an IAM user set up under the account used to access AWS (verify this by going to IAM > Users).

What happens when you need to switch between multiple AWS accounts, each hosting its own application? Eventually, using access keys becomes a hassle: you'd have to create a new IAM user for each instance, then store multiple profiles in your credentials file with unique profile names to tell them apart. An easier solution is to use account-level access to AWS as the single access point for these app instances.
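
To picture the hassle: with one IAM user per app instance, the credentials file ends up with an entry per app (the profile names below are hypothetical):

# ~/.aws/credentials
[app-one-dev]
aws_access_key_id = <key_id_for_app_one>
aws_secret_access_key = <secret_for_app_one>

[app-two-dev]
aws_access_key_id = <key_id_for_app_two>
aws_secret_access_key = <secret_for_app_two>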

Using AWS SSO for Amplify CLI

One solution to this problem is to set up SSO for your AWS account(s) and then use the same login for every app instance that the SSO account has access to.

To do this, first follow the steps here to configure a named profile to use AWS SSO. Continue on to the section "Using an AWS SSO enabled named profile" to log in, which should open a browser to finalize authentication via aws-cli. However, the AWS doc misses a crucial step before you can use the new profile with Amplify. Notice that the profile was set up for aws-cli, not amplify-cli. The newly created profile lives in the AWS config file:

$ cat ~/.aws/config
[profile my-dev-profile]
sso_start_url = https://my-sso-portal.awsapps.com/start
sso_region = us-east-1
sso_account_id = 123456789011
sso_role_name = readOnly
region = us-west-2
output = json
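
Plain aws-cli is perfectly happy with this profile once you've logged in (a quick check, assuming AWS CLI v2 and the profile name above):

$ aws sso login --profile my-dev-profile   # opens the browser to finish authentication
$ aws s3 ls --profile my-dev-profile       # works: aws-cli resolves SSO profiles on its own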

Why is this a problem? Let's try to perform an amplify pull again. We're asked the same authentication question as before (assuming the previous access has already timed out):

AWS profile
AWS access keys

Choosing the first option again gives an error: the access key and secret access key are missing.

To connect the new profile, simply install aws-sso-util. Once it's installed, selecting the AWS profile option will work for any app.
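
Roughly, the flow looks like this (a sketch; exact flags may differ, so check aws-sso-util's docs):

$ pip install aws-sso-util     # or pipx install aws-sso-util
$ aws-sso-util login           # opens the browser to start an SSO session for the configured profile(s)
$ amplify pull                 # choosing "AWS profile" and picking my-dev-profile now works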

Conclusion

Some AWS fixes aren't obvious, and finding the right information to move development along can be time-consuming. The tips presented here are subject to change due to Amplify's ever-changing ecosystem and the many applications that rely on its core features.

The next article will cover how to perform a migration in AWS using Amplify.

