
Vue 3.2 - Using Composition API with Script Setup

Introduction

Vue 3 introduced the Composition API as a new way to work with reactive state in a Vue application. Rather than organizing code by option type (data, computed, methods, watch, etc.), you can group code by feature (users, API, form). This allows for greater flexibility while building a Vue application. We've already talked about the Composition API in other articles (if you haven't read them, check them out!), but with the release of Vue 3.2, another Composition-related feature has been released as stable - <script setup>.

In short, <script setup> allows developers to define a component without having to export anything from the JavaScript block - simply define your variables and use them in your template! This style of writing a component resembles Svelte in many ways, and is a massive improvement for anyone coming to Vue for the first time.

<script setup> Basics

Let's look at an example. If you were using the Options API (the standard in Vue 2), all of your single-file components would look something like this:

<template>
  <div>Hello, {{ name }}!</div>
  <input v-model="name" />
  <button :disabled="!isNamePresent" @click="submitName">Submit</button>
</template>

<script>
export default {
  data() {
    return {
      name: ''
    }
  },
  computed: {
    isNamePresent() {
      return this.name.length > 0
    }
  },
  methods: {
    submitName() {
      console.log(this.name)
    }
  }
}
</script>

We have our template (a simple form), and our script block. Within the script block, we export an object with three keys: data, computed, and methods. If you've worked with Vue before, this structure should look familiar. Now, let's switch this code to use the Composition API.

<template>
  <div>Hello, {{ name }}!</div>
  <input v-model="name" />
  <button :disabled="!isNamePresent" @click="submitName">Submit</button>
</template>

<script>
import { ref, computed } from 'vue'

export default {
  setup() {
    const name = ref('')
    const isNamePresent = computed(() => name.value.length > 0)

    function submitName() {
      console.log(name.value)
    }

    return {
      name,
      isNamePresent,
      submitName
    }
  }
}
</script>

Our component does the exact same thing as before. We define our state (name), a computed property (isNamePresent), and our submit function. If any of this is unfamiliar, check out my previous articles on the Vue Composition API. Rather than having to scaffold our application within an exported object, we are free to define our variables however we want. This flexibility also allows us to extract repeated logic from the component if we want to, but in this case our component is pretty straightforward.
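For example, if several components needed this same name-handling logic, it could be pulled out into its own reusable function (commonly called a composable). Here's a minimal sketch - the useName function and its file are hypothetical, not part of the original example:

// useName.js - a hypothetical composable, not part of the original example
import { ref, computed } from 'vue'

export function useName() {
  const name = ref('')
  const isNamePresent = computed(() => name.value.length > 0)

  function submitName() {
    console.log(name.value)
  }

  return { name, isNamePresent, submitName }
}

A component's setup function could then call const { name, isNamePresent, submitName } = useName() and return those values exactly as before.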

However, we still have that awkward export default statement. Our code all lives within the setup function, while the rest is really just boilerplate. Can't we just remove it? Actually, we can now! This is where <script setup> comes in. Let's switch to <script setup> instead of the standard script block.

<template>
  <div>Hello, {{ name }}!</div>
  <input v-model="name" />
  <button :disabled="!isNamePresent" @click="submitName">Submit</button>
</template>

<script setup>
import { ref, computed } from 'vue'

const name = ref('')
const isNamePresent = computed(() => name.value.length > 0)

function submitName() {
  console.log(name.value)
}
</script>

Let's go over what changed here. First, we added the word "setup" to our script tag, which enables this new mode for writing Vue components. Second, we moved the code that lived inside the setup function to the top level of the block, replacing the exported object entirely. And everything works as expected!

Note that everything declared within the script tags is available in the template of your component. This includes non-reactive variables or constants, as well as utility functions or other libraries. The major benefit of this is that you no longer need to manually expose values from an external file (Constants.js, for example) on your component - Vue handles this for you now.
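For instance, an imported constant or helper function can be referenced directly in the template with no extra wiring. A minimal sketch, assuming a hypothetical Constants.js file and formatDate helper:

<template>
  <div>{{ APP_NAME }}</div>
  <div>{{ formatDate(new Date()) }}</div>
</template>

<script setup>
// Both imports are hypothetical - the point is that anything in scope
// here can be used directly in the template above.
import { APP_NAME } from './Constants.js'
import { formatDate } from './utils/formatDate.js'
</script>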

Additional Features

You may be wondering how to handle some of the core aspects of writing Vue components, like utilizing other components or defining props. Vue 3.2 has us covered for those use cases as well! Let's take a look at some of the additional features provided by this approach to building Vue single-file components.

Defining Components

When using <script setup>, we don't have to manually register our imported components anymore. Simply importing a component into the file is enough for the compiler to make it available in our template. Let's update our component by abstracting the form into its own component. We'll call it Form.vue. For now, it will simply be the template, and we'll get to the logic in a moment.

<!-- Form.vue -->
<template>
  <form @submit.prevent="submitHandler">
    <label>Name
      <input type="text" />
    </label>
    <button>Submit</button>
  </form>
</template>

<script setup>
function submitHandler() {
  // Do something
}
</script>

<!-- App.vue -->
<template>
  <div>Hello, {{ name }}!</div>
  <Form />
</template>

<script setup>
import { ref } from 'vue'
import Form from './components/Form.vue'

const name = ref('')

function submitForm() {
  console.log(name.value)
}
</script>

That's it! All we have to do is import the component into our Vue file, and it's automatically available in our template. No more components block taking up space in our file!

Now, we need to pass name into our child component as a prop. But wait, we can't define props! We don't have an object to add the props option to! Also, we need to emit that the form was submitted so that we can trigger our submission. How can we define what our child component emits?

defineProps and defineEmits

We can still define our component's props and emits by using the new helpers defineProps and defineEmits. From the Vue docs, "defineProps and defineEmits are compiler macros only usable inside <script setup>. They do not need to be imported, and are compiled away when <script setup> is processed." These compile-time functions take the same arguments as the props and emits options would in a standard exported object. Let's update our app to use defineProps and defineEmits.

<!-- Form.vue -->
<template>
  <form @submit.prevent="submitHandler">
    <label>Name
      <input v-model="name" type="text" />
    </label>
    <button>Submit</button>
  </form>
</template>

<script setup>
import { computed } from 'vue'
const props = defineProps({
  modelValue: {
    type: String,
    default: ''
  }
})
const emit = defineEmits(['update:modelValue', 'submit'])

const name = computed({
  get() {
    return props.modelValue
  },
  set(val) {
    emit('update:modelValue', val)
  }
})

function submitHandler() {
  emit('submit')
}
</script>

<!-- App.vue -->
<template>
  <div>Hello, {{ name }}!</div>
  <Form v-model="name" @submit="submitForm" />
</template>

<script setup>
import { ref } from 'vue'
import Form from './components/Form.vue'

const name = ref('')

function submitForm() {
  console.log(name.value)
}
</script>

Let's go over what changed here.

  • First, we used defineProps to expect a modelValue (the expected prop for use with v-model in Vue 3).
  • We then defined our emits with defineEmits, so that we are both declaring what this component emits and getting access to the emit function (previously available as this.$emit).
  • Next, we create a computed property that uses a custom getter and setter. We do this so we can easily use v-model on our form input, but it's not a requirement. The getter returns our prop, while the setter emits the update event to our parent component.
  • Last of all, we hook up our submitHandler function to emit a submit event as well.

Our App.vue component is more or less as we left it, with the addition of v-model="name" and @submit="submitForm" to the Form child component. With that, our application is working as expected again!

Other Features

There are a lot more features available to us here, but they have fewer use cases in a typical application.

  • Dynamic Components - Since our components are immediately available in the template, we can utilize them when writing a dynamic component (<component :is="Form" />, for example).
  • Namespaced Components - If you have a number of components imported from the same file, these can be namespaced by using the import * as Form syntax. You then have access to <Form.Input> or <Form.Submit>, for example, without any extra work on your part.
  • Top-Level Await - If you need to make an API request as part of the setup for a component, you are free to use async/await syntax at the top level of your component - no wrapping in an async function required! Keep in mind that a component that utilizes this must be wrapped externally by a <Suspense> component - read more here to learn how to use Suspense in Vue, and see the sketch after this list.
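Here's a minimal sketch of that last point. The UserProfile component, the API URL, and the response shape are all made up for illustration:

<!-- UserProfile.vue -->
<template>
  <div>{{ user.name }}</div>
</template>

<script setup>
// Top-level await - no async wrapper function needed.
// Using await here makes this component an async dependency of <Suspense>.
const response = await fetch('https://example.com/api/user')
const user = await response.json()
</script>

<!-- App.vue -->
<template>
  <Suspense>
    <UserProfile />
    <template #fallback>Loading...</template>
  </Suspense>
</template>

<script setup>
import UserProfile from './components/UserProfile.vue'
</script>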

Another point to keep in mind is that you aren't locked into using <script setup>. If you are using this new syntax for a component and run into a case where you aren't able to get something done, or simply want to use the Options syntax for a particular feature, you are free to do so by adding an additional <script> block to your component. Vue will merge the two for you, so your Composition code and Options code can live side by side. This can be extremely useful when using frameworks like Nuxt that provide additional options on top of the standard Options syntax that are not exposed in <script setup>. See the Vue docs for a great example of this.
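As a rough illustration of mixing the two, here's a minimal sketch - inheritAttrs is just one example of an option you might need to declare this way:

<script>
// A normal <script> block runs once, when the module is first imported.
// Use it for options that can't be expressed inside <script setup>.
export default {
  inheritAttrs: false
}
</script>

<script setup>
// <script setup> still runs for each component instance, as before.
import { ref } from 'vue'

const name = ref('')
</script>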

Conclusion

This is a big step forward for Vue and the Composition API. In fact, Evan You has gone on record saying this is intended to be the standard syntax for Vue single-file components going forward. From a discussion on GitHub:

There's some history in this because the initial proposal for Composition API indicated the intention to entirely replace Options API with it, and was met with a backlash. Although we did believe that Composition API has the potential to be "the way forward" in the long run, we realized that (1) there were still ergonomics/tooling/design issues to be resolved and (2) a paradigm shift can't be done in one day. We need time and early adopters to validate, try, adopt and experiment around the new paradigm before we can confidently recommend something new to all Vue users.

That essentially led to a "transition period" during which we intentionally avoided declaring Composition API as "the new way" so that we can perform the validation process and build the surrounding tooling/ecosystem with the subset of users who proactively adopted it.

Now that <script setup> has shipped, along with improvements in IDE tooling support, we believe Composition API has reached a state where it provides superior DX and scalability for most users. But we needed time to get to this point.

Earlier in that same thread, Evan expressed his views on what development looks like going forward for Vue:

The current recommended approach is:

  • Use SFC + <script setup> + Composition API
  • Use VSCode + Volar (or WebStorm once its support for <script setup> ships soon)
  • Not strictly required for TS, but if applicable, use Vite for build tooling.

If you're looking to use Vue 3 for either a new or existing application, I highly recommend trying out this new format for writing Vue single-file components. Looking to try it out? Here's a Stackblitz project using Vite and the example code above.

