
Getting Started with Here Maps and Web Components using Lit

This article was written over 18 months ago and may contain information that is out of date. Some content may still be relevant, but please refer to the official documentation or other available resources for the latest information.

Over the past few months, I have been actively working on geolocation-oriented solutions for web applications. My experience has been good so far, thanks to the availability of modern tools and libraries with a wide range of features, including interactive maps and location services.

In this article, I'll explain how to integrate Here Maps through web components, using Lit and TypeScript, in just a few steps.

Introduction

Let's start by going over the technologies we will integrate, step by step, throughout this post.

Lit

Lit is the next-generation library for creating simple, fast, and lightweight web components.

One of the benefits of using Lit is the power of interoperability, which means that your web components can be integrated within any framework, or can even be used on their own. This is possible since we're talking about native web components that can be rendered in any modern web browser.
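To see what this interoperability looks like in practice, a bundled Lit component can be dropped into any page and used like a built-in element, with no framework runtime around it. The file path and tag name below are placeholders for illustration:

```html
<!-- The module registers the custom element; the tag then works in plain HTML. -->
<script type="module" src="./dist/my-element.js"></script>
<my-element></my-element>
```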

If you're interested in learning more about this technology, you can take a look at the LitElement Series that has been published in this blog.

Here Maps

Let's say your application requires real-time navigation, the creation of custom maps, or the ability to visualize datasets over a map. HERE Maps meets these requirements without too much effort.

In fact, there is a Maps API for JavaScript, which can be used to build web applications using either JavaScript or TypeScript.

Note

Although this guide provides an API key for testing purposes only, you will need to register your app and get your own API key to move forward with your applications.

Creating the Project

Project Scaffolding

An easy way to initialize an application is through the project generator from the Open Web Components initiative. It provides a set of tools and guides to create a web components application, with testing, linting, and build support included from the beginning.

Let's create the project using the command-line interface:

npm init @open-wc
# Select "Scaffold a new project" (What would you like to do today?)
# Select "Application" (What would you like to scaffold?)
# You can select any option for (What would you like to add?)
# Yes (Would you like to use TypeScript?)
# Mark/Select "Testing", "Demoing" and "Building" (Would you like to scaffold examples files for?)
# spa-heremaps-lit (What is the tag name of your application/web component?)
# Yes (Do you want to write this file structure to disk?)
# Yes, with npm (Do you want to install dependencies?)

Pay attention to the project structure that is generated. Also, take a look at your package.json file to review the dependencies and scripts that have been added.
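For reference, the generated layout looks roughly like this (the exact files depend on the options you selected during scaffolding; the names below reflect a typical TypeScript setup):

```
spa-heremaps-lit/
├── index.html
├── package.json
├── tsconfig.json
├── web-dev-server.config.mjs
├── src/
│   ├── SpaHeremapsLit.ts
│   └── spa-heremaps-lit.ts
├── demo/
└── test/
```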

Update to Lit

If you generated a project based on LitElement, replacing the dependencies with lit is straightforward:

  • Install the latest version of Lit:
npm install --save lit
  • Uninstall the LitElement and lit-html libraries:
npm uninstall lit-element lit-html

Also, make sure to update the imports as follows:

import { LitElement, html, css } from 'lit';
import { customElement, property } from 'lit/decorators.js';

Next, make sure the preview of the application is working fine using the following command:

npm run start

Creating a Map Component

Installing Here Maps

Before creating the web component that will render the map, we need to load the API code libraries in the index.html file:

<link
  rel="stylesheet"
  type="text/css"
  href="https://js.api.here.com/v3/3.1/mapsjs-ui.css"
/>
<script
  src="https://js.api.here.com/v3/3.1/mapsjs-core.js"
  type="text/javascript"
  charset="utf-8"
></script>
<script
  src="https://js.api.here.com/v3/3.1/mapsjs-service.js"
  type="text/javascript"
  charset="utf-8"
></script>
<script
  src="https://js.api.here.com/v3/3.1/mapsjs-ui.js"
  type="text/javascript"
  charset="utf-8"
></script>
<script
  src="https://js.api.here.com/v3/3.1/mapsjs-mapevents.js"
  type="text/javascript"
  charset="utf-8"
></script>

You can see HERE Maps API for JavaScript Modules for more information about each imported script.

Since the initial project is based on TypeScript, we also need to install the type definitions for the HERE Maps API:

npm install --save-dev @types/heremaps
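The type definitions expose a global H namespace. TypeScript picks up packages under @types automatically by default, but if your tsconfig.json restricts ambient types through a types array, you may need to list the package explicitly:

```json
{
  "compilerOptions": {
    "types": ["heremaps"]
  }
}
```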

Now we're ready to go and code the web component using TypeScript.

The Map Component Implementation

Let's create a src/map/map.ts file with the following content:

// map.ts

import { LitElement, html, css } from 'lit';
import { customElement, property } from 'lit/decorators.js';

@customElement('spa-map')
export class Map extends LitElement {
  static styles = css`
    :host {
      display: block;
      width: 100%;
      height: 100vh;
    }
  `;

  @property({ type: Object }) location: H.geo.IPoint = {
    lat: 40.71427,
    lng: -74.00597,
  };

  render() {
    return html`
      <link
        rel="stylesheet"
        type="text/css"
        href="https://js.api.here.com/v3/3.1/mapsjs-ui.css"
      />
      <div id="spa-map" style="width:100%; height:100vh"></div>
    `;
  }
}

The above code snippet defines a new custom element through the @customElement decorator. It also declares location as a component property so that it can be configured later as:

<spa-map .location="${VALUE}"></spa-map>

Also, the render method returns a TemplateResult object that contains a div element filling all the available space. The <div> element is the HTML container in which the map will be rendered later.

To finish the component implementation, we'll need to write the firstUpdated method.

// map.ts

@customElement('spa-map')
export class Map extends LitElement {
  // Other methods/properties defined above...

  firstUpdated() {
    // Initialize communication with the HERE back-end services.
    const platform = new H.service.Platform({
      apikey: '<API_KEY>',
    });

    // Create the default set of map layers.
    const defaultLayers = platform.createDefaultLayers();

    // Grab the <div> container rendered by this component's template.
    const divContainer = this.shadowRoot?.getElementById(
      'spa-map'
    ) as HTMLDivElement;

    // Instantiate the map centered on the configured location.
    const map = new H.Map(divContainer, defaultLayers.vector.normal.map, {
      center: this.location,
      zoom: 10,
      pixelRatio: window.devicePixelRatio || 1,
    });

    // Enable pan/zoom interactions and the default UI controls.
    const behavior = new H.mapevents.Behavior(new H.mapevents.MapEvents(map));
    const ui = H.ui.UI.createDefault(map, defaultLayers);

    // Keep the map sized to its container when the window resizes.
    window.addEventListener('resize', () => map.getViewPort().resize());
  }
}

Before creating the map object, we need to initialize communication with the back-end services provided by HERE Maps. To do that, an H.service.Platform object is created using the <API_KEY> value you obtained in previous steps.

However, most of the code in the previous block has to do with the initialization of the map:

  • An instance of the H.Map class is created, specifying the container element, the map type to use, the geographic coordinates to center the map on, and a zoom level.
  • In case you need to make your map interactive, an H.mapevents.Behavior object can be created to enable the event system through a MapEvents instance.
  • On the other hand, the default UI components for the map can be added using the H.ui.UI.createDefault method.
  • The resize listener makes sure the map fills the entire container after the window changes size.
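As a common next step beyond this article's setup, markers can be added to the map. The sketch below assumes the HERE scripts from index.html are loaded in the browser (so the global H namespace exists at runtime); the offsetPoint helper is a hypothetical utility, included because it is pure and environment-independent:

```typescript
// Sketch only: H is provided at runtime by mapsjs-core.js in the browser.
declare const H: any;

interface GeoPoint {
  lat: number;
  lng: number;
}

// Pure helper (hypothetical): shift a point by a delta in degrees.
// Handy for placing several markers around a center; runs anywhere.
function offsetPoint(p: GeoPoint, dLat: number, dLng: number): GeoPoint {
  return { lat: p.lat + dLat, lng: p.lng + dLng };
}

// Browser-only: add one marker per point to an existing map instance,
// e.g. the `map` created in firstUpdated.
function addMarkers(map: any, points: GeoPoint[]): void {
  for (const p of points) {
    // H.map.Marker renders the default marker icon at the given point.
    map.addObject(new H.map.Marker(p));
  }
}
```

Calling addMarkers(map, [this.location]) right after the map is created would drop a default marker at the configured center.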

Using the Map Component

As a final step, we'll need to update the spa-heremaps-lit.ts file:

import { LitElement, html, css } from 'lit';
import { customElement, property } from 'lit/decorators.js';

import './map/map';

@customElement('spa-heremaps-lit')
export class SpaHeremapsLit extends LitElement {
  static styles = css`
    :host {
      min-height: 100vh;
      display: flex;
      flex-direction: column;
      align-items: center;
      justify-content: flex-start;
      font-size: calc(10px + 2vmin);
      color: #1a2b42;
      max-width: 960px;
      margin: 0 auto;
      text-align: center;
    }
  `;

  render() {
    return html` <spa-map></spa-map> `;
  }
}

Since this class defines a new custom element that acts as a container for the map component, it's required to import the map component definition using import './map/map'.

Then, inside the class definition we can find:

  • The static styles attribute defines the styles for the component using a tagged template literal (css).
  • The render method returns the HTML content through a tagged template literal (html).
    • The HTML content uses the custom element we created earlier: <spa-map></spa-map>

[Screenshot: HERE Maps example using Lit for Web Components and TypeScript]

The screenshot shows the map rendered in the main component of the application.

Source Code of the Project

Find the latest version of this project in this GitHub repository: spa-heremaps-lit. If you want to see the final code for this article, take a look at the 01-setup-lit-heremaps tag. Do not forget to give it a star ⭐️ and play around with the code.

Feel free to reach out on Twitter if you have any questions. Follow me on GitHub to see more about my work.

