June 7, 2025
AI news

HERE Technologies boosts developer productivity with a new generative AI-powered coding assistant


This blog post is co-written with Jonas Neuman from HERE Technologies.

HERE Technologies, a 40-year pioneer in map and location technology, collaborated with the AWS Generative AI Innovation Center (GenAIIC) to improve developer productivity with a generative AI-powered coding assistant. This innovative tool is designed to improve the onboarding experience for HERE's self-service Maps API for JavaScript. HERE's use of the assistant empowers its global developer community to quickly translate natural language questions into interactive map visualizations, streamlining the evaluation and adoption of HERE map services.

New developers who try these APIs for the first time often start with questions such as "How can I draw a route from point A to B?" or "How can I display a circle around a point?" Although HERE's API documentation is extensive, HERE recognized that accelerating the onboarding process could significantly increase developer engagement. They aim to improve retention rates and create proficient product advocates through personalized experiences.

To create a solution, HERE collaborated with GenAIIC. Our joint mission was to create an intelligent coding assistant that could provide executable code solutions and explanations in response to users' natural language questions. The requirement was to build a scalable system that could translate natural language questions into HTML code with embedded JavaScript, ready for immediate rendering as an interactive map that users can see on screen.

The team had to build a solution that met the following requirements:

  • Provide value and reliability by generating correct, renderable code that is relevant to a user's question
  • Support a natural and productive developer interaction by providing code and explanations at low latency (as of this writing, about 60 seconds) while maintaining conversational context for follow-up questions
  • Preserve the integrity and usefulness of the feature within HERE's system and brand by applying strong filters for irrelevant or infeasible questions
  • Offer a reasonable system cost to maintain a positive ROI when scaled across the API system

Together, HERE and GenAIIC built a solution based on Amazon Bedrock that balanced these goals with inherent trade-offs. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. With Amazon Bedrock, you can experiment with and evaluate different FMs, customize them using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks. Amazon Bedrock is serverless, so it alleviates infrastructure management needs and integrates seamlessly with existing AWS services.
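As an illustration of what "a single API" means in practice, the sketch below builds a request for the Amazon Bedrock Converse API. The model ID and inference parameters are assumptions for illustration, and the actual network call is left as a comment so the sketch stays self-contained:

```python
# Sketch: constructing a request for the Amazon Bedrock Converse API.
# The model ID and inference parameters are illustrative assumptions;
# in a real application you would send this payload with
#   boto3.client("bedrock-runtime").converse(**request)

def build_converse_request(
    question: str,
    model_id: str = "anthropic.claude-3-haiku-20240307-v1:0",
) -> dict:
    """Build a Converse API request for a single user question."""
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": question}]}
        ],
        "inferenceConfig": {"maxTokens": 1024, "temperature": 0.2},
    }

request = build_converse_request("How can I display a circle around a point?")
print(request["modelId"])
```

Because the message format is model-agnostic, swapping in a different FM is a one-line change to `model_id`.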

Built on a comprehensive suite of AWS managed and serverless services, including Amazon Bedrock FMs, Amazon Bedrock Knowledge Bases for RAG implementation, Amazon Bedrock Guardrails for content filtering, and Amazon DynamoDB for conversation management, the coding assistant rests on a robust infrastructure. The result is a practical, user-friendly tool that can enhance the developer experience and provide a novel way to explore APIs and rapidly prototype location and navigation experiences.

In this post, we describe the details of how this was achieved.

Data

We used the following resources as part of this solution:

  • Domain documentation – We used two publicly available sources: the HERE Maps API for JavaScript Developer Guide and the HERE Maps API for JavaScript API Reference. The Developer Guide provides conceptual explanations, and the API Reference provides detailed information about each API function.
  • Sample examples – HERE provided 60 examples, each containing a user question, an HTML/JavaScript code solution, and a brief description. These examples span numerous categories, including geodata, markers, and geoshapes, and were split into training and test sets.
  • Out-of-scope questions – HERE provided samples of questions beyond the HERE Maps API for JavaScript domain, which the large language model (LLM) should not respond to.
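To make the data concrete, here is a minimal sketch of how one curated example and the train/test split might be represented. The field names and split ratio are assumptions for illustration, not HERE's actual schema:

```python
# Sketch of one curated example record; field names are assumed for
# illustration and are not HERE's actual schema.
example = {
    "question": "How can I display a circle around a point?",
    "solution": "<html>...HTML with embedded HERE Maps JavaScript...</html>",
    "description": "Draws a circle centered on a given coordinate.",
    "category": "geoshapes",
}

def split_examples(examples: list, train_fraction: float = 0.8) -> tuple:
    """Split curated examples into training and test sets (assumed 80/20)."""
    cut = int(len(examples) * train_fraction)
    return examples[:cut], examples[cut:]

train, test = split_examples([example] * 60)
print(len(train), len(test))  # → 48 12
```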

Solution overview

To develop the coding assistant, we designed and implemented a RAG workflow. Although standard LLMs can generate code, they often work with outdated knowledge and can't adapt to the latest HERE Maps API for JavaScript changes or best practices. The HERE Maps API for JavaScript documentation can significantly enhance the coding assistant by providing accurate, up-to-date context. Storing the Maps API for JavaScript documentation in a vector database allows the coding assistant to retrieve the relevant snippets for user questions. This allows the LLM to ground its responses in official documentation rather than potentially outdated training data, leading to more accurate code suggestions.
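The retrieve-then-generate shape of this workflow can be illustrated with a toy retriever. A real deployment queries the managed vector index in Amazon Bedrock Knowledge Bases; the keyword-overlap scoring and chunk contents below are stand-in assumptions that only show the flow:

```python
# Toy retrieval sketch: rank documentation chunks by keyword overlap with
# the question. A real deployment would instead query the vector index in
# Amazon Bedrock Knowledge Bases; this stand-in only shows the flow shape.

def _terms(text: str) -> set:
    return set(text.lower().replace("?", " ").replace(".", " ").split())

def retrieve(question: str, chunks: list, top_k: int = 2) -> list:
    """Return the top_k chunks most similar to the question."""
    q_terms = _terms(question)
    return sorted(
        chunks,
        key=lambda c: len(q_terms & _terms(c)),
        reverse=True,
    )[:top_k]

chunks = [
    "H.map.Marker places a marker at a geographic point on the map",
    "H.ui.InfoBubble opens an info bubble anchored at a position",
    "H.map.Circle renders a circle with a given radius around a center",
]
docs = retrieve("How can I display a circle around a point?", chunks)
print(docs[0])  # → the H.map.Circle chunk ranks first
```

The retrieved chunks are then placed in the LLM prompt alongside the user question, which is what grounds the generated code in current documentation.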

The following diagram illustrates the general architecture.

The solution architecture includes four main modules:

  1. Follow-up question module – This module enables follow-up question answering by handling conversational context. Conversation histories are stored in DynamoDB and retrieved when users ask a new question. If a conversation history exists, it is combined with the new question. The LLM then processes it to reformulate follow-up questions into standalone questions for downstream processing. The module maintains context awareness while recognizing topic changes, preserving the original question when the new question deviates from the previous conversation context.
  2. Scope filtering and safeguards module – This module evaluates whether questions are within the HERE Maps API for JavaScript domain and determines their feasibility. We applied Amazon Bedrock Guardrails and Anthropic's Claude 3 Haiku on Amazon Bedrock to filter out-of-scope questions. With a short natural language description, Amazon Bedrock Guardrails helps define a set of out-of-scope topics to block for the coding assistant, for example topics about other HERE products. Amazon Bedrock Guardrails also helps filter harmful content containing topics such as hate speech, insults, sex, violence, and misconduct (including criminal activity), and helps protect against prompt attacks. This makes sure the coding assistant follows responsible AI policies. For in-scope questions, we use Anthropic's Claude 3 Haiku to evaluate feasibility by analyzing both the user question and the retrieved domain documents. We selected Claude 3 Haiku for its optimal balance of performance and speed. The system generates standard responses for out-of-scope or infeasible questions, and feasible questions proceed to response generation.
  3. Knowledge base module – This module uses Amazon Bedrock Knowledge Bases for document indexing and retrieval operations. Amazon Bedrock Knowledge Bases is a comprehensive managed service that simplifies the end-to-end RAG process. It handles everything from data ingestion to indexing, retrieval, and generation automatically, removing the complexity of building and maintaining custom integrations and managing data flows. The multiple options for document chunking, embedding generation, and retrieval methods offered by Amazon Bedrock Knowledge Bases make it highly adaptable and allowed us to test and identify the optimal configuration. We created two separate indexes, one for each domain document. This dual-index approach makes sure content is retrieved from both documentation sources for response generation. The indexing process implements hierarchical chunking with the Cohere Embed English V3 model on Amazon Bedrock, and semantic search is used for document retrieval.
  4. Response generation module – The response generation module processes in-scope and feasible questions using Anthropic's Claude 3.5 Sonnet model on Amazon Bedrock. It combines user questions with the retrieved documents to generate HTML code with embedded JavaScript, capable of rendering interactive maps. In addition, the module provides a concise summary of the key solution points. We selected Claude 3.5 Sonnet for its superior code generation capabilities.
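A minimal sketch of the follow-up question handling in Module 1 is shown below. A real implementation would load and persist the history in Amazon DynamoDB (for example via `boto3`) and send the prompt to an LLM; here an in-memory dict stands in for the table, and only the prompt construction is shown:

```python
# Sketch of Module 1: combine stored conversation history with a new
# question so an LLM can rewrite it as a standalone question.
# An in-memory dict stands in for the DynamoDB conversation-history table.
history_store = {}  # session_id -> list of (question, answer) turns

def build_reformulation_prompt(session_id: str, new_question: str) -> str:
    """Build the prompt an LLM would use to make a question standalone."""
    history = history_store.get(session_id, [])
    if not history:
        # No prior turns: the question is already standalone.
        return new_question
    turns = "\n".join(f"Q: {q}\nA: {a}" for q, a in history)
    return (
        "Given the conversation so far:\n"
        f"{turns}\n"
        f"Rewrite this follow-up as a standalone question: {new_question}"
    )

history_store["s1"] = [("How do I add a marker?", "Use H.map.Marker ...")]
prompt = build_reformulation_prompt("s1", "How do I make it draggable?")
print(prompt.splitlines()[0])
```

Sessions with no stored history skip the rewrite entirely, which mirrors the module's behavior of keeping the original question when no context applies.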

Orchestration

Each module discussed in the previous section was decomposed into smaller sub-tasks. This allowed us to model the functionality and the various decision points within the system as a directed acyclic graph (DAG) using LangGraph. A DAG is a graph where nodes (vertices) are connected by directed edges (arrows) that represent relationships, and crucially, there are no cycles (loops) in the graph. A DAG allows the representation of dependencies with a guaranteed order, and it helps enable safe and efficient execution of tasks. LangGraph orchestration has several benefits, such as parallel task execution, code readability, and maintainability through state management and streaming support.

The following diagram illustrates the workflow of coding assistant.

When a user poses a question, a workflow is invoked, starting at the Reformulate Question node. This node handles the implementation of the follow-up question module (Module 1). The Apply Guardrail, Retrieve Documents, and Review Question nodes then run in parallel, using the reformulated question as input. The Apply Guardrail node uses denied topics from Amazon Bedrock Guardrails to enforce boundaries and apply safeguards against harmful inputs, and the Review Question node filters out-of-scope requests using Anthropic's Claude 3 Haiku (Module 2). The Retrieve Documents node retrieves the relevant documents from the Amazon Bedrock knowledge base to provide the language model with the necessary information (Module 3).

The outputs of the Apply Guardrail and Review Question nodes determine the next node invocation. If the input passes both checks, the Review Documents node assesses the question's feasibility by analyzing whether it can be answered with the retrieved documents (Module 2). If feasible, the Generate Response node answers the question, and the code and description are streamed to the UI, allowing the user to start receiving feedback from the system within seconds (Module 4). Otherwise, the Block Response node returns a predetermined response. Finally, the Update Conversation History node persistently stores the conversation for future reference (Module 1).
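The branching just described condenses into a small routing function. Function and flag names are illustrative assumptions; the actual decision points live in the LangGraph workflow:

```python
# Sketch of the routing between nodes: the guardrail and scope checks
# gate the feasibility review, which in turn gates generation.
# Names are illustrative, not the production code.
def route(guardrail_passed: bool, in_scope: bool, feasible: bool) -> str:
    if not (guardrail_passed and in_scope):
        return "block_response"       # predetermined refusal
    if not feasible:
        return "block_response"       # cannot be answered from the docs
    return "generate_response"        # stream code + description to the UI

print(route(True, True, True))   # → generate_response
print(route(True, False, True))  # → block_response
```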

This pipeline supports the chatbot coding assistant's capabilities, providing an efficient and user-friendly experience for developers seeking guidance on implementing the HERE Maps API for JavaScript. The following code and screenshot are an example of the model-generated code and rendered map for the question "How to open an infobubble when clicking on a marker?"




    
    

// Create the default UI components
var ui = H.ui.UI.createDefault(map, defaultLayers);

// Add a 'tap' event listener to the marker
marker.addEventListener('tap', function (evt) {
  // Create an info bubble at the tapped marker's position
  var bubble = new H.ui.InfoBubble(evt.target.getGeometry(), {
    content: '...'
  });
  ui.addBubble(bubble);
});

// Resize the map when the window size changes
window.addEventListener('resize', () => map.getViewport().resize());
