
This is a Servoy Tutorial on tool calling with the AI Runtime Plugin, and how to build agentic workflows in Servoy where an LLM actually does things instead of just answering questions. This is the second article in the AI Runtime Plugin series. If you have not read the Getting Started tutorial on chat completions and embeddings, go read that one first. I am going to assume you are comfortable with the builder pattern, you understand that plugins.ai returns Promises, and you have an OpenAI API key in your servoy.properties file.
In the previous tutorial, I promised a follow-up on tool calling. Here we are. By the end of this article, you will have a working agent that takes a natural language request and actually executes it against your Servoy solution. Not a demo. Real foundsets, real records, real business logic.
What Tool Calling Actually Is
Let’s clear something up right away. When you hear “tool calling” or “function calling,” it sounds like the LLM is reaching into your code and running functions. That is not what happens. What actually happens is more interesting.
You register a function with the chat client and describe what it does in plain English. When you send a prompt, the LLM looks at the available tools and decides, based on the description, whether any of them are relevant. If one is, the LLM returns a structured response saying “I want to call search_customers with these parameters.” The plugin intercepts that, runs your Servoy function with those parameters, feeds the result back to the LLM, and the LLM continues reasoning with that new information. It can call more tools, combine results, or produce a final answer.
Think of the LLM as a very well-read intern who cannot touch your database directly but can ask you to look things up. You give the intern a list of things you are willing to do, the intern asks for what it needs, and you hand back the results. The intern does the reasoning. You do the work.
The power is that the LLM decides when and in what order to call your tools. You do not write the control flow. You just describe the tools clearly enough that the LLM can figure out which ones to use. This is what makes it “agentic.” The agent plans its own execution path.
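The loop described above (model decides, plugin executes, result feeds back) can be sketched in a few lines of plain JavaScript. This is a toy illustration with a fake model so the control flow is visible, not the plugin's actual implementation; every name in it is hypothetical:

```javascript
// Toy illustration of the loop the plugin runs on your behalf. The real
// plugin talks to the LLM over the network; here "model" is a plain
// function so the control flow is visible.
function runToyAgentLoop(model, tools, prompt) {
    var transcript = [{ role: 'user', content: prompt }];
    for (var step = 0; step < 10; step++) { // hard cap against runaway loops
        var decision = model(transcript);
        if (decision.answer !== undefined) {
            return decision.answer; // model produced a final answer
        }
        // Model asked for a tool: run it and feed the result back.
        var result = tools[decision.tool].apply(null, decision.args);
        transcript.push({ role: 'tool', name: decision.tool, content: result });
    }
    throw new Error('Agent did not finish within step limit');
}

// A fake "model" that calls one tool, then answers with its result.
function fakeModel(transcript) {
    var last = transcript[transcript.length - 1];
    if (last.role === 'tool') {
        return { answer: 'Found: ' + last.content };
    }
    return { tool: 'search_customers', args: ['Market'] };
}

var toyTools = {
    search_customers: function (query) { return query.toUpperCase(); }
};

console.log(runToyAgentLoop(fakeModel, toyTools, 'Find customers')); // → Found: MARKET
```

The real plugin does exactly this dance for you, with the LLM playing the role of `fakeModel`. The point of the sketch is that the control flow lives in the loop, not in your code.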
How the API Is Shaped
Before we write any code, you need to understand the builder chain, because it is the part most people get wrong on first read.
Tools are registered on the chat builder, not separately. The method createTool(functionRef, name, description) is called on the chat builder itself and returns a ToolBuilder. You chain parameter definitions onto that ToolBuilder, then call .build() to return control to the chat builder. At the end, you call .build() one more time to produce the chat client. The pattern looks like this at a glance:
plugins.ai.createOpenAiChatBuilder()
    .apiKey(...)
    .modelName(...)
    .createTool(fnRef, 'tool_name', 'what the tool does')
        .addStringParameter('param1', 'description', true)
        .addNumberParameter('param2', 'description', false)
        .build() // returns to chat builder
    .createTool(anotherFn, ...)
        .addStringParameter(...)
        .build() // returns to chat builder
    .build(); // returns the chat client

Two .build() calls per tool is not a typo. The first closes the tool, the second closes the client. Once you see it, it is obvious. Until you see it, it is confusing.
The parameter types available on the ToolBuilder are:
- addStringParameter(name, description, required) for text values
- addNumberParameter(name, description, required) for numeric values
- addBooleanParameter(name, description, required) for true/false flags
Each takes a name (what the LLM sees), a description (what the LLM uses to decide what to pass), and a boolean indicating whether the parameter is required.
Here is a minimal working example that registers a tool with both a string parameter and a boolean parameter. The LLM receives the boolean as a native boolean type, not as a string, which matters when you branch on it:
/**
 * Demonstrates addBooleanParameter on a tool builder.
 * @author Gary Dotzlaw
 * @since 2026-04-17
 * @public
 */
function runBooleanParamDemo() {
    try {
        /**@type {String}*/
        const sApiKey = application.getServoyProperty('openai_api_key');

        /**@type {plugins.ai.ChatClient}*/
        const oClient = plugins.ai.createOpenAiChatBuilder()
            .apiKey(sApiKey)
            .modelName('gpt-4o')
            .addSystemMessage('You are a test assistant.')
            .createTool(
                testBoolTool,
                'check_status',
                'Checks if a feature is enabled. Returns a status message.'
            )
            .addStringParameter('featureName', 'The feature to check', true)
            .addBooleanParameter('verbose', 'Whether to return detailed info', false)
            .build()
            .build();

        oClient.chat('Is the "dark_mode" feature enabled? Give me the verbose version.').then(function(oResponse) {
            application.output(oResponse.getResponse().substring(0, 100));
        }).catch(function(oError) {
            application.output('Tool call failed: ' + oError.message, LOGGINGLEVEL.ERROR);
        });
    } catch (e) {
        application.output('Error in runBooleanParamDemo: ' + e.message, LOGGINGLEVEL.ERROR);
    }
}
/**
 * Tool function registered above. Returns a status message for a named feature.
 * @author Gary Dotzlaw
 * @since 2026-04-17
 * @public
 *
 * @param {String} sFeatureName the feature name from the LLM
 * @param {Boolean} bVerbose whether to return the verbose message
 * @return {String} the status message
 */
function testBoolTool(sFeatureName, bVerbose) {
    application.output('testBoolTool called: feature=' + sFeatureName + ', verbose=' + bVerbose + ' (type=' + typeof bVerbose + ')');
    if (bVerbose) {
        return 'Feature "' + sFeatureName + '" is currently enabled. Last toggled 2026-04-01.';
    }
    return 'Enabled';
}

Expected Output:

testBoolTool called: feature=dark_mode, verbose=true (type=boolean)
The "dark_mode" feature is currently enabled. The last time it was toggled was on April 1, 2026.

Two things to notice in the output. First, the boolean arrives as a native JavaScript boolean, not as a string: typeof bVerbose is boolean, so if (bVerbose) works correctly. Second, an important gotcha: do not mark the tool function as @private. The plugin resolves the callback function by name when the LLM invokes the tool, and a @private function (or one prefixed with _) is not visible to that resolution path. If you see an error like "_myTool" is not defined, drop the @private JSDoc and the underscore prefix.
How Tool Functions Receive Parameters
This is important: the plugin calls your tool function with individual arguments, one per registered parameter, in the order you registered them. It does not pass a single object. If you register a tool with addStringParameter('productName', ...) and addNumberParameter('maxResults', ...), your function receives two arguments:
function myTool(sProductName, iMaxResults) {
    // sProductName is a String, iMaxResults is a Number
}

This matches how the Example Solution implements its tools. Keep it in mind as we build our first one.
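If it helps to picture the mechanics, the mapping from the LLM's named JSON arguments to a positional call might look something like this plain-JavaScript sketch. The names here are hypothetical illustrations, not the plugin's actual code:

```javascript
// Hypothetical sketch: the LLM returns named arguments as JSON; a plugin
// in this position maps them onto a positional call in registration order.
function dispatchToolCall(fn, paramNames, llmArgs) {
    // Build the argument list in the order the parameters were registered.
    var args = paramNames.map(function (name) { return llmArgs[name]; });
    return fn.apply(null, args);
}

function myTool(sProductName, iMaxResults) {
    return sProductName + ':' + iMaxResults;
}

// The LLM sends named arguments; their order in the JSON does not matter,
// because the registration order decides the positional mapping.
var out = dispatchToolCall(
    myTool,
    ['productName', 'maxResults'],         // registration order
    { maxResults: 5, productName: 'Chai' } // what the LLM returned
);
// out → 'Chai:5'
```

This is why the order in which you call addStringParameter and addNumberParameter matters: it is the order your function's arguments arrive in.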
Registering Your First Tool
Let’s walk through the simplest possible example. We are going to register a single function that searches customers by name, and let the LLM call it.
First, the Servoy function that does the actual work:
/**
 * Searches customers by a company name fragment. Called by the AI agent as a tool.
 * @author Gary Dotzlaw
 * @since 2026-04-17
 * @public
 *
 * @param {String} sQuery the company name or name fragment to search for
 * @param {Number} iMaxResults maximum number of customers to return
 * @return {Array<Object>} array of matching customer objects
 */
function toolSearchCustomers(sQuery, iMaxResults) {
    /**@type {Number}*/
    const iCap = Math.min(iMaxResults || 20, 100);

    /**@type {QBSelect}*/
    const query = datasources.db.example_data.customers.createSelect();
    query.result.add(query.columns.customerid);
    query.result.add(query.columns.companyname);
    query.result.add(query.columns.contactname);
    query.where.add(query.columns.companyname.like('%' + sQuery + '%'));
    query.sort.add(query.columns.companyname.asc);

    /**@type {JSDataSet}*/
    const dsResults = databaseManager.getDataSetByQuery(query, iCap);

    /**@type {Array<Object>}*/
    const aResults = [];
    for (let i = 1; i <= dsResults.getMaxRowIndex(); i++) {
        aResults.push({
            customerId: dsResults.getValue(i, 1),
            companyName: dsResults.getValue(i, 2),
            contactName: dsResults.getValue(i, 3)
        });
    }
    return aResults;
}

Nothing fancy here. It is a normal Servoy function that uses QBSelect to find customers against the example_data Northwind-style server, caps the result count so the LLM cannot request ten thousand rows, and returns an array of plain objects. The only thing that makes it a “tool” is how we register it with the chat client.
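That cap is worth isolating, because iMaxResults || 20 treats every falsy value (an omitted argument, null, even 0) as "use the default". A standalone version of the same guard, so you can see the edge cases:

```javascript
// Same guard as in toolSearchCustomers: default to 20 when the LLM
// omits the parameter, and never allow more than 100 rows.
function capMaxResults(iMaxResults) {
    return Math.min(iMaxResults || 20, 100);
}

capMaxResults(undefined); // → 20 (parameter omitted by the LLM)
capMaxResults(5);         // → 5
capMaxResults(5000);      // → 100 (a hallucinated huge request is clamped)
```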
Now let’s register it:
/**
 * Builds a chat client with the customer search tool registered.
 * @author Gary Dotzlaw
 * @since 2026-04-17
 * @public
 *
 * @return {plugins.ai.ChatClient} the configured chat client
 */
function buildAgentWithSearchTool() {
    /**@type {String}*/
    const sApiKey = application.getServoyProperty('openai_api_key');

    /**@type {plugins.ai.ChatClient}*/
    const oClient = plugins.ai.createOpenAiChatBuilder()
        .apiKey(sApiKey)
        .modelName('gpt-4o')
        .addSystemMessage('You are a sales assistant. Use the available tools to find information about customers.')
        .createTool(
            toolSearchCustomers,
            'search_customers',
            'Search for customers by company name. Returns matching customers with id, company name, and contact name.'
        )
        .addStringParameter('query', 'The customer name or name fragment to search for', true)
        .addNumberParameter('maxResults', 'Maximum number of customers to return; defaults to 20', false)
        .build()
        .maxMemoryTokens(4096)
        .build();

    return oClient;
}

Let’s walk through the key parts:
- createTool(functionRef, name, description): The first argument is the actual Servoy function reference, not a string. The plugin calls that function when the LLM decides to invoke the tool. The second argument is the name the LLM sees. The third is the description the LLM uses to decide whether to call it.
- The description is not decoration. It is the entire mechanism the LLM uses to choose tools. Write it like you are describing the function to a new developer who has never seen your codebase. “Search for customers by company name” is clear. “Does customer stuff” is not.
- addStringParameter(name, description, required): The name is what the LLM uses to label the argument, the description tells the LLM what to put there, and the boolean says whether the parameter is required.
- The first .build() returns control to the chat builder. You can continue chaining .maxMemoryTokens(), more .createTool() calls, or anything else. The second .build() produces the chat client.
To actually use the client, call chat() and handle the Promise:
/**
 * Sends a prompt to the agent and logs the response.
 * @author Gary Dotzlaw
 * @since 2026-04-17
 * @public
 *
 * @param {String} sPrompt the user's natural-language request
 */
function runAgent(sPrompt) {
    try {
        /**@type {plugins.ai.ChatClient}*/
        const oClient = buildAgentWithSearchTool();
        oClient.chat(sPrompt).then(function(oResponse) {
            application.output('Agent: ' + oResponse.getResponse());
        }).catch(function(oError) {
            application.output('Agent failed: ' + oError.message, LOGGINGLEVEL.ERROR);
        });
    } catch (e) {
        application.output('Error in runAgent: ' + e.message, LOGGINGLEVEL.ERROR);
        plugins.dialogs.showErrorDialog('Error', 'Agent error: ' + e.message, 'OK');
    }
}

Now call runAgent("Find customers with 'Market' in their name"). The LLM sees the available tool, figures out it should call search_customers with query: "Market", and the plugin executes toolSearchCustomers for you. The results flow back into the LLM’s reasoning, and you get a final natural-language answer that actually reflects your database.
Expected Output (calling the tool directly):
toolSearchCustomers('a', 3):

[
  {customerId:ANATR, companyName:Ana Trujillo Emparedados y helados, contactName:Ana Trujillo},
  {customerId:ANTON, companyName:Antonio Moreno Taquería, contactName:Antonio Moreno},
  {customerId:BSBEV, companyName:B's Beverages, contactName:Victoria Ashworth}
]

Expected Output (calling through the agent):

Agent: Hello! I'm a sales assistant, here to help you with any questions or information you need about our customers or services. How can I assist you today?

Make sense? Let’s kick it up a notch.
A Realistic Scenario: Three Tools, One Agent
One tool is cute. Real agentic workflows use multiple tools that the LLM chains together on its own. Let’s imagine for a moment that a support rep types this into your Servoy app:
“Find all customers who ordered ‘Boston Crab Meat’ and list their companies and countries so we can notify them about a shipping delay.”
A traditional approach would require the rep to do three separate things: look up the product to get its ID, filter the orders table by that product ID, then look up each customer from the order list. That is several different screens and a lot of clicking. With an agent and three well-defined tools, the rep types one sentence and the agent does the work.
One note on the data: the example_data.orders table is Northwind, with dates from 1996-07 to 1998-05. The daysBack parameter on toolGetOrdersForProduct below is therefore optional — leave it off or pass a very large number to get historical orders. In a production table with current dates, you would pass a normal range like 30.
Here are the three tool functions the agent will use. They follow the same shape as toolSearchCustomers above:
/**
 * Looks up a product by name and returns its ID, name, and unit price. AI tool.
 * @author Gary Dotzlaw
 * @since 2026-04-17
 * @public
 *
 * @param {String} sProductName the product name or name fragment
 * @return {Array<Object>} array of matching products
 */
function toolLookupProductByName(sProductName) {
    /**@type {QBSelect}*/
    const query = datasources.db.example_data.products.createSelect();
    query.result.add(query.columns.productid);
    query.result.add(query.columns.productname);
    query.result.add(query.columns.unitprice);
    query.where.add(query.columns.productname.like('%' + sProductName + '%'));
    query.sort.add(query.columns.productname.asc);

    /**@type {JSDataSet}*/
    const dsResults = databaseManager.getDataSetByQuery(query, 10);

    /**@type {Array<Object>}*/
    const aResults = [];
    for (let i = 1; i <= dsResults.getMaxRowIndex(); i++) {
        aResults.push({
            productId: dsResults.getValue(i, 1),
            productName: dsResults.getValue(i, 2),
            unitPrice: dsResults.getValue(i, 3)
        });
    }
    return aResults;
}
/**
 * Gets orders containing a given product id. AI tool.
 * If iDaysBack is omitted or <= 0, no date filter is applied.
 * Note: the example_data orders table holds Northwind data from 1996-07 to 1998-05,
 * so a "last 30 days" filter against today's date will return nothing. Leave daysBack
 * unset to see all historical orders.
 * @author Gary Dotzlaw
 * @since 2026-04-17
 * @public
 *
 * @param {Number} iProductId the products.productid to search for
 * @param {Number} [iDaysBack] optional number of days back from today
 * @return {Array<Object>} array of matching orders
 */
function toolGetOrdersForProduct(iProductId, iDaysBack) {
    /**@type {QBSelect}*/
    const query = datasources.db.example_data.orders.createSelect();
    query.result.add(query.columns.orderid);
    query.result.add(query.columns.customerid);
    query.result.add(query.columns.orderdate);

    /**@type {QBJoin}*/
    const joinDtl = query.joins.add('db:/example_data/order_details', QBJoin.INNER_JOIN, 'dtl');
    joinDtl.on.add(joinDtl.columns.orderid.eq(query.columns.orderid));
    query.where.add(joinDtl.columns.productid.eq(iProductId));

    if (iDaysBack && iDaysBack > 0) {
        /**@type {Date}*/
        const dSince = new Date();
        dSince.setDate(dSince.getDate() - iDaysBack);
        query.where.add(query.columns.orderdate.ge(dSince));
    }

    query.sort.add(query.columns.orderdate.desc);

    /**@type {JSDataSet}*/
    const dsResults = databaseManager.getDataSetByQuery(query, 50);

    /**@type {Array<Object>}*/
    const aResults = [];
    for (let i = 1; i <= dsResults.getMaxRowIndex(); i++) {
        aResults.push({
            orderId: dsResults.getValue(i, 1),
            customerId: dsResults.getValue(i, 2),
            orderDate: dsResults.getValue(i, 3)
        });
    }
    return aResults;
}
/**
 * Gets company details for a customer by id. AI tool.
 * @author Gary Dotzlaw
 * @since 2026-04-17
 * @public
 *
 * @param {String} sCustomerId the customers.customerid key
 * @return {Object} customer detail, or an object with an error property
 */
function toolGetCustomerById(sCustomerId) {
    if (!sCustomerId) {
        return { error: 'customerId is required' };
    }

    /**@type {QBSelect}*/
    const query = datasources.db.example_data.customers.createSelect();
    query.result.add(query.columns.customerid);
    query.result.add(query.columns.companyname);
    query.result.add(query.columns.contactname);
    query.result.add(query.columns.country);
    query.where.add(query.columns.customerid.eq(sCustomerId));

    /**@type {JSDataSet}*/
    const ds = databaseManager.getDataSetByQuery(query, 1);
    if (ds.getMaxRowIndex() === 0) {
        return { error: 'customer not found' };
    }
    return {
        customerId: ds.getValue(1, 1),
        companyName: ds.getValue(1, 2),
        contactName: ds.getValue(1, 3),
        country: ds.getValue(1, 4)
    };
}

All three tools validate their inputs, cap the row count the LLM can request, and return plain objects the LLM can read.
Now let’s build the agent that wires all three together:
/**
 * Builds a sales support agent with three tools against example_data.
 * @author Gary Dotzlaw
 * @since 2026-04-17
 * @public
 *
 * @return {plugins.ai.ChatClient} the configured chat client
 */
function buildSalesSupportAgent() {
    try {
        /**@type {String}*/
        const sApiKey = application.getServoyProperty('openai_api_key');

        /**@type {plugins.ai.ChatClient}*/
        const oClient = plugins.ai.createOpenAiChatBuilder()
            .apiKey(sApiKey)
            .modelName('gpt-4o')
            .temperature(0.2)
            .addSystemMessage(
                'You are a sales support agent. Use the available tools to look up products, find orders, ' +
                'and retrieve customer details. Chain tools together when needed. Reply with a concise summary.'
            )
            .createTool(
                toolLookupProductByName,
                'lookup_product_by_name',
                'Look up a product by name or name fragment. Returns productId, productName, and unitPrice.'
            )
            .addStringParameter('productName', 'The product name fragment to search for', true)
            .build()
            .createTool(
                toolGetOrdersForProduct,
                'get_orders_for_product',
                'Get recent orders containing a given productId. Returns orderId, customerId, and orderDate.'
            )
            .addNumberParameter('productId', 'The products.productid to search for', true)
            .addNumberParameter('daysBack', 'How many days back to look; if omitted, no date filter is applied', false)
            .build()
            .createTool(
                toolGetCustomerById,
                'get_customer_by_id',
                'Get company details for a customer by id. Returns companyName, contactName, and country.'
            )
            .addStringParameter('customerId', 'The customers.customerid key', true)
            .build()
            .maxMemoryTokens(8192)
            .build();

        return oClient;
    } catch (e) {
        application.output('Error in buildSalesSupportAgent: ' + e.message, LOGGINGLEVEL.ERROR);
        plugins.dialogs.showErrorDialog('Error', 'Agent build failed: ' + e.message, 'OK');
        return null;
    }
}

A couple of things worth noting here:
- temperature(0.2) sets the model to a more deterministic mode. For tool-calling workflows, lower temperature gives you more consistent tool selection and argument formatting. For creative generation tasks, you would set it higher. The default varies by model but is usually around 0.7.
- Every parameter has its required flag set explicitly. Required parameters are marked true. Optional parameters are marked false. This tells the LLM when it can omit an argument and when it must ask for more information or search for it.
When you call this agent with “Find all customers who ordered Boston Crab Meat and list their companies and countries,” here is what happens under the hood:
- The LLM sees three tools and reads the user request.
- It decides to call lookup_product_by_name with productName: "Boston Crab Meat", receives productId: 40.
- It calls get_orders_for_product with productId: 40, receives a list of orders with customer IDs and order dates.
- For each unique customer ID, it calls get_customer_by_id, building up the company and country details.
- The LLM produces a final natural-language response summarizing which customers ordered the product, their companies, and their countries.
You did not write any of that control flow. You described three tools, the LLM planned the execution, and your Servoy business logic ran at each step. That is agentic Servoy.
Expected Output (calling each tool directly against the real example_data server):
toolLookupProductByName('a'):

[
  {productId:40, productName:Boston Crab Meat, unitPrice:18.4},
  {productId:60, productName:Camembert Pierrot, unitPrice:34.0},
  {productId:18, productName:Carnarvon Tigers, unitPrice:62.5},
  {productId:1, productName:Chai, unitPrice:18.0},
  {productId:2, productName:Chang, unitPrice:19.0},
  {productId:39, productName:Chartreuse verte, unitPrice:18.0},
  {productId:4, productName:Chef Anton's Cajun Seasoning, unitPrice:22.0},
  {productId:48, productName:Chocolade, unitPrice:12.75},
  {productId:38, productName:Côte de Blaye, unitPrice:263.5},
  {productId:58, productName:Escargots de Bourgogne, unitPrice:13.25}
]
toolGetOrdersForProduct(40, 11000):

[
  {orderId:11063, customerId:HUNGO, orderDate:1998-04-30},
  {orderId:11038, customerId:SUPRD, orderDate:1998-04-21},
  {orderId:11003, customerId:THECR, orderDate:1998-04-06},
  ... 38 more rows ...
  {orderId:10267, customerId:FRANK, orderDate:1996-07-29}
]
toolGetCustomerById('LINOD'):

{customerId:LINOD, companyName:LINO-Delicateses, contactName:Felipe Izquierdo, country:Venezuela}

To actually run the agent, the call is the same as before:
/**
 * Sends a prompt to the sales support agent.
 * @author Gary Dotzlaw
 * @since 2026-04-17
 * @public
 *
 * @param {String} sPrompt the user's request
 */
function runSupportAgent(sPrompt) {
    try {
        /**@type {plugins.ai.ChatClient}*/
        const oClient = buildSalesSupportAgent();
        if (!oClient) return;

        plugins.svyBlockUI.show('Processing...');
        oClient.chat(sPrompt).then(function(oResponse) {
            application.output('Agent: ' + oResponse.getResponse());
        }).catch(function(oError) {
            application.output('Agent failed: ' + oError.message, LOGGINGLEVEL.ERROR);
        }).finally(function() {
            plugins.svyBlockUI.stop();
        });
    } catch (e) {
        application.output('Error in runSupportAgent: ' + e.message, LOGGINGLEVEL.ERROR);
        plugins.svyBlockUI.stop();
    }
}

Expected Output:

Agent: Hello! I'm a sales support agent here to assist you with product inquiries, order details, and customer information. How can I help you today?

Smart Tools: Combining SQL and Semantic Search
The Example Solution includes a pattern that I think is worth highlighting because it shows how the features from Article 1 compose with tool calling. The idea is a tool function that tries a traditional SQL search first, and if that comes up empty, falls back to a semantic search over an embedding store.
Here is what that looks like in a product lookup tool:
/**
 * Embedding store for product names, used as a search fallback.
 * @type {plugins.ai.EmbeddingStore}
 */
var _oProductStore = null;

/**
 * Looks up a product by name. Tries SQL first, falls back to vector search.
 * @author Gary Dotzlaw
 * @since 2026-04-17
 * @public
 *
 * @param {String} sProductName the product name to search for
 * @return {Number} the product ID, or -1 if not found
 */
function toolProductLookup(sProductName) {
    application.output('TOOL CALLED: productLookup for "' + sProductName + '"');

    // Try the traditional SQL search first
    /**@type {JSFoundSet}*/
    const fsProducts = datasources.db.example_data.products.getFoundSet();
    if (fsProducts.find()) {
        fsProducts.productname = '#%' + sProductName + '%';
        if (fsProducts.search() > 0) {
            application.output('SQL match: ' + fsProducts.productname + ' (id=' + fsProducts.productid + ')');
            return fsProducts.productid;
        }
    }

    // SQL came up empty. Fall back to vector search.
    if (_oProductStore) {
        application.output('No SQL match. Trying vector search...');
        /**@type {Array<plugins.ai.SearchResult>}*/
        const aResults = _oProductStore.search(sProductName, 5);
        if (aResults.length > 0 && aResults[0].getScore() > 0.7) {
            /**@type {Number}*/
            const iProdId = aResults[0].getMetadata().productid;
            application.output('Vector match: id=' + iProdId + ' score=' + aResults[0].getScore().toFixed(3));
            return iProdId;
        }
    }

    application.output('Product not found: ' + sProductName);
    return -1;
}

Expected Output:
toolProductLookup('a'):

TOOL CALLED: productLookup for "a"
SQL match: Chai (id=1)

The SQL branch wins on this run because 'a' is a common fragment. The foundset’s .search() populates the foundset with every match, and the tool returns the currently-selected record’s PK: typically the first row by default sort, which here is productid = 1, Chai. If the user had typed something the SQL branch could not find (e.g., "chai tea latte"), execution would fall through to the vector branch.
This is a production pattern. The SQL search is fast, free, and exact. The vector search is a fallback for when users type something approximate like “chai tea” instead of the exact product name “Chai.” The 0.7 score threshold prevents the vector search from returning garbage matches. You get the best of both worlds in a single tool function.
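If you want to unit-test that threshold logic without a live embedding store, the vector branch's decision can be factored into a pure function. A sketch, using plain objects in place of plugins.ai SearchResult instances (so the names productid and score here are stand-ins, not the plugin's API):

```javascript
// Returns the product id of the best vector match, or -1 when the top
// score does not clear the threshold. Results are assumed already
// sorted by score, best first.
function pickBestProductMatch(aResults, nThreshold) {
    if (aResults.length > 0 && aResults[0].score > nThreshold) {
        return aResults[0].productid;
    }
    return -1;
}

pickBestProductMatch([{ productid: 1, score: 0.91 }], 0.7); // → 1
pickBestProductMatch([{ productid: 1, score: 0.55 }], 0.7); // → -1 (below threshold)
pickBestProductMatch([], 0.7);                              // → -1 (no results)
```

Factoring the decision out like this also makes it easy to tune the 0.7 threshold later without touching the tool function.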
To use this, you would pre-populate _oProductStore with product embeddings on form load (using the embedAll pattern from Article 1), and register toolProductLookup as a tool on your agent.
Built-In Tools
The chat builder also has a useBuiltInTools(true) method that injects Servoy’s own built-in tools into the agent, such as tools for getting the current user’s information. Turn it on if you want the agent to know who it is talking to:
/**@type {plugins.ai.ChatClient}*/
const oClient = plugins.ai.createOpenAiChatBuilder()
    .apiKey(sApiKey)
    .modelName('gpt-4o')
    .useBuiltInTools(true)
    .addSystemMessage('You are a helpful assistant.')
    .build();

This is useful for personalization. An agent that knows which user is asking can greet them by name, filter results by their permissions, or include their department in generated output.
Security: The Part You Cannot Skip
The Servoy docs include a warning I want to repeat here: “Tool-Calling gives some control of your application functionality to the LLM.” That is not a scary footnote. It is the whole point, and it is also the thing that will bite you if you do not think it through.
A few rules I follow on every agentic project:
- Validate every tool input inside the tool function. Do not trust that the LLM will pass sensible values. If customerId needs to belong to the current tenant, check it. If maxResults should be capped at 100, cap it (you saw this in toolSearchCustomers). The LLM might hallucinate, might be manipulated by a user’s prompt, or might just be wrong.
- Keep tool functions narrow. A tool that “does anything to a customer record” is a security problem. A tool that “updates the customer’s phone number” is a feature. Each tool should do one thing with clear boundaries.
- Never expose destructive operations without confirmation. A delete_customer tool is a bad idea. A mark_customer_inactive tool that requires a reason parameter is much safer. Permanent deletions belong behind a confirmation dialog driven by a human, not an agent.
- Respect your existing permission model. If a user cannot delete tickets through the UI, the agent should not be able to delete tickets on their behalf. Check your permission layer inside the tool function.
- Log every tool invocation. When an agent does something, you need to know what it did, why, and with what parameters. You saw the application.output('TOOL CALLED: ...') pattern in the examples above. In production, log to a dedicated table you can audit later.
- Do not pass sensitive data to the LLM unnecessarily. If a tool returns customer records, strip out fields the LLM does not need. The LLM sees everything you return, and that data goes to the provider’s servers. Remember that anything a tool function returns is visible to the LLM, which means it is visible to the provider.
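The first rule is the easiest to make concrete. Here is a minimal sketch of a tenant-scoped input guard you might call at the top of a tool function. The allowed-ID list is hypothetical; in a real solution it would come from your security or permission layer:

```javascript
// Rejects tool inputs that do not belong to the current tenant.
// aAllowedIds is a hypothetical stand-in for your permission layer.
function validateCustomerId(sCustomerId, aAllowedIds) {
    if (!sCustomerId || typeof sCustomerId !== 'string') {
        return { ok: false, error: 'customerId is required' };
    }
    if (aAllowedIds.indexOf(sCustomerId) === -1) {
        // The LLM asked for a record this user cannot see. Refuse politely;
        // the error message flows back into the LLM's reasoning.
        return { ok: false, error: 'customerId not visible to this user' };
    }
    return { ok: true };
}

validateCustomerId('ALFKI', ['ALFKI', 'ANATR']); // → { ok: true }
validateCustomerId('BONAP', ['ALFKI', 'ANATR']); // → { ok: false, error: ... }
```

Returning an error object instead of throwing is deliberate: the LLM reads the error and can explain the refusal to the user instead of the whole chat failing.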
Bottom line: treat the LLM like an intern who is brilliant, fast, and completely unaccountable. Give the intern exactly the tools needed for the job, and nothing more. Your future self will thank you the first time a user types something weird and the agent does not delete half your database.
Advantages of the Tool Calling Approach
Using tool calling instead of hardcoded workflows has several clear advantages:
- You do not write control flow. Describe the tools, and the LLM figures out how to chain them. Adding a new capability means adding a new tool, not rewriting orchestration logic.
- Users talk to the app in their own words. They do not need to know which screen to open or which button to click. They describe what they want, and the agent works out the rest.
- The same tools serve many workflows. Your search_customers tool works for sales, support, billing, and reporting agents. Write it once, reuse everywhere.
- Tools are testable as normal Servoy functions. Call them directly from JSUnit tests. No mocking an LLM required. The tool is just a function.
- Business logic stays in Servoy. The LLM does the planning, but every actual database operation runs through your existing QBSelect, foundset, and databaseManager code. Your validation rules, security checks, and audit logs all still apply.
Keep in mind that agentic workflows are not free. Every tool call adds an API round-trip, which adds latency and cost. A simple three-tool workflow might take five or six LLM calls before the agent finishes. If you need sub-second response times, do not use an agent. Use a traditional workflow and be done. Agents are for tasks where the flexibility and natural language input are worth the extra seconds and the extra cents.
What Comes Next
Tool calling is the foundation of agentic Servoy, but it is one piece of a larger puzzle. In the next two tutorials, I will cover:
- FoundSet embedding and PgVector stores. The in-memory store from Article 1 is fine for demos, but it does not survive server restarts and it does not scale. Article 3 covers persistent stores backed by PgVector, embedAll() for embedding database records with PK metadata, PDF document chunking, and the search-to-foundset pattern that gets you back to real Servoy records from semantic search results.
- QBVectorColumn and hybrid queries. Article 4 brings it all together with the Query Builder’s native vector column support, letting you combine semantic similarity with traditional WHERE clauses in a single database query.
That concludes this Servoy tutorial on tool calling with the AI Runtime Plugin. I hope you enjoyed it, and I look forward to bringing you more Servoy tutorials on AI integration in the future.
The Series
This is Part 2 of a four-part series on the Servoy AI Runtime Plugin:
- Getting Started with the Servoy AI Runtime Plugin. Chat completions, streaming, conversation memory, embeddings, and your first semantic search.
- Tool Calling with the AI Runtime Plugin: Agentic Servoy (this article). Register Servoy methods as tools and let the LLM decide when to call them.
- Embedding Your Servoy Data for Semantic Search. PgVector production stores, FoundSet embedAll(), and PDF document chunking.
- Hybrid Queries with QBVectorColumn: Semantic Meets SQL. Combine semantic similarity with traditional WHERE clauses in a single database round-trip.