
· 8 min read
Stefan Anca
Matthias Broecheler

Have you wondered how companies can leverage their data through GenAI technologies for business growth? With the open-source Acorn Agent Framework, businesses can now extract actionable insights, enhance user engagement, and catalyze new revenue streams by using Generative AI technologies on their data. Acorn Agent enables Large Language Models (LLMs) to connect seamlessly with a company’s data via API calls, providing natural language access to complex sources of data. Let’s explore how Acorn Agent can unlock the value of your data, using financial services as an example.

Unlocking Business Potential with Acorn Agent

Imagine a financial institution, such as a bank, which has extensive data on their users and their transactions. How can this data be used to drive additional business? Here are some examples that unlock the value of customer data through LLM Agents powered by Acorn Agent.

Step 1: Developing a Financial Assistant ChatBot

A Financial Assistant ChatBot can retrieve user transactions and credit card history when responding to user questions and extract valuable insights about user spending. With access to the user data, the Financial Assistant can perform tasks like:

  • Transaction Analysis: Which transactions accounted for most of my spending this month?
  • Expense Breakdown: Show me a breakdown of my expenses per category for this month.
  • Trend Identification: Highlight categories where my expenses increased from last month.
[Image: Acorn Agent Financial Use Case]

By displaying information in natural language, tables, and visual charts, this chatbot enables users to retrieve the information they need. This improves customer satisfaction and reduces customer support costs. And as users interact with the chatbot for insights and analytics, they use their bank accounts more, fostering loyalty and increasing user engagement.

· 7 min read
Matthias Broecheler
Stefan Anca

Large-Language Models (LLMs) provide a natural language interface capable of understanding user intent and delivering highly specific responses. The recent addition of "tooling" as a primary feature of LLMs has enabled them to retrieve information on-demand and trigger actions. This makes LLMs a viable natural language interface for various applications, including dashboards, ERPs, CRMs, BI systems, HRMS, SCM, and even customer-facing mobile and web applications. The Acorn Agent framework offers the infrastructure to build such LLM-powered applications by instrumenting LLMs with custom tooling in a safe, secure, and efficient manner. The Acorn Agent framework is open-source under the Apache 2.0 license.

[Image: Acorn Agent Mascot]

Background

Large-Language Models are neural networks that process input text and incrementally generate intelligent responses. Advances in the size, topology, and training of LLMs have increased their performance as conversational interfaces to near-human levels. To date, LLMs have been confined to chat and search applications, where they are trained on extensive corpora and augmented with query-specific information through methods such as RAG, FLARE, or text search.

The recent addition of "tooling" as a primary feature has broadened the applicability of LLMs. "Tooling" is a set of function definitions provided to the LLM either in-prompt or through training. The LLM can invoke these functions to retrieve information on demand (e.g., looking up the current weather) or trigger an external action (e.g., placing an order). This enables LLMs to interact with external systems and information outside the immediate user context. The LLM can invoke APIs, run database queries, execute computations, trigger UI updates, and more.
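To make the idea concrete, here is a minimal Java sketch of what a tool definition and a resulting tool call might look like. The record and field names are hypothetical illustrations, not the API of any particular LLM vendor or of Acorn Agent itself.

```java
import java.util.List;
import java.util.Map;

// Hypothetical shape of a tool definition handed to an LLM in-prompt.
record ToolParameter(String name, String type, String description, boolean required) {}
record ToolDefinition(String name, String description, List<ToolParameter> parameters) {}

public class ToolingSketch {
    public static void main(String[] args) {
        // A function the LLM may invoke to retrieve information on demand.
        ToolDefinition weatherTool = new ToolDefinition(
                "get_current_weather",
                "Returns the current weather for a given city.",
                List.of(new ToolParameter("city", "string", "City name, e.g. 'Berlin'", true)));
        System.out.println("Tool offered to the LLM: " + weatherTool);

        // When the LLM decides to use the tool, it emits a structured call like
        // this; the application executes the function and feeds the result back.
        Map<String, Object> llmToolCall = Map.of(
                "name", "get_current_weather",
                "arguments", Map.of("city", "Berlin"));
        System.out.println("LLM requested: " + llmToolCall);
    }
}
```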

Tooling makes it possible to build conversational interfaces with LLMs for user-facing applications, enabling the user to interact naturally with the application through text or voice. However, integrating LLMs with custom tooling poses several challenges:

  • Safety: LLMs are non-deterministic and prone to hallucinations, requiring careful validation and comprehensive observability to build safe and predictable applications.
  • Security: LLMs are susceptible to injection attacks, necessitating safeguards to build secure applications that do not leak user information or secrets.
  • Performance: LLMs are expensive to invoke and highly sensitive to prompt variation, requiring efficient function management to reduce context tokens and improve performance.
  • Efficiency: Tool implementations for chatbots or agents have to be written from scratch by developers, who must create both the function definitions for the LLM and the function execution logic against an API or a database.

Acorn Agent Framework

[Image: Acorn Agent Overview]

Acorn Agent is a framework for building LLM-powered applications through tooling. It provides the necessary infrastructure to instrument LLMs with custom tools, addressing challenges related to safety, security, performance, and efficiency:

  • Safety: The framework validates function calls and supports auto-retry of invalid function invocations. It enables quick iteration of function definitions to improve accuracy and performance.
  • Security: The framework sandboxes function calls through a defined "context," which includes sensitive function arguments (e.g., userid, sessionid, tokens) that are handled by the framework and not exposed to the LLM call stack.
  • Performance: The framework gives developers full control over the context window construction to optimize cost and performance.
  • Efficiency: The framework provides abstraction layers for managing tooling across many popular LLMs through a standard interface, which reduces boilerplate code and custom instrumentation logic.

At its core, Acorn Agent is a repository of tools that instruments and executes tools for LLMs using a JSON configuration format for semantic annotations, execution logic, and security context (a hypothetical sketch of such a configuration follows the list of tool types below).

Additionally, Acorn Agent facilitates seamless integration of APIs, databases, libraries, and UI frontends as tooling for LLMs. Acorn Agent supports three types of tools:

  • API: Invokes an external system through an API to retrieve information or trigger an action. The framework supports GraphQL, REST, and JDBC, giving the LLM access to internal microservices, external web services, databases, search engines, ERP systems, and more.
  • Local: Invokes a local function to execute a library method or computation. This enables the LLM to execute mathematical or other computations where accuracy and determinism are important.
  • Client: Forwards the tool call to the client or frontend to trigger a UI update, implemented as a function callback in JavaScript. This allows the LLM to customize the presentation of information to the user.
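As referenced above, tools are described declaratively. The following JSON-style configuration (held in a Java text block for illustration) sketches what an API tool with a sandboxed security context might look like. The field names are invented for this example and do not reflect Acorn Agent's actual configuration schema; the project documentation is the reference for that.

```java
// Illustrative only: a hypothetical JSON-style tool configuration.
public class ToolConfigSketch {
    static final String API_TOOL = """
        {
          "type": "api",
          "name": "getCustomerOrders",
          "description": "Retrieves the most recent orders of the current customer",
          "parameters": {
            "limit": { "type": "integer", "description": "Maximum number of orders to return" }
          },
          "context": ["userid"],
          "execution": {
            "kind": "graphql",
            "query": "query($userid: ID!, $limit: Int) { orders(customerId: $userid, limit: $limit) { id total } }"
          }
        }
        """;

    public static void main(String[] args) {
        // The "context" entry marks userid as a sensitive argument that the
        // framework injects from the session rather than exposing it to the LLM.
        System.out.println(API_TOOL);
    }
}
```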

The Acorn Agent framework is open-source under the Apache 2.0 license. You can view the full source code, download it, and contribute to the project on GitHub.

Example LLM Application

To illustrate how the Acorn Agent framework works, consider building an LLM application providing a natural language interface for an online garden store. This application uses the same GraphQL API that the web application uses to place and retrieve customer orders. Additionally, it implements functions for unit conversions and React components on the frontend to display order status. With Acorn Agent, we can register all of these as tools within the framework, configure an LLM (such as OpenAI’s GPT, Llama 3 on AWS Bedrock, or Google Gemini), and set up our system prompt for a friendly shopping assistant. When a user issues a request, the Acorn Agent framework manages the tools for the LLM.

[Image: Acorn Agent Example Instrumentation]

For example, suppose the user requests: "Order me fertilizer for 3 plots of lawn: 10x20 ft, 50x15 ft, and 30x35 ft. The same fertilizer I ordered last time." Acorn Agent injects the relevant tool definitions into the context and hands it to the LLM. The LLM processes the request and calls tools as follows:

  • Look up the last orders for the fertilizer product ID and weight. Acorn Agent augments the call with secure information from the user session, invokes the GraphQL API to retrieve the user’s last orders, and returns the information to the LLM. The LLM identifies which order was for fertilizer as well as the associated product ID and weight.
  • Convert the given measurements to total square footage and compute the number of fertilizer bags needed based on the retrieved weight (see the sketch after this list). Acorn Agent executes that tool by invoking a local function that implements the math and conversion. It then returns the number of bags to the LLM.
  • Place the order for the computed number of fertilizer bags and retrieved product ID. Acorn Agent invokes the GraphQL API to place the order within the secure sandbox of the user session. The order details are returned to the LLM.
  • Update the UI with the order information. Acorn Agent forwards that client function call with additional context to the UI to update the React component.
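The sketch below shows what the local function behind the second step might look like; it is a minimal illustration, assuming the bag coverage is derived from the retrieved product weight (the 500 sq ft per bag used here is a made-up value).

```java
// Minimal sketch of a local computation tool: total square footage and bag count.
public class FertilizerTool {
    static int bagsNeeded(int[][] plotsInFeet, double coveragePerBagSqFt) {
        double totalSqFt = 0;
        for (int[] plot : plotsInFeet) {
            totalSqFt += plot[0] * plot[1]; // width x length of each plot
        }
        return (int) Math.ceil(totalSqFt / coveragePerBagSqFt); // always round up
    }

    public static void main(String[] args) {
        int[][] plots = {{10, 20}, {50, 15}, {30, 35}}; // 200 + 750 + 1050 = 2000 sq ft
        System.out.println(bagsNeeded(plots, 500.0));   // 2000 / 500 -> 4 bags
    }
}
```

Because this is deterministic code rather than LLM arithmetic, the result is exact and repeatable, which is precisely why such computations belong in a local tool.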

Acorn Agent handles all tool validations, sandboxing, invocations, and result propagations, allowing developers to focus on building tools and optimizing the context window.

Using Acorn Agent Framework

To experiment with Acorn Agent as a natural language interface for an existing API or database, you only need to define your tools in a configuration file; no coding required. Check out the examples for public APIs to get started. To build an LLM microservice or web application, you can include the Acorn Agent framework as a dependency in your Java, Scala, or Kotlin project. See the documentation for all the details. As an example, refer to the Spring Boot application using Acorn Agent to implement a ChatBot. If you have any questions or feedback, or would like to contribute, please join us on Slack. Currently, the Acorn Agent framework is limited to the JVM. We aim to support other programming environments soon.

Conclusion

Building natural language interfaces for user-facing applications requires instrumenting LLMs with custom tooling. The open-source Acorn Agent framework provides the infrastructure to manage custom tooling for LLMs and ensures that the application is safe, secure, and efficient.

· 3 min read
Michael Canzoneri

Unfortunately, my semi-retirement has come to an end.

It’s been an incredible ten months waking up and not having to be anywhere other than school dropoff.  However, all good things must turn into even better things!

I’m happy to announce that I’ve joined my old friends from DataStax, Matthias Broecheler and Daniel Henneberger, as a co-founder of DataSQRL.

And what a time to join! Our self-funded venture is already profitable, with a growing customer base.

What does DataSQRL do?

DataSQRL, your “Operating System for Data,” builds data products in minutes.  Gone are the months wasted trying to integrate multiple vendor technologies into a data pipeline.

DataSQRL abstracts and automates the building of data pipelines through our SQL interface, which we’ve affectionately dubbed SQRL.  Yes, pronounced “Squirrel,” just like the fuzzy little guys running around your yard.

Have you struggled with Apache Kafka?  Flink?  Streaming?  Data in motion? Integrating your Artificial Intelligence into your systems?  Anything “Real-Time?” Struggle no more, and get started here!

We provide an open-source tool that can help you, along with some professional services.

Now that that is out of the way, onto more interesting things…

What did I do over these last ten months? You’re wondering…

What didn’t I do?  I…

  • Spent a ton of time watching my autistic son make incredible progress
  • Helped produce a shark documentary, Unmasking Monsters Below, with my friend, Kevin Lapsley
  • Traveled to more than 30 different places, both nationally and internationally (pictures below)
  • Read 41 books
  • Wrote 200 pages in my journal
  • Started writing a new book
  • Started writing a new screenplay
  • Continued working on my photography skills
  • Watched SO MUCH college football (We are!)
  • Cooked my wife breakfast (almost) every day
  • Made a ton of new recipes
  • Worked out 4-6 days per week
  • Mentored several young professionals to find their way
  • Caught up with some old friends
  • Served as an advisor to ten AI-based startups
  • And attended a few concerts…

More on all of this as the months roll on…

What’s next for me and DataSQRL?

If you’re attending Data Days in Austin, TX, next week, check out Matthias’ presentation.

Otherwise, make sure you follow us here on LinkedIn.

I promise that some of my upcoming posts will cover what I did over the last ten months.  You know me… However, here are some highlights in pictures…

A 3x Silicon Valley Unicorn veteran with IPO experience, award-winning screenwriter, and 2006 Time Magazine “Person of the Year,” Canz is often mistaken for Joe Rogan while walking down the street.  He can be found on LinkedIn, IMDB, and helping people learn how to pronounce “Conshohocken.”

#autism #journaling #imdb #joerogan #movies #metallica #concerts

Picture Highlights

  • Innescron, County Sligo, Ireland
  • Mystic, CT
  • Innescron, County Sligo, Ireland
  • Newport, RI
  • MetLife Stadium, NJ - Metallica Concert
  • Napa Valley, CA

· 10 min read
Matthias Broecheler

A common problem in search is ordering large result sets. Consider a user searching for “jacket” on an e-commerce platform. How do we order the large number of results to show the most relevant products first? In other words, what kind of jackets is the user looking for? Suit jackets, sport jackets, winter jackets?

Often, we have the context to infer what kind of jacket a user is looking for based on their interactions on the site. For example, if a user has men’s running shoes in their shopping cart, they are likely looking for men’s sports jackets when they search for “jacket”.

At least to a human that seems pretty obvious. Yet, Amazon will return a somewhat random assortment of jackets in this scenario as shown in the screenshot below.

[Image: Amazon search results for `jacket`]

To humans, the semantic association between “running shoes” and “sport jackets” is natural, but for machines, making such associations has been a challenge. With recent advances in large-language models (LLMs), computers can now compute semantic similarities with high accuracy.

We are going to use LLMs to compute the semantic context of past user interactions via vector embeddings, aggregate them into a semantic profile, and then use the semantic profile to order search results by their semantic similarity to a user’s profile.

In other words, we are going to rank search results by their semantic similarity to the things a user has been browsing. That gives us the context we are missing when the user enters a search query.
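A minimal sketch of that ranking step, assuming the embedding vectors for browsed items and for each search result have already been computed by an embedding model (all names and numbers below are illustrative):

```java
import java.util.Comparator;
import java.util.List;

public class SemanticRanking {
    // Aggregate past-interaction embeddings into a semantic profile (mean vector).
    static double[] profile(List<double[]> interactionEmbeddings) {
        int dim = interactionEmbeddings.get(0).length;
        double[] p = new double[dim];
        for (double[] e : interactionEmbeddings)
            for (int i = 0; i < dim; i++) p[i] += e[i] / interactionEmbeddings.size();
        return p;
    }

    // Cosine similarity between the profile and a result's embedding.
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Order search results by semantic similarity to the user's profile.
    static List<double[]> rank(List<double[]> resultEmbeddings, double[] userProfile) {
        return resultEmbeddings.stream()
                .sorted(Comparator.comparingDouble((double[] r) -> cosine(userProfile, r)).reversed())
                .toList();
    }

    public static void main(String[] args) {
        double[] userProfile = profile(List.of(new double[]{1, 0}, new double[]{0.8, 0.2}));
        List<double[]> ranked = rank(List.of(new double[]{0, 1}, new double[]{1, 0.1}), userProfile);
        System.out.println(cosine(userProfile, ranked.get(0))); // most similar result first
    }
}
```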

In this article, you will learn how to build a personalized shopping search with semantic vector embeddings step-by-step. You can apply the techniques in this article to any kind of search where a user can browse and search a collection of items: event search, knowledge bases, content search, etc.

· 11 min read
Matthias Broecheler

Let’s build a personalized recommendation engine using AI as an event-driven microservice with Kafka, Flink, and Postgres. And since Current23 is starting soon, we will use the events of this event-driven conference as our input data (sorry for the pun). You’ll learn how to apply AI techniques to streaming data and discover which talks you want to attend at the Kafka conference - double win!

We will implement the whole microservice in 50 lines of code thanks to the DataSQRL compiler, which eliminates all the data plumbing so we can focus on building.

Watch the video to see the microservice in action or read below for step-by-step building instructions and details.

What We Will Build

We are going to build a recommendation engine and semantic search that uses AI to provide personalized results for users based on user interactions.

Let’s break that down: Our input data is a stream of conference events, namely the talks with title, abstract, speakers, time, and so forth. We consume this data from an external data source.

In addition, our microservice has endpoints to capture which talks a user has liked and what interests a user has expressed. We use those user interactions to create a semantic user profile for personalized recommendations and personalized search results.

We create the semantic user profile through vector embeddings, an AI technique for mapping text to numbers in a way that preserves the content of the text for comparison. It’s a great tool for representing the meaning of text in a computable way. It's like mapping addresses (i.e. street, city, zip, country) onto geo-coordinates. It’s hard to compare two addresses, but easy to compute the distance between two geo-coordinates. Vector embeddings do the same thing for natural language text.
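The analogy translates directly into code: comparing two address strings is hard, while comparing two coordinate vectors is a line of arithmetic. A toy sketch with approximate coordinates:

```java
public class EmbeddingAnalogy {
    // Straight-line distance between two points; embedding vectors are compared the same way.
    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        double[] newYork = {40.71, -74.01}; // approximate lat/long
        double[] newark  = {40.74, -74.17};
        double[] austin  = {30.27, -97.74};
        // Obvious from the coordinates, not from comparing the address strings:
        System.out.println(distance(newYork, newark) < distance(newYork, austin)); // true
    }
}
```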

Those semantic profiles are then used to serve recommendations and personalized search results.

· 14 min read
Matthias Broecheler

When developing streaming applications or event-driven microservices, you face the decision of whether to preprocess data transformations in the stream engine or execute them as queries against the database at request time. The choice impacts your application’s performance, behavior, and cost. An incorrect decision results in unnecessary work and potential application failure.

[Image: To preprocess or to query?]

In this article, we’ll delve into the tradeoff between preprocessing and querying, guiding you to make the right decision. We’ll also introduce tools to simplify this process. Plus, you’ll learn how building streaming applications is related to fine cuisine. It’ll be a fun journey through the land of stream processing and database querying. Let’s go!

Recap: Anatomy of a Streaming Application

If you're in the process of building an event-driven microservice or streaming application, let's recap what that entails. An event-driven microservice consumes data from one or multiple data streams, processes the data, writes the results to a data store, and exposes the final data through an API for external users to access.

The figure below visualizes the high-level architecture of a streaming application and its components: data streams (e.g. Kafka), stream processor (e.g. Flink), database (e.g. Postgres), and API server (e.g. GraphQL server).

[Image: Streaming Application Architecture]

An actual event-driven microservice might have a more intricate architecture, but it will always include these four elements: a system for managing data streams, an engine for processing streaming data, a place to store the results, and a server to expose the service endpoint.

This means an event-driven architecture has two stages: the preprocess stage, which processes data as it streams in, and the query stage, which processes user requests against the API. Each stage handles data, but they differ in what triggers the processing: incoming data triggers the preprocess stage, while user requests trigger the query stage. The preprocess stage handles data before the user needs it, and the query stage handles data when the user explicitly requests it.
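Stripped of all infrastructure, the difference between the two stages fits in a few lines of plain Java. Assume a stream of purchase amounts and a total that users may request; everything here is illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class TwoStages {
    // Preprocess stage: triggered by incoming data; the answer is ready before anyone asks.
    static double runningTotal = 0;
    static void onEvent(double amount) { runningTotal += amount; }

    // Query stage: triggered by a user request; computed on demand from stored events.
    static final List<Double> store = new ArrayList<>();
    static double totalOnRequest() {
        return store.stream().mapToDouble(Double::doubleValue).sum();
    }

    public static void main(String[] args) {
        for (double amount : new double[]{9.99, 4.50, 20.00}) {
            onEvent(amount);   // work done per event as it streams in
            store.add(amount); // raw event persisted for query-time processing
        }
        System.out.println(runningTotal);     // answer precomputed as events arrived
        System.out.println(totalOnRequest()); // same total, computed at request time
    }
}
```

The preprocessed total costs work on every event but answers instantly; the query-time total costs nothing until someone asks but must scan the stored events on each request. That, in miniature, is the tradeoff this article examines.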

Understanding these two stages is vital for the successful implementation of event-driven microservices. Unlike most web services with only a query stage or data pipelines with only a preprocess stage, event-driven microservices require a combination of both stages.

This leads to the question: Where should data transformations be processed? In the preprocessing stage or the query stage? And what’s the difference, anyways? That’s what we will be investigating in this article.

· 9 min read
Matthias Broecheler

Stream processing technologies like Apache Flink introduce a new type of data transformation that’s very powerful: the temporal join. Temporal joins add context to data streams while being efficient and fast to execute.

[Image: Temporal Join]

This article introduces the temporal join, compares it to the traditional inner join, explains when to use it, and why it is a secret superpower.
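Before diving in, here is a toy sketch of temporal-join semantics in plain Java, assuming a versioned table of currency rates: each event joins against the rate that was valid at the event's timestamp rather than the latest rate (names and numbers are invented).

```java
import java.util.List;

public class TemporalJoinSketch {
    record RateVersion(long validFrom, double rate) {}

    // Return the most recent version whose validity started at or before eventTime.
    static double rateAsOf(List<RateVersion> versions, long eventTime) {
        double result = Double.NaN; // no version valid yet
        for (RateVersion v : versions) { // versions ordered by validFrom
            if (v.validFrom() <= eventTime) result = v.rate();
        }
        return result;
    }

    public static void main(String[] args) {
        List<RateVersion> eurUsd = List.of(
                new RateVersion(0, 1.05), new RateVersion(100, 1.08), new RateVersion(200, 1.10));
        // An order placed at time 150 joins against the rate that was valid then:
        System.out.println(rateAsOf(eurUsd, 150)); // 1.08, not the latest 1.10
    }
}
```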


· 9 min read
Matthias Broecheler

In the world of data-driven applications, Apache Flink is a powerful tool that transforms streams of raw data into valuable results. But how do you make these results accessible to users, customers, or consumers of your application? Most often, we found the answer to that question was: GraphQL. GraphQL gives users a flexible way to query for data, makes it easy to ingest events, and supports pushing data updates to the user in real-time.
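Those three capabilities map directly onto GraphQL's three operation types: queries for flexible reads, mutations for ingesting events, and subscriptions for real-time pushes. A hypothetical schema sketch, held in a Java text block for illustration, with invented type and field names:

```java
public class GraphQLSchemaSketch {
    static final String SCHEMA = """
        type SensorReading {
          sensorId: ID!
          temperature: Float!
          eventTime: String!
        }

        type Query {          # flexible reads over the processed results
          readings(sensorId: ID!, limit: Int): [SensorReading!]!
        }
        type Mutation {       # straightforward ingestion of new events
          addReading(sensorId: ID!, temperature: Float!): SensorReading!
        }
        type Subscription {   # real-time pushes of data updates to the user
          onReading(sensorId: ID!): SensorReading!
        }
        """;

    public static void main(String[] args) {
        System.out.println(SCHEMA);
    }
}
```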

[Image: Flink hearts GraphQL]

In this blog post, we’ll discuss what GraphQL is and why it is a good fit for Flink applications. Like peanut butter and jelly, Flink and GraphQL don’t seem related, but the combination is surprisingly good.


How To Access Flink Results?

Quick background before we dive into the details. Apache Flink is a scalable stream processor that can ingest data from multiple sources, integrate, transform, and analyze the data, and produce results in real time. Apache Flink is the brain of your data processing operations.


But Apache Flink cannot make the processed results accessible to users of your application. Flink has an API, but that API is only for administering and monitoring Flink jobs. It doesn’t give outside users access to the result data. In other words, Flink is a brain without a mouth to communicate results externally.

To make results accessible, you have to write them somewhere and expose them through an interface. But how? We have built a number of Flink applications and in most cases, the answer was: write the results to a database or Kafka and expose them through an API. Over the years, our default choice for the API has become GraphQL. Here’s why.

· 5 min read
Matthias Broecheler

Apache Flink is an incredibly powerful stream processor. But to build a complete application with Flink, you need to integrate multiple complex technologies, which requires a significant amount of custom code. DataSQRL is an open-source tool that simplifies this process by compiling SQL into a data pipeline that integrates Flink, Kafka, Postgres, and an API layer.

DataSQRL allows you to focus on your application logic without getting bogged down in the details of how to execute your data transformations efficiently across multiple technologies.

We have built several applications in Flink: recommendation engines, data mesh endpoints, monitoring dashboards, Customer 360 APIs, smart IoT apps, and more. Across those use cases, Flink proved to be versatile and powerful in its ability to instantly analyze and aggregate data from multiple sources. But we found it quite difficult and time-consuming to build applications with Flink.

[Image: DataSQRL compiled data pipeline]

To start, you need to learn Flink: the Table and DataStream APIs, watermarking, windowing, and all the other stream processing concepts. Flink alone gets our heads spinning. And Flink is just one component of the application.

To build a complete data pipeline, you need Kafka to hold your streaming data and a database like Postgres to query the processed data. On top, you need an API layer that captures input data and provides access to the processed data. Your team must learn, implement, and integrate multiple complex technologies. It takes a village to build a Flink app.


That’s why we built DataSQRL. DataSQRL compiles the SQL that defines your data processing into an integrated data pipeline that orchestrates Flink, Kafka, Postgres, and API - saving us a ton of time and headache in the process. Why not let the computer do all the hard work?

Let me show you how DataSQRL works by building an IoT monitoring service.

· 8 min read
Matthias Broecheler

When creating data-intensive applications or services, your data logic (i.e. the code that defines how to process the data) gets fragmented across multiple data systems, languages, and mental models. This makes data-driven applications difficult to implement and hard to maintain.

SQRL is a high-level data programming language that compiles into executables for all your data systems, so you can implement your data logic in one place. SQRL adds support for data streams and relationships to SQL while maintaining its familiar syntax and semantics.

Why Do We Need SQRL?

[Image: Data layer of a data-driven application]

The data layer of a data-driven application comprises multiple components: There’s the good ol’ database for data storage and queries, a server for handling incoming data and translating API requests into database queries, a queue/log for asynchronous data processing, and a stream processor for pre-processing and writing new data to the database. Consequently, your data processing code becomes fragmented across various systems, technologies, and languages.