Vestmark manages more than $1.7 trillion in assets, and its CTO, Freedom Dumlao, joins Julián Duque to discuss how AI is helping advisors in their day-to-day work. Filmed at the Palace Hotel in downtown San Francisco, the pair discuss the role of AI in development and why all of Vestmark's new products are being built using Ruby.
Building Agentic Apps with RubyLLM
- Tools and Tips
- January 21st, 2026
- 20:24
Hosted by Julián Duque, Freedom Dumlao
Show Notes
Julián
Hello, hello! Welcome to Code[ish]. My name is Julián Duque, I am Principal Developer Advocate at Heroku, and I’m your host for the Code[ish] podcast. And today with me, I have Freedom Dumlao, CTO of Vestmark. Freedom, how are you doing?
Freedom
I’m doing great. Thank you so much for having me, Julián.
Julián
Before we start, please, what is Vestmark? What do you do? Tell me a little bit more.
Freedom
So Vestmark is a wealth tech company. We have a little over $1.7 trillion in assets under management. Our customers are asset management firms; we currently have six of the 10 largest asset management firms in North America as part of our customer base. And we've been growing that customer base into retail investment advisories and other firms that can make use of our portfolio management and trading technology.
Julián
Beautiful. So, we are right now in the era of AI and agentic applications. It’s become like the thing to talk about.
Freedom
That’s right.
Julián
Everybody is implementing systems and migrating to AI. What’s Vestmark doing in that area?
Freedom
So, we've been building agentic systems since before "agent" was the buzzword that it is today.
Julián
Okay, yeah.
Freedom
So as soon as we started getting access to large language model technology at Vestmark, the question we asked ourselves was: what is the thing I can do now that I couldn't do when I didn't have this technology, right? And I think everyone looks at AI and they obviously gravitate towards some kind of a chatbot, right?
Julián
Yes.
Freedom
That's a chat assistant thing. Or some kind of note-taking or note-summarizer application, or whatever. And those are interesting tools, and I think they're very valuable. But to us it was important to ask: what's now possible that wasn't possible before? Because we had chatbots before, right? We had note-taking apps. They're better now with large language models. And so we started looking at some of the workflows where our clients spend a lot of time doing things that aren't necessarily essential to the work that they do. And we wanted to solve the problem of scaling them up so that they could do more with the time that they've got in their day. So, we focused our first AI product there, and we just launched it. It's called the Advisor Assistant, and it's a tool for financial advisors that gives them a bunch of agentic capabilities to help them solve problems throughout their day-to-day. It gives them sort of a virtual home office, if you will, to work with. They can use it to automatically generate proposals, which is something that's been very time-consuming for them in the past. They can use it to automatically update and manage client details and information. So, it's a really incredible start to a set of tools that we're bringing to market for our clients.
Julián
Beautiful. You mentioned initially that most of the apps we have seen out there using generative AI are chatbots or assistants. You interact with this technology through text.
Freedom
That’s right.
Julián
With the tool you launched, what's different? How do users interact with it? What does the UX look like?
Freedom
So, in this case, it is a chatbot, but it is our first one that we've launched. We knew that a chatbot would be the best way to introduce our clients to this technology. But what we were seeing at the time when we started building this thing is that most of these tools looked like a question-and-answer kind of chatbot: how do I do this? Or some way of getting support or researching data from a knowledge base. And we wanted this to be a tool that does a thing, that actually goes and performs a task without the financial advisor having to sit there and come up with answers to questions. So, take a typical financial advisor generating a proposal for their client using our proposal tool. It's as efficient as can be, but historically, what they have to do is fill out a bunch of form fields, answer a bunch of questions, and ultimately submit that, and then the proposal gets generated. And if they want to do a few proposals with different ideas, it could take some time putting all that together. With the Advisor Assistant, the financial advisor can just go into the assistant and say, hey, I'm meeting with the Smith family. We already know about the Smith family, and I would like to come up with some proposals for them. And if the advisor wanted to, they could have the assistant generate 100 proposals and pick the 10 best based on some criteria, and it'll do that for them automatically. They don't have to sit there all day and generate proposals manually. It's something that we know really accelerates an advisor, especially in their ability to get ready for the meetings they're going to be having with their clients every day.
Julián
Beautiful. And is this solution just a single agent, or do you have a multi-agent architecture? How are you approaching these new kinds of architectures and problems?
Freedom
Yeah. That's a great question. So it is a multi-agent architecture. And what we learned pretty quickly is that, you know, we started building this product before MCP was a defined protocol. But we were already working with tool calls and things like that, even before MCP. And we learned early on that you can get a lot of pollution when you have too many tools available to any given agent, right? If you've just got dozens and dozens of tools, your agent is going to struggle to pick the right tool for the job at some point. And we also learned about this concept that's since been defined as context rot. Even if you have a really nice context window, and we keep seeing new foundation models with bigger and bigger context windows, it doesn't really matter that much, because at some point you fall off a cliff, the context has rotted away, and you've got to refresh or branch off from some earlier point. We saw those problems early on, and one of the solutions we came up with was building agent-to-agent connectivity. So, if you've got multiple…
Julián
Before A2A.
Freedom
Before A2A, yes.
Julián
Okay, so you are like ahead of…
Freedom
Yeah, when we saw the Google announcement about A2A, we were like, hey, we're doing that! So, yeah. It just seemed like the natural way to move forward with what we were doing. What we wanted to be able to do is say, hey, the financial advisor has a task they want to accomplish. That task is going to need a certain set of tools. So why give the agent every possible tool? Once we know what the task is, hand that work off to an agent that can perform the task using only the tools that are necessary for that task. And so the primary agent is really a broker for sub-agents that do the additional tasks. That felt like a natural way to build it. I used to work on a product called Alexa, and it was very similar. At the time, we would have an intent engine to figure out the intent of the initial request. Sometimes it was weather, but most of the time it was "play Baby Shark" when I was working there. But once we knew the intent, we would know which set of skills to give that intent to, to figure out who should be responding. And I'm looking at agent-to-agent as an evolution of that kind of algorithmic thinking, right? So, it's very similar.
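The broker pattern Freedom describes, a primary agent that classifies the task and hands it to a sub-agent carrying only that task's tools, can be sketched in plain Ruby. This is a hedged illustration with hypothetical names and no LLM calls; in a real system the model itself would do the intent classification and run a tool-calling loop inside each sub-agent.

```ruby
# Gem-free sketch of agent-to-agent brokering: the primary agent
# routes a request to a sub-agent, and each sub-agent exposes only
# the small toolset its task needs. All names are illustrative.

SubAgent = Struct.new(:name, :tools) do
  def handle(request)
    # A real sub-agent would run an LLM loop over just these tools.
    "#{name} handling #{request.inspect} with tools: #{tools.join(', ')}"
  end
end

class PrimaryAgent
  def initialize
    # Maps a task intent to the sub-agent responsible for it.
    @routes = {
      proposal:      SubAgent.new("ProposalAgent",
                                  %w[client_lookup generate_proposal rank_proposals]),
      client_update: SubAgent.new("ClientAgent",
                                  %w[client_lookup update_client]),
    }
  end

  # In production the intent would come from the model's classification;
  # here we accept it directly to keep the sketch runnable.
  def dispatch(intent, request)
    agent = @routes.fetch(intent) { raise ArgumentError, "no agent for #{intent}" }
    agent.handle(request)
  end
end

broker = PrimaryAgent.new
puts broker.dispatch(:proposal, "10 best proposals for the Smith family")
```

The design point is the narrow `tools` array per sub-agent: each delegate sees only what its task requires, which is the tool-pollution fix described above.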
Julián
That's beautiful. I've been playing with different orchestration tools for agents, building agents with Agentforce or LangGraph.
Freedom
Yeah.
Julián
Like Mastra for TypeScript. I'm a JavaScript and Node.js developer, so I always look for technologies I'm familiar with.
Freedom
Yeah.
Julián
So, I know you work with Ruby, but how's the AI ecosystem in Ruby? Could you tell me a little more about how, as a Ruby developer, I can start building AI and agents with Ruby libraries?
Freedom
Oh, yeah. I mean, every programming language has come up with great ways to work with large language models and to start building these kinds of capabilities. Python definitely got the head start there, right? You look at Python, you look at LangChain, you look at LangGraph, and you see a lot of work going into building a comprehensive system for constructing a large language model application. And JavaScript has some great tech as well that's sort of spawned from that. Even as we started our journey down this large language model application path, we started with LangGraph too, because it seemed like the natural choice.
Julián
Yes.
Freedom
The painful part was: now I have this other thing in this other language, and everything else I've got is in Ruby. So when I want to talk with the large language model, I've got to make a service call, and I'm smuggling credentials across the wire to make sure that the agent can't do something that the user can't do. It started to get very complex, and it didn't feel like it needed to be. So, I talked with other people in the Ruby community, and, by and large, two main Ruby gems were mentioned everywhere I went. Number one was RubyLLM by Carmine Paolino, I think is his name. And he's not the only contributor to it at this point; I think it's grown tremendously. And Active Agents, by a gentleman named Justin Bowen. I think that's also got many contributors beyond him as well.
Julián
Active Agents. That’s like a very fitting name for…
Freedom
Very fitting, yeah. If you're already building a Rails app, Active Agents fits in with the way that you build a Rails app. You define an agent in a very Rails-ish way, which is nice. RubyLLM is, I would say, a step below that and a bit broader. And it comes with so many capabilities out of the box. In fact, if you have a Rails app, you can add the RubyLLM gem and then generate a chat interface that works with whatever large language model you're using, and have a working UI just by running a generate command. Also very Ruby on Rails-ish. For us, RubyLLM became a natural fit. You can define tools as Ruby methods very easily and just give them to the RubyLLM main class, so you can plug in tools very easily. And orchestration in Ruby: for anybody who's ever used Ruby, Ruby is phenomenal at things like state machines and orchestration systems. Then the other stuff that's in LangGraph and LangChain is the connectivity, the HTTP requests out to the providers. RubyLLM, of course, handles that too, because most of those are API calls. So, for us, it was a pretty natural fit into our stack, and it allowed us to speed up development.
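The tool-definition pattern he mentions can be sketched without the gem. This is a simplified, gem-free illustration of the shape RubyLLM tools take (a class with a description, declared parameters, and an execute method); the `Tool` and `PortfolioLookup` classes here are hypothetical and are not RubyLLM's actual API, so check the gem's documentation for the real interface.

```ruby
# Gem-free sketch of the tool-calling pattern: a tool declares a
# description and parameters (which the orchestrator would send to
# the model), and implements execute for when the model calls it.

class Tool
  class << self
    attr_reader :desc, :params

    def description(text)
      @desc = text
    end

    def param(name, desc:)
      (@params ||= {})[name] = desc
    end
  end

  # The orchestrator calls this with arguments parsed from the
  # model's tool-call JSON (string keys).
  def call(args)
    execute(**args.transform_keys(&:to_sym))
  end
end

class PortfolioLookup < Tool
  description "Fetches a client's current portfolio holdings"
  param :client_name, desc: "Full name of the client"

  def execute(client_name:)
    # In a real app this would query the portfolio service.
    { client: client_name, holdings: %w[VTI BND] }
  end
end

# Minimal dispatch: run whichever tool the model asked for.
TOOLS = { "portfolio_lookup" => PortfolioLookup.new }

def handle_tool_call(name, args)
  TOOLS.fetch(name).call(args)
end

result = handle_tool_call("portfolio_lookup", { "client_name" => "Smith" })
puts result[:holdings].join(", ")  # prints "VTI, BND"
```

The point of the pattern is that each tool is self-describing: the description and parameter metadata are what the LLM sees when deciding which tool to call.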
Julián
There are multiple providers that you can use with RubyLLM?
Freedom
That’s right.
Julián
Like, can I use different models? Is it an external gem that you add for that, or does RubyLLM already have that support?
Freedom
RubyLLM will work with, at least, all of the ones that we use. Anything that uses the OpenAI spec will work. So, if you're using Heroku AI, for example, it'll work fine with that, no problem. If you're using Bedrock, it has no problem using that as well. And, you know, there may be some esoteric foundation model that hasn't yet been added, but I'm guessing they'd be happy to take a pull request if somebody wanted to add that capability.
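As a rough sketch of what pointing the gem at an OpenAI-compatible endpoint looks like: this is based on RubyLLM's documented configure block, but the exact setting names can change between versions, and the environment variable names and model string here are illustrative, so verify against the gem's current docs.

```ruby
# config/initializers/ruby_llm.rb -- hypothetical initializer.
# An OpenAI-compatible base URL lets the gem target services such as
# a Heroku-hosted inference endpoint or Azure OpenAI.
RubyLLM.configure do |config|
  config.openai_api_key  = ENV["INFERENCE_KEY"]  # assumed env var name
  config.openai_api_base = ENV["INFERENCE_URL"]  # OpenAI-compatible base URL
end

chat = RubyLLM.chat(model: "gpt-4o")  # model name is illustrative
```

Because the wire format is the OpenAI spec, swapping providers is mostly a matter of changing the base URL and key rather than rewriting application code.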
Julián
And because with some providers, like with LangChain or the AI SDK, I had to build a specific provider for Heroku AI just to get basic generations to work. But more complex things like function calls or structured outputs aren't 100% OpenAI-compatible across models. Bedrock, for example, doesn't have native structured output.
Freedom
That’s right.
Julián
You need to implement that either through tool calling or prompt engineering, so there are minor tweaks you have to make to the provider.
Freedom
Yeah. That’s right.
Julián
So what I see there is a good opportunity for open-source developers like me, contributors at our companies, or yourself as well…
Freedom
That’s right.
Julián
…to add these capabilities to these libraries.
Freedom
Absolutely. There's plenty, and it's growing incredibly fast. I'm just looking at it as an outsider and a consumer of this gem, but seeing the amount of contributions getting added and merged into the Ruby AI ecosystem is thrilling to watch, right? Because you can see the gravity. Sometimes when you use a technology like Ruby, which is foundational but has also been around for a very long time, people get this… it's funny, at least once a year, somebody posts something on Reddit like, oh, is Ruby dead? This question always comes up, which is crazy to me. But then you see this, right? You see this community swarm of people who are building and increasing these capabilities, and, you know, we're so far from dead. We're way out ahead. I think Ruby also happens to be a particularly good language for this, for a couple of reasons. The language itself is built to be very easy for a person to mentally adapt to whatever programming model they're used to. So, if you're trying to build something, or change data, or change an interface, or work with it in a slightly different way, it's very flexible, and it'll bend to whatever paradigm you're trying to create. You're not really stuck with any particular modality in the way that you use it.
Julián
I’ve always seen like Ruby as a beautiful language to work with.
Freedom
Yeah.
Julián
I still read Ruby code. I used to write Ruby back in the day; sadly, right now I don't have that opportunity, but it's still pretty readable to me. And I think for these new agentic and AI applications, using a language as simple as Ruby could be a good way in for people who are not familiar with these types of apps. One question, semi-related, and I want to have your opinion as a technical person and developer.
Freedom
Sure.
Julián
What's your take on using AI assistants to write code? Have you used those? How do they perform with Ruby code, for example? Because I know they're great at Python and maybe JavaScript, but for other programming languages, there might not be that amount of training data out there. What's the current status for Ruby with coding agents?
Freedom
I think it's a great question. We do use them. We use Claude Code. We use Cursor. We use Codex, ChatGPT. We try to give our engineering team access to whatever the best possible tools are, to help them choose the path that makes them the most productive. My experience has been, you know, I got off to an early start when Copilot was first released from GitHub, and I was so delighted that it was like this super autocomplete, right? I'd start typing a method, and it would complete the whole method for me. And I was like, "Wow, this is great." And I evolved along with everybody else as the capabilities of these things became better. And then I learned very quickly that when you start asking Claude Code, for example, to do something much bigger, it will certainly do something bigger, but not always what you want it to do.
Julián
Yes, exactly.
Freedom
Yeah. With Ruby code, it's interesting. It has no trouble at all writing good Ruby code, but Rails itself has had so many updates since these models were trained, and those updates have represented big improvements in what's possible. For example, from Rails 8 onward, there's a built-in authentication capability. Historically, when you build a Rails app, you almost always reach for Devise, right? But now you don't have to. If you want simpler authentication, you can use the one from Rails. So, if you ask Claude Code to start building you a Rails app, naturally, it just reaches for Devise and pulls it right in, and you're like, I didn't want that. So, what I found is: write the CLAUDE.md file, or write the AGENTS.md file. Give it the extra context, and then share that with the other developers on your team, and everyone will get that consistent experience. These systems generate a bunch of code very fast if you're not careful. And what I really appreciate is that when I have to dive into the code, because the agent has generated so much and now I've got to look at it, I'm diving into Ruby, which is very, very easy to read. So even if the code was just this enormous ball of nonsense, it's easy to follow and easy to read. It's easy to dive into one particular place and be like, ahh. And I think that's important. Whatever language you choose, if you're doing vibe coding, or just building parts of your system with an agent, you've got to ask yourself: what's the story when the agent ends and the human developer picks up? Where are they going to be when that moment occurs? Because in every development process, that's very likely to happen.
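As a hedged illustration of the kind of context file described here, a CLAUDE.md entry steering the agent toward the Rails 8 built-in authentication generator might look like the following. The conventions are hypothetical examples, not Vestmark's actual setup; only the `bin/rails generate authentication` command is real Rails 8.

```markdown
# CLAUDE.md — project conventions (hypothetical example)

## Authentication
- This is a Rails 8 app. Use the built-in authentication generator
  (`bin/rails generate authentication`); do NOT add the Devise gem.

## General
- Prefer Rails defaults over third-party gems unless a gem is
  already in the Gemfile.
- Run the test suite (`bin/rails test`) before declaring a task done.
```

Checking a file like this into the repository is what gives every developer on the team the same consistent agent behavior.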
Julián
Yeah, and one thing I really like is that people are becoming more conscious of how to implement this: we are moving to a more spec-driven approach.
Freedom
Yes.
Julián
Okay, like, let's start from a well-defined spec with a well-defined architecture: this is exactly what I need. And the assistants are pretty good at following those instructions.
Freedom
Yeah.
Julián
If you start with a pretty big prompt, sure, it will do it.
Freedom
Yeah.
Julián
But you are going to get what is commonly known as slop.
Freedom
Yep. And a lot of it. A lot of it in a short amount of time. Yeah. The planning step, you can't overlook it, honestly. It seems obvious now, though it wasn't at first, that we should probably spend some time planning. But the great thing is that the tool is a great assistant at planning as well, until you're ready to start coding. So at Vestmark, we did a company-wide training that I led on how to use Claude Code. One of the first things I pointed out is: don't forget to run /init first and let it scan your project and put together its own understanding. Then you read that and say, "Okay, here's what it knew and here's what it missed." Then you update the CLAUDE.md. Then you start planning and say, "Here's what I think I want to do. Let's build a plan together." And then that plan turns into the set of tasks that ultimately need to be completed. That workflow has, more often than not, been the most successful way to get something new built and deployed and delivered. The other thing it's really good at, though: we've been talking about building new stuff, and that's great. I self-identify as a maker; I love making things and building things. But it's also incredibly good at helping you with an impenetrable code base, like a legacy code base. At Vestmark, all of our new products are being built using Ruby, but we also have a core piece of technology that's really the heart of what we do. It is a very sophisticated Java code base that has been lovingly maintained over the last two decades. And when a new developer joins the team, it can be really difficult to become proficient in this code base quickly, because there are a lot of algorithms in there that won't be intuitive to somebody who's not from this industry.
And having access to an assistant that can help you very quickly get answers eliminates all those things that new developers run into: who do I go ask a question to? Am I going to be embarrassed to ask this? Is this a dumb question? Should I already know? Right? You're never going to be embarrassed to ask your coding assistant a question. There's no judgment. So I hear this idea about the demise of the junior developer, but I have a very different relationship to AI with respect to junior developers, because now a junior developer can join the team and become proficient in our code base in way less time than they ever could before. And that means there's room for them to come in and be productive and be valuable without having to spend six or eight months becoming an expert.
Julián
It’s a faster onboarding as well.
Freedom
That’s right.
Julián
And yes, it is a pretty good opportunity to get people into the mindset that even though we have these tools writing code for us, you don't need to lose or miss the foundations.
Freedom
That’s right.
Julián
The foundations are the most important part. Freedom, this was an amazing discussion. I'm inspired to give RubyLLM a try and start doing a couple of demos.
Freedom
You’re going to enjoy it.
Julián
On Heroku AI, of course.
Freedom
Yeah.
Julián
And see how we can plug it in with MCP and do crazy experiments. This is pretty inspiring. It was good to have you here on the Code[ish] podcast, and I'm looking forward to a new opportunity to see what's new with Vestmark and how your solution is evolving.
Freedom
We can’t wait to show you what’s next.
Julián
Awesome. Thank you so much.
Freedom
Thank you.
About Code[ish]
A podcast brought to you by the developer advocate team at Heroku, exploring code, technology, tools, tips, and the life of the developer.
Hosted By:
Julián Duque
with Guest:
Freedom Dumlao
CTO, Vestmark
![Code[ish]](https://wp-www-staging.heroku.com/wp-content/uploads/2025/06/codeish-cover.png?w=300)