
Heroku AI

If you’ve built a RAG (Retrieval Augmented Generation) system, you’ve probably hit this wall: your vector search returns 20 documents that are semantically similar to the query, but half of them don’t actually answer it.

A user asks “how do I handle authentication errors?” and gets back documentation that mentions authentication, errors, and error handling (all close neighbors in embedding space), but only one or two results actually answer the question.

This is the gap between demo and production. Most tutorials stop at vector search; this reference architecture shows what comes next. The AI Search reference app demonstrates how to build production-grade enterprise AI search using Heroku Managed Inference and Agents.

Today, we are announcing the general availability of reranking models on Heroku Managed Inference and Agents, featuring support for Cohere Rerank 3.5 and Amazon Rerank 1.0.

Semantic reranking models score documents based on their relevance to a specific query. Unlike keyword search or vector similarity, rerank models understand nuanced semantic relationships to identify the most relevant documents for a given question. Reranking acts as your RAG pipeline’s high-fidelity filter, decreasing noise and token costs by identifying which documents best answer the specific query.
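As a concrete sketch of where reranking sits in a RAG pipeline: vector search returns a broad candidate set, the rerank model scores each (query, document) pair, and only the top-scoring documents reach the LLM. The scoring function below is a toy stand-in for a call to a provisioned rerank model such as Cohere Rerank 3.5; the function names, sample documents, and `k` are all illustrative assumptions.

```python
# Two-stage retrieval sketch: vector search produces candidates, then a
# rerank model scores each (query, document) pair and only the top-k
# survive. `rerank_scores` is a placeholder for the real rerank call,
# which returns one relevance score per document.

def rerank_scores(query: str, documents: list[str]) -> list[float]:
    """Toy scorer (token overlap with the query). Replace with a call
    to your provisioned rerank model."""
    q = set(query.lower().split())
    return [len(q & set(d.lower().split())) / max(len(q), 1) for d in documents]

def rerank_top_k(query: str, candidates: list[str], k: int = 3) -> list[str]:
    """Keep only the k candidates the rerank model scores highest."""
    scores = rerank_scores(query, candidates)
    ranked = sorted(zip(candidates, scores), key=lambda p: p[1], reverse=True)
    return [doc for doc, _ in ranked[:k]]

candidates = [
    "Handling authentication errors: retry with a refreshed token.",
    "Our logo usage guidelines.",
    "Error handling in batch jobs.",
]
top = rerank_top_k("how do I handle authentication errors?", candidates, k=2)
```

Feeding only `top` to the LLM, instead of the full candidate list, is what cuts noise and token spend.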

This month marks a significant expansion for Heroku Managed Inference and Agents, directly accelerating our AI PaaS framework. We’re announcing a substantial addition to our model catalog, providing access to leading proprietary AI models such as Claude Opus 4.5 and Nova 2, as well as open-weight models such as Kimi K2 Thinking, MiniMax M2, and Qwen3. These models are fully managed, secure, and accessible via a single CLI command. We have also refreshed aistudio.heroku.com.

Heroku is launching automatic prompt caching starting December 18, 2025. Prompt caching delivers a notable, zero-effort performance increase for Heroku Managed Inference and Agents. Enabled by default, this feature is designed to deliver significantly faster responses for common workloads. We have taken a pragmatic approach: caching currently applies only to system prompts and tool definitions, not to user messages or conversation history. You can disable caching for any request by setting the X-Heroku-Prompt-Caching: false header.
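The per-request opt-out can be sketched as follows. Only the X-Heroku-Prompt-Caching header comes from the announcement above; the endpoint path, model name, and helper are illustrative assumptions about an OpenAI-compatible chat completions call.

```python
import json

# Sketch of building a chat-completions request that opts out of
# Heroku's automatic prompt caching for this one call. The endpoint
# path and model name are illustrative; the documented opt-out is the
# X-Heroku-Prompt-Caching header.

def build_request(inference_url: str, api_key: str, messages: list[dict],
                  prompt_caching: bool = True) -> dict:
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    if not prompt_caching:
        # Caching is on by default; send the header only to disable it.
        headers["X-Heroku-Prompt-Caching"] = "false"
    return {
        "url": f"{inference_url}/v1/chat/completions",
        "headers": headers,
        "body": json.dumps({"model": "claude-opus-4-5", "messages": messages}),
    }

req = build_request("https://us.inference.example.com", "my-key",
                    [{"role": "user", "content": "hi"}], prompt_caching=False)
```

Leaving `prompt_caching=True` (the default) sends no extra header, so the platform caches system prompts and tool definitions automatically.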

Today’s businesses face tremendous complexity in the tools, data silos, and systems that teams must navigate to deliver unique and engaging experiences to their customers. Meanwhile, developers can spend only a fraction of their time coding due to the cognitive load of technology complexity, constant context switching, and figuring out how to adopt AI effectively in their daily work.

The Heroku AI PaaS is the Cloud Native Application Platform from Salesforce for seamlessly building and scaling custom services, whether for greenfield app development, modernizing existing apps, or as part of a Salesforce cloud implementation.

At Dreamforce, we are excited to introduce new innovations to our AI PaaS that expand the capabilities of every Salesforce org and empower new builders. Today’s announcement includes innovations in three key areas:

  1. Expanding the capabilities of every Salesforce org with the flexibility of more pro-code and elastic compute, along with enterprise performance, scale, and security enhancements.
  2. Enhancing and expanding the data foundation to empower our customers’ modern and AI app strategies.
  3. Delivering new vibes using AI to make the process of building new applications as accessible as sending a text message.

Introducing the pilot of Heroku Vibes, your collaborative agent for turning ideas into running apps.

For those who have been with us on this journey for a while, the name “Heroku Garden” might stir up a bit of nostalgia. It was the web experience that enabled developers to become immediately productive in creating and deploying Rails applications with a turnkey, opinionated environment, with the goal of making software easier and more accessible. That seed …

Ever found yourself in an endless loop of tweaking a prompt, running your code, and waiting to see if you finally got the output you wanted? That slow, frustrating feedback cycle is a common headache for AI developers. What if you could speed it up and get back to what you do best: building amazing applications?

We're excited to introduce Heroku AI Studio, a new set of tools designed to streamline your generative AI development from prompt to production. We've focused on creating a more intuitive and efficient workflow, so you can focus on innovation instead of wrestling with your development environment. If you use the Heroku Managed Inference and Agents add-on, this new tool is poised to become an essential part of your workflow.

The AI revolution presents a critical challenge: moving from experimentation to production. This year, Heroku has evolved beyond a traditional PaaS to become an AI PaaS, a fully managed platform designed to solve this problem and accelerate the delivery of AI-powered apps. New capabilities like AppLink, Managed Inference and Agents, and MCP on Heroku make the platform ready for the AI era.

This evolution accelerates the delivery of AI-powered apps and intelligent agents, bringing a new level of speed and simplicity to your development process. Dreamforce 2025 is your chance to see firsthand how the Heroku AI PaaS is delivering business value faster and with less complexity.

Building intelligent applications requires powerful, cost-effective AI. Today, we’re simplifying that process by making Amazon’s cutting-edge Nova models directly available via Heroku Managed Inference and Agents. Provisioning these models is as simple as attaching the add-on to your Heroku application, providing a direct, managed path for developers and businesses to leverage a new class of powerful and cost-effective AI models with unparalleled simplicity.
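The attach step can be sketched as a couple of CLI commands. This assumes the add-on service is named heroku-inference, and the plan name below is an illustrative placeholder, not a confirmed Nova plan; list the current plans to find the real names.

```shell
# List available model plans for the Managed Inference and Agents add-on.
heroku addons:plans heroku-inference

# Attach a model to an existing app ("nova-lite" is an assumed plan name).
heroku addons:create heroku-inference:nova-lite -a my-app

# Inspect the config vars the add-on injects (endpoint URL, API key).
heroku config -a my-app
```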

Start building with OpenAI’s new open-weight model, gpt-oss-120b, now available on Heroku Managed Inference and Agents. This gives developers a powerful, transparent, and flexible way to build and deploy AI applications on the platform they already trust. Access gpt-oss-120b with our OpenAI-compatible chat completions API, which you can drop into any OpenAI-compatible SDK or framework.
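Because the API is OpenAI-compatible, responses follow the standard chat completions shape, so existing parsing code carries over unchanged. A minimal sketch of extracting the assistant reply; the sample payload below is hand-written for illustration, not real model output.

```python
import json

# Parse an OpenAI-compatible chat completions response: the assistant's
# reply lives at choices[0].message.content.

def extract_reply(response_json: str) -> str:
    data = json.loads(response_json)
    return data["choices"][0]["message"]["content"]

# Hand-written sample response in the standard chat completions shape.
sample = json.dumps({
    "model": "gpt-oss-120b",
    "choices": [{
        "index": 0,
        "message": {"role": "assistant", "content": "Hello!"},
        "finish_reason": "stop",
    }],
})
reply = extract_reply(sample)
```

The same parsing works whether the response comes from an OpenAI-compatible SDK or a raw HTTP call to the endpoint.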
