Discover Meko: The Data Infrastructure for Agents That Work and Learn Together
Knowledge, memory, conversations, and decision traces for multi-agent systems, unified with a single data infrastructure behind one MCP endpoint
If you’re reading this, there’s a good chance you are, or soon will be, responsible for bringing AI agents into the world. If that’s the case, you’re already contending with some major challenges when trying to deliver production-ready agentic AI applications.
At the top of that list? Agents that start from scratch with no knowledge-sharing, fragmented explainability and auditing, and high costs, all tied together by a tangled mess of overlapping underlying databases.
Builders need a way to abstract away this complexity and solve the multi-agent memory and knowledge problem, or they’re stuck building infrastructure and not multi-agent systems.
This blog discusses the key challenges developers encounter when building and releasing agentic AI applications and details how Meko, the agent-native data infrastructure for collective memory, shared knowledge, and decision traces, solves these issues.
The Challenge: Agents Can’t Share What They Know
When you start building a production-ready agentic application, the data infrastructure problem quickly becomes apparent. Multi-agent applications don’t fail because the agents can’t reason; they fail because they can’t share what they know.
The Multi-Agent System Failure Taxonomy (MAST) found that 36.9% of failures stem from inter-agent misalignment. Your agent needs to remember preferences, access knowledge, and store and retrieve conversation history. It needs to do all of this efficiently, at variable load, potentially across multiple concurrent agents working together.
What does that look like with traditional databases?
You first bolt together:
- PostgreSQL for relational data
- pgvector for semantic search
- A separate graph database for memory
- An object store for conversations
Then you build custom logic for tiering, caching, and observability on top. Suddenly, you’re no longer building an agent; you’re building infrastructure.
This patchwork approach creates data silos, drives up costs, introduces latency from cross-system queries, and results in infrastructure that hinders how agents naturally work. Development cycles slow down, performance becomes unpredictable, and costs climb with every added system.
The root cause is simple: traditional databases expose object models such as relational tables, vectors, and graphs, whereas agents work with concepts like memory, knowledge, and conversation history.
Even when teams solve the infrastructure problem, they hit another wall: each agent operates in isolation.
One agent’s learned behavior, conversation patterns, and knowledge updates don’t propagate to the rest of the system. You end up with a fleet of individual agents that can’t benefit from each other’s experience and insight.
These are the gaps Meko was built to close.
Introducing Meko
Meko is the agent-native data infrastructure that enables multi-agent systems to learn together, building collective memory and shared knowledge that compounds across the entire system.
Whether you’re shipping a single agent or running multi-agent applications at scale, Meko’s agent-native data layer gives your agents collective memory, shared knowledge, and full decision traces of what they did and why. Development teams finally get native persistence without the complexity of stitching together multiple database systems.
Meko is built from the ground up around the four data constructs that agents actually work with:
- Knowledge is the unified, queryable layer your agents reason over
- Memory is what they retain and share
- Conversations are the full interaction history, automatically tiered for cost
- Traces are the decision record that connects all three
Meko exposes these four constructs through the datapack. Each datapack encapsulates the right storage, indexing, and retrieval behavior for an agentic application, rather than shoehorning agent data into a generic table, a vector store, or a pile of markdown files.
Through a single MCP endpoint, Meko gives multi-agent systems a compounding collective memory, shared knowledge, and full decision traceability.
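To make the four constructs concrete, here is a minimal in-memory sketch of what a datapack holds. The class and method names are hypothetical illustrations, not Meko's actual API; in practice an application reaches its datapack through the MCP endpoint rather than a local object.

```python
from dataclasses import dataclass, field

@dataclass
class Datapack:
    """Illustrative model of the four agent-native constructs (names assumed)."""
    knowledge: dict = field(default_factory=dict)      # unified, queryable layer
    memory: dict = field(default_factory=dict)         # per-agent retained state
    conversations: list = field(default_factory=list)  # full interaction history
    traces: list = field(default_factory=list)         # decision records

    def remember(self, agent_id: str, key: str, value: str) -> None:
        # Every memory write also leaves a trace entry, which is how the
        # constructs stay connected: traces record what changed and for whom.
        self.memory.setdefault(agent_id, {})[key] = value
        self.traces.append({"agent": agent_id, "op": "memory.write", "key": key})

pack = Datapack()
pack.remember("planner", "user_tz", "UTC+2")
```

The point of the sketch is the shape, not the storage: the four constructs live behind one interface rather than four separate systems.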

Memory: Collective Intelligence That Compounds Across Every Agent
Most agent frameworks give each agent its own memory. Meko gives your entire system a memory that compounds.
Each agent has its own private memory: working state during a task, episodic history of past interactions, semantic facts that persist across sessions, and procedural patterns of what worked.
When an agent learns or corrects something during a conversation, that learning is automatically promoted from per-agent memory into the shared knowledge layer, and every other agent in the datapack picks it up from there. Per-agent memory plus shared knowledge is the mechanism. Collective memory is the result.
Meko handles the underlying operations: entity extraction, graph updates, per-agent scoping, and promotion to shared knowledge. Your agents inherit the benefits without your team having to build or maintain the infrastructure.
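The promotion mechanic can be sketched in a few lines. This is an assumed, simplified model (names and the `promote` flag are hypothetical); in Meko the promotion decision, entity extraction, and graph updates happen inside the data layer, not in agent code.

```python
class MemoryLayer:
    """Sketch of per-agent memory plus promotion into shared knowledge."""

    def __init__(self):
        self.private = {}  # agent_id -> {fact: value}, scoped per agent
        self.shared = {}   # knowledge every agent in the datapack can see

    def learn(self, agent_id: str, fact: str, value: str, promote: bool = False):
        self.private.setdefault(agent_id, {})[fact] = value
        if promote:  # e.g. a correction confirmed during a conversation
            self.shared[fact] = value

    def recall(self, agent_id: str, fact: str):
        # An agent's own memory wins; otherwise fall back to shared knowledge.
        return self.private.get(agent_id, {}).get(fact, self.shared.get(fact))

layer = MemoryLayer()
layer.learn("agent_a", "billing_cycle", "monthly", promote=True)
```

Once `agent_a` promotes the fact, `agent_b` can recall it without ever having learned it directly, which is the compounding effect described above.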
Meko’s memory layer covers all five types that agents actually use:
- Working memory for transient state during a task
- Episodic memory for task histories and interaction logs
- Semantic memory for durable facts and domain knowledge that persist across sessions
- Procedural memory for learned workflows and tool-use patterns
- Shared memory that spans all agents in the datapack as common ground for coordination
Different memory types need different retention and retrieval rules, not uniform treatment, and Meko handles that distinction natively.
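One way to picture "different retention rules per type" is a per-type TTL table. The window lengths below are illustrative placeholders, not Meko defaults, and `None` stands for "persists indefinitely."

```python
from datetime import timedelta

# Hypothetical retention policy per memory type (illustrative values only).
RETENTION = {
    "working":    timedelta(hours=1),   # transient state during a task
    "episodic":   timedelta(days=30),   # task histories and interaction logs
    "semantic":   None,                 # durable facts persist across sessions
    "procedural": None,                 # learned workflows and tool-use patterns
    "shared":     None,                 # common ground across the datapack
}

def is_expired(memory_type: str, age: timedelta) -> bool:
    ttl = RETENTION[memory_type]
    return ttl is not None and age > ttl
```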
Meko ships with mem0 as the default memory implementation, with entity extraction, graph updates, and other LLM-driven memory tasks optimized within Meko itself, rather than burning your agent’s token budget. Any other memory provider can be plugged in through the same interface, so you are not locked into a single approach.
This matters most at the handoff. The single biggest source of failure in multi-agent systems is not model quality or orchestration logic, but the state that gets lost or diluted when work passes from one agent to another, or from an agent back to a human.
When Agent B picks up where Agent A left off, what usually transfers is the curated output: a result, a summary, a status. The reasoning behind it gets lost: the logic, the assumptions, the intermediate decisions. Meko preserves that reasoning and its context alongside the curated output, so the next agent in the chain inherits not just what was decided, but the understanding that shaped the decision. The same applies when a human steps in; instead of asking “what did you do and why,” they can see the entire decision-making process.
Meko’s foundational concept is the datapack. An agentic application creates a datapack and interacts with it through an MCP server.
The datapack maintains per-agent memory and shareable knowledge simultaneously, giving you isolation where you need it and sharing where it creates value. You don’t have to spend time modeling your agent’s data needs into database schemas, as Meko exposes the agent-native primitives your code actually needs to work with.
The key point is that the work of running memory, model selection, extraction, and graph maintenance happens inside the data layer, not in your agent code.

Knowledge: One Unified Layer Every Agent Can Access And Contribute To
Agent systems are only as good as their knowledge. The problem is that knowledge is never static. It arrives from conversations as they happen, from real-time data sources as they update, and from slower-changing documents and knowledge bases as they evolve.
Keeping that knowledge current, accessible, and unified across multiple agents is where traditional data stacks fall apart. Meko continuously builds and maintains that knowledge, without manual pipeline management.
When you add a knowledge source (for example, a PDF, a SQL table, an HTML page, a markdown file, or a live data feed), Meko automatically processes it, generating embeddings, creating summaries, and updating the relevant indexes. When that source changes, Meko handles the incremental updates.
Your agents retrieve what they need, when they need it, without you having to build or maintain the pipelines in between: structured data from SQL tables and unstructured data from PDFs and documents, all live in a single, unified layer that every agent in your system can access and contribute to.
Hybrid queries are where this matters most. A typical agent memory query needs results that are semantically similar to the current task, created within the last hour, tagged with a specific workflow ID, and not marked as superseded. That requires vector similarity, relational filtering, and metadata traversal in one query.
With a stitched stack, that’s three round trips and custom merge logic to reconcile the results. With Meko, it’s one PostgreSQL statement against one data layer.
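A sketch of what that single statement could look like. The table and column names are hypothetical, not Meko's schema; the `<=>` operator is pgvector's cosine-distance operator, so similarity ranking, relational filtering, and metadata predicates land in one query.

```python
# Hypothetical shape of the hybrid memory query described above.
# Table/column names are illustrative; parameters use psycopg-style binding.
HYBRID_MEMORY_QUERY = """
SELECT id, content
FROM memories
WHERE workflow_id = %(workflow_id)s
  AND created_at >= now() - interval '1 hour'
  AND NOT superseded
ORDER BY embedding <=> %(query_embedding)s  -- pgvector cosine distance
LIMIT 10;
"""
```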
This means there is no need to stitch results together across systems and no latency from cross-database round trips. Your agents get real knowledge that grows with your system, not a static document store loosely bolted onto a database with bespoke glue code.

Decision Traces: Know What Your Agents Learned And Why They Did What They Did
Production multi-agent systems require more than performance; they also require stakeholders’ trust.
Trust requires knowing not just what an agent did, but how it came to know what it knows. Meko captures this as a decision trace: the full chain of thought behind every agent action.
This includes everything from the prompt that kicked it off, to the plan the agent formed, to the tool calls and reasoning steps it executed, to what it ultimately learned and propagated to the rest of the system.
Decision traces go beyond conventional observability. Standard tracing tells you the technical sequence of steps. A decision trace tells you the why behind those steps and connects them to the knowledge and memory updates that resulted.
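The structure of a decision trace as described above can be sketched as a small record type. The field names here are assumptions for illustration, not Meko's actual trace schema, and the example values are invented.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionTrace:
    """Illustrative decision-trace shape (hypothetical field names)."""
    prompt: str                                  # what kicked the run off
    plan: str                                    # the plan the agent formed
    steps: list = field(default_factory=list)    # tool calls and reasoning steps
    learned: list = field(default_factory=list)  # knowledge propagated onward

    def record_step(self, kind: str, detail: str) -> None:
        self.steps.append({"kind": kind, "detail": detail})

trace = DecisionTrace(prompt="Refund order 4411", plan="verify, then refund")
trace.record_step("tool_call", "orders.lookup(4411)")
trace.learned.append("orders over $500 need manager approval")
```

The `learned` field is what distinguishes this from a conventional trace: it links the run's steps to the knowledge and memory updates that resulted.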
Meko provides full decision-tracing functionality, giving teams auditability and the ability to tune performance. Every decision trace is captured end-to-end, and you can pull it into the observability tooling your team already uses. Meko generates the signal; you decide where it lives.
This is no longer just a developer convenience. As regulators move toward mandatory documentation requirements for high-risk AI systems, including a 10-year retention policy under the EU AI Act, production teams need a defensible record of what their agents knew, how they came to know it, and what data operations underpinned every interaction.
Because every memory read, memory write, knowledge update, and retrieval in Meko routes through a single MCP endpoint backed by a unified database, that record is a property of the architecture, not something you have to bolt on afterward. Compliance falls out of how the system is built.
Conversations: Full History, Better Economics
Agents need conversation history to do their jobs, but storing it all in a real-time database gets expensive fast. Object stores are cheaper but make data harder to query. Meko handles this with a three-tier model that runs transparently underneath your agents.
Recent conversations stay “hot” in YugabyteDB at millisecond query latency, where agents need them for active reasoning. As conversations age, they auto-tier to S3 object storage, keeping costs in line with how rarely older data is actually read. Beyond a configurable retention window, Meko keeps only summaries, which remain queryable and long-lived without incurring full-transcript costs. Within that window, the full, verbatim history is preserved across both tiers, so there is no trade-off between cost and completeness, and tiering occurs without manual intervention.
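The three-tier rule reduces to a simple age-based function. The window lengths below are illustrative placeholders; in Meko the retention window is configurable, not fixed.

```python
from datetime import timedelta

# Hypothetical tier boundaries for the three-tier conversation model.
HOT_WINDOW = timedelta(days=7)     # full transcripts in YugabyteDB
WARM_WINDOW = timedelta(days=90)   # full transcripts auto-tiered to S3

def tier_for(age: timedelta) -> str:
    if age <= HOT_WINDOW:
        return "hot"      # millisecond-latency queries for active reasoning
    if age <= WARM_WINDOW:
        return "warm"     # cheaper object storage, still the full text
    return "summary"      # only a queryable summary is retained
```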
Infrastructure: Built For How Agents Actually Run
Meko connects to any agentic framework that speaks MCP. You don’t need to adopt a new ecosystem; you add a data layer to the one you already have, all through a single MCP endpoint. Meko is built to work with Claude Code, Claude Desktop, Cursor, and other MCP-compatible frameworks.
Under the hood, Meko’s serverless, multi-tenant architecture is designed for the variable resource utilization inherent to multi-agent systems. An idle agent burns near-zero resources. An agent handling a burst of activity scales to full capacity instantly. Generic database architectures aren’t designed for this pattern; Meko is.
Everything runs on a unified, distributed PostgreSQL-compatible data layer built on YugabyteDB, which natively supports SQL, NoSQL, vector, time-series, and graph queries.
A single query can span multiple data models without stitching results across systems. Each tenant’s data is stored in a separate logical database, ensuring isolation and performance at multi-tenant scale. Your agents have a single place to store and query everything, with no additional databases to provision, integrate, or maintain.
Built on YugabyteDB
Meko’s stateful layer runs on YugabyteDB, the same horizontally scalable, PostgreSQL-compatible distributed database trusted by enterprises to run business-critical workloads. Production agentic applications inherit resilience and scalability without giving up the familiar Postgres interface developers already know.
Built For Real Agentic Workflows
Meko was built for developers and engineering teams shipping production multi-agent AI applications. When we talk with these teams, six patterns keep coming up in the systems they’re building.
- Knowledge handoff with context preserved. Agent B inherits Agent A’s reasoning, not just the final output, with no custom handoff schemas wired across three different systems.
- Collective learning across runs. What worked, what didn’t, and why, persisted across workflow runs automatically in one system rather than being rediscovered every time.
- Auditable learning trail. Full chain-of-thought traceability for what agents learned and how that learning flowed, ready for EU AI Act compliance.
- Conversation history with tiering. Recent history in-database for fast recall; older history auto-tiered to S3 to keep costs in check.
- Agent resumability. Full agent and sub-agent state persists in Meko, so any run can be paused and resumed reliably.
- Team and project memory. Coding standards, norms, and project context are portable across LLMs and tooling for individuals and teams.
If any of these patterns sound familiar, Meko was built for you!
Get Started Today
Request access to Meko’s fully managed service offering. The platform will support multi-region and multi-cloud deployments, enabling global scale and high availability for production AI systems.
We plan to make Meko available as open source software and follow a community-driven development model. Developers will be able to run Meko locally for experimentation, or deploy it across private, public, or hybrid clouds.
Explore Meko docs, spin up a datapack, and connect your first agent through the MCP server. Discover the data layer for agents that work and learn together today!
Get started with Meko:
- Visit the Meko website
- Request Access
- Read Meko docs
- Join our Discord
- Read the Meko launch press release