Postgres Without Limits
New YugabyteDB Functionality for Ultra-Resilient AI Apps
- YugabyteDB MCP Server for seamless AI-powered application experiences (also now available on AWS Marketplace)
- Integrations with LangChain, Ollama, LlamaIndex, AWS Bedrock, and Google Vertex AI
- Multi-model API support with the addition of the MongoDB API for scaling MongoDB workloads, alongside PostgreSQL (YSQL) and Cassandra (YCQL)
- Online upgrades and downgrades across major PostgreSQL versions with 99.99% uptime
- General Availability of enhanced PostgreSQL compatibility with generated columns, foreign keys on partitioned tables, and multi-range aggregates (see the sketch after this list)
- Connection pooling with 5x better performance than PostgreSQL
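The first two of those compatibility features are easy to try from any PostgreSQL client. Below is a minimal sketch using psycopg2 against a local YSQL endpoint; the connection settings (port 5433, the yugabyte database and user) are the YSQL defaults, and the table names are illustrative assumptions, not part of the product.

```python
# Minimal sketch: exercising generated columns and foreign keys on
# partitioned tables over YugabyteDB's YSQL API. Table names and
# connection settings are illustrative assumptions.
import psycopg2

conn = psycopg2.connect(host="localhost", port=5433,  # YSQL defaults
                        dbname="yugabyte", user="yugabyte")
conn.autocommit = True
cur = conn.cursor()

# Generated column: total is computed and stored by the database.
cur.execute("""
    CREATE TABLE IF NOT EXISTS line_items (
        id    bigint  PRIMARY KEY,
        qty   int     NOT NULL,
        price numeric NOT NULL,
        total numeric GENERATED ALWAYS AS (qty * price) STORED
    )
""")
cur.execute("INSERT INTO line_items (id, qty, price) VALUES (1, 3, 9.99)")
cur.execute("SELECT total FROM line_items WHERE id = 1")
print(cur.fetchone())  # (Decimal('29.97'),)

# Foreign key declared directly on a partitioned table.
cur.execute("CREATE TABLE IF NOT EXISTS customers (id bigint PRIMARY KEY)")
cur.execute("""
    CREATE TABLE IF NOT EXISTS orders (
        id          bigint,
        customer_id bigint REFERENCES customers (id),
        created     date,
        PRIMARY KEY (id, created)
    ) PARTITION BY RANGE (created)
""")
```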
Deploy AI at Scale With YugabyteDB’s First Agentic AI Application and Extensible Vector Search
FAQ
What deployment options does YugabyteDB offer?
YugabyteDB offers flexible deployment options to match your needs:
- YugabyteDB Aeon, a fully managed service on AWS, Azure, or Google Cloud with zero operational overhead
- Self-managed deployments on public clouds, on-premises, Kubernetes, or hybrid environments using YugabyteDB Anywhere
- Our self-managed open source database, YugabyteDB
All options provide the same PostgreSQL-compatible API with built-in resilience and horizontal scaling capabilities.
Is YugabyteDB open source?
Yes, YugabyteDB is 100% open source under the Apache 2.0 license. As of early 2025, all previously commercial enterprise features, including Distributed Backups, Data Encryption, and Read Replicas, are available in the open source project. There are no longer separate Community and Enterprise editions; there is just one fully open source database, available on GitHub.
How does YugabyteDB support AI applications?
YugabyteDB supports AI applications through the native pgvector extension for storing embeddings and running vector similarity search, a distributed architecture that handles massive AI workloads with low latency, and ACID transactions for data consistency. The PostgreSQL-compatible interface lets developers build AI applications quickly while providing the scale and resilience needed for production deployments with real-time inference requirements.
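To make the pgvector workflow concrete, here is a minimal sketch, assuming a local YSQL endpoint with default credentials; the 3-dimensional toy vectors and the documents table stand in for real model embeddings and a real schema (production embeddings typically have hundreds or thousands of dimensions).

```python
# Minimal sketch: storing embeddings and running a vector similarity
# search with pgvector over YugabyteDB's YSQL API. Connection settings,
# table name, and the toy 3-dimensional vectors are illustrative.
import psycopg2

conn = psycopg2.connect(host="localhost", port=5433,  # YSQL defaults
                        dbname="yugabyte", user="yugabyte")
conn.autocommit = True
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
cur.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id        bigserial PRIMARY KEY,
        content   text,
        embedding vector(3)  -- toy dimension; real models use far more
    )
""")

# In a real app these vectors would come from an embedding model.
for content, emb in [("cats", "[1, 0, 0]"),
                     ("dogs", "[0.9, 0.1, 0]"),
                     ("cars", "[0, 0, 1]")]:
    cur.execute(
        "INSERT INTO documents (content, embedding) VALUES (%s, %s::vector)",
        (content, emb),
    )

# Nearest neighbors by L2 distance (pgvector's <-> operator).
cur.execute(
    """SELECT content, embedding <-> %s::vector AS distance
       FROM documents ORDER BY distance LIMIT 2""",
    ("[1, 0, 0]",),
)
print(cur.fetchall())  # [('cats', 0.0), ('dogs', ~0.14)]
```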