25 Integrations with Apache Iceberg
View a list of software that integrates with Apache Iceberg below. Compare the best Apache Iceberg integrations, along with the features, ratings, user reviews, and pricing of each. Here are the current Apache Iceberg integrations in 2026:
1
Apache Hive
Apache Software Foundation
The Apache Hive data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage using SQL. Structure can be projected onto data already in storage. A command line tool and JDBC driver are provided to connect users to Hive. Apache Hive is an open source project run by volunteers at the Apache Software Foundation. Previously a subproject of Apache® Hadoop®, it has now graduated to become a top-level project of its own. We encourage you to learn about the project and contribute your expertise. Without Hive, SQL applications and queries over distributed data must be implemented in the MapReduce Java API. Hive provides the necessary SQL abstraction, integrating SQL-like queries (HiveQL) into the underlying Java so that queries need not be implemented in the low-level Java API.
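To make the abstraction concrete, here is an illustrative sketch (plain Python, not Hive code) of the hand-written map and reduce logic that a one-line HiveQL aggregation such as `SELECT word, COUNT(*) FROM docs GROUP BY word;` spares you from implementing in the MapReduce Java API:

```python
# Illustrative stand-in for the MapReduce plumbing that HiveQL abstracts away.
from collections import defaultdict

def map_phase(records):
    # Emit (key, 1) pairs, as a MapReduce mapper would.
    for line in records:
        for word in line.split():
            yield word, 1

def reduce_phase(pairs):
    # Sum counts per key, as a MapReduce reducer would.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

counts = reduce_phase(map_phase(["iceberg table", "iceberg snapshot"]))
print(counts)  # {'iceberg': 2, 'table': 1, 'snapshot': 1}
```

With Hive, this entire pipeline collapses into a single declarative statement while the engine handles the distribution.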
2
Impala
Command Line Software
Connect your product to hotel data in minutes. Securely read from and write to many hotel systems using one powerful, well-documented JSON API. Connect your app to our Test Hotel in minutes, and connect to real hotels in days, not weeks. One easy-to-understand, universal REST API to connect to many hotel systems. Impala uses bank-grade security, is fully GDPR-compliant, and uses geographically diverse hosting. Impala will be the last PMS integration you’ll ever need. We’re connecting your app to hotel data by integrating with a growing number of property management systems, so you don't have to. We're continuously adding hotel systems, so you can sell to a wider range of hotels every month. Modern hotel technology needs complete data, and that's what Impala provides, seamlessly and in both directions. Whether you need to read guest information, write a new transaction, or receive an update on a rate change, the Impala API provides a full range of data.
Starting Price: €17 per month
3
Trino
Trino
Trino is a query engine that runs at ludicrous speed. A fast distributed SQL query engine for big data analytics that helps you explore your data universe. Trino is a highly parallel and distributed query engine built from the ground up for efficient, low-latency analytics. The largest organizations in the world use Trino to query exabyte-scale data lakes and massive data warehouses alike. It supports diverse use cases: ad-hoc analytics at interactive speeds, massive multi-hour batch queries, and high-volume apps that perform sub-second queries. Trino is an ANSI SQL-compliant query engine that works with BI tools such as R, Tableau, Power BI, Superset, and many others. You can natively query data in Hadoop, S3, Cassandra, MySQL, and many others, without complex, slow, and error-prone processes for copying the data. Access data from multiple systems within a single query.
Starting Price: Free
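The federation idea, querying multiple systems in one statement without copying data into a warehouse first, can be sketched in plain Python (this is not Trino's API; the two row lists stand in for, say, a MySQL connector and an S3 connector):

```python
# Mock "connectors": rows that would live in two separate systems.
from collections import defaultdict

mysql_rows = [  # stand-in for a MySQL users table
    {"user_id": 1, "name": "ada"},
    {"user_id": 2, "name": "lin"},
]
s3_rows = [  # stand-in for S3/Iceberg access logs
    {"user_id": 1, "bytes": 1024},
    {"user_id": 1, "bytes": 512},
]

# Equivalent in spirit to a single federated query like:
#   SELECT name, SUM(bytes) FROM s3.logs JOIN mysql.users USING (user_id) GROUP BY name;
names = {r["user_id"]: r["name"] for r in mysql_rows}
usage = defaultdict(int)
for row in s3_rows:
    usage[names[row["user_id"]]] += row["bytes"]
print(dict(usage))  # {'ada': 1536}
```

In Trino itself, each source is configured as a catalog and the join happens inside the engine, with no intermediate copy step.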
4
Tabular
Tabular
Tabular is an open table store from the creators of Apache Iceberg. Connect multiple computing engines and frameworks. Decrease query time and storage costs by up to 50%. Centralize enforcement of data access (RBAC) policies. Connect any query engine or framework, including Athena, BigQuery, Redshift, Snowflake, Databricks, Trino, Spark, and Python. Smart compaction, clustering, and other automated data services reduce storage costs and query times by up to 50%. Unify data access at the database or table level. RBAC controls are simple to manage, consistently enforced, and easy to audit. Centralize your security down to the table. Tabular is easy to use, plus it features high-powered ingestion, performance, and RBAC under the hood. Tabular gives you the flexibility to work with multiple “best of breed” compute engines based on their strengths. Assign privileges at the data warehouse, database, table, or column level.
Starting Price: $100 per month
5
Apache Impala
Apache
Impala provides low latency and high concurrency for BI/analytic queries on the Hadoop ecosystem, including Iceberg, open data formats, and most cloud storage options. Impala also scales linearly, even in multitenant environments. Impala is integrated with native Hadoop security and Kerberos for authentication, and via the Ranger module, you can ensure that the right users and applications are authorized for the right data. Utilize the same file and data formats and metadata, security, and resource management frameworks as your Hadoop deployment, with no redundant infrastructure or data conversion/duplication. For Apache Hive users, Impala utilizes the same metadata and ODBC driver. Like Hive, Impala supports SQL, so you don't have to worry about reinventing the implementation wheel. With Impala, more users, whether using SQL queries or BI applications, can interact with more data through a single repository and metadata store, from source through analysis.
Starting Price: Free
6
PuppyGraph
PuppyGraph
PuppyGraph empowers you to seamlessly query one or multiple data stores as a unified graph model. Graph databases are expensive, take months to set up, and need a dedicated team. Traditional graph databases can take hours to run multi-hop queries and struggle beyond 100GB of data. A separate graph database complicates your architecture with brittle ETLs and inflates your total cost of ownership (TCO). Connect to any data source anywhere. Cross-cloud and cross-region graph analytics. No complex ETLs or data replication is required. PuppyGraph enables you to query your data as a graph by directly connecting to your data warehouses and lakes. This eliminates the need to build and maintain time-consuming ETL pipelines needed with a traditional graph database setup. No more waiting for data and failed ETL processes. PuppyGraph eradicates graph scalability issues by separating computation and storage.
Starting Price: Free
7
StarRocks
StarRocks
Whether you're working with a single table or multiple, you'll experience at least 300% better performance on StarRocks compared to other popular solutions. From streaming data to data capture, with a rich set of connectors, you can ingest data into StarRocks in real time for the freshest insights. A query engine that adapts to your use cases: without moving your data or rewriting SQL, StarRocks provides the flexibility to scale your analytics on demand with ease. StarRocks enables a rapid journey from data to insight, and its performance provides a unified OLAP solution covering the most popular data analytics scenarios. StarRocks' built-in memory-and-disk-based caching framework is specifically designed to minimize the I/O overhead of fetching data from external storage to accelerate query performance.
Starting Price: Free
8
Stackable
Stackable
The Stackable data platform was designed with openness and flexibility in mind. It provides you with a curated selection of the best open source data apps like Apache Kafka, Apache Druid, Trino, and Apache Spark. While other current offerings either push their proprietary solutions or deepen vendor lock-in, Stackable takes a different approach. All data apps work together seamlessly and can be added or removed in no time. Based on Kubernetes, it runs everywhere, on-prem or in the cloud. stackablectl and a Kubernetes cluster are all you need to run your first Stackable data platform. Within minutes, you will be ready to start working with your data. Similar to kubectl, stackablectl is designed to easily interface with the Stackable Data Platform. Use the command line utility to deploy and manage Stackable data apps on Kubernetes. With stackablectl, you can create, delete, and update components.
Starting Price: Free
9
Amazon Data Firehose
Amazon
Easily capture, transform, and load streaming data. Create a delivery stream, select your destination, and start streaming real-time data with just a few clicks. Automatically provision and scale compute, memory, and network resources without ongoing administration. Transform raw streaming data into formats like Apache Parquet, and dynamically partition streaming data without building your own processing pipelines. Amazon Data Firehose provides the easiest way to acquire, transform, and deliver data streams within seconds to data lakes, data warehouses, and analytics services. To use Amazon Data Firehose, you set up a stream with a source, destination, and required transformations. Amazon Data Firehose continuously processes the stream, automatically scales based on the amount of data available, and delivers it within seconds. Select the source for your data stream or write data using the Firehose Direct PUT API.
Starting Price: $0.075 per month
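The "dynamic partitioning" mentioned above means routing each streaming record to a destination prefix derived from its own fields. A minimal sketch in plain Python, where the field names and the prefix template are illustrative assumptions, not the Firehose API:

```python
# Group streaming records by a partition path computed from record fields
# (hypothetical year/month keys; real Firehose uses JQ expressions or
# Lambda to extract partition keys).
from collections import defaultdict

def partition_key(record):
    return f"events/year={record['year']}/month={record['month']:02d}/"

batches = defaultdict(list)
for rec in [{"year": 2026, "month": 1, "v": 1}, {"year": 2026, "month": 2, "v": 2}]:
    batches[partition_key(rec)].append(rec)

print(sorted(batches))  # ['events/year=2026/month=01/', 'events/year=2026/month=02/']
```

Partitioned prefixes like these are what let downstream engines prune whole directories of data at query time.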
10
Streamkap
Streamkap
Streamkap is a streaming data platform that makes streaming as easy as batch. Stream data from database (change data capture) or event sources to your favorite database, data warehouse, or data lake. Streamkap can be deployed as a SaaS or in a bring-your-own-cloud (BYOC) deployment.
Starting Price: $600 per month
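Change data capture works by replaying a source database's inserts, updates, and deletes as an ordered event stream into the destination. A hedged sketch, assuming a generic event shape (`op`/`key`/`row`) rather than Streamkap's actual wire format:

```python
# Apply a stream of CDC events to a destination table (modeled as a dict
# keyed by primary key). Event order matters: later events win.
def apply_cdc(table, events):
    for ev in events:
        if ev["op"] in ("insert", "update"):
            table[ev["key"]] = ev["row"]
        elif ev["op"] == "delete":
            table.pop(ev["key"], None)
    return table

dest = apply_cdc({}, [
    {"op": "insert", "key": 1, "row": {"id": 1, "status": "new"}},
    {"op": "update", "key": 1, "row": {"id": 1, "status": "shipped"}},
    {"op": "insert", "key": 2, "row": {"id": 2, "status": "new"}},
    {"op": "delete", "key": 2},
])
print(dest)  # {1: {'id': 1, 'status': 'shipped'}}
```

Real CDC pipelines add exactly-once delivery and schema evolution on top, but the replay logic is the core of it.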
11
R2 SQL
Cloudflare
R2 SQL is Cloudflare’s serverless, distributed analytics query engine (currently in open beta) that enables you to run SQL queries over Apache Iceberg tables stored in R2 Data Catalog without needing to manage your own compute clusters. It is built to efficiently query large volumes of data by leveraging metadata pruning, partition-level statistics, file and row-group filtering, and Cloudflare’s globally distributed compute infrastructure to parallelize execution. The system works by integrating with R2 object storage and an Iceberg catalog layer, so you can ingest data via Cloudflare Pipelines into Iceberg tables, and then query that data with minimal overhead. Queries can be issued via the Wrangler CLI or HTTP API (with an API token granting permissions across R2 SQL, Data Catalog, and storage). During the open beta period, using R2 SQL itself is not billed; only storage and standard R2 operations incur charges.
Starting Price: Free
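The metadata pruning described above relies on a core Iceberg feature: table metadata records per-file column statistics, so a planner can skip data files whose min/max range cannot match a query predicate. A simplified sketch with made-up file names and stats:

```python
# Skip files whose recorded [ts_min, ts_max] range cannot overlap the
# query predicate, so only matching files are ever read from storage.
files = [
    {"path": "data/f1.parquet", "ts_min": 100, "ts_max": 199},
    {"path": "data/f2.parquet", "ts_min": 200, "ts_max": 299},
    {"path": "data/f3.parquet", "ts_min": 300, "ts_max": 399},
]

def prune(files, lo, hi):
    # Keep files whose range overlaps the half-open interval [lo, hi).
    return [f["path"] for f in files if f["ts_max"] >= lo and f["ts_min"] < hi]

print(prune(files, 250, 350))  # ['data/f2.parquet', 'data/f3.parquet']
```

Because the statistics live in metadata, this decision costs a metadata read rather than a scan of the data itself, which is what makes serverless engines over object storage practical.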
12
SQL
SQL
SQL is a domain-specific programming language used for accessing, managing, and manipulating relational databases and relational database management systems.
Starting Price: Free
13
Onehouse
Onehouse
The only fully managed cloud data lakehouse designed to ingest from all your data sources in minutes and support all your query engines at scale, for a fraction of the cost. Ingest from databases and event streams at TB-scale in near real-time, with the simplicity of fully managed pipelines. Query your data with any engine, and support all your use cases including BI, real-time analytics, and AI/ML. Cut your costs by 50% or more compared to cloud data warehouses and ETL tools with simple usage-based pricing. Deploy in minutes without engineering overhead with a fully managed, highly optimized cloud service. Unify your data in a single source of truth and eliminate the need to copy data across data warehouses and lakes. Use the right table format for the job, with omnidirectional interoperability between Apache Hudi, Apache Iceberg, and Delta Lake. Quickly configure managed pipelines for database CDC and streaming ingestion.
14
Cloudflare R2
Cloudflare
Cloudflare R2 is a global object storage service that allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services. It supports multiple scenarios, including storage for cloud-native applications, web content, podcast episodes, data lakes, and outputs for large batch processes such as machine learning model artifacts or datasets. R2 offers features like location hints to optimize data access, CORS configuration for interacting with objects, public buckets to expose contents directly to the Internet, and bucket-scoped tokens for granular access control. It integrates with Cloudflare Workers, enabling developers to perform authentication, route requests, and deploy edge functions across a network of over 330 data centers. Additionally, R2 supports Apache Iceberg through its data catalog, transforming object storage into a fully functional data warehouse without management overhead.
Starting Price: $0.015 per GB
15
OpenMetadata
OpenMetadata
OpenMetadata is an open, unified metadata platform that centralizes all metadata for data discovery, observability, and governance in a single interface. It leverages a Unified Metadata Graph and 80+ turnkey connectors to collect metadata from databases, pipelines, BI tools, ML systems, and more, providing a complete data context that enables teams to search, facet, and preview assets across their entire estate. Its API- and schema-first architecture offers extensible metadata entities and relationships, giving organizations precise control and customization over their metadata model. Built with only four core system components, the platform is designed for simple setup, operation, and scalable performance, allowing both technical and non-technical users to collaborate on discovery, lineage, quality, observability, collaboration, and governance workflows without complex infrastructure.
16
CelerData Cloud
CelerData
CelerData is a high-performance SQL engine built to power analytics directly on data lakehouses, eliminating the need for traditional data-warehouse ingestion pipelines. It delivers sub-second query performance at scale, supports on-the-fly JOINs without costly denormalization, and simplifies architecture by allowing users to run demanding workloads on open format tables. Built on the open source engine StarRocks, the platform outperforms legacy query engines like Trino, ClickHouse, and Apache Druid in latency, concurrency, and cost-efficiency. With a cloud-managed service that runs in your own VPC, you retain infrastructure control and data ownership while CelerData handles maintenance and optimization. The platform is positioned to power real-time OLAP, business intelligence, and customer-facing analytics use cases and is trusted by enterprise customers (including names such as Pinterest, Coinbase, and Fanatics) who have achieved significant latency reductions and cost savings.
17
Presto
Presto Foundation
Presto is an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes, ranging from gigabytes to petabytes. For data engineers who struggle with managing multiple query languages and interfaces to siloed databases and storage, Presto is the fast and reliable engine that provides one simple ANSI SQL interface for all your data analytics and your open lakehouse. Different engines for different workloads means you will have to re-platform down the road. With Presto, you get one familiar ANSI SQL language and one engine for your data analytics, so you don't need to graduate to another lakehouse engine. Presto can be used for interactive and batch workloads, small and large amounts of data, and scales from a few to thousands of users. Presto gives you one simple ANSI SQL interface for all of your data in various siloed data systems, helping you join your data ecosystem together.
18
Apache Spark
Apache Software Foundation
Apache Spark™ is a unified analytics engine for large-scale data processing. Apache Spark achieves high performance for both batch and streaming data, using a state-of-the-art DAG scheduler, a query optimizer, and a physical execution engine. Spark offers over 80 high-level operators that make it easy to build parallel apps. And you can use it interactively from the Scala, Python, R, and SQL shells. Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application. Spark runs on Hadoop, Apache Mesos, Kubernetes, standalone, or in the cloud. It can access diverse data sources. You can run Spark using its standalone cluster mode, on EC2, on Hadoop YARN, on Mesos, or on Kubernetes. Access data in HDFS, Alluxio, Apache Cassandra, Apache HBase, Apache Hive, and hundreds of other data sources.
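The "high-level operators" style the entry describes, chaining transformations like map and filter instead of writing parallel plumbing, can be sketched locally in plain Python (a rough analogy, not PySpark; Spark applies the same shape of pipeline lazily across a cluster):

```python
# Chain transformations the way Spark operators compose: filter, then map,
# then reduce, over a local collection standing in for a distributed one.
from functools import reduce

data = range(1, 11)
evens_squared = map(lambda x: x * x, filter(lambda x: x % 2 == 0, data))
total = reduce(lambda a, b: a + b, evens_squared)
print(total)  # 4 + 16 + 36 + 64 + 100 = 220
```

In Spark, the same chain is declared once and the DAG scheduler decides how to partition and execute it across the cluster.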
19
CodeConductor
CodeConductor
Say hello to CodeConductor, an AI-powered platform that enables startups to build custom AI agents without coding. Dedicated to making AI accessible and flexible, CodeConductor allows users to create, own, and freely move their code and applications. The platform fosters collaboration between non-technical users and developers, empowering businesses to innovate quickly and effectively.
20
Salesforce Data 360
Salesforce
Data 360 is Salesforce’s next-generation data platform, evolving from Data Cloud to unify and activate enterprise data in real time. It connects fragmented data across systems into a single, trusted Customer 360 view without requiring data movement. Through Zero-Copy integrations, Data 360 works directly with platforms like Snowflake, Databricks, BigQuery, and AWS. The platform harmonizes structured and unstructured data to power personalized experiences and intelligent workflows. Built-in identity resolution and governance tools ensure data accuracy, compliance, and privacy. Data 360 enables real-time segmentation, predictive insights, and triggered automation across Salesforce and external applications. It serves as the data foundation for Agentforce, delivering context-rich intelligence to AI agents and business teams.
21
Purpose-built to run AI anywhere on data everywhere. Our solution unlocks the value of your unstructured data, allowing you to access, prepare, train, fine-tune, and drive AI without any limitations. We’ve joined our industry-leading file and object storage portfolio, including PowerScale, ECS, and ObjectScale, with our PowerEdge server and open, modern data lakehouse approach. This gives you the power to bring AI to your unstructured data, on-premises, at the edge, and in any cloud with the highest performance and infinite scale. Access a full team of trained data scientists and industry experts who will help you implement AI use cases that deliver the most value for your business. Protect, detect, and respond to cyber attackers with hardened software and hardware security and real-time threat detection. Train and fine-tune your AI models using a single point of data access and the highest performance, on-premises, at the edge, and in any cloud.
22
Apache Flink
Apache Software Foundation
Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. Flink has been designed to run in all common cluster environments, and to perform computations at in-memory speed and at any scale. Any kind of data is produced as a stream of events. Credit card transactions, sensor measurements, machine logs, and user interactions on a website or mobile application are all generated as streams. Apache Flink excels at processing unbounded and bounded data sets. Precise control of time and state enables Flink’s runtime to run any kind of application on unbounded streams. Bounded streams are internally processed by algorithms and data structures that are specifically designed for fixed-size data sets, yielding excellent performance. Flink is designed to work well with each of the common cluster resource managers.
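The stateful stream processing described above, consuming an unbounded event stream while maintaining per-key state, can be sketched with a plain Python generator (illustrative only, not Flink's API; the event shape is an assumption):

```python
# Maintain per-key state (a running count per sensor) as events arrive,
# emitting an updated result after each event, as a stateful stream
# operator would.
def running_counts(stream):
    state = {}  # per-key state, which Flink would checkpoint for fault tolerance
    for event in stream:
        key = event["sensor"]
        state[key] = state.get(key, 0) + 1
        yield key, state[key]

events = [{"sensor": "a"}, {"sensor": "b"}, {"sensor": "a"}]
print(list(running_counts(events)))  # [('a', 1), ('b', 1), ('a', 2)]
```

On a bounded stream the same logic simply runs to completion; on an unbounded one it keeps emitting updates forever, which is why Flink treats batch as a special case of streaming.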
23
Daft
Daft
Daft is a framework for ETL, analytics, and ML/AI at scale. Its familiar Python dataframe API is built to outperform Spark in performance and ease of use. Daft plugs directly into your ML/AI stack through efficient zero-copy integrations with essential Python libraries such as PyTorch and Ray. It also allows requesting GPUs as a resource for running models. Daft can handle User-Defined Functions (UDFs) on columns, allowing you to apply complex expressions and operations to Python objects with the full flexibility required for ML/AI. Daft runs locally with a lightweight multithreaded backend. When your local machine is no longer sufficient, it scales seamlessly to run out-of-core on a distributed cluster.
24
Cazpian
Cazpian
Cazpian is a unified data platform designed for modern data teams working with open lakehouse architectures. The platform brings together data governance, compute environments, catalog management, and AI capabilities into a single system. Cazpian allows organizations to connect and query data across object storage, Iceberg tables, and relational databases through one SQL interface. Its unified catalog enables teams to manage data sources without moving or duplicating datasets. The platform also includes tools for scheduling jobs, running queries, and managing compute resources. Built-in AI agents provide evidence-backed insights and help teams analyze data more efficiently. By combining governance, analytics, and automation in one platform, Cazpian helps organizations manage large-scale data environments more effectively.
25
Dremio
Dremio
Dremio delivers lightning-fast queries and a self-service semantic layer directly on your data lake storage. No moving data to proprietary data warehouses, no cubes, no aggregation tables or extracts. Just flexibility and control for data architects, and self-service for data consumers. Dremio technologies like Data Reflections, Columnar Cloud Cache (C3) and Predictive Pipelining work alongside Apache Arrow to make queries on your data lake storage very, very fast. An abstraction layer enables IT to apply security and business meaning, while enabling analysts and data scientists to explore data and derive new virtual datasets. Dremio’s semantic layer is an integrated, searchable catalog that indexes all of your metadata, so business users can easily make sense of your data. Virtual datasets and spaces make up the semantic layer, and are all indexed and searchable.