VoltDB Technical Overview
For fast, data-driven applications, VoltDB offers substantial performance and cost advantages, as illustrated by the results of the TPC-C-like benchmark below, in which VoltDB and a well-known OLTP DBMS were compared running the same test on identical hardware (Dell R610, 2x 2.66 GHz quad-core Xeon 5550, 12x 4 GB (48 GB) DDR3-1333 registered ECC DIMMs, 3x 72 GB 15K RPM 2.5-inch Enterprise SAS 6 Gbps drives):
[Figure: TPC-C-like benchmark results, VoltDB vs. a well-known OLTP DBMS]
As shown above, VoltDB’s massively parallel architecture also provides elastic scal-
ability running on low-cost commodity servers. It allows application developers to scale
applications simply by adding servers to a VoltDB cluster, instead of building complex,
costly sharding layers. VoltDB’s ACID compliance ensures developers never need to
sacrifice data consistency to gain high performance or scalability.
Dealing with a fast-growing user workload is a good problem to have, but the popular "scale-up" approaches to scalability are costly, requiring expensive hardware and DBMS upgrades:
• Migration from an open source DBMS to a higher-scale commercial DBMS such as Oracle
• Scale up to bigger hardware
Scale-up approaches also add development complexity, and increase overhead and maintenance costs. And no matter what scheme is used, the opportunity cost is also high: time spent fixing performance problems means less time spent implementing higher-value business functionality.
[Figure: cost ($) per TPS at throughputs from 10³ to 10⁶ TPS]
What’s needed is a DBMS that “scales out” linearly and limitlessly by adding new commodity servers
to a shared-nothing DBMS cluster. VoltDB is exactly that DBMS.
Why can't traditional databases simply scale out this way? Research shows that they spend the bulk of their processing time on overhead rather than useful work:
Locking: Before touching a record, a transaction must set a lock on it in the lock table. This is an overhead-intensive operation.
Latching: Updates to shared data structures (B-trees, the lock table, resource tables, etc.) must be done carefully in a multi-threaded environment. Typically, this is done with short-duration latches, which are another considerable source of overhead.
Buffer Management: Data in traditional systems is stored on fixed-size disk pages. A buffer pool manages which set of disk pages is cached in memory at any given time. Moreover, records must be located on pages and the field boundaries identified. Again, these operations are overhead-intensive.
Logging: Every change is typically written twice, once to the data pages and once to a write-ahead log that must be forced to disk, adding still more overhead.
[Figure: breakdown of processing time in a traditional OLTP DBMS across index management, logging, locking, latching, buffer management and useful work]
Designed originally for data integrity, this overhead prevents traditional databases from scaling to meet contemporary data volumes and workloads.
VoltDB Architecture
VoltDB leverages the following architectural elements to achieve its performance, scaling and high availability objectives:
• Data is held in memory for maximum throughput and to eliminate the need for buffer management
• Each single-threaded partition operates autonomously, eliminating the need for locking and latching
• Automatic replication and command logging for high availability and durability
Partitioning is appropriate for large, frequently updated or accessed tables, while cloning small, read-mostly tables to every partition can significantly improve performance.
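As a sketch of what this looks like in practice (the table and column names here are hypothetical, not from the benchmark above), a VoltDB schema marks a table as partitioned with a PARTITION TABLE statement, while a table with no such statement is cloned to every partition:

    -- Large, frequently updated table: partitioned on account_id.
    CREATE TABLE accounts (
      account_id BIGINT NOT NULL,
      balance    FLOAT,
      PRIMARY KEY (account_id)
    );
    PARTITION TABLE accounts ON COLUMN account_id;

    -- Small, read-mostly lookup table: no PARTITION statement,
    -- so VoltDB clones it to every partition.
    CREATE TABLE currencies (
      code VARCHAR(3) NOT NULL,
      name VARCHAR(32),
      PRIMARY KEY (code)
    );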
In VoltDB, each partition is stored in main memory and processed by its associated, single-threaded
execution engine at in-memory speed. In-memory processing eliminates disk waits from within
VoltDB transactions, along with the need for buffer management overhead.
VoltDB can save data snapshots and command logs to disk for backup and recovery purposes, and can also spool data to a data warehouse for analysis and querying. This feature is explained in more detail below.
VoltDB doesn’t experience user stalls (since transactions happen within stored procedures) or disk
stalls (because VoltDB processes data in main memory). Therefore, VoltDB is able to eliminate the
overhead associated with multi-threading (latching) and locking. Each VoltDB execution engine is
single-threaded and contains a queue of transaction requests, which it executes sequentially—and
exclusively—against its data. Elimination of stalls and associated locking and latching overhead al-
lows typical VoltDB SQL operations to complete in microseconds.
For single-partition transactions, each VoltDB engine operates autonomously. For multi-partition
transactions, one engine distributes and coordinates work plans for the other engines. VoltDB as-
sumes that an application designer can construct a partitioning or cloning scheme and transaction
design that localizes a large majority of transactions to a single virtual node. Many fast data applica-
tions such as telco billing, personalization, game state, sensor management, and capital market risk
share this profile.
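For illustration, continuing the hypothetical schema above, a stored procedure is declared single-partition by binding it to the partitioning column, so VoltDB can route each invocation to exactly one partition (classic VoltDB DDL syntax; the names are illustrative):

    CREATE PROCEDURE FROM CLASS app.Debit;
    PARTITION PROCEDURE Debit ON TABLE accounts COLUMN account_id;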
VoltDB automatically guarantees that every replica, whether in the same cluster or a different one,
runs transactions in the same order and thereby achieves the same global order.
VoltDB runs an active-active configuration within a cluster. If a node failure occurs, VoltDB auto-
matically (and seamlessly) switches over to a replica node. Hence, an application is unaware that a
problem occurred. Recovery is performed automatically; the recovered node (or cluster) queries the
running VoltDB cluster to recover its data.
VoltDB’s passive database replication feature, parallel binary replication between the master and
replica clusters, has been augmented with geo-distributed, cross-datacenter replication (XDCR). This
bidirectional database replication, also called active-active replication, allows enterprises to maintain
separate, active copies of the database in two separate locations.
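As a rough sketch of how XDCR is enabled (element and attribute names based on VoltDB's deployment file schema; treat the specific values as assumptions), each cluster's deployment file declares its DR role and points at the other cluster:

    <dr id="1" role="xdcr">
      <connection source="other-cluster-host1,other-cluster-host2"/>
    </dr>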
VoltDB implements a concept called command logging for transaction-level durability. Unlike tradi-
tional write-ahead logs, VoltDB logs the instantiation of commands to the database rather than all
resulting actions. This style of logging greatly reduces the load on the disk system while providing
either synchronous or asynchronous logging for transaction-level durability.
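A minimal sketch of command-log configuration, assuming VoltDB's deployment-file <commandlog> element (the values shown are illustrative): synchronous="true" makes each transaction wait for its log record to reach disk, while asynchronous mode trades a small durability window for throughput.

    <deployment>
      <cluster hostcount="3" kfactor="1"/>
      <commandlog enabled="true" synchronous="false">
        <frequency time="200" transactions="10000"/>
      </commandlog>
    </deployment>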
Stored procedures are written in Java, with embedded SQL calls for database services. VoltDB sup-
ports ANSI-standard SQL with application-enhancing extensions.
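A minimal sketch of such a procedure, reusing the hypothetical Debit class and accounts table from above: the class declares its SQL once as SQLStmt objects, and the run() method, which VoltDB executes as a single ACID transaction, queues and executes the statements as a batch.

    package app;

    import org.voltdb.SQLStmt;
    import org.voltdb.VoltProcedure;
    import org.voltdb.VoltTable;

    public class Debit extends VoltProcedure {
        // SQL statements are declared once and precompiled by VoltDB.
        public final SQLStmt debit = new SQLStmt(
            "UPDATE accounts SET balance = balance - ? WHERE account_id = ?;");
        public final SQLStmt check = new SQLStmt(
            "SELECT balance FROM accounts WHERE account_id = ?;");

        // The entire run() invocation is one transaction; throwing
        // VoltAbortException rolls it back.
        public VoltTable[] run(long accountId, double amount) {
            voltQueueSQL(debit, amount, accountId);
            voltQueueSQL(check, accountId);
            return voltExecuteSQL(true); // true = final batch in this procedure
        }
    }

A client then invokes the procedure by name, for example client.callProcedure("Debit", accountId, amount), using VoltDB's Java client library.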
Alternative in-memory systems may remove buffer management and logging (at the expense of du-
rability). Even with these features removed, the maximum performance improvement is roughly 2x.
To achieve the 50-100x speedup of VoltDB, all legacy OLTP time sinks must be removed (buffer management, logging, latching and locking).
To deliver better performance on scale-out hardware, some databases, such as NoSQL KV stores,
eliminate some of this overhead — and sacrifice SQL interactivity and data integrity as well (deliver-
ing “eventual consistency”). Unfortunately, since KV stores don’t execute SQL (although some
NoSQL offerings have begun to provide limited, proprietary SQL alternatives), functionality that
would normally be executed by the database must be implemented in the application layer.
The table below summarizes the differences between VoltDB, NoSQL KV stores and traditional
databases:
[Table: feature comparison of VoltDB, NoSQL KV stores and traditional databases; rows include multi-master replication and ACID compliance]
Summary
VoltDB leverages observations derived from research and industry expertise about
OLTP workloads to achieve linear scaling as nodes are added to a VoltDB cluster. This
partitioning, across multiple machines and multiple cores, is central to the design of
VoltDB; it mitigates the contention for resources that limit legacy DBMS scalability.
The main-memory storage of all data greatly reduces network and disk latency. Com-
bined, these departures from the status quo allow VoltDB to offer next-generation
DBMS scalability, performance and manageability, liberating organizations from expen-
sive shared-memory, shared-disk database systems that do not scale.
Next Steps
To learn more about VoltDB, visit www.voltdb.com. Product documentation, developer
support forums and an open source version of VoltDB are freely available.