XLA (Accelerated Linear Algebra) is an open-source machine learning (ML) compiler. The project is actively migrating from tensorflow/xla into this standalone repository under the governance of the OpenXLA SIG.
The XLA compiler takes models from popular ML frameworks such as PyTorch, TensorFlow, and JAX, and optimizes them for high-performance execution across different hardware platforms including GPUs, CPUs, and ML accelerators.
If you want to use XLA to compile your ML project, refer to the XLA documentation for your ML framework (PyTorch, TensorFlow, or JAX).
If you're not contributing code to the XLA compiler, you don't need to clone and build this repo. Everything here is intended for XLA contributors who want to develop the compiler and XLA integrators who want to debug or add support for ML frontends and hardware backends.
Here's how to get started developing the XLA compiler:
To build XLA, you will need to install Bazel. Bazelisk is an easy way to install Bazel and automatically downloads the correct Bazel version for XLA. If Bazelisk is unavailable, you can manually install Bazel instead.
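For example, a minimal Bazelisk setup on Linux x86_64 might look like the following; the install path here is an assumption, and other platforms and install methods are covered in the Bazelisk README at https://github.com/bazelbuild/bazelisk:

```sh
# Fetch the Bazelisk binary and expose it on PATH as `bazel`
# (assumes ~/.local/bin exists and is on your PATH).
wget https://github.com/bazelbuild/bazelisk/releases/latest/download/bazelisk-linux-amd64 \
  -O ~/.local/bin/bazel
chmod +x ~/.local/bin/bazel

# Bazelisk transparently downloads and runs the Bazel version the project pins.
bazel version
```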
Clone this repository:
```sh
git clone https://github.com/openxla/xla && cd xla
```
We recommend building and testing XLA in a suitable Docker container, such as TensorFlow's build container:
```sh
docker run --name xla -w /xla -it -d --rm -v $PWD:/xla tensorflow/build:latest-python3.9 bash
```
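Before running anything in the detached container, you can confirm it is up and that Bazel is available inside it (a quick sanity check added here, not part of the original steps; the TensorFlow build image ships with Bazel):

```sh
# Should print a Bazel version string if the container started correctly.
docker exec xla bazel --version
```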
Run an end-to-end test using an example StableHLO module:
```sh
docker exec xla ./configure
docker exec xla bazel test xla/examples/axpy:stablehlo_compile_test --nocheck_visibility --test_output=all
```
This will take quite a while the first time you run it, because Bazel must build the entire stack, including MLIR, StableHLO, XLA, and more.
When it's done, you should see output like this:
```
==================== Test output for //xla/examples/axpy:stablehlo_compile_test:
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from StableHloAxpyTest
[ RUN      ] StableHloAxpyTest.LoadAndRunCpuExecutable
Loaded StableHLO program from xla/examples/axpy/stablehlo_axpy.mlir:
func.func @main(
  %alpha: tensor<f32>, %x: tensor<4xf32>, %y: tensor<4xf32>
) -> tensor<4xf32> {
  %0 = stablehlo.broadcast_in_dim %alpha, dims = []
    : (tensor<f32>) -> tensor<4xf32>
  %1 = stablehlo.multiply %0, %x : tensor<4xf32>
  %2 = stablehlo.add %1, %y : tensor<4xf32>
  func.return %2: tensor<4xf32>
}
Computation inputs:
  alpha:f32[] 3.14
  x:f32[4] {1, 2, 3, 4}
  y:f32[4] {10.5, 20.5, 30.5, 40.5}
Computation output: f32[4] {13.64, 26.78, 39.920002, 53.06}
[       OK ] StableHloAxpyTest.LoadAndRunCpuExecutable (264 ms)
[----------] 1 test from StableHloAxpyTest (264 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test suite ran. (264 ms total)
[  PASSED  ] 1 test.
```
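As a quick check on those numbers (an added note, not part of the test output): the example implements axpy, alpha * x + y computed elementwise, so the first output element is 3.14 * 1 + 10.5:

```sh
python3 -c 'print(3.14 * 1 + 10.5)'  # prints 13.64
```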
The developer documentation in this repository contains more information about how to build XLA.
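If you prefer to work outside of Docker, the same flow should run directly on the host from the repository root. This is a sketch assuming Bazel (or Bazelisk), a C++ toolchain, and Python are already installed:

```sh
./configure
bazel test xla/examples/axpy:stablehlo_compile_test --nocheck_visibility --test_output=all
```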
- For questions, contact Thea Lamkin - thealamkin at google.com.
- Community Resources
While under TensorFlow governance, all community spaces for SIG OpenXLA are subject to the TensorFlow Code of Conduct.