Simple Add Sample

The Simple Add sample demonstrates the simplest programming methods for using SYCL*-compliant buffers and Unified Shared Memory (USM). Additionally, building and running this sample verifies that your development environment is configured correctly for Intel® oneAPI Toolkits.

Property              Description
What you will learn   How to use SYCL*-compliant extensions to offload computations using both buffers and USM.
Time to complete      15 minutes
Category              Getting Started

Purpose

The Simple Add sample is a simple program that adds two large vectors of integers and verifies the results. It shows the most basic C++ code for offloading computations to a GPU, using both USM and buffers.

The basic SYCL implementations explained in the sample include device selection, USM, buffers, accessors, kernels, and command groups.

Note: See the Base: Vector Add sample for another getting-started sample that shows how to use the Intel® oneAPI Toolkits to develop SYCL-compliant applications for CPU, GPU, and FPGA devices.

Prerequisites

Optimized for   Description
OS              Ubuntu* 18.04
                Windows* 10
Hardware        GEN9 or newer
                Intel® Agilex® 7, Arria® 10, and Stratix® 10 FPGAs
Software        Intel® oneAPI DPC++/C++ Compiler

Note: Even though the Intel® oneAPI DPC++/C++ Compiler is sufficient to compile for CPU, GPU, and FPGA emulation, and to generate FPGA reports and RTL, the FPGA simulation flow and FPGA hardware compiles have additional software requirements.

To use the simulation flow, Intel® Quartus® Prime Pro Edition and one of the following simulators must be installed and accessible through your PATH:

  • Questa*-Intel® FPGA Edition
  • Questa*-Intel® FPGA Starter Edition
  • ModelSim® SE

When using the hardware compile flow, Intel® Quartus® Prime Pro Edition must be installed and accessible through your PATH.

Warning: Make sure you add the device files associated with the FPGA that you are targeting to your Intel® Quartus® Prime installation.

Key Implementation Details

This sample provides examples of both buffers and USM implementations for simple side-by-side comparison.

  • USM requires an explicit wait for the asynchronous kernel computation to complete.
  • Buffers synchronize main memory with device memory implicitly when they go out of scope, so no explicit wait on an event is required. (See the sketch after this list.)
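
The following minimal sketch contrasts the two models. It is illustrative only, not the sample's exact source, and assumes a SYCL 2020 compiler such as icpx:

    #include <sycl/sycl.hpp>
    #include <vector>

    int main() {
      constexpr size_t n = 8;
      sycl::queue q;  // default selector picks the best available device

      // USM: the host must wait explicitly before reading the results.
      int *a = sycl::malloc_shared<int>(n, q);
      for (size_t i = 0; i < n; ++i) a[i] = static_cast<int>(i);
      q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        a[i] += 100000;
      }).wait();  // explicit wait on the kernel's event
      sycl::free(a, q);

      // Buffers: host memory is synchronized implicitly when the buffer
      // goes out of scope, so no explicit wait is needed.
      std::vector<int> b(n, 1);
      {
        sycl::buffer buf(b);
        q.submit([&](sycl::handler &h) {  // command group
          sycl::accessor acc(buf, h, sycl::read_write);
          h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
            acc[i] += 100000;
          });
        });
      }  // buffer destructor waits for the kernel and copies data back
      return 0;
    }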

The program first attempts to run on an available GPU and falls back to the system CPU if it does not detect a compatible GPU. If the program runs successfully, the name of the offload device and a success message are displayed.
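
One way to express this GPU-first fallback with SYCL 2020 device selectors (a sketch; the sample may implement the selection differently):

    // Try the GPU first; fall back to the CPU if no GPU is found.
    sycl::queue make_queue() {
      try {
        return sycl::queue(sycl::gpu_selector_v);
      } catch (const sycl::exception &) {
        return sycl::queue(sycl::cpu_selector_v);
      }
    }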

Note: For comprehensive information about oneAPI programming, see the Intel® oneAPI Programming Guide. (Use search or the table of contents to find relevant information quickly.)

Set Environment Variables

When working with the command-line interface (CLI), you should configure the oneAPI toolkits using environment variables. Set up your CLI environment by sourcing the setvars script every time you open a new terminal window. This practice ensures that your compiler, libraries, and tools are ready for development.

Build the Simple Add Program

Note: If you have not already done so, set up your CLI environment by sourcing the setvars script in the root of your oneAPI installation.

Linux*:

  • For system wide installations: . /opt/intel/oneapi/setvars.sh
  • For private installations: . ~/intel/oneapi/setvars.sh
  • For non-POSIX shells, like csh, use the following command: bash -c 'source <install-dir>/setvars.sh ; exec csh'

Windows*:

  • C:\"Program Files (x86)"\Intel\oneAPI\setvars.bat
  • Windows PowerShell*, use the following command: cmd.exe "/K" '"C:\Program Files (x86)\Intel\oneAPI\setvars.bat" && powershell'

For more information on configuring environment variables, see Use the setvars Script with Linux* or macOS* or Use the setvars Script with Windows*.

Using Visual Studio Code* (VS Code) (Optional)

You can use Visual Studio Code* (VS Code) extensions to set your environment, create launch configurations, and browse and download samples.

The basic steps to build and run a sample using VS Code include:

  1. Configure the oneAPI environment with the extension Environment Configurator for Intel® oneAPI Toolkits.
  2. Download a sample using the extension Code Sample Browser for Intel® oneAPI Toolkits.
  3. Open a terminal in VS Code (Terminal > New Terminal).
  4. Run the sample in the VS Code terminal using the instructions below.

To learn more about the extensions and how to configure the oneAPI environment, see the Using Visual Studio Code with Intel® oneAPI Toolkits User Guide.

On Linux*

Configure the build system

  1. Change to the sample directory.

  2. Configure the project to use the buffer-based implementation.

    mkdir build
    cd build
    cmake ..
    

    or

    Configure the project to use the Unified Shared Memory (USM) based implementation.

    mkdir build
    cd build
    cmake .. -DUSM=1
    

    Note: When building for FPGAs, the default FPGA family will be used (Intel® Agilex® 7). You can change the default target by using the command:

    cmake .. -DFPGA_DEVICE=<FPGA device family or FPGA part number>
    

    Alternatively, you can target an explicit FPGA board variant and BSP by using the following command:

    cmake .. -DFPGA_DEVICE=<board-support-package>:<board-variant>
    

    You will only be able to run an executable on the FPGA if you specified a BSP.

Build for CPU and GPU

  1. Build the program.
    make cpu-gpu
    
  2. Clean the program. (Optional)
    make clean
    

Build for FPGA

  1. Compile for FPGA emulation.

    make fpga_emu
    
  2. Compile for simulation (fast compile time, targets simulator FPGA device):

    make fpga_sim
    
  3. Generate HTML performance reports.

    make report
    

    The reports reside at simple-add_report.prj/reports/report.html.

  4. Compile the program for FPGA hardware. (Compiling for hardware can take a long time.)

    make fpga
    
  5. Clean the program. (Optional)

    make clean
    

On Windows*

Configure the build system

  1. Change to the sample directory.

  2. Configure the project to use the buffer-based implementation.

    mkdir build
    cd build
    cmake -G "NMake Makefiles" ..
    

    or

    Configure the project to use the Unified Shared Memory (USM) based implementation.

    mkdir build
    cd build
    cmake -G "NMake Makefiles" .. -DUSM=1
    

    Note: When building for FPGAs, the default FPGA family will be used (Intel® Agilex® 7). You can change the default target by using the command:

    cmake -G "NMake Makefiles" .. -DFPGA_DEVICE=<FPGA device family or FPGA part number>
    

    Alternatively, you can target an explicit FPGA board variant and BSP by using the following command:

    cmake -G "NMake Makefiles" .. -DFPGA_DEVICE=<board-support-package>:<board-variant>
    

    You will only be able to run an executable on the FPGA if you specified a BSP.

Build for CPU and GPU

  1. Build the program.
    nmake cpu-gpu
    
  2. Clean the program. (Optional)
    nmake clean
    

Build for FPGA

Note: Compiling to FPGA hardware on Windows* requires a third-party or custom Board Support Package (BSP) with Windows* support.

  1. Compile for FPGA emulation.
    nmake fpga_emu
    
  2. Compile for simulation (fast compile time, targets simulator FPGA device):
    nmake fpga_sim
    
  3. Generate HTML performance reports.
    nmake report
    

    The reports reside at simple-add_report.prj/reports/report.html.

  4. Compile the program for FPGA hardware. (Compiling for hardware can take a long time.)

    nmake fpga
    
  5. Clean the program. (Optional)

    nmake clean
    
    

Troubleshooting

If an error occurs, you can get more details by running make with the VERBOSE=1 argument:

make VERBOSE=1

If you receive an error message, troubleshoot the problem using the Diagnostics Utility for Intel® oneAPI Toolkits. The diagnostic utility provides configuration and system checks to help find missing dependencies, permissions errors, and other issues. See the Diagnostics Utility for Intel® oneAPI Toolkits User Guide for more information on using the utility.

Run the Simple Add Program

On Linux

Run for CPU and GPU

  1. Change to the output directory.

  2. Run the program for Unified Shared Memory (USM) and buffers.

    ./simple-add-buffers
    ./simple-add-usm
    

Run for FPGA

  1. Change to the output directory.

  2. Run for FPGA emulation.

    ./simple-add-buffers.fpga_emu
    ./simple-add-usm.fpga_emu
    
  3. Run on FPGA simulator.

    CL_CONTEXT_MPSIM_DEVICE_INTELFPGA=1 ./simple-add-buffers.fpga_sim
    CL_CONTEXT_MPSIM_DEVICE_INTELFPGA=1 ./simple-add-usm.fpga_sim
    
  4. Run on FPGA hardware (only if you ran cmake with -DFPGA_DEVICE=<board-support-package>:<board-variant>).

    ./simple-add-buffers.fpga
    ./simple-add-usm.fpga
    

On Windows

Run for CPU and GPU

  1. Change to the output directory.

  2. Run the program for Unified Shared Memory (USM) and buffers.

    simple-add-buffers.exe
    simple-add-usm.exe
    

Run for FPGA

  1. Change to the output directory.

  2. Run for FPGA emulation.

    simple-add-buffers.fpga_emu.exe
    simple-add-usm.fpga_emu.exe
    
  3. Run on FPGA simulator.

    set CL_CONTEXT_MPSIM_DEVICE_INTELFPGA=1
    simple-add-buffers.fpga_sim.exe
    simple-add-usm.fpga_sim.exe
    set CL_CONTEXT_MPSIM_DEVICE_INTELFPGA=
    
  4. Run on FPGA hardware (only if you ran cmake with -DFPGA_DEVICE=<board-support-package>:<board-variant>).

    simple-add-buffers.fpga.exe
    simple-add-usm.fpga.exe
    

Build and Run the Simple Add Sample in Intel® DevCloud (Optional)

When running a sample in the Intel® DevCloud, you must specify the compute node (CPU, GPU, FPGA) and whether to run in batch or interactive mode.

Note: Since Intel® DevCloud for oneAPI includes the appropriate development environment already configured, you do not need to set environment variables.

Use the Linux instructions to build and run the program.

You can specify a GPU node using a single-line script.

qsub -I -l nodes=1:gpu:ppn=2 -d .
  • -I (upper case I) requests an interactive session.

  • -l nodes=1:gpu:ppn=2 (lower case L) assigns one full GPU node.

  • -d . sets the current folder as the working directory for the task.

    Available Nodes           Command Options
    GPU                       qsub -l nodes=1:gpu:ppn=2 -d .
    CPU                       qsub -l nodes=1:xeon:ppn=2 -d .
    FPGA Compile Time         qsub -l nodes=1:fpga_compile:ppn=2 -d .
    FPGA Runtime (Arria 10)   qsub -l nodes=1:fpga_runtime:arria10:ppn=2 -d .

Note: For more information on how to specify compute nodes, read Launch and manage jobs in the Intel® DevCloud for oneAPI Documentation.

Only fpga_compile nodes support compiling to FPGA. When compiling for FPGA hardware, increase the job timeout to 24 hours, as shown in the example below.
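
For example, with PBS-style qsub you can request a longer walltime when submitting an FPGA hardware compile job (a sketch; confirm the exact flags and limits in the DevCloud documentation):

    qsub -l nodes=1:fpga_compile:ppn=2 -l walltime=24:00:00 -d .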

Executing programs on FPGA hardware is only supported on fpga_runtime nodes of the appropriate type, such as fpga_runtime:arria10.

Neither compiling nor executing programs on FPGA hardware is supported on the login nodes. For more information, see the Intel® DevCloud for oneAPI Intel® oneAPI Base Toolkit Get Started page.

Example Output

An output snippet from simple-add is similar to the following:
Running on device:        Intel(R) Gen9 HD Graphics NEO
Array size: 10000
[0]: 0 + 100000 = 100000
[1]: 1 + 100000 = 100001
[2]: 2 + 100000 = 100002
...
[9999]: 9999 + 100000 = 109999
Successfully completed on device.

License

Code samples are licensed under the MIT license. See License.txt for details.

Third-party program Licenses can be found here: third-party-programs.txt.