
Integrating AI models into your development workflow

Call AI models in the tools you use every day.

With GitHub Models extensions, you can call specific AI models from both Copilot Chat and GitHub CLI. These extensions integrate directly into your development workflow, allowing you to prompt models without context switching.

Using AI models in Copilot Chat

If you have a Copilot subscription, you can work with AI models directly in Copilot Chat.

Using the GitHub Models Copilot Extension

Note

The GitHub Models Copilot Extension is in public preview and is subject to change.

  1. Install the GitHub Models Copilot Extension.

    • If you have a Copilot Individual subscription, you can install the extension on your personal account.
    • If you have access to Copilot through a Copilot Business or Copilot Enterprise subscription:
      • An organization owner or enterprise owner needs to enable the Copilot Extensions policy for your organization or enterprise.
      • An organization owner needs to install the extension for your organization.
  2. Open any implementation of Copilot Chat that supports GitHub Copilot Extensions. For a list of supported Copilot Chat implementations, see "Using extensions to integrate external tools with Copilot Chat."

  3. In the chat window, type @models YOUR-PROMPT, then send your prompt. There are several use cases for the GitHub Models Copilot Extension, including:

    • Recommending a particular model based on context and criteria you provide. For example, you can ask for a low-cost OpenAI model that supports function calling.
    • Executing prompts using a particular model. This is especially useful when you want to use a model that is not currently available in multi-model Copilot Chat.
    • Listing the models currently available through GitHub Models.
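The extension responds to natural-language prompts, so there is no fixed syntax beyond the leading @models mention. As an illustration, prompts for each of the use cases above might look like the following (the wording here is our own example, not prescribed syntax):

```text
@models Recommend a low-cost OpenAI model that supports function calling
@models Run this prompt with MODEL-NAME: YOUR-PROMPT
@models List the models currently available through GitHub Models
```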

Using AI models from the command line

Note

The GitHub Models extension for GitHub CLI is in public preview and is subject to change.

You can use the GitHub Models extension for GitHub CLI to prompt AI models from the command line, and even pipe in the output of a command as context.

Prerequisites

To use the GitHub Models CLI extension, you need to have GitHub CLI installed. For installation instructions, see the GitHub CLI repository.
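To check whether GitHub CLI is already installed and on your PATH, you can run:

```shell
# Prints the installed GitHub CLI version; a "command not found" error
# means GitHub CLI still needs to be installed.
gh --version
```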

Installing the extension

  1. If you have not already authenticated to the GitHub CLI, run the following command in your terminal.

    Shell
    gh auth login
    
  2. To install the GitHub Models extension, run the following command.

    Shell
    gh extension install https://github.com/github/gh-models
    
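To confirm the installation succeeded, you can list your installed GitHub CLI extensions and look for gh-models in the output:

```shell
# Lists all installed gh extensions; github/gh-models should appear here
# once the install command above has completed successfully.
gh extension list
```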

Using the extension

To see a list of all available commands, run gh models.

There are a few key ways you can use the extension:

  • To ask a model multiple questions using a chat experience, run gh models run. Select your model from the listed models, then send your prompts.
  • To ask a model a single question, run gh models run MODEL-NAME "QUESTION" in your terminal. For example, to ask the gpt-4o model why the sky is blue, you can run gh models run gpt-4o "why is the sky blue?".
  • To provide the output of a command as context when you call a model, you can join a separate command and the call to the model with the pipe character (|). For example, to summarize the README file in the current directory using the gpt-4o model, you can run cat README.md | gh models run gpt-4o "summarize this text".
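Putting these pieces together, here is a sketch of how you might use the pipe pattern in a small helper script. The model name and prompt wording are illustrative assumptions, not required values, and the command requires GitHub CLI, the gh-models extension, and an authenticated session:

```shell
#!/bin/sh
# Sketch: pipe the staged git diff into a model as context and ask for a
# quick review. gpt-4o is an example; substitute any model listed by the
# extension.
git diff --staged | gh models run gpt-4o "Briefly review this diff and point out potential bugs"
```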